Sample records for split table machine

  1. A hybrid flowshop scheduling model considering dedicated machines and lot-splitting for the solar cell industry

    NASA Astrophysics Data System (ADS)

    Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei

    2014-10-01

    This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order is scheduled in advance. In solar cell manufacturing, however, the number of machines can be adjusted dynamically to complete the jobs. An optimal production scheduling model is developed to explore these issues, considering practical characteristics such as a hybrid flowshop, a parallel machine system, dedicated machines, and sequence-independent and sequence-dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, the lot-splitting decisions for the orders, and the number of machines used to satisfy the demands in each stage. The experimental results show that lot-splitting has a significant effect on shortening the makespan, and that the improvement is influenced by the processing and setup times of the orders; the threshold point at which the makespan improves can therefore be identified. In addition, the model indicates that allowing more lot-splitting, that is, greater flexibility in allocating orders/lots to machines, results in better scheduling performance.

  2. Air-Bearing Table for Machine Shops

    NASA Technical Reports Server (NTRS)

    Ambrisco, D.

    1986-01-01

    Frequent workpiece repositioning made easier. Air-bearing table facilitates movement of heavy workpiece during machining or between repeated operations at different positions. Table assembly consists of workpiece supporting fixture riding on air bearing. Table especially useful for inertia welding, in which ease of mobility is important.

  3. Triadic split-merge sampler

    NASA Astrophysics Data System (ADS)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision, typical heuristic methods to extract parameterized objects from raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. To address this, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, which performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line-extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.

  4. Table-driven software architecture for a stitching system

    NASA Technical Reports Server (NTRS)

    Thrash, Patrick J. (Inventor); Miller, Jeffrey L. (Inventor); Pallas, Ken (Inventor); Trank, Robert C. (Inventor); Fox, Rhoda (Inventor); Korte, Mike (Inventor); Codos, Richard (Inventor); Korolev, Alexandre (Inventor); Collan, William (Inventor)

    2001-01-01

    Native code for a CNC stitching machine is generated by generating a geometry model of a preform; generating tool paths from the geometry model, the tool paths including stitching instructions for making stitches; and generating additional instructions indicating thickness values. The thickness values are obtained from a lookup table. When the stitching machine runs the native code, it accesses a lookup table to determine a thread tension value corresponding to the thickness value. The stitching machine accesses another lookup table to determine a thread path geometry value corresponding to the thickness value.
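
The thickness-to-tension lookup described above can be sketched in a few lines. This is a hypothetical illustration of the table-driven pattern only; the breakpoints, tension values, and function names are invented, not taken from the patent.

```python
# Hypothetical sketch of a table-driven lookup: thread tension is resolved
# from preform thickness via a sorted breakpoint table. Values are invented.
import bisect

# (thickness upper bound in mm, tension value) -- sorted by thickness bound
TENSION_TABLE = [(2.0, 30), (4.0, 45), (6.0, 60), (8.0, 80)]

def lookup(table, thickness):
    """Return the value of the first row whose bound covers `thickness`."""
    bounds = [row[0] for row in table]
    i = bisect.bisect_left(bounds, thickness)
    if i == len(table):
        raise ValueError(f"thickness {thickness} exceeds table range")
    return table[i][1]

print(lookup(TENSION_TABLE, 3.5))  # -> 45
```

A second table keyed the same way would serve the thread-path-geometry lookup the abstract mentions.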

  5. Precision aligned split V-block

    DOEpatents

    George, Irwin S.

    1984-01-01

    A precision-aligned split V-block for holding a workpiece during a milling operation. It has an expandable frame that accommodates various sized workpieces, is easily secured directly to the mill table, and has key lugs in one base of the split V-block that assure constant alignment.

  6. Looking southwest at dual-track transfer table, with Machine Shop (Bldg. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking southwest at dual-track transfer table, with Machine Shop (Bldg. 163) in background - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  7. Documentation for the machine-readable version of a table of Redshifts for Abell clusters (Sarazin, Rood and Struble 1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    The machine-readable catalog is described. The machine version contains the same data as the published table and includes a second file with the notes. The computerized data files were prepared at the Astronomical Data Center. Detected discrepancies and cluster identifications based on photometric estimators are included.

  8. A multiplet table for Mn I (Adelman, Svatek, Van Winkler, Warren 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1989-01-01

    The machine-readable version of the multiplet table, as it is currently being distributed from the Astronomical Data Center, is described. The computerized version of the table contains data on excitation potentials, J values, multiplet terms, intensities of the transitions, and multiplet numbers. Files ordered by multiplet and by wavelength are included in the distributed version.

  9. 7 CFR 51.1995 - U.S. No. 1.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Blanks; and, (ii) Broken or split shells. (4) Free from damage caused by: (i) Stains; and, (ii) Adhering... accordance with one of the size classifications in Table I. Table I Size classifications Maximum size—Will...

  10. 7 CFR 51.1995 - U.S. No. 1.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Blanks; and, (ii) Broken or split shells. (4) Free from damage caused by: (i) Stains; and, (ii) Adhering... accordance with one of the size classifications in Table I. Table I Size classifications Maximum size—Will...

  11. HD-SAO-DM cross index

    NASA Technical Reports Server (NTRS)

    Nagy, T. A.; Mead, J.

    1978-01-01

    A table of correspondence SAO-HD-DM-GC was prepared by Morin (1973). The machine-readable version of this cross identification was obtained from the Centre de Donnees Stellaires (Strasbourg, France). The table was sorted at the Goddard Space Flight Center by HD number and all blank HD number records were removed to produce the HD-SAO-DM table presented. There were 258997 entries in the original table; there are 180411 entries after removing the blank HD records. The Boss General Catalogue (GC) numbers were retained on the machine-readable version after the sort.
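
The sort-and-filter step described above (order by HD number, drop records with blank HD entries) can be sketched on toy records. The field values below are made up for illustration, not taken from the actual catalog.

```python
# Toy illustration of producing an HD-ordered cross-index: records with a
# blank HD number are removed, the rest are sorted by HD number.
records = [
    {"SAO": 234567, "HD": None,  "DM": "-12 3456"},  # blank HD: dropped
    {"SAO": 131907, "HD": 48915, "DM": "-16 1591"},
    {"SAO": 113271, "HD": 39801, "DM": "+07 1055"},
]

hd_table = sorted((r for r in records if r["HD"] is not None),
                  key=lambda r: r["HD"])
print([r["HD"] for r in hd_table])  # -> [39801, 48915]
```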

  12. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.
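
The random-key idea mentioned above can be sketched in a Bean-style decoder: the integer part of each key picks a machine and the fractional part orders the jobs on it. This is a minimal sketch of the general technique only; the paper's GAspLA chromosome and its job-splitting rules are more elaborate.

```python
# Minimal Bean-style random-key decoding for parallel-machine scheduling
# (sketch; not the GAspLA chromosome from the paper).
import random

def decode(keys, n_machines):
    """Integer part of a key -> machine; fractional part -> order on it."""
    machines = {m: [] for m in range(n_machines)}
    for job, key in enumerate(keys):
        m = min(int(key), n_machines - 1)  # guard against key == n_machines
        machines[m].append((key % 1.0, job))
    return {m: [j for _, j in sorted(lst)] for m, lst in machines.items()}

random.seed(7)
keys = [random.uniform(0, 3) for _ in range(6)]  # 6 jobs on 3 machines
print(decode(keys, 3))
```

A genetic algorithm then only mutates and recombines the key vector; every vector decodes to a feasible schedule.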

  13. Uses of the Westrup brush machine

    Treesearch

    Jill Barbour

    2002-01-01

    The Westrup brush machine can be used as the first step in the conditioning process of seeds. Even though there are various sizes of the machine, only the laboratory model (LA-H) is described. The machine is designed to separate seed from pods or flowers, de-wing tree seed, remove appendages or hairs from seed, split twin seed, de-lint cotton seed, scarify hard-coated...

  14. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    PubMed Central

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  15. Comparison of torsional stability of 2 types of split crimpable surgical hooks with soldered brass surgical hooks.

    PubMed

    O'Bannon, Shawn P; Dunn, William J; Lenk, Jason S

    2006-10-01

    The purpose of this in-vitro study was to compare the torsional stability of split crimpable surgical hooks and soldered brass surgical hooks on a rectangular stabilizing archwire. Coated split crimpable hooks (Never-Slip Grip, TP Orthodontics, LaPorte, Ind), ribbed crimpable hooks (TP Orthodontics), and .032-in brass soldered hooks/notched electrodes (Ormco/Sybron Dental Specialties, Orange, Calif) were attached to a 0.019 x 0.025-in stainless steel archwire. The archwire/hook attachment assembly was secured into a dual contact jig and statically mounted to the base of a universal testing machine. The hooks were engaged by a wire loop attached to the upper load cell of the machine, which pulled the wire until the hook was torsionally displaced from the archwire. The mean forces, measured in newtons (N), required to dislodge the hooks were as follows: soldered brass surgical hooks (51.3 +/- 5.2 N), coated split crimpable hooks (49.9 +/- 6.6 N), and ribbed split crimpable hooks (31.3 +/- 5.4 N). Data were analyzed with 1-way ANOVA and Tukey HSD post-hoc tests at alpha = .05. Ribbed split crimpable hooks provided significantly less resistance to torsional displacement than the other types of hooks (P < .001). There was no difference between coated split crimpable hooks and soldered brass surgical hooks (P > .05). Under the conditions of this study, the results suggest that soldered brass surgical hooks and coated split crimpable hook attachments provide more stability against torsional dislodgement from a rectangular stabilizing archwire than ribbed split crimpable hooks.

  16. Direct Machining of Low-Loss THz Waveguide Components With an RF Choke.

    PubMed

    Lewis, Samantha M; Nanni, Emilio A; Temkin, Richard J

    2014-12-01

    We present results for the successful fabrication of low-loss THz metallic waveguide components using direct machining with a CNC end mill. The approach uses a split-block machining process with the addition of an RF choke running parallel to the waveguide. The choke greatly reduces coupling to the parasitic mode of the parallel-plate waveguide produced by the split-block. This method has demonstrated loss as low as 0.2 dB/cm at 280 GHz for a copper WR-3 waveguide. It has also been used in the fabrication of 3 and 10 dB directional couplers in brass, demonstrating excellent agreement with design simulations from 240-260 GHz. The method may be adapted to structures with features on the order of 200 μm.

  17. Polarization nondegenerate fiber Fabry-Perot cavities with large tunable splittings

    NASA Astrophysics Data System (ADS)

    Cui, Jin-Ming; Zhou, Kun; Zhao, Ming-Shu; Ai, Ming-Zhong; Hu, Chang-Kang; Li, Qiang; Liu, Bi-Heng; Peng, Jin-Lan; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can

    2018-04-01

    We demonstrate a type of microcavity with large tunable splitting of polarization modes. This polarization nondegenerate cavity consists of two ellipsoidal concave mirrors with controllable eccentricity by CO2 laser machining on fiber end facets. The experiment shows that the cavities can combine the advantages of high finesse above 10^4 and large tunable polarization mode splitting up to the GHz range. As the splitting of the cavity can be finely controlled to match atomic hyperfine levels or optomechanical phonons, it opens the way for experiments on cavity quantum electrodynamics and cavity optomechanics.

  18. Looking northeast from roof of Machine Shop (Bldg. 163) at ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking northeast from roof of Machine Shop (Bldg. 163) at transfer table pit and Boiler Shop (Bldg. 152) - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, Machine Shop, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  19. Breeding highbush blueberry cultivars adapted to machine harvest for the fresh market

    USDA-ARS?s Scientific Manuscript database

    In recent years, world blueberry production has been split evenly between processing and fresh fruit markets. Machine harvest of highbush blueberry [northern highbush (NHB, Vaccinium corymbosum L.), southern highbush (SHB, Vaccinium corymbosum interspecific hybrids), and rabbiteye (RE, Vaccinium vi...

  20. 7 CFR 51.1995 - U.S. No. 1.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Well formed; and, (2) Clean and bright. (3) Free from: (i) Blanks; and, (ii) Broken or split shells. (4... minimum diameter, minimum and maximum diameters, or in accordance with one of the size classifications in Table I. Table I Size classifications Maximum size—Will pass through a round opening of the following...

  21. An Analysis of the Multiple Objective Capital Budgeting Problem via Fuzzy Linear Integer (0-1) Programming.

    DTIC Science & Technology

    1980-05-31

    International Journal of Man-Machine Studies, Vol. 9, No. 1, 1977, pp. 1-68. [16] Zimmermann, H. J., Theory and Applications of Fuzzy Sets, Institut...Boston, Inc., Hingham, MA, 1978. [18] Yager, R. R., "Multiple Objective Decision-Making Using Fuzzy Sets," International Journal of Man-Machine Studies...

  22. Documentation for the machine-readable version of A Finding List for the Multiplet Tables of NSRDS-NBS 3, Sections 1-10 (Adelman, Adelman, Fischel and Warren 1984)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable finding list, as it is currently being distributed from the Astronomical Data Center, is described. This version of the list supersedes an earlier one (1977) containing only Sections 1 through 7 of the NSRDS-NBS 3 multiplet tables publications. Additional sections are to be incorporated into this list as they are published.

  23. Sequence invariant state machines

    NASA Technical Reports Server (NTRS)

    Whitaker, S.; Manjunath, S.

    1990-01-01

    A synthesis method and new VLSI architecture are introduced to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. A design method is proposed that utilizes BTS logic to implement regular and dense circuits. A given state sequence can be programmed with power supply connections or dynamically reallocated if stored in a register. Arbitrary flow table sequences can be modified or programmed to dynamically alter the function of the machine. This allows VLSI controllers to be designed with the programmability of a general purpose processor but with the compact size and performance of dedicated logic.
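
The flow-table idea above has a direct software analogue: when the transition table is data rather than logic, the machine's sequence can be reprogrammed at run time. The sketch below is a generic table-driven state machine, not the BTS-logic VLSI architecture itself.

```python
# Generic table-driven state machine: the flow table is plain data, so a new
# state sequence can be loaded without changing the stepping logic.
def run(flow_table, start, inputs):
    state = start
    for sym in inputs:
        state = flow_table[(state, sym)]
    return state

# 2-state toggle machine: input '1' flips the state, '0' holds it
table = {("A", "0"): "A", ("A", "1"): "B",
         ("B", "0"): "B", ("B", "1"): "A"}
print(run(table, "A", "1101"))  # -> "B"
```

Replacing `table` with a different dictionary reprograms the machine, which mirrors the abstract's point about dynamically reallocating the stored state sequence.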

  24. 31 CFR 1021.311 - Filing obligations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the same table game without leaving the table; (3) Bills inserted into electronic gaming devices in... would not apply); and (4) Jackpots from slot machines or video lottery terminals. [75 FR 65812, Oct. 26...

  25. 31 CFR 1021.311 - Filing obligations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the same table game without leaving the table; (3) Bills inserted into electronic gaming devices in... would not apply); and (4) Jackpots from slot machines or video lottery terminals. [75 FR 65812, Oct. 26...

  26. 31 CFR 1021.311 - Filing obligations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the same table game without leaving the table; (3) Bills inserted into electronic gaming devices in... would not apply); and (4) Jackpots from slot machines or video lottery terminals. [75 FR 65812, Oct. 26...

  27. Design of an ultraprecision computerized numerical control chemical mechanical polishing machine and its implementation

    NASA Astrophysics Data System (ADS)

    Zhang, Chupeng; Zhao, Huiying; Zhu, Xueliang; Zhao, Shijie; Jiang, Chunye

    2018-01-01

    The chemical mechanical polishing (CMP) is a key process during the machining route of plane optics. To improve the polishing efficiency and accuracy, a CMP model and machine tool were developed. Based on the Preston equation and the axial run-out error measurement results of the m circles on the tin plate, a CMP model that could simulate the material removal at any point on the workpiece was presented. An analysis of the model indicated that lower axial run-out error led to lower material removal but better polishing efficiency and accuracy. Based on this conclusion, the CMP machine was designed, and the ultraprecision gas hydrostatic guideway and rotary table as well as the Siemens 840Dsl numerical control system were incorporated in the CMP machine. To verify the design principles of machine, a series of detection and machining experiments were conducted. The LK-G5000 laser sensor was employed for detecting the straightness error of the gas hydrostatic guideway and the axial run-out error of the gas hydrostatic rotary table. A 300-mm-diameter optic was chosen for the surface profile machining experiments performed to determine the CMP efficiency and accuracy.

  28. Touchable Tornadoes

    ERIC Educational Resources Information Center

    Gilhousen, David

    2004-01-01

    In this article, the author discusses a tornado-producing machine that he used in teacher-led, student-assisted demonstrations in order to reinforce concepts learned during a unit on weather. The machine, or simulator, was powered by a hair dryer, fan, and cool-mist humidifier. The machine consists of a demonstration table containing a plenum box,…

  29. A microdynamic version of the tensile test machine

    NASA Technical Reports Server (NTRS)

    Glaser, R. J.

    1991-01-01

    Very large space structures require structural reactions to control forces associated with nanometer-level displacements; JPL has accordingly built a tensile test machine capable of mN-level force measurements and nm-level displacement measurements, with a view to the study of structural linear joining technology at the lower limit of its resolution. The tester is composed of a moving table that is supported by six flexured legs and a test specimen cantilevered off the table to ground. Three vertical legs contain piezoactuators allowing changes in length up to 200 microns while generating axial load and bending moments. Displacements between ground and table are measured by means of three laser-interferometric channels.

  30. Image Understanding Workshop. Proceedings of a Workshop Held in Los Angeles, California on 23-25 February 1987. Volume 2

    DTIC Science & Technology

    1987-02-25

    Modellierung von Kanten bei unregelmäßiger Rasterung in Bildverarbeitung und Muster... [modeling of edges under irregular rasterization in image processing and pattern...]; Navigation within a building, to be published in IEEE... We converted them into equivalent machine cycles in Table 3-1, taking into account the 100-nanosecond machine cycle time of the MPP. In MPP, NON-VON... We show the result for the conjugate gradient method on NON-VON in Table 4-4. We assumed that the instructions which carry... The computation of four...

  31. Adaptation of machine translation for multilingual information retrieval in the medical domain.

    PubMed

    Pecina, Pavel; Dušek, Ondřej; Goeuriot, Lorraine; Hajič, Jan; Hlaváčová, Jaroslava; Jones, Gareth J F; Kelly, Liadh; Leveling, Johannes; Mareček, David; Novák, Michal; Popel, Martin; Rosa, Rudolf; Tamchyna, Aleš; Urešová, Zdeňka

    2014-07-01

    We investigate machine translation (MT) of user search queries in the context of cross-lingual information retrieval (IR) in the medical domain. The main focus is on techniques to adapt MT to increase translation quality; however, we also explore MT adaptation to improve effectiveness of cross-lingual IR. Our MT system is Moses, a state-of-the-art phrase-based statistical machine translation system. The IR system is based on the BM25 retrieval model implemented in the Lucene search engine. The MT techniques employed in this work include in-domain training and tuning, intelligent training data selection, optimization of phrase table configuration, compound splitting, and exploiting synonyms as translation variants. The IR methods include morphological normalization and using multiple translation variants for query expansion. The experiments are performed and thoroughly evaluated on three language pairs: Czech-English, German-English, and French-English. MT quality is evaluated on data sets created within the Khresmoi project and IR effectiveness is tested on the CLEF eHealth 2013 data sets. The search query translation results achieved in our experiments are outstanding - our systems outperform not only our strong baselines, but also Google Translate and Microsoft Bing Translator in direct comparison carried out on all the language pairs. The baseline BLEU scores increased from 26.59 to 41.45 for Czech-English, from 23.03 to 40.82 for German-English, and from 32.67 to 40.82 for French-English. This is a 55% improvement on average. In terms of the IR performance on this particular test collection, a significant improvement over the baseline is achieved only for French-English. For Czech-English and German-English, the increased MT quality does not lead to better IR results. Most of the MT techniques employed in our experiments improve MT of medical search queries. 
Especially the intelligent training data selection proves to be very successful for domain adaptation of MT. Certain improvements are also obtained from German compound splitting on the source language side. Translation quality, however, does not appear to correlate with the IR performance - better translation does not necessarily yield better retrieval. We discuss in detail the contribution of the individual techniques and state-of-the-art features and provide future research directions. Copyright © 2014 Elsevier B.V. All rights reserved.

  32. Application of XML to Journal Table Archiving

    NASA Astrophysics Data System (ADS)

    Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.

    1998-12-01

    The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web based electronic format has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of the tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language XML is a powerful internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data into XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within the ADC data/metadata and developing XSL display scripts to enhance display of astronomical data. 
    The ADC XML Document Type Definition (DTD) can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html
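
The mark-up-everything approach the abstract describes can be illustrated on a toy table. The element and attribute names below are invented for the example; they are not the ADC's actual DTD.

```python
# Toy illustration of an XML-marked-up data table: because every field is
# labelled, a program can parse it without knowing the column layout.
import xml.etree.ElementTree as ET

doc = """<table name="redshifts">
  <row><field name="cluster">Abell 2151</field>
       <field name="z">0.0366</field></row>
</table>"""

root = ET.fromstring(doc)
for row in root.findall("row"):
    rec = {f.get("name"): f.text for f in row.findall("field")}
    print(rec["cluster"], float(rec["z"]))
```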

  33. 40 CFR Table 9 to Subpart Wwww of... - Initial Compliance With Work Practice Standards

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... compression/injection molding uncover, unwrap or expose only one charge per mold cycle per compression/injection molding machine. For machines with multiple molds, one charge means sufficient material to fill... cycle per compression/injection molding machine, or prior to the loader, hoppers are closed except when...

  34. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Treesearch

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  35. 41. PATTERN STORAGE, GRIND STONE, WATER TANK, SHAFTING, AND TABLE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    41. PATTERN STORAGE, GRIND STONE, WATER TANK, SHAFTING, AND TABLE SAW (L TO R)-LOOKING WEST. - W. A. Young & Sons Foundry & Machine Shop, On Water Street along Monongahela River, Rices Landing, Greene County, PA

  36. NCEP BUFR File Structure

    Science.gov Websites

    ...These tables may be defined within a separate ASCII text file (see Description and Format of BUFR Tables)... the BUFR tables are usually read from an external ASCII text file (although it is also possible...)... The ASCII text file is called /nwprod/fix/bufrtab.002 on the NCEP CCS machines...

  37. High speed machining of space shuttle external tank liquid hydrogen barrel panel

    NASA Technical Reports Server (NTRS)

    Hankins, J. D.

    1983-01-01

    Actual and projected optimum High Speed Machining data for producing shuttle external tank liquid hydrogen barrel panels of aluminum alloy 2219-T87 are reported. The data include various machining parameters, e.g., spindle speed, cutting speed, table feed, chip load, metal removal rate, horsepower, cutting efficiency, cutter wear (lack thereof), and chip removal methods.

  38. High speed machining of space shuttle external tank liquid hydrogen barrel panel

    NASA Astrophysics Data System (ADS)

    Hankins, J. D.

    1983-11-01

    Actual and projected optimum High Speed Machining data for producing shuttle external tank liquid hydrogen barrel panels of aluminum alloy 2219-T87 are reported. The data include various machining parameters, e.g., spindle speed, cutting speed, table feed, chip load, metal removal rate, horsepower, cutting efficiency, cutter wear (lack thereof), and chip removal methods.

  39. SU-G-TeP2-04: Comprehensive Machine Isocenter Evaluation with Separation of Gantry, Collimator, and Table Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hancock, S; Clements, C; Hyer, D

    2016-06-15

Purpose: To develop and demonstrate application of a method that characterizes deviation of linac x-ray beams from the centroid of the volumetric radiation isocenter as a function of gantry, collimator, and table variables. Methods: A set of Winston-Lutz ball-bearing images was used to determine the gantry radiation isocenter as the midrange of deviation values resulting from gantry and collimator rotation. Also determined were the displacement of the table axis from gantry isocenter and the recommended table axis adjustment. The method, previously reported, has been extended to include the effect of collimator walkout by obtaining measurements with 0 and 180 degree collimator rotation for each gantry angle. Twelve images were used to characterize the volumetric isocenter for the full range of available gantry, collimator, and table rotations. Results: Three Varian TrueBeam, two Elekta Infinity and four Versa HD linacs at five institutions were tested using identical methodology. Varian linacs exhibited substantially less deviation due to head sag than Elekta linacs (0.4 mm vs. 1.2 mm on average). One linac from each manufacturer had additional isocenter deviation of 0.3 to 0.4 mm due to jaw instability with gantry and collimator rotation. For all linacs, the achievable isocenter tolerance was dependent on adjustment of collimator position offset, transverse position steering, and alignment of the table axis with gantry isocenter, facilitated by these test results. The pattern and magnitude of table axis wobble vs. table angle was reproducible and unique to each machine. Conclusion: This new method provides a comprehensive set of isocenter deviation values including all variables. It effectively facilitates minimization of deviation between beam center and target (ball-bearing) position. This method was used to quantify the effect of jaw instability on isocenter deviation and to identify the offending jaw. The test is suitable for incorporation into a routine machine QA program. Software development was performed by Radiological Imaging Technology, Inc.
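
    The midrange characterization described in the Methods can be sketched in a few lines; this is an illustrative reconstruction with hypothetical deviation values, not the vendor's QA software.

    ```python
    # Sketch: estimate the gantry radiation isocenter along one axis as the
    # midrange of beam-center deviations measured from Winston-Lutz
    # ball-bearing images. The deviation values (mm) below are hypothetical.

    def midrange(values):
        """Midpoint of the extreme deviations; used as the isocenter estimate."""
        return (min(values) + max(values)) / 2.0

    # Lateral beam-center deviations (mm) over gantry/collimator combinations.
    deviations_x = [-0.3, 0.1, 0.4, -0.2, 0.2, 0.35]

    center_x = midrange(deviations_x)                         # centering offset
    walkout_x = max(abs(d - center_x) for d in deviations_x)  # residual radius

    print(f"offset {center_x:+.2f} mm, residual walkout {walkout_x:.2f} mm")
    ```

    The recommended table-axis adjustment follows the same per-axis pattern, applied to the table-rotation deviation set.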

  20. Segmenting overlapping nano-objects in atomic force microscopy image

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

    Recently, techniques for nanoparticles have been developed rapidly for fields such as materials science, medicine, and biology. In particular, image-processing methods have been widely used to analyze nanoparticles automatically. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed-fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, candidate split lines are obtained by connecting high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
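
    The third step above relies on DBSCAN. A minimal pure-Python sketch of density-based clustering (not the authors' implementation; the contour points and parameters are hypothetical):

    ```python
    # Minimal DBSCAN sketch: group nearby points into dense clusters and mark
    # isolated points as noise. Points and parameters are hypothetical.
    import math

    def dbscan(points, eps, min_pts):
        """Return one cluster label per point; -1 marks noise."""
        labels = [None] * len(points)

        def neighbors(i):
            return [j for j, q in enumerate(points)
                    if math.dist(points[i], q) <= eps]

        cluster = -1
        for i in range(len(points)):
            if labels[i] is not None:
                continue
            nbrs = neighbors(i)
            if len(nbrs) < min_pts:
                labels[i] = -1          # noise (may become a border point later)
                continue
            cluster += 1
            labels[i] = cluster
            seeds = list(nbrs)
            while seeds:                # expand the cluster from core points
                j = seeds.pop()
                if labels[j] == -1:
                    labels[j] = cluster  # border point: claim, do not expand
                if labels[j] is not None:
                    continue
                labels[j] = cluster
                jn = neighbors(j)
                if len(jn) >= min_pts:   # j is a core point: keep expanding
                    seeds.extend(jn)
        return labels

    # Two dense blobs of contour points plus one outlier.
    pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5), (50, 50)]
    print(dbscan(pts, eps=1.0, min_pts=3))  # → [0, 0, 0, 1, 1, 1, -1]
    ```

    Each non-negative label marks one dense region (here, one candidate overlapping-particle region); -1 is treated as noise.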

  1. Automated edge finishing using an active XY table

    DOEpatents

    Loucks, Clifford S.; Starr, Gregory P.

    1993-01-01

    The disclosure is directed to an apparatus and method for automated edge finishing using hybrid position/force control of an XY table, and particularly to learning the trajectory of the edge of a workpiece by "guarded moves". Machining is done by controllably moving the XY table, with the workpiece mounted thereon, along the learned trajectory with feedback from a force sensor. Other similar workpieces can then be mounted on the XY table without a fixture, located, and the learned trajectory adjusted accordingly.

  2. 49 CFR 40.153 - How does the MRO notify employees of their right to a test of the split specimen?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... must have the ability to receive the employee's calls at all times during the 72 hour period (e.g., by use of an answering machine with a “time stamp” feature when there is no one in your office to answer... 72 hours from the time you provide this notification to him or her to request a test of the split...

  3. Mass Property Measurements of the Mars Science Laboratory Rover

    NASA Technical Reports Server (NTRS)

    Fields, Keith

    2012-01-01

    The NASA/JPL Mars Science Laboratory (MSL) spacecraft mass properties were measured on a spin balance table prior to launch. This paper discusses the requirements and issues encountered with the setup, qualification, and testing using the spin balance table, and the idiosyncrasies encountered with the test system. The final mass measurements were made in the Payload Hazardous Servicing Facility (PHSF) at Kennedy Space Center on the fully assembled and fueled spacecraft. This set of environmental tests required that the control system for the spin balance machine be at a remote location, which posed additional challenges to the operation of the machine.

  4. Reprographics Career Ladder AFSC 703X0.

    DTIC Science & Technology

    1981-07-01

    LINEUP AND REGISTER TABLES 39 BINDING MACHINES 36 FLUORESCENT LAMPS 36 WET PROCESS PLATEMAKERS 36 ELECTRIC STAPLERS 32 MANUAL PAPER CUTTERS 32...ELECTROSTATIC COPIERS/PLATEMAKERS 78% PAPER CUTTERS 57% ELECTRIC STAPLERS 47% BINDING MACHINES 42% SINGLE HEAD DRILLS 37% PADDING RACKS 31% PLATEMAKING...HEAD DRILLS 78% MANUAL PAPER CUTTERS 71% STATION COLLATORS 51% BINDING MACHINES 46% ELECTRIC STAPLERS 46% PLATEMAKING CAMERAS 44% SADDLE STITCHERS 42

  5. Design and Analysis of Linear Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Xu, Liang; Liu, Guohai; Du, Yi; Liu, Hu

    2014-01-01

    This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both PMs and windings of the proposed machine are on the short mover, while the long stator is manufactured from iron only. Hence, the proposed machine is very suitable for long stroke system applications. The key of this machine is that the magnetizer splits the two movers with modular and complementary structures. Hence, the proposed machine offers improved symmetrical and sinusoidal back electromotive force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, differing from the existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density. This is because neither fault-tolerant teeth nor the flux-barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis. PMID:24982959

  6. Design and analysis of linear fault-tolerant permanent-magnet vernier machines.

    PubMed

    Xu, Liang; Ji, Jinghua; Liu, Guohai; Du, Yi; Liu, Hu

    2014-01-01

    This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both PMs and windings of the proposed machine are on the short mover, while the long stator is manufactured from iron only. Hence, the proposed machine is very suitable for long stroke system applications. The key of this machine is that the magnetizer splits the two movers with modular and complementary structures. Hence, the proposed machine offers improved symmetrical and sinusoidal back electromotive force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, differing from the existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density. This is because neither fault-tolerant teeth nor the flux-barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis.

  7. Machined versus roughened immediately loaded and finally restored single implants inserted flapless: Preliminary 6-month data from a split-mouth randomised controlled trial.

    PubMed

    Cannizzaro, Gioacchino; Felice, Pietro; Loi, Ignazio; Viola, Paolo; Ferri, Vittorio; Leone, Michele; Lazzarini, Matteo; Trullenque-Eriksson, Anna; Esposito, Marco

    To compare the outcome of immediately loaded single implants with a machined or a roughened surface. Fifty patients had two implant sites randomly allocated to receive flapless-placed single Syra implants (Sweden & Martina), one with a machined and one with a roughened surface (sand-blasted with zirconia powder and acid etched), according to a split-mouth design. To be loaded immediately, implants had to be inserted with a torque greater than 50 Ncm. Implants were restored with definitive crowns in direct occlusal contact within 48 h. Patients were followed for 6 months after loading. Outcome measures were prosthetic and implant failures and complications. Two machined implants and four roughened implants were not loaded immediately. Six months after loading, no dropouts had occurred. One implant loaded late, which had a rough implant surface, failed 20 days after loading (P (McNemar test) = 0.625; difference in proportions = -0.04; 95% CI: -0.15 to 0.07). Three crowns had to be remade on machined implants and four on roughened implants (P (McNemar test) = 1.000; difference in proportions = -0.02; 95% CI: -0.12 to 0.08). Three machined and five roughened implants experienced complications (P (McNemar test) = 0.625; difference in proportions = -0.04; 95% CI: -0.15 to 0.07). There were no statistically significant differences between groups for crown and implant losses or complications. Up to 6 months after loading, both machined and roughened flapless-placed and immediately loaded single implants provided good and similar results; however, longer follow-ups are needed to evaluate the long-term prognosis of implants with different surfaces.

  8. New system speeds bundling of split firewood

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-01-01

    A firewood compacting and strapping machine is manufactured by Carolson Stapler and Shippers Supply, Omaha, and FMC Industrial Packaging Division, Philadelphia. A hydraulic compactor applies 20,000 lbs of compressive force to each bundle of split logs, reducing each package to a diameter of about 12 inches. A polypropylene band is applied and heat sealed around each bundle. Bundles are stacked on end, twenty-four to a pallet, and the entire load is banded with one horizontal strap.

  9. Numerical study on the splitting of a vapor bubble in the ultrasonic assisted EDM process with the curved tool and workpiece.

    PubMed

    Shervani-Tabar, M T; Seyed-Sadjadi, M H; Shabgard, M R

    2013-01-01

    Electrical discharge machining (EDM) is a powerful and modern method of machining. In the EDM process, a vapor bubble is generated between the tool and the workpiece in the dielectric liquid due to an electrical discharge. In this process, the dynamic behavior of the vapor bubble affects the machining process. Vibration of the tool surface affects bubble behavior and consequently the material removal rate (MRR). In this paper, the dynamic behavior of the vapor bubble in an ultrasonic assisted EDM process after the appearance of the necking phenomenon is investigated. The necking phenomenon occurs when the bubble takes the shape of an hourglass. After its appearance, the vapor bubble splits into two parts and two liquid jets develop on the boundaries of the upper and lower parts of the vapor bubble. The liquid jet developed on the upper part of the bubble impinges on the tool, and the liquid jet developed on the lower part of the bubble impinges on the workpiece. These liquid jets cause evacuation of debris from the gap between the tool and the workpiece and also cause erosion of the workpiece and the tool. The curvature of the tool and workpiece affects the shape and velocity of the liquid jets during splitting of the vapor bubble. In this paper, the dynamics of the vapor bubble after its splitting near curved tool and workpiece surfaces are investigated in three cases. In the first case the surfaces of the tool and the workpiece are flat, in the second case they are convex, and in the third case they are concave. Numerical results show that in the third case, the velocities of the liquid jets developed on the boundaries of the upper and lower parts of the vapor bubble after its splitting have the highest magnitude, and their shapes are broader than in the other cases. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Creation of a Machine File and Subsequent Computer-Assisted Production of Publishing Outputs, Including a Translation Journal and an Index.

    ERIC Educational Resources Information Center

    Buckland, Lawrence F.; Weaver, Vance

    Reported are the findings of the Uspekhi experiment in creating a labeled machine file, as well as sample products of this system - an article from a scientific journal and an index page. Production cost tables are presented for the machine file, primary journals, and journal indexes. Comparisons were made between the 1965 predicted costs and the…

  11. Looking north through Machine Shop (Bldg. 163) Track 409 Doors ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Looking north through Machine Shop (Bldg. 163) Track 409 Doors at transfer table, with Boiler Shop (Bldg. 152) at left and C.W.E. Shop No. 2 (Bldg. 47) at right - Atchison, Topeka, Santa Fe Railroad, Albuquerque Shops, 908 Second Street, Southwest, Albuquerque, Bernalillo County, NM

  12. Drilling Precise Orifices and Slots

    NASA Technical Reports Server (NTRS)

    Richards, C. W.; Seidler, J. E.

    1983-01-01

    Reaction control thrustor injector requires precisely machined orifices and slots. Tooling setup consists of rotary table, numerical control system and torque sensitive drill press. Components used to drill oxidizer orifices. Electric discharge machine drills fuel-feed orifices. Device automates production of identical parts so several are completed in less time than previously.

  13. Testing filamentary composites

    NASA Technical Reports Server (NTRS)

    Dow, N. F.; Rosen, B. W.

    1970-01-01

    NOL ring split-dee tensile test has the advantages that the specimen is readily fabricated by winding and the test is performed in a conventional testing machine without special fixtures. Strain gages cannot be mounted, however, and substantial bending moments are introduced.

  14. Comparison between laser interferometric and calibrated artifacts for the geometric test of machine tools

    NASA Astrophysics Data System (ADS)

    Sousa, Andre R.; Schneider, Carlos A.

    2001-09-01

    A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table at the CNC, enhancing the machine accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system regarding reliability, cost and time efficiency.

  15. An Analysis of the Machine Trades Occupation.

    ERIC Educational Resources Information Center

    Hall, Charles W.; Emory, Harold L.

    The general purpose of the occupational analysis is to provide workable, basic information dealing with the many and varied duties performed in the machine trades occupation. The document opens with a brief introduction followed by a job description. The bulk of the document is presented in table form. Fifteen duties are broken down into a number…

  16. Score Big! Pinball Project Teaches Simple Machine Basics

    ERIC Educational Resources Information Center

    Freeman, Matthew K.

    2009-01-01

    This article presents a design brief for a pinball game. The design brief helps students get a better grasp on the operation and uses of simple machines. It also gives them an opportunity to develop their problem-solving skills and use design skills to complete an interesting, fun product. (Contains 2 tables and 3 photos.)

  17. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    PubMed

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder produced by abnormal excitability of neurons in the brain. Brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. The performance of EEG-based seizure detection depends on the feature-extraction strategy. In this research, we extracted features using several strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy and a few statistical features. A deeper study was undertaken using novel machine learning classifiers by considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we evaluated different distance metrics, neighbor weights and numbers of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated based on different ensemble methods and learning rates. Tenfold cross-validation was employed for training/testing, and performance was evaluated in terms of TPR, NPR, PPV, accuracy and AUC. In this research, a deeper analysis was performed using diverse feature-extraction strategies and robust machine learning classifiers with more advanced optimization options. The support vector machine linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, KNN with the inverse squared distance weight gave higher performance at different numbers of neighbors. Moreover, in distinguishing the postictal heart rate oscillations of epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
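
    The kind of parameter sweep described, varying the neighbor count and distance metric for KNN, can be sketched with a toy leave-one-out evaluation. The classifier, data and labels below are simplified stand-ins, not the paper's pipeline:

    ```python
    # Sketch: sweep KNN hyperparameters (k, distance metric) and keep the
    # setting with the best leave-one-out accuracy. Data are hypothetical
    # 2-D "EEG feature" points.
    from collections import Counter

    def cityblock(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    def euclid(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def knn_predict(train, labels, x, k, dist):
        ranked = sorted(range(len(train)), key=lambda i: dist(train[i], x))
        votes = Counter(labels[i] for i in ranked[:k])
        return votes.most_common(1)[0][0]

    def loo_accuracy(X, y, k, dist):
        hits = 0
        for i in range(len(X)):          # hold out one sample at a time
            tr, lb = X[:i] + X[i+1:], y[:i] + y[i+1:]
            hits += knn_predict(tr, lb, X[i], k, dist) == y[i]
        return hits / len(X)

    X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
    y = ["ictal", "ictal", "ictal", "normal", "normal", "normal"]

    best = max(((k, name, loo_accuracy(X, y, k, d))
                for k in (1, 3)
                for name, d in (("cityblock", cityblock), ("euclidean", euclid))),
               key=lambda t: t[2])
    print(best)  # → (1, 'cityblock', 1.0)
    ```

    The paper's grid additionally covers SVM kernel scales, tree split criteria and ensemble learning rates; the selection logic is the same.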

  18. A Model for Flexibly Editing CSCL Scripts

    ERIC Educational Resources Information Center

    Sobreira, Pericles; Tchounikine, Pierre

    2012-01-01

    This article presents a model whose primary concern and design rationale is to offer users (teachers) with basic ICT skills an intuitive, easy, and flexible way of editing scripts. The proposal is based on relating an end-user representation as a table and a machine model as a tree. The table-tree model introduces structural expressiveness and…

  19. Multiclass Classification of Cardiac Arrhythmia Using Improved Feature Selection and SVM Invariants.

    PubMed

    Mustaqeem, Anam; Anwar, Syed Muhammad; Majid, Muahammad

    2018-01-01

    Arrhythmia is considered a life-threatening disease causing serious health issues in patients, when left untreated. An early diagnosis of arrhythmias would be helpful in saving lives. This study is conducted to classify patients into one of the sixteen subclasses, among which one class represents absence of disease and the other fifteen classes represent electrocardiogram records of various subtypes of arrhythmias. The research is carried out on the dataset taken from the University of California at Irvine Machine Learning Data Repository. The dataset contains a large volume of feature dimensions which are reduced using wrapper based feature selection technique. For multiclass classification, support vector machine (SVM) based approaches including one-against-one (OAO), one-against-all (OAA), and error-correction code (ECC) are employed to detect the presence and absence of arrhythmias. The SVM method results are compared with other standard machine learning classifiers using varying parameters and the performance of the classifiers is evaluated using accuracy, kappa statistics, and root mean square error. The results show that OAO method of SVM outperforms all other classifiers by achieving an accuracy rate of 81.11% when used with 80/20 data split and 92.07% using 90/10 data split option.
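
    The one-against-one (OAO) scheme reduces a multiclass problem to majority voting among pairwise binary classifiers. A sketch with a nearest-centroid stand-in for the binary SVM (class names and points are hypothetical):

    ```python
    # One-against-one decomposition sketch: one binary model per class pair,
    # prediction by majority vote. The binary learner is a nearest-centroid
    # stand-in, not an SVM; data are hypothetical.
    from itertools import combinations
    from collections import Counter

    def centroid(points):
        n = len(points)
        return tuple(sum(c) / n for c in zip(*points))

    def train_pairwise(X, y):
        models = {}
        for a, b in combinations(sorted(set(y)), 2):
            ca = centroid([x for x, lbl in zip(X, y) if lbl == a])
            cb = centroid([x for x, lbl in zip(X, y) if lbl == b])
            models[(a, b)] = (ca, cb)
        return models

    def predict(models, x):
        def d2(p, q):
            return sum((u - v) ** 2 for u, v in zip(p, q))
        votes = Counter()
        for (a, b), (ca, cb) in models.items():
            votes[a if d2(x, ca) <= d2(x, cb) else b] += 1  # pairwise vote
        return votes.most_common(1)[0][0]

    X = [(0, 0), (1, 0), (5, 5), (6, 5), (0, 9), (1, 9)]
    y = ["N", "N", "AFib", "AFib", "PVC", "PVC"]
    models = train_pairwise(X, y)
    print(predict(models, (0.4, 0.2)))  # → N
    ```

    For 16 arrhythmia classes OAO trains 16·15/2 = 120 pairwise models, versus 16 for one-against-all; the vote step is unchanged.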

  20. [Comparisons and analysis of the spectral response functions' difference between FY-2E's and FY2C's split window channels].

    PubMed

    Zhang, Yong; Li, Yuan; Rong, Zhi-Guo

    2010-06-01

    A remote sensor's channel spectral response function (SRF) is one of the key factors influencing quantitative products' inversion algorithms, their accuracy and the derived geophysical characteristics. In view of the adjustments of FY-2E's split window channels' SRFs, detailed comparisons between the SRFs of the corresponding FY-2E and FY-2C channels were carried out based on three data collections: the NOAA AVHRR corresponding channels' calibration look-up tables, field-measured water surface radiance and atmospheric profiles at Lake Qinghai, and radiance calculated from the Planck function over the full dynamic range of FY-2E/C. The results showed that the adjustments of FY-2E's split window channels' SRFs would shift the spectral range and influence the inversion algorithms of some ground quantitative products. On the other hand, these adjustments of the FY-2E SRFs would increase the brightness temperature differences between FY-2E's two split window channels over the full dynamic range relative to FY-2C's. This would improve the inversion ability of FY-2E's split window channels.

  1. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations, with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content, which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during the observation period.
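
    The look-up-table retrieval that the machine learning methods are compared against can be sketched as a one-dimensional inversion; the table values below are illustrative, not from the study:

    ```python
    # Look-up-table inversion sketch: a hypothetical monotonic table of
    # modelled surface solar radiation (SSR, W/m^2) versus AOD is inverted
    # for a measured SSR by linear interpolation.
    lut_aod = [0.0, 0.2, 0.4, 0.6, 0.8]
    lut_ssr = [900.0, 840.0, 790.0, 748.0, 712.0]  # SSR falls as AOD grows

    def retrieve_aod(ssr):
        """Find the bracketing SSR pair and interpolate the AOD."""
        for i in range(len(lut_ssr) - 1):
            hi, lo = lut_ssr[i], lut_ssr[i + 1]
            if lo <= ssr <= hi:
                frac = (hi - ssr) / (hi - lo)
                return lut_aod[i] + frac * (lut_aod[i + 1] - lut_aod[i])
        raise ValueError("SSR outside table range")

    print(round(retrieve_aod(815.0), 3))  # → 0.3
    ```

    A machine learning retrieval replaces this fixed table (built with one assumed single scattering albedo) with a mapping fitted to observations, which is why it need not constrain the SSA.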

  2. Support Vector Machines: Relevance Feedback and Information Retrieval.

    ERIC Educational Resources Information Center

    Drucker, Harris; Shahrary, Behzad; Gibbon, David C.

    2002-01-01

    Compares support vector machines (SVMs) to Rocchio, Ide regular and Ide dec-hi algorithms in information retrieval (IR) of text documents using relevancy feedback. If the preliminary search is so poor that one has to search through many documents to find at least one relevant document, then SVM is preferred. Includes nine tables. (Contains 24…

  3. Real English: A Translator to Enable Natural Language Man-Machine Conversation.

    ERIC Educational Resources Information Center

    Gautin, Harvey

    This dissertation presents a pragmatic interpreter/translator called Real English to serve as a natural language man-machine communication interface in a multi-mode on-line information retrieval system. This multi-mode feature affords the user a library-like searching tool by giving him access to a dictionary, lexicon, thesaurus, synonym table,…

  4. A Web-Based Visualization and Animation Platform for Digital Logic Design

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi; Lu, Zheng; Huss, Sorin A.

    2015-01-01

    This paper presents a web-based education platform for the visualization and animation of the digital logic design process. This includes the design of combinatorial circuits using logic gates, multiplexers, decoders, and look-up-tables as well as the design of finite state machines. Various configurations of finite state machines can be selected…

  5. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components.

    PubMed

    Rahman, Mohd Nizam Ab; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A; Mahmood, Wan Mohd Faizal Wan

    2014-12-02

    The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers' high-quality expectations. This paper considers the problem of most electronic industries in reducing costly materials, by introducing a solder paste with alloy composition tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of the surface mount fine-pitch component on a Printing Wiring Board (PWB). The reliability of the solder joint between electronic components and the PWB is evaluated through the dynamic characteristic test, thermal shock test, and Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The result shows that the most significant parameter for the solder volume is squeegee pressure (2.0 mm), and for the solder height it is the table speed of the SP machine (2.5 mm/s).
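
    Taguchi comparisons of parameter levels typically rank them by a signal-to-noise ratio; for a larger-is-better response it is S/N = -10·log10(mean(1/y_i²)). A sketch with hypothetical solder-volume readings:

    ```python
    # Larger-is-better Taguchi S/N ratio sketch. The solder-volume readings
    # per squeegee-pressure level are hypothetical, not the paper's data.
    import math

    def sn_larger_is_better(ys):
        """S/N = -10 * log10(mean(1 / y_i^2)); higher is better."""
        return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

    levels = {
        "low pressure": [92.0, 90.5, 93.1],    # hypothetical responses (%)
        "high pressure": [97.2, 96.8, 97.5],
    }
    best = max(levels, key=lambda k: sn_larger_is_better(levels[k]))
    print(best)  # → high pressure
    ```

    In the study, the same ranking is done per control factor (separation distance, squeegee speed, squeegee pressure, table speed) to find the most significant one.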

  6. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components

    PubMed Central

    Rahman, Mohd Nizam Ab.; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A.; Mahmood, Wan Mohd Faizal Wan

    2014-01-01

    The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers’ high-quality expectations. This paper considers the problem of most electronic industries in reducing costly materials, by introducing a solder paste with alloy composition tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of the surface mount fine-pitch component on a Printing Wiring Board (PWB). The reliability of the solder joint between electronic components and PWB is evaluated through the dynamic characteristic test, thermal shock test, and Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test with 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine, that affects solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The result shows that the most significant parameter for the solder volume is squeegee pressure (2.0 mm), and the solder height is the table speed of the SP machine (2.5 mm/s). PMID:28788270

  7. 26 CFR 1.382-1 - Table of contents.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... certain issuances of stock. (1) Introduction. (2) Small issuance exception. (i) In general. (ii) Small...) Adjustments for stock splits and similar transactions. (D) Exception. (iv) Short taxable years. (3) Other...) Computation of value. (c) Short taxable year. (d) Successive ownership changes and absorption of a section 382...

  8. Shifting Weights: Adapting Object Detectors from Image to Video (Author’s Manuscript)

    DTIC Science & Technology

    2012-12-08

    Skateboard Sewing Machine Sandwich Figure 1: Images of the “Skateboard”, “Sewing machine”, and “Sandwich” classes taken from (top row) ImageNet [7...9.85% 9.45% 12.49% 0.21% 6.68% Sewing machine 9.76% 9.71% 10.35% 10.35% 0.12% 3.81% Mean AP 6.63% 6.33% 8.29% 9.36% 0.74% 5.06% Table 2: Average...Animal”, “Tire”, “Vehicle”, “Sandwich”, and “Sewing machine”. These objects appear respectively in the events “Attempting a 6 Sandwich Car New

  9. Machine translation project alternatives analysis

    NASA Technical Reports Server (NTRS)

    Bajis, Catherine J.; Bedford, Denise A. D.

    1993-01-01

    The Machine Translation Project consists of several components, two of which, the Project Plan and the Requirements Analysis, have already been delivered. The Project Plan details the overall rationale, objectives and timetable for the project as a whole. The Requirements Analysis compares a number of available machine translation systems, their capabilities, possible configurations, and costs. The Alternatives Analysis has resulted in a number of conclusions and recommendations to the NASA STI program concerning the acquisition of specific MT systems and related hardware and software.

  10. Systems and methods for displaying data in split dimension levels

    DOEpatents

    Stolte, Chris; Hanrahan, Patrick

    2015-07-28

    Systems and methods for displaying data in split dimension levels are disclosed. In some implementations, a method includes: at a computer, obtaining a dimensional hierarchy associated with a dataset, wherein the dimensional hierarchy includes at least one dimension and a sub-dimension of the at least one dimension; and populating information representing data included in the dataset into a visual table having a first axis and a second axis, wherein the first axis corresponds to the at least one dimension and the second axis corresponds to the sub-dimension of the at least one dimension.

  11. The Application of LT-Table in TRIZ Contradiction Resolving Process

    NASA Astrophysics Data System (ADS)

    Wei, Zihui; Li, Qinghai; Wang, Donglin; Tian, Yumei

    TRIZ is used to resolve invention problems. ARIZ is the most powerful systematic method, integrating all TRIZ heuristics. Definition of the ideal final result (IFR), identification of contradictions, and resource utilization are the main lines of ARIZ. However, resource searching in ARIZ suffers from blindness. Alexandr set up a mathematical model of the transformation of hereditary information in an invention problem using catastrophe theory, and provided a method of resource searching using the LT-table. The application of the LT-table to contradiction resolving is introduced here. Resource utilization using the LT-table is joined into the ARIZ steps as an addition to TRIZ, and the method is applied to the design of a separator-paper punching machine.

  12. Volatility, house edge and prize structure of gambling games.

    PubMed

    Turner, Nigel E

    2011-12-01

    This study used simulations to examine the effect of prize structure on the outcome volatility and the number of winners of various game configurations. The two most common prize structures found in gambling games are even money payoff games (bet $1; win $2) found on most table games and multilevel prizes structures found in gambling machine games. Simulations were set up to examine the effect of prize structure on the long-term outcomes of these games. Eight different prize structures were compared in terms of the number of winners and volatility. It was found that the standard table game and commercial gambling machines produced fairly high numbers of short term winners (1 h), but few long term winners (50 h). It was found that the typical even money game set up produced the lowest level of volatility. Of the multilevel prize structures examined, the three simulations based on commercial gambling machines were the least volatile. The results are examined in terms of the pragmatics of game design.
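    The volatility contrast the study reports can be illustrated analytically: for a prize table of (net payout, probability) pairs, the per-bet expected value and standard deviation follow directly. The multilevel prize table below is invented to match a 5% house edge; it is a sketch, not the study's game configurations.

```python
import math

def ev_and_sd(prize_table):
    """Per-$1-bet expected net return and standard deviation for a
    prize structure given as (net payout, probability) pairs."""
    ev = sum(x * p for x, p in prize_table)
    var = sum((x - ev) ** 2 * p for x, p in prize_table)
    return ev, math.sqrt(var)

# Even-money game: bet $1, win $2 (net +1) with p = 0.475, else lose the $1.
even_money = [(+1, 0.475), (-1, 0.525)]

# Invented slot-style multilevel structure with the same 5% house edge.
multilevel = [(+99, 0.002), (+9, 0.02), (+1, 0.275), (-1, 0.703)]

for name, table in [("even money", even_money), ("multilevel", multilevel)]:
    ev, sd = ev_and_sd(table)
    print(f"{name}: house edge {-ev:.3f}, volatility (sd) {sd:.2f}")
```

With an identical house edge, the multilevel structure's per-bet standard deviation is several times that of the even-money game, consistent with the study's finding that the even-money setup is the least volatile.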

  13. Heart pacemaker - discharge

    MedlinePlus

    ... table saws) Electric lawnmowers and leaf blowers Slot machines Stereo speakers Tell all providers that you have a pacemaker before any tests are done. Some medical equipment may interfere with ...

  14. A Delphi Study of Additive Manufacturing Applicability for United States Air Force Civil Engineer Contingency Operations

    DTIC Science & Technology

    2015-03-26

    Table 2. Additive Manufacturing Categories (ASTM International, 2012) ... Table 3. Delphi... flexibility in the design and structure of manufactured parts. It also allows for the creation of thousands of possible parts or tools from a single...machine. These benefits of precision and flexibility in design and manufacturing show promising possibilities for addressing the general nature of

  15. Multi-objective component sizing of a power-split plug-in hybrid electric vehicle powertrain using Pareto-based natural optimization machines

    NASA Astrophysics Data System (ADS)

    Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.

    2016-03-01

    The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitism non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.
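    The Pareto-based comparison at the heart of both algorithms rests on non-dominated sorting. A minimal sketch of Pareto dominance and front extraction, with invented (fuel consumption, energy cost) pairs where both objectives are minimized:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Invented (fuel consumption, energy cost) pairs for candidate sizings.
designs = [(5.2, 2.5), (4.8, 3.5), (5.0, 2.9), (6.0, 4.0), (4.6, 3.8)]
print(pareto_front(designs))  # (6.0, 4.0) is dominated and drops out
```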

  16. An Overview of Starfish: A Table-Centric Tool for Interactive Synthesis

    NASA Technical Reports Server (NTRS)

    Tsow, Alex

    2008-01-01

    Engineering is an interactive process that requires intelligent interaction at many levels. My thesis [1] advances an engineering discipline for high-level synthesis and architectural decomposition that integrates perspicuous representation, designer interaction, and mathematical rigor. Starfish, the software prototype for the design method, implements a table-centric transformation system for reorganizing control-dominated system expressions into high-level architectures. Based on the digital design derivation (DDD) system, a designer-guided synthesis technique that applies correctness-preserving transformations to synchronous data flow specifications expressed as co-recursive stream equations, Starfish enhances user interaction and extends the reachable design space by incorporating four innovations: behavior tables, serialization tables, data refinement, and operator retiming. Behavior tables express systems of co-recursive stream equations as a table of guarded signal updates. Developers and users of the DDD system used manually constructed behavior tables to help them decide which transformations to apply and how to specify them. These design exercises produced several formally constructed hardware implementations: the FM9001 microprocessor, an SECD machine for evaluating LISP, and the SchemEngine, a garbage-collected machine for interpreting a byte-code representation of compiled Scheme programs. Bose and Tuna, two of DDD's developers, have subsequently commercialized the design derivation methodology at Derivation Systems, Inc. (DSI). DSI has formally derived and validated PCI bus interfaces and a Java byte-code processor; they further executed a contract to prototype SPIDER, NASA's ultra-reliable communications bus. To date, most derivations from DDD and DRS have targeted hardware due to its synchronous design paradigm. However, Starfish expressions are independent of the synchronization mechanism; there is no commitment to hardware or globally broadcast clocks. Though software back-ends for design derivation are limited to the DDD stream-interpreter, targeting synchronous or real-time software is not substantively different from targeting hardware.

  17. The Armys M-1 Abrams, M-2/M-3 Bradley, and M-1126 Stryker: Background and Issues for Congress

    DTIC Science & Technology

    2016-04-05

    smoothbore gun 1 x coaxial mounted 7.62 mm M-240 machine gun 1 x roof mounted 12.7 mm M-2 HB machine gun 1 x roof mounted 7.62 mm M-240 machine gun 12...Bradley Fighting Vehicle Table 2. Selected Basic Characteristics: M-2/3-A2 Armament 1 x turret mounted M-242 25mm "Bushmaster" chain gun 2 x turret...mounted TOW anti-tank missiles 1 x coaxial mounted 7.62 mm M-240C machine gun 8 x turret mounted smoke grenade launchers Crew M-2: 3 crew, 6

  18. Polytropic scaling of a flow Z-pinch

    NASA Astrophysics Data System (ADS)

    Hughes, M. C.; Shumlak, U.; Nelson, B. A.; Golingo, R. P.; Claveau, E. L.; Doty, S. A.; Forbes, E. G.; Kim, B.; Ross, M. P.; Weed, J. R.

    2015-11-01

    The ZaP Flow Z-Pinch project investigates the use of velocity shear to mitigate MHD instabilities. The ZaP-HD experiment produces 50 cm long pinches of varying radii. The power to the experiment is split between the plasma formation and acceleration process and the pinch assembly and compression process. Once the pinch is formed, low magnetic fluctuations indicate a quiescent, long-lived pinch. The split power supply allows more control of the pinch current than previous machine iterations, with a designed range from 50 to 150 kA. Radial force balance leads to the Bennett relation, which indicates that as the pinch compresses under increasing current, the plasma pressure and/or linear density must change. Through ion spectroscopy and digital holographic interferometry coupled with magnetic measurements of the pinch current, the components of the Bennett relation can be fully measured. The scaling is then assumed to follow a polytrope as the pinch pressure increases from approximately 250 kPa at formation to much higher values, approaching 100 MPa. A preliminary analysis of pinch scaling is shown, corroborated by other diagnostics on the machine, along with extrapolations to the currents required for an HEDLP machine. This work is supported by grants from the U.S. Department of Energy and the U.S. National Nuclear Security Administration.
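    For reference, the Bennett relation invoked here can be written in SI units, with I the pinch current, N the ion line density, and T_e, T_i the electron and ion temperatures:

```latex
% Bennett relation for a Z-pinch in radial force balance (SI units):
% I -- pinch current, N -- ion line density,
% T_e, T_i -- electron and ion temperatures
\frac{\mu_0 I^2}{8\pi} = N k_B \left( T_e + T_i \right)
```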

  19. Estimating photometric redshifts for X-ray sources in the X-ATLAS field using machine-learning techniques

    NASA Astrophysics Data System (ADS)

    Mountrichas, G.; Corral, A.; Masoura, V. A.; Georgantopoulos, I.; Ruiz, A.; Georgakakis, A.; Carrera, F. J.; Fotopoulou, S.

    2017-12-01

    We present photometric redshifts for 1031 X-ray sources in the X-ATLAS field using the machine-learning technique TPZ. X-ATLAS covers 7.1 deg2 observed with XMM-Newton within the Science Demonstration Phase of the H-ATLAS field, making it one of the largest contiguous areas of the sky with both XMM-Newton and Herschel coverage. All of the sources have available SDSS photometry, while 810 additionally have mid-IR and/or near-IR photometry. A spectroscopic sample of 5157 sources primarily in the XMM/XXL field, but also from several X-ray surveys and the SDSS DR13 redshift catalogue, was used to train the algorithm. Our analysis reveals that the algorithm performs best when the sources are split, based on their optical morphology, into point-like and extended sources. Optical photometry alone is not enough to estimate accurate photometric redshifts, but the results greatly improve when at least mid-IR photometry is added in the training process. In particular, our measurements show that the estimated photometric redshifts for the X-ray sources of the training sample have a normalized absolute median deviation, nmad ≈ 0.06, and a percentage of outliers, η = 10-14%, depending upon whether the sources are extended or point like. Our final catalogue contains photometric redshifts for 933 out of the 1031 X-ray sources with a median redshift of 0.9. The table of the photometric redshifts is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A39
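    The two quality metrics quoted, nmad and the outlier fraction η, follow standard photometric-redshift conventions; a minimal sketch, with toy redshifts in the usage example:

```python
import statistics

def nmad(z_phot, z_spec):
    """Normalized median absolute deviation of dz = (z_phot - z_spec)/(1 + z_spec),
    a standard photometric-redshift accuracy metric."""
    dz = [(p - s) / (1.0 + s) for p, s in zip(z_phot, z_spec)]
    med = statistics.median(dz)
    return 1.48 * statistics.median(abs(d - med) for d in dz)

def outlier_fraction(z_phot, z_spec, cut=0.15):
    """Fraction of sources with |dz| above the conventional cut."""
    dz = [abs(p - s) / (1.0 + s) for p, s in zip(z_phot, z_spec)]
    return sum(d > cut for d in dz) / len(dz)

# Toy values: the last source is a catastrophic outlier.
z_spec = [0.5, 1.0, 1.5, 2.0]
z_phot = [0.52, 0.95, 1.55, 3.0]
print(round(outlier_fraction(z_phot, z_spec), 2))  # 0.25
```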

  20. Application of machine learning techniques to lepton energy reconstruction in water Cherenkov detectors

    NASA Astrophysics Data System (ADS)

    Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.

    2018-04-01

    The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.

  1. High Order Accuracy Methods for Supersonic Reactive Flows

    DTIC Science & Technology

    2008-06-25

    k = 0, · · · , N and N is the polynomial order used. The positive constant M is chosen such that σ(N) becomes machine zero. Typically M ∼ 32. Table...function used in this study is the Exponential filter given by σ(η) = exp(−αη^p), (38) where α = −ln(ε) and ε is the machine zero. The spectral...respectively. All numerical experiments were run on a 667 MHz Compaq Alpha machine with 1 GB memory and with an Alpha internal floating point processor. 9.1
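    The exponential filter is easy to sanity-check numerically; a small sketch (the filter order p = 8 is an illustrative choice, not taken from the report):

```python
import math
import sys

def exp_filter(eta, p=8, eps=sys.float_info.epsilon):
    """Exponential spectral filter sigma(eta) = exp(-alpha * eta**p),
    with alpha = -ln(eps) so that sigma(1) is machine zero."""
    alpha = -math.log(eps)
    return math.exp(-alpha * eta ** p)

print(exp_filter(0.0))  # 1.0: the lowest mode passes unchanged
print(exp_filter(1.0))  # about 2.2e-16: highest mode damped to machine zero
```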

  2. The DREO Pilot Paper Machine.

    DTIC Science & Technology

    1980-01-01

    glass as well as synthetic fibres. The five sections (stock preparation, forming table, press, dryer, and reel) are described...installation, DREO had been concerned with two types of paper and the machine was designed specifically to process these particular papers and at the same...time to offer considerable versatility for the possible preparation of other papers. The two types were detector papers for liquid chemical warfare

  3. 26. July 1974. BENCH SHOP, VIEW LOOKING SOUTH, SHOWING THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. July 1974. BENCH SHOP, VIEW LOOKING SOUTH, SHOWING THE BORING MACHINE PURCHASED IN 1885. THE BIT MAY BE LOWERED BY THE HANGING LINKAGE OR THE TABLE RAISED BY THE FOOT PEDAL. NOTICE THE CHASE FOR THE BELTS, BUILT NO LESS CAREFULLY THAN THE MACHINE ITSELF. - Gruber Wagon Works, Pennsylvania Route 183 & State Hill Road at Red Bridge Park, Bernville, Berks County, PA

  4. Documentation for the machine-readable version of the catalogue of 20457 Star positions obtained by photography in the declination zone -48 deg to -54 deg (1950)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1983-01-01

    The machine-readable catalog, as distributed from the Astronomical Data Center, is described. Some minor reformatting of the magnetic tape version as received was performed to decrease the record size and conserve space; the data content is identical to the sample shown in Table VI of the source reference.

  5. Status of research and development in coordinate-measurement technology

    NASA Astrophysics Data System (ADS)

    Dich, L. Z.; Latyev, S. M.

    1994-09-01

    This paper discusses problems involved in developing and operating coordinate-measuring machines. The status of this area of precision instrumentation is analyzed. These problems are made critical not only by the requirements of the machine-tool industry but also by those of the microelectronics industry, both of which use coordinate tables, step-up gears, and other equipment in which precise coordinate measurements are necessary.

  6. Estimation of an origin–destination table for U.S. imports of waterborne containerized freight

    DOE PAGES

    Wang, Hao; Gearhart, Jared; Jones, Katherine; ...

    2016-01-01

    This study presents a probabilistic origin–destination table for waterborne containerized imports. The analysis makes use of 2012 Port Import/Export Reporting Service data, 2012 Surface Transportation Board waybill data, a gravity model, and information on the landside transportation mode split associated with specific ports. This analysis suggests that about 70% of the origin–destination table entries have a coefficient of variation of less than 20%. This 70% of entries is associated with about 78% of the total volume. This analysis also makes evident the importance of rail interchange points in Chicago, Illinois; Memphis, Tennessee; Dallas, Texas; and Kansas City, Missouri, in supporting the transportation of containerized goods from Asia through West Coast ports to the eastern United States.

  7. Minimal universal quantum heat machine.

    PubMed

    Gelbwaser-Klimovsky, D; Alicki, R; Kurizki, G

    2013-01-01

    In traditional thermodynamics the Carnot cycle yields the ideal performance bound of heat engines and refrigerators. We propose and analyze a minimal model of a heat machine that can play a similar role in quantum regimes. The minimal model consists of a single two-level system with periodically modulated energy splitting that is permanently, weakly, coupled to two spectrally separated heat baths at different temperatures. The equation of motion allows us to compute the stationary power and heat currents in the machine consistent with the second law of thermodynamics. This dual-purpose machine can act as either an engine or a refrigerator (heat pump) depending on the modulation rate. In both modes of operation, the maximal Carnot efficiency is reached at zero power. We study the conditions for finite-time optimal performance for several variants of the model. Possible realizations of the model are discussed.

  8. Energy Survey of Machine Tools: Separating Power Information of the Main Transmission System During Machining Process

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Liu, Fei; Hu, Shaohua; Yin, Zhenbiao

    The major power information of the main transmission system in machine tools (MTSMT) during the machining process includes the effective output power (i.e. cutting power), the input power and power loss of the mechanical transmission system, and the main motor power loss. This information is easy to obtain in the laboratory but difficult to evaluate during a manufacturing process. To solve this problem, a separation method is proposed to extract the MTSMT power information during the machining process. In this method, the energy flow and the mathematical models of the major MTSMT power information during machining are first set up. Based on these mathematical models and basic data tables obtained from experiments, the power information during machining can be separated simply by measuring the real-time total input power of the spindle motor. The operating procedure of this method is also given.
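    A heavily simplified sketch of the separation idea: given pre-measured basic data (idle power at the operating spindle speed) and an assumed load-loss model, cutting power is recovered from the single measured quantity, total spindle input power. All numbers and the linear loss coefficient are illustrative assumptions, not the paper's calibrated models.

```python
def cutting_power(p_input, p_idle, load_loss_coeff=0.08):
    """Estimate effective cutting power from measured total input power.

    p_input:         real-time total input power of the spindle motor (W)
    p_idle:          unloaded power at the same spindle speed, taken from a
                     pre-measured basic data table (W)
    load_loss_coeff: assumed additional loss per watt of load power
    """
    p_load = p_input - p_idle              # power drawn above idle
    p_loss = load_loss_coeff * p_load      # assumed load-dependent losses
    return p_load - p_loss                 # effective (cutting) power

print(cutting_power(p_input=2500.0, p_idle=900.0))  # 1472.0
```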

  9. Vibration Damping Response of Composite Materials

    DTIC Science & Technology

    1991-04-01

    using a diamond-impregnated cutoff wheel mounted on a milling machine. This procedure was followed to minimize damage to the composite specimens prior to...Development Report, Vibration Damping Response of Composite Materials, by Roger M. Crane, May 28, 1991. Approved for...TABLE OF CONTENTS Page LIST OF TABLES

  10. Development of Design Review Procedures for Army Air Pollution Abatement Projects. Volume I.

    DTIC Science & Technology

    1980-07-01

    Sanding machines 3.25 Silica 2.75 Soap 2.25 Soapstone 2.25 Starch 2.25 Sugar 2.25 Talc 2.25 Tobacco 3.5 Wood 3.5 2-107 TABLE 2-9 RECOMMENDED MAXIMUM...about 1,600 and the motor bhp is 44. According to Table 4-12, the motor rpm should be 1,800; hence, the corresponding price is about $600. If a magnetic

  11. Machinability of experimental Ti-Ag alloys.

    PubMed

    Kikuchi, Masafumi; Takahashi, Masatoshi; Okuno, Osamu

    2008-03-01

    This study investigated the machinability of experimental Ti-Ag alloys (5, 10, 20, and 30 mass% Ag) as a new dental titanium alloy candidate for CAD/CAM use. The alloys were slotted with a vertical milling machine and carbide square end mills under two cutting conditions. Machinability was evaluated through cutting force using a three-component force transducer fixed on the table of the milling machine. The horizontal cutting force of the Ti-Ag alloys tended to decrease as the concentration of silver increased. Values of the component of the horizontal cutting force perpendicular to the feed direction for Ti-20% Ag and Ti-30% Ag were more than 20% lower than those for titanium under both cutting conditions. Alloying with silver significantly improved the machinability of titanium in terms of cutting force under the present cutting conditions.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    eCo-PylotDB, written completely in Python, provides a script that parses incoming emails and prepares extracted data for submission to a database table. The script extracts the database server, the server password, and the server username from the email address to which the email is sent. The database table is specified on the Subject line. Any text in the body of the email is extracted as user comments for the database table. Attached files are extracted as data files, with each file submitted to a specified table field but in separate rows of the targeted database table. Other information, such as the sender, date, time, and machine from which the email was sent, is extracted and submitted to the database table as well. An email is sent back to the user specifying whether the data from the initial email was accepted or rejected by the database server. If rejected, the return email includes details as to why.
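    The parsing flow might look like the following sketch; the "username.password@server" address format, the field order, and the sample message are invented for illustration and are not the actual eCo-PylotDB conventions:

```python
from email import message_from_string

# Hypothetical message; all addresses and values below are invented.
raw = """\
From: alice@example.org
To: dbuser.s3cret@dbhost.example.com
Subject: results_table

run completed on node17
"""

msg = message_from_string(raw)
creds, server = msg["To"].split("@")     # server name from the address
username, password = creds.split(".")    # credentials from the local part
table = msg["Subject"]                   # target table from the Subject line
comments = msg.get_payload().strip()     # body becomes user comments

print(server, username, table)  # dbhost.example.com dbuser results_table
```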

  13. Optical Implementation of the Optimal Universal and Phase-Covariant Quantum Cloning Machines

    NASA Astrophysics Data System (ADS)

    Ye, Liu; Song, Xue-Ke; Yang, Jie; Yang, Qun; Ma, Yang-Cheng

    Quantum cloning relates to the security of quantum computation and quantum communication. In this paper, firstly we propose a feasible unified scheme to implement optimal 1 → 2 universal, 1 → 2 asymmetric and symmetric phase-covariant cloning, and 1 → 2 economical phase-covariant quantum cloning machines only via a beam splitter. Then 1 → 3 economical phase-covariant quantum cloning machines also can be realized by adding another beam splitter in context of linear optics. The scheme is based on the interference of two photons on a beam splitter with different splitting ratios for vertical and horizontal polarization components. It is shown that under certain condition, the scheme is feasible by current experimental technology.

  14. Stirling cryocooler test results and design model verification

    NASA Astrophysics Data System (ADS)

    Shimko, Martin A.; Stacy, W. D.; McCormick, John A.

    A long-life Stirling cycle cryocooler being developed for spaceborne applications is described. The results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the generator design code used in its development are presented. This machine achieved a cold-end temperature of 65 K while carrying a 1/2-W cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for sweeping and sealing the machine working volumes. The double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. The PC-compatible design code developed for this design approach calculates regenerator loss, including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls. The code accurately predicted cooler performance and assisted in diagnosing breadboard machine flaws during shakedown and development testing.

  15. Documentation for the machine-readable version of A Finding List of Stars of Spectral Type F2 and Earlier in a North Galactic Pole Region

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable data set is the result of an objective-prism survey made with an 80 cm/120 cm Schmidt telescope. The F2 and earlier stars were isolated from later type objects by using the MK classification criteria. The catalog contains 601 stars and includes cross identifications to the BD and HD catalogs, coordinates, photographic magnitudes and spectral types. A separate file contains the remarks from the original data tables merged with those following the data. The machine-readable files are described.

  16. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Polarisation splitting of laser beams by large angles with minimal reflection losses

    NASA Astrophysics Data System (ADS)

    Davydov, B. L.

    2006-05-01

    New crystal anisotropic prisms for splitting orthogonally polarised components of laser radiation by large angles with minimal reflection losses caused by the Brewster refraction and total internal reflection of polarised waves from the crystal-air interface are considered, and the method for their calculation is described. It is shown that, by assembling glue-free combinations of two or three prisms, thermally stable beamsplitters can be fabricated which are free from beam astigmatism and from wave dispersion of the output angles of the beams. The parameters and properties of the new beamsplitters are presented in a convenient form in figures and tables.

  17. The many faces of the second law

    NASA Astrophysics Data System (ADS)

    Van den Broeck, C.

    2010-10-01

    There exists no perpetuum mobile of the second kind. We review the implications of this observation on the second law, on the efficiency of thermal machines, on Onsager symmetry, on Brownian motors and Brownian refrigerators, and on the universality of efficiency of thermal machines at maximum power. We derive a microscopic expression for the stochastic entropy production, and obtain from it the detailed and integral fluctuation theorem. We close with the remarkable observation that the second law can be split in two: the total entropy production is the sum of two contributions each of which is growing independently in time.

  18. Method for laser machining explosives and ordnance

    DOEpatents

    Muenchausen, Ross E.; Rivera, Thomas; Sanchez, John A.

    2003-05-06

    Method for laser machining explosives and related articles. A laser beam is directed at a surface portion of a mass of high explosive to melt and/or vaporize the surface portion while directing a flow of gas at the melted and/or vaporized surface portion. The gas flow sends the melted and/or vaporized explosive away from the charge of explosive that remains. The method also involves splitting the casing of a munition having an encased explosive. The method includes rotating a munition while directing a laser beam to a surface portion of the casing of an article of ordnance. While the beam melts and/or vaporizes the surface portion, a flow of gas directed at the melted and/or vaporized surface portion sends it away from the remaining portion of ordnance. After cutting through the casing, the beam then melts and/or vaporizes portions of the encased explosive and the gas stream sends the melted/vaporized explosive away from the ordnance. The beam is continued until it splits the article, after which the encased explosive, now accessible, can be removed safely for recycle or disposal.

  19. Accelerating Chemical Discovery with Machine Learning: Simulated Evolution of Spin Crossover Complexes with an Artificial Neural Network.

    PubMed

    Janet, Jon Paul; Chan, Lydia; Kulik, Heather J

    2018-03-01

    Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.
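    The error-aware scoring strategy can be sketched as a fitness function that rejects candidates lying too far from the nearest ANN training point. The feature vectors, the distance cutoff, and the stand-in "ANN" below are invented for illustration, not the paper's trained model.

```python
import math

# Invented 2-D feature vectors standing in for the ANN training set.
train_points = [(0.1, 0.2), (0.4, 0.9), (0.8, 0.3)]

def nearest_train_distance(x):
    return min(math.dist(x, t) for t in train_points)

def ann_predicted_splitting(x):
    """Stand-in for the trained ANN's spin-splitting prediction."""
    return 10.0 * x[0] - 5.0 * x[1]

def fitness(x, cutoff=0.5):
    """Maximize predicted splitting, but only inside the ANN's trust region."""
    if nearest_train_distance(x) > cutoff:
        return float("-inf")   # too far from training data: reject candidate
    return ann_predicted_splitting(x)

print(fitness((0.9, 0.3)))    # 7.5: near a training point, kept
print(fitness((0.0, -2.0)))   # -inf: outside the trust region
```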

  20. The "Difference" in Babbage's Difference Engine.

    ERIC Educational Resources Information Center

    Crowley, Mary L.

    1985-01-01

    Discoveries of Charles Babbage in the 1800s are described. Origins of the difference engine, his calculating machine, the principles of computation applied to tables, and the design and construction of his engine are included. (MNS)

  1. Ballistics Tests of Fibrous Concrete Dome and Plate Specimens

    DTIC Science & Technology

    1976-04-01

    x 0.22 x 1 in. chopped steel fibers from U.S. Steel. FG denotes 1 in. fiberglass fibers from Owens-Corning. Table 3 Dome Test Results Test Fiber...1 in. drawn steel fibers from National Standard. FG denotes fiberglass fibers from Owens-Corning. Table 4b 30-Caliber Machine Gun Plate Test...drawn steel fibers from National Standard. FG denotes fiberglass fibers from Owens-Corning. Table 4c 45-Caliber Pistol Plate Test Results Type of

  2. Very High Load Capacity Air Bearing Spindle for Large Diamond Turning Machines

    DTIC Science & Technology

    2010-06-08

    testing and a surplus air bearing rotary table has been located. A prototype spindle has been designed to work with the table...PROTOTYPE SPINDLE DESIGN, Mirror Technology Workshop, June 8, 2010. DT is a proven method of manufacturing aspheric off-axis... designed to hold in a strain-free condition. This spindle development is aimed at producing 3 meter diameter components. This requirement results in the

  3. Adapter plate assembly for adjustable mounting of objects

    DOEpatents

    Blackburn, R.S.

    1986-05-02

    An adapter plate and two locking discs are together affixed to an optic table with machine screws or bolts threaded into a fixed array of internally threaded holes provided in the table surface. The adapter plate preferably has two, and preferably parallel, elongated locating slots each freely receiving a portion of one of the locking discs for secure affixation of the adapter plate to the optic table. A plurality of threaded apertures provided in the adapter plate are available to attach optical mounts or other devices onto the adapter plate in an orientation not limited by the disposition of the array of threaded holes in the table surface. An axially aligned but radially offset hole through each locking disc receives a screw that tightens onto the table, such that prior to tightening of the screw the locking disc may rotate and translate within each locating slot of the adapter plate for maximum flexibility of the orientation thereof.

  4. Adapter plate assembly for adjustable mounting of objects

    DOEpatents

    Blackburn, Robert S.

    1987-01-01

    An adapter plate and two locking discs are together affixed to an optic table with machine screws or bolts threaded into a fixed array of internally threaded holes provided in the table surface. The adapter plate preferably has two, and preferably parallel, elongated locating slots each freely receiving a portion of one of the locking discs for secure affixation of the adapter plate to the optic table. A plurality of threaded apertures provided in the adapter plate are available to attach optical mounts or other devices onto the adapter plate in an orientation not limited by the disposition of the array of threaded holes in the table surface. An axially aligned but radially offset hole through each locking disc receives a screw that tightens onto the table, such that prior to tightening of the screw the locking disc may rotate and translate within each locating slot of the adapter plate for maximum flexibility of the orientation thereof.

  5. Evaluation of workability and strength of green concrete using waste steel scrap

    NASA Astrophysics Data System (ADS)

    Neeraja, D.; Arshad, Shaik Mohammed; Nawaz Nadaf, Alisha K.; Reddy, Mani Kumar

    2017-11-01

    This project studies the workability and mechanical properties of concrete containing waste steel scrap from the lathe industry. Lathe machines produce waste steel scrap, and its accumulation causes a disposal problem; this study therefore attempts to use the waste in concrete. Compressive, split tensile, and non-destructive (ultrasonic pulse velocity) tests were conducted to determine the effect of steel scrap on concrete. Steel scrap contents of 0%, 0.5%, 1%, 1.5%, and 2% by volume of concrete were considered, and strengths were measured at 7 and 28 days. The split tensile strength of steel scrap concrete increased slightly and was highest at a volume fraction of 2.0% steel scrap. The study concludes that steel scrap can be used in concrete to reduce its brittleness to some extent.
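    For context, the split tensile strength measured in such tests is conventionally computed from the cylinder splitting (Brazilian) test as:

```latex
% Splitting (Brazilian) tensile strength of a cylinder:
% P -- peak load at failure, d -- cylinder diameter, l -- cylinder length
f_{ct} = \frac{2P}{\pi \, d \, l}
```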

  6. Improving machine learning reproducibility in genetic association studies with proportional instance cross validation (PICV).

    PubMed

    Piette, Elizabeth R; Moore, Jason H

    2018-01-01

    Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously-reported interaction, which fails to significantly replicate; PICV however improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. 
This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
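
    PICV itself is described above only at a high level; its core idea, splitting so that each value of the imbalanced variable keeps its original proportion in both partitions, can be sketched as follows (the function name, 90:10 label mix and parameters are illustrative, not taken from the paper):

```python
import random

def proportional_split(labels, test_frac=0.2, seed=0):
    """Return (train, test) index lists such that every label value keeps
    its original proportion in both partitions (the property PICV enforces)."""
    rng = random.Random(seed)
    by_label = {}
    for i, y in enumerate(labels):
        by_label.setdefault(y, []).append(i)
    train, test = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        cut = int(round(len(idxs) * test_frac))
        test.extend(idxs[:cut])
        train.extend(idxs[cut:])
    return sorted(train), sorted(test)

# Imbalanced genotype labels: 90 common, 10 rare.
labels = [0] * 90 + [1] * 10
train, test = proportional_split(labels)
# The 9:1 imbalance is preserved exactly in both partitions, instead of
# being left to chance as in a plain random split.
```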

  7. CAD-CAE in Electrical Machines and Drives Teaching.

    ERIC Educational Resources Information Center

    Belmans, R.; Geysen, W.

    1988-01-01

    Describes the use of computer-aided design (CAD) techniques in teaching the design of electrical motors. Approaches described include three technical viewpoints, such as electromagnetics, thermal, and mechanical aspects. Provides three diagrams, a table, and conclusions. (YP)

  8. HAL/S-FC and HAL/S-360 compiler system program description

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The compiler is a large multi-phase design and can be broken into four phases: Phase 1 inputs the source language and does a syntactic and semantic analysis generating the source listing, a file of instructions in an internal format (HALMAT) and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer plus a large collection of routines for implementing various facilities of the language.

  9. Specific modes of vibratory technological machines: mathematical models, peculiarities of interaction of system elements

    NASA Astrophysics Data System (ADS)

    Eliseev, A. V.; Sitov, I. S.; Eliseev, S. V.

    2018-03-01

    The article develops a methodological basis for constructing mathematical models of vibratory technological machines. An approach is proposed that places the vibration table in a specific mode providing dynamic damping of oscillations in the zone where the vibration exciter is mounted, while maintaining the specified vibration parameters in the working zone of the table. The aim of the work is to develop mathematical modelling methods oriented towards technological processes with long cycles. Structural mathematical modelling techniques are used, with structural schemes, transfer functions and amplitude-frequency characteristics. The concept of the work is to test the possibility of combining reduced loads on the working components of the vibration exciter with sufficiently wide limits for varying the parameters of the vibrational field.
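
    The "dynamic damping of oscillations" exploited here is, in classical terms, the tuned vibration absorber effect. A minimal statement of that condition, with a two-mass model and symbols of my own choosing rather than the article's:

```latex
% Two-mass model: working body of the table (m_1, k_1) carrying the
% exciter zone, with an attached mass-spring pair (m_2, k_2), under
% harmonic excitation F_0 \sin(\omega t).
% The response of m_1 vanishes (dynamic damping) when the attachment is
% tuned to the excitation frequency:
\omega^2 = \frac{k_2}{m_2}
% The exciter zone then sits at a node of the oscillation while the
% working zone retains its specified vibration parameters.
```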

  10. Sine-Bar Attachment For Machine Tools

    NASA Technical Reports Server (NTRS)

    Mann, Franklin D.

    1988-01-01

    Sine-bar attachment for collets, spindles, and chucks helps machinists set up quickly for precise angular cuts that require greater precision than provided by graduations of machine tools. Machinist uses attachment to index head, carriage of milling machine or lathe relative to table or turning axis of tool. Attachment accurate to 1 minute of arc depending on length of sine bar and precision of gauge blocks in setup. Attachment installs quickly and easily on almost any type of lathe or mill. Requires no special clamps or fixtures, and eliminates many trial-and-error measurements. More stable than improvised setups and not jarred out of position readily.
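
    The setup reduces to the sine-bar relation sin(θ) = h/L: the gauge-block stack height fixes the angle. A small sketch of that calculation; the 5-inch bar is an example, not a dimension from the brief:

```python
import math

def gauge_block_height(sine_bar_length, angle_deg):
    """Gauge-block stack height h that tilts a sine bar of the given
    length to angle_deg: sin(angle) = h / L, so h = L * sin(angle)."""
    return sine_bar_length * math.sin(math.radians(angle_deg))

# A 5-inch sine bar set to 30 degrees needs a 2.5-inch gauge-block stack.
h = gauge_block_height(5.0, 30.0)
print(round(h, 4))  # 2.5
```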

  11. Sequence-invariant state machines

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R.; Manjunath, Shamanna K.; Maki, Gary K.

    1991-01-01

    A synthesis method and an MOS VLSI architecture are presented to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. The design method utilizes binary tree structured (BTS) logic to implement regular and dense circuits. The desired state sequence can be hardwired with power supply connections or can be dynamically reallocated if stored in a register. This allows programmable VLSI controllers to be designed with a compact size and performance approaching that of dedicated logic. Results of ICV implementations are reported and an example sequence-invariant state machine is contrasted with implementations based on traditional methods.

  12. Data mining in bioinformatics using Weka.

    PubMed

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

    The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering and feature selection, common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods complemented by graphical user interfaces for data exploration and the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.

  13. Documentation for the machine-readable version of the Revised S210 Catalog of Far-Ultraviolet Objects (Page, Carruthers and Heckathorn 1982)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    A detailed description of the machine-readable revised catalog as it is currently being distributed from the Astronomical Data Center is given. This catalog of star images was compiled from imagery obtained by the Naval Research Laboratory (NRL) Far-Ultraviolet Camera/Spectrograph (Experiment S201) operated from 21 to 23 April 1972 on the lunar surface during the Apollo 16 mission. The documentation includes a detailed data format description, a table of the characteristics of the magnetic tape file, and a sample listing of data records exactly as they are presented in the machine-readable version.

  14. MLBCD: a machine learning tool for big clinical data.

    PubMed

    Luo, Gang

    2015-01-01

    Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
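
    The hyper-parameter selection MLBCD aims to automate can be illustrated with a toy grid search over cross-validated accuracy. The model (a 1-D k-nearest-neighbour classifier) and the synthetic data below are invented for the sketch and are not MLBCD's own algorithms:

```python
import random

def knn_predict(train, k, point):
    """Majority label among the k nearest training points (1-D features)."""
    neighbours = sorted(train, key=lambda xy: abs(xy[0] - point))[:k]
    votes = [y for _, y in neighbours]
    return max(set(votes), key=votes.count)

def cv_accuracy(data, k, folds=3):
    """Mean k-NN accuracy over simple interleaved cross-validation folds."""
    scores = []
    for f in range(folds):
        test = data[f::folds]
        train = [d for i, d in enumerate(data) if i % folds != f]
        hits = sum(knn_predict(train, k, x) == y for x, y in test)
        scores.append(hits / len(test))
    return sum(scores) / folds

# Synthetic 1-D data: class 0 clusters near 0, class 1 near 10.
rng = random.Random(1)
data = [(rng.gauss(0, 1), 0) for _ in range(30)] + \
       [(rng.gauss(10, 1), 1) for _ in range(30)]
rng.shuffle(data)

# Grid search: pick the hyper-parameter k with the best CV accuracy,
# the step a non-expert would otherwise have to do by hand.
best_k = max([1, 3, 5, 7], key=lambda k: cv_accuracy(data, k))
```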

  15. Time-resolved observation of coherent excitonic nonlinear response with a table-top narrowband THz pulse wave

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uchida, K.; Hirori, H., E-mail: hirori@icems.kyoto-u.ac.jp; CREST, Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012

    2015-11-30

    By combining a tilted-pulse-intensity-front scheme using a LiNbO{sub 3} crystal and a chirped-pulse-beating method, we generated a narrowband intense terahertz (THz) pulse, which had a maximum electric field of more than 10 kV/cm at around 2 THz, a bandwidth of ∼50 GHz, and frequency tunability from 0.5 to 2 THz. By performing THz-pump and near-infrared-probe experiments on GaAs quantum wells, we observed that the resonant excitation of the intraexcitonic 1s-2p transition induces a clear and large Autler-Townes splitting. Our time-resolved measurements show that the splitting energy observed in the rising edge region of electric field is larger than in the constant region. This result implies that the splitting energy depends on the time-averaged THz field over the excitonic dephasing time rather than that at the instant of the exciton creation by a probe pulse.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Gearhart, Jared; Jones, Katherine

    This study presents a probabilistic origin–destination table for waterborne containerized imports. The analysis makes use of 2012 Port Import/Export Reporting Service data, 2012 Surface Transportation Board waybill data, a gravity model, and information on the landside transportation mode split associated with specific ports. This analysis suggests that about 70% of the origin–destination table entries have a coefficient of variation of less than 20%. This 70% of entries is associated with about 78% of the total volume. This analysis also makes evident the importance of rail interchange points in Chicago, Illinois; Memphis, Tennessee; Dallas, Texas; and Kansas City, Missouri, in supporting the transportation of containerized goods from Asia through West Coast ports to the eastern United States.
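
    The abstract does not give the gravity model's exact form; as an illustration, a common singly-constrained variant, with invented port and destination totals, might look like:

```python
def gravity_od_table(origins, destinations, dist, beta=2.0):
    """Singly-constrained gravity model: T[i][j] = O_i * w_ij / sum_j w_ij,
    with impedance w_ij = D_j / dist[i][j]**beta. Row sums equal the
    origin totals by construction."""
    table = []
    for i, o in enumerate(origins):
        weights = [destinations[j] / dist[i][j] ** beta
                   for j in range(len(destinations))]
        total = sum(weights)
        table.append([o * w / total for w in weights])
    return table

# Hypothetical example: two ports (container volumes) and three inland
# destinations, with generalized distances between them.
origins = [700.0, 300.0]
destinations = [400.0, 400.0, 200.0]
dist = [[1.0, 2.0, 4.0],   # port 0 to each destination
        [3.0, 1.0, 2.0]]   # port 1 to each destination
T = gravity_od_table(origins, destinations, dist)
```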

  17. Deformation of products cut on AWJ x-y tables and its suppression

    NASA Astrophysics Data System (ADS)

    Hlaváč, L. M.; Hlaváčová, I. M.; Plančár, Š.; Krenický, T.; Geryk, V.

    2018-02-01

    The aim of this study is to investigate abrasive water jet (AWJ) cutting of column pieces on commercial x-y AWJ cutting machines. The shape deformation in curved and/or stepped parts of cutting trajectories, caused by both the trailback (declination angle) and the taper (inclination of the cut walls), can be calculated from the presented analytical model. Some of the results were compared with data measured on samples cut on two types of commercial tables. The main motivation of this investigation is to determine the percentage difference between the predicted and real distortion of the cut product, i.e. the accuracy of the analytical model. Subsequently, the distortion may be reduced by implementing the theoretical model in the control systems of cutting machines equipped with cutting-head tilting. Despite some limitations of the AWJ machines used, the comparison of calculated dimensions with the real ones shows very good agreement between model and experimental data, lying within the range of measurement uncertainty. Results on a special device demonstrated that the shape deformation in curved parts of the cutting trajectory can be substantially reduced by tilting the cutting head.

  18. Ringo: Interactive Graph Analytics on Big-Memory Machines

    PubMed Central

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2016-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads. PMID:27081215
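
    Ringo's table-to-graph step can be illustrated in plain Python (Ringo itself exposes far richer graph-construction and analytics operations; the table rows and column names here are invented):

```python
from collections import defaultdict

def graph_from_table(rows, src_col, dst_col):
    """Build an undirected adjacency-set graph from a relational table,
    treating each row as an edge between its src_col and dst_col values."""
    adj = defaultdict(set)
    for row in rows:
        u, v = row[src_col], row[dst_col]
        adj[u].add(v)
        adj[v].add(u)
    return adj

# A small hypothetical "interactions" table: who talked to whom.
rows = [
    {"user": "alice", "peer": "bob"},
    {"user": "alice", "peer": "carol"},
    {"user": "bob",   "peer": "carol"},
    {"user": "carol", "peer": "dave"},
]
g = graph_from_table(rows, "user", "peer")
degree = {node: len(nbrs) for node, nbrs in g.items()}
```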

  19. Ringo: Interactive Graph Analytics on Big-Memory Machines.

    PubMed

    Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure

    2015-01-01

    We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads.

  20. Eccentric superconducting RF cavity separator structure

    DOEpatents

    Aggus, John R.; Giordano, Salvatore T.; Halama, Henry J.

    1976-01-01

    Accelerator apparatus having an eccentric-shaped, iris-loaded deflecting cavity for an rf separator for a high energy high momentum, charged particle accelerator beam. In one embodiment, the deflector is superconducting, and the apparatus of this invention provides simplified machining and electron beam welding techniques. Model tests have shown that the electrical characteristics provide the desired mode splitting without adverse effects.

  1. Ergonomic risk factor identification for sewing machine operators through supervised occupational therapy fieldwork in Bangladesh: A case study.

    PubMed

    Habib, Md Monjurul

    2015-01-01

    Many sewing machine operators work with high risk factors for musculoskeletal health in the garment industry in Bangladesh. The objective was to identify the physical risk factors among sewing machine operators in a Bangladeshi garment factory. Sewing machine operators (327; 83% female) were evaluated; the mean age of the participants was 25.25 years. Six ergonomic risk factors were determined using a musculoskeletal disorders risk assessment. Data collection included measurements of sewing machine table and chair heights, combined with information from informal interviews. Significant ergonomic risk factors included the combination of awkward neck and back postures, repetitive hand and arm movements, poorly designed workstations and prolonged working hours without adequate breaks; these risk factors resulted in musculoskeletal complaints, sick leave, and job switching. One aspect of improving worker health in garment factories is addressing musculoskeletal risk factors through ergonomic interventions.

  2. Research on carrying capacity of hydrostatic slideway on heavy-duty gantry CNC machine

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Guo, Tieneng; Wang, Yijie; Dai, Qin

    2017-05-01

    The hydrostatic slideway is a key part of a heavy-duty gantry CNC machine: it supports the full weight of the gantry and allows it to move smoothly along the table. The oil film between the sliding rails therefore plays an important role in the carrying capacity and precision of the machine. In this paper, the frictionless oil film is simulated with three-dimensional CFD, and the carrying capacity of the heavy hydrostatic slideway and the pressure and velocity characteristics of the flow field are analyzed. The simulation result is verified by comparison with experimental data obtained from the heavy-duty gantry machine. For engineering purposes, the oil film carrying capacity is also analyzed with a simplified theoretical method; the precision of the simplified method is evaluated and its effectiveness verified against the experimental data. The simplified calculation method supports the design of oil pads for hydrostatic slideways on heavy-duty gantry CNC machines.
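
    The paper's simplified load-capacity calculation is not reproduced in the abstract. As an assumption, the classical closed-form for a circular hydrostatic step pad illustrates the kind of estimate involved; the pressure and dimensions below are hypothetical:

```python
import math

def circular_pad_capacity(p_r, outer_r, recess_r):
    """Load capacity of a circular hydrostatic step pad with recess
    pressure p_r: W = p_r * pi * (R^2 - r^2) / (2 * ln(R / r)).
    Units: N, mm and MPa (N/mm^2) are consistent here."""
    return (p_r * math.pi * (outer_r**2 - recess_r**2)
            / (2.0 * math.log(outer_r / recess_r)))

# Hypothetical pad: 2 MPa recess pressure, R = 60 mm, r = 30 mm.
W = circular_pad_capacity(2.0, 60.0, 30.0)  # load in newtons
```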

  3. Dictionary Based Machine Translation from Kannada to Telugu

    NASA Astrophysics Data System (ADS)

    Sindhu, D. V.; Sagar, B. M.

    2017-08-01

    Machine translation is the task of translating from one language to another. For languages with scarce linguistic resources, such as Kannada and Telugu, a dictionary-based approach works best. This paper focuses on dictionary-based machine translation from Kannada to Telugu. The proposed methodology uses a dictionary to translate word by word, without much correlation of semantics between the words. The dictionary-based machine translation process has the following sub-processes: morphological analyzer, dictionary, transliteration, transfer grammar and morphological generator. As part of this work, a bilingual dictionary with 8000 entries was developed and a suffix mapping table at the tag level was built. The system was tested on children's stories. In the near future, the system can be further improved by defining transfer grammar rules.
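
    A minimal sketch of the word-by-word lookup with a suffix-mapping fallback described above. The dictionary and suffix entries below are toy romanized placeholders, not entries from the paper's 8000-word dictionary or its tag-level suffix table:

```python
# Toy bilingual dictionary and suffix map (romanized placeholders).
DICTIONARY = {"mane": "illu", "huduga": "abbayi", "pustaka": "pustakam"}
SUFFIX_MAP = {"ge": "ki", "alli": "lo"}  # hypothetical suffix pairs

def translate(sentence):
    """Word-by-word lookup; inflected words are split into stem + suffix
    via the suffix map; unknown words pass through unchanged (a stand-in
    for the transliteration step)."""
    out = []
    for word in sentence.split():
        if word in DICTIONARY:
            out.append(DICTIONARY[word])
            continue
        for ka_suf, te_suf in SUFFIX_MAP.items():
            stem = word[:-len(ka_suf)]
            if word.endswith(ka_suf) and stem in DICTIONARY:
                out.append(DICTIONARY[stem] + te_suf)
                break
        else:
            out.append(word)  # untranslated: pass through
    return " ".join(out)

print(translate("huduga pustaka"))  # abbayi pustakam
```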

  4. 2. FOURTH FLOOR VIEW TO NORTHEAST, WITH I.J. STOKES DENTAL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. FOURTH FLOOR VIEW TO NORTHEAST, WITH I.J. STOKES DENTAL POWDER FILLING MACHINE (CENTER) AND PACKING TABLE (LEFT CENTER) - Colgate & Company Jersey City Plant, G Block, 81-95 Greene Street, Jersey City, Hudson County, NJ

  5. Automated detection and classification of dice

    NASA Astrophysics Data System (ADS)

    Correia, Bento A. B.; Silva, Jeronimo A.; Carvalho, Fernando D.; Guilherme, Rui; Rodrigues, Fernando C.; de Silva Ferreira, Antonio M.

    1995-03-01

    This paper describes a typical machine vision system in an unusual application: the automated visual inspection of a casino's playing tables. The SORTE computer vision system was developed at INETI under a contract with the Portuguese gaming inspection authority IGJ. It aims to automate the detection and classification of dice scores on the playing tables of the game `Banca Francesa' (French Banking) in casinos. The system is based on on-line analysis of the images captured by a monochrome CCD camera placed over the playing tables, in order to extract the score indicated by the dice. Image processing algorithms for real-time automatic throw detection and dice classification were developed and implemented.
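
    The paper's algorithms are not detailed here, but the classification step (reading the score off a die face) can be illustrated with a flood-fill pip counter on a binarized image; the grid below is an invented example, not the system's actual pipeline:

```python
def count_pips(img):
    """Count connected regions of 1-pixels (pips) in a binary image,
    using 4-neighbour flood fill."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    pips = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] and not seen[r][c]:
                pips += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and img[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return pips

# A 5x5 binarized die face showing three pips on the diagonal.
face = [[1, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1]]
print(count_pips(face))  # 3
```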

  6. Upper and Lower Hamburg Bend 2011 Flood Evaluation on the Missouri River near Hamburg, Iowa

    DTIC Science & Technology

    2017-01-01

    flood event. The evaluation required numerical hydrodynamic modeling of a pre-2011 flood condition of the entire floodplain and main channel. [Figure 50: Task 6.3 elevation differences for the degraded main channel and chutes. Table 2: Model computed flow splits between the chutes and the main channel.]

  7. Objective research of auscultation signals in Traditional Chinese Medicine based on wavelet packet energy and support vector machine.

    PubMed

    Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei

    2010-01-01

    This study utilises the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm for objective, quantitative analysis of auscultation signals in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into finer frequency bands. Statistical analysis was then performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, pattern recognition with the SVM was used to distinguish the statistical feature values of mixed sample groups. The experimental results showed a high level of classification accuracy.
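
    The study's level-6 WPD uses a proper wavelet basis; as a dependency-free illustration, a level-2 full packet decomposition with the orthonormal Haar wavelet shows how the sub-band energies (the WPE features that would be fed to the SVM) arise. The signal is an invented example:

```python
import math

def haar_step(x):
    """One orthonormal Haar analysis step: scaled averages and differences
    (the signal length must be even)."""
    a = [(x[i] + x[i+1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i+1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def wavelet_packet_energies(x, level):
    """Full wavelet packet decomposition: split every band at every level,
    then return the energy of each terminal band."""
    bands = [list(x)]
    for _ in range(level):
        bands = [half for band in bands for half in haar_step(band)]
    return [sum(v * v for v in band) for band in bands]

signal = [1.0, 3.0, -2.0, 4.0, 0.0, 1.0, -1.0, 2.0]
energies = wavelet_packet_energies(signal, 2)  # 4 sub-band energies
# Orthonormality conserves energy: sum(energies) equals the signal energy.
```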

  8. Design and implementation of a system for laser assisted milling of advanced materials

    NASA Astrophysics Data System (ADS)

    Wu, Xuefeng; Feng, Gaocheng; Liu, Xianli

    2016-09-01

    Laser assisted machining is an effective method to machine advanced materials with the added benefits of longer tool life and increased material removal rates. While extensive studies have investigated the machining properties for laser assisted milling (LAML), few attempts have been made to extend LAML to machining parts with complex geometric features. A methodology for continuous path machining for LAML is developed by integration of a rotary and movable table into an ordinary milling machine with a laser beam system. The machining strategy and processing path are investigated to determine alignment of the machining path with the laser spot. In order to keep the material removal temperatures above the softening temperature of silicon nitride, the transformation is coordinated and the temperature interpolated, establishing a transient thermal model. The temperatures of the laser center and cutting zone are also carefully controlled to achieve optimal machining results and avoid thermal damage. These experiments indicate that the system results in no surface damage as well as good surface roughness, validating the application of this machining strategy and thermal model in the development of a new LAML system for continuous path processing of silicon nitride. The proposed approach can be easily applied in LAML system to achieve continuous processing and improve efficiency in laser assisted machining.

  9. Improving Classification Performance through an Advanced Ensemble Based Heterogeneous Extreme Learning Machines.

    PubMed

    Abuassba, Adnan O M; Zhang, Dezheng; Luo, Xiong; Shaheryar, Ahmad; Ali, Hazrat

    2017-01-01

    Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden layer feedforward neural network (SLFN). It often has good generalization performance. However, there are chances that it might overfit the training data due to having more hidden nodes than needed. To address the generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L 2 -norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function of increasing diversity and accuracy among the final ensemble. Finally, the class label of unseen data is predicted using majority vote approach. Splitting the training data into subsets and incorporation of heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets.
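
    The ensemble construction described above (a random resample of the training data per base learner, majority vote over the ensemble) can be sketched as follows. To keep the sketch dependency-free, the base learners here are simple decision stumps rather than ELM variants, and the 1-D data are invented; only the resampling/voting scheme follows the paper:

```python
import random

def fit_stump(data):
    """Best single-threshold classifier on 1-D data [(x, label), ...]."""
    best = None
    for t, _ in data:
        for sign in (1, -1):
            acc = sum((sign * (x - t) > 0) == bool(y) for x, y in data) / len(data)
            if best is None or acc > best[0]:
                best = (acc, t, sign)
    _, t, sign = best
    return lambda x: int(sign * (x - t) > 0)

def fit_ensemble(data, n_learners=15, seed=0):
    """Train each base learner on a random resample of the data
    (as in AELME's construction); predict by majority vote."""
    rng = random.Random(seed)
    learners = [fit_stump([rng.choice(data) for _ in data])
                for _ in range(n_learners)]
    return lambda x: int(sum(h(x) for h in learners) * 2 > len(learners))

# Separable toy data: class 0 in [0, 0.9], class 1 in [1.5, 2.4].
data = [(x / 10, 0) for x in range(10)] + [(1.5 + x / 10, 1) for x in range(10)]
predict = fit_ensemble(data)
# predict(0.2) -> 0, predict(2.0) -> 1
```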

  10. Improving Classification Performance through an Advanced Ensemble Based Heterogeneous Extreme Learning Machines

    PubMed Central

    Abuassba, Adnan O. M.; Ali, Hazrat

    2017-01-01

    Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden layer feedforward neural network (SLFN). It often has good generalization performance. However, there are chances that it might overfit the training data due to having more hidden nodes than needed. To address the generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function of increasing diversity and accuracy among the final ensemble. Finally, the class label of unseen data is predicted using majority vote approach. Splitting the training data into subsets and incorporation of heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets. PMID:28546808

  11. Development of automated system based on neural network algorithm for detecting defects on molds installed on casting machines

    NASA Astrophysics Data System (ADS)

    Bazhin, V. Yu; Danilov, I. V.; Petrov, P. A.

    2018-05-01

    During the casting of light alloys and ligatures based on aluminum and magnesium, problems arise with the distribution of the metal and its crystallization in the mold. To monitor mold defects on the casting conveyor, a camera with a resolution of 780 x 580 pixels and a frame rate of 75 frames per second was selected. Images of molds from casting machines were used as input data for the neural network algorithm. At the stage of preparing the digital database and its analytical evaluation, a convolutional neural network architecture was chosen for the algorithm. The information flow from the local controller is transferred to the OPC server and then to the SCADA system of the foundry. After training, the defect recognition accuracy of the neural network was about 95.1% on a validation split; the trained weight coefficients were then applied to the test split, where the algorithm achieved accuracy identical to that on the validation images. The proposed technical solutions make it possible to increase the efficiency of the automated process control system in the foundry by expanding the digital database.

  12. The Aerodynamic Plane Table

    NASA Technical Reports Server (NTRS)

    Zahm, A F

    1924-01-01

    This report gives the description and the use of a specially designed aerodynamic plane table. For the accurate and expeditious geometrical measurement of models in an aerodynamic laboratory, and for miscellaneous truing operations, there is frequent need for a specially equipped plane table. For example, one may have to measure truly to 0.001 inch the offsets of an airfoil at many parts of its surface. Or the offsets of a strut, airship hull, or other carefully formed figure may require exact calipering. Again, a complete airplane model may have to be adjusted for correct incidence at all parts of its surfaces or verified in those parts for conformance to specifications. Such work, if but occasional, may be done on a planing or milling machine; but if frequent, justifies the provision of a special table. For this reason it was found desirable in 1918 to make the table described in this report and to equip it with such gauges and measures as the work should require.

  13. Technology of welding aluminum alloys-II

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Step-by-step procedures were developed for high integrity manual and machine welding of aluminum alloys. Detailed instructions are given for each step with tables and graphs to specify materials and dimensions. Throughout work sequence, processing procedure designates manufacturing verification points and inspection points.

  14. 5. NORTH ACROSS INTERIOR OF OFFICE IN SOUTHEAST CORNER OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. NORTH ACROSS INTERIOR OF OFFICE IN SOUTHEAST CORNER OF BUILDING FROM DOORWAY IN SOUTH FRONT, SHOWING DRAFTING TABLE, MERCHANDISE DISPLAY CASE, DESKS, AND OFFICE FIXTURES/BUSINESS MACHINES. - Kregel Windmill Company Factory, 1416 Central Avenue, Nebraska City, Otoe County, NE

  15. ZPR-6 assembly 7 high {sup 240}Pu core experiments: a fast reactor core with mixed (Pu,U)-oxide fuel and a central high {sup 240}Pu zone.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Morman, J. A.; Schaefer, R.W.

    ZPR-6 Assembly 7 (ZPR-6/7) encompasses a series of experiments performed at the ZPR-6 facility at Argonne National Laboratory in 1970 and 1971 as part of the Demonstration Reactor Benchmark Program (Reference 1). Assembly 7 simulated a large sodium-cooled LMFBR with mixed oxide fuel, depleted uranium radial and axial blankets, and a core H/D near unity. ZPR-6/7 was designed to test fast reactor physics data and methods, so configurations in the Assembly 7 program were as simple as possible in terms of geometry and composition. ZPR-6/7 had a very uniform core assembled from small plates of depleted uranium, sodium, iron oxide,more » U{sub 3}O{sub 8} and Pu-U-Mo alloy loaded into stainless steel drawers. The steel drawers were placed in square stainless steel tubes in the two halves of a split table machine. ZPR-6/7 had a simple, symmetric core unit cell whose neutronic characteristics were dominated by plutonium and {sup 238}U. The core was surrounded by thick radial and axial regions of depleted uranium to simulate radial and axial blankets and to isolate the core from the surrounding room. The ZPR-6/7 program encompassed 139 separate core loadings which include the initial approach to critical and all subsequent core loading changes required to perform specific experiments and measurements. In this context a loading refers to a particular configuration of fueled drawers, radial blanket drawers and experimental equipment (if present) in the matrix of steel tubes. Two principal core configurations were established. The uniform core (Loadings 1-84) had a relatively uniform core composition. The high {sup 240}Pu core (Loadings 85-139) was a variant on the uniform core. The plutonium in the Pu-U-Mo fuel plates in the uniform core contains 11% {sup 240}Pu. 
In the high {sup 240}Pu core, all Pu-U-Mo plates in the inner core region (central 61 matrix locations per half of the split table machine) were replaced by Pu-U-Mo plates containing 27% {sup 240}Pu in the plutonium component to construct a central core zone with a composition closer to that in an LMFBR core with high burnup. The high {sup 240}Pu configuration was constructed for two reasons. First, the composition of the high {sup 240}Pu zone more closely matched the composition of LMFBR cores anticipated in design work in 1970. Second, comparison of measurements in the ZPR-6/7 uniform core with corresponding measurements in the high {sup 240}Pu zone provided an assessment of some of the effects of long-term {sup 240}Pu buildup in LMFBR cores. The uniform core version of ZPR-6/7 is evaluated in ZPR-LMFR-EXP-001. This document only addresses measurements in the high {sup 240}Pu core version of ZPR-6/7. Many types of measurements were performed as part of the ZPR-6/7 program. Measurements of criticality, sodium void worth, control rod worth and reaction rate distributions in the high {sup 240}Pu core configuration are evaluated here. For each category of measurements, the uncertainties are evaluated, and benchmark model data are provided.

  16. Youth Attitude Tracking Study. Volume 1. Spring 1976.

    DTIC Science & Technology

    1976-07-01

    Service In the Spring wave the question as to when the positive youth would plan to enlist was split into active duty and National Guard/Reserve parts...In Table 5.6 it is shown that positive propensity respondents usually do not know more about the educational benefits than negative propensity respond...GI BILL EDUCATIONAL BENEFITS RELATED TO PROPENSITY Propensity Toward Each Service Significant Positive Negative Difference Difference Air Force 5.03

  17. Some problems of control of dynamical conditions of technological vibrating machines

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. K.; Lapshin, V. L.; Eliseev, A. V.

    2017-10-01

    The possibility of controlling the dynamical condition of shakers designed for the vibration treatment of parts interacting with granular media is discussed. The aim of this article is to develop the methodological basis for creating mathematical models of shake tables and to develop principles for forming vibrational fields, estimating their parameters, and controlling the structure of vibration fields. Approaches to building mathematical models that take into account unilateral constraints and the relationships between elements and the vibrating surface are developed. Methods for constructing mathematical models of linear mechanical oscillation systems are used, assuming small oscillations about the position of static equilibrium. An original method of correcting vibration fields by introducing additional ties into the oscillating system structure is proposed. The additional ties are implemented as a mass-inertial device that changes the inertial parameters of the working body of the vibration table by moving mass-inertial elements. A concept for monitoring the dynamic state of the vibration table based on original measuring devices is proposed, and possible changes in dynamic properties are estimated. The article is of interest to specialists in the field of vibration technology machines and equipment.

  18. Identification of a novel SPLIT-HULL (SPH) gene associated with hull splitting in rice (Oryza sativa L.).

    PubMed

    Lee, Gileung; Lee, Kang-Ie; Lee, Yunjoo; Kim, Backki; Lee, Dongryung; Seo, Jeonghwan; Jang, Su; Chin, Joong Hyoun; Koh, Hee-Jong

    2018-07-01

    The split-hull phenotype caused by reduced lemma width and low lignin content is under the control of SPH, which encodes a type-2 13-lipoxygenase, and contributes to high dehulling efficiency. Rice hulls consist of two bract-like structures, the lemma and palea. The hull is an important organ that helps to protect seeds from environmental stress, determines seed shape, and ensures grain filling. Achieving optimal hull size and morphology is beneficial for seed development. We characterized the split-hull (sph) mutant in rice, which exhibits hull splitting in the interlocking part between lemma and palea and/or the folded part of the lemma during the grain filling stage. Morphological and chemical analysis revealed that reduction in the width of the lemma and lignin content of the hull in the sph mutant might be the cause of hull splitting. Genetic analysis indicated that the mutant phenotype was controlled by a single recessive gene, sph (Os04g0447100), which encodes a type-2 13-lipoxygenase. SPH knockout and knockdown transgenic plants displayed the same split-hull phenotype as the mutant. The sph mutant showed significantly higher linoleic and linolenic acid (substrates of lipoxygenase) contents in spikelets compared to the wild type. This is probably due to the genetic defect of SPH and the subsequent decrease in lipoxygenase activity. In a dehulling experiment, the sph mutant showed high dehulling efficiency even under a weak tearing force in a dehulling machine. Collectively, the results provide a basis for understanding the functional role of lipoxygenase in the structure and maintenance of hulls, and would facilitate breeding of easy-dehulling rice.

  19. Sub-diffraction limit laser ablation via multiple exposures using a digital micromirror device.

    PubMed

    Heath, Daniel J; Grant-Jacob, James A; Feinaeugle, Matthias; Mills, Ben; Eason, Robert W

    2017-08-01

    We present the use of digital micromirror devices as variable illumination masks for pitch-splitting multiple exposures to laser machine the surfaces of materials. Ultrafast laser pulses of length 150 fs and 800 nm central wavelength were used for the sequential machining of contiguous patterns on the surface of samples in order to build up complex structures with sub-diffraction limit features. Machined patterns of tens to hundreds of micrometers in lateral dimensions with feature separations as low as 270 nm were produced in electroless nickel on an optical setup diffraction limited to 727 nm, showing a reduction factor below the Abbe diffraction limit of ∼2.7×. This was compared to similar patterns in a photoresist optimized for two-photon absorption, which showed a reduction factor of only 2×, demonstrating that multiple exposures via ablation can produce a greater resolution enhancement than via two-photon polymerization.
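    The quoted ∼2.7× enhancement follows directly from the ratio of the setup's diffraction limit to the smallest achieved feature separation; a quick check of the arithmetic:

    ```python
    # Ratio of the optical setup's diffraction limit to the achieved
    # feature separation, as reported in the abstract above.
    diffraction_limit_nm = 727   # Abbe-limited resolution of the setup
    feature_separation_nm = 270  # smallest machined feature separation
    reduction_factor = diffraction_limit_nm / feature_separation_nm
    print(round(reduction_factor, 1))  # → 2.7
    ```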

  20. Winding Schemes for Wide Constant Power Range of Double Stator Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Husain, Tausif; Hassan, Iftekhar; Sozer, Yilmaz

    2015-05-01

    Different ring winding schemes for double-sided transverse flux machines are investigated in this paper for wide-speed operation. The windings under investigation are based on two inverters used in parallel; at higher power levels this arrangement improves drive efficiency. The new winding structure, through manipulation of the end connection, splits individual sets into two and connects the partitioned turns from individual stator sets in series. This configuration offers the flexibility of torque profiling and a greater flux-weakening region. At low speeds and low torque, only one winding set is needed to provide the required torque, thus providing greater fault tolerance. At higher speeds, one set is dedicated to torque production and the other to flux control. The proposed method improves machine efficiency and allows better flux weakening, which is desirable for traction applications.

  1. Modelling of Mechanical Behavior at High Strain Rate of Ti-6al-4v Manufactured By Means of Direct Metal Laser Sintering Technique

    NASA Astrophysics Data System (ADS)

    Iannitti, Gianluca; Bonora, Nicola; Gentile, Domenico; Ruggiero, Andrew; Testa, Gabriel; Gubbioni, Simone

    2017-06-01

    In this work, the mechanical behavior of Ti-6Al-4V obtained by an additive manufacturing technique was investigated, also considering the build direction. Dog-bone-shaped specimens and Taylor cylinders were machined from rods manufactured by means of the EOSINT M 280 machine, based on the Direct Metal Laser Sintering technique. Tensile tests were performed at strain rates ranging from 5E-4 s-1 to 1000 s-1 using an Instron electromechanical machine for quasistatic tests and a Direct-Tension Split Hopkinson Bar for dynamic tests. The mechanical strength of the material was described by a Johnson-Cook model modified to account for stress saturation occurring at high strain. Taylor cylinder tests and their corresponding numerical simulations were carried out in order to validate the constitutive model under a complex deformation path, high strain rates, and high temperatures.
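    For reference, the standard (unmodified) Johnson-Cook flow-stress model that the paper extends has the familiar multiplicative form; the stress-saturation modification described above is not reproduced here:

    ```latex
    \sigma = \left( A + B\,\bar{\varepsilon}_p^{\,n} \right)
             \left( 1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0} \right)
             \left( 1 - T^{*\,m} \right),
    \qquad
    T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}
    ```

    where A, B, n, C, and m are material constants fitted to quasistatic and dynamic test data, and ε̇₀ is the reference strain rate.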

  2. 40 CFR 63.1541 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Hazardous Air Pollutants for Primary Lead Smelting § 63.1541 Applicability. (a) The provisions of this subpart apply to the following affected sources at primary lead smelters: sinter machine, blast furnace... not apply to secondary lead smelters, lead refiners, or lead remelters. (b) Table 1 of this subpart...

  3. Universal precision sine bar attachment

    NASA Technical Reports Server (NTRS)

    Mann, Franklin D. (Inventor)

    1989-01-01

    This invention relates to an attachment for a sine bar which can be used to perform measurements during lathe operations or other types of machining operations. The attachment can be used for setting precision angles on vises, dividing heads, rotary tables and angle plates. It can also be used in the inspection of machined parts, when close tolerances are required, and in the layout of precision hardware. The novelty of the invention is believed to reside in a specific versatile sine bar attachment for measuring a variety of angles on a number of different types of equipment.

  4. Characteristics of Navy Medium-Weight High-Impact Shock Machine

    DTIC Science & Technology

    1951-09-14

    NRL REPORT: CHARACTERISTICS OF...this machine under specification operation. A comparison of data is intended to correlate this shock to shipboard shock experienced in actual combat...table reversal are discussed, and it is shown that this secondary shock can be the largest under certain conditions. Theoretical equations of motion are

  5. Balancing fast-rotating parts of hand-held machine drive

    NASA Astrophysics Data System (ADS)

    Korotkov, V. S.; Sicora, E. A.; Nadeina, L. V.; Yongzheng, Wang

    2018-03-01

    The article considers issues related to the balancing of the fast-rotating parts of a hand-held machine drive that includes a wave transmission with intermediate rolling elements, constructed on the basis of a single-phase collector motor with a useful power of 1 kW and a nominal rotation frequency of 15000 rpm. The forms of the balancers and their locations are chosen, and the balancing method is described. The scheme for determining residual unbalance in two correction planes is presented. Measurement results are given in tables.

  6. Behavioral Studies Following Ionizing Radiation Exposures: A Data Base.

    DTIC Science & Technology

    1981-08-01

    48 APPENDIX B. PERFORMANCE DATA FILE FORMAT 63 Tasks 63 Cued 63 Uncued 63 Mixed 64 Data File Format 64 Record 1 Variables 64 Record 2 Through Record N ...Variables 65 Record N + 1 65 Last Four Records 66 APPENDIX C. CROSS-REFERENCE TABLES 67 Subject Search Items 68 Dose Search Items 70 APPENDIX D. TASKS...storage. N EWSPP/SCAT R Because the PDP-8 is a 12-bit machine and the PDP-11’s are 16-bit machines, direct transmission of data collected by the SCAT

  7. Development of speckle-free channel-cut crystal optics using plasma chemical vaporization machining for coherent x-ray applications.

    PubMed

    Hirano, Takashi; Osaka, Taito; Sano, Yasuhisa; Inubushi, Yuichi; Matsuyama, Satoshi; Tono, Kensuke; Ishikawa, Tetsuya; Yabashi, Makina; Yamauchi, Kazuto

    2016-06-01

    We have developed a method of fabricating speckle-free channel-cut crystal optics with plasma chemical vaporization machining, an etching method using atmospheric-pressure plasma, for coherent X-ray applications. We investigated the etching characteristics of silicon crystals and achieved a small surface roughness of less than 1 nm rms at a removal depth of >10 μm, which satisfies the requirements for eliminating subsurface damage while suppressing diffuse scattering from rough surfaces. We applied this method for fabricating channel-cut Si(220) crystals for a hard X-ray split-and-delay optical system and confirmed that the crystals provided speckle-free reflection profiles under coherent X-ray illumination.

  8. Business Case Analysis for Replacing the Mazak 30Y Mill-Turn Machine in SM-39. Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Steven Richard; Dinehart, Timothy Grant; Benson, Faith Ann

    2015-03-19

    Business case studies are being looked at to support procurement of new machines and capital equipment in the SM-39 and TA-03-0102 machine shops. The first effort conducted economic analysis of replacing the Mazak 30Y Mill-Turn Machine located in SM-39. To determine the value of switching machinery, a baseline scenario was compared with a future scenario where new machinery was purchased and installed. The conditions under the two scenarios were defined via interviews with subject matter experts in terms of one-time and periodic costs. The results of the analysis were compiled in a life-cycle cost/benefit table. The costs of procuring, installing, and maintaining a new machine were balanced against the costs avoided by replacing older machinery. Productivity savings were included as a measure to show the costs avoided by being able to produce parts at a quicker and more efficient pace.
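    A life-cycle cost/benefit comparison of this kind typically discounts each year's costs and avoided costs to present value before they are balanced. A minimal sketch with purely hypothetical figures (the study's actual cost data are not given in this summary):

    ```python
    def present_value(cash_flows, rate):
        """Discount a list of yearly cash flows (year 0 first) to present value."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Hypothetical figures: a new machine costs 500k up front but avoids
    # 150k/yr in maintenance and lost productivity over the next four years.
    net_flows = [-500_000, 150_000, 150_000, 150_000, 150_000]
    npv = present_value(net_flows, rate=0.05)
    print(round(npv))  # a positive net present value favors replacement
    ```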

  9. Predicting the stability of ternary intermetallics with density functional theory and machine learning

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonathan; Chen, Liming; Botti, Silvana; Marques, Miguel A. L.

    2018-06-01

    We use a combination of machine learning techniques and high-throughput density-functional theory calculations to explore ternary compounds with the AB2C2 composition. We chose the two most common intermetallic prototypes for this composition, namely, the tI10-CeAl2Ga2 and the tP10-FeMo2B2 structures. Our results suggest that there may be ˜10 times more stable compounds in these phases than previously known. These are mostly metallic and non-magnetic. While the use of machine learning reduces the overall calculation cost by around 75%, some limitations of its predictive power still exist, in particular, for compounds involving the second-row of the periodic table or magnetic elements.

  10. Southern Durchmusterung (Schoenfeld 1886): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Ochsenbein, Francois

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The Southern Durchmusterung (SD) was computerized at the Centre de Donnees Astronomiques de Strasbourg and at the Astronomical Data Center at the National Space Science Data Center, NASA/Goddard Space Flight Center. Corrigenda listed in the original SD volume and published by Kuenster and Sticker were incorporated into the machine file. In addition, one star indicated to be missing in a published list, and later verified, is flagged so that it can be omitted from computer plotted charts if desired. Stars deleted in the various errata lists were similarly flagged, while those with revised data are flagged and listed in a separate table. This catalog covers the zones -02 to -23 degrees; zones +89 to -01 degrees (the Bonner Durchmusterung) are included in a separate catalog available in machine-readable form.

  11. Adjusting to Technology

    ERIC Educational Resources Information Center

    Kennedy, Mike

    2007-01-01

    With classroom Internet access nearly universal in public schools and computers ubiquitous on every school and university campus, classroom furnishings have evolved to accommodate the machines so students can take full advantage of the technology. The desks, tables and other furniture that a school chooses for its computers will depend on the…

  12. Desired machines: cinema and the world in its own image.

    PubMed

    Canales, Jimena

    2011-09-01

    In 1895 when the Lumière brothers unveiled their cinematographic camera, many scientists were elated. Scientists hoped that the machine would fulfill a desire that had driven research for nearly half a century: that of capturing the world in its own image. But their elation was surprisingly short-lived, and many researchers quickly distanced themselves from the new medium. The cinematographic camera was soon split into two machines, one for recording and one for projecting, enabling it to further escape from the laboratory. The philosopher Henri Bergson joined scientists, such as Etienne-Jules Marey, who found problems with the new cinematographic order. Those who had worked to make the dream come true found that their efforts had been subverted. This essay focuses on the desire to build a cinematographic camera, with the purpose of elucidating how dreams and reality mix in the development of science and technology. It is about desired machines and their often unexpected results. The interplay between what "is" (the technical), what "ought" (the ethical), and what "could" be (the fantastical) drives scientific research.

  13. Resolving Transition Metal Chemical Space: Feature Selection for Machine Learning and Structure-Property Relationships.

    PubMed

    Janet, Jon Paul; Kulik, Heather J

    2017-11-22

    Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
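    Of the selection strategies compared above, univariate filtering is the simplest to sketch: rank each descriptor by the strength of its marginal association with the target and keep the top k. A minimal, library-free illustration on synthetic data (not the RAC descriptors themselves):

    ```python
    import numpy as np

    def univariate_filter(X, y, k):
        """Rank features by |Pearson correlation| with the target; keep the top k."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        corr = (Xc * yc[:, None]).sum(axis=0) / (
            np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
        )
        return np.argsort(-np.abs(corr))[:k]

    # Synthetic example: the target is essentially a scaled copy of feature 2,
    # so univariate filtering should rank feature 2 first.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    y = 3.0 * X[:, 2] + 0.01 * rng.normal(size=50)
    print(univariate_filter(X, y, 1))
    ```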

  14. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    NASA Astrophysics Data System (ADS)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS through new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS storage manager, and ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  15. Astrometric properties of the Tautenburg Plate Scanner

    NASA Astrophysics Data System (ADS)

    Brunzendorf, Jens; Meusinger, Helmut

    The Tautenburg Plate Scanner (TPS) is an advanced plate-measuring machine run by the Thüringer Landessternwarte Tautenburg (Karl Schwarzschild Observatory), where the machine is housed. It is capable of digitising photographic plates up to 30 cm × 30 cm in size. In our poster, we reported on tests and preliminary results of its astrometric properties. The essential components of the TPS consist of an x-y table movable between an illumination system and a direct imaging system. A telecentric lens images the light transmitted through the photographic emulsion onto a CCD line of 6000 pixels of 10 µm square size each. All components are mounted on a massive air-bearing table. Scanning is performed in lanes of up to 55 mm width by moving the x-y table in a continuous drift-scan mode perpendicular to the CCD line. The analogue output from the CCD is digitised to 12 bit with a total signal/noise ratio of 1000 : 1, corresponding to a photographic density range of three. The pixel map is produced as a series of optionally overlapping lane scans. The pixel data are stored onto CD-ROM or DAT. A Tautenburg Schmidt plate 24 cm × 24 cm in size is digitised within 2.5 hours, resulting in 1.3 GB of data. Subsequent high-level data processing is performed off-line on other computers. During the scanning process, the geometry of the optical components is kept fixed. The optimal focussing of the optics is performed prior to the scan. Owing to the telecentric lens, refocussing is not required. Therefore, the main sources of astrometric errors (besides the emulsion itself) are mechanical imperfections in the drive system, which can be divided into random and systematic components. The r.m.s. repeatability over the whole plate as measured by repeated scans of the same plate is about 0.5 µm for each axis. The mean plate-to-plate accuracy of the object positions on two plates with the same epoch and the same plate centre has been determined to be about 1 µm.
This accuracy is comparable to results obtained with established measuring machines used for astrometric purposes and is mainly limited by the emulsion itself. The mechanical design of the x-y table introduces low-frequency systematic errors of up to 5 µm on both axes. Because of the high stability of the machine, it is expected that these deviations from a perfectly uniform coordinate system will remain systematic on a long timescale. Such systematic errors can be corrected either directly once they have been determined or in the course of the general astrometric reduction process. The TPS is well suited for accurate relative measurements like proper motions on plates with the same scale and plate centre. The systematic errors of the x-y table can be determined by interferometric means, and there are plans for this in the near future.
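    The quoted 1.3 GB per plate is consistent with simple pixel arithmetic, assuming each 12-bit sample is stored in a 16-bit word (an assumption; the actual packing is not stated) plus some lane overlap:

    ```python
    plate_mm = 240                                  # 24 cm Schmidt plate side
    pixel_um = 10                                   # square CCD pixel pitch
    pixels_per_axis = plate_mm * 1000 // pixel_um   # 24,000 pixels per side
    bytes_per_sample = 2                            # assumed: 12-bit data in 16-bit words
    raw_bytes = pixels_per_axis ** 2 * bytes_per_sample
    print(raw_bytes / 1e9)  # ≈ 1.15 GB before lane overlaps, near the quoted 1.3 GB
    ```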

  16. Process Parameters for Banding 155-mm M483A1 Projectiles in High-Capacity Inertia Welding Machine.

    DTIC Science & Technology

    1982-09-01

    appropriate material properties for the analysis. In fact, the projectile is manufactured by Chamberlain using AISI 1340 steel, but AISI 4140 material...Chamberlain competitor using AISI 4140 steel. Table 6 lists the material properties used in the analysis. The symbols and units of Table 6 are SI and given in...T diagram for AISI 4140 steel taken from Figure 1. [Fig. 26: continuous-cooling curve for AISI 4140 steel; time axis in seconds]

  17. Supervisory Control of Multiple Uninhabited Systems - Methodologies and Enabling Human-Robot Interface Technologies (Commande et surveillance de multiples systemes sans pilote - Methodologies et technologies habilitantes d’interfaces homme-machine)

    DTIC Science & Technology

    2012-12-01

    FRANCE 6.1 DATES SMAART (2006 – 2008) and SUSIE (2009 – 2011). 6.2 LOCATION Brest – Nancy – Paris (France). 6.3 SCENARIO/TASKS The setting...Agency (RTA), a dedicated staff with its headquarters in Neuilly, near Paris, France. In order to facilitate contacts with the military users and...Mission Delay for the Helicopter 8-12 Table 8-2 Assistant Interventions and Commander’s Reactions 8-13 Table 10-1 Partial LOA Matrix as Originally

  18. 30 CFR 18.36 - Cables between machine components.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... mechanical damage by position, flame-resistant hose conduit, metal tubing, or troughs (flexible or threaded rigid metal conduit will not be acceptable), (3) isolated from hydraulic lines, and (4) protected from... heavy jackets, the sizes of which are stated in Table 6 of Appendix I. Cables (cords) provided with hose...

  19. 30 CFR 18.36 - Cables between machine components.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... mechanical damage by position, flame-resistant hose conduit, metal tubing, or troughs (flexible or threaded rigid metal conduit will not be acceptable), (3) isolated from hydraulic lines, and (4) protected from... heavy jackets, the sizes of which are stated in Table 6 of Appendix I. Cables (cords) provided with hose...

  20. 30 CFR 18.36 - Cables between machine components.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mechanical damage by position, flame-resistant hose conduit, metal tubing, or troughs (flexible or threaded rigid metal conduit will not be acceptable), (3) isolated from hydraulic lines, and (4) protected from... heavy jackets, the sizes of which are stated in Table 6 of Appendix I. Cables (cords) provided with hose...

  1. 29 CFR 779.317 - Partial list of establishments lacking “retail concept.”

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (such as operating instruments, X-ray machines, operating tables, etc.); establishments engaged in the... goods or facilities for the operation of such carriers (Idaho Sheet Metal Works v. Wirtz, 383 U.S. 190... distributors. Security dealers. Sheet metal contractors. Ship equipment, commercial; establishments engaged in...

  2. 29 CFR 779.317 - Partial list of establishments lacking “retail concept.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (such as operating instruments, X-ray machines, operating tables, etc.); establishments engaged in the... goods or facilities for the operation of such carriers (Idaho Sheet Metal Works v. Wirtz, 383 U.S. 190... distributors. Security dealers. Sheet metal contractors. Ship equipment, commercial; establishments engaged in...

  3. 46 CFR 176.113 - Passengers permitted.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... machines, tables, or other room furnishings; (iii) Toilets and washrooms; (iv) Spaces occupied by and... may be permitted for each 760 millimeters (30 inches) of rail space available to the passengers at the periphery of each deck. The following rail space may not be used in determining the maximum number of...

  4. 46 CFR 176.113 - Passengers permitted.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... machines, tables, or other room furnishings; (iii) Toilets and washrooms; (iv) Spaces occupied by and... may be permitted for each 760 millimeters (30 inches) of rail space available to the passengers at the periphery of each deck. The following rail space may not be used in determining the maximum number of...

  5. 46 CFR 176.113 - Passengers permitted.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... machines, tables, or other room furnishings; (iii) Toilets and washrooms; (iv) Spaces occupied by and... may be permitted for each 760 millimeters (30 inches) of rail space available to the passengers at the periphery of each deck. The following rail space may not be used in determining the maximum number of...

  6. SCHOOL LUNCH, SUGGESTED GUIDES FOR SELECTING LARGE EQUIPMENT.

    ERIC Educational Resources Information Center

    South Carolina State Dept. of Education, Columbia.

    THE TYPE AND CAPACITY OF A WIDE RANGE OF SCHOOL KITCHEN EQUIPMENT IS RECOMMENDED WITH RESPECT TO THE NUMBER OF MEALS SERVED PER DAY. THESE RECOMMENDATIONS ARE GIVEN FOR RANGES, SINKS, ELECTRIC HEATING, GAS HEATING, REFRIGERATION, TABLES, KITCHEN MACHINES, TRUCK DOLLIES, SCALES, STORAGE CABINETS, OFFICE SPACES, LOUNGES, GARBAGE AND CAN WASHING…

  7. Machine Intelligence, a Foreword: The Brain as Electronic Circuitry; Electronic Circuitry as a Brain

    DTIC Science & Technology

    1992-06-01

    TABLE OF CONTENTS: THE BOTTOM LINE...BACKGROUND...DIRECTOR US ARMY BALLISTIC RESEARCH LABORATORY ATTN: SLCBR-IB-M (DR. BRUCE BURNS), ABERDEEN PROVING GROUND, MD 21005-5066. NOTE: PLEASE NOTIFY COMMANDER

  8. Machine Intelligence

    DTIC Science & Technology

    2013-03-01

    Although this approach is an improvement over fixed tile coding methods like CMAC, it suffers from a significant drawback. This approach splits the tiles in...serious drawbacks to using them effectively. Engineering a tiling is typically done by hand, and it can be very difficult to find an appropriate tiling...however, there are still some drawbacks that need to be addressed. While the boundaries are redrawn according to the chance of a mutation to the neural

  9. Using Symmetry Group Correlation Tables to Explain why Erham (and Other Programs) cannot BE Used to Analyze Torsional Splittings of Some Molecules

    NASA Astrophysics Data System (ADS)

    Groner, Peter

    2016-06-01

    ERHAM has been used to analyze rotational spectra of many molecules with torsional splitting caused by one or two internal rotors. The gauche form of dimethyl ether-d1, whose equilibrium structure has C1 symmetry, is an example of a molecule for which ERHAM could not model additional small splittings resolvable for many transitions, whereas the spectrum of the symmetric (anti, trans) form with a C{_s} equilibrium structure could be analyzed successfully with ERHAM. A more recent example where ERHAM failed is pinacolone CH_3-CO-C(CH_3)_3. In this case, the barriers to internal rotation of the methyl groups within the -C(CH_3)_3 unit are too high to produce observable internal rotation splittings, but the splittings due to the CH_3-CO methyl group could not be modeled correctly with ERHAM nor with any other available program (XIAM, BELGI-Cs, BELGI-C1, RAM36). In the paper, it was speculated that BELGI-Cs-2tops might be able to do the job, but arguments against this possibility have also been put forward. The correlation between irreducible representations of groups and their subgroups according to Watson can be used not only to determine the total number of substates (components) to be expected but also to help decide which particular program has a chance for a successful analysis. As it turns out, the number of components of split lines depends on the molecular symmetry at equilibrium in relation to the highest possible symmetry for a given molecular symmetry group. Therefore, for pinacolone, the vibrational ground state is split into 10 torsional substates. P. Groner, J. Mol. Spectrosc. 278 (2012) 52-67. C. Richard et al. A&A 552 (2013), A117. Y. Zhao et al., J. Mol. Spectrosc. 318 (2015) 91-100, with references to all other programs mentioned in the abstract. J. K. G. Watson, Can. J. Physics 43 (1965) 1996-2007.

  10. Pre-use anesthesia machine check; certified anesthesia technician based quality improvement audit.

    PubMed

    Al Suhaibani, Mazen; Al Malki, Assaf; Al Dosary, Saad; Al Barmawi, Hanan; Pogoku, Mahdhav

    2014-01-01

    This audit assessed quality assurance for providing work-ready anesthesia machines across multiple operating theatres in a modern tertiary medical center in Riyadh. The aim of the study was to maintain a high-quality environment for workers and patients in surgical operating rooms. A technician-based audit used key performance indicators to assure inspection and passing of machine-worthiness tests daily and between cases, and, in case of unexpected failure, to provide quick replacement with another ready-to-use anesthetic machine. The anesthetic machines in all operating rooms were inspected daily and continuously, passed as ready by technicians, and verified by a consultant or assistant consultant anesthesiologist. The daily records for each machine were collected and then reviewed by the quality improvement committee, which performed descriptive analysis and reported the degree of staff compliance with daily inspection ("met" items), machines replaced during use, and overall compliance. Descriptive statistics were prepared using Microsoft Excel 2003 tables and graphs of sums and percentages of the items studied. The audit found a high compliance percentage and a low rate of machine replacement, indicating that unexpected machine failures were rare and that machine switchover was quick. The authors conclude that following the regular inspection and self-check routine recommended by the manufacturers can help avert hazards from anesthesia machine failure during operation. Furthermore, the ability to replace an anesthesia machine quickly when needed contributes to highly assured operative utilization of the man-machine interface in modern surgical operating rooms.

  11. What Orthopaedic Operating Room Surfaces Are Contaminated With Bioburden? A Study Using the ATP Bioluminescence Assay.

    PubMed

    Richard, Raveesh Daniel; Bowen, Thomas R

    2017-07-01

    Contaminated operating room surfaces can increase the risk of orthopaedic infections, particularly after procedures in which hardware implantation and instrumentation are used. The question arises as to how surgeons can measure surface cleanliness to detect increased levels of bioburden. This study aims to highlight the utility of adenosine triphosphate (ATP) bioluminescence technology as a novel technique in detecting the degree of contamination within the sterile operating room environment. What orthopaedic operating room surfaces are contaminated with bioburden? When energy is required for cellular work, ATP breaks down into adenosine diphosphate (ADP) and phosphate (P) and in that process releases energy. This process is inherent to all living things and can be detected as light emission with the use of bioluminescence assays. On a given day, six different orthopaedic surgery operating rooms (two adult reconstruction, two trauma, two spine) were tested before surgery with an ATP bioluminescence assay kit. All of the cases were considered clean surgery without infection, and this included the previously performed cases in each sampled room. These rooms had been cleaned and prepped for surgery but the patients had not been physically brought into the room. A total of 13 different surfaces were sampled once in each room: the operating room (OR) preparation table (both pre- and postdraping), OR light handles, Bovie machine buttons, supply closet countertops, the inside of the Bair Hugger™ hose, Bair Hugger™ buttons, right side of the OR table headboard, tourniquet machine buttons, the Clark-socket attachment, and patient positioners used for total hip and spine positioning. The relative light units (RLUs) obtained from each sample were recorded and data were compiled and averaged for analysis. These values were compared with previously published ATP benchmark values of 250 to 500 RLUs used to define cleanliness in both the hospital and restaurant industries.
All surfaces had bioburden. The ATP RLUs (mean ± SD) are reported for each surface in ascending order: the OR preparation table (postdraping; 8.3 ± 3.4), inside the sterilized pan (9.2 ± 5.5), the inside of the Bair Hugger™ hose (212.5 ± 155.7), supply closet countertops (281.7 ± 236.7), OR light handles (647.8 ± 903.7), the OR preparation table (predraping; 1054 ± 387.5), the Clark-socket attachment (1135.7 ± 705.3), patient positioners used for total hip and spine positioning (1201.7 ± 1144.9), Bovie machine buttons (1264.5 ± 638.8), Bair Hugger™ buttons (1340.8 ± 1064.1), tourniquet machine buttons (1666.5 ± 2144.9), computer keyboard (1810.8 ± 929.6), and the right side of the OR table headboard (2539 ± 5635.8). ATP bioluminescence is a novel method to measure cleanliness within the orthopaedic OR and can help identify environmental trouble spots that can potentially lead to increased infection rates. Future studies correlating ATP bioluminescence findings with microbiology cultures could add to the clinical utility of this technology. Surfaces such as the undersurface of the OR table headboard, Bair Hugger™ buttons, and tourniquet machine buttons should be routinely cleansed as part of an institutional protocol. Although correlation between ATP bioluminescence and clinical infection was not evaluated in this study, it is the subject of future research. Specifically, evaluating microbiology samples taken from these environmental surfaces and correlating them with increased bioburden found with ATP bioluminescence technology can help promote improved surgical cleaning practices.
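
    The benchmark comparison described above can be sketched as follows. The mean RLU values are taken from the abstract; treating any reading above the cited 250-500 RLU range as "contaminated" is an illustrative interpretation, not the study's protocol.

```python
# Sketch: flag OR surfaces whose mean ATP reading exceeds a cleanliness
# benchmark. Mean RLU values are from the abstract; the 250 and 500 RLU
# thresholds are the published benchmarks it cites.
MEAN_RLU = {
    "OR prep table (postdraping)": 8.3,
    "inside sterilized pan": 9.2,
    "Bair Hugger hose (inside)": 212.5,
    "supply closet countertops": 281.7,
    "OR light handles": 647.8,
    "OR prep table (predraping)": 1054.0,
    "Clark-socket attachment": 1135.7,
    "patient positioners": 1201.7,
    "Bovie machine buttons": 1264.5,
    "Bair Hugger buttons": 1340.8,
    "tourniquet machine buttons": 1666.5,
    "computer keyboard": 1810.8,
    "OR table headboard (right side)": 2539.0,
}

def surfaces_above(benchmark_rlu):
    """Return surfaces exceeding the benchmark, dirtiest first."""
    over = {s: v for s, v in MEAN_RLU.items() if v > benchmark_rlu}
    return sorted(over, key=over.get, reverse=True)

print(surfaces_above(500))   # the stricter 250 RLU cutoff adds one surface
```

    Note that the lower (250 RLU) cutoff flags one additional surface, the supply closet countertops, relative to the 500 RLU cutoff.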

  12. QTLTableMiner++: semantic mining of QTL tables in scientific articles.

    PubMed

    Singh, Gurnoor; Kuzniar, Arnold; van Mulligen, Erik M; Gavai, Anand; Bachem, Christian W; Visser, Richard G F; Finkers, Richard

    2018-05-25

    A quantitative trait locus (QTL) is a genomic region that correlates with a phenotype. Most of the experimental information about QTL mapping studies is described in the tables of scientific publications. Traditional text mining techniques aim to extract information from unstructured text rather than from tables. We present QTLTableMiner++ (QTM), a table mining tool that extracts and semantically annotates QTL information buried in (heterogeneous) tables of plant science literature. QTM is a command line tool written in the Java programming language. This tool takes scientific articles from the Europe PMC repository as input and extracts QTL tables using keyword matching and ontology-based concept identification. The tables are further normalized using rules derived from table properties such as captions, column headers and table footers. Furthermore, table columns are classified into three categories, namely column descriptors, properties and values, based on column headers and the data types of cell entries. Abbreviations found in the tables are expanded using the Schwartz and Hearst algorithm. Finally, the content of QTL tables is semantically enriched with domain-specific ontologies (e.g. Crop Ontology, Plant Ontology and Trait Ontology) using the Apache Solr search platform, and the results are stored in a relational database and a text file. The performance of the QTM tool was assessed by precision and recall based on the information retrieved from two manually annotated corpora of open access articles, i.e. QTL mapping studies in tomato (Solanum lycopersicum) and in potato (S. tuberosum). In summary, QTM detected QTL statements in tomato with 74.53% precision and 92.56% recall, and in potato with 82.82% precision and 98.94% recall. QTM is a unique tool that aids in providing QTL information in machine-readable and semantically interoperable formats.
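
    The precision/recall evaluation against a manually annotated corpus can be sketched as below. The true/false-positive and false-negative counts are hypothetical; the paper reports only the resulting percentages.

```python
# Sketch of a precision/recall evaluation like the one used to score QTM
# against a manually annotated corpus. The counts are hypothetical.
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

p, r = precision_recall(tp=90, fp=18, fn=9)
print(f"precision={p:.2%} recall={r:.2%}")
```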

  13. Users Manual for FAA Cost Allocation Model.

    DTIC Science & Technology

    1986-12-01

    Extract from the table of contents: 2.10 Ramsey Files (p. 29); 2.11 Allocation Tables (p. 33); 2.12 MINSYS (p. 34); 2.13 TAXRAM1... The remainder of the extracted text is flowchart residue listing the model's Ramsey pricing allocation files (ARTRFUT1/ARTRFUT2, TOWRFUT1/TOWRFUT2, TRARFUT1/TRARFUT2, FSSRFUT1/FSSRFUT2, OPSRFUT1/OPSRFUT2) for allocations to user groups, together with the MINSYS GA minimum splits and the Aviation Standards O&M budget.

  14. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./ Russian Progress Report for Fiscal Year 1997, Volume 4, Part 8 - Neutron Poison Plates in Assemblies Containing Homogeneous Mixtures of Polystyrene-Moderated Plutonium and Uranium Oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yavuz, M.

    1999-05-01

    In the 1970s at the Battelle Pacific Northwest Laboratory (PNL), a series of critical experiments using a remotely operated Split-Table Machine was performed with homogeneous mixtures of (Pu-U)O2-polystyrene fuels in the form of square compacts having different heights. The experiments determined the critical geometric configurations of MOX fuel assemblies with and without neutron poison plates. With respect to PuO2 content and moderation [H/(Pu+U) atomic] ratio (MR), two different homogeneous (Pu-U)O2-polystyrene mixtures were considered: Mixture 1, 14.62 wt% PuO2 with 30.6 MR, and Mixture 2, 30.3 wt% PuO2 with 2.8 MR. In both mixtures, the uranium was depleted to about 0.151 wt% U-235. Assemblies contained copper, copper-cadmium or aluminum neutron poison plates having thicknesses up to ~2.5 cm. This evaluation contains 22 experiments for Mixture 1 compacts and 10 for Mixture 2 compacts. For Mixture 1, there are 10 configurations with copper plates, 6 with aluminum, and 5 with copper-cadmium; one experiment contained no poison plate. For Mixture 2 compacts, there are 3 configurations with copper, 3 with aluminum, and 3 with copper-cadmium poison plates; one experiment contained no poison plate.

  15. Measuring 20-100 T B-fields using Zeeman splitting of sodium emission lines on a 500 kA pulsed power machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banasek, J. T., E-mail: jtb254@cornell.edu; Engelbrecht, J. T.; Pikuz, S. A.

    2016-11-15

    We have shown that Zeeman splitting of the sodium (Na) D-lines at 5890 and 5896 Å can be used to measure the magnetic field (B-field) produced in high current pulsed power experiments. We have measured the B-field next to a return current conductor in a hybrid X-pinch experiment near a peak current of about 500 kA. Na is deposited on the conductor and then is desorbed and excited by radiation from the hybrid X-pinch. The D-line emission spectrum implies B-fields of about 20 T with a return current post of 4 mm diameter or up to 120 T with a return current wire of 0.455 mm diameter. These measurements were consistent with or lower than the expected B-field, thereby showing that basic Zeeman splitting can be used to measure the B-field in a pulsed-power-driven high-energy-density (HED) plasma experiment. We hope to extend these measurement techniques, using suitable ionized species, to measurements within HED plasmas.
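
    For scale, the expected wavelength splitting can be estimated with a simplified normal-Zeeman picture (energy shift ΔE = μ_B·B). This is an order-of-magnitude sketch only; the real Na D-line pattern is anomalous, with level-dependent g-factors, which the paper's analysis would account for.

```python
# Order-of-magnitude sketch: sigma-component wavelength shift of the Na D1
# line under a simple (normal) Zeeman picture, dE = mu_B * B. The actual
# D-line splitting pattern is anomalous, so this is only an estimate.
MU_B = 9.274e-24   # Bohr magneton, J/T
H    = 6.626e-34   # Planck constant, J s
C    = 2.998e8     # speed of light, m/s

def zeeman_shift_angstrom(wavelength_m, b_tesla):
    """Return the wavelength shift in angstroms for a field of b_tesla."""
    d_energy = MU_B * b_tesla                      # energy shift, J
    d_lambda = wavelength_m ** 2 / (H * C) * d_energy
    return d_lambda * 1e10

# Na D1 line at 5896 angstroms, fields at the two extremes quoted above
for b in (20.0, 120.0):
    print(f"B = {b:5.1f} T -> shift ~ {zeeman_shift_angstrom(5896e-10, b):.1f} A")
```

    At 20 T the estimated shift is a few angstroms, consistent with a splitting that is resolvable by a visible-light spectrometer.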

  16. Bonner Durchmusterung (Argelander 1859-1862): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Warren, Wayne H., Jr.; Ochsenbein, Francois

    1989-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the Astronomical Data Center, is described. The entire Bonner Durchmusterung (BD) was computerized through the collaborative efforts of the Centre de Donnees Astronomiques de Strasbourg, l'Observatoire de Nice, and the Astronomical Data Center at the NASA/Goddard Space Flight Center. All corrigenda published in the original BD volumes were incorporated into the machine file, along with changes published following the 1903 edition. In addition, stars indicated to be missing in published lists and verified by various techniques are flagged so that they can be omitted from computer plotted charts if desired. Stars deleted in the various errata lists were similarly flagged, while those with revised data are flagged and listed in a separate table.

  17. Advanced Machine Learning Emulators of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

    Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the generation of the look-up table, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low density regions and flatness of the interpolation function. We illustrate the capabilities of our emulators in toy examples, in the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and in the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
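
    The core emulation idea, fitting a GP to a handful of evaluations of an expensive model and then predicting cheaply elsewhere, can be sketched in a few lines. This is a minimal illustration with fixed hyperparameters and a cosine standing in for the costly RTM; AGAPE additionally adapts where samples are taken, which this sketch omits.

```python
# Minimal Gaussian-process emulator sketch: fit an RBF-kernel GP to a few
# evaluations of a stand-in "expensive model" and predict at a new input.
import math

def rbf(a, b, length=0.5):
    return math.exp(-(a - b) ** 2 / (2 * length ** 2))

def solve(mat, vec):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(vec)
    m = [row[:] + [v] for row, v in zip(mat, vec)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gp_predict(x_train, y_train, x_new, noise=1e-8):
    """GP posterior mean at x_new given noiseless training evaluations."""
    k_mat = [[rbf(a, b) + (noise if i == j else 0.0)
              for j, b in enumerate(x_train)] for i, a in enumerate(x_train)]
    alpha = solve(k_mat, y_train)
    return sum(rbf(x_new, a) * w for a, w in zip(x_train, alpha))

expensive_model = math.cos            # stand-in for a costly RTM run
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [expensive_model(x) for x in xs]
print(gp_predict(xs, ys, 0.75))       # close to cos(0.75)
```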

  18. A Flexure-Based Tool Holder for Sub-(micro)m Positioning of a Single Point Cutting Tool on a Four-axis Lathe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bono, M J; Hibbard, R L

    2005-12-05

    A tool holder was designed to facilitate the machining of precision meso-scale components with complex three-dimensional shapes with sub-µm accuracy on a four-axis lathe. A four-axis lathe incorporates a rotary table that allows the cutting tool to swivel with respect to the workpiece to enable the machining of complex workpiece forms, and accurately machining complex meso-scale parts often requires that the cutting tool be aligned precisely along the axis of rotation of the rotary table. The tool holder designed in this study has greatly simplified the process of setting the tool in the correct location with sub-µm precision. The tool holder adjusts the tool position using flexures that were designed using finite element analyses. Two flexures adjust the lateral position of the tool to align the center of the nose of the tool with the axis of rotation of the B-axis, and another flexure adjusts the height of the tool. The flexures are driven by manual micrometer adjusters, each of which provides a minimum increment of motion of 20 nm. This tool holder has simplified the process of setting a tool with sub-µm accuracy, and it has significantly reduced the time required to set a tool.

  19. Strike point splitting in the heat and particle flux profiles compared with the edge magnetic topology in a n = 2 resonant magnetic perturbation field at JET

    NASA Astrophysics Data System (ADS)

    Harting, D. M.; Liang, Y.; Jachmich, S.; Koslowski, R.; Arnoux, G.; Devaux, S.; Eich, T.; Nardon, E.; Reiter, D.; Thomsen, H.; EFDA contributors, JET

    2012-05-01

    At JET the error field correction coils can be used to generate an n = 1 or n = 2 magnetic perturbation field (Liang et al 2007 Plasma Phys. Control. Fusion 49 B581). Various experiments at JET have already been carried out to investigate the mitigation of ELMs by resonant magnetic perturbations (RMPs) (Liang et al 2010 Nucl. Fusion 50 025013, Liang et al 2011 Nucl. Fusion 51 073001). However, the typical formation of a secondary strike point (strike point splitting) by RMPs observed in other machines (Jakubowski et al 2010 Contrib. Plasma Phys. 50 701-7, Jakubowski et al 2004 Nucl. Fusion 44 S1-11, Nardon et al 2011 J. Nucl. Mater. 415 S914-7, Eich et al 2003 Phys. Rev. Lett. 91 195003, Evans et al 2007 J. Nucl. Mater. 363-365 570-4, Evans et al 2005 J. Phys.: Conf. Ser. 7 174-90, Watkins et al 2009 J. Nucl. Mater. 390-391 839-42) had never been observed at JET before. In this work we present discharges in which strike point splitting by RMPs has been observed at JET for the first time. We show that in these particular cases the strike point splitting matches the vacuum edge magnetic field topology. This is done by comparing heat and particle flux profiles on the outer divertor plate with the magnetic footprint pattern obtained by field line tracing. Furthermore, the evolution of the strike point splitting during the ramp-up phase of the perturbation field and during a q95 scan is investigated, and it is shown that the spontaneous appearance of the strike point splitting is related only to geometrical effects of the toroidally asymmetric magnetic topology.

  20. Impact of equalizing currents on losses and torque ripples in electrical machines with fractional slot concentrated windings

    NASA Astrophysics Data System (ADS)

    Toporkov, D. M.; Vialcev, G. B.

    2017-10-01

    The use of parallel branches is a common manufacturing method for realizing fractional-slot concentrated windings in electrical machines. If rotor eccentricity is present in a machine with parallel branches, equalizing currents can arise. The paper discusses an approach to simulating the equalizing currents in the parallel branches of an electrical machine winding, based on magnetic field calculation using the Finite Element Method. The high accuracy of the model is provided by dynamically updating the inductances in the differential equation system describing the machine, using pre-computed tabulated flux-linkage functions. These functions give the dependence of the flux linkage of each parallel branch on the branch currents and the rotor position angle, and they permit self- and mutual inductances to be calculated by partial differentiation. The calculated results obtained for an electric machine specimen are presented. The results show that an adverse combination of design choices and rotor eccentricity leads to high equalizing currents and winding heating. Additional torque ripples also arise; their harmonic content is not similar to the cogging torque or to the ripples caused by the rotor eccentricity alone.
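
    The lookup-table step described above can be sketched as a numerical partial derivative: self-inductance recovered as dψ/di from a tabulated flux linkage. The linear ψ = L·i table below is synthetic; a real table would come from FEM field solutions and would also depend on rotor angle and the other branch currents.

```python
# Sketch: self-inductance as the partial derivative of a pre-computed
# flux-linkage table with respect to branch current (finite differences).
def self_inductance(currents, flux_linkages, k):
    """dpsi/di at sample k: central differences inside the table,
    one-sided differences at the edges."""
    if 0 < k < len(currents) - 1:
        lo, hi = k - 1, k + 1
    else:
        lo, hi = (k, k + 1) if k == 0 else (k - 1, k)
    return (flux_linkages[hi] - flux_linkages[lo]) / (currents[hi] - currents[lo])

L_TRUE = 0.012                                    # synthetic 12 mH branch
i_table = [0.0, 2.0, 4.0, 6.0, 8.0]               # branch current samples, A
psi_table = [L_TRUE * i for i in i_table]         # flux linkage samples, Wb
print(self_inductance(i_table, psi_table, 2))     # recovers ~0.012 H
```

    For a saturating machine, ψ is nonlinear in i, and the same derivative yields the current-dependent incremental inductance used in the differential equation system.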

  1. Constructing Stylish Characters on Computer Graphics Systems.

    ERIC Educational Resources Information Center

    Goldman, Gary S.

    1980-01-01

    Computer graphics systems typically produce a single, machine-like character font. At most, these systems enable the user to (1) alter the aspect ratio (height-to-width ratio) of the characters, (2) specify a transformation matrix to slant the characters, and (3) define a virtual pen table to change the lineweight of the plotted characters.…
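
    Item (2) above, slanting characters with a transformation matrix, amounts to a standard 2x2 shear. The sketch below is illustrative; the function names are not from the cited system.

```python
# Sketch: a 2x2 shear matrix that slants plotter characters by an angle.
import math

def slant_matrix(angle_deg):
    """Shear in x proportional to y, slanting uprights by angle_deg."""
    s = math.tan(math.radians(angle_deg))
    return [[1.0, s], [0.0, 1.0]]

def apply(matrix, point):
    x, y = point
    return (matrix[0][0] * x + matrix[0][1] * y,
            matrix[1][0] * x + matrix[1][1] * y)

m = slant_matrix(15.0)           # a 15-degree italic-style slant
print(apply(m, (0.0, 1.0)))      # top of an upright stroke shifts right
```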

  2. 36 CFR Appendix A to Part 1191 - Table Of Contents

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protruding Objects 205 Operable Parts 206 Accessible Routes 207 Accessible Means of Egress 208 Parking Spaces..., Kitchenettes, and Sinks 213 Toilet Facilities and Bathing Facilities 214 Washing Machines and Clothes Dryers... F205 Operable Parts F206 Accessible Routes F207 Accessible Means of Egress F208 Parking Spaces F209...

  3. Using shadow page cache to improve isolated drivers performance.

    PubMed

    Zheng, Hao; Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much.
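
    The caching idea can be modeled in miniature: pages in the cache keep write permission (no page fault on a hot write), while evicted pages revert to read-only so the next write is captured and checked. The class, names, and sizes below are illustrative, not from the paper.

```python
# Toy model of the shadow-page-cache idea: an LRU set of writable pages.
from collections import OrderedDict

class ShadowPageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.writable = OrderedDict()   # page -> True, in LRU order
        self.faults = 0

    def write(self, page):
        if page in self.writable:       # hot page: no fault, refresh LRU
            self.writable.move_to_end(page)
            return
        self.faults += 1                # read-only page: fault, check write
        self.writable[page] = True
        if len(self.writable) > self.capacity:
            self.writable.popitem(last=False)   # evict LRU back to read-only

cache = ShadowPageCache(capacity=2)
for page in [1, 2, 1, 1, 3, 2]:   # page 2 is evicted when page 3 arrives
    cache.write(page)
print(cache.faults)               # repeated writes to page 1 avoid faults
```

    A larger cache trades memory (and a longer window of unchecked writes) for fewer page faults, which is the size/performance relationship the paper studies.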

  4. Using Shadow Page Cache to Improve Isolated Drivers Performance

    PubMed Central

    Dong, Xiaoshe; Wang, Endong; Chen, Baoke; Zhu, Zhengdong; Liu, Chengzhe

    2015-01-01

    With the advantage of the reusability property of virtualization technology, users can reuse various types and versions of existing operating systems and drivers in a virtual machine, so as to customize their application environment. In order to prevent users' virtualization environments from being impacted by driver faults in the virtual machine, Chariot examines the correctness of a driver's write operations by combining capture of the driver's write operations with a driver-private access control table. However, this method needs to keep the write permission of the shadow page table as read-only, so as to capture the isolated driver's write operations through page faults, which adversely affects the performance of the driver. Based on delaying the setting of frequently used shadow pages' write permissions to read-only, this paper proposes an algorithm that uses a shadow page cache to improve the performance of isolated drivers, and carefully studies the relationship between driver performance and the size of the shadow page cache. Experimental results show that, through the shadow page cache, the performance of isolated drivers can be greatly improved without impacting Chariot's reliability too much. PMID:25815373

  5. Automated Planning and Scheduling for Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Jonsson, Ari; Knight, Russell

    2005-01-01

    Research Trends: a) Finite-capacity scheduling under more complex constraints and increased problem dimensionality (subcontracting, overtime, lot splitting, inventory, etc.). b) Integrated planning and scheduling. c) Mixed-initiative frameworks. d) Management of uncertainty (proactive and reactive). e) Autonomous agent architectures and distributed production management. f) Integration of machine learning capabilities. g) Wider scope of applications: 1) analysis of supplier/buyer protocols and tradeoffs; 2) integration of strategic and tactical decision-making; and 3) enterprise integration.

  6. Translations on USSR Science and Technology, Physical Sciences and Technology, Number 16

    DTIC Science & Technology

    1977-08-05

    INVESTIGATION OF SPLITTING OF LIGHT NUCLEI WITH HIGH-ENERGY γ-RAYS WITH THE METHOD OF WILSON'S CHAMBER OPERATING IN POWERFUL BEAMS OF ELECTRONIC... boast high reliability, high speed, and extremely modest power requirements. Information on the Screen: visual display devices greatly facilitate... The area of application of these units includes navigation, control of power systems, machine tools, and manufacturing processes. The capabilities of

  7. Lathe Attachment Finishes Inner Surface of Tubes

    NASA Technical Reports Server (NTRS)

    Lancki, A. J.

    1982-01-01

    Extremely smooth finishes are machined on inside surfaces of tubes by new attachment for a lathe. The relatively inexpensive accessory, called a "microhone," holds a honing stone against workpiece by rigid tangs instead of springs as in conventional honing tools. Inner rod permits adjustment of microhoning stone, while outer tube supports assembly. Outer tube is held between split blocks on lathe toolpost. Microhoning can be done with either microhone or workpiece moving and other member stationary.

  8. 45. VIEW OF UPPER LEVEL CRUSHER ADDITION FROM CRUSHED OXIDIZED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    45. VIEW OF UPPER LEVEL CRUSHER ADDITION FROM CRUSHED OXIDIZED ORE BIN. 18 INCH BELT CONVEYOR BIN FEED, LOWER CENTER, WITH STEPHENS-ADAMSON 25 TON/HR ELEVATOR SPLIT DISCHARGE (OXIDIZED/UNOXIDIZED) IN CENTER. CRUDE ORE BINS AND MACHINE SHOP BEYOND. NOTE TOP OF CRUSHED OXIDIZED ORE BIN IS BELOW TOP OF CRUDE ORE BINS. - Bald Mountain Gold Mill, Nevada Gulch at head of False Bottom Creek, Lead, Lawrence County, SD

  9. A fisheye viewer for microarray-based gene expression data

    PubMed Central

    Wu, Min; Thao, Cheng; Mu, Xiangming; Munson, Ethan V

    2006-01-01

    Background: Microarray has been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface, an electronic table (E-table) that uses fisheye distortion technology. Results: The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. Conclusion: This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table. PMID:17038193
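
    The variable-magnification idea behind the fisheye E-table can be sketched with a classic 1D fisheye transform that magnifies rows near the focus and compresses distant ones. The specific transfer function and parameter values below are a common textbook form (Sarkar-Brown style), not necessarily the one this viewer implements.

```python
# Sketch: 1D fisheye transform mapping normalized row positions in [0, 1].
def fisheye(x, focus, distortion=3.0):
    """Magnify near the focus; compress far from it. distortion=0 is identity."""
    d = distortion
    if x >= focus:
        span = 1.0 - focus
        t = (x - focus) / span if span else 0.0
        return focus + span * ((d + 1) * t / (d * t + 1))
    span = focus
    t = (focus - x) / span if span else 0.0
    return focus - span * ((d + 1) * t / (d * t + 1))

rows = [i / 10 for i in range(11)]     # evenly spaced row positions
print([round(fisheye(r, focus=0.5), 3) for r in rows])
```

    Rows adjacent to the focus get spread apart (magnified) while rows near the table edges are packed together, so the whole table stays on screen.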

  10. Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.

    PubMed

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2015-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
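
    The Chinese restaurant process that treeCRP extends can be sampled in a few lines: customer n joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha. This is the plain CRP, not the tree-structured extension; alpha and the seed are illustrative.

```python
# Sketch: draw a table (cluster) assignment for each customer under a CRP.
import random

def crp_sample(n_customers, alpha, rng):
    tables = []                        # occupancy count per table
    assignments = []
    for n in range(n_customers):
        weights = tables + [alpha]     # existing tables, then a new one
        r = rng.uniform(0, n + alpha)  # total mass is n + alpha
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)           # open a new table (new cluster)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

rng = random.Random(42)
assignments, tables = crp_sample(20, alpha=1.0, rng=rng)
print(tables, sum(tables))
```

    Split-merge moves like those proposed in the paper resample whole blocks of these assignments at once rather than one customer at a time, which is what improves mixing.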

  11. US Army Two-Surgeon Teams Operating in Remote Afghanistan - An Evaluation of Split-Based Forward Surgical Team Operations

    DTIC Science & Technology

    2009-04-01

    Table excerpt (patient status by injury category; abbreviations: ABD, abdominal; Ext, extremity; Vasc, vascular; Uro, urological; GYN, gynecologic; Thor, thoracic; HN, head and neck; Neuro, neurologic; plus Burn and Other). For USF (n = 178): ABD 6 (2.6%), Ext 125 (54.3%), Vasc 3 (1.3%), 0... Table 8 reports age, sex, and... A comparison of the Surgical Shock Trauma Platoon with a similar patient cohort at a Los Angeles County trauma center found that 12.7% of patients treated by the Surgical Shock

  12. Targeting Cancer Protein Profiles with Split-Enzyme Reporter Fragments to Achieve Chemical Resolution for Molecular Imaging

    DTIC Science & Technology

    2014-11-01

    ...near-infrared fluorophore, Cy5.5, linked with up to three units of amino-ethoxy-ethoxy-acid (AEEA) at the N-terminal amine of the peptide (Table 1)... RPMI or Dulbecco's Modified Eagle's Medium (DMEM; Gibco), respectively, supplemented with 10% FBS and 1% penicillin-streptomycin. The cells were... peptide, compound 6, using the amino acid residues of the parent peptide (compound 5) in random order. Compound 2 targeted the tumor efficiently

  13. Mechanical Properties of Misers Bluff Sand.

    DTIC Science & Technology

    1986-09-01

    ...in Chapter 4. CHAPTER 2: LABORATORY TESTS. 2.1 CONVENTIONAL SOIL TESTS: Samples of MB sand were split from the available supply of... air Va, and void ratio e (the ratio of void volume to solid volume). These composition data are listed in Table 2.1 for each test. 2.3 MECHANICAL... and diameter changes are made. The data can be plotted as principal stress difference versus axial strain, the slope of which is Young's modulus E

  14. Distributed state machine supervision for long-baseline gravitational-wave detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rollins, Jameson Graef, E-mail: jameson.rollins@ligo.org

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two identical yet independent, widely separated, long-baseline gravitational-wave detectors. Each Advanced LIGO detector consists of complex optical-mechanical systems isolated from the ground by multiple layers of active seismic isolation, all controlled by hundreds of fast, digital, feedback control systems. This article describes a novel state machine-based automation platform developed to handle the automation and supervisory control challenges of these detectors. The platform, called Guardian, consists of distributed, independent, state machine automaton nodes organized hierarchically for full detector control. User code is written in standard Python and the platform is designed to facilitate the fast-paced development process associated with commissioning the complicated Advanced LIGO instruments. While developed specifically for the Advanced LIGO detectors, Guardian is a generic state machine automation platform that is useful for experimental control at all levels, from simple table-top setups to large-scale multi-million dollar facilities.
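
    A supervisory state machine node of the kind described can be sketched minimally: states are functions that return the next state, and a node steps until it reaches a requested goal. The state names and step logic here are invented for illustration; this is not the Guardian API.

```python
# Minimal sketch of a state-machine automaton node with a goal state.
class Node:
    def __init__(self, states, initial):
        self.states = states          # name -> callable returning next name
        self.state = initial
        self.log = []                 # history of states passed through

    def run_to(self, goal, max_steps=10):
        """Step the machine until the goal state (or step budget) is reached."""
        while self.state != goal and max_steps > 0:
            self.log.append(self.state)
            self.state = self.states[self.state]()
            max_steps -= 1
        return self.state

node = Node(
    states={
        "DOWN":      lambda: "ACQUIRING",
        "ACQUIRING": lambda: "LOCKED",
        "LOCKED":    lambda: "LOCKED",
    },
    initial="DOWN",
)
print(node.run_to("LOCKED"), node.log)
```

    Hierarchy then comes from letting one node's state functions request goal states of subordinate nodes, so a top-level node can supervise an entire subsystem.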

  15. Novel jet observables from machine learning

    NASA Astrophysics Data System (ADS)

    Datta, Kaustuv; Larkoski, Andrew J.

    2018-03-01

    Previous studies have demonstrated the utility and applicability of machine learning techniques to jet physics. In this paper, we construct new observables for the discrimination of jets from different originating particles exclusively from information identified by the machine. The approach we propose is to first organize information in the jet by resolved phase space and determine the effective N -body phase space at which discrimination power saturates. This then allows for the construction of a discrimination observable from the N -body phase space coordinates. A general form of this observable can be expressed with numerous parameters that are chosen so that the observable maximizes the signal vs. background likelihood. Here, we illustrate this technique applied to discrimination of H\\to b\\overline{b} decays from massive g\\to b\\overline{b} splittings. We show that for a simple parametrization, we can construct an observable that has discrimination power comparable to, or better than, widely-used observables motivated from theory considerations. For the case of jets on which modified mass-drop tagger grooming is applied, the observable that the machine learns is essentially the angle of the dominant gluon emission off of the b\\overline{b} pair.

  16. Hypothesis to Explain the Size Effect Observed in APO-BMI Compression Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schembri, Philip Edward; Siranosian, Antranik Antonio; Kingston, Lance Allen

    2015-01-07

    In 2013 compression tests were performed on cylindrical specimens of carbon-microballoon-APO-BMI syntactic foam machined to different lengths (0.25, 0.5, and 2.8 inches) (Kingston, 2013). In 2014 similar tests were performed on glass-microballoon-APO-BMI of different lengths (~0.15”, ~0.32”, and ~0.57”). In all these tests it was observed that, when strains were calculated from the platen displacement (corrected for machine compliance), the apparent Young’s modulus of the material decreased with specimen size, as shown in Table 1. The reason for this size effect was speculated to be a layer of damage on or near the top and bottom machined surfaces of the specimens (Kingston, Schembri, & Siranosian, 2014). This report examines that hypothesis in further detail.

  17. Zeeman splitting of 6.7 GHz methanol masers. On the uncertainty of magnetic field strength determinations

    NASA Astrophysics Data System (ADS)

    Vlemmings, W. H. T.; Torres, R. M.; Dodson, R.

    2011-05-01

    Context. To properly determine the role of magnetic fields during massive star formation, a statistically significant sample of field measurements probing different densities and regions around massive protostars needs to be established. However, relating Zeeman splitting measurements to magnetic field strengths needs a carefully determined splitting coefficient. Aims: Polarization observations of, in particular, the very abundant 6.7 GHz methanol maser indicate that these masers appear to be good probes of the large scale magnetic field around massive protostars at number densities up to n_H2 ≈ 10^9 cm^-3. We thus investigate the Zeeman splitting of the 6.7 GHz methanol maser transition. Methods: We have observed a sample of 46 bright northern hemisphere maser sources with the Effelsberg 100-m telescope and an additional 34 bright southern masers with the Parkes 64-m telescope in an attempt to measure their Zeeman splitting. We also revisit the previous calculation of the methanol Zeeman splitting coefficients and show that these were severely overestimated, making the determination of magnetic field strengths highly uncertain. Results: In total 44 of the northern masers were detected and significant splitting between the right- and left-circular polarization spectra is determined in >75% of the sources with a flux density >20 Jy beam^-1. Assuming the splitting is due to a magnetic field according to the regular Zeeman effect, the average detected Zeeman splitting corrected for field geometry is ~0.6 m s^-1. Using an estimate of the 6.7 GHz A-type methanol maser Zeeman splitting coefficient based on old laboratory measurements of 25 GHz E-type methanol transitions, this corresponds to a magnetic field of ~120 mG in the methanol maser region. 
This is significantly higher than expected from the typically assumed relation between magnetic field and density (B ∝ n_H2^0.47) and potentially indicates that the extrapolation of the available laboratory measurements is invalid. The stability of the right- and left-circular calibration of the Parkes observations was insufficient to determine the Zeeman splitting of the southern sample. Spectra are presented for all sources in both samples. Conclusions: There is no strong indication that the measured splitting between right- and left-circular polarization is due to non-Zeeman effects, although this cannot be ruled out until the Zeeman coefficient is properly determined. However, although the 6.7 GHz methanol masers are still excellent magnetic field morphology probes through linear polarization observations, previous derivations of magnetic field strength turn out to be highly uncertain. A solution to this problem will require new laboratory measurements of the methanol Landé factors. Table 2 and Figs. 5-7 are only available in electronic form at http://www.aanda.org
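
    As a back-of-envelope check of the numbers quoted above, the field strength follows from dividing the measured velocity splitting by the assumed splitting coefficient. The coefficient value below is simply the ratio implied by the quoted ~0.6 m/s and ~120 mG, and is exactly the quantity the paper argues is highly uncertain.

```python
# Measured Zeeman velocity splitting, from the abstract (m/s)
split_m_per_s = 0.6

# Assumed splitting coefficient, m/s per mG. This is just the ratio
# implied by the quoted numbers (0.6 m/s <-> ~120 mG); its true value
# is the open laboratory question the paper highlights.
coeff_m_per_s_per_mG = 0.005

field_mG = split_m_per_s / coeff_m_per_s_per_mG   # ~120 mG
```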

  18. CD process control through machine learning

    NASA Astrophysics Data System (ADS)

    Utzny, Clemens

    2016-10-01

    For the specific requirements of the 14nm and 20nm site applications a new CD map approach was developed at the AMTC. This approach relies on a well established machine learning technique called recursive partitioning. Recursive partitioning is a powerful technique which creates a decision tree by successively testing whether the quantity of interest can be explained by one of the supplied covariates. The test performed is generally a statistical test with a pre-supplied significance level. Once the test indicates significant association between the variable of interest and a covariate, a split is performed at a threshold value which minimizes the variation within the newly attained groups. This partitioning recurs until either no significant association can be detected or the resulting subgroup size falls below a pre-supplied level.
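
    The recursive-partitioning loop described above can be sketched as follows. This is a toy version: pooled-variance reduction stands in for the statistical significance test, and the threshold search is exhaustive over one covariate.

```python
# Toy recursive partitioning on a single covariate. A split is kept
# only if it reduces the pooled within-group variance; recursion stops
# when subgroups get too small (the minimum-size criterion from the
# text) or no split helps.

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(xs, ys):
    """Return (threshold, pooled variance) of the best binary split."""
    best = None
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        score = (len(left) * variance(left)
                 + len(right) * variance(right)) / len(ys)
        if best is None or score < best[1]:
            best = (t, score)
    return best

def partition(xs, ys, min_size=2):
    if len(ys) < 2 * min_size:
        return {"leaf": sum(ys) / len(ys)}
    found = best_split(xs, ys)
    if found is None or found[1] >= variance(ys):
        return {"leaf": sum(ys) / len(ys)}   # no useful split
    t = found[0]
    left = [(x, y) for x, y in zip(xs, ys) if x < t]
    right = [(x, y) for x, y in zip(xs, ys) if x >= t]
    return {"split": t,
            "left": partition(*zip(*left), min_size),
            "right": partition(*zip(*right), min_size)}

tree = partition([1, 2, 3, 10, 11, 12], [5.0, 5.1, 4.9, 9.0, 9.1, 8.9])
# tree splits at x = 10, with leaf means ~5.0 and ~9.0
```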

  19. Strain-rate behavior in tension of the tempered martensitic reduced activation steel Eurofer97

    NASA Astrophysics Data System (ADS)

    Cadoni, Ezio; Dotta, Matteo; Forni, Daniele; Spätig, Philippe

    2011-07-01

    The tensile properties of the high-chromium tempered martensitic reduced activation steel Eurofer97 were determined from tests carried out over a wide range of strain-rates on cylindrical specimens. The quasi-static tests were performed with a universal electro-mechanical machine, whereas a hydro-pneumatic machine and a JRC-split Hopkinson tensile bar apparatus were used for medium and high strain-rates, respectively. This tempered martensitic stainless steel showed significant strain-rate sensitivity. The constitutive behavior was investigated within the framework of a dislocation dynamics model using the Kocks approach. The parameters of the model were determined and then used to predict the limits of stable tensile deformation. A very good agreement between the experimental results and the predictions of the model was found.

  20. Comparison of Test Procedures and Energy Efficiency Criteria in Selected International Standards & Labeling Programs for Copy Machines, External Power Supplies, LED Displays, Residential Gas Cooktops and Televisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Nina; Zhou, Nan; Fridley, David

    2012-03-01

    This report presents a technical review of international minimum energy performance standards (MEPS), voluntary and mandatory energy efficiency labels and test procedures for five products being considered for new or revised MEPS in China: copy machines, external power supplies, LED displays, residential gas cooktops and flat-screen televisions. For each product, an overview of the scope of existing international standards and labeling programs, energy values and energy performance metrics, and a description and detailed summary table of criteria and procedures in major test standards are presented.

  1. Using Animal-Borne Cameras to Quantify Prey Field, Habitat Characteristics and Foraging Success in a Marine Top Predator

    DTIC Science & Technology

    2012-09-30

    purpose, machine-learning method with a simple and precise mathematical formulation, and it has a number of aspects that make it well-suited for... Prey encounters (%): Gurnard 42, Octopus 33, Jack Mackerel 6, Rays/skates 5, Squid 3, Miscellaneous/Unidentified 11. Table 2: Proportion of dive time spent

  2. Grinding Glass Disks On A Belt Sander

    NASA Technical Reports Server (NTRS)

    Lyons, James J., III

    1995-01-01

    Small machine attached to table-top belt sander makes it possible to use belt sander to grind glass disk quickly to specified diameter within tolerance of about plus or minus 0.002 in. Intended to be used in place of production-shop glass grinder. Held on driveshaft by vacuum, glass disk rotated while periphery ground by continuous sanding belt.

  3. Teaching in 2020: The Impact of Neuroscience

    ERIC Educational Resources Information Center

    Frith, Uta

    2005-01-01

    The brain has evolved to educate and to be educated, often instinctively and effortlessly. The brain is the machine that allows all forms of learning to take place--from baby squirrels learning how to crack nuts, birds learning to fly, children learning to ride a bike and memorising times-tables to adults learning a new language or mastering how…

  4. 6. VIEW OF BORING MILL. Chuck action of locomotive wheel ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VIEW OF BORING MILL. Chuck action of locomotive wheel. Wheel weight 1200 pounds, 3'-0" diameter. Table 53" in diameter. Wheel is 48". Largest hole that can be bored is 9-1/2" plus (GE axle is 10"). - Juniata Shops, Erecting Shop & Machine Shop, East of Fourth Avenue, between Fourth & Fifth Streets, Altoona, Blair County, PA

  5. VizieR Online Data Catalog: OCSVM anomalies (Solarz+, 2017)

    NASA Astrophysics Data System (ADS)

    Solarz, A.; Bilicki, M.; Gromadzki, M.; Pollo, A.; Durkalec, A.; Wypych, M.

    2017-07-01

    One table containing 642,353 sources selected as anomalous with the one-class support vector machine algorithm in the AllWISE data release. Data include AllWISE photometry in the W1, W2 and W3 passbands and the W3 flux correction described in Krakowski et al. (2016A&A...596A..39K). (1 data file).

  6. Building a biomedical tokenizer using the token lattice design pattern and the adapted Viterbi algorithm

    PubMed Central

    2011-01-01

    Background Tokenization is an important component of language processing yet there is no widely accepted tokenization method for English texts, including biomedical texts. Other than rule based techniques, tokenization in the biomedical domain has been regarded as a classification task. Biomedical classifier-based tokenizers either split or join textual objects through classification to form tokens. The idiosyncratic nature of each biomedical tokenizer’s output complicates adoption and reuse. Furthermore, biomedical tokenizers generally lack guidance on how to apply an existing tokenizer to a new domain (subdomain). We identify and complete a novel tokenizer design pattern and suggest a systematic approach to tokenizer creation. We implement a tokenizer based on our design pattern that combines regular expressions and machine learning. Our machine learning approach differs from the previous split-join classification approaches. We evaluate our approach against three other tokenizers on the task of tokenizing biomedical text. Results Medpost and our adapted Viterbi tokenizer performed best with a 92.9% and 92.4% accuracy respectively. Conclusions Our evaluation of our design pattern and guidelines supports our claim that the design pattern and guidelines are a viable approach to tokenizer construction (producing tokenizers matching leading custom-built tokenizers in a particular domain). Our evaluation also demonstrates that ambiguous tokenizations can be disambiguated through POS tagging. In doing so, POS tag sequences and training data have a significant impact on proper text tokenization. PMID:21658288
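
    A token-lattice segmentation with a Viterbi-style dynamic program, in the spirit of the approach above, can be sketched as follows. This is illustrative only: the vocabulary and the scores are invented, not the paper's trained model.

```python
# Toy token-lattice tokenizer: every in-vocabulary substring is a
# lattice edge with a score; a Viterbi-style dynamic program picks
# the highest-scoring path, i.e. the best tokenization.

import math

VOCAB = {"p": 1.0, "53": 2.0, "p53": 3.0, "-": 0.5, "mutant": 4.0}

def segment(text):
    # best[i] = (score, tokens) for the best tokenization of text[:i]
    best = [(-math.inf, [])] * (len(text) + 1)
    best[0] = (0.0, [])
    for i in range(len(text)):
        if best[i][0] == -math.inf:
            continue                     # prefix not reachable
        for j in range(i + 1, len(text) + 1):
            tok = text[i:j]
            if tok in VOCAB:
                score = best[i][0] + VOCAB[tok]
                if score > best[j][0]:
                    best[j] = (score, best[i][1] + [tok])
    return best[-1][1]

tokens = segment("p53-mutant")   # -> ['p53', '-', 'mutant']
```

A trained classifier-based tokenizer would replace the fixed `VOCAB` scores with learned edge scores, but the lattice search is the same.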

  7. Predicting human decisions in socioeconomic interaction using real-time functional magnetic resonance imaging (rtfMRI)

    NASA Astrophysics Data System (ADS)

    Hollmann, Maurice; Mönch, Tobias; Müller, Charles; Bernarding, Johannes

    2009-02-01

    A major field in cognitive neuroscience investigates neuronal correlates of human decision-making processes [1, 2]. Is it possible to predict a decision before it is actually revealed by the volunteer? In the presented manuscript we use a standard paradigm from economic behavioral research that has demonstrated emotional influences on human decision making: the Ultimatum Game (UG). In the UG, two players have the opportunity to split a sum of money. One player is deemed the proposer and the other, the responder. The proposer makes an offer as to how this money should be split between the two. The second player can either accept or reject this offer. If it is accepted, the money is split as proposed. If rejected, then neither player receives anything. In the presented study a real-time fMRI system was used to derive the brain activation of the responder. Using a Relevance-Vector-Machine classifier it was possible to predict whether the responder would accept or reject an offer. The classification result was presented to the operator 1-2 seconds before the volunteer pressed a button to convey his decision. The classification accuracy reached about 70% averaged over six subjects.
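
    The UG payoff rule described above is simple enough to state as code (the stake of 10 units is an arbitrary example):

```python
# Payoff rule of the Ultimatum Game as described in the text:
# accepted -> split as proposed; rejected -> both get nothing.

def ultimatum(stake, offer, accepted):
    """Return (proposer_payoff, responder_payoff)."""
    if not 0 <= offer <= stake:
        raise ValueError("offer must be between 0 and the stake")
    if accepted:
        return stake - offer, offer
    return 0, 0

ultimatum(10, 3, accepted=True)    # -> (7, 3)
ultimatum(10, 3, accepted=False)   # -> (0, 0)
```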

  8. Pretreatment tables predicting pathologic stage of locally advanced prostate cancer.

    PubMed

    Joniau, Steven; Spahn, Martin; Briganti, Alberto; Gandaglia, Giorgio; Tombal, Bertrand; Tosco, Lorenzo; Marchioro, Giansilvio; Hsu, Chao-Yu; Walz, Jochen; Kneitz, Burkhard; Bader, Pia; Frohneberg, Detlef; Tizzani, Alessandro; Graefen, Markus; van Cangh, Paul; Karnes, R Jeffrey; Montorsi, Francesco; van Poppel, Hein; Gontero, Paolo

    2015-02-01

    Pretreatment tables for the prediction of pathologic stage have been published and validated for localized prostate cancer (PCa). No such tables are available for locally advanced (cT3a) PCa. To construct tables predicting pathologic outcome after radical prostatectomy (RP) for patients with cT3a PCa with the aim to help guide treatment decisions in clinical practice. This was a multicenter retrospective cohort study including 759 consecutive patients with cT3a PCa treated with RP between 1987 and 2010. Retropubic RP and pelvic lymphadenectomy. Patients were divided into pretreatment prostate-specific antigen (PSA) and biopsy Gleason score (GS) subgroups. These parameters were used to construct tables predicting pathologic outcome and the presence of positive lymph nodes (LNs) after RP for cT3a PCa using ordinal logistic regression. In the model predicting pathologic outcome, the main effects of biopsy GS and pretreatment PSA were significant. A higher GS and/or higher PSA level was associated with a more unfavorable pathologic outcome. The validation procedure, using a repeated split-sample method, showed good predictive ability. Regression analysis also showed an increasing probability of positive LNs with increasing PSA levels and/or higher GS. Limitations of the study are the retrospective design and the long study period. These novel tables predict pathologic stage after RP for patients with cT3a PCa based on pretreatment PSA level and biopsy GS. They can be used to guide decision making in men with locally advanced PCa. Our study might provide physicians with a useful tool to predict pathologic stage in locally advanced prostate cancer that might help select patients who may need multimodal treatment. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
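
    The repeated split-sample validation mentioned above can be sketched as follows. This is a schematic only: the "model" here is a trivial mean predictor on invented data, standing in for the ordinal logistic regression.

```python
# Repeated split-sample validation sketch: repeatedly split the data
# at random into a training and a test portion, fit on the first,
# score on the second, and average the scores.

import random

def repeated_split_sample(xs, ys, n_repeats=100, train_frac=0.7, seed=0):
    rng = random.Random(seed)
    n_train = int(train_frac * len(xs))
    errors = []
    for _ in range(n_repeats):
        idx = list(range(len(xs)))
        rng.shuffle(idx)
        train, test = idx[:n_train], idx[n_train:]
        y_mean = sum(ys[i] for i in train) / len(train)   # trivial "fit"
        err = sum(abs(ys[i] - y_mean) for i in test) / len(test)
        errors.append(err)
    return sum(errors) / len(errors)   # mean absolute error

mae = repeated_split_sample(list(range(20)), [0.1 * i for i in range(20)])
```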

  9. Prediction of multi performance characteristics of wire EDM process using grey ANFIS

    NASA Astrophysics Data System (ADS)

    Kumanan, Somasundaram; Nair, Anish

    2017-09-01

    Super alloys are used to fabricate components in ultra-supercritical power plants. These hard-to-machine materials are processed using non-traditional machining methods such as wire cut electrical discharge machining, and warrant attention. This paper details the multi-performance optimisation of the wire EDM process using grey ANFIS. Experiments are designed to establish the performance characteristics of wire EDM such as surface roughness, material removal rate, wire wear rate and geometric tolerances. The control parameters are pulse on time, pulse off time, current, voltage, flushing pressure, wire tension, table feed and wire speed. Grey relational analysis is employed to optimise the multiple objectives. Analysis of variance of the grey grades is used to identify the critical parameters. A regression model is developed and used to generate datasets for the training of the proposed adaptive neuro fuzzy inference system. The developed prediction model is tested for its prediction ability.
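
    The grey relational grade at the heart of grey relational analysis can be sketched as below, assuming larger-is-better normalisation and the common distinguishing coefficient ζ = 0.5; the data are invented, not the paper's measurements.

```python
# Grey relational grade sketch: normalize each response to [0, 1],
# measure each experiment's deviation from the ideal sequence (all
# ones), convert deviations to grey relational coefficients, and
# average them into one grade per experiment.

def grey_relational_grades(rows, zeta=0.5):
    """rows: experiments x responses, all treated as larger-is-better
    (responses are assumed to vary, so each column can be normalized)."""
    cols = list(zip(*rows))
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) for v in col])
    norm_rows = list(zip(*norm))
    deltas = [[1.0 - v for v in row] for row in norm_rows]
    dmin = min(min(r) for r in deltas)
    dmax = max(max(r) for r in deltas)
    grades = []
    for row in deltas:
        coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Three experiments, two responses (e.g. removal rate, 1/roughness)
grades = grey_relational_grades([[2.0, 0.5], [3.0, 0.8], [2.5, 0.9]])
best = grades.index(max(grades))   # experiment with the top grade
```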

  10. Systems Design and Pilot Operation of a Regional Center for Technical Processing for the Libraries of the New England State Universities. NELINET, New England Library Information Network. Progress Report, July 1, 1967 - March 30, 1968, Volume II, Appendices.

    ERIC Educational Resources Information Center

    Agenbroad, James E.; And Others

    Included in this volume of appendices to LI 000 979 are acquisitions flow charts; a current operations questionnaire; an algorithm for splitting the Library of Congress call number; analysis of the Machine-Readable Cataloging (MARC II) format; production problems and decisions; operating procedures for information transmittal in the New England…

  11. Cloud Fingerprinting: Using Clock Skews To Determine Co Location Of Virtual Machines

    DTIC Science & Technology

    2016-09-01

    Cloud computing has quickly revolutionized computing practices of organizations, to include the Department of Defense. However, security concerns... Table of Contents: 1 Introduction; 1.1 Proliferation of Cloud Computing; 1.2 Problem Statement

  12. Microstereolithography: A Review

    DTIC Science & Technology

    2003-04-01

    initiator. Table I. Characteristics of the integral microstereolithography machines described by Bertsch, Chatwin and Loubere. b) Digital Micromirror Device™ as pattern generator. The Digital Micromirror Device (DMD™) produced by Texas Instruments, which is an array of micromirrors actuated by... feasibility of the technology, an array of micromirrors having a VGA resolution (640 x 480) was used in a first prototype developed to work with visible

  13. Automated Handling of Garments for Pressing

    DTIC Science & Technology

    1991-09-30

    Table of contents excerpts: Parallel Algorithms for 2D Kalman Filtering (D.J. Potter and M.P. Cline), p. 47; Hash Table and Sorted Array: A Case Study of... Kalman Filtering on the Connection Machine (M.A. Palis and D.K. Krecker), p. 55; Parallel Sorting of Large Arrays on the MasPar...; 6 Algorithms for Seam Sensing, p. 24; 6.1 Karel Algorithms; 6.1.1 Image Filtering

  14. Software Testbed for Developing and Evaluating Integrated Autonomous Systems

    DTIC Science & Technology

    2015-03-01

    EUROPA planning system for plan generation. The adaptive controller executes the new plan, using augmented, hierarchical finite state machines to... using the Internet Communications Engine (ICE), an object-oriented toolkit for building distributed applications. Table of Contents: 1... The ANML model is translated into the New Domain Definition Language (NDDL) and sent to NASA's EUROPA planning system for plan generation.

  15. JPRS Report, Science & Technology, Japan

    DTIC Science & Technology

    1991-01-31

    final test. Keywords: Spherical Pressure Hull, Titanium Alloy, Three-Dimensional Machining, Electron Beam Welding. 1. Introduction. In bodies like... processed (the heat treatment involving high-temperature heating and rapid quenching in order to obtain finer grains of the titanium alloy) and... given in Table 3. The test results were all satisfactory. Forged material of titanium alloy, manufactured by forging, beta processing, and billet

  16. Improving the efficiency of a user-driven learning system with reconfigurable hardware. Application to DNA splicing.

    PubMed

    Lemoine, E; Merceron, D; Sallantin, J; Nguifo, E M

    1999-01-01

    This paper describes a new approach to problem solving by splitting up problem component parts between software and hardware. Our main idea arises from the combination of two previously published works. The first one proposed a conceptual environment for concept modelling in which the machine and the human expert interact. The second one reported an algorithm based on a reconfigurable hardware system which outperforms any previously published genetic database scanning hardware or algorithms. Here we show how efficient the interaction between the machine and the expert is when the concept modelling is based on a reconfigurable hardware system. Their cooperation is thus achieved at real-time interaction speed. The designed system has been partially applied to the recognition of primate splice junction sites in genetic sequences.

  17. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, huge amounts of 3D information must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computer. Many fast algorithms have been proposed for speeding up the calculation and reducing the memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) based on the point-based method, and the full analytical and one-step methods based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based method and the polygon-based method, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method using the 2D Fourier analysis of the 3D affine transformation. The numerical simulations and the optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
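
    The look-up-table idea common to these point-based algorithms can be illustrated in one dimension: precompute the fringe of a point source once per depth, then build holograms for many points by shifting and summing table rows instead of re-evaluating the phase per point. All parameters below are arbitrary illustrations, not values from the cited methods.

```python
# 1-D look-up-table hologram sketch. fringe(depth) is computed once
# per depth plane and stored; hologram() then only shifts and sums
# precomputed rows.

import math

N = 64                        # hologram samples
WAVELEN, PITCH = 0.5e-6, 8e-6 # illustrative wavelength and pixel pitch

def fringe(depth):
    """Real part of a Fresnel point-source pattern centered at x = 0,
    tabulated for offsets -N .. N-1."""
    return [math.cos(math.pi * (i * PITCH) ** 2 / (WAVELEN * depth))
            for i in range(-N, N)]

LUT = {d: fringe(d) for d in (0.1, 0.2, 0.4)}   # one row per depth (m)

def hologram(points):
    """points: list of (sample_index, depth); shift-and-add LUT rows."""
    h = [0.0] * N
    for x0, depth in points:
        row = LUT[depth]
        for i in range(N):
            h[i] += row[i - x0 + N]   # offset (i - x0) into the table
    return h

h = hologram([(10, 0.1), (40, 0.2)])
```

The C-LUT and S-LUT variants cited above refine this scheme to shrink the table, but the shift-and-add structure is the shared core.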

  18. Efficient Multiscale Computation with Improved Momentum Flux Coupling via Operator-Splitting and Probabilistic Uncertainty Quantification

    DTIC Science & Technology

    2016-08-23

    Different percentages of clay (10 to 30%) and sand (35 to 55%) have been used to represent various flow concentrations (Table 1). Dynamic viscosity of the... viscosity, was adopted as the wall boundary treatment method. 2.2 Physical Domain. The domain consists of a 7.0 m long flume, which has an inclination of... the shear stress, μapp is the apparent viscosity, K is the flow consistency index, n is the flow behavior index, and γ is the shear rate, which is

  19. Temporal Evolution of Non-equilibrium Gamma’ Precipitates in a Rapidly Quenched Nickel Base Superalloy (Preprint)

    DTIC Science & Technology

    2014-04-01

    with the binomial distribution for a particular dataset. This technique is more commonly known as the Langer, Bar-on and Miller (LBM) method [22,23]... distribution unlimited. Using the LBM method, the frequency distribution plot for a dataset corresponding to a phase separated system, exhibiting a split peak... estimated parameters (namely μ1, μ2, σ, fγ’ and fγ) obtained from the LBM plots in Fig. 5 are summarized in Table 3. The EWQ sample does not exhibit any

  20. The Effects of Tropical and Leather Combat Boots on Lower Extremity Disorders Among US Marine Corps Recruits

    DTIC Science & Technology

    1976-03-01

    tape up the back and around the top and a 5.08-cm wide nylon webbing diagonally across the ankle. The leather insole is split into two pieces and a... from a platoon (Table 1). The category of ankle fracture and/or sprain is a tally of the number of recruits reporting the occurrence of an ankle or... foot fracture within the previous four years, a recent ankle sprain, or a history of chronic ankle sprain. Next to pes planus, this category included

  1. Integrated configurable equipment selection and line balancing for mass production with serial-parallel machining systems

    NASA Astrophysics Data System (ADS)

    Battaïa, Olga; Dolgui, Alexandre; Guschinsky, Nikolai; Levin, Genrikh

    2014-10-01

    Solving equipment selection and line balancing problems together allows better line configurations to be reached and avoids local optimal solutions. This article considers jointly these two decision problems for mass production lines with serial-parallel workplaces. This study was motivated by the design of production lines based on machines with rotary or mobile tables. Nevertheless, the results are more general and can be applied to assembly and production lines with similar structures. The designers' objectives and the constraints are studied in order to suggest a relevant mathematical model and an efficient optimization approach to solve it. A real case study is used to validate the model and the developed approach.

  2. Documentation for the machine-readable version of A Library of Stellar Spectra (Jacoby, Hunter and Christian 1984)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable library as it is currently being distributed from the Astronomical Data Center is described. The library contains digital spectra for 161 stars of spectral classes O through M and luminosity classes 1, 3 and 5 in the wavelength range 3510 A to 7427 A. The resolution is approximately 4.5 A, while the typical photometric uncertainty of each resolution element is approximately 1 percent and broadband variations are 3 percent. The documentation includes a format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.

  3. Documentation for the machine-readable version of the Stellar Spectrophotometric Atlas, 3130 A lambda 10800 A of Gunn and Stryker (1983)

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1984-01-01

    The machine-readable version of the Atlas as it is currently being distributed from the Astronomical Data Center is described. The data were obtained with the Oke multichannel scanner on the 5-meter Hale reflector for purposes of synthesizing galaxy spectra, and the digitized Atlas contains normalized spectral energy distributions, computed colors, scan line and continuum indices for 175 selected stars covering the complete ranges of spectral type and luminosity class. The documentation includes a byte-by-byte format description, a table of the indigenous characteristics of the magnetic tape file, and a sample listing of logical records exactly as they are recorded on the tape.

  4. Integrating a local database into the StarView distributed user interface

    NASA Technical Reports Server (NTRS)

    Silberberg, D. P.

    1992-01-01

    A distributed user interface to the Space Telescope Data Archive and Distribution Service (DADS) known as StarView is being developed. The DADS architecture consists of the data archive as well as a relational database catalog describing the archive. StarView is a client/server system in which the user interface is the front-end client to the DADS catalog and archive servers. Users query the DADS catalog from the StarView interface. Query commands are transmitted via a network and evaluated by the database. The results are returned via the network and are displayed on StarView forms. Based on the results, users decide which data sets to retrieve from the DADS archive. Archive requests are packaged by StarView and sent to DADS, which returns the requested data sets to the users. The advantages of distributed client/server user interfaces over traditional one-machine systems are well known. Since users run software on machines separate from the database, the overall client response time is much faster. Also, since the server is free to process only database requests, the database response time is much faster. Disadvantages inherent in this architecture are the slow overall database access time due to network delays, the lack of a 'get previous row' command, and the fact that refinements of a previously issued query must be submitted to the database server even though the domain of values has already been returned by the previous query. This architecture also does not allow users to cross correlate DADS catalog data with other catalogs. Clearly, a distributed user interface would be more powerful if it overcame these disadvantages. A local database is being integrated into StarView to overcome these disadvantages. When a query is made through a StarView form, which is often composed of fields from multiple tables, it is translated to an SQL query and issued to the DADS catalog. At the same time, a local database table is created to contain the resulting rows of the query. 
The returned rows are displayed on the form as well as inserted into the local database table. Identical results are produced by reissuing the query to either the DADS catalog or the local table. Relational databases do not provide a 'get previous row' function because of the inherent complexity of retrieving previous rows of multiple-table joins. However, since this function is easily implemented on a single table, StarView uses the local table to retrieve the previous row. Also, StarView issues subsequent query refinements to the local table instead of the DADS catalog, eliminating the network transmission overhead. Finally, other catalogs can be imported into the local database for cross correlation with local tables. Overall, it is believed that this is a more powerful architecture for distributed database user interfaces.
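
    The local-cache pattern described above can be sketched with SQLite. The table and column names below are invented for illustration; the real DADS catalog and its schema are not involved.

```python
# Local-cache sketch: rows returned by a remote catalog query are
# mirrored into a local SQLite table, so "previous row" and query
# refinements are served without another network round trip.

import sqlite3

local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE results (rowid_ INTEGER PRIMARY KEY, "
              "target TEXT, exposure REAL)")

# Pretend these rows just came back from the remote catalog query.
remote_rows = [("NGC1275", 1200.0), ("M87", 900.0), ("3C273", 1500.0)]
local.executemany("INSERT INTO results (target, exposure) VALUES (?, ?)",
                  remote_rows)

def previous_row(current_rowid):
    """'Get previous row' is trivial on a single local table."""
    cur = local.execute("SELECT target, exposure FROM results "
                        "WHERE rowid_ < ? ORDER BY rowid_ DESC LIMIT 1",
                        (current_rowid,))
    return cur.fetchone()

def refine(min_exposure):
    """Refinements run against the local table, not the remote server."""
    cur = local.execute("SELECT target FROM results WHERE exposure >= ? "
                        "ORDER BY exposure DESC", (min_exposure,))
    return [r[0] for r in cur.fetchall()]

previous_row(3)    # -> ('M87', 900.0)
refine(1000.0)     # -> ['3C273', 'NGC1275']
```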

  5. Machinery Bearing Fault Diagnosis Using Variational Mode Decomposition and Support Vector Machine as a Classifier

    NASA Astrophysics Data System (ADS)

    Rama Krishna, K.; Ramachandran, K. I.

    2018-02-01

    Crack propagation is a major cause of failure in rotating machines. It adversely affects the productivity, safety, and the machining quality. Hence, detecting the crack’s severity accurately is imperative for the predictive maintenance of such machines. Fault diagnosis is an established concept in identifying faults by observing the non-linear behaviour of the vibration signals at various operating conditions. In this work, we find the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode functional components and framing them accordingly. Feature extraction, feature selection and feature classification are the three phases in obtaining the classification efficiencies. All the statistical features of the original and reconstructed signals are computed individually in the feature-extraction phase. A few statistical parameters are selected in the feature-selection phase and are classified using the SVM classifier. The obtained results show the best parameters and the appropriate kernel for the SVM classifier for detecting faults in bearings. Hence, we conclude that better results were obtained with the VMD-plus-SVM process than with SVM applied to the raw signals, owing to the denoising and filtering of the raw vibration signals.
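
    The feature-extraction stage can be sketched as below, on a synthetic signal; the VMD decomposition and the SVM classifier themselves are not shown.

```python
# A few statistical features commonly drawn from vibration signals.
# Impulsive bearing faults tend to raise the kurtosis, which the
# synthetic example below illustrates.

import math

def features(signal):
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(v * v for v in signal) / n)
    kurtosis = (sum((v - mean) ** 4 for v in signal) / n) / var ** 2
    return {"mean": mean, "std": std, "rms": rms, "kurtosis": kurtosis}

# A healthy-ish sine vs. the same signal with impulsive fault spikes
healthy = [math.sin(0.1 * t) for t in range(200)]
faulty = [v + (5.0 if t % 50 == 0 else 0.0)
          for t, v in enumerate(healthy)]

f_healthy, f_faulty = features(healthy), features(faulty)
# f_faulty["kurtosis"] is much larger than f_healthy["kurtosis"]
```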

  6. Occupational Accidents with Agricultural Machinery in Austria.

    PubMed

    Kogler, Robert; Quendler, Elisabeth; Boxberger, Josef

    2016-01-01

    The number of recognized accidents with fatalities during agricultural and forestry work, despite better technology and coordinated prevention and training, is still very high in Austria. The accident scenarios in which people are injured vary widely across farms. The common causes of accidents in agriculture and forestry are the loss of control of a machine, means of transport or handling equipment, hand-held tool, or object or animal, followed by slipping, stumbling and falling, and the breakage, bursting, splitting, slipping, fall, or collapse of a material agent. In the literature, a number of studies of general (machine- and animal-related accidents) and specific (machine-related accidents) agricultural and forestry accident situations can be found that refer to different databases. From the Austrian Workers' Compensation Board (AUVA) database of occupational accidents with different agricultural machinery over the period 2008-2010 in Austria, the main characteristics of the accident, the victim, and the employer, as well as variables on causes and circumstances, were statistically analyzed by frequency and context of parameters, employing the chi-square test and odds ratio. The aim of the study was to determine the information content and quality of the European Statistics on Accidents at Work (ESAW) variables in order to evaluate safety gaps and risks as well as the accidental man-machine interaction.

  7. Integrating Gender and Group Differences into Bridging Strategy

    NASA Astrophysics Data System (ADS)

    Yılmaz, Serkan; Eryılmaz, Ali

    2010-08-01

    The main goal of this study was to integrate gender and group effects into the bridging strategy in order to assess the effect of bridging analogy-based instruction on sophomore students' misconceptions about Newton's Third Law. Specifically, the authors developed and used an anchoring analogy diagnostic test to merge the effects of group and gender into the strategy. A Newton's third law misconception test, an attitude scale toward Newton's third law, and classroom observation checklists were the other measuring tools utilized throughout this quasi-experimental study. The researchers also developed or used several teaching/learning materials, such as gender- and group-split concept diagrams, lesson plans, gender-split frequency tables, make-sense scales, PowerPoint slides, flash cards, and demonstrations. The convenience sample of the study, chosen from the accessible population, involved 308 students from two public universities. The results of multivariate analysis of covariance indicated that the bridging strategy had a significant effect on students' misconceptions about Newton's third law, whereas it had no significant effect on students' attitudes toward Newton's third law.

  8. Clutch pressure estimation for a power-split hybrid transmission using nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Zhou, Bin; Zhang, Jianwu; Gao, Ji; Yu, Haisheng; Liu, Dong

    2018-06-01

    For a power-split hybrid transmission, using the brake clutch to realize the transition from electric drive mode to hybrid drive mode is an available strategy. Since pressure information for the brake clutch is essential for mode transition control, this research designs a nonlinear robust reduced-order observer to estimate the brake clutch pressure. Model uncertainties and disturbances are treated as additional inputs, and the observer is designed so that the error dynamics are input-to-state stable. The nonlinear characteristics of the system are expressed as lookup tables in the observer. Moreover, the gain matrix of the observer is solved for by two optimization procedures under linear matrix inequality constraints. The proposed observer is validated by offline simulation and online testing; the results show that the observer performs well during the mode transition: the estimation error remains within a reasonable range and, more importantly, is asymptotically stable.
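
    As a much-reduced sketch of the lookup-table idea (the first-order dynamics, gain, and square-root torque map below are hypothetical stand-ins, not the paper's transmission model or its LMI-based gain design), a scalar observer can invert a monotone pressure-to-torque table with np.interp:

```python
# Minimal sketch: a scalar clutch-pressure observer that uses a lookup table
# for the nonlinear pressure-to-torque map. All dynamics and numbers are
# invented for illustration.
import numpy as np

p_grid = np.linspace(0.0, 10.0, 11)          # pressure breakpoints [bar]
T_grid = 5.0 * np.sqrt(p_grid)               # monotone torque map [Nm]

def torque(p):                               # plant's nonlinear output map
    return np.interp(p, p_grid, T_grid)

def inv_torque(T):                           # exact inverse of the piecewise-
    return np.interp(T, T_grid, p_grid)      # linear monotone table

dt, gain = 1e-3, 20.0                        # step size, observer gain
p, p_hat, u = 4.0, 0.0, 6.0                  # true state, estimate, input
for _ in range(5000):
    y = torque(p)                            # measured clutch torque
    p += dt * (-2.0 * p + u)                 # hypothetical plant dynamics
    p_hat += dt * (-2.0 * p_hat + u + gain * (inv_torque(y) - p_hat))
print(abs(p - p_hat))
```

    The estimation error decays as exp(-(2 + gain)t), so after five simulated seconds the estimate has converged to the true pressure.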

  9. Closeout of JOYO-1 Specimen Fabrication Efforts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ME Petrichek; JL Bump; RF Luther

    2005-10-31

    Fabrication was well under way for the JOYO biaxial creep and tensile specimens when the NR Space program was canceled. Tubes of FS-85, ASTAR-811C, and T-111 for biaxial creep specimens had been drawn at True Tube (Paso Robles, CA), while tubes of Mo-47.5 Re were being drawn at Rhenium Alloys (Cleveland, OH). The Mo-47.5 Re tubes are now approximately 95% complete. Their fabrication and the quantities produced will be documented at a later date. End cap material for FS-85, ASTAR-811C, and T-111 had been swaged at Pittsburgh Materials Technology, Inc. (PMTI) (Large, PA) and machined at Vangura (Clairton, PA). Cutting of tubes, pickling, annealing, and laser engraving were in process at PMTI. Several biaxial creep specimen sets of FS-85, ASTAR-811C, and T-111 had already been sent to Pacific Northwest National Laboratory (PNNL) for weld development. In addition, tensile specimens of FS-85, ASTAR-811C, T-111, and Mo-47.5 Re had been machined at Kin-Tech (North Huntington, PA). Actual machining of the other specimen types had not been initiated. Flowcharts 1-3 detail the major processing steps each piece of material has experienced. A more detailed description of processing will be provided in a separate document [B-MT(SRME)-51]. Table 1 lists the in-process materials and finished specimens. Also included are current metallurgical condition of these materials and specimens. The available chemical analyses for these alloys at various points in the process are provided in Table 2.

  10. Circuit board routing attachment for Fermilab Gerber plotter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindenmeyer, C.

    1984-05-10

    A new and potentially important method of producing large circuit boards has been developed at Fermilab. A Gerber Flat Bed Plotter with an active area of 5' x 16' has been fitted with a machining head to produce a circuit board without the use of photography or chemicals. The modifications of the Gerber Plotter do not impair its use as a photoplotter or pen plotter, the machining head is merely exchanged with the standard attachments. The modifications to the program are minimal; this will be described in another report. The machining head is fitted with an air bearing motorized spindle driven at a speed of 40,000 rpm to 90,000 rpm. The spindle also is provided with air bearings on its outside diameter, offering frictionless vertical travel guidance. Vertical travel of the spindle is driven by a spring return single acting air cylinder. An adjustable hydraulic damper slows the spindle travel near the end of its downward stroke. Two programmable stops control spindle down stroke position, and limit switches are provided for position feedback to the control system. A vacuum system collects chips at the cutter head. No lubrication or regular maintenance is required. The circuit board to be fabricated is supported on a porous plastic mat which allows table vacuum to hold the board in place while allowing the cutters or drills to cut through the board without damaging the rubber platen of the plotter. The perimeter of the board must be covered to the limits of the table vacuum area used to prevent excessive leakage.

  11. TU-G-BRD-02: Automated Systematic Quality Assurance Program for Radiation Oncology Information System Upgrades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Yi, B; Eley, J

    Purpose: To (1) describe an independent, automated, systematic software-based protocol for verifying clinical data accuracy/integrity to mitigate data corruption/loss risks following radiation oncology information system (ROIS) upgrades; and (2) report on the application of this approach in an academic/community practice environment. Methods: We propose a robust approach to perform quality assurance on the ROIS after an upgrade, targeting four data sources: (1) the ROIS relational database; (2) the ROIS DICOM interface; (3) the ROIS treatment machine data configuration; and (4) ROIS-generated clinical reports. We investigated the database schema for differences between pre-/post-upgrade states. Paired DICOM data streams for the same object (such as RT-Plan/Treatment Record) were compared between pre-/post-upgrade states for data corruption. We examined machine configuration and related commissioning data files for changes and corruption. ROIS-generated treatment appointment and treatment parameter reports were compared to ensure patient encounter and treatment plan accuracy. This protocol was supplemented by an end-to-end clinical workflow test to verify essential ROIS functionality and the integrity of components interfaced during the patient care chain of activities. We describe the implementation of this protocol during a Varian ARIA system upgrade at our clinic. Results: We verified 1,638 data tables with 2.4 billion data records. For 222 under-treatment patients, 605 DICOM RT plans and 13,480 DICOM treatment records retrieved from the ROIS DICOM interface were compared, with no differences in fractions, doses delivered, or treatment parameters. We identified 82 new data tables and 78 amended/deleted tables consistent with the upgrade. Reports for 5,073 patient encounters over a 2-week horizon were compared and were identical to those before the upgrade. Content in 12,237 xml machine files was compared, with no differences identified. Conclusion: An independent QA/validation approach for ROIS upgrades was developed and implemented at our clinic. The success of this approach ensures robust QA of ROIS upgrades without manual paper/electronic checks and the associated intensive labor.

  12. Co-PylotDB - A Python-Based Single-Window User Interface for Transmitting Information to a Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-05

    Co-PylotDB, written completely in Python, provides a user interface (UI) with which to select user and data file(s), directories, and file content, and provide or capture various other information for sending data collected from running any computer program to a pre-formatted database table for persistent storage. The interface allows the user to select input, output, make, source, executable, and qsub files. It also provides fields for specifying the machine name on which the software was run, capturing compile and execution lines, and listing relevant user comments. Data automatically captured by Co-PylotDB and sent to the database are user, current directory, local hostname, current date, and time of send. The UI provides fields for logging into a local or remote database server, specifying a database and a table, and sending the information to the selected database table. If a server is not available, the UI provides for saving the command that would have saved the information to a database table for either later submission or for sending via email to a collaborator who has access to the desired database.
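
    A minimal sketch of the capture-and-store idea using the standard-library sqlite3 module; the table schema and the user-supplied field values below are hypothetical, not Co-PylotDB's actual database layout:

```python
# Hypothetical sketch: capture run metadata automatically and store it in a
# database table (in-memory SQLite stands in for a real database server).
import os, socket, sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE runs (
    user TEXT, cwd TEXT, host TEXT, sent_at TEXT,
    machine TEXT, compile_line TEXT, comments TEXT)""")

record = (
    os.environ.get("USER", "unknown"),       # captured automatically
    os.getcwd(),
    socket.gethostname(),
    datetime.now(timezone.utc).isoformat(),
    "clusterA",                              # user-supplied fields (invented)
    "gcc -O2 main.c",
    "baseline run",
)
conn.execute("INSERT INTO runs VALUES (?, ?, ?, ?, ?, ?, ?)", record)
row = conn.execute("SELECT host, machine FROM runs").fetchone()
print(row)
```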

  13. Integrated verification and testing system (IVTS) for HAL/S programs

    NASA Technical Reports Server (NTRS)

    Senn, E. H.; Ames, K. R.; Smith, K. A.

    1983-01-01

    The IVTS is a large software system designed to support user-controlled verification analysis and testing activities for programs written in the HAL/S language. The system is composed of a user interface and user command language, analysis tools and an organized data base of host system files. The analysis tools are of four major types: (1) static analysis, (2) symbolic execution, (3) dynamic analysis (testing), and (4) documentation enhancement. The IVTS requires a split HAL/S compiler, divided at the natural separation point between the parser/lexical analyzer phase and the target machine code generator phase. The IVTS uses the internal program form (HALMAT) between these two phases as primary input for the analysis tools. The dynamic analysis component requires some way to 'execute' the object HAL/S program. The execution medium may be an interpretive simulation or an actual host or target machine.

  14. Split-luciferase complementary assay: applications, recent developments, and future perspectives.

    PubMed

    Azad, Taha; Tashakor, Amin; Hosseinkhani, Saman

    2014-09-01

    Bioluminescent systems are considered potent reporter systems for bioanalysis since they have specific characteristics, such as relatively high quantum yields and photon emission over a wide range of colors from green to red. Biochemical events are mostly accomplished by large protein machines. These molecular complexes are built from a few to many proteins organized through their interactions. These protein-protein interactions are vital to the biological activity of cells. The split-luciferase complementation assay makes the study of two or more interacting proteins possible. In this technique, each of the two domains of luciferase is attached to one partner of two interacting proteins. On interaction of those proteins, the luciferase fragments are brought close to each other and form a complemented luciferase, which produces a luminescent signal. Split luciferase is also an effective tool for assaying biochemical metabolites: a domain or an intact protein is inserted into an internally fragmented luciferase, so that ligand binding causes a change in the emitted signal. We review the various applications of this novel luminescent biosensor in studying protein-protein interactions and assaying metabolites in analytical biochemistry, cell communication and cell signaling, molecular biology, and the fate of the whole cell, and show that luciferase-based biosensors are powerful tools that can be applied for diagnostic and therapeutic purposes.

  15. Miniaturized multiwavelength digital holography sensor for extensive in-machine tool measurement

    NASA Astrophysics Data System (ADS)

    Seyler, Tobias; Fratz, Markus; Beckmann, Tobias; Bertz, Alexander; Carl, Daniel

    2017-06-01

    In this paper we present a miniaturized digital holographic sensor (HoloCut) for operation inside a machine tool. With state-of-the-art 3D measurement systems, short-range structures such as tool marks cannot be resolved inside a machine tool chamber. Up to now, measurements had to be conducted outside the machine tool and thus processing data are generated offline. The sensor presented here uses digital multiwavelength holography to get 3D-shape-information of the machined sample. By using three wavelengths, we get a large artificial wavelength with a large unambiguous measurement range of 0.5mm and achieve micron repeatability even in the presence of laser speckles on rough surfaces. In addition, a digital refocusing algorithm based on phase noise is implemented to extend the measurement range beyond the limits of the artificial wavelength and geometrical depth-of-focus. With complex wave field propagation, the focus plane can be shifted after the camera images have been taken and a sharp image with extended depth of focus is constructed consequently. With 20mm x 20mm field of view the sensor enables measurement of both macro- and micro-structure (such as tool marks) with an axial resolution of 1 µm, lateral resolution of 7 µm and consequently allows processing data to be generated online which in turn qualifies it as a machine tool control. To make HoloCut compact enough for operation inside a machining center, the beams are arranged in two planes: The beams are split into reference beam and object beam in the bottom plane and combined onto the camera in the top plane later on. Using a mechanical standard interface according to DIN 69893 and having a very compact size of 235mm x 140mm x 215mm (WxHxD) and a weight of 7.5 kg, HoloCut can be easily integrated into different machine tools and extends no more in height than a typical processing tool.
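
    The artificial-wavelength arithmetic behind the 0.5 mm unambiguous range can be illustrated numerically. The two wavelengths below are hypothetical, chosen only so that the resulting synthetic wavelength Lambda = l1*l2/|l1 - l2| lands near the value quoted in the abstract; the unambiguous height range in two-wavelength holography is Lambda/2:

```python
# Two closely spaced wavelengths produce a much larger synthetic
# ("artificial") wavelength; the wavelengths here are illustrative.
def synthetic_wavelength(l1_nm, l2_nm):
    return l1_nm * l2_nm / abs(l1_nm - l2_nm)

l1, l2 = 780.0, 779.4                   # hypothetical laser wavelengths [nm]
lam = synthetic_wavelength(l1, l2)      # about 1.01e6 nm, i.e. ~1 mm
print(round(lam / 1e6, 3), "mm; unambiguous range",
      round(lam / 2e6, 3), "mm")
```

    Cascading a third wavelength, as the sensor does, repeats the same construction between pairs of synthetic wavelengths to bridge from micron-scale single-wavelength precision up to the millimetre-scale range.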

  16. Investigation of Dynamic Force/Vibration Transmission Characteristics of Four-Square Type Gear Durability Test Machines

    NASA Technical Reports Server (NTRS)

    Kahraman, Ahmet

    2002-01-01

    In this study, design requirements for a dynamically viable, four-square type gear test machine are investigated. Variations of four-square type gear test machines have been in use for durability and dynamics testing of both parallel- and cross-axis gear sets. The basic layout of these machines is illustrated. The test rig is formed by two gear pairs of the same reduction ratio, a test gear pair and a reaction gear pair, connected to each other through shafts of certain torsional flexibility to form an efficient, closed-loop system. A desired level of constant torque is input to the circuit through mechanical (a split coupling with a torque arm) or hydraulic (a hydraulic actuator) means. The system is then driven at any desired speed by a small DC motor. The main task at hand is the isolation of the test gear pair from the reaction gear pair under dynamic conditions. Any disturbances originating at the reaction gear mesh might travel to the test gearbox, altering the dynamic loading conditions of the test gear mesh and hence influencing the outcome of a durability or dynamics test. Therefore, proper design of the connecting structures becomes a major priority. Equally important is how close the operating speed of the machine is to the resonant frequencies of the gear meshes. This study focuses on a detailed analysis of the current NASA Glenn Research Center gear pitting test machine to evaluate its resonance and vibration isolation characteristics. A number of these machines, such as the one illustrated, have been used over the last 30 years to establish an extensive database on the influence of gear materials, processes, surface treatments, and lubricants on gear durability. This study is intended to guide an optimum design of next-generation test machines for the most desirable dynamic characteristics.

  17. Recognition and Quantification of Area Damaged by Oligonychus Perseae in Avocado Leaves

    NASA Astrophysics Data System (ADS)

    Díaz, Gloria; Romero, Eduardo; Boyero, Juan R.; Malpica, Norberto

    The measurement of leaf damage is a basic tool in plant epidemiology research. Measuring the area of a great number of leaves by hand is subjective and time-consuming. We investigate the use of machine learning approaches for the objective segmentation and quantification of leaf area damaged by mites in avocado leaves. After extraction of the leaf veins, pixels are labeled with a look-up table generated using a Support Vector Machine with a polynomial kernel of degree 3 on the chrominance components of the YCrCb color space. Spatial information is included in the segmentation process by rating the degree of membership to a certain class and the homogeneity of the classified region. Results are presented on real images with different degrees of damage.
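
    A sketch of the lookup-table classification idea on synthetic chrominance data (the cluster centers, kernel settings, and grid step are assumptions, not the paper's trained model): an SVM with a degree-3 polynomial kernel is trained on (Cr, Cb) pairs, then predictions are precomputed over the chrominance grid so that per-pixel labelling reduces to a table lookup.

```python
# Synthetic (Cr, Cb) chrominance samples for "healthy" and "damaged" pixels;
# all cluster parameters are invented for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
healthy = rng.normal([110, 120], 6, size=(300, 2))
damaged = rng.normal([150, 100], 6, size=(300, 2))
X = np.vstack([healthy, damaged])
y = np.repeat([0, 1], 300)

clf = SVC(kernel="poly", degree=3, coef0=1.0).fit(X, y)

# Lookup table over a coarse 8-bit chrominance grid (step 4 keeps it small);
# labelling an image pixel then costs one table access, not one SVM call.
grid = np.array([(cr, cb) for cr in range(0, 256, 4) for cb in range(0, 256, 4)])
lut = clf.predict(grid).reshape(64, 64)
print(lut[110 // 4, 120 // 4], lut[150 // 4, 100 // 4])
```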

  18. Spectral and spatial characterisation of laser-driven positron beams

    DOE PAGES

    Sarri, G.; Warwick, J.; Schumaker, W.; ...

    2016-10-18

    The generation of high-quality relativistic positron beams is a central area of research in experimental physics, due to their potential relevance in a wide range of scientific and engineering areas, ranging from fundamental science to practical applications. There is now growing interest in developing hybrid machines that will combine plasma-based acceleration techniques with more conventional radio-frequency accelerators, in order to minimise the size and cost of these machines. Here we report on recent experiments on laser-driven generation of high-quality positron beams using a relatively low energy and potentially table-top laser system. The results obtained indicate that current technology allows the creation, in a compact setup, of positron beams suitable for injection into radio-frequency accelerators.

  19. Could machine learning improve the prediction of pelvic nodal status of prostate cancer patients? Preliminary results of a pilot study.

    PubMed

    De Bari, B; Vallati, M; Gatta, R; Simeone, C; Girelli, G; Ricardi, U; Meattini, I; Gabriele, P; Bellavita, R; Krengli, M; Cafaro, I; Cagna, E; Bunkheila, F; Borghesi, S; Signor, M; Di Marco, A; Bertoni, F; Stefanacci, M; Pasinetti, N; Buglione, M; Magrini, S M

    2015-07-01

    We tested and compared the performance of the Roach formula, Partin tables, and three machine learning (ML) algorithms based on decision trees in identifying node-positive (N+) prostate cancer (PC). 1,555 cN0 and 50 cN+ PC patients were analyzed. Results were also verified on an independent population of 204 operated cN0 patients with a known pN status (187 pN0, 17 pN1). The ML algorithms performed better, also when tested on the surgical population, with accuracy, specificity, and sensitivity ranging between 48-86%, 35-91%, and 17-79%, respectively. ML potentially allows better prediction of the nodal status of PC, enabling better tailoring of pelvic irradiation.
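
    An illustrative decision-tree sketch on synthetic data; the clinical variables (PSA, Gleason score) and their relation to nodal status are invented here purely to show how the reported accuracy, sensitivity, and specificity figures are computed:

```python
# Synthetic cohort: nodal involvement made more likely at high PSA/Gleason.
# None of these numbers come from the study.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
n = 1000
psa = rng.lognormal(2.0, 0.6, n)              # hypothetical PSA values
gleason = rng.integers(6, 11, n)              # hypothetical Gleason scores
p_npos = 1 / (1 + np.exp(-(0.08 * psa + 0.9 * (gleason - 6) - 4)))
y = (rng.random(n) < p_npos).astype(int)
X = np.column_stack([psa, gleason])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:800], y[:800])
tn, fp, fn, tp = confusion_matrix(y[800:], clf.predict(X[800:])).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(sensitivity, 2), round(specificity, 2))
```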

  20. A/E/C CAD Standard, Release 4.0

    DTIC Science & Technology

    2009-07-01

    Insulating (Transformer) Oil System … Lubrication Oil … Hot Water Heating System … Machine Design … Appendix A, Model File Level/Layer Assignment Tables A51… of the A/E/C CAD Standard are: "Uniform Drawing System," The Construction Specifications Institute, 99 Canal Center Plaza, Suite 300, Alexandria, VA … FM – Facility Management … GIS – Geographic Information System … IAI – International Alliance for Interoperability … IFC – Industry Foundation

  1. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2014-01-01

    comprehensive multimodal dataset and a face database are described in section V. Finally, in section VI, we discuss the computational complexity of … fingerprint, iris, palmprint, hand geometry and voice from subjects of different age, gender and ethnicity as described in Table I. It is a … Taylor, "Constructing nonlinear discriminants from multiple data views," Machine Learning and Knowledge Discovery in Databases, pp. 328-343, 2010

  2. USSR and Eastern Europe Scientific Abstracts, Biomedical and Behavioral Sciences, Number 81.

    DTIC Science & Technology

    1977-11-28

    Hydrobiology … Industrial Microbiology … Industrial Toxicology … Marine Mammals … Microbiology … Molecular Biology … Neurosciences … in progress. Factors involved in increasing productivity were calculated and presented in 4 tables: duration of use of equipment in 1 day (hours) … machines no longer in production but omits materials on some new equipment and some new forms of organization of the work of the agrochemical

  3. Experiments on PIM in Support of the Development of IVA Technology for Radiography at AWE

    NASA Astrophysics Data System (ADS)

    Clough, Stephen G.; Thomas, Kenneth J.; Williamson, Mark C.; Phillips, Martin J.; Smith, Ian D.; Bailey, Vernon L.; Kishi, Hiroshi J.; Maenchen, John E.; Johnson, David L.

    2002-12-01

    The PIM machine has been designed and constructed at AWE as part of a program to investigate IVA technology for radiographic applications. PIM, as originally constructed, was a prospective single module of a 14 MV, 100 kA, ten module machine. The design of such a machine is a primary goal of the program as several are required to provide multi-axis radiography in a new Hydrodynamics Research Facility (HRF). Another goal is to design lower voltage machines (ranging from 1 to 5 MV) utilizing PIM style components. The original PIM machine consisted of a single inductive cavity pulsed by a 10 ohm water dielectric Blumlein pulse forming line (PFL) charged by a Marx generator. These components successfully achieved their design voltages and data on the prepulse was obtained showing it to be worse than expected. This information provided a basis for design work on the 14 MV HRF IVA, carried out by Titan-PSD, resulting in a proposal for a prepulse switch, a prototype of which should be installed on PIM by the end of this year. The original single, coaxial switch used to initiate the Blumlein has been replaced by a prototype laser triggered switching arrangement, also designed by Titan-PSD, which it was desired to test prior to its eventual use in the HRF. Despite problems with the laser, which will necessitate further experiments, it was determined that laser triggering with low jitter was occurring. A split oil co-ax feed has now been used to install a second cavity, in parallel with the first, on the PIM Blumlein. This two cavity configuration provides a prototype for future radiographic machines operating at up to 3 MV and a test facility for diode research.

  4. Quadruple Axis Neutron Computed Tomography

    NASA Astrophysics Data System (ADS)

    Schillinger, Burkhard; Bausenwein, Dominik

    Neutron computed tomography takes more time for a full tomography than X-rays or synchrotron radiation because the source intensity is limited. Most neutron imaging detectors have a square field of view, so when tomography of elongated, narrow samples (e.g. fuel rods or sword blades) is recorded, much of the detector area is wasted. Using multiple rotation axes, several samples can be placed inside the field of view, and multiple tomographies can be recorded at the same time by later splitting the recorded images into separate tomography data sets. We describe a new multiple-axis setup using four independent miniaturized rotation tables.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shettel, D.L. Jr.; Langfeldt, S.L.; Youngquist, C.A.

    This report presents a Hydrogeochemical and Stream Sediment Reconnaissance of the Christian NTMS Quadrangle, Alaska. In addition to this abbreviated data release, more complete data are available to the public in machine-readable form. These machine-readable data, as well as quarterly or semiannual program progress reports containing further information on the HSSR program in general, or on the Los Alamos National Laboratory portion of the program in particular, are available from DOE's Technical Library at its Grand Junction Area Office. Presented in this data release are location data, field analyses, and laboratory analyses of several different sample media. For the sake of brevity, many field site observations have not been included in this volume; these data are, however, available on the magnetic tape. Appendices A through D describe the sample media and summarize the analytical results for each medium. The data have been subdivided by one of the Los Alamos National Laboratory sorting programs of Zinkl and others (1981a) into groups of stream-sediment, lake-sediment, stream-water, lake-water, and ground-water samples. For each group which contains a sufficient number of observations, statistical tables, tables of raw data, and 1:1,000,000 scale maps of pertinent elements have been included in this report. Also included are maps showing results of multivariate statistical analyses.

  6. Helical Face Gear Development Under the Enhanced Rotorcraft Drive System Program

    NASA Technical Reports Server (NTRS)

    Heath, Gregory F.; Slaughter, Stephen C.; Fisher, David J.; Lewicki, David G.; Fetty, Jason

    2011-01-01

    U.S. Army goals for the Enhanced Rotorcraft Drive System Program are to achieve a 40 percent increase in horsepower-to-weight ratio, a 15 dB reduction in drive-system-generated noise, a 30 percent reduction in drive system operating, support, and acquisition cost, and 75 percent automatic detection of critical mechanical component failures. Boeing's technology transition goals are that the operational endurance level of the helical face gearing and related split-torque designs be validated to TRL 6, and that analytical and manufacturing tools be validated. Helical face gear technology is being developed in this project to augment, and transition into, a Boeing AH-64 Block III split-torque face gear main transmission stage, to yield increased power density and reduced noise. To date, helical face gear grinding development on Northstar's new face gear grinding machine and pattern-development tests at the NASA Glenn/U.S. Army Research Laboratory have been completed and are described.

  7. Performance analysis and prediction in triathlon.

    PubMed

    Ofoghi, Bahadorreza; Zeleznikow, John; Macmahon, Clare; Rehula, Jan; Dwyer, Dan B

    2016-01-01

    Performance in triathlon is dependent upon factors that include somatotype, physiological capacity, technical proficiency and race strategy. Given the multidisciplinary nature of triathlon and the interaction between each of the three race components, the identification of target split times that can be used to inform the design of training plans and race pacing strategies is a complex task. The present study uses machine learning techniques to analyse a large database of performances in Olympic distance triathlons (2008-2012). The analysis reveals patterns of performance in five components of triathlon (three race "legs" and two transitions) and the complex relationships between performance in each component and overall performance in a race. The results provide three perspectives on the relationship between performance in each component of triathlon and the final placing in a race. These perspectives allow the identification of target split times that are required to achieve a certain final place in a race and the opportunity to make evidence-based decisions about race tactics in order to optimise performance.
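
    A toy version of the target-split idea on synthetic race data (the distributions below are rough Olympic-distance guesses, not the 2008-2012 results): the "target" splits for a top-10 finish are taken as the average component times of the ten fastest overall finishers:

```python
# Synthetic field of 300 finishers; splits are swim, T1, bike, T2, run
# in minutes. All distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 300
splits = np.column_stack([
    rng.normal(19.0, 1.0, n), rng.normal(0.8, 0.15, n),
    rng.normal(58.0, 2.0, n), rng.normal(0.5, 0.10, n),
    rng.normal(33.0, 2.5, n),
])
total = splits.sum(axis=1)

# Target splits for a top-10 place: mean component times of the ten
# fastest overall finishers
order = np.argsort(total)
targets = splits[order[:10]].mean(axis=0)
print(np.round(targets, 2))
```

    The study's actual analysis is considerably richer (machine learning over five components and their interactions), but this captures the basic idea of deriving component targets from a desired final placing.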

  8. Crystal structure of RecBCD enzyme reveals a machine for processing DNA breaks

    NASA Astrophysics Data System (ADS)

    Singleton, Martin R.; Dillingham, Mark S.; Gaudier, Martin; Kowalczykowski, Stephen C.; Wigley, Dale B.

    2004-11-01

    RecBCD is a multi-functional enzyme complex that processes DNA ends resulting from a double-strand break. RecBCD is a bipolar helicase that splits the duplex into its component strands and digests them until encountering a recombinational hotspot (Chi site). The nuclease activity is then attenuated and RecBCD loads RecA onto the 3' tail of the DNA. Here we present the crystal structure of RecBCD bound to a DNA substrate. In this initiation complex, the DNA duplex has been split across the RecC subunit to create a fork with the separated strands each heading towards different helicase motor subunits. The strands pass along tunnels within the complex, both emerging adjacent to the nuclease domain of RecB. Passage of the 3' tail through one of these tunnels provides a mechanism for the recognition of a Chi sequence by RecC within the context of double-stranded DNA. Gating of this tunnel suggests how nuclease activity might be regulated.

  9. Multidimensional upwind hydrodynamics on unstructured meshes using graphics processing units - I. Two-dimensional uniform meshes

    NASA Astrophysics Data System (ADS)

    Paardekooper, S.-J.

    2017-08-01

    We present a new method for numerical hydrodynamics which uses a multidimensional generalization of the Roe solver and operates on an unstructured triangular mesh. The main advantage over traditional methods based on Riemann solvers, which commonly use one-dimensional flux estimates as building blocks for a multidimensional integration, is its inherently multidimensional nature, and as a consequence its ability to recognize multidimensional stationary states that are not hydrostatic. A second novelty is the focus on graphics processing units (GPUs). By tailoring the algorithms specifically to GPUs, we are able to get speedups of 100-250 compared to a desktop machine. We compare the multidimensional upwind scheme to a traditional, dimensionally split implementation of the Roe solver on several test problems, and we find that the new method significantly outperforms the Roe solver in almost all cases. This comes with increased computational costs per time-step, which makes the new method approximately a factor of 2 slower than a dimensionally split scheme acting on a structured grid.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanny, S; Bogue, J; Parsai, E

    Purpose: Potential collisions between the gantry head and the patient or table assembly are difficult to detect in most treatment planning systems. We have developed and implemented a novel software package for the representation of potential gantry collisions with the couch assembly at the time of treatment planning. Methods: Physical dimensions of the Varian Edge linear accelerator treatment head were measured and reproduced using the Visual Python display package. A script was developed for the Pinnacle treatment planning system to generate a file with the relevant couch, gantry, and isocenter positions for each beam in a planning trial. A Python program was developed to parse the information from the TPS and produce a representative model of the couch/gantry system. Using the model and the Visual Python libraries, a rendering window is generated for each beam that allows the planner to evaluate the possibility of a collision. Results: Comparison against heuristic methods and direct verification on the machine validated the collision model generated by the software. Encounters of <1 cm between the gantry treatment head and table were visualized as collisions in our virtual model. Visual windows were created depicting the angle of collision for each beam, including the anticipated table coordinates. Visual rendering of a 6-arc trial with multiple couch positions was completed in under 1 minute, with network bandwidth being the primary bottleneck. Conclusion: The developed software allows for quick examination of possible collisions during the treatment planning process and helps to prevent major collisions prior to plan approval. The software can easily be implemented on future planning systems due to the versatility and platform independence of the Python programming language. Further integration of the software with the treatment planning system will allow the possibility of patient-gantry collision detection for a range of treatment machines.
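
    A toy version of the clearance test with invented geometry (the actual Edge treatment head and couch dimensions measured in this work are not reproduced): the gantry head is approximated by a sphere, the couch top by an axis-aligned box, and any gap under 1 cm would be flagged as a collision:

```python
# Hypothetical clearance check; all dimensions and positions are invented.
import numpy as np

def clearance(head_center, head_radius, box_min, box_max):
    # Distance from the sphere center to the nearest point of the box,
    # minus the sphere radius, gives the head-to-couch gap.
    closest = np.clip(head_center, box_min, box_max)
    return np.linalg.norm(head_center - closest) - head_radius

# Couch top as an axis-aligned box, all coordinates in cm
box_min = np.array([-25.0, -10.0, -100.0])
box_max = np.array([25.0, 0.0, 100.0])
head = np.array([0.0, 35.0, 0.0])     # head position at one gantry angle
gap = clearance(head, 30.0, box_min, box_max)
print(round(gap, 1), "cm ->", "collision" if gap < 1.0 else "clear")
```

    In the actual tool this test would be repeated per beam, over the exported couch/gantry/isocenter positions, alongside the visual rendering.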

  11. Angular approach combined to mechanical model for tool breakage detection by eddy current sensors

    NASA Astrophysics Data System (ADS)

    Ritou, M.; Garnier, S.; Furet, B.; Hascoet, J. Y.

    2014-02-01

    The paper presents a new complete approach for Tool Condition Monitoring (TCM) in milling. The aim is the early detection of small damage so that catastrophic tool failures are prevented. A versatile in-process monitoring system is introduced for reliability reasons. The tool condition is determined from estimates of the radial eccentricity of the teeth. An adequate criterion is proposed, combining a mechanical model of milling with an angular approach. Then, a new solution is proposed for estimating the cutting force using eddy current sensors implemented close to the spindle nose. Signals are analysed in the angular domain, notably by a synchronous averaging technique. Phase shifts induced by changes of machining direction are compensated. Results are compared with cutting forces measured with a dynamometer table. The proposed method is applied to an industrial case of a pocket machining operation. One of the cutting edges was slightly damaged during the machining, as shown by a direct measurement of the tool. A control chart is established with the estimates of cutter eccentricity obtained during the machining from the eddy current sensor signals. The efficiency and reliability of the method are demonstrated by successful detection of the damage.

  12. The experience of the FERMI@Elettra photon beam transport and diagnostics system (PADReS) during three years of continuous support of machine and user experiments: achievements, lessons learned, and future upgrades

    NASA Astrophysics Data System (ADS)

    Zangrando, Marco; Fava, Claudio; Gerusina, Simone; Gobessi, Riccardo; Mahne, Nicola; Mazzucco, Eric; Raimondi, Lorenzo; Rumiz, Luca; Svetina, Cristian

    2014-09-01

    The FERMI FEL facility began delivering photons in 2011 and in late 2012 became the first seeded FEL facility worldwide open to external users. Since then, several tens of experiments have been carried out on the three operational endstations LDM, DiProI, and EIS-TIMEX. Starting from the commissioning phase, the photon beam transport and diagnostics system (PADReS) has been continuously developed and upgraded, becoming the indispensable interface between the machine and the experimental chambers. Moreover, PADReS itself has served as an active player in several machine studies as well as in various state-of-the-art experiments. In particular, some elements of PADReS have become key features for performing cutting-edge experiments: the online energy spectrometer, the active-optics refocusing systems, the split-and-delay line, and so on. For each of them the peculiar advantages will be described, showing the actual implementation in the experiments. The experience gathered so far in fulfilling the needs of both machine and experimental physicists will be discussed, with particular emphasis on the solutions adopted in different scenarios. Recurrent requests and major difficulties will be reported so as to give a glimpse of the standard tasks to be solved when preparing new and demanding experiments. Finally, some ideas and near-future improvements will be presented and discussed.

  13. Fault-Tolerant Coding for State Machines

    NASA Technical Reports Server (NTRS)

    Naegle, Stephanie Taft; Burke, Gary; Newell, Michael

    2008-01-01

    Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; the table represents an eight-state example. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
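    The contrast between the two encodings can be illustrated with a short sketch (my own illustrative reconstruction, not the authors' code): a single bit flip in a binary-encoded register always lands on another defined state, so the machine cannot hang but the error passes silently, whereas a flip in a one-hot register produces a word whose population count differs from one and is therefore detectable.

```python
def binary_codes(n_states):
    """Binary encoding: a binary count of the state number."""
    width = max(1, (n_states - 1).bit_length())
    return [format(i, f"0{width}b") for i in range(n_states)]

def one_hot_codes(n_states):
    """One-hot encoding: N bits for N states, exactly one bit set."""
    return [format(1 << i, f"0{n_states}b") for i in range(n_states)]

def flip_bit(code, bit):
    """Simulate an SEU by flipping one bit of a code word."""
    chars = list(code)
    chars[bit] = "1" if chars[bit] == "0" else "0"
    return "".join(chars)
```

    For the eight-state example, every single-bit upset of a binary code word is itself a defined code word (criteria 1 through 3), while every single-bit upset of a one-hot word is illegal and hence detectable (criterion 4), at the cost of N bits for N states.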

  14. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    NASA Technical Reports Server (NTRS)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to the state-machine control logic of avionic systems, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation method for effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, one of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The other main design element is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture: in the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic-synthesis software tool can produce optimal solutions for design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language), to automatically generate hardware description language (HDL) code from state-transition rules.
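    A software model of such an illegal-state detector might look like the following sketch (my own illustration, not the flight design): partial population counts are formed over LUT-sized groups of bits, then combined in a second stage, mirroring the pipelined lookup-table reduction described above. A one-hot register is legal exactly when the total count is one.

```python
def staged_popcount(state, width, lut_width=4):
    """Count set bits in two stages, mirroring a pipelined FPGA
    reduction: stage 1 forms per-LUT partial counts, stage 2 sums them."""
    parts = []
    for i in range(0, width, lut_width):
        chunk = (state >> i) & ((1 << lut_width) - 1)
        parts.append(bin(chunk).count("1"))  # one LUT's worth of bits
    return sum(parts)

def is_illegal_one_hot(state, width):
    """A one-hot state register is legal iff exactly one bit is set;
    anything else signals an SEU-corrupted state."""
    return staged_popcount(state, width) != 1
```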

  15. Predicting ground contact events for a continuum of gait types: An application of targeted machine learning using principal component analysis.

    PubMed

    Osis, Sean T; Hettinga, Blayne A; Ferber, Reed

    2016-05-01

    An ongoing challenge in the application of gait analysis to clinical settings is the standardized detection of temporal events, with unobtrusive and cost-effective equipment, for a wide range of gait types. The purpose of the current study was to investigate a targeted machine learning approach for the prediction of timing for foot strike (or initial contact) and toe-off, using only kinematics for walking, forefoot running, and heel-toe running. Data were categorized by gait type and split into a training set (∼30%) and a validation set (∼70%). A principal component analysis was performed, and separate linear models were trained and validated for foot strike and toe-off, using ground reaction force data as a gold standard for event timing. Results indicate the model predicted both foot strike and toe-off timing to within 20 ms of the gold standard for more than 95% of cases in walking and running gaits. The machine learning approach continues to provide robust timing predictions for clinical use, and may offer a flexible methodology to handle new events and gait types. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A hybrid prognostic model for multistep ahead prediction of machine condition

    NASA Astrophysics Data System (ADS)

    Roulias, D.; Loutas, T. H.; Kostopoulos, V.

    2012-05-01

    Prognostics are the future trend in condition-based maintenance. In the current framework a data-driven prognostic model is developed. The typical procedure for developing such a model comprises (a) the selection of features which correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from a healthy state until total breakdown, following several days of continuous operation, were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even in the case that a sudden change occurs. Moreover, the model shows the ability to generalise for application to similar mechanical assets.

  17. An Integrated Framework for Human-Robot Collaborative Manipulation.

    PubMed

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

    This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purposes. The proposed framework is split into two phases: 1) phase I, learning to grasp the table; and 2) phase II, learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.

  18. Prediction of mitochondrial proteins of malaria parasite using split amino acid composition and PSSM profile.

    PubMed

    Verma, Ruchi; Varshney, Grish C; Raghava, G P S

    2010-06-01

    Human deaths due to malaria are increasing day by day; thus the malaria-causing parasite Plasmodium falciparum (PF) remains a cause of concern. With the wealth of data now available, it is imperative to understand protein localization in order to gain deeper insight into functional roles. In this manuscript, an attempt has been made to develop a prediction method for the localization of mitochondrial proteins. In this study, we describe a method for predicting mitochondrial proteins of the malaria parasite using a machine-learning technique. All models were trained and tested on 175 proteins (40 mitochondrial and 135 non-mitochondrial proteins) and evaluated using five-fold cross-validation. We developed a Support Vector Machine (SVM) model for predicting mitochondrial proteins of P. falciparum, using amino acid and dipeptide composition, and achieved maximum MCCs of 0.38 and 0.51, respectively. In this study, split amino acid composition (SAAC) is used, where the compositions of the N-terminus, C-terminus, and rest of the protein are computed separately. The performance of the SVM model improved significantly, from MCC 0.38 to 0.73, when SAAC was used as input instead of simple amino acid composition. In addition, an SVM model has been developed using the composition of the PSSM profile, with MCC 0.75 and accuracy 91.38%. We achieved a maximum MCC of 0.81 with 92% accuracy using a hybrid model, which combines the PSSM profile and SAAC. When evaluated on an independent dataset our method performs better than existing methods. A web server, PFMpred, has been developed for predicting mitochondrial proteins of malaria parasites (http://www.imtech.res.in/raghava/pfmpred/).
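    The SAAC feature described above can be sketched in a few lines. The terminal lengths below are arbitrary placeholders, since the abstract does not give the paper's exact split sizes:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fractional amino acid composition of a sequence (20 values)."""
    return [seq.count(a) / len(seq) if seq else 0.0 for a in AMINO_ACIDS]

def saac(seq, n_len=25, c_len=25):
    """Split amino acid composition: compositions of the N-terminus,
    the middle region, and the C-terminus, concatenated (60 values)."""
    return (composition(seq[:n_len])
            + composition(seq[n_len:len(seq) - c_len])
            + composition(seq[-c_len:]))
```

    The resulting 60-dimensional vector (rather than the plain 20-dimensional composition) is what would be fed to the SVM, letting the classifier see targeting signals that concentrate at the termini.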

  19. Estimating sleep parameters using nasal pressure signals applicable to continuous positive airway pressure devices.

    PubMed

    Park, Jong-Uk; Erdenebayar, Urtnasan; Joo, Eun-Yeon; Lee, Kyoung-Joung

    2017-06-27

    This paper proposes a method for classifying sleep-wakefulness and estimating sleep parameters using nasal pressure signals applicable to a continuous positive airway pressure (CPAP) device. In order to classify the sleep-wakefulness states of patients with sleep-disordered breathing (SDB), apnea-hypopnea and snoring events are first detected. Epochs detected as SDB are classified as sleep, and time-domain- and frequency-domain-based features are extracted from the epochs that are detected as normal breathing. Subsequently, sleep-wakefulness is classified using a support vector machine (SVM) classifier in the normal breathing epochs. Finally, four sleep parameters (sleep onset, wake after sleep onset, total sleep time and sleep efficiency) are estimated based on the classified sleep-wakefulness. In order to develop and test the algorithm, 110 patients diagnosed with SDB participated in this study. Ninety of the subjects underwent full-night polysomnography (PSG) and twenty underwent split-night PSG. The subjects were divided into a training set of 50 patients (full/split: 42/8), a validation set of 30 (full/split: 24/6) and a test set of 30 (full/split: 24/6). In the experiments conducted, sleep-wakefulness classification accuracy was found to be 83.2% in the test set, compared with the PSG scoring results of clinical experts. Furthermore, all four sleep parameters showed high correlations with the results obtained via PSG (r ⩾ 0.84, p < 0.05). In order to determine whether the proposed method is applicable to CPAP, sleep-wakefulness classification performance was evaluated for each CPAP condition in the split-night PSG data. The results indicate that the accuracy and sensitivity of sleep-wakefulness classification across CPAP variation show no statistically significant difference (p < 0.05). The contributions made in this study are applicable to the automatic classification of sleep-wakefulness states in CPAP devices and to evaluation of the quality of sleep.
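    Once each epoch is labelled sleep or wake, the four sleep parameters reduce to simple arithmetic. The sketch below uses my own conventions (30 s epochs, onset taken as the first sleep epoch); the paper's precise clinical definitions may differ:

```python
def sleep_parameters(epochs, epoch_len_s=30):
    """Estimate sleep parameters from a per-epoch sleep/wake sequence
    (1 = sleep, 0 = wake)."""
    if 1 not in epochs:
        return None  # no sleep detected
    onset_idx = epochs.index(1)                       # first sleep epoch
    tst = sum(epochs) * epoch_len_s                   # total sleep time
    waso = sum(1 - e for e in epochs[onset_idx:]) * epoch_len_s
    time_in_bed = len(epochs) * epoch_len_s
    return {
        "sleep_onset_s": onset_idx * epoch_len_s,
        "waso_s": waso,          # wake after sleep onset
        "tst_s": tst,
        "efficiency": tst / time_in_bed,
    }
```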

  20. Research and Development for Continued Performance Improvement in Flexible a-Si PV

    DTIC Science & Technology

    2010-12-14

    accomplished, however, at low temperatures silicides tend to form on the surface of the filament, which affected filament lifetime and deposition rate...considered. Titanium Nitride, sputtered As an alternative to the hot wire deposition of silicon, samples were prepared with various thicknesses of...Silicon 21 Insitu DC Sputtering Titanium Nitride 22 Metal Machine 2 -> RF Oxygen Plasma Silicon Dioxide 20. Oxygen Etch Table A.4.1 Open circuit

  1. KinLinks: Software Toolkit for Kinship Analysis and Pedigree Generation from NGS Datasets

    DTIC Science & Technology

    2015-04-21

    Retinitis pigmentosa families 2110 and 2111 of 52 individuals across 6 generations (Figure 5a), and 54 geographically diverse samples (Supplementary Table...relationships within the Retinitis pigmentosa family. Machine Learning Classifier for pairwise kinship prediction Ten features were identified for training...family (Figure 4b), and the Retinitis pigmentosa family (Figure 5b). The auto-generated pedigrees were graphed as well as in family-tree format using

  2. Second Report of the Multirate Processor (MRP) for Digital Voice Communications.

    DTIC Science & Technology

    1982-09-30

    machine are: * two arithmetic logic units (ALUs), one for data processing, and the other for address generation, * two memories, 6144 words (70 bits per word...of program memory, and 6094 words (16 bits per word) of data memory, * input/output through modem and teletype, KANG AND FRANSEN Table...provides a measure of intelligibility and allows one to evaluate the discriminability of six distinctive features: voicing, nasality, sustention

  3. X-Ray Simulator Theory Support

    DTIC Science & Technology

    1993-11-01

    the pulse power elements in existing and future DNA flash x-ray simulators, in particular DECADE. The pulse power for this machine is based on...usually requires usage at less than the radiation the longer the radiation pulse. full power. Energy delivered to the plasma load is converted into...on the Proto II generator sured with a p-i-n diode filtered with 25 pm of aluminum; the TABLE 1. Nominal parameters for some pulse power generators used

  4. Factors Influencing Material Removal And Surface Finish Of The Polishing Of Silica Glasses

    DTIC Science & Technology

    2006-01-01

    Mechanical Properties of Quartz and Zerodur ® ..................................... 48 TABLE 4.2: Results from variable load and lap velocity experiments...of glass and glass-ceramic substrates which are used in a vast amount of applications, from optics for lithographic machines to mirrors and lenses...SiO2) glass polishing with metal oxide abrasive particles. This scheme will mirror the experimentation in this thesis, and hopefully provide a better

  5. Nontraditional Machining Guide, 26 Newcomers for Production

    DTIC Science & Technology

    1976-07-01

    essential: Frequent coarse wheel dressing to maintain sharpness Lower wheel speeds (under 3500 sfpm) Lower infeed rates (0.0002 to 0.0005 inch per pass...Oil-base lubricants with good flow control Soft wheels (H, I or J grades) Higher table speeds (50 sfpm or more) Solid fixtures and well...the wheel and the workpiece as in ECG. Electrical discharges from the graphite wheel are initiated from the higher a-c voltage superimposed on

  6. Identification of 1.4 Million Active Galactic Nuclei In the Mid-Infrared Using WISE Data

    DTIC Science & Technology

    2015-11-01

    galaxies – infrared: stars – galaxies : active – quasars: general Supporting material: machine-readable table 1. INTRODUCTION The International Celestial...AGN-dominated galaxies , optical emission is thought to originate from the compact accretion disk surrounding the supermassive black hole (SMBH), while... galaxies , an optical centroid can be shifted relative to the radio position because of contamination from the host galaxy . Depending on the distance to

  7. Efficient Execution Methods of Pivoting for Bulk Extraction of Entity-Attribute-Value-Modeled Data

    PubMed Central

    Luo, Gang; Frey, Lewis J.

    2017-01-01

    Entity-attribute-value (EAV) tables are widely used to store data in electronic medical records and clinical study data management systems. Before they can be used by various analytical (e.g., data mining and machine learning) programs, EAV-modeled data usually must be transformed into conventional relational table format through pivot operations. This time-consuming and resource-intensive process is often performed repeatedly on a regular basis, e.g., to provide a daily refresh of the content in a clinical data warehouse. Thus, it would be beneficial to make pivot operations as efficient as possible. In this paper, we present three techniques for improving the efficiency of pivot operations: 1) filtering out EAV tuples related to unneeded clinical parameters early on; 2) supporting pivoting across multiple EAV tables; and 3) conducting multi-query optimization. We demonstrate the effectiveness of our techniques through implementation. We show that our optimized execution method of pivoting using these techniques significantly outperforms the current basic execution method of pivoting. Our techniques can be used to build a data extraction tool to simplify the specification of and improve the efficiency of extracting data from the EAV tables in electronic medical records and clinical study data management systems. PMID:25608318
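    The first of the three techniques, early filtering during the pivot, can be sketched with plain dictionaries; this is an illustrative reconstruction, not the paper's implementation, and multi-table pivoting would simply concatenate several EAV inputs before this step:

```python
from collections import defaultdict

def pivot_eav(eav_tuples, wanted_attrs):
    """Pivot (entity, attribute, value) triples into wide rows,
    discarding tuples for unneeded clinical parameters as early
    as possible."""
    wanted = set(wanted_attrs)
    rows = defaultdict(dict)
    for entity, attr, value in eav_tuples:
        if attr in wanted:              # early filtering
            rows[entity][attr] = value
    # one relational-style row per entity, None for missing attributes
    return {e: [vals.get(a) for a in wanted_attrs] for e, vals in rows.items()}
```

    For example, lab results keyed by patient id pivot into one row per patient, with a column per requested parameter.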

  8. Predicting groundwater level fluctuations with meteorological effect implications—A comparative study among soft computing techniques

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir

    2013-07-01

    Knowledge of groundwater table fluctuations is important for agricultural lands as well as for studies related to groundwater utilization and management. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Network (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting at prediction intervals from the following day up to 7 days ahead. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years' data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records.
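    An input combination for h-day-ahead forecasting can be assembled as below; the particular lags chosen here are placeholders for illustration, since the abstract does not specify the exact combinations used:

```python
def make_lagged_dataset(level, rain, et, horizon):
    """Build (inputs, target) pairs for horizon-day-ahead water table
    forecasting from daily level, rainfall and evapotranspiration."""
    X, y = [], []
    for t in range(1, len(level) - horizon):
        X.append([level[t], level[t - 1], rain[t], et[t]])  # example lags
        y.append(level[t + horizon])
    return X, y
```

    The same pairs can then be fed to any of the compared learners (GEP, ANFIS, ANN, SVM) or to an ARMA baseline.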

  9. Three-dimensional tool radius compensation for multi-axis peripheral milling

    NASA Astrophysics Data System (ADS)

    Chen, Youdong; Wang, Tianmiao

    2013-05-01

    Few functions for 3D tool radius compensation are available for generating executable motion control commands in existing computer numerical control (CNC) systems. Once the tool radius changes, especially when the tool size changes with tool wear during machining, a new NC program has to be recreated. A generic 3D tool radius compensation method for multi-axis peripheral milling in CNC systems is presented. The offset path is calculated by offsetting the tool path along the direction of the offset vector by a given distance. The offset vector is perpendicular to both the tangent vector of the tool path and the orientation vector of the tool axis relative to the workpiece. The orientation vector equations of the tool axis relative to the workpiece are obtained through the homogeneous coordinate transformation matrix and the forward kinematics of a generalized kinematics model of multi-axis machine tools. To avoid cutting into the corner formed by two adjacent tool paths, the coordinates of the offset path at the intersection point are calculated according to the transition type, which is determined by the angle between the two tool path tangent vectors at the corner. Through verification with the solid cutting simulation software VERICUT® using different tool radii on a table-tilting type five-axis machine tool, and through a real machining experiment of cutting a soup spoon on a five-axis machine tool with the developed CNC system, the effectiveness of the proposed 3D tool radius compensation method is confirmed. The proposed compensation method is suitable for all kinds of three- to five-axis machine tools as a general method.
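    The offset-vector construction described above amounts to a normalised cross product. A minimal sketch for a single path point (pure Python, without the corner-transition handling the paper adds):

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def offset_point(p, tangent, tool_axis, radius):
    """Offset a tool-path point along the unit vector perpendicular to
    both the path tangent and the tool-axis orientation."""
    n = cross(tangent, tool_axis)
    mag = sum(c * c for c in n) ** 0.5
    if mag == 0.0:
        raise ValueError("tangent parallel to tool axis: offset undefined")
    return tuple(pi + radius * ci / mag for pi, ci in zip(p, n))
```

    Because only the radius argument changes when the tool wears, the same NC program can be reused with an updated compensation value instead of being regenerated.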

  10. Experimental and numerical analysis of the dynamic behaviour in tension of an armour steel for applications in defence industry

    NASA Astrophysics Data System (ADS)

    Cadoni, Ezio; Dotta, Matteo; Forni, Daniele; Riganti, Gianmario; Kaufmann, Hanspeter

    2015-09-01

    The dynamic behaviour of an armour steel in tension was investigated over a wide range of strain rates on round specimens. The experiments were carried out by means of a Split Hopkinson Tensile Bar device and a Hydro-Pneumatic Machine. The target strain rates were set at the following six levels: 10^-3, 5, 25, 100, 500 and 1000 s^-1. Two material models were calibrated and used to replicate the experiments and to simulate a blasting event on a steel plate. Finally, the two responses are compared.

  11. Improved cache performance in Monte Carlo transport calculations using energy banding

    NASA Astrophysics Data System (ADS)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
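    The banding idea can be sketched as follows (an illustrative sketch, not the authors' MC code): group particles by energy band, then process one band at a time, so that only that band's slice of the cross-section table needs to stay cache-resident during each pass.

```python
from bisect import bisect_right

def transport_banded(energies, band_edges, lookup):
    """Process particle energies band-by-band for temporal data reuse.
    lookup(band, energy) stands in for a cross-section table access."""
    bands = [[] for _ in range(len(band_edges) - 1)]
    for e in energies:
        # assign each particle to the band containing its energy
        bands[bisect_right(band_edges, e) - 1].append(e)
    results = []
    for b, group in enumerate(bands):
        # only band b's slice of the table is touched in this loop
        results.extend(lookup(b, e) for e in group)
    return results
```

    The band width is the tuning knob: it can be chosen so each band's table slice fits the cache level being targeted.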

  12. Data analysis on physical and mechanical properties of cassava pellets.

    PubMed

    Oguntunde, Pelumi E; Adejumo, Oluyemisi A; Odetunmibi, Oluwole A; Okagbue, Hilary I; Adejumo, Adebowale O

    2018-02-01

    In this data article, results of a laboratory experimental investigation carried out at the National Centre for Agricultural Mechanization (NCAM) on the moisture content, machine speed and die diameter of the rig, and the outputs (hardness, durability, bulk density, and unit density of the pellets) at different levels for cassava pellets were observed. Analysis of variance using a randomized complete block design with a factorial arrangement was performed for each of the outputs: hardness, durability, bulk density, and unit density of the pellets. A clear description of each of these outputs was given separately using tables and figures. It was observed that for all the outputs, with the exception of unit density, the main factor effects as well as the two- and three-way interactions are significant at the 5% level. This means that the hardness, bulk density and durability of cassava pellets depend on the moisture content of the cassava dough, the machine speed, the die diameter of the extrusion rig, and the combinations of these factors in pairs as well as all three together. Higher machine speeds produced better-quality pellets at lower die diameters, while lower machine speeds are recommended for higher die diameters. The unit density depends only on die diameter and the three-way interaction; it is affected neither by the machine parameters nor by the moisture content of the cassava dough. The moisture content of the cassava dough, the speed of the machine and the die diameter of the extrusion rig are significant factors to be considered in pelletizing cassava to produce pellets. Increasing the moisture content of the cassava dough increases the quality of the cassava pellets.

  13. Astronomical Data Center Bulletin, volume 1, number 3

    NASA Technical Reports Server (NTRS)

    Mead, J. M.; Warren, W. H., Jr.; Nagy, T. A.

    1983-01-01

    A catalog of galactic O-type stars, a machine-readable version of the bright star catalog, a two-micron sky survey, sky survey sources with problematical Durchmusterung identifications, data retrieval for visual binary stars, faint blue objects, the sixth catalog of galactic Wolf-Rayet stars, declination versus magnitude distribution, the SAO-HD-GC-DM cross index catalog, star cross-identification tables, astronomical sources, bibliographical star index search updates, DO-HD and HD-DO cross indices, and catalogs, are reviewed.

  14. Cache Coherence Protocols for Large-Scale Multiprocessors

    DTIC Science & Technology

    1990-09-01

    and is compared with the other protocols for large-scale machines. In later analysis, this coherence method is designated by the acronym OCPD , which...private read misses 2 6 6 ( OCPD ) private write misses 2 6 6 Table 4.2: Transaction Types and Costs. the performance of the memory system. These...methodologies. Figure 4-2 shows the processor utiliza- tions of the Weather program, with special code in the dyn-nic post-mortem sched- 94 OCPD DlrINB

  15. A System Description of the Cocaine Trade

    DTIC Science & Technology

    1994-01-01

    72 C.17. Drug Market Hierarchy Tables (Cells A112 to N155) ............ 74 C.18. Purity Levels (Cells A156 to E71...This report also provides detailed information on how to use the model. The spreadsheets are available for either IBM (DOS) or Apple-based machines upon...red square (IBM) or arrow (Apple) in the upper right-hand corner have a note "behind" the cell explaining something about the data in the cell, or if

  16. Dynamic tensile stress-strain characteristics of carbon/epoxy laminated composites in through-thickness direction

    NASA Astrophysics Data System (ADS)

    Nakai, Kenji; Yokoyama, Takashi

    2015-09-01

    The effect of strain rate up to approximately ε̇ = 10^2 s^-1 on the tensile stress-strain properties of unidirectional and cross-ply carbon/epoxy laminated composites in the through-thickness direction is investigated. Waisted cylindrical specimens machined out of the laminated composites in the through-thickness direction are used in both static and dynamic tests. The dynamic tensile stress-strain curves up to fracture are determined using the split Hopkinson bar (SHB). The low and intermediate strain-rate tensile stress-strain relations up to fracture are measured on an Instron 5500R testing machine. It is demonstrated that the ultimate tensile strength and absorbed energy up to fracture increase significantly, while the fracture strain decreases slightly with increasing strain rate. Macro- and microscopic examinations reveal a marked difference in the fracture surfaces between the static and dynamic tension specimens.

  17. Image analysis and machine learning for detecting malaria.

    PubMed

    Poostchi, Mahdieh; Silamut, Kamolrat; Maude, Richard J; Jaeger, Stefan; Thoma, George

    2018-04-01

    Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. One of the barriers toward a successful mortality reduction has been inadequate malaria diagnosis in particular. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discussed the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. Published by Elsevier Inc.

  18. A Machine Reading System for Assembling Synthetic Paleontological Databases

    PubMed Central

    Peters, Shanan E.; Zhang, Ce; Livny, Miron; Ré, Christopher

    2014-01-01

    Many aspects of macroevolutionary theory and our understanding of biotic responses to global environmental change derive from literature-based compilations of paleontological data. Existing manually assembled databases are, however, incomplete and difficult to assess and enhance with new data types. Here, we develop and validate the quality of a machine reading system, PaleoDeepDive, that automatically locates and extracts data from heterogeneous text, tables, and figures in publications. PaleoDeepDive performs comparably to humans in several complex data extraction and inference tasks and generates congruent synthetic results that describe the geological history of taxonomic diversity and genus-level rates of origination and extinction. Unlike traditional databases, PaleoDeepDive produces a probabilistic database that systematically improves as information is added. We show that the system can readily accommodate sophisticated data types, such as morphological data in biological illustrations and associated textual descriptions. Our machine reading approach to scientific data integration and synthesis brings within reach many questions that are currently underdetermined and does so in ways that may stimulate entirely new modes of inquiry. PMID:25436610

  19. Comment on "An Efficient and Stable Hydrodynamic Model With Novel Source Term Discretization Schemes for Overland Flow and Flood Simulations" by Xilin Xia et al.

    NASA Astrophysics Data System (ADS)

    Lu, Xinhua; Mao, Bing; Dong, Bingjiang

    2018-01-01

    Xia et al. (2017) proposed a novel, fully implicit method for the discretization of the bed friction terms for solving the shallow-water equations. The friction terms contain h^(-7/3) (h denotes water depth), which may be extremely large, introducing machine error when h approaches zero. To address this problem, Xia et al. (2017) introduce auxiliary variables (their equations (37) and (38)) so that h^(-4/3) rather than h^(-7/3) is calculated, and solve a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We analyzed the magnitude of the friction terms and found that, taken as a whole, they do not exceed the machine floating-point range, and we therefore proposed a simple-to-implement technique that splits h^(-7/3) across the different parts of the friction terms to avoid introducing machine error. This technique needs neither extra storage nor the solution of a transformed equation and is thus more efficient for simulations. We also showed that the surface reconstruction method proposed by Xia et al. (2017) may lead to predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the water gravitational effect.
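
    The splitting idea can be illustrated numerically. Assuming a Manning-type friction term of the common form S_f = g n² |q| q h^(-7/3) (our notation, not necessarily the exact expression in either paper), evaluating h^(-7/3) as an isolated factor overflows double precision for very small h, while distributing the power over the factors that contain q keeps every intermediate moderate whenever the velocity u = q/h is moderate.

```python
import math

def friction_naive(g, n, q, h):
    """Direct evaluation of S_f = g n^2 |q| q h^(-7/3).
    The isolated factor h^(-7/3) exceeds double-precision range once h is tiny."""
    return g * n * n * abs(q) * q * h ** (-7.0 / 3.0)

def friction_split(g, n, q, h):
    """Same quantity with h^(7/3) split across the q-factors:
    |q| q / h^(7/3) = |q/h| * (q/h) * h^(-1/3),
    so every intermediate stays moderate when u = q/h is moderate."""
    u = q / h
    return g * n * n * abs(u) * u * h ** (-1.0 / 3.0)
```

For moderate depths the two forms agree to rounding error; for vanishing depth with bounded velocity, only the split form evaluates without overflow (in Python the naive form either raises OverflowError or yields inf, depending on platform).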

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rovang, Dean C.; Lamppa, Derek C.; Cuneo, Michael Edward

    We have successfully integrated the capability to apply uniform, high magnetic fields (10–30 T) to high energy density experiments on the Z facility. This system uses an 8-mF, 15-kV capacitor bank to drive large-bore (5 cm diameter), high-inductance (1–3 mH) multi-turn, multi-layer electromagnets that slowly magnetize the conductive targets used on Z over several milliseconds (time to peak field of 2–7 ms). This system was commissioned in February 2013 and has been used successfully to magnetize more than 30 experiments up to 10 T that have produced exciting and surprising physics results. These experiments used split-magnet topologies to maintain diagnostic lines of sight to the target. We describe the design, integration, and operation of the pulsed coil system in the challenging and harsh environment of the Z Machine. We also describe our plans and designs for achieving fields up to 20 T with a reduced-gap split-magnet configuration, and up to 30 T with a solid magnet configuration in pursuit of the Magnetized Liner Inertial Fusion concept.

  1. Experimental and Numerical Study on Tensile Strength of Concrete under Different Strain Rates

    PubMed Central

    Min, Fanlu; Yao, Zhanhu; Jiang, Teng

    2014-01-01

    The dynamic characterization of concrete is fundamental to understand the material behavior in case of heavy earthquakes and dynamic events. The implementation of a material constitutive law is of capital importance for the numerical simulation of dynamic processes such as those caused by earthquakes. Splitting tensile concrete specimens were tested at strain rates of 10⁻⁷ s⁻¹ to 10⁻⁴ s⁻¹ in an MTS material test machine. Results of tensile strength versus strain rate are presented and compared with compressive strength and existing models at similar strain rates. Dynamic increase factor versus strain rate curves for tensile strength were also evaluated and discussed. The same tensile data are compared with strength data using a thermodynamic model. Results of the tests show a significant strain-rate-sensitive behavior, with dynamic tensile strength increasing with strain rate. In the quasistatic strain rate regime, the existing models often underestimate the experimental results. The thermodynamic theory for the splitting tensile strength of concrete satisfactorily describes the experimental findings of strength as an effect of strain rate. PMID:24883355

  2. Identification and classification of similar looking food grains

    NASA Astrophysics Data System (ADS)

    Anami, B. S.; Biradar, Sunanda D.; Savakar, D. G.; Kulkarni, P. V.

    2013-01-01

    This paper describes a comparative study of Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers, taking as a case study the identification and classification of four pairs of similar looking food grains, namely Finger Millet, Mustard, Soyabean, Pigeon Pea, Aniseed, Cumin-seeds, Split Greengram and Split Blackgram. Algorithms are developed to acquire and process color images of these grain samples. The developed algorithms are used to extract 18 color Hue-Saturation-Value (HSV) features and 42 wavelet-based texture features. A Back Propagation Neural Network (BPNN)-based classifier is designed using three feature sets, namely color-HSV, wavelet-texture, and their combined model. An SVM model is designed for the color-HSV features of the same set of samples. For the ANN-based models, classification accuracies ranging from 93% to 96% for color-HSV, from 78% to 94% for the wavelet-texture model, and from 92% to 97% for the combined model are obtained. Classification accuracy ranging from 80% to 90% is obtained for the color-HSV-based SVM model. The training time required for the SVM-based model is substantially less than that of the ANN for the same set of images.
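
    As a minimal sketch of the color-feature step, mean HSV features and a nearest-centroid stand-in for the ANN/SVM classifiers can be written with the Python standard library alone. The pixel values and grain classes below are invented for illustration and do not reproduce the paper's feature set or classifiers.

```python
import colorsys

def mean_hsv(pixels):
    """Average HSV colour features of an image given as (r, g, b) tuples in [0, 1]."""
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    n = len(hsv)
    return tuple(sum(c[i] for c in hsv) / n for i in range(3))

def nearest_centroid(feature, centroids):
    """Assign `feature` to the class whose HSV centroid is closest
    (a simple stand-in for the trained ANN/SVM classifiers)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: d2(feature, centroids[label]))
```

A real implementation would add the 42 wavelet texture features and train BPNN/SVM models on the combined feature vector.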

  3. Extended-criteria donors in liver transplantation Part II: reviewing the impact of extended-criteria donors on the complications and outcomes of liver transplantation.

    PubMed

    Nemes, Balázs; Gámán, György; Polak, Wojciech G; Gelley, Fanni; Hara, Takanobu; Ono, Shinichiro; Baimakhanov, Zhassulan; Piros, Laszlo; Eguchi, Susumu

    2016-07-01

    Extended-criteria donors (ECDs) have an impact on early allograft dysfunction (EAD), biliary complications, relapse of hepatitis C virus (HCV), and survival. Early allograft dysfunction was frequently seen in grafts with moderate and severe steatosis. Donation after cardiac death (DCD) has been associated with higher rates of graft failure and biliary complications compared to donation after brain death. Extended warm ischemia, reperfusion injury and endothelial activation trigger a cascade leading to microvascular thrombosis, resulting in biliary necrosis, cholangitis, and graft failure. The risk of HCV recurrence increases with donor age and is associated with the use of moderately and severely steatotic grafts. With the administration of protease inhibitors, sustained virological response was achieved in the majority of patients. The donor risk index and EC donor scores (DS) are reported to be useful for assessing outcome. The 1-year survival rates were 87% and 40%, respectively, for donors with a DS of 0 and 3. Graft survival was excellent up to a DS of 2; however, a DS >2 should be avoided in higher-risk recipients. The 1-, 3- and 5-year survival of DCD recipients was comparable to that of optimal donors, whereas ECDs showed lower survival rates of 85%, 78.6%, and 72.3%. The graft survival of split liver transplantation (SLT) was comparable to that of whole-liver orthotopic liver transplantation, and SLT is no longer regarded as an ECD factor in the MELD era. Full-right/full-left split liver transplantation has a significant advantage in extending the high-quality donor pool. Hypothermic oxygenated machine perfusion can be applied clinically in DCD liver grafts; its feasibility and safety have been confirmed, and reperfusion injury was rare in machine-perfused DCD livers.

  4. Feasibility Study of a Precision Cast Loading Machine for Small Ammunition Items

    DTIC Science & Technology

    1975-05-01

    distribution unlimited. ...through a short length of rubber hose into the hemisphere. A clamp actuated by an air cylinder can close or open the rubber hose. Because of unknown... rubber hose in the recess on the face of the fixture, exposing the opposite hole for observation by the TV monitor through the TV camera. 8. Release table

  5. Optimal inventories for overhaul of repairable redundant systems - A Markov decision model

    NASA Technical Reports Server (NTRS)

    Schaefer, M. K.

    1984-01-01

    A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
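
    A toy version of such a model can be sketched with a birth-death chain on the number of failed units, replacing the paper's linear-programming solution with a direct search over spare levels. The failure/repair ratio and the cost figures below are hypothetical, and the model is far simpler than the avionics formulation in the paper.

```python
def stationary(rho, s):
    """Stationary distribution of a birth-death chain on 0..s broken units,
    with ratio rho = failure_rate / repair_rate."""
    w = [rho ** k for k in range(s + 1)]
    z = sum(w)
    return [x / z for x in w]

def expected_cost(s, rho, hold, short):
    """Holding cost per spare stocked, plus shortage cost weighted by the
    probability that every spare is broken (state s)."""
    pi = stationary(rho, s)
    return hold * s + short * pi[-1]

def optimal_spares(rho, hold, short, s_max=20):
    """Search the spare level minimising expected cost: the role the paper
    assigns to linear programming."""
    return min(range(s_max + 1), key=lambda s: expected_cost(s, rho, hold, short))
```

With cheap holding and expensive shortages the optimum stocks several spares; raising the holding cost pushes the optimum down.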

  6. The Concept of Electrically Assisted Friction Stir Welding (EAFSW) and Application to the Processing of Various Metals

    DTIC Science & Technology

    2008-09-01

    unavailability of a precise load cell. This scale was set on the machine working table. Since the shaft spindle had been placed in fixed vertical... Figure 4. Mill Spindle with Slip Ring and Brush Assembly, Weld Tip is in Working Position... areas. FSW processing parameters for HSLA-65 and Type 304L: Spindle Speed (RPM): 850 and 850; Travel Speed (ipm): 6 and 2; Z-Load (lbs): 3500 and 3500.

  7. Travelogue--a newcomer encounters statistics and the computer.

    PubMed

    Bruce, Peter

    2011-11-01

    Computer-intensive methods have revolutionized statistics, giving rise to new areas of analysis and expertise in predictive analytics, image processing, pattern recognition, machine learning, genomic analysis, and more. Interest naturally centers on the new capabilities the computer allows the analyst to bring to the table. This article, instead, focuses on the account of how computer-based resampling methods, with their relative simplicity and transparency, enticed one individual, untutored in statistics or mathematics, on a long journey into learning statistics, then teaching it, then starting an education institution.

  8. DELPHI: An introduction to output layout and data content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, C.F.

    1994-08-16

    DELPHI was the data summary and interpretation code used by gas diagnostics personnel during the period from 1968 through 1986. It was written by Floyd Momyer, and went through several revisions during its period of use. Described here is the final version, which provided the most extensive set of summary tables. Earlier versions of the code lacked some of the capabilities of the final version, but what they did include was of substantially the same format. DELPHI was run against most available input decks in the mid 1980s. Microfiche and hardcopy output were generated. Both now reside in our archives. These reruns used modified input decks, which may not have had the proper "trigger" to instruct DELPHI to output some tables. These tables could therefore be missing from a printout even though the necessary data was present. Also, modifications to DELPHI did, in some instances, eliminate DELPHI's capability to correctly output some of the earlier optional tables. This monologue is intended to complement the archived printout, and to provide enough insight so that someone unfamiliar with the techniques of Gas Diagnostics can retrieve the results at some future date. DELPHI last ran on the CDC-7600 machines, and was not converted to run on the Crays when the CDC-7600s were decommissioned. DELPHI accepted data from various analytical systems, set up data summary tables, and combined preshot tracer and detector data with these results to calculate the total production of measured species and the indicated fission yields and detector conversions.

  9. Thoth: Software for data visualization & statistics

    NASA Astrophysics Data System (ADS)

    Laher, R. R.

    2016-10-01

    Thoth is a standalone software application with a graphical user interface for making it easy to query, display, visualize, and analyze tabular data stored in relational databases and data files. From imported data tables, it can create pie charts, bar charts, scatter plots, and many other kinds of data graphs with simple menus and mouse clicks (no programming required), by leveraging the open-source JFreeChart library. It also computes useful table-column data statistics. A mature tool, having undergone development and testing over several years, it is written in the Java computer language, and hence can be run on any computing platform that has a Java Virtual Machine and graphical-display capability. It can be downloaded and used by anyone free of charge, and has general applicability in science, engineering, medical, business, and other fields. Special tools and features for common tasks in astronomy and astrophysical research are included in the software.

  10. VizieR Online Data Catalog: EBHIS spectra and HI column density maps (Winkel+, 2016)

    NASA Astrophysics Data System (ADS)

    Winkel, B.; Kerp, J.; Floeer, L.; Kalberla, P. M. W.; Ben Bekhti, N.; Keller, R.; Lenz, D.

    2015-11-01

    The EBHIS 1st data release comprises 21-cm neutral atomic hydrogen data of the Milky Way (-600km/s

  11. Segmentation, Splitting, and Classification of Overlapping Bacteria in Microscope Images for Automatic Bacterial Vaginosis Diagnosis.

    PubMed

    Song, Youyi; He, Liang; Zhou, Feng; Chen, Siping; Ni, Dong; Lei, Baiying; Wang, Tianfu

    2017-07-01

    Quantitative analysis of bacterial morphotypes in microscope images plays a vital role in the diagnosis of bacterial vaginosis (BV) based on the Nugent score criterion. However, there are two main challenges for this task: 1) It is quite difficult to identify the bacterial regions due to varied appearance, faint boundaries, heterogeneous shapes, low contrast with the background, and small bacteria sizes relative to the image. 2) Numerous bacteria overlap each other, which hinders accurate analysis of individual bacteria. To overcome these challenges, we propose an automatic method in this paper to diagnose BV by quantitative analysis of bacterial morphotypes, which consists of a three-step approach, i.e., bacteria region segmentation, overlapping bacteria splitting, and bacterial morphotype classification. Specifically, we first segment the bacteria regions via saliency cut, which simultaneously evaluates the global contrast and spatial weighted coherence, and a Markov random field model is then applied for high-quality unsupervised segmentation of small objects. We then decompose overlapping bacteria clumps into markers, and associate each pixel with markers to gather evidence for eventual splitting into individual bacteria. Next, we extract morphotype features from each bacterium to learn descriptors and to characterize the types of bacteria using an Adaptive Boosting (AdaBoost) machine learning framework. Finally, BV diagnosis is performed based on the Nugent score criterion. Experiments demonstrate that our proposed method achieves high accuracy and computational efficiency for BV diagnosis.

  12. Discriminant forest classification method and system

    DOEpatents

    Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.

    2012-11-06

    A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or Andersen-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.
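
    A stripped-down sketch of the idea: each "tree" below is a single discriminant split along the direction between the class means of a bootstrap sample, and the forest predicts by majority vote. This simplifies the patented method considerably (real LDA uses covariance information, and the trees are grown to depth), so it should be read as a conceptual illustration only.

```python
import random

def lda_split(pts, labels):
    """Fit the simplest discriminant split: project onto the direction between
    the two class means and threshold at the midpoint of the projected means."""
    groups = {}
    for p, y in zip(pts, labels):
        groups.setdefault(y, []).append(p)
    (a, pa), (b, pb) = sorted(groups.items())
    ca = [sum(c) / len(pa) for c in zip(*pa)]
    cb = [sum(c) / len(pb) for c in zip(*pb)]
    w = [x - y for x, y in zip(cb, ca)]
    mid = sum(wi * (x + y) / 2 for wi, x, y in zip(w, ca, cb))
    return a, b, w, mid

def lda_predict(split, p):
    a, b, w, mid = split
    return b if sum(wi * xi for wi, xi in zip(w, p)) >= mid else a

def forest(pts, labels, trees=9, seed=0):
    """Train each single-split 'tree' on a bootstrap sample (resampled until
    both classes are present), as in bagging."""
    rng = random.Random(seed)
    out = []
    for _ in range(trees):
        while True:
            idx = [rng.randrange(len(pts)) for _ in pts]
            if len({labels[i] for i in idx}) == 2:
                break
        out.append(lda_split([pts[i] for i in idx], [labels[i] for i in idx]))
    return out

def forest_predict(f, p):
    votes = [lda_predict(s, p) for s in f]
    return max(set(votes), key=votes.count)
```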

  13. An experimental investigation on the three-point bending behavior of composite laminate

    NASA Astrophysics Data System (ADS)

    A, Azzam; W, Li

    2014-08-01

    The response of composite laminate structures to three-point bending load was investigated by testing two stacking sequences of composite laminates on an electronic universal testing machine (Type: WDW-20). An optical microscope was used to characterize bending damage, delamination, and damage shapes in the composite laminate structures. The results showed that the [0/90/-45/45]2s layup exhibits a brittle behavior, while the other laminates exhibit a progressive failure mode consisting of fiber failure, debonding (splitting), and delamination. The [45/45/90/0]2s laminate has a highly nonlinear load-displacement curve due to compressive yielding.

  14. Numerical approach in defining milling force taking into account curved cutting-edge of applied mills

    NASA Astrophysics Data System (ADS)

    Bondarenko, I. R.

    2018-03-01

    The paper addresses the application of a numerical approach to determining the cutting forces when machining carbon steel with a curved-cutting-edge mill. To solve this task, the curved surface of the cutting edge was subjected to step approximation, and the chip section was split into discrete elements. As a result, the cutting force was obtained as the sum of the elementary forces arising in the cut of each element. Comparison of calculations by the proposed method with those based on the Kienzle dependence showed sufficient accuracy, which makes it possible to apply the method in practice.
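
    The discrete-summation idea can be sketched with a Kienzle-type law, F = Σ k_c1.1 · b_i · h_i^(1-m), summed over the chip elements. The specific cutting force k_c1.1 and exponent m below are illustrative values of the right order for a carbon steel, not the paper's coefficients.

```python
def cutting_force(widths, thicknesses, kc11=1500.0, m=0.25):
    """Total cutting force (N) as the sum of elementary Kienzle forces over
    discrete chip elements: F = sum(kc11 * b_i * h_i**(1 - m)),
    with widths b_i and undeformed chip thicknesses h_i in mm and
    kc11 the specific cutting force in N/mm^2 (illustrative values)."""
    return sum(kc11 * b * h ** (1.0 - m) for b, h in zip(widths, thicknesses))
```

Note the size effect built into the power law: halving the chip thickness reduces the force by less than half, which is why summing over thin elements along a curved edge differs from a single bulk estimate.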

  15. A note on windowing for the waveform relaxation

    NASA Technical Reports Server (NTRS)

    Zhang, Hong

    1994-01-01

    The technique of windowing has often been used in the implementation of waveform relaxation for solving ODEs or time-dependent PDEs. Its efficiency depends upon problem stiffness and operator splitting. Using model problems, estimates for window length and convergence rate are derived. The effectiveness of windowing is then investigated for the non-stiff and stiff cases respectively. It concludes that for the former, windowing is highly recommended when a large discrepancy exists between the convergence rate on a time interval and the ones on its subintervals. For the latter, windowing does not provide any computational advantage if machine features are disregarded. The discussion is supported by experimental results.
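
    A minimal sketch of (Jacobi) waveform relaxation on a model 2x2 system: each sweep integrates both equations over the whole window with the other component frozen at its previous iterate. Windowing amounts to applying the same routine on subintervals, restarting from the end state of the previous window. The model system, coupling strengths, and step size below are invented for illustration.

```python
def waveform_relaxation(a, b, y0, z0, t_end, dt, sweeps):
    """Jacobi waveform relaxation for the model system
        y' = -y + a*z,   z' = -z + b*y
    on the window [0, t_end], with explicit Euler in time. Each sweep re-solves
    both equations using the other component's previous iterate as forcing."""
    n = int(round(t_end / dt))
    ys = [y0] * (n + 1)   # initial waveform guess: constant
    zs = [z0] * (n + 1)
    for _ in range(sweeps):
        new_y, new_z = [y0], [z0]
        for k in range(n):
            new_y.append(new_y[k] + dt * (-new_y[k] + a * zs[k]))
            new_z.append(new_z[k] + dt * (-new_z[k] + b * ys[k]))
        ys, zs = new_y, new_z
    return ys, zs
```

At convergence the iteration reproduces explicit Euler applied to the fully coupled system; the error decays rapidly with the number of sweeps when the coupling-times-window-length product is small, which is exactly what makes short windows attractive.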

  16. The Artilect Debate

    NASA Astrophysics Data System (ADS)

    de Garis, Hugo; Halioris, Sam

    Twenty-first-century technologies will allow the creation of massively intelligent machines, many trillions of times as smart, fast, and durable as humans. Issues concerning industrial, consumer, and military applications of mobile autonomous robots, cyborgs, and computer-based AI systems could divisively split humanity into ideological camps regarding whether "artilects" (artificial intellects) should be built or not. The artilect debate, unlike any before it, could dominate the 21st-century political landscape, and has the potential to cause conflict on a global scale. Research is needed to inform policy and individual decisions; and healthy debate should be initiated now to prepare institutions and individuals alike for the impact of AI.

  17. Integrative relational machine-learning for understanding drug side-effect profiles

    PubMed Central

    2013-01-01

    Background Drug side effects represent a common reason for stopping drug development during clinical trials. Improving our ability to understand drug side effects is necessary to reduce attrition rates during drug development as well as the risk of discovering novel side effects in available drugs. Today, most investigations deal with isolated side effects and overlook possible redundancy and their frequent co-occurrence. Results In this work, drug annotations are collected from the SIDER and DrugBank databases. Terms describing individual side effects reported in SIDER are clustered with a semantic similarity measure into term clusters (TCs). Maximal frequent itemsets are extracted from the resulting drug × TC binary table, leading to the identification of what we call side-effect profiles (SEPs). A SEP is defined as the longest combination of TCs shared by a significant number of drugs. Frequent SEPs are explored on the basis of integrated drug and target descriptors using two machine learning methods: decision trees and inductive logic programming. Although both methods yield explicit models, the inductive-logic-programming method performs relational learning and is able to exploit not only drug properties but also background knowledge. Learning efficiency is evaluated by cross-validation and direct testing with new molecules. Comparison of the two machine-learning methods shows that the inductive-logic-programming method displays a greater sensitivity than decision trees and successfully exploits background knowledge such as functional annotations and pathways of drug targets, thereby producing rich and expressive rules. All models and theories are available on a dedicated web site. Conclusions Side-effect profiles covering a significant number of drugs have been extracted from a drug × side-effect association table. Integration of background knowledge concerning both chemical and biological spaces has been combined with a relational learning method for discovering rules which explicitly characterize drug-SEP associations. These rules are successfully used for predicting SEPs associated with new drugs. PMID:23802887
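
    The profile-extraction step (maximal frequent itemsets over a drug × TC binary table) can be sketched by brute-force enumeration for small tables. The side-effect terms and the support threshold below are invented examples; real miners (e.g. Apriori-style algorithms) avoid the exponential enumeration used here.

```python
from itertools import combinations

def maximal_frequent_itemsets(rows, min_support):
    """Enumerate itemsets of term clusters (columns) shared by at least
    `min_support` rows (drugs), then keep only the maximal ones; each
    surviving itemset plays the role of a side-effect profile (SEP)."""
    items = sorted({i for row in rows for i in row})
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            support = sum(1 for row in rows if set(cand) <= row)
            if support >= min_support:
                frequent.append(frozenset(cand))
    # Maximal = not a proper subset of any other frequent itemset.
    return [s for s in frequent if not any(s < t for t in frequent)]
```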

  18. Integrative relational machine-learning for understanding drug side-effect profiles.

    PubMed

    Bresso, Emmanuel; Grisoni, Renaud; Marchetti, Gino; Karaboga, Arnaud Sinan; Souchet, Michel; Devignes, Marie-Dominique; Smaïl-Tabbone, Malika

    2013-06-26

    Drug side effects represent a common reason for stopping drug development during clinical trials. Improving our ability to understand drug side effects is necessary to reduce attrition rates during drug development as well as the risk of discovering novel side effects in available drugs. Today, most investigations deal with isolated side effects and overlook possible redundancy and their frequent co-occurrence. In this work, drug annotations are collected from the SIDER and DrugBank databases. Terms describing individual side effects reported in SIDER are clustered with a semantic similarity measure into term clusters (TCs). Maximal frequent itemsets are extracted from the resulting drug × TC binary table, leading to the identification of what we call side-effect profiles (SEPs). A SEP is defined as the longest combination of TCs shared by a significant number of drugs. Frequent SEPs are explored on the basis of integrated drug and target descriptors using two machine learning methods: decision trees and inductive logic programming. Although both methods yield explicit models, the inductive-logic-programming method performs relational learning and is able to exploit not only drug properties but also background knowledge. Learning efficiency is evaluated by cross-validation and direct testing with new molecules. Comparison of the two machine-learning methods shows that the inductive-logic-programming method displays a greater sensitivity than decision trees and successfully exploits background knowledge such as functional annotations and pathways of drug targets, thereby producing rich and expressive rules. All models and theories are available on a dedicated web site. Side-effect profiles covering a significant number of drugs have been extracted from a drug × side-effect association table. Integration of background knowledge concerning both chemical and biological spaces has been combined with a relational learning method for discovering rules which explicitly characterize drug-SEP associations. These rules are successfully used for predicting SEPs associated with new drugs.

  19. Design and implementation of online automatic judging system

    NASA Astrophysics Data System (ADS)

    Liang, Haohui; Chen, Chaojie; Zhong, Xiuyu; Chen, Yuefeng

    2017-06-01

    To address the low efficiency and poor reliability of manual judging in programming training and competitions, we design an Online Automatic Judging (OAJ) system. The OAJ system, comprising a sandboxed judging side and a Web side, automatically compiles and runs the submitted code and generates evaluation scores and corresponding reports. To prevent malicious code from damaging the system, the OAJ system runs submissions in a sandbox, ensuring system safety. The OAJ system uses thread pools to achieve parallel testing, and adopts database optimization mechanisms, such as horizontal table splitting, to improve system performance and resource utilization. The test results show that the system has high performance, high reliability, high stability and excellent extensibility.
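
    The parallel-testing idea can be sketched with Python's standard thread pool. The `judge` function below is a placeholder that checks in-process callables against test cases; a real judging system would instead compile and execute each submission inside the sandbox and capture its output.

```python
from concurrent.futures import ThreadPoolExecutor

def judge(submission):
    """Placeholder judge: run a submission's solver against its test cases
    and return (id, score out of 100)."""
    passed = sum(1 for inp, want in submission["cases"]
                 if submission["solve"](inp) == want)
    return submission["id"], 100 * passed // len(submission["cases"])

def judge_all(submissions, workers=4):
    """Evaluate many submissions concurrently with a thread pool, mirroring
    the OAJ system's parallel testing."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(judge, submissions))
```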

  20. Using a Smart-pulley Atwood machine to study rocket motion

    NASA Astrophysics Data System (ADS)

    Greenwood, M. Stautberg; Bernett, R.; Benavides, M.; Granger, S.; Plass, R.; Walters, S.

    1989-10-01

    The Atwood machine consisted of a funnel partly filled with water on one side and a fixed mass of 400 g on the other side. The "rocket" begins to ascend when the acceleration is momentarily zero. This occurs when the total mass of the rocket is slightly larger than 400 g due to the rocket's thrust. As the wheel of Pasco's Smart pulley rotates, signals are sent to the Apple computer and the software generates tables and graphs of position, speed, and acceleration as a function of time. Also presented is a numerical differentiation scheme that greatly reduces the scatter in the experimental data for the acceleration. The data are compared with theory, assuming that dm/dt is constant. The value of dm/dt necessary to fit the data is compared with that found by measuring the flow rate from the funnel directly. Excellent agreement is obtained for the two values of dm/dt.
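
    One common scatter-reducing differentiation scheme (a sliding-window least-squares slope, which may differ from the authors' exact method) can be sketched as follows; the window half-width and sample spacing are illustrative choices.

```python
def smoothed_derivative(ts, xs, half_width=2):
    """Estimate dx/dt at each interior sample by fitting a least-squares
    straight line over a sliding window of 2*half_width + 1 points,
    which suppresses noise compared with two-point differences."""
    out = []
    for i in range(half_width, len(ts) - half_width):
        t = ts[i - half_width:i + half_width + 1]
        x = xs[i - half_width:i + half_width + 1]
        n = len(t)
        tm = sum(t) / n
        xm = sum(x) / n
        slope = (sum((ti - tm) * (xi - xm) for ti, xi in zip(t, x))
                 / sum((ti - tm) ** 2 for ti in t))
        out.append(slope)
    return out
```

For evenly spaced samples the fit is exact on linear and quadratic signals (the symmetric window cancels the quadratic term), so uniformly accelerated motion is differentiated without bias.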

  1. MerMade: An Oligodeoxyribonucleotide Synthesizer for High Throughput Oligonucleotide Production in Dual 96-Well Plates

    PubMed Central

    Rayner, Simon; Brignac, Stafford; Bumeister, Ron; Belosludtsev, Yuri; Ward, Travis; Grant, O’dell; O’Brien, Kevin; Evans, Glen A.; Garner, Harold R.

    1998-01-01

    We have designed and constructed a machine that synthesizes two standard 96-well plates of oligonucleotides in a single run using standard phosphoramidite chemistry. The machine is capable of making a combination of standard, degenerate, or modified oligos in a single plate. The run time is typically 17 hr for two plates of 20-mers and a reaction scale of 40 nm. The reaction vessel is a standard polypropylene 96-well plate with a hole drilled in the bottom of each well. The two plates are placed in separate vacuum chucks and mounted on an xy table. Each well in turn is positioned under the appropriate reagent injection line and the reagent is injected by switching a dedicated valve. All aspects of machine operation are controlled by a Macintosh computer, which also guides the user through the startup and shutdown procedures, provides a continuous update on the status of the run, and facilitates a number of service procedures that need to be carried out periodically. Over 25,000 oligos have been synthesized for use in dye terminator sequencing reactions, polymerase chain reactions (PCRs), hybridization, and RT–PCR. Oligos up to 100 bases in length have been made with a coupling efficiency in excess of 99%. These machines, working in conjunction with our oligo prediction code are particularly well suited to application in automated high throughput genomic sequencing. PMID:9685322

  2. Law machines: scale models, forensic materiality and the making of modern patent law.

    PubMed

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  3. MerMade: an oligodeoxyribonucleotide synthesizer for high throughput oligonucleotide production in dual 96-well plates.

    PubMed

    Rayner, S; Brignac, S; Bumeister, R; Belosludtsev, Y; Ward, T; Grant, O; O'Brien, K; Evans, G A; Garner, H R

    1998-07-01

    We have designed and constructed a machine that synthesizes two standard 96-well plates of oligonucleotides in a single run using standard phosphoramidite chemistry. The machine is capable of making a combination of standard, degenerate, or modified oligos in a single plate. The run time is typically 17 hr for two plates of 20-mers and a reaction scale of 40 nM. The reaction vessel is a standard polypropylene 96-well plate with a hole drilled in the bottom of each well. The two plates are placed in separate vacuum chucks and mounted on an xy table. Each well in turn is positioned under the appropriate reagent injection line and the reagent is injected by switching a dedicated valve. All aspects of machine operation are controlled by a Macintosh computer, which also guides the user through the startup and shutdown procedures, provides a continuous update on the status of the run, and facilitates a number of service procedures that need to be carried out periodically. Over 25,000 oligos have been synthesized for use in dye terminator sequencing reactions, polymerase chain reactions (PCRs), hybridization, and RT-PCR. Oligos up to 100 bases in length have been made with a coupling efficiency in excess of 99%. These machines, working in conjunction with our oligo prediction code are particularly well suited to application in automated high throughput genomic sequencing.

  4. Omega-X micromachining system

    DOEpatents

    Miller, Donald M.

    1978-01-01

    A micromachining tool system with X- and omega-axes is used to machine spherical, aspherical, and irregular surfaces with a maximum contour error of 100 nanometers (nm) and surface waviness of no more than 0.8 nm RMS. The omega axis, named for the angular measurement of the rotation of an eccentric mechanism supporting one end of a tool bar, enables the pulse increments of the tool toward the workpiece to be as little as 0 to 4.4 nm. A dedicated computer coordinates motion in the two axes to produce the workpiece contour. Inertia is reduced by reducing the mass pulsed toward the workpiece to about one-fifth of its former value. The tool system includes instruments to calibrate the micromachining tool system. Backlash is reduced and flexing decreased by using a rotary table and servomotor, instead of a ball screw mechanism, to pulse the tool in the omega-axis. A thermally-stabilized spindle rotates the workpiece and is driven, through a torque-smoothing pulley and vibrationless rotary coupling, by a motor that is not mounted on the micromachining tool base. Abbe offset errors are almost eliminated by tool setting and calibration at spindle center height. Tool contour and workpiece contour are gaged on the machine; this enables the source of machining errors to be determined more readily, because the workpiece is gaged before its shape can be changed by removal from the machine.

  5. Harmonic reduction of Direct Torque Control of six-phase induction motor.

    PubMed

    Taheri, A

    2016-07-01

    In this paper, a new switching method in Direct Torque Control (DTC) of a six-phase induction machine for reduction of current harmonics is introduced. Selecting a single suitable vector in each sampling period is the ordinary method in the ST-DTC drive of a six-phase induction machine. The six-phase induction machine has 64 voltage vectors, which are divided into four groups. In the proposed DTC method, the suitable voltage vectors are selected from two vector groups. By a suitable selection of two vectors in each sampling period, the harmonic amplitude is decreased further in comparison to that of the ST-DTC drive. The harmonic losses are greatly reduced and the electromechanical energy loss is decreased, while the switching loss shows only a small increase. Spectrum analysis of the phase current in the standard and the new switching-table DTC of the six-phase induction machine, with determination of the amplitude of each harmonic, is presented in this paper. The proposed method requires a shorter sampling time than the ordinary method. Harmonic analyses of the current at low and high speed show the performance of the presented method. The simplicity of the proposed method and its implementation without any extra hardware are further advantages. The simulation and experimental results show the superiority of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
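    For orientation, the lookup at the heart of a classical switching-table DTC can be sketched as follows. This is the generic three-phase Takahashi-style table, not the paper's six-phase, two-vector scheme; the table contents and function name are illustrative.

```python
# Generic DTC switching table (illustrative): hysteresis comparators on
# flux and torque error pick a voltage vector index for the current
# stator-flux sector. Vectors are numbered 1..6; sector is 0..5.
TABLE = {
    # (flux_increase, torque_increase) -> vector per sector
    (1, 1): [2, 3, 4, 5, 6, 1],  # increase flux and torque: V(k+1)
    (1, 0): [6, 1, 2, 3, 4, 5],  # increase flux, decrease torque: V(k-1)
    (0, 1): [3, 4, 5, 6, 1, 2],  # decrease flux, increase torque: V(k+2)
    (0, 0): [5, 6, 1, 2, 3, 4],  # decrease flux and torque: V(k-2)
}

def select_vector(flux_err, torque_err, sector):
    """Hysteresis: demand more flux if flux_err > 0, more torque if torque_err > 0."""
    return TABLE[(int(flux_err > 0), int(torque_err > 0))][sector]
```

    In sector 1 (index 0), for example, demanding more flux and more torque selects vector V2, while reducing both selects V5.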

  6. Modern laser technologies used for cutting textile materials

    NASA Astrophysics Data System (ADS)

    Isarie, Claudiu; Dragan, Anca; Isarie, Laura; Nastase, Dan

    2006-02-01

    With modern laser technologies we can cut multiple layers at once, yielding high production levels and short setup times between cutting runs. One example is the cutting of the material named Nylon 66, used to manufacture automobile airbags. With a laser, up to seven layers of Nylon 66 can be cut in one pass, which means high production rates on a single machine. An airbag is a precisely crafted piece of critical safety equipment that must be built to very high levels of precision in a mass-production environment. Of course, the synthetic material used for airbags can also be cut by a conventional fixed-blade system, but for high production rates and long-term low maintenance, laser cutting is most suitable. Most systems are equipped with two material-handling systems, so the machine can cut on one half of the table while the finished product is being removed from the other half and new stock material is laid out. The laser system is reliable and adaptable to any flatbed-cutting task. Computer-controlled industrial cutting and plotting machines are the latest offerings from a well-established and experienced industrial engineering company dedicated to reducing cutting costs and boosting productivity in today's competitive industrial machine tool market. In this way, just one machine can carry out a multitude of production tasks. The authors have studied the cutting parameters for different textile materials, to reach the maximum output of the process.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussain, A

    Purpose: Novel linac machines, the TrueBeam (TB) and the Elekta Versa, have updated head designs and software control systems, and include flattening-filter-free (FFF) photon and electron beams. Later, FFF beams were also introduced on C-Series machines. In this work, FFF beams of the same energy, 6 MV, but from different machine versions were studied with reference to beam data parameters. Methods: The 6MV-FFF percent depth doses, profile symmetry and flatness, dose rate tables, and multi-leaf collimator (MLC) transmission factors were measured during the commissioning process of both C-Series and TrueBeam machines. The scanning and dosimetric data for the 6MV-FFF beam from the TrueBeam and C-Series linacs were compared. A correlation of the 6MV-FFF beam from the Elekta Versa with that of the Varian linacs was also found. Results: The scanning files were plotted for both qualitative and quantitative analysis. The dosimetric leaf gap (DLG) for the C-Series 6MV-FFF beam is 1.1 mm; the published value for the TrueBeam dosimetric leaf gap is 1.16 mm. The 6MV MLC transmission factor varies between 1.3% and 1.4% in two separate measurements, and measured DLG values vary between 1.32 mm and 1.33 mm on the C-Series machine. The MLC transmission factor from the C-Series machine varies between 1.5% and 1.6%. Some of the measured data values from the C-Series FFF beam are compared with TrueBeam representative data. 6MV-FFF beam parameter values such as dmax, output factors, beam symmetry and flatness, and additional parameters for C-Series and TrueBeam linacs will be presented and compared in graphical and tabular form if selected. Conclusion: The 6MV flattening filter (FF) beam data from the C-Series and TrueBeam and the 6MV-FFF beam data from the TrueBeam have already been presented. This analysis comparing the 6MV-FFF beam from the C-Series and TrueBeam provides an opportunity to better elaborate the FFF mode on novel machines. It was found that the C-Series and TrueBeam 6MV-FFF dosimetric and beam data were quite similar.

  8. The DARPA compact superconducting x-ray lithography source features. [Defense Advanced Research Projects Agency (DARPA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heese, R.; Kalsi, S.; Leung, E.

    1991-01-01

    Under DARPA sponsorship, a compact Superconducting X-ray Lithography Source (SXLS) is being designed and built by the Brookhaven National Laboratory (BNL) with industry participation from Grumman Corporation and General Dynamics. This source is optimized for lithography work for sub-micron high density computer chips, and is about the size of a billiard table (1.5 m {times} 4.0 m). The machine has a racetrack configuration with two 180{degree} bending magnets being designed and built by General Dynamics under a subcontract with Grumman Corporation. The machine will have 18 photon ports which would deliver light peaked at a wavelength of 10 Angstroms. Grumman is commercializing the SXLS device and plans to book orders for delivery of industrialized SXLS (ISXLS) versions in 1995. This paper will describe the major features of this device. The commercial machine will be equipped with a fully automated user-friendly control system, major features of which are already working on a compact warm dipole ring at BNL. This ring has normal dipole magnets with dimensions identical to the SXLS device, and has been successfully commissioned. 4 figs., 1 tab.

  9. On the suitability of the connection machine for direct particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonard

    1990-01-01

    The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data parallel form. Some of the SPS algorithms can be translated directly to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent, requiring the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-up. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure is provided of the performance of the Connection Machine for direct particle simulation. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel diction.

  10. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    PubMed

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
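    As a concrete example of the "standard survival analysis" such a tool automates, a minimal Kaplan-Meier estimator can be sketched in a few lines; the function name and data below are made up for illustration and are not part of OSA.

```python
# Minimal Kaplan-Meier estimator: at each distinct event time t, the
# survival probability is multiplied by (1 - deaths_at_t / n_at_risk).
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival_probability) steps."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        ties = sum(1 for tt, e in data[i:] if tt == t)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties  # both events and censored leave the risk set
        i += ties
    return curve

# Toy cohort: subject 3 (time 2) is censored, the rest have events.
print(kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 1]))
```

    The censored subject reduces the risk set after time 2 without lowering the curve, which is exactly why the step from 0.6 to 0.3 at time 3 is larger than the earlier steps.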

  11. Development and Implementation of a Simplified Tool Measuring System

    NASA Astrophysics Data System (ADS)

    Chen, Jenn-Yih; Lee, Bean-Yin; Lee, Kuang-Chyi; Chen, Zhao-Kai

    2010-01-01

    This paper presents a simplified system for measuring geometric profiles of end mills. First, a CCD camera is used to capture images of cutting tools. An image acquisition card with an encoding function then converts the image source for a USB port of a PC, so the image can be shown on a monitor. In addition, two linear scales are mounted on the X-Y table for positioning and measuring purposes. The signals of the linear scales are transmitted to a 4-axis quadrature encoder with a 4-channel counter card for position monitoring. C++ Builder was utilized to design the user-friendly human-machine interface of the tool measuring system; a cross line on the image in the interface provides a coordinate for the position measurement. Finally, a well-known tool measuring and inspection machine was employed as the measuring standard, and this study compares its measuring results with those of the proposed system. Experimental results show that the percentage measuring error is acceptable for several geometric parameters of square or ball-nose end mills, demonstrating the effectiveness of the presented approach.

  12. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science

    PubMed Central

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883

  13. A new pre-classification method based on associative matching method

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Minagawa, Akihiro; Hotta, Yoshinobu; Omachi, Shinichiro; Kato, Nei

    2010-01-01

    Reducing the time complexity of character matching is critical to the development of efficient Japanese Optical Character Recognition (OCR) systems. To shorten processing time, recognition is usually split into separate pre-classification and recognition stages. For high overall recognition performance, the pre-classification stage must both have very high classification accuracy and return only a small number of putative character categories for further processing. Furthermore, for any practical system, the speed of the pre-classification stage is also critical. The associative matching (AM) method has often been used for fast pre-classification, because its use of a hash table and reliance solely on logical bit operations to select categories makes it highly efficient. However, a certain level of redundancy exists in the hash table because it is constructed using only the minimum and maximum values of the data on each axis and therefore does not take account of the distribution of the data. We propose a modified associative matching method that satisfies the performance criteria described above but in a fraction of the time, by modifying the hash table to reflect the underlying distribution of training characters. Furthermore, we show that our approach outperforms pre-classification by clustering, ANN and conventional AM in terms of classification accuracy, discriminative power and speed. Compared to conventional associative matching, the proposed approach results in a 47% reduction in total processing time across an evaluation test set comprising 116,528 Japanese character images.
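    The hash-table lookup that makes associative matching fast can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes feature values normalized to [0, 1], registers each category in the bucket range spanned by its training minima and maxima on every axis (the conventional-AM construction the paper criticizes), and intersects the per-axis candidate sets at query time.

```python
# Conventional associative-matching-style pre-classification (sketch).
NUM_BUCKETS = 16  # quantization per feature axis (illustrative)

def build_tables(training):
    """training: {category: [feature_vector, ...]}, features in [0, 1]."""
    dims = len(next(iter(training.values()))[0])
    tables = [[set() for _ in range(NUM_BUCKETS)] for _ in range(dims)]
    for cat, vectors in training.items():
        for axis in range(dims):
            values = [v[axis] for v in vectors]
            lo = int(min(values) * (NUM_BUCKETS - 1))
            hi = int(max(values) * (NUM_BUCKETS - 1))
            for b in range(lo, hi + 1):  # min..max range, ignoring distribution
                tables[axis][b].add(cat)
    return tables

def candidates(tables, feature):
    """Intersect the candidate category sets returned by each axis."""
    result = None
    for axis, value in enumerate(feature):
        cats = tables[axis][int(value * (NUM_BUCKETS - 1))]
        result = cats if result is None else result & cats
    return result or set()
```

    In a real system the sets would be bit vectors so the intersection is a logical AND; the paper's refinement replaces the min-max bucket range with one reflecting the training distribution.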

  14. [Cleanliness Norms 1964-1975].

    PubMed

    Noelle-Neumann, E

    1976-01-01

    In 1964, the Institut für Demoskopie Allensbach made a first survey taking stock of norms concerning cleanliness in the Federal Republic of Germany. At that time, 78% of respondents thought that the vogue among young people of cultivating an unkempt look was past or on the wane (Table 1). Today we know that this fashion was an indicator of more serious desires for change in many different areas, such as politics, sexual morality and education, and that its high point was still to come. In the fall of 1975 a second survey, modelled on the one of 1964, was conducted. Again, it concentrated on norms, not on behavior. As expected, norms have changed over this period, but not in a one-directional or simple manner. In general, people are much more permissive about children's looks: neat, clean school dress, properly combed hair, clean shoes, and keeping one's things in order have all become less important in 1975 (Table 2). To carry a clean handkerchief is becoming old-fashioned (Table 3). On the other hand, principles of bringing up children have not loosened concerning personal hygiene: brushing one's teeth, washing hands, feet and neck, keeping clean fingernails (Table 4). On one item related to protection of the environment, namely throwing around waste paper, standards have even become more strict (Table 5). With regard to school-leavers, norms of personal hygiene have generally become more strict (Table 6). As living standards have gone up and the number of full bathrooms has risen from 42% to 75% of households, norms of personal hygiene have also tightened: one warm bath a week seemed enough to 56% of adults in 1964, but to only 32% in 1975 (Table 7). Standards for changing underwear have also changed a lot: in 1964 only 12% of respondents said "every day", while in 1975 48% said so (Table 8). Even more stringent norms are applied to young women (Tables 9/10). For comparison: in 1964 there were automatic washing machines in 16% of households, in 1975 in 79%. Answers to questions about which qualities men value especially in women and which qualities women value especially in men show a decrease in the valuation of "cleanliness". These results can be interpreted in different ways (Tables 11/12). It seems, however, that "cleanliness" is not going out as a cultural value. We have found that young people today do not consider clean dress important, but that they are probably better washed under their purposely neglected clothing than young people were ten years ago. As a nation, Germans still consider cleanliness to be a particularly German virtue, in 1975 even more so than in 1964 (Table 13). An association test, first made in March 1976, confirms this: when they hear "Germany", 68% of Germans think of "cleanliness" (Table 14).

  15. SU-F-T-365: Clinical Commissioning of the Monaco Treatment Planning System for the Novalis Tx to Deliver VMAT, SRS and SBRT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adnani, N

    Purpose: To commission the Monaco Treatment Planning System for the Novalis Tx machine. Methods: The commissioning of Monte Carlo (MC), Collapsed Cone (CC) and electron Monte Carlo (eMC) beam models was performed through a series of measurements and calculations in medium and in water. In-medium measurements relied on the Octavius 4D QA system, with the 1000 SRS detector array for field sizes smaller than 4 cm × 4 cm and the 1500 detector array for larger field sizes. Heterogeneity corrections were validated using a custom-built phantom. Prior to clinical implementation, end-to-end testing of prostate and H&N VMAT plans was performed. Results: Using a 0.5% uncertainty and 2 mm grid sizes, Tables I and II summarize the MC validation at 6 MV and 18 MV in both medium and water. Tables III and IV show similar comparisons for CC. Using the custom heterogeneity phantom setup of Figure 1 and the IGRT guidance summarized in Figure 2, Table V lists the percent pass rate for a 2%, 2 mm gamma criterion at 6 and 18 MV for both MC and CC. The relationship between the MC calculation settings of uncertainty and grid size and the gamma passing rate for a prostate and an H&N case is shown in Table VI. Table VII lists the results of the eMC calculations compared to measured data for clinically available applicators, and Table VIII for small field cutouts. Conclusion: MU calculations using MC are highly sensitive to uncertainty and grid size settings; the difference can be of the order of several percent. MC is superior to CC for small fields and when using heterogeneity corrections, regardless of field size, making it more suitable for SRS, SBRT and VMAT deliveries. eMC showed good agreement with measurements down to a 2 cm × 2 cm field size.
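    For reference, the 2%, 2 mm gamma criterion quoted above can be sketched in one dimension. This is a simplified global-gamma sketch with hypothetical dose profiles, not the QA system's actual implementation (which works on 2D/3D arrays with interpolation):

```python
# 1-D global gamma index: for each reference point, find the minimum
# combined distance/dose-difference metric over all evaluated points;
# the point passes if that minimum gamma is <= 1.
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
             dose_tol=0.02, dist_tol=2.0):
    """Positions in mm; dose_tol is a fraction of the global max dose.
    Returns the percentage of reference points with gamma <= 1."""
    d_max = max(ref_dose)
    passed = 0
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(math.sqrt(((ep - rp) / dist_tol) ** 2 +
                          ((ed - rd) / (dose_tol * d_max)) ** 2)
                for ep, ed in zip(eval_pos, eval_dose))
        passed += g <= 1.0
    return 100.0 * passed / len(ref_pos)
```

    Identical profiles pass at 100%; a point whose dose disagrees badly can still pass if a matching dose lies within the 2 mm search distance, which is the whole point of the metric.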

  16. User's guide to the LLL BASIC interpreter. [For 8080-based MCS-80 microcomputer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allison, T.; Eckard, R.; Barber, J.

    1977-06-09

    Scientists are finding increased applications for microcomputers as process controllers in their experiments. However, while microcomputers are small and inexpensive, they are difficult to program in machine or assembly language. A high-level language is needed to enable scientists to develop their own microcomputer programs for their experiments on location. Recognizing this need, LLL contracted to have such a language developed. This report describes the result--the LLL BASIC interpreter, which operates with LLL's 8080-based MCS-80 microcomputer system. 4 tables.

  17. Mergeable nervous systems for robots.

    PubMed

    Mathews, Nithin; Christensen, Anders Lyhne; O'Grady, Rehan; Mondada, Francesco; Dorigo, Marco

    2017-09-12

    Robots have the potential to display a higher degree of lifetime morphological adaptation than natural organisms. By adopting a modular approach, robots with different capabilities, shapes, and sizes could, in theory, construct and reconfigure themselves as required. However, current modular robots have only been able to display a limited range of hardwired behaviors because they rely solely on distributed control. Here, we present robots whose bodies and control systems can merge to form entirely new robots that retain full sensorimotor control. Our control paradigm enables robots to exhibit properties that go beyond those of any existing machine or of any biological organism: the robots we present can merge to form larger bodies with a single centralized controller, split into separate bodies with independent controllers, and self-heal by removing or replacing malfunctioning body parts. This work takes us closer to robots that can autonomously change their size, form and function. Robots that can self-assemble into different morphologies are desired to perform tasks that require different physical capabilities. Mathews et al. design robots whose bodies and control systems can merge and split to form new robots that retain full sensorimotor control and act as a single entity.

  18. Unbreakable distributed storage with quantum key distribution network and password-authenticated secret sharing

    PubMed Central

    Fujiwara, M.; Waseda, A.; Nojima, R.; Moriai, S.; Ogata, W.; Sasaki, M.

    2016-01-01

    Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir’s (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, assumed is that data transmission and authentication must be perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km). PMID:27363566
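    The Shamir (k, n)-threshold scheme described above can be sketched as follows. This toy version over a Mersenne-prime field is for illustration only, with arbitrary parameter choices; it omits the authentication and secure-transmission layers that are the paper's actual contribution.

```python
# Toy Shamir (k, n) secret sharing: the secret is the constant term of a
# random degree-(k-1) polynomial over GF(PRIME); any k shares recover it
# by Lagrange interpolation at x = 0, fewer than k reveal nothing.
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for a demo

def make_shares(secret, k, n):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):  # Horner evaluation mod PRIME
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

    With `make_shares(secret, k=3, n=5)`, any three of the five shares reconstruct the secret; the information-theoretic guarantee is that any two shares are statistically independent of it.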

  19. Dynamic compressive properties obtained from a split Hopkinson pressure bar test of Boryeong shale

    NASA Astrophysics Data System (ADS)

    Kang, Minju; Cho, Jung-Woo; Kim, Yang Gon; Park, Jaeyeong; Jeong, Myeong-Sik; Lee, Sunghak

    2016-09-01

    Dynamic compressive properties of a Boryeong shale were evaluated using a split Hopkinson pressure bar and compared with those of a Hwangdeung granite, a typical hard rock. The results indicated that dynamic compressive loading reduced the resistance to fracture. The dynamic compressive strength was lower in the shale than in the granite, and rose with increasing strain rate through a microcracking effect as well as a strain-rate strengthening effect. Since the number of microcracked fragments increased with increasing strain rate in the shale, which has laminated weakness planes, the shale showed better fragmentation performance than the granite at high strain rates. The effect of the transversely isotropic plane on compressive strength decreased with increasing strain rate, which is desirable for increasing the fragmentation performance. Thus, the shale can be more reliably applied in industrial areas requiring good fragmentation performance as the striking speed of drilling or hydraulic fracturing machines increases. The present dynamic compressive test effectively evaluated the fragmentation performance as well as the compressive strength and strain energy density by controlling the air pressure, and provided important insight into which rock is more readily fragmented under dynamic processing conditions such as high-speed drilling and blasting.
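    For context, the standard one-wave split Hopkinson pressure bar data reduction behind such measurements can be sketched as follows. These are the textbook relations with made-up input signals, not values or code from this study:

```python
# One-wave SHPB reduction: specimen stress from the transmitted bar
# strain, strain rate from the reflected strain, strain by integration.
#   sigma(t)   = E_bar * (A_bar / A_spec) * eps_trans(t)
#   epsdot(t)  = -2 * c0 / L_spec * eps_refl(t)
def shpb_reduce(eps_refl, eps_trans, dt, E_bar, c0, A_bar, A_spec, L_spec):
    """eps_*: bar strain signals; dt: sample spacing (s); E_bar: bar
    modulus (Pa); c0: bar wave speed (m/s); A_*: areas (m^2); L_spec:
    specimen length (m). Returns (stress, strain_rate, strain) lists."""
    stress = [E_bar * A_bar / A_spec * et for et in eps_trans]       # Pa
    strain_rate = [-2.0 * c0 / L_spec * er for er in eps_refl]       # 1/s
    strain, acc = [], 0.0
    for rate in strain_rate:                                         # rectangle-rule integration
        acc += rate * dt
        strain.append(acc)
    return stress, strain_rate, strain
```

    Raising the striker (air) pressure increases the incident pulse amplitude and hence the reflected strain, which is how the strain rate is controlled in such tests.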

  20. Pulsed-coil magnet systems for applying 10-30 Tesla Fields to cm-scale targets on Sandia's Z facility

    DOE PAGES

    Rovang, Dean C.; Lamppa, Derek C.; Cuneo, Michael Edward; ...

    2014-12-04

    We have successfully integrated the capability to apply uniform, high magnetic fields (10–30 T) to high energy density experiments on the Z facility. This system uses an 8-mF, 15-kV capacitor bank to drive large-bore (5 cm diameter), high-inductance (1–3 mH) multi-turn, multi-layer electromagnets that slowly magnetize the conductive targets used on Z over several milliseconds (time to peak field of 2–7 ms). This system was commissioned in February 2013 and has been used successfully to magnetize more than 30 experiments up to 10 T that have produced exciting and surprising physics results. These experiments used split-magnet topologies to maintain diagnostic lines of sight to the target. We describe the design, integration, and operation of the pulsed coil system into the challenging and harsh environment of the Z Machine. We also describe our plans and designs for achieving fields up to 20 T with a reduced-gap split-magnet configuration, and up to 30 T with a solid magnet configuration in pursuit of the Magnetized Liner Inertial Fusion concept.

  1. Pulsed-coil magnet systems for applying uniform 10-30 T fields to centimeter-scale targets on Sandia's Z facility

    NASA Astrophysics Data System (ADS)

    Rovang, D. C.; Lamppa, D. C.; Cuneo, M. E.; Owen, A. C.; McKenney, J.; Johnson, D. W.; Radovich, S.; Kaye, R. J.; McBride, R. D.; Alexander, C. S.; Awe, T. J.; Slutz, S. A.; Sefkow, A. B.; Haill, T. A.; Jones, P. A.; Argo, J. W.; Dalton, D. G.; Robertson, G. K.; Waisman, E. M.; Sinars, D. B.; Meissner, J.; Milhous, M.; Nguyen, D. N.; Mielke, C. H.

    2014-12-01

    Sandia has successfully integrated the capability to apply uniform, high magnetic fields (10-30 T) to high energy density experiments on the Z facility. This system uses an 8-mF, 15-kV capacitor bank to drive large-bore (5 cm diameter), high-inductance (1-3 mH) multi-turn, multi-layer electromagnets that slowly magnetize the conductive targets used on Z over several milliseconds (time to peak field of 2-7 ms). This system was commissioned in February 2013 and has been used successfully to magnetize more than 30 experiments up to 10 T that have produced exciting and surprising physics results. These experiments used split-magnet topologies to maintain diagnostic lines of sight to the target. We describe the design, integration, and operation of the pulsed coil system into the challenging and harsh environment of the Z Machine. We also describe our plans and designs for achieving fields up to 20 T with a reduced-gap split-magnet configuration, and up to 30 T with a solid magnet configuration in pursuit of the Magnetized Liner Inertial Fusion concept.
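    The quoted time to peak field can be sanity-checked against the ideal LC quarter-period relation, t_peak = (π/2)·√(LC): with the stated 8 mF bank and 1–3 mH coils this gives roughly 4.4–7.7 ms, the same order as the reported 2–7 ms (circuit resistance and coil details account for the difference). A quick check, not from the paper:

```python
# Ideal LC ring-up: a capacitor bank discharging into an inductive
# magnet reaches peak current a quarter of the LC period after closing,
# t_peak = (pi/2) * sqrt(L * C).
import math

def t_peak(L_henry, C_farad):
    return (math.pi / 2) * math.sqrt(L_henry * C_farad)

C = 8e-3  # 8 mF capacitor bank
for L in (1e-3, 3e-3):  # 1-3 mH magnet inductance range
    print(f"L = {L * 1e3:.0f} mH -> t_peak ~ {t_peak(L, C) * 1e3:.1f} ms")
```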

  2. Unbreakable distributed storage with quantum key distribution network and password-authenticated secret sharing.

    PubMed

    Fujiwara, M; Waseda, A; Nojima, R; Moriai, S; Ogata, W; Sasaki, M

    2016-07-01

    Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir's (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, assumed is that data transmission and authentication must be perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km).

  3. Dirac Cellular Automaton from Split-step Quantum Walk

    PubMed Central

    Mallick, Arindam; Chandrashekar, C. M.

    2016-01-01

    Simulation of one quantum system by another has implications for the realization of a quantum machine that can imitate any quantum system and solve problems that are not accessible to classical computers. One approach to engineering quantum simulations is to discretize the space-time degrees of freedom in quantum dynamics and define quantum cellular automata (QCA), a local unitary update rule on a lattice. Different models of QCA are constructed using sets of conditions which are not unique and are not always in an implementable configuration on any other system. The Dirac Cellular Automaton (DCA) is one such model, constructed for the Dirac Hamiltonian (DH) in free quantum field theory. Here, starting from a split-step discrete-time quantum walk (QW), which is uniquely defined for experimental implementation, we recover the DCA along with all the fine oscillations in position space and bridge the missing connection between DH, DCA and QW. We present the contribution of the parameters responsible for the fine oscillations to the Zitterbewegung frequency and entanglement. The tuneability of the evolution parameters demonstrated in experimental implementations of QW will establish it as an efficient tool to design quantum simulators and to approach quantum field theory from the principles of quantum information theory. PMID:27184159
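    A split-step discrete-time quantum walk of the kind the paper builds on can be simulated in a few lines. The coin angles and lattice size below are illustrative choices (the DCA correspondence fixes specific parameters), and periodic boundaries are assumed for simplicity:

```python
# Split-step DTQW on a line: each step applies a coin rotation, a
# coin-conditioned shift of one component, a second coin rotation, and
# a shift of the other component. All operations are unitary, so the
# total probability stays 1.
import numpy as np

def split_step_qw(steps=50, n=201, theta1=np.pi / 4, theta2=np.pi / 4):
    def coin(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    psi = np.zeros((n, 2), dtype=complex)
    psi[n // 2, 0] = 1.0  # walker at the origin, coin "up"
    C1, C2 = coin(theta1), coin(theta2)
    for _ in range(steps):
        psi = psi @ C1.T                    # first coin rotation
        psi[:, 0] = np.roll(psi[:, 0], -1)  # shift "up" component one site left
        psi = psi @ C2.T                    # second coin rotation
        psi[:, 1] = np.roll(psi[:, 1], 1)   # shift "down" component one site right
    return np.abs(psi[:, 0])**2 + np.abs(psi[:, 1])**2  # position distribution

prob = split_step_qw()
```

    Varying `theta1` and `theta2` changes the effective dispersion, which is the tunability the abstract refers to.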

  4. A mutual information-Dempster-Shafer based decision ensemble system for land cover classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pahlavani, Parham; Bigdeli, Behnaz

    2017-12-01

    Hyperspectral images contain extremely rich spectral information that offers great potential to discriminate between various land cover classes. However, these images are usually composed of tens or hundreds of spectrally close bands, which results in high redundancy and a great amount of computation time in hyperspectral classification. Furthermore, in the presence of mixed-coverage pixels, crisp classifiers produce errors of omission and commission. This paper presents a mutual information-Dempster-Shafer system based on an ensemble classification approach for the classification of hyperspectral data. First, mutual information is applied to split the data into a few independent partitions to overcome high dimensionality. Then, a fuzzy maximum likelihood classifier is applied to each band subset. Finally, Dempster-Shafer fusion is applied to combine the results of the fuzzy classifiers. To assess the proposed method, a crisp ensemble system, with a support vector machine as the crisp classifier and weighted majority voting as the crisp fusion method, is applied to the hyperspectral data. Furthermore, a dimension-reduction system is used to assess the effectiveness of the mutual information band splitting of the proposed method. The proposed methodology provides interesting conclusions on the effectiveness and potential of mutual information-Dempster-Shafer based classification of hyperspectral data.
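
    The fusion stage described above relies on Dempster's rule of combination. A minimal sketch, assuming mass functions represented as dicts keyed by frozensets of class labels; the two-class example values are invented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass falling on the empty set
    # Normalise by the non-conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two classifiers' beliefs over classes {urban, vegetation} (invented numbers)
U, V = frozenset({'urban'}), frozenset({'vegetation'})
UV = U | V                                 # the full frame (ignorance)
m1 = {U: 0.6, V: 0.1, UV: 0.3}
m2 = {U: 0.5, V: 0.2, UV: 0.3}
fused = dempster_combine(m1, m2)           # mass on 'urban' is reinforced
```

    Concordant evidence is reinforced and conflicting evidence is discounted, which is why the rule suits fusing the per-subset fuzzy classifiers described in the abstract.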

  5. Time-dependent wave front propagation simulation of a hard x-ray split-and-delay unit: Towards a measurement of the temporal coherence properties of x-ray free electron lasers

    DOE PAGES

    Roling, S.; Zacharias, H.; Samoylova, L.; ...

    2014-11-18

    For the European x-ray free electron laser (XFEL) a split-and-delay unit based on geometrical wavefront beam splitting and multilayer mirrors has been built which covers the range of photon energies from 5 keV up to 20 keV. Maximum delays between Δτ = ±2.5 ps at hν = 20 keV and up to Δτ = ±23 ps at hν = 5 keV will be possible. Time-dependent wave-optics simulations have been performed by means of the Synchrotron Radiation Workshop software for XFEL pulses at hν = 5 keV. The XFEL radiation was simulated using results of time-dependent simulations with the self-amplified spontaneous emission code FAST. Main features of the optical layout, including diffraction on the beam splitter edge and optics imperfections measured with a nanometer optic component measuring machine slope measuring profiler, were taken into account. The impact of these effects on the characterization of the temporal properties of XFEL pulses is analyzed. An approach based on fast Fourier transformation allows for the evaluation of the temporal coherence despite large wavefront distortions caused by the optics imperfections. In this manner, the fringes resulting from time-dependent two-beam interference can be filtered and evaluated, yielding a coherence time of τc = 0.187 fs (HWHM) for real, nonperfect mirrors, while for ideal mirrors a coherence time of τc = 0.191 fs (HWHM) is expected.

  6. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
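
    The baseline iteration that the Monte Carlo schemes accelerate is a preconditioned Richardson (stationary) iteration built from a convergent splitting A = M - N. A minimal deterministic sketch with a Jacobi splitting, M = diag(A); the matrix and right-hand side are illustrative.

```python
import numpy as np

def richardson(A, b, tol=1e-10, max_iter=500):
    """Preconditioned Richardson: x <- x + M^{-1}(b - A x), with M = diag(A)."""
    Minv = 1.0 / np.diag(A)                # Jacobi preconditioner, applied entrywise
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                      # current residual
        x = x + Minv * r                   # stationary update
        if np.linalg.norm(r) < tol:
            break
    return x

# Diagonally dominant system, so the splitting is convergent
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = richardson(A, b)
```

    The hybrid methods in the paper replace parts of this deterministic update with Monte Carlo estimates, which is what makes them attractive for fault resiliency on large machines.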

  7. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  8. Quantifying the Beauty of Words: A Neurocognitive Poetics Perspective

    PubMed Central

    Jacobs, Arthur M.

    2017-01-01

    In this paper I would like to pave the way for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA), and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. Application of machine learning algorithms fed with these QNA data shows that a classifier of the decision tree family excellently learns to split words into beautiful vs. ugly ones. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials. PMID:29311877
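
    The decision-tree family of classifiers used in this study splits words on feature thresholds. A toy single-split (decision-stump) sketch on an invented surface feature; the words, labels, and the vowel-ratio feature are hypothetical stand-ins for the paper's QNA features.

```python
def vowel_ratio(word):
    """Toy surface feature: fraction of vowels (a stand-in for a QNA feature)."""
    vowels = set('aeiou')
    return sum(ch in vowels for ch in word) / len(word)

def best_stump(samples):
    """One decision-tree split: the feature threshold minimising training errors."""
    best = None
    for t in sorted({f for f, _ in samples}):
        # predict 'beautiful' when the feature exceeds the threshold t
        errors = sum((f > t) != label for f, label in samples)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best

# Hypothetical training data: (word, is_beautiful) -- labels are invented
words = [('ocean', True), ('aurora', True), ('serene', True),
         ('grunt', False), ('sludge', False), ('crypt', False)]
samples = [(vowel_ratio(w), label) for w, label in words]
threshold, errors = best_stump(samples)    # perfectly separates this toy set
```

    A full decision tree simply recurses this split on each side; with eight QNA features, each node would also search over which feature to threshold.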

  9. Quantifying the Beauty of Words: A Neurocognitive Poetics Perspective.

    PubMed

    Jacobs, Arthur M

    2017-01-01

    In this paper I would like to pave the way for future studies in Computational Stylistics and (Neuro-)Cognitive Poetics by describing procedures for predicting the subjective beauty of words. A set of eight tentative word features is computed via Quantitative Narrative Analysis (QNA), and a novel metric for quantifying word beauty, the aesthetic potential, is proposed. Application of machine learning algorithms fed with these QNA data shows that a classifier of the decision tree family excellently learns to split words into beautiful vs. ugly ones. The results shed light on surface and semantic features theoretically relevant for affective-aesthetic processes in literary reading and generate quantitative predictions for neuroaesthetic studies of verbal materials.

  10. Comparison of Natural Language Processing Rules-based and Machine-learning Systems to Identify Lumbar Spine Imaging Findings Related to Low Back Pain.

    PubMed

    Tan, W Katherine; Hassanpour, Saeed; Heagerty, Patrick J; Rundell, Sean D; Suri, Pradeep; Huhdanpaa, Hannu T; James, Kathryn; Carrell, David S; Langlotz, Curtis P; Organ, Nancy L; Meier, Eric N; Sherman, Karen J; Kallmes, David F; Luetmer, Patrick H; Griffith, Brent; Nerenz, David R; Jarvik, Jeffrey G

    2018-03-28

    To evaluate a natural language processing (NLP) system built with open-source tools for identification of lumbar spine imaging findings related to low back pain on magnetic resonance and x-ray radiology reports from four health systems. We used a limited data set (de-identified except for dates) sampled from lumbar spine imaging reports of a prospectively assembled cohort of adults. From N = 178,333 reports, we randomly selected N = 871 to form a reference-standard dataset, consisting of N = 413 x-ray reports and N = 458 MR reports. Using standardized criteria, four spine experts annotated the presence of 26 findings, where 71 reports were annotated by all four experts and 800 were each annotated by two experts. We calculated inter-rater agreement and finding prevalence from annotated data. We randomly split the annotated data into development (80%) and testing (20%) sets. We developed an NLP system from both rule-based and machine-learned models. We validated the system using accuracy metrics such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The multirater annotated dataset achieved inter-rater agreement of Cohen's kappa > 0.60 (substantial agreement) for 25 of 26 findings, with finding prevalence ranging from 3% to 89%. In the testing sample, rule-based and machine-learned predictions both had comparable average specificity (0.97 and 0.95, respectively). The machine-learned approach had a higher average sensitivity (0.94, compared to 0.83 for rule-based), and a higher overall AUC (0.98, compared to 0.90 for rule-based). Our NLP system performed well in identifying the 26 lumbar spine findings, as benchmarked by reference-standard annotation by medical experts. Machine-learned models provided substantial gains in model sensitivity with slight loss of specificity, and overall higher AUC. Copyright © 2018 The Association of University Radiologists. All rights reserved.
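
    The metrics reported above can be computed from binary annotations and predictions as follows. This is a generic sketch of sensitivity, specificity, and the rank-statistic (Mann-Whitney) form of AUC, with invented toy data, not the study's evaluation code.

```python
def sensitivity_specificity(y_true, y_pred):
    """Per-finding accuracy metrics from binary annotations vs. predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a positive outranks a negative (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented reference-standard labels, hard predictions, and model scores
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6]
sens, spec = sensitivity_specificity(y_true, y_pred)   # 2/3 and 2/3 here
```

    Averaging these per-finding values over the 26 findings yields the summary numbers quoted in the abstract.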

  11. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    PubMed

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying the natural language of given content and datasets. Typically, the data must be processed to extract useful features for performing LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, and the Gaussian Mixture Model (GMM), culminating in the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) so as to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches for ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated for LID with datasets created from eight different languages and show the superior performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID: ESA-ELM LID achieved an accuracy of 96.25%, compared with only 95.00% for SA-ELM LID.
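
    The basic ELM training step mentioned above is a closed-form least-squares fit on top of random, untrained hidden weights. A minimal sketch on a toy regression task; the architecture sizes and data are illustrative, and the paper's SA-ELM/ESA-ELM optimisation layers are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """ELM: random hidden-layer weights, output weights by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

    The random hidden weights are exactly what SA-ELM-style methods optimise: a better selection of `W` and `b` (e.g. via split-ratio or tournament selection) tightens the fit that the closed-form `beta` can achieve.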

  12. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    PubMed Central

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying the natural language of given content and datasets. Typically, the data must be processed to extract useful features for performing LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) coefficients, and the Gaussian Mixture Model (GMM), culminating in the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) so as to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches for ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated for LID with datasets created from eight different languages and show the superior performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID: ESA-ELM LID achieved an accuracy of 96.25%, compared with only 95.00% for SA-ELM LID. PMID:29672546

  13. ARRAYS OF BOTTLES OF PLUTONIUM NITRATE SOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margaret A. Marshall

    2012-09-01

    In October and November of 1981, thirteen approaches-to-critical were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free-acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were sponsored by Rockwell Hanford Operations because of the lack of experimental data on the criticality of arrays of bottles of Pu solution such as might be found in storage and handling at the Purex Facility at Hanford. The results of these experiments were used “to provide benchmark data to validate calculational codes used in criticality safety assessments of [the] plant configurations” (Ref. 1). Data for this evaluation were collected from the published report (Ref. 1), the approach-to-critical logbook, the experimenter’s logbook, and communication with the primary experimenter, B. Michael Durst. Of the 13 experiments performed, 10 were evaluated. One experiment was not evaluated because it had been thrown out by the experimenter, one because it was a repeat of another experiment, and the third because it reported the critical number of bottles as being greater than 25. Seven of the ten evaluated experiments were determined to be acceptable benchmark experiments. A similar experiment using uranyl nitrate was benchmarked as U233-SOL-THERM-014.

  14. A 5-year prospective radiographic evaluation of marginal bone levels adjacent to parallel-screw cylinder machined-neck implants and rough-surfaced microthreaded implants using digitized panoramic radiographs.

    PubMed

    Nickenig, Hans-Joachim; Wichmann, Manfred; Happe, Arndt; Zöller, Joachim E; Eitner, Stephan

    2013-10-01

    The purpose of this split-mouth study was to compare macro- and microstructure implant surfaces at the marginal bone level over five years of functional loading. From January to February 2006, 133 implants (70 rough-surfaced microthreaded implants and 63 machined-neck implants) were inserted in the mandible of 34 patients with Kennedy Class I residual dentitions and followed until December 2011. Marginal bone level was radiographically determined at six time points: implant placement (baseline), after the healing period, after six months, and at two years, three years, and five years follow-up. Median follow-up time was 5.2 years (range: 5.1-5.4). The machined-neck group had a mean crestal bone loss of 0.5 mm (0.0-2.3) after the healing period, 1.1 mm (0.0-3.0) at two years follow-up, and 1.4 mm (0.0-2.9) at five years follow-up. The rough-surfaced microthreaded implant group had a mean bone loss of 0.1 mm (-0.4 to 2.0) after the healing period, 0.5 mm (0.0-2.1) at two years follow-up, and 0.7 mm (0.0-2.3) at five years follow-up. The two implant types showed significant differences in marginal bone levels. Rough-surfaced microthreaded design caused significantly less loss of crestal bone levels under long-term functional loading in the mandible when compared to machined-neck implants. Copyright © 2012 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. Radiographic evaluation of marginal bone levels adjacent to parallel-screw cylinder machined-neck implants and rough-surfaced microthreaded implants using digitized panoramic radiographs.

    PubMed

    Nickenig, Hans-Joachim; Wichmann, Manfred; Schlegel, Karl Andreas; Nkenke, Emeka; Eitner, Stephan

    2009-06-01

    The purpose of this split-mouth study was to compare macro- and microstructure implant surfaces at the marginal bone level during a stress-free healing period and under functional loading. From January to February 2006, 133 implants (70 rough-surfaced microthreaded implants and 63 machined-neck implants) were inserted in the mandible of 34 patients with Kennedy Class I residual dentitions and followed until February 2008. The marginal bone level was radiographically determined, using digitized panoramic radiographs, at four time points: at implant placement (baseline level), after the healing period, after 6 months of functional loading, and at the end of follow-up. The median follow-up time was 1.9 (range: 1.9-2.1) years. The machined-neck group had a mean crestal bone loss of 0.5 mm (range: 0-2.3) after the healing period, 0.8 mm (range: 0-2.4) after 6 months, and 1.1 mm (range: 0-3) at the end of follow-up. The rough-surfaced microthreaded implant group had a mean bone loss of 0.1 mm (range: -0.4 to 2) after the healing period, 0.4 mm (range: 0-2.1) after 6 months, and 0.5 mm (range: 0-2.1) at the end of follow-up. The two implant types showed significant differences in marginal bone levels (healing period: P=0.01; end of follow-up: P<0.01). Radiographic evaluation of marginal bone levels adjacent to machined-neck or rough-surfaced microthreaded implants showed that implants with the microthreaded design caused minimal changes in crestal bone levels during healing (stress-free) and under functional loading.

  16. Clinical data miner: an electronic case report form system with integrated data preprocessing and machine-learning libraries supporting clinical diagnostic model research.

    PubMed

    Installé, Arnaud Jf; Van den Bosch, Thierry; De Moor, Bart; Timmerman, Dirk

    2014-10-20

    Using machine-learning techniques, clinical diagnostic model research extracts diagnostic models from patient data. Traditionally, patient data are often collected using electronic Case Report Form (eCRF) systems, while mathematical software is used for analyzing these data using machine-learning techniques. Due to the lack of integration between eCRF systems and mathematical software, extracting diagnostic models is a complex, error-prone process. Moreover, due to the complexity of this process, it is usually only performed once, after a predetermined number of data points have been collected, without insight into the predictive performance of the resulting models. The objective of the Clinical Data Miner (CDM) software framework study is to offer an eCRF system with integrated data preprocessing and machine-learning libraries, improving the efficiency of the clinical diagnostic model research workflow, and to enable optimization of patient inclusion numbers through study performance monitoring. The CDM software framework was developed using a test-driven development (TDD) approach, to ensure high software quality. Architecturally, CDM's design is split over a number of modules, to ensure future extensibility. The TDD approach has enabled us to deliver high software quality. CDM's eCRF Web interface is in active use by the studies of the International Endometrial Tumor Analysis consortium, with over 4000 enrolled patients, and more studies planned. Additionally, a derived user interface has been used in six separate interrater agreement studies. CDM's integrated data preprocessing and machine-learning libraries simplify some otherwise manual and error-prone steps in the clinical diagnostic model research workflow. Furthermore, CDM's libraries provide study coordinators with a method to monitor a study's predictive performance as patient inclusions increase.
To our knowledge, CDM is the only eCRF system integrating data preprocessing and machine-learning libraries. This integration improves the efficiency of the clinical diagnostic model research workflow. Moreover, by simplifying the generation of learning curves, CDM enables study coordinators to assess more accurately when data collection can be terminated, resulting in better models or lower patient recruitment costs.

  17. The feasibility of using natural language processing to extract clinical information from breast pathology reports.

    PubMed

    Buckley, Julliette M; Coopey, Suzanne B; Sharko, John; Polubriaginof, Fernanda; Drohan, Brian; Belli, Ahmet K; Kim, Elizabeth M H; Garber, Judy E; Smith, Barbara L; Gadd, Michele A; Specht, Michelle C; Roche, Constance A; Gudewicz, Thomas M; Hughes, Kevin S

    2012-01-01

    The opportunity to integrate clinical decision support systems into clinical practice is limited due to the lack of structured, machine-readable data in the current format of the electronic health record. Natural language processing has been designed to convert free text into machine-readable data. The aim of the current study was to ascertain the feasibility of using natural language processing to extract clinical information from >76,000 breast pathology reports. APPROACH AND PROCEDURE: Breast pathology reports from three institutions were analyzed using natural language processing software (Clearforest, Waltham, MA) to extract information on a variety of pathologic diagnoses of interest. Data tables were created from the extracted information according to date of surgery, side of surgery, and medical record number. The variety of ways in which each diagnosis could be represented was recorded, as a means of demonstrating the complexity of machine interpretation of free text. There was widespread variation in how pathologists reported common pathologic diagnoses. We report, for example, 124 ways of saying invasive ductal carcinoma and 95 ways of saying invasive lobular carcinoma. There were >4000 ways of saying invasive ductal carcinoma was not present. Natural language processor sensitivity and specificity were 99.1% and 96.5% when compared to expert human coders. We have demonstrated how a large body of free-text medical information, such as that seen in breast pathology reports, can be converted to a machine-readable format using natural language processing, and have described the inherent complexities of the task.

  18. Machine-learning-assisted materials discovery using failed experiments

    NASA Astrophysics Data System (ADS)

    Raccuglia, Paul; Elbert, Katherine C.; Adler, Philip D. F.; Falk, Casey; Wenny, Malia B.; Mollo, Aurelio; Zeller, Matthias; Friedler, Sorelle A.; Schrier, Joshua; Norquist, Alexander J.

    2016-05-01

    Inorganic-organic hybrid materials such as organically templated metal oxides, metal-organic frameworks (MOFs) and organohalide perovskites have been studied for decades, and hydrothermal and (non-aqueous) solvothermal syntheses have produced thousands of new materials that collectively contain nearly all the metals in the periodic table. Nevertheless, the formation of these compounds is not fully understood, and development of new compounds relies primarily on exploratory syntheses. Simulation- and data-driven approaches (promoted by efforts such as the Materials Genome Initiative) provide an alternative to experimental trial-and-error. Three major strategies are: simulation-based predictions of physical properties (for example, charge mobility, photovoltaic properties, gas adsorption capacity or lithium-ion intercalation) to identify promising target candidates for synthetic efforts; determination of the structure-property relationship from large bodies of experimental data, enabled by integration with high-throughput synthesis and measurement tools; and clustering on the basis of similar crystallographic structure (for example, zeolite structure classification or gas adsorption properties). Here we demonstrate an alternative approach that uses machine-learning algorithms trained on reaction data to predict reaction outcomes for the crystallization of templated vanadium selenites. We used information on ‘dark’ reactions—failed or unsuccessful hydrothermal syntheses—collected from archived laboratory notebooks from our laboratory, and added physicochemical property descriptions to the raw notebook information using cheminformatics techniques. We used the resulting data to train a machine-learning model to predict reaction success. 
When carrying out hydrothermal synthesis experiments using previously untested, commercially available organic building blocks, our machine-learning model outperformed traditional human strategies, and successfully predicted conditions for new organically templated inorganic product formation with a success rate of 89 per cent. Inverting the machine-learning model reveals new hypotheses regarding the conditions for successful product formation.
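
    The core idea, training a classifier on reaction descriptors to predict success, can be sketched generically. Below is a tiny hand-rolled logistic model on invented descriptors; the paper itself used SVM and decision-tree models on cheminformatics features, so this is only a structural illustration.

```python
import math

def train_logistic(data, lr=0.5, epochs=500):
    """Tiny logistic model predicting reaction success from property descriptors.
    Stand-in for the paper's models; the features and labels are invented."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:                  # per-sample gradient descent
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y                      # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b))) > 0.5

# Hypothetical descriptors: (pH, reaction time in days); label = crystal formed.
# 'Dark' (failed) reactions contribute the 0-labelled rows.
data = [((1.0, 2.0), 1), ((1.2, 3.0), 1), ((0.9, 2.5), 1),
        ((3.0, 0.5), 0), ((2.8, 1.0), 0), ((3.5, 0.8), 0)]
w, b = train_logistic(data)
acc = sum(predict(w, b, x) == bool(y) for x, y in data) / len(data)
```

    The point of the paper is the training data, not the model family: including failed reactions gives the classifier the negative examples that archived successes alone cannot provide.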

  19. The PSEUDODOJO: Training and grading a 85 element optimized norm-conserving pseudopotential table

    NASA Astrophysics Data System (ADS)

    van Setten, M. J.; Giantomassi, M.; Bousquet, E.; Verstraete, M. J.; Hamann, D. R.; Gonze, X.; Rignanese, G.-M.

    2018-05-01

    First-principles calculations in crystalline structures are often performed with a planewave basis set. To make the number of basis functions tractable two approximations are usually introduced: core electrons are frozen and the diverging Coulomb potential near the nucleus is replaced by a smoother expression. The norm-conserving pseudopotential was the first successful method to apply these approximations in a fully ab initio way. Later on, more efficient and more exact approaches were developed based on the ultrasoft and the projector augmented wave formalisms. These formalisms are however more complex and developing new features in these frameworks is usually more difficult than in the norm-conserving framework. Most of the existing tables of norm-conserving pseudopotentials, generated long ago, do not include the latest developments, are not systematically tested or are not designed primarily for high precision. In this paper, we present our PSEUDODOJO framework for developing and testing full tables of pseudopotentials, and demonstrate it with a new table generated with the ONCVPSP approach. The PSEUDODOJO is an open source project, building on the ABIPY package, for developing and systematically testing pseudopotentials. At present it contains 7 different batteries of tests executed with ABINIT, which are performed as a function of the energy cutoff. The results of these tests are then used to provide hints for the energy cutoff for actual production calculations. Our final set contains 141 pseudopotentials split into a standard and a stringent accuracy table. In total around 70,000 calculations were performed to test the pseudopotentials. The process of developing the final table led to new insights into the effects of both the core-valence partitioning and the non-linear core corrections on the stability, convergence, and transferability of norm-conserving pseudopotentials. 
The PSEUDODOJO hence provides a set of pseudopotentials and general purpose tools for further testing and development, focusing on highly accurate calculations and their use in the development of ab initio packages. The pseudopotential files are available on the PSEUDODOJO web interface pseudo-dojo.org under the name NC (ONCVPSP) v0.4 in the psp8, UPF2, and PSML 1.1 formats. The web interface also provides the inputs, which are compatible with versions 3.3.1 and higher of ONCVPSP. All tests have been performed with ABINIT 8.4.

  20. The Efficacy and Safety of HA IDF Plus (with Lidocaine) Versus HA IDF (Without Lidocaine) in Nasolabial Folds Injection: A Randomized, Multicenter, Double-Blind, Split-Face Study.

    PubMed

    Lee, Jong-Hun; Kim, Seok-Hwan; Park, Eun-Soo

    2017-04-01

    Injection-related pain of dermal fillers is a consistent and bothersome problem for patients undergoing soft tissue augmentation. Reducing the pain could improve overall patient satisfaction. The purpose of this study was to compare the pain relief, efficacy, and safety of HA IDF plus containing lidocaine with HA IDF without lidocaine during correction of nasolabial folds (NLFs). Sixty-two subjects were enrolled in a randomized, multicenter, double-blind, split-face study of HA IDF plus and HA IDF for NLF correction. For the split-face study, HA IDF plus was injected into one side of the NLF, and HA IDF was injected into the other side. The first evaluation variable was the injection site pain measured using a 100-mm visual analogue scale (VAS). The second evaluation variables included the global aesthetic improvement scale, wrinkle severity rating scale, and adverse events. Immediately after injection, a statistically significant proportion of subjects (91.94%) experienced at least a 10-mm decrease in VAS scores on the side injected with HA IDF plus compared with HA IDF. The two fillers were not significantly different in safety profile or wrinkle correction during the follow-up visit. HA IDF plus significantly reduced the injection-related pain during NLF correction compared with HA IDF, without altering clinical outcomes or safety. Both HA IDF plus and HA IDF were well tolerated, and most adverse reactions were mild and transient. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  1. Uranium hydrogeochemical and stream sediment reconnaissance of the Philip Smith Mountains NTMS quadrangle, Alaska

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1981-09-01

    Results of a hydrogeochemical and stream sediment reconnaissance of the Philip Smith Mountains NTMS quadrangle, Alaska are presented. In addition to this abbreviated data release, more complete data are available to the public in machine-readable form. In this data release are location data, field analyses, and laboratory analyses of several different sample media. For the sake of brevity, many field site observations have not been included in this volume. These data are, however, available on the magnetic tape. Appendices A and B describe the sample media and summarize the analytical results for each medium. The data were subsetted by one of the Los Alamos National Laboratory (LANL) sorting programs into groups of stream sediment and lake sediment samples. For each group which contains a sufficient number of observations, statistical tables, tables of raw data, and 1:1000000 scale maps of pertinent elements have been included in this report.

  2. Daytime Land Surface Temperature Extraction from MODIS Thermal Infrared Data under Cirrus Clouds

    PubMed Central

    Fan, Xiwei; Tang, Bo-Hui; Wu, Hua; Yan, Guangjian; Li, Zhao-Liang

    2015-01-01

    Simulated data showed that cirrus clouds could lead to a maximum land surface temperature (LST) retrieval error of 11.0 K when using the generalized split-window (GSW) algorithm with a cirrus optical depth (COD) at 0.55 μm of 0.4 in nadir view. A correction term, linear in the COD, was added to the GSW algorithm to extend it to cirrus cloudy conditions. The COD was obtained from a look-up table of the isolated cirrus bidirectional reflectance at 0.55 μm. Additionally, the slope k of the linear correction was expressed as a multiple linear model of the top-of-atmosphere brightness temperatures of MODIS channels 31–34 and of the difference between the split-window channel emissivities. The simulated data showed that the LST error could be reduced from 11.0 to 2.2 K. The sensitivity analysis indicated that the total error from all the uncertainties of the input parameters, the extension algorithm accuracy, and the GSW algorithm accuracy was less than 2.5 K in nadir view. Finally, Great Lakes surface water temperatures measured by buoys showed that the retrieval accuracy of the GSW algorithm was improved by at least 1.5 K using the proposed extension algorithm for cirrus skies. PMID:25928059
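
The correction scheme described above (a COD-linear term added to the GSW result, with slope k regressed on the channel 31–34 brightness temperatures and the split-window emissivity difference) can be sketched as follows. All coefficient values are illustrative placeholders, not the regression coefficients from the paper.

```python
def gsw_lst(t31, t32, emis_mean, coeffs):
    """Generalized split-window form: LST as a linear combination of the mean
    and half-difference of the two split-window brightness temperatures, with
    an emissivity-dependent term. Coefficients are placeholders."""
    a0, a1, a2, a3 = coeffs
    t_mean = 0.5 * (t31 + t32)
    t_diff = 0.5 * (t31 - t32)
    return a0 + a1 * t_mean + a2 * (1.0 - emis_mean) / emis_mean * t_mean + a3 * t_diff

def cirrus_corrected_lst(t31, t32, t33, t34, emis_mean, emis_diff,
                         cod, coeffs, slope_coeffs):
    """Extension described in the abstract: add a term linear in the cirrus
    optical depth (COD), with slope k modeled as a multiple linear function of
    the channel 31-34 brightness temperatures and the split-window emissivity
    difference. The slope coefficients are likewise placeholders."""
    b0, b1, b2, b3, b4, b5 = slope_coeffs
    k = b0 + b1 * t31 + b2 * t32 + b3 * t33 + b4 * t34 + b5 * emis_diff
    return gsw_lst(t31, t32, emis_mean, coeffs) + k * cod
```

With cod = 0 the extension reduces exactly to the GSW value, so the correction only activates under cirrus.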

  3. Finding Atmospheric Composition (AC) Metadata

    NASA Technical Reports Server (NTRS)

    Strub, Richard F.; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster is our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). 6. Most (if not all) Earth science atmospheric composition data providers store a reference to their data at GCMD.
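
The machine-to-machine harvesting praised in conclusion 1 typically means issuing an OpenSearch keyword query and parsing the returned Atom feed. A minimal sketch, with a hypothetical endpoint and illustrative parameter names (real catalogs publish their exact URL templates in OpenSearch description documents):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

ATOM = "{http://www.w3.org/2005/Atom}"

def build_opensearch_url(base_url, keyword, count=10):
    # Hypothetical keyword template; actual parameter names vary per catalog.
    return base_url + "?" + urlencode({"q": keyword, "count": count})

def parse_atom_entries(atom_xml):
    """Extract (title, id) pairs from an Atom feed, the usual OpenSearch
    response format returned by catalog search endpoints."""
    root = ET.fromstring(atom_xml)
    return [(e.findtext(ATOM + "title"), e.findtext(ATOM + "id"))
            for e in root.findall(ATOM + "entry")]
```

The ID-first workflow in conclusion 4 corresponds to running a query like this for identifiers only, then fetching full records one at a time.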

  4. SU-E-T-473: A Patient-Specific QC Paradigm Based On Trajectory Log Files and DICOM Plan Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMarco, J; McCloskey, S; Low, D

    Purpose: To evaluate a remote QC tool for monitoring treatment machine parameters and treatment workflow. Methods: The Varian TrueBeam™ linear accelerator is a digital machine that records machine axis parameters and MLC leaf positions as a function of delivered monitor units or control points. This information is saved to a binary trajectory log file for every treatment or imaging field in the patient treatment session. A MATLAB analysis routine was developed to parse the trajectory log files for a given patient, compare the expected versus actual machine and MLC positions, and perform a cross-comparison with the DICOM-RT plan file exported from the treatment planning system. The parsing routine sorts the trajectory log files based on the time and date stamp, generates a sequential report file listing treatment parameters, and provides a match relative to the DICOM-RT plan file. Results: The trajectory log parsing routine was compared against a standard record-and-verify listing for patients undergoing initial IMRT dosimetry verification and weekly and final chart QC. The complete treatment course was independently verified for 10 patients of varying treatment site, and a total of 1267 treatment fields were evaluated, including pre-treatment imaging fields where applicable. In the context of IMRT plan verification, eight prostate SBRT plans with 4 arcs per plan were evaluated based on expected versus actual machine axis parameters. The average value of the maximum RMS MLC error was 0.067 ± 0.001 mm and 0.066 ± 0.002 mm for leaf banks A and B, respectively. Conclusion: A real-time QC analysis program was tested using trajectory log files and DICOM-RT plan files. The parsing routine is efficient and able to evaluate all relevant machine axis parameters during a patient treatment course, including MLC leaf positions and table positions at the time of image acquisition and during treatment.
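
The per-leaf RMS comparison reported in the Results can be sketched in a few lines. The array layout (control points × leaves) is an assumption for illustration; the trajectory log's native binary format is not described here.

```python
import numpy as np

def max_rms_leaf_error(expected, actual):
    """Per-leaf RMS of (actual - expected) over all control points, then the
    maximum across leaves -- the 'maximum RMS MLC error' statistic quoted in
    the Results. Inputs are (n_control_points, n_leaves) arrays of leaf
    positions in mm; this layout is an assumption."""
    err = np.asarray(actual, dtype=float) - np.asarray(expected, dtype=float)
    rms_per_leaf = np.sqrt((err ** 2).mean(axis=0))
    return rms_per_leaf.max()
```

Running this per leaf bank and per plan would reproduce the kind of bank A/B summary the abstract reports.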

  5. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    NASA Astrophysics Data System (ADS)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster is our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). 6. Most (if not all) Earth science atmospheric composition data providers store a reference to their data at GCMD.

  6. Small communal laundries in block of flats: Planning, Equipment, Handicap Adaption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedersen, B.

    1980-01-01

    The primary requirement for a communal laundry is that it be adapted to the laundry quantities, laundry needs, and available time of the households. In addition, the equipment must be such that the work involved and the consumption of energy and water are kept as low as possible. It is also important that the laundry facility be regarded as an attractive work environment. The following topics are discussed: Small communal laundries offer many advantages (In the same building, Possibilities for unscheduled laundering, Economically advantageous, Easy to agree on laundering times); Calculation of laundry capacity; Equipment in the laundry (Washing machines, Spin dryer, Tumbler dryer and drying cabinets, Work table, Sink unit, Cold mangle); Information on equipment; Energy conservation measures (Heat exchanger, Outdoor drying); Location of equipment; Work areas which also suit the physically handicapped; Work postures are improved if the machines are placed on a higher level; Layouts; Standards for laundries.

  7. FAIR principles and the IEDB: short-term improvements and a long-term vision of OBO-foundry mediated machine-actionable interoperability

    PubMed Central

    Vita, Randi; Overton, James A; Mungall, Christopher J; Sette, Alessandro

    2018-01-01

    Abstract The Immune Epitope Database (IEDB), at www.iedb.org, has the mission to make published experimental data relating to the recognition of immune epitopes easily available to the scientific public. By presenting curated data in a searchable database, we have liberated it from the tables and figures of journal articles, making it more accessible and usable by immunologists. Recently, the principles of Findability, Accessibility, Interoperability and Reusability have been formulated as goals that data repositories should meet to enhance the usefulness of their data holdings. We here examine how the IEDB complies with these principles and identify broad areas of success, but also areas for improvement. We describe short-term improvements to the IEDB that are being implemented now, as well as a long-term vision of true ‘machine-actionable interoperability’, which we believe will require community agreement on standardization of knowledge representation that can be built on top of the shared use of ontologies. PMID:29688354

  8. Evaluation of application for approval of alternative methodology for compliance with the NESHAP for shipbuilding and ship repair and recommended requirements for compliance (application submitted by Metro Machine Corporation, Norfolk, Virginia). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serageldin, M.A.

    1999-07-01

    The US Environmental Protection Agency is providing background information that supports the use of Metro Machine Corporation's (MMC) compliant all position enclosure (CAPE) plus air management system and regenerative thermal oxidizer (RTO) (CAPE + RTO System) as an alternative means of limiting the emissions of volatile organic hazardous air pollutants per volume of applied solids (nonvolatiles). This document also explains how the authors arrived at the operating, recordkeeping, and reporting conditions that MMC must meet for approval. The add-on control system they used consists of a pollution capture unit operation (CAPE) plus air management system and a destruction unit operation (RTO). When operated according to the specified procedures, it will control emissions to a level no greater than that from using coatings which comply with the limits in Table 2 of 40 CFR Part 63, Subpart II.

  9. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion

    PubMed Central

    Bone, Daniel; Bishop, Somer; Black, Matthew P.; Goodwin, Matthew S.; Lord, Catherine; Narayanan, Shrikanth S.

    2016-01-01

    Background Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely-used ASD screening and diagnostic tools. Methods The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders (DD), split at age 10. Algorithms were created via a robust ML classifier, support vector machine (SVM), while targeting best-estimate clinical diagnosis of ASD vs. non-ASD. Parameter settings were tuned in multiple levels of cross-validation. Results The created algorithms were more effective (higher performing) than current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. Conclusions ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. PMID:27090613
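
The core of the approach, a linear support vector machine evaluated by cross-validated sensitivity and specificity, can be sketched on synthetic stand-in data. This is a compact Pegasos-style trainer under stated assumptions, not the authors' actual pipeline or the real ADI-R/SRS data.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=30, seed=0):
    """Pegasos-style stochastic subgradient training of a linear SVM
    (hinge loss + L2 penalty, no bias term). Labels must be in {-1, +1}.
    A minimal stand-in for the SVM classifier named in the abstract."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # margin violated: step toward sample
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # only shrink (regularization)
                w = (1 - eta * lam) * w
    return w

def cv_sensitivity_specificity(X, y, k=5, lam=0.01):
    """k-fold cross-validated sensitivity/specificity -- the two quantities the
    screener algorithms trade off -- evaluated at the default threshold."""
    idx = np.arange(len(y))
    tp = fp = tn = fn = 0
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        w = train_linear_svm(X[tr], y[tr], lam=lam)
        pred = np.where(X[fold] @ w >= 0, 1, -1)
        tp += np.sum((pred == 1) & (y[fold] == 1))
        fn += np.sum((pred == -1) & (y[fold] == 1))
        tn += np.sum((pred == -1) & (y[fold] == -1))
        fp += np.sum((pred == 1) & (y[fold] == -1))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the decision threshold (here fixed at 0) is what makes sensitivity and specificity "differentially weightable," as the abstract puts it.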

  10. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion.

    PubMed

    Bone, Daniel; Bishop, Somer L; Black, Matthew P; Goodwin, Matthew S; Lord, Catherine; Narayanan, Shrikanth S

    2016-08-01

    Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely used ASD screening and diagnostic tools. The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders, split at age 10. Algorithms were created via a robust ML classifier, support vector machine, while targeting best-estimate clinical diagnosis of ASD versus non-ASD. Parameter settings were tuned in multiple levels of cross-validation. The created algorithms were more effective (higher performing) than the current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight the limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. © 2016 Association for Child and Adolescent Mental Health.

  11. Research on a new magnetic-field-modulated brushless double-rotor machine with sinusoidal-permeance modulating ring

    NASA Astrophysics Data System (ADS)

    Zheng, Ping; Liu, Jiaqi; Bai, Jingang; Song, Zhiyi; Liu, Yong

    2017-05-01

    The magnetic-field-modulated brushless double-rotor machine (MFM-BDRM), composed of a stator, a modulating ring rotor, and a PM rotor, is a kind of power-split device for hybrid electric vehicles (HEVs). In this paper, a new MFM-BDRM with a sinusoidal-permeance modulating ring, named the Sinusoidal-Permeance-Modulating-Ring Brushless Double-Rotor Machine (SPMR-BDRM), is proposed to solve the problems of poor mechanical strength and large iron loss. The structure and operating principle of the MFM-BDRM are introduced, and the design principle of the sinusoidal-permeance modulating ring is analyzed and derived. The main idea is to minimize the harmonic permeance of the air gap, so that the harmonic magnetic fields can be restrained. The proposed machine is compared with a same-size MFM-BDRM with a traditional modulating ring, including magnetic field distributions and electromagnetic performance; most importantly, the iron losses are compared under six different conditions. The results indicate that the harmonic magnetic fields in the air gap are restrained; the electromagnetic torque and power factor are almost the same at the same armature current; the torque ripples of the modulating ring rotor and the PM rotor are reduced; and the stator loss is reduced by at least 13% and the PM loss by at least 20% compared with the same-size traditional MFM-BDRM under the same operating conditions.

  12. Use of Computational Functional Genomics in Drug Discovery and Repurposing for Analgesic Indications.

    PubMed

    Lötsch, Jörn; Kringel, Dario

    2018-06-01

    The novel research area of functional genomics investigates the biochemical, cellular, or physiological properties of gene products with the goal of understanding the relationship between the genome and the phenotype. These developments have made analgesic drug research a data-rich discipline that can be mastered only by drawing on parallel developments in computer science, including the establishment of knowledge bases, mining methods for big data, machine learning, and artificial intelligence (Table ), which are introduced by example in the following. © 2018 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  13. Proceedings of the International Workshop on High-Level Language Computer Architecture, May 26-28, 1980, Fort Lauderdale, Florida

    DTIC Science & Technology

    1980-06-01

    (OCR-garbled excerpt from the scanned proceedings; the legible fragments concern instruction macros interpreted on the next lower-level machine, power-fail handling when the machine is in certain states, and Table 1, a comparison of processing time with and without a feature whose name is cut off.)

  14. A fluid-mechanical sewing machine

    NASA Astrophysics Data System (ADS)

    Lister, John; Chiu-Webster, Sunny

    2004-11-01

    It is a breakfast-table observation that when a viscous fluid thread falls from a sufficient height onto a stationary horizontal surface, the thread undergoes a coiling instability. We describe experimental observations of a viscous thread falling onto a steadily moving horizontal belt. Low (or zero) belt speeds produce coiling as expected. High belt speeds produce a steady thread whose shape is well predicted by theory for a stretching catenary with surface tension and inertia. Intermediate belt speeds show various modes of oscillation, which produce a variety of `stitching' patterns on the belt. The onset of oscillations is predicted theoretically.

  15. 1. Credit WCT. Original 2 1/4" x 2 1/4" color ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Credit WCT. Original 2-1/4" x 2-1/4" color negative is housed in the JPL Photography Laboratory, Pasadena, California. Photo shows John Morrow in charge of milling operations on coupons ("dogbones") of propellant on an Index milling machine. Coupons were milled to precise dimensions for tensile tests. Note that two sprinkler heads have been placed in very close proximity to the milling table for fire suppression purposes (JPL negative no. JPL-10283AC, 27 January 1989) - Jet Propulsion Laboratory Edwards Facility, Preparation Building, Edwards Air Force Base, Boron, Kern County, CA

  16. Re-Shielding of Cobalt-60 Teletherapy Rooms for Tomotherapy and Conventional Linear Accelerators using Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Çeçen, Yiğit; Yazgan, Çağrı

    2017-09-01

    Purpose: Nearly all Cobalt-60 teletherapy machines were removed around the world during the last two decades; the remaining ones are used for experimental purposes. However, the rooms of these teletherapy machines are valuable because of the lack of space in radiotherapy clinics. To place a new-technology treatment machine in one of these rooms, one must re-shield the room, since it was designed only for gamma beams of 1.25 MeV on average. Usually, the vendor of the new machine designs the new shielding based on experience; however, every radiotherapy room has different surrounding work areas, and it is wise to shield the room with these specific conditions in mind. Also, the clinic's shielding design goal may be much lower than what the International Atomic Energy Agency (IAEA) or the local association accepts. This study shows clinic-specific re-shielding of a Cobalt-60 room using Monte Carlo simulations. Materials & Methods: First a 6 MV Tomotherapy machine, then a 10 MV conventional linear accelerator (LINAC), was placed inside the Cobalt-60 teletherapy room. The photon flux outside the room was simulated using the Monte Carlo N-Particle (MCNP6.1) code before and after re-shielding. For the Tomotherapy simulation, flux distributions around the machine were obtained from the vendor and implemented as the source of the model. The LINAC model was more generic, with a 10 MeV electron source, a tungsten target, and primary and secondary collimators; its aim was to obtain the maximum (40 × 40 cm²) open field at the isocenter. Two different simulations were carried out for gantry angles 90° and 270°. The LINAC was placed in the room such that the primary walls were A' (gantry 270°) and C' (gantry 90°) (figure 1). The second part of the study was to model the re-shielding of the room for Tomotherapy and for the conventional LINAC separately, in order to evaluate the shielding recommended by the vendors.
The left side of the room was adjacent to a LINAC room with a 2-meter-thick concrete wall (figure 1), so no shielding was necessary for that wall. Behind wall A-A' there was an outdoor forbidden area; behind wall B-B' was the contouring room for the doctors; and the control room was behind wall C-C' (figure 1). After some modifications, the final shielding was designed. Results: The photon flux distributions outside the room before and after the re-shielding were compared. The re-shielding for Tomotherapy reduced the flux to 1.89% of the pre-shielding value on average (table 1). For the conventional LINAC, re-shielding decreased the photon flux in the control room (gantry 90°) to 0.57% of the pre-shielding value (table 2), and the flux behind wall A' (gantry 270°) to 2.46%. The area behind wall B' was safe even before re-shielding.

  17. Using virtualization to protect the proprietary material science applications in volunteer computing

    NASA Astrophysics Data System (ADS)

    Khrapov, Nikolay P.; Rozen, Valery V.; Samtsevich, Artem I.; Posypkin, Mikhail A.; Sukhomlin, Vladimir A.; Oganov, Artem R.

    2018-04-01

    USPEX is a world-leading software package for computational materials design. In essence, USPEX splits a simulation into a large number of workunits that can be processed independently, a scheme that ideally fits the desktop grid architecture. Workunit processing is done by a simulation package aimed at energy minimization. Many such packages are proprietary and must be protected from unauthorized access when running on a volunteer PC. In this paper we present an original approach based on virtualization: in a nutshell, the proprietary code and input files are stored in an encrypted folder and run inside a virtual machine image that is also password-protected. The paper describes this approach in detail and discusses its application in the USPEX@home volunteer project.

  18. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable; the process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because growing a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are random sampling and the restricted set of input variables available for selection at each split. Finally, I introduce R functions to perform model-based recursive partitioning, which incorporates recursive partitioning into conventional parametric model building.
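
A toy version of the split-selection step can clarify the idea. The sketch below scores candidate thresholds on one predictor by Gini impurity decrease, a deliberate simplification of the permutation-test-based selection that conditional inference trees (R's ctree) actually use:

```python
import numpy as np

def best_split(x, y):
    """Exhaustively score thresholds on a single predictor x against a binary
    response y by Gini impurity decrease. Simplified stand-in for conditional
    inference split selection. Returns (threshold, impurity_decrease)."""
    def gini(p):
        return 1.0 - p ** 2 - (1.0 - p) ** 2
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    n = len(ys)
    parent = gini(ys.mean())
    best_thr, best_gain = None, -np.inf
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue  # cannot split between identical values
        child = (i * gini(ys[:i].mean()) + (n - i) * gini(ys[i:].mean())) / n
        gain = parent - child
        if gain > best_gain:
            best_thr, best_gain = (xs[i - 1] + xs[i]) / 2.0, gain
    return best_thr, best_gain
```

A full tree learner applies this over all candidate variables and recurses on each child until a stopping criterion is met.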

  19. Application of all relevant feature selection for failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.

    2015-07-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge of the physical components of the climate; nevertheless, they contain several parameters which are too weakly constrained by observations and can potentially lead to a crash of the simulation. A recent study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to a crash of the simulation, and hence which processes described by these parameters need refined analysis. In the current study we reanalyse the dataset used in that research with a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for crash prediction are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
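
The split-induced variance the authors highlight can be demonstrated with a deliberately simple stand-in relevance score (absolute correlation with the outcome, recomputed on repeated random training subsets); the original study used dedicated all-relevant feature selection, not this measure:

```python
import numpy as np

def relevance_across_splits(X, y, n_splits=20, train_frac=0.7, seed=0):
    """Score each feature on repeated random training subsets by the absolute
    correlation with the binary outcome, and report the per-feature mean and
    split-to-split standard deviation. The spread across splits illustrates
    how the train/validation split itself adds variance to importance
    estimates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    m = int(train_frac * n)
    scores = []
    for _ in range(n_splits):
        tr = rng.choice(n, size=m, replace=False)
        Xc = X[tr] - X[tr].mean(axis=0)
        yc = y[tr] - y[tr].mean()
        corr = (Xc * yc[:, None]).sum(axis=0) / (
            np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
        scores.append(np.abs(corr))
    scores = np.array(scores)
    return scores.mean(axis=0), scores.std(axis=0)
```

A feature whose score varies strongly across splits is exactly the kind whose relevance can only be judged with a cross-validated approach.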

  20. Numerical analysis on performance and contaminated failures of the miniature split Stirling cryocooler

    NASA Astrophysics Data System (ADS)

    He, Ya-Ling; Zhang, Dong-Wei; Yang, Wei-Wei; Gao, Fan

    2014-01-01

    A mathematical model based on the thermodynamic theory of variable mass is developed for the split Stirling refrigerator, in which the whole machine is treated with a one-dimensional approach while the processes in the regenerator are simulated with a two-dimensional approach. First, the influence of ideal frost layer distributions on the flow and heat transfer in the regenerator and on the performance of the Stirling cryocooler is simulated. Then, the distribution of the contaminating water vapor and its condensation and deposition process is qualitatively analyzed. Finally, the lifetime of the refrigerator is evaluated based on the calculated data. The results show that when the refrigerator is operated with a uniform distribution of water vapor partial pressure in the regenerator, the cooling capacity is reduced by over 10% at about 631 h, and the power consumption of the compressor is increased by over 20% at about 1168 h. However, for a linear distribution of water vapor partial pressure, the refrigerator can work properly because the frost never reaches the failure criterion. It is also found that when the Stirling cryocooler restarts after a shutdown, the cooling capacity is reduced by 10% once the frost mass exceeds 7.05 mg, and there is no cooling capacity once the frost mass reaches 41.2 mg.

  1. Photon-number-splitting versus cloning attacks in practical implementations of the Bennett-Brassard 1984 protocol for quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niederberger, Armand; Scarani, Valerio; Gisin, Nicolas

    2005-04-01

    In practical quantum cryptography, the source sometimes produces multiphoton pulses, thus enabling the eavesdropper Eve to perform the powerful photon-number-splitting (PNS) attack. Recently, it was shown by Curty and Luetkenhaus [Phys. Rev. A 69, 042321 (2004)] that the PNS attack is not always the optimal attack when two photons are present: if errors are present in the Alice-Bob correlations and if Eve cannot modify Bob's detection efficiency, Eve gains a larger amount of information using another attack based on a 2→3 cloning machine. In this work, we extend this analysis to all Alice-Bob distances. We identify a new incoherent 2→3 cloning attack which performs better than those described before. Using it, we confirm that, in the presence of errors, Eve's better strategy uses 2→3 cloning attacks instead of the PNS. However, this improvement is very small for implementations of the Bennett-Brassard 1984 (BB84) protocol. Thus, the existence of these new attacks is conceptually interesting but basically does not change the value of the security parameters of BB84. The main results are valid both for Poissonian and sub-Poissonian sources.

  2. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, N; DiCostanzo, D; Fullenkamp, M

    2015-06-15

    Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population, and for selected disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and to SRS/SRT/SBRT patients, as well as some large deviations which were spot-checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. 
Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
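    The tolerance recipe described in the abstract (per-plan standard deviations of the daily couch offsets, then the median plus one standard error of those SDs across the plan population) can be sketched as follows. The offset data, plan count, and 0.3 cm spread below are synthetic stand-ins, not the study's data:

    ```python
    import numpy as np

    def couch_tolerance(daily_offsets_by_plan):
        """Tolerance for one couch axis: median of the per-plan standard
        deviations of daily offsets, plus the standard error of those SDs
        across the plan population."""
        sds = np.array([np.std(offsets, ddof=1) for offsets in daily_offsets_by_plan])
        standard_error = np.std(sds, ddof=1) / np.sqrt(len(sds))
        return np.median(sds) + standard_error

    # synthetic example: 100 plans, 30 fractions each, ~0.3 cm setup spread
    rng = np.random.default_rng(0)
    plans = [rng.normal(0.0, 0.3, size=30) for _ in range(100)]
    tol = couch_tolerance(plans)  # cm, one axis
    ```

    In practice this would be run once per axis (lateral, longitudinal, vertical) on the offsets queried from the R&V database, after excluding overridden mis-indexed fractions.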

  3. Evaluation of the contributions of four components of gross domestic product in various regions in China.

    PubMed

    Wu, Sanmang; Lei, Yalin; Li, Li

    2015-01-01

    Four major components influence the growth of the gross domestic product in Chinese provinces: consumption, investment, transnational exports, and inter-provincial exports. By splitting a competitive input-output table into a non-competitive input-output table, this study used an input-output model to measure the contributions of the four components of gross domestic product in various regions in China. We found that international exports drove the growth of the gross domestic product more strongly in the eastern region than in other regions. Investment and inter-provincial exports were the major impetus for gross domestic product growth in the central and western regions. We also found that consumption played a minimal role in driving the growth of the gross domestic product in all regions in China. According to these findings, although various regions can share much in terms of policies to transform the impetus for economic growth, there should be different foci for different regions. Their shared policy is to increase the role of final consumption in stimulating economic growth. Region-specific policies mandate that the eastern region should strengthen the driving force provided by international exports and that the central and western regions should strengthen indigenous growth capabilities by improving scientific innovation, industrial support, and institutional innovation.
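    The decomposition described above rests on the standard Leontief input-output model, x = (I − A)⁻¹f: the value added (GDP) induced by final-demand component k is vᵀ(I − A)⁻¹f_k. A minimal sketch with a hypothetical three-sector economy (all coefficients invented for illustration, not the study's tables):

    ```python
    import numpy as np

    # hypothetical 3-sector economy: A holds intermediate-input coefficients,
    # v the value-added coefficients (here chosen as 1 minus column sums of A),
    # and F one final-demand column per component: consumption, investment,
    # international exports, inter-provincial exports
    A = np.array([[0.1, 0.2, 0.0],
                  [0.3, 0.1, 0.2],
                  [0.1, 0.1, 0.3]])
    v = np.array([0.5, 0.4, 0.5])
    F = np.array([[50., 20., 30., 10.],
                  [30., 40., 20., 15.],
                  [20., 10., 25., 10.]])

    L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1
    gdp_by_component = v @ L @ F       # value added induced by each component
    shares = gdp_by_component / gdp_by_component.sum()
    ```

    With v equal to one minus the column sums of A, the induced value added across components sums exactly to total final demand, a useful consistency check on any such decomposition.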

  4. Rasch analysis of the UK Functional Assessment Measure in patients with complex disability after stroke.

    PubMed

    Medvedev, Oleg N; Turner-Stokes, Lynne; Ashford, Stephen; Siegert, Richard J

    2018-02-28

    To determine whether the UK Functional Assessment Measure (UK FIM+FAM) fits the Rasch model in stroke patients with complex disability and, if so, to derive a conversion table of Rasch-transformed interval level scores. The sample included a UK multicentre cohort of 1,318 patients admitted for specialist rehabilitation following a stroke. Rasch analysis was conducted for the 30-item scale including 3 domains of items measuring physical, communication and psychosocial functions. The fit of items to the Rasch model was examined using 3 different analytical approaches referred to as "pathways". The best fit was achieved in the pathway where responses from motor, communication and psychosocial domains were summarized into 3 super-items and where some items were split because of differential item functioning (DIF) relative to left and right hemisphere location (χ2 (10) = 14.48, p = 0.15). Re-scoring of items showing disordered thresholds did not significantly improve the overall model fit. The UK FIM+FAM with domain super-items satisfies expectations of the unidimensional Rasch model without the need for re-scoring. A conversion table was produced to convert the total scale scores into interval-level data based on person estimates of the Rasch model. The clinical benefits of interval-transformed scores require further evaluation.
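    The conversion table mentioned above maps each raw total score to a Rasch person estimate. A minimal sketch for the dichotomous Rasch model (the UK FIM+FAM items are actually polytomous, and the item difficulties below are invented), solving expected score = raw score by bisection:

    ```python
    import math

    def rasch_theta(raw_score, item_difficulties, tol=1e-8):
        """Person estimate theta for a raw total score under the dichotomous
        Rasch model, found by bisection on the expected score. Raw scores of
        0 and the maximum have no finite estimate and are excluded."""
        def expected(theta):
            return sum(1.0 / (1.0 + math.exp(-(theta - b)))
                       for b in item_difficulties)
        lo, hi = -10.0, 10.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if expected(mid) < raw_score:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # hypothetical item difficulties (logits); build raw-score -> theta table
    items = [-1.5, -0.5, 0.0, 0.5, 1.5]
    table = {r: round(rasch_theta(r, items), 3) for r in range(1, len(items))}
    ```

    The resulting lookup table converts ordinal totals into interval-level logit scores, which is the role of the conversion table the study derives.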

  5. Evaluation of the Contributions of Four Components of Gross Domestic Product in Various Regions in China

    PubMed Central

    Wu, Sanmang; Lei, Yalin; Li, Li

    2015-01-01

    Four major components influence the growth of the gross domestic product in Chinese provinces: consumption, investment, transnational exports, and inter-provincial exports. By splitting a competitive input-output table into a non-competitive input-output table, this study used an input-output model to measure the contributions of the four components of gross domestic product in various regions in China. We found that international exports drove the growth of the gross domestic product more strongly in the eastern region than in other regions. Investment and inter-provincial exports were the major impetus for gross domestic product growth in the central and western regions. We also found that consumption played a minimal role in driving the growth of the gross domestic product in all regions in China. According to these findings, although various regions can share much in terms of policies to transform the impetus for economic growth, there should be different foci for different regions. Their shared policy is to increase the role of final consumption in stimulating economic growth. Region-specific policies mandate that the eastern region should strengthen the driving force provided by international exports and that the central and western regions should strengthen indigenous growth capabilities by improving scientific innovation, industrial support, and institutional innovation. PMID:25915927

  6. [Upright posture of man and morphologic evolution of the musculi extensores digitorum pedis with reference to evolutionary myology. III].

    PubMed

    Kaneff, A

    1986-01-01

    The following anatomical structures were studied from the standpoint of evolutionary myology: M. extensor hallucis longus (MEHL), M. extensor digitorum longus (MEDL) with M. peroneus tertius (MP III), M. peroneus brevis (MPB) with M. peroneus digiti V (MPD V), M. extensor hallucis brevis (MEHB), M. extensor digitorum brevis (MEDB), and the Retinaculum musculorum extensorum imum (RMEI). The study was carried out on 3 different groups of material. The 1st group consists of human lower extremities; the number of extremities differs for the particular structures between 151 and 358 (see page 381). The 2nd group consists of 122 Membra pelvina from Marsupialia, Insectivora, and Primates; Table 1 lists both the mammalian species and the number of extremities studied. The extremities of the 1st and 2nd groups were preserved in a manner suitable for macroscopic preparation. The 3rd group consists of 71 lower extremities from embryos and fetuses. The lower legs and feet were stained either according to the method described by Morel and Bassal with eosin added or according to Weigert, and complete series of cross-sections were prepared from this material. Table 2 shows the age of the embryos (VCL [mm]) as well as the number of extremities studied. It is important that, up to the age of 46 mm VCL, the difference in age between the embryos usually amounts to only 0.5-1.0 mm. This small age difference allows the changes in construction during organogenesis to be followed very closely. Comparison of the 3 groups shows the following changes for the above-mentioned muscles: the M. extensor hallucis longus (MEHL) is an unsplit muscle, as is its tendon, which inserts at the distal phalanx of the hallux. This primitive form of the muscle occurs in 51.12% of humans. In 48.88% of cases, additional tendons and muscles are formed by the MEHL. 
Most of these supplements are positioned on the medial side of the main tendon; only a few lie to the lateral side. The supplementary tendons, medial as well as lateral, occasionally possess a muscle belly of their own. The muscle of the medial tendon is split off from the proximal margin of the MEHL; the muscle of the lateral tendon is split off from the distal margin of the MEHL. (ABSTRACT TRUNCATED AT 400 WORDS)

  7. An On-Chip Learning Neuromorphic Autoencoder With Current-Mode Transposable Memory Read and Virtual Lookup Table.

    PubMed

    Cho, Hwasuk; Son, Hyunwoo; Seong, Kihwan; Kim, Byungsub; Park, Hong-June; Sim, Jae-Yoon

    2018-02-01

    This paper presents an IC implementation of an on-chip learning neuromorphic autoencoder unit in the form of a rate-based spiking neural network. With a current-mode signaling scheme embedded in a 500 × 500 6b SRAM-based memory, the proposed architecture achieves simultaneous processing of multiplications and accumulations. In addition, a transposable memory read for both forward and backward propagation and a virtual lookup table are proposed to perform unsupervised learning of a restricted Boltzmann machine. The IC is fabricated in a 28-nm CMOS process and verified in a three-layer encoder-decoder network for training and recovery of images with two-dimensional pixels. On a dataset of 50 digits, the IC shows a normalized root-mean-square error of 0.078. Measured energy efficiencies are 4.46 pJ per synaptic operation for inference and 19.26 pJ per synaptic weight update for learning. The learning performance is also estimated by simulations in which the proposed hardware architecture is extended to batch training on the 60 000-image MNIST dataset.
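    The forward and backward (transposed) memory reads and lookup-table weight updates described above implement, in hardware, contrastive-divergence training of a restricted Boltzmann machine. A software sketch of one-step contrastive divergence (CD-1) on a toy binary dataset; all sizes and learning rates here are invented, not the chip's parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """Bernoulli-Bernoulli RBM trained with CD-1, the usual software
        analogue of the on-chip unsupervised learning rule."""
        def __init__(self, n_vis, n_hid, lr=0.1):
            self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
            self.b_v = np.zeros(n_vis)
            self.b_h = np.zeros(n_hid)
            self.lr = lr

        def train_step(self, v0):
            ph0 = sigmoid(v0 @ self.W + self.b_h)        # forward read
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            pv1 = sigmoid(h0 @ self.W.T + self.b_v)      # transposed (backward) read
            ph1 = sigmoid(pv1 @ self.W + self.b_h)
            n = len(v0)
            self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
            self.b_v += self.lr * (v0 - pv1).mean(axis=0)
            self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
            return np.mean((v0 - pv1) ** 2)              # reconstruction error

    # toy data: learn to reconstruct two complementary binary patterns
    data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
    rbm = RBM(4, 8)
    errs = [rbm.train_step(data) for _ in range(500)]
    ```

    The two matrix products per step (`v0 @ W` and `h0 @ W.T`) are exactly the operations the transposable current-mode memory read accelerates.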

  8. [The "Würzburg T". A concept for optimization of early multiple trauma care in the emergency department].

    PubMed

    Kuhnigk, H; Steinhübel, B; Keil, T; Roewer, N

    2004-07-01

    Anaesthesia management, radiological diagnostics and the concept of damage-control surgery should be combined in the resuscitation room. The defined clinical targets are a complete CT scan and damage-control surgery performed in the shock room. Furthermore, minimised patient transfer and positioning with continuous access to the head, upper body and anaesthesia machine should be realised during diagnostic procedures. Based on a carbon slide fixed on a turntable and an innovative arrangement of diagnostic devices, a three-phase treatment algorithm has been established. Phase A includes the primary survey, anaesthetic management and ultrasound examination. After a turn of the table, conventional X-ray diagnostics are performed in phase B. Tracks for the slide enable immediate transfer to a spiral CT scanner without additional patient repositioning (phase C). After the complete CT scan, rearrangement of the table to phase A facilitates immediate damage-control surgery. To accelerate device operation and treatment, the integrated anaesthesia workstation is ceiling-mounted and manoeuvres close to the patient. This concept realises complete diagnostic procedures and damage-control surgery without time-consuming patient transfer or rearrangement.

  9. Phase Transitions in Aluminum Under Shockless Compression at the Z Machine

    NASA Astrophysics Data System (ADS)

    Davis, Jean-Paul; Brown, Justin; Shulenburger, Luke; Knudson, Marcus

    2017-06-01

    Aluminum 6061 alloy has been used extensively as an electrode material in shockless ramp-wave experiments at the Z Machine. Previous theoretical work suggests that the principal quasi-isentrope in aluminum should pass through two phase transitions at multi-megabar pressures, first from the ambient fcc phase to hcp at around 200 GPa, then to bcc at around 320 GPa. Previous static measurements in a diamond-anvil cell have detected the hcp phase above 200 GPa along the room-temperature isotherm. Recent laser-based dynamic compression experiments have observed both the hcp and bcc phases using X-ray diffraction. Here we present high-accuracy velocity waveform data taken on pure and alloy aluminum materials at the Z Machine under shockless compression with 200-ns rise time to 400 GPa, using copper electrodes and lithium-fluoride windows. These are compared to recent EOS tables developed at Los Alamos National Laboratory, to our own results from diffusion quantum Monte Carlo calculations, and to multi-phase EOS models with phase-transition kinetics. We find clear evidence of a fast transition around 200 GPa as expected, and a possible suggestion of a slower transition at higher pressure. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. Laser-treated stainless steel mini-screw implants: 3D surface roughness, bone-implant contact, and fracture resistance analysis

    PubMed Central

    Kang, He-Kyong; Chu, Tien-Min; Dechow, Paul; Stewart, Kelton; Kyung, Hee-Moon

    2016-01-01

    Summary Background/Objectives: This study investigated the biomechanical properties and bone-implant interface response of machined and laser surface-treated stainless steel (SS) mini-screw implants (MSIs). Material and Methods: Forty-eight SS MSIs, 1.3 mm in diameter and 6 mm long, were divided into two groups. The control (machined surface) group received no surface treatment; the laser-treated group received Nd-YAG laser surface treatment. Half of each group was used for examining surface roughness (Sa and Sq), surface texture, and fracture resistance. The remaining MSIs were placed in the maxillae of six skeletally mature male beagle dogs in a randomized split-mouth design. A pair with the same surface treatment was placed on the same side and immediately loaded with 200 g nickel-titanium coil springs for 8 weeks. After sacrifice, the bone-implant contact (BIC) for each MSI was calculated using micro-computed tomography. An analysis of variance model and two-sample t tests were used for statistical analysis, with a significance level of P < 0.05. Results: The mean values of Sa and Sq were significantly higher in the laser-treated group than in the machined group (P < 0.05). There were no significant differences in fracture resistance or BIC between the two groups. Limitations: This was an animal study. Conclusions/Implications: Laser treatment increased surface roughness without compromising fracture resistance; despite the increased roughness, it did not improve BIC. Overall, it appears that medical-grade SS has the potential to substitute for titanium alloy MSIs. PMID:25908868

  11. VizieR Online Data Catalog: 231 AGN candidates from the 2FGL catalog (Doert+, 2014)

    NASA Astrophysics Data System (ADS)

    Doert, M.; Errando, M.

    2016-01-01

    The second Fermi-LAT source catalog (2FGL; Nolan et al. 2012, cat. J/ApJS/199/31) is the deepest all-sky survey available in the gamma-ray band. It contains 1873 sources, of which 576 remain unassociated. The Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope started operations in 2008. In this work, machine-learning algorithms are used to identify unassociated sources in the 2FGL catalog with properties similar to gamma-ray-emitting Active Galactic Nuclei (AGN). This analysis finds 231 high-confidence AGN candidates (see Table 3). (1 data file).
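    The classification step can be illustrated with a supervised classifier trained on associated sources and then applied to unassociated ones. The sketch below uses invented stand-in features and a plain logistic-regression classifier fitted by gradient descent, not the paper's actual feature set or algorithms:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # hypothetical gamma-ray features (spectral index, variability index,
    # spectral curvature) -- invented stand-ins for 2FGL catalog columns
    def make_sources(n, agn):
        spec = rng.normal(2.2 if agn else 2.5, 0.15, n)
        var = rng.normal(1.2 if agn else 0.4, 0.3, n)
        curv = rng.normal(0.3 if agn else 0.8, 0.2, n)
        return np.column_stack([spec, var, curv])

    # training set: associated sources with known AGN / non-AGN labels
    X = np.vstack([make_sources(400, True), make_sources(400, False)])
    y = np.concatenate([np.ones(400), np.zeros(400)])

    # logistic regression by full-batch gradient descent (a simple stand-in
    # for the classifiers used in the paper)
    Xb = np.column_stack([X, np.ones(len(X))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(3000):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= 0.5 * Xb.T @ (p - y) / len(y)

    # score "unassociated" sources; keep high-confidence AGN candidates
    unassoc = np.column_stack([make_sources(100, True), np.ones(100)])
    p_agn = 1.0 / (1.0 + np.exp(-unassoc @ w))
    candidates = np.where(p_agn > 0.9)[0]
    ```

    Thresholding the predicted probability, as in the last line, is how a candidate list like the catalog's 231 high-confidence AGN would be drawn from the unassociated sample.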

  12. Advanced composites: Fabrication processes for selected resin matrix materials

    NASA Technical Reports Server (NTRS)

    Welhart, E. K.

    1976-01-01

    This design note is based on the present state of the art of epoxy and polyimide matrix composite fabrication technology. Boron/epoxy, boron/polyimide, graphite/epoxy, and graphite/polyimide structural parts can be successfully fabricated. Fabrication cycles for polyimide matrix composites have been shortened to near epoxy cycle times. Nondestructive testing has proven useful in detecting defects and anomalies in composite structural elements. Fabrication methods and tooling materials are discussed, along with the advantages and disadvantages of different tooling materials. Types of honeycomb core, material costs, and fabrication methods are tabulated for comparison. Fabrication limits based on tooling size, pressure capabilities, and various machining operations are also discussed.

  13. Advanced light source: Compendium of user abstracts and technical reports,1993-1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    1997-04-01

    This compendium contains abstracts written by users summarizing research completed or in progress from 1993-1996, ALS technical reports describing ongoing efforts related to improvement in machine operations and research and development projects, and information on ALS beamlines planned through 1998. Two tables of contents organize the user abstracts by beamline and by area of research, and an author index makes abstracts accessible by author and by principal investigator. Technical details for each beamline, including whom to contact for additional information, can be found in the beamline information section. Separate abstracts have been indexed into the database for contributions to this compendium.

  14. USSR Space Life Sciences Digest, issue 25

    NASA Technical Reports Server (NTRS)

    Hooke, Lydia Razran (Editor); Teeter, Ronald (Editor); Garshnek, Victoria (Editor); Rowe, Joseph (Editor)

    1990-01-01

    This is the twenty-fifth issue of NASA's Space Life Sciences Digest. It contains abstracts of 42 journal papers or book chapters published in Russian and of 3 Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. The abstracts in this issue have been identified as relevant to 26 areas of space biology and medicine. These areas include: adaptation, body fluids, botany, cardiovascular and respiratory systems, developmental biology, endocrinology, enzymology, equipment and instrumentation, exobiology, gravitational biology, habitability and environmental effects, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, psychology, radiobiology, reproductive system, and space biology and medicine.

  15. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking feature selection algorithms and cost functions. This framework allows the user to treat the search space as a Boolean lattice and has its core coded in C++ for computational efficiency. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. In addition, the framework comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
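    Treating the search space as a Boolean lattice means every subset of features is a lattice node scored by a cost function. A minimal sketch of an exhaustive lattice search with a penalized conditional-entropy cost (one example of the kind of cost function such frameworks ship with; the instance below is synthetic and feasible only for small feature counts, as in benchmarking instances):

    ```python
    import itertools
    import numpy as np

    def cost(X, y, subset):
        """Mean conditional entropy of y given the selected features,
        plus a small penalty per selected feature."""
        if not subset:
            p = np.bincount(y) / len(y)
            p = p[p > 0]
            return -(p * np.log2(p)).sum()
        keys = [tuple(row) for row in X[:, subset]]
        h = 0.0
        for k in set(keys):
            idx = [i for i, kk in enumerate(keys) if kk == k]
            p = np.bincount(y[idx], minlength=2) / len(idx)
            p = p[p > 0]
            h += len(idx) / len(y) * -(p * np.log2(p)).sum()
        return h + 0.01 * len(subset)

    def exhaustive_search(X, y):
        """Walk the full Boolean lattice of feature subsets (2^n nodes)."""
        n = X.shape[1]
        return min(
            (tuple(s) for r in range(n + 1)
             for s in itertools.combinations(range(n), r)),
            key=lambda s: cost(X, y, list(s)),
        )

    # toy instance: the label depends only on features 0 and 2 (XOR)
    rng = np.random.default_rng(3)
    X = rng.integers(0, 2, size=(200, 4))
    y = X[:, 0] ^ X[:, 2]
    best = exhaustive_search(X, y)
    ```

    The XOR target is deliberately chosen so that neither feature 0 nor feature 2 helps alone, which is exactly the kind of instance where lattice-based search beats greedy ranking.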

  16. [The noise factor in the manufacture of chocolate and pastry products].

    PubMed

    Tsvetkov, D; Kalburova, F

    1988-01-01

    Noise and vibration conditions and the hearing of women workers in confectionery enterprises were studied. Noise was found to be the leading occupational hazard in this working environment; in half of the workplaces it significantly exceeds the sanitary norms, with peak values sometimes reaching 102-105 dB(A). Especially intense noise sources are certain machines and industrial operations: vibratory tables, mixers, cocoa rollers, sugar and cocoa grinders, and the "hammering of forms" operation. Whole-body vibration is a rarely encountered hazard in this industry. A considerable hearing loss was established in the examined women workers.

  17. Physiological and morphological response patterns of Populus deltoides to alluvial groundwater

    USGS Publications Warehouse

    Cooper, D.J.; D'Amico, D.R.; Scott, M.L.

    2003-01-01

    We examined the physiological and morphological response patterns of plains cottonwood [Populus deltoides subsp. monilifera (Aiton) Eck.] to acute water stress imposed by groundwater pumping. Between 3 and 27 July 1996, four large pumps were used to withdraw alluvial groundwater from a cottonwood forest along the South Platte River, near Denver, Colorado, USA. The study was designed as a stand-level, split-plot experiment with factorial treatments including two soil types (a gravel soil and a loam topsoil over gravel), two water table drawdown depths (∼0.5 m and >1.0 m), and one water table control (no drawdown) per soil type. Measurements of water table depth, soil water potential (Ψs), predawn and midday shoot water potential (Ψpd and Ψmd), and D/H (deuterium/hydrogen) ratios of different water sources were made in each of six 600-m2 plots prior to, during, and immediately following pumping. Two additional plots were established and measured to examine the extent to which surface irrigation could be used to mitigate the effects of deep drawdown on P. deltoides for each soil type. Recovery of tree water status following pumping was evaluated by measuring stomatal conductance (gs) and xylem water potential (Ψxp) on approximately hourly time steps from before dawn to mid-afternoon on 11 August 1996 in watered and unwatered, deep-drawdown plots on gravel soils. P. deltoides responded to abrupt alluvial water table decline with decreased shoot water potential followed by leaf mortality. Ψpd and percent leaf loss were significantly related to the magnitude of water table declines. The onset and course of these responses were influenced by short-term variability in surface and ground water levels, acting in concert with physiological and morphological adjustments. Decreases in Ψpd corresponded with increases in Ψmd, suggesting shoot water status improved in response to stomatal closure and crown dieback. 
Crown dieback caused by xylem cavitation likely occurred when Ψpd reached −0.4 to −0.8 MPa. The application of surface irrigation allowed trees to maintain favorable water status with little or no apparent cavitation, even in deep-drawdown plots. Two weeks after the partial canopy dieback and cessation of pumping, gs and Ψxp measurements indicated that water stress persisted in unwatered P. deltoides in deep-drawdown plots.

  18. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review.

    PubMed

    Dallora, Ana Luiza; Eivazzadeh, Shahryar; Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Dementia is a complex disorder characterized by poor outcomes for patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. To achieve our goal we carried out a systematic literature review, in which three large databases (PubMed, Scopus and Web of Science) were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single backward-snowballing pass was done to identify further studies. A quality checklist was employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted into summary tables. In total, 37 papers were included. The data summary results showed that current research is focused on the investigation of patients with mild cognitive impairment (MCI) who will progress to Alzheimer's disease (AD), using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a populational focus. Neuroimaging was the most commonly used variable. Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been recruited by most studies, and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or MS techniques. 
Finally, care should be taken when interpreting the reported accuracy of ML techniques, given studies' different contexts.

  19. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review

    PubMed Central

    Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Background Dementia is a complex disorder characterized by poor outcomes for patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. Objective The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. Method To achieve our goal we carried out a systematic literature review, in which three large databases (PubMed, Scopus and Web of Science) were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single backward-snowballing pass was done to identify further studies. A quality checklist was employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted into summary tables. Results In total, 37 papers were included. The data summary results showed that current research is focused on the investigation of patients with mild cognitive impairment (MCI) who will progress to Alzheimer's disease (AD), using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a populational focus. Neuroimaging was the most commonly used variable. Conclusions Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been recruited by most studies, and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or MS techniques. 
Finally, care should be taken when interpreting the reported accuracy of ML techniques, given studies’ different contexts. PMID:28662070

  20. Dynamic compressive properties of bovine knee layered tissue

    NASA Astrophysics Data System (ADS)

    Nishida, Masahiro; Hino, Yuki; Todo, Mitsugu

    2015-09-01

    In Japan, the most common articular disease is knee osteoarthritis. Among the many treatment methodologies, tissue engineering and regenerative medicine have recently attracted considerable attention. In this field, cells and scaffolds are important, both ex vivo and in vivo, and from the viewpoint of effective treatment the compatibility of mechanical properties matters in addition to histological features. In this study, the dynamic and static compressive properties of bovine articular cartilage-cancellous bone layered tissue were measured using a universal testing machine and a split Hopkinson pressure bar method. The compressive behavior of the layered tissue was examined, and the effects of strain rate on the maximum stress and the slope of the stress-strain curves were discussed.
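    In a split Hopkinson pressure bar test, the specimen response is recovered from the bar strain-gauge signals: in the standard one-wave analysis, strain rate is ε̇(t) = −2c₀ε_r(t)/L_s from the reflected pulse and stress is σ(t) = E(A_bar/A_spec)ε_t(t) from the transmitted pulse. A sketch with synthetic half-sine pulses (all bar and specimen parameters below are invented for illustration):

    ```python
    import numpy as np

    def shpb_analysis(eps_r, eps_t, dt, c0, E, A_bar, A_spec, Ls):
        """One-wave SHPB reduction: strain rate and strain from the
        reflected pulse, stress from the transmitted pulse."""
        strain_rate = -2.0 * c0 / Ls * eps_r        # 1/s
        strain = np.cumsum(strain_rate) * dt        # time integral
        stress = E * (A_bar / A_spec) * eps_t       # Pa
        return strain, strain_rate, stress

    # synthetic half-sine pulses, 100 us long, sampled every 0.1 us
    dt = 1e-7
    t = np.arange(0.0, 100e-6, dt)
    eps_r = -4e-4 * np.sin(np.pi * t / 100e-6)  # reflected pulse (tensile)
    eps_t = 2e-4 * np.sin(np.pi * t / 100e-6)   # transmitted pulse

    strain, rate, stress = shpb_analysis(
        eps_r, eps_t, dt,
        c0=5000.0,       # bar wave speed, m/s
        E=70e9,          # bar modulus, Pa
        A_bar=1.13e-4,   # bar cross-section, m^2
        A_spec=0.5e-4,   # specimen cross-section, m^2
        Ls=5e-3)         # specimen length, m
    ```

    The resulting stress-strain pair is what the strain-rate-effect discussion in the abstract is based on; the static universal-testing-machine data would be reduced in the conventional way instead.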

  1. Rapid and continuous analyte processing in droplet microfluidic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strey, Helmut; Kimmerling, Robert; Bakowski, Tomasz

    The compositions and methods described herein are designed to introduce functionalized microparticles into droplets that can be manipulated in microfluidic devices by fields, including electric (dielectrophoretic) or magnetic fields, and extracted by splitting a droplet to separate the portion of the droplet that contains the majority of the microparticles from the part that is largely devoid of the microparticles. Within the device, channels are variously configured at Y- or T-junctions that facilitate continuous, serial isolation and dilution of analytes in solution. The devices can be limited in the sense that they can be designed to output purified analytes that are then further analyzed in separate machines, or they can include additional channels through which purified analytes can be further processed and analyzed.

  2. The Dynamic Tensile Behavior of Railway Wheel Steel at High Strain Rates

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Han, Liangliang; Zhao, Longmao; Zhang, Ying

    2016-11-01

    Dynamic tensile tests on D1 railway wheel steel at high strain rates were conducted using a split Hopkinson tensile bar (SHTB) apparatus and compared with quasi-static tests. Three types of specimens, machined from three different positions of a railway wheel (the rim, web and hub), were prepared and examined. The rim specimens were found to have a higher yield stress and ultimate tensile strength than the web and hub specimens under both quasi-static and dynamic loading, and the wheel steel was shown to be strain-rate dependent in dynamic tension. The dynamic tensile fracture surfaces of all the wheel steel specimens show a cup-cone morphology on the macroscopic scale, with quasi-ductile fracture features on the microscopic scale.

  3. Machine learning of atmospheric chemistry. Applications to a global chemistry transport model.

    NASA Astrophysics Data System (ADS)

    Evans, M. J.; Keller, C. A.

    2017-12-01

    Atmospheric chemistry is central to many environmental issues such as air pollution, climate change, and stratospheric ozone loss. Chemistry transport models (CTMs) are a central tool for understanding these issues, whether for research or for forecasting. These models split the atmosphere into a large number of grid boxes and consider the emission of compounds into these boxes and their subsequent transport, deposition, and chemical processing. The chemistry is represented through a series of simultaneous ordinary differential equations, one for each compound. Given the difference in lifetimes between the chemical compounds (milliseconds for O(1D) to years for CH4), these equations are numerically stiff, and solving them accounts for a significant fraction of the computational burden of a CTM. We have investigated a machine learning approach to solving the differential equations instead of solving them numerically. From an annual simulation of the GEOS-Chem model we produced a training dataset consisting of the concentrations of compounds before and after the differential equations are solved, together with some key physical parameters, for every grid box and time step. From this dataset we trained a machine learning algorithm (a random regression forest) to predict the concentrations of the compounds after the integration step, based on the concentrations and physical state at the beginning of the time step. We then included this algorithm back into the GEOS-Chem model, bypassing the need to integrate the chemistry. This machine learning approach reproduces many of the characteristics of the full simulation and has the potential to be substantially faster. There is a wide range of applications for such an approach: generating boundary conditions, air quality forecasts, chemical data assimilation systems, centennial-scale climate simulations, etc. 
We discuss our approach's speed and accuracy, and highlight some potential future directions for improving it.
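    The emulation scheme described above, training a regressor on (state before integration → state after integration) pairs and substituting it for the stiff solver, can be sketched on a one-species toy mechanism. Here a k-nearest-neighbour regressor stands in for the random regression forest, and the "chemistry" is a single production/first-order-loss equation with an exact step solution (all rates and units invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # toy "chemistry": one species with production P and first-order loss K;
    # the exact integral over one model time step plays the role of the
    # stiff ODE solver we want to emulate
    K, DT = 0.5, 1.0
    def solver_step(c0, P):
        return P / K + (c0 - P / K) * np.exp(-K * DT)

    # training set: (concentration, production rate) before the step,
    # concentration after the step -- the analogue of the GEOS-Chem dataset
    c0 = rng.uniform(0.0, 10.0, 5000)
    P = rng.uniform(0.0, 5.0, 5000)
    c1 = solver_step(c0, P)
    train = np.column_stack([c0, P])

    def knn_predict(x, k=8):
        """k-nearest-neighbour regression: a simple stand-in for the
        random regression forest."""
        d = np.linalg.norm(train - x, axis=1)
        return c1[np.argpartition(d, k)[:k]].mean()

    # emulate the solver on an unseen state
    test_c0, test_P = 4.0, 2.0
    pred = knn_predict(np.array([test_c0, test_P]))
    true = solver_step(test_c0, test_P)
    ```

    In the real application the feature vector holds every species' concentration plus physical parameters per grid box, and the trained forest replaces the numerical integrator inside the model loop.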

  4. Oracle Applications Patch Administration Tool (PAT) Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2002-01-04

    PAT is a Patch Administration Tool that provides analysis, tracking, and management of Oracle Application patches. Its capabilities include the following. Patch Data Maintenance: track which Oracle Application patches have been applied to which database instance and machine. Patch Analysis: capture text files (readme.txt and driver files); comparison detail reports for forms, PL/SQL packages, SQL scripts, and JSP modules; parse and load the current applptch.txt (10.7), or load patch data from the Oracle Application database patch tables (11i). Display Analysis: compare a patch to be applied against the Oracle Application code versions currently installed in the appl_top. Patch Detail: module comparison detail; analyze and display a single Oracle Application module patch. Patch Management: automatic queueing and execution of patches. Administration: parameter maintenance (settings for the directory structure of the Oracle Application appl_top) and validation data maintenance (machine names and instances to patch). Operation: schedule a patch (queue for later execution), run a patch (queue for immediate execution), review the patch logs, and produce Patch Management reports.

  5. Nanopublications for exposing experimental data in the life-sciences: a Huntington's Disease case study.

    PubMed

    Mina, Eleni; Thompson, Mark; Kaliyaperumal, Rajaram; Zhao, Jun; van der Horst, Eelke; Tatum, Zuotian; Hettne, Kristina M; Schultes, Erik A; Mons, Barend; Roos, Marco

    2015-01-01

    Data from high throughput experiments often produce far more results than can ever appear in the main text or tables of a single research article. In these cases, the majority of new associations are often archived either as supplemental information in an arbitrary format or in publisher-independent databases that can be difficult to find. These data are not only lost from scientific discourse, but are also elusive to automated search, retrieval and processing. Here, we use the nanopublication model to make scientific assertions that were concluded from a workflow analysis of Huntington's Disease data machine-readable, interoperable, and citable. We followed the nanopublication guidelines to semantically model our assertions as well as their provenance metadata and authorship. We demonstrate interoperability by linking nanopublication provenance to the Research Object model. These results indicate that nanopublications can provide an incentive for researchers to expose data that are interoperable and machine-readable for future use and preservation, for which they can receive credit for their effort. Nanopublications can play a leading role in hypothesis generation, offering opportunities for large-scale data integration.

  6. High Throughput Multispectral Image Processing with Applications in Food Science.

    PubMed

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimating and even predicting food quality but also for detecting adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction and speed up quality assessment, without human intervention. The outcome of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian mixture models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we demonstrate its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.

  7. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on the direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features are accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute in the improvement of current standard CMM measuring capabilities. PMID:27754441

  8. Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review

    NASA Astrophysics Data System (ADS)

    Quitadamo, L. R.; Cavrini, F.; Sbernini, L.; Riillo, F.; Bianchi, L.; Seri, S.; Saggio, G.

    2017-02-01

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness and large availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameters selection are reported, making it impossible to reproduce study analysis and results. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the applications of SVM. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, by focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning reviewed papers are listed in tables and statistics of SVM use in the literature are presented. Suitability of SVM for HCI is discussed and critical comparisons with other classifiers are reported.
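    As a concrete illustration of the classifier family reviewed above, the following is a minimal linear SVM trained by stochastic sub-gradient descent on the regularized hinge loss (a Pegasos-style update) on synthetic two-class features. The feature distributions and hyperparameters are illustrative assumptions, not taken from any reviewed study:

```python
import random

random.seed(42)

# Synthetic two-class feature vectors (think band-power-like EEG/EMG features),
# augmented with a constant 1.0 so the bias is learned inside w.
def sample(n, mean, label):
    return [([random.gauss(mean, 1.0), random.gauss(mean, 1.0), 1.0], label)
            for _ in range(n)]

data = sample(100, 2.0, +1) + sample(100, -2.0, -1)
random.shuffle(data)

# Pegasos: stochastic sub-gradient descent on lam/2*||w||^2 + hinge loss.
lam, T = 0.01, 5000
w = [0.0, 0.0, 0.0]
for t in range(1, T + 1):
    x, y = random.choice(data)
    eta = 1.0 / (lam * t)
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    w = [(1 - eta * lam) * wi for wi in w]            # shrink (regularization)
    if margin < 1:                                    # hinge-loss sub-gradient
        w = [wi + eta * y * xi for wi, xi in zip(w, x)]

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

acc = sum(predict(x) == y for x, y in data) / len(data)
print("training accuracy:", acc)
```

Real HCI studies would replace the synthetic features with preprocessed EEG/EMG features, select hyperparameters by cross-validation, and often use kernel SVMs from a dedicated toolbox; reporting those choices is exactly the detail the review asks for.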

  9. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics, which predicts and implements ``noise''-induced phase-transitions (NITs) to accelerate versus decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: ``NIT-picking''(!!!), to optimize optimization-problems optimally (OOPO). Versus iso-``noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ``NIT-picking'' is ``noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-``science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata, ..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable, valid but limiting IMPEDING CRUTCHES(!!!) that ONLY IMPEDE latter-days new-insights!!!

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on June 12, 2013. Representatives from the U.S. Nuclear Regulatory Commission (NRC) and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and Table 1 presents the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates at a 99% confidence interval that split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report specifies a 95% confidence level for reported uncertainties (NFS 2013). Therefore, standard two sigma reporting values were divided by 1.96. In conclusion, most DER values were less than 3, and results are consistent with low (e.g., background) concentrations. The gross beta result for sample 5198W0014 was the exception. The ORAU gross beta result of 6.30 ± 0.65 pCi/L from location NRD is well above NFS's non-detected result of 1.56 ± 0.59 pCi/L. NFS's data package includes no detected result for any radionuclide at location NRD. At NRC's request, ORAU performed gamma spectroscopic analysis of sample 5198W0014 to identify analytes contributing to the relatively elevated gross beta results. This analysis identified detected amounts of naturally-occurring constituents, most notably Ac-228 from the thorium decay series, and does not suggest the presence of site-related contamination.
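    The duplicate error ratio used here is simply the absolute difference of the two results normalized by their combined one-sigma uncertainties. A minimal sketch, using the NRD gross beta values quoted above and, per the text, rescaling the NFS 95%-level uncertainty by 1.96:

```python
import math

def der(a, ua, b, ub):
    """Duplicate error ratio (normalized absolute difference) of two results
    a and b with one-sigma uncertainties ua and ub."""
    return abs(a - b) / math.sqrt(ua ** 2 + ub ** 2)

# NRD gross beta: ORAU 6.30 +/- 0.65 pCi/L (one sigma) vs NFS 1.56 +/- 0.59 pCi/L,
# where the NFS uncertainty is reported at the 95% level and is rescaled to one sigma.
d = der(6.30, 0.65, 1.56, 0.59 / 1.96)
print(d)  # well above 3, so these split results differ significantly
```

A DER at or below 3 would indicate agreement within uncertainties at the 99% confidence level, which is the criterion applied to the rest of Table 1.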

  11. Seismic constraints on the radial dependence of the internal rotation profiles of six Kepler subgiants and young red giants

    NASA Astrophysics Data System (ADS)

    Deheuvels, S.; Doğan, G.; Goupil, M. J.; Appourchaux, T.; Benomar, O.; Bruntt, H.; Campante, T. L.; Casagrande, L.; Ceillier, T.; Davies, G. R.; De Cat, P.; Fu, J. N.; García, R. A.; Lobel, A.; Mosser, B.; Reese, D. R.; Regulo, C.; Schou, J.; Stahn, T.; Thygesen, A. O.; Yang, X. H.; Chaplin, W. J.; Christensen-Dalsgaard, J.; Eggenberger, P.; Gizon, L.; Mathis, S.; Molenda-Żakowicz, J.; Pinsonneault, M.

    2014-04-01

    Context. We still do not understand which physical mechanisms are responsible for the transport of angular momentum inside stars. The recent detection of mixed modes that contain the clear signature of rotation in the spectra of Kepler subgiants and red giants gives us the opportunity to make progress on this question. Aims: Our aim is to probe the radial dependence of the rotation profiles for a sample of Kepler targets. For this purpose, subgiants and early red giants are particularly interesting targets because their rotational splittings are more sensitive to the rotation outside the deeper core than is the case for their more evolved counterparts. Methods: We first extracted the rotational splittings and frequencies of the modes for six young Kepler red giants. We then performed a seismic modeling of these stars using the evolutionary codes Cesam2k and astec. By using the observed splittings and the rotational kernels of the optimal models, we inverted the internal rotation profiles of the six stars. Results: We obtain estimates of the core rotation rates for these stars, and upper limits to the rotation in their convective envelope. We show that the rotation contrast between the core and the envelope increases during the subgiant branch. Our results also suggest that the core of subgiants spins up with time, while their envelope spins down. For two of the stars, we show that a discontinuous rotation profile with a deep discontinuity reproduces the observed splittings significantly better than a smooth rotation profile. Interestingly, the depths that are found to be most probable for the discontinuities roughly coincide with the location of the H-burning shell, which separates the layers that contract from those that expand. Conclusions: We characterized the differential rotation pattern of six young giants with a range of metallicities, and with both radiative and convective cores on the main sequence. 
This will bring observational constraints to the scenarios of angular momentum transport in stars. Moreover, if the existence of sharp gradients in the rotation profiles of young red giants is confirmed, it is expected to help in distinguishing between the physical processes that could transport angular momentum in the subgiant and red giant branches. Appendices and Tables 3-9 are available in electronic form at http://www.aanda.org

  12. Combining the spin-separated exact two-component relativistic Hamiltonian with the equation-of-motion coupled-cluster method for the treatment of spin-orbit splittings of light and heavy elements.

    PubMed

    Cao, Zhanli; Li, Zhendong; Wang, Fan; Liu, Wenjian

    2017-02-01

    The spin-separated exact two-component (X2C) relativistic Hamiltonian [sf-X2C+so-DKHn, J. Chem. Phys., 2012, 137, 154114] is combined with the equation-of-motion coupled-cluster method with singles and doubles (EOM-CCSD) for the treatment of spin-orbit splittings of open-shell molecular systems. Scalar relativistic effects are treated to infinite order from the outset via the spin-free part of the X2C Hamiltonian (sf-X2C), whereas the spin-orbit couplings (SOC) are handled at the CC level via the first-order Douglas-Kroll-Hess (DKH) type of spin-orbit operator (so-DKH1). Since the exponential of single excitations, i.e., exp(T 1 ), introduces sufficient spin orbital relaxations, the inclusion of SOC at the CC level is essentially the same in accuracy as the inclusion of SOC from the outset in terms of the two-component spinors determined variationally by the sf-X2C+so-DKH1 Hamiltonian, but is computationally more efficient. Therefore, such an approach (denoted as sf-X2C-EOM-CCSD(SOC)) can achieve uniform accuracy for the spin-orbit splittings of both light and heavy elements. For light elements, the treatment of SOC can even be postponed until the EOM step (denoted as sf-X2C-EOM(SOC)-CCSD), so as to further reduce the computational cost. To reveal the efficacy of sf-X2C-EOM-CCSD(SOC) and sf-X2C-EOM(SOC)-CCSD, the spin-orbit splittings of the 2 Π states of monohydrides up to the sixth row of the periodic table are investigated. The results show that sf-X2C-EOM-CCSD(SOC) predicts very accurate results (within 5%) for elements up to the fifth row, whereas sf-X2C-EOM(SOC)-CCSD is useful only for light elements (up to the third row but with some exceptions). For comparison, the sf-X2C-S-TD-DFT-SOC approach [spin-adapted open-shell time-dependent density functional theory, Mol. Phys., 2013, 111, 3741] is applied to the same systems. The overall accuracy (1-10%) is satisfactory.

  13. High-performance sparse matrix-matrix products on Intel KNL and multicore architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagasaka, Y; Matsuoka, S; Azad, A

    Sparse matrix-matrix multiplication (SpGEMM) is a computational primitive that is widely used in areas ranging from traditional numerical applications to recent big data analysis and machine learning. Although many SpGEMM algorithms have been proposed, hardware-specific optimizations for multi- and many-core processors are lacking, and a detailed analysis of their performance under various use cases and matrices is not available. We first identify and mitigate multiple bottlenecks with memory management and thread scheduling on Intel Xeon Phi (Knights Landing, or KNL). Specifically targeting multi- and many-core processors, we develop a hash-table-based algorithm and optimize a heap-based shared-memory SpGEMM algorithm. We examine their performance together with other publicly available codes. Unlike earlier work, our evaluation also includes use cases that are representative of real graph algorithms, such as multi-source breadth-first search and triangle counting. Our hash-table-based and heap-based algorithms show significant speedups over existing libraries in the majority of cases, while different algorithms dominate the other scenarios depending on matrix size, sparsity, compression factor and operation type. We distill the in-depth evaluation results into a recipe for choosing the best SpGEMM algorithm for a target scenario. A critical finding is that hash-table-based SpGEMM gets a significant performance boost if the nonzeros are not required to be sorted within each row of the output matrix.
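    The core of a hash-table-based row-wise SpGEMM can be sketched in a few lines. This toy version represents sparse matrices as dict-of-dicts rather than real CSR arrays, and, in line with the finding above, it never sorts the columns of an output row:

```python
def spgemm(A, B):
    """Row-wise SpGEMM: C = A @ B with a per-row hash-table accumulator.
    A and B are sparse matrices in {row: {col: value}} form."""
    C = {}
    for i, arow in A.items():
        acc = {}                        # hash table: column -> partial sum
        for k, aik in arow.items():     # nonzeros of row i of A
            for j, bkj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0.0) + aik * bkj
        if acc:
            C[i] = acc                  # columns left unsorted on purpose
    return C

A = {0: {0: 1.0, 1: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}}
print(spgemm(A, B))
```

A production implementation would parallelize over rows, pre-size the hash tables from an upper-bound pass, and use flat arrays instead of Python dicts; the accumulation pattern, however, is the same.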

  14. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
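    The axis-orthogonal partitioning at the heart of the HBT algorithm can be sketched as a recursive median split along the widest axis. This is a simplification: the actual HBT chooses splits to reduce model error and attaches validity functions and local LSSVMs to the leaves, all of which this toy omits:

```python
def hbt_partition(points, min_size=4):
    """Recursively split points with axis-orthogonal cuts (median of the
    axis with the largest spread), building the leaves of a binary tree."""
    if len(points) <= min_size:
        return [points]
    dims = len(points[0])
    spreads = [max(p[d] for p in points) - min(p[d] for p in points)
               for d in range(dims)]
    axis = spreads.index(max(spreads))          # widest axis
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                          # median split
    return (hbt_partition(pts[:mid], min_size)
            + hbt_partition(pts[mid:], min_size))

leaves = hbt_partition([(float(i), float(i % 5)) for i in range(32)])
print(len(leaves), "leaf subdomains")
```

Each leaf subdomain would then hold one local model (an LSSVM in the proposed LNF network), with smooth validity functions blending the local predictions into a single output.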

  15. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
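    The standard pipeline that this paper improves on (cross-validation on a training portion for parameter selection, followed by a single evaluation on a held-out test set) can be sketched as follows. The 1-D threshold "classifier" and the data are illustrative assumptions, and the paper's own "cross-validation and cross-testing" scheme is deliberately not reproduced here:

```python
import random

random.seed(1)

# Synthetic 1-D data: label is 1 when x > 1.0, with 10% label noise.
data = []
for _ in range(200):
    x = random.uniform(0.0, 2.0)
    y = int(x > 1.0)
    if random.random() < 0.1:
        y = 1 - y
    data.append((x, y))

train, test = data[:150], data[150:]   # held-out test set, never used in CV

def accuracy(theta, pairs):
    return sum(int(x > theta) == y for x, y in pairs) / len(pairs)

# 5-fold cross-validation on the training set to pick the threshold parameter.
# (This classifier has no fitting step, so each fold reduces to validation-set
# scoring; with a real classifier, fitting would happen on the other folds.)
def cv_score(theta, folds=5):
    n = len(train) // folds
    return sum(accuracy(theta, train[f * n:(f + 1) * n])
               for f in range(folds)) / folds

candidates = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5]
best = max(candidates, key=cv_score)
print("chosen threshold:", best, "test accuracy:", accuracy(best, test))
```

The trade-off the paper addresses is visible here: the 50 test points held out for the final estimate are unavailable for choosing `best`, and with limited data each side of that split starves the other.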

  16. Analysis about diamond tool wear in nano-metric cutting of single crystal silicon using molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Liang, Yingchun; Chen, Mingjun; Tong, Zhen; Chen, Jiaxuan

    2010-10-01

    Tool wear not only changes the tool's geometric accuracy and integrity, but also decreases the machining precision and surface integrity of the workpiece, which affect the performance and service life of the workpiece in ultra-precision machining. Researchers have carried out many experimental studies and simulation analyses, but there are still large disagreements on the wear mechanism, especially at the nano scale. In this paper, a three-dimensional simulation model is built to simulate nanometric cutting of single crystal silicon with a non-rigid right-angle diamond tool with 0° rake angle and 0° clearance angle using the molecular dynamics (MD) simulation approach, which is used to investigate diamond tool wear during the nanometric cutting process. A Tersoff potential is employed for the interactions between carbon-carbon, silicon-silicon and carbon-silicon atoms. The tool experiences high alternating shear stress, so tool wear first appears at the cutting edge, where strength is low. At the corner, the tool is split along the {1 1 1} crystal plane, which forms tipping. The wear at the flank face is a structural transformation of diamond, in which the diamond structure transforms into a sheet graphite structure. Owing to the tool wear, the cutting force increases.

  17. Discrimination of Active and Weakly Active Human BACE1 Inhibitors Using Self-Organizing Map and Support Vector Machine.

    PubMed

    Li, Hang; Wang, Maolin; Gong, Ya-Nan; Yan, Aixia

    2016-01-01

    β-secretase (BACE1) is an aspartyl protease, which is considered a novel vital target in Alzheimer's disease therapy. We collected a data set of 294 BACE1 inhibitors and built six classification models to discriminate active from weakly active inhibitors using Kohonen's self-organizing map (SOM) method and the support vector machine (SVM) method. The molecular descriptors were calculated using the program ADRIANA.Code. We adopted two different methods for the training/test set split: random selection and the self-organizing map method. The descriptors were selected by F-score and stepwise linear regression analysis. The best SVM model, Model 2C, has good prediction performance on the test set, with prediction accuracy, sensitivity (SE), specificity (SP) and Matthews correlation coefficient (MCC) of 89.02%, 90%, 88% and 0.78, respectively. Model 1A is the best SOM model, whose accuracy and MCC on the test set were 94.57% and 0.98, respectively. The lone-pair electronegativity and polarizability related descriptors contributed importantly to the bioactivity of the BACE1 inhibitors. Extended-Connectivity Fingerprints_4 (ECFP_4) analysis found some key substructural features, which could be helpful for further drug design research. The SOM and SVM models built in this study can be obtained from the authors by email or other contacts.

  18. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.

  19. Porcelain surface conditioning protocols and shear bond strength of orthodontic brackets.

    PubMed

    Lestrade, Ashley M; Ballard, Richard W; Xu, Xiaoming; Yu, Qingzhao; Kee, Edwin L; Armbruster, Paul C

    2016-05-01

    The objective of the present study was to determine which of six bonding protocols yielded a clinically acceptable shear bond strength (SBS) of metal orthodontic brackets to CAD/CAM lithium disilicate porcelain restorations. A secondary aim was to determine which bonding protocol produced the least surface damage at debond. Sixty lithium disilicate samples were fabricated to replicate the facial surface of a mandibular first molar using a CEREC CAD/CAM machine. The samples were split into six test groups, each of which received different mechanical/chemical pretreatment protocols to roughen the porcelain surface prior to bonding a molar orthodontic attachment. Shear bond strength testing was conducted using an Instron machine. The mean, maximum, minimal, and standard deviation SBS values for each sample group including an enamel control were calculated. A t-test was used to evaluate the statistical significance between the groups. No significant differences were found in SBS values, with the exception of surface roughening with a green stone prior to HFA and silane treatment. This protocol yielded slightly higher bond strength which was statistically significant. Chemical treatment alone with HFA/silane yielded SBS values within an acceptable clinical range to withstand forces applied by orthodontic treatment and potentially eliminates the need to mechanically roughen the ceramic surface.

  20. Application of all-relevant feature selection for the failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, Wiesław; Wrzesien, Mariusz; Niemiec, Rafał; Rudnicki, Witold R.

    2016-03-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge on the physical components of the climate; nevertheless, they contain several parameters, which are too weakly constrained by observations, and can potentially lead to a simulation crashing. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to the simulation crashing and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the data set used in this research using different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust prediction of performance and relevance of variables.
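    Variable relevance of the kind discussed here is often probed with permutation tests: shuffle one feature and measure the drop in accuracy. The toy below (not the all-relevant Boruta-style algorithm used in the study) uses one informative feature, one pure-noise feature, and a fixed sign classifier, all of which are illustrative assumptions:

```python
import random

random.seed(2)

# Synthetic data: feature 0 drives the label, feature 1 is pure noise.
X, y = [], []
for _ in range(300):
    x0, x1 = random.gauss(0, 1), random.gauss(0, 1)
    X.append([x0, x1])
    y.append(int(x0 > 0))

def acc(X, y):
    # Fixed classifier: predict from the sign of feature 0.
    return sum(int(row[0] > 0) == t for row, t in zip(X, y)) / len(y)

base = acc(X, y)
importances = []
for j in range(2):
    Xp = [row[:] for row in X]           # copy, then shuffle column j
    col = [row[j] for row in Xp]
    random.shuffle(col)
    for row, v in zip(Xp, col):
        row[j] = v
    importances.append(base - acc(Xp, y))  # accuracy drop = relevance of j

print("importances:", importances)
```

A large drop for feature 0 and essentially none for feature 1 mirrors the study's distinction between strongly relevant and irrelevant parameters; redundant features (relevant but substitutable) are precisely what simple permutation importance can miss and all-relevant methods are designed to catch.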

  1. Cloud flexibility using DIRAC interware

    NASA Astrophysics Data System (ADS)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities in different locations run their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an operating system, because either their software needs a specific version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting inevitably leads to underuse of resources, because the data center is bound to have periods when one or more of its subclusters are idle. It is in this situation that cloud computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite, running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has been proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users can send their jobs transparently to the data center.
The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities need not care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user.

  2. Accurate indel prediction using paired-end short reads

    PubMed Central

    2013-01-01

    Background One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project ( http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that missing classification may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375

  3. Dynamic factor analysis of groundwater quality trends in an agricultural area adjacent to Everglades National Park.

    PubMed

    Muñoz-Carpena, R; Ritter, A; Li, Y C

    2005-11-01

    The extensive eastern boundary of Everglades National Park (ENP) in south Florida (USA) is subject to one of the most expensive and ambitious environmental restoration projects in history. Understanding and predicting the water quality interactions between the shallow aquifer and surface water is a key component in meeting current environmental regulations and fine-tuning ENP wetland restoration while still maintaining flood protection for the adjacent developed areas. Dynamic factor analysis (DFA), a recent technique for the study of multivariate non-stationary time-series, was applied to study fluctuations in groundwater quality in the area. More than two years of hydrological and water quality time series (rainfall; water table depth; and soil, ground and surface water concentrations of N-NO3-, N-NH4+, P-PO4(3-), Total P, F- and Cl-) from a small agricultural watershed adjacent to the ENP were selected for the study. The unexplained variability required for determining the concentration of each chemical in the 16 wells was greatly reduced by including in the analysis some of the observed time series as explanatory variables (rainfall, water table depth, and soil and canal water chemical concentration). DFA results showed that groundwater concentration of three of the agrochemical species studied (N-NO3-, P-PO4(3-) and Total P) were affected by the same explanatory variables (water table depth, enriched topsoil, and occurrence of a leaching rainfall event, in order of decreasing relative importance). This indicates that leaching by rainfall is the main mechanism explaining concentration peaks in groundwater. In the case of N-NH4+, in addition to leaching, groundwater concentration is governed by lateral exchange with canals. F- and Cl- are mainly affected by periods of dilution by rainfall recharge, and by exchange with the canals.
The unstructured nature of the common trends found suggests that these are related to the complex spatially and temporally varying land use patterns in the watershed. The results indicate that peak concentrations of agrochemicals in groundwater could be reduced by improving fertilization practices (by splitting and modifying timing of applications) and by operating the regional canal system to maintain the water table low, especially during the rainy periods.
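
    The value of the explanatory variables in this kind of analysis can be illustrated with a minimal sketch (not DFA itself, which additionally estimates latent common trends): regressing a synthetic concentration series on hypothetical water-table-depth and rainfall series shows how much of the variance the explanatory variables absorb, which is the "unexplained variability greatly reduced" effect described above.

```python
import numpy as np

def residual_variance(y, X):
    """Variance of y left unexplained after a least-squares fit on X."""
    X = np.column_stack([X, np.ones(len(y))])         # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

# Synthetic groundwater nitrate driven by water-table depth and rainfall
# (all values hypothetical, for illustration only).
rng = np.random.default_rng(1)
depth = rng.uniform(0.5, 2.0, 200)        # water table depth, m
rain = rng.exponential(5.0, 200)          # rainfall, mm/day
no3 = 3.0 - 1.5 * depth + 0.4 * rain + rng.normal(0, 0.3, 200)

v_none = np.var(no3)                                        # no explanatory variables
v_both = residual_variance(no3, np.column_stack([depth, rain]))
print(v_both < v_none)
```

    In the paper, the residual structure that remains after the explanatory variables is what the latent common trends of the DFA model have to account for.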

  4. Dynamic factor analysis of groundwater quality trends in an agricultural area adjacent to Everglades National Park

    NASA Astrophysics Data System (ADS)

    Muñoz-Carpena, R.; Ritter, A.; Li, Y. C.

    2005-11-01

    The extensive eastern boundary of Everglades National Park (ENP) in south Florida (USA) is subject to one of the most expensive and ambitious environmental restoration projects in history. Understanding and predicting the water quality interactions between the shallow aquifer and surface water is a key component in meeting current environmental regulations and fine-tuning ENP wetland restoration while still maintaining flood protection for the adjacent developed areas. Dynamic factor analysis (DFA), a recent technique for the study of multivariate non-stationary time-series, was applied to study fluctuations in groundwater quality in the area. More than two years of hydrological and water quality time series (rainfall; water table depth; and soil, ground and surface water concentrations of N-NO3-, N-NH4+, P-PO43-, Total P, F- and Cl-) from a small agricultural watershed adjacent to the ENP were selected for the study. The unexplained variability required for determining the concentration of each chemical in the 16 wells was greatly reduced by including in the analysis some of the observed time series as explanatory variables (rainfall, water table depth, and soil and canal water chemical concentration). DFA results showed that groundwater concentration of three of the agrochemical species studied (N-NO3-, P-PO43- and Total P) were affected by the same explanatory variables (water table depth, enriched topsoil, and occurrence of a leaching rainfall event, in order of decreasing relative importance). This indicates that leaching by rainfall is the main mechanism explaining concentration peaks in groundwater. In the case of N-NH4+, in addition to leaching, groundwater concentration is governed by lateral exchange with canals. F- and Cl- are mainly affected by periods of dilution by rainfall recharge, and by exchange with the canals.
The unstructured nature of the common trends found suggests that these are related to the complex spatially and temporally varying land use patterns in the watershed. The results indicate that peak concentrations of agrochemicals in groundwater could be reduced by improving fertilization practices (by splitting and modifying timing of applications) and by operating the regional canal system to maintain the water table low, especially during the rainy periods.

  5. VizieR Online Data Catalog: HI4PI spectra and column density maps (HI4PI team+, 2016)

    NASA Astrophysics Data System (ADS)

    Hi4PI Collaboration; Ben Bekhti, N.; Floeer, L.; Keller, R.; Kerp, J.; Lenz, D.; Winkel, B.; Bailin, J.; Calabretta, M. R.; Dedes, L.; Ford, H. A.; Gibson, B. K.; Haud, U.; Janowiecki, S.; Kalberla, P. M. W.; Lockman, F. J.; McClure-Griffiths, N. M.; Murphy, T.; Nakanishi, H.; Pisano, D. J.; Staveley-Smith, L.

    2016-09-01

    The HI4PI data release comprises 21-cm neutral atomic hydrogen data of the Milky Way (-600km/s < vLSR < +600km/s for declinations > 0°; -470km/s < vLSR < +470km/s for declinations < 0°).

  6. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward compatible HDR image/video compression, a common approach is to reconstruct the HDR signal from the compressed LDR signal as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piece 2nd-order polynomial achieves better mapping accuracy than a single high-order polynomial or a 2-piece linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into two pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piece 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that, in the least-squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix reduces to looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while reducing computation time by a factor of 60 compared to the traditional exhaustive search in 2-piece 2nd-order polynomial inverse tone mapping with a continuity constraint.
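
    The look-up-table idea can be sketched as follows. For samples sorted by LDR value, prefix sums of x^k and x^k·y let the 3x3 normal equations for any candidate segment be assembled in O(1), so the exhaustive pivot search costs constant work per pivot. This is a sketch of the prefix-sum trick only: it fits the two quadratic pieces independently and omits the continuity constraint used in the paper.

```python
import numpy as np

def fit_two_piece_quadratic(x, y):
    """Exhaustive-pivot 2-piece 2nd-order least-squares fit, with O(1) work
    per pivot via prefix-sum look-up tables. x must be sorted ascending."""
    n = len(x)
    Sx = [np.concatenate(([0.0], np.cumsum(x ** k))) for k in range(5)]
    Sxy = [np.concatenate(([0.0], np.cumsum((x ** k) * y))) for k in range(3)]
    Syy = np.concatenate(([0.0], np.cumsum(y * y)))

    def solve(i, j):
        # Normal equations for a quadratic on x[i:j], assembled from the
        # precomputed sums: A[a][b] = sum x^(a+b), r[a] = sum x^a * y.
        A = np.array([[Sx[a + b][j] - Sx[a + b][i] for b in range(3)]
                      for a in range(3)])
        r = np.array([Sxy[a][j] - Sxy[a][i] for a in range(3)])
        c = np.linalg.solve(A, r)
        sse = (Syy[j] - Syy[i]) - c @ r       # SSE = y'y - c'X'y
        return c, sse

    best = None
    for p in range(3, n - 2):                 # each piece needs >= 3 points
        c1, e1 = solve(0, p)
        c2, e2 = solve(p, n)
        if best is None or e1 + e2 < best[0]:
            best = (e1 + e2, p, c1, c2)
    return best                               # (sse, pivot index, coeffs, coeffs)
```

    The per-pivot cost is a fixed 3x3 solve, so the total search is linear in the number of candidate pivots rather than quadratic in the number of samples.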

  7. SU-E-I-55: The Contribution to Skin Dose Due to Scatter From the Patient Table and the Head Holder During Fluoroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Islam, N; Xiong, Z; Vijayan, S

    2015-06-15

    Purpose: To determine contributions to skin dose due to scatter from the table and head holder used during fluoroscopy, and also to explore alternative design materials to reduce the scatter dose. Methods: Measurements were made of the primary and scatter components of the x-ray beam exiting the patient table and a cylindrical head holder used on a Toshiba Infinix c-arm unit, as a function of kVp, for the various beam filters on the machine and for various field sizes. The primary component of the beam was measured in air with the object placed close to the x-ray tube, with an air gap between it and a 6 cc parallel-plate ionization chamber, and with the beam collimated to a size just larger than the chamber. The primary plus scatter radiation components were measured with the object moved to a position in the beam next to the chamber for larger field sizes. Both sets of measurements were performed while keeping the source-to-chamber distance fixed. The scatter fraction was estimated by taking the ratio of the difference between the two measurements and the reading that included both primary and scatter. Similar measurements were also made for a 2.3 cm thick Styrofoam block which could substitute for the patient support. Results: The measured scatter fractions indicate that the patient table as well as the head holder contributes an additional 10–16% to the patient entrance dose, depending on field size. Forward scatter was reduced with the Styrofoam block, so that the scatter fraction was about 4–5%. Conclusion: The results of this investigation demonstrated that scatter from the table and head holder used in clinical fluoroscopy contributes substantially to the skin dose. The lower contribution of scatter from Styrofoam suggests that there is an opportunity to redesign patient support accessories to reduce the skin dose. Partial support from NIH grant R01EB002873 and a Toshiba Medical Systems Corporation Equipment Grant.
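
    The scatter-fraction estimate described in the Methods is a one-line computation (the chamber readings below are made-up illustrative values, not data from the study):

```python
def scatter_fraction(primary, primary_plus_scatter):
    """Scatter fraction from the two chamber readings described above:
    SF = (total - primary) / total."""
    return (primary_plus_scatter - primary) / primary_plus_scatter

# A hypothetical primary reading of 100 with a primary-plus-scatter reading
# of 113 gives SF = 13/113, i.e. about 11.5%, inside the 10-16% range the
# abstract reports for the table and head holder.
sf = scatter_fraction(100.0, 113.0)
```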

  8. USSR Space Life Sciences Digest, issue 1

    NASA Technical Reports Server (NTRS)

    Hooke, L. R.; Radtke, M.; Rowe, J. E.

    1985-01-01

    The first issue of the bimonthly digest of USSR Space Life Sciences is presented. Abstracts are included for 49 Soviet periodical articles in 19 areas of aerospace medicine and space biology, published in Russian during the first quarter of 1985. Translated introductions and table of contents for nine Russian books on topics related to NASA's life science concerns are presented. Areas covered include: botany, cardiovascular and respiratory systems, cybernetics and biomedical data processing, endocrinology, gastrointestinal system, genetics, group dynamics, habitability and environmental effects, health and medicine, hematology, immunology, life support systems, man machine systems, metabolism, musculoskeletal system, neurophysiology, perception, personnel selection, psychology, radiobiology, reproductive system, and space biology. This issue concentrates on aerospace medicine and space biology.

  9. Investigations on Surface Milling of Hardened AISI 4140 Steel with Pulse Jet MQL Applicator

    NASA Astrophysics Data System (ADS)

    Bashir, Mahmood Al; Mia, Mozammel; Dhar, Nikhil Ranjan

    2018-06-01

    In this article, an experimental investigation was performed on the milling of hardened AISI 4140 steel of hardness 40 HRC. The machining was performed in both dry and minimum quantity lubrication (MQL) conditions, as part of neat machining, to make a strong comparison of the machining environments under study. The MQL was impinged in the form of a pulse jet, by using a specially developed pulse-jet attachment, to ensure that the cutting fluid can be applied in different timed pulses and quantities at the critical zones. Tool wear, cutting force and surface roughness were taken as the quality responses, while cutting speed, table feed rate and pulse flow rate were considered as influential factors. The depth of cut was kept constant at 1.50 mm because of its less significant effects, and straight oil was adopted as the cutting fluid in pulse-jet MQL. The effects of the different factors on the quality responses are analyzed using ANOVA. It is observed that the MQL applicator system exhibits overall better performance when compared to dry milling, reducing surface roughness and cutting force and prolonging tool life, with a flow rate of 150 ml/h having a particularly strong effect on the responses. This investigation and its results are expected to aid industrial practitioners and researchers in adopting pulse-MQL in high speed milling to prolong tool life, reduce tool wear, diminish cutting force generation and promote better surface finish.

  10. Investigations on Surface Milling of Hardened AISI 4140 Steel with Pulse Jet MQL Applicator

    NASA Astrophysics Data System (ADS)

    Bashir, Mahmood Al; Mia, Mozammel; Dhar, Nikhil Ranjan

    2016-06-01

    In this article, an experimental investigation was performed on the milling of hardened AISI 4140 steel of hardness 40 HRC. The machining was performed in both dry and minimum quantity lubrication (MQL) conditions, as part of neat machining, to make a strong comparison of the machining environments under study. The MQL was impinged in the form of a pulse jet, by using a specially developed pulse-jet attachment, to ensure that the cutting fluid can be applied in different timed pulses and quantities at the critical zones. Tool wear, cutting force and surface roughness were taken as the quality responses, while cutting speed, table feed rate and pulse flow rate were considered as influential factors. The depth of cut was kept constant at 1.50 mm because of its less significant effects, and straight oil was adopted as the cutting fluid in pulse-jet MQL. The effects of the different factors on the quality responses are analyzed using ANOVA. It is observed that the MQL applicator system exhibits overall better performance when compared to dry milling, reducing surface roughness and cutting force and prolonging tool life, with a flow rate of 150 ml/h having a particularly strong effect on the responses. This investigation and its results are expected to aid industrial practitioners and researchers in adopting pulse-MQL in high speed milling to prolong tool life, reduce tool wear, diminish cutting force generation and promote better surface finish.

  11. Comparison of four statistical and machine learning methods for crash severity prediction.

    PubMed

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods, including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash-costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods, comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. Reported crash data for 2012-2015 from Nebraska, United States were obtained, and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash-costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and for more severe crashes. RF and SVM had the next best performances, and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost the exact opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
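
    The exact form of the crash-costs-based measure is not given in the abstract; one plausible formulation, sketched below with hypothetical per-severity costs, weights correct predictions by the cost of the crash involved, so that errors on rare but expensive severe crashes dominate the score.

```python
# Hypothetical per-severity comprehensive costs (illustrative values only).
COSTS = {"fatal": 1_400_000, "injury": 80_000, "pdo": 4_000}

def cost_weighted_accuracy(actual, predicted):
    """Fraction of total crash cost attached to correctly predicted crashes.
    One plausible reading of a 'crash costs-based' accuracy measure."""
    total = sum(COSTS[a] for a in actual)
    correct = sum(COSTS[a] for a, p in zip(actual, predicted) if a == p)
    return correct / total

def plain_accuracy(actual, predicted):
    """Conventional overall correct prediction rate, for comparison."""
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)
```

    A method that gets only the frequent low-cost property-damage-only crashes right scores well on plain accuracy but poorly on the cost-weighted measure, which is the kind of disagreement between the two measures the abstract reports.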

  12. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    PubMed

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest-neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks were the top-performing classifiers, highlighting the added value of Deep Neural Networks over other, more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM').
Here, a standardized set for testing and evaluating different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
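
    The two validation schemes the study compares can be sketched generically. The record structure and field names below are hypothetical, but they show why a temporal split is the stricter benchmark: nothing from the test period leaks into training.

```python
import random

def temporal_split(records, cutoff_year):
    """Train on records strictly before cutoff_year, test on the rest: the
    more realistic benchmark of prospective execution described above."""
    train = [r for r in records if r["year"] < cutoff_year]
    test = [r for r in records if r["year"] >= cutoff_year]
    return train, test

def random_split(records, test_frac=0.2, seed=42):
    """Conventional random split. This tends to flatter a model, because
    close analogues of test compounds can land in the training set."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    k = int(len(shuffled) * test_frac)
    return shuffled[k:], shuffled[:k]
```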

  13. Self-addressed diffractive lens schemes for the characterization of LCoS displays

    NASA Astrophysics Data System (ADS)

    Zhang, Haolin; Lizana, Angel; Iemmi, Claudio; Monroy-Ramírez, Freddy A.; Marquez, Andrés; Moreno, Ignacio; Campos, Juan

    2018-02-01

    We propose a self-calibration method to calibrate both the phase-voltage look-up table and the screen phase distribution of Liquid Crystal on Silicon (LCoS) displays by implementing different lens configurations on the studied device within the same optical scheme. On the one hand, the phase-voltage relation is determined from interferometric measurements, which are obtained by addressing split-lens phase distributions on the LCoS display. On the other hand, the surface profile is retrieved by self-addressing a diffractive micro-lens array to the LCoS display, in such a way that we configure a Shack-Hartmann wavefront sensor that self-determines the screen's spatial variations. Moreover, both the phase-voltage response and the surface phase inhomogeneity of the LCoS are measured within the same experimental set-up, without the need for further adjustments. Experimental results prove the usefulness of the above-mentioned technique for LCoS display characterization.

  14. A Clinical Data Warehouse Based on OMOP and i2b2 for Austrian Health Claims Data.

    PubMed

    Rinner, Christoph; Gezgin, Deniz; Wendl, Christopher; Gall, Walter

    2018-01-01

    Clinical data can be reused to develop simulation models for healthcare-related questions. Our objective was to develop a clinical data warehouse that harmonizes different data sources in a standardized manner and provides a reproducible interface for clinical data reuse. The Kimball life cycle for the development of data warehouses was used; the development is split into the technical, data and business intelligence pathways. Sample data were persisted in the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). The i2b2 clinical data warehouse tools were used to query the OMOP CDM by applying the new i2b2 multi-fact table feature. A clinical data warehouse was set up, and sample data, data dimensions and ontologies for Austrian health claims data were created. The ability of the standardized data access layer to create and apply simulation models will be evaluated next.

  15. Supervised Learning Based Hypothesis Generation from Biomedical Literature.

    PubMed

    Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei

    2015-01-01

    Nowadays, the amount of biomedical literature is growing at an explosive speed, and much useful knowledge remains undiscovered in it. Researchers can form biomedical hypotheses by mining this literature. In this paper, we propose a supervised-learning-based approach to generate hypotheses from biomedical literature. This approach splits the traditional processing of hypothesis generation with the classic ABC model into an AB model and a BC model, which are constructed with supervised learning methods. Compared with concept co-occurrence and grammar-engineering-based approaches like SemRep, machine-learning-based models usually achieve better performance in information extraction (IE) from texts. Then, by combining the two models, the approach reconstructs the ABC model and generates biomedical hypotheses from the literature. The experimental results on the three classic Swanson hypotheses show that our approach outperforms the SemRep system.
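
    The AB/BC recombination step can be sketched as follows. The probability dictionaries stand in for the outputs of the two supervised models, and the max-over-B combination rule is an illustrative choice, not necessarily the paper's; the example terms echo Swanson's classic fish-oil/Raynaud hypothesis but the scores are invented.

```python
def generate_hypotheses(a_term, p_ab, p_bc, threshold=0.5):
    """Combine an AB model and a BC model into ranked A-C hypotheses.

    p_ab : dict mapping B-term -> P(A relates to B), from the AB model
    p_bc : dict mapping B-term -> dict of C-term -> P(B relates to C)
    Combined score for an A-C pair is its best-supported bridge term:
        score(A, C) = max over B of p_ab[B] * p_bc[B][C]
    """
    scores = {}
    for b, s_ab in p_ab.items():
        for c, s_bc in p_bc.get(b, {}).items():
            scores[c] = max(scores.get(c, 0.0), s_ab * s_bc)
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [(a_term, c, s) for c, s in ranked if s >= threshold]
```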

  16. Cool down time optimization of the Stirling cooler

    NASA Astrophysics Data System (ADS)

    Xia, M.; Chen, X. P.; Li, H. Y.; Gan, Z. H.

    2017-12-01

    The cooling power is one of the most important performance parameters of a Stirling cooler. However, in some special fields, the cool-down time is more important, and it is a great challenge to improve the cool-down time of a Stirling cooler. A new split Stirling linear cryogenic cooler, SCI09H, was designed in this study. A new structure of linear motor is used in the compressor, and a machined spring is used in the expander. In order to reduce the cool-down time, the stainless-steel mesh of the regenerator was optimized. The weight of the cooler is 1.1 kg, the cool-down time to 80 K is 2 minutes at 296 K with a 250 J thermal mass, the cooling power is 1.1 W at 80 K, and the input power is 50 W.

  17. An Analysis of Performance Enhancement Techniques for Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
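
    As an illustration of the load-balancing concern (not the actual grouping strategy used in the paper), a simple longest-processing-time heuristic assigns grids, weighted by cell count, to the least-loaded processor:

```python
import heapq

def balance_grids(grid_sizes, n_procs):
    """Greedy LPT assignment of overset grids to processors: repeatedly give
    the largest remaining grid to the least-loaded processor. The loads here
    are just cell counts; real balancing would also weigh interpolation and
    communication costs between overlapping grids."""
    heap = [(0, p, []) for p in range(n_procs)]       # (load, proc id, grid ids)
    heapq.heapify(heap)
    for gid, size in sorted(enumerate(grid_sizes), key=lambda t: -t[1]):
        load, p, grids = heapq.heappop(heap)          # least-loaded processor
        grids.append(gid)
        heapq.heappush(heap, (load + size, p, grids))
    return sorted(heap)                               # (load, proc, grid ids)
```

    Splitting a large grid before grouping reduces the size of the biggest single work item, which is what bounds the achievable balance for a greedy assignment like this.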

  18. Forced-air patient warming blankets disrupt unidirectional airflow.

    PubMed

    Legg, A J; Hamer, A J

    2013-03-01

    We have recently shown that waste heat from forced-air warming blankets can increase the temperature and concentration of airborne particles over the surgical site. The mechanism for the increased concentration of particles and their site of origin remained unclear. We therefore attempted to visualise the airflow in theatre over a simulated total knee replacement using neutral-buoyancy helium bubbles. Particles were created using a Rocket PS23 smoke machine positioned below the operating table, a potential area of contamination. The same theatre set-up, warming devices and controls were used as in our previous study. This demonstrated that waste heat from the poorly insulated forced-air warming blanket increased the air temperature on the surgical side of the drape by > 5°C. This created convection currents that rose against the downward unidirectional airflow, causing turbulence over the patient. The convection currents increased the particle concentration 1000-fold (2,174,000 particles/m3 for forced-air warming vs 1000 particles/m3 for radiant warming and 2000 particles/m3 for the control) by drawing potentially contaminated particles from below the operating table into the surgical site. Cite this article: Bone Joint J 2013;95-B:407-10.

  19. The universal numbers. From Biology to Physics.

    PubMed

    Marchal, Bruno

    2015-12-01

    I will explain how mathematicians have discovered the universal numbers, or abstract computers, and I will explain some abstract biology, mainly self-reproduction and embryogenesis. Then I will explain how and why, and in which sense, some of those numbers can dream, and why their dreams can glue together and must, when we assume computationalism in cognitive science, generate a phenomenological physics, as part of a larger phenomenological theology (in the sense of the Greek theologians). The title should have been "From Biology to Physics, through the Phenomenological Theology of the Universal Numbers", if that were not too long for a title. The theology will consist mainly, as in some (neo)Platonist Greek-Indian-Chinese traditions, in the truth about numbers' relative relations, with each other, and with themselves. The main difference between Aristotle and Plato is that Aristotle (especially in his common and modern Christian interpretation) makes reality WYSIWYG (what you see is what you get: reality is what we observe and measure, i.e. the natural material physical science), whereas for Plato and the (rational) mystics, what we see might be only the shadow or the border of something else, which might be non-physical (mathematical, arithmetical, theological, …). Since Gödel, we know that Truth, even just the Arithmetical Truth, is vastly bigger than what a machine can rationally justify. Yet, with Church's thesis, and the mechanizability of the diagonalizations involved, machines can apprehend this and can justify their limitations, and get some sense of what might be true beyond what they can prove or justify rationally. Indeed, the incompleteness phenomenon introduces a gap between what is provable by some machine and what is true about that machine, and, as Gödel saw already in 1931, the existence of that gap is accessible to the machine itself, once it has enough provability abilities.
Incompleteness separates truth from provability, and machines can justify this in some way. More importantly, incompleteness entails the distinction between many intensional variants of provability. For example, the absence of reflexion (beweisbar(⌜A⌝) → A, with beweisbar being Gödel's provability predicate) makes it impossible for the machine's provability to obey the axioms usually taken for a theory of knowledge. The most important consequence of this for the machine's possible phenomenology is that it provides sense, indeed arithmetical sense, to intensional variants of provability, like the logics of provability-and-truth, which at the propositional level can be mirrored by the logic of provable-and-true statements (beweisbar(⌜A⌝) ∧ A). It is incompleteness which makes this logic different from the logic of provability. Other variants, like provable-and-consistent, or provable-and-consistent-and-true, appear in the same way, and inherit the incompleteness splitting, unlike beweisbar(⌜A⌝) ∧ A. I will recall the thought experiments which motivate the use of those intensional variants to associate a knower and an observer in some canonical way with the machines or the numbers. We will in this way get an abstract and phenomenological theology of a machine M through the true logics of their true self-referential abilities (even if not provable, or knowable, by the machine itself), in those different intensional senses. Cognitive science and theoretical physics motivate the study of those logics with the arithmetical interpretation of the atomic sentences restricted to the "verifiable" (Σ1) sentences, which is the way to study the theology of the computationalist machine. This provides a logic of the observable, as expected by the Universal Dovetailer Argument, which will be recalled briefly, and which can lead to a comparison of the machine's logic of physics with the empirical logic of the physicists (like quantum logic). This also leads to a series of open problems.
Copyright © 2015 Elsevier Ltd. All rights reserved.
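
    In the abstract's own notation, with □A abbreviating beweisbar(⌜A⌝) and ◇A abbreviating ¬□¬A (consistency), the intensional variants named above can be summarized as:

```latex
% The intensional variants of provability mentioned in the abstract:
\begin{align*}
  \Box A                          &\quad \text{provable} \\
  \Box A \land A                  &\quad \text{provable-and-true} \\
  \Box A \land \Diamond A         &\quad \text{provable-and-consistent} \\
  \Box A \land \Diamond A \land A &\quad \text{provable-and-consistent-and-true}
\end{align*}
% By incompleteness these variants are not equivalent for the machine,
% even though, seen from outside, they single out the same sentences.
```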

  20. Utility of Megavoltage Fan-Beam CT for Treatment Planning in a Head-And-Neck Cancer Patient with Extensive Dental Fillings Undergoing Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Claus; Liu Tianxiao; Jennelle, Richard L.

    The purpose of this study was to demonstrate the potential utility of megavoltage fan-beam computed tomography (MV-FBCT) for treatment planning in a patient undergoing helical tomotherapy for nasopharyngeal carcinoma in the presence of extensive dental artifact. A 28-year-old female with locally advanced nasopharyngeal carcinoma presented for radiation therapy. Because the extensive dental artifact present in the kV-CT scan of the oral cavity acquired at simulation made treatment planning impossible on the tomotherapy planning system, MV-FBCT imaging was obtained using the HI-ART tomotherapy treatment machine, with the patient in the treatment position, and this information was registered with her original kV-CT scan for the purposes of structure delineation, dose calculation, and treatment planning. To validate the feasibility of the MV-FBCT-generated treatment plan, an electron density CT phantom (model 465, Gammex Inc., Middleton, WI) was scanned using MV-FBCT to obtain a CT-number-to-density table. Additionally, both a 'cheese' phantom (which came with the tomotherapy treatment machine) with 2 inserted ion chambers and a generic phantom called the Quasar phantom (Modus Medical Devices Inc., London, ON, Canada) with one inserted chamber were used to confirm dosimetric accuracy. The MV-FBCT could be used to clearly visualize anatomy in the region of the dental artifact and provided sufficient soft-tissue contrast to assist in the delineation of normal tissue structures and fat planes. With the elimination of the dental artifact, the MV-FBCT images allowed more accurate dose calculation by the tomotherapy system. It was confirmed that the phantom material density was determined correctly by the tomotherapy MV-FBCT number-to-density table. The ion chamber measurements agreed with the calculations from the MV-FBCT-generated phantom plan within 2%.
MV-FBCT may be useful in radiation treatment planning for nasopharyngeal cancer patients in the setting of extensive dental artifacts.

  1. VizieR Online Data Catalog: Redshift reliability flags (VVDS data) (Jamal+, 2018)

    NASA Astrophysics Data System (ADS)

    Jamal, S.; Le Brun, V.; Le Fevre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2017-09-01

    The VIMOS VLT Deep Survey (Le Fevre et al. 2013A&A...559A..14L) is a combination of 3 i-band magnitude limited surveys: Wide (17.5<=iAB<=22.5; 8.6deg2), Deep (17.5<=iAB<=24; 0.6deg2) and Ultra-Deep (23<=iAB<=24.75; 512arcmin2), that produced a total of 35526 spectroscopic galaxy redshifts between 0 and 6.7 (22434 in Wide, 12051 in Deep and 1041 in UDeep). We supplement spectra of the VIMOS VLT Deep Survey (VVDS) with newly defined redshift reliability flags obtained by clustering (unsupervised classification in machine learning) a set of descriptors extracted from individual redshift probability density functions (zPDFs). In this paper, we exploit a set of 24519 spectra from the VVDS database. After computing the zPDF for each individual spectrum, a set of 8 descriptors of the zPDF is extracted to build a feature matrix X (dimension = 24519 rows, 8 columns). Then, we use a clustering algorithm (unsupervised machine learning) to partition the feature space into 5 distinct clusters (C1, C2, C3, C4, C5), each depicting a different level of confidence to associate with the measured redshift zMAP (the Maximum-A-Posteriori estimate, which corresponds to the maximum of the redshift PDF). The clustering results (C1, C2, C3, C4, C5) reported in the table are those used in the paper (Jamal et al., 2017) to present the new methodology for automating zspec reliability assessment. In particular, we would like to point out that they were obtained from the first tests conducted on the VVDS spectroscopic data (end of 2016). Therefore, the table does not depict immutable results (improvements are ongoing), and future updates of the VVDS redshift reliability flags can be expected. (1 data file).
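
    The partitioning step can be sketched with a minimal k-means. The pipeline's actual clustering algorithm and the 8 zPDF descriptors are described in Jamal et al. (2017); the two-dimensional data in the test below are purely illustrative stand-ins for the descriptor space.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means, standing in for the unsupervised partitioning of the
    zPDF descriptor space into confidence clusters (illustrative only)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]     # random initial centers
    for _ in range(iters):
        # squared distance of every point to every center, then assign
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):                                # recompute centers
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

    In the released catalog, each cluster is then mapped to a reliability flag (C1 to C5) attached to the measured zMAP.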

  2. Can ASCII data files be standardized for Earth Science?

    NASA Astrophysics Data System (ADS)

    Evans, K. D.; Chen, G.; Wilson, A.; Law, E.; Olding, S. W.; Krotkov, N. A.; Conover, H.

    2015-12-01

    NASA's Earth Science Data Systems Working Groups (ESDSWG) were created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems based on user experiences. Each group works independently, focusing on a unique topic. Participants in ESDSWG groups come from a variety of NASA-funded science and technology projects such as MEaSUREs, as well as NASA information technology experts, affiliated contractors and staff, and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long-term data products. Each year, the ESDSWG holds a face-to-face meeting to discuss recommendations and future efforts. Last year's (2014) ASCII for Science Data Working Group (ASCII WG) completed its goals and made recommendations on a minimum set of information that is needed to make ASCII files at least human readable and usable for the foreseeable future. The 2014 ASCII WG created a table of ASCII files and their components as a means of understanding what kinds of ASCII formats exist and what components they have in common. Using this table and adding information from other ASCII file formats, we will discuss the advantages and disadvantages of a standardized format. For instance, Space Geodesy scientists have been using the same RINEX/SINEX ASCII format for decades. Astronomers mostly archive their data in the FITS format. Yet Earth scientists seem to have a slew of ASCII formats, such as ICARTT, netCDF (an ASCII dump) and the IceBridge ASCII format. The 2015 Working Group is focusing on promoting extendibility and machine readability of ASCII data. Questions have been posed, including: Can we have a standardized ASCII file format? Can it be machine-readable and human-readable at the same time? We will present a summary of the currently used ASCII formats in terms of advantages and shortcomings, as well as potential improvements.
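A "minimum set of information" that makes an ASCII file both human- and machine-readable can be as simple as key-value header lines preceding the data. The sketch below parses such a file; the header convention and field names here are hypothetical illustrations, not a recommendation of the working group.

```python
def parse_self_describing_ascii(text):
    """Parse a minimal self-describing ASCII file: '# key: value' header
    lines followed by comma-separated numeric data rows (a hypothetical
    convention, not an adopted standard)."""
    metadata, rows = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            # header line: everything before the first colon is the key
            key, _, value = line[1:].partition(":")
            metadata[key.strip()] = value.strip()
        else:
            rows.append([float(v) for v in line.split(",")])
    return metadata, rows

# Invented example file content
sample = """\
# instrument: hypothetical_radiometer
# units: W m-2
1.0, 2.5
3.0, 4.5
"""
meta, data = parse_self_describing_ascii(sample)
```

Because the metadata travels with the data in plain text, the file stays human-readable while a few lines of code suffice to read it programmatically.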

  3. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ("column groups"), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping "cells" by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce "kernels" that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 billion rows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
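The horizontal partitioning and per-cell kernels can be illustrated in a few lines of plain Python: rows are bucketed into (lon, lat) cells, then a user-supplied kernel runs once per cell. This is a conceptual sketch, not the LSD API; the cell size and row fields are invented for the example.

```python
from collections import defaultdict

def cell_of(lon, lat, cell_size=10.0):
    """Assign a sky position to a (lon, lat) cell; cell_size is illustrative."""
    return (int(lon // cell_size), int(lat // cell_size))

def map_per_cell(rows, kernel, cell_size=10.0):
    """Group rows by spatial cell, then run a per-cell kernel, mimicking
    LSD's MapReduce-style execution (toy sketch, not the real framework,
    which also distributes cells across nodes)."""
    cells = defaultdict(list)
    for row in rows:
        cells[cell_of(row["lon"], row["lat"], cell_size)].append(row)
    # the "map" phase: one kernel invocation per cell
    return {cell: kernel(members) for cell, members in cells.items()}

rows = [
    {"lon": 1.0,  "lat": 2.0, "mag": 20.1},
    {"lon": 2.0,  "lat": 3.0, "mag": 21.3},
    {"lon": 55.0, "lat": 8.0, "mag": 19.7},
]
counts = map_per_cell(rows, kernel=len)   # e.g. source counts per cell
```

Because each kernel invocation sees only one cell's rows, cells can be processed independently and in parallel, which is the property the real framework exploits.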

  4. Comparison of the AMDAHL 470V/6 and the IBM 370/195 using benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snider, D.R.; Midlock, J.L.; Hinds, A.R.

    1976-03-01

    Six groups of jobs were run on the IBM 370/195 at the Applied Mathematics Division (AMD) of Argonne National Laboratory using the current production versions of OS/MVT 21.7 and ASP 3.1. The same jobs were then run on an AMDAHL 470V/6 at the AMDAHL manufacturing facilities in Sunnyvale, California, using the identical operating systems. Performances of the two machines are compared. Differences in the configurations were minimized. The memory size on each machine was the same, all software which had an impact on run times was the same, and the I/O configurations were as similar as possible. This allowed the comparison to be based on the relative performance of the two CPUs. As part of the studies preliminary to the acquisition of the IBM 195 in 1972, two of the groups of jobs had been run on a CDC 7600 by CDC personnel in Arden Hills, Minnesota, on an IBM 360/195 by IBM personnel in Poughkeepsie, New York, and on the AMD 360/50/75 production system in June, 1971. 6 figures, 9 tables.

  5. Feasibility of using the Massively Parallel Processor for large eddy simulations and other Computational Fluid Dynamics applications

    NASA Technical Reports Server (NTRS)

    Bruno, John

    1984-01-01

    The results of an investigation into the feasibility of using the MPP for direct and large eddy simulations of the Navier-Stokes equations are presented. A major part of this study was devoted to the implementation of two of the standard numerical algorithms for CFD. These implementations were not run on the Massively Parallel Processor (MPP) since the machine delivered to NASA Goddard does not have sufficient capacity. Instead, a detailed implementation plan was designed, and from it were derived estimates of the time and space requirements of the algorithms on a suitably configured MPP. In addition, other issues related to the practical implementation of these algorithms on an MPP-like architecture were considered; namely, adaptive grid generation, zonal boundary conditions, the table lookup problem, and the software interface. Performance estimates show that the architectural components of the MPP, the Staging Memory and the Array Unit, appear to be well suited to the numerical algorithms of CFD. This, combined with the prospect of building a faster and larger MPP-like machine, holds the promise of achieving the sustained gigaflop rates required for numerical simulations in CFD.

  6. FTOOLS: A general package of software to manipulate FITS files

    NASA Astrophysics Data System (ADS)

    Blackburn, J. K.; Shaw, R. A.; Payne, H. E.; Hayes, J. J. E.; Heasarc

    1999-12-01

    FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, providing functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column, and selecting subsets of rows based on a Boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and display of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self-documenting through a stand-alone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
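The two core table operations described above (selecting rows by a Boolean expression, and extracting specific columns) can be illustrated on an in-memory table. This plain-Python sketch mirrors the operations the FITS-table utilities expose, but it is not FTOOLS itself, and the event fields below are invented.

```python
def select_rows(table, predicate):
    """Keep table rows matching a Boolean predicate -- the row-selection
    operation FTOOLS' table utilities provide, sketched in plain Python
    rather than on real FITS files."""
    return [row for row in table if predicate(row)]

def project_columns(table, columns):
    """Extract specific columns, the column-extraction counterpart."""
    return [{c: row[c] for c in columns} for row in table]

# Invented event-list rows
events = [
    {"time": 1.0, "energy": 5.2, "grade": 0},
    {"time": 2.0, "energy": 1.1, "grade": 7},
    {"time": 3.0, "energy": 6.8, "grade": 0},
]
# Analogue of a Boolean selection expression such as "energy > 2 && grade == 0"
clean = select_rows(events, lambda r: r["energy"] > 2.0 and r["grade"] == 0)
light = project_columns(clean, ["time"])
```

Chaining the two calls, selection then projection, is the in-memory analogue of piping one FTOOLS utility into another in a script.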

  7. Measuring Micro-Friction Torque in MEMS Gas Bearings

    PubMed Central

    Fang, Xudong; Liu, Huan

    2016-01-01

    An in situ measurement of micro-friction torque in MEMS gas bearings, which has been a challenging research topic for years, is realized by a system designed in this paper. In the system, a high accuracy micro-force sensor and an electronically-driven table are designed, fabricated and utilized. With appropriate installation of the sensor and bearings on the table, the engine rotor can be driven to rotate with the sensor using a silicon lever beam. One end of the beam is fixed to the shaft of the gas bearing, while the other end is free and in contact with the sensor probe tip. When the sensor begins to rotate with the table, the beam is pushed by the sensor probe to rotate in the same direction. For the beam, the friction torque from the gas bearing is balanced by the torque induced by pushing force from the sensor probe. Thus, the friction torque can be calculated as a product of the pushing force measured by the sensor and the lever arm, which is defined as the distance from the sensor probe tip to the centerline of the bearing. Experimental results demonstrate the feasibility of this system, with a sensitivity of 1.285 mV/μN·m in a range of 0 to 11.76 μN·m when the lever arm is 20 mm long. The measuring range can be modified by varying the length of the lever arm. Thus, this system has wide potential applications in measuring the micro-friction torque of gas bearings in rotating MEMS machines. PMID:27213377
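The torque computation above reduces to the product of the measured pushing force and the lever arm. Using the paper's own numbers (a 20 mm arm and an 11.76 μN·m full-scale torque), the corresponding full-scale probe force can be checked directly:

```python
def friction_torque(force_newton, lever_arm_m):
    """Friction torque balanced by the probe's pushing force: tau = F * L."""
    return force_newton * lever_arm_m

# Paper's numbers: full-scale torque 11.76 uN*m with a 20 mm lever arm,
# which corresponds to a full-scale probe force of 588 uN.
lever_arm = 0.020                                   # 20 mm, in metres
full_scale_torque = 11.76e-6                        # N*m
full_scale_force = full_scale_torque / lever_arm    # 5.88e-4 N = 588 uN

tau = friction_torque(full_scale_force, lever_arm)  # round trip back to torque
```

Doubling the lever arm halves the force needed at full scale, which is why varying the arm length changes the measuring range.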

  8. Automated spot defect characterization in a field portable night vision goggle test set

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume

    2018-05-01

    This paper discusses a new capability developed for and results from a field portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a Knife Edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I2 tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The earlier semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The fully Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results from across several NVGs are presented.
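Determining each defect's size and location can be sketched as thresholding followed by connected-component labeling. The flood fill below is a toy stand-in for the machine-vision step; the actual test set's algorithms, calibration and ring registration are not described in enough detail here to reproduce.

```python
def find_spots(image, threshold=0.5):
    """Label dark spots (pixels below threshold) by 4-connected flood fill
    and report each spot's size and centroid -- a toy stand-in for the
    automated spot characterization."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y in range(h):
        for x in range(w):
            if image[y][x] < threshold and not seen[y][x]:
                # grow one connected component from this seed pixel
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] < threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                spots.append({"size": len(pixels), "centroid": (cy, cx)})
    return spots

# 1.0 = bright phosphor, 0.0 = dark defect (invented 3x4 frame)
img = [
    [1.0, 1.0, 1.0, 1.0],
    [1.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0, 0.0],
]
spots = find_spots(img)
```

Each entry in the result plays the role of one row of the automatically generated defect table: a size and a location per spot.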

  9. A hybrid machine learning model to estimate nitrate contamination of production zone groundwater in the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.

    2016-12-01

    A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field and county scale nitrogen mass balance, historical and current landuse, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6,000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM), was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field scale nitrogen mass balance, climate, and depth to 60 year old water. Twenty-two variables were selected for the final model, whose errors for log-transformed hold-out data were an R squared of 0.45 and a root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement in the model; when included, it decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60 year old water, generally decreased with increasing natural landuse surrounding wells and increasing mean groundwater age, and generally increased with increased minimum depth to high water table and with increased base flow index value. Other variables exhibited much more erratic or noisy behavior in the model, making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show that the probability of anoxic groundwater conditions largely controls estimated nitrate concentrations compared to the other predictors.
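Gradient boosting fits each new tree to the residuals of the current ensemble. A minimal one-feature version with regression stumps shows the mechanism; the real model uses 22 predictors and a full GBM implementation, and the data below are invented.

```python
def best_stump(xs, residuals):
    """One-split regression stump on a single feature (1-D for brevity;
    the real model boosts trees over 22 predictors)."""
    best = None
    for t in sorted(set(xs))[:-1]:      # exclude max so both sides stay non-empty
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1], best[2], best[3]

def gbm_fit(xs, ys, n_trees=50, lr=0.1):
    """Gradient boosting for squared error: each stump fits the residuals
    of the ensemble so far, damped by the learning rate."""
    base = sum(ys) / len(ys)
    pred = [base] * len(ys)
    stumps = []
    for _ in range(n_trees):
        resid = [y - p for y, p in zip(ys, pred)]
        t, lm, rm = best_stump(xs, resid)
        stumps.append((t, lm, rm))
        pred = [p + lr * (lm if x <= t else rm) for x, p in zip(xs, pred)]
    return base, stumps, lr

def gbm_predict(model, x):
    base, stumps, lr = model
    return base + sum(lr * (lm if x <= t else rm) for t, lm, rm in stumps)

# Toy monotone relation, e.g. nitrate falling as anoxic probability rises
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [3.0, 2.6, 2.1, 1.4, 0.9, 0.5]
model = gbm_fit(xs, ys)
```

The 1D partial-dependence behavior described in the abstract corresponds to sweeping one input of such a fitted model while holding the others fixed.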

  10. Strength development in concrete with wood ash blended cement and use of soft computing models to predict strength parameters.

    PubMed

    Chowdhury, S; Maniar, A; Suganya, O M

    2015-11-01

    In this study, Wood Ash (WA) prepared from the uncontrolled burning of saw dust is evaluated for its suitability as a partial cement replacement in conventional concrete. The saw dust was acquired from a wood polishing unit. The physical, chemical and mineralogical characteristics of WA are presented and analyzed. The strength parameters (compressive strength, split tensile strength and flexural strength) of concrete with blended WA cement are evaluated and studied. Two different water-to-binder ratios (0.4 and 0.45) and five different replacement percentages of WA (5%, 10%, 15%, 18% and 20%), including control specimens for both water-to-binder ratios, are considered. Results of compressive strength, split tensile strength and flexural strength showed that the strength properties of the concrete mixture decreased marginally with increasing wood ash content, but increased at later ages. The XRD test results and chemical analysis of WA showed that it contains amorphous silica and thus can be used as a cement replacing material. Through the analysis of results obtained in this study, it was concluded that WA could be blended with cement without adversely affecting the strength properties of concrete. Also, using a new statistical theory, the Support Vector Machine (SVM), strength parameters were predicted by developing a suitable model, demonstrating the application of soft computing in structural engineering.

  11. High-precision laser microcutting and laser microdrilling using diffractive beam-splitting and high-precision flexible beam alignment

    NASA Astrophysics Data System (ADS)

    Zibner, F.; Fornaroli, C.; Holtkamp, J.; Shachaf, Lior; Kaplan, Natan; Gillner, A.

    2017-08-01

    High-precision laser micromachining is gaining importance in industrial applications. Optical systems like the helical optics offer the highest quality together with a controllable and adjustable drilling geometry, including taper angle, aspect ratio and heat-affected zone. The helical optics is based on a rotating Dove prism mounted in a hollow-shaft engine together with other optical elements such as wedge prisms and plane plates. Although the achieved quality is extremely high, low process efficiency is a main reason why this manufacturing technology has only limited demand within the industrial market. The objective of the research presented in this paper is to dramatically increase process efficiency as well as process flexibility. In recent years, the average power of commercial ultra-short pulsed laser sources has increased significantly. The efficient utilization of this high average laser power in material processing requires an effective distribution of the laser power onto the workpiece. One approach to increase efficiency is the application of beam-splitting devices to enable parallel processing. Multi-beam processing is used to parallelize the fabrication of periodic structures, as most applications require only a fraction of the emitted ultra-short pulsed laser power. To achieve the highest flexibility in multi-beam processing, the single beams are diverted and re-guided so that each partial beam can process spatially separated workpieces or semi-finished parts.

  12. Accelerating String Set Matching in FPGA Hardware for Bioinformatics Research

    PubMed Central

    Dandass, Yoginder S; Burgess, Shane C; Lawrence, Mark; Bridges, Susan M

    2008-01-01

    Background This paper describes techniques for accelerating the performance of the string set matching problem with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case-study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences). Results This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM as opposed to FPGA logic resources for FSM implementation also enables rapid reconfiguration of the FPGA without the place and routing delays associated with complex digital designs. Conclusion Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation. PMID:18412963
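The traditional (non-bit-split) Aho-Corasick FSM that the paper partitions can be sketched as a trie plus failure links, matching all peptides in a single pass over the input. The peptide strings below are invented toy sequences.

```python
from collections import deque

def build_automaton(patterns):
    """Aho-Corasick trie + failure links -- the monolithic FSM that the
    paper splits into parallel, bit-split sub-machines for FPGA block RAM."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())        # breadth-first over the trie
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)   # never t itself: t has depth >= 2
            out[t] |= out[fail[t]]         # inherit patterns ending here
    return goto, fail, out

def search(text, automaton):
    """Single pass over the input, reporting (end_index, pattern) matches."""
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i, p) for p in out[s])
    return hits

# Toy peptides matched against one translated reading frame
auto = build_automaton(["MKV", "KVLG", "GAG"])
hits = search("AMKVLGAGT", auto)
```

The bit-split variant in the paper runs five such machines in lockstep, one per bit of the 5-bit amino-acid encoding, and intersects their outputs; the software version above processes whole symbols instead.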

  13. Electron Beam Welding of Gear Wheels by Splitted Beam

    NASA Astrophysics Data System (ADS)

    Dřímal, Daniel

    2014-06-01

    This contribution deals with the electron beam welding of high-accuracy gear wheels composed of a spur gearing and a fluted shaft joined with a face weld for the automotive industry. Both parts, made of high-strength low-alloy steel, are welded in the condition after final machining and heat treatment (case hardening), and the run-out at the critical point of the weldment after welding, i.e. after the final operation, is required to be at most 0.04 mm. With the common welding procedure, cracks formed in the weld, initiated by spiking in the weld root. Crack formation was prevented by the use of an interlocking joint with a rounded recess and suitable welding parameters, eliminating crack initiation by spiking in the weld root. Minimisation of the welding distortions was achieved by tack welding with simultaneous splitting of one beam into two parts in opposite sections of the circumferential face weld, attained on the principle of a new system of controlled deflection with digital scanning of the beam. This welding procedure ensured that the weldment temperature after welding did not exceed 400 °C. Thus, the procedure allowed the final run-outs at the critical point of the gear wheels to be kept within the maximum of 0.04 mm, which is acceptable for the given application. Accurate optical measurements did not reveal any changes in the teeth dimensions.

  14. Strength development in concrete with wood ash blended cement and use of soft computing models to predict strength parameters

    PubMed Central

    Chowdhury, S.; Maniar, A.; Suganya, O.M.

    2014-01-01

    In this study, Wood Ash (WA) prepared from the uncontrolled burning of saw dust is evaluated for its suitability as a partial cement replacement in conventional concrete. The saw dust was acquired from a wood polishing unit. The physical, chemical and mineralogical characteristics of WA are presented and analyzed. The strength parameters (compressive strength, split tensile strength and flexural strength) of concrete with blended WA cement are evaluated and studied. Two different water-to-binder ratios (0.4 and 0.45) and five different replacement percentages of WA (5%, 10%, 15%, 18% and 20%), including control specimens for both water-to-binder ratios, are considered. Results of compressive strength, split tensile strength and flexural strength showed that the strength properties of the concrete mixture decreased marginally with increasing wood ash content, but increased at later ages. The XRD test results and chemical analysis of WA showed that it contains amorphous silica and thus can be used as a cement replacing material. Through the analysis of results obtained in this study, it was concluded that WA could be blended with cement without adversely affecting the strength properties of concrete. Also, using a new statistical theory, the Support Vector Machine (SVM), strength parameters were predicted by developing a suitable model, demonstrating the application of soft computing in structural engineering. PMID:26644928

  15. Multi-class Mode of Action Classification of Toxic Compounds Using Logic Based Kernel Methods.

    PubMed

    Lodhi, Huma; Muggleton, Stephen; Sternberg, Mike J E

    2010-09-17

    Toxicity prediction is essential for drug design and the development of effective therapeutics. In this paper we present an in silico strategy to identify the mode of action of toxic compounds, based on a novel logic-based kernel method. The technique uses support vector machines in conjunction with kernels constructed from first-order rules induced by an Inductive Logic Programming system. It constructs multi-class models by using a divide-and-conquer reduction strategy that splits the multi-class problem into binary groups and solves each individual problem recursively, generating an underlying decision-list structure. In order to evaluate the effectiveness of the approach for chemoinformatics problems like predictive toxicology, we apply it to toxicity classification in aquatic systems. The method is used to identify and classify 442 compounds with respect to mode of action. The experimental results show that the technique successfully classifies toxic compounds and can be useful in assessing environmental risks. Experimental comparison of the performance of the proposed multi-class scheme with the standard multi-class Inductive Logic Programming algorithm and multi-class Support Vector Machine yields statistically significant results and demonstrates the potential power and benefits of the approach in identifying compounds of various toxic mechanisms. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
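The divide-and-conquer reduction (peel one class off at a time with a binary model, yielding a decision list) can be sketched independently of the ILP-kernel SVM. The binary learner below is a deliberately trivial nearest-mean stand-in on invented 1-D data, not the paper's method.

```python
def train_decision_list(data, labels_order, train_binary):
    """Divide-and-conquer multi-class reduction: for each class in turn,
    train a binary model 'this class vs the rest', then recurse on the
    rest -- producing a decision-list structure. train_binary is any
    pluggable binary learner."""
    rules = []
    remaining = list(data)
    for label in labels_order[:-1]:
        pos = [x for x, y in remaining if y == label]
        neg = [x for x, y in remaining if y != label]
        rules.append((label, train_binary(pos, neg)))
        remaining = [(x, y) for x, y in remaining if y != label]
    return rules, labels_order[-1]          # last class is the default

def predict_decision_list(model, x):
    rules, default = model
    for label, clf in rules:
        if clf(x):
            return label
    return default

def mean_classifier(pos, neg):
    """Trivial binary learner (nearest class mean) -- an illustrative
    stand-in for the paper's logic-based kernel SVM."""
    mp, mn = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: abs(x - mp) < abs(x - mn)

data = [(1.0, "A"), (1.2, "A"), (5.0, "B"), (5.3, "B"), (9.0, "C"), (9.4, "C")]
model = train_decision_list(data, ["A", "B", "C"], mean_classifier)
```

Prediction walks the list until a binary rule fires, mirroring the recursive structure described in the abstract.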

  16. Patient classification as an outlier detection problem: An application of the One-Class Support Vector Machine

    PubMed Central

    Mourão-Miranda, Janaina; Hardoon, David R.; Hahn, Tim; Marquand, Andre F.; Williams, Steve C.R.; Shawe-Taylor, John; Brammer, Michael

    2011-01-01

    Pattern recognition approaches, such as the Support Vector Machine (SVM), have been successfully used to classify groups of individuals based on their patterns of brain activity or structure. However, these approaches focus on finding group differences and are not applicable to situations where one is interested in assessing deviations from a specific class or population. In the present work we propose an application of the one-class SVM (OC-SVM) to investigate whether patterns of fMRI response to sad facial expressions in depressed patients would be classified as outliers in relation to patterns of healthy control subjects. We defined features based on whole-brain voxels and anatomical regions. In both cases we found a significant correlation between the OC-SVM predictions and the patients' Hamilton Rating Scale for Depression (HRSD), i.e., the more depressed the patients were, the more of an outlier they were. In addition, the OC-SVM split the patient group into two subgroups whose membership was associated with future response to treatment. When applied to region-based features, the OC-SVM classified 52% of patients as outliers. However, among the patients classified as outliers, 70% did not respond to treatment, and among those classified as non-outliers, 89% responded to treatment. In addition, 89% of the healthy controls were classified as non-outliers. PMID:21723950
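One-class classification models only the reference class and scores how far a new case falls from it. The sketch below substitutes a simple z-score threshold for the OC-SVM decision boundary — a plainly named simplification, not the paper's method — and the feature values are invented.

```python
import math

def fit_reference(scores):
    """Model the reference (healthy-control) class alone: its mean and
    spread. A z-score threshold stands in for the OC-SVM boundary here."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean, math.sqrt(var)

def outlier_score(model, x):
    """Graded deviation from the reference class (larger = more outlying),
    analogous to the correlation with symptom severity in the abstract."""
    mean, std = model
    return abs(x - mean) / std

def is_outlier(model, x, threshold=2.5):
    return outlier_score(model, x) > threshold

# Invented scalar feature, e.g. a summary of fMRI response in one region
controls = [0.9, 1.0, 1.1, 1.05, 0.95, 1.0]
model = fit_reference(controls)
```

Cases close to the control distribution are classified as non-outliers; a case far outside it is flagged, and the continuous score gives the graded "degree of outlierness" the paper correlates with HRSD.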

  17. Triaging Patient Complaints: Monte Carlo Cross-Validation of Six Machine Learning Classifiers

    PubMed Central

    Cooper, William O; Catron, Thomas F; Karrass, Jan; Zhang, Zhe; Singh, Munindar P

    2017-01-01

    Background Unsolicited patient complaints can be a useful service recovery tool for health care organizations. Some patient complaints contain information that may necessitate further action on the part of the health care organization and/or the health care professional. Current approaches depend on the manual processing of patient complaints, which can be costly, slow, and challenging in terms of scalability. Objective The aim of this study was to evaluate automatic patient complaint triage, which can potentially improve response time and provide much-needed scale, thereby enhancing opportunities to encourage physicians to self-regulate. Methods We implemented a comparison of several well-known machine learning classifiers to detect whether a complaint was associated with a physician or his/her medical practice. We compared these classifiers using a real-life dataset containing 14,335 patient complaints associated with 768 physicians that was extracted from patient complaints collected by the Patient Advocacy Reporting System developed at Vanderbilt University and associated institutions. We conducted a 10-split Monte Carlo cross-validation to validate our results. Results We achieved an accuracy of 82% and F-score of 81% in correctly classifying patient complaints, with sensitivity and specificity of 0.76 and 0.87, respectively. Conclusions We demonstrate that natural language processing methods based on modeling patient complaint text can be effective in identifying those patient complaints requiring physician action. PMID:28760726
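A multinomial naive Bayes model is a typical baseline in this kind of text-classifier comparison; the abstract does not name its six classifiers, so the sketch below is illustrative only, with invented complaint snippets.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Multinomial naive Bayes with add-one smoothing over bag-of-words
    features -- one of the simpler classifiers such a comparison might
    include (the paper's six classifiers are not named here)."""
    class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in docs:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify_nb(model, text):
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)      # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy complaints: does the text concern the physician's practice?
docs = [
    ("doctor was rude and dismissive", "physician"),
    ("physician refused to explain the procedure", "physician"),
    ("parking lot was full", "other"),
    ("billing statement arrived late", "other"),
]
model = train_nb(docs)
```

In a Monte Carlo cross-validation, such a model would be retrained on each random train/test split and its accuracy averaged across splits.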

  18. Strength of inserts in titanium alloy machining

    NASA Astrophysics Data System (ADS)

    Kozlov, V.; Huang, Z.; Zhang, J.

    2016-04-01

    In this paper, the stressed state of a non-worn cutting wedge during machining of a titanium alloy (Ti6Al2Mo2Cr) is analyzed. The distribution of contact loads on the face of the cutting tool was obtained experimentally with the use of a 'split cutting tool'. Calculation of internal stresses in the indexable insert made from cemented carbide (WC8Co) was carried out with the help of ANSYS 14.0 software. Investigations showed that a small thickness of the cutting insert leads to extremely high compressive stresses near the cutting edge, stresses that exceed the ultimate compressive strength of cemented carbide. The face and the base of the insert experience high tensile stresses, which approach the ultimate tensile strength of cemented carbide and increase the probability of cutting insert destruction. If the thickness of the cutting insert is greater than 5 mm, compressive stresses near the cutting edge decrease, and tensile stresses on the face and base decrease to zero. The dependences of the greatest normal and tangential stresses on the thickness of the cutting insert were determined. Abbreviations and symbols: m/s - meter per second (cutting speed v); mm/r - millimeter per revolution (feed rate f); MPa - megapascal (unit of specific contact loads and stresses); γ - rake angle of the cutting tool [°]; α - clearance angle of the sharp cutting tool [°].

  19. [Discrimination of types of polyacrylamide based on near infrared spectroscopy coupled with least square support vector machine].

    PubMed

    Zhang, Hong-Guang; Yang, Qin-Min; Lu, Jian-Gang

    2014-04-01

    In this paper, a novel discriminant methodology based on near infrared spectroscopic analysis and the least squares support vector machine (LS-SVM) was proposed for rapid and nondestructive discrimination of different types of polyacrylamide. The diffuse reflectance spectra of samples of non-ionic, anionic and cationic polyacrylamide were measured. Principal component analysis was then applied to reduce the dimension of the spectral data and extract the principal components. The first three principal components were used for cluster analysis of the three types of polyacrylamide, and were also used as inputs to the LS-SVM model. The model parameters and the number of principal components used as inputs were optimised through cross validation based on grid search. 60 samples of each type of polyacrylamide were collected, giving a total of 180 samples. 135 samples (45 of each type) were randomly selected as a training set to build the calibration model, and the remaining 45 samples were used as a test set to evaluate the performance of the developed model. In addition, 5 cationic and 5 anionic polyacrylamide samples adulterated with different proportions of non-ionic polyacrylamide were prepared to assess the feasibility of the proposed method for discriminating adulterated samples. The prediction error threshold for each type of polyacrylamide was determined by an F significance test based on the cross-validation prediction errors of the corresponding training samples. The discrimination accuracy of the built model was 100% on the test set. All 10 adulterated samples were also accurately discriminated as adulterated.
The overall results demonstrate that the proposed method can rapidly and nondestructively discriminate the different types of polyacrylamide as well as adulterated polyacrylamide samples, and offers a new approach to discriminating polyacrylamide types.
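The pipeline this abstract describes (PCA for dimension reduction, a kernel classifier tuned by grid-search cross validation, a 135/45 stratified split) can be sketched with synthetic spectra. This is an illustrative stand-in only: scikit-learn ships no LS-SVM, so an RBF-kernel SVC plays that role, and the "spectra" are random curves, not real NIR measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR diffuse-reflectance spectra: 60 samples per
# class, 180 total, 3 classes (non-ionic, anionic, cationic PAM).
n_per_class, n_wavelengths = 60, 200
X = np.vstack([
    rng.normal(loc=mu, scale=0.05, size=(n_per_class, n_wavelengths))
    for mu in (0.2, 0.5, 0.8)
])
y = np.repeat([0, 1, 2], n_per_class)

# 135 training / 45 test samples, stratified as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=135, stratify=y, random_state=0)

# PCA for dimension reduction, then an SVM classifier; the grid search
# jointly tunes the number of principal components and the SVM parameters
# by cross validation, mirroring the abstract's optimisation step.
pipe = Pipeline([("pca", PCA()), ("svm", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe, {
    "pca__n_components": [3, 5, 10],
    "svm__C": [1, 10, 100],
    "svm__gamma": ["scale", 0.1],
}, cv=5)
grid.fit(X_tr, y_tr)
accuracy = grid.score(X_te, y_te)
```

With the well-separated synthetic classes above, the tuned model should classify the held-out set nearly perfectly; real NIR data would of course be harder.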

  20. A Parallel Vector Machine for the PM Programming Language

    NASA Astrophysics Data System (ADS)

    Bellerby, Tim

    2016-04-01

    PM is a new programming language which aims to make the writing of computational geoscience models on parallel hardware accessible to scientists who are not themselves expert parallel programmers. It is based around the concept of communicating operators: language constructs that enable variables local to a single invocation of a parallelised loop to be viewed as if they were arrays spanning the entire loop domain. This mechanism enables different loop invocations (which may or may not be executing on different processors) to exchange information in a manner that extends the successful Communicating Sequential Processes idiom from single messages to collective communication. Communicating operators avoid the additional synchronisation mechanisms, such as atomic variables, required when programming using the Partitioned Global Address Space (PGAS) paradigm. Using a single loop invocation as the fundamental unit of concurrency enables PM to uniformly represent different levels of parallelism from vector operations through shared memory systems to distributed grids. This paper describes an implementation of PM based on a vectorised virtual machine. On a single processor node, concurrent operations are implemented using masked vector operations. Virtual machine instructions operate on vectors of values and may be unmasked, masked using a Boolean field, or masked using an array of active vector cell locations. Conditional structures (such as if-then-else or while statement implementations) calculate and apply masks to the operations they control. A shift in mask representation from Boolean to location-list occurs when active locations become sufficiently sparse. Parallel loops unfold data structures (or vectors of data structures for nested loops) into vectors of values that may additionally be distributed over multiple computational nodes and then split into micro-threads compatible with the size of the local cache. 
Inter-node communication is accomplished using standard OpenMP and MPI. Performance analyses of the PM vector machine, demonstrating its scaling properties with respect to domain size and the number of processor nodes, will be presented for a range of hardware configurations. The PM software and language definition are being made available under unrestrictive MIT and Creative Commons Attribution licenses respectively: www.pm-lang.org.
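A minimal illustration of the masked-vector-operation idea (not the PM implementation, which is a full virtual machine): an element-wise if-then-else executed as two masked branch evaluations, plus the Boolean-mask to location-list shift the abstract mentions.

```python
import numpy as np

# Conditional structures over a whole vector are executed by evaluating
# both branches under complementary masks rather than branching per cell.
x = np.array([3.0, -1.0, 4.0, -1.5, 5.0, -9.0])

mask = x >= 0                      # Boolean mask from the condition
result = np.empty_like(x)
result[mask] = np.sqrt(x[mask])    # "then" branch, active cells only
result[~mask] = 0.0                # "else" branch, remaining cells

# When active cells become sparse, a location list (array of active
# vector cell indices) is cheaper than a dense Boolean mask -- the
# representation shift described in the abstract.
locs = np.flatnonzero(mask)        # active vector cell locations
result2 = np.zeros_like(x)
result2[locs] = np.sqrt(x[locs])
```

Both representations produce identical results; a real vector machine would pick whichever is cheaper for the current density of active cells.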

  1. An operational system for subject switching between controlled vocabularies

    NASA Technical Reports Server (NTRS)

    Silvester, June P.; Klingbiel, Paul H.

    1993-01-01

    The NASA system for automatically converting sets of terms assigned by Department of Defense indexers into sets of NASA's authorized terms is described. This little-touted system, which has been operating successfully since 1983, matches concepts rather than words. Subject switching uses a translation table, known as the Lexical Dictionary, accessed by a program that determines which rules to follow in making the transition from DTIC's to NASA's authorized terms. The authors describe the four phases of development of subject switching, changes that have been made, evaluation of the system, and its benefits. Benefits to NASA include savings in indexers' time, additional access points for indexed documents, the utilization of other government indexing, and a contribution towards NASA's now-operational online, interactive, machine-aided indexing.
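Concept-level subject switching can be caricatured as a table lookup from one controlled vocabulary to another. The dictionary entries and the drop-unmatched policy below are hypothetical, purely to illustrate the mechanism; the real Lexical Dictionary also applies rules for multi-word concept matching.

```python
# Toy translation table mapping source-vocabulary terms to target terms.
# All entries are invented for illustration only.
lexical_dictionary = {
    "guided missiles": ["missiles"],
    "aerodynamic heating": ["aerodynamic heating", "heat transfer"],
    "unmanned aircraft": ["drone aircraft"],
}

def subject_switch(source_terms):
    """Translate a set of source terms into target-vocabulary terms.
    Unmatched terms are simply dropped here; a production system would
    route them to human review instead."""
    target = []
    for term in source_terms:
        target.extend(lexical_dictionary.get(term.lower(), []))
    return sorted(set(target))
```

For example, `subject_switch(["Guided Missiles", "Unmanned Aircraft"])` yields the two corresponding target terms regardless of capitalisation.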

  2. [Electricity in healing: four different applications in a copper engraving of the Enlightenment].

    PubMed

    te Heesen, Anke

    2002-01-01

    This text describes a single engraving from the picture encyclopedia Bilder-Akademie für die Jugend, published from 1780 to 1784. The encyclopedia consisted of 52 picture tableaus, each with nine images connected through a biblical topic. The particular image under examination, Table 38, shows the healing wonders of Christ, the electrifying machine, a healing physician, and the structure of the ear and eye. The goal of this text is to describe the different connections and meanings of these depicted scenes; at the same time I argue that pictures cannot be interpreted only by understanding how people looked at them, but must also be questioned as to what people did with them.

  3. Proceedings of the International Conference (7th) on Machine Learning Held in Austin, Texas on 21-23 June 1990

    DTIC Science & Technology

    1990-06-23

    [OCR-garbled excerpt. The recoverable content concerns conditional information measures I(a_i | C_j) computed over two databases, sub-description counts reported in Table 4, and a second application using the 1984 Congressional voting-record database.]

  4. A decision tree-based on-line preventive control strategy for power system transient instability prevention

    NASA Astrophysics Data System (ADS)

    Xu, Yan; Dong, Zhao Yang; Zhang, Rui; Wong, Kit Po

    2014-02-01

    Maintaining transient stability is a basic requirement for secure power system operations. Preventive control deals with modifying the system operating point to withstand probable contingencies. In this article, a decision tree (DT)-based on-line preventive control strategy is proposed for transient instability prevention of power systems. Given a stability database, a distance-based feature estimation algorithm is first applied to identify the critical generators, which are then used as features to develop a DT. By interpreting the splitting rules of the DT, preventive control is realised by formulating the rules in a standard optimal power flow model and solving it. The proposed method is transparent in its control mechanism, compatible with on-line computation, and convenient for handling multiple contingencies. The effectiveness and efficiency of the method have been verified on the New England 10-machine 39-bus test system.
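The step of interpreting a decision tree's splitting rules can be illustrated with scikit-learn on a made-up "stability database". The feature names, the synthetic stability boundary, and the use of `export_text` are assumptions for illustration, not the authors' formulation; in the paper the extracted thresholds become constraints in an optimal power flow model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Hypothetical stability database: two "critical generator" features
# (e.g. normalised active-power outputs) and a stable/unstable label
# generated from an invented linear stability boundary.
P = rng.uniform(0, 1, size=(400, 2))
stable = (0.6 * P[:, 0] + 0.4 * P[:, 1] < 0.55).astype(int)

dt = DecisionTreeClassifier(max_depth=2, random_state=0).fit(P, stable)

# The splitting rules (thresholds on generator outputs) are exactly the
# quantities that would be re-formulated as OPF constraints.
rules = export_text(dt, feature_names=["P_gen1", "P_gen2"])
score = dt.score(P, stable)
```

Printing `rules` shows human-readable threshold rules such as "P_gen1 <= t", which is what makes the control mechanism transparent.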

  5. Accurate Identification of Fatty Liver Disease in Data Warehouse Utilizing Natural Language Processing.

    PubMed

    Redman, Joseph S; Natarajan, Yamini; Hou, Jason K; Wang, Jingqi; Hanif, Muzammil; Feng, Hua; Kramer, Jennifer R; Desiderio, Roxanne; Xu, Hua; El-Serag, Hashem B; Kanwal, Fasiha

    2017-10-01

    Natural language processing is a powerful machine learning technique capable of maximizing data extraction from complex electronic medical records. We utilized this technique to develop algorithms capable of "reading" full-text radiology reports to accurately identify the presence of fatty liver disease. Abdominal ultrasound, computerized tomography, and magnetic resonance imaging reports were retrieved from the Veterans Affairs Corporate Data Warehouse from a random national sample of 652 patients. Radiographic fatty liver disease was determined by manual review by two physicians and verified with an expert radiologist. A split validation method was utilized for algorithm development. For all three imaging modalities, the algorithms could identify fatty liver disease with >90% recall and precision, with F-measures >90%. These algorithms could be used to rapidly screen patient records to establish a large cohort to facilitate epidemiological and clinical studies and examine the clinical course and outcomes of patients with radiographic hepatic steatosis.
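The evaluation metrics quoted above follow the standard information-retrieval definitions; a small helper makes the relationship explicit (the confusion counts below are hypothetical):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard IR metrics: precision = tp/(tp+fp), recall = tp/(tp+fn),
    and the F-measure is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical confusion counts for one imaging modality.
p, r, f = precision_recall_f1(tp=95, fp=5, fn=5)
```

When precision and recall are both above 90%, the F-measure necessarily is too, which is why the abstract can quote all three together.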

  6. Free-form illumination optics

    NASA Astrophysics Data System (ADS)

    Mohedano, Rubén; Chaves, Julio; Hernández, Maikel

    2016-04-01

    In many illumination problems, the beam pattern needed and/or some geometrical constraints lead to very asymmetric design conditions. These asymmetries have been solved in the past by means of arrangements of rotationally symmetric or linear lamps aimed in different directions, whose patterns overlap to provide the asymmetric prescriptions, or by splitting one single lamp into several sections, each one providing a part of the pattern. The development of new design methods yielding smooth continuous free-form optical surfaces to solve these challenging design problems, combined with the proper CAD modeling tools and the development of multiple-axis diamond turning machines, gives birth to a new generation of optics. These are able to offer high performance and other advanced features, such as efficiency, compactness, or aesthetic advantages, and can be manufactured at low cost by injection molding. This paper presents two examples of devices with free-form optical surfaces: a camera flash and a car headlamp.

  7. A tool for developing an automatic insect identification system based on wing outlines

    PubMed Central

    Yang, He-Ping; Ma, Chun-Sen; Wen, Hui; Zhan, Qing-Bin; Wang, Xin-Li

    2015-01-01

    For some insect groups, wing outline is an important character for species identification. We have constructed a program as an integral part of an automated system to identify insects based on wing outlines (DAIIS). This program includes two main functions: (1) outline digitization and elliptic Fourier transformation and (2) classifier training using support vector machine pattern recognition, with model validation. To demonstrate the utility of this program, a sample of 120 owlflies (Neuroptera: Ascalaphidae) was split into training and validation sets. After training, the sample was sorted into seven species using this tool. In five repeated experiments, the mean accuracy for identification of each species ranged from 90% to 98%. The accuracy increased to 99% when the samples were first divided into two groups based on features of their compound eyes. DAIIS can therefore be a useful tool for developing a system of automated insect identification. PMID:26251292
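The outline-descriptor step can be sketched in simplified form. One hedge: DAIIS uses the full elliptic Fourier transform (per-harmonic a, b, c, d coefficients), whereas this stand-in uses complex FFT descriptors of the outline, which capture the same shape-harmonics idea in fewer lines.

```python
import numpy as np

def fourier_descriptors(x, y, n_harmonics=10):
    """Simplified contour descriptors via the FFT of the complex outline
    z = x + iy (a stand-in for the full elliptic Fourier transform).
    Magnitudes are normalised by the first harmonic for scale invariance."""
    z = np.asarray(x) + 1j * np.asarray(y)
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_harmonics + 1])
    return mags / mags[0]

# A circular "wing outline": every harmonic beyond the first vanishes,
# so the descriptor vector is (1, 0, 0, ...).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
desc = fourier_descriptors(np.cos(t), np.sin(t))
```

Descriptor vectors like `desc` would then be the feature inputs to the support vector machine classifier.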

  8. Tchebichef moment based restoration of Gaussian blurred images.

    PubMed

    Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C

    2016-11-10

    With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.
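An extreme learning machine itself is simple enough to sketch: a fixed random hidden layer followed by a least-squares solve for the output weights. The regression target below is synthetic, not the (σ, w) blur parameters or Tchebichef-moment features of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, y, n_hidden=50):
    """Minimal extreme learning machine: random input weights and biases
    are drawn once and frozen; only the output weights are trained, by
    solving a linear least-squares problem."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover a smooth 1-D target from sampled points.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

Because only the output layer is trained, fitting is a single linear solve, which is the appeal of ELMs for feature-to-parameter regression tasks like blur estimation.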

  9. Resolving microstructures in Z pinches with intensity interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apruzese, J. P.; Kroupp, E.; Maron, Y.

    2014-03-15

    Nearly 60 years ago, Hanbury Brown and Twiss [R. Hanbury Brown and R. Q. Twiss, Nature 178, 1046 (1956)] succeeded in measuring the 30 nrad angular diameter of Sirius using a new type of interferometry that exploited the interference of photons independently emitted from different regions of the stellar disk. Its basis was the measurement of intensity correlations as a function of detector spacing, with no beam splitting or preservation of phase information needed. Applied to Z pinches, X pinches, or laser-produced plasmas, this method could potentially provide spatial resolution under one micron. A quantitative analysis based on the work of Purcell [E. M. Purcell, Nature 178, 1449 (1956)] reveals that obtaining adequate statistics from x-ray interferometry of a Z-pinch microstructure would require using the highest-current generators available. However, using visible light interferometry would reduce the needed photon count and could enable its use on sub-MA machines.

  10. Legal regulation of surrogate motherhood in Israel.

    PubMed

    Frenkel, D A

    2001-01-01

    The Israeli Law on surrogate motherhood requires a preconception agreement that includes the payments to be made to the surrogate mother. Surrogacy arrangements with family members are forbidden. Commercial surrogacy is allowed and encouraged. The Law causes many problems. The validity of consent given by surrogate mothers is doubtful. Possible future psychological harms are ignored. There is a danger of "commodification" of children. Abuse of women of low socio-economic status as breeding machines may be another outcome. No clear responsibility is imposed on the "intended parents" for an impaired child. The Law ignores the possibility of divorce or death of the "intended parents" before the child's birth. Splitting motherhood is another social problem that has to be dealt with. So far the sperm of the husband of the "intended parents" has to be used, but further steps may follow. It is not certain that a policy of "positive eugenics" will not develop.

  11. Phenomenological study of a cellular material behaviour under dynamic loadings

    NASA Astrophysics Data System (ADS)

    Bouix, R.; Viot, Ph.; Lataillade, J.-L.

    2006-08-01

    Polypropylene foams are cellular materials which are often used to fill structures subjected to crashes or violent impacts. It is therefore necessary to characterise experimentally their mechanical behaviour in compression at high strain rates. Several apparatus have been used in order to highlight the influence of strain rate, material density and temperature: a split Hopkinson pressure bar for impact tests, a flywheel to test these materials at medium strain rates, and an electro-mechanical testing machine with a climatic chamber for temperature tests. A rheological model has then been used to describe the material behaviour. The compressive response of these foams presents three typical domains: a linear elastic step, a wide collapse plateau, and a final densification; these are related to a standard rheological model.

  12. Waveguides fabricated by femtosecond laser exploiting both depressed cladding and stress-induced guiding core.

    PubMed

    Dong, Ming-Ming; Wang, Cheng-Wei; Wu, Zheng-Xiang; Zhang, Yang; Pan, Huai-Hai; Zhao, Quan-Zhong

    2013-07-01

    We report on the fabrication of stress-induced optical channel waveguides and waveguide splitters with laser-depressed cladding by femtosecond laser. The laser beam was focused into neodymium-doped phosphate glass by an objective, producing a destructive filament. By moving the sample along a closed path in the horizontal plane, followed by a small vertical descent of less than the filament length, a cylinder with a rarefied periphery and a densified centre region was fabricated. Lining up such segments in a partially overlapping sequence enabled waveguiding therein. The refractive-index contrast, near- and far-field mode distributions and confocal fluorescence microscope image of the waveguide were obtained. 1-to-2, 1-to-3 and 1-to-4 splitters were also machined with adjustable splitting ratios. Compared with traditional femtosecond laser writing methods, waveguides prepared by this approach showed controllable mode conduction, strong field confinement, large numerical aperture, low propagation loss and an intact core region.

  13. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.
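The kernel model described above, in which drivers and storage managers are ordinary tasks reached by message passing, can be caricatured with thread mailboxes. The names and the toy "volume" below are illustrative only, not PANDA's actual interfaces.

```python
import queue
import threading

# Each task owns a mailbox; the "kernel" service is just message delivery.
mailboxes = {"vat": queue.Queue(), "client": queue.Queue()}

def volume_access_task():
    """A toy VAT: all knowledge of on-disc layout lives here, and it
    answers read requests for 'blocks' of its volume via messages."""
    volume = {0: b"boot", 1: b"data"}
    while True:
        sender, block = mailboxes["vat"].get()
        if sender is None:                   # shutdown message
            break
        mailboxes[sender].put(volume.get(block, b""))

t = threading.Thread(target=volume_access_task)
t.start()
mailboxes["vat"].put(("client", 1))          # ask the VAT for block 1
reply = mailboxes["client"].get()
mailboxes["vat"].put((None, None))           # stop the task
t.join()
```

Swapping in a different `volume_access_task` changes the file system without touching the "kernel", which is the testbed property the thesis highlights.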

  14. Application of electrolytic in-process dressing for high-efficiency grinding of ceramic parts. Research activities 1995--96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandyopadhyay, B.P.

    1997-02-01

    The application of Electrolytic In-Process Dressing (ELID) for highly efficient and stable grinding of ceramic parts is discussed. This research was performed at the Institute of Physical and Chemical Research (RIKEN), Tokyo, Japan, June 1995 through August 1995. Experiments were conducted using a vertical machining center. The silicon nitride work material, of Japanese manufacture and supplied in the form of a rectangular block, was clamped to a vice which was firmly fixed on the base of a strain gage dynamometer. The dynamometer was clamped on the machining center table. Reciprocating grinding was performed with a flat-faced diamond grinding wheel. The output from the dynamometer was recorded with a data acquisition system and the normal component of the force was monitored. Experiments were carried out under various cutting conditions, different ELID conditions, and various grinding wheel bond types. Rough grinding wheels of grit sizes #170 and #140 were used in the experiments. Compared to conventional grinding, there was a significant reduction in grinding force with ELID grinding. Therefore, ELID grinding can be recommended for high material removal rate grinding, low rigidity machines, and low rigidity workpieces. Compared to normal grinding, a reduction in grinding ratio was observed when ELID grinding was performed. A negative aspect of the process, this reduced G-ratio derives from bond erosion and can be improved somewhat by adjustments in the ELID current. The results of this investigation are discussed in detail in this report.

  15. Splitting the mind within the individual, nation and economy: reflections on the struggle for integration in post-war Germany.

    PubMed

    Plänkers, Tomas

    2015-02-01

    With respect to theorisations of psychical splitting, this paper explores the psychical mechanisms that underlie different forms of social splitting. The paper first outlines Freud's and Klein's different theorisations of the psychical mechanisms of splitting, where the good is split from the bad, the inside split from the outside, and the painful disavowed. I then consider the psychical mechanisms of splitting that underlie ideological supports of certain social systems, specifically those of National Socialist Germany, East Germany during the Cold War period, and neoliberal capitalism. Here, I consider ideological splits between good and evil, the relation between external and internal splits, the relation between geographical, social and internal splitting, as well as splitting as disavowal of the other. Copyright © 2015 Institute of Psychoanalysis.

  16. Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)

    PubMed Central

    Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R

    2005-01-01

    Background BLAST is one of the most common and useful tools for genetic research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy to use, fault tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LAN). W.ND-BLAST provides intuitive Graphic User Interfaces (GUI) for BLAST database creation, BLAST execution, BLAST output evaluation and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job which took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower performance class machines. Finally, there are comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components, which provide exportation of BLAST hits to text files, annotated fasta files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses.
The install package for W.ND-BLAST is freely downloadable from . With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992

  17. Determination of split renal function using dynamic CT-angiography: preliminary results.

    PubMed

    Helck, Andreas; Schönermarck, Ulf; Habicht, Antje; Notohamiprodjo, Mike; Stangl, Manfred; Klotz, Ernst; Nikolaou, Konstantin; la Fougère, Christian; Clevert, Dirk Andrè; Reiser, Maximilian; Becker, Christoph

    2014-01-01

    To determine the feasibility of a dynamic CT angiography-protocol with regard to simultaneous assessment of renal anatomy and function. 7 healthy potential kidney donors (58 ± 7 years) underwent a dynamic computed tomography angiography (CTA) using a 128-slice CT-scanner with continuous bi-directional table movement, allowing the coverage of a scan range of 18 cm within 1.75 sec. Twelve scans of the kidneys (n = 14) were acquired every 3.5 seconds with the aim to simultaneously obtain CTA and renal function data. Image quality was assessed quantitatively (HU-measurements) and qualitatively (grade 1-4, 1 = best). The glomerular filtration rate (GFR) was calculated by a modified Patlak method and compared with the split renal function obtained with renal scintigraphy. Mean maximum attenuation was 464 ± 58 HU, 435 ± 48 HU and 277 ± 29 HU in the aorta, renal arteries, and renal veins, respectively. The abdominal aorta and all renal vessels were depicted excellently (grade 1.0). The image quality score for cortex differentiation was 1.6 ± 0.49, for the renal parenchyma 2.4 ± 0.49. GFR obtained from dynamic CTA correlated well with renal scintigraphy with a correlation coefficient of r = 0.84; P = 0.0002 (n = 14). The average absolute deviation was 1.6 mL/min. The average effective dose was 8.96 mSv. Comprehensive assessment of renal anatomy and function is feasible using a single dynamic CT angiography examination. The proposed protocol may help to improve management in case of asymmetric kidney function as well as to simplify evaluation of potential living kidney donors.
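The modified Patlak method mentioned above is, at heart, a linear fit: renal enhancement normalised by aortic enhancement is plotted against the normalised time-integral of aortic enhancement, and the slope estimates the uptake (clearance) constant from which GFR is derived. The sketch below uses synthetic enhancement curves and an assumed two-compartment model, not patient data.

```python
import numpy as np

t = np.arange(0, 42, 3.5)                 # 12 scans, 3.5 s apart
A = 400 * (1 - np.exp(-t / 10.0))         # synthetic aortic enhancement (HU)
Ki, V0 = 0.004, 0.25                      # assumed uptake constant, blood volume

# Cumulative integral of A(t) by the trapezoidal rule.
intA = np.concatenate(([0.0], np.cumsum((A[1:] + A[:-1]) / 2 * np.diff(t))))

# Two-compartment renal enhancement model: uptake term + blood pool term.
K = Ki * intA + V0 * A

# Patlak plot: K/A versus intA/A is linear with slope Ki, intercept V0.
valid = A > 0                             # skip t = 0 where A = 0
px = intA[valid] / A[valid]
py = K[valid] / A[valid]
slope, intercept = np.polyfit(px, py, 1)  # slope recovers the uptake constant
```

Because the synthetic data follow the model exactly, the fit recovers `Ki` and `V0`; with real dynamic CT data the fit would be performed per voxel or per kidney ROI.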

  18. A split ubiquitin system to reveal topology and released peptides of membrane proteins.

    PubMed

    Li, Qiu-Ping; Wang, Shuai; Gou, Jin-Ying

    2017-09-02

    Membrane proteins define the biological functions of membranes in cells. Extracellular peptides of transmembrane proteins receive signals from pathogens or the environment, and are major targets of drug development. Despite their essential roles, membrane proteins remain elusive in topological studies due to technical difficulties in their expression and purification. First, the target gene is cloned into a destination vector to fuse with the C-terminal ubiquitin (Cub) at the N or C terminus. Then, the Cub vector with the target gene and the Nub WT or Nub G vectors are transformed into AP4 or AP5 yeast cells, respectively. After mating, the diploid cells are dipped onto selection medium to check growth. The topology of the target protein is determined according to Table 1. We present a split ubiquitin topology (SUT) analysis system to study the topology and truncation peptides of membrane proteins in a simple yeast experiment. In the SUT system, a transcription activator (TA) fused with a nucleo-cytoplasmic protein shows strong auto-activation with both positive and negative control vectors. TA fused with the cytoplasmic end of a membrane protein activates reporter genes only with the positive control vector carrying a wild-type N-terminal ubiquitin (Nub WT). However, TA fused with the extracellular termini of membrane proteins cannot activate reporter genes even with Nub WT. Interestingly, TA fused with the released peptide of a membrane protein shows auto-activation in the SUT system. The SUT system is a simple and fast experimental procedure complementary to computational predictions and large-scale proteomic techniques. The preliminary data from SUT are valuable for pathogen recognition and new drug development.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on March 20, 2013. Representatives from the U.S. Nuclear Regulatory Commission and the Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and Table 1 presents the comparison of results using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates that at a 99% confidence interval, split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report does not specify the confidence level of reported uncertainties (NFS 2013). Therefore, standard two sigma reporting is assumed and uncertainty values were divided by 1.96. In conclusion, most DER values were less than 3 and results are consistent with low (e.g., background) concentrations. The gross beta result for sample 5198W0012 was the exception. The ORAU result of 9.23 ± 0.73 pCi/L from location MCD is well above NFS's result of -0.567 ± 0.63 pCi/L (non-detected). NFS's data package included a detected result for U-233/234, but no other uranium or plutonium detection, and nothing that would suggest the presence of beta-emitting radionuclides. The ORAU laboratory reanalyzed sample 5198W0012 using the remaining portion of the sample volume and a result of 11.3 ± 1.1 pCi/L was determined.
As directed, the laboratory also counted the filtrate using gamma spectrometry analysis and identified only naturally occurring or ubiquitous man-made constituents, including beta emitters that are presumably responsible for the elevated gross beta values.
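The DER used in Table 1 is defined in the text as the normalized absolute difference; applying it to the discrepant MCD gross beta pair reproduces the reported disagreement:

```python
import math

def duplicate_error_ratio(a, sig_a, b, sig_b):
    """Duplicate error ratio (normalized absolute difference):
    |a - b| / sqrt(sig_a**2 + sig_b**2). DER <= 3 indicates split-sample
    agreement at roughly the 99% confidence level (per ANSI N42.22)."""
    return abs(a - b) / math.sqrt(sig_a ** 2 + sig_b ** 2)

# The discrepant gross beta pair from location MCD, values from the text
# (both uncertainties taken at one sigma, in pCi/L).
der = duplicate_error_ratio(9.23, 0.73, -0.567, 0.63)
```

The resulting DER of roughly 10 is far above the acceptance criterion of 3, consistent with the text's conclusion that this pair is the exception.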

  20. StackSplit - a plugin for multi-event shear wave splitting analyses in SplitLab

    NASA Astrophysics Data System (ADS)

    Grund, Michael

    2017-04-01

    The SplitLab package (Wüstefeld et al., Computers and Geosciences, 2008), written in MATLAB, is a powerful and widely used tool for analysing seismological shear wave splitting from single-event measurements. However, in many cases, especially for temporary station deployments close to the sea or for recordings affected by strong anthropogenic noise, only multi-event approaches provide stable and reliable splitting results. In order to extend the original SplitLab environment for such analyses, I present the StackSplit plugin, which can easily be implemented within the well-accepted main program. StackSplit grants easy access to several different analysis approaches within SplitLab, including a new multiple-waveform based inversion method as well as the most established standard stacking procedures. The possibility to switch between different analysis approaches at any time allows the user the most flexible processing of individual multi-event splitting measurements for a single recording station. Besides the provided functions of the plugin, no other external program is needed for the multi-event analyses, since StackSplit performs within the available SplitLab structure.

  1. Splitting of IVP bovine blastocyst affects morphology and gene expression of resulting demi-embryos during in vitro culture and in vivo elongation.

    PubMed

    Velasquez, Alejandra E; Castro, Fidel O; Veraguas, Daniel; Cox, Jose F; Lara, Evelyn; Briones, Mario; Rodriguez-Alvarez, Lleretny

    2016-02-01

    Embryo splitting might be used to increase offspring yield and for molecular analysis of embryo competence. How splitting affects the developmental potential of embryos is unknown. This research aimed to study the effect of bovine blastocyst splitting on the morphological and gene expression homogeneity of demi-embryos and on embryo competence during elongation. Grade I bovine blastocysts produced in vitro were split into halves and distributed into nine groups (a 3 × 3 setting according to age and stage before splitting; age: days 7-9; stage: early, expanded and hatched blastocysts). Homogeneity and survival rate in vitro after splitting (12 h, days 10 and 13) and the effect of splitting on embryo development at elongation after embryo transfer (day 17) were assessed morphologically and by RT-qPCR. The genes analysed were OCT4, SOX2, NANOG, CDX2, TP1, TKDP1, EOMES, and BAX. Approximately 90% of split embryos had a well-conserved, defined inner cell mass (ICM), and 70% of the halves were of similar size with no differences in gene expression 12 h after splitting. Split embryos cultured further conserved normal and comparable morphology at day 10 of development; this situation changed at day 13, when embryo morphology and gene expression differed markedly among demi-embryos. Split and non-split blastocysts were transferred to recipient cows and were recovered at day 17. Fifty per cent of non-split embryos were larger than 100 mm (33% for split embryos). OCT4, SOX2, TP1 and EOMES levels were down-regulated in elongated embryos derived from split blastocysts. In conclusion, splitting day-8 blastocysts yields homogeneous demi-embryos in terms of developmental capability and gene expression, but the initiation of the filamentous stage seems to be affected by the splitting.

  2. Ice Cloud Properties in Ice-Over-Water Cloud Systems Using TRMM VIRS and TMI Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Huang, Jianping; Lin, Bing; Yi, Yuhong; Arduini, Robert F.; Fan, Tai-Fang; Ayers, J. Kirk; Mace, Gerald G.

    2007-01-01

    A multi-layered cloud retrieval system (MCRS) is updated and used to estimate ice water path in maritime ice-over-water clouds using Visible and Infrared Scanner (VIRS) and TRMM Microwave Imager (TMI) measurements from the Tropical Rainfall Measuring Mission spacecraft between January and August 1998. Lookup tables of top-of-atmosphere 0.65-µm reflectance are developed for ice-over-water cloud systems using radiative transfer calculations with various combinations of ice-over-water cloud layers. The liquid and ice water paths, LWP and IWP, respectively, are determined with the MCRS using these lookup tables with a combination of microwave (MW), visible (VIS), and infrared (IR) data. LWP, determined directly from the TMI MW data, is used to define the lower-level cloud properties and select the proper lookup table. The properties of the upper-level ice clouds, such as optical depth and effective size, are then derived using the Visible Infrared Solar-infrared Split-window Technique (VISST), which matches the VIRS IR, 3.9-µm, and VIS data to the multilayer-cloud lookup table reflectances and a set of emittance parameterizations. Initial comparisons with surface-based radar retrievals suggest that this enhanced MCRS can significantly improve the accuracy and decrease the IWP in overlapped clouds by 42% and 13% compared to using the single-layer VISST and an earlier simplified MW-VIS-IR (MVI) differencing method, respectively, for ice-over-water cloud systems. The tropical distribution of ice-over-water clouds is the same as derived earlier from combined TMI and VIRS data, but the new values of IWP and optical depth are slightly larger than the older MVI values, and exceed those of single-layered clouds by 7% and 11%, respectively. The mean IWP from the MCRS is 8-14% greater than that retrieved from radar retrievals of overlapped clouds over two surface sites, and the standard deviations of the differences are similar to those for single-layered clouds.
Examples of a method for applying the MCRS over land without microwave data yield similar differences with the surface retrievals. By combining the MCRS with other techniques that focus primarily on optically thin cirrus over low water clouds, it will be possible to more fully assess the IWP in all conditions over ocean except for precipitating systems.

  3. Algebraic techniques for diagonalization of a split quaternion matrix in split quaternionic mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Tongsong, E-mail: jiangtongsong@sina.com; Department of Mathematics, Heze University, Heze, Shandong 274015; Jiang, Ziwu

    In the study of the relation between complexified classical and non-Hermitian quantum mechanics, physicists found that there are links to quaternionic and split quaternionic mechanics, and this leads to the possibility of employing algebraic techniques of split quaternions to tackle some problems in complexified classical and quantum mechanics. This paper, by means of real representation of a split quaternion matrix, studies the problem of diagonalization of a split quaternion matrix and gives algebraic techniques for diagonalization of split quaternion matrices in split quaternionic mechanics.
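
    The real-representation idea can be illustrated with a minimal, self-contained sketch (the 2×2 mapping below is one standard real representation of split quaternions, shown here only to make the approach concrete; the paper works with matrices over this algebra):

```python
# A split quaternion q = a + b*i + c*j + d*k (with i^2 = -1, j^2 = k^2 = +1,
# ij = k) can be represented by the real 2x2 matrix
#   M(q) = [[a + d, b + c],
#           [-b + c, a - d]],
# so split-quaternion problems such as diagonalization reduce to real matrix
# algebra, which is the general idea behind real-representation methods.

def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

def rep(a, b, c, d):
    """Real 2x2 representation of the split quaternion a + b*i + c*j + d*k."""
    return [[a + d, b + c], [-b + c, a - d]]

I2 = rep(1, 0, 0, 0)          # 1
i_ = rep(0, 1, 0, 0)          # i
j_ = rep(0, 0, 1, 0)          # j
k_ = rep(0, 0, 0, 1)          # k

# Defining relations of the split quaternions hold in the representation:
assert matmul2(i_, i_) == [[-1, 0], [0, -1]]   # i^2 = -1
assert matmul2(j_, j_) == I2                   # j^2 = +1
assert matmul2(k_, k_) == I2                   # k^2 = +1
assert matmul2(i_, j_) == k_                   # ij = k
```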

  4. Influence of the large-small split effect on strategy choice in complex subtraction.

    PubMed

    Xiang, Yan Hui; Wu, Hao; Shang, Rui Hong; Chao, Xiaomei; Ren, Ting Ting; Zheng, Li Ling; Mo, Lei

    2018-04-01

    Two main theories have been used to explain the arithmetic split effect: decision-making process theory and strategy choice theory. Using the inequality paradigm, previous studies have confirmed that individuals tend to adopt a plausibility-checking strategy and a whole-calculation strategy to solve large and small split problems in complex addition arithmetic, respectively. This supports strategy choice theory, but it is unknown whether this theory also explains performance in solving different split problems in complex subtraction arithmetic. This study used small, intermediate and large split sizes, with each split condition being further divided into problems requiring and not requiring borrowing. The reaction times (RTs) for large and intermediate splits were significantly shorter than those for small splits, while accuracy was significantly higher for large and intermediate splits than for small splits, reflecting no speed-accuracy trade-off. Further, RTs and accuracy differed significantly between the borrow and no-borrow conditions only for small splits. This study indicates that strategy choice theory is suitable to explain the split effect in complex subtraction arithmetic. That is, individuals tend to choose the plausibility-checking strategy or the whole-calculation strategy according to the split size. © 2016 International Union of Psychological Science.

  5. StackSplit - a plugin for multi-event shear wave splitting analyses in SplitLab

    NASA Astrophysics Data System (ADS)

    Grund, Michael

    2017-08-01

    SplitLab is a powerful and widely used tool for analysing seismological shear wave splitting of single event measurements. However, in many cases, especially temporary station deployments close to the noisy seaside, on the ocean bottom, or for recordings affected by strong anthropogenic noise, only multi-event approaches provide stable and reliable splitting results. In order to extend the original SplitLab environment for such analyses, I present the StackSplit plugin, which can easily be implemented within the well-accepted main program. StackSplit grants easy access to several different analysis approaches within SplitLab, including a new multiple-waveform-based inversion method as well as the most established standard stacking procedures. The possibility to switch between different analysis approaches at any time gives the user maximum flexibility in processing individual multi-event splitting measurements for a single recording station. Besides the provided functions of the plugin, no other external program is needed for the multi-event analyses, since StackSplit performs within the available SplitLab structure, which is based on MATLAB. The effectiveness and use of this plugin are demonstrated with data examples from a long-running seismological recording station in Finland.
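
    The stacking idea behind such multi-event approaches can be sketched as follows (a toy illustration with made-up misfit grids, not StackSplit's actual MATLAB data structures): per-event error surfaces over the (phi, dt) grid are summed, so the common low-misfit region emerges more robustly than in any single noisy measurement.

```python
def stack(surfaces):
    """Element-wise sum of equally sized 2-D misfit grids."""
    rows, cols = len(surfaces[0]), len(surfaces[0][0])
    return [[sum(s[r][c] for s in surfaces) for c in range(cols)]
            for r in range(rows)]

def argmin2d(grid):
    """Return (row, col) of the smallest grid value."""
    return min(((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
               key=lambda rc: grid[rc[0]][rc[1]])

# Two noisy single-event surfaces that individually prefer different cells
# but share a common low-misfit region at (1, 1):
s1 = [[5.0, 4.0, 6.0],
      [3.0, 1.0, 5.0],
      [6.0, 0.9, 7.0]]   # single-event minimum at (2, 1)
s2 = [[6.0, 5.0, 4.0],
      [4.0, 0.8, 3.0],
      [7.0, 5.0, 6.0]]   # single-event minimum at (1, 1)
best = argmin2d(stack([s1, s2]))   # joint estimate from the stacked surface
```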

  6. Gambling Participation, Expenditure and Risk of Harm in Australia, 1997-1998 and 2010-2011.

    PubMed

    Armstrong, Andrew Richard; Thomas, Anna; Abbott, Max

    2018-03-01

    Gambling-related harm results primarily from financial losses. Internationally, Australia continues to rank as the largest spending nation per capita on gambling products. This would suggest that Australian gamblers are at disproportionately high risk of harm despite almost two decades of industry scrutiny and regulation, and investment in research, treatment and education programs. However, declines in participation rates, per capita expenditure, household expenditure, national disposable income spent on gambling and problem gambling rates have been cited as evidence that fewer people are gambling, that gamblers are spending less, and that gambling safety in Australia has improved. The current study investigated these propositions using national population and accounts data, and statistics from Australia's two population-representative gambling surveys conducted in 1997-1998 and 2010-2011. Despite a falling participation rate, the study found no real change in the number of people gambling overall, and increasing numbers consuming casino table games, race wagering and sports betting. The study further found increases, rather than decreases, in average gambler expenditure overall and across most products, particularly electronic gaming machines (EGMs). Potentially risky levels of average expenditure were observed in both periods, overall and for race wagering, casino table gaming, and EGMs. Changes in the proportion of income spent on gambling suggest risks declined overall and for race wagering and casino table gaming, but increased for EGMs. Finally, while problem gambling statistics were not comparable between periods, the study found double the number of moderate risk gamblers previously estimated for 2010-2011 amongst the 2 million Australians found to have experienced one or more gambling-related problems.
The findings have implications for public health policy and resourcing, and the way in which prevalence and expenditure statistics have been interpreted by researchers, government and industry in Australia and elsewhere.

  7. Cellular Automata Generalized To An Inferential System

    NASA Astrophysics Data System (ADS)

    Blower, David J.

    2007-11-01

    Stephen Wolfram popularized elementary one-dimensional cellular automata in his book, A New Kind of Science. Among many remarkable things, he proved that one of these cellular automata was a Universal Turing Machine. Such cellular automata can be interpreted in a different way by viewing them within the context of the formal manipulation rules from probability theory. Bayes's Theorem is the most famous of such formal rules. As a prelude, we recapitulate Jaynes's presentation of how probability theory generalizes classical logic using modus ponens as the canonical example. We emphasize the important conceptual standing of Boolean Algebra for the formal rules of probability manipulation and give an alternative demonstration augmenting and complementing Jaynes's derivation. We show the complementary roles played in arguments of this kind by Bayes's Theorem and joint probability tables. A good explanation for all of this is afforded by the expansion of any particular logic function via the disjunctive normal form (DNF). The DNF expansion is a useful heuristic emphasized in this exposition because such expansions point out where relevant 0s should be placed in the joint probability tables for logic functions involving any number of variables. It then becomes a straightforward exercise to rely on Boolean Algebra, Bayes's Theorem, and joint probability tables in extrapolating to Wolfram's cellular automata. Cellular automata are seen as purely deductive systems, just like classical logic, which probability theory is then able to generalize. Thus, any uncertainties which we might like to introduce into the discussion about cellular automata are handled with ease via the familiar inferential path. Most importantly, the difficult problem of predicting what cellular automata will do in the far future is treated like any inferential prediction problem.
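
    The "cellular automaton as truth table" view can be made concrete with a short sketch (illustrative, not from the paper): a Wolfram rule number encodes a deterministic lookup table over the three parent cells, which is exactly the limiting 0/1 case of the joint probability tables that probability theory then generalizes.

```python
# An elementary cellular automaton is a purely deductive system: each update
# is a lookup in a truth table over the three parent cells.

def rule_table(rule_number):
    """Map each 3-bit neighborhood to the output bit of a Wolfram rule."""
    return {(a, b, c): (rule_number >> (a * 4 + b * 2 + c)) & 1
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells, table):
    """One synchronous update with periodic boundary conditions."""
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Rule 110 (the rule proved computation-universal) applied once to a
# single live cell:
table = rule_table(110)
row = step([0, 0, 0, 1, 0, 0, 0], table)
```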

  8. An ultrasonically levitated noncontact stage using traveling vibrations on precision ceramic guide rails.

    PubMed

    Koyama, Daisuke; Ide, Takeshi; Friend, James R; Nakamura, Kentaro; Ueha, Sadayuki

    2007-03-01

    This paper presents a noncontact sliding table design and measurements of its performance via ultrasonic levitation. A slider placed atop two vibrating guide rails is levitated by an acoustic radiation force emitted from the rails. A flexural traveling wave propagating along the guide rails allows noncontact transportation of the slider, permitting a transport mechanism that reduces abrasion and dust generation with an inexpensive and simple structure. The profile of the sliding table was designed using finite-element analysis (FEA) for high levitation and transportation efficiency. The prototype sliding table was made of alumina ceramic (Al2O3) to increase machining accuracy and rigidity, using a structure composed of a pair of guide rails with a triangular cross section and piezoelectric transducers. Two types of transducers were used: bolt-clamped Langevin transducers and bimorph transducers. A 40-mm long slider was designed to fit atop the two rail guides. Flexural standing waves and torsional standing waves were observed along the guide rails at resonance, and levitation of the slider was obtained using the flexural mode even though the levitation distance was less than 10 µm. The levitation distance of the slider was measured while increasing the slider's weight. The levitation pressure, rigidity, and vertical displacement amplitude of the levitating slider were thus measured to be 6.7 kN/m², 3.0 kN/µm/m², and less than 1 µm, respectively. Noncontact transport of the slider was achieved using phased drive of the two transducers at either end of the vibrating guide rail. By controlling the phase difference, the slider transportation direction could be switched, and a maximum thrust of 13 mN was obtained.

  9. Microwave measurements of proton tunneling and structural parameters for the propiolic acid-formic acid dimer.

    PubMed

    Daly, Adam M; Douglass, Kevin O; Sarkozy, Laszlo C; Neill, Justin L; Muckle, Matt T; Zaleski, Daniel P; Pate, Brooks H; Kukolich, Stephen G

    2011-10-21

    Microwave spectra of the propiolic acid-formic acid doubly hydrogen bonded complex were measured in the 1 GHz to 21 GHz range using four different Fourier transform spectrometers. Rotational spectra for seven isotopologues were obtained. For the parent isotopologue, a total of 138 a-dipole transitions and 28 b-dipole transitions were measured for which the a-dipole transitions exhibited splittings of a few MHz into pairs of lines and the b-type dipole transitions were split by ~580 MHz. The transitions assigned to this complex were fit to obtain rotational and distortion constants for both tunneling levels: A(0+) = 6005.289(8), B(0+) = 930.553(8), C(0+) = 803.9948(6) MHz, Δ(0+)(J) = 0.075(1), Δ(0+)(JK) = 0.71(1), and δ(0+)(j) = -0.010(1) kHz and A(0-) = 6005.275(8), B(0-) = 930.546(8), C(0-) = 803.9907(5) MHz, Δ(0-)(J) = 0.076(1), Δ(0-)(JK) = 0.70(2), and δ(0-)(j) = -0.008(1) kHz. Double resonance experiments were used on some transitions to verify assignments and to obtain splittings for cases when the b-dipole transitions were difficult to measure. The experimental difference in energy between the two tunneling states is 291.428(5) MHz for proton-proton exchange and 3.35(2) MHz for the deuterium-deuterium exchange. The vibration-rotation coupling constant between the two levels, F(ab), is 120.7(2) MHz for the proton-proton exchange. With one deuterium atom substituted in either of the hydrogen-bonding protons, the tunneling splittings were not observed for a-dipole transitions, supporting the assignment of the splitting to the concerted proton tunneling motion. The spectra were obtained using three Flygare-Balle type spectrometers and one chirped-pulse machine at the University of Virginia. Rotational constants and centrifugal distortion constants were obtained for HCOOH···HOOCCCH, H(13)COOH···HOOCCCH, HCOOD···HOOCCCH, HCOOH···DOOCCCH, HCOOD···DOOCCCH, DCOOH···HOOCCCH, and DCOOD···HOOCCCH. 
High-level ab initio calculations provided initial rotational constants for the complex, structural parameters, and some details of the proton tunneling potential energy surface. A least squares fit to the isotopic data reveals a planar structure that is slightly asymmetric in the OH distances. The formic OH···O propiolic hydrogen bond length is 1.8 Å and the propiolic OH···O formic hydrogen bond length is 1.6 Å, for the equilibrium configuration. The magnitude of the dipole moment was experimentally determined to be 1.95(3) × 10(-30) C m (0.584(8) D) for the 0(+) states and 1.92(5) × 10(-30) C m (0.576(14) D) for the 0(-) states. © 2011 American Institute of Physics
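
    As a consistency check on the quoted numbers (a back-of-the-envelope illustration, not the paper's fitting procedure): b-dipole transitions connect the 0(+) and 0(-) tunneling stacks, so each b-type line should be split by roughly the sum of the tunneling splittings of the upper and lower states, i.e. about twice the 0(+)/0(-) energy difference.

```python
# Quoted 0+/0- energy difference for proton-proton exchange, in MHz:
delta_E = 291.428

# b-type transitions cross between the two tunneling stacks, so adjacent
# doublet components are separated by roughly twice the energy difference,
# consistent with the observed splitting of ~580 MHz:
b_type_splitting = 2 * delta_E
```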

  10. Artificial intelligence analysis of hyperspectral remote sensing data for management of water, weed, and nitrogen stresses in corn fields

    NASA Astrophysics Data System (ADS)

    Waheed, Tahir

    This study investigated the possibility of using ground-based remotely sensed hyperspectral observations, with a special emphasis on detection of water, weed and nitrogen stresses, contributing towards in-season decision support for precision crop management (PCM). A three-factor split-split-plot experiment, with four randomized blocks as replicates, was established during the growing seasons of 2003 and 2004. Corn (Zea mays L.) hybrid DKC42-22 was grown because this hybrid is a good performer on light soils in Quebec. There were twelve 12 x 12 m plots in a block (one replication per treatment per block) and the total number of plots was 48. Water stress was the main factor in the experiment. A drip irrigation system was laid out and each block was split into irrigated and non-irrigated halves. The second main factor of the experiment was weeds, with two levels, i.e. full weed control and no weed control. Weed treatments were assigned randomly by further splitting the irrigated and non-irrigated sub-blocks into two halves. Each of the weed treatments was furthermore split into three equal sub-sub-plots for nitrogen treatments (the third factor of the experiment). Nitrogen was applied at three levels, i.e. 50, 150 and 250 kg N ha-1 (the Quebec norm is between 120-160 kg N ha-1). The hyperspectral data were recorded (spectral resolution = 1 nm) at mid-day (between 1000 and 1400 hours) with a FieldSpec FR spectroradiometer over a spectral range of 400-2500 nm at three growth stages, namely early growth, tasseling and full maturity, in each growing season. There are two major original contributions in this thesis: First is the development of a hyperspectral data analysis procedure for separating visible (400-700 nm), near-infrared (700-1300 nm) and mid-infrared (1300-2500 nm) regions of the spectrum for use in a discriminant analysis procedure.
    In addition, of all the spectral band-widths analyzed, seven waveband aggregates were identified using the STEPDISC procedure, which were the most effective for classifying combined water, weed, and nitrogen stress. The second contribution is the successful classification of hyperspectral observations acquired over an agricultural field, using three innovative artificial intelligence approaches: support vector machines (SVM), genetic algorithms (GA) and decision tree (DT) algorithms. These AI approaches were used to evaluate the combined effect of water, weed and nitrogen stresses in corn, and of the three AI approaches used, SVM produced the best results (overall accuracy ranging from 88% to 100%). The general conclusion is that the conventional statistical and artificial intelligence techniques used in this study are all useful for quickly mapping the combined effects of irrigation, weed and nitrogen stresses (with overall accuracies ranging from 76% to 100%). These approaches have strong potential and are of great benefit to those investigating the in-season impact of irrigation, weed and nitrogen management for corn crop production and other environment-related challenges.

  11. Assessing Adaptation with Asymmetric Climate Information: evidence from water bargaining field experiments in Northeast Brazil

    NASA Astrophysics Data System (ADS)

    Pfaff, A.; Velez, M.; Taddei, R.; Broad, K.

    2011-12-01

    We assess how asymmetric climate information affects bargaining -- an adaptation institution. As often observed in the field, some actors lack information. This yields vulnerability, despite participation. We examine the loss for a participant from being uncertain about water quantity when bargaining with a fully informed participant in an ultimatum game in Northeast Brazil. When all are fully informed, our field populations in the capital city and an agricultural valley produce a typical 60-40 split between those initiating and responding in one-shot bargaining. With asymmetric information, when initiators know the water quantity is low they get 80%. Thus, even within bargaining, i.e. given strong participation, better integrating climate science into water management, through greater effort to communicate relevant information to all involved, can help to avoid inequities that could arise despite all of the stakeholders being 'at the table', as may well occur with future water allocation along a large new canal in the case we study.

  12. Experimental analysis and constitutive modelling of steel of A-IIIN strength class

    NASA Astrophysics Data System (ADS)

    Kruszka, Leopold; Janiszewski, Jacek

    2015-09-01

    A better understanding of the behaviour of new building steels under impact loadings, including plastic deformations, is fundamentally important. Results of the experimental analysis over a wide range of strain rates in compression at room temperature, as well as constitutive modelling, for B500SP structural steel of the new A-IIIN Polish strength class, examined dynamically by the split Hopkinson pressure bar technique at high strain rates, are presented in table and graphic forms. Dynamic mechanical characteristics of compressive strength for the tested building structural steel are determined, and the dynamic mechanical properties of this material are compared with 18G2-b steel of the A-II strength class, including effects of the shape of the tested specimens, i.e. their slenderness. The paper focuses attention on these experimental tests, their interpretation, and constitutive semi-empirical modelling of the behaviour of the tested steels based on the Johnson-Cook model. The results of the analyses presented here are used for designing and numerical simulations of reinforced concrete protective structures.
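
    The Johnson-Cook flow-stress model used for such constitutive fits can be sketched as follows; the parameter values below are hypothetical placeholders, not the fitted constants for the steels tested.

```python
import math

# Johnson-Cook flow stress:
#   sigma = (A + B*eps^n) * (1 + C*ln(rate/ref_rate)) * (1 - T*^m),
# where T* = (T - T_room) / (T_melt - T_room) is the homologous temperature.

def johnson_cook(strain, strain_rate, T,
                 A, B, n, C, m,
                 ref_rate=1.0, T_room=293.0, T_melt=1793.0):
    """Flow stress (MPa) from plastic strain, strain rate (1/s), and T (K)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * strain ** n) \
        * (1.0 + C * math.log(strain_rate / ref_rate)) \
        * (1.0 - T_star ** m)

# Illustrative evaluation at room temperature (thermal softening term
# vanishes, since T* = 0) and a Hopkinson-bar-like strain rate:
sigma = johnson_cook(strain=0.05, strain_rate=1000.0, T=293.0,
                     A=500.0, B=600.0, n=0.3, C=0.02, m=1.0)
```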

  13. Reference results for time-like evolution up to

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Carrazza, Stefano; Nocera, Emanuele R.

    2015-03-01

    We present high-precision numerical results for time-like Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution in the factorisation scheme, for the first time up to next-to-next-to-leading order accuracy in quantum chromodynamics. First, we scrutinise the analytical expressions of the splitting functions available in the literature, in both x and N space, and check their mutual consistency. Second, we implement time-like evolution in two publicly available, entirely independent and conceptually different numerical codes, in x and N space respectively: the already existing APFEL code, which has been updated with time-like evolution, and the new MELA code, which has been specifically developed to perform the study in this work. Third, by means of a model for fragmentation functions, we provide results for the evolution in different factorisation schemes, for different ratios between renormalisation and factorisation scales and at different final scales. Our results are collected in the format of benchmark tables, which could be used as a reference for global determinations of fragmentation functions in the future.
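
    Schematically, and suppressing the flavour decomposition used in the actual codes, the time-like evolution referred to above has the structure:

```latex
% Time-like DGLAP evolution of a fragmentation function D_i(x, \mu^2):
\frac{\partial D_i(x,\mu^2)}{\partial \ln \mu^2}
  = \sum_j \int_x^1 \frac{dz}{z}\,
    P_{ji}^{T}\!\left(z,\alpha_s(\mu^2)\right)
    D_j\!\left(\frac{x}{z},\mu^2\right),
\qquad
P_{ji}^{T} = \frac{\alpha_s}{2\pi} P_{ji}^{T,(0)}
  + \left(\frac{\alpha_s}{2\pi}\right)^2 P_{ji}^{T,(1)}
  + \left(\frac{\alpha_s}{2\pi}\right)^3 P_{ji}^{T,(2)} + \dots
```

    The next-to-next-to-leading order accuracy checked in the paper corresponds to including the third term of the perturbative expansion of the time-like splitting functions.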

  14. Placement accuracy gauge for electrical components and method of using same

    DOEpatents

    Biggs, Peter M.; Dancer, Linda K.; Yerganian, Simon S.

    1988-10-11

    Surface mounted electrical components are typically assembled on printed wiring boards by automatic machines. It is important that the machines accurately move with respect to both X and Y rotational axes in order to insure that components are positioned precisely on connector pads of the printed wiring board being assembled. In accordance with the instant invention, a gauge is used to facilitate convenient accuracy checks. The gauge is a glass substrate on which grids of 0.005 inch lines are scribed to form location and orientation fields where components are to be placed. The grids are referenced from either fiducial marks or the edge of the substrate to establish known positions within the grids. The equipment to be evaluated is programmed to place components in known positions and the components are held in place by tacky adhesive that is sprayed on the substrate prior to placing the components. The accuracy of the component position is then compared to the programmed position by placing the substrate on a light table and observing the component location. If a significant inaccuracy with respect to any of the axes exists, the inaccuracy is apparent because the component is not aligned properly with the grid. If a precise measurement of an axis inaccuracy is desired, a measuring microscope may be utilized.

  15. Placement accuracy gauge for electrical components and method of using same

    DOEpatents

    Biggs, P.M.; Dancer, L.K.; Yerganian, S.S.

    1987-11-12

    Surface mounted electrical components are typically assembled on printed wiring boards by automatic machines. It is important that the machines accurately move with respect to both X and Y rotational axes in order to insure that components are positioned precisely on connector pads of the printed wiring board being assembled. In accordance with the instant invention, a gauge is used to facilitate convenient accuracy checks. The gauge is a glass substrate on which grids of 0.005 inch lines are scribed to form location and orientation fields where components are to be placed. The grids are referenced from either fiducial marks or the edge of the substrate to establish known positions within the grids. The equipment to be evaluated is programmed to place components in known positions and the components are held in place by tacky adhesive that is sprayed on the substrate prior to placing the components. The accuracy of the component position is then compared to the programmed position by placing the substrate on a light table and observing the component location. If a significant inaccuracy with respect to any of the axes exists, the inaccuracy is apparent because the component is not aligned properly with the grid. If a precise measurement of an axis inaccuracy is desired, a measuring microscope may be utilized. 6 figs.

  16. Alchemical and structural distribution based representation for universal quantum machine learning

    NASA Astrophysics Data System (ADS)

    Faber, Felix A.; Christensen, Anders S.; Huang, Bing; von Lilienfeld, O. Anatole

    2018-06-01

    We introduce a representation of any atom in any chemical environment for the automatized generation of universal kernel ridge regression-based quantum machine learning (QML) models of electronic properties, trained throughout chemical compound space. The representation is based on Gaussian distribution functions, scaled by power laws and explicitly accounting for structural as well as elemental degrees of freedom. The elemental components help us to lower the QML model's learning curve, and, through interpolation across the periodic table, even enable "alchemical extrapolation" to covalent bonding between elements not part of training. This point is demonstrated for the prediction of covalent binding in single, double, and triple bonds among main-group elements as well as for atomization energies in organic molecules. We present numerical evidence that resulting QML energy models, after training on a few thousand random training instances, reach chemical accuracy for out-of-sample compounds. Compound datasets studied include thousands of structurally and compositionally diverse organic molecules, non-covalently bonded protein side-chains, (H2O)40-clusters, and crystalline solids. Learning curves for QML models also indicate competitive predictive power for various other electronic ground state properties of organic molecules, calculated with hybrid density functional theory, including polarizability, heat-capacity, HOMO-LUMO eigenvalues and gap, zero point vibrational energy, dipole moment, and highest vibrational fundamental frequency.
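
    The kernel ridge regression machinery behind such QML models can be sketched in a toy 1-D setting (a minimal sketch with a Gaussian kernel; the scalar inputs below stand in for the much richer atomic representation described above):

```python
import math

# Kernel ridge regression: alpha = (K + lam*I)^-1 y, prediction is a
# kernel-weighted sum over training points.

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-8, sigma=1.0):
    """Regression coefficients alpha = (K + lam*I)^-1 y."""
    K = [[gaussian_kernel(a, b, sigma) + (lam if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    return solve(K, ys)

def krr_predict(x, xs, alpha, sigma=1.0):
    return sum(a * gaussian_kernel(x, t, sigma) for a, t in zip(alpha, xs))

# Toy training set: "inputs" xs with "property" values ys:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]
alpha = krr_fit(xs, ys)
```

With a small regularizer the model nearly interpolates the training data; in the QML setting the kernel is instead evaluated between the distribution-based atomic representations.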

  17. GREAT: a web portal for Genome Regulatory Architecture Tools

    PubMed Central

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-01-01

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analyses of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine-readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. PMID:27151196

  18. Visualization of oil behavior in a small 4-cycle engine with electrical motoring by neutron radiography

    NASA Astrophysics Data System (ADS)

    Nakamura, M.; Sugimoto, K.; Asano, H.; Murakawa, H.; Takenaka, N.; Mochiki, K.

    2009-06-01

    Neutron radiography is suitable for the visualization of liquid behavior inside a metallic machine. Observation of oil behavior in a small 4-cycle engine during operation was carried out using the neutron radiography facility at JRR-3 in JAEA. The engine was not fired but was driven by an electric motor. Movies were taken by a neutron image intensifier with a color CCD camera (8-bit resolution, 30 frames/s, 640×480 pixels) developed by Toshiba Corp. The engine was placed on a turntable and could be rotated, so the movie could be taken from any angle. The engine speed was varied from 260 to 1200 rpm. Visualized images of the mechanism and the oil behavior in the engine were obtained.

  19. Study of the NASTRAN input/output systems

    NASA Technical Reports Server (NTRS)

    Brown, W. K.; Schoellmann, W. F.

    1977-01-01

    The basic characteristics of the NASTRAN Level 16 I/O subsystem are presented, with particular reference to blocking/deblocking aspects, the I/O methods used on the IBM, CDC, and UNIVAC machines, the definition of the basic NASTRAN I/O control tables, and the portability of parts of the I/O subsystem to other programs outside the NASTRAN environment. An explanation is given of the IBM primary, secondary, and tertiary files defined by the data definition (DD) cards in the NASTRAN JCL procedure; it is intended to enlighten users as to the purpose of these DD cards, how they relate to one another, and why no similar definition cards are required in the CDC and UNIVAC versions. Enhancements designed to increase overall efficiency and decrease core requirements are also recommended.

  20. Performance prediction of optical image stabilizer using SVM for shaker-free production line

    NASA Astrophysics Data System (ADS)

    Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo

    2016-04-01

    Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to the non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
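
    For clarity on the reported figure: recall is the fraction of truly defective modules that the trained classifier actually flags. A minimal sketch with hypothetical labels (not production data):

```python
# Recall = TP / (TP + FN): of all modules that truly fail, how many did
# the classifier catch?

def recall(actual, predicted, positive="fail"):
    tp = sum(1 for a, p in zip(actual, predicted)
             if a == positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted)
             if a == positive and p != positive)
    return tp / (tp + fn)

# Hypothetical shaker-table ground truth vs. classifier prediction:
actual    = ["fail", "fail", "fail", "fail", "pass", "pass", "pass", "fail"]
predicted = ["fail", "fail", "pass", "fail", "pass", "fail", "pass", "fail"]
r = recall(actual, predicted)   # 4 of the 5 true failures are caught
```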

  1. SQLGEN: a framework for rapid client-server database application development.

    PubMed

    Nadkarni, P M; Cheung, K H

    1995-12-01

    SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
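
    The dictionary-driven SQL generation described in the abstract can be illustrated with a minimal sketch. This is not SQLGEN's actual API: the table name, column names, and helper below are invented, and an in-memory SQLite database stands in for the client-server RDBMS so the example is self-contained.

```python
import sqlite3

# Hypothetical data dictionary: client-side metadata describing server
# tables, in the spirit of SQLGEN (names invented for this sketch).
data_dictionary = {
    "patient": ["patient_id", "last_name", "first_name"],
}

def build_insert(table, values):
    """Build a parameterized INSERT statement from dictionary metadata."""
    columns = data_dictionary[table]
    placeholders = ", ".join("?" for _ in columns)
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    return sql, [values[c] for c in columns]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (patient_id, last_name, first_name)")
sql, params = build_insert("patient", {"patient_id": 1,
                                       "last_name": "Doe",
                                       "first_name": "Jane"})
conn.execute(sql, params)
row = conn.execute("SELECT last_name FROM patient").fetchone()
```

    Because the statement text is derived from metadata at run time, adding a column to the dictionary changes every generated statement without touching application code, which is the essence of the "dynamic SQL" approach.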

  2. Enhanced Numerical Tools for Computer Simulation of Coupled Physical Phenomena and Design of Components Made of Innovative Materials

    NASA Astrophysics Data System (ADS)

    Cegielski, M.; Hernik, S.; Kula, M.; Oleksy, M.

    This section is based on paper [96], the objective of which is modeling of the unilateral damage effect in the aluminum alloy Al-2024, based on the nonlinear Armstrong-Frederick model Eq. 6.60 enriched by damage [170] with a continuous damage deactivation concept. The simulation is proposed in order to model the phenomenon of nonsymmetric hysteresis loop evolution due to different damage growth under tension and compression observed in the experiment [1]. The specimens used in the experiment were made of aluminum alloy Al-2024 (Table 7.1). The tests were carried out at room temperature on a servo-hydraulic INSTRON machine type 1340, using thin-walled tubes of the dimensions: internal diameter 15 mm and external diameter 18 mm.

  3. On-demand acoustic droplet splitting and steering in a disposable microfluidic chip.

    PubMed

    Park, Jinsoo; Jung, Jin Ho; Park, Kwangseok; Destgeer, Ghulam; Ahmed, Husnain; Ahmad, Raheel; Sung, Hyung Jin

    2018-01-30

    On-chip droplet splitting is one of the fundamental droplet-based microfluidic unit operations to control droplet volume after production and increase operational capability, flexibility, and throughput. Various droplet splitting methods have been proposed, and among them the acoustic droplet splitting method is promising because of its label-free operation without any physical or thermal damage to droplets. Previous acoustic droplet splitting methods faced several limitations: first, they employed a cross-type acoustofluidic device that precluded multichannel droplet splitting; second, they required irreversible bonding between a piezoelectric substrate and a microfluidic chip, such that the fluidic chip was not replaceable. Here, we present a parallel-type acoustofluidic device with a disposable microfluidic chip to address the limitations of previous acoustic droplet splitting devices. In the proposed device, an acoustic field is applied in the direction opposite to the flow direction to achieve multichannel droplet splitting and steering. A disposable polydimethylsiloxane microfluidic chip is employed in the developed device, thereby removing the need for permanent bonding and improving the flexibility of the droplet microfluidic device. We experimentally demonstrated on-demand acoustic droplet bi-splitting and steering with precise control over the droplet splitting ratio, and we investigated the underlying physical mechanisms of droplet splitting and steering based on Laplace pressure and ray acoustics analyses, respectively. We also demonstrated droplet tri-splitting to prove the feasibility of multichannel droplet splitting. The proposed on-demand acoustic droplet splitting device enables on-chip droplet volume control in various droplet-based microfluidic applications.

  4. Classification of malignant and benign liver tumors using a radiomics approach

    NASA Astrophysics Data System (ADS)

    Starmans, Martijn P. A.; Miclea, Razvan L.; van der Voort, Sebastian R.; Niessen, Wiro J.; Thomeer, Maarten G.; Klein, Stefan

    2018-03-01

    Correct diagnosis of the liver tumor phenotype is crucial for treatment planning, especially the distinction between malignant and benign lesions. Clinical practice includes manual scoring of the tumors on Magnetic Resonance (MR) images by a radiologist. As this is challenging and subjective, it is often followed by a biopsy. In this study, we propose a radiomics approach as an objective and non-invasive alternative for distinguishing between malignant and benign phenotypes. T2-weighted (T2w) MR sequences of 119 patients from multiple centers were collected. We developed an efficient semi-automatic segmentation method, which was used by a radiologist to delineate the tumors. Within these regions, features quantifying tumor shape, intensity, texture, heterogeneity and orientation were extracted. Patient characteristics and semantic features were added, for a total of 424 features. Classification was performed using Support Vector Machines (SVMs). The performance was evaluated using internal random-split cross-validation. Within each iteration, feature selection and hyperparameter optimization were performed on the training set; to this end, a nested cross-validation was performed by splitting the training sets into training and validation parts. The optimal settings were evaluated on the independent test sets. Manual scoring by a radiologist was also performed. The radiomics approach resulted in 95% confidence intervals of [0.75, 0.92] for the AUC, [0.76, 0.96] for the specificity and [0.52, 0.82] for the sensitivity. These approach the performance of the radiologist, who achieved an AUC of 0.93, a specificity of 0.70 and a sensitivity of 0.93. Hence, radiomics has the potential to predict liver tumor benignity in an objective and non-invasive manner.

  5. Volumetric wireless coil based on periodically coupled split-loop resonators for clinical wrist imaging.

    PubMed

    Shchelokova, Alena V; van den Berg, Cornelis A T; Dobrykh, Dmitry A; Glybovski, Stanislav B; Zubkov, Mikhail A; Brui, Ekaterina A; Dmitriev, Dmitry S; Kozachenko, Alexander V; Efimtcev, Alexander Y; Sokolov, Andrey V; Fokin, Vladimir A; Melchakova, Irina V; Belov, Pavel A

    2018-02-09

    Design and characterization of a new inductively driven wireless coil (WLC) for wrist imaging at 1.5 T with high homogeneity, operating by focusing the B1 field of a birdcage body coil. The WLC design is based on a volumetric self-resonant periodic structure of inductively coupled split-loop resonators with structural capacitance. The WLC was optimized and studied with regard to radiofrequency fields and its interaction with the birdcage coil (BC) by electromagnetic simulations. The manufactured WLC was characterized by on-bench measurements and by phantom and in vivo studies in comparison with a standard cable-connected receive-only coil. Placed inside the BC, the WLC increased the measured B1+ of the latter by a factor of 8.6 for the same accepted power. Phantom and in vivo wrist imaging showed that the BC receiving with the WLC inside reached an equal or higher signal-to-noise ratio than the conventional clinical setup comprising the transmit-only BC and a commercial receive-only flex coil, and created no artifacts. Simulations and on-bench measurements proved safety in terms of specific absorption rate and reflected transmit power. The results showed that the proposed WLC could be an alternative to standard cable-connected receive coils in clinical magnetic resonance imaging. As an example, with no cable connection, the WLC allowed wrist imaging on a 1.5 T clinical machine using a full-body BC for transmission and reception, with the desired signal-to-noise ratio, image quality, and safety. © 2018 International Society for Magnetic Resonance in Medicine.

  6. Granular support vector machines with association rules mining for protein homology prediction.

    PubMed

    Tang, Yuchun; Jin, Bo; Zhang, Yan-Qing

    2005-01-01

    Protein homology prediction between protein sequences is one of the critical problems in computational biology. Such a complex classification problem is common in medical or biological information processing applications. How to build a model with superior generalization capability from training samples is an essential issue for mining knowledge to accurately predict/classify unseen new samples and to effectively support human experts in making correct decisions. A new learning model called granular support vector machines (GSVM) is proposed based on our previous work. GSVM systematically and formally combines the principles of statistical learning theory and granular computing theory, and thus provides an interesting new mechanism to address complex classification problems. It works by building a sequence of information granules and then building support vector machines (SVMs) in some of these information granules on demand. A good granulation method for finding suitable granules is crucial for modeling a GSVM with good performance. In this paper, we also propose an association rules-based granulation method. The granules induced by association rules with high enough confidence and significant support are left as they are, because of their high "purity" and significant effect on simplifying the classification task. For every other granule, an SVM is modeled to discriminate the corresponding data. In this way, a complex classification problem is divided into multiple smaller problems so that the learning task is simplified. The proposed algorithm, here named GSVM-AR, is compared with SVM on the KDDCUP04 protein homology prediction data. The experimental results show that finding the splitting hyperplane is not a trivial task (the association rules should be selected carefully to avoid overfitting) and that GSVM-AR shows significant improvement compared to building one single SVM in the whole feature space. Another advantage is the high utility of GSVM-AR, since it is easy to implement. More importantly and more interestingly, GSVM provides a new mechanism to address complex classification problems.
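
    The rule-then-classifier structure of GSVM-AR can be sketched as follows. Everything here is an invented illustration: the data, the single "mined" rule and its threshold are made up, and a simple perceptron stands in for the SVM so the sketch needs only NumPy. The real algorithm mines association rules from training data and fits an actual SVM on the remaining granule.

```python
import numpy as np

# Invented toy data: two features, label determined by their sum.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hypothetical high-confidence association rule: feature 0 > 1.5 => class 1.
# Points it covers form a "pure" granule that is labeled directly.
in_granule = X[:, 0] > 1.5

# Train the stand-in classifier (perceptron) only on the remaining data.
Xr, yr = X[~in_granule], y[~in_granule]
w, b = np.zeros(2), 0.0
for _ in range(50):                      # perceptron epochs
    for xi, yi in zip(Xr, 2 * yr - 1):   # labels mapped to {-1, +1}
        if yi * (xi @ w + b) <= 0:       # misclassified -> update
            w += yi * xi
            b += yi

def predict(x):
    """Rule granule first; learned classifier for everything else."""
    if x[0] > 1.5:
        return 1
    return int((np.asarray(x) @ w + b) > 0)
```

    The point of the split is that the rule handles its granule with no model at all, so the learned classifier only has to separate the harder remainder of the feature space.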

  7. Adapting machine learning techniques to censored time-to-event health record data: A general-purpose approach using inverse probability of censoring weighting.

    PubMed

    Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J

    2016-06-01

    Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, or (3) splitting each of those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to accounting for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data, including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better-calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
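
    A minimal sketch of the IPCW weights themselves: each observed event is up-weighted by the inverse of the Kaplan-Meier estimate of the probability of remaining uncensored up to its event time, while censored subjects receive weight zero. The follow-up times and event indicators below are invented for illustration.

```python
import numpy as np

# Invented data: follow-up time and event indicator
# (event == 1: outcome observed, event == 0: right-censored).
time = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
event = np.array([1, 0, 1, 0, 1, 1])

def censoring_survival(t, time, event):
    """Kaplan-Meier estimate G(t-) of the censoring distribution:
    here the censoring events (event == 0) play the role of 'deaths'."""
    g = 1.0
    for u in np.sort(np.unique(time)):
        if u >= t:
            break
        at_risk = np.sum(time >= u)
        censored_here = np.sum((time == u) & (event == 0))
        g *= 1.0 - censored_here / at_risk
    return g

# Observed events are up-weighted by 1/G(t-); censored subjects get
# weight 0, their lost mass being redistributed to similar subjects.
weights = np.array([1.0 / censoring_survival(t, time, event) if e else 0.0
                    for t, e in zip(time, event)])
```

    These weights can then be passed to any learner that accepts per-sample weights, which is what makes the approach general-purpose.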

  8. Using histograms to introduce randomization in the generation of ensembles of decision trees

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick; Littau, David

    2005-02-22

    A system for decision tree ensembles that includes a module to read the data, a module to create a histogram, a module to evaluate a potential split according to some criterion using the histogram, a module to select a split point randomly in an interval around the best split, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method includes the steps of reading the data; creating a histogram; evaluating a potential split according to some criterion using the histogram, selecting a split point randomly in an interval around the best split, splitting the data, and combining multiple decision trees in ensembles.
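
    The split-selection step described in the patent can be sketched for a single numeric feature as follows. The impurity criterion (weighted Gini), bin count, and data are assumptions made for the sketch; the patent leaves the criterion open.

```python
import numpy as np

# Invented 1-D data: two classes drawn from overlapping Gaussians.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
y = np.array([0] * 100 + [1] * 100)

# Step 1: build the histogram; its bin edges are the candidate splits.
counts, edges = np.histogram(x, bins=16)

def gini_after_split(threshold):
    """Weighted Gini impurity of the two sides of a candidate split."""
    total = 0.0
    for mask in (x <= threshold, x > threshold):
        n = mask.sum()
        if n == 0:
            continue
        p = y[mask].mean()
        total += (n / len(x)) * 2 * p * (1 - p)
    return total

# Step 2: evaluate the criterion at the interior bin edges only.
scores = [gini_after_split(e) for e in edges[1:-1]]
best = int(np.argmin(scores))
best_edge = edges[1:-1][best]

# Step 3: randomize - draw the actual split point uniformly from the
# interval around the best edge (one bin to either side).
split_point = rng.uniform(edges[best], edges[best + 2])
```

    Drawing the final threshold from an interval around the best edge, rather than using the edge itself, is what injects the randomization that decorrelates the trees in the ensemble.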

  9. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data-driven feature selection and classification methods for whole-brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter-based feature selection, several embedded feature selection methods, and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training-sample classification accuracy and of the set of selected features due to independent training and test sets has not previously been addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones, with the difference in test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification, which suggests the utility of embedded feature selection for this problem given its good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.

  10. A machine learning approach for classification of anatomical coverage in CT

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyong; Lo, Pechin; Ramakrishna, Bharath; Goldin, Johnathan; Brown, Matthew

    2016-03-01

    Automatic classification of the anatomical coverage of medical images is critical for big data mining and as a pre-processing step to automatically trigger specific computer-aided diagnosis systems. The traditional way of identifying scans through DICOM headers has various limitations due to the manual entry of series descriptions and non-standardized naming conventions. In this study, we present a machine learning approach in which multiple binary classifiers were used to classify different anatomical coverages of CT scans. A one-vs-rest strategy was applied. For a given training set, a template scan was selected from the positive samples and all other scans were registered to it. Each registered scan was then evenly split into k × k × k non-overlapping blocks and for each block the mean intensity was computed. This resulted in a 1 × k³ feature vector for each scan. The feature vectors were then used to train an SVM-based classifier. In this feasibility study, four classifiers were built to identify the anatomic coverages of brain, chest, abdomen-pelvis, and chest-abdomen-pelvis CT scans. Each classifier was trained and tested using a set of 300 scans from different subjects, composed of 150 positive samples and 150 negative samples. The area under the ROC curve (AUC) on the testing set was measured to evaluate performance in a two-fold cross-validation setting. Our results showed good classification performance, with an average AUC of 0.96.
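
    The block-mean feature extraction step can be sketched as below. The volume shape and k are illustrative, and the registration step from the paper is omitted; only the k × k × k block reduction is shown.

```python
import numpy as np

def block_features(volume, k):
    """Return a flat vector of k**3 block-mean intensities."""
    # Crop so each axis divides evenly into k blocks.
    shape = [(s // k) * k for s in volume.shape]
    v = volume[:shape[0], :shape[1], :shape[2]]
    bz, by, bx = (s // k for s in shape)
    # Reshape into (k, bz, k, by, k, bx) and average within each block.
    blocks = v.reshape(k, bz, k, by, k, bx)
    return blocks.mean(axis=(1, 3, 5)).ravel()

# Tiny synthetic "scan" just to exercise the function.
volume = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
features = block_features(volume, 2)  # 1 x k^3 = 8 features
```

    Averaging over whole blocks makes the descriptor cheap and coarse, which matches its purpose here: distinguishing gross anatomical coverage rather than fine structure.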

  11. SU-E-T-255: Development of a Michigan Quality Assurance (MQA) Database for Clinical Machine Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, D

    Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to the review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using the Microsoft Access database and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for the accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use Structured Query Language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically rewritten at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that specify test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.

  12. Impact of mesh tracks and low-ground-pressure vehicle use on blanket peat hydrology

    NASA Astrophysics Data System (ADS)

    McKendrick-Smith, Kathryn; Holden, Joseph; Parry, Lauren

    2016-04-01

    Peatlands are subject to multiple uses, including drainage, farming and recreation. Low-ground-pressure vehicle access is desired by landowners, and tracks facilitate such access. However, there is concern that this activity may impact peat hydrology, so granting permission for track installation has been problematic, particularly without evidence for decision-making. We present the first comprehensive study of mesh track and low-ground-pressure vehicle impacts on peatland hydrology. In the sub-arctic oceanic climate of the Moor House World Biosphere Reserve in the North Pennines, UK, a 1.5 km long experimental track was installed to investigate hydrological impacts. Surface vegetation was cut and the plastic mesh track pinned into the peat surface. The experimental track was split into 7 treatments, designed to reflect typical track usage (0-5 vehicle passes per week) and varying vehicle weight. The greatest hydrological impacts were expected for sections of track subject to more frequent vehicle use and in close proximity to the track. In total, 554 dipwells (including 15 recording automatically at 15-min intervals) were monitored for water-table depth, positioned to capture potential spatial variability in response. Before track installation, samples for vertical and lateral hydraulic conductivity (Ks) analysis (using the modified cube method) were taken at 0-10 cm depth from a frequently driven treatment (n = 15), an infrequently driven treatment (0.5 passes per week) (n = 15) and a control site with no track/driving (n = 15). The test was repeated after 16 months of track use. We present a spatially and temporally rich water-table dataset from the study site showing that the impacts of the track on the water table are spatially highly variable. Water-table depths across the site were shallow, typically within the upper 10 cm of the peat profile for > 75% of the time.
    We show that mesh track and low-ground-pressure vehicle impacts on water-table depth were small except directly under and close to the track. Where the track runs parallel to the contours, water tables were found to be deeper downslope of the track and shallower upslope. However, in the no-track/driving treatment, the water table was significantly shallower downslope than upslope. Strong anisotropy was found in both 'before-track' and 'after-track' Ks, with horizontal Ks significantly greater than vertical Ks. No significant difference was found in vertical Ks before and after driving (medians 8.6 × 10^-5 and 6.6 × 10^-5 cm s^-1, respectively). Horizontal Ks was significantly greater after driving (median 2.2 × 10^-3 cm s^-1) than before (median 3.7 × 10^-4 cm s^-1). Post-hoc testing highlighted variability in response to treatment and topographic position. We suggest that this surprising result is related to the rapid regrowth of new vegetation (particularly Sphagnum) through the mesh of the track, whose effect on horizontal Ks was more dominant than that of the compression from low-ground-pressure vehicle use. Our results indicate that mesh tracks have a significant impact on hydrology; however, the response varies depending on topographic and seasonal factors. These findings can be used to inform land-management decision-making for the use of mesh tracks in peatlands.

  13. Fracture patterns after bilateral sagittal split osteotomy of the mandibular ramus according to the Obwegeser/Dal Pont and Hunsuck/Epker modifications.

    PubMed

    Möhlhenrich, Stephan Christian; Kniha, Kristian; Peters, Florian; Ayoub, Nassim; Goloborodko, Evgeny; Hölzle, Frank; Fritz, Ulrike; Modabber, Ali

    2017-05-01

    The aim of this study was to compare the fracture patterns after sagittal split osteotomy according to Obwegeser/Dal Pont (ODP) and Hunsuck/Epker (HE), as well as to investigate the relationship between the lateral bone cut ending or angle and the incidence of unfavorable/bad splits. Postoperative cone-beam computed tomograms of 124 splits according to ODP and 60 according to HE were analyzed. ODP led to 75.8% and HE to 60% lingual fractures with mandibular foramen contact. Horizontal fractures were found in 9.7% and 6.7%, respectively, and unfavorable/bad splits in 11.3% and 10%, respectively. The lateral osteotomy angle was 106.22° (SD 12.03°) for bad splits and 106.6° (SD 13.12°) for favorable splits. Correlations were found between favorable fracture patterns and split modifications and between the buccal ending of the lateral bone cut and bad splits (p < 0.001). No relationship was observed between split modifications (p = 0.792) or the osteotomy angle (p = 0.937) and the incidence of unfavorable/bad splits. Split modifications had no influence on the incidence of unfavorable/bad splits, but the buccal ending of the lateral bone cut did. More lingual fractures with mandibular foramen contact are to be expected with the ODP modification. The osteotomy angle did not differ between favorable and bad splits. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  14. Clinical application of calculated split renal volume using computed tomography-based renal volumetry after partial nephrectomy: Correlation with technetium-99m dimercaptosuccinic acid renal scan data.

    PubMed

    Lee, Chan Ho; Park, Young Joo; Ku, Ja Yoon; Ha, Hong Koo

    2017-06-01

    To evaluate the clinical application of computed tomography-based measurement of renal cortical volume and split renal volume as a single tool to assess the anatomy and renal function in patients with renal tumors before and after partial nephrectomy, and to compare the findings with technetium-99m dimercaptosuccinic acid renal scan. The data of 51 patients with a unilateral renal tumor managed by partial nephrectomy were retrospectively analyzed. The renal cortical volume of tumor-bearing and contralateral kidneys was measured using ImageJ software. Split estimated glomerular filtration rate and split renal volume calculated using this renal cortical volume were compared with the split renal function measured with technetium-99m dimercaptosuccinic acid renal scan. A strong correlation between split renal function and split renal volume of the tumor-bearing kidney was observed before and after surgery (r = 0.89, P < 0.001 and r = 0.94, P < 0.001). The preoperative and postoperative split estimated glomerular filtration rate of the operated kidney showed a moderate correlation with split renal function (r = 0.39, P = 0.004 and r = 0.49, P < 0.001). The correlation between reductions in split renal function and split renal volume of the operated kidney (r = 0.87, P < 0.001) was stronger than that between split renal function and percent reduction in split estimated glomerular filtration rate (r = 0.64, P < 0.001). The split renal volume calculated using computed tomography-based renal volumetry had a strong correlation with the split renal function measured using technetium-99m dimercaptosuccinic acid renal scan. Computed tomography-based split renal volume measurement before and after partial nephrectomy can be used as a single modality for anatomical and functional assessment of the tumor-bearing kidney. © 2017 The Japanese Urological Association.
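
    The two quantities being correlated can be made concrete with a small sketch. All numbers below are invented for illustration: the split renal volume of the tumor-bearing kidney is computed as its cortical volume as a percentage of the total cortical volume, analogous to the split renal function percentage from a DMSA scan, and Pearson's r quantifies their agreement.

```python
import numpy as np

# Invented cortical volumes (cm^3) from CT-based volumetry, 5 patients.
cortical_volume_tumor_kidney = np.array([152.0, 140.0, 98.0, 170.0, 120.0])
cortical_volume_contralateral = np.array([148.0, 150.0, 160.0, 165.0, 155.0])

# Split renal volume: the tumor-bearing kidney's share of total volume (%).
split_renal_volume = 100.0 * cortical_volume_tumor_kidney / (
    cortical_volume_tumor_kidney + cortical_volume_contralateral)

# Hypothetical DMSA split renal function values for the same kidneys (%).
split_renal_function = np.array([50.2, 49.0, 39.5, 51.3, 44.8])

# Pearson correlation between the volumetric and scintigraphic measures.
r = np.corrcoef(split_renal_volume, split_renal_function)[0, 1]
```

    In the study itself this correlation was computed before and after partial nephrectomy; the sketch only shows the single-timepoint calculation.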

  15. Optical signal splitting and chirping device modeling

    NASA Astrophysics Data System (ADS)

    Vinogradova, Irina L.; Andrianova, Anna V.; Meshkov, Ivan K.; Sultanov, Albert Kh.; Abdrakhmanova, Guzel I.; Grakhova, Elizaveta P.; Ishmyarov, Arsen A.; Yantilina, Liliya Z.; Kutlieva, Gulnaz R.

    2017-04-01

    This article examines the modeling of devices for optical signal splitting and chirping. Models with splitting and switching functions are considered. The described optical signal splitting and chirping device is an interference-based splitter with a profiled mixer, which extracts the corresponding spectral component from an ultra-wideband frequency range and applies a signal phase shift to control the directivity pattern of an antenna array (AA). This paper proposes models for two types of optical signal splitting and chirping devices: the interference type and the long-distance type.

  16. Fee Splitting among General Practitioners: A Cross-Sectional Study in Iran.

    PubMed

    Parsa, Mojtaba; Larijani, Bagher; Aramesh, Kiarash; Nedjat, Saharnaz; Fotouhi, Akbar; Yekaninejad, Mir Saeed; Ebrahimian, Nejatollah; Kandi, Mohamad Jafar

    2016-12-01

    Fee splitting is a process whereby a physician refers a patient to another physician or a healthcare facility and receives a portion of the charge in return. This survey was conducted to study general practitioners' (GPs') attitudes toward fee splitting, as well as the prevalence, causes, and consequences of this practice. This is a cross-sectional study of 223 general practitioners in 2013. Concerning the causes and consequences of fee splitting, an unpublished qualitative study was conducted by interviewing a number of GPs and specialists, and the questionnaire options were based on the information obtained from that study. Of the total 320 GPs, 247 returned the questionnaires. The response rate was 77.18%. Of the 247 returned questionnaires, 223 fulfilled the inclusion criteria. Among the participants, 69.1% considered fee splitting completely wrong and 23.2% (frequently or rarely) practiced fee splitting. The present study showed that the prevalence of fee splitting among physicians who had positive attitudes toward it was 4.63 times higher than among those who had negative attitudes. In addition, this study showed that, compared to private hospitals, fee splitting is less often practiced in public hospitals. The major cause of fee splitting was found to be unrealistic/unfair tariffs, and its main consequence was thought to be an increase in the number of unnecessary patient referrals. Fee splitting is an unethical act that contradicts the goals of the medical profession and undermines the patient's best interests. In Iran there is no code of ethics on fee splitting, but in this study the majority of GPs considered it unethical. However, among those who had negative attitudes toward fee splitting, there were physicians who did practice it. The results of the study showed that physicians who had a positive attitude toward fee splitting practiced it more than others. Therefore, if physicians come to consider fee splitting unethical, its rate can be expected to decrease. The study suggests that, to reduce the practice, the healthcare system has to revise its tariffs.

  17. Pharmaceutical counselling about different types of tablet-splitting methods based on the results of weighing tests and mechanical development of splitting devices.

    PubMed

    Somogyi, O; Meskó, A; Csorba, L; Szabó, P; Zelkó, R

    2017-08-30

    The division of tablets and adequate methods of splitting them are a complex problem in all sectors of health care. Although tablet-splitting is often required, the procedure can be difficult for patients. Four tablets with different external features (shape, score-line, film-coat and size) were investigated. The influencing effect of these features and of the splitting methods was investigated with respect to the precision and "weight loss" of the splitting techniques. All four types of tablets were halved by four methods: by hand, with a kitchen knife, with an originally manufactured splitting device and with a modified tablet splitter based on a self-developed mechanical model. The mechanical parameters (hardness and friability) of the products were measured during the study. The "weight loss" and precision of the splitting methods were determined and compared by statistical analysis. On the basis of the results, the external features (geometry) and mechanical parameters of the tablets and the mechanical structure of the splitting devices can influence the "weight loss" and precision of tablet-splitting. Accordingly, a new decision-making scheme was developed for the selection of splitting methods. In addition, the skills of patients and the specifics of the therapy should be considered so that pharmaceutical counselling about tablet-splitting can be more effective. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Small-scale modelling of cementation by descending silica-bearing fluids: Explanation of the origin of arenitic caves in South American tepuis

    NASA Astrophysics Data System (ADS)

    Aubrecht, R.; Lánczos, T.; Schlögl, J.; Audy, M.

    2017-12-01

    Geoscientific research was performed on South American table mountains (tepuis) and in their sandstone cave systems. To explain speleogenesis in these poorly soluble rocks, two theories were introduced: a) arenization theory, implying selective weathering of quartz along grain boundaries and release of sand grains; b) selective lithification theory, implying cementation by descending silica-bearing fluid flow. The latter theory presumes that the descending fluid flow becomes unstable at the interface between two layers of different porosity and splits into separate flow channels (so-called "finger flow"). The arenites outside these channels remain uncemented. To verify the latter theory, small-scale modelling was performed using layered sands and sodium-silicate solution. Fine to medium sand was used (0.08-0.5 mm), along with a coarse sand fraction (0.5-1.5 mm). The sands were layered and compacted in transparent plastic boxes. Three litres of sodium-silicate solution (so-called water glass) were left to drip for several hours onto the top of the sediment. The fine-grained layers were perfectly laterally impregnated, whereas the descending fluid flows split into "fingers" in the coarse-grained layers due to their higher hydraulic conductivity. This small-scale laboratory simulation mimics real diagenesis by descending silica-bearing fluids and matches the phenomena observed on the tepuis. The resulting cemented constructions closely mimic many geomorphological features observed on tepuis and inside their caves, e.g. "finger-flow" pillars, overhangs, imperfectly formed (aborted) pillars in the form of hummocks hanging from ceilings, and locally also thicker central pillars that originated by merging of smaller fluid-flow channels. The modelling showed that selective lithification theory can explain most of the geomorphological aspects of speleogenesis in tepuis.

  19. Time series evaluation of an intervention to increase statin tablet splitting by general practitioners.

    PubMed

    Polinski, Jennifer M; Schneeweiss, Sebastian; Maclure, Malcolm; Marshall, Blair; Ramsden, Samuel; Dormuth, Colin

    2011-02-01

    Tablet splitting, in which a higher-dose tablet is split to yield 2 doses, reduces patients' drug costs. Statins can be split safely. General practitioners (GPs) may not direct their patients to split statins because of safety concerns or unawareness of costs. Medical chart inserts provide cost-effective education to physicians. The aim of this study was to assess whether providing GPs with statin-splitting chart inserts would increase splitting rates, and to identify predictors of splitting. In 2005 and 2006, we faxed a statin chart insert to British Columbia GPs with a request for a telephone interview. Consenting GPs were mailed 3 statin chart inserts and interviewed by phone (the intervention). In an interrupted time series, we compared monthly rates of statin-splitting prescriptions among intervention and nonintervention GPs before, during, and after the intervention. In multivariate logistic regressions accounting for patient clustering, predictors of splitting included physician and patient demographics and the specific statin prescribed. Of 5051 GPs reached, 282 (6%) agreed to the intervention. Before the intervention, intervention GPs' splitting rate was 2.6%; after the intervention it was 7.5%, compared with 4.4% for nonintervention GPs. Intervention GPs were 1.68 (95% CI, 1.12-2.53) times more likely to prescribe splitting after the intervention than were nonintervention GPs. Other predictors were a patient's female sex (odds ratio [OR] = 1.26; 95% CI, 1.18-1.34), lower patient income (OR = 1.33; 95% CI, 1.18-1.34), and a lack of drug insurance (OR = 1.89; 95% CI, 1.69-2.04). An inexpensive intervention was effective in producing a sustained increase in GPs' splitting rate during 22 months of observed follow-up. Expanding statin-splitting education to all GPs might reduce prescription costs for many patients and payors. Copyright © 2011 Elsevier HS Journals, Inc. All rights reserved.
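
    Odds ratios with 95% confidence intervals, like those reported above, are conventionally computed from a 2x2 table using a Wald interval on the log odds ratio. A minimal sketch, with hypothetical counts (not the study's data, which used multivariate regression with clustering rather than a raw 2x2 table):

    ```python
    # Odds ratio and Wald 95% CI from a 2x2 table; counts are illustrative.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
        lower = math.exp(math.log(or_) - z * se)
        upper = math.exp(math.log(or_) + z * se)
        return or_, lower, upper

    # Hypothetical counts: split vs. unsplit prescriptions, by GP group.
    or_, lower, upper = odds_ratio_ci(75, 925, 44, 956)
    ```

    An interval that excludes 1.0, as with the reported 1.68 (1.12-2.53), indicates a statistically significant association at the 5% level.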

  20. Time series evaluation of an intervention to increase statin tablet splitting by general practitioners

    PubMed Central

    Polinski, Jennifer M.; Schneeweiss, Sebastian; Maclure, Malcolm; Marshall, Blair; Ramsden, Samuel; Dormuth, Colin

    2011-01-01

    Background Tablet splitting, in which a higher-dose tablet is split to yield two doses, reduces patients’ drug costs. Statins can be split safely. General practitioners (GPs) may not direct their patients to split statins because of safety concerns or unawareness of costs. Medical chart inserts provide cost-effective education to physicians. We evaluated whether providing GPs with statin splitting chart inserts would increase splitting rates and identified predictors of splitting. Methods In 2005–2006, we faxed a statin chart insert to British Columbia GPs with a request for a telephone interview. Consenting GPs were mailed 3 statin chart inserts and interviewed by phone (the intervention). In an interrupted time series, we compared monthly rates of statin splitting prescriptions among intervention and non-intervention GPs before, during, and after the intervention. In multivariate logistic regressions accounting for patient clustering, predictors of splitting included physician and patient demographics and the specific statin prescribed. Results Of 5,051 GPs reached, 282 (6%) agreed to the intervention. Before the intervention, GPs’ splitting rate was 2.6%; after, intervention GPs’ splitting rate was 7.5% and non-intervention GPs’ was 4.4%. Intervention GPs were 1.68 (95% CI 1.12–2.53) times more likely to prescribe splitting after the intervention than were non-intervention GPs. Other predictors were a patient’s female sex (OR=1.26, 95% CI 1.18–1.34), lower patient income (OR=1.33, 95% CI 1.18–1.34), and no drug insurance (OR=1.89, 95% CI 1.69–2.04). Interpretation An inexpensive intervention was effective in producing a sustained increase in GPs’ splitting rate during 22 months of observed follow-up. Expanding statin splitting education to all GPs could reduce prescription costs for many patients and payors. PMID:21497707
