Sample records for columbia linear machine

  1. The QCDSP project —a status report

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Chen, Ping; Christ, Norman; Edwards, Robert; Fleming, George; Gara, Alan; Hansen, Sten; Jung, Chulwoo; Kaehler, Adrian; Kasow, Steven; Kennedy, Anthony; Kilcup, Gregory; Luo, Yubin; Malureanu, Catalin; Mawhinney, Robert; Parsons, John; Sexton, James; Sui, Chengzhong; Vranas, Pavlos

    1998-01-01

    We give a brief overview of the massively parallel computer project that has been underway for nearly four years, centered at Columbia University. A 6 Gflops and a 50 Gflops machine are presently being debugged for installation at OSU and SCRI respectively, while a 0.4 Tflops machine is under construction for Columbia and a 0.6 Tflops machine is planned for the new RIKEN Brookhaven Research Center.

  2. Return to the river: strategies for salmon restoration in the Columbia River Basin.

    Treesearch

    Richard N. Williams; Jack A. Standford; James A. Lichatowich; William J. Liss; Charles C. Coutant; Willis E. McConnaha; Richard R. Whitney; Phillip R. Mundy; Peter A. Bisson; Madison S. Powell

    2006-01-01

    The Columbia River today is a great "organic machine" (White 1995) that dominates the economy of the Pacific Northwest. Even though natural attributes remain—for example, salmon production in Washington State's Hanford Reach, the only unimpounded reach of the mainstem Columbia River—the Columbia and Snake River mainstems are dominated...

  3. Building Columbia from the SysAdmin View

    NASA Technical Reports Server (NTRS)

    Chan, David

    2005-01-01

    Project Columbia was built at NASA Ames Research Center in partnership with SGI and Intel. Columbia consists of twenty 512-processor Altix machines with 440 TB of storage and achieved 51.87 teraflops, ranking second fastest on the Top500 list at SuperComputing 2004. Columbia was delivered, installed and put into production in 3 months. On average, a new Columbia node was brought into production in less than a week. Columbia's configuration, installation, and future plans will be discussed.

  4. 78 FR 73886 - Schweitzer-Mauduit International, Inc., Paper Machine #21, Ancram, New York; Schweitzer-Mauduit...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-09

    ...., Columbia Mill, Lee, Massachusetts; Amended Certification Regarding Eligibility To Apply for Worker... revealed that workers of Schweitzer-Mauduit International, Inc., Columbia Mill, Lee, Massachusetts are... working at Schweitzer-Mauduit International, Inc., Columbia Mill, Lee, Massachusetts (TA-W-82,718A). The...

  5. Forecasting daily streamflow using online sequential extreme learning machines

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.

    2016-06-01

    While nonlinear machine learning methods have been widely used in environmental forecasting, in situations where new data arrive continually the need to make frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single-hidden-layer feedforward neural networks, the online sequential extreme learning machine (OSELM), updates the model inexpensively as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. Predictors were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS) and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including errors for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (number of hidden nodes) for the model, yielding better predictions. With online sequential multiple linear regression (OSMLR) as the benchmark, we concluded that OSELM is an attractive approach, as it easily outperformed OSMLR in forecast accuracy.
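
    The OSELM update can be sketched in a few lines of NumPy. This is a minimal illustration of the standard OS-ELM recursion, assuming a single sigmoid hidden layer, illustrative dimensions and toy data; it is not the authors' code or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 5, 20            # e.g. a handful of weather/hydrometric predictors

    # Random, fixed hidden layer (the defining trait of an ELM)
    W = rng.normal(size=(n_in, n_hidden))
    b = rng.normal(size=n_hidden)
    H = lambda X: 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden activations

    # Initial batch training on the data available at the start
    X0, y0 = rng.normal(size=(200, n_in)), rng.normal(size=200)
    H0 = H(X0)
    P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))   # small ridge term for stability
    beta = P @ H0.T @ y0                                      # output-layer weights

    def oselm_update(P, beta, X_new, y_new):
        """Recursive least-squares update applied when a new chunk of data arrives."""
        Hn = H(X_new)
        K = np.linalg.inv(np.eye(len(y_new)) + Hn @ P @ Hn.T)
        P = P - P @ Hn.T @ K @ Hn @ P
        beta = beta + P @ Hn.T @ (y_new - Hn @ beta)
        return P, beta

    # Daily update loop: each new day's data refines beta and can then be discarded
    for _ in range(30):
        Xd, yd = rng.normal(size=(1, n_in)), rng.normal(size=1)
        P, beta = oselm_update(P, beta, Xd, yd)

    y_pred = H(Xd) @ beta    # a real forecast would feed 1- to 3-day-ahead predictors here
    ```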

  6. GYROKINETIC PARTICLE SIMULATION OF TURBULENT TRANSPORT IN BURNING PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Claude Wendell

    2014-06-10

    The SciDAC project at the IFS advanced the state of high performance computing for turbulent structures and turbulent transport. The team project with Prof. Zhihong Lin [PI] at the University of California, Irvine produced new understanding of turbulent electron transport. The simulations were performed at the Texas Advanced Computing Center (TACC) and the NERSC facility by Wendell Horton, Lee Leonard and the IFS graduate students working in that group. The research included a validation of the electron turbulent transport code using data from a steady-state university experiment at Columbia University, in which detailed probe measurements of the turbulence in steady state were used over a wide range of temperature gradients to compare with the simulation data. These results were published in a joint paper with Texas graduate student Dr. Xiangrong Fu based on the work in his PhD dissertation: X.R. Fu, W. Horton, Y. Xiao, Z. Lin, A.K. Sen and V. Sokolov, “Validation of electron temperature gradient turbulence in the Columbia Linear Machine,” Phys. Plasmas 19, 032303 (2012).

  7. Variable complexity online sequential extreme learning machine, with applications to streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.

    2017-12-01

    In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.

  8. Migrant Student Record Transfer System (MSRTS) [machine-readable data file].

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock. General Education Div.

    The Migrant Student Record Transfer System (MSRTS) machine-readable data file (MRDF) is a collection of education and health data on more than 750,000 migrant children in grades K-12 in the United States (except Hawaii), the District of Columbia, and the outlying territories of Puerto Rico and the Mariana and Marshall Islands. The active file…

  9. Eating Away from Home: Influences on the Dietary Quality of Adolescents with Overweight or Obesity.

    PubMed

    Watts, Allison W; Valente, Maria; Tu, Andrew; Mâsse, Louise C

    2017-12-01

    To examine the influence of peers and the source of meals and snacks on the dietary quality of adolescents seeking obesity treatment. Baseline surveys were completed by 173 adolescents with overweight or obesity (11-16 years old) enrolled in an e-health intervention in Vancouver, British Columbia. Dietary quality was assessed with three 24-h dietary recalls used to compute a Healthy Eating Index adapted to the Canadian context (HEI-C). Multiple linear regression examined associations between HEI-C scores and the frequency of: (i) meals prepared away from home, (ii) purchasing snacks from vending machines or stores, (iii) eating out with friends, and (iv) peers modeling healthy eating. Adolescents reported eating approximately three lunch or dinner meals prepared away from home per week, and half purchased snacks from vending machines or stores. After adjusting for socio-demographics, less frequent purchasing of snacks from vending machines or stores (b = -3.00, P = 0.03) was associated with higher HEI-C scores. More frequent dinner meals prepared away from home and eating out with friends were associated with lower HEI-C scores only in unadjusted models. Snack purchasing was associated with lower dietary quality among obesity treatment-seeking adolescents. Improving the healthfulness of foods obtained away from home may contribute to healthier diets among these adolescents.

  10. 1982 Maurice Ewing Medalist

    NASA Astrophysics Data System (ADS)

    Talwani, Manik

    It is a pleasure to present the citation for this year's Maurice Ewing medalist, John Ewing. John was born in Texas and completed his early education there. During or shortly after high school, he spent 1 year working on a farm. Working on a farm included tinkering with tractors, automobiles, and various other machines. John has told me that this one year of fooling around with machines gave him better training in instrumentation than he subsequently obtained at more hallowed institutions, such as Harvard, where he obtained a B.S. degree in physics. John subsequently took some graduate courses at Columbia. At Columbia a course in mathematics for physics students (which was a disguised course in applied quantum mechanics) had to compete for John's attention with adventures at sea with Maurice Ewing. Adventures at sea won out. Joining John in his march out of graduate school were a number of others who went on to become illustrious scientists.

  11. Investigation of a tubular dual-stator flux-switching permanent-magnet linear generator for free-piston energy converter

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Tong, Chengde; Yu, Bin; Zhu, Shaohong; Zhu, Jianguo

    2015-05-01

    This paper describes a tubular dual-stator flux-switching permanent-magnet (PM) linear generator for a free-piston energy converter. The operating principle, topology, and design considerations of the machine are investigated. Taking the motion characteristic of the free-piston Stirling engine into account, a tubular dual-stator PM linear generator is designed by the finite element method. Some major structural parameters, such as the outer and inner radii of the mover, PM thickness, mover tooth width, tooth width of the outer and inner stators, etc., are optimized to improve machine performance measures such as thrust capability and power density. In comparison with conventional single-stator PM machines such as the moving-magnet linear machine and the flux-switching linear machine, the proposed dual-stator flux-switching PM machine shows advantages in higher mass power density, higher volume power density, and a lighter mover.

  12. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073, Large-scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation, 2016. ...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been

  13. Modeling of Geometric Error in Linear Guide Way to Improve the vertical three-axis CNC Milling machine’s accuracy

    NASA Astrophysics Data System (ADS)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. Geometric errors are an important source of CNC machine inaccuracy, arising both during manufacturing and during assembly, and they are a key factor in building high-accuracy machines. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters of the machine tool and capturing them in a mathematical model. The geometric error of the machine tool comprises twenty-one error parameters: nine linear error parameters, nine angular error parameters and three squareness (perpendicularity) error parameters. The mathematical model expresses the alignment and angular errors of the components supporting the machine motion, namely the linear guide way and the linear motion elements. The purpose of this modeling approach is to identify geometric errors that can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of CNC machine tools illustrates the relationship between alignment error, position and angle on the linear guide way of three-axis vertical milling machines.
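
    The twenty-one-parameter description above is conventionally expressed with small-angle homogeneous transformation matrices, one per axis. The sketch below is not taken from the paper and all numerical values are illustrative; it shows how the six error parameters of a single linear axis can be composed into an error transform and propagated to the tool tip.

    ```python
    import numpy as np

    def axis_error_transform(dx, dy, dz, ea, eb, ec):
        """Homogeneous 4x4 error matrix for one linear axis under the small-angle
        assumption: three translational errors (positioning plus two straightness
        terms) and three angular errors (roll, pitch, yaw)."""
        return np.array([
            [1.0, -ec,  eb, dx],
            [ ec, 1.0, -ea, dy],
            [-eb,  ea, 1.0, dz],
            [0.0, 0.0, 0.0, 1.0],
        ])

    # Illustrative error values (metres and radians) for the X, Y and Z axes
    Ex = axis_error_transform(2e-6, 1e-6, 0.5e-6, 1e-5, 2e-5, 1.5e-5)
    Ey = axis_error_transform(1e-6, 3e-6, 1.0e-6, 2e-5, 1e-5, 0.5e-5)
    Ez = axis_error_transform(0.5e-6, 1e-6, 2e-6, 1e-5, 1e-5, 1e-5)

    # Volumetric error of the tool tip at one nominal position, ignoring the three
    # squareness errors for brevity (they would add three more small rotations)
    p_nominal = np.array([0.1, 0.2, 0.05, 1.0])      # metres, homogeneous coordinates
    p_actual = Ex @ Ey @ Ez @ p_nominal
    print("tool-tip error vector (m):", (p_actual - p_nominal)[:3])
    ```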

  14. Diffuse Interface Methods for Multiclass Segmentation of High-Dimensional Data

    DTIC Science & Technology

    2014-03-04

    handwritten digits, 1998. http://yann.lecun.com/exdb/mnist/. [19] S. Nene, S. Nayar, H. Murase, Columbia Object Image Library (COIL-100), Technical Report... recognition on smartphones using a multiclass hardware-friendly support vector machine, in: Ambient Assisted Living and Home Care, Springer, 2012, pp. 216–223.

  15. SPARCHS: Symbiotic, Polymorphic, Automatic, Resilient, Clean-Slate, Host Security

    DTIC Science & Technology

    2016-03-01

    SPARCHS: Symbiotic, Polymorphic, Automatic, Resilient, Clean-Slate, Host Security. Columbia University, March 2016 final report; grant number FA8750-10-2-0253. ... 4.2.3 Symbiotic Embedded Machines

  16. Linear positioning laser calibration setup of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Sui, Xiulin; Yang, Congjing

    2002-10-01

    The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct, and machine tool geometry can be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the stroke limits of the axis are found and the laser head is brought into correct alignment. Second, the machine axis is moved to the other extreme and the laser head is aligned using rotation and elevation adjustments. Finally, the machine is moved back to the start position and the final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal centers and vertical machining centers.
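
    In practice the laser data are reduced to a compensation table: at each commanded position the difference between the laser-measured and commanded positions is averaged over runs and stored at the machine's compensation interval. The sketch below shows that reduction on synthetic data; the interval, the error model and the array names are illustrative assumptions, not part of the described setup.

    ```python
    import numpy as np

    stroke = 1000.0          # mm, axis stroke found from the limit-to-limit move
    interval = 25.0          # mm, assumed machine compensation interval
    commanded = np.arange(0.0, stroke + interval, interval)

    rng = np.random.default_rng(0)
    runs = []
    for _ in range(3):       # bidirectional runs would normally be kept separate
        scale_error = 18e-6                      # 18 um/m linear (scale) error
        offset = 0.004                           # mm, constant offset standing in for backlash
        measured = commanded * (1 + scale_error) + offset \
                   + 0.001 * rng.normal(size=commanded.size)
        runs.append(measured - commanded)        # positioning error at each target

    error = np.mean(runs, axis=0)                             # mm, averaged positioning error
    compensation_table = np.column_stack([commanded, -error]) # correction to apply per position
    print(compensation_table[:5])
    ```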

  17. Comparisons between designs for single-sided linear electric motors: Homopolar synchronous and induction

    NASA Astrophysics Data System (ADS)

    Nondahl, T. A.; Richter, E.

    1980-09-01

    A design study of two types of single-sided (passive-rail) linear electric machines, namely homopolar linear synchronous machines (LSMs) and linear induction machines (LIMs), is described. It is assumed the machines provide tractive effort for several types of light rail vehicles and locomotives. These vehicles are wheel supported and require tractive powers ranging from 200 kW to 3735 kW and top speeds ranging from 112 km/hr to 400 km/hr. All designs are made according to specified magnetic and thermal criteria. The LSM advantages are a higher power factor, much greater restoring forces for track misalignments, and less track heating. The LIM advantages are no need to synchronize the excitation frequency precisely to vehicle speed, simpler machine construction, and a more easily anchored track structure. The relative weights of the two machine types vary with excitation frequency and speed; low frequencies and low speeds favor the LSM.

  18. A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes.

    PubMed

    Vogl, Gregory W; Weiss, Brian A; Donmez, M Alkan

    2015-01-01

    A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a 'sensor box' to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality.

  19. A Sensor-Based Method for Diagnostics of Machine Tool Linear Axes

    PubMed Central

    Vogl, Gregory W.; Weiss, Brian A.; Donmez, M. Alkan

    2017-01-01

    A linear axis is a vital subsystem of machine tools, which are vital systems within many manufacturing operations. When installed and operating within a manufacturing facility, a machine tool needs to stay in good condition for parts production. All machine tools degrade during operations, yet knowledge of that degradation is elusive; specifically, accurately detecting degradation of linear axes is a manual and time-consuming process. Thus, manufacturers need automated and efficient methods to diagnose the condition of their machine tool linear axes without disruptions to production. The Prognostics and Health Management for Smart Manufacturing Systems (PHM4SMS) project at the National Institute of Standards and Technology (NIST) developed a sensor-based method to quickly estimate the performance degradation of linear axes. The multi-sensor-based method uses data collected from a ‘sensor box’ to identify changes in linear and angular errors due to axis degradation; the sensor box contains inclinometers, accelerometers, and rate gyroscopes to capture this data. The sensors are expected to be cost effective with respect to savings in production losses and scrapped parts for a machine tool. Numerical simulations, based on sensor bandwidth and noise specifications, show that changes in straightness and angular errors could be known with acceptable test uncertainty ratios. If a sensor box resides on a machine tool and data is collected periodically, then the degradation of the linear axes can be determined and used for diagnostics and prognostics to help optimize maintenance, production schedules, and ultimately part quality. PMID:28691039

  20. The Top Six Compatibles: A Closer Look at the Machines That Are Most Compatible with the IBM PC.

    ERIC Educational Resources Information Center

    McMullen, Barbara E.; And Others

    1984-01-01

    Reviews six operationally compatible microcomputers that are most able to run IBM software without modifications--Compaq, Columbia, Corona, Hyperion, Eagle PC, and Chameleon. Information given for each includes manufacturer, uses, standard features, base list price, typical system price, and options and accessories. (MBR)

  1. A single-phase axially-magnetized permanent-magnet oscillating machine for miniature aerospace power sources

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Cheng, Luming; Wang, Weinan; Liu, Jiaqi

    2017-05-01

    A single-phase axially-magnetized permanent-magnet (PM) oscillating machine which can be integrated with a free-piston Stirling engine to generate electric power, is investigated for miniature aerospace power sources. Machine structure, operating principle and detent force characteristic are elaborately studied. With the sinusoidal speed characteristic of the mover considered, the proposed machine is designed by 2D finite-element analysis (FEA), and some main structural parameters such as air gap diameter, dimensions of PMs, pole pitches of both stator and mover, and the pole-pitch combinations, etc., are optimized to improve both the power density and force capability. Compared with the three-phase PM linear machines, the proposed single-phase machine features less PM use, simple control and low controller cost. The power density of the proposed machine is higher than that of the three-phase radially-magnetized PM linear machine, but lower than the three-phase axially-magnetized PM linear machine.

  2. Specification for Teaching Machines and Programmes (Interchangeability of Programmes). Part 1, Linear Machines and Programmes.

    ERIC Educational Resources Information Center

    British Standards Institution, London (England).

    To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institute has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…

  3. Study of Environmental Data Complexity using Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2017-04-01

    The main goals of environmental data science using machine learning algorithms revolve, in a broad sense, around the calibration, prediction and visualization of hidden relationships between input and output variables. In order to optimize the models and to understand the phenomenon under study, the characterization of the complexity (at different levels) should be taken into account. Therefore, identifying linear or non-linear behavior between input and output variables adds valuable information about the complexity of the phenomenon. The present research highlights and investigates the different issues that can occur when identifying the complexity (linear/non-linear) of environmental data using machine learning algorithms. In particular, the main attention is paid to the description of a self-consistent methodology for the use of Extreme Learning Machines (ELM, Huang et al., 2006), which have recently gained great popularity. By applying two ELM models (with linear and non-linear activation functions) and comparing their efficiency, the degree of linearity can be quantified. The considered approach is accompanied by simulated and real high-dimensional, multivariate data case studies. In conclusion, the current challenges and future developments in complexity quantification using environmental data mining are discussed. References - Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006. Extreme learning machine: theory and applications. Neurocomputing 70 (1-3), 489-501. - Kanevski, M., Pozdnoukhov, A., Timonin, V., 2009. Machine Learning for Spatial Environmental Data. EPFL Press, Lausanne, Switzerland, p. 392. - Leuenberger, M., Kanevski, M., 2015. Extreme Learning Machines for spatial environmental data. Computers and Geosciences 85, 64-73.
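
    A minimal way to carry out the linear-versus-non-linear comparison described above is to fit two ELMs that differ only in their activation function and compare their errors. The sketch below uses synthetic data and a plain least-squares output layer; it illustrates the general ELM recipe, not the authors' methodology.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def elm_fit_predict(X_tr, y_tr, X_te, n_hidden=100, activation=np.tanh):
        """Fit an ELM: random fixed hidden layer, least-squares output layer."""
        W = rng.normal(size=(X_tr.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        act = lambda X: activation(X @ W + b)
        beta, *_ = np.linalg.lstsq(act(X_tr), y_tr, rcond=None)
        return act(X_te) @ beta

    # Synthetic "environmental" data with a non-linear input-output relation
    X = rng.uniform(-2, 2, size=(500, 4))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
    X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

    pred_lin = elm_fit_predict(X_tr, y_tr, X_te, activation=lambda z: z)   # linear ELM
    pred_nl = elm_fit_predict(X_tr, y_tr, X_te, activation=np.tanh)        # non-linear ELM

    rmse = lambda p: np.sqrt(np.mean((p - y_te) ** 2))
    # A clearly lower non-linear RMSE points to non-linear structure in the data
    print(f"linear ELM RMSE: {rmse(pred_lin):.3f}, non-linear ELM RMSE: {rmse(pred_nl):.3f}")
    ```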

  4. A novel single-phase flux-switching permanent magnet linear generator used for free-piston Stirling engine

    NASA Astrophysics Data System (ADS)

    Zheng, Ping; Sui, Yi; Tong, Chengde; Bai, Jingang; Yu, Bin; Lin, Fei

    2014-05-01

    This paper investigates a novel single-phase flux-switching permanent-magnet (PM) linear machine used for free-piston Stirling engines. The machine topology and operating principle are studied. A flux-switching PM linear machine is designed based on the quasi-sinusoidal speed characteristic of the resonant piston. Considering the performance of back electromotive force and thrust capability, some leading structural parameters, including the air gap length, the PM thickness, the ratio of the outer radius of mover to that of stator, the mover tooth width, the stator tooth width, etc., are optimized by finite element analysis. Compared with conventional three-phase moving-magnet linear machine, the proposed single-phase flux-switching topology shows advantages in less PM use, lighter mover, and higher volume power density.

  5. On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs

    NASA Technical Reports Server (NTRS)

    Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
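
    The first (conventional Markov jump-linear) test has a compact numerical form: the system is mean-square stable exactly when the spectral radius of an augmented second-moment matrix is below one. The sketch below applies that standard criterion to a two-mode example; the matrices and transition probabilities are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Jump-linear system x[k+1] = A[theta_k] x[k], with theta_k a Markov chain
    A = [np.array([[0.9, 0.2], [0.0, 0.7]]),    # dynamics in mode 0
         np.array([[0.5, -0.4], [0.3, 0.8]])]   # dynamics in mode 1
    P = np.array([[0.95, 0.05],                 # P[i, j] = Pr(theta_{k+1}=j | theta_k=i)
                  [0.10, 0.90]])

    n, m = A[0].shape[0], len(A)

    # Second-moment (Kronecker) lift: block (j, i) equals P[i, j] * kron(A[i], A[i])
    Lam = np.zeros((m * n * n, m * n * n))
    for i in range(m):
        for j in range(m):
            Lam[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = P[i, j] * np.kron(A[i], A[i])

    rho = np.max(np.abs(np.linalg.eigvals(Lam)))
    print(f"spectral radius = {rho:.4f} ->",
          "mean-square stable" if rho < 1 else "not mean-square stable")
    ```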

  6. High efficiency machining technology and equipment for edge chamfer of KDP crystals

    NASA Astrophysics Data System (ADS)

    Chen, Dongsheng; Wang, Baorui; Chen, Jihong

    2016-10-01

    Potassium dihydrogen phosphate (KDP) is a type of nonlinear optical crystal material. To inhibit transverse stimulated Raman scattering of the laser beam and thus enhance the optical performance of the optics, the edges of the large-sized KDP crystal need to be removed to form chamfered faces with high surface quality (RMS < 5 nm). However, as the depth of cut (DOC) of fly cutting is usually very small, its machining efficiency is too low to be acceptable for chamfering of the KDP crystal, since the amount of material to be removed is on the order of millimeters. This paper proposes a novel hybrid machining method, which combines precision grinding with fly cutting, for crack-free and high-efficiency chamfering of KDP crystal. A specialized machine tool, which adopts an aerostatic bearing linear slide and an aerostatic bearing spindle, was developed for chamfering of the KDP crystal. The aerostatic bearing linear slide consists of an aerostatic bearing guide with a linearity of 0.1 μm/100 mm and a linear motor to achieve linear feeding with high precision and high dynamic performance. The vertical spindle consists of an aerostatic bearing spindle with a rotation accuracy (axial) of 0.05 μm and a fork-type flexible-connection precision driving mechanism. Machining experiments on fly cutting and grinding were carried out, and optimized machining parameters were obtained from a series of experiments. A surface roughness of 2.4 nm has been obtained. The machining efficiency can be improved sixfold using the combined method while producing the same machined surface quality.

  7. Optical Constituents at the Mouth of the Columbia River: Variability and Signature in Remotely Sensed Reflectance

    DTIC Science & Technology

    2013-09-30

    constructed at BIO, carried the new Machine Vision Floc Camera (MVFC), a Sequoia Scientific LISST 100x Type B, an RBR CTD, and two pressure-actuated...WetStar CDOM fluorometer, a Sequoia Scientific flow control switch, and a SeaBird 37 CTD. The flow-control switch allows the ac-9 to collect 0.2-um

  8. Effects of pole flux distribution in a homopolar linear synchronous machine

    NASA Astrophysics Data System (ADS)

    Balchin, M. J.; Eastham, J. F.; Coles, P. C.

    1994-05-01

    Linear forms of synchronous electrical machine are at present being considered as the propulsion means in high-speed, magnetically levitated (Maglev) ground transportation systems. A homopolar form of machine is considered in which the primary member, which carries both ac and dc windings, is supported on the vehicle. Test results and theoretical predictions are presented for a design of machine intended for driving a 100-passenger vehicle at a top speed of 400 km/h. The layout of the dc magnetic circuit is examined to locate the best position for the dc winding from the point of view of minimum core weight. Measurements of flux build-up under the machine at different operating speeds are given for two types of secondary pole: solid and laminated. The solid pole results, which are confirmed theoretically, show that this form of construction is impractical for high-speed drives. Measured motoring characteristics are presented for a short length of machine which simulates conditions at the leading and trailing ends of the full-sized machine. Combination of the results with those from a cylindrical version of the machine makes it possible to infer the performance of the full-sized traction machine. This gives 0.8 pf and 0.9 efficiency at 300 km/h, which is much better than the reported performance of a comparable linear induction motor (0.52 pf and 0.82 efficiency). It is therefore concluded that in any projected high-speed Maglev system, a linear synchronous machine should be the first choice as the propulsion means.

  9. Evaluation of Columbia, USMARC-Composite, Suffolk, and Texel rams as terminal sires in an extensive rangeland production system: VI.Measurements of live-lamb and carcass shape and their relationship to carcass yield and value

    USDA-ARS?s Scientific Manuscript database

    Linear measurements on live lambs and carcasses can be used to characterize sheep breeds and may have value for prediction of carcass yield and value. This study used 512 crossbred lambs produced over 3 yr by mating Columbia, USMARC Composite, Suffolk, and Texel rams to adult Rambouillet ewes to ass...

  10. Volumetric Verification of Multiaxis Machine Tool Using Laser Tracker

    PubMed Central

    Aguilar, Juan José

    2014-01-01

    This paper aims to present a method of volumetric verification for machine tools with linear and rotary axes using a laser tracker. Beyond a method for a particular machine, it presents a methodology that can be used for any machine type. Throughout this paper, the schema and kinematic model of a machine with three axes of movement, two linear and one rotational, including the measurement system and the nominal rotation matrix of the rotational axis, are presented. Using this, the machine tool volumetric error is obtained and nonlinear optimization techniques are employed to improve the accuracy of the machine tool. The verification provides a mathematical, rather than physical, compensation in less time than other verification methods, by means of the indirect measurement of the geometric errors of the machine from the linear and rotary axes. This paper presents an extensive study of the appropriateness and drawbacks of the regression function employed depending on the types of movement of the axes of any machine. In the same way, strengths and weaknesses of measurement methods and optimization techniques depending on the space available to place the measurement system are presented. These studies provide the most appropriate strategies to verify each machine tool, taking into consideration its configuration and its available work space. PMID:25202744
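
    The "mathematical, rather than physical, compensation" amounts to fitting the parameters of a kinematic error model so that modelled positions match the laser-tracker measurements. The sketch below illustrates that idea with a deliberately simplified one-axis model and synthetic data; the model form, parameter names and values are assumptions, not the paper's formulation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Simplified error model for one linear axis: the measured 2-D position depends
    # on a scale error, a straightness error, and a yaw error acting over an offset arm.
    def model(params, x_cmd, arm=0.2):
        scale, straightness, yaw = params
        ex = scale * x_cmd - yaw * arm            # error along the axis
        ey = straightness * x_cmd + yaw * x_cmd   # error across the axis
        return np.column_stack([x_cmd + ex, ey])

    # Synthetic laser-tracker measurements generated from "true" parameters plus noise
    rng = np.random.default_rng(0)
    x_cmd = np.linspace(0.0, 1.0, 25)                 # commanded positions (m)
    true_params = np.array([5e-5, -2e-5, 3e-5])
    measured = model(true_params, x_cmd) + 1e-6 * rng.normal(size=(25, 2))

    # Non-linear least squares recovers the error parameters from the measurements
    residuals = lambda p: (model(p, x_cmd) - measured).ravel()
    fit = least_squares(residuals, x0=np.zeros(3))
    print("identified parameters:", fit.x)

    # Mathematical compensation: command the axis so the modelled error is cancelled
    nominal = np.column_stack([x_cmd, np.zeros_like(x_cmd)])
    error = model(fit.x, x_cmd) - nominal
    compensated_cmd = nominal - error
    ```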

  11. Interpreting linear support vector machine models with heat map molecule coloring

    PubMed Central

    2011-01-01

    Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have shown convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists in determining the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
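
    Because the model is linear, each compound's predicted activity decomposes into per-feature terms w_j·x_j, and an atom can be coloured by the summed weight of the features it participates in. The sketch below shows that decomposition with scikit-learn on a toy binary feature matrix; the stand-in fingerprint and the feature-to-atom mapping are assumptions, not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Toy data: 200 compounds x 64 binary substructure features (stand-in fingerprint)
    X = rng.integers(0, 2, size=(200, 64))
    y = (X[:, 3] + X[:, 17] - X[:, 40] + 0.3 * rng.normal(size=200) > 0.5).astype(int)

    svm = LinearSVC(C=1.0, dual=False).fit(X, y)
    w = svm.coef_.ravel()                        # one weight per substructure feature

    # Per-feature contribution for one compound: w_j * x_j
    compound = X[0]
    contrib = w * compound

    # Stand-in mapping: which atoms of the compound each feature touches
    atoms_of_feature = {j: rng.choice(20, size=3, replace=False) for j in range(64)}
    atom_color = np.zeros(20)
    for j, c in enumerate(contrib):
        if c != 0.0:
            atom_color[atoms_of_feature[j]] += c    # accumulate importance per atom

    # Positive values: atoms pushing the prediction towards "active"; negative: "inactive"
    print(np.round(atom_color, 3))
    ```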

  12. The study of the transition regime between slab and mixed slab-toroidal electron temperature gradient modes in a basic experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balbaky, Abed; Sokolov, Vladimir; Sen, Amiya K.

    2015-05-15

    Electron temperature gradient (ETG) modes are suspected sources of anomalous electron thermal transport in magnetically confined plasmas such as tokamaks. Prior work in the Columbia Linear Machine (CLM) has been able to produce and identify slab ETG modes in a slab geometry [Wei et al., Phys. Plasmas 17, 042108 (2010)]. Now, by modifying the CLM to introduce curvature to the confining axial magnetic field, we have excited mixed slab-toroidal modes. Linear theory predicts a transition between slab and toroidal ETG modes when k_∥R_c/(k_yρ) ∼ 1 [J. Kim and W. Horton, Phys. Fluids B 3, 1167 (1991)]. We observe changes in the mode amplitude for levels of curvature R_c^(-1) ≪ k_∥,slab/(k_⊥ρ), which may be explained by reductions in k_∥ in the transition from slab to mixed slab-toroidal modes, as also predicted by theory. We present mode amplitude scaling as a function of magnetic field curvature. Over the range of curvature available experimentally in the CLM we find a modest increase in saturated ETG potential fluctuations (∼1.5×) and a substantial increase in the power density of individual mode peaks (∼4-5×).

  13. Evaluation of Dual Frequency Identification Sonar (DIDSON) for Monitoring Pacific Lamprey Passage Behavior at Fishways of Bonneville Dam, 2011

    DTIC Science & Technology

    2012-01-01

    Mundy’s Welding and the University of Idaho machine shop who went out of their way to manufacture and modify our sampling gear. We also thank R. Poulin, C...Columbia River: 2008 radiotelemetry and half-duplex PIT tag studies. Technical Report 2009-8 of Idaho Cooperative Fish and Wildlife Research Unit to U.S

  14. Optimizing nursing human resource planning in British Columbia.

    PubMed

    Lavieri, Mariel S; Puterman, Martin L

    2009-06-01

    This paper describes a linear programming hierarchical planning model that determines the optimal number of nurses to train, promote to management and recruit over a 20 year planning horizon to achieve specified workforce levels. Age dynamics and attrition rates of the nursing workforce are key model components. The model was developed to help policy makers plan a sustainable nursing workforce for British Columbia, Canada. An easy to use interface and considerable flexibility makes it ideal for scenario and "What-If?" analyses.
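
    A linear programming workforce-planning model of this general shape can be sketched with scipy.optimize.linprog. The horizon length, attrition rate, targets and cost below are illustrative assumptions, not the figures or full structure (training, promotion and recruitment decisions) of the British Columbia model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    T = 20                       # planning horizon in years
    attrition = 0.06             # assumed annual attrition rate
    initial_workforce = 30000.0
    target = np.linspace(31000, 36000, T)    # required nurses in each year (illustrative)

    # Decision variables: x[t] = nurses newly trained and entering the workforce in year t.
    # Workforce in year t: W[t] = initial*(1-a)^(t+1) + sum_{s<=t} x[s]*(1-a)^(t-s)
    # Constraint W[t] >= target[t]  ->  -sum_s (1-a)^(t-s) x[s] <= initial*(1-a)^(t+1) - target[t]
    A_ub = np.zeros((T, T))
    b_ub = np.zeros(T)
    for t in range(T):
        for s in range(t + 1):
            A_ub[t, s] = -((1 - attrition) ** (t - s))
        b_ub[t] = initial_workforce * (1 - attrition) ** (t + 1) - target[t]

    c = np.ones(T)               # minimise total number trained (proxy for training cost)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * T, method="highs")
    print("optimal yearly training levels:", np.round(res.x))
    ```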

  15. Machine learning-based methods for prediction of linear B-cell epitopes.

    PubMed

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction facilitates immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still not satisfactory. Fortunately, owing to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools forms a general approach for constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we have introduced the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.

  16. Predicting the dissolution kinetics of silicate glasses using machine learning

    NASA Astrophysics Data System (ADS)

    Anoop Krishnan, N. M.; Mangalathu, Sujith; Smedskjaer, Morten M.; Tandia, Adama; Burton, Henry; Bauchy, Mathieu

    2018-05-01

    Predicting the dissolution rates of silicate glasses in aqueous conditions is a complex task as the underlying mechanism(s) remain poorly understood and the dissolution kinetics can depend on a large number of intrinsic and extrinsic factors. Here, we assess the potential of data-driven models based on machine learning to predict the dissolution rates of various aluminosilicate glasses exposed to a wide range of solution pH values, from acidic to caustic conditions. Four classes of machine learning methods are investigated, namely, linear regression, support vector machine regression, random forest, and artificial neural network. We observe that, although linear methods all fail to describe the dissolution kinetics, the artificial neural network approach offers excellent predictions, thanks to its inherent ability to handle non-linear data. Overall, we suggest that a more extensive use of machine learning approaches could significantly accelerate the design of novel glasses with tailored properties.
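
    The four model classes named above can be compared in a few lines of scikit-learn. The sketch below substitutes synthetic composition/pH features for the glass dissolution database, so it reproduces only the modelling pattern, not the reported result.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for glass composition (4 oxide fractions) plus solution pH
    X = rng.uniform(0, 1, size=(400, 5))
    # Non-linear "dissolution rate" with a strong pH dependence, in log units
    y = -2.0 + 3.0 * (X[:, 4] - 0.5) ** 2 + np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=400)

    models = {
        "linear regression": LinearRegression(),
        "SVM regression": SVR(kernel="rbf", C=10.0),
        "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "neural network": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    }

    for name, model in models.items():
        score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name:>18}: mean R^2 = {score:.3f}")
    ```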

  17. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    NASA Astrophysics Data System (ADS)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers how the machining accuracy of CNC machines can be achieved by applying innovative methods to the modelling and design of machining systems, drives and machining processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1, and improve machining accuracy.

  18. Computer-aided design studies of the homopolar linear synchronous motor

    NASA Astrophysics Data System (ADS)

    Dawson, G. E.; Eastham, A. R.; Ong, R.

    1984-09-01

    The linear induction motor (LIM), as an urban transit drive, can provide good grade-climbing capabilities and propulsion/braking performance that is independent of steel wheel-rail adhesion. In view of its 10-12 mm airgap, the LIM is characterized by a low power factor-efficiency product of order 0.4. A synchronous machine offers high efficiency and controllable power factor. An assessment of the linear homopolar configuration of this machine is presented as an alternative to the LIM. Computer-aided design studies using the finite element technique have been conducted to identify a suitable machine design for urban transit propulsion.

  19. Design and Analysis of Linear Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Xu, Liang; Liu, Guohai; Du, Yi; Liu, Hu

    2014-01-01

    This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both PMs and windings of the proposed machine are on short mover, while the long stator is only manufactured from iron. Hence, the proposed machine is very suitable for long stroke system applications. The key of this machine is that the magnetizer splits the two movers with modular and complementary structures. Hence, the proposed machine offers improved symmetrical and sinusoidal back electromotive force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, differing from the existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density. This is because neither fault-tolerant teeth nor the flux-barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis. PMID:24982959

  20. Design and analysis of linear fault-tolerant permanent-magnet vernier machines.

    PubMed

    Xu, Liang; Ji, Jinghua; Liu, Guohai; Du, Yi; Liu, Hu

    2014-01-01

    This paper proposes a new linear fault-tolerant permanent-magnet (PM) vernier (LFTPMV) machine, which can offer high thrust by using the magnetic gear effect. Both PMs and windings of the proposed machine are on short mover, while the long stator is only manufactured from iron. Hence, the proposed machine is very suitable for long stroke system applications. The key of this machine is that the magnetizer splits the two movers with modular and complementary structures. Hence, the proposed machine offers improved symmetrical and sinusoidal back electromotive force waveform and reduced detent force. Furthermore, owing to the complementary structure, the proposed machine possesses favorable fault-tolerant capability, namely, independent phases. In particular, differing from the existing fault-tolerant machines, the proposed machine offers fault tolerance without sacrificing thrust density. This is because neither fault-tolerant teeth nor the flux-barriers are adopted. The electromagnetic characteristics of the proposed machine are analyzed using the time-stepping finite-element method, which verifies the effectiveness of the theoretical analysis.

  1. A Double-Sided Linear Primary Permanent Magnet Vernier Machine

    PubMed Central

    2015-01-01

    The purpose of this paper is to present a new double-sided linear primary permanent magnet (PM) vernier (DSLPPMV) machine, which can offer high thrust force, low detent force, and improved power factor. Both PMs and windings of the proposed machine are on the short translator, while the long stator is designed as a double-sided simple iron core with salient teeth so that it is very robust to transmit high thrust force. The key of this new machine is the introduction of double stator and the elimination of translator yoke, so that the inductance and the volume of the machine can be reduced. Hence, the proposed machine offers improved power factor and thrust force density. The electromagnetic performances of the proposed machine are analyzed including flux, no-load EMF, thrust force density, and inductance. Based on using the finite element analysis, the characteristics and performances of the proposed machine are assessed. PMID:25874250

  2. A double-sided linear primary permanent magnet vernier machine.

    PubMed

    Du, Yi; Zou, Chunhua; Liu, Xianxing

    2015-01-01

    The purpose of this paper is to present a new double-sided linear primary permanent magnet (PM) vernier (DSLPPMV) machine, which can offer high thrust force, low detent force, and improved power factor. Both PMs and windings of the proposed machine are on the short translator, while the long stator is designed as a double-sided simple iron core with salient teeth so that it is very robust to transmit high thrust force. The key of this new machine is the introduction of double stator and the elimination of translator yoke, so that the inductance and the volume of the machine can be reduced. Hence, the proposed machine offers improved power factor and thrust force density. The electromagnetic performances of the proposed machine are analyzed including flux, no-load EMF, thrust force density, and inductance. Based on using the finite element analysis, the characteristics and performances of the proposed machine are assessed.

  3. Comparing machine learning and logistic regression methods for predicting hypertension using a combination of gene expression and next-generation sequencing data.

    PubMed

    Held, Elizabeth; Cape, Joshua; Tintle, Nathan

    2016-01-01

    Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.

  4. Efficiency of autonomous soft nanomachines at maximum power.

    PubMed

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  5. Method for measuring the contour of a machined part

    DOEpatents

    Bieg, L.F.

    1995-05-30

    A method is disclosed for measuring the contour of a machined part with a contour gage apparatus, having a probe assembly including a probe tip for providing a measure of linear displacement of the tip on the surface of the part. The contour gage apparatus may be moved into and out of position for measuring the part while the part is still carried on the machining apparatus. Relative positions between the part and the probe tip may be changed, and a scanning operation is performed on the machined part by sweeping the part with the probe tip, whereby data points representing linear positions of the probe tip at prescribed rotation intervals in the position changes between the part and the probe tip are recorded. The method further allows real-time adjustment of the apparatus machining the part, including real-time adjustment of the machining apparatus in response to wear of the tool that occurs during machining. 5 figs.

  6. Method for measuring the contour of a machined part

    DOEpatents

    Bieg, Lothar F.

    1995-05-30

    A method for measuring the contour of a machined part with a contour gage apparatus, having a probe assembly including a probe tip for providing a measure of linear displacement of the tip on the surface of the part. The contour gage apparatus may be moved into and out of position for measuring the part while the part is still carried on the machining apparatus. Relative positions between the part and the probe tip may be changed, and a scanning operation is performed on the machined part by sweeping the part with the probe tip, whereby data points representing linear positions of the probe tip at prescribed rotation intervals in the position changes between the part and the probe tip are recorded. The method further allows real-time adjustment of the apparatus machining the part, including real-time adjustment of the machining apparatus in response to wear of the tool that occurs during machining.

  7. Lessons learned from the development and manufacture of ceramic reusable surface insulation materials for the space shuttle orbiters

    NASA Technical Reports Server (NTRS)

    Banas, R. P.; Elgin, D. R.; Cordia, E. R.; Nickel, K. N.; Gzowski, E. R.; Aguiler, L.

    1983-01-01

    Three ceramic, reusable surface insulation materials and two borosilicate glass coatings were used in the fabrication of tiles for the Space Shuttle orbiters. Approximately 77,000 tiles were made from these materials for the first three orbiters, Columbia, Challenger, and Discovery. Lessons learned in the development, scale up to production and manufacturing phases of these materials will benefit future production of ceramic reusable surface insulation materials. Processing of raw materials into tile blanks and coating slurries; programming and machining of tiles using numerical controlled milling machines; preparing and spraying tiles with the two coatings; and controlling material shrinkage during the high temperature (2100-2275 F) coating glazing cycles are among the topics discussed.

  8. Life-Cycle Cost Database. Volume II. Appendices E, F, and G. Sample Data Development.

    DTIC Science & Technology

    1983-01-01

    Bendix Field Engineering Corporation, Columbia, Maryland 21045. Contents: General (Introduction, Objective, Engineering Survey); System Description ... in a typical administrative type building over a 25-year period. 1.3 Engineering Survey: an on-site survey was conducted by Bendix Field Engineering ... (table fragment listing floor-care tasks: damp mop and buff, routine vacuum, strip and refinish, heavy-duty vacuum, machine scrub and surface shampoo, pick-up, extraction clean, repair, location)

  9. A tubular hybrid Halbach/axially-magnetized permanent-magnet linear machine

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Liu, Yong; Cheng, Luming; Liu, Jiaqi; Zheng, Ping

    2017-05-01

    A single-phase tubular permanent-magnet linear machine (PMLM) with hybrid Halbach/axially-magnetized PM arrays is proposed for free-piston Stirling power generation system. Machine topology and operating principle are elaborately illustrated. With the sinusoidal speed characteristic of the free-piston Stirling engine considered, the proposed machine is designed and calculated by finite-element analysis (FEA). The main structural parameters, such as outer radius of the mover, radial length of both the axially-magnetized PMs and ferromagnetic poles, axial length of both the middle and end radially-magnetized PMs, etc., are optimized to improve both the force capability and power density. Compared with the conventional PMLMs, the proposed machine features high mass and volume power density, and has the advantages of simple control and low converter cost. The proposed machine topology is applicable to tubular PMLMs with any phases.

  10. Prediction and early detection of delirium in the intensive care unit by using heart rate variability and machine learning.

    PubMed

    Oh, Jooyoung; Cho, Dongrae; Park, Jaesub; Na, Se Hee; Kim, Jongin; Heo, Jaeseok; Shin, Cheung Soo; Kim, Jae-Jin; Park, Jin Young; Lee, Boreom

    2018-03-27

    Delirium is an important syndrome found in patients in the intensive care unit (ICU), however, it is usually under-recognized during treatment. This study was performed to investigate whether delirious patients can be successfully distinguished from non-delirious patients by using heart rate variability (HRV) and machine learning. Electrocardiography data of 140 patients was acquired during daily ICU care, and HRV data were analyzed. Delirium, including its type, severity, and etiologies, was evaluated daily by trained psychiatrists. HRV data and various machine learning algorithms including linear support vector machine (SVM), SVM with radial basis function (RBF) kernels, linear extreme learning machine (ELM), ELM with RBF kernels, linear discriminant analysis, and quadratic discriminant analysis were utilized to distinguish delirium patients from non-delirium patients. HRV data of 4797 ECGs were included, and 39 patients had delirium at least once during their ICU stay. The maximum classification accuracy was acquired using SVM with RBF kernels. Our prediction method based on HRV with machine learning was comparable to previous delirium prediction models using massive amounts of clinical information. Our results show that autonomic alterations could be a significant feature of patients with delirium in the ICU, suggesting the potential for the automatic prediction and early detection of delirium based on HRV with machine learning.
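
    A minimal version of the classification step can be written with scikit-learn, assuming HRV features have already been extracted from each ECG segment. The feature count, the synthetic data and the single SVM-with-RBF-kernel model below are illustrative stand-ins for the study's full pipeline, not its actual data or settings.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 4797 ECG segments x 8 HRV features (e.g. time- and frequency-domain measures)
    X = rng.normal(size=(4797, 8))
    y = rng.integers(0, 2, size=4797)        # 1 = delirium present, 0 = absent (toy labels)

    # SVM with an RBF kernel, the best-performing classifier reported in the study
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"cross-validated accuracy: {acc:.3f}")   # ~0.5 here because the toy labels are random
    ```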

  11. A Flash X-Ray Facility for the Naval Postgraduate School

    DTIC Science & Technology

    1985-06-01

    In ionizing radiation, NPS has had active programs with a Van de Graaff generator, a reactor, radioactive sources, X-ray machines, and a linear electron accelerator, studying the interaction of radiation with matter and with coherent radiation. Currently the most active program is at the linear electron accelerator, which over twenty years has produced some 75 theses. The flash X-ray machine was obtained to expand and complement the capabilities of the linear electron accelerator.

  12. Fundamental Scalings of Zonal Flows in a Basic Plasma Physics Experiment

    NASA Astrophysics Data System (ADS)

    Sokolov, Vladimir; Wei, Xiao; Sen, Amiya K.

    2007-11-01

    A basic physics experimental study of zonal flows (ZF) associated with ITG (ion temperature gradient) drift modes has been performed in the Columbia Linear Machine (CLM), and ZF has been definitively identified [1]. However, in contrast to most tokamak experiments, the stabilizing effect of ZF shear on ITG appears to be small in CLM. We now report on the study of important scaling behavior of ZF. First and most importantly, we report on the collisional damping scaling of ZF, which is considered to be its saturation mechanism [2]. By varying the sum of the ion-ion and ion-neutral collision frequencies over nearly half an order of magnitude, we find no change in the amplitude of ZF. Secondly, we study the scaling of ZF amplitude with ITG amplitude via increasing ITG drive through ηi, as well as feedback (stabilizing / destabilizing). We have observed markedly different scaling near and far above marginal stability. [1] V. Sokolov, X. Wei, A.K. Sen and K. Avinash, Plasma Phys. Controlled Fusion 48, S111 (2006). [2] P.H. Diamond, S.-I. Itoh, K. Itoh and T.S. Hahm, Plasma Phys. Controlled Fusion 47, R35 (2005).

  13. Classification of sodium MRI data of cartilage using machine learning.

    PubMed

    Madelin, Guillaume; Poidevin, Frederick; Makrymallis, Antonios; Regatte, Ravinder R

    2015-11-01

    To assess the possible utility of machine learning for classifying subjects with and without osteoarthritis using sodium magnetic resonance imaging data. Theory: support vector machine, k-nearest neighbors, naïve Bayes, discriminant analysis, linear regression, logistic regression, neural networks, decision tree, and tree bagging were tested. Sodium magnetic resonance imaging with and without fluid suppression by inversion recovery was acquired on the knee cartilage of 19 controls and 28 osteoarthritis patients. Sodium concentrations were measured in regions of interest in the knee for both acquisitions. The mean (MEAN) and standard deviation (STD) of these concentrations were measured in each region of interest, and the minimum, maximum, and mean of these two measurements were calculated over all regions of interest for each subject. The resulting 12 variables per subject were used as predictors for classification. Either Min [STD] alone, or in combination with Mean [MEAN] or Min [MEAN], all from fluid-suppressed data, were the best predictors with an accuracy >74%, mainly with linear logistic regression and linear support vector machine. Other good classifiers include discriminant analysis, linear regression, and naïve Bayes. Machine learning is a promising technique for classifying osteoarthritis patients and controls from sodium magnetic resonance imaging data. © 2014 Wiley Periodicals, Inc.

  14. High Velocity Linear Induction Launcher with Exit-Edge Compensation for Testing of Aerospace Components

    NASA Technical Reports Server (NTRS)

    Kuznetsov, Stephen; Marriott, Darin

    2008-01-01

    Advances in ultra high speed linear induction electromagnetic launchers over the past decade have focused on magnetic compensation of the exit and entry-edge transient flux wave to produce efficient and compact linear electric machinery. The paper discusses two approaches to edge compensation in long-stator induction catapults with typical end speeds of 150 to 1,500 m/s. In classical linear induction machines, the exit-edge effect is manifest as two auxiliary traveling waves that produce a magnetic drag on the projectile and a loss of magnetic flux over the main surface of the machine. In the new design for the Stator Compensated Induction Machine (SCIM) high velocity launcher, the exit-edge effect is nulled by a dual-wavelength machine or, alternately, the airgap flux is peaked at a location prior to the exit edge. A four (4) stage LIM catapult is presently being constructed for 180 m/s end speed operation using double-sided longitudinal flux machines. Advanced exit and entry edge compensation is being used to maximize system efficiency and minimize stray heating of the reaction armature. Each stage will output approximately 60 kN of force and produce over 500 g of acceleration on the armature. The advantage of this design is that there is no ablation of the projectile and no sliding contacts, allowing repeated firing of the launcher without maintenance of any sort. The paper shows results of a parametric study for 500 m/s and 1,500 m/s linear induction launchers incorporating two of the latest compensation techniques for an air-core stator primary and an iron-core primary winding. Typical thrust densities for these machines are in the range of 150 kN/sq.m. to 225 kN/sq.m., and these compete favorably with permanent magnet linear synchronous machines. The operational advantages of the high speed SCIM launcher are shown by eliminating the need for pole-angle position sensors as would be required by synchronous systems. The stator power factor is also improved.

  15. Nanoscale swimmers: hydrodynamic interactions and propulsion of molecular machines

    NASA Astrophysics Data System (ADS)

    Sakaue, T.; Kapral, R.; Mikhailov, A. S.

    2010-06-01

    Molecular machines execute nearly regular cyclic conformational changes as a result of ligand binding and product release. This cyclic conformational dynamics is generally non-reciprocal, so that under time reversal a different sequence of machine conformations is visited. Since such changes occur in a solvent, coupling to solvent hydrodynamic modes will generally result in self-propulsion of the molecular machine. These effects are investigated for a class of coarse-grained models of protein machines consisting of a set of beads interacting through pair-wise additive potentials. Hydrodynamic effects are incorporated through a configuration-dependent mobility tensor, and expressions for the linear and angular propulsion velocities, as well as the stall force, are obtained. In the limit where conformational changes are small, so that linear response theory is applicable, it is shown that propulsion is exponentially small; thus, propulsion is a nonlinear phenomenon. The results are illustrated by computations on a simple model molecular machine.

  16. Start-up and control method and apparatus for resonant free piston Stirling engine

    DOEpatents

    Walsh, Michael M.

    1984-01-01

    A resonant free-piston Stirling engine having a new and improved start-up and control method and system. A displacer linear electrodynamic machine is provided, having an armature secured to and movable with the displacer and a stator supported by the Stirling engine housing in juxtaposition to the armature. A control excitation circuit is provided for electrically exciting the displacer linear electrodynamic machine with electrical excitation signals having substantially the same frequency as the desired operating frequency of the Stirling engine. The excitation control circuit selectively and controllably causes the displacer electrodynamic machine to function either as a generator load that extracts power from the displacer, or as an electric drive motor that applies additional input power to the displacer beyond the thermodynamic power fed back to the displacer; in the drive-motor mode, the displacer linear electrodynamic machine also serves as the means for initially starting the resonant free-piston Stirling engine.

  17. Electric converters of electromagnetic strike machine with battery power

    NASA Astrophysics Data System (ADS)

    Usanov, K. M.; Volgin, A. V.; Kargin, V. A.; Moiseev, A. P.; Chetverikov, E. A.

    2018-03-01

    At present, the application of pulse linear electromagnetic engines to drive strike machines for the immersion of rod elements into the soil, strike drilling of shallow wells, and dynamic probing of soils is recognized as quite effective. The pulse linear electromagnetic engine performs discrete consumption and conversion of electrical energy into mechanical work. Pulsed dosing of the energy transmitted from the battery source to the pulse linear electromagnetic engine is provided by the electric converter. Electric converters that control an electromagnetic strike machine as functions of time and armature movement, and that form the unipolar voltage and current supply pulses necessary for normal operation of a pulse linear electromagnetic engine, are proposed. The electric converters are stable in operation, implement the necessary range of output parameter control determined by the conditions of the technological process, and provide noise immunity and automatic disconnection of the power supply in emergency modes.

  18. INTEGRATED PLANNING MODEL - EPA APPLICATIONS

    EPA Science Inventory

    The Integrated Planning Model (IPM) is a multi-regional, dynamic, deterministic linear programming (LP) model of the electric power sector in the continental lower 48 states and the District of Columbia. It provides forecasts up to year 2050 of least-cost capacity expansion, elec...
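
    As a toy illustration of the least-cost linear-programming formulation underlying a model of this kind (capacity variables, a demand constraint, minimized cost), the sketch below uses scipy; the technologies, costs, and limits are invented and bear no relation to IPM's actual data or structure.

        # Toy least-cost capacity expansion LP (illustrative only, not IPM itself):
        # choose capacities x0 and x1 for two hypothetical technologies to meet demand.
        from scipy.optimize import linprog

        costs = [60.0, 45.0]                     # hypothetical annualized cost per unit capacity
        # Meet at least 100 units of demand: x0 + x1 >= 100, written as -x0 - x1 <= -100.
        A_ub = [[-1.0, -1.0]]
        b_ub = [-100.0]
        bounds = [(0.0, 80.0), (0.0, 80.0)]      # hypothetical per-technology build limits

        res = linprog(c=costs, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        print(res.x, res.fun)                    # optimal capacity mix and minimized total cost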

  19. On PMWs and two-stroke engines.

    PubMed Central

    Bell, W.; Yassi, A.; Cole, D. C.

    1998-01-01

    On Saturday, August 24, 1996, a 40-year-old man from Edmonton was riding a personal motorized watercraft (PMW, a Seadoo or Jet Ski type of machine) on Shuswap Lake, in south-central British Columbia. He was approximately 200 m offshore. The man motioned to his sister, who was riding another PMW, to follow him across the lake. She did so, but as she turned her head to check for other boat traffic, her brother suddenly slowed down and her machine rode right up on his back, crushing him against his handlebars. His sister, a nurse, held her brother's head above water until help arrived but, 48 minutes after the moment of impact, he was pronounced dead at the Shuswap Lake General Hospital. He had suffered a ruptured aorta. PMID:9789655

  20. Power electromagnetic strike machine for engineering-geological surveys

    NASA Astrophysics Data System (ADS)

    Usanov, K. M.; Volgin, A. V.; Chetverikov, E. A.; Kargin, V. A.; Moiseev, A. P.; Ivanova, Z. I.

    2017-10-01

    When implementing dynamic sensing of soils and pulsed nonexplosive seismic exploration, the most common and effective method is the strike method, which is provided by pneumatic, hydraulic, and electric strike machines of varied structure and parameters. The creation of compact portable strike machines that do not require transportation by, or the use of, mechanized means is important. A promising direction in the development of strike machines is the use of a pulsed electromagnetic actuator characterized by relatively low energy consumption, relatively high specific performance and efficiency, and direct conversion of electrical energy into the mechanical work of a strike mass moving along a linear trajectory. The results of these studies made it possible to develop, on the basis of linear electromagnetic motors, portable electromagnetic pulse machines for dynamic sensing of soils and shallow-depth land pulse seismic exploration.

  1. A review on prognostic techniques for non-stationary and non-linear rotating systems

    NASA Astrophysics Data System (ADS)

    Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph

    2015-10-01

    The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.

  2. [Effect of compaction pressure on the properties of dental machinable zirconia ceramic].

    PubMed

    Huang, Hui; Wei, Bin; Zhang, Fu-qiang; Sun, Jing; Gao, Lian

    2010-10-01

    To investigate the effect of compaction pressure on the linear shrinkage, sintering property and machinability of a dental zirconia ceramic. The nano-size zirconia powder was compacted at different isostatic pressures and sintered at different temperatures. The linear shrinkage of the sintered body was measured and the relative density was tested using the Archimedes method. The cylindrical surface of pre-sintering blanks was traversed using a hard metal tool. Surface and edge quality were checked visually using light stereo microscopy. The sintering behaviour depended on the compaction pressure. Increasing compaction pressure led to a higher sintering rate and a lower sintering temperature. Increasing compaction pressure also led to decreasing linear shrinkage of the sintered bodies, from 24.54% at 50 MPa to 20.9% at 400 MPa. Compaction pressure showed only a weak influence on the machinability of the zirconia blanks, but higher compaction pressures resulted in poorer surface quality. The best sintering properties and machinability of the dental zirconia ceramic were found for compaction pressures of 200-300 MPa.

  3. Fineblanking, Diffusion Bonding, and Testing of Fluidic Laminates.

    DTIC Science & Technology

    1980-07-01

    TRITEC Inc., Columbia, MD; contract DAAK21-79-C-0074; July 1980. Fineblanking, Diffusion Bonding, and Testing of Fluidic Laminates. The effects of die roll and burrs can be minimized by secondary operations such as abrasive machining, but this adds to the expense. Experience has shown that a clad thickness of 0.038 ± 0.008 mm is required for the semi-solid diffusion bonding process.

  4. Comparing Machine Learning Classifiers and Linear/Logistic Regression to Explore the Relationship between Hand Dimensions and Demographic Characteristics

    PubMed Central

    2016-01-01

    Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications. PMID:27806075

  5. Comparing Machine Learning Classifiers and Linear/Logistic Regression to Explore the Relationship between Hand Dimensions and Demographic Characteristics.

    PubMed

    Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue

    2016-01-01

    Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications.

  6. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    PubMed

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of the AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared error of prediction (RMSEP) for the proposed models in the determination of AGM was 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of supervised learning machine methods over principal-component-based methods. Besides, the results suggested that linANN is the method of choice for determination of components present in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.

  7. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
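
    The abstract's procedure, computing coupling coefficients between mesh points, combining them with the mesh resistances, and solving the resulting simultaneous equations for the unknown currents, can be sketched generically as a dense linear solve; the coefficient values below are placeholders, not the paper's actual field expressions.

        # Generic sketch of the mesh-equation step: Z @ I = V, solved for the mesh currents.
        # The coupling coefficients here are placeholders, not the paper's field solution.
        import numpy as np

        n = 50                                          # number of mesh points in the moving conductor
        rng = np.random.default_rng(1)
        # Placeholder coefficients: voltage induced at point i by unit current at point j.
        coupling = rng.normal(scale=0.01, size=(n, n))
        resistance = np.diag(np.full(n, 1.0))           # mesh resistances on the diagonal

        Z = resistance + coupling                       # combined coefficient matrix
        V = rng.normal(size=n)                          # driving terms from the specified phase currents
        I = np.linalg.solve(Z, V)                       # unknown mesh currents
        print(I[:5])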

  8. Modeling of Pulses Having Arbitrary Amplitude and Frequency Modulation.

    DTIC Science & Technology

    1980-03-01

    The modulation function, fi(t), has been discussed in great detail in Section II, and the linearized amplitude modulation function is given by equation (IV-6) of the report. The distribution list includes LCDR Francis Martin Lunney, USN, Columbia, Maryland 21045.

  9. Prediction of B-cell linear epitopes with a combination of support vector machine classification and amino acid propensity identification.

    PubMed

    Wang, Hsin-Wei; Lin, Ya-Chi; Pai, Tun-Wen; Chang, Hao-Teng

    2011-01-01

    Epitopes are antigenic determinants that are useful because they induce B-cell antibody production and stimulate T-cell activation. Bioinformatics can enable rapid, efficient prediction of potential epitopes. Here, we designed a novel B-cell linear epitope prediction system called LEPS, Linear Epitope Prediction by Propensities and Support Vector Machine, that combined physico-chemical propensity identification and support vector machine (SVM) classification. We tested the LEPS on four datasets: AntiJen, HIV, a newly generated PC, and AHP, a combination of these three datasets. Peptides with globally or locally high physicochemical propensities were first identified as primitive linear epitope (LE) candidates. Then, candidates were classified with the SVM based on the unique features of amino acid segments. This reduced the number of predicted epitopes and enhanced the positive prediction value (PPV). Compared to four other well-known LE prediction systems, the LEPS achieved the highest accuracy (72.52%), specificity (84.22%), PPV (32.07%), and Matthews' correlation coefficient (10.36%).

  10. Design considerations for ultra-precision magnetic bearing supported slides

    NASA Technical Reports Server (NTRS)

    Slocum, Alexander H.; Eisenhaure, David B.

    1993-01-01

    Development plans for a prototype servocontrolled machine with 1 angstrom resolution of linear motion and 50 mm range of travel are described. Two such devices could then be combined to produce a two dimensional machine for probing large planar objects with atomic resolution, the Angstrom Resolution Measuring Machine (ARMM).

  11. Active balance system and vibration balanced machine

    NASA Technical Reports Server (NTRS)

    White, Maurice A. (Inventor); Qiu, Songgang (Inventor); Augenblick, John E. (Inventor); Peterson, Allen A. (Inventor)

    2005-01-01

    An active balance system is provided for counterbalancing vibrations of an axially reciprocating machine. The balance system includes a support member, a flexure assembly, a counterbalance mass, and a linear motor or an actuator. The support member is configured for attachment to the machine. The flexure assembly includes at least one flat spring having connections along a central portion and an outer peripheral portion. One of the central portion and the outer peripheral portion is fixedly mounted to the support member. The counterbalance mass is fixedly carried by the flexure assembly along another of the central portion and the outer peripheral portion. The linear motor has one of a stator and a mover fixedly mounted to the support member and another of the stator and the mover fixedly mounted to the counterbalance mass. The linear motor is operative to axially reciprocate the counterbalance mass.

  12. Effect of hole geometry and Electric-Discharge Machining (EDM) on airflow rates through small diameter holes in turbine blade material

    NASA Technical Reports Server (NTRS)

    Hippensteele, S. A.; Cochran, R. P.

    1980-01-01

    The effects of two design parameters, electrode diameter and hole angle, and two machine parameters, electrode current and current-on time, on air flow rates through small-diameter (0.257 to 0.462 mm) electric-discharge-machined holes were measured. The holes were machined individually in rows of 14 each through 1.6 mm thick IN-100 strips. The data showed a linear increase in air flow rate with increases in electrode cross-sectional area and current-on time, and little change with changes in hole angle and electrode current. The average flow-rate deviation (from the mean flow rate for a given row) decreased linearly with electrode diameter and increased with hole angle. Burn time and finished hole diameter were also measured.

  13. Lawrence Livermore National Laboratory ULTRA-350 Test Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, D J; Wulff, T A; Carlisle, K

    2001-04-10

    LLNL has many in-house designed high precision machine tools. Some of these tools include the Large Optics Diamond Turning Machine (LODTM) [1], Diamond Turning Machine No. 3 (DTM-3) and two Precision Engineering Research Lathes (PERL-I and PERL-II). These machines have accuracy in the sub-micron range and in most cases position resolution in the couple-of-nanometers range. All of these machines are built with similar underlying technologies. The machines use capstan drive technology, laser interferometer position feedback, tachometer velocity feedback, permanent magnet (PM) brush motors and analog velocity and position loop servo compensation [2]. The machine controller does not perform any servo compensation; it simply computes the difference between the commanded position and the actual position (the following error) and sends this to a D/A for the analog servo position loop. LLNL is designing a new high precision diamond turning machine, called the ULTRA 350 [3]. In contrast to many of the proven technologies discussed above, the plan for the new machine is to use brushless linear motors, high precision linear scales, machine-controller motor commutation and digital servo compensation for the velocity and position loops. Although none of these technologies is new and all have been in use in industry, their application to high precision diamond turning has been limited. To minimize the risks of these technologies in the new machine design, LLNL has established a test bed to evaluate them for application in high precision diamond turning. The test bed is primarily composed of commercially available components, including the slide with opposed hydrostatic bearings, the oil system, the brushless PM linear motor, the two-phase-input three-phase-output linear motor amplifier and the system controller. The linear scales are not yet commercially available but use a common electronic output format. As of this writing, the final verdict on the use of these technologies is still out, but the first part of the work has been completed with promising results. The goal of this part of the work was to close a servo position loop around a slide incorporating these technologies and to measure the performance. This paper discusses the tests that were set up for system evaluation and the results of the measurements made. Some very promising results include slide positioning at the nanometer level and slow-speed slide direction reversal at less than 100 nm/min with no observed discontinuities. This is very important for machine contouring in diamond turning. As a point of reference, at 100 nm/min it would take the slide almost 7 years to complete the full designed travel of 350 mm. This speed has been demonstrated without the use of a velocity sensor; the velocity is derived from the position sensor. With what has been learned on the test bed, the paper finishes with a brief comparison of the old and new technologies. The emphasis of this comparison is on the servo performance as illustrated with Bode plot diagrams.

  15. Modelling daily water temperature from air temperature for the Missouri River.

    PubMed

    Zhu, Senlin; Nyarko, Emmanuel Karlo; Hadzima-Nyarko, Marijana

    2018-01-01

    The bio-chemical and physical characteristics of a river are directly affected by water temperature, which thereby affects the overall health of aquatic ecosystems. It is a complex problem to accurately estimate water temperature. Modelling of river water temperature is usually based on a suitable mathematical model and field measurements of various atmospheric factors. In this article, the air-water temperature relationship of the Missouri River is investigated by developing three different machine learning models (Artificial Neural Network (ANN), Gaussian Process Regression (GPR), and Bootstrap Aggregated Decision Trees (BA-DT)). Standard models (linear regression, non-linear regression, and stochastic models) are also developed and compared to machine learning models. Analyzing the three standard models, the stochastic model clearly outperforms the standard linear model and nonlinear model. All the three machine learning models have comparable results and outperform the stochastic model, with GPR having slightly better results for stations No. 2 and 3, while BA-DT has slightly better results for station No. 1. The machine learning models are very effective tools which can be used for the prediction of daily river temperature.
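
    A compact sketch of the air-to-water temperature regression comparison described above, fitting a linear model and a Gaussian process regressor on synthetic data; the data, kernel choice, and settings are illustrative assumptions, not the paper's calibrated models.

        # Sketch: daily water temperature from air temperature, linear regression vs. GPR
        # (synthetic data; kernel and settings are illustrative assumptions).
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)
        air = rng.uniform(-5, 35, size=(300, 1))                  # daily air temperature, degC
        water = 4.0 + 0.7 * air[:, 0] + rng.normal(0, 1.5, 300)   # synthetic water temperature

        lin = LinearRegression().fit(air, water)
        gpr = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0)).fit(air, water)
        print(lin.score(air, water), gpr.score(air, water))       # in-sample R^2 for comparison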

  16. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
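
    The transpose strategy described above, keeping one coordinate direction's lines contiguous, solving in that direction, then transposing so the next direction is contiguous, can be sketched in serial form as below; the tridiagonal coefficients are placeholders, and the actual CM-5 code used CMSSL routines rather than this loop.

        # Serial sketch of the transpose approach: for each sweep direction, move that
        # axis to the end (the "transpose"), solve independent tridiagonal systems along
        # it, then restore the axis order. Coefficients are placeholders.
        import numpy as np
        from scipy.linalg import solve_banded

        def sweep(q, axis, lower, diag, upper):
            q = np.moveaxis(q, axis, -1)          # analogue of the data transpose
            n = q.shape[-1]
            ab = np.zeros((3, n))                 # banded storage for solve_banded
            ab[0, 1:] = upper                     # super-diagonal
            ab[1, :] = diag                       # main diagonal
            ab[2, :-1] = lower                    # sub-diagonal
            lines = q.reshape(-1, n)
            solved = solve_banded((1, 1), ab, lines.T).T
            return np.moveaxis(solved.reshape(q.shape), -1, axis)

        q = np.random.default_rng(3).normal(size=(16, 16, 16))    # a flow variable on a 3-D grid
        for ax in (2, 1, 0):                                      # z-, y-, then x-direction solves
            q = sweep(q, ax, lower=-0.25, diag=1.5, upper=-0.25)
        print(q.shape)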

  17. Applications of Support Vector Machines In Chemo And Bioinformatics

    NASA Astrophysics Data System (ADS)

    Jayaraman, V. K.; Sundararajan, V.

    2010-10-01

    Conventional linear and nonlinear tools for classification, regression and data-driven modeling are being replaced at a rapid pace by newer techniques and tools based on artificial intelligence and machine learning. While the linear techniques are not applicable to inherently nonlinear problems, the newer methods serve as attractive alternatives for solving real-life problems. Support Vector Machine (SVM) classifiers are a set of universal feed-forward network based classification algorithms that have been formulated from statistical learning theory and the structural risk minimization principle. SVM regression closely follows the classification methodology. In this work, recent applications of SVM in chemo- and bioinformatics are described with suitable illustrative examples.

  18. State policies targeting junk food in schools: racial/ethnic differences in the effect of policy change on soda consumption.

    PubMed

    Taber, Daniel R; Stevens, June; Evenson, Kelly R; Ward, Dianne S; Poole, Charles; Maciejewski, Matthew L; Murray, David M; Brownson, Ross C

    2011-09-01

    We estimated the association between state policy changes and adolescent soda consumption and body mass index (BMI) percentile, overall and by race/ethnicity. We obtained data on whether states required or recommended that schools prohibit junk food in vending machines, snack bars, concession stands, and parties from the 2000 and 2006 School Health Policies and Programs Study. We used linear mixed models to estimate the association between 2000-2006 policy changes and 2007 soda consumption and BMI percentile, as reported by 90 730 students in 33 states and the District of Columbia in the Youth Risk Behavior Survey, and to test for racial/ethnic differences in the associations. Policy changes targeting concession stands were associated with 0.09 fewer servings of soda per day among students (95% confidence interval [CI] = -0.17, -0.01); the association was more pronounced among non-Hispanic Blacks (0.19 fewer servings per day). Policy changes targeting parties were associated with 0.07 fewer servings per day (95% CI = -0.13, 0.00). Policy changes were not associated with BMI percentile in any group. State policies targeting junk food in schools may reduce racial/ethnic disparities in adolescent soda consumption, but their impact appears to be too weak to reduce adolescent BMI percentile.
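
    A hedged sketch of the linear-mixed-model step described above (policy-change exposure predicting soda servings, with a random intercept by state); the variable names and synthetic data frame are assumptions, not the SHPPS/YRBS data or the authors' full covariate set.

        # Sketch: linear mixed model of soda servings on policy change, random intercept
        # per state (synthetic data; variable names are illustrative assumptions).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 2000
        df = pd.DataFrame({
            "state": rng.integers(0, 34, size=n),            # synthetic state identifiers
            "policy_change": rng.integers(0, 2, size=n),     # 1 = state strengthened its policy
        })
        # Synthetic outcome: about 0.09 fewer daily servings where the policy changed.
        df["soda_servings"] = 1.0 - 0.09 * df["policy_change"] + rng.normal(0, 0.8, size=n)

        model = smf.mixedlm("soda_servings ~ policy_change", df, groups=df["state"])
        result = model.fit()
        print(result.params["policy_change"])                # estimated association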

  19. STS-107 Flight Day 14 Highlights

    NASA Astrophysics Data System (ADS)

    2003-01-01

    This video shows the activities of the STS-107 crew on flight day 14 of the Columbia orbiter's final mission. The crew includes Commander Rick Husband, Pilot William McCool, Mission Specialists Kalpana Chawla, David Brown, Michael Anderson, and Laurel Clark, and Payload Specialist Ilan Ramon. Most of the video shows a press conference on board Columbia featuring all seven astronauts. Reporters ask the crew members questions, who reply via a handset. Most of the questions cover life in space and the mission's spaceborne experiments. Each astronaut answers multiple questions, and in response to one of the questions, each of the seven describes an 'O Wow!' moment. The remainder of the video consists of a tour of the orbiter, including the flight deck, mid-deck, and the SpaceHab Research Double Module (RDM) in the payload bay. Mission Specialist Chawla demonstrates eating at the shuttle's galley, and Commander Husband shows his toiletries. In the RDM, Mission Specialist Clark exercises on a machine for an experiment on respiration.

  20. Linear microbunching analysis for recirculation machines

    DOE PAGES

    Tsai, C. -Y.; Douglas, D.; Li, R.; ...

    2016-11-28

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  1. Active vibration and balance system for closed cycle thermodynamic machines

    NASA Technical Reports Server (NTRS)

    Augenblick, John E. (Inventor); Peterson, Allen A. (Inventor); White, Maurice A. (Inventor); Qiu, Songgang (Inventor)

    2004-01-01

    An active balance system is provided for counterbalancing vibrations of an axially reciprocating machine. The balance system includes a support member, a flexure assembly, a counterbalance mass, and a linear motor or an actuator. The support member is configured for attachment to the machine. The flexure assembly includes at least one flat spring having connections along a central portion and an outer peripheral portion. One of the central portion and the outer peripheral portion is fixedly mounted to the support member. The counterbalance mass is fixedly carried by the flexure assembly along another of the central portion and the outer peripheral portion. The linear motor has one of a stator and a mover fixedly mounted to the support member and another of the stator and the mover fixedly mounted to the counterbalance mass. The linear motor is operative to axially reciprocate the counterbalance mass. A method is also provided.

  2. Design and analysis of an unconventional permanent magnet linear machine for energy harvesting

    NASA Astrophysics Data System (ADS)

    Zeng, Peng

    This Ph.D. dissertation proposes an unconventional high-power-density linear electromagnetic kinetic energy harvester, and a high-performance two-stage interface power electronics to maintain maximum power extraction from the energy source and charge the Li-ion battery load with constant current. The proposed machine architecture is composed of a double-sided flat-type silicon steel stator with winding slots, a permanent magnet mover, coil windings, a linear motion guide and an adjustable spring bearing. The unconventional aspect of the design is that the NdFeB magnet bars in the mover are placed with their magnetic fields in the horizontal direction instead of the vertical direction, and with like magnetic poles facing each other. The derived magnetic equivalent circuit model proves that the average air-gap flux density of the novel topology is as high as 0.73 T, a 17.7% improvement over that of the conventional topology at the given geometric dimensions of the proof-of-concept machine. Consequently, improved output voltage and power are achieved. The dynamic model of the linear generator is also developed, and the analytical equations for the maximum output power are derived for the cases in which the driving-vibration amplitude is equal to, smaller than, and larger than the relative displacement between the mover and the stator of the machine. Furthermore, a finite element analysis (FEA) model has been simulated to confirm the derived analytical results and the improved power generation capability. Also, an optimization framework is explored to extend the approach to multi-degree-of-freedom (n-DOF) vibration-based linear energy harvesting devices. Moreover, a boost-buck cascaded switch-mode converter with a current controller is designed to extract the maximum power from the harvester and charge the Li-ion battery with a trickle current. Meanwhile, a maximum power point tracking (MPPT) algorithm is proposed and optimized for low-frequency driving vibrations. Finally, a proof-of-concept unconventional permanent magnet (PM) linear generator is prototyped and tested to verify the simulation results of the FEA model. For coil windings of 33, 66 and 165 turns, the measured output power of the machine is 65.6 mW, 189.1 mW, and 497.7 mW respectively, with a maximum power density of 2.486 mW/cm3.
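
    The maximum-power-point-tracking idea mentioned above can be illustrated with a generic perturb-and-observe loop; this is a standard textbook scheme under assumed names, not the dissertation's optimized low-frequency algorithm.

        # Generic perturb-and-observe MPPT sketch: the converter duty cycle is nudged in
        # the direction that increased harvested power on the previous step.
        def perturb_and_observe(measure_power, duty=0.5, step=0.01, iters=200):
            prev_power = measure_power(duty)
            direction = 1.0
            for _ in range(iters):
                duty = min(max(duty + direction * step, 0.0), 1.0)
                power = measure_power(duty)
                if power < prev_power:          # overshot the maximum: reverse direction
                    direction = -direction
                prev_power = power
            return duty

        # Hypothetical power curve with a maximum near duty = 0.62.
        best = perturb_and_observe(lambda d: -(d - 0.62) ** 2 + 0.5)
        print(best)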

  3. An M-step preconditioned conjugate gradient method for parallel computation

    NASA Technical Reports Server (NTRS)

    Adams, L.

    1983-01-01

    This paper describes a preconditioned conjugate gradient method that can be effectively implemented on both vector machines and parallel arrays to solve sparse symmetric and positive definite systems of linear equations. The implementation on the CYBER 203/205 and on the Finite Element Machine is discussed and results obtained using the method on these machines are given.
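
    For reference, the preconditioned conjugate gradient iteration at the heart of such a method can be sketched as below, with a simple Jacobi (diagonal) preconditioner standing in for the paper's m-step preconditioner; this is the textbook serial form, not the CYBER 203/205 or Finite Element Machine implementation.

        # Textbook preconditioned conjugate gradient (serial sketch; a diagonal
        # preconditioner stands in for the paper's m-step preconditioner).
        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv @ r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv @ r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 100                                     # symmetric positive definite test matrix
        A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
        b = np.ones(n)
        x = pcg(A, b, np.diag(1.0 / np.diag(A)))    # Jacobi preconditioner
        print(np.linalg.norm(A @ x - b))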

  4. Machine learning approaches to the social determinants of health in the health and retirement study.

    PubMed

    Seligman, Benjamin; Tuljapurkar, Shripad; Rehkopf, David

    2018-04-01

    Social and economic factors are important predictors of health and of recognized importance for health systems. However, machine learning, used elsewhere in the biomedical literature, has not been extensively applied to study relationships between society and health. We investigate how machine learning may add to our understanding of social determinants of health using data from the Health and Retirement Study. A linear regression of age and gender, and a parsimonious theory-based regression additionally incorporating income, wealth, and education, were used to predict systolic blood pressure, body mass index, waist circumference, and telomere length. Prediction, fit, and interpretability were compared across four machine learning methods: linear regression, penalized regressions, random forests, and neural networks. All models had poor out-of-sample prediction. Most machine learning models performed similarly to the simpler models. However, neural networks greatly outperformed the three other methods. Neural networks also had good fit to the data (R2 between 0.4 and 0.6, versus <0.3 for all others). Across machine learning models, nine variables were frequently selected or highly weighted as predictors: dental visits, current smoking, self-rated health, serial-seven subtractions, probability of receiving an inheritance, probability of leaving an inheritance of at least $10,000, number of children ever born, African-American race, and gender. Some of the machine learning methods do not improve prediction or fit beyond the simpler models; neural networks, however, performed well. The predictors identified across models suggest underlying social factors that are important predictors of biological indicators of chronic disease, and that the non-linear and interactive relationships between variables fundamental to the neural network approach may be important to consider.

  5. Generation of Custom DSP Transform IP Cores: Case Study Walsh-Hadamard Transform

    DTIC Science & Technology

    2002-09-01

    Presentation excerpt contrasting the hardware designer's background (finite state machines, pipelining, systolic arrays) with the mathematician's (linear algebra, digital signal processing, adaptive filter theory), and listing the generator's synthesis parameters: technology library, bit-width (8), HF factor (1, 2, 3, 6), VF factor (1, 2, 4, ..., 32), and Xilinx FPGA place-and-route performance.

  6. HEC Applications on Columbia Project

    NASA Technical Reports Server (NTRS)

    Taft, Jim

    2004-01-01

    NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems is 3 TFLOP/s in peak performance - approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single-system-image machine with one Linux OS, one high performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as provide significant improvements in the codes and in the rate of scientific discovery across the scientific disciplines within NASA's missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10240 cpus) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster and at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.

  7. Non-linear effects in bunch compressor of TARLA

    NASA Astrophysics Data System (ADS)

    Yildiz, Hüseyin; Aksoy, Avni; Arikan, Pervin

    2016-03-01

    Transport of a beam through an accelerator beamline is affected by high-order and non-linear effects such as space charge, coherent synchrotron radiation, wakefields, etc. These effects distort the beam and lead to particle loss, emittance growth, bunch length variation, beam halo formation, etc. One of the known non-linear effects in low-energy machines is the space charge effect. In this study we focus on the space charge effect for the Turkish Accelerator and Radiation Laboratory in Ankara (TARLA) machine, which is designed to drive an InfraRed Free Electron Laser covering the range of 3-250 µm. Moreover, we discuss second-order effects in the bunch compressor of TARLA.

  8. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
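
    To make the linear non-increasing processing-time model concrete, the snippet below evaluates completion times for a given job sequence when the actual processing time of a job started at time t is a_j - b*t; the coefficients and sequence are hypothetical, and the paper's O(n log n) due-window assignment algorithms are not reproduced here.

        # Completion times under linearly decreasing actual processing times:
        # a job started at time t takes max(a_j - b * t, 0) units (hypothetical coefficients).
        def completion_times(sequence, a, b):
            t = 0.0
            finish = []
            for j in sequence:
                p = max(a[j] - b * t, 0.0)      # actual processing time at start time t
                t += p
                finish.append(t)
            return finish

        a = [5.0, 3.0, 4.0]                     # basic processing times a_j
        print(completion_times([1, 2, 0], a, b=0.1))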

  9. Does Mission Matter? An Analysis of Private School Achievement Differences

    ERIC Educational Resources Information Center

    Boerema, Albert J.

    2009-01-01

    Using student achievement data from British Columbia, Canada, this study is an exploration of the differences that lie within the private school sector using hierarchical linear modeling to analyze the data. The analysis showed that when controlling for language, parents' level of educational attainment, and prior achievement, the private school…

  10. Electric converters of electromagnetic strike machine with capacitor supply

    NASA Astrophysics Data System (ADS)

    Usanov, K. M.; Volgin, A. V.; Kargin, V. A.; Moiseev, A. P.; Chetverikov, E. A.

    2018-03-01

    The application of pulse linear electromagnetic engines in low-power strike machines (impact energy of 0.01...1.0 kJ) whose characteristic operating mode is infrequent strikes (a pulsed seismic vibrator, a device for collapsing arches in bulk-material bins) is quite effective. At the same time, the technical and economic performance of such machines is largely determined by the ability of the power source to deliver supply pulses of large instantaneous power to the winding of the linear electromagnetic motor. The use of intermediate energy storage devices in the power systems of such infrequently striking linear electromagnetic engines makes it possible to obtain large instantaneous powers easily, to force the energy conversion, and to increase the performance of the machine. A capacitor power supply for a pulsed source of seismic waves intended for shallow-depth exploration is proposed. The sections of the capacitor storage (CS) are connected to the winding of the linear electromagnetic motor by thyristor dischargers, whose sequence of activation is determined by the control device. The capacitors are charged to the required voltage either directly from the battery source or, through a converter, from a battery source with a smaller number of batteries.

  11. Dimension Reduction With Extreme Learning Machine.

    PubMed

    Kasun, Liyanaarachchi Lekamalage Chamara; Yang, Yan; Huang, Guang-Bin; Zhang, Zhengyou

    2016-08-01

    Data may often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms, such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and auto-encoders (AE), is to reduce the noise or irrelevant information in the data. The features of PCA (eigenvectors) and of a linear AE are not able to represent data as parts (e.g., the nose in a face image). On the other hand, NMF and non-linear AE are hampered by slow learning speed, and RP only represents a subspace of the original data. This paper introduces a dimension reduction framework which to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, this paper investigates a linear and a non-linear dimension reduction framework referred to as extreme learning machine AE (ELM-AE) and sparse ELM-AE (SELM-AE). In contrast to tied-weight AE, the hidden neurons in ELM-AE and SELM-AE need not be tuned, and their parameters (e.g., the input weights in additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition data set, the CIFAR-10 object recognition data set, and the NORB object recognition data set show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.
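
    A compact sketch of the linear ELM auto-encoder step described above: random orthogonal input weights map the data to a hidden representation, the output weights are obtained in closed form by regularized least squares, and their transpose projects the data to the lower dimension. The shapes, ridge parameter, and data below are illustrative assumptions.

        # Sketch of a linear ELM auto-encoder (ELM-AE) for dimension reduction
        # (placeholder data; dimensions and regularization are illustrative).
        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.normal(size=(1000, 64))                         # n samples, d input dims
        n_hidden, C = 16, 1e-2                                  # reduced dimension, ridge parameter

        W = np.linalg.qr(rng.normal(size=(64, n_hidden)))[0]    # orthogonal random input weights
        b = rng.normal(size=n_hidden)
        H = X @ W + b                                           # linear hidden layer (no activation)

        # Output weights: solve (H^T H + I/C) beta = H^T X  (regularized least squares).
        beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
        X_reduced = X @ beta.T                                  # ELM-AE projection to n_hidden dims
        print(X_reduced.shape)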

  12. Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram

    PubMed Central

    Kim, Jongin; Park, Hyeong-jun

    2016-01-01

    The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels, /a/, /e/, /i/, /o/, and /u/. We divided each single-trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggest that EEG responses to imagined speech can be successfully classified in a single trial using an extreme learning machine with radial basis function and linear kernels. This study on classification of imagined speech might contribute to the development of silent speech BCI systems. PMID:28097128
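
    The trial-level decision rule described above (classify each of the thirty segments, then take the most frequent label) reduces to a simple majority vote; the segment predictions below are placeholders.

        # Trial label = most frequent label among its segment-level predictions.
        from collections import Counter

        segment_predictions = ["a", "e", "a", "a", "o"]       # placeholder per-segment outputs
        trial_label = Counter(segment_predictions).most_common(1)[0][0]
        print(trial_label)                                    # -> "a"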

  13. High Energy Colliders

    NASA Astrophysics Data System (ADS)

    Palmer, R. B.; Gallardo, J. C.

    Contents: Introduction; Physics considerations (general; required luminosity for lepton colliders; the effective physics energies of hadron colliders); Hadron-hadron machines (luminosity; size and cost); Circular e^{+}e^- machines (luminosity; size and cost); e^{+}e^- linear colliders (luminosity; conventional RF; superconducting RF; at higher energies); γ-γ colliders; μ^{+}μ^- colliders (advantages and disadvantages; design studies; status and required R and D); Comparison of machines; Conclusions; Discussion.

  14. The circular form of the linear superconducting machine for marine propulsion

    NASA Astrophysics Data System (ADS)

    Rakels, J. H.; Mahtani, J. L.; Rhodes, R. G.

    1981-01-01

    The superconducting linear synchronous machine (LSM) is an efficient method of propulsion for advanced ground transport systems and can also be used in marine engineering for the propulsion of large commercial vessels, tankers, and military ships. It provides high torque at low shaft speeds and ease of reversibility; a circular LSM design is proposed as a drive motor. The equipment is compared with superconducting homopolar motors, showing flexibility in design, built-in redundancy features, and reliability.

  15. A Very Fast and Angular Momentum Conserving Tree Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcello, Dominic C., E-mail: dmarce504@gmail.com

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.

  16. Dual linear structured support vector machine tracking method via scale correlation filter

    NASA Astrophysics Data System (ADS)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  17. A Spatial Correlation Model of Permeability on the Columbia River Plateau

    NASA Astrophysics Data System (ADS)

    Jayne, R., Jr.; Pollyea, R. M.

    2017-12-01

    This study presents a spatial correlation model of regional-scale permeability variability within the Columbia River Basalt Group (CRBG). The data were compiled from the literature and include 893 aquifer test results from 598 individual wells. In order to quantify the spatial variation of permeability within the CRBG, three experimental variograms (two horizontal and one vertical) are calculated and then fit with a linear combination of mathematical models. The horizontal variograms show a 4.5:1 anisotropy ratio for the permeability correlation structure, with a long-range correlation of 35 km at N40°E. The km-scale range of these variograms suggests that there is regional control on permeability within the CRBG. One plausible control on the permeability distribution is that rapid crustal loading during CRBG emplacement (~80% over 1 million years) resulted in an isostatic response where the Columbia Plateau had previously undergone subsidence. To support this hypothesis, we calculate a 200 m moving average of all permeability values with depth. This calculation shows that permeability generally follows a systematic decay until 1,100 m depth, beyond which the 200 m moving-average permeability increases by 3 orders of magnitude. Since basalt fracture networks govern permeability on the Columbia River Plateau, this observation is consistent with basal flexure causing tensile stresses that counteract lithostatic loading, thus maintaining higher than expected permeability at depth within the Columbia River Basalt Group. These results may have important implications for regional CRBG groundwater management, as well as for engineered reservoirs for carbon capture and sequestration and nuclear waste storage.

  18. Quick-Turn Finite Element Analysis for Plug-and-Play Satellite Structures

    DTIC Science & Technology

    2007-03-01

    produced from 0.375 inch round stock and turned on a machine lathe to achieve the shoulder feature and drilled to make it hollow. Figure 3.1...component, a linear taper was machined from the connection shoulder to the solar panel connecting fork. The part was then turned using the machine lathe ...utilizing a modern five-axis Computer Numerical Control (CNC) machine mill, the process time could be reduced by as much as seventy-five percent and the

  19. Magnetic Flux Distribution of Linear Machines with Novel Three-Dimensional Hybrid Magnet Arrays

    PubMed Central

    Yao, Nan; Yan, Liang; Wang, Tianyi; Wang, Shaoping

    2017-01-01

    The objective of this paper is to propose a novel tubular linear machine with hybrid permanent magnet arrays and multiple movers, which could be employed for either actuation or sensing technology. The hybrid magnet array produces flux distribution on both sides of windings, and thus helps to increase the signal strength in the windings. The multiple movers are important for airspace technology, because they can improve the system’s redundancy and reliability. The proposed design concept is presented, and the governing equations are obtained based on source free property and Maxwell equations. The magnetic field distribution in the linear machine is thus analytically formulated by using Bessel functions and harmonic expansion of magnetization vector. Numerical simulation is then conducted to validate the analytical solutions of the magnetic flux field. It is proved that the analytical model agrees with the numerical results well. Therefore, it can be utilized for the formulation of signal or force output subsequently, depending on its particular implementation. PMID:29156577

  20. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
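
    The Wiener filter used as the baseline above is simply a regularized linear regression from a window of recent binned spike counts to hand kinematics. A minimal sketch on synthetic data (not the monkey recordings analyzed in the paper; the lag count and regularization value are illustrative):

      import numpy as np

      def wiener_filter(spikes, kinematics, n_lags=10, reg=1e-2):
          # Fit y[t] = sum_k W_k x[t-k] by regularized least squares (Wiener filter).
          T, n_units = spikes.shape
          X = np.hstack([spikes[n_lags - k - 1 : T - k - 1] for k in range(n_lags)])
          Y = kinematics[n_lags:]
          W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
          return W, X @ W                      # weights and in-sample prediction

      rng = np.random.default_rng(5)
      spikes = rng.poisson(2.0, size=(2000, 50))                   # 50 units, binned counts
      kinematics = np.cumsum(rng.normal(size=(2000, 2)), axis=0)   # hand x, y positions
      W, pred = wiener_filter(spikes, kinematics)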

  1. Magnetic Flux Distribution of Linear Machines with Novel Three-Dimensional Hybrid Magnet Arrays.

    PubMed

    Yao, Nan; Yan, Liang; Wang, Tianyi; Wang, Shaoping

    2017-11-18

    The objective of this paper is to propose a novel tubular linear machine with hybrid permanent magnet arrays and multiple movers, which could be employed for either actuation or sensing technology. The hybrid magnet array produces flux distribution on both sides of windings, and thus helps to increase the signal strength in the windings. The multiple movers are important for airspace technology, because they can improve the system's redundancy and reliability. The proposed design concept is presented, and the governing equations are obtained based on source free property and Maxwell equations. The magnetic field distribution in the linear machine is thus analytically formulated by using Bessel functions and harmonic expansion of magnetization vector. Numerical simulation is then conducted to validate the analytical solutions of the magnetic flux field. It is proved that the analytical model agrees with the numerical results well. Therefore, it can be utilized for the formulation of signal or force output subsequently, depending on its particular implementation.

  2. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  3. Gender classification of running subjects using full-body kinematics

    NASA Astrophysics Data System (ADS)

    Williams, Christina M.; Flora, Jeffrey B.; Iftekharuddin, Khan M.

    2016-05-01

    This paper proposes novel automated gender classification of subjects while engaged in running activity. The machine learning techniques include preprocessing steps using principal component analysis followed by classification with linear discriminant analysis, and nonlinear support vector machines, and decision-stump with AdaBoost. The dataset consists of 49 subjects (25 males, 24 females, 2 trials each) all equipped with approximately 80 retroreflective markers. The trials are reflective of the subject's entire body moving unrestrained through a capture volume at a self-selected running speed, thus producing highly realistic data. The classification accuracy using leave-one-out cross validation for the 49 subjects is improved from 66.33% using linear discriminant analysis to 86.74% using the nonlinear support vector machine. Results are further improved to 87.76% by means of implementing a nonlinear decision stump with AdaBoost classifier. The experimental findings suggest that the linear classification approaches are inadequate in classifying gender for a large dataset with subjects running in a moderately uninhibited environment.

  4. Non-linear effects in bunch compressor of TARLA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yildiz, Hüseyin, E-mail: huseyinyildiz006@gmail.com, E-mail: huseyinyildiz@gazi.edu.tr; Aksoy, Avni; Arikan, Pervin

    2016-03-25

    Transport of a beam through an accelerator beamline is affected by high-order and non-linear effects such as space charge, coherent synchrotron radiation, wakefields, etc. These effects distort the form of the beam and lead to particle loss, emittance growth, bunch length variation, beam halo formation, etc. One of the known non-linear effects in low-energy machines is the space charge effect. In this study we focus on the space charge effect for the Turkish Accelerator and Radiation Laboratory in Ankara (TARLA) machine, which is designed to drive an infrared free electron laser covering the range of 3-250 µm. Moreover, we discuss second-order effects on the bunch compressor of TARLA.

  5. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  6. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for its strong generalization capability. SVM can be applied to both classification and regression, in linear form or with non-linear kernels. However, SVM has a weakness: it is difficult to determine the optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick can use various kernel functions, such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels, and each function has parameters that affect the accuracy of the SVM classification. To solve this problem, a genetic algorithm is proposed as the search algorithm for the optimal parameter values, thereby increasing the classification accuracy of the SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms systematically find optimal kernel parameters for SVM, instead of relying on randomly selected kernel parameters. The best accuracy was improved over the baseline kernels (linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%). However, for larger data sets this method is not practical because it takes a lot of time.
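
    To make the parameter-search idea concrete, here is a toy sketch that evolves log-scaled (C, gamma) values for an RBF-kernel SVM by selection and mutation, scored with cross-validated accuracy. It uses a synthetic dataset rather than the Australian Credit Approval data, and it is a simplified evolutionary loop (no crossover), not the authors' genetic algorithm.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=300, n_features=14, random_state=0)

      def fitness(ind):
          C, gamma = 10.0 ** ind                      # genes are log10(C) and log10(gamma)
          return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

      pop = rng.uniform(-3, 3, size=(12, 2))          # initial population
      for _ in range(10):                             # generations
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[-6:]]      # selection: keep the best half
          children = parents[rng.integers(0, 6, 6)] + rng.normal(0, 0.3, (6, 2))  # mutation
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("best C=%.3g gamma=%.3g" % tuple(10.0 ** best))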

  7. State Policies Targeting Junk Food in Schools: Racial/Ethnic Differences in the Effect of Policy Change on Soda Consumption

    PubMed Central

    Stevens, June; Evenson, Kelly R.; Ward, Dianne S.; Poole, Charles; Maciejewski, Matthew L.; Murray, David M.; Brownson, Ross C.

    2011-01-01

    Objectives. We estimated the association between state policy changes and adolescent soda consumption and body mass index (BMI) percentile, overall and by race/ethnicity. Methods. We obtained data on whether states required or recommended that schools prohibit junk food in vending machines, snack bars, concession stands, and parties from the 2000 and 2006 School Health Policies and Programs Study. We used linear mixed models to estimate the association between 2000–2006 policy changes and 2007 soda consumption and BMI percentile, as reported by 90 730 students in 33 states and the District of Columbia in the Youth Risk Behavior Survey, and to test for racial/ethnic differences in the associations. Results. Policy changes targeting concession stands were associated with 0.09 fewer servings of soda per day among students (95% confidence interval [CI] = −0.17, −0.01); the association was more pronounced among non-Hispanic Blacks (0.19 fewer servings per day). Policy changes targeting parties were associated with 0.07 fewer servings per day (95% CI = −0.13, 0.00). Policy changes were not associated with BMI percentile in any group. Conclusions. State policies targeting junk food in schools may reduce racial/ethnic disparities in adolescent soda consumption, but their impact appears to be too weak to reduce adolescent BMI percentile. PMID:21778484

  8. A Hybrid Method for Opinion Finding Task (KUNLP at TREC 2008 Blog Track)

    DTIC Science & Technology

    2008-11-01

    retrieve relevant documents. For the Opinion Retrieval subtask, we propose a hybrid model of a lexicon-based approach and a machine learning approach for...estimating and ranking the opinionated documents. For the Polarized Opinion Retrieval subtask, we employ machine learning for predicting the polarity...and a linear combination technique for ranking polar documents. The hybrid model which utilizes both the lexicon-based approach and the machine learning approach

  9. Teaching Machines, Programming, Computers, and Instructional Technology: The Roots of Performance Technology.

    ERIC Educational Resources Information Center

    Deutsch, William

    1992-01-01

    Reviews the history of the development of the field of performance technology. Highlights include early teaching machines, instructional technology, learning theory, programed instruction, the systems approach, needs assessment, branching versus linear program formats, programing languages, and computer-assisted instruction. (LRW)

  10. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural networks model are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
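
    A compact sketch of the model comparison described above, using synthetic stand-ins for the encoded surface features (concourse, spot, runway, departure fix, weight class) and root-mean-square error as the score; it is illustrative only, not the study's data or feature encoding.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.random((2000, 6))                                      # encoded surface features
      y = 8 + 15 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0, 2, 2000)   # taxi-out minutes

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      for model in (LinearRegression(), RandomForestRegressor(n_estimators=200, random_state=0)):
          pred = model.fit(X_tr, y_tr).predict(X_te)
          rmse = mean_squared_error(y_te, pred) ** 0.5
          print(type(model).__name__, "RMSE = %.2f min" % rmse)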

  11. Astronaut Thuot and Gemar work with Middeck O-Gravity Dynamics Experiment (MODE)

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Astronauts Pierre J. Thuot (top) and Charles D. (Sam) Gemar show off the Middeck O-Gravity Dynamics Experiment (MODE) aboard the Earth-orbiting Space Shuttle Columbia. The reusable test facility is designed to study the non-linear gravity-dependent behavior of two types of space hardware - large space structures (as depicted here) and contained fluids - planned for future spacecraft.

  12. An Intervention To Enhance the Food Environment in Public Recreation and Sport Settings: A Natural Experiment in British Columbia, Canada.

    PubMed

    Naylor, Patti-Jean; Olstad, Dana Lee; Therrien, Suzanne

    2015-08-01

    Publicly funded recreation and sports facilities provide children with access to affordable physical activities, although they often have unhealthy food environments that may increase child obesity risk. This study evaluated the impact of a capacity-building intervention (Healthy Food and Beverage Sales; HFBS) on organizational capacity for providing healthy food environments, health of vending machine products, and food policy development in recreation and sport facilities in British Columbia, Canada. Twenty-one HFBS communities received training, resources, and technical support to improve their food environment over 8 months in 2009-2010, whereas 23 comparison communities did not. Communities self-reported organizational capacity, food policies, and audited vending machine products at baseline and follow-up. Repeated-measures analysis of variance evaluated intervention impact. Intervention and comparison communities reported higher organizational capacity at follow-up; however, improvements were greater in HFBS communities (p<0.001). Healthy vending products increased from 11% to 15% (p<0.05), whereas unhealthy products declined from 56% to 46% (p<0.05) in HFBS communities, with no changes in comparison communities. At baseline 10% of HFBS communities reported having a healthy food policy, whereas 48% reported one at follow-up. No comparison communities had food policies. This is the first large, controlled study to examine the impact of an intervention to improve recreation and sport facility food environments. HFBS communities increased their self-rated capacity to provide healthy foods, healthy vending product offerings, and food policies to a greater extent than comparison communities. Recreation and sport settings are a priority setting for supporting healthy dietary behaviors among children.

  13. Solid-state resistor for pulsed power machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltzfus, Brian; Savage, Mark E.; Hutsel, Brian Thomas

    2016-12-06

    A flexible solid-state resistor comprises a string of ceramic resistors that can be used to charge the capacitors of a linear transformer driver (LTD) used in a pulsed power machine. The solid-state resistor is able to absorb the energy of a switch prefire, thereby limiting LTD cavity damage, yet has a sufficiently low RC charge time to allow the capacitor to be recharged without disrupting the operation of the pulsed power machine.

  14. Source localization in an ocean waveguide using supervised machine learning.

    PubMed

    Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter

    2017-09-01

    Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
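
    The preprocessing step described above (a normalized sample covariance matrix built from array snapshots and flattened into a real-valued input vector) can be sketched as follows; the snapshot data and the discretized range classes here are synthetic placeholders, and a random forest stands in for the three classifiers compared in the paper.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def normalized_scm(snapshots):
          # snapshots: (n_snapshots, n_sensors) complex pressures from a vertical line array
          p = snapshots / np.linalg.norm(snapshots, axis=1, keepdims=True)   # per-snapshot norm
          C = (p.conj().T @ p) / p.shape[0]                                  # sample covariance
          iu = np.triu_indices_from(C)
          return np.concatenate([C[iu].real, C[iu].imag])                    # real input vector

      rng = np.random.default_rng(0)
      X, y = [], []
      for range_class in range(10):                  # discretized source ranges
          for _ in range(50):
              snaps = rng.standard_normal((20, 16)) + 1j * rng.standard_normal((20, 16))
              X.append(normalized_scm(snaps)); y.append(range_class)
      clf = RandomForestClassifier(n_estimators=100).fit(np.array(X), y)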

  15. Kinetic water-bag model of global collisional drift waves and ion temperature gradient instabilities in cylindrical geometry

    NASA Astrophysics Data System (ADS)

    Gravier, E.; Plaut, E.

    2013-04-01

    Collisional drift waves and ion temperature gradient (ITG) instabilities are studied using a linear water-bag kinetic model [P. Morel et al., Phys. Plasmas 14, 112109 (2007)]. An efficient spectral method, already validated in the case of drift waves instabilities [E. Gravier et al., Eur. Phys. J. D 67, 7 (2013)], allows a fast solving of the global linear problem in cylindrical geometry. The comparison between the linear ITG instability properties thus computed and the ones given by the COLUMBIA experiment [R. G. Greaves et al., Plasma Phys. Controlled Fusion 34, 1253 (1992)] shows a qualitative agreement. Moreover, the transition between collisional drift waves and ITG instabilities is studied theoretically as a function of the ion temperature profile.

  16. Applying spectral unmixing and support vector machine to airborne hyperspectral imagery for detecting giant reed

    USDA-ARS?s Scientific Manuscript database

    This study evaluated linear spectral unmixing (LSU), mixture tuned matched filtering (MTMF) and support vector machine (SVM) techniques for detecting and mapping giant reed (Arundo donax L.), an invasive weed that presents a severe threat to agroecosystems and riparian areas throughout the southern ...

  17. Teaching Machines and Programmed Instruction.

    ERIC Educational Resources Information Center

    Kay, Harry; And Others

    The various devices used in programed instruction range from the simple linear programed book to branching and skip-branching programs, adaptive teaching machines, and even complex computer-based systems. In order to provide a background for the would-be programer, the essential principles of each of these devices are outlined. Different ideas of…

  18. Linear- and Repetitive Feature Detection Within Remotely Sensed Imagery

    DTIC Science & Technology

    2017-04-01

    applicable to Python or other programming languages with image-processing capabilities. 4.1 Classification machine learning: The first methodology uses...remotely sensed images that are in panchromatic or true-color formats. Image-processing techniques, including Hough transforms, machine learning, and...data fusion ... 6.3 Context-based processing

  19. Optical realization of optimal symmetric real state quantum cloning machine

    NASA Astrophysics Data System (ADS)

    Hu, Gui-Yu; Zhang, Wen-Hai; Ye, Liu

    2010-01-01

    We present an experimentally uniform linear optical scheme to implement the optimal 1→2 symmetric and the optimal 1→3 symmetric economical real-state quantum cloning machines of the polarization state of the single photon. This scheme requires single-photon sources and a two-photon polarization entangled state as input states. It also involves linear optical elements and three-photon coincidence. Then we consider the realistic realization of the scheme by using parametric down-conversion as the photon resource. It is shown that under certain conditions the scheme is feasible with current experimental technology.

  20. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    NASA Astrophysics Data System (ADS)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, which is a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
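
    A minimal sketch of the comparison described above, with multiple linear regression, an RBF-kernel SVM, and a small neural network fitted to hypothetical GPS-derived predictors and scored by RMSE and MAE; the predictors, target, and train/test split are synthetic, not the authors' data.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_absolute_error, mean_squared_error
      from sklearn.neural_network import MLPRegressor
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      X = rng.random((1000, 3))           # e.g. zenith wet delay, surface pressure, temperature
      pwv = 10 + 30 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 1000)   # synthetic PWV in mm

      models = {"MLR": LinearRegression(),
                "SVM": SVR(kernel="rbf", C=10),
                "ANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)}
      for name, model in models.items():
          pred = model.fit(X[:800], pwv[:800]).predict(X[800:])
          rmse = mean_squared_error(pwv[800:], pred) ** 0.5
          mae = mean_absolute_error(pwv[800:], pred)
          print("%s  RMSE=%.2f mm  MAE=%.2f mm" % (name, rmse, mae))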

  1. Management Perspectives Pertaining to Root Cause Analyses of Nunn-McCurdy Breaches. Volume 4

    DTIC Science & Technology

    2013-01-01

    the FY2012 NDAA, the Army revised its initial budget request, allocating money from the purchase of new M2 .50 caliber machine guns to the...Quick-change machine gun barrel Explosive reactive armor Linear demolition charge system Full width, surface mine ploughs On-board vehicle power...Quantity Oversight of ACAT II Programs 45 for a restart to the program citing a “critical shortage of serviceable machine guns for our Soldiers who

  2. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while solving large-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the result proved to be satisfactory.

  3. Optimizing fixed observational assets in a coastal observatory

    NASA Astrophysics Data System (ADS)

    Frolov, Sergey; Baptista, António; Wilkin, Michael

    2008-11-01

    Proliferation of coastal observatories necessitates an objective approach to managing observational assets. In this article, we used our experience in the coastal observatory for the Columbia River estuary and plume to identify and address common problems in managing fixed observational assets, such as salinity, temperature, and water level sensors attached to pilings and moorings. Specifically, we addressed the following problems: assessing the quality of an existing array, adding stations to an existing array, removing stations from an existing array, validating an array design, and targeting an array toward data assimilation or monitoring. Our analysis was based on a combination of methods from the oceanographic and statistical literature, mainly on the statistical machinery of the best linear unbiased estimator. The key information required for our analysis was the covariance structure for a field of interest, which was computed from the output of assimilated and non-assimilated models of the Columbia River estuary and plume. The network optimization experiments in the Columbia River estuary and plume proved to be successful, largely withstanding the scrutiny of sensitivity and validation studies, and hence providing valuable insight into optimization and operation of the existing observational network. Our success in the Columbia River estuary and plume suggests that algorithms for optimal placement of sensors are reaching maturity and are likely to play a significant role in the design of emerging ocean observatories, such as the United States' Ocean Observatories Initiative (OOI) and Integrated Ocean Observing System (IOOS) observatories, and smaller regional observatories.
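
    The array-assessment machinery mentioned above rests on the best linear unbiased estimator: given the covariance among station observations and between the stations and an unobserved target location, the estimation weights and the expected error variance follow in closed form, and the error variance can be used to score candidate station layouts. A minimal sketch with a hypothetical exponential covariance model (not the Columbia River model output):

      import numpy as np

      def blue_weights(station_cov, target_cov, target_var):
          # Best linear unbiased estimate of a zero-mean field at an unobserved point.
          #   station_cov : (n, n) covariance between station observations
          #   target_cov  : (n,)   covariance between each station and the target point
          #   target_var  : prior variance of the field at the target point
          w = np.linalg.solve(station_cov, target_cov)   # estimation weights
          err_var = target_var - target_cov @ w          # expected estimation error variance
          return w, err_var

      # Hypothetical exponential covariance between three stations and one target point
      dists = np.array([[0.0, 2.0, 5.0], [2.0, 0.0, 3.0], [5.0, 3.0, 0.0]])
      K = np.exp(-dists / 4.0)
      k0 = np.exp(-np.array([1.0, 1.5, 4.0]) / 4.0)
      w, err = blue_weights(K, k0, 1.0)
      print("weights", w, "error variance %.3f" % err)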

  4. Assessment of effectiveness of geologic isolation systems. Geologic-simulation model for a hypothetical site in the Columbia Plateau. Volume 2: results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, M.G.; Petrie, G.M.; Baldwin, A.J.

    1982-06-01

    This report contains the input data and computer results for the Geologic Simulation Model. This model is described in detail in the following report: Petrie, G.M., et al. 1981. Geologic Simulation Model for a Hypothetical Site in the Columbia Plateau, Pacific Northwest Laboratory, Richland, Washington. The Geologic Simulation Model is a quasi-deterministic process-response model which simulates, for a million years into the future, the development of the geologic and hydrologic systems of the ground-water basin containing the Pasco Basin. Effects of natural processes on the ground-water hydrologic system are modeled principally by rate equations. The combined effects and synergistic interactions of different processes are approximated by linear superposition of their effects during discrete time intervals in a stepwise-integration approach.

  5. Gesture-controlled interfaces for self-service machines and other applications

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
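
    A toy sketch of the linear-in-parameters idea described above: an oscillatory trajectory is modeled as x[t+1] = a*x[t] + b*x[t-1], the parameters are fitted by linear least squares, and a bank of parameter "bins" is scored by prediction error to pick the best-matching gesture. The model order, gestures, and data are illustrative, not the patent's implementation.

      import numpy as np

      def fit_gesture_params(x):
          # Least-squares fit of a linear-in-parameters model x[t+1] = a*x[t] + b*x[t-1]
          A = np.column_stack([x[1:-1], x[:-2]])
          theta, *_ = np.linalg.lstsq(A, x[2:], rcond=None)
          return theta

      def prediction_error(theta, x):
          # Residual of a candidate parameter "bin" against an observed trajectory
          pred = theta[0] * x[1:-1] + theta[1] * x[:-2]
          return np.mean((x[2:] - pred) ** 2)

      t = np.linspace(0, 2 * np.pi, 200)
      bins = {"slow_wave": fit_gesture_params(np.sin(2 * t)),
              "fast_wave": fit_gesture_params(np.sin(6 * t))}
      observed = np.sin(6 * t) + 0.05 * np.random.randn(200)
      print(min(bins, key=lambda name: prediction_error(bins[name], observed)))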

  6. Automated discrimination of dementia spectrum disorders using extreme learning machine and structural T1 MRI features.

    PubMed

    Jongin Kim; Boreom Lee

    2017-07-01

    The classification of neuroimaging data for the diagnosis of Alzheimer's disease (AD) is one of the main research goals of the neuroscience and clinical fields. In this study, we applied an extreme learning machine (ELM) classifier to discriminate AD and mild cognitive impairment (MCI) from normal controls (NC). We compared the performance of ELM with that of a linear kernel support vector machine (SVM) for 718 structural MRI images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The data consisted of normal controls, MCI converters (MCI-C), MCI non-converters (MCI-NC), and AD. We employed the SVM-based recursive feature elimination (RFE-SVM) algorithm to find the optimal subset of features. In this study, we found that the RFE-SVM feature selection approach in combination with ELM shows superior classification accuracy compared with linear kernel SVM for structural T1 MRI data.

  7. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    DOEpatents

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  8. Kinetic water-bag model of global collisional drift waves and ion temperature gradient instabilities in cylindrical geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gravier, E.; Plaut, E.

    2013-04-15

    Collisional drift waves and ion temperature gradient (ITG) instabilities are studied using a linear water-bag kinetic model [P. Morel et al., Phys. Plasmas 14, 112109 (2007)]. An efficient spectral method, already validated in the case of drift waves instabilities [E. Gravier et al., Eur. Phys. J. D 67, 7 (2013)], allows a fast solving of the global linear problem in cylindrical geometry. The comparison between the linear ITG instability properties thus computed and the ones given by the COLUMBIA experiment [R. G. Greaves et al., Plasma Phys. Controlled Fusion 34, 1253 (1992)] shows a qualitative agreement. Moreover, the transition between collisional drift waves and ITG instabilities is studied theoretically as a function of the ion temperature profile.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario

    Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Runtime plots of daily (6 MV) output data, acquired using in-house ion-chamber-based devices over three years for fifteen linear accelerators of varying make and model, were reviewed. Shifts in output due to known interventions on the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model, or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2–3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out-of-control states (i.e. non-stochastic deviations from otherwise expected behavior) and could indicate service requirements. Results also pointed to an optimal setpoint for accelerators such that the output of machines is maintained within set tolerances and interventions are required less frequently.
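
    The seasonal characterization described above amounts to fitting a linear drift plus an annual sinusoid to the daily output record; because the sinusoid can be expanded into sine and cosine terms, the fit reduces to linear least squares. A sketch on synthetic output data (the amplitude, drift, and noise level are made up, not the reported measurements):

      import numpy as np

      # Hypothetical three years of daily 6 MV output readings (% of baseline)
      days = np.arange(3 * 365)
      output = 100 + 0.5 * np.sin(2 * np.pi * days / 365.25 + 1.2) \
               + 0.002 * days + np.random.normal(0, 0.2, days.size)

      # Linear-in-parameters model:
      #   output ~ c0 + c1*t + c2*sin(2*pi*t/365.25) + c3*cos(2*pi*t/365.25)
      w = 2 * np.pi / 365.25
      A = np.column_stack([np.ones_like(days), days, np.sin(w * days), np.cos(w * days)])
      c, *_ = np.linalg.lstsq(A, output, rcond=None)
      amplitude = np.hypot(c[2], c[3])
      print("annual drift %.2f %%/yr, seasonal amplitude %.2f %%" % (c[1] * 365.25, amplitude))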

  10. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

    Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, certain requirements are placed on the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features becomes even greater when the image is converted to another color space, because some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing due to the three times greater data amount per image, and the same applies to the three times more memory needed. Advancements in computers, memory, and processing units have made it possible today to handle even large color images cost-efficiently. In some cases, image analysis in color images can in fact be easier and faster than with a similar gray-level image because of the additional information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity, white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.

  11. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during the observation period.

  12. Tool setting device

    DOEpatents

    Brown, Raymond J.

    1977-01-01

    The present invention relates to a tool setting device for use with numerically controlled machine tools, such as lathes and milling machines. A reference position of the machine tool relative to the workpiece along both the X and Y axes is utilized by the control circuit for driving the tool through its program. This reference position is determined for both axes by displacing a single linear variable displacement transducer (LVDT) with the machine tool through a T-shaped pivotal bar. The use of the T-shaped bar allows the cutting tool to be moved sequentially in the X or Y direction for indicating the actual position of the machine tool relative to the predetermined desired position in the numerical control circuit by using a single LVDT.

  13. Intelligent image processing for machine safety

    NASA Astrophysics Data System (ADS)

    Harvey, Dennis N.

    1994-10-01

    This paper describes the use of intelligent image processing as a machine guarding technology. One or more color, linear array cameras are positioned to view the critical region(s) around a machine tool or other piece of manufacturing equipment. The image data is processed to provide indicators of conditions dangerous to the equipment via color content, shape content, and motion content. The data from these analyses is then sent to a threat evaluator. The purpose of the evaluator is to determine if a potentially machine-damaging condition exists based on the analyses of color, shape, and motion, and on `knowledge' of the specific environment of the machine. The threat evaluator employs fuzzy logic as a means of dealing with uncertainty in the vision data.

  14. Brain-state invariant thalamo-cortical coordination revealed by non-linear encoders.

    PubMed

    Viejo, Guillaume; Cortier, Thomas; Peyrache, Adrien

    2018-03-01

    Understanding how neurons cooperate to integrate sensory inputs and guide behavior is a fundamental problem in neuroscience. A large body of methods have been developed to study neuronal firing at the single cell and population levels, generally seeking interpretability as well as predictivity. However, these methods are usually confronted with the lack of ground-truth necessary to validate the approach. Here, using neuronal data from the head-direction (HD) system, we present evidence demonstrating how gradient boosted trees, a non-linear and supervised Machine Learning tool, can learn the relationship between behavioral parameters and neuronal responses with high accuracy by optimizing the information rate. Interestingly, and unlike other classes of Machine Learning methods, the intrinsic structure of the trees can be interpreted in relation to behavior (e.g. to recover the tuning curves) or to study how neurons cooperate with their peers in the network. We show how the method, unlike linear analysis, reveals that the coordination in thalamo-cortical circuits is qualitatively the same during wakefulness and sleep, indicating a brain-state independent feed-forward circuit. Machine Learning tools thus open new avenues for benchmarking model-based characterization of spike trains.
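
    As a rough illustration of the approach described above, the sketch below fits a gradient boosted regression tree to synthetic head-direction data and recovers the tuning curve by sweeping the encoded angle. scikit-learn's GradientBoostingRegressor stands in for the gradient boosted trees used in the paper, and the cell parameters and bin size are invented.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      # Synthetic head-direction cell: von Mises tuning, Poisson spike counts
      rng = np.random.default_rng(3)
      angle = rng.uniform(0, 2 * np.pi, 5000)                    # head direction per time bin
      rate = 5 * np.exp(2 * np.cos(angle - np.pi / 3))           # Hz, preferred direction 60 deg
      counts = rng.poisson(rate * 0.025)                         # 25 ms bins

      # Encode the circular variable as (cos, sin) so the trees see no wrap-around
      X = np.column_stack([np.cos(angle), np.sin(angle)])
      gbt = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X, counts)

      # Recover the tuning curve by sweeping head direction over a grid
      grid = np.linspace(0, 2 * np.pi, 90)
      tuning = gbt.predict(np.column_stack([np.cos(grid), np.sin(grid)])) / 0.025   # back to Hz
      print("preferred direction ~ %.0f deg" % np.degrees(grid[np.argmax(tuning)]))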

  15. Brain-state invariant thalamo-cortical coordination revealed by non-linear encoders

    PubMed Central

    Cortier, Thomas; Peyrache, Adrien

    2018-01-01

    Understanding how neurons cooperate to integrate sensory inputs and guide behavior is a fundamental problem in neuroscience. A large body of methods have been developed to study neuronal firing at the single cell and population levels, generally seeking interpretability as well as predictivity. However, these methods are usually confronted with the lack of ground-truth necessary to validate the approach. Here, using neuronal data from the head-direction (HD) system, we present evidence demonstrating how gradient boosted trees, a non-linear and supervised Machine Learning tool, can learn the relationship between behavioral parameters and neuronal responses with high accuracy by optimizing the information rate. Interestingly, and unlike other classes of Machine Learning methods, the intrinsic structure of the trees can be interpreted in relation to behavior (e.g. to recover the tuning curves) or to study how neurons cooperate with their peers in the network. We show how the method, unlike linear analysis, reveals that the coordination in thalamo-cortical circuits is qualitatively the same during wakefulness and sleep, indicating a brain-state independent feed-forward circuit. Machine Learning tools thus open new avenues for benchmarking model-based characterization of spike trains. PMID:29565979

  16. Micro-Machined High-Frequency (80 MHz) PZT Thick Film Linear Arrays

    PubMed Central

    Zhou, Qifa; Wu, Dawei; Liu, Changgeng; Zhu, Benpeng; Djuth, Frank; Shung, K. Kirk

    2010-01-01

    This paper presents the development of a micro-machined high-frequency linear array using PZT piezoelectric thick films. The linear array has 32 elements with an element width of 24 μm and an element length of 4 mm. Array elements were fabricated by deep reactive ion etching of PZT thick films, which were prepared from spin-coating of PZT solgel composite. Detailed fabrication processes, especially PZT thick film etching conditions and a novel transferring-and-etching method, are presented and discussed. Array designs were evaluated by simulation. Experimental measurements show that the array had a center frequency of 80 MHz and a fractional bandwidth (−6 dB) of 60%. An insertion loss of −41 dB and adjacent element crosstalk of −21 dB were found at the center frequency. PMID:20889407

  17. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces fuzzy support vector machine which is a learning algorithm based on combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that fuzzy support vector machine applied in combination with filter or wrapper feature selection methods develops a robust model with higher accuracy than the conventional microarray classification models such as support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule-base inferred from fuzzy support vector machine helps extracting biological knowledge from microarray data. Fuzzy support vector machine as a new classification model with high generalization power, robustness, and good interpretability seems to be a promising tool for gene expression microarray classification.

  18. Using operations research to plan the british columbia registered nurses' workforce.

    PubMed

    Lavieri, Mariel S; Regan, Sandra; Puterman, Martin L; Ratner, Pamela A

    2008-11-01

    The authors explore the power and flexibility of using an operations research methodology known as linear programming to support health human resources (HHR) planning. The model takes as input estimates of the future need for healthcare providers and, in contrast to simulation, compares all feasible strategies to identify a long-term plan for achieving a balance between supply and demand at the least cost to the system. The approach is illustrated by using it to plan the British Columbia registered nurse (RN) workforce over a 20-year horizon. The authors show how the model can be used for scenario analysis by investigating the impact of decreasing attrition from educational programs, changing RN-to-manager ratios in direct care and exploring how other changes might alter planning recommendations. In addition to HHR policy recommendations, their analysis also points to new research opportunities. Copyright © 2008 Longwoods Publishing.
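
    A stripped-down sketch of the linear programming formulation: choose how many new RNs to add each year, at minimum total cost, so that the attrition-adjusted supply meets projected demand in every year. The horizon, attrition rate, starting supply, and demand numbers below are hypothetical placeholders, not the British Columbia model.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical 5-year planning horizon (the paper uses a 20-year horizon)
      T, s0, attrition, cost_per_seat = 5, 30000.0, 0.04, 1.0
      demand = np.array([30500, 31000, 31600, 32300, 33000], dtype=float)

      # Decision variables: new RNs added in each year (education seats / hires)
      A_ub = np.zeros((T, T))
      b_ub = np.zeros(T)
      for t in range(T):
          for k in range(t + 1):
              A_ub[t, k] = -(1 - attrition) ** (t - k)       # cohort surviving to year t
          b_ub[t] = s0 * (1 - attrition) ** (t + 1) - demand[t]

      res = linprog(c=np.full(T, cost_per_seat), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
      print("new RNs per year:", np.round(res.x).astype(int))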

  19. SU-F-T-459: ArcCHECK Machine QA : Highly Efficient Quality Assurance Tool for VMAT, SRS & SBRT Linear Accelerator Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhatre, V; Patwe, P; Dandekar, P

    Purpose: Quality assurance (QA) of complex linear accelerators is critical and highly time consuming. The ArcCHECK Machine QA tool is used to test geometric and delivery aspects of a linear accelerator. In this study we evaluated the performance of this tool. Methods: The Machine QA feature allows the user to perform quality assurance tests using the ArcCHECK phantom. The following tests were performed: 1) Gantry Speed, 2) Gantry Rotation, 3) Gantry Angle, 4) MLC/Collimator QA, 5) Beam Profile Flatness & Symmetry. Data were collected on a TrueBeam STx machine for 6 MV over a period of one year. The Gantry QA test allows the user to view errors in gantry angle and rotation & to assess how accurately the gantry moves around the isocentre. The MLC/Collimator QA tool is used to analyze & locate the differences between leaf bank & jaw positions of the linac. The Flatness & Symmetry test quantifies beam flatness & symmetry in the IEC-y & x directions. The Gantry & Flatness/Symmetry tests can be performed for static & dynamic delivery. Results: The gantry speed was 3.9 deg/sec with a maximum speed deviation of around 0.3 deg/sec. The gantry isocentre was 0.9 mm for arc delivery & 0.4 mm for static delivery. For MLC/Collimator QA, the maximum positive & negative percent differences were 1.9% & -0.25%, & the maximum positive & negative distance differences were 0.4 mm & -0.3 mm. The flatness for arc delivery was 1.8%, & symmetry was 0.8% for Y & 1.8% for X. The flatness for gantry 0°, 270°, 90° & 180° was 1.75, 1.9, 1.8 & 1.6% respectively, & symmetry for X & Y was 0.8, 0.6% for 0°; 0.6, 0.7% for 270°; 0.6, 1% for 90°; & 0.6, 0.7% for 180°. Conclusion: ArcCHECK Machine QA is a useful tool for QA of modern linear accelerators as it tests both geometric & delivery aspects. This is very important for VMAT, SRS & SBRT treatments.

  20. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly Singapore tourist arrival time series. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: first, we compare kernel functions; second, we compare the developed model with the single SVM model. The results showed that the linear kernel performed better than the RBF kernel, and that WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
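
    A minimal sketch of the wavelet-plus-SVM combination on a synthetic monthly series: the series is split into approximation and detail sub-series with a discrete wavelet transform (PyWavelets), and a linear-kernel SVR predicts the next month from the current values of the sub-series. The wavelet family, decomposition level, lag structure, and data are assumptions, not the study's configuration.

      import numpy as np
      import pywt
      from sklearn.svm import SVR

      # Hypothetical monthly arrivals series with trend, seasonality, and noise
      rng = np.random.default_rng(4)
      t = np.arange(240)
      series = 100 + 0.2 * t + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size)

      # Wavelet part: split the series into approximation and detail sub-series
      coeffs = pywt.wavedec(series, "db4", level=2)
      subseries = []
      for i in range(len(coeffs)):
          kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
          subseries.append(pywt.waverec(kept, "db4")[: series.size])

      # SVM part: predict next month from the current value of each sub-series
      X = np.column_stack(subseries)[:-1]
      y = series[1:]
      svr = SVR(kernel="linear", C=10).fit(X[:200], y[:200])
      print("test RMSE %.2f" % np.sqrt(np.mean((svr.predict(X[200:]) - y[200:]) ** 2)))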

  1. Methods, systems and apparatus for adjusting modulation index to improve linearity of phase voltage commands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallegos-Lopez, Gabriel; Perisic, Milun; Kinoshita, Michael H.

    2017-03-14

    Embodiments of the present invention relate to methods, systems and apparatus for controlling operation of a multi-phase machine in a motor drive system. The disclosed embodiments provide a mechanism for adjusting modulation index of voltage commands to improve linearity of the voltage commands.

  2. Optical Coherence Tomography Machine Learning Classifiers for Glaucoma Detection: A Preliminary Study

    PubMed Central

    Burgansky-Eliash, Zvia; Wollstein, Gadi; Chu, Tianjiao; Ramsey, Joseph D.; Glymour, Clark; Noecker, Robert J.; Ishikawa, Hiroshi; Schuman, Joel S.

    2007-01-01

    Purpose Machine-learning classifiers are trained computerized systems with the ability to detect the relationship between multiple input parameters and a diagnosis. The present study investigated whether the use of machine-learning classifiers improves optical coherence tomography (OCT) glaucoma detection. Methods Forty-seven patients with glaucoma (47 eyes) and 42 healthy subjects (42 eyes) were included in this cross-sectional study. Of the glaucoma patients, 27 had early disease (visual field mean deviation [MD] ≥ −6 dB) and 20 had advanced glaucoma (MD < −6 dB). Machine-learning classifiers were trained to discriminate between glaucomatous and healthy eyes using parameters derived from OCT output. The classifiers were trained with all 38 parameters as well as with only 8 parameters that correlated best with the visual field MD. Five classifiers were tested: linear discriminant analysis, support vector machine, recursive partitioning and regression tree, generalized linear model, and generalized additive model. For the last two classifiers, a backward feature selection was used to find the minimal number of parameters that resulted in the best and most simple prediction. The cross-validated receiver operating characteristic (ROC) curve and accuracies were calculated. Results The largest area under the ROC curve (AROC) for glaucoma detection was achieved with the support vector machine using eight parameters (0.981). The sensitivity at 80% and 95% specificity was 97.9% and 92.5%, respectively. This classifier also performed best when judged by cross-validated accuracy (0.966). The best classification between early glaucoma and advanced glaucoma was obtained with the generalized additive model using only three parameters (AROC = 0.854). Conclusions Automated machine classifiers of OCT data might be useful for enhancing the utility of this technology for detecting glaucomatous abnormality. PMID:16249492

  3. Self-Centering Reciprocating-Permanent-Magnet Machine

    NASA Technical Reports Server (NTRS)

    Bhate, Suresh; Vitale, Nick

    1988-01-01

    New design for monocoil reciprocating-permanent-magnet electric machine provides self-centering force. Linear permanent-magnet electrical motor includes outer stator, inner stator, and permanent-magnet plunger oscillating axially between extreme left and right positions. Magnets arranged to produce centering force and to allow use of only one coil of arbitrary axial length. Axial length of coil chosen to provide required efficiency and power output.

  4. Exposures and their determinants in radiographic film processing.

    PubMed

    Teschke, Kay; Chow, Yat; Brauer, Michael; Chessor, Ed; Hirtle, Bob; Kennedy, Susan M; Yeung, Moira Chan; Ward, Helen Dimich

    2002-01-01

    Radiographers process X-ray films using developer and fixer solutions that contain chemicals known to cause or exacerbate asthma. In a study in British Columbia, Canada, radiographers' personal exposures to glutaraldehyde (a constituent of the developer chemistry), acetic acid (a constituent of the fixer chemistry), and sulfur dioxide (a byproduct of sulfites, present in both developer and fixer solutions) were measured. Average full-shift exposures to glutaraldehyde, acetic acid, and sulfur dioxide were 0.0009 mg/m3, 0.09 mg/m3, and 0.08 mg/m3, respectively, all more than one order of magnitude lower than current occupational exposure limits. Local exhaust ventilation of the processing machines and use of silver recovery units lowered exposures, whereas the number of films processed per machine and the time spent near the machines increased exposures. Personnel in clinic facilities had higher exposures than those in hospitals. Private clinics were less likely to have local exhaust ventilation and silver recovery units. Their radiographers spent more time in the processor areas and processed more films per machine. Although exposures were low compared with exposure standards, there are good reasons to continue practices to minimize or eliminate exposures: glutaraldehyde and hydroquinone (present in the developer) are sensitizers; the levels at which health effects occur are not yet clearly established, but appear to be lower than current standards; and health effects resulting from the mixture of chemicals are not understood. Developments in digital imaging technology are making available options that do not involve wet-processing of photographic film and therefore could eliminate the use of developer and fixer chemicals altogether.

  5. Pursuing optimal electric machines transient diagnosis: The adaptive slope transform

    NASA Astrophysics Data System (ADS)

    Pons-Llinares, Joan; Riera-Guasp, Martín; Antonino-Daviu, Jose A.; Habetler, Thomas G.

    2016-12-01

    The aim of this paper is to introduce a new linear time-frequency transform to improve the detection of fault components in electric machines transient currents. Linear transforms are analysed from the perspective of the atoms used. A criterion to select the atoms at every point of the time-frequency plane is proposed, taking into account the characteristics of the searched component at each point. This criterion leads to the definition of the Adaptive Slope Transform, which enables a complete and optimal capture of the different components evolutions in a transient current. A comparison with conventional linear transforms (Short-Time Fourier Transform and Wavelet Transform) is carried out, showing their inherent limitations. The approach is tested with laboratory and field motors, and the Lower Sideband Harmonic is captured for the first time during an induction motor startup and subsequent load oscillations, accurately tracking its evolution.
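    The Adaptive Slope Transform itself is not specified in the abstract; the sketch below only reproduces the conventional short-time Fourier transform baseline that the paper compares against, applied to an invented toy stator current (sampling rate and window settings are assumptions):

    ```python
    # Conventional STFT baseline for a transient-current time-frequency map.
    import numpy as np
    from scipy.signal import stft

    fs = 5_000.0                       # sampling rate in Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)
    # Toy stator current: 50 Hz supply plus a weak frequency-sweeping fault component.
    current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * (10 + 20 * t) * t)

    f, tau, Z = stft(current, fs=fs, nperseg=1024, noverlap=768)
    magnitude_db = 20 * np.log10(np.abs(Z) + 1e-12)
    # A fixed window length trades time resolution against frequency resolution,
    # which is the limitation that the adaptive choice of atoms is meant to overcome.
    print(magnitude_db.shape)
    ```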

  6. Toward an Improvement of the Analysis of Neural Coding.

    PubMed

    Alegre-Cortés, Javier; Soto-Sánchez, Cristina; Albarracín, Ana L; Farfán, Fernando D; Val-Calvo, Mikel; Ferrandez, José M; Fernandez, Eduardo

    2017-01-01

    Machine learning and artificial intelligence have strong roots in the principles of neural computation. Some examples are the structure of the first perceptron, inspired by the retina; neuroprosthetics based on ganglion cell recordings; and Hopfield networks. In addition, machine learning provides a powerful set of tools for analyzing neural data, which has already proved its efficacy in fields of research as distant as speech recognition, behavioral state classification, and LFP recordings. However, despite the huge technological advances of recent years in dimensionality reduction, pattern selection, and clustering of neural data, there has not been a proportional development of the analytical tools used for Time-Frequency (T-F) analysis in neuroscience. Bearing this in mind, we argue for the convenience of using non-linear, non-stationary tools, EMD algorithms in particular, for transforming oscillatory neural data (EEG, EMG, spike oscillations…) into the T-F domain prior to its analysis with machine learning tools. We maintain that, to reach meaningful conclusions, the transformed data we analyze have to be as faithful as possible to the original recording, so that distortions forced onto the data by restrictions in the T-F computation do not propagate into the results of the machine learning analysis. Moreover, bioinspired computation such as brain-machine interfaces may be enriched by a more precise definition of neuronal coding in which the non-linearities of the neuronal dynamics are considered.

  7. Performance of Ti-multilayer coated tool during machining of MDN431 alloyed steel

    NASA Astrophysics Data System (ADS)

    Badiger, Pradeep V.; Desai, Vijay; Ramesh, M. R.

    2018-04-01

    Turbine forgings and other components are required to be highly resistant to corrosion and oxidation, which is why they are highly alloyed with Ni and Cr. Midhani manufactures one such material, MDN431, a hard-to-machine steel with high hardness and strength. PVD-coated inserts offer an answer to this problem through state-of-the-art coating of the WC tool. Machinability studies were carried out on MDN431 steel using uncoated and Ti-multilayer-coated WC tool inserts together with the Taguchi optimisation technique. In the present investigation, speed (398-625 rpm), feed (0.093-0.175 mm/rev), and depth of cut (0.2-0.4 mm) were varied according to a Taguchi L9 orthogonal array, and the cutting forces and surface roughness (Ra) were subsequently measured. The obtained results were optimised using the Taguchi technique for cutting force and surface roughness, and a linear-fit regression model was developed for the combination of the input variables. The experimental results were compared and the developed model was found to be adequate, as supported by confirmation trials. For the uncoated insert, cutting force and surface roughness vary linearly with speed, feed and depth of cut, whereas for the coated insert they vary inversely with speed and depth of cut. The surfaces machined with the coated and uncoated inserts during machining of MDN431 were studied using an optical profilometer.
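    As an illustration of the kind of analysis described (the force values below are invented, not the paper's measurements), a Taguchi L9 design can be fitted with an ordinary least-squares linear model and summarized with signal-to-noise ratios:

    ```python
    # Illustrative linear fit of cutting force against speed, feed and depth of cut
    # for a Taguchi L9 design; the responses are dummy values.
    import numpy as np

    speed = np.array([398, 398, 398, 500, 500, 500, 625, 625, 625], dtype=float)        # rpm
    feed  = np.array([0.093, 0.134, 0.175, 0.093, 0.134, 0.175, 0.093, 0.134, 0.175])   # mm/rev
    doc   = np.array([0.2, 0.3, 0.4, 0.3, 0.4, 0.2, 0.4, 0.2, 0.3])                     # mm
    force = np.array([210, 260, 320, 230, 300, 250, 290, 220, 270], dtype=float)        # N (dummy)

    # Design matrix with an intercept column; ordinary least-squares fit.
    X = np.column_stack([np.ones_like(speed), speed, feed, doc])
    coef, *_ = np.linalg.lstsq(X, force, rcond=None)
    print("intercept, speed, feed, depth-of-cut coefficients:", coef)

    # Smaller-the-better signal-to-noise ratio per speed level, as used in Taguchi optimisation.
    sn = -10 * np.log10(np.mean(force.reshape(3, 3) ** 2, axis=1))
    print("S/N per speed level:", sn)
    ```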

  8. Energy landscapes for machine learning

    NASA Astrophysics Data System (ADS)

    Ballard, Andrew J.; Das, Ritankar; Martiniani, Stefano; Mehta, Dhagash; Sagun, Levent; Stevenson, Jacob D.; Wales, David J.

    Machine learning techniques are being increasingly used as flexible non-linear fitting and prediction tools in the physical sciences. Fitting functions that exhibit multiple solutions as local minima can be analysed in terms of the corresponding machine learning landscape. Methods to explore and visualise molecular potential energy landscapes can be applied to these machine learning landscapes to gain new insight into the solution space involved in training and the nature of the corresponding predictions. In particular, we can define quantities analogous to molecular structure, thermodynamics, and kinetics, and relate these emergent properties to the structure of the underlying landscape. This Perspective aims to describe these analogies with examples from recent applications, and suggest avenues for new interdisciplinary research.

  9. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. As the number of processors is increased, nearly ideal linear speedups are achieved with nonoptimized routines, while slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. The slower-than-linear speedup occurs because the Fast Fourier Transform (FFT) routine dominates the computational cost and itself exhibits less-than-ideal speedup. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 compared with the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.
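    For reference, the speedup and parallel-efficiency bookkeeping used in such studies reduces to simple ratios; the timings in this sketch are invented, not the report's measurements:

    ```python
    # Speedup and parallel-efficiency bookkeeping (wall-clock times are assumed values).
    serial_time = 100.0                                      # single-processor time (s), assumed
    times = {1: 100.0, 4: 27.0, 8: 14.5, 16: 8.1, 32: 4.6}   # time vs. processor count (s), assumed

    for p, t in times.items():
        speedup = serial_time / t            # S(p) = T(1) / T(p)
        efficiency = speedup / p             # E(p) = S(p) / p; 1.0 means ideal linear speedup
        print(f"{p:3d} processors: speedup {speedup:6.1f}, efficiency {efficiency:4.2f}")
    ```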

  10. Identification of Synchronous Machine Stability - Parameters: AN On-Line Time-Domain Approach.

    NASA Astrophysics Data System (ADS)

    Le, Loc Xuan

    1987-09-01

    A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
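    A minimal numerical sketch of the SVD-based least-squares step with a condition-number check, in the spirit of the approach described above (the regressor matrix and measurements are placeholder data, not machine test records):

    ```python
    # SVD-based least-squares identification with a condition-number check.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(200, 6))                  # regressor matrix built from input-output data
    x_true = np.array([1.8, -0.6, 0.3, 0.05, -0.02, 0.9])
    b = A @ x_true + 0.01 * rng.normal(size=200)   # noisy measurements

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    print("condition number:", s[0] / s[-1])       # large values flag an over-parameterised model

    # Truncate negligible singular values for numerical stability, then solve.
    tol = 1e-8 * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    x_hat = Vt.T @ (s_inv * (U.T @ b))
    print("identified parameters:", x_hat)
    ```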

  11. Physics with e{sup +}e{sup -} Linear Colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Timothy L

    2003-05-05

    We describe the physics potential of e{sup +}e{sup -} linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e{sup +}e{sup -} linear colliders and the high-precision with which the properties of particles and their interactions can be analyzed, define an exciting physics programme complementary to hadron machines.

  12. Fundamental aspects of steady-state conversion of heat to work at the nanoscale

    NASA Astrophysics Data System (ADS)

    Benenti, Giuliano; Casati, Giulio; Saito, Keiji; Whitney, Robert S.

    2017-06-01

    In recent years, the study of heat to work conversion has been re-invigorated by nanotechnology. Steady-state devices do this conversion without any macroscopic moving parts, through steady-state flows of microscopic particles such as electrons, photons, phonons, etc. This review aims to introduce some of the theories used to describe these steady-state flows in a variety of mesoscopic or nanoscale systems. These theories are introduced in the context of idealized machines which convert heat into electrical power (heat-engines) or convert electrical power into a heat flow (refrigerators). In this sense, the machines could be categorized as thermoelectrics, although this should be understood to include photovoltaics when the heat source is the sun. As quantum mechanics is important for most such machines, they fall into the field of quantum thermodynamics. In many cases, the machines we consider have few degrees of freedom, however the reservoirs of heat and work that they interact with are assumed to be macroscopic. This review discusses different theories which can take into account different aspects of mesoscopic and nanoscale physics, such as coherent quantum transport, magnetic-field induced effects (including topological ones such as the quantum Hall effect), and single electron charging effects. It discusses the efficiency of thermoelectric conversion, and the thermoelectric figure of merit. More specifically, the theories presented are (i) linear response theory with or without magnetic fields, (ii) Landauer scattering theory in the linear response regime and far from equilibrium, (iii) Green-Kubo formula for strongly interacting systems within the linear response regime, (iv) rate equation analysis for small quantum machines with or without interaction effects, (v) stochastic thermodynamic for fluctuating small systems. In all cases, we place particular emphasis on the fundamental questions about the bounds on ideal machines. Can magnetic-fields change the bounds on power or efficiency? What is the relationship between quantum theories of transport and the laws of thermodynamics? Does quantum mechanics place fundamental bounds on heat to work conversion which are absent in the thermodynamics of classical systems?
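    For reference, the standard linear-response definitions behind the thermoelectric figure of merit discussed in the review are the textbook forms below (not equations reproduced from this article):

    ```latex
    % Thermoelectric figure of merit and the corresponding maximum heat-engine efficiency
    % (standard linear-response results; S: Seebeck coefficient, sigma: electrical conductivity,
    % kappa: thermal conductivity, eta_C = 1 - T_c/T_h: Carnot efficiency).
    \[
      ZT = \frac{S^{2}\sigma T}{\kappa},
      \qquad
      \eta_{\max} = \eta_{C}\,\frac{\sqrt{1+ZT}-1}{\sqrt{1+ZT}+T_{c}/T_{h}}.
    \]
    ```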

  13. SU-F-T-284: The Effect of Linear Accelerator Output Variation On the Quality of Patient Specific Rapid Arc Verification Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, G; Cao, F; Szpala, S

    2016-06-15

    Purpose: The aim of the current study is to investigate the effect of machine output variation on the delivery of RapidArc verification plans. Methods: Three verification plans were generated using the Eclipse™ treatment planning system (V11.031) with a plan normalization value of 100.0%. These plans were delivered on the linear accelerators using the ArcCHECK device, with a machine output of 1.000 cGy/MU at the calibration point. These planned and delivered dose distributions were used as reference plans. Additional plans were created in Eclipse with normalization values ranging from 92.80% to 102% to mimic machine outputs ranging from 1.072 cGy/MU to 0.980 cGy/MU at the calibration point. These plans were compared against the reference plans using gamma indices (3%, 3mm) and (2%, 2mm). The calculated gammas were studied for their dependence on machine output. Plans were considered passed if 90% of the points satisfied the defined gamma criterion. Results: The gamma index (3%, 3mm) was insensitive to output fluctuation within the output tolerance level (2% of calibration) and showed failures when the machine output deviation was ≥3%. Gamma (2%, 2mm) was found to be more sensitive to the output variation than gamma (3%, 3mm) and showed failures when the output deviation was ≥1.7%. The variation of the gamma indices with output variability also showed dependence on the plan parameters (e.g. MLC movement and gantry rotation). The percentage of points passing the gamma criteria decreased non-linearly with output variation beyond the output tolerance level. Conclusion: Data from the limited plans and output conditions showed that gamma (2%, 2mm) is more sensitive to output fluctuations than gamma (3%, 3mm). Work in progress, including detailed data from a large number of plans and a wide range of output conditions, may be able to establish the quantitative dependence of the gammas on machine output, and hence the effect on the quality of delivered RapidArc plans.

  14. Possible limits of plasma linear colliders

    NASA Astrophysics Data System (ADS)

    Zimmermann, F.

    2017-07-01

    Plasma linear colliders have been proposed as next or next-next generation energy-frontier machines for high-energy physics. I investigate possible fundamental limits on energy and luminosity of such type of colliders, considering acceleration, multiple scattering off plasma ions, intrabeam scattering, bremsstrahlung, and betatron radiation. The question of energy efficiency is also addressed.

  15. Reactive power generation in high speed induction machines by continuously occurring space-transients

    NASA Astrophysics Data System (ADS)

    Laithwaite, E. R.; Kuznetsov, S. B.

    1980-09-01

    A new technique of continuously generating reactive power from the stator of a brushless induction machine is conceived and tested on a 10-kW linear machine and on 35 and 150 rotary cage motors. An auxiliary magnetic wave traveling at rotor speed is artificially created by the space-transient attributable to the asymmetrical stator winding. At least two distinct windings of different pole-pitch must be incorporated. This rotor wave drifts in and out of phase repeatedly with the stator MMF wave proper, and the resulting modulation of the airgap flux is used to generate reactive VA apart from that required for magnetization or leakage flux. The VAR generation effect increases with machine size, and leading power factor operation of the entire machine is viable for large industrial motors and power system induction generators.

  16. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

    This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to discriminate between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm makes it possible to extract knowledge about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.

  17. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    Financial distress is the early stage before bankruptcy. Bankruptcies caused by financial distress can be seen in the financial statements of the company. The ability to predict financial distress has become an important research topic because it can provide an early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. In this research, a prediction model of financial distress for industrial companies in Indonesia is built by comparing the performance of Linear Discriminant Analysis (LDA) and the Support Vector Machine (SVM), combined with a variable selection technique. The result of this research is that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.
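    The paper's hybrid Stepwise-SVM is not reproduced here; the sketch below is a scikit-learn analogue in which forward sequential feature selection stands in for the stepwise procedure, and the financial-ratio data are simulated:

    ```python
    # Hedged analogue of the comparison: LDA versus SVM, each preceded by forward
    # feature selection standing in for the stepwise variable-selection step.
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Simulated stand-in for the financial-ratio dataset (not the paper's data).
    X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

    for name, base in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
        model = make_pipeline(
            StandardScaler(),
            SequentialFeatureSelector(base, n_features_to_select=6, direction="forward"),
            base,
        )
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: 5-fold accuracy {acc:.3f}")
    ```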

  18. Prediction of Human Intestinal Absorption of Compounds Using Artificial Intelligence Techniques.

    PubMed

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2017-01-01

    Information about the pharmacokinetics of compounds is an essential component of drug design and development. Modeling pharmacokinetic properties requires identification of the factors affecting the absorption, distribution, metabolism and excretion of compounds. There have been continuous attempts to predict the intestinal absorption of compounds using various artificial intelligence methods, in an effort to reduce the attrition rate of drug candidates entering preclinical and clinical trials. Currently, a large number of individual predictive models for absorption based on machine learning approaches are available. Six artificial intelligence methods, namely Support vector machine, k-nearest neighbor, Probabilistic neural network, Artificial neural network, Partial least squares and Linear discriminant analysis, were used for prediction of the absorption of compounds. The prediction accuracies of Support vector machine, k-nearest neighbor, Probabilistic neural network, Artificial neural network, Partial least squares and Linear discriminant analysis for prediction of intestinal absorption of compounds were found to be 91.54%, 88.33%, 84.30%, 86.51%, 79.07% and 80.08%, respectively. Comparative analysis of all six prediction models suggested that the Support vector machine with a Radial basis function based kernel is comparatively better for binary classification of compounds using human intestinal absorption and may be useful at preliminary stages of drug design and development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  19. Spin dynamics in storage rings and linear accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irwin, J.

    1994-12-01

    The purpose of these lectures is to survey the subject of spin dynamics in accelerators: to give a sense of the underlying physics, the typical analytic and numeric methods used, and an overview of results achieved. Consideration will be limited to electrons and protons. Examples of experimental and theoretical results in both linear and circular machines are included.

  20. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but at present has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most the sensitivity was around or even lower than a median value of 0.5. Conclusions When sensitivity, specificity and overall classification accuracy are taken into account, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
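    A compact scikit-learn/SciPy sketch of the comparison strategy described above (synthetic stand-ins replace the neuropsychological test scores, and only a subset of the ten classifiers is shown):

    ```python
    # Collect per-fold accuracies for several classifiers and compare the
    # distributions with Friedman's nonparametric test.
    import numpy as np
    from scipy.stats import friedmanchisquare
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score, StratifiedKFold
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=400, n_features=10, n_informative=5, random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "Logistic": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    fold_scores = {name: cross_val_score(clf, X, y, cv=cv) for name, clf in classifiers.items()}

    stat, p = friedmanchisquare(*fold_scores.values())
    for name, s in fold_scores.items():
        print(f"{name}: median accuracy {np.median(s):.2f}")
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
    ```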

  1. Powering the programmed nanostructure and function of gold nanoparticles with catenated DNA machines

    NASA Astrophysics Data System (ADS)

    Elbaz, Johann; Cecconello, Alessandro; Fan, Zhiyuan; Govorov, Alexander O.; Willner, Itamar

    2013-06-01

    DNA nanotechnology is a rapidly developing research area in nanoscience. It includes the development of DNA machines, tailoring of DNA nanostructures, application of DNA nanostructures for computing, and more. Different DNA machines were reported in the past and DNA-guided assembly of nanoparticles represents an active research effort in DNA nanotechnology. Several DNA-dictated nanoparticle structures were reported, including a tetrahedron, a triangle or linear nanoengineered nanoparticle structures; however, the programmed, dynamic reversible switching of nanoparticle structures and, particularly, the dictated switchable functions emerging from the nanostructures, are missing elements in DNA nanotechnology. Here we introduce DNA catenane systems (interlocked DNA rings) as molecular DNA machines for the programmed, reversible and switchable arrangement of different-sized gold nanoparticles. We further demonstrate that the machine-powered gold nanoparticle structures reveal unique emerging switchable spectroscopic features, such as plasmonic coupling or surface-enhanced fluorescence.

  2. A Real-Time Tool Positioning Sensor for Machine-Tools

    PubMed Central

    Ruiz, Antonio Ramon Jimenez; Rosas, Jorge Guevara; Granja, Fernando Seco; Honorato, Jose Carlos Prieto; Taboada, Jose Juan Esteve; Serrano, Vicente Mico; Jimenez, Teresa Molina

    2009-01-01

    In machining, natural oscillations and elastic, gravitational or temperature deformations still make it difficult to guarantee the quality of fabricated parts. In this paper we present an optical measurement system designed to track and localize in 3D a reference retro-reflector close to the machine-tool's drill. The complete system and its components are described in detail. Several tests, some static (including impacts and rotations) and others dynamic (executing linear and circular trajectories), were performed on two different machine tools. For the first time, a laser tracking system has been integrated into the position control loop of a machine tool. Results indicate that oscillations and deformations close to the tool can be estimated with micrometric resolution and a bandwidth from 0 to more than 100 Hz. This sensor therefore opens the possibility of on-line compensation of oscillations and deformations. PMID:22408472

  3. Classification of suicide attempters in schizophrenia using sociocultural and clinical features: A machine learning approach.

    PubMed

    Hettige, Nuwan C; Nguyen, Thai Binh; Yuan, Chen; Rajakulendran, Thanara; Baddour, Jermeen; Bhagwat, Nikhil; Bani-Fatemi, Ali; Voineskos, Aristotle N; Mallar Chakravarty, M; De Luca, Vincenzo

    2017-07-01

    Suicide is a major concern for those afflicted by schizophrenia. Identifying patients at the highest risk for future suicide attempts remains a complex problem for psychiatric interventions. Machine learning models allow for the integration of many risk factors in order to build an algorithm that predicts which patients are likely to attempt suicide. Currently it is unclear how to integrate previously identified risk factors into a clinically relevant predictive tool to estimate the probability that a patient with schizophrenia will attempt suicide. We conducted a cross-sectional assessment on a sample of 345 participants diagnosed with schizophrenia spectrum disorders. Suicide attempters and non-attempters were clearly identified using the Columbia Suicide Severity Rating Scale (C-SSRS) and the Beck Suicide Ideation Scale (BSS). We developed four classification algorithms using regularized logistic regression, random forest, elastic net and support vector machine models, with sociocultural and clinical variables as features to train the models. All classification models performed similarly in identifying suicide attempters and non-attempters. Our regularized logistic regression model demonstrated an accuracy of 67% and an area under the curve (AUC) of 0.71, while the random forest model demonstrated 66% accuracy and an AUC of 0.67. The support vector classifier (SVC) model demonstrated an accuracy of 67% and an AUC of 0.70, and the elastic net model demonstrated an accuracy of 65% and an AUC of 0.71. Machine learning algorithms offer a relatively successful method for incorporating many clinical features to predict individuals at risk for future suicide attempts. Increased performance of these models using clinically relevant variables offers the potential to facilitate early treatment and intervention to prevent future suicide attempts. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.

  5. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Astronomy Data Centre, Canadian

    2014-01-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors, and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.

  6. Double Arm Linkage precision Linear motion (DALL) Carriage, a simplified, rugged, high performance linear motion stage for the moving mirror of a Fourier Transform Spectrometer or other system requiring precision linear motion

    NASA Astrophysics Data System (ADS)

    Johnson, Kendall B.; Hopkins, Greg

    2017-08-01

    The Double Arm Linkage precision Linear motion (DALL) carriage has been developed as a simplified, rugged, high performance linear motion stage. Initially conceived as a moving mirror stage for the moving mirror of a Fourier Transform Spectrometer (FTS), it is applicable to any system requiring high performance linear motion. It is based on rigid double arm linkages connecting a base to a moving carriage through flexures. It is a monolithic design. The system is fabricated from one piece of material including the flexural elements, using high precision machining. The monolithic design has many advantages. There are no joints to slip or creep and there are no CTE (coefficient of thermal expansion) issues. This provides a stable, robust design, both mechanically and thermally and is expected to provide a wide operating temperature range, including cryogenic temperatures, and high tolerance to vibration and shock. Furthermore, it provides simplicity and ease of implementation, as there is no assembly or alignment of the mechanism. It comes out of the machining operation aligned and there are no adjustments. A prototype has been fabricated and tested, showing superb shear performance and very promising tilt performance. This makes it applicable to both corner cube and flat mirror FTS systems respectively.

  7. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    NASA Astrophysics Data System (ADS)

    Khawaja, Taimoor Saleem

    A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected. 
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
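    The Bayesian LS-SVM machinery of the thesis is not available as an off-the-shelf library; as a rough stand-in for the baseline-only anomaly-detection idea, a one-class SVM trained on nominal data can be used, with the decision value squashed into a heuristic "probability of health" (this is an illustrative substitute, not the author's formulation):

    ```python
    # One-class SVM anomaly detector trained on baseline (nominal) data only.
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, size=(500, 4))            # nominal-condition fault indicators
    online = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),    # healthy online samples
                        rng.normal(3.0, 1.0, size=(5, 4))])   # faulted online samples

    detector = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"))
    detector.fit(baseline)                                     # trained on baseline data only

    score = detector.decision_function(online)                 # >0 ~ nominal, <0 ~ anomalous
    prob_health = 1.0 / (1.0 + np.exp(-5.0 * score))           # heuristic logistic squashing
    print(np.round(prob_health, 3))
    ```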

  8. Hospital support services and the impacts of outsourcing on occupational health and safety.

    PubMed

    Siganporia, Pearl; Astrakianakis, George; Alamgir, Hasanat; Ostry, Aleck; Nicol, Anne-Marie; Koehoorn, Mieke

    2016-10-01

    Outsourcing labor is linked to negative impacts on occupational health and safety (OHS). In British Columbia, Canada, provincial health care service providers outsource support services such as cleaners and food service workers (CFSWs) to external contractors. This study investigates the impact of outsourcing on the occupational health safety of hospital CFSWs through a mixed methods approach. Worker's compensation data for hospital CFSWs were analyzed by negative binomial and multiple linear regressions supplemented by iterative thematic analysis of telephone interviews of the same job groups. Non-significant decreases in injury rates and days lost per injury were observed in outsourced CFSWs post outsourcing. Significant decreases (P < 0.05) were observed in average costs per injury for cleaners post outsourcing. Outsourced workers interviewed implied instances of underreporting workplace injuries. This mixed methods study describes the impact of outsourcing on OHS of healthcare workers in British Columbia. Results will be helpful for policy-makers and workplace regulators to assess program effectiveness for outsourced workers.

  9. Hospital support services and the impacts of outsourcing on occupational health and safety

    PubMed Central

    Alamgir, Hasanat; Ostry, Aleck; Nicol, Anne-Marie; Koehoorn, Mieke

    2016-01-01

    Background Outsourcing labor is linked to negative impacts on occupational health and safety (OHS). In British Columbia, Canada, provincial health care service providers outsource support services such as cleaners and food service workers (CFSWs) to external contractors. Objectives This study investigates the impact of outsourcing on the occupational health safety of hospital CFSWs through a mixed methods approach. Methods Worker’s compensation data for hospital CFSWs were analyzed by negative binomial and multiple linear regressions supplemented by iterative thematic analysis of telephone interviews of the same job groups. Results Non-significant decreases in injury rates and days lost per injury were observed in outsourced CFSWs post outsourcing. Significant decreases (P < 0.05) were observed in average costs per injury for cleaners post outsourcing. Outsourced workers interviewed implied instances of underreporting workplace injuries. Conclusions This mixed methods study describes the impact of outsourcing on OHS of healthcare workers in British Columbia. Results will be helpful for policy-makers and workplace regulators to assess program effectiveness for outsourced workers. PMID:27696988

  10. Induction linear accelerators

    NASA Astrophysics Data System (ADS)

    Birx, Daniel

    1992-03-01

    Among the family of particle accelerators, the Induction Linear Accelerator is the best suited for the acceleration of high current electron beams. Because the electromagnetic radiation used to accelerate the electron beam is not stored in the cavities but is supplied by transmission lines during the beam pulse, it is possible to utilize very low Q (typically < 10) structures and very large beam pipes. This combination increases the beam-breakup-limited maximum currents to of order kiloamperes. The micropulse lengths of these machines are measured in tens of nanoseconds and duty factors as high as 10^-4 have been achieved. Until recently the major problem with these machines has been associated with the pulse power drive. Beam currents of kiloamperes and accelerating potentials of megavolts require peak power drives of gigawatts, since no energy is stored in the structure. The marriage of linear accelerator technology and nonlinear magnetic compressors has produced some unique capabilities. It now appears possible to produce electron beams with average currents measured in amperes, peak currents in kiloamperes and gradients exceeding 1 MeV/meter, with power efficiencies approaching 50%. The nonlinear magnetic compression technology has replaced the spark gap drivers used on earlier accelerators with state-of-the-art all-solid-state SCR-commutated compression chains. The reliability of these machines is now approaching a 10^10-shot MTBF. In the following paper we will briefly review the historical development of induction linear accelerators and then discuss the design considerations.

  11. Review of the energy check of an electron-only linear accelerator over a 6 year period: sensitivity of the technique to energy shift.

    PubMed

    Biggs, Peter J

    2003-04-01

    The calibration and monthly QA of an electron-only linear accelerator dedicated to intra-operative radiation therapy has been reviewed. Since this machine is calibrated prior to every procedure, there was no necessity to adjust the output calibration at any time except, perhaps, when the magnetron is changed, provided the machine output is reasonably stable. This gives a unique opportunity to study the dose output of the machine per monitor unit, variation in the timer error, flatness and symmetry of the beam and the energy check as a function of time. The results show that, although the dose per monitor unit varied within +/- 2%, the timer error within +/- 0.005 MU and the asymmetry within 1-2%, none of these parameters showed any systematic change with time. On the other hand, the energy check showed a linear drift with time for 6, 9, and 12 MeV (2.1, 3.5, and 2.5%, respectively, over 5 years), while at 15 and 18 MeV, the energy check was relatively constant. It is further shown that based on annual calibrations and RPC TLD checks, the energy of each beam is constant and that therefore the energy check is an exquisitely sensitive one. The consistency of the independent checks is demonstrated.

  12. Wear resistance of machine tools' bionic linear rolling guides by laser cladding

    NASA Astrophysics Data System (ADS)

    Wang, Yiqiang; Liu, Botao; Guo, Zhengcai

    2017-06-01

    In order to improve the rolling wear resistance (RWR) of linear rolling guides (LRG) and prolong the life of machine tools, samples with various bionic shapes and unit spacings ranging from 1 to 5 mm were designed, inspired by observation of desert animals, and manufactured by laser cladding. Wear resistance tests closely reproducing real operating conditions were conducted using a homemade linear reciprocating wear test machine, and wear resistance was evaluated by weight-loss measurement. The results indicate that the samples with bionic units have better RWR than the untreated one, with the reticulate-treated sample with a unit spacing of 3 mm presenting the best RWR. More specifically, among the punctate-treated samples, the mass loss increases with increasing unit spacing; among the striate-treated samples, the mass loss changes only slightly with increasing unit spacing, reaching a minimum at a unit spacing of 4 mm; and among the reticulate-treated samples, the mass loss initially decreases with increasing unit spacing but turns to increase after reaching a minimum at a unit spacing of 3 mm. Additionally, the striate-shaped samples show better wear resistance than the other shape groups on the whole. In terms of the ratio of laser-treated area to contact area, it is concluded that samples with a ratio between 0.15 and 0.3 possess better wear resistance.

  13. Comparison of Classifiers for Decoding Sensory and Cognitive Information from Prefrontal Neuronal Populations

    PubMed Central

    Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann

    2014-01-01

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. All classifiers did not behave equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders. PMID:24466019

  14. An Analysis of the Multiple Objective Capital Budgeting Problem via Fuzzy Linear Integer (0-1) Programming.

    DTIC Science & Technology

    1980-05-31

    [The record's abstract field contains only OCR fragments of the report's reference list and front matter, e.g.: International Journal of Man-Machine Studies, Vol. 9, No. 1, 1977, pp. 1-68; [16] Zimmermann, H. J., Theory and Applications of Fuzzy Sets; [18] Yager, R. R., "Multiple Objective Decision-Making Using Fuzzy Sets," International Journal of Man-Machine Studies; followed by table-of-contents leaders.]

  15. A new measuring machine in Paris

    NASA Technical Reports Server (NTRS)

    Guibert, J.; Charvin, P.

    1984-01-01

    A new photographic measuring machine is under construction at the Paris Observatory. The amount of transmitted light is measured by a linear array of 1024 photodiodes. Carriage control, data acquisition and on line processing are performed by microprocessors, a S.E.L. 32/27 computer, and an AP 120-B Array Processor. It is expected that a Schmidt telescope plate of size 360 mm square will be scanned in one hour with pixel size of ten microns.

  16. Monocoil reciprocating permanent magnet electric machine with self-centering force

    NASA Technical Reports Server (NTRS)

    Bhate, Suresh K. (Inventor); Vitale, Nicholas G. (Inventor)

    1989-01-01

    A linear reciprocating machine has a tubular outer stator housing a coil, a plunger and an inner stator. The plunger has four axially spaced rings of radially magnetized permanent magnets which cooperate two at a time with the stator to complete first or second opposite magnetic paths. The four rings of magnets and the stators are arranged so that the stroke of the plunger is independent of the axial length of the coil.

  17. Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages

    DTIC Science & Technology

    2013-01-02

    [The record's abstract field contains only OCR fragments of the thesis front matter: an abbreviation glossary (JVM Java Virtual Machine; KB Kilobyte; KDT Knowledge Discovery Toolbox; LAPACK Linear Algebra Package; LLVM Low-Level Virtual Machine; LOC Lines of Code), parts of the acknowledgments mentioning Leo Meyerovich and the Par Lab retreats, and a fragment of an implementation-comparison table.]

  18. Influence of magnet eddy current on magnetization characteristics of variable flux memory machine

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Lin, Heyun; Zhu, Z. Q.; Lyu, Shukang

    2018-05-01

    In this paper, the magnet eddy current characteristics of a newly developed variable flux memory machine (VFMM) are investigated. First, the machine structure, the non-linear hysteresis characteristics, and the eddy current modeling of the low-coercive-force magnet are described. The PM eddy current behavior when demagnetizing current pulses are applied is then revealed and investigated. A mismatch in the required demagnetization currents between the cases with and without considering the magnet eddy current is identified. In addition, the influence of the magnet eddy current on the demagnetization effect of the VFMM is analyzed. Finally, a prototype is manufactured and tested to verify the theoretical analyses.

  19. Energy-free machine learning force field for aluminum.

    PubMed

    Kruglov, Ivan; Sergeev, Oleg; Yanilkin, Alexey; Oganov, Artem R

    2017-08-17

    We used the machine learning technique of Li et al. (PRL 114, 2015) for molecular dynamics simulations. Atomic configurations were described by a feature matrix based on internal vectors, and linear regression was used as the learning technique. We implemented this approach in the LAMMPS code. The method was applied to crystalline and liquid aluminum and uranium at different temperatures and densities, and showed the highest accuracy among the published potentials compared. The phonon density of states, entropy and melting temperature of aluminum were calculated using this machine learning potential. The results are in excellent agreement with experimental data and with the results of full ab initio calculations.
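    The feature construction from internal vectors is not reproduced here; the sketch below only illustrates the linear-regression fitting step of such a force field, with random features standing in for the descriptors:

    ```python
    # Linear-regression force fitting: forces on each atom are modeled as a linear
    # function of a per-atom feature matrix (random placeholder features).
    import numpy as np

    rng = np.random.default_rng(0)
    n_atoms, n_frames, n_features = 32, 200, 60

    # One feature vector per atom and frame, plus reference "ab initio" forces (synthetic).
    features = rng.normal(size=(n_frames * n_atoms, n_features))
    true_weights = rng.normal(size=(n_features, 3))
    forces = features @ true_weights + 0.01 * rng.normal(size=(n_frames * n_atoms, 3))

    # Ridge-regularised linear least squares, solved in closed form.
    lam = 1e-6
    A = features.T @ features + lam * np.eye(n_features)
    weights = np.linalg.solve(A, features.T @ forces)

    rmse = np.sqrt(np.mean((features @ weights - forces) ** 2))
    print("force RMSE:", rmse)
    ```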

  20. Center for Parallel Optimization.

    DTIC Science & Technology

    1996-03-19

    A new optimization-based approach to improving generalization in machine learning has been proposed and computationally validated on simple linear models as well as on highly nonlinear systems such as neural networks.

  1. Building "e-rater"® Scoring Models Using Machine Learning Methods. Research Report. ETS RR-16-04

    ERIC Educational Resources Information Center

    Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A.

    2016-01-01

    The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, C. -Y.; Douglas, D.; Li, R.

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  3. Koopman Operator Framework for Time Series Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
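
    The linear Koopman surrogate described here can be approximated from data in the spirit of dynamic mode decomposition; the sketch below, with an arbitrary polynomial lifting, is only a schematic of that identification step under assumed data, not the authors' full framework.

```python
import numpy as np

def koopman_dmd(snapshots, lift=lambda x: x):
    """Estimate a finite-dimensional Koopman approximation (DMD-style):
    stack lifted observables, then solve min ||A X - Y||_F for A."""
    Z = np.array([lift(x) for x in snapshots]).T   # observables x time
    X, Y = Z[:, :-1], Z[:, 1:]
    A = Y @ np.linalg.pinv(X)                      # best linear advance map
    eigvals, eigvecs = np.linalg.eig(A)            # spectral surrogate of Koopman operator
    return A, eigvals, eigvecs

# Example: noisy oscillator time series with a simple polynomial lifting
t = np.linspace(0, 20, 400)
series = np.c_[np.sin(t), np.cos(t)] + 0.01 * np.random.randn(400, 2)
A, lam, _ = koopman_dmd(series, lift=lambda x: np.r_[x, x**2])
print(np.abs(lam))   # eigenvalue magnitudes near 1 indicate persistent modes
```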

  4. BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1994-01-01

    The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
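
    For readers who want the same primitives today, optimized BLAS routines are exposed in many environments; the snippet below calls a few of them through SciPy's wrappers, which is a modern convenience layer rather than the original FORTRAN/PC distribution described above.

```python
import numpy as np
from scipy.linalg import blas

x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 2.0, 2.0])

print(blas.ddot(x, y))        # dot product: 3*1 + (-4)*2 + 1*2 = -3
print(blas.daxpy(x, y, a=2))  # y := a*x + y (note: may update y in place)
print(blas.dnrm2(x))          # Euclidean norm: sqrt(26)
print(blas.dasum(x))          # sum of magnitudes: 8
print(blas.idamax(x))         # index of the largest-magnitude element (BLAS IDAMAX)
```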

  5. Integrating image quality in 2nu-SVM biometric match score fusion.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2007-10-01

    This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
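
    A stripped-down sketch of score-plus-quality fusion with a nu-SVM is given below; the feature values and the RBF kernel choice are illustrative assumptions, not the paper's redundant-wavelet quality measure or its exact 2nu-SVM formulation.

```python
import numpy as np
from sklearn.svm import NuSVC

# Hypothetical fused features: [match score, composite quality score]
X = np.array([[0.92, 0.8], [0.88, 0.6], [0.35, 0.7],
              [0.40, 0.9], [0.81, 0.5], [0.30, 0.4]])
y = np.array([1, 1, 0, 0, 1, 0])        # 1 = genuine, 0 = impostor

# nu-SVM decides genuine/impostor from the fused (score, quality) pair
fusion = NuSVC(nu=0.3, kernel='rbf', gamma='scale').fit(X, y)
print(fusion.predict([[0.85, 0.75], [0.42, 0.55]]))
```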

  6. A Novel Local Learning based Approach With Application to Breast Cancer Diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Tourassi, Georgia

    2012-01-01

    The purpose of this study is to develop and evaluate a novel local learning-based approach for computer-assisted diagnosis of breast cancer. Our new local learning-based algorithm, which uses the linear logistic regression method as its base learner, is described. The algorithm performs a stochastic search, driven by a random walk process, until the total allowed computing time is used up, in order to identify the most suitable population subdivision scheme and the corresponding individual base learners. The proposed local learning-based approach was applied to the prediction of breast cancer given 11 mammographic and clinical findings reported by physicians using the BI-RADS lexicon. Our database consisted of 850 patients with biopsy-confirmed diagnosis (290 malignant and 560 benign). We also compared the performance of our method with a collection of publicly available state-of-the-art machine learning methods. Predictive performance for all classifiers was evaluated using 10-fold cross validation and Receiver Operating Characteristic (ROC) analysis. Figure 1 reports the performance of 54 machine learning methods implemented in the machine learning toolkit Weka (version 3.0). We introduced a novel local learning-based classifier and compared it with an extensive list of other classifiers for the problem of breast cancer diagnosis. Our experiments show that the algorithm achieves superior prediction performance, outperforming a wide range of other well-established machine learning techniques. Our conclusion complements the existing understanding in the machine learning field that local learning may capture complicated, non-linear relationships exhibited by real-world datasets.

  7. Modeling Dengue vector population using remotely sensed data and machine learning.

    PubMed

    Scavuzzo, Juan M; Trucco, Francisco; Espinosa, Manuel; Tauro, Carolina B; Abril, Marcelo; Scavuzzo, Carlos M; Frery, Alejandro C

    2018-05-16

    Mosquitoes are vectors of many human diseases. In particular, Aedes ægypti (Linnaeus) is the main vector for Chikungunya, Dengue, and Zika viruses in Latin America and it represents a global threat. Public health policies that aim at combating this vector require dependable and timely information, which is usually expensive to obtain with field campaigns. For this reason, several efforts have been made to use remote sensing due to its reduced cost. The present work includes the temporal modeling of the oviposition activity (measured weekly on 50 ovitraps in a north Argentinean city) of Aedes ægypti (Linnaeus), based on time series of data extracted from operational earth observation satellite images. We use NDVI, NDWI, night LST, day LST and TRMM-GPM rain from 2012 to 2016 as predictive variables. In contrast to previous works which use linear models, we employ Machine Learning techniques using completely accessible open source toolkits. These models have the advantages of being non-parametric and capable of describing nonlinear relationships between variables. Specifically, in addition to two linear approaches, we assess a support vector machine, an artificial neural network, a K-nearest neighbors regressor and a decision tree regressor. Considerations are made on parameter tuning and the validation and training approach. The results are compared to linear models used in previous works with similar data sets for generating temporal predictive models. These new tools perform better than linear approaches; in particular, nearest neighbor regression (KNNR) performs the best. These results provide better alternatives to be implemented operationally in the Argentine geospatial risk system that has been running since 2012. Copyright © 2018 Elsevier B.V. All rights reserved.
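
    The comparison of a nearest-neighbour regressor against a linear baseline can be sketched as follows; the synthetic predictor matrix stands in for the NDVI/NDWI/LST/rain time series and is not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical weekly predictors: [NDVI, NDWI, LST_day, LST_night, rain]
rng = np.random.default_rng(0)
X = rng.random((120, 5))
y = 30 * X[:, 0] + 10 * rng.random(120)      # stand-in for ovitrap counts

knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
lin = LinearRegression()
for name, model in [("KNN regression", knn), ("linear model", lin)]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.2f}")
```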

  8. Improving linear accelerator service response with a real- time electronic event reporting system.

    PubMed

    Hoisak, Jeremy D P; Pawlicki, Todd; Kim, Gwe-Ya; Fletcher, Richard; Moore, Kevin L

    2014-09-08

    To track linear accelerator performance issues, an online event recording system was developed in-house for use by therapists and physicists to log the details of technical problems arising on our institution's four linear accelerators. In use since October 2010, the system was designed so that all clinical physicists would receive email notification when an event was logged. Starting in October 2012, we initiated a pilot project in collaboration with our linear accelerator vendor to explore a new model of service and support, in which event notifications were also sent electronically directly to dedicated engineers at the vendor's technical help desk, who then initiated a response to technical issues. Previously, technical issues were reported by telephone to the vendor's call center, which then disseminated information and coordinated a response with the Technical Support help desk and local service engineers. The purpose of this work was to investigate the improvements to clinical operations resulting from this new service model. The new and old service models were quantitatively compared by reviewing event logs and the oncology information system database in the nine months prior to and after initiation of the project. Here, we focus on events that resulted in an inoperative linear accelerator ("down" machine). Machine downtime, vendor response time, treatment cancellations, and event resolution were evaluated and compared over two equivalent time periods. In 389 clinical days, there were 119 machine-down events: 59 events before and 60 after introduction of the new model. In the new model, median time to service response decreased from 45 to 8 min, service engineer dispatch time decreased 44%, downtime per event decreased from 45 to 20 min, and treatment cancellations decreased 68%. The decreased vendor response time and reduced number of on-site visits by a service engineer resulted in decreased downtime and decreased patient treatment cancellations.

  9. Survey of beam instrumentation used in SLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ecklund, S.D.

    A survey of beam instruments used at SLAC in the SLC machine is presented. The basic utility and operation of each device is briefly described. The various beam instruments used at the Stanford Linear Collider (SLC) can be classified by the function they perform. Beam intensity, position and size are typical of the beam parameters which are measured. Each type of parameter is important for adjusting or tuning the machine in order to achieve optimum performance. 39 refs.

  10. Discovering charge density functionals and structure-property relationships with PROPhet: A general framework for coupling machine learning and first-principles methods

    DOE PAGES

    Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.

    2017-04-26

    Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet's ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.

  11. Figure of merit for macrouniformity based on image quality ruler evaluation and machine learning framework

    NASA Astrophysics Data System (ADS)

    Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.

    2013-01-01

    Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We have implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages to train the two different predictors - one based on linear regression and the other based on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.

  13. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    Synchronous machines have traditionally acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, with the increased integration of distributed renewable resources and energy-storage technologies, there is a need to systematically acknowledge the dynamics of power-electronics inverters - the primary energy-conversion interface in such systems - in all aspects of modeling, analysis, and control of the bulk power network. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. The inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.

  14. Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics

    PubMed Central

    Belo, David; Gamboa, Hugo

    2017-01-01

    The paper presents an accuracy analysis of machine learning approaches applied to cardiac activity data. The study evaluates the possibility of diagnosing arterial hypertension by means of short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from arterial hypertension of II-III degree. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with a radial basis function kernel, decision trees, and the naive Bayes classifier. Moreover, different methods of feature extraction are analyzed: statistical, spectral, wavelet, and multifractal. All in all, 53 features were investigated. The results show that discriminant analysis achieves the highest classification accuracy. The suggested approach of searching for an uncorrelated feature set achieved better results than a feature set based on principal components. PMID:28831239
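
    A comparison of this kind is straightforward to reproduce on any feature table; the sketch below cross-validates the same six families of classifiers on synthetic data standing in for the 53 HRV features of 70 subjects, not the study's measurements.

```python
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_classification

# Synthetic stand-in for 70 subjects x 53 features
X, y = make_classification(n_samples=70, n_features=53, n_informative=8,
                           random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    # regularized QDA, since features outnumber per-class samples here
    "QDA": QuadraticDiscriminantAnalysis(reg_param=0.1),
    "kNN": KNeighborsClassifier(5),
    "SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```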

  15. An implementation of support vector machine on sentiment classification of movie reviews

    NASA Astrophysics Data System (ADS)

    Yulietha, I. M.; Faraby, S. A.; Adiwijaya; Widyaningtyas, W. C.

    2018-03-01

    With technological advances, all information about a movie is available on the internet. If this information is processed properly, high-quality information can be extracted from it. This research proposes to classify the sentiment of movie review documents. It uses the Support Vector Machine (SVM) method because it can classify high-dimensional data, in accordance with the data used in this research, which is in the form of text. The Support Vector Machine is a popular machine learning technique for text classification because it can classify by learning from a collection of previously classified documents and can provide good results. Among the dataset splits, the 90-10 composition gives the best result, 85.6%. Among the SVM kernels, the linear kernel with constant 1 gives the best result, 84.9%.
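
    A minimal sentiment-classification sketch with a linear-kernel SVM and a 90-10 split is shown below; the tiny review list and the tf-idf representation are illustrative assumptions, not the paper's corpus or preprocessing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

reviews = ["a moving and brilliant film", "dull plot and wooden acting",
           "great performances throughout", "a tedious, forgettable movie"]
labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.1, random_state=0)   # 90-10 split

vec = TfidfVectorizer()                     # bag-of-words text features
clf = SVC(kernel="linear", C=1.0)           # linear kernel, constant 1
clf.fit(vec.fit_transform(X_train), y_train)
pred = clf.predict(vec.transform(X_test))
print(accuracy_score(y_test, pred))
```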

  16. Remote sensing of thermal radiation from an aircraft - An analysis and evaluation of crop-freeze protection methods

    NASA Technical Reports Server (NTRS)

    Sutherland, R. A.; Hannah, H. E.; Cook, A. F.; Martsolf, J. D.

    1981-01-01

    Thermal images from an aircraft-mounted scanner are used to evaluate the effectiveness of crop-freeze protection devices. Data from flights made while using fuel oil heaters, a wind machine and an undercanopy irrigation system are compared. Results show that the overall protection provided by irrigation (at approximately 2 C) is comparable to the less energy-efficient heater-wind machine combination. Protection provided by the wind machine alone (at approximately 1 C) was found to decrease linearly with distance from the machine by approximately 1 C/100 m. The flights were made over a 1.5 hectare citrus grove at an altitude of 450 m with an 8-14 micron detector. General meteorological conditions during the experiments, conducted during the nighttime, were cold (at approximately -6 C) and calm with clear skies.

  17. Machine learning in the string landscape

    NASA Astrophysics Data System (ADS)

    Carifio, Jonathan; Halverson, James; Krioukov, Dmitri; Nelson, Brent D.

    2017-09-01

    We utilize machine learning to study the string landscape. Deep data dives and conjecture generation are proposed as useful frameworks for utilizing machine learning in the landscape, and examples of each are presented. A decision tree accurately predicts the number of weak Fano toric threefolds arising from reflexive polytopes, each of which determines a smooth F-theory compactification, and linear regression generates a previously proven conjecture for the gauge group rank in an ensemble of 4/3 × 2.96 × 10^755 F-theory compactifications. Logistic regression generates a new conjecture for when E6 arises in the large ensemble of F-theory compactifications, which is then rigorously proven. This result may be relevant for the appearance of visible sectors in the ensemble. Through conjecture generation, machine learning is useful not only for numerics, but also for rigorous results.

  18. Analysis and comparison of end effects in linear switched reluctance and hybrid motors

    NASA Astrophysics Data System (ADS)

    Barhoumi, El Manaa; Abo-Khalil, Ahmed Galal; Berrouche, Youcef; Wurtz, Frederic

    2017-03-01

    This paper presents and discusses the longitudinal and transversal end effects which affect the propulsive force of linear motors. Generally, the modeling of linear machines must consider the force distortion due to the specific geometry of linear actuators. The insertion of permanent magnets in the stator improves the propulsive force produced by switched reluctance linear motors. The permanent magnets inserted in the hybrid structure also considerably reduce the end effects observed in linear motors. The analysis was conducted using 2D and 3D finite element methods. The permanent magnet reinforces the flux produced by the winding and reorients it, which modifies the impact of the end effects. The presented simulations and discussions show the importance of this study for characterizing the end effects in two different linear motors.

  19. Legislative efforts to protect children from tobacco.

    PubMed

    DiFranza, J R; Norwood, B D; Garner, D W; Tye, J B

    1987-06-26

    Public health laws intended to prevent children from smoking have been enacted in many states. We surveyed the relevant laws in all states and the District of Columbia. The efficacy of one such law prohibiting the sale of tobacco to individuals under the age of 18 years was assessed with the cooperation of an 11-year-old girl. She was successful in 75 of 100 attempts to purchase cigarettes. On the basis of this experience and a review of existing laws, we have made recommendations for a model law. These include a prohibition of the possession of tobacco by minors, a prohibition of the sale of tobacco to minors, a requirement for a warning sign at the point of sale, a ban on cigarette vending machines, and a reward for individuals reporting violators of vending laws.

  20. Controlling Continuous-Variable Quantum Key Distribution with Entanglement in the Middle Using Tunable Linear Optics Cloning Machines

    NASA Astrophysics Data System (ADS)

    Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying

    2017-02-01

    Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to the traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and to resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol that performs a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.

  1. A Prototype SSVEP Based Real Time BCI Gaming System

    PubMed Central

    Martišius, Ignas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel. PMID:27051414

  2. Balancing Vibrations at Harmonic Frequencies by Injecting Harmonic Balancing Signals into the Armature of a Linear Motor/Alternator Coupled to a Stirling Machine

    NASA Technical Reports Server (NTRS)

    Holliday, Ezekiel S. (Inventor)

    2014-01-01

    Vibrations at harmonic frequencies are reduced by injecting harmonic balancing signals into the armature of a linear motor/alternator coupled to a Stirling machine. The vibrations are sensed to provide a signal representing the mechanical vibrations. A harmonic balancing signal is generated for selected harmonics of the operating frequency by processing the sensed vibration signal with adaptive filter algorithms of adaptive filters for each harmonic. Reference inputs for each harmonic are applied to the adaptive filter algorithms at the frequency of the selected harmonic. The harmonic balancing signals for all of the harmonics are summed with a principal control signal. The harmonic balancing signals modify the principal electrical drive voltage and drive the motor/alternator with a drive voltage component in opposition to the vibration at each harmonic.

  3. Comparative decision models for anticipating shortage of food grain production in India

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Manojit; Mitra, Subrata Kumar

    2018-01-01

    This paper attempts to predict food shortages in advance from the analysis of rainfall during the monsoon months along with other inputs used for crop production, such as land used for cereal production, percentage of area covered under irrigation and fertiliser use. We used six binary classification data mining models, viz. logistic regression, Multilayer Perceptron, kernel lab-Support Vector Machines, linear discriminant analysis, quadratic discriminant analysis and k-Nearest Neighbors Network, and found that linear discriminant analysis and kernel lab-Support Vector Machines are equally suitable for predicting per capita food shortage with 89.69 % accuracy in overall prediction and 92.06 % accuracy in predicting food shortage (true negative rate). Advance information of food shortage can help policy makers to take remedial measures in order to prevent devastating consequences arising out of food non-availability.

  4. A Prototype SSVEP Based Real Time BCI Gaming System.

    PubMed

    Martišius, Ignas; Damaševičius, Robertas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel.

  5. High Strength P/M Gears for Vehicle Transmissions - Phase 2

    DTIC Science & Technology

    2008-08-15

    and while it was considered amenable to standard work material transfer ("blue steel" chutes for example) from other P/M processing equipment, no...depend of the machine design but should be kept to a minimum in order to minimize part transfer times. Position control of the linear axis is...Establish design of ausform gear finishing machine for P/M gears: The "Focus" part identified in phase I (New Process Planet gear P/N 17864, component

  6. Linear-hall sensor based force detecting unit for lower limb exoskeleton

    NASA Astrophysics Data System (ADS)

    Li, Hongwu; Zhu, Yanhe; Zhao, Jie; Wang, Tianshuo; Zhang, Zongwei

    2018-04-01

    This paper describes a knee-joint human-machine interaction force sensor for a lower-limb force-assistance exoskeleton. The structure is designed based on a hall sensor and a series elastic actuator (SEA) structure. The work includes the structure design, parameter determination and dynamic simulation. By converting the force signal into macro displacement and output voltage, we completed the measurement of the man-machine interaction force. Experiments prove that the design is simple, stable and low-cost.

  7. Developing Test Apparatus and Measurements of AC Loss of High Temperature Superconductors

    DTIC Science & Technology

    2012-11-01

    temperature of the coil is not raised significantly. The second system, a larger machine, designed with a long term prospective to serve a test bed for...four sample chambers inside the vacuum gap, LN2 – cooled sample holder (currently only one is in use), the laminated back iron, and the outer shell...machine. accommodate a variety of different small coils and linear tapes. This assembly is surrounded by the laminated back iron and the outer shell

  8. Anytime query-tuned kernel machine classifiers via Cholesky factorization

    NASA Technical Reports Server (NTRS)

    DeCoste, D.

    2002-01-01

    We recently demonstrated 2- to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste, 2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
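
    The linear-algebra backbone, factoring the kernel matrix once with Cholesky and then reusing the factor for cheap solves and query-time kernel expansions, can be sketched as below; this is not DeCoste's anytime bounding procedure itself, just the kind of computation it builds on, shown here for a kernel ridge-style classifier on synthetic data.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    return np.exp(-gamma * cdist(A, B, "sqeuclidean"))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=50))   # synthetic +/-1 labels

# Kernel ridge-style fit: solve (K + lam*I) alpha = y via Cholesky
K = rbf(X, X) + 1e-3 * np.eye(len(X))
c, low = cho_factor(K)
alpha = cho_solve((c, low), y)

X_query = rng.normal(size=(5, 3))
scores = rbf(X_query, X) @ alpha       # query-time kernel expansion
print(np.sign(scores))
```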

  9. Error compensation for thermally induced errors on a machine tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of temperature measurements.
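
    A minimal version of such a linear thermal-error model is a least-squares fit of deflection against the sensor temperatures, as sketched below with invented numbers; sensor placement and count, the subject of the report, are taken as given here.

```python
import numpy as np

# Hypothetical data: rows = time samples, columns = temperature sensors (deg C);
# deflection is the measured tool-workpiece positioning error (micrometers).
T = np.array([[21.0, 23.5, 30.1],
              [22.4, 25.0, 33.2],
              [23.1, 26.2, 35.0],
              [24.0, 27.5, 37.4]])
deflection = np.array([2.1, 5.0, 6.8, 9.2])

# Linear model: deflection ~ c0 + sum_j c_j * T_j, fitted by least squares
A = np.c_[np.ones(len(T)), T]
coef, *_ = np.linalg.lstsq(A, deflection, rcond=None)

predicted = A @ coef
compensation = -predicted              # offset commanded to the axis
print(coef, compensation)
```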

  10. The study on the nanomachining property and cutting model of single-crystal sapphire by atomic force microscopy.

    PubMed

    Huang, Jen-Ching; Weng, Yung-Jin

    2014-01-01

    This study focused on the nanomachining properties and cutting model of single-crystal sapphire during nanomachining. A coated diamond probe was used as the tool, and atomic force microscopy (AFM) served as the experimental platform for nanomachining. To understand the effect of the normal force on single-crystal sapphire machining, this study tested nano-line machining and nano-rectangular pattern machining at different normal forces. In the nano-line machining test, the experimental results showed that as the normal force increased, the groove depth from nano-line machining also increased, and the trend is logarithmic. In the nano-rectangular pattern machining test, it was found that as the normal force increased, the groove depth also increased, along with the accumulation of small chips. This paper combined blowing with an air blower, cleaning with an ultrasonic cleaning machine, and scanning of the surface topology with a contact-mode probe after nanomachining, and proposed a "criterion of the nanomachining cutting model" in order to determine whether the cutting mode of single-crystal sapphire during nanomachining is the ductile-regime or the brittle-regime cutting model. The analysis shows that when the single-crystal sapphire substrate is processed with a small normal force during nano-line machining, its cutting mode is the ductile-regime cutting model. In nano-rectangular pattern machining, due to the overlap of machined zones, the cutting mode is converted into a brittle-regime cutting model. © 2014 Wiley Periodicals, Inc.

  11. Electrochemical growth of linear conducting crystals in microgravity

    NASA Technical Reports Server (NTRS)

    Cronise, Raymond J., IV

    1988-01-01

    Much attention has been given to the synthesis of linear conducting materials. These inorganic, organic, and polymeric materials have some very interesting electrical and optical properties, including low temperature superconductivity. Because of the anisotropic nature of these compounds, impurities and defects strongly influence the unique physical properties of such crystals. Investigations have demonstrated that electrochemical growth has provided the most reproducible and purest crystals. Space, specifically microgravity, eliminates phenomena such as buoyancy driven convection, and could permit formation of crystals many times purer than the ones grown to date. Several different linear conductors were flown on Get Away Special G-007 on board the Space Shuttle Columbia, STS 61-C, the first of a series of Project Explorer payloads. These compounds were grown by electrochemical methods, and the growth was monitored by photographs taken throughout the mission. Due to some thermal problems, no crystals of appreciable size were grown. The experimental results will be incorporated into improvements for the next two missions of Project Explorer. The results and conclusions of the first mission are discussed.

  12. New concept for in-line OLED manufacturing

    NASA Astrophysics Data System (ADS)

    Hoffmann, U.; Landgraf, H.; Campo, M.; Keller, S.; Koening, M.

    2011-03-01

    A new concept of a vertical in-line deposition machine for large area white OLED production has been developed. The concept targets manufacturing on large substrates (>= Gen 4, 750 x 920 mm2) using linear deposition sources, achieving a total material utilization of >= 50 % and tact times down to 80 seconds. The continuously improved linear evaporation sources for the organic material achieve a thickness uniformity on Gen 4 substrates of better than +/- 3 % and stable deposition rates down to less than 0.1 nm·m/min and up to more than 100 nm·m/min. For lithium fluoride, but also for other high evaporation temperature materials like magnesium or silver, a linear source with uniformity better than +/- 3 % has been developed. For aluminum we integrated a vertically oriented point source using wire feed to achieve high (> 150 nm·m/min) and stable deposition rates. The machine concept includes a new vertical vacuum handling and alignment system for Gen 4 shadow masks. A complete alignment cycle for the mask can be done in less than one minute, achieving alignment accuracy in the range of several tens of μm.

  13. Four-point bend apparatus for in situ micro-Raman stress measurements

    NASA Astrophysics Data System (ADS)

    Ward, Shawn H.; Mann, Adrian B.

    2018-06-01

    A device for in situ use with a micro-Raman microscope to determine stress from the Raman peak position was designed and validated. The device is a four-point bend machine with a micro-stepping motor and load cell, allowing for fine movement and accurate readings of the applied force. The machine has a small footprint and easily fits on most optical microscope stages. The results obtained from silicon are in good agreement with published literature values for the linear relationship between stress and peak position for the 520.8 cm⁻¹ Raman peak. The device was used to examine 4H–SiC and a good linear relationship was found between the 798 cm⁻¹ Raman peak position and stress, with the proportionality coefficient being close to the theoretical value of 0.0025. The 777 cm⁻¹ Raman peak also showed a linear dependence on stress, but the dependence was not as strong. The device examines both the tensile and compressive sides of the beam in bending, granting the potential for many materials and crystal orientations to be examined.
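
    The calibration step amounts to a straight-line fit of peak position against applied stress, which can then be inverted to read stress from a measured peak; the numbers below are hypothetical, chosen only to illustrate the roughly 0.0025 cm⁻¹/MPa order of magnitude, and are not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration points: applied stress (MPa) vs Raman shift (cm^-1)
stress = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
peak = np.array([798.00, 798.12, 798.26, 798.37, 798.51])

slope, intercept = np.polyfit(stress, peak, 1)   # linear peak-vs-stress law
print(f"proportionality coefficient: {slope:.4f} cm^-1/MPa")

# Invert the calibration to infer stress from a measured peak position
measured = 798.30
print((measured - intercept) / slope)
```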

  14. Alternate approaches to future electron-positron linear colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loew, G.A.

    1998-07-01

    The purpose of this article is two-fold: to review the current international status of various design approaches to the next generation of e+e- linear colliders, and on the occasion of his 80th birthday, to celebrate Richard B. Neal's many contributions to the field of linear accelerators. As it turns out, combining these two tasks is a rather natural enterprise because of Neal's long professional involvement and insight into many of the problems and options which the international e+e- linear collider community is currently studying to achieve a practical design for a future machine.

  15. Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process

    NASA Astrophysics Data System (ADS)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed on a traditional lathe machine. However, recent rapid advancements in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of a conventional lathe is its mechanical contact, which leads to undesirable tool wear, a heat affected zone, and poor finishing and dimensional accuracy, especially taper quality, in machining of stock with a high length-to-diameter ratio. Therefore, a novel approach has been devised to investigate transforming a 2D flatbed CO2 laser cutting machine into a 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed, consisting of eight (8) sequential runs replicated three (3) times. The experimental results were then used to establish a Mamdani-Fuzzy predictive model, which yields an accuracy of more than 95%. Thus, the proposed Mamdani-Fuzzy modelling approach is found to be very suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.

  16. Full-motion video analysis for improved gender classification

    NASA Astrophysics Data System (ADS)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video, motion capture, range data provides a dataset with higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on the larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.

  17. The employment of Support Vector Machine to classify high and low performance archers based on bio-physiological variables

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair

    2018-04-01

    The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables trained on different SVMs. 50 youth archers with an average age and standard deviation of (17.0 ± .056), gathered from various archery programmes, completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to the shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. linear, quadratic and cubic kernel functions, were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison with the other tested models. The findings of this investigation can be valuable to coaches and sports managers for recognising high potential athletes from the selected bio-physiological variables examined.
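
    The two-stage procedure, unsupervised clustering into HPA/LPA followed by kernel SVM classification, can be sketched as follows; the random data stand in for the five bio-physiological variables and the shooting scores and are not the study's measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical stand-ins for [resting HR, RR, DBP, SBP, calorie intake]
X = rng.normal(size=(50, 5))
scores = X[:, 0] * -2.0 + rng.normal(size=50)    # stand-in shooting scores

# Step 1: k-means splits archers into two groups from their shooting scores
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    scores.reshape(-1, 1))

# Step 2: train SVMs with linear, quadratic and cubic kernels on the variables
for name, clf in [("linear", SVC(kernel="linear")),
                  ("quadratic", SVC(kernel="poly", degree=2)),
                  ("cubic", SVC(kernel="poly", degree=3))]:
    model = make_pipeline(StandardScaler(), clf)
    print(name, cross_val_score(model, X, labels, cv=5).mean())
```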

  18. Mixed Integer Linear Programming based machine learning approach identifies regulators of telomerase in yeast.

    PubMed

    Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer

    2016-06-02

    Understanding telomere length maintenance mechanisms is central in cancer biology as their dysregulation is one of the hallmarks for immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, when compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We compiled our machine learning method leading to a user friendly package for R which can straightforwardly be applied to similar problems integrating gene regulator binding information and expression profiles of samples of e.g. different phenotypes, diseases or treatments. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Development and Implementation of a Simplified Tool Measuring System

    NASA Astrophysics Data System (ADS)

    Chen, Jenn-Yih; Lee, Bean-Yin; Lee, Kuang-Chyi; Chen, Zhao-Kai

    2010-01-01

    This paper presents a simplified system for measuring geometric profiles of end mills. Firstly, a CCD camera was used to capture images of cutting tools. Then, an image acquisition card with an encoding function was adopted to convert the image source to a USB port of a PC, and the image could be shown on a monitor. In addition, two linear scales were mounted on the X-Y table for positioning and measuring purposes. The signals of the linear scales were transmitted to a 4-axis quadrature encoder with a 4-channel counter card for position monitoring. C++ Builder was utilized to design the user-friendly human-machine interface of the tool measuring system. A cross line on the image of the interface shows a coordinate for the position measurement. Finally, a well-known tool measuring and inspection machine was employed as the measuring standard. This study compares the measuring results obtained with that machine and with the proposed system. Experimental results show that the percentage measuring error is acceptable for some geometric parameters of square or ball nose end mills. Therefore, the results demonstrate the effectiveness of the presented approach.

  20. Dynamics of the process boom machine working equipment under the real law of the hydraulic distributor electric spool control

    NASA Astrophysics Data System (ADS)

    Tarasov, V. N.; Boyarkina, I. V.

    2017-06-01

    Analytical methods for calculating the dynamic processes of the working equipment of self-propelled boom hydraulic machines are preferable to numerical methods. An analytical method for studying the dynamic processes of the boom hydraulic machine working equipment by means of differential equations for acceleration and braking of the working equipment is proposed. The real control law of a hydraulic distributor electric spool is considered, containing a linear law for the electric spool activation and a stepped law for the electric spool deactivation. Dependences of the dynamic processes of the working equipment on the reduced mass, the stiffness of the hydraulic power cylinder, the viscous drag coefficient, the piston acceleration, the pressure in the hydraulic cylinders and the inertia force are obtained. Definite recommendations for reducing the dynamic loads appearing during working equipment control are given as the research result. The nature and rate of variation of the piston speed and acceleration during the dynamic process depend on the law of opening and closing of the ports of the hydraulic distributor electric spool. Dynamic loads in the working equipment are decreased by a smooth linear activation of the hydraulic distributor electric spool.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiCostanzo, D; Ayan, A; Woollard, J

    Purpose: To predict potential failures of hardware within the Varian TrueBeam linear accelerator in order to proactively replace parts and decrease machine downtime. Methods: Machine downtime is a problem for all radiation oncology departments and vendors. Most often it is the result of unexpected equipment failure, and it is increased by a lack of in-house clinical engineering support. Preventative maintenance attempts to assuage downtime, but is often ineffective at preemptively preventing many failure modes such as MLC motor failures, the need to tighten a gantry chain, or the replacement of a jaw motor, among other things. To attempt to alleviate downtime, software was developed in house that determines the maximum value of each axis enumerated in the TrueBeam trajectory log files. After patient treatments, this data is stored in a SQL database. Microsoft Power BI is used to plot the average maximum error of each day for each machine as a function of time. The results are then correlated with actual faults that occurred at the machine with the help of Varian service engineers. Results: Over the course of six months, 76,312 trajectory logs have been written into the database and plotted in Power BI. Throughout the course of the analysis, MLC motors have been replaced on three machines due to the early warning of the trajectory log analysis. The service engineers have also been alerted to possible gantry issues on one occasion due to the aforementioned analysis. Conclusion: Analyzing the trajectory log data is a viable and effective early warning system for potential failures of the TrueBeam linear accelerator. With further analysis and tightening of the tolerance values used to determine a possible imminent failure, it should be possible to pinpoint future issues more thoroughly and for more axes of motion.
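
    A simplified version of this kind of trajectory-log surveillance, aggregating per-axis maximum errors by day and flagging drifts above a threshold, is sketched below; the table layout, column names, and threshold value are assumptions for illustration, not Varian log fields or service tolerances.

```python
import pandas as pd

# Hypothetical table distilled from trajectory logs: one row per treatment,
# with the maximum tracking error recorded for each axis.
logs = pd.DataFrame({
    "date": pd.to_datetime(["2017-01-02"] * 2 + ["2017-01-03"] * 2),
    "machine": ["TB1", "TB1", "TB1", "TB1"],
    "axis": ["MLC_A21", "Gantry", "MLC_A21", "Gantry"],
    "max_error": [0.031, 0.012, 0.058, 0.013],
})

# Average daily maximum error per machine and axis
daily = (logs.groupby(["machine", "axis", "date"])["max_error"]
             .mean().reset_index())

# Flag axes whose average daily maximum error drifts above a chosen threshold
THRESHOLD = 0.05          # illustrative value, not a vendor specification
alerts = daily[daily["max_error"] > THRESHOLD]
print(alerts)
```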

  2. Coupling machine learning with mechanistic models to study runoff production and river flow at the hillslope scale

    NASA Astrophysics Data System (ADS)

    Marçais, J.; Gupta, H. V.; De Dreuzy, J. R.; Troch, P. A. A.

    2016-12-01

    Geomorphological structure and geological heterogeneity of hillslopes are major controls on runoff responses. The diversity of hillslopes (morphological shapes and geological structures) on one hand, and the highly nonlinear runoff response mechanism on the other hand, make it difficult to transpose what has been learnt at one specific hillslope to another. Therefore, making reliable predictions of runoff appearance or river flow for a given hillslope is a challenge. Applying classic model calibration (based on inverse problem techniques) requires doing so for each specific hillslope and having some data available for calibration; when applied to thousands of cases, this cannot always be done. Here we propose a novel modeling framework based on coupling process-based models with a data-based approach. First we develop a mechanistic model, based on hillslope storage Boussinesq equations (Troch et al. 2003), able to model nonlinear runoff responses to rainfall at the hillslope scale. Second we set up a model database representing thousands of uncalibrated simulations. These simulations investigate different hillslope shapes (real ones obtained by analyzing a 5 m digital elevation model of Brittany, and synthetic ones), different hillslope geological structures (i.e. different parametrizations) and different hydrologic forcing terms (i.e. different infiltration chronicles). Then, we use this model library to train a machine learning model on this physically based database. Machine learning model performance is then assessed by a classic validation phase (testing it on new hillslopes and comparing machine learning outputs with mechanistic outputs). Finally we use this machine learning model to learn which hillslope properties control runoff. This methodology will be further tested by combining synthetic datasets with real ones.

  3. Optimization of classification and regression analysis of four monoclonal antibodies from Raman spectra using collaborative machine learning approach.

    PubMed

    Le, Laetitia Minh Maï; Kégl, Balázs; Gramfort, Alexandre; Marini, Camille; Nguyen, David; Cherti, Mehdi; Tfaili, Sana; Tfayli, Ali; Baillet-Guffroy, Arlette; Prognon, Patrice; Chaminade, Pierre; Caudron, Eric

    2018-07-01

    The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. An analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to a traditional chemometric and machine learning approach for data analysis. Compared to a conventional linear approach, prediction errors are reduced with a data-driven approach using statistical machine learning methods. In the latter, preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP). This allowed solutions from about 300 data scientists to be used in collaborative work. Using machine learning, the prediction of the four mAbs samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% using the linear approach. The concentration and classification errors were 5.8% and 0.7%; only three spectra were misclassified out of the 429 spectra of the test set. This large improvement obtained with machine learning techniques was uniform for all molecules but maximal for Bevacizumab, with an 88.3% reduction in combined errors (2.1% versus 17.9%). Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Cobalt-60 Machines and Medical Linear Accelerators: Competing Technologies for External Beam Radiotherapy.

    PubMed

    Healy, B J; van der Merwe, D; Christaki, K E; Meghzifene, A

    2017-02-01

    Medical linear accelerators (linacs) and cobalt-60 machines are both mature technologies for external beam radiotherapy. A comparison is made between these two technologies in terms of infrastructure and maintenance, dosimetry, shielding requirements, staffing, costs, security, patient throughput and clinical use. Infrastructure and maintenance are more demanding for linacs due to the complex electric componentry. In dosimetry, a higher beam energy, modulated dose rate and smaller focal spot size mean that it is easier to create an optimised treatment with a linac for conformal dose coverage of the tumour while sparing healthy organs at risk. In shielding, the requirements for a concrete bunker are similar for cobalt-60 machines and linacs but extra shielding and protection from neutrons are required for linacs. Staffing levels can be higher for linacs and more staff training is required for linacs. Life cycle costs are higher for linacs, especially multi-energy linacs. Security is more complex for cobalt-60 machines because of the high activity radioactive source. Patient throughput can be affected by source decay for cobalt-60 machines but poor maintenance and breakdowns can severely affect patient throughput for linacs. In clinical use, more complex treatment techniques are easier to achieve with linacs, and the availability of electron beams on high-energy linacs can be useful for certain treatments. In summary, there is no simple answer to the question of the choice of either cobalt-60 machines or linacs for radiotherapy in low- and middle-income countries. In fact a radiotherapy department with a combination of technologies, including orthovoltage X-ray units, may be an option. Local needs, conditions and resources will have to be factored into any decision on technology taking into account the characteristics of both forms of teletherapy, with the primary goal being the sustainability of the radiotherapy service over the useful lifetime of the equipment. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  5. Machine Learning to Improve Energy Expenditure Estimation in Children With Disabilities: A Pilot Study in Duchenne Muscular Dystrophy.

    PubMed

    Pande, Amit; Mohapatra, Prasant; Nicorici, Alina; Han, Jay J

    2016-07-19

    Children with physical impairments are at a greater risk for obesity and decreased physical activity. A better understanding of physical activity patterns and energy expenditure (EE) would lead to a more targeted approach to intervention. This study focuses on the use of machine-learning algorithms for EE estimation in children with disabilities. A pilot study was conducted on children with Duchenne muscular dystrophy (DMD) to identify important factors for determining EE and to develop a novel algorithm to accurately estimate EE from wearable sensor-collected data. Seven boys with DMD, 6 healthy control boys, and 22 control adults were recruited. Data were collected using a smartphone accelerometer and chest-worn heart rate sensors. The gold standard EE values were obtained from the COSMED K4b2 portable cardiopulmonary metabolic unit worn by boys (aged 6-10 years) with DMD and controls. Data from this sensor setup were collected simultaneously during a series of concurrent activities. Linear regression and nonlinear machine-learning-based approaches were used to analyze the relationship between accelerometer and heart rate readings and COSMED values. Existing calorimetry equations using linear regression and nonlinear machine-learning-based models, developed for healthy adults and young children, give low correlations with actual EE values in children with disabilities (14%-40%). The proposed model for boys with DMD uses ensemble machine learning techniques and gives a 91% correlation with actual measured EE values (root mean square error of 0.017). Our results confirm that the methods developed to determine EE using accelerometer and heart rate sensor values in normal adults are not appropriate for children with disabilities and should not be used. A much more accurate model is obtained using machine-learning-based nonlinear regression specifically developed for this target population. ©Amit Pande, Prasant Mohapatra, Alina Nicorici, Jay J Han. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 19.07.2016.

  6. Linear Optimization and Image Reconstruction

    DTIC Science & Technology

    1994-06-01

    final example is again a novel one. We formulate the problem of computer assisted tomographic (CAT) image reconstruction as a linear optimization...possibility that a patient, Fred, suffers from a brain tumor. Further, the physician opts to make use of the CAT (Computer Aided Tomography) scan device...and examine the inside of Fred's head without exploratory surgery. The CAT scan machine works by projecting a finite number of X-rays of known

  7. Physics Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1982

    1982-01-01

    Outlines methodology, demonstrations, and materials including: an inexpensive wave machine; speed of sound in carbon dioxide; diffraction grating method for measuring spectral line wavelength; linear electronic thermometer; analogy for bromine diffusion; direct reading refractive index meter; inexpensive integrated circuit spectrophotometer; and…

  8. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    PubMed Central

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-01-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405

  9. Nonlinear programming for classification problems in machine learning

    NASA Astrophysics Data System (ADS)

    Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio

    2016-10-01

    We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become increasingly relevant due to many practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, customer profiling, etc. Classification deals with the separation of sets by means of appropriate separation surfaces, which are generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some such proposals, mainly in terms of the numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.

  10. Strike action electromagnetic machine for immersion of rod elements into ground

    NASA Astrophysics Data System (ADS)

    Usanov, K. M.; Volgin, A. V.; Chetverikov, E. A.; Kargin, V. A.; Moiseev, A. P.; Ivanova, Z. I.

    2017-10-01

    During construction, survey work, and the drilling of shallow wells by striking, operations associated with immersing and removing rod elements are among the most common. Relatively long, small-diameter elements, in which the ratio of length to diameter l/d is 100 or more, constitute a significant proportion of these. At present, the application of power pulse linear electromagnetic motors to drive drum machines is recognized to be highly effective. However, the mechanical method of transmitting shocks does not allow long, longitudinally unstable rod elements to be immersed; in this case, mechanical energy must be transferred from the motor to the rod through its side surface. The design of a strike action electromagnetic machine with a through axial channel for non-mechanical end striking of piles of long, longitudinally unstable metal rods is proposed. The electromagnetic striking machine for non-mechanical end striking of rod elements provides operations characterized by ecological compatibility, safety and high quality.

  11. Stirling cryocooler test results and design model verification

    NASA Astrophysics Data System (ADS)

    Shimko, Martin A.; Stacy, W. D.; McCormick, John A.

    A long-life Stirling cycle cryocooler being developed for spaceborne applications is described. The results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the generator design code used in its development are presented. This machine achieved a cold-end temperature of 65 K while carrying a 1/2-W cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for sweeping and sealing the machine working volumes. The double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. The PC-compatible design code developed for this design approach calculates regenerator loss, including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls. The code accurately predicted cooler performance and assisted in diagnosing breadboard machine flaws during shakedown and development testing.

  12. Design and analysis of linear cascade DNA hybridization chain reactions using DNA hairpins

    NASA Astrophysics Data System (ADS)

    Bui, Hieu; Garg, Sudhanshu; Miao, Vincent; Song, Tianqi; Mokhtar, Reem; Reif, John

    2017-01-01

    DNA self-assembly has been employed non-conventionally to construct nanoscale structures and dynamic nanoscale machines. The technique of hybridization chain reactions by triggered self-assembly has been shown to form various interesting nanoscale structures ranging from simple linear DNA oligomers to dendritic DNA structures. Inspired by earlier triggered self-assembly work, we present a system for controlled self-assembly of linear cascade DNA hybridization chain reactions using nine distinct DNA hairpins. NUPACK is employed to assist in designing DNA sequences and Matlab has been used to simulate DNA hairpin interactions. Gel electrophoresis and ensemble fluorescence reaction kinetics data indicate strong evidence of linear cascade DNA hybridization chain reactions. The half-completion time of the proposed linear cascade reactions shows a linear dependence on the number of hairpins.

  13. Maternal exposure to ambient PM10 during pregnancy increases the risk of congenital heart defects: Evidence from machine learning models.

    PubMed

    Ren, Zhoupeng; Zhu, Jun; Gao, Yanfang; Yin, Qian; Hu, Maogui; Dai, Li; Deng, Changfei; Yi, Lin; Deng, Kui; Wang, Yanping; Li, Xiaohong; Wang, Jinfeng

    2018-07-15

    Previous research suggested an association between maternal exposure to ambient air pollutants and risk of congenital heart defects (CHDs), though the reported effects of particulate matter ≤10 μm in aerodynamic diameter (PM10) on CHDs are inconsistent. We used two machine learning models (i.e., random forest (RF) and gradient boosting (GB)) to investigate the non-linear effects of PM10 exposure during the critical time window, weeks 3-8 of pregnancy, on the risk of CHDs. From 2009 through 2012, we carried out a population-based birth cohort study on 39,053 live-born infants in Beijing. RF and GB models were used to calculate odds ratios for CHDs associated with increases in PM10 exposure, adjusting for maternal and perinatal characteristics. Maternal exposure to PM10 was identified as the primary risk factor for CHDs in all machine learning models. We observed a clear non-linear effect of maternal exposure to PM10 on CHD risk. Compared to 40 μg/m3, the following odds ratios resulted: 1) 92 μg/m3 [RF: 1.16 (95% CI: 1.06, 1.28); GB: 1.26 (95% CI: 1.17, 1.35)]; 2) 111 μg/m3 [RF: 1.04 (95% CI: 0.96, 1.14); GB: 1.04 (95% CI: 0.99, 1.08)]; 3) 124 μg/m3 [RF: 1.01 (95% CI: 0.94, 1.10); GB: 0.98 (95% CI: 0.93, 1.02)]; 4) 190 μg/m3 [RF: 1.29 (95% CI: 1.14, 1.44); GB: 1.71 (95% CI: 1.04, 2.17)]. Overall, both machine learning models showed an association between maternal exposure to ambient PM10 and CHDs in Beijing, highlighting the need for non-linear methods to investigate dose-response relationships. Copyright © 2018 Elsevier B.V. All rights reserved.
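
    The odds ratios above compare predicted risk at selected exposures against a 40 μg/m3 reference. A hedged sketch of that idea using a scikit-learn gradient-boosting classifier on synthetic cohort data; the covariates, effect sizes and sample size are invented for illustration and do not reproduce the study's analysis.

      # Hedged sketch: odds ratios for CHD at selected PM10 exposures relative to
      # a 40 ug/m3 reference, read off a fitted non-linear classifier. The cohort
      # data below are synthetic; this is not the study code.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(0)
      n = 5000
      pm10 = rng.uniform(30, 200, n)                 # weeks 3-8 mean exposure (ug/m3)
      covars = rng.normal(size=(n, 3))               # maternal/perinatal covariates
      logit = -5 + 0.012 * (pm10 - 40) + covars @ [0.2, -0.1, 0.05]
      chd = rng.random(n) < 1 / (1 + np.exp(-logit)) # synthetic outcome

      X = np.column_stack([pm10, covars])
      model = GradientBoostingClassifier(random_state=0).fit(X, chd)

      def odds_at(level):
          """Average predicted odds when every subject is set to the given PM10 level."""
          Xc = X.copy()
          Xc[:, 0] = level
          p = model.predict_proba(Xc)[:, 1]
          return np.mean(p / (1 - p))

      ref = odds_at(40.0)
      for level in (92, 111, 124, 190):
          print(f"PM10 {level} ug/m3: OR = {odds_at(level) / ref:.2f}")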

  14. Hydrogeology and water quality of the shallow aquifer system at the Explosive Experimental Area, Naval Surface Warfare Center, Dahlgren Site, Dahlgren, Virginia

    USGS Publications Warehouse

    Bell, C.F.

    1996-01-01

    In October 1993, the U.S. Geological Survey began a study to characterize the hydrogeology of the shallow aquifer system at the Explosive Experimental Area, Naval Surface Warfare Center, Dahlgren Site, Dahlgren, Virginia, which is located on the Potomac River in the Coastal Plain Physiographic Province. The study provides a description of the hydrogeologic units, directions of ground-water flow, and back-ground water quality in the study area to a depth of about 100 feet. Lithologic, geophysical, and hydrologic data were collected from 28 wells drilled for this study, from 3 existing wells, and from outcrops. The shallow aquifer system at the Explosive Experimental Area consists of two fining-upward sequences of Pleistocene fluvial-estuarine deposits that overlie Paleocene-Eocene marine deposits of the Nanjemoy-Marlboro confining unit. The surficial hydrogeologic unit is the Columbia aquifer. Horizontal linear flow of water in this aquifer generally responds to the surface topography, discharging to tidal creeks, marshes, and the Potomac River, and rates of flow in this aquifer range from 0.003 to 0.70 foot per day. The Columbia aquifer unconformably overlies the upper confining unit 12-an organic-rich clay that is 0 to 55 feet thick. The upper confining unit conformably overlies the upper confined aquifer, a 0- to 35-feet thick unit that consists of interbedded fine-grained to medium-grained sands and clay. The upper confined aquifer probably receives most of its recharge from the adjacent and underlying Nanjemoy-Marlboro confining unit. Water in the upper confined aquifer generally flows eastward, northward, and northeastward at about 0.03 foot per day toward the Potomac River and Machodoc Creek. The Nanjemoy-Marlboro confining unit consists of glauconitic, fossiliferous silty fine-grained sands of the Nanjemoy Formation. Where the upper confined system is absent, the Nanjemoy-Marlboro confining unit is directly overlain by the Columbia aquifer. In some parts of the Explosive Experimental Area, horizontal hydraulic conductivities of the Nanjemoy-Marlboro confining unit and the Columbia aquifer are similar (from 10-4 to 10-2 foot per day), and these units effectively combine to form a thick (greater than 50 feet) aquifer. The background water quality of the shallow aquifer system is characteristic of ground waters in the Virginia Coastal Plain Physiographic Province. Water in the Columbia aquifer is a mixed ionic type, has a median pH of 5.9, and a median total dissolved solids of 106 milligrams per liter. Water in the upper confined aquifer and Nanjemoy-Marlboro confining unit is a sodium- calcium-bicarbonate type, and generally has higher pH, dissolved solids, and alkalinity than water in the Columbia aquifer. Water in the upper confined aquifer and some parts of the Columbia aquifer is anoxic, and it has high concentrations of dissolved iron, manganese, and sulfide.

  15. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.

  16. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    PubMed

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
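
    An illustrative, hedged comparison in the spirit of the study: a linear regression and a small neural network are fit to hypothetical accelerometer features and scored with the metrics reported above (correlation and RMSE). The features, target formula and network size are assumptions; this is not the authors' model or data.

      # Illustrative comparison (not the study's models): linear regression vs. a
      # small neural network for predicting energy expenditure (METs) from
      # hypothetical accelerometer features, scored with r and RMSE.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(42)
      X = rng.normal(size=(600, 12))                  # hypothetical acceleration features
      mets = 1.5 + 0.8 * np.abs(X[:, 0]) + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.5, 600)

      Xtr, Xte, ytr, yte = train_test_split(X, mets, random_state=0)
      for name, model in [("linear", LinearRegression()),
                          ("ANN", MLPRegressor(hidden_layer_sizes=(32, 16),
                                               max_iter=2000, random_state=0))]:
          pred = model.fit(Xtr, ytr).predict(Xte)
          r = np.corrcoef(yte, pred)[0, 1]
          rmse = mean_squared_error(yte, pred) ** 0.5
          print(f"{name}: r = {r:.2f}, RMSE = {rmse:.2f} METs")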

  17. Monitoring Temperature and Fan Speed Using Ganglia and Winbond Chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaffrey, Cattie; /SLAC

    2006-09-27

    Effective monitoring is essential to keep a large group of machines, like the ones at Stanford Linear Accelerator Center (SLAC), up and running. SLAC currently uses Ganglia Monitoring System to observe about 2000 machines, analyzing metrics like CPU usage and I/O rate. However, metrics essential to machine hardware health, such as temperature and fan speed, are not being monitored. Many machines have a Winbond w83782d chip which monitors three temperatures, two of which come from dual CPUs, and returns the information when the sensor command is invoked. Ganglia also provides a feature, gmetric, that allows the users to monitor their own metrics and incorporate them into the monitoring system. The programming language Perl is chosen to implement a script that invokes the sensors command, extracts the temperature and fan speed information, and calls gmetric with the appropriate arguments. Two machines were used to test the script; the two CPUs on each machine run at about 65 Celsius, which is well within the operating temperature range (The maximum safe temperature range is 77-82 Celsius for the Pentium III processors being used). Installing the script on all machines with a Winbond w83782d chip allows the SLAC Scientific Computing and Computing Services group (SCCS) to better evaluate current cooling methods.

  18. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    From the inception of power systems, synchronous machines have acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, power electronics interfaces are playing a growing role as they are the primary interface for several types of renewable energy sources and storage technologies. As the role of power electronics in systems continues to grow, it is crucial to investigate the properties of bulk power systems in low inertia settings. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. Furthermore, the inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings and, hence, differing levels of inertia. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the interaction between the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.

  19. Theoretical and experimental research on machine tool servo system for ultra-precision position compensation on CNC lathe

    NASA Astrophysics Data System (ADS)

    Ma, Zhichao; Hu, Leilei; Zhao, Hongwei; Wu, Boda; Peng, Zhenxing; Zhou, Xiaoqin; Zhang, Hongguo; Zhu, Shuai; Xing, Lifeng; Hu, Huang

    2010-08-01

    Theories and techniques for improving machining accuracy via position control of the diamond tool tip, and for raising the resolution of cutting depth on precise CNC lathes, have received considerable attention. A new piezo-driven ultra-precision machine tool servo system is designed and tested to improve the manufacturing accuracy of workpieces. The mathematical model of the machine tool servo system is established and finite element analysis is carried out on the parallel plate flexure hinges. The output position of the diamond tool tip driven by the machine tool servo system is measured via a contact capacitive displacement sensor. Proportional-integral-derivative (PID) feedback is also implemented to accommodate and compensate dynamic changes owing to cutting forces as well as the inherent non-linearity of the piezoelectric stack during the cutting process. With the closed-loop feedback control strategy, the tracking error is limited to 0.8 μm. Experimental results have shown that the proposed machine tool servo system can provide a tool positioning resolution of 12 nm, which is much finer than the inherent CNC resolution. A stepped shaft of an aluminum specimen with a step increment of cutting depth of 1 μm is tested, and the obtained contour illustrates that the displacement command output from the controller is accurately reflected on the machined part in real time.

  20. Evaluation of Fatigue Behavior and Surface Characteristics of Aluminum Alloy 2024 T6 After Electric Discharge Machining

    NASA Astrophysics Data System (ADS)

    Mehmood, Shahid; Shah, Masood; Pasha, Riffat Asim; Sultan, Amir

    2017-10-01

    The effect of electric discharge machining (EDM) on surface quality and consequently on the fatigue performance of Al 2024 T6 is investigated. Five levels of discharge current are analyzed, while all other electrical and nonelectrical parameters are kept constant. At each discharge current level, dog-bone specimens are machined by generating a peripheral notch at the center. The fatigue tests are performed on a four-point rotating bending machine at room temperature. For comparison purposes, fatigue tests are also performed on conventionally machined specimens. Linearized SN curves for 95% failure probability and with four different confidence levels (75, 90, 95 and 99%) are plotted for each discharge current level as well as for conventionally machined specimens. These plots show that the electric discharge machined (EDMed) specimens exhibit inferior fatigue behavior compared to conventionally machined specimens. Moreover, discharge current inversely affects the fatigue life, and this influence is highly pronounced at lower stresses. The EDMed surfaces are characterized by surface properties that could be responsible for change in fatigue life, such as surface morphology, surface roughness, white layer thickness, microhardness and residual stresses. It is found that all these surface properties are affected by changing the discharge current level. However, the change in fatigue life with discharge current could not be attributed independently to any single surface property.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivares, Stefano

    We investigate the performance of a selective cloning machine based on linear optical elements and Gaussian measurements, which allows one to clone at will one of the two incoming input states. This machine is a complete generalization of a 1→2 cloning scheme demonstrated by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)]. The input-output fidelity is studied for a generic Gaussian input state, and the effect of nonunit quantum efficiency is also taken into account. We show that, if the states to be cloned are squeezed states with known squeezing parameter, then the fidelity can be enhanced using a third suitable squeezed state during the final stage of the cloning process. A binary communication protocol based on the selective cloning machine is also discussed.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hand, L.N.

    Some proposed techniques for using laser beams to accelerate charged particles are reviewed. Two specific ideas for 'grating-type' accelerating structures are discussed. Speculations are presented about how a successful laser accelerator could be used in a 'multi-pass collider', a type of machine which would have characteristics intermediate between those of synchrotrons and linear (single-pass) colliders. No definite conclusions about practical structures for laser accelerators are reached, but it is suggested that a serious effort be made to design a small prototype machine. Achieving a reasonable luminosity demands that the accelerator either be a cw machine or that laser peak power requirements be much higher than those presently available. Use of superconducting gratings requires a wavelength in the sub-millimeter range.

  3. The laser accelerator-another unicorn in the garden

    NASA Astrophysics Data System (ADS)

    Hand, L. N.

    1981-07-01

    Some proposed techniques for using laser beams to accelerate charged particles are reviewed. Two specific ideas for grating type accelerating structures are discussed. Speculations are presented about how a successful laser accelerator could be used in a multipass collider, a type of machine which would have characteristics intermediate between those of synchrotrons and linear (single pass) colliders. No definite conclusions about practical structures for laser accelerators are reached, but it is suggested that a serious effort be made to design a small prototype machine. Achieving a reasonable luminosity demands that the accelerator either be a cw machine or that laser peak power requirements be much higher than those presently available. Use of superconducting gratings requires a wavelength in the sub-millimeter range.

  4. Chaotic sources of noise in machine acoustics

    NASA Astrophysics Data System (ADS)

    Moon, F. C., Prof.; Broschart, Dipl.-Ing. T.

    1994-05-01

    In this paper a model is posited for deterministic, random-like noise in machines with sliding rigid parts impacting linear continuous machine structures. Such problems occur in gear transmission systems. A mathematical model is proposed to explain the random-like structure-borne and air-borne noise from such systems when the input is a periodic deterministic excitation of the quasi-rigid impacting parts. An experimental study is presented which supports the model. A thin circular plate is impacted by a chaotically vibrating mass excited by a sinusoidal moving base. The results suggest that the plate vibrations might be predicted by replacing the chaotic vibrating mass with a probabilistic forcing function. Prechaotic vibrations of the impacting mass show classical period doubling phenomena.

  5. An evaluation of the Meditech M250 and a comparison with other CT scanners.

    PubMed

    Greensmith, R; Richardson, R B; Sargood, A J; Stevens, P H; Mackintosh, I P

    1985-11-01

    The Meditech M250 computerised tomography (CT) machine was evaluated during the first half of 1984. Measurements were made of noise, modulation transfer function, slice width, radiation dose profile, uniformity and linearity of CT number, effective photon energy and parameters relating to machine specification, such as pixel size and scan time. All breakdowns were logged to indicate machine reliability. A comparison with the established EMI CT1010 and CT5005 was made for noise, resolution and multislice radiation dose, as well as the dose efficiency or quality (Q) factor for both head and body modes of operation. The M250 was found to perform to its intended specification with an acceptable level of reliability.

  6. Optical Implementation of the Optimal Universal and Phase-Covariant Quantum Cloning Machines

    NASA Astrophysics Data System (ADS)

    Ye, Liu; Song, Xue-Ke; Yang, Jie; Yang, Qun; Ma, Yang-Cheng

    Quantum cloning relates to the security of quantum computation and quantum communication. In this paper, we first propose a feasible unified scheme to implement optimal 1 → 2 universal, 1 → 2 asymmetric and symmetric phase-covariant cloning, and 1 → 2 economical phase-covariant quantum cloning machines using only a beam splitter. Then 1 → 3 economical phase-covariant quantum cloning machines can also be realized by adding another beam splitter in the context of linear optics. The scheme is based on the interference of two photons on a beam splitter with different splitting ratios for vertical and horizontal polarization components. It is shown that under certain conditions the scheme is feasible with current experimental technology.

  7. Researcher Biographies

    Science.gov Websites

    interest: mechanical system design sensitivity analysis and optimization of linear and nonlinear structural systems, reliability analysis and reliability-based design optimization, computational methods in committee member, ISSMO; Associate Editor, Mechanics Based Design of Structures and Machines; Associate

  8. DEGAS: Dynamic Exascale Global Address Space Programming Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds, and/or communication avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.

  9. Multiple-basin energy landscapes for large-amplitude conformational motions of proteins: Structure-based molecular dynamics simulations

    PubMed Central

    Okazaki, Kei-ichi; Koga, Nobuyasu; Takada, Shoji; Onuchic, Jose N.; Wolynes, Peter G.

    2006-01-01

    Biomolecules often undergo large-amplitude motions when they bind or release other molecules. Unlike macroscopic machines, these biomolecular machines can partially disassemble (unfold) and then reassemble (fold) during such transitions. Here we put forward a minimal structure-based model, the “multiple-basin model,” that can directly be used for molecular dynamics simulation of even very large biomolecular systems so long as the endpoints of the conformational change are known. We investigate the model by simulating large-scale motions of four proteins: glutamine-binding protein, S100A6, dihydrofolate reductase, and HIV-1 protease. The mechanisms of conformational transition depend on the protein basin topologies and change with temperature near the folding transition. The conformational transition rate varies linearly with driving force over a fairly large range. This linearity appears to be a consequence of partial unfolding during the conformational transition. PMID:16877541

  10. Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan

    2018-03-01

    This paper describes the conventional X-ray machine parameters tested in the region of South Sulawesi from 2014 to 2016. The objective of this research is to determine the deviation of each parameter of conventional X-ray machines. The test parameters were analyzed using quantitative methods with a participatory observational approach. Data collection was performed by testing the output of conventional X-ray machines using a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility and radiation beam quality (half-value layer, HVL). The results of the analysis show that the four conventional X-ray test parameters have varying deviation spans: the tube voltage (kV) accuracy has an average value of 4.12%, the average radiation output linearity is 4.47%, the average reproducibility is 0.62%, and the average radiation beam quality (HVL) is 3.00 mm.

  11. SABRE, a 10-MV linear induction accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corely, J.P.; Alexander, J.A.; Pankuch, P.J.

    SABRE (Sandia Accelerator and Beam Research Experiment) is a 10-MV, 250-kA, 40-ns linear induction accelerator. It was designed to be used in positive polarity output. Positive polarity accelerators are important for application to Sandia's ICF (Inertial Confinement Fusion) and LMF (Laboratory Microfusion Facility) program efforts. SABRE was built to allow a more detailed study of pulsed power issues associated with positive polarity output machines: MITL (Magnetically Insulated Transmission Line) voltage adder efficiency, extraction ion diode development, and ion beam transport and focusing. The SABRE design allows the system to operate in either positive polarity output for ion extraction applications or negative polarity output for more conventional electron beam loads. Details of the design of SABRE and the results of initial machine performance in negative polarity operation are presented in this paper. 13 refs., 12 figs., 1 tab.

  12. Effect of bionic coupling units' forms on wear resistance of gray cast iron under dry linear reciprocating sliding condition

    NASA Astrophysics Data System (ADS)

    Pang, Zuobo; Zhou, Hong; Xie, Guofeng; Cong, Dalong; Meng, Chao; Ren, Luquan

    2015-07-01

    In order to approximate the wear conditions of guide rails, a purpose-built linear reciprocating wear testing machine was used for the wear test. In order to improve the wear resistance of gray cast iron guide rails, bionic coupling units of different forms were manufactured by a laser. The wear behavior of gray cast iron with bionic coupling units was studied under dry sliding conditions at room temperature using the wear testing machine. The wear resistance was evaluated by means of weight loss measurement and wear morphology. The results indicated that bionic coupling units could improve the wear resistance of gray cast iron, and that the wear resistance of gray cast iron with the reticulation bionic coupling unit is the best. When the load and speed changed, the reticulation bionic coupling unit still performed well in improving the wear resistance of gray cast iron.

  13. Development of flank wear model of cutting tool by using adaptive feedback linear control system on machining AISI D2 steel and AISI 4340 steel

    NASA Astrophysics Data System (ADS)

    Orra, Kashfull; Choudhury, Sounak K.

    2016-12-01

    The purpose of this paper is to build an adaptive feedback linear control system to check the variation of the cutting force signal in order to improve tool life. The paper discusses the use of a transfer function approach to improve the mathematical modelling and to adaptively control the process dynamics of the turning operation. The experimental results are in agreement with the simulation model, and the error obtained is less than 3%. The state space approach used in this paper successfully checks the adequacy of the control system through the controllability and observability test matrices, and the system can be transferred from one state to another by appropriate input control in a finite time. The proposed system can be implemented in other machining processes under a wide range of cutting conditions to improve the efficiency and observability of the system.

  14. Telescoping magnetic ball bar test gage

    DOEpatents

    Bryan, J.B.

    1982-03-15

    A telescoping magnetic ball bar test gage for determining the accuracy of machine tools, including robots, and those measuring machines having non-disengagable servo drives which cannot be clutched out. Two gage balls are held and separated from one another by a telescoping fixture which allows them relative radial motional freedom but not relative lateral motional freedom. The telescoping fixture comprises a parallel reed flexure unit and a rigid member. One gage ball is secured by a magnetic socket knuckle assembly which fixes its center with respect to the machine being tested. The other gage ball is secured by another magnetic socket knuckle assembly which is engaged or held by the machine in such manner that the center of that ball is directed to execute a prescribed trajectory, all points of which are equidistant from the center of the fixed gage ball. As the moving ball executes its trajectory, changes in the radial distance between the centers of the two balls caused by inaccuracies in the machine are determined or measured by a linear variable differential transformer (LVDT) assembly actuated by the parallel reed flexure unit. Measurements can be quickly and easily taken for multiple trajectories about several different fixed ball locations, thereby determining the accuracy of the machine.

  15. 78 FR 37222 - Columbia Organic Chemical Company Site, Columbia, Richland County, South Carolina; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-20

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9826-7; CERCLA-04-2013-3761] Columbia Organic Chemical... Agency has entered into a settlement with Stephen Reichlyn concerning the Columbia Organic Chemical... comments by site name Columbia Organic Chemical Company by one of the following methods: www.epa.gov...

  16. Construction accident narrative classification: An evaluation of text mining techniques.

    PubMed

    Goh, Yang Miang; Ubeynarayana, C U

    2017-11-01

    Learning from past accidents is fundamental to accident prevention. Thus, accident and near miss reporting are encouraged by organizations and regulators. However, for organizations managing large safety databases, the time taken to accurately classify accident and near miss narratives will be very significant. This study aims to evaluate the utility of various text mining classification techniques in classifying 1000 publicly available construction accident narratives obtained from the US OSHA website. The study evaluated six machine learning algorithms, including support vector machine (SVM), linear regression (LR), random forest (RF), k-nearest neighbor (KNN), decision tree (DT) and Naive Bayes (NB), and found that SVM produced the best performance in classifying the test set of 251 cases. Further experimentation with tokenization of the processed text and non-linear SVM was also conducted. In addition, a grid search was conducted on the hyperparameters of the SVM models. It was found that the best performing classifiers were linear SVM with unigram tokenization and radial basis function (RBF) SVM with unigram tokenization. In view of its relative simplicity, the linear SVM is recommended. Across the 11 labels of accident causes or types, the precision of the linear SVM ranged from 0.5 to 1, recall ranged from 0.36 to 0.9 and F1 score was between 0.45 and 0.92. The reasons for misclassification were discussed and suggestions on ways to improve the performance were provided. Copyright © 2017 Elsevier Ltd. All rights reserved.
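
    A sketch of the recommended configuration, a linear SVM over unigram TF-IDF features with a grid search over the regularization constant. The toy narratives and labels below are invented placeholders, not the OSHA data set used in the study.

      # Sketch of the recommended setup: unigram TF-IDF features + linear SVM,
      # with a grid search over C. The corpus below is a toy placeholder.
      from sklearn.pipeline import Pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import GridSearchCV

      narratives = [
          "worker fell from scaffold while installing formwork",
          "employee struck by reversing excavator on site",
          "electrician received shock from energized panel",
          "laborer caught between wall and swinging crane load",
      ] * 25                                  # toy corpus; the study used 1000 cases
      labels = ["fall", "struck by", "electrocution", "caught in between"] * 25

      pipe = Pipeline([
          ("tfidf", TfidfVectorizer(ngram_range=(1, 1), stop_words="english")),
          ("svm", LinearSVC()),
      ])
      # Grid search over the SVM regularization constant, as in the abstract.
      search = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100]}, cv=5)
      search.fit(narratives, labels)
      print("best C:", search.best_params_, "cv accuracy:", round(search.best_score_, 3))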

  17. Orbit correction in a linear nonscaling fixed field alternating gradient accelerator

    DOE PAGES

    Kelliher, D. J.; Machida, S.; Edmonds, C. S.; ...

    2014-11-20

    In a linear non-scaling FFAG the large natural chromaticity of the machine results in a betatron tune that varies by several integers over the momentum range. In addition, orbit correction is complicated by the consequent variation of the phase advance between lattice elements. Here we investigate how the correction of multiple closed orbit harmonics allows correction of both the COD and the accelerated orbit distortion over the momentum range.

  18. Projected Regression Methods for Inverting Fredholm Integrals: Formalism and Application to Analytical Continuation

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well as or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided partial support to RN and LH.
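
    A hedged illustration of the projected-regression idea on a generic ill-conditioned Fredholm kernel: simulate input-output pairs through the stable forward problem, fit a regression from data back to spectra, and project predictions onto non-negative, normalized functions. The kernel, grids and ridge regressor below are illustrative choices, not the authors' implementation.

      # Illustration only: learn an approximate inverse of an ill-conditioned
      # Fredholm kernel from simulated pairs, then project predictions onto
      # non-negative, normalized spectra.
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      omega = np.linspace(-5, 5, 200)          # frequency grid for the spectrum A(w)
      x = np.linspace(0, 1, 50)                # grid where data g(x) are observed
      K = np.exp(-(x[:, None] - omega[None, :]) ** 2)   # smooth, ill-conditioned kernel
      dw = omega[1] - omega[0]

      def random_spectrum():
          """Non-negative, normalized sum of a few Gaussian peaks."""
          A = np.zeros_like(omega)
          for _ in range(rng.integers(1, 4)):
              A += rng.uniform(0.2, 1.0) * np.exp(-(omega - rng.uniform(-3, 3)) ** 2
                                                  / (2 * rng.uniform(0.2, 1.0) ** 2))
          return A / (A.sum() * dw)

      A_train = np.array([random_spectrum() for _ in range(2000)])
      G_train = A_train @ K.T * dw + rng.normal(0, 1e-4, size=(2000, len(x)))  # noisy data

      model = Ridge(alpha=1e-3).fit(G_train, A_train)   # regression from data to spectrum

      def predict_spectrum(g):
          A = model.predict(g[None, :])[0]
          A = np.clip(A, 0, None)                       # project onto the constraint set
          return A / (A.sum() * dw)                     # renormalize

      A_true = random_spectrum()
      g_obs = K @ A_true * dw + rng.normal(0, 1e-4, len(x))
      print("L1 error:", np.abs(predict_spectrum(g_obs) - A_true).sum() * dw)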

  19. Columbia basin project, Washington: Adams, Douglas, Franklin, Grant, Lincoln, and Walla Walla Counties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-01-01

    The Columbia Basin Project is a multipurpose development utilizing a portion of the resources of the Columbia River in the central part of the State of Washington. The key structure, Grand Coulee Dam, is on the main stem of the Columbia River about 90 miles west of Spokane, Wash. The extensive irrigation works extend southward on the Columbia Plateau 125 miles to the vicinity of Pasco, Wash., where the Snake and Columbia Rivers join.

  20. CAMELOT: A machine learning approach for coarse-grained simulations of aggregation of block-copolymeric protein sequences

    PubMed Central

    Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.

    2015-01-01

    We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all atom simulations that is used to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach are used to show that the adsorption and unfolding of the wild type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid-like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and is generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608
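
    One ingredient named above, Boltzmann inversion, can be sketched directly: a tabulated pair potential is obtained from a distribution function as U(r) = -kB T ln g(r). The g(r), units and parameter values below are synthetic placeholders, not output of the CAMELOT pipeline.

      # Sketch of Boltzmann inversion, one ingredient named in the abstract:
      # U(r) = -kB*T*ln g(r) from a pair distribution that, in a CAMELOT-like
      # workflow, would come from converged all-atom simulations. The g(r) here
      # is synthetic and the parameter values are illustrative only.
      import numpy as np

      kB = 0.0019872041          # Boltzmann constant, kcal/(mol K)
      T = 300.0                  # temperature, K
      r = np.linspace(2.0, 15.0, 300)                       # separation, angstrom
      g_r = 1.0 + 0.8 * np.exp(-(r - 5.0) ** 2 / 2.0)       # stand-in distribution

      U = -kB * T * np.log(np.clip(g_r, 1e-12, None))       # Boltzmann inversion
      U -= U[-1]                                            # shift so U -> 0 at large r

      # The tabulated U(r) would seed the coarse-grained model, whose remaining
      # parameters are refined by non-linear regression and Bayesian optimization.
      print(f"potential minimum: {U.min():.3f} kcal/mol at r = {r[U.argmin()]:.2f} A")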

  1. Predicting the Maximum Dynamic Strength in Bench Press: The High Precision of the Bar Velocity Approach.

    PubMed

    Loturco, Irineu; Kobal, Ronaldo; Moraes, José E; Kitamura, Katia; Cal Abad, César C; Pereira, Lucas A; Nakamura, Fábio Y

    2017-04-01

    Loturco, I, Kobal, R, Moraes, JE, Kitamura, K, Cal Abad, CC, Pereira, LA, and Nakamura, FY. Predicting the maximum dynamic strength in bench press: the high precision of the bar velocity approach. J Strength Cond Res 31(4): 1127-1131, 2017-The aim of this study was to determine the force-velocity relationship and test the possibility of determining the 1 repetition maximum (1RM) in "free weight" and Smith machine bench presses. Thirty-six male top-level athletes from 3 different sports were submitted to a standardized 1RM bench press assessment (free weight or Smith machine, in randomized order), following standard procedures encompassing lifts performed at 40-100% of 1RM. The mean propulsive velocity (MPV) was measured in all attempts. A linear regression was performed to establish the relationships between bar velocities and 1RM percentages. The actual and predicted 1RM for each exercise were compared using a paired t-test. Although the Smith machine 1RM was higher (10% difference) than the free weight 1RM, in both cases the actual and predicted values did not differ. In addition, the linear relationship between MPV and percentage of 1RM (coefficient of determination ≥95%) allows determination of training intensity based on the bar velocity. The linear relationships between the MPVs and the relative percentages of 1RM throughout the entire range of loads enable coaches to use the MPV to accurately monitor their athletes on a daily basis and accurately determine their actual 1RM without the need to perform standard maximum dynamic strength assessments.
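
    A hedged sketch of the bar-velocity idea: fit an individual load-velocity line from submaximal sets and extrapolate it to a minimal-velocity threshold to estimate 1RM. The loads, velocities and the 0.17 m/s threshold are illustrative assumptions, not values taken from the study.

      # Hedged sketch: estimate 1RM from the individual load-velocity relationship.
      # All numbers below are illustrative, including the assumed MPV at 1RM.
      import numpy as np

      load_kg = np.array([40, 50, 60, 70, 80, 90])          # submaximal bench-press loads
      mpv = np.array([1.05, 0.92, 0.78, 0.63, 0.48, 0.35])  # mean propulsive velocity (m/s)

      slope, intercept = np.polyfit(mpv, load_kg, 1)        # load as a linear function of MPV
      v_1rm = 0.17                                          # assumed MPV at 1RM (m/s)
      predicted_1rm = slope * v_1rm + intercept

      r2 = np.corrcoef(mpv, load_kg)[0, 1] ** 2
      print(f"R^2 of load-velocity line: {r2:.3f}, predicted 1RM: {predicted_1rm:.1f} kg")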

  2. Condom vending machines in Canada's secondary schools.

    PubMed

    Kerr, D L

    1990-03-01

    A case study of 1 of the 3 school boards that approved installation of condom machines in 1989 is presented: the Lisgar Collegiate Institute, Ottawa, Canada. The school is characterized as having 1000 college preparatory students from middle and upper middle class homes and university educated parents. The project was student initiated and involved 1) meeting with communication consultants to determine feasibility, 2) conducting an informal peer consultation to seek out interest and support, 3) meeting with public health officials to gain support and ideas, and 4) conducting research. Condom machine installation (2) was only 1 component; a pilot sexuality education program was included as well. The student proposal was presented to and rejected by the principal and the Superintendent of Student Services. Students then lobbied the school board trustees; 2 students lobbied each school board member. Letters of support were obtained from parents' advisory groups, parents, the student council, and other influential people. The media provided coverage in a popular morning television show. The student proposal was submitted to the Board of Education's Education Committee in June 1989; students were assisted by teachers and the Parents Advisory Committee. The school board approved. In the fall of 1989, sexuality awareness week was designated as October 30-November 3. Parents were asked for comments on the designated program, but only 50 contributed in a supportive way. During this week lunch-hour displays and videos, peer-facilitated discussion groups, informal talks by experts, and student theater presentations were sponsored activities. Following this event, the school board arranged for the installation of machines in the men's and women's washrooms near where social events were held and in toilet cubicles in order to provide privacy, as requested by students. The individual cost is US$1/condom. Evaluation is planned. Students have been amused by the amount of public response to this action. Other participating schools include Qualicum School District on Vancouver Island, British Columbia, and Toronto's 36 secondary schools.

  3. Evaluation of half wave induction motor drive for use in passenger vehicles

    NASA Technical Reports Server (NTRS)

    Hoft, R. G.; Kawamura, A.; Goodarzi, A.; Yang, G. Q.; Erickson, C. L.

    1985-01-01

    Research performed at the University of Missouri-Columbia to devise and design a lower cost inverter induction motor drive for electrical propulsion of passenger vehicles is described. A two phase inverter motor system is recommended. The new design is predicted to provide comparable vehicle performance, improved reliability and a cost advantage for a high production vehicle, decreased total rating of the power semiconductor switches, and somewhat simpler control hardware compared to the conventional three phase bridge inverter motor drive system. The major disadvantages of the two phase inverter motor drive are that it is larger and more expensive than a three phase machine, that the design of snubbers for the power leakage inductances produces higher transient voltages, and that the torque pulsations are relatively large because of the necessity to limit the inverter switching frequency to achieve high efficiency.

  4. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems: Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
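
    For orientation, the deterministic backbone these hybrid schemes start from is a preconditioned Richardson iteration built on a splitting A = M - N. A minimal sketch with a Jacobi-type splitting (M = diag(A)) is shown below; the Monte Carlo acceleration layer from the paper is not reproduced.

      # Deterministic backbone of the hybrid schemes: preconditioned Richardson
      # iteration from the splitting A = M - N, here with M = diag(A) (Jacobi).
      import numpy as np

      def richardson(A, b, tol=1e-10, max_iter=500):
          M_inv = 1.0 / np.diag(A)                   # inverse of the diagonal splitting
          x = np.zeros_like(b)
          for k in range(max_iter):
              r = b - A @ x                          # current residual
              x = x + M_inv * r                      # preconditioned Richardson update
              if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                  return x, k
          return x, max_iter

      # Diagonally dominant test system (1-D Laplacian-like matrix with a shift).
      n = 100
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.5 * np.eye(n)
      b = np.ones(n)
      x, iters = richardson(A, b)
      print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))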

  5. Data mining for the analysis of hippocampal zones in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Ovando Vázquez, Cesaré M.

    2012-02-01

    In this work, a methodology to classify people with Alzheimer's Disease (AD), Healthy Controls (HC) and people with Mild Cognitive Impairment (MCI) is presented. This methodology consists of an ensemble of Support Vector Machines (SVM) with hippocampal boxes (HB) as input data; these hippocampal zones are taken from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images. Two ways of constructing this ensemble are presented: the first consists of linear SVM models and the second of non-linear SVM models. Results demonstrate that, for distinguishing HC from MCI, the linear models classify HBs more accurately than the non-linear models, and that there are no differences between HC and AD.

  6. Natural Fiber Cut Machine Semi-Automatic Linear Motion System for Empty Fiber Bunches: Re-designing for Local Use

    NASA Astrophysics Data System (ADS)

    Asfarizal; Kasim, Anwar; Gunawarman; Santosa

    2017-12-01

    Empty palm bunch fiber is a local material in Indonesia that is easy to obtain; it can be sourced from the palm oil industry, for example in West Pasaman. Because the fiber of empty palm bunches is strong and pliable, it has high potential for particle board. Transforming large quantities of fiber into particles 0-10 mm in size requires a specially designed cutting machine. Therefore, the machine is designed in two stages covering the mechanical system, the structure and the cutting knife. The components were manufactured, assembled and then tested to reveal the cutting ability of the machine. The results showed that the straight back-and-forth motion cutting machine is able to cut the empty oil palm bunch fiber to lengths of 0-1 cm, 2 cm and 8 cm, and the cut surface is not stringy. The cutting capacity is 24.4 kg/h at a cut length of 2 cm and up to 84 kg/h at 8 cm.

  7. Linear and angular retroreflecting interferometric alignment target

    DOEpatents

    Maxey, L. Curtis

    2001-01-01

    The present invention provides a method and apparatus for measuring both the linear displacement and angular displacement of an object using a linear interferometer system and an optical target comprising a lens, a reflective surface and a retroreflector. The lens, reflecting surface and retroreflector are specifically aligned and fixed in optical connection with one another, creating a single optical target which moves as a unit that provides multi-axis displacement information for the object with which it is associated. This displacement information is useful in many applications including machine tool control systems and laser tracker systems, among others.

  8. An efficient parallel algorithm for the solution of a tridiagonal linear system of equations

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1971-01-01

    Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computations on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log₂ N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
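
    A small sketch of the recursive-doubling idea on the first-order linear recurrence x[i] = a[i] x[i-1] + b[i]: affine maps are composed pairwise, so a parallel machine needs only about log₂ N steps. The serial NumPy code below mimics the doubling steps; it is an illustration of the technique, not the ILLIAC IV implementation.

      # Recursive doubling for x[i] = a[i]*x[i-1] + b[i] with x[-1] = 0.
      # Each pass composes affine maps pairwise; on a parallel machine the loop
      # body runs concurrently, so only log2(N) passes are needed.
      import numpy as np

      def recursive_doubling(a, b):
          a = a.astype(float)
          b = b.astype(float)
          n = len(a)
          shift = 1
          while shift < n:
              # Compose each element's affine map with the one `shift` positions back.
              a_prev = np.concatenate([np.ones(shift), a[:-shift]])
              b_prev = np.concatenate([np.zeros(shift), b[:-shift]])
              a, b = a * a_prev, a * b_prev + b
              shift *= 2
          return b                       # b[i] now equals x[i]

      rng = np.random.default_rng(0)
      a = rng.uniform(0.5, 1.5, 16)
      b = rng.normal(size=16)

      # Serial reference for comparison.
      x = np.zeros(16)
      prev = 0.0
      for i in range(16):
          prev = a[i] * prev + b[i]
          x[i] = prev
      print("max error:", np.abs(recursive_doubling(a, b) - x).max())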

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cernoch, Antonin; Soubusta, Jan; Celechovska, Lucie

    We report on experimental implementation of the optimal universal asymmetric 1->2 quantum cloning machine for qubits encoded into polarization states of single photons. Our linear-optical machine performs asymmetric cloning by partially symmetrizing the input polarization state of signal photon and a blank copy idler photon prepared in a maximally mixed state. We show that the employed method of measurement of mean clone fidelities exhibits strong resilience to imperfect calibration of the relative efficiencies of single-photon detectors used in the experiment. Reliable characterization of the quantum cloner is thus possible even when precise detector calibration is difficult to achieve.

  10. Low-cost autonomous perceptron neural network inspired by quantum computation

    NASA Astrophysics Data System (ADS)

    Zidan, Mohammed; Abdel-Aty, Abdel-Haleem; El-Sadek, Alaa; Zanaty, E. A.; Abdel-Aty, Mahmoud

    2017-11-01

    Achieving low-cost learning with reliable accuracy is an important goal for intelligent machines: it saves time and energy and allows the learning process to run on machines with limited computational resources. In this paper, we propose an efficient algorithm for a perceptron neural network inspired by quantum computing, composed of a single neuron, that classifies linearly separable applications after a single training iteration, O(1). The algorithm is applied to a real-world data set and the results outperform the other state-of-the-art algorithms.

  11. Magnet reliability in the Fermilab Main Injector and implications for the ILC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartaglia, M.A.; Blowers, J.; Capista, D.

    2007-08-01

    The International Linear Collider reference design requires over 13000 magnets, of approximately 135 styles, which must operate with very high reliability. The Fermilab Main Injector represents a modern machine with many conventional magnet styles, each of significant quantity, that has now accumulated many hundreds of magnet-years of operation. We review here the performance of the magnets built for this machine, assess their reliability and categorize the failure modes, and discuss implications for reliability of similar magnet styles expected to be used at the ILC.

  12. Formal verification of automated teller machine systems using SPIN

    NASA Astrophysics Data System (ADS)

    Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam

    2017-08-01

    Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of the Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formula. This model checker accepts models written in Process Meta Language (PROMELA), and its specifications are specified in LTL formulas.

  13. The Atwood machine revisited using smartphones

    NASA Astrophysics Data System (ADS)

    Monteiro, Martín; Stari, Cecilia; Cabeza, Cecilia; Marti, Arturo C.

    2015-09-01

    The Atwood machine is a simple device used for centuries to demonstrate Newton's second law. It consists of two supports containing different masses joined by a string. Here we propose an experiment in which a smartphone is fixed to one support. With the aid of the built-in accelerometer of the smartphone, the vertical acceleration is registered. By redistributing the masses of the supports, a linear relationship between the mass difference and the vertical acceleration is obtained. In this experiment, the use of a smartphone contributes to enhance a classical demonstration.
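
    For reference, the ideal Atwood relation behind this linear fit (massless, frictionless pulley and string) is

    \[ a = \frac{(m_1 - m_2)\, g}{m_1 + m_2}, \]

    so that when mass is merely redistributed between the supports (total mass M = m_1 + m_2 held fixed), a = (g/M)(m_1 - m_2): the acceleration measured by the accelerometer is proportional to the mass difference, with slope g/M.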

  14. Financial implications of the continuity of primary care.

    PubMed

    Hollander, Marcus J; Kadlec, Helena

    2015-01-01

    The objective of this study was to assess the financial implications of the continuity of care, for patients with high care needs, by examining the cost of government-funded health care services in British Columbia, Canada. Using British Columbia Ministry of Health administrative databases for fiscal year 2010-2011 and generalized linear models, we estimated cost ratios for 10 cost-related predictor variables, including patients' attachment to the practice. Patients were selected and divided into groups on the basis of their Resource Utilization Band (RUB) and placement in provincial registries for 8 chronic conditions (1,619,941 patients). The final dataset included all high- and very-high-care-needs patients in British Columbia (ie, RUB categories 4 and 5) in 1 or more of the 8 registries who met the screening criteria (222,779 patients). Of the 10 predictors, across 8 medical conditions and both RUBs, patients' attachment to the practice had the strongest relationship to costs (correlations = -0.168 to -0.322). Higher attachment was associated with lower costs. Extrapolation of the findings indicated that an increase of 5% in the overall attachment level, for the selected high-care-needs patients, could have resulted in an estimated cost avoidance of $142 million Canadian for fiscal year 2010-2011. Continuity of care, defined as a patient's attachment to his/her primary care practice, can reduce health care costs over time and across chronic conditions. Health care policy makers may wish to consider creating opportunities for primary care physicians to increase the attachment that their high-care-needs patients have to their practices.
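
    A minimal sketch of how cost ratios of the kind reported above can be obtained from a generalized linear model, here a Gamma GLM with a log link fit to synthetic cost data in statsmodels; the variable names (attachment, rub5) and all numbers are illustrative placeholders, not the Ministry of Health dataset or the study's exact specification.

    ```python
    # Sketch: cost ratios from a Gamma GLM with a log link (synthetic data only).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({
        "attachment": rng.uniform(0, 1, n),    # share of visits to own practice
        "rub5": rng.integers(0, 2, n),         # 1 = very-high-needs (RUB 5)
    })
    # higher attachment -> lower expected cost; RUB 5 -> higher expected cost
    mu = np.exp(8.0 - 0.8 * df["attachment"] + 0.5 * df["rub5"])
    df["cost"] = rng.gamma(shape=2.0, scale=mu.to_numpy() / 2.0)

    X = sm.add_constant(df[["attachment", "rub5"]])
    model = sm.GLM(df["cost"], X,
                   family=sm.families.Gamma(sm.families.links.Log()))
    res = model.fit()
    # exponentiated coefficients are multiplicative cost ratios per unit change
    print(np.exp(res.params))
    ```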

  15. Assessing the Impacts of Wind Integration in the Western Provinces

    NASA Astrophysics Data System (ADS)

    Sopinka, Amy

    Increasing carbon dioxide levels and the fear of irreversible climate change have prompted policy makers to implement renewable portfolio standards. These renewable portfolio standards are meant to encourage the adoption of renewable energy technologies, thereby reducing carbon emissions associated with fossil fuel-fired electricity generation. The ability to efficiently adopt and utilize high levels of renewable energy technology, such as wind power, depends upon the composition of the extant generation within the grid. Western Canadian electric grids are poised to integrate high levels of wind and although Alberta has sufficient and, at times, an excess supply of electricity, it does not have the inherent generator flexibility required to mirror the variability of its wind generation. British Columbia, with its large reservoir storage capacities and rapid ramping hydroelectric generation, could easily provide the firming services required by Alberta; however, the two grids are connected only by a small, constrained intertie. We use a simulation model to assess the economic impacts of high wind penetrations in the Alberta grid under various balancing protocols. We find that adding wind capacity to the system impacts grid reliability, increasing the frequency of system imbalances and unscheduled intertie flow. In order for British Columbia to be a viable firming resource, it must have sufficient generation capability to meet and exceed the province's electricity self-sufficiency requirements. We use a linear programming model to evaluate the province's ability to meet domestic load under various water and trade conditions. We then examine the effects of drought and wind penetration on the interconnected Alberta-British Columbia system given differing interconnection sizes.
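
    A toy sketch of the kind of linear program used to test whether hydro generation plus imports can meet domestic load under an energy (water) budget and an intertie limit; the structure and every number below are illustrative assumptions, not the thesis model.

    ```python
    # Toy dispatch LP: meet hourly load with hydro + imports at minimum cost,
    # subject to hydro capacity, an energy (water) budget, and an intertie limit.
    import numpy as np
    from scipy.optimize import linprog

    T = 4                                            # hours
    load = np.array([7000., 9000., 8000., 6000.])    # MW
    hydro_cap, intertie_cap = 8000., 1000.           # MW
    water_budget = 26000.                            # MWh available this horizon
    cost_hydro, cost_import = 5., 60.                # $/MWh

    # decision vector x = [hydro_1..T, import_1..T]
    c = np.r_[np.full(T, cost_hydro), np.full(T, cost_import)]

    # equality: hydro_t + import_t = load_t
    A_eq = np.hstack([np.eye(T), np.eye(T)])
    b_eq = load

    # inequality: total hydro energy <= water budget
    A_ub = np.r_[np.ones(T), np.zeros(T)].reshape(1, -1)
    b_ub = [water_budget]

    bounds = [(0, hydro_cap)] * T + [(0, intertie_cap)] * T
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    print(res.status, res.x)   # status 0 = load met; infeasible flags a shortfall
    ```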

  16. NIEHS/EPA CEHCs: Columbia Center for Children’s Environmental Health - Columbia University

    EPA Pesticide Factsheets

    The Columbia Center for Children’s Environmental Health (CCCEH) at Columbia University studies the long-term health effects of urban pollutants on children raised in minority neighborhoods in inner-city communities.

  17. Explaining Support Vector Machines: A Color Based Nomogram

    PubMed Central

    Van Belle, Vanya; Van Calster, Ben; Van Huffel, Sabine; Suykens, Johan A. K.; Lisboa, Paulo

    2016-01-01

    Problem setting: Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods are used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models. Objective: In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables. Results: Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable. Conclusions: This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package, and two apps and a movie are provided to illustrate the possibilities offered by the method. PMID:27723811
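
    A small sketch of the core idea for the linear-kernel case only: the SVM decision value decomposes exactly into one additive contribution per input variable, which is the quantity a nomogram can display. The sketch uses scikit-learn on a standard toy dataset; it is not the authors' R package.

    ```python
    # Linear-kernel SVM: decision_function(x) = sum_j w_j * x_j + b,
    # so each feature j contributes w_j * x_j -- the quantity a nomogram shows.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    clf = SVC(kernel="linear").fit(X, y)

    w, b = clf.coef_.ravel(), clf.intercept_[0]
    x = X[0]                                   # one sample to explain
    contrib = w * x                            # per-feature additive contributions
    print(np.allclose(contrib.sum() + b, clf.decision_function([x])[0]))  # True
    print(sorted(zip(contrib, range(len(w))), key=lambda t: abs(t[0]))[-3:])  # top 3
    ```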

  18. An SVM-based solution for fault detection in wind turbines.

    PubMed

    Santos, Pedro; Villa, Luisa F; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-03-09

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines for the diagnosis of mechanical faults in their mechanical transmission chain is insufficient. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, in which vibration signals are processed using angular resampling techniques and electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that linear kernel SVM outperforms other kernels and ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of linear SVM is also experimentally analyzed, to conclude that this data acquisition technique generates linearly separable datasets.

  19. Quantifying Melt Ponds in the Beaufort MIZ using Linear Support Vector Machines from High Resolution Panchromatic Images

    NASA Astrophysics Data System (ADS)

    Ortiz, M.; Graber, H. C.; Wilkinson, J.; Nyman, L. M.; Lund, B.

    2017-12-01

    Much work has been done on determining changes in summer ice albedo and on morphological properties of melt ponds such as depth, shape and distribution, using in-situ measurements and satellite-based sensors. Although these studies represent pioneering work in this area, they still lack sufficient spatial and temporal coverage. We present a prototype algorithm using Linear Support Vector Machines (LSVMs) designed to quantify the evolution of melt pond fraction from a recently government-declassified high-resolution panchromatic optical dataset. The study area of interest lies within the Beaufort marginal ice zone (MIZ), where several in-situ instruments were deployed by the British Antarctic Survey jointly with the MIZ Program from April-September, 2014. The LSVM uses four-dimensional feature data drawn from the intensity image itself and from various textures calculated with a modified first-order histogram technique using probability density of occurrences. We explore both the temporal evolution of melt ponds and spatial statistics such as pond fraction, pond area, and pond number density, to name a few. We also introduce a linear regression model that can potentially be used to estimate average pond area by ingesting several melt pond statistics and shape parameters.

  20. Loop shaping design for tracking performance in machine axes.

    PubMed

    Schinstock, Dale E; Wei, Zhouhong; Yang, Tao

    2006-01-01

    A modern interpretation of classical loop shaping control design methods is presented in the context of tracking control for linear motor stages. Target applications include noncontacting machines such as laser cutters and markers, water jet cutters, and adhesive applicators. The methods are directly applicable to the common PID controller and are pertinent to many electromechanical servo actuators other than linear motors. In addition to explicit design techniques a PID tuning algorithm stressing the importance of tracking is described. While the theory behind these techniques is not new, the analysis of their application to modern systems is unique in the research literature. The techniques and results should be important to control practitioners optimizing PID controller designs for tracking and in comparing results from classical designs to modern techniques. The methods stress high-gain controller design and interpret what this means for PID. Nothing in the methods presented precludes the addition of feedforward control methods for added improvements in tracking. Laboratory results from a linear motor stage demonstrate that with large open-loop gain very good tracking performance can be achieved. The resultant tracking errors compare very favorably to results from similar motions on similar systems that utilize much more complicated controllers.
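
    A small sketch of the high-gain loop-shaping idea for a linear motor stage, assuming a simple integrator-plus-lag plant model and a PI controller with illustrative gains (not the paper's design); the closed loop is formed by polynomial arithmetic and its step tracking checked with scipy.

    ```python
    # Loop shaping sketch: plant P(s) = K / (s*(tau*s + 1)) (stage velocity lag
    # plus position integration), PI controller C(s) = (Kp*s + Ki)/s.
    # High open-loop gain at low frequency gives small tracking error.
    import numpy as np
    from scipy import signal

    K, tau = 1.0, 0.01          # plant gain and time constant (illustrative)
    Kp, Ki = 50.0, 500.0        # PI gains (controller zero at -Ki/Kp = -10 rad/s)

    num_P, den_P = [K], [tau, 1.0, 0.0]        # K / (tau s^2 + s)
    num_C, den_C = [Kp, Ki], [1.0, 0.0]        # (Kp s + Ki) / s

    # open loop L = C*P, closed loop T = L / (1 + L)
    num_L = np.polymul(num_C, num_P)
    den_L = np.polymul(den_C, den_P)
    num_T = num_L
    den_T = np.polyadd(den_L, num_L)

    t, y = signal.step(signal.TransferFunction(num_T, den_T), N=2000)
    print(f"final value ~ {y[-1]:.3f} (unit step tracked)")
    ```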

  1. Analysis on the multi-dimensional spectrum of the thrust force for the linear motor feed drive system in machine tools

    NASA Astrophysics Data System (ADS)

    Yang, Xiaojun; Lu, Dun; Ma, Chengfang; Zhang, Jun; Zhao, Wanhua

    2017-01-01

    The motor thrust force has lots of harmonic components due to the nonlinearity of drive circuit and motor itself in the linear motor feed drive system. What is more, in the motion process, these thrust force harmonics may vary with the position, velocity, acceleration and load, which affects the displacement fluctuation of the feed drive system. Therefore, in this paper, on the basis of the thrust force spectrum obtained by the Maxwell equation and the electromagnetic energy method, the multi-dimensional variation of each thrust harmonic is analyzed under different motion parameters. Then the model of the servo system is established oriented to the dynamic precision. The influence of the variation of the thrust force spectrum on the displacement fluctuation is discussed. At last the experiments are carried out to verify the theoretical analysis above. It can be found that the thrust harmonics show multi-dimensional spectrum characteristics under different motion parameters and loads, which should be considered to choose the motion parameters and optimize the servo control parameters in the high-speed and high-precision machine tools equipped with the linear motor feed drive system.

  2. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made to employ ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
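
    A sketch of the discrete memory (closure) term such a framework is built on: the convolution over a finite memory window approximated with the trapezoidal rule. The kernel, window length and resolved trajectory below are stand-ins, not the paper's learned closure.

    ```python
    # Trapezoidal approximation of a finite-memory convolution closure term:
    #   m(t_n) ~ dt * sum_k w_k * K(k*dt) * z(t_n - k*dt),  w_0 = w_last = 1/2.
    import numpy as np

    dt, n_mem = 0.01, 50                       # time step, memory window length
    kernel = np.exp(-2.0 * dt * np.arange(n_mem + 1))   # placeholder memory kernel

    def memory_term(z_hist):
        """z_hist[k] = z(t_n - k*dt), k = 0..n_mem (most recent first)."""
        w = np.ones(n_mem + 1)
        w[0] = w[-1] = 0.5                     # trapezoidal end weights
        return dt * np.sum(w * kernel * z_hist)

    t = np.arange(0, 1, dt)
    z = np.sin(2 * np.pi * t)                  # some resolved trajectory
    n = 80                                     # evaluate the closure at t[n]
    z_hist = z[n::-1][:n_mem + 1]              # z(t_n), z(t_n - dt), ...
    print(memory_term(z_hist))
    ```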

  3. Some Alignment Considerations for the Next Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruland, R

    Next Linear Collider type accelerators require a new level of alignment quality. The relative alignment of these machines is to be maintained in an error envelope dimensioned in micrometers and for certain parts in nanometers. In the nanometer domain our terra firma cannot be considered monolithic but compares closer to jelly. Since conventional optical alignment methods cannot deal with the dynamics and cannot approach the level of accuracy, special alignment and monitoring techniques must be pursued.

  4. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.

  5. Release of TEGDMA from composite during the chewing situation.

    PubMed

    Durner, J; Glasl, B; Zaspel, J; Kunzelmann, K H; Hickel, R; Reichl, F X

    2010-07-01

    The aim of this study was to investigate the triethylene glycol dimethacrylate (TEGDMA) elution kinetics from light-cured composite with and without chewing simulation over a time period of 86 h. An experimental composite with TEGDMA labeled with a tracer dose of 14C-TEGDMA was used. The material parameters were in the range of commercially available composites. The mastication was simulated with the Fatigue-machine and the MUC-3 chewing simulator. 14C was eluted to 2.55% of the applied 14C-TEGDMA dose within 86 h after chewing simulation with the Fatigue-machine and to 2.60% after chewing simulation with the MUC-3. Similar 14C-kinetic data were found for 14C-elution with and without chewing simulation with the Fatigue-machine and with MUC-3. During the first 26 h after the beginning of the experiments a linear 14C-elution kinetic was observed, followed by a second linear 14C-elution kinetic with a lower slope up to 86 h in both apparatuses. It could be shown that chewing simulation has no significant (p<0.05) effect on the release of 14C-TEGDMA (and/or 14C-degradation products) from polymerized composites. Copyright 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  6. Pharmaceutical Raw Material Identification Using Miniature Near-Infrared (MicroNIR) Spectroscopy and Supervised Pattern Recognition Using Support Vector Machine

    PubMed Central

    Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.

    2016-01-01

    Near-infrared spectroscopy as a rapid and non-destructive analytical technique offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624

  7. Multigrid methods in structural mechanics

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.

    1986-01-01

    Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
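
    A small numpy sketch of the energy-conserving inter-grid transfer mentioned above, for a 1D grid: linear-interpolation prolongation P with the restriction taken as a scaled transpose, R = 0.5 * P.T (full weighting). The grid sizes and scaling follow the common textbook convention and are an assumption here, not the paper's exact operators.

    ```python
    # 1D multigrid transfer operators: linear-interpolation prolongation P and
    # energy-conserving restriction R = 0.5 * P.T (full weighting).
    import numpy as np

    def prolongation(n_coarse):
        """Map n_coarse interior points to 2*n_coarse+1 fine interior points."""
        n_fine = 2 * n_coarse + 1
        P = np.zeros((n_fine, n_coarse))
        for j in range(n_coarse):
            P[2 * j + 1, j] = 1.0              # coincident fine point
            P[2 * j, j] += 0.5                 # neighbours get half the value
            P[2 * j + 2, j] += 0.5
        return P

    P = prolongation(3)
    R = 0.5 * P.T                              # restriction = scaled transpose
    A_fine = (np.diag(2 * np.ones(7)) - np.diag(np.ones(6), 1)
              - np.diag(np.ones(6), -1))       # 1D Poisson stencil
    A_coarse = R @ A_fine @ P                  # Galerkin coarse-grid operator
    print(A_coarse)                            # again a (scaled) 1D Poisson stencil
    ```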

  8. Rod Has High Tensile Strength And Low Thermal Expansion

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Everton, R. L.; Howe, E.; O'Malley, M.

    1996-01-01

    Thoriated tungsten extension rod fabricated to replace stainless-steel extension rod attached to linear variable-differential transformer in gap-measuring gauge. Threads formed on end of rod by machining with special fixtures and carefully chosen combination of speeds and feeds.

  9. Statistical analysis and machine learning algorithms for optical biopsy

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.

    2018-02-01

    Analyzing spectral or imaging data collected with various optical biopsy methods is often difficult due to the complexity of the biological basis. Robust methods that can utilize the spectral or imaging data and detect the characteristic spectral or spatial signatures of different types of tissue are challenging to develop but highly desired. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from normal and cancerous human skin tissue samples using resonance Raman spectroscopy with 532 nm excitation. The algorithms, including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network, are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal tissue and cancerous tissue samples. The efficacies of the methods are compared.
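
    A minimal pipeline in the spirit of the workflow described (dimension reduction followed by a linear-kernel SVM), run on synthetic spectra rather than the Raman dataset; all sizes, labels and the injected "band" are placeholders.

    ```python
    # Dimension reduction + linear SVM, demonstrated on synthetic "spectra".
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_samples, n_wavenumbers = 120, 800
    X = rng.normal(size=(n_samples, n_wavenumbers))
    y = rng.integers(0, 2, size=n_samples)          # 0 = normal, 1 = cancerous
    X[y == 1, 300:320] += 0.8                       # synthetic "cancer band"

    pipe = make_pipeline(StandardScaler(), PCA(n_components=10),
                         SVC(kernel="linear"))
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
    ```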

  10. Intelligent path loss prediction engine design using machine learning in the urban outdoor environment

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Lu, Jingyang; Xu, Yiran; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2018-05-01

    Due to the progressive expansion of public mobile networks and the dramatic growth in the number of wireless users in recent years, researchers are motivated to study radio propagation in urban environments and to develop reliable and fast path loss prediction models. Over the last decades, different types of propagation models have been developed for urban-scenario path loss prediction, such as the Hata model and the COST 231 model. In this paper, the path loss prediction model is thoroughly investigated using machine learning approaches. Different non-linear feature selection methods are deployed and investigated to reduce the computational complexity. Simulation results are provided to demonstrate the validity of the machine learning based path loss prediction engine, which can correctly determine the signal propagation in a wireless urban setting.
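
    As a point of reference for the empirical models mentioned above, a common baseline is the log-distance model PL(d) = PL(d0) + 10 n log10(d/d0); the sketch below fits its exponent to synthetic measurements by least squares. The data are illustrative, not the paper's urban dataset or its machine-learning engine.

    ```python
    # Baseline log-distance path loss fit: PL(d) = PL0 + 10*n*log10(d/d0).
    # Synthetic, noisy measurements stand in for urban drive-test data.
    import numpy as np

    rng = np.random.default_rng(0)
    d0, PL0, n_true = 1.0, 40.0, 3.2          # reference distance (m), dB, exponent
    d = rng.uniform(10, 500, 300)             # link distances in metres
    PL = PL0 + 10 * n_true * np.log10(d / d0) + rng.normal(0, 6, d.size)  # shadowing

    # linear least squares in the regressor 10*log10(d/d0)
    A = np.column_stack([np.ones(d.size), 10 * np.log10(d / d0)])
    (PL0_hat, n_hat), *_ = np.linalg.lstsq(A, PL, rcond=None)
    print(f"fitted PL0 ~ {PL0_hat:.1f} dB, exponent n ~ {n_hat:.2f}")
    ```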

  11. Scale effects and a method for similarity evaluation in micro electrical discharge machining

    NASA Astrophysics Data System (ADS)

    Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua

    2016-08-01

    Electrical discharge machining(EDM) is a promising non-traditional micro machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performances of micro EDM. A new concept of "scale effects" in micro EDM is proposed, the scale effects can reveal the difference in machining performances between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single factor experiments are conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the output results of scale effects in micro EDM do not change linearly with discharge parameters. The values of similarity precision of machining time significantly increase when scaling-down the capacitance or open-circuit voltage. It is indicated that the lower the scale of the discharge parameter, the greater the deviation of non-geometrical similarity degree over geometrical similarity degree, which means that the micro EDM system with lower discharge energy experiences more scale effects. The largest similarity difference is 5.34 while the largest similarity precision can be as high as 114.03. It is suggested that the similarity precision is more effective in reflecting the scale effects and their fluctuation than similarity difference. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. This proposed research offers engineering values for optimizing the machining parameters and improving the machining performances of micro EDM.

  12. New numerical approach for the modelling of machining applied to aeronautical structural parts

    NASA Astrophysics Data System (ADS)

    Rambaud, Pierrick; Mocellin, Katia

    2018-05-01

    The manufacturing of aluminium alloy structural aerospace parts involves several steps: forming (rolling, forging, etc.), heat treatments and machining. Before machining, these manufacturing processes have embedded residual stresses into the workpiece. The final geometry is obtained during this last step, when up to 90% of the raw material volume is removed by machining. During this operation, the mechanical equilibrium of the part is in constant evolution due to the redistribution of the initial stresses. This redistribution is the main cause of workpiece deflections during machining and of distortions after unclamping. Both may lead to non-conformity of the part with respect to the geometrical and dimensional specifications and therefore to rejection of the part or to additional conforming steps. In order to improve the machining accuracy and the robustness of the process, the effect of the residual stresses has to be considered in the definition of the machining process plan and even in the geometrical definition of the part. In this paper, the authors present two new numerical approaches to the modelling of the machining of aeronautical structural parts. The first deals with the use of an immersed volume framework to model the cutting step, improving the robustness and the quality of the resulting mesh compared to the previous version. The second concerns the mechanical modelling of the machining problem. The authors show that, in the framework of rolled aluminium parts, the use of a linear elasticity model is functional in the finite element formulation and promising with regard to the reduction of computation times.

  13. Feasibility of retrofitting a university library with active workstations to reduce sedentary behavior.

    PubMed

    Maeda, Hotaka; Quartiroli, Alessandro; Vos, Paul W; Carr, Lucas J; Mahar, Matthew T

    2014-05-01

    Libraries are an inherently sedentary environment, but are an understudied setting for sedentary behavior interventions. To investigate the feasibility of incorporating portable pedal machines in a university library to reduce sedentary behaviors. The 11-week intervention targeted students at a university library. Thirteen portable pedal machines were placed in the library. Four forms of prompts (e-mail, library website, advertisement monitors, and poster) encouraging pedal machine use were employed during the first 4 weeks. Pedal machine use was measured via automatic timers on each machine and momentary time sampling. Daily library visits were measured using a gate counter. Individualized data were measured by survey. Data were collected in fall 2012 and analyzed in 2013. Mean (SD) cumulative pedal time per day was 95.5 (66.1) minutes. One or more pedal machines were observed being used 15% of the time (N=589). Pedal machines were used at least once by 7% of students (n=527). Controlled for gate count, no linear change of pedal machine use across days was found (b=-0.1 minutes, p=0.75) and the presence of the prompts did not change daily pedal time (p=0.63). Seven of eight items that assessed attitudes toward the intervention supported intervention feasibility (p<0.05). The unique non-individualized approach of retrofitting a library with pedal machines to reduce sedentary behavior seems feasible, but improvement of its effectiveness is needed. This study could inform future studies aimed at reshaping traditionally sedentary settings to improve public health. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  14. Multiple Cylinder Free-Piston Stirling Machinery

    NASA Astrophysics Data System (ADS)

    Berchowitz, David M.; Kwon, Yong-Rak

    In order to improve the specific power of piston-cylinder type machinery, there is a point in capacity or power where an advantage accrues with increasing number of piston-cylinder assemblies. In the case of Stirling machinery where primary energy is transferred across the casing wall of the machine, this consideration is even more important. This is due primarily to the difference in scaling of basic power and the required heat transfer. Heat transfer is found to be progressively limited as the size of the machine increases. Multiple cylinder machines tend to preserve the surface area to volume ratio at more favorable levels. In addition, the spring effect of the working gas in the so-called alpha configuration is often sufficient to provide a high frequency resonance point that improves the specific power. There are a number of possible multiple cylinder configurations. The simplest is an opposed pair of piston-displacer machines (beta configuration). A three-cylinder machine requires stepped pistons to obtain proper volume phase relationships. Four to six cylinder configurations are also possible. A small demonstrator inline four cylinder alpha machine has been built to demonstrate both cooling operation and power generation. Data from this machine verifies theoretical expectations and is used to extrapolate the performance of future machines. Vibration levels are discussed and it is argued that some multiple cylinder machines have no linear component to the casing vibration but may have a nutating couple. Example applications are discussed ranging from general purpose coolers, computer cooling, exhaust heat power extraction and some high power engines.

  15. KENNEDY SPACE CENTER, FLA. - Jim Comer, United Space Alliance project leader for Columbia reconstruction, speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Jim Comer, United Space Alliance project leader for Columbia reconstruction, speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  16. Evaluation of Columbia, U.S. Meat Animal Research Center Composite, Suffolk, and Texel rams as terminal sires in an extensive rangeland production system: VI. Measurements of live-lamb and carcass shape and their relationship to carcass yield and value.

    PubMed

    Notter, D R; Mousel, M R; Leeds, T D; Zerby, H N; Moeller, S J; Lewis, G S; Taylor, J B

    2014-05-01

    Linear measurements on live lambs and carcasses can be used to characterize sheep breeds and may have value for prediction of carcass yield and value. This study used 512 crossbred lambs produced over 3 yr by mating Columbia, U.S. Meat Animal Research Center (USMARC) Composite, Suffolk, and Texel rams to adult Rambouillet ewes to assess sire-breed differences in live-animal and carcass shape and to evaluate the value of shape measurements as predictors of chilled carcass weight (CCW), weight of high-value cuts (rack, loin, leg, and sirloin; HVW), weight of trimmed high-value cuts (trimmed rack and loin and trimmed, boneless leg and sirloin; TrHVW), and estimated carcass value before (CVal) and after trimming of high-value cuts (TrCVal). Lambs were produced under extensive rangeland conditions, weaned at an average age of 132 d, fed a concentrate diet in a drylot, and harvested in each year in 3 groups at target mean BW of 54, 61, and 68 kg. Canonical discriminant analysis indicated that over 93% of variation among sire breeds was accounted for by the contrast between tall, long, less-thickly muscled breeds with greater BW and CCW (i.e., the Columbia and Suffolk) compared with shorter, more thickly muscled breeds with smaller BW and CCW. After correcting for effects of year, harvest group, sire breed, and shipping BW, linear measurements on live lambs contributed little to prediction of CCW. Similarly, after accounting for effects of CCW, linear measurements on live animals further reduced residual SD (RSD) of dependent variables by 0.2 to 5.7%, with generally positive effects of increasing live leg width and generally negative effects of increasing heart girth. Carcass measurements were somewhat more valuable as predictors of carcass merit. After fitting effects of CCW, additional consideration of carcass shape reduced RSD by 2.1, 3.6, 9.5, and 2.2% for HVW, TrHVW, CVal, and TrCVal, respectively. Effects of increasing carcass leg width were positive for HVW, TrHVW, and TrCVal. We also observed positive effects of increasing carcass length on TrCVal and negative effects of increasing cannon bone length on HVW and CVal. Increasing shoulder width had positive effects on CVal but negative effects on TrHVW. Differences in lamb and carcass shape were significantly associated with carcass yield and value, but the additional accuracy associated with use of these measurements was modest relative to that achieved from use of only shipping BW or CCW.

  17. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    NASA Astrophysics Data System (ADS)

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-03-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.

  18. Effects of machining conditions on the specific cutting energy of carbon fibre reinforced polymer composites

    NASA Astrophysics Data System (ADS)

    Azmi, A. I.; Syahmi, A. Z.; Naquib, M.; Lih, T. C.; Mansor, A. F.; Khalil, A. N. M.

    2017-10-01

    This article presents an approach to evaluate the effects of different machining conditions on the specific cutting energy of carbon fibre reinforced polymer (CFRP) composites. Although research on the machinability of CFRP composites has been very substantial, the present literature rarely discusses energy consumption and the specific cutting energy. A series of turning experiments were carried out on two different CFRP composites in order to determine the power and specific energy constants and to evaluate their changes under different machining conditions. Good agreement was found between the power and the material removal rate using a simple linear relationship. Further analyses revealed that a power-law function best describes the effect of feed rate on the specific cutting energy. At lower feed rates, the specific cutting energy increases exponentially due to the nature of the finishing operation, whereas at higher feed rates the change in specific cutting energy is minimal due to the nature of the roughing operation.
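
    A short sketch of the power-law relationship described above between feed rate and specific cutting energy, u = C * f^(-k), fitted in log-log space to synthetic turning data; the constants are placeholders, not the measured CFRP values.

    ```python
    # Specific cutting energy vs. feed rate: u = C * f**(-k),
    # i.e. log u = log C - k*log f, so a straight-line fit in log-log space.
    import numpy as np

    rng = np.random.default_rng(0)
    feed = np.linspace(0.05, 0.4, 12)                 # mm/rev (illustrative)
    C_true, k_true = 60.0, 0.7                        # placeholder constants
    u = C_true * feed**(-k_true) * rng.normal(1.0, 0.03, feed.size)  # J/mm^3

    slope, intercept = np.polyfit(np.log(feed), np.log(u), 1)
    C_hat, k_hat = np.exp(intercept), -slope
    print(f"fitted u ~ {C_hat:.1f} * f^(-{k_hat:.2f})")
    ```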

  19. A review of machine learning in obesity.

    PubMed

    DeGregory, K W; Kuiper, P; DeSilvio, T; Pleuss, J D; Miller, R; Roginski, J W; Fisher, C B; Harness, D; Viswanath, S; Heymsfield, S B; Dungan, I; Thomas, D M

    2018-05-01

    Rich sources of obesity-related data arising from sensors, smartphone apps, electronic medical health records and insurance data can bring new insights for understanding, preventing and treating obesity. For such large datasets, machine learning provides sophisticated and elegant tools to describe, classify and predict obesity-related risks and outcomes. Here, we review machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning and decision tree analysis. We also review methods that describe and characterize data, such as cluster analysis, principal component analysis, network science and topological data analysis. We introduce each method with a high-level overview followed by examples of successful applications. The algorithms were then applied to the National Health and Nutrition Examination Survey to demonstrate methodology, utility and outcomes. The strengths and limitations of each method were also evaluated. This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity. © 2018 World Obesity Federation.

  20. Telescoping magnetic ball bar test gage

    DOEpatents

    Bryan, J.B.

    1984-03-13

    A telescoping magnetic ball bar test gage for determining the accuracy of machine tools, including robots, and those measuring machines having non-disengageable servo drives which cannot be clutched out is disclosed. Two gage balls are held and separated from one another by a telescoping fixture which allows them relative radial motional freedom but not relative lateral motional freedom. The telescoping fixture comprises a parallel reed flexure unit and a rigid member. One gage ball is secured by a magnetic socket knuckle assembly which fixes its center with respect to the machine being tested. The other gage ball is secured by another magnetic socket knuckle assembly which is engaged or held by the machine in such manner that the center of that ball is directed to execute a prescribed trajectory, all points of which are equidistant from the center of the fixed gage ball. As the moving ball executes its trajectory, changes in the radial distance between the centers of the two balls caused by inaccuracies in the machine are determined or measured by a linear variable differential transformer (LVDT) assembly actuated by the parallel reed flexure unit. Measurements can be quickly and easily taken for multiple trajectories about several different fixed ball locations, thereby determining the accuracy of the machine. 3 figs.

  1. Axial calibration methods of piezoelectric load sharing dynamometer

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Chang, Qingbing; Ren, Zongjin; Shao, Jun; Wang, Xinlei; Tian, Yu

    2018-06-01

    The relationship between the input and output of a load sharing dynamometer is strongly non-linear across different loading points in a plane, so precise calibration of this non-linear relationship is essential for accurate force measurement. In this paper, calibration experiments at different loading points in a plane are first performed on a piezoelectric load sharing dynamometer. The load sharing testing system is then calibrated using the BP algorithm and the ELM (Extreme Learning Machine) algorithm, respectively. Finally, the results show that the ELM calibration is better than BP at capturing the non-linear relationship between the input and output of the load sharing dynamometer at different loading points in a plane, which verifies that the ELM algorithm is feasible for solving this non-linear force measurement problem.
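
    A compact sketch of the ELM idea used for the calibration: random, fixed hidden-layer weights and a single least-squares solve for the output weights. The inputs and the calibration surface below are synthetic placeholders for the dynamometer measurements.

    ```python
    # Extreme Learning Machine: random fixed hidden layer + least-squares output.
    import numpy as np

    rng = np.random.default_rng(0)

    class ELM:
        def __init__(self, n_hidden=100, rng=rng):
            self.n_hidden, self.rng = n_hidden, rng

        def fit(self, X, y):
            n_feat = X.shape[1]
            self.W = self.rng.normal(size=(n_feat, self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)               # hidden activations
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # toy non-linear calibration surface (stand-in for loading-point data)
    X = rng.uniform(-1, 1, size=(400, 3))
    y = 50 * X[:, 0] ** 2 + 20 * np.sin(3 * X[:, 1]) + 10 * X[:, 2]
    model = ELM(n_hidden=200).fit(X[:300], y[:300])
    err = np.sqrt(np.mean((model.predict(X[300:]) - y[300:]) ** 2))
    print(f"test RMSE: {err:.2f}")
    ```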

  2. Design and analysis of a novel mechanical loading machine for dynamic in vivo axial loading

    NASA Astrophysics Data System (ADS)

    Macione, James; Nesbitt, Sterling; Pandit, Vaibhav; Kotha, Shiva

    2012-02-01

    This paper describes the construction of a loading machine for performing in vivo, dynamic mechanical loading of the rodent forearm. The loading machine utilizes a unique type of electromagnetic actuator with no mechanically resistive components (servotube), allowing highly accurate loads to be created. A regression analysis of the force created by the actuator with respect to the input voltage demonstrates high linear correlation (R2 = 1). When the linear correlation is used to create dynamic loading waveforms in the frequency (0.5-10 Hz) and load (1-50 N) range used for in vivo loading, less than 1% normalized root mean square error (NRMSE) is computed. Larger NRMSE is found at increased frequencies, with 5%-8% occurring at 40 Hz, and reasons are discussed. Amplifiers (strain gauge, linear voltage displacement transducer (LVDT), and load cell) are constructed, calibrated, and integrated, to allow well-resolved dynamic measurements to be recorded at each program cycle. Each of the amplifiers uses an active filter with cutoff frequency at the maximum in vivo loading frequencies (50 Hz) so that electronic noise generated by the servo drive and actuator are reduced. The LVDT and load cell amplifiers allow evaluation of stress-strain relationships to determine if in vivo bone damage is occurring. The strain gauge amplifier allows dynamic force to strain calibrations to occur for animals of different sex, age, and strain. Unique features are integrated into the loading system, including a weightless mode, which allows the limbs of anesthetized animals to be quickly positioned and removed. Although the device is constructed for in vivo axial bone loading, it can be used within constraints, as a general measurement instrument in a laboratory setting.

  3. Predicting phenotypes of asthma and eczema with machine learning

    PubMed Central

    2014-01-01

    Background There is increasing recognition that asthma and eczema are heterogeneous diseases. We investigated the predictive ability of a spectrum of machine learning methods to disambiguate clinical sub-groups of asthma, wheeze and eczema, using a large heterogeneous set of attributes in an unselected population. The aim was to identify to what extent such heterogeneous information can be combined to reveal specific clinical manifestations. Methods The study population comprised a cross-sectional sample of adults, and included representatives of the general population enriched by subjects with asthma. Linear and non-linear machine learning methods, from logistic regression to random forests, were fit on a large attribute set including demographic, clinical and laboratory features, genetic profiles and environmental exposures. Outcome of interest were asthma, wheeze and eczema encoded by different operational definitions. Model validation was performed via bootstrapping. Results The study population included 554 adults, 42% male, 38% previous or current smokers. Proportion of asthma, wheeze, and eczema diagnoses was 16.7%, 12.3%, and 21.7%, respectively. Models were fit on 223 non-genetic variables plus 215 single nucleotide polymorphisms. In general, non-linear models achieved higher sensitivity and specificity than other methods, especially for asthma and wheeze, less for eczema, with areas under receiver operating characteristic curve of 84%, 76% and 64%, respectively. Our findings confirm that allergen sensitisation and lung function characterise asthma better in combination than separately. The predictive ability of genetic markers alone is limited. For eczema, new predictors such as bio-impedance were discovered. Conclusions More usefully-complex modelling is the key to a better understanding of disease mechanisms and personalised healthcare: further advances are likely with the incorporation of more factors/attributes and longitudinal measures. PMID:25077568

  4. Design and analysis of a novel mechanical loading machine for dynamic in vivo axial loading.

    PubMed

    Macione, James; Nesbitt, Sterling; Pandit, Vaibhav; Kotha, Shiva

    2012-02-01

    This paper describes the construction of a loading machine for performing in vivo, dynamic mechanical loading of the rodent forearm. The loading machine utilizes a unique type of electromagnetic actuator with no mechanically resistive components (servotube), allowing highly accurate loads to be created. A regression analysis of the force created by the actuator with respect to the input voltage demonstrates high linear correlation (R(2) = 1). When the linear correlation is used to create dynamic loading waveforms in the frequency (0.5-10 Hz) and load (1-50 N) range used for in vivo loading, less than 1% normalized root mean square error (NRMSE) is computed. Larger NRMSE is found at increased frequencies, with 5%-8% occurring at 40 Hz, and reasons are discussed. Amplifiers (strain gauge, linear voltage displacement transducer (LVDT), and load cell) are constructed, calibrated, and integrated, to allow well-resolved dynamic measurements to be recorded at each program cycle. Each of the amplifiers uses an active filter with cutoff frequency at the maximum in vivo loading frequencies (50 Hz) so that electronic noise generated by the servo drive and actuator are reduced. The LVDT and load cell amplifiers allow evaluation of stress-strain relationships to determine if in vivo bone damage is occurring. The strain gauge amplifier allows dynamic force to strain calibrations to occur for animals of different sex, age, and strain. Unique features are integrated into the loading system, including a weightless mode, which allows the limbs of anesthetized animals to be quickly positioned and removed. Although the device is constructed for in vivo axial bone loading, it can be used within constraints, as a general measurement instrument in a laboratory setting.

  5. Optimal inventories for overhaul of repairable redundant systems - A Markov decision model

    NASA Technical Reports Server (NTRS)

    Schaefer, M. K.

    1984-01-01

    A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
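
    A toy sketch of how an optimal repair policy can be read off a linear program over discounted state-action occupation measures; the states, actions, probabilities and costs below are invented for illustration and are not the avionics model in the paper.

    ```python
    # Discounted-MDP linear program over occupation measures x(s,a):
    #   minimize  sum_{s,a} c(s,a) x(s,a)
    #   s.t.  sum_a x(s',a) - gamma * sum_{s,a} P(s'|s,a) x(s,a) = mu0(s'),  x >= 0
    # Toy model: states 0 = "spare on shelf", 1 = "shortage";
    #            actions 0 = slow repair, 1 = fast repair.
    import numpy as np
    from scipy.optimize import linprog

    gamma = 0.95
    P = np.array([[[0.9, 0.1],     # P[s, a, s']: slow repair from state 0
                   [0.95, 0.05]],  # fast repair from state 0
                  [[0.6, 0.4],     # slow repair from state 1 (shortage may persist)
                   [0.9, 0.1]]])
    c = np.array([[1.0, 3.0],      # c[s, a]: repair cost; shortage adds a penalty
                  [11.0, 13.0]])
    nS, nA = 2, 2
    mu0 = np.array([0.5, 0.5])     # initial state distribution

    # flatten x(s,a) with index s*nA + a
    A_eq = np.zeros((nS, nS * nA))
    for s_next in range(nS):
        for s in range(nS):
            for a in range(nA):
                A_eq[s_next, s * nA + a] -= gamma * P[s, a, s_next]
        for a in range(nA):
            A_eq[s_next, s_next * nA + a] += 1.0

    res = linprog(c.ravel(), A_eq=A_eq, b_eq=mu0,
                  bounds=[(0, None)] * (nS * nA), method="highs")
    x = res.x.reshape(nS, nA)
    print("optimal action per state:", x.argmax(axis=1))   # greedy wrt occupation
    ```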

  6. Enhanced Night Vision Via a Combination of Poisson Interpolation and Machine Learning

    DTIC Science & Technology

    2006-02-01

    [Abstract garbled in the source record; only figure-caption fragments remain: a family of m(x, ψ) curves ranging from ψ=2 (the most linear) through ψ=1024 (the most curved), a remark about factors complicating low-light imaging, and a note that Nayar and Branzoi [04] later suggested a second variant using a DLP micromirror array to modulate the exposure.]

  7. 1993-1994 Final technical report for establishing the SECME Model in the District of Columbia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vickers, R.G.

    1995-12-31

    This is the final report for a program to establish the SECME Model in the District of Columbia. This program has seen the development of a partnership between the District of Columbia Public Schools, the University of the District of Columbia, the Department of Energy, and SECME. This partnership has demonstrated positive achievement in mathematics and science education and learning in students within the District of Columbia.

  8. The International Linear Collider

    NASA Astrophysics Data System (ADS)

    List, Benno

    2014-04-01

    The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  9. 77 FR 55191 - Endangered and Threatened Species; Recovery Plans

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-07

    ... and Threatened Species; Recovery Plans AGENCY: National Marine Fisheries Service (NMFS), National... Species Act (ESA) Recovery Plan for Lower Columbia River Chinook Salmon, Lower Columbia River Coho Salmon, Columbia River Chum Salmon, and Lower Columbia River Steelhead (Proposed Plan) was available for public...

  10. Fabrication Quality Analysis of a Fiber Optic Refractive Index Sensor Created by CO2 Laser Machining

    PubMed Central

    Chen, Chien-Hsing; Yeh, Bo-Kuan; Tang, Jaw-Luen; Wu, Wei-Te

    2013-01-01

    This study investigates the CO2 laser-stripped partial cladding of silica-based optic fibers with a core diameter of 400 μm, which enables them to sense the refractive index of the surrounding environment. However, inappropriate treatments during the machining process can generate a number of defects in the optic fiber sensors. Therefore, the quality of optic fiber sensors fabricated using CO2 laser machining must be analyzed. The results show that analysis of the fiber core size after machining can provide preliminary defect detection, and qualitative analysis of the optical transmission defects can be used to identify imperfections that are difficult to observe through size analysis. To more precisely and quantitatively detect fabrication defects, we included a tensile test and numerical aperture measurements in this study. After a series of quality inspections, we proposed improvements to the existing CO2 laser machining parameters, namely, a vertical scanning pathway, 4 W of power, and a feed rate of 9.45 cm/s. Using these improved parameters, we created optical fiber sensors with a core diameter of approximately 400 μm, no obvious optical transmission defects, a numerical aperture of 0.52 ± 0.019, a 0.886 Weibull modulus, and a 1.186 Weibull-shaped parameter. Finally, we used the optical fiber sensor fabricated using the improved parameters to measure the refractive indices of various solutions. The results show that a refractive-index resolution of 1.8 × 10−4 RIU (linear fitting R2 = 0.954) was achieved for sucrose solutions with refractive indices ranging between 1.333 and 1.383. We also adopted the particle plasmon resonance sensing scheme using the fabricated optical fibers. The results provided additional information, specifically, a superior sensor resolution of 5.73 × 10−5 RIU, and greater linearity at R2 = 0.999. PMID:23535636

  11. Estimation of metallic structure durability for a known law of stress variation

    NASA Astrophysics Data System (ADS)

    Mironov, V. I.; Lukashuk, O. A.; Ogorelkov, D. A.

    2017-12-01

    Overload of machines working in transient operational modes leads to stresses in their load-bearing metallic structures that considerably exceed the endurance limit. Estimating fatigue damage by linear summation offers a more accurate prediction of machine durability. The paper presents an alternative approach to estimating the factors of cyclic degradation of a material. Free damped vibrations of the bridge girder of an overhead crane, which follow a known logarithmic decrement, are studied. It is shown that taking cyclic degradation into account substantially decreases the estimated durability of a product.

  12. Comparative evaluation of ERTS imagery for resource inventory in land use planning

    NASA Technical Reports Server (NTRS)

    Simonson, G. H. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Numerous previously unmapped faults in central Oregon have been distinguished on ERTS-1 imagery. Tectonic mapping of fault-controlled linears demonstrates the utility of ERTS-1 imagery as a means of illustrating and studying the regional tectonics of the state. Soil colors observed on ERTS-1 frame 1075-18150-5 at the eastern end of the Columbia basin correlate very well with those from descriptions of soils from that area. Digital output from frame 1021-18151 has shown the enhanced ability to interpret such features as joint patterns, shadowed landslide blocks, bottomlands, and drainage patterns. Widespread use of wheat-fallow rotation in northern Umatilla County, Oregon, ensures that nearly one-half of the cultivated soil is devoid of vegetation much of the time. On ERTS-1 imagery, fallow fields are only slightly darker than fields of wheat stubble at the western end of the transect. Similar climate-related contrasts in soil color are visible on ERTS-1 imagery from several other portions of the Columbia Basin. Absence of steep topography in the area mentioned, however, minimizes the disturbing effect caused by shadows.

  13. 40 CFR 81.108 - Columbia Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Quality Control Regions § 81.108 Columbia Intrastate Air Quality Control Region. The Columbia Intrastate Air Quality Control Region (South Carolina) consists of the territorial area encompassed by the... 40 Protection of Environment 17 2011-07-01 2011-07-01 false Columbia Intrastate Air Quality...

  14. 40 CFR 81.108 - Columbia Intrastate Air Quality Control Region.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Quality Control Regions § 81.108 Columbia Intrastate Air Quality Control Region. The Columbia Intrastate Air Quality Control Region (South Carolina) consists of the territorial area encompassed by the... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Columbia Intrastate Air Quality...

  15. 76 FR 6525 - Airworthiness Directives; Cessna Aircraft Company (Type Certificate Previously Held by Columbia...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-07

    ... Airworthiness Directives; Cessna Aircraft Company (Type Certificate Previously Held by Columbia Aircraft... following new AD: 2011-03-04 Cessna Aircraft Company (Type Certificate Previously Held by Columbia Aircraft... the following Cessna Aircraft Company (type certificate previously held by Columbia Aircraft...

  16. The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center

    NASA Astrophysics Data System (ADS)

    Tseng, Pai-Chung; Chen, Shen-Len

    The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Therefore, researchers pay attention to thermal error compensation technologies on CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be enhanced. In this research, neural-fuzzy theory has been used to derive a thermal prediction model. An IC-type thermometer has been used to detect the temperature variation of the heat sources. The thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built to provide a user-friendly operation interface with Inprise C++ Builder. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is improved from ±10 µm to ±3 µm.
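
    The multi-variable linear regression analysis mentioned above as the comparison baseline can be sketched as follows; the sensor locations, temperature readings and drift values are hypothetical, and the neural-fuzzy model itself is not reproduced here.

      import numpy as np

      # Hypothetical data: rows are observations, columns are temperature readings
      # from sensors placed near the spindle, ball screws and machine bed (deg C).
      T = np.array([[22.1, 23.0, 21.8],
                    [24.6, 26.1, 22.4],
                    [27.3, 29.5, 23.1],
                    [29.8, 32.2, 23.9],
                    [31.5, 34.0, 24.5]])
      drift_um = np.array([2.0, 18.0, 41.0, 62.0, 77.0])   # measured thermal drift (micrometres)

      # Multi-variable linear regression: drift ~ X @ beta, with an intercept column.
      X = np.column_stack([np.ones(len(T)), T])
      beta, *_ = np.linalg.lstsq(X, drift_um, rcond=None)

      def predict_drift(temps):
          """Predict thermal drift for a new temperature reading; a controller would
          apply the negative of this value as an axis offset."""
          return np.dot(np.r_[1.0, temps], beta)

      print(predict_drift([28.0, 30.0, 23.5]))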

  17. Encapsulated Ball Bearings for Rotary Micro Machines

    DTIC Science & Technology

    2007-01-01

    maintaining fabrication simplicity and stability. Although ball bearings have been demonstrated in devices such as linear micromotors [6, 7] and rotary... micromotors [8], they have yet to be integrated into the microfabrication process to fully constrain the dynamic element. In the cases of both Modafe et

  18. Radiotherapy equipment and departments in the European countries: final results from the ESTRO-HERO survey.

    PubMed

    Grau, Cai; Defourny, Noémie; Malicki, Julian; Dunscombe, Peter; Borras, Josep M; Coffey, Mary; Slotman, Ben; Bogusz, Marta; Gasparotto, Chiara; Lievens, Yolande; Kokobobo, Arianit; Sedlmayer, Felix; Slobina, Elena; Feyen, Karen; Hadjieva, Tatiana; Odrazka, Karel; Grau Eriksen, Jesper; Jaal, Jana; Bly, Ritva; Chauvet, Bruno; Willich, Normann; Polgar, Csaba; Johannsson, Jakob; Cunningham, Moya; Magrini, Stefano; Atkocius, Vydmantas; Untereiner, Michel; Pirotta, Martin; Karadjinovic, Vanja; Levernes, Sverre; Sladowski, Krystol; Lurdes Trigo, Maria; Šegedin, Barbara; Rodriguez, Aurora; Lagerlund, Magnus; Pastoors, Bert; Hoskin, Peter; Vaarkamp, Jaap; Cleries Soler, Ramon

    2014-08-01

    Documenting the distribution of radiotherapy departments and the availability of radiotherapy equipment in the European countries is an important part of HERO - the ESTRO Health Economics in Radiation Oncology project. HERO has the overall aim to develop a knowledge base of the provision of radiotherapy in Europe and build a model for health economic evaluation of radiation treatments at the European level. The aim of the current report is to describe the distribution of radiotherapy equipment in European countries. An 84-item questionnaire was sent out to European countries, principally through their national societies. The current report includes a detailed analysis of radiotherapy departments and equipment (questionnaire items 26-29), analyzed in relation to the annual number of treatment courses and the socio-economic status of the countries. The analysis is based on validated responses from 28 of the 40 European countries defined by the European Cancer Observatory (ECO). A large variation between countries was found for most parameters studied. There were 2192 linear accelerators, 96 dedicated stereotactic machines, and 77 cobalt machines reported in the 27 countries where this information was available. A total of 12 countries had at least one cobalt machine in use. There was a median of 0.5 simulator per MV unit (range 0.3-1.5) and 1.4 (range 0.4-4.4) simulators per department. Of the 874 simulators, a total of 654 (75%) were capable of 3D imaging (CT-scanner or CBCT-option). The number of MV machines (cobalt, linear accelerators, and dedicated stereotactic machines) per million inhabitants ranged from 1.4 to 9.5 (median 5.3) and the average number of MV machines per department from 0.9 to 8.2 (median 2.6). The average number of treatment courses per year per MV machine varied from 262 to 1061 (median 419). While 69% of MV units were capable of IMRT only 49% were equipped for image guidance (IGRT). There was a clear relation between socio-economic status, as measured by GNI per capita, and availability of radiotherapy equipment in the countries. In many low income countries in Southern and Central-Eastern Europe there was very limited access to radiotherapy and especially to equipment for IMRT or IGRT. The European average number of MV machines per million inhabitants and per department is now better in line with QUARTS recommendations from 2005, but the survey also showed a significant heterogeneity in the access to modern radiotherapy equipment in Europe. High income countries especially in Northern-Western Europe are well-served with radiotherapy resources, other countries are facing important shortages of both equipment in general and especially machines capable of delivering high precision conformal treatments (IMRT, IGRT). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Characterization of optically stimulated luminescence dosemeters to measure organ doses in diagnostic radiology

    PubMed Central

    Endo, A; Katoh, T; Kobayashi, I; Joshi, R; Sur, J; Okano, T

    2012-01-01

    Objective The aim of this study was to assess the characteristics of an optically stimulated luminescence dosemeter (OSLD) for use in diagnostic radiology and to apply the OSLD in measuring the organ doses delivered by panoramic radiography. Methods The dose linearity, energy dependency and angular dependency of aluminium oxide-based OSLDs were examined using an X-ray generator to simulate various exposure settings in diagnostic radiology. The organ doses were then measured by inserting the dosemeters into an anthropomorphic phantom while using three panoramic machines. Results The dosemeters demonstrated consistent dose linearity (coefficient of variation <1.5%) and no significant energy dependency (coefficient of variation <1.5%) under the applied exposure conditions. They also exhibited negligible angular dependency (≤10%). The organ doses resulting from panoramic imaging with the three machines were then calculated from the dosemeter readings. Conclusion OSLDs can be utilized to measure the organ doses in diagnostic radiology. The availability of these dosemeters in strip form is a further practical advantage. PMID:22116136

  20. Application of variable teeth pitch face mill as chatter suppression method for non-rigid technological system

    NASA Astrophysics Data System (ADS)

    Svinin, V. M.; Savilov, A. V.

    2018-03-01

    The article describes the results of experimental studies on how the type of tooth-pitch variation affects chatter suppression efficiency for a low-rigidity workpiece, both in the feed direction and in the direction normal to the machined surface. Mill performance was assessed by comparing the amplitudes of the dominant chatter harmonics obtained with constant and variable tooth pitches. The following variable-pitch patterns were studied: alternating, linearly rising, and linearly rising-falling. The angle difference between adjacent tooth pitches ranged from 0 to 10°, from 5 to 8° and from 5 to 10°, in 1° increments. The experiments showed that, for all variants, the best machining dynamics were obtained when the difference between adjacent pitches corresponded to half the chatter wavelength along the cutting surface. The alternating variable tooth pitch is the most efficient, as it almost completely suppresses the chatter. Theoretical explanations of the results are presented.
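
    One way to read the reported half-wavelength condition (an assumption for illustration, not taken from the paper's derivation) is that adjacent teeth should arrive half a chatter period apart, which links the suggested pitch difference to spindle speed and chatter frequency:

      # Illustrative reading of the half-wavelength condition (assumed, not from the paper):
      # if adjacent teeth are separated by an extra time lag of half a chatter period,
      # their regenerative waves are in anti-phase. With spindle speed n (rev/s) and
      # chatter frequency f_c (Hz), the pitch-angle difference is
      #   delta_phi = 0.5 * (1 / f_c) * 360 * n   [degrees]
      n_rpm = 1200.0                   # assumed spindle speed
      f_c = 600.0                      # assumed dominant chatter frequency, Hz
      n = n_rpm / 60.0                 # rev/s
      delta_phi = 0.5 * (1.0 / f_c) * 360.0 * n
      print(f"suggested adjacent pitch difference ~ {delta_phi:.1f} deg")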

  1. Programmable motion of DNA origami mechanisms.

    PubMed

    Marras, Alexander E; Zhou, Lifeng; Su, Hai-Jun; Castro, Carlos E

    2015-01-20

    DNA origami enables the precise fabrication of nanoscale geometries. We demonstrate an approach to engineer complex and reversible motion of nanoscale DNA origami machine elements. We first design, fabricate, and characterize the mechanical behavior of flexible DNA origami rotational and linear joints that integrate stiff double-stranded DNA components and flexible single-stranded DNA components to constrain motion along a single degree of freedom and demonstrate the ability to tune the flexibility and range of motion. Multiple joints with simple 1D motion were then integrated into higher order mechanisms. One mechanism is a crank-slider that couples rotational and linear motion, and the other is a Bennett linkage that moves between a compacted bundle and an expanded frame configuration with a constrained 3D motion path. Finally, we demonstrate distributed actuation of the linkage using DNA input strands to achieve reversible conformational changes of the entire structure on ∼ minute timescales. Our results demonstrate programmable motion of 2D and 3D DNA origami mechanisms constructed following a macroscopic machine design approach.

  2. Programmable motion of DNA origami mechanisms

    PubMed Central

    Marras, Alexander E.; Zhou, Lifeng; Su, Hai-Jun; Castro, Carlos E.

    2015-01-01

    DNA origami enables the precise fabrication of nanoscale geometries. We demonstrate an approach to engineer complex and reversible motion of nanoscale DNA origami machine elements. We first design, fabricate, and characterize the mechanical behavior of flexible DNA origami rotational and linear joints that integrate stiff double-stranded DNA components and flexible single-stranded DNA components to constrain motion along a single degree of freedom and demonstrate the ability to tune the flexibility and range of motion. Multiple joints with simple 1D motion were then integrated into higher order mechanisms. One mechanism is a crank–slider that couples rotational and linear motion, and the other is a Bennett linkage that moves between a compacted bundle and an expanded frame configuration with a constrained 3D motion path. Finally, we demonstrate distributed actuation of the linkage using DNA input strands to achieve reversible conformational changes of the entire structure on ∼minute timescales. Our results demonstrate programmable motion of 2D and 3D DNA origami mechanisms constructed following a macroscopic machine design approach. PMID:25561550

  3. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and small number of samples, as well as the non-linearity of the problem. It is difficult to get satisfying results with conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM-RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm implemented with Gaussian-kernel SVMs: a genetic algorithm is used to search for the pair of optimal parameters, as a better alternative to the common practice of simply selecting the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, for hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
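
    A minimal sketch of nonlinear SVM-RFE with a Gaussian kernel is given below, ranking features by how little their removal changes the kernel margin term (a Guyon-style criterion); the gamma and C values are fixed placeholders standing in for the parameters that the paper selects with a genetic algorithm, and the synthetic data set is only for demonstration.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC
      from sklearn.metrics.pairwise import rbf_kernel

      def svm_rfe_gaussian(X, y, gamma=0.05, C=1.0, n_keep=5):
          """Nonlinear SVM-RFE sketch: repeatedly drop the feature whose removal
          changes the margin term W2 = a.T K a the least."""
          remaining = list(range(X.shape[1]))
          while len(remaining) > n_keep:
              clf = SVC(kernel="rbf", gamma=gamma, C=C).fit(X[:, remaining], y)
              sv = X[:, remaining][clf.support_]      # support vectors
              a = clf.dual_coef_.ravel()              # alpha_i * y_i
              w2 = a @ rbf_kernel(sv, gamma=gamma) @ a
              scores = []
              for j in range(len(remaining)):         # effect of removing each feature
                  sv_j = np.delete(sv, j, axis=1)
                  scores.append(abs(w2 - a @ rbf_kernel(sv_j, gamma=gamma) @ a))
              remaining.pop(int(np.argmin(scores)))   # discard the least informative one
          return remaining

      X, y = make_classification(n_samples=80, n_features=30, n_informative=4, random_state=0)
      print("selected features:", svm_rfe_gaussian(X, y))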

  4. Estimating False Positive Contamination in Crater Annotations from Citizen Science Data

    NASA Astrophysics Data System (ADS)

    Tar, P. D.; Bugiolacchi, R.; Thacker, N. A.; Gilmour, J. D.

    2017-01-01

    Web-based citizen science often involves the classification of image features by large numbers of minimally trained volunteers, such as the identification of lunar impact craters under the Moon Zoo project. Whilst such approaches facilitate the analysis of large image data sets, the inexperience of users and ambiguity in image content can lead to contamination from false positive identifications. We give an approach, using Linear Poisson Models and image template matching, that can quantify levels of false positive contamination in citizen science Moon Zoo crater annotations. Linear Poisson Models are a form of machine learning which supports predictive error modelling and goodness-of-fit testing, unlike most alternative machine learning methods. The proposed supervised learning system can reduce the variability in crater counts whilst providing predictive error assessments of the estimated quantities of remaining true versus false annotations. In an area of research influenced by human subjectivity, the proposed method provides a level of objectivity through the utilisation of image evidence, guided by candidate crater identifications.
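
    The following sketch is not the authors' Linear Poisson Model implementation, but it illustrates the same idea of estimating contamination by expressing an observed histogram as a non-negative combination of "true crater" and "false positive" template histograms; all histograms here are hypothetical.

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical template histograms over some matching-score variable:
      # one for verified craters, one for known false positives.
      h_true = np.array([2., 5., 12., 25., 30., 16., 8., 2.])
      h_false = np.array([30., 22., 14., 8., 4., 2., 1., 0.])
      templates = np.column_stack([h_true / h_true.sum(), h_false / h_false.sum()])

      # Observed histogram of citizen-science annotations (hypothetical counts).
      h_obs = np.array([55., 48., 40., 52., 54., 30., 15., 4.])

      # Non-negative least squares gives the estimated number of annotations drawn
      # from each template; Poisson errors could then be propagated from these counts.
      weights, _ = nnls(templates, h_obs)
      n_true, n_false = weights
      print(f"estimated true: {n_true:.0f}, false positives: {n_false:.0f} "
            f"({100 * n_false / (n_true + n_false):.1f}% contamination)")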

  5. The Queen Charlotte-Fairweather Fault Zone - Geomorphology of a submarine transform fault, offshore British Columbia and southeastern Alaska

    NASA Astrophysics Data System (ADS)

    Walton, M. A. L.; Barrie, V.; Greene, H. G.; Brothers, D. S.; Conway, K.; Conrad, J. E.

    2017-12-01

    The Queen Charlotte-Fairweather (QC-FW) Fault Zone is the Pacific - North America transform plate boundary and is clearly seen for over 900 km on the seabed as a linear and continuous feature from offshore central Haida Gwaii, British Columbia to Icy Point, Alaska. Recently (July - September 2017) collected multibeam bathymetry, seismic-reflection profiles and sediment cores provide evidence for the continuous strike-slip morphology along the continental shelfbreak and upper slope, including a linear fault valley, offset submarine canyons and gullies, and right-step offsets (pull apart basins). South of central Haida Gwaii, the QC-FW is represented by several NW-SE to N-S trending faults to the southern end of the islands. Adjacent to the fault at the southern extreme and offshore Dixon Entrance (Canada/US boundary) are 400 to 600 m high mud volcanos in 1000 to 1600 m water depth that have plumes extending up 700 m into the water column and contain extensive carbonate crusts and chemosynthetic communities within the craters. In addition, gas plumes have been identified that appear to be directly associated with the fault zone. Surficial Quaternary sediments within and adjacent to the central and southern fault date either to the deglaciation of this region of the Pacific north coast (16,000 years BP) or to the last interstadial period ( 40,000 years BP). Sediment accumulation is minimal and the sediments cored are primarily hard-packed dense sands that appear to have been transported along the fault valley. The majority of the right-lateral slip along the entire QC-FW appears to be accommodated by the single fault north of the convergence at its southern most extent.

  6. The Queen Charlotte-Fairweather Fault Zone - Geomorphology of a submarine transform fault, offshore British Columbia and southeastern Alaska

    NASA Astrophysics Data System (ADS)

    Walton, M. A. L.; Barrie, V.; Greene, H. G.; Brothers, D. S.; Conway, K.; Conrad, J. E.

    2016-12-01

    The Queen Charlotte-Fairweather (QC-FW) Fault Zone is the Pacific - North America transform plate boundary and is clearly seen for over 900 km on the seabed as a linear and continuous feature from offshore central Haida Gwaii, British Columbia to Icy Point, Alaska. Recently (July - September 2017) collected multibeam bathymetry, seismic-reflection profiles and sediment cores provide evidence for the continuous strike-slip morphology along the continental shelfbreak and upper slope, including a linear fault valley, offset submarine canyons and gullies, and right-step offsets (pull apart basins). South of central Haida Gwaii, the QC-FW is represented by several NW-SE to N-S trending faults to the southern end of the islands. Adjacent to the fault at the southern extreme and offshore Dixon Entrance (Canada/US boundary) are 400 to 600 m high mud volcanos in 1000 to 1600 m water depth that have plumes extending up 700 m into the water column and contain extensive carbonate crusts and chemosynthetic communities within the craters. In addition, gas plumes have been identified that appear to be directly associated with the fault zone. Surficial Quaternary sediments within and adjacent to the central and southern fault date either to the deglaciation of this region of the Pacific north coast (16,000 years BP) or to the last interstadial period ( 40,000 years BP). Sediment accumulation is minimal and the sediments cored are primarily hard-packed dense sands that appear to have been transported along the fault valley. The majority of the right-lateral slip along the entire QC-FW appears to be accommodated by the single fault north of the convergence at its southern most extent.

  7. Characterization of Space Shuttle External Tank Thermal Protection System (TPS) Materials in Support of the Columbia Accident Investigation

    NASA Technical Reports Server (NTRS)

    Wingard, Charles D.

    2004-01-01

    NASA suffered the loss of the seven-member crew of the Space Shuttle Columbia on February 1, 2003 when the vehicle broke apart upon re-entry to the Earth's atmosphere. The final report of the Columbia Accident Investigation Board (CAIB) determined that the accident was caused by a launch ascent incident: a suitcase-sized chunk of insulating foam on the Shuttle's External Tank (ET) broke off and, moving at almost 500 mph, struck an area of the leading edge of the Shuttle's left wing. As a result, one or more of the protective Reinforced Carbon-Carbon (RCC) panels on the wing leading edge were damaged. Upon re-entry, superheated air approaching 3,000 F breached the wing damage and caused the vehicle breakup and loss of crew. The large chunk of insulating foam that broke off during the Columbia launch was determined to come from the so-called bipod ramp area where the Shuttle's orbiter (containing crew) is attached to the ET. Underneath the foam in the bipod ramp area is a layer of TPS that is a cork-filled silicone rubber composite. In March 2003, the NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama received cured samples of the foam and composite for testing from the Michoud Assembly Facility (MAF) in New Orleans, Louisiana. The MAF is where the Shuttle's ET is manufactured. The foam and composite TPS materials for the ET have been well characterized for mechanical property data at the super-cold temperatures of the liquid oxygen and hydrogen fuels used in the ET. However, modulus data on these materials is not as well characterized. The TA Instruments 2980 Dynamic Mechanical Analyzer (DMA) was used to determine the modulus of the two TPS materials over a range of -145 to 95 C in the dual cantilever bending mode. Multi-strain, fixed frequency DMA tests were followed by multi-frequency, fixed strain tests to determine the approximate bounds of linear viscoelastic behavior for the two materials. Additional information is included in the original extended abstract.

  8. Design of Human-Machine Interface and altering of pelvic obliquity with RGR Trainer.

    PubMed

    Pietrusinski, Maciej; Unluhisarcikli, Ozer; Mavroidis, Constantinos; Cajigas, Iahn; Bonato, Paolo

    2011-01-01

    The Robotic Gait Rehabilitation (RGR) Trainer targets secondary gait deviations in stroke survivors undergoing rehabilitation. Using an impedance control strategy and a linear electromagnetic actuator, the device generates a force field to control pelvic obliquity through a Human-Machine Interface (i.e. a lower body exoskeleton). Herein we describe the design of the RGR Trainer Human-Machine Interface (HMI) and we demonstrate the system's ability to alter the pattern of movement of the pelvis during gait in a healthy subject. Results are shown for experiments during which we induced hip-hiking - in healthy subjects. Our findings indicate that the RGR Trainer has the ability of affecting pelvic obliquity during gait. Furthermore, we provide preliminary evidence of short-term retention of the modified pelvic obliquity pattern induced by the RGR Trainer. © 2011 IEEE

  9. What is the machine learning?

    NASA Astrophysics Data System (ADS)

    Chang, Spencer; Cohen, Timothy; Ostdiek, Bryan

    2018-03-01

    Applications of machine learning tools to problems of physical interest are often criticized for producing sensitivity at the expense of transparency. To address this concern, we explore a data planing procedure for identifying combinations of variables—aided by physical intuition—that can discriminate signal from background. Weights are introduced to smooth away the features in a given variable(s). New networks are then trained on this modified data. Observed decreases in sensitivity diagnose the variable's discriminating power. Planing also allows the investigation of the linear versus nonlinear nature of the boundaries between signal and background. We demonstrate the efficacy of this approach using a toy example, followed by an application to an idealized heavy resonance scenario at the Large Hadron Collider. By unpacking the information being utilized by these algorithms, this method puts in context what it means for a machine to learn.
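
    A toy version of the data planing procedure described above might look like the following sketch: per-event weights flatten a chosen variable, a new network is trained on the reweighted (here, resampled) data, and the drop in AUC diagnoses that variable's discriminating power. The data set, network settings and the joint (rather than per-class) flattening are simplifying assumptions.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score

      X, y = make_classification(n_samples=4000, n_features=6, n_informative=3, random_state=1)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

      def planing_weights(x, bins=30):
          """Per-event weights ~ 1/density of variable x, flattening its distribution."""
          hist, edges = np.histogram(x, bins=bins, density=True)
          idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
          return 1.0 / np.maximum(hist[idx], 1e-6)

      base = MLPClassifier(max_iter=500, random_state=0).fit(Xtr, ytr)
      auc_ref = roc_auc_score(yte, base.predict_proba(Xte)[:, 1])

      # Plane away feature 0: MLPClassifier takes no sample weights, so resample instead.
      w = planing_weights(Xtr[:, 0])
      rng = np.random.default_rng(0)
      sel = rng.choice(len(Xtr), size=len(Xtr), p=w / w.sum())
      planed = MLPClassifier(max_iter=500, random_state=0).fit(Xtr[sel], ytr[sel])
      auc_planed = roc_auc_score(yte, planed.predict_proba(Xte)[:, 1])

      print(f"AUC before planing: {auc_ref:.3f}, after planing feature 0: {auc_planed:.3f}")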

  10. Improving the performance of extreme learning machine for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong

    2015-05-01

    Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections will result in suboptimal performance.
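
    A minimal ELM of the kind discussed above can be written in a few lines: random hidden weights, a sigmoidal activation, and a regularized least-squares solve for the output weights. The linear rule relating hidden-layer size to training-set size is shown with placeholder coefficients, since the fitted values belong to the paper's experiments.

      import numpy as np

      class SimpleELM:
          """Minimal ELM: random hidden layer plus a regularized least-squares readout."""
          def __init__(self, n_hidden, reg=1e-3, seed=0):
              self.n_hidden, self.reg = n_hidden, reg
              self.rng = np.random.default_rng(seed)

          def _hidden(self, X):
              return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoidal activations

          def fit(self, X, Y):                 # Y: one-hot labels or regression targets
              self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
              self.b = self.rng.normal(size=self.n_hidden)
              H = self._hidden(X)
              A = H.T @ H + self.reg * np.eye(self.n_hidden)
              self.beta = np.linalg.solve(A, H.T @ Y)   # output weights
              return self

          def predict(self, X):
              return self._hidden(X) @ self.beta

      # Placeholder form of the empirical rule L = a*N + b linking training-set size N
      # to hidden-layer size L (a and b would be estimated from two small training sets).
      a, b, N = 0.8, 50, 1000
      Xtr = np.random.default_rng(1).normal(size=(N, 20))
      Ytr = np.eye(5)[np.random.default_rng(2).integers(0, 5, N)]
      elm = SimpleELM(n_hidden=int(a * N + b)).fit(Xtr, Ytr)
      print(elm.predict(Xtr[:3]).argmax(axis=1))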

  11. Design of Human – Machine Interface and Altering of Pelvic Obliquity with RGR Trainer

    PubMed Central

    Pietrusinski, Maciej; Unluhisarcikli, Ozer; Mavroidis, Constantinos; Cajigas, Iahn; Bonato, Paolo

    2012-01-01

    The Robotic Gait Rehabilitation (RGR) Trainer targets secondary gait deviations in stroke survivors undergoing rehabilitation. Using an impedance control strategy and a linear electromagnetic actuator, the device generates a force field to control pelvic obliquity through a Human-Machine Interface (i.e. a lower body exoskeleton). Herein we describe the design of the RGR Trainer Human-Machine Interface (HMI) and we demonstrate the system’s ability to alter the pattern of movement of the pelvis during gait in a healthy subject. Results are shown for experiments during which we induced hip-hiking – in healthy subjects. Our findings indicate that the RGR Trainer has the ability of affecting pelvic obliquity during gait. Furthermore, we provide preliminary evidence of short-term retention of the modified pelvic obliquity pattern induced by the RGR Trainer. PMID:22275693

  12. 75 FR 41762 - Safety Zone; Annual Kennewick, WA, Columbia Unlimited Hydroplane Races, Kennewick, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-19

    ...-AA00 Safety Zone; Annual Kennewick, WA, Columbia Unlimited Hydroplane Races, Kennewick, WA AGENCY..., Columbia Unlimited Hydroplane Races'' also known as the Tri-City Water Follies Hydroplane Races. The safety... Association hosts annual hydroplane races on the Columbia River in Kennewick, Washington. The Association is...

  13. To direct the Mayor of the District of Columbia to establish a District of Columbia National Guard Educational Assistance Program to encourage the enlistment and retention of persons in the District of Columbia National Guard by providing financial assistance to enable members of the National Guard of the District of Columbia to attend undergraduate, vocational, or technical courses.

    THOMAS, 112th Congress

    Rep. Norton, Eleanor Holmes [D-DC-At Large

    2011-03-17

    House - 04/01/2011 Referred to the Subcommittee on Health Care, District of Columbia, Census and the National Archives.

  14. Integration of Machining and Inspection in Aerospace Manufacturing

    NASA Astrophysics Data System (ADS)

    Simpson, Bart; Dicken, Peter J.

    2011-12-01

    The main challenge for aerospace manufacturers today is to develop the ability to produce high-quality products on a consistent basis as quickly as possible and at the lowest-possible cost. At the same time, rising material prices are making the cost of scrap higher than ever so making it more important to minimise waste. Proper inspection and quality control methods are no longer a luxury; they are an essential part of every manufacturing operation that wants to grow and be successful. However, simply bolting on some quality control procedures to the existing manufacturing processes is not enough. Inspection must be fully-integrated with manufacturing for the investment to really produce significant improvements. The traditional relationship between manufacturing and inspection is that machining is completed first on the company's machine tools and the components are then transferred to dedicated inspection equipment to be approved or rejected. However, as machining techniques become more sophisticated, and as components become larger and more complex, there are a growing number of cases where closer integration is required to give the highest productivity and the biggest reductions in wastage. Instead of a simple linear progression from CAD to CAM to machining to inspection, a more complicated series of steps is needed, with extra data needed to fill any gaps in the information available at the various stages. These new processes can be grouped under the heading of "adaptive machining". The programming of most machining operations is based around knowing three things: the position of the workpiece on the machine, the starting shape of the material to be machined, and the final shape that needs to be achieved at the end of the operation. Adaptive machining techniques allow successful machining when at least one of those elements is unknown, by using in-process measurement to close the information gaps in the process chain. It also allows any errors to be spotted earlier in the manufacturing process, so helping the problems to be resolved more quickly and at lower cost.

  15. Antiretroviral drug costs and prescription patterns in British Columbia, Canada: 1996-2011.

    PubMed

    Nosyk, Bohdan; Montaner, Julio S G; Yip, Benita; Lima, Viviane D; Hogg, Robert S

    2014-04-01

    Treatment options and therapeutic guidelines have evolved substantially since highly active antiretroviral treatment (HAART) became the standard of HIV care in 1996. We conducted the present population-based analysis to characterize the determinants of direct costs of HAART over time in British Columbia, Canada. We considered individuals ever receiving HAART in British Columbia from 1996 to 2011. Linear mixed-effects regression models were constructed to determine the effects of demographic indicators, clinical stage, and treatment characteristics on quarterly costs of HAART (in 2010$CDN) among individuals initiating in different temporal periods. The least-square mean values were estimated by CD4 category and over time for each temporal cohort. Longitudinal data on HAART recipients (N = 9601, 17.6% female, mean age at initiation = 40.5) were analyzed. Multiple regression analyses identified demographics, treatment adherence, and pharmacological class to be independently associated with quarterly HAART costs. Higher CD4 cell counts were associated with modestly lower costs among pre-HAART initiators [least-square means (95% confidence interval), CD4 > 500: 4674 (4632-4716); CD4: 350-499: 4765 (4721-4809); CD4: 200-349: 4826 (4780-4871); CD4 <200: 4809 (4759-4859)]; however, these differences were not significant among post-2003 HAART initiators. Population-level mean costs increased through 2006 and then stabilized; post-2003 HAART initiators incurred quarterly costs up to 23% lower than pre-2000 HAART initiators in 2010. Our results highlight the magnitude of the temporal changes in HAART costs, and disparities between recent and pre-HAART initiators. This methodology can improve the precision of economic modeling efforts by using detailed cost functions for annual, population-level medication costs according to the distribution of clients by clinical stage and era of treatment initiation.
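
    A linear mixed-effects model of the kind used above could be specified as in the sketch below, with a random intercept per patient to account for repeated quarterly cost observations; the data frame, column names and effect sizes are synthetic stand-ins, not the study's variables.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic long-format data standing in for the BC cohort: one row per
      # patient-quarter; column names are illustrative only.
      rng = np.random.default_rng(0)
      n_pat, n_q = 200, 8
      df = pd.DataFrame({
          "patient_id": np.repeat(np.arange(n_pat), n_q),
          "cd4_category": rng.choice(["<200", "200-349", "350-499", ">500"], n_pat * n_q),
          "adherence": rng.uniform(0.5, 1.0, n_pat * n_q),
          "era": np.repeat(rng.choice(["pre-2000", "2000-2003", "post-2003"], n_pat), n_q),
      })
      df["quarterly_cost"] = (4800 - 100 * (df["cd4_category"] == ">500")
                              + 300 * df["adherence"] + rng.normal(0, 200, len(df)))

      # Linear mixed-effects model: fixed effects for clinical/treatment covariates,
      # random intercept per patient to handle the repeated quarterly measurements.
      model = smf.mixedlm("quarterly_cost ~ C(cd4_category) + adherence + C(era)",
                          data=df, groups=df["patient_id"])
      print(model.fit().summary())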

  16. Yellowstone plume trigger for Basin and Range extension and emplacement of the Nevada-Columbia Basin magmatic belt

    USGS Publications Warehouse

    Camp, Victor E; Pierce, Kenneth L.; Morgan Morzel, Lisa Ann

    2015-01-01

    Widespread extension began across the northern and central Basin and Range Province at 17–16 Ma, contemporaneous with magmatism along the Nevada–Columbia Basin magmatic belt, a linear zone of dikes and volcanic centers that extends for >1000 km, from southern Nevada to the Columbia Basin of eastern Washington. This belt was generated above an elongated sublithospheric melt zone associated with arrival of the Yellowstone mantle plume, with a north-south tabular shape attributed to plume ascent through a propagating fracture in the Juan de Fuca slab. Dike orientation along the magmatic belt suggests an extension direction of 245°–250°, but this trend lies oblique to the regional extension direction of 280°–300° during coeval and younger Basin and Range faulting, an ∼45° difference. Field relationships suggest that this magmatic trend was not controlled by regional stress in the upper crust, but rather by magma overpressure from below and forceful dike injection with an orientation inherited from a deeper process in the sublithospheric mantle. The southern half of the elongated zone of mantle upwelling was emplaced beneath a cratonic lithosphere with an elevated surface derived from Late Cretaceous to mid-Tertiary crustal thickening. This high Nevadaplano was primed for collapse with high gravitational potential energy under the influence of regional stress, partly derived from boundary forces due to Pacific–North American plate interaction. Plume arrival at 17–16 Ma resulted in advective thermal weakening of the lithosphere, mantle traction, delamination, and added buoyancy to the northern and central Basin and Range. It was not the sole cause of Basin and Range extension, but rather the catalyst for extension of the Nevadaplano, which was already on the verge of regional collapse.

  17. KENNEDY SPACE CENTER, FLA. - Workers place some of the Columbia debris moved from the Columbia Debris Hangar in its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Workers place some of the Columbia debris moved from the Columbia Debris Hangar in its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  18. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have worked in this area, but very few have also taken the step-over ratio (radial depth of cut) as an input variable. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on the high-speed CNC milling machine using a flat end mill of 10 mm diameter. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
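
    A sketch of the experimental design and first-order regression step is given below; the L9 array is the standard Taguchi layout for four three-level factors, while the factor levels and measured MRR values are hypothetical (the paper fits its models in Minitab 18).

      import numpy as np

      # Taguchi L9 orthogonal array for four 3-level factors (levels coded 0/1/2).
      L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
                     [1,0,1,2],[1,1,2,0],[1,2,0,1],
                     [2,0,2,1],[2,1,0,2],[2,2,1,0]])

      # Hypothetical factor levels: speed (rpm), feed (mm/min), depth of cut (mm), step-over ratio.
      levels = {"speed": [3000, 4000, 5000], "feed": [300, 450, 600],
                "doc": [0.2, 0.4, 0.6], "step": [0.3, 0.5, 0.7]}
      X = np.array([[levels[k][row[i]] for i, k in enumerate(levels)] for row in L9])

      # Hypothetical measured MRR for the nine runs (mm^3/min).
      mrr = np.array([180., 310., 450., 260., 420., 300., 520., 330., 410.])

      # First-order linear regression model of MRR on the four factors.
      A = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(A, mrr, rcond=None)
      names = ["intercept"] + list(levels)
      print({n: round(c, 4) for n, c in zip(names, coef)})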

  19. The CHPRC Columbia River Protection Project Quality Assurance Project Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fix, N. J.

    Pacific Northwest National Laboratory researchers are working on the CHPRC Columbia River Protection Project (hereafter referred to as the Columbia River Project). This is a follow-on project, funded by CH2M Hill Plateau Remediation Company, LLC (CHPRC), to the Fluor Hanford, Inc. Columbia River Protection Project. The work scope consists of a number of CHPRC funded, related projects that are managed under a master project (project number 55109). All contract releases associated with the Fluor Hanford Columbia River Project (Fluor Hanford, Inc. Contract 27647) and the CHPRC Columbia River Project (Contract 36402) will be collected under this master project. Each project within the master project is authorized by a CHPRC contract release that contains the project-specific statement of work. This Quality Assurance Project Plan provides the quality assurance requirements and processes that will be followed by the Columbia River Project staff.

  20. KSC-2011-6172

    NASA Image and Video Library

    2011-07-29

    NACOGDOCHES, Texas -- A round 40-inch aluminum storage tank from space shuttle Columbia's Power Reactant and Storage Distribution System rests on the edge of Lake Nacogdoches in Texas. Lower lake water levels due to a local drought allowed the debris to become exposed. Columbia was destroyed during re-entry at the conclusion of the STS-107 mission in 2003. Approximately 38 to 40 percent of Columbia was recovered following the accident in a half-million-acre search area which extended from eastern Texas and to western Louisiana. This tank is one of 18 cryogenic liquid storage tanks that flew aboard Columbia. The tank is not hazardous to people or the environment and will be transported to NASA's Kennedy Space Center for storage inside the Vehicle Assembly Building with the rest of the recovered Columbia debris. For information on STS-107 and the Columbia accident, visit http://www.nasa.gov/columbia/home/index.html. Photo credit: Nacogdoches Police Dept.

  1. Effect of electric potential and current on mandibular linear measurements in cone beam CT.

    PubMed

    Panmekiate, S; Apinhasmit, W; Petersson, A

    2012-10-01

    The purpose of this study was to compare mandibular linear distances measured from cone beam CT (CBCT) images produced by different radiographic parameter settings (peak kilovoltage and milliampere value). 20 cadaver hemimandibles with edentulous ridges posterior to the mental foramen were embedded in clear resin blocks and scanned by a CBCT machine (CB MercuRay(TM); Hitachi Medico Technology Corp., Chiba-ken, Japan). The radiographic parameters comprised four peak kilovoltage settings (60 kVp, 80 kVp, 100 kVp and 120 kVp) and two milliampere settings (10 mA and 15 mA). A 102.4 mm field of view was chosen. Each hemimandible was scanned 8 times with 8 different parameter combinations resulting in 160 CBCT data sets. On the cross-sectional images, six linear distances were measured. To assess the intraobserver variation, the 160 data sets were remeasured after 2 weeks. The measurement precision was calculated using Dahlberg's formula. With the same peak kilovoltage, the measurements yielded by different milliampere values were compared using the paired t-test. With the same milliampere value, the measurements yielded by different peak kilovoltage were compared using analysis of variance. A significant difference was considered when p < 0.05. Measurement precision varied from 0.03 mm to 0.28 mm. No significant differences in the distances were found among the different radiographic parameter combinations. Based upon the specific machine in the present study, low peak kilovoltage and milliampere value might be used for linear measurements in the posterior mandible.
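
    The statistical comparisons described above (paired t-test between mA settings, analysis of variance across kVp settings, and Dahlberg's formula for measurement precision) can be sketched as follows with simulated measurements; a simple one-way layout stands in for the study's repeated-measures design.

      import numpy as np
      from scipy import stats

      # Hypothetical linear-distance measurements (mm) of the same 20 sites, acquired
      # with two tube-current settings at a fixed peak kilovoltage.
      rng = np.random.default_rng(3)
      truth = rng.uniform(8, 15, size=20)
      d_10mA = truth + rng.normal(0, 0.1, size=20)
      d_15mA = truth + rng.normal(0, 0.1, size=20)

      # Paired t-test: the same sites are measured under both mA settings.
      t, p = stats.ttest_rel(d_10mA, d_15mA)
      print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")

      # One-way ANOVA across the four kVp settings at a fixed mA (hypothetical data).
      groups = [truth + rng.normal(0, 0.1, size=20) for _ in range(4)]   # 60/80/100/120 kVp
      F, p_anova = stats.f_oneway(*groups)
      print(f"ANOVA across kVp: F = {F:.2f}, p = {p_anova:.3f}")

      # Dahlberg's formula for precision of repeated measurements:
      # d = sqrt(sum((x1 - x2)^2) / (2 * n))
      repeat1, repeat2 = d_10mA, d_10mA + rng.normal(0, 0.05, size=20)
      dahlberg = np.sqrt(np.sum((repeat1 - repeat2) ** 2) / (2 * len(repeat1)))
      print(f"Dahlberg error = {dahlberg:.3f} mm")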

  2. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    PubMed

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationships of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) were evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model could be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
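
    The two regression models being compared can be illustrated with the sketch below, which predicts the 1RM load as the load at an assumed minimal velocity threshold; the load-velocity data and the threshold value are hypothetical, not the study's measurements.

      import numpy as np

      # Hypothetical bench-press session: loads (kg) and mean concentric velocities (m/s).
      loads = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
      vel = np.array([1.30, 1.12, 0.93, 0.76, 0.58, 0.41])
      v_1rm = 0.17          # assumed minimal velocity threshold at 1RM (m/s)

      # First-order (linear) model: velocity = a*load + b, solved for v = v_1rm.
      a1, b1 = np.polyfit(loads, vel, 1)
      rm_linear = (v_1rm - b1) / a1

      # Second-order polynomial model solved for the same velocity threshold.
      c2, c1, c0 = np.polyfit(loads, vel, 2)
      roots = [r.real for r in np.roots([c2, c1, c0 - v_1rm]) if abs(r.imag) < 1e-9]
      rm_poly = min(roots, key=lambda r: abs(r - rm_linear))   # root nearest the linear estimate

      print(f"predicted 1RM: linear {rm_linear:.1f} kg, polynomial {rm_poly:.1f} kg")

      # Between-session reliability would then be expressed as the coefficient of variation
      # of the velocity attained at each %1RM across the two sessions.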

  3. Dynamic imperfections and optimized feedback design in the Compact Linear Collider main linac

    NASA Astrophysics Data System (ADS)

    Eliasson, Peder

    2008-05-01

    The Compact Linear Collider (CLIC) main linac is sensitive to dynamic imperfections such as element jitter, injected beam jitter, and ground motion. These effects cause emittance growth that, in case of ground motion, has to be counteracted by a trajectory feedback system. The feedback system itself will, due to jitter effects and imperfect beam position monitors (BPMs), indirectly cause emittance growth. Fast and accurate simulations of both the direct and indirect effects are desirable, but due to the many elements of the CLIC main linac, simulations may become very time consuming. In this paper, an efficient way of simulating linear (or nearly linear) dynamic effects is described. The method is also shown to facilitate the analytic determination of emittance growth caused by the different dynamic imperfections while using a trajectory feedback system. Emittance growth expressions are derived for quadrupole, accelerating structure, and beam jitter, for ground motion, and for noise in the feedback BPMs. Finally, it is shown how the method can be used to design a feedback system that is optimized for the optics of the machine and the ground motion spectrum of the particular site. This feedback system gives an emittance growth rate that is approximately 10 times lower than that of traditional trajectory feedbacks. The robustness of the optimized feedback system is studied for a number of additional imperfections, e.g., dipole corrector imperfections and faulty knowledge about the machine optics, with promising results.

  4. 20 CFR 1002.39 - Are States (and their political subdivisions), the District of Columbia, the Commonwealth of...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...), the District of Columbia, the Commonwealth of Puerto Rico, and United States territories, considered... States (and their political subdivisions), the District of Columbia, the Commonwealth of Puerto Rico, and.... The District of Columbia, the Commonwealth of Puerto Rico, Guam, the Virgin Islands, and territories...

  5. 20 CFR 1002.39 - Are States (and their political subdivisions), the District of Columbia, the Commonwealth of...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...), the District of Columbia, the Commonwealth of Puerto Rico, and United States territories, considered... States (and their political subdivisions), the District of Columbia, the Commonwealth of Puerto Rico, and.... The District of Columbia, the Commonwealth of Puerto Rico, Guam, the Virgin Islands, and territories...

  6. 20 CFR 1002.39 - Are States (and their political subdivisions), the District of Columbia, the Commonwealth of...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...), the District of Columbia, the Commonwealth of Puerto Rico, and United States territories, considered... States (and their political subdivisions), the District of Columbia, the Commonwealth of Puerto Rico, and.... The District of Columbia, the Commonwealth of Puerto Rico, Guam, the Virgin Islands, and territories...

  7. 20 CFR 1002.39 - Are States (and their political subdivisions), the District of Columbia, the Commonwealth of...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...), the District of Columbia, the Commonwealth of Puerto Rico, and United States territories, considered... States (and their political subdivisions), the District of Columbia, the Commonwealth of Puerto Rico, and.... The District of Columbia, the Commonwealth of Puerto Rico, Guam, the Virgin Islands, and territories...

  8. 20 CFR 1002.39 - Are States (and their political subdivisions), the District of Columbia, the Commonwealth of...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...), the District of Columbia, the Commonwealth of Puerto Rico, and United States territories, considered... States (and their political subdivisions), the District of Columbia, the Commonwealth of Puerto Rico, and.... The District of Columbia, the Commonwealth of Puerto Rico, Guam, the Virgin Islands, and territories...

  9. 77 FR 5191 - Approval and Promulgation of Air Quality Implementation Plans; District of Columbia; Regional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-02

    ... Promulgation of Air Quality Implementation Plans; District of Columbia; Regional Haze State Implementation Plan... of Columbia Regional Haze Plan, a revision to the District of Columbia State Implementation Plan (SIP... existing anthropogenic impairment of visibility in mandatory Class I areas through a regional haze program...

  10. 75 FR 24799 - Safety Zone; Tri-City Water Follies Hydroplane Races Practice Sessions, Columbia River, Kennewick...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-06

    ...-AA00 Safety Zone; Tri-City Water Follies Hydroplane Races Practice Sessions, Columbia River, Kennewick...-City Water Follies Association hosts annual hydroplane races on the Columbia River in Kennewick... Safety Zone; Tri-City Water Follies Hydroplane Races Practice Sessions, Columbia River, Kennewick, WA (a...

  11. 77 FR 74587 - Safety Zone; Grain-Shipment Vessels, Columbia and Willamette Rivers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-17

    ... 1625-AA00 Safety Zone; Grain-Shipment Vessels, Columbia and Willamette Rivers AGENCY: Coast Guard, DHS... inbound and outbound grain-shipment vessels involved in commerce with the Columbia Grain facility on the Willamette River in Portland, OR, and the United Grain Corporation facility on the Columbia River in...

  12. 76 FR 44587 - Notice to All Interested Parties of the Termination of the Receivership of 7439, Columbia Savings...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-26

    ... Receivership of 7439, Columbia Savings and Loan Association, Beverly Hills, CA Notice is hereby given that the Federal Deposit Insurance Corporation (``FDIC'') as Receiver for Columbia Savings and Loan Association... Resolution Trust Corporation (``RTC'') was appointed Receiver for Columbia Savings and Loan Association and...

  13. THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.

    DTIC Science & Technology

    The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors...that realize a given switching function in multi-threshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple

  14. Machine Phase Fullerene Nanotechnology: 1996

    NASA Technical Reports Server (NTRS)

    Globus, Al; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    NASA has used exotic materials for spacecraft and experimental aircraft to good effect for many decades. In spite of many advances, transportation to space still costs about $10,000 per pound. Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. These studies and others suggest enormous potential for aerospace systems. Unfortunately, methods to realize diamondoid nanotechnology are at best highly speculative. Recent computational efforts at NASA Ames Research Center and computation and experiment elsewhere suggest that a nanotechnology of machine phase functionalized fullerenes may be synthetically relatively accessible and of great aerospace interest. Machine phase materials are (hypothetical) materials consisting entirely or in large part of microscopic machines. In a sense, most living matter fits this definition. To begin investigation of fullerene nanotechnology, we used molecular dynamics to study the properties of carbon nanotube based gears and gear/shaft configurations. Experiments on C60 and quantum calculations suggest that benzyne may react with carbon nanotubes to form gear teeth. Han has computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Results suggest that rotation can be converted to rotating or linear motion, and linear motion may be converted into rotation. Preliminary results suggest that these mechanical systems can be cooled by a helium atmosphere. Furthermore, Deepak has successfully simulated the use of helical electric fields generated by a laser to power fullerene gears once positive and negative charges have been added to form a dipole. Even with mechanical motion, cooling, and power, creating a viable nanotechnology requires support structures, computer control, a system architecture, a variety of components, and some approach to manufacture. Additional information is contained within the original extended abstract.

  15. Combining information from 3 anatomic regions in the diagnosis of glaucoma with time-domain optical coherence tomography.

    PubMed

    Wang, Mingwu; Lu, Ake Tzu-Hui; Varma, Rohit; Schuman, Joel S; Greenfield, David S; Huang, David

    2014-03-01

    To improve the diagnosis of glaucoma by combining time-domain optical coherence tomography (TD-OCT) measurements of the optic disc, circumpapillary retinal nerve fiber layer (RNFL), and macular retinal thickness. Ninety-six age-matched normal and 96 perimetric glaucoma participants were included in this observational, cross-sectional study. Or-logic, support vector machine, relevance vector machine, and linear discrimination function were used to analyze the performances of combined TD-OCT diagnostic variables. The area under the receiver-operating curve (AROC) was used to evaluate the diagnostic accuracy and to compare the diagnostic performance of single and combined anatomic variables. The best RNFL thickness variables were the inferior (AROC=0.900), overall (AROC=0.892), and superior quadrants (AROC=0.850). The best optic disc variables were horizontal integrated rim width (AROC=0.909), vertical integrated rim area (AROC=0.908), and cup/disc vertical ratio (AROC=0.890). All macular retinal thickness variables had AROCs of 0.829 or less. Combining the top 3 RNFL and optic disc variables in optimizing glaucoma diagnosis, support vector machine had the highest AROC, 0.954, followed by or-logic (AROC=0.946), linear discrimination function (AROC=0.946), and relevance vector machine (AROC=0.943). All combination diagnostic variables had significantly larger AROCs than any single diagnostic variable. There are no significant differences among the combination diagnostic indices. With TD-OCT, RNFL and optic disc variables had better diagnostic accuracy than macular retinal variables. Combining top RNFL and optic disc variables significantly improved diagnostic performance. Clinically, or-logic classification was the most practical analytical tool with sufficient accuracy to diagnose early glaucoma.
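
    A rough analogue of the variable-combination step is sketched below: the AROC of a single measurement is compared with the cross-validated AROC of an SVM trained on several measurements at once. The synthetic features only mimic the three RNFL and three optic disc variables and carry none of the clinical values reported above.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_predict
      from sklearn.svm import SVC
      from sklearn.metrics import roc_auc_score

      # Synthetic stand-in: 96 normal vs 96 glaucoma subjects, six OCT-like variables.
      X, y = make_classification(n_samples=192, n_features=6, n_informative=4, random_state=7)

      # AROC of a single variable (e.g. inferior-quadrant RNFL thickness).
      auc_single = roc_auc_score(y, X[:, 0])
      auc_single = max(auc_single, 1 - auc_single)      # orientation-independent

      # AROC of the combined index from an SVM over all six variables (cross-validated).
      scores = cross_val_predict(SVC(kernel="rbf", probability=True, random_state=0),
                                 X, y, cv=5, method="predict_proba")[:, 1]
      auc_combined = roc_auc_score(y, scores)
      print(f"single-variable AROC = {auc_single:.3f}, combined SVM AROC = {auc_combined:.3f}")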

  16. The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.

    PubMed

    Nowak, Markus; Castellini, Claudio

    2016-01-01

    Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain, whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks without the need of explicitly training multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
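
    The core of the LET idea, modelling multi-DOF activations as linear combinations of recorded single-DOF sEMG patterns and adding them to the training set, can be sketched as follows; the channel counts, signal model and ridge-regression intent detector are illustrative assumptions rather than the authors' pipeline.

      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      n_ch, n_dof, n_rep = 8, 3, 40          # sEMG channels, DOFs, repetitions per DOF

      # Hypothetical calibration data: sEMG patterns recorded during single-DOF activations.
      single_emg = {d: rng.normal(loc=d + 1, scale=0.3, size=(n_rep, n_ch)) for d in range(n_dof)}
      X = np.vstack([single_emg[d] for d in range(n_dof)])
      Y = np.vstack([np.eye(n_dof)[d] * np.ones((n_rep, 1)) for d in range(n_dof)])

      # LET-style augmentation (sketch): model a multi-DOF activation as the linear
      # combination of paired single-DOF sEMG patterns and label it accordingly.
      for i, j in [(0, 1), (0, 2), (1, 2)]:
          emg_combo = single_emg[i][:20] + single_emg[j][:20]
          lab_combo = np.tile(np.eye(n_dof)[i] + np.eye(n_dof)[j], (20, 1))
          X = np.vstack([X, emg_combo])
          Y = np.vstack([Y, lab_combo])

      # Any regression-based intent detector can be trained on the augmented set;
      # ridge regression is used here purely as a simple stand-in.
      model = Ridge(alpha=1.0).fit(X, Y)
      print(model.predict(single_emg[0][:1] + single_emg[2][:1]))   # should activate DOFs 0 and 2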

  17. Fabrication of micro-lens array on convex surface by means of micro-milling

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Du, Yunlong; Wang, Bo; Shan, Debin

    2014-08-01

    In order to extend the application of micro-milling technology and to fabricate ultra-precision optical surfaces with complex microstructures, this paper presents primary experimental research on micro-milling a complex microstructure array. A complex microstructure array surface with varying parameters is designed, and a mathematical model of the surface is set up and simulated. To fabricate the designed microstructure array surface, a three-axis ultra-precision micro-milling machine tool is developed: aerostatic guideways driven directly by linear motors are adopted to guarantee sufficient machine stiffness, and a numerical control strategy using linear encoders with 5 nm resolution as the feedback of the control system is employed to ensure extremely high motion control accuracy. With the help of CAD/CAM technology, convex micro-lens arrays on convex spherical surfaces at different scales are fabricated in polyvinyl chloride (PVC) and pure copper using a micro tungsten carbide ball-end milling tool on the ultra-precision micro-milling machine. Motion control experiments demonstrate excellent nanometer-level micro-movement performance of the axes. The fabricated surfaces closely match the design: the characteristic scale of the microstructure is less than 200 μm and the accuracy is better than 1 μm. This proves that ultra-precision micro-milling based on a micro ultra-precision machine tool is a suitable method for manufacturing microstructure array surfaces on different kinds of materials, and with the development of micro milling cutters, ultra-precision micro-milling of complex microstructure surfaces will be achievable in the future.

  18. SU-F-T-467: A Cross-Checking Approach for Dosimetric Verification of Beam- Matched Elekta Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Y; Yuan, J; Geis, P

    2016-06-15

    Purpose: To verify the similarity of the dosimetric characteristics between two Elekta linear accelerators (linacs) in order to treat patients interchangeably on these two machines without re-planning. Methods: To investigate the viability of matching the 6 MV flattened beam on an existing linac (Elekta Synergy with Agility head) with a recently installed new linac (Elekta Versa HD), percent depth doses (PDDs), flatness, symmetry, and output factors were compared for both machines. To validate the beam matching between machines, we carried out two approaches to cross-check the dosimetric equivalence: 1) prior treatment plans were re-computed with the newly built Versa HD treatment planning system (TPS) model without changing the beam control points; 2) the same plans were delivered on both machines and the radiation dose measurements on a MapCheck2 were compared with TPS calculations. Three VMAT plans (head and neck, lung, and prostate) were used in the study. Results: The difference between the PDDs for a 10×10 cm² field at all depths was less than 0.8%. The difference in flatness and symmetry for a 30×30 cm² field was less than 0.8%, and the measured output factors varied by less than 1% for each field size ranging from 2×2 cm² to 40×40 cm². For the same plans, the maximum difference between the two calculated dose distributions was 2% of the prescription. For the QA measurements, the gamma index passing rates were above 99% for 3%/3 mm criteria with a 10% threshold for all three clinical plans. Conclusion: A beam modality matching between two Elekta linacs is demonstrated with a cross-checking approach.

  19. Unscented Kalman Filter for Brain-Machine Interfaces

    PubMed Central

    Li, Zheng; O'Doherty, Joseph E.; Hanson, Timothy L.; Lebedev, Mikhail A.; Henriquez, Craig S.; Nicolelis, Miguel A. L.

    2009-01-01

    Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation. PMID:19603074
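
    A sketch of the two modelling ingredients named above, not the authors' filter: a quadratic tuning model fitted by least squares (the unscented Kalman filter would use it as its observation model), and an n-th order augmented state that stacks the current kinematic state with the n-1 most recent ones. Dimensions and data are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      T, n_neurons = 500, 30
      state = rng.standard_normal((T, 4))        # e.g. cursor x, y, vx, vy

      def quadratic_features(s):
          # Regressors of the quadratic tuning model: [1, s, s_i * s_j for i <= j]
          quad = np.array([s[i] * s[j] for i in range(len(s)) for j in range(i, len(s))])
          return np.concatenate(([1.0], s, quad))

      Phi = np.array([quadratic_features(s) for s in state])
      true_W = rng.standard_normal((Phi.shape[1], n_neurons))
      rates = Phi @ true_W + 0.1 * rng.standard_normal((T, n_neurons))

      # Fit the tuning model by least squares (stand-in for the filter's h(x)).
      W_hat, *_ = np.linalg.lstsq(Phi, rates, rcond=None)

      # n-th order augmentation: the filter state at time t concatenates the last
      # n kinematic states, so the tuning model can look back over several lags.
      n = 10
      aug_state_t = np.concatenate([state[t] for t in range(n - 1, -1, -1)])
      print(W_hat.shape, aug_state_t.shape)      # (15, 30) (40,)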

  20. An SVM-Based Solution for Fault Detection in Wind Turbines

    PubMed Central

    Santos, Pedro; Villa, Luisa F.; Reñones, Aníbal; Bustillo, Andres; Maudes, Jesús

    2015-01-01

    Research into fault diagnosis in machines with a wide range of variable loads and speeds, such as wind turbines, is of great industrial interest. Analysis of the power signals emitted by wind turbines for the diagnosis of mechanical faults in their mechanical transmission chain is insufficient. A successful diagnosis requires the inclusion of accelerometers to evaluate vibrations. This work presents a multi-sensory system for fault diagnosis in wind turbines, combined with a data-mining solution for the classification of the operational state of the turbine. The selected sensors are accelerometers, whose vibration signals are processed using angular resampling techniques, together with electrical, torque and speed measurements. Support vector machines (SVMs) are selected for the classification task, including two traditional and two promising new kernels. This multi-sensory system has been validated on a test-bed that simulates the real conditions of wind turbines with two fault typologies: misalignment and imbalance. Comparison of SVM performance with the results of artificial neural networks (ANNs) shows that linear kernel SVM outperforms other kernels and ANNs in terms of accuracy, training and tuning times. The suitability and superior performance of linear SVM is also experimentally analyzed, to conclude that this data acquisition technique generates linearly separable datasets. PMID:25760051
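
    A sketch of the kind of kernel comparison reported above, timing a linear-kernel SVM against an RBF-kernel SVM and a small neural network on a synthetic, roughly linearly separable dataset; the data and settings are illustrative, not the turbine test-bed.

      import time
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      # Roughly linearly separable stand-in for the multi-sensory feature set.
      X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                                 class_sep=2.0, random_state=0)

      models = {
          "SVM (linear kernel)": SVC(kernel="linear"),
          "SVM (RBF kernel)":    SVC(kernel="rbf"),
          "ANN (MLP)":           MLPClassifier(max_iter=500, random_state=0),
      }
      for name, model in models.items():
          t0 = time.perf_counter()
          acc = cross_val_score(model, X, y, cv=5).mean()
          print(f"{name:22s} accuracy={acc:.3f}  time={time.perf_counter() - t0:.2f}s")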

  1. On the bistable zone of milling processes

    PubMed Central

    Dombovari, Zoltan; Stepan, Gabor

    2015-01-01

    A modal-based model of milling machine tools subjected to time-periodic nonlinear cutting forces is introduced. The model describes the phenomenon of bistability for certain cutting parameters. In engineering, these parameter domains are referred to as unsafe zones, where steady-state milling may switch to chatter for certain perturbations. In mathematical terms, these are the parameter domains where the periodic solution of the corresponding nonlinear, time-periodic delay differential equation is linearly stable, but its domain of attraction is limited due to the existence of an unstable quasi-periodic solution emerging from a secondary Hopf bifurcation. A semi-numerical method is presented to identify the borders of these bistable zones by tracking the motion of the milling tool edges as they might leave the surface of the workpiece during the cutting operation. This requires the tracking of unstable quasi-periodic solutions and the checking of their grazing to a time-periodic switching surface in the infinite-dimensional phase space. As the parameters of the linear structural behaviour of the tool/machine tool system can be obtained by means of standard modal testing, the developed numerical algorithm provides efficient support for the design of milling processes with quick estimates of those parameter domains where chatter can still appear in spite of setting the parameters into linearly stable domains. PMID:26303918

  2. Modifications of Ti-6Al-4V surfaces by direct-write laser machining of linear grooves

    NASA Astrophysics Data System (ADS)

    Ulerich, Joseph P.; Ionescu, Lara C.; Chen, Jianbo; Soboyejo, Winston O.; Arnold, Craig B.

    2007-02-01

    As patients who receive orthopedic implants live longer and opt for surgery at a younger age, the need to extend the in vivo lifetimes of these implants has grown. One approach is to pattern implant surfaces with linear grooves, which elicit a cellular response known as contact guidance. Lasers provide a unique method of generating these surface patterns because they are capable of modifying physical and chemical properties over multiple length scales. In this paper we explore the relationship between surface morphology and laser parameters such as fluence, pulse overlap (translation distance), number of passes, and machining environment. We find that using simple procedures involving multiple passes it is possible to manipulate groove properties such as depth, shape, sub-micron roughness, and chemical composition of the Ti-6Al-4V oxide layer. Finally, we demonstrate this procedure by machining several sets of grooves with the same primary groove parameters but varied secondary characteristics. The significance of the secondary groove characteristics is demonstrated by preliminary cell studies indicating that the grooves exhibit basic features of contact guidance and that the cell proliferation in these grooves is significantly altered despite their similar primary characteristics. With further study it will be possible to use specific laser parameters during groove formation to create optimal physical and chemical properties for improved osseointegration.

  3. NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1995-01-01

    Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (the Cannon algorithm for matrix-matrix multiplication, a linear solver for Ax=b, 1-D convolution for 2-D datasets, a Laplace's equation solver, and a Direct Simulation Monte Carlo (DSMC) code) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and the DSMC code are presented.

  4. A quantitative structure-activity relationship to predict efficacy of granular activated carbon adsorption to control emerging contaminants.

    PubMed

    Kennicutt, A R; Morkowchuk, L; Krein, M; Breneman, C M; Kilduff, J E

    2016-08-01

    A quantitative structure-activity relationship was developed to predict the efficacy of carbon adsorption as a control technology for endocrine-disrupting compounds, pharmaceuticals, and components of personal care products, as a tool for water quality professionals to protect public health. Here, we expand previous work to investigate a broad spectrum of molecular descriptors including subdivided surface areas, adjacency and distance matrix descriptors, electrostatic partial charges, potential energy descriptors, conformation-dependent charge descriptors, and Transferable Atom Equivalent (TAE) descriptors that characterize the regional electronic properties of molecules. We compare the efficacy of linear (Partial Least Squares) and non-linear (Support Vector Machine) machine learning methods to describe a broad chemical space and produce a user-friendly model. We employ cross-validation, y-scrambling, and external validation for quality control. The recommended Support Vector Machine model trained on 95 compounds having 23 descriptors offered a good balance between good performance statistics, low error, and low probability of over-fitting while describing a wide range of chemical features. The cross-validated model using a log-uptake (qe) response calculated at an aqueous equilibrium concentration (Ce) of 1 μM described the training dataset with an r² of 0.932, had a cross-validated r² of 0.833, and an average residual of 0.14 log units.
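
    A sketch of the cross-validation and y-scrambling checks mentioned above, using scikit-learn's SVR on synthetic descriptors; the dataset, kernel, and C value are illustrative assumptions, not the study's settings.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(3)
      X = rng.standard_normal((95, 23))                              # stand-in descriptors
      log_qe = X[:, :5].sum(axis=1) + 0.3 * rng.standard_normal(95)  # stand-in log uptake at Ce = 1 uM

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
      print("cross-validated r2:", cross_val_score(model, X, log_qe, cv=5, scoring="r2").mean())

      # y-scrambling: shuffling the response should destroy the cross-validated r2
      # if the model is not fitting chance correlations.
      y_scrambled = rng.permutation(log_qe)
      print("y-scrambled r2:    ", cross_val_score(model, X, y_scrambled, cv=5, scoring="r2").mean())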

  5. On the Inefficiency of Equilibria in Linear Bottleneck Congestion Games

    NASA Astrophysics Data System (ADS)

    de Keijzer, Bart; Schäfer, Guido; Telelis, Orestis A.

    We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our studies to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines with or without restricted assignments.
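
    For readers unfamiliar with the quantity being bounded, the display below states the standard definition of the price of anarchy under the bottleneck (maximum-latency) social cost used above; the notation is generic and not taken from the paper: Γ ranges over game instances, NE(Γ) is its set of (strong or Nash) equilibria, s* an optimal profile, x_f(s) the load on facility f, and ℓ_f a linear latency function.

      \[
      \mathrm{PoA} \;=\; \sup_{\Gamma}\,
        \frac{\max_{s \in \mathrm{NE}(\Gamma)} C(s)}{C(s^{*}_{\Gamma})},
      \qquad
      C(s) \;=\; \max_{f \in F} \ell_f\!\bigl(x_f(s)\bigr),
      \qquad
      \ell_f(x) \;=\; a_f x + b_f .
      \]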

  6. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    PubMed Central

    Zhao, Xin; Cheung, Leo Wang-Kit

    2007-01-01

    Background Designing appropriate machine learning methods for identifying genes that have a significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which however are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods usually also bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large. This leads to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potential to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences. Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound not only in the case with a linear Bayesian classifier but also in the case with a very non-linear Bayesian classifier. This sheds light on its broader applicability to microarray data analysis problems, especially those for which linear methods work poorly. The KIGP was also applied to four published microarray datasets, and the results showed that the KIGP performed better than or at least as well as any of the referred state-of-the-art methods did in all of these cases. Conclusion Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates the model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811

  7. Annual Coded Wire Tag Program; Oregon Stock Assessment, 2000 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Mark; Mallette, Christine; Murray, William

    2002-03-01

    This annual report is in fulfillment of contract obligations with Bonneville Power Administration which is the funding source for the Oregon Department of Fish and Wildlife's Annual Stock Assessment - Coded Wire Tag Program (ODFW) Project. Tule stock fall chinook were caught primarily in British Columbia and Washington ocean, and Columbia Basin fisheries. Up-river bright stock fall chinook contributed primarily to Alaska and British Columbia ocean commercial, Columbia Basin gillnet and freshwater sport fisheries. Contribution of Rogue stock fall chinook released in the lower Columbia River occurred primarily in Oregon ocean commercial, Columbia Basin gillnet and freshwater sport fisheries. Willamette stock spring chinook contributed primarily to Alaska and British Columbia ocean, and Columbia Basin sport fisheries. Willamette stock spring chinook released by CEDC contributed to similar ocean fisheries, but had much higher catch in Columbia Basin gillnet fisheries than the same stocks released in the Willamette Basin. Up-river stocks of spring chinook contributed almost exclusively to Columbia Basin fisheries. The up-river stocks of Columbia River summer steelhead contributed almost exclusively to the Columbia Basin gillnet and freshwater sport fisheries. Coho ocean fisheries from Washington to California were closed or very limited from 1994 through 1999 (1991 through 1996 broods). This has resulted in a lower percent of catch in Washington, Oregon and California ocean fisheries, and a higher percent of catch in Alaska and British Columbia ocean and Columbia Basin freshwater fisheries. Coho stocks released by ODFW below Bonneville Dam were caught mainly in Oregon, Washington, and British Columbia ocean, Columbia gillnet and freshwater sport fisheries. Coho stocks released in the Klaskanine River and Youngs Bay area had similar ocean catch distributions, but a much higher percent catch in gillnet fisheries than the other coho releases. Ocean catch distribution of coho stocks released above Bonneville Dam was similar to the other coho groups. However, they had a higher percent catch in gillnet fisheries above Bonneville Dam than coho released below the dam. Survival rates of salmon and steelhead are influenced, not only by factors in the hatchery (disease, density, diet, size and time of release) but also by environmental factors in the river and ocean. These environmental factors are influenced by large scale oceanic and weather patterns such as El Nino. Changes in rearing conditions in the hatchery do impact survival, however, these can be offset by impacts caused by environmental factors. Coho salmon released in the Columbia River generally experience better survival rates when released later in the spring. However, for the 1990 brood year June releases of Columbia River coho had much lower survival than May releases, for all ODFW hatcheries. In general survival of ODFW Columbia River hatchery coho has declined to low levels in recent years. Preliminary results from the evaluation of Visual Implant Elastomer (VIE) tags showed tagging rate and pre-release tag retention improved from the first to second years of tagging. Tagging rate remained identical from 1999 to 2000 while pre-release tag retention dropped to 95%. Returning jack and adult salmon were sampled for CWT and VIE tags in the fall of 2000. Of 606 adults recovered at Sandy Fish Hatchery in 2000, only 1 or 0.2%, retained their VIE tag. Of 36 jacks recovered in 2000, 13 or 36.1% retained their VIE tag.

  8. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    NASA Astrophysics Data System (ADS)

    Tumac, Deniz

    2014-03-01

    Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited. Also, relatively little research has been carried out on predicting the performance of chain saw machines. This study differs from the previous investigations in the way that Shore hardness values (SH1, SH2, and deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for the prediction of the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for only cutting the stone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, the normal force has a weak or moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted by Shore hardness values.

  9. Predicting Solar Activity Using Machine-Learning Methods

    NASA Astrophysics Data System (ADS)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.

  10. Telescoping magnetic ball bar test gage

    DOEpatents

    Bryan, James B.

    1984-01-01

    A telescoping magnetic ball bar test gage for determining the accuracy of machine tools, including robots, and those measuring machines having non-disengageable servo drives which cannot be clutched out. Two gage balls (10, 12) are held and separated from one another by a telescoping fixture which allows them relative radial motional freedom but not relative lateral motional freedom. The telescoping fixture comprises a parallel reed flexure unit (14) and a rigid member (16, 18, 20, 22, 24). One gage ball (10) is secured by a magnetic socket knuckle assembly (34) which fixes its center with respect to the machine being tested. The other gage ball (12) is secured by another magnetic socket knuckle assembly (38) which is engaged or held by the machine in such manner that the center of that ball (12) is directed to execute a prescribed trajectory, all points of which are equidistant from the center of the fixed gage ball (10). As the moving ball (12) executes its trajectory, changes in the radial distance between the centers of the two balls (10, 12) caused by inaccuracies in the machine are determined or measured by a linear variable differential transformer (LVDT) assembly (50, 52, 54, 56, 58, 60) actuated by the parallel reed flexure unit (14). Measurements can be quickly and easily taken for multiple trajectories about several different fixed ball (10) locations, thereby determining the accuracy of the machine.

  11. Defining and Testing the Influence of Servo System Response on Machine Tool Compliance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, D J

    2004-03-24

    Compliance can be defined as the measurement of displacement per unit of force applied, e.g. nanometers per Newton (nm/N). Compliance is the reciprocal of stiffness. High stiffness means low compliance and vice versa. It is an important factor in machine tool characteristics because it reflects the ability of the machine axis to maintain a desired position as it encounters a force or torque. Static compliance is a measurement made with a constant force applied, e.g. the average depth of cut. Dynamic compliance is a measurement made as a function of frequency, e.g. a fast tool servo (FTS) that applies a varying cutting force or load, interrupted cuts and external disturbances such as ground vibrations or air conditioning induced forces on the machine. Compliance can be defined for both a linear and rotary axis of a machine tool. However, to properly define compliance for a rotary axis, the axis must allow a commanded angular position. Note that this excludes velocity-only axes. In this paper, several factors are discussed that affect compliance but emphasis is placed on how the machine servo system plays a key role in compliance at low to mid frequency regions. The paper discusses several techniques for measuring compliance and provides examples of results from these measurements.
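
    A worked restatement of the definitions above, with generic symbols (displacement x, force F, stiffness k) that are not taken from the report:

      \[
      C \;=\; \frac{x}{F} \;=\; \frac{1}{k},
      \qquad
      C(\omega) \;=\; \frac{X(\omega)}{F(\omega)} .
      \]

    For example, a static compliance of 10 nm/N corresponds to a stiffness of 0.1 N/nm, and the dynamic compliance C(ω) is read off a measured frequency response between applied force and resulting displacement.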

  12. Label-free sensor for automatic identification of erythrocytes using digital in-line holographic microscopy and machine learning.

    PubMed

    Go, Taesik; Byeon, Hyeokjun; Lee, Sang Joon

    2018-04-30

    Cell types of erythrocytes should be identified because they are closely related to their functionality and viability. Conventional methods for classifying erythrocytes are time consuming and labor intensive. Therefore, an automatic and accurate erythrocyte classification system is indispensable in healthcare and biomedical fields. In this study, we proposed a new label-free sensor for automatic identification of erythrocyte cell types using a digital in-line holographic microscopy (DIHM) combined with machine learning algorithms. A total of 12 features, including information on intensity distributions, morphological descriptors, and optical focusing characteristics, is quantitatively obtained from numerically reconstructed holographic images. All individual features for discocytes, echinocytes, and spherocytes are statistically different. To improve the performance of cell type identification, we adopted several machine learning algorithms, such as decision tree model, support vector machine, linear discriminant classification, and k-nearest neighbor classification. With the aid of these machine learning algorithms, the extracted features are effectively utilized to distinguish erythrocytes. Among the four tested algorithms, the decision tree model exhibits the best identification performance for the training sets (n = 440, 98.18%) and test sets (n = 190, 97.37%). This proposed methodology, which smartly combined DIHM and machine learning, would be helpful for sensing abnormal erythrocytes and computer-aided diagnosis of hematological diseases in clinic. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. 78 FR 36212 - Availability of Application for the Proposal To Replace the Existing Movable I-5 Bridge Across...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-17

    ... materials for the Columbia River Crossing. The document contained an incorrect phone number for the Columbia... Columbia River Crossing.'' (78 FR 26380). Mistakenly, the phone number for the person listed in the FOR FURTHER INFORMATION CONTACT section was incorrect. The correct phone number for Gary Greene, Columbia...

  14. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  15. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  16. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  17. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  18. 28 CFR Appendix A to Part 812 - Qualifying District of Columbia Code Offenses

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... FOR THE DISTRICT OF COLUMBIA COLLECTION AND USE OF DNA INFORMATION Pt. 812, App. A Appendix A to Part... Columbia, the DNA Sample Collection Act of 2001 identifies the criminal offenses listed in Table 1 of this appendix as “qualifying District of Columbia offenses” for the purposes of the DNA Analysis Backlog...

  19. 78 FR 3893 - Columbia Gas Transmission, LLC; Notice of Request Under Blanket Authorization

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-17

    ... any natural gas service; however, Columbia would terminate service to one free gas customer pursuant to the terms of the lease agreement between the customer and Columbia. Columbia estimates that it... contact FERC Online Support by email or call toll-free at (866) 206-3676, or, for...

  20. 75 FR 81464 - Safety Zone; Columbia River, The Dalles Lock and Dam

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ...-AA00 Safety Zone; Columbia River, The Dalles Lock and Dam AGENCY: Coast Guard, DHS. ACTION: Temporary... Columbia River in the vicinity of The Dalles Lock and Dam while the Army Corps of Engineers completes...; Columbia River, The Dalles Lock and Dam (a) Location. The following is a safety zone: All waters of the...

  1. To direct the Mayor of the District of Columbia to establish a District of Columbia National Guard Educational Assistance Program to encourage the enlistment and retention of persons in the District of Columbia National Guard by providing financial assistance to enable members of the National Guard of the District of Columbia to attend undergraduate, vocational, or technical courses.

    THOMAS, 111th Congress

    Rep. Norton, Eleanor Holmes [D-DC-At Large

    2009-10-22

    Senate - 07/26/2010 Committee on Homeland Security and Governmental Affairs referred to Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia. Tracker: This bill has the status Passed House.

  2. View of the Columbia's RMS arm and end effector grasping IECM

    NASA Image and Video Library

    1982-06-27

    STS004-37-670 (27 June-4 July 1982) --- The North Atlantic Ocean southeast of the Bahamas serves as backdrop for this 70mm scene of the Columbia's remote manipulator system (RMS) arm and hand-like device (called an end effector) grasping a multi-instrument monitor for detecting contaminants. The experiment is called the induced environment contaminant monitor (IECM). The small box contains 11 instruments for checking the contaminants in and around the orbiter's cargo bay which might adversely affect delicate experiments carried onboard. Astronauts Thomas K. Mattingly II and Henry W. Hartsfield Jr. manned the Columbia for seven days and one hour. The Columbia's vertical tail and orbital maneuvering system (OMS) pods are at left foreground. Photo credit: NASA

  3. KSC-99pp1142

    NASA Image and Video Library

    1999-09-24

    KENNEDY SPACE CENTER, FLA. -- The Boeing 747 Shuttle Carrier Aircraft, with the orbiter Columbia strapped to its back, waits at the Shuttle Landing Facility for clear weather to take off for its final destination, Palmdale, Calif. The oldest of four orbiters in NASA's fleet, Columbia is being ferried to Palmdale to undergo extensive inspections and modifications in Boeing's Orbiter Assembly Facility. The nine-month orbiter maintenance down period (OMDP) is the second in Columbia's history. Orbiters are periodically removed from flight operations for an OMDP. Columbia's first was in 1994. Along with more than 100 modifications on the vehicle, Columbia will be the second orbiter to be outfitted with the multifunctional electronic display system, or "glass cockpit." Columbia is expected to return to KSC in July 2000

  4. KSC-99pp1141

    NASA Image and Video Library

    1999-09-24

    KENNEDY SPACE CENTER, FLA. -- The Boeing 747 Shuttle Carrier Aircraft is cast in morning shadows as it backs away from the Mate/Demate Device with the orbiter Columbia strapped to its back. The oldest of four orbiters in NASA's fleet, Columbia is being ferried to Palmdale, Calif., where it will undergo extensive inspections and modifications in Boeing's Orbiter Assembly Facility. The nine-month orbiter maintenance down period (OMDP) is the second in Columbia's history. Orbiters are periodically removed from flight operations for an OMDP. Columbia's first was in 1994. Along with more than 100 modifications on the vehicle, Columbia will be the second orbiter to be outfitted with the multifunctional electronic display system, or "glass cockpit." Columbia is expected to return to KSC in July 2000

  5. KENNEDY SPACE CENTER, FLA. -In the Columbia Debris Hangar, Don Eitel (left) wraps pieces of Columbia debris for storage. About 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris.

    NASA Image and Video Library

    2003-09-10

    KENNEDY SPACE CENTER, FLA. -In the Columbia Debris Hangar, Don Eitel (left) wraps pieces of Columbia debris for storage. About 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris.

  6. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    PubMed

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1 µm) may contribute to acute cardiorespiratory morbidity. However, few studies have examined the long-term health effects of these pollutants owing in part to a need for exposure surfaces that can be applied in large population-based studies. To address this need, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R²=0.58 vs. 0.55) or a cross-validation procedure (R²=0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
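
    A sketch of the model comparison described above, using scikit-learn's KernelRidge as a stand-in for the KRLS estimator and ordinary least squares for the linear LUR model; the covariates and response are synthetic stand-ins, not the Montreal data.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      n_segments, n_covariates = 414, 7          # e.g. population density, temperature, ...
      X = rng.standard_normal((n_segments, n_covariates))
      # Synthetic UFP concentrations with one non-linear covariate effect.
      ufp = 20 + 3 * X[:, 0] + 2 * np.sin(2 * X[:, 1]) + rng.standard_normal(n_segments)

      for name, model in [("linear regression", LinearRegression()),
                          ("kernel ridge (KRLS stand-in)", KernelRidge(kernel="rbf", alpha=1.0))]:
          r2 = cross_val_score(model, X, ufp, cv=10, scoring="r2").mean()
          print(f"{name:30s} cross-validated R2 = {r2:.2f}")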

  7. Harnessing Computational Biology for Exact Linear B-Cell Epitope Prediction: A Novel Amino Acid Composition-Based Feature Descriptor.

    PubMed

    Saravanan, Vijayakumar; Gautham, Namasivayam

    2015-10-01

    Proteins embody epitopes that serve as their antigenic determinants. Epitopes occupy a central place in integrative biology, not to mention as targets for novel vaccine, pharmaceutical, and systems diagnostics development. The presence of T-cell and B-cell epitopes has been extensively studied due to their potential in synthetic vaccine design. However, reliable prediction of linear B-cell epitope remains a formidable challenge. Earlier studies have reported discrepancy in amino acid composition between the epitopes and non-epitopes. Hence, this study proposed and developed a novel amino acid composition-based feature descriptor, Dipeptide Deviation from Expected Mean (DDE), to distinguish the linear B-cell epitopes from non-epitopes effectively. In this study, for the first time, only exact linear B-cell epitopes and non-epitopes have been utilized for developing the prediction method, unlike the use of epitope-containing regions in earlier reports. To evaluate the performance of the DDE feature vector, models have been developed with two widely used machine-learning techniques Support Vector Machine and AdaBoost-Random Forest. Five-fold cross-validation performance of the proposed method with error-free dataset and dataset from other studies achieved an overall accuracy between nearly 61% and 73%, with balance between sensitivity and specificity metrics. Performance of the DDE feature vector was better (with accuracy difference of about 2% to 12%), in comparison to other amino acid-derived features on different datasets. This study reflects the efficiency of the DDE feature vector in enhancing the linear B-cell epitope prediction performance, compared to other feature representations. The proposed method is made as a stand-alone tool available freely for researchers, particularly for those interested in vaccine design and novel molecular target development for systems therapeutics and diagnostics: https://github.com/brsaran/LBEEP.

  8. Use of a Linear Paul Trap to Study Random Noise-Induced Beam Degradation in High-Intensity Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Moses; Gilson, Erik P.; Davidson, Ronald C.

    2009-04-10

    A random noise-induced beam degradation that can affect intense beam transport over long propagation distances has been experimentally studied by making use of the transverse beam dynamics equivalence between an alternating-gradient (AG) focusing system and a linear Paul trap system. For the present studies, machine imperfections in the quadrupole focusing lattice are considered, which are emulated by adding small random noise on the voltage waveform of the quadrupole electrodes in the Paul trap. It is observed that externally driven noise continuously produces a nonthermal tail of trapped ions, and increases the transverse emittance almost linearly with the duration of the noise.

  9. INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    2005-01-01

    Model-based reasoning is a powerful method for performing system monitoring and diagnosis. Building models for model-based reasoning is often a difficult and time consuming process. The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS processes nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters. This establishes constraints on those parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS. We also present the application of IMS to a data set from the Space Shuttle Columbia STS-107 flight. IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
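
    A sketch of the IMS-style idea described above (cluster archived nominal data, then score new observations by their deviation from the nearest nominal cluster), not NASA's implementation; the telemetry values, cluster count, and alarm percentile are illustrative assumptions.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      # Archived nominal data for three related parameters (synthetic stand-in).
      nominal = rng.normal(loc=[50.0, 1.2, 300.0], scale=[2.0, 0.1, 5.0], size=(2000, 3))

      scaler = StandardScaler().fit(nominal)
      km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(scaler.transform(nominal))

      # Deviation score: distance to the nearest nominal cluster centre; the alarm
      # threshold is taken from the empirical distribution of nominal scores.
      def deviation(samples):
          return km.transform(scaler.transform(samples)).min(axis=1)

      threshold = np.percentile(deviation(nominal), 99.5)

      telemetry = np.array([[50.5, 1.18, 301.0],    # nominal-looking frame
                            [50.5, 1.18, 360.0]])   # anomalous third parameter
      scores = deviation(telemetry)
      print(scores, scores > threshold)             # the second frame exceeds the threshold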

  10. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes

    PubMed Central

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-01

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machine processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes. PMID:26751451

  11. Nonlinear Control of the Doubly Fed Induction Motor with Copper Losses Minimization for Electrical Vehicle

    NASA Astrophysics Data System (ADS)

    Drid, S.; Nait-Said, M.-S.; Tadjine, M.; Makouf, A.

    2008-06-01

    There is an increasing interest in electric vehicles due to environmental concerns. Recent efforts are directed toward developing an improved propulsion system for electric vehicle applications with minimal power losses. This paper deals with highly efficient vector control for the reduction of copper losses in the doubly fed motor. Firstly, feedback linearization control based on a Lyapunov approach is employed to design the underlying controller achieving the double flux orientation. The flux controllers are designed independently of the speed. The speed controller is designed using the Lyapunov method, which is particularly suited to unknown load torques. The global asymptotic stability of the overall system is theoretically proven. Secondly, a new Torque Copper Losses Factor is proposed to deal with the problem of the machine copper losses. Its main function is to optimize the torque while keeping the machine saturation at an acceptable level. This leads to a reduction in machine currents and therefore in the accompanying copper losses, guaranteeing improved machine efficiency. The simulation results, presented in a comparative form, largely confirm the effectiveness of the proposed DFIM control, with a very interesting energy-saving contribution.

  12. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.

  13. Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes.

    PubMed

    Liang, Qiaokang; Zhang, Dan; Coppola, Gianmarc; Mao, Jianxu; Sun, Wei; Wang, Yaonan; Ge, Yunjian

    2016-01-07

    Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available in the market for cutting force measurement in machine processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along x-, y-, and z-axis) as well as the cutting moments Mx, My and Mz (i.e., moments about x-, y-, and z-axis) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated, which is applied to a 5-axis parallel kinematic machining center. Calibration experimental results demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and calibration experimental studies validate the high performance of the proposed sensor system that is expected to be adopted into machining processes.

  14. Voice based gender classification using machine learning

    NASA Astrophysics Data System (ADS)

    Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.

    2017-11-01

    Gender identification is one of the major problems in speech analysis today: tracing the gender of a speaker from acoustic data such as pitch, median, and frequency. Machine learning gives promising results for classification problems across research domains, and there are several performance metrics with which to evaluate algorithms in a given area. We present a comparative model for evaluating five different machine learning algorithms, based on eight different metrics, for gender classification from acoustic data. The aim is to identify gender with five different algorithms: Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN), Classification and Regression Trees (CART), Random Forest (RF), and Support Vector Machine (SVM), on the basis of eight different metrics. The main criterion in evaluating any algorithm is its performance: the misclassification rate must be low in classification problems, which means that the accuracy rate must be high. Location and gender of the person have become very crucial in economic markets in the form of AdSense. With this comparative model, we assess the different ML algorithms and find the best fit for gender classification of acoustic data.
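
    A sketch of the comparative evaluation described above, scoring the same five classifier families with cross-validation on a synthetic stand-in for the acoustic feature set; only two of the eight metrics are shown, and all settings are illustrative.

      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_validate
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Stand-in for acoustic features (pitch, median frequency, etc.).
      X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                                 random_state=0)

      models = {
          "LDA":  LinearDiscriminantAnalysis(),
          "KNN":  KNeighborsClassifier(),
          "CART": DecisionTreeClassifier(random_state=0),
          "RF":   RandomForestClassifier(random_state=0),
          "SVM":  SVC(),
      }
      for name, model in models.items():
          cv = cross_validate(model, X, y, cv=5, scoring=["accuracy", "f1"])
          print(f"{name:4s} accuracy={cv['test_accuracy'].mean():.3f} "
                f"f1={cv['test_f1'].mean():.3f}")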

  15. Runtime Verification of C Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    2008-01-01

    We present in this paper a framework, RMOR, for monitoring the execution of C programs against state machines, expressed in a textual (nongraphical) format in files separate from the program. The state machine language has been inspired by a graphical state machine language RCAT recently developed at the Jet Propulsion Laboratory, as an alternative to using Linear Temporal Logic (LTL) for requirements capture. Transitions between states are labeled with abstract event names and Boolean expressions over such. The abstract events are connected to code fragments using an aspect-oriented pointcut language similar to ASPECTJ's or ASPECTC's pointcut language. The system is implemented in the C analysis and transformation package CIL, and is programmed in OCAML, the implementation language of CIL. The work is closely related to the notion of stateful aspects within aspect-oriented programming, where pointcut languages are extended with temporal assertions over the execution trace.

  16. Analysis of labor employment assessment on production machine to minimize time production

    NASA Astrophysics Data System (ADS)

    Hernawati, Tri; Suliawati; Sari Gumay, Vita

    2018-03-01

    Every company, whether in services or in manufacturing, always tries to improve the efficiency of its resource use. One resource that has an important role is labor, and each worker has a different efficiency level for different jobs. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, a special case of linear programming. In this research, an analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results showed that the existing labor assignment is not suitable because its completion time is longer than that of the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yielded time savings of 16%.
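
    A sketch of the assignment problem described above, solved with the Hungarian algorithm as implemented in SciPy; the completion-time matrix is a made-up example, not PT PDM data.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Entry (i, j): time worker i needs on machine j (hypothetical values).
      time_matrix = np.array([[14, 10, 12,  9],
                              [11, 13,  8, 10],
                              [ 9, 12, 11, 13],
                              [12,  9, 10, 11]])

      rows, cols = linear_sum_assignment(time_matrix)   # minimizes total time
      for worker, machine in zip(rows, cols):
          print(f"worker {worker} -> machine {machine} ({time_matrix[worker, machine]} h)")
      print("total production time:", time_matrix[rows, cols].sum(), "h")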

  17. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-14

    In the RLV Hangar, a Columbia Reconstruction Project Team member examines pieces of debris from the Space Shuttle Columbia. The debris has begun arriving at KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  18. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    Columbia Reconstruction Project Team members move debris from the Space Shuttle Columbia into a designated sector of the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  19. Status of the interior Columbia Basin: summary of scientific findings.

    Treesearch

    Forest Service. U.S. Department of Agriculture

    1996-01-01

    The Status of the Interior Columbia Basin is a summary of the scientific findings from the Interior Columbia Basin Ecosystem Management Project. The Interior Columbia Basin includes some 145 million acres within the northwestern United States. Over 75 million acres of this area are managed by the USDA Forest Service or the USDI Bureau of Land Management. A framework...

  20. Megafloods and Clovis cache at Wenatchee, Washington

    NASA Astrophysics Data System (ADS)

    Waitt, Richard B.

    2016-05-01

    Immense late Wisconsin floods from glacial Lake Missoula drowned the Wenatchee reach of Washington's Columbia valley by different routes. The earliest debacles, nearly 19,000 cal yr BP, raged 335 m deep down the Columbia and built high Pangborn bar at Wenatchee. As advancing ice blocked the northwest of Columbia valley, several giant floods descended Moses Coulee and backflooded up the Columbia past Wenatchee. Ice then blocked Moses Coulee, and Grand Coulee to Quincy basin became the westmost floodway. From Quincy basin many Missoula floods backflowed 50 km upvalley to Wenatchee 18,000 to 15,500 years ago. Receding ice dammed glacial Lake Columbia for centuries more, until it burst about 15,000 years ago. After Glacier Peak ashfall about 13,600 years ago, smaller great flood(s) swept down the Columbia from glacial Lake Kootenay in British Columbia. The East Wenatchee cache of huge fluted Clovis points had been laid atop Pangborn bar after the Glacier Peak ashfall, then buried by loess. Clovis people came five and a half millennia after the early gigantic Missoula floods, two and a half millennia after the last small Missoula flood, and two millennia after the glacial Lake Columbia flood. People likely saw outburst flood(s) from glacial Lake Kootenay.

  1. Kernel PLS-SVC for Linear and Nonlinear Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan

    2003-01-01

    A new methodology for discrimination is proposed. This is based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS with Fisher's approach to linear discrimination, or equivalently with canonical correlation analysis, is described. This gives preference to orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram recordings.
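
    A rough sketch of the two-stage idea (kernel dimensionality reduction, then an SVM) is shown below. Since scikit-learn ships no kernel orthonormalized PLS, KernelPCA stands in for that step and the data are synthetic, so this is not the authors' implementation.

```python
# Illustrative two-stage pipeline in the spirit of the record above: kernel
# dimensionality reduction followed by an SVM classifier. KernelPCA is a
# stand-in for the paper's kernel orthonormalized PLS; the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

model = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf", gamma=0.05),  # kernel projection
    SVC(kernel="linear", C=1.0),                           # linear SVM on scores
)
print("5-fold accuracy:", cross_val_score(model, X, y, cv=5).mean())
```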

  2. Supercomputer algorithms for efficient linear octree encoding of three-dimensional brain images.

    PubMed

    Berger, S B; Reis, D J

    1995-02-01

    We designed and implemented algorithms for three-dimensional (3-D) reconstruction of brain images from serial sections using two important supercomputer architectures, vector and parallel. These architectures were represented by the Cray YMP and Connection Machine CM-2, respectively. The programs operated on linear octree representations of the brain data sets, and achieved 500-800 times acceleration when compared with a conventional laboratory workstation. As the need for higher resolution data sets increases, supercomputer algorithms may offer a means of performing 3-D reconstruction well above current experimental limits.
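
    A linear octree stores only the occupied leaves, each keyed by an interleaved-bit (Morton) code; the short sketch below illustrates that encoding in plain Python and is unrelated to the original Cray/CM-2 programs.

```python
# Generic sketch of linear octree encoding: each occupied voxel is stored as a
# single Morton (interleaved-bit) key, so a sparse 3-D volume becomes a sorted
# list of integers. Not the original Cray/CM-2 implementation.
def morton_encode(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of (x, y, z) into one octree key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def linear_octree(occupied_voxels):
    """Return the sorted Morton keys of all occupied voxels."""
    return sorted(morton_encode(x, y, z) for x, y, z in occupied_voxels)

voxels = [(0, 0, 0), (1, 0, 0), (3, 2, 1)]
print(linear_octree(voxels))   # [0, 1, 29]
```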

  3. Typed Linear Chain Conditional Random Fields and Their Application to Intrusion Detection

    NASA Astrophysics Data System (ADS)

    Elfers, Carsten; Horstmann, Mirko; Sohr, Karsten; Herzog, Otthein

    Intrusion detection in computer networks faces the problem of a large number of both false alarms and unrecognized attacks. To improve the precision of detection, various machine learning techniques have been proposed. However, one critical issue is that the amount of reference data that contains serious intrusions is very sparse. In this paper we present an inference process with linear chain conditional random fields that aims to solve this problem by using domain knowledge about the alerts of different intrusion sensors represented in an ontology.
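
    The inference step of a linear-chain model is a Viterbi pass over per-position and transition scores; the sketch below decodes a toy three-alert sequence with illustrative scores, not the paper's ontology-derived features or learned weights.

```python
# Generic Viterbi decoding for a linear-chain model, the kind of inference a
# linear chain CRF performs over a sequence of sensor alerts. The emission and
# transition scores below are illustrative, not learned weights from the paper.
import numpy as np

def viterbi(emission, transition):
    """emission: (T, L) per-position label scores; transition: (L, L) scores."""
    T, L = emission.shape
    score = np.zeros((T, L))
    back = np.zeros((T, L), dtype=int)
    score[0] = emission[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

labels = ["benign", "scan", "intrusion"]
emission = np.log(np.array([[0.7, 0.2, 0.1],
                            [0.2, 0.6, 0.2],
                            [0.1, 0.3, 0.6]]))
transition = np.log(np.array([[0.8, 0.15, 0.05],
                              [0.2, 0.6, 0.2],
                              [0.1, 0.2, 0.7]]))
print([labels[i] for i in viterbi(emission, transition)])
```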

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sciarrino, Fabio; Dipartimento di Fisica and Consorzio Nazionale Interuniversitario per le Scienze Fisiche della Materia, Universita 'La Sapienza', Rome 00185; De Martini, Francesco

    The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated to an input qubit into a multiqubit system, exploiting a partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for the universal cloning. The present article first analyzes different innovative schemes to implement the 1→3 PQCM. The method is then generalized to any 1→M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally, different experimental schemes based either on linear or nonlinear methods and valid for single-photon polarization-encoded qubits are discussed.

  5. Cognitive Foundry v. 3.0 (OSS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basilico, Justin; Dixon, Kevin; McClain, Jonathan

    2009-11-18

    The Cognitive Foundry is a unified collection of tools designed for research and applications that use cognitive modeling, machine learning, or pattern recognition. The software library contains design patterns, interface definitions, and default implementations of reusable software components and algorithms designed to support a wide variety of research and development needs. The library contains three main software packages: the Common package that contains basic utilities and linear algebraic methods, the Cognitive Framework package that contains tools to assist in implementing and analyzing theories of cognition, and the Machine Learning package that provides general algorithms and methods for populating Cognitive Framework components from domain-relevant data.

  6. Modeling of Passive Forces of Machine Tool Covers

    NASA Astrophysics Data System (ADS)

    Kolar, Petr; Hudec, Jan; Sulitka, Matej

    The passive forces acting against the drive force are phenomena that influence the dynamic properties and precision of linear axes equipped with feed drives. Covers are one of the important sources of passive forces in machine tools. The paper describes virtual evaluation of cover passive forces using a complex model of the cover. The model is able to compute the interaction between flexible cover segments and the sealing wiper. The resulting deformation of cover segments and wipers is used, together with a measured friction coefficient, to compute the total passive force of the cover. This resulting passive force is dependent on cover position. A comparison of computational results and measurements on the real cover is presented in the paper.

  7. KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, photographers look at pieces of tile collected during search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, photographers look at pieces of tile collected during search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

  8. KENNEDY SPACE CENTER, FLA. - Some of the Columbia debris is loaded onto a flatbed truck outside the Columbia Debris Hangar. The debris is being transferred to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Some of the Columbia debris is loaded onto a flatbed truck outside the Columbia Debris Hangar. The debris is being transferred to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  9. KENNEDY SPACE CENTER, FLA. - At the Columbia Debris Hangar, some of the debris of Space Shuttle Columbia is secured onto a flatbed truck for transfer to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - At the Columbia Debris Hangar, some of the debris of Space Shuttle Columbia is secured onto a flatbed truck for transfer to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  10. KENNEDY SPACE CENTER, FLA. - At the Columbia Debris Hangar, some of the debris of Space Shuttle Columbia is moved onto a flatbed truck for transfer to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - At the Columbia Debris Hangar, some of the debris of Space Shuttle Columbia is moved onto a flatbed truck for transfer to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  11. Laboratory Assessment of Potential Impacts to Dungeness Crabs from Disposal of Dredged Material from the Columbia River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vavrinec, John; Pearson, Walter H.; Kohn, Nancy P.

    2007-05-07

    Dredging of the Columbia River navigation channel has raised concerns about dredging-related impacts on Dungeness crabs (Cancer magister) in the estuary, mouth of the estuary, and nearshore ocean areas adjacent to the Columbia River. The Portland District, U.S. Army Corps of Engineers engaged the Marine Sciences Laboratory (MSL) of the U.S. Department of Energy’s Pacific Northwest National Laboratory to review the state of knowledge and conduct studies concerning impacts on Dungeness crabs resulting from disposal during the Columbia River Channel Improvement Project and annual maintenance dredging in the mouth of the Columbia River. The present study concerns potential effects on Dungeness crabs from dredged material disposal specific to the mouth of the Columbia River.

  12. KENNEDY SPACE CENTER, FLA. - Astronaut Pam Melroy speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Astronaut Pam Melroy speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  13. Power training using pneumatic machines vs. plate-loaded machines to improve muscle power in older adults.

    PubMed

    Balachandran, Anoop T; Gandia, Kristine; Jacobs, Kevin A; Streiner, David L; Eltoukhy, Moataz; Signorile, Joseph F

    2017-11-01

    Power training has been shown to be more effective than conventional resistance training for improving physical function in older adults; however, most trials have used pneumatic machines during training. Considering that the general public typically has access to plate-loaded machines, the effectiveness and safety of power training using plate-loaded machines compared to pneumatic machines is an important consideration. The purpose of this investigation was to compare the effects of high-velocity training using pneumatic machines (Pn) versus standard plate-loaded machines (PL). Independently living older adults, 60 years or older, were randomized into two groups: pneumatic machine (Pn, n=19) and plate-loaded machine (PL, n=17). After 12 weeks of high-velocity training twice per week, groups were analyzed using an intention-to-treat approach. Primary outcomes were lower body power measured using a linear transducer and upper body power using medicine ball throw. Secondary outcomes included lower and upper body muscle strength, the Physical Performance Battery (PPB), gallon jug test, the timed up-and-go test, and self-reported function using the Patient Reported Outcomes Measurement Information System (PROMIS) and an online video questionnaire. Outcome assessors were blinded to group membership. Lower body power significantly improved in both groups (Pn: 19%, PL: 31%), with no significant difference between the groups (Cohen's d=0.4, 95% CI (-1.1, 0.3)). Upper body power significantly improved only in the PL group, but showed no significant difference between the groups (Pn: 3%, PL: 6%). For balance, there was a significant difference between the groups favoring the Pn group (d=0.7, 95% CI (0.1, 1.4)); however, there were no statistically significant differences between groups for PPB, gallon jug transfer, muscle strength, timed up-and-go or self-reported function. No serious adverse events were reported in either of the groups. Pneumatic and plate-loaded machines were effective in improving lower body power and physical function in older adults. The results suggest that power training can be safely and effectively performed by older adults using either pneumatic or plate-loaded machines. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Machine-learning in grading of gliomas based on multi-parametric magnetic resonance imaging at 3T.

    PubMed

    Citak-Er, Fusun; Firat, Zeynep; Kovanlikaya, Ilhami; Ture, Ugur; Ozturk-Isik, Esin

    2018-06-15

    The objective of this study was to assess the contribution of multi-parametric (mp) magnetic resonance imaging (MRI) quantitative features in the machine learning-based grading of gliomas with a multi-region-of-interests approach. Forty-three patients who were newly diagnosed as having a glioma were included in this study. The patients were scanned prior to any therapy using a standard brain tumor magnetic resonance (MR) imaging protocol that included T1 and T2-weighted, diffusion-weighted, diffusion tensor, MR perfusion and MR spectroscopic imaging. Three different regions-of-interest were drawn for each subject to encompass tumor, immediate tumor periphery, and distant peritumoral edema/normal. The normalized mp-MRI features were used to build machine-learning models for differentiating low-grade gliomas (WHO grades I and II) from high grades (WHO grades III and IV). In order to assess the contribution of regional mp-MRI quantitative features to the classification models, a support vector machine-based recursive feature elimination method was applied prior to classification. A machine-learning model based on support vector machine algorithm with linear kernel achieved an accuracy of 93.0%, a specificity of 86.7%, and a sensitivity of 96.4% for the grading of gliomas using ten-fold cross validation based on the proposed subset of the mp-MRI features. In this study, machine-learning based on multiregional and multi-parametric MRI data has proven to be an important tool in grading glial tumors accurately even in this limited patient population. Future studies are needed to investigate the use of machine learning algorithms for brain tumor classification in a larger patient cohort. Copyright © 2018. Published by Elsevier Ltd.
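
    The pipeline shape described here, SVM-driven recursive feature elimination followed by a linear-kernel SVM assessed with ten-fold cross-validation, can be sketched as follows; synthetic features stand in for the mp-MRI measurements.

```python
# Sketch of the feature-selection-plus-classification pipeline described above:
# recursive feature elimination driven by a linear SVM, evaluated with ten-fold
# cross-validation. Synthetic features stand in for the mp-MRI measurements.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=43, n_features=60, n_informative=8,
                           random_state=0)   # 43 "patients", 60 features

pipeline = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=10),  # SVM-based RFE
    SVC(kernel="linear"),
)
print("10-fold accuracy:", cross_val_score(pipeline, X, y, cv=10).mean())
```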

  15. Preliminary estimate of possible flood elevations in the Columbia River at Trojan Nuclear Power Plant due to failure of debris dam blocking Spirit Lake, Washington

    USGS Publications Warehouse

    Kresch, D.L.; Laenen, Antonius

    1984-01-01

    Failure of the debris dam, blocking the outflow of Spirit Lake near Mount St. Helens, could result in a mudflow down the Toutle and Cowlitz Rivers into the Columbia River. Flood elevations at the Trojan Nuclear Power Plant on the Columbia River, 5 mi upstream from the Cowlitz River, were simulated with a hydraulic routing model. The simulations are made for four Columbia River discharges in each of two scenarios, one in which Columbia River floods coincide with a mudflow and the other in which Columbia River floods follow a mudflow sediment deposit upstream from the Cowlitz River. In the first scenario, Manning's roughness coefficients for clear water and for mudflow in the Columbia River are used; in the second scenario only clear water coefficients are used. The grade elevation at the power plant is 45 ft above sea level. The simulated elevations exceed 44 ft if the mudflow coincides with a Columbia River discharge that has a recurrence interval greater than 10 years (610,000 cu ft/sec); the mudflow is assumed to extend downstream from the Cowlitz River to the mouth of the Columbia River, and Manning's roughness coefficients for a mudflow are used. The simulated elevation is 32 ft if the mudflow coincides with a 100-yr flood (820,000 cu ft/sec) and clear-water Manning's coefficients are used throughout the entire reach of the Columbia River. The elevations exceed 45 ft if a flow exceeding the 2-yr peak discharge in the Columbia River (410,000 cu ft/sec) follows the deposit of 0.5 billion cu yd of mudflow sediment upstream of the Cowlitz River before there has been any appreciable scour or dredging of the deposit. In this simulation it is assumed that: (1) the top of the sediment deposited in the Columbia River is at an elevation of 30 ft at the mouth of the Cowlitz River, (2) the surface elevation of the sediment deposit decreases in an upstream direction at a rate of 2.5 ft/mi, and (3) clear water Manning's coefficients apply to the entire modeled reach of the Columbia River. (Author's abstract)

  16. 2. Historic American Buildings Survey District of Columbia Fire Department ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Historic American Buildings Survey District of Columbia Fire Department Photo FRONT ELEVATION, 1961 - Engine Company Number Seventeen, Firehouse, 1227 Monroe Street Northeast, Washington, District of Columbia, DC

  17. 4. South Elevation Columbia Island Abutment Four; South Elevation ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. South Elevation - Columbia Island Abutment Four; South Elevation - Washington Abutment One - Arlington Memorial Bridge, Spanning Potomac River between Lincoln Memorial & Arlington National Cemetery, Washington, District of Columbia, DC

  18. Fourth Master Agreement between the University of the District of Columbia and University of the District of Columbia Faculty Association/NEA.

    ERIC Educational Resources Information Center

    District of Columbia Univ., Washington, DC.

    The collective bargaining agreement between the University of the District of Columbia and the University of the District of Columbia Faculty Association, an affiliate of the National Education Association, for the period October 1, 1988 to September 30, 1993 is presented. The agreement's 33 articles cover the following: purpose and intent, scope…

  19. Collective Bargaining Agreement between Board of Trustees of Lower Columbia College District 13 and Lower Columbia Faculty Association, 1987-1990.

    ERIC Educational Resources Information Center

    Lower Columbia Coll., Longview, WA.

    This contractual agreement between the Board of Trustees of Lower Columbia College (LCC) District 13 and the Lower Columbia College Faculty Association outlines the terms of employment for all academic employees of the district. The 13 articles in the agreement set forth provisions related to: (1) recognition of the association as exclusive…

  20. COLUMBIA RIVER BASIN SALMON AND STEELHEAD: Federal Agencies’ Recovery Responsibilities, Expenditures and Actions

    DTIC Science & Technology

    2002-07-01

    Report to the Committee on Environment and Public Works, U.S. Senate, July 2002, titled "Columbia River Basin Salmon and Steelhead: Federal Agencies' Recovery Responsibilities, Expenditures and Actions." Among the actions reported, the Caspian Tern Working Group is developing a plan to reduce smolt predation by Caspian terns nesting in the Columbia River estuary.

  1. KSC-03pd0475

    NASA Image and Video Library

    2003-02-21

    KENNEDY SPACE CENTER, FLA. -- Kirstie McCool Chadwick, the sister of Columbia astronaut William "Willie" J. McCool, places flowers at the Astronaut Memorial to honor the fallen crew of Space Shuttle Columbia. She joined students from Columbia Elementary School in Palm Bay, Fla., who also paid tribute to the Columbia crew. The students visited the Center to learn about the past, present and future of space exploration.

  2. KSC-03pd0474

    NASA Image and Video Library

    2003-02-21

    KENNEDY SPACE CENTER, FLA. -- Kirstie McCool Chadwick, the sister of Columbia astronaut William "Willie" J. McCool, places flowers at the Astronaut Memorial to honor the fallen crew of Space Shuttle Columbia. She joined students from Columbia Elementary School in Palm Bay, Fla., who also paid tribute to the Columbia crew. The students visited the Center to learn about the past, present and future of space exploration.

  3. Columbia River Component Data Evaluation Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C.S. Cearlock

    2006-08-02

    The purpose of the Columbia River Component Data Compilation and Evaluation task was to compile, review, and evaluate existing information for constituents that may have been released to the Columbia River due to Hanford Site operations. Through this effort an extensive compilation of information pertaining to Hanford Site-related contaminants released to the Columbia River has been completed for almost 965 km of the river.

  4. Union-Active School Librarians and School Library Advocacy: A Modified Case Study of the British Columbia Teacher-Librarians' Association and the British Columbia Teachers' Federation

    ERIC Educational Resources Information Center

    Ewbank, Ann Dutton

    2015-01-01

    This modified case study examines how the members of the British Columbia Teacher-Librarians' Association (BCTLA), a Provincial Specialist Association (PSA) of the British Columbia Teachers' Federation (BCTF), work together to advocate for strong school library programs headed by a credentialed school librarian. Since 2002, despite nullification…

  5. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    Columbia Reconstruction Project Team members study diagrams to aid in the placement of debris from the Space Shuttle Columbia in the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  6. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    Columbia Reconstruction Project Team members move a piece of debris from the Space Shuttle Columbia into a specified sector of the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  7. Columbia Reconstruction Project Team

    NASA Image and Video Library

    2003-02-15

    A Columbia Reconstruction Project Team member uses a laptop computer to catalog debris from the Space Shuttle Columbia in the RLV Hangar. The debris is being shipped to KSC from the collection point at Barksdale Air Force Base, Shreveport, La. As part of the ongoing investigation into the tragic accident that claimed Columbia and her crew of seven, workers will attempt to reconstruct the orbiter inside the hangar.

  8. Determination of real machine-tool settings and minimization of real surface deviation by computerized inspection

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Kuan, Chihping; Zhang, YI

    1991-01-01

    A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors of installment of machine-tool settings and distortion of surfaces by heat-treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of initially applied machine-tool settings. The contents of the accomplished research project cover the following topics: (1) Description of the principle of coordinate measurements of gear tooth surfaces; (2) Deviation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) Determination of the reference point and the grid; (4) Determination of the deviations of real tooth surfaces at the points of the grid; and (5) Determination of required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on numerical solution of an overdetermined system of n linear equations in m unknowns (m much less than n), where n is the number of points of measurements and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
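
    The final step described above, solving an overdetermined system of n linear equations in m unknowns in the least-squares sense, can be sketched as follows; the sensitivity matrix and measured deviations are synthetic placeholders, not gear data.

```python
# Sketch of the final numerical step described above: an overdetermined linear
# system J @ dq ~= d (n measured deviations, m << n setting parameters) solved
# in the least-squares sense. J and d here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, m = 45, 4                      # 45 grid points, 4 machine-tool settings
J = rng.normal(size=(n, m))       # d(deviation)/d(setting) sensitivities
true_dq = np.array([0.02, -0.01, 0.005, 0.03])
d = J @ true_dq + rng.normal(scale=1e-4, size=n)   # measured deviations

dq, residuals, rank, _ = np.linalg.lstsq(J, d, rcond=None)
print("setting corrections:", -dq)   # apply the negative to cancel deviations
```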

  9. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
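
    The approximate-linear-dependence test at the heart of the sparsification can be sketched as below: a sample joins the kernel dictionary only if its feature-space projection onto the current dictionary leaves a residual above a threshold. The kernel width and threshold are illustrative choices, not settings from the paper.

```python
# Sketch of an approximate linear dependence (ALD) test used to sparsify a
# kernel dictionary: a new sample is added only if it cannot be represented,
# within a tolerance, as a combination of dictionary members in feature space.
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def ald_dictionary(samples, nu=0.1, gamma=0.5):
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
        k = np.array([rbf(a, x, gamma) for a in dictionary])
        coeffs = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k)
        delta = rbf(x, x, gamma) - k @ coeffs     # ALD residual
        if delta > nu:                            # nearly independent -> keep it
            dictionary.append(x)
    return np.array(dictionary)

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))
print("dictionary size:", len(ald_dictionary(data)))
```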

  10. Design of an Adaptive Human-Machine System Based on Dynamical Pattern Recognition of Cognitive Task-Load.

    PubMed

    Zhang, Jianhua; Yin, Zhong; Wang, Rubin

    2017-01-01

    This paper developed a cognitive task-load (CTL) classification algorithm and allocation strategy to sustain the optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed based on a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a few CTL classes. The least-squares support vector machine (LSSVM) is used as dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments were performed on seven volunteer participants under a simulated process control task environment. The participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG and ECG related features, is reduced to a set of 12 salient features (including 11 EEG-related features) by using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. Then the predicted CTL is used to adaptively allocate the number of process control tasks between operator and computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by using the adaptive automation strategy proposed.

  11. Novel nonlinear knowledge-based mean force potentials based on machine learning.

    PubMed

    Dong, Qiwen; Zhou, Shuigeng

    2011-01-01

    The prediction of 3D structures of proteins from amino acid sequences is one of the most challenging problems in molecular biology. An essential task for solving this problem with coarse-grained models is to deduce effective interaction potentials. The development and evaluation of new energy functions is critical to accurately modeling the properties of biological macromolecules. Knowledge-based mean force potentials are derived from statistical analysis of proteins of known structures. Current knowledge-based potentials almost always take the form of a weighted linear sum of interaction pairs. In this study, a class of novel nonlinear knowledge-based mean force potentials is presented. The potential parameters are obtained by nonlinear classifiers, instead of relative frequencies of interaction pairs against a reference state or linear classifiers. The support vector machine is used to derive the potential parameters on data sets that contain both native structures and decoy structures. Five knowledge-based mean force Boltzmann-based or linear potentials are introduced and their corresponding nonlinear potentials are implemented. They are the DIH potential (single-body residue-level Boltzmann-based potential), the DFIRE-SCM potential (two-body residue-level Boltzmann-based potential), the FS potential (two-body atom-level Boltzmann-based potential), the HR potential (two-body residue-level linear potential), and the T32S3 potential (two-body atom-level linear potential). Experiments are performed on well-established decoy sets, including the LKF data set, the CASP7 data set, and the Decoys “R”Us data set. The evaluation metrics include the energy Z score and the ability of each potential to discriminate native structures from a set of decoy structures. Experimental results show that all nonlinear potentials significantly outperform the corresponding Boltzmann-based or linear potentials, and the proposed discriminative framework is effective in developing knowledge-based mean force potentials. The nonlinear potentials can be widely used for ab initio protein structure prediction, model quality assessment, protein docking, and other challenging problems in computational biology.

  12. Novel hybrid linear stochastic with non-linear extreme learning machine methods for forecasting monthly rainfall a tropical climate.

    PubMed

    Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein

    2018-09-15

    A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated by considering four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case, while in the other three scenarios one-step and two-step procedures are utilized to make the model predictions more precise. The scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, which is a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear methods, 4 nonlinear models, and 224 hybrid models). The uncertainty of the linear, nonlinear and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R² = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of an adaptive neuro fuzzy inference system (ANFIS) with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
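
    The non-linear half of the hybrid, an extreme learning machine, reduces to a random hidden layer plus a least-squares output fit; a minimal regression sketch on synthetic data follows, without any of the stationarization or normality preprocessing evaluated in the paper.

```python
# Minimal extreme learning machine (ELM) regressor of the kind hybridized above:
# a random hidden layer followed by a least-squares fit of the output weights.
# Synthetic data; none of the paper's stationarization/normality preprocessing.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=50):
    W = rng.normal(size=(X.shape[1], hidden))      # random input weights
    b = rng.normal(size=hidden)                    # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                   # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
model = elm_fit(X, y)
print("train RMSE:", np.sqrt(np.mean((elm_predict(model, X) - y) ** 2)))
```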

  13. Annual Coded Wire Tag Program; Oregon Missing Production Groups, 1995 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrison, Robert L.; Mallette, Christine; Lewis, Mark A.

    1995-12-01

    Bonneville Power Administration is the funding source for the Oregon Department of Fish and Wildlife's Annual Coded Wire Tag Program - Oregon Missing Production Groups Project. Tule brood fall chinook were caught primarily in the British Columbia, Washington and northern Oregon ocean commercial fisheries. The up-river bright fall chinook contributed primarily to the Alaska and British Columbia ocean commercial fisheries and the Columbia River gillnet fishery. Contribution of Rogue fall chinook released in the lower Columbia River system occurred primarily in the Oregon ocean commercial and Columbia River gillnet fisheries. Willamette spring chinook salmon contributed primarily to the Alaska and British Columbia ocean commercial, Oregon freshwater sport and Columbia River gillnet fisheries. Restricted ocean sport and commercial fisheries limited contribution of the Columbia coho released in the Umatilla River, which survived at an average rate of 1.05% and contributed primarily to the Washington, Oregon and California ocean sport and commercial fisheries and the Columbia River gillnet fishery. The 1987 to 1991 brood years of coho released in the Yakima River survived at an average rate of 0.64% and contributed primarily to the Washington, Oregon and California ocean sport and commercial fisheries and the Columbia River gillnet fishery. Survival rates of salmon and steelhead are influenced not only by factors in the hatchery (disease, density, diet, and size and time of release) but also by environmental factors in the river and ocean. These environmental factors are controlled by large-scale weather patterns such as El Nino, over which man has no influence. Man could have some influence over river flow conditions, but political and economic pressures generally outweigh the biological needs of the fish.

  14. KENNEDY SPACE CENTER, FLA. - A student from Shoshone-Bannock Junior-Senior High School, Fort Hall, Idaho, holds part of a flag presented by dancers from the Shoshone-Bannock Native American community, Fort Hall, Idaho, commemorating the orbiter Columbia and her crew. The dancers performed a healing ceremony during the memorial service held at the Space Memorial Mirror for the crew of Columbia. Feb. 1 is the one-year anniversary of the loss of the crew and orbiter Columbia in a tragic accident as the ship returned to Earth following mission STS-107. Students and staff of the Shoshone-Bannock Nation had an experiment on board Columbia. The public was invited to the memorial service, held in the KSC Visitor Complex, which included comments by Center Director Jim Kennedy and Executive Director of Florida Space Authority Winston Scott. Scott is a former astronaut who flew on Columbia in 1997.

    NASA Image and Video Library

    2004-02-01

    KENNEDY SPACE CENTER, FLA. - A student from Shoshone-Bannock Junior-Senior High School, Fort Hall, Idaho, holds part of a flag presented by dancers from the Shoshone-Bannock Native American community, Fort Hall, Idaho, commemorating the orbiter Columbia and her crew. The dancers performed a healing ceremony during the memorial service held at the Space Memorial Mirror for the crew of Columbia. Feb. 1 is the one-year anniversary of the loss of the crew and orbiter Columbia in a tragic accident as the ship returned to Earth following mission STS-107. Students and staff of the Shoshone-Bannock Nation had an experiment on board Columbia. The public was invited to the memorial service, held in the KSC Visitor Complex, which included comments by Center Director Jim Kennedy and Executive Director of Florida Space Authority Winston Scott. Scott is a former astronaut who flew on Columbia in 1997.

  15. Two distinct phylogenetic clades of infectious hematopoietic necrosis virus overlap within the Columbia River basin

    USGS Publications Warehouse

    Garver, K.A.; Troyer, R.M.; Kurath, G.

    2003-01-01

    Infectious hematopoietic necrosis virus (IHNV), an aquatic rhabdovirus, causes a highly lethal disease of salmonid fish in North America. To evaluate the genetic diversity of IHNV from throughout the Columbia River basin, excluding the Hagerman Valley, Idaho, the sequences of a 303 nt region of the glycoprotein gene (mid-G) of 120 virus isolates were determined. Sequence comparisons revealed 30 different sequence types, with a maximum nucleotide diversity of 7.3% (22 mismatches) and an intrapopulational nucleotide diversity of 0.018. This indicates that the genetic diversity of IHNV within the Columbia River basin is 3-fold higher than in Alaska, but 2-fold lower than in the Hagerman Valley, Idaho. Phylogenetic analyses separated the Columbia River basin IHNV isolates into 2 major clades, designated U and M. The 2 clades geographically overlapped within the lower Columbia River basin and in the lower Snake River and tributaries, while the upper Columbia River basin had only U clade and the upper Snake River basin had only M clade virus types. These results suggest that there are co-circulating lineages of IHNV present within specific areas of the Columbia River basin. The epidemiological significance of these findings provided insight into viral traffic patterns exhibited by IHNV in the Columbia River basin, with specific relevance to how the Columbia River basin IHNV types were related to those in the Hagerman Valley. These analyses indicate that there have likely been 2 historical events in which Hagerman Valley IHNV types were introduced and became established in the lower Columbia River basin. However, the data also clearly indicates that the Hagerman Valley is not a continuous source of waterborne virus infecting salmonid stocks downstream.

  16. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
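
    For readers who want to experiment with a linear program of the general form such solvers handle, a tiny example using SciPy is given below; it is not the fastclime R interface and does not trace the parametric regularization path.

```python
# A small linear program of the general form such solvers handle, posed with
# SciPy rather than the fastclime R package (this is not fastclime's API and
# does not trace the piecewise-linear regularization path).
from scipy.optimize import linprog

# minimize  c @ x  subject to  A_ub @ x <= b_ub,  x >= 0
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, res.fun)   # optimal vertex and objective value
```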

  17. 1. Historic American Buildings Survey District of Columbia Fire Department ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Historic American Buildings Survey District of Columbia Fire Department Photo FRONT ELEVATION, PRIOR TO 1960 - Engine Company Number Seventeen, Firehouse, 1227 Monroe Street Northeast, Washington, District of Columbia, DC

  18. KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, photographers focus on part of the cockpit collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, photographers focus on part of the cockpit collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

  19. KENNEDY SPACE CENTER, FLA. - Some of the Columbia debris inside the Columbia Debris Hangar is being moved out and placed on a flatbed truck (seen in the background) for transfer to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Some of the Columbia debris inside the Columbia Debris Hangar is being moved out and placed on a flatbed truck (seen in the background) for transfer to the Vehicle Assembly Building for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  20. KENNEDY SPACE CENTER, FLA. - Flatbed trucks carrying some of the debris of Space Shuttle Columbia approach the Vehicle Assembly Building (VAB). The debris is being transferred from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Flatbed trucks carrying some of the debris of Space Shuttle Columbia approach the Vehicle Assembly Building (VAB). The debris is being transferred from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  1. KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, a video cameraman records some of the debris collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, a video cameraman records some of the debris collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

  2. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach (right) talks to the media about activities that have taken place since the Columbia accident on Feb. 1, 2003. STS-107 debris recovery and reconstruction operations are winding down. To date, nearly 84,000 pieces of debris have been recovered and sent to KSC. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-06-04

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach (right) talks to the media about activities that have taken place since the Columbia accident on Feb. 1, 2003. STS-107 debris recovery and reconstruction operations are winding down. To date, nearly 84,000 pieces of debris have been recovered and sent to KSC. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  3. KENNEDY SPACE CENTER, FLA. - Pieces of Columbia debris are offloaded from a flatbed truck in the transfer aisle of the Vehicle Assembly Building (VAB). The debris is being moved from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Pieces of Columbia debris are offloaded from a flatbed truck in the transfer aisle of the Vehicle Assembly Building (VAB). The debris is being moved from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  4. KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, photographers focus on a piece of the debris collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, photographers focus on a piece of the debris collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

  5. KENNEDY SPACE CENTER, FLA. - Scott Thurston, NASA vehicle flow manager, speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Scott Thurston, NASA vehicle flow manager, speaks to members of the Columbia Reconstruction Team during transfer of debris from the Columbia Debris Hangar to its permanent storage site in the Vehicle Assembly Building. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  6. KENNEDY SPACE CENTER, FLA. - The media get a guided tour of the Columbia Debris Hangar. Shuttle Launch Director Mike Leinbach discussed activities that have taken place since the Columbia accident on Feb. 1, 2003. STS-107 debris recovery and reconstruction operations are winding down. To date, nearly 84,000 pieces of debris have been recovered and sent to KSC. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-06-04

    KENNEDY SPACE CENTER, FLA. - The media get a guided tour of the Columbia Debris Hangar. Shuttle Launch Director Mike Leinbach discussed activities that have taken place since the Columbia accident on Feb. 1, 2003. STS-107 debris recovery and reconstruction operations are winding down. To date, nearly 84,000 pieces of debris have been recovered and sent to KSC. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  7. KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, a photographer examines some of the debris collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - During a media tour of the Columbia Debris Hangar, a photographer examines some of the debris collected from search and recovery efforts in East Texas. About 83,000 pieces of debris from Columbia were shipped to KSC, which represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. The debris is being packaged for storage in an area of the Vehicle Assembly Building.

  8. KENNEDY SPACE CENTER, FLA. - A worker moves some of the Columbia debris to its storage site in the Vehicle Assembly Building (VAB). The debris is being transferred from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - A worker moves some of the Columbia debris to its storage site in the Vehicle Assembly Building (VAB). The debris is being transferred from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  9. KENNEDY SPACE CENTER, FLA. - Workers move some of the Columbia debris to its storage site in the Vehicle Assembly Building (VAB). The debris is being transferred from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Workers move some of the Columbia debris to its storage site in the Vehicle Assembly Building (VAB). The debris is being transferred from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  10. Draft Master Agreement between the University of the District of Columbia Faculty Association/NEA and the University of District of Columbia, October 1, 1985-September 30, 1987.

    ERIC Educational Resources Information Center

    District of Columbia Univ., Washington, DC.

    The collective bargaining agreement between the University of the District of Columbia and the University of the District of Columbia Faculty Association Chapter (600 members) of the National Education Association covering the period October 1, 1985-September 30, 1987 is presented. Items covered in the agreement include: unit scope and…

  11. The Columbia River Research Laboratory

    USGS Publications Warehouse

    Maule, Alec

    2005-01-01

    The U.S. Geological Survey's Columbia River Research Laboratory (CRRL) was established in 1978 at Cook, Washington, in the Columbia River Gorge east of Portland, Oregon. The CRRL, as part of the Western Fisheries Research Center, conducts research on fishery issues in the Columbia River Basin. Our mission is to: 'Serve the public by providing scientific information to support the stewardship of our Nation's fish and aquatic resources...by conducting objective, relevant research'.

  12. Changing course. Columbia the buyer becomes Columbia the builder as the company seeks to overcome market impediments.

    PubMed

    Japsen, B; Snow, C

    1997-04-14

    In an attempt to overcome market roadblocks, Columbia/HCA Healthcare Corp is revising its strategy from buying existing hospitals to constructing new ones. In this issue we take a look at the investor-owned giant's changing tactics as well as its sometimes rocky relations with the media. We also examine Columbia's performance in its former headquarters city, Louisville, Ky.

  13. Application of Electron Microscopy Techniques to the Investigation of Space Shuttle Columbia Accident

    NASA Technical Reports Server (NTRS)

    Shah, Sandeep

    2005-01-01

    This viewgraph presentation gives an overview of the investigation into the breakup of the Space Shuttle Columbia, and addresses the importance of a failure analysis strategy for the investigation of the Columbia accident. The main focus of the presentation is on the usefulness of electron microscopy for analyzing slag deposits from the tiles and reinforced carbon-carbon (RCC) wing panels of the Columbia orbiter.

  14. Megafloods and Clovis cache at Wenatchee, Washington

    USGS Publications Warehouse

    Waitt, Richard B.

    2016-01-01

    Immense late Wisconsin floods from glacial Lake Missoula drowned the Wenatchee reach of Washington's Columbia valley by different routes. The earliest debacles, nearly 19,000 cal yr BP, raged 335 m deep down the Columbia and built high Pangborn bar at Wenatchee. As advancing ice blocked the northwest of Columbia valley, several giant floods descended Moses Coulee and backflooded up the Columbia past Wenatchee. Ice then blocked Moses Coulee, and Grand Coulee to Quincy basin became the westmost floodway. From Quincy basin many Missoula floods backflowed 50 km upvalley to Wenatchee 18,000 to 15,500 years ago. Receding ice dammed glacial Lake Columbia centuries more—till it burst about 15,000 years ago. After Glacier Peak ashfall about 13,600 years ago, smaller great flood(s) swept down the Columbia from glacial Lake Kootenay in British Columbia. The East Wenatchee cache of huge fluted Clovis points had been laid atop Pangborn bar after the Glacier Peak ashfall, then buried by loess. Clovis people came five and a half millennia after the early gigantic Missoula floods, two and a half millennia after the last small Missoula flood, and two millennia after the glacial Lake Columbia flood. People likely saw outburst flood(s) from glacial Lake Kootenay.

  15. Emplacement of Columbia River flood basalt

    NASA Astrophysics Data System (ADS)

    Reidel, Stephen P.

    1998-11-01

    Evidence is examined for the emplacement of the Umatilla, Wilbur Creek, and the Asotin Members of Columbia River Basalt Group. These flows erupted in the eastern part of the Columbia Plateau during the waning phases of volcanism. The Umatilla Member consists of two flows in the Lewiston basin area and southwestern Columbia Plateau. These flows mixed to form one flow in the central Columbia Plateau. The composition of the younger flow is preserved in the center and the composition of the older flow is at the top and bottom. There is a complete gradation between the two. Flows of the Wilbur Creek and Asotin Members erupted individually in the eastern Columbia Plateau and also mixed together in the central Columbia Plateau. Comparison of the emplacement patterns to intraflow structures and textures of the flows suggests that very little time elapsed between eruptions. In addition, the amount of crust that formed on the earlier flows prior to mixing also suggests rapid emplacement. Calculations of volumetric flow rates through constrictions in channels suggest emplacement times of weeks to months under fast laminar flow for all three members. A new model for the emplacement of Columbia River Basalt Group flows is proposed that suggests rapid eruption and emplacement for the main part of the flow and slower emplacement along the margins as the flow margin expands.

  16. 1. View of north tower, facing northwest from dike on ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. View of north tower, facing northwest from dike on north bank of the Columbia River. - Pasco-Kennewick Transmission Line, Columbia River Crossing Towers, Columbia Drive & Gum Street, Kennewick, Benton County, WA

  17. Historic Columbia River Highway oral history : final report.

    DOT National Transportation Integrated Search

    2009-08-01

    The Historic Columbia River Highway: Oral History Project complements a larger effort in Oregon to reconnect abandoned sections of the Historic Columbia River Highway. The goals of the larger reconnection project, Milepost 2016 Reconnection Projec...

  18. An image segmentation method for apple sorting and grading using support vector machine and Otsu's method

    USDA-ARS?s Scientific Manuscript database

    Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...

  19. Cue Reliance in L2 Written Production

    ERIC Educational Resources Information Center

    Wiechmann, Daniel; Kerz, Elma

    2014-01-01

    Second language learners reach expert levels in relative cue weighting only gradually. On the basis of ensemble machine learning models fit to naturalistic written productions of German advanced learners of English and expert writers, we set out to reverse engineer differences in the weighting of multiple cues in a clause linearization problem. We…

  20. A Pulsed Thermographic Imaging System for Detection and Identification of Cotton Foreign Matter

    PubMed Central

    Kuzy, Jesse; Li, Changying

    2017-01-01

    Detection of foreign matter in cleaned cotton is instrumental to accurately grading cotton quality, which in turn impacts the marketability of the cotton. Current grading systems return estimates of the amount of foreign matter present, but provide no information about the identity of the contaminants. This paper explores the use of pulsed thermographic analysis to detect and identify cotton foreign matter. The design and implementation of a pulsed thermographic analysis system is described. A sample set of 240 foreign matter and cotton lint samples were collected. Hand-crafted waveform features and frequency-domain features were extracted and analyzed for statistical significance. Classification was performed on these features using linear discriminant analysis and support vector machines. Using waveform features and support vector machine classifiers, detection of cotton foreign matter was performed with 99.17% accuracy. Using frequency-domain features and linear discriminant analysis, identification was performed with 90.00% accuracy. These results demonstrate that pulsed thermographic imaging analysis produces data which is of significant utility for the detection and identification of cotton foreign matter. PMID:28273848

  1. Magnet pole shape design for reduction of thrust ripple of slotless permanent magnet linear synchronous motor with arc-shaped magnets considering end-effect based on analytical method

    NASA Astrophysics Data System (ADS)

    Shin, Kyung-Hun; Park, Hyung-Il; Kim, Kwan-Ho; Jang, Seok-Myeong; Choi, Jang-Young

    2017-05-01

    The shape of the magnet is essential to a slotless permanent magnet linear synchronous machine (PMLSM) because it directly determines key performance characteristics. This paper presents a reduction in the thrust ripple of a PMLSM through the use of arc-shaped magnets based on electromagnetic field theory. The magnetic field solutions were obtained by considering the end effect using a magnetic vector potential and a two-dimensional Cartesian coordinate system. The analytical solution of each subdomain (PM, air-gap, coil, and end region) is derived, and the field solution is obtained by applying the boundary and interface conditions between the subdomains. In particular, an analytical method was derived for the instantaneous thrust and thrust ripple reduction of a PMLSM with arc-shaped magnets. In order to demonstrate the validity of the analytical results, the back electromotive force results of a finite element analysis and of an experiment on the manufactured prototype model were compared. The optimal point for thrust ripple minimization is suggested.

  2. Using Bar Velocity to Predict the Maximum Dynamic Strength in the Half-Squat Exercise.

    PubMed

    Loturco, Irineu; Pereira, Lucas A; Cal Abad, Cesar C; Gil, Saulo; Kitamura, Katia; Kobal, Ronaldo; Nakamura, Fábio Y

    2016-07-01

    To determine whether athletes from different sport disciplines present similar mean propulsive velocity (MPV) in the half-squat (HS) during submaximal and maximal tests, enabling prediction of 1-repetition maximum (1-RM) from MPV at any given submaximal load. Sixty-four male athletes, comprising American football, rugby, and soccer players; sprinters and jumpers; and combat-sport strikers attended 2 testing sessions separated by 2-4 wk. On the first visit, a standardized 1-RM test was performed. On the second, athletes performed HSs on Smith-machine equipment, using relative percentages of 1-RM to determine the respective MPV of submaximal and maximal loads. Linear regression established the relationship between MPV and percentage of 1-RM. A very strong linear relationship (R2 ≈ .96) was observed between the MPV and the percentages of HS 1-RM, resulting in the following equation: %HS 1-RM = -105.05 × MPV + 131.75. The MPV at HS 1-RM was ~0.3 m/s. This equation can be used to predict HS 1-RM on a Smith machine with a high degree of accuracy.
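
    The reported regression can be applied directly: the measured MPV of a known submaximal load gives the percentage of 1-RM that load represents via %1-RM = -105.05 × MPV + 131.75, from which the absolute 1-RM follows. A minimal sketch of that calculation (the load and velocity values below are illustrative, not taken from the study):

```python
def estimate_half_squat_1rm(load_kg: float, mpv_m_s: float) -> float:
    """Estimate half-squat 1-RM on a Smith machine from one submaximal set.

    Uses the published group equation %1-RM = -105.05 * MPV + 131.75,
    where MPV is the mean propulsive velocity (m/s) of the submaximal load.
    """
    percent_1rm = -105.05 * mpv_m_s + 131.75   # % of 1-RM this load represents
    return load_kg * 100.0 / percent_1rm       # back out the absolute 1-RM

# Illustrative example: 100 kg moved at a mean propulsive velocity of 0.70 m/s.
if __name__ == "__main__":
    print(f"Estimated 1-RM: {estimate_half_squat_1rm(100.0, 0.70):.1f} kg")
```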

  3. Analysis of programming properties and the row-column generation method for 1-norm support vector machines.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). By its nature, the 1-norm SVM is a type of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to a category of decomposition techniques, and perform row and column generation in a parallel fashion. Specifically, for the 1-norm SVM, the maximum size of the RCG subproblems is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using the semi-deleting rule. Experimental results on toy data and real-world datasets illustrate that RCG is efficient for training the 1-norm SVM, especially when the number of SVs is small. Copyright © 2013 Elsevier Ltd. All rights reserved.
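
    The row-column generation solver itself is not reproduced here, but the underlying 1-norm SVM can be posed as an ordinary linear program, which is the structure the RCG method exploits. The sketch below hands the full (dense) LP for a toy data set to scipy, with no decomposition; it is an illustration of the problem class, not the paper's algorithm.

```python
# A small, dense illustration of the 1-norm SVM written as a linear program.
import numpy as np
from scipy.optimize import linprog

def train_l1_svm(X, y, C=1.0):
    """Solve min sum|w| + C*sum(xi)  s.t.  y_i(w.x_i + b) >= 1 - xi, xi >= 0.

    w is split as w = u - v with u, v >= 0 so that |w| = u + v is linear.
    Variable order: [u (d), v (d), b (1), xi (n)].
    """
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])

    # Margin constraints rewritten as A_ub @ z <= b_ub for linprog.
    Yx = y[:, None] * X                       # y_i * x_i, shape (n, d)
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)

    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d]
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    y = np.sign(X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=40))
    w, b = train_l1_svm(X, y, C=1.0)
    acc = np.mean(np.sign(X @ w + b) == y)
    print("sparse weights:", np.round(w, 3), " training accuracy:", acc)
```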

  4. Teaching High School Students Machine Learning Algorithms to Analyze Flood Risk Factors in River Deltas

    NASA Astrophysics Data System (ADS)

    Rose, R.; Aizenman, H.; Mei, E.; Choudhury, N.

    2013-12-01

    High School students interested in the STEM fields benefit most when actively participating, so I created a series of learning modules on how to analyze complex systems using machine-learning that give automated feedback to students. The automated feedbacks give timely responses that will encourage the students to continue testing and enhancing their programs. I have designed my modules to take the tactical learning approach in conveying the concepts behind correlation, linear regression, and vector distance based classification and clustering. On successful completion of these modules, students will learn how to calculate linear regression, Pearson's correlation, and apply classification and clustering techniques to a dataset. Working on these modules will allow the students to take back to the classroom what they've learned and then apply it to the Earth Science curriculum. During my research this summer, we applied these lessons to analyzing river deltas; we looked at trends in the different variables over time, looked for similarities in NDVI, precipitation, inundation, runoff and discharge, and attempted to predict floods based on the precipitation, waves mean, area of discharge, NDVI, and inundation.
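
    The module materials are not included in the abstract, but the two core computations named there, Pearson's correlation and simple linear regression, each take only a few lines of numpy; the sketch below (with made-up precipitation and discharge values) is one possible starting point for such an exercise.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

def linear_regression(x, y):
    """Least-squares slope and intercept of y = a*x + b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = pearson_r(x, y) * y.std() / x.std()   # slope = r * sy / sx
    b = y.mean() - a * x.mean()
    return a, b

if __name__ == "__main__":
    # Made-up data: precipitation index vs. river discharge for 8 years.
    precip = [1.0, 1.4, 0.8, 2.1, 1.9, 2.5, 0.6, 1.7]
    discharge = [3.1, 3.9, 2.7, 5.6, 5.0, 6.4, 2.2, 4.6]
    print("r =", round(pearson_r(precip, discharge), 3))
    print("slope, intercept =", linear_regression(precip, discharge))
```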

  5. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of the operational time-varying structural systems. However, it is a challenging task because there is not more information for the identification of the time-varying systems than that of the time-invariant systems. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve the sparsity of the Gram matrix. A Gamma-test-based non-parametric approach of selecting the regularization factor is adapted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples have illustrated the advantages of the proposed modal parameter estimator on the suppression of the overestimate and the short data. A laboratory experiment has further validated the proposed estimator.

  6. Experimental validation of the Achromatic Telescopic Squeezing (ATS) scheme at the LHC

    NASA Astrophysics Data System (ADS)

    Fartoukh, S.; Bruce, R.; Carlier, F.; Coello De Portugal, J.; Garcia-Tabares, A.; Maclean, E.; Malina, L.; Mereghetti, A.; Mirarchi, D.; Persson, T.; Pojer, M.; Ponce, L.; Redaelli, S.; Salvachua, B.; Skowronski, P.; Solfaroli, M.; Tomas, R.; Valuch, D.; Wegscheider, A.; Wenninger, J.

    2017-07-01

    The Achromatic Telescopic Squeezing scheme offers new techniques to deliver unprecedentedly small beam spot size at the interaction points of the ATLAS and CMS experiments of the LHC, while perfectly controlling the chromatic properties of the corresponding optics (linear and non-linear chromaticities, off-momentum beta-beating, spurious dispersion induced by the crossing bumps). The first series of beam tests with ATS optics were achieved during the LHC Run I (2011/2012) for a first validation of the basics of the scheme at small intensity. In 2016, a new generation of more performing ATS optics was developed and more extensively tested in the machine, still with probe beams for optics measurement and correction at β* = 10 cm, but also with a few nominal bunches to establish first collisions at nominal β* (40 cm) and beyond (33 cm), and to analysis the robustness of these optics in terms of collimation and machine protection. The paper will highlight the most relevant and conclusive results which were obtained during this second series of ATS tests.

  7. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  8. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
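
    Of the two approximations explored in the paper, random Fourier features are the simpler to sketch: an RBF kernel is replaced by an explicit low-dimensional feature map, after which a purely linear model is trained on the mapped data. The illustration below uses a plain linear SVM classifier on synthetic data rather than the paper's pairwise ranking loss and truncated Newton solver.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def random_fourier_features(X, n_components=200, gamma=0.05, seed=0):
    """Map X so that z(x).z(y) approximates the RBF kernel exp(-gamma*||x-y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_components))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)

if __name__ == "__main__":
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    # Same seed in both calls so train and test share one feature map.
    Ztr, Zte = random_fourier_features(Xtr), random_fourier_features(Xte)
    clf = LinearSVC(max_iter=5000).fit(Ztr, ytr)   # linear model on mapped data
    print("test accuracy with approximated RBF kernel:", clf.score(Zte, yte))
```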

  9. Assessing Affective and Deliberative Decision-Making: Adaptation of the Columbia Card Task to Brazilian Portuguese.

    PubMed

    Kluwe-Schiavon, Bruno; Sanvicente-Vieira, Breno; Viola, Thiago W; Veiga, Eduardo; Bortolotto, Vanessa; Grassi-Oliveira, Rodrigo

    2015-11-20

    The ability to predict reward and punishment is essential for decision-making and for learning about an ever-changing environment. Therefore, efforts have been made to understand the mechanisms underlying decision-making, especially how affective and deliberative processes interact with risk behavior. The aims were to adapt the Columbia Card Task (CCT) to Brazilian Portuguese and to investigate the affective and deliberative processes involved in decision-making. This study had two main phases: (1) a transcultural adaptation and (2) a pilot study. The feedback manipulation among the three conditions of the CCT had an effect on the risk-taking level (p < .005, ES = .201). In addition, the feedback manipulation among the three conditions of the CCT had an effect on information use at both the individual and group levels. Further, a linear regression suggested that the use of information, indicated by the advantageous level of the scenarios, predicts the number of cards chosen (R² = .029, p < .001), accounting for 17% of the variance. The Brazilian CCT performs well and is a versatile method for assessing affective and deliberative decision-making under risk according to different feedback manipulation scenarios. This study goes further, comparing electrodermal activity during hot and warm conditions and addressing an advantageous level index analysis to assess deliberative processing.

  10. Improving gridded snow water equivalent products in British Columbia, Canada: multi-source data fusion by neural network models

    NASA Astrophysics Data System (ADS)

    Snauffer, Andrew M.; Hsieh, William W.; Cannon, Alex J.; Schnorbus, Markus A.

    2018-03-01

    Estimates of surface snow water equivalent (SWE) in mixed alpine environments with seasonal melts are particularly difficult in areas of high vegetation density, topographic relief, and snow accumulations. These three confounding factors dominate much of the province of British Columbia (BC), Canada. An artificial neural network (ANN) was created using as predictors six gridded SWE products previously evaluated for BC. Relevant spatiotemporal covariates were also included as predictors, and observations from manual snow surveys at stations located throughout BC were used as target data. Mean absolute errors (MAEs) and interannual correlations for April surveys were found using cross-validation. The ANN using the three best-performing SWE products (ANN3) had the lowest mean station MAE across the province. ANN3 outperformed each product as well as product means and multiple linear regression (MLR) models in all of BC's five physiographic regions except for the BC Plains. Subsequent comparisons with predictions generated by the Variable Infiltration Capacity (VIC) hydrologic model found ANN3 to better estimate SWE over the VIC domain and within most regions. The superior performance of ANN3 over the individual products, product means, MLR, and VIC was found to be statistically significant across the province.
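
    The abstract does not give the network configuration, but the general workflow, fusing several gridded products plus simple covariates with a small neural network trained against station observations and scored by cross-validated MAE, can be outlined with scikit-learn. Everything below (the placeholder products, covariates, and network size) is an illustrative assumption, with synthetic data standing in for the BC snow-survey stations.

```python
# Illustrative sketch only: fuse several gridded SWE products plus simple
# covariates with a small neural network, trained against station snow-survey
# observations. All data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_obs = 500

# Columns: three gridded SWE products + elevation and day of year (covariates).
X = np.column_stack([
    rng.gamma(2.0, 150.0, n_obs),        # product A SWE (mm)
    rng.gamma(2.0, 140.0, n_obs),        # product B SWE (mm)
    rng.gamma(2.0, 160.0, n_obs),        # product C SWE (mm)
    rng.uniform(300, 2500, n_obs),       # station elevation (m)
    rng.integers(60, 120, n_obs),        # day of year of the survey
])
# Synthetic "observed" SWE as a noisy nonlinear blend of the products.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.05 * X[:, 3] + rng.normal(0, 50, n_obs))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE of the fused estimate: {mae:.1f} mm")
```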

  11. Calculation of force and power during bench throws using a Smith machine: the importance of considering the effect of counterweights.

    PubMed

    Kobayashi, Y; Narazaki, K; Akagi, R; Nakagaki, K; Kawamori, N; Ohta, K

    2013-09-01

    For achieving accurate and safe measurements of the force and power exerted on a load during resistance exercise, the Smith machine has been used instead of free weights. However, because some Smith machines possess counterweights, the equation for the calculation of force and power in this system should be different from the one used for free weights. The purpose of this investigation was to calculate force and power using an equation derived from a dynamic equation for a Smith machine with counterweights and to determine the differences in force and power calculated using 2 different equations. One equation was established ignoring the effect of the counterweights (Method 1). The other equation was derived from a dynamic equation for a barbell and counterweight system (Method 2). 9 female collegiate judo athletes performed bench throws using a Smith machine with a counterweight at 6 different loading conditions. Barbell displacement was recorded using a linear position transducer. The force and power were subsequently calculated by Methods 1 and 2. The results showed that the mean and peak power and force in Method 1 were significantly lower relative to those of Method 2 under all loading conditions. These results indicate that the mean and peak power and force during bench throwing using a Smith machine with counterweights would be underestimated when the calculations used to determine these parameters do not account for the effect of counterweights. © Georg Thieme Verlag KG Stuttgart · New York.
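
    The equations themselves are not printed in the abstract, but the mechanics are straightforward: a counterweight of mass m_c cancels part of the barbell weight yet still has to be accelerated, so the lifter's force is F = (m_b - m_c)g + (m_b + m_c)a rather than an expression built from the net load alone. The sketch below contrasts the two calculations on a synthetic displacement trace; the masses, sampling rate, and the exact form assumed for "Method 1" are illustrative guesses, not values or equations taken from the paper.

```python
# Force and power from a linear-position-transducer trace on a Smith machine
# with a counterweight. Assumption (our reading of the abstract): Method 1
# treats the net load (m_b - m_c) as a free weight, while Method 2 keeps the
# counterweight's inertia, F = (m_b - m_c)*g + (m_b + m_c)*a.
import numpy as np

G = 9.81

def force_power(x, dt, m_barbell, m_counter, account_for_counterweight=True):
    """Instantaneous force (N) and power (W) from a displacement trace x (m)."""
    v = np.gradient(x, dt)                   # barbell velocity
    a = np.gradient(v, dt)                   # barbell acceleration
    if account_for_counterweight:            # "Method 2"
        force = (m_barbell - m_counter) * G + (m_barbell + m_counter) * a
    else:                                    # "Method 1": net load as free weight
        force = (m_barbell - m_counter) * (G + a)
    return force, force * v

if __name__ == "__main__":
    dt = 0.002                               # assumed 500 Hz transducer
    t = np.arange(0.0, 0.6, dt)
    x = 0.2 * (1 - np.cos(np.pi * t / 0.6))  # synthetic 0.4 m bench throw
    for label, flag in (("Method 1", False), ("Method 2", True)):
        F, P = force_power(x, dt, m_barbell=45.0, m_counter=15.0,
                           account_for_counterweight=flag)
        print(f"{label}: peak force {F.max():6.1f} N, peak power {P.max():7.1f} W")
```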

  12. A hybrid machine learning model to estimate nitrate contamination of production zone groundwater in the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.

    2016-12-01

    A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field and county scale nitrogen mass balance, historical and current landuse, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age were assigned to over 6,000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM) was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field scale nitrogen mass balance, climate, and depth to 60 year old water. Twenty-two variables were selected for the final model and final model errors for log-transformed hold-out data were R squared of 0.45 and root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement in the model and when included decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60 year old water, generally decreased with increasing natural landuse surrounding wells and increasing mean groundwater age, generally increased with increased minimum depth to high water table and with increased base flow index value. Other variables exhibited much more erratic or noisy behavior in the model making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show probability of anoxic groundwater conditions largely control estimated nitrate concentrations compared to the other predictors.
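
    The 213-variable model cannot be reproduced from the abstract, but the core workflow it describes, fitting a gradient boosting machine to log-transformed nitrate and ranking predictors by importance, looks roughly like the scikit-learn sketch below. The predictor names and data are synthetic placeholders, not the study's database.

```python
# Minimal sketch of the GBM workflow described above: fit a gradient boosting
# model to log-transformed nitrate and rank predictors by importance.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 3000
X = pd.DataFrame({
    "prob_anoxic":         rng.uniform(0, 1, n),
    "n_balance_kg_ha":     rng.gamma(2.0, 20.0, n),
    "depth_to_60yr_water": rng.uniform(5, 300, n),
    "natural_landuse_pct": rng.uniform(0, 100, n),
    "precip_mm":           rng.normal(400, 80, n),
})
# Synthetic log-nitrate target with a plausible structure for illustration.
log_no3 = (1.5 - 2.0 * X["prob_anoxic"] + 0.02 * X["n_balance_kg_ha"]
           - 0.004 * X["depth_to_60yr_water"] + rng.normal(0, 1.0, n))

Xtr, Xte, ytr, yte = train_test_split(X, log_no3, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0).fit(Xtr, ytr)

rmse = mean_squared_error(yte, gbm.predict(Xte)) ** 0.5
print("hold-out RMSE (log units):", round(rmse, 3))
for name, imp in sorted(zip(X.columns, gbm.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:22s} importance {imp:.3f}")
```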

  13. 33 CFR 165.T13-149 - Safety Zone; McNary-John Day Transmission Line Project, Columbia River, Hermiston, OR.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Transmission Line Project, Columbia River, Hermiston, OR. 165.T13-149 Section 165.T13-149 Navigation and... Project, Columbia River, Hermiston, OR. (a) Location: The following is a safety zone: All waters of the Columbia River between two lines with the first line starting at the north bank at 45° 56′ 16.5″ N/119° 19...

  14. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Scott Thurston, NASA vehicle flow manager, addresses the media about efforts to pack the debris stored in the Columbia Debris Hangar. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris permanently.

    NASA Image and Video Library

    2003-09-11

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Scott Thurston, NASA vehicle flow manager, addresses the media about efforts to pack the debris stored in the Columbia Debris Hangar. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds. An area of the Vehicle Assembly Building is being prepared to store the debris permanently.

  15. KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach points to some of the debris as he explains to the media about activities that have taken place since the Columbia accident on Feb. 1, 2003. STS-107 debris recovery and reconstruction operations are winding down. To date, nearly 84,000 pieces of debris have been recovered and sent to KSC. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-06-04

    KENNEDY SPACE CENTER, FLA. - In the Columbia Debris Hangar, Shuttle Launch Director Mike Leinbach points to some of the debris as he explains to the media about activities that have taken place since the Columbia accident on Feb. 1, 2003. STS-107 debris recovery and reconstruction operations are winding down. To date, nearly 84,000 pieces of debris have been recovered and sent to KSC. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  16. KENNEDY SPACE CENTER, FLA. - Pieces of debris of Space Shuttle Columbia are offloaded from a flatbed truck in the transfer aisle of the Vehicle Assembly Building (VAB). The debris is being moved from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

    NASA Image and Video Library

    2003-09-15

    KENNEDY SPACE CENTER, FLA. - Pieces of debris of Space Shuttle Columbia are offloaded from a flatbed truck in the transfer aisle of the Vehicle Assembly Building (VAB). The debris is being moved from the Columbia Debris Hangar to the VAB for permanent storage. More than 83,000 pieces of debris were shipped to KSC during search and recovery efforts in East Texas. That represents about 38 percent of the dry weight of Columbia, equaling almost 85,000 pounds.

  17. KENNEDY SPACE CENTER, FLA. - In the RLV hangar, members of the Columbia Reconstruction Team work to identify pieces of Thermal Protection System tile from the left wing of Columbia recovered during the search and recovery efforts in East Texas. The items shipped to KSC number more than 82,000 and weigh 84,800 pounds or 38 percent of the total dry weight of Columbia. Of those items, 78,760 have been identified, with 753 placed on the left wing grid in the Hangar.

    NASA Image and Video Library

    2003-05-15

    KENNEDY SPACE CENTER, FLA. - In the RLV hangar, members of the Columbia Reconstruction Team work to identify pieces of Thermal Protection System tile from the left wing of Columbia recovered during the search and recovery efforts in East Texas. The items shipped to KSC number more than 82,000 and weigh 84,800 pounds or 38 percent of the total dry weight of Columbia. Of those items, 78,760 have been identified, with 753 placed on the left wing grid in the Hangar.

  18. The future of the Large Hadron Collider and CERN.

    PubMed

    Heuer, Rolf-Dieter

    2012-02-28

    This paper presents the Large Hadron Collider (LHC) and its current scientific programme and outlines options for high-energy colliders at the energy frontier for the years to come. The immediate plans include the exploitation of the LHC at its design luminosity and energy, as well as upgrades to the LHC and its injectors. This may be followed by a linear electron-positron collider, based on the technology being developed by the Compact Linear Collider and the International Linear Collider collaborations, or by a high-energy electron-proton machine. This contribution describes the past, present and future directions, all of which have a unique value to add to experimental particle physics, and concludes by outlining key messages for the way forward.

  19. Text mining approach to predict hospital admissions using early medical records from the emergency department.

    PubMed

    Lucini, Filipe R; S Fogliatto, Flavio; C da Silveira, Giovani J; L Neyeloff, Jeruza; Anzanello, Michel J; de S Kuchenbecker, Ricardo; D Schaan, Beatriz

    2017-04-01

    Emergency department (ED) overcrowding is a serious issue for hospitals. Early information on short-term inward bed demand from patients receiving care at the ED may reduce the overcrowding problem, and optimize the use of hospital resources. In this study, we use text mining methods to process data from early ED patient records using the SOAP framework, and predict future hospitalizations and discharges. We try different approaches for pre-processing of text records and to predict hospitalization. Sets-of-words are obtained via binary representation, term frequency, and term frequency-inverse document frequency. Unigrams, bigrams and trigrams are tested for feature formation. Feature selection is based on χ² and F-score metrics. In the prediction module, eight text mining methods are tested: Decision Tree, Random Forest, Extremely Randomized Tree, AdaBoost, Logistic Regression, Multinomial Naïve Bayes, Support Vector Machine (linear kernel) and Nu-Support Vector Machine (linear kernel). Prediction performance is evaluated by F1-scores. Precision and Recall values are also reported for all text mining methods tested. Nu-Support Vector Machine was the text mining method with the best overall performance. Its average F1-score in predicting hospitalization was 77.70%, with a standard deviation (SD) of 0.66%. The method could be used to manage daily routines in EDs such as capacity planning and resource allocation. Text mining could provide valuable information and facilitate decision-making by inward bed management teams. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
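
    The pipeline described above (term-weighted sets-of-words, χ²-based feature selection, and a linear-kernel SVM among the classifiers) maps onto a few lines of scikit-learn. The sketch below is a rough outline with four invented toy records, not ED data, and uses default hyperparameters.

```python
# Rough sketch of the text-mining pipeline described above: TF-IDF features
# from free-text records, chi-squared feature selection, and a linear-kernel
# Nu-SVM predicting hospitalization. The four toy records are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import NuSVC
from sklearn.pipeline import make_pipeline

records = [
    "chest pain radiating to left arm, dyspnea, prior myocardial infarction",
    "mild sore throat, afebrile, normal vital signs, requests sick note",
    "acute abdominal pain, vomiting, guarding on examination",
    "ankle sprain after fall, no deformity, weight bearing possible",
]
admitted = [1, 0, 1, 0]     # 1 = hospitalized, 0 = discharged

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),      # unigrams and bigrams
    SelectKBest(chi2, k=20),                  # chi-squared feature selection
    NuSVC(kernel="linear", nu=0.5),
).fit(records, admitted)

print(model.predict(["severe dyspnea and chest pain on arrival"]))
```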

  20. SU-G-BRB-02: An Open-Source Software Analysis Library for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Yaldo, D

    Purpose: Routine linac quality assurance (QA) tests have become complex enough to require automation of most test analyses. A new data analysis software library was built that allows physicists to automate routine linear accelerator quality assurance tests. The package is open source, code tested, and benchmarked. Methods: Images and data were generated on a TrueBeam linac for the following routine QA tests: VMAT, starshot, CBCT, machine logs, Winston Lutz, and picket fence. The analysis library was built using the general programming language Python. Each test was analyzed with the library algorithms and compared to manual measurements taken at the time of acquisition. Results: VMAT QA results agreed within 0.1% between the library and manual measurements. Machine logs (dynalogs & trajectory logs) were successfully parsed; mechanical axis positions were verified for accuracy and MLC fluence agreed well with EPID measurements. CBCT QA measurements were within 10 HU and 0.2mm where applicable. Winston Lutz isocenter size measurements were within 0.2mm of TrueBeam’s Machine Performance Check. Starshot analysis was within 0.2mm of the Winston Lutz results for the same conditions. Picket fence images with and without a known error showed that the library was capable of detecting MLC offsets within 0.02mm. Conclusion: A new routine QA software library has been benchmarked and is available for use by the community. The library is open-source and extensible for use in larger systems.

  1. Ion beam figuring of high-slope surfaces based on figure error compensation algorithm.

    PubMed

    Dai, Yifan; Liao, Wenlin; Zhou, Lin; Chen, Shanyong; Xie, Xuhui

    2010-12-01

    In a deterministic figuring process, it is critical to guarantee high stability of the removal function as well as the accuracy of the dwell time solution, which directly influence the convergence of the figuring process. Hence, when figuring steep optics, the ion beam is required to keep a perpendicular incidence, and a five-axis figuring machine is typically utilized. In this paper, however, a method for high-precision figuring of high-slope optics is proposed with a linear three-axis machine, allowing for inclined beam incidence. First, the changing rule of the removal function and the normal removal rate with the incidence angle is analyzed according to the removal characteristics of ion beam figuring (IBF). Then, we propose to reduce the influence of varying removal function and projection distortion on the dwell time solution by means of figure error compensation. Consequently, the incident ion beam is allowed to keep parallel to the optical axis. Simulations and experiments are given to verify the removal analysis. Finally, a figuring experiment is conducted on a linear three-axis IBF machine, which proves the validity of the method for high-slope surfaces. It takes two iterations and about 9 min to successfully figure a fused silica sample, whose aperture is 21.3 mm and radius of curvature is 16 mm. The root-mean-square figure error of the convex surface is reduced from 13.13 to 5.86 nm.

  2. Detecting natural occlusion boundaries using local cues

    PubMed Central

    DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.

    2012-01-01

    Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731

  3. Prediction of Moisture Content for Congou Black Tea Withering Leaves Using Image Features and Nonlinear Method.

    PubMed

    Liang, Gaozhen; Dong, Chunwang; Hu, Bin; Zhu, Hongkai; Yuan, Haibo; Jiang, Yongwen; Hao, Guoshuang

    2018-05-18

    Withering is the first step in the processing of congou black tea. To address the shortcomings of traditional water content detection methods, a machine vision based NDT (Non-Destructive Testing) method was established to detect the moisture content of withered leaves. First, a computer vision system collected visible-light images of the tea leaf surfaces in time sequence, and color and texture characteristics were extracted from the spatial changes of color. Then quantitative prediction models for moisture content detection of withered tea leaves were established using linear PLS (Partial Least Squares) and non-linear SVM (Support Vector Machine) methods. The results showed correlation coefficients higher than 0.8 between the water contents and the green component mean value (G), lightness component mean value (L*) and uniformity (U), which means that the extracted characteristics have great potential to predict the water contents. The performance parameters of the SVM prediction model, namely the correlation coefficient of the prediction set (Rp), root-mean-square error of prediction (RMSEP), and relative standard deviation (RPD), are 0.9314, 0.0411 and 1.8004, respectively. The non-linear modeling method better describes the quantitative relations between the images and the water content. With superior generalization and robustness, the method provides a new approach and theoretical basis for online water content monitoring in the automated production of black tea.
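
    The comparison the abstract describes, a linear PLS model against a nonlinear SVM regression on image colour features, has a simple skeleton in scikit-learn; the three features (G, L*, U) and all data values below are synthetic placeholders rather than measurements from withered tea leaves.

```python
# Skeleton of the comparison described above: linear PLS vs. nonlinear SVR for
# moisture content from image colour features. Feature values are synthetic
# placeholders for the G, L* and U statistics extracted from the images.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n = 300
G = rng.uniform(60, 160, n)          # mean green component
L = rng.uniform(30, 80, n)           # mean lightness L*
U = rng.uniform(0.2, 0.9, n)         # texture uniformity
X = np.column_stack([G, L, U])
moisture = 0.9 - 0.002 * G - 0.003 * L + 0.1 * U ** 2 + rng.normal(0, 0.02, n)

Xtr, Xte, ytr, yte = train_test_split(X, moisture, random_state=0)
for name, model in [("PLS", PLSRegression(n_components=2)),
                    ("SVR", make_pipeline(StandardScaler(), SVR(C=10.0)))]:
    model.fit(Xtr, ytr)
    pred = np.ravel(model.predict(Xte))
    rp = np.corrcoef(pred, yte)[0, 1]        # prediction-set correlation
    print(f"{name}: Rp = {rp:.3f}")
```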

  4. Measurement of LHCD antenna position in Aditya tokamak

    NASA Astrophysics Data System (ADS)

    Ambulkar, K. K.; Sharma, P. K.; Virani, C. G.; Parmar, P. R.; Thakur, A. L.; Kulkarni, S. V.

    2010-02-01

    To drive plasma current non-inductively in the ADITYA tokamak, a 120 kW pulsed Lower Hybrid Current Drive (LHCD) system at 3.7 GHz has been designed, fabricated and installed on ADITYA. In this system, the antenna consists of a grill structure with two rows, each row comprising four sub-waveguides. The coupling of LHCD power to the plasma depends strongly on the plasma density near the mouth of the grill antenna, so the antenna has to be positioned precisely for efficient coupling. The movement of a mechanical bellows, which contracts or expands by up to 50 mm, governs the movement of the antenna. In order to monitor the antenna position precisely, the reference position of the antenna with respect to the machine/plasma position has to be determined accurately, and a mechanical or electronic system is needed to measure the relative movement of the antenna with respect to that reference. Because of poor accessibility inside the ADITYA machine, it is impossible to measure physically the reference position of the grill antenna with respect to the machine wall, taken as the reference, so an alternative method has to be adopted to establish these measurements reliably. In this paper we report the design and development of a mechanism with which the antenna position measurements are made. We also describe a unique method for measuring the reference position of the antenna with respect to the inner edge of the tokamak wall, which would otherwise be impossible because of poor accessibility and physical constraints. The position of the antenna is monitored using an electronic scale developed and installed on the bellows. Once the reference position is established, a linear potentiometer attached to the bellows measures the linear distance using a position transmitter. The accuracy of measurement obtained in our setup is within ±0.5%, and the linearity and repeatability are excellent.

  5. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
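
    The multispectral feature set is not public, but the evaluation protocol, several classifiers of increasing complexity compared by 10-fold cross-validation against ground-truth labels, is easy to reproduce in outline. Synthetic features stand in for the spectral bands below, and bagged decision trees stand in for the ensemble variants tested in the paper.

```python
# Outline of the classifier comparison described above: several models of
# increasing complexity scored by 10-fold cross-validation. Synthetic features
# stand in for the multispectral measurements and histopathology labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Six tissue classes, eight spectral-band features (synthetic stand-in).
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "EN-DT (bagged)": BaggingClassifier(DecisionTreeClassifier(),
                                        n_estimators=25, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name:15s} 10-fold accuracy: {acc:.3f}")
```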

  6. Feedback control of plasma instabilities with charged particle beams and study of plasma turbulence

    NASA Technical Reports Server (NTRS)

    Tham, Philip Kin-Wah

    1994-01-01

    A new non-perturbing technique for feedback control of plasma instabilities has been developed in the Columbia Linear Machine (CLM). The feedback control scheme involves the injection of a feedback modulated ion beam as a remote suppressor. The ion beam was obtained from a compact ion beam source which was developed for this purpose. A Langmuir probe was used as the feedback sensor. The feedback controller consisted of a phase-shifter and amplifiers. This technique was demonstrated by stabilizing various plasma instabilities to the background noise level, like the trapped particle instability, the ExB instability and the ion-temperature-gradient (ITG) driven instability. An important feature of this scheme is that the injected ion beam is non-perturbing to the plasma equilibrium parameters. The robustness of this feedback stabilization scheme was also investigated. The principal result is that the scheme is fairly robust, tolerating about 100% variation about the nominal parameter values. Next, this scheme is extended to the unsolved general problem of controlling multimode plasma instabilities simultaneously with a single sensor-suppressor pair. A single sensor-suppressor pair of feedback probes is desirable to reduce the perturbation caused by the probes. Two plasma instabilities, the ExB and the ITG modes, were simultaneously stabilized. A simple 'state' feedback type method was used where more state information was generated from the single sensor Langmuir probe by appropriate signal processing, in this case, by differentiation. This proof-of-principle experiment demonstrated for the first time that by designing a more sophisticated electronic feedback controller, many plasma instabilities may be simultaneously controlled. Simple theoretical models showed generally good agreement with the feedback experimental results. On a parallel research front, a better understanding of the saturated state of a plasma instability was sought partly with the help of feedback. A plasma instability is usually observed in its saturated state and appears as a single feature in the frequency spectrum with single azimuthal and parallel wavenumbers. The physics of the non-zero spectral width was investigated in detail because the finite spectral width can cause "turbulent" transport. One aspect of the "turbulence" was investigated by obtaining the scaling of the linear growth rate of the instabilities with the fluctuation levels. The linear growth rates were measured with the established gated feedback technique. The research showed that the ExB instability evolves into a quasi-coherent state when the fluctuation level is high. The coherent aspects were studied with a bispectral analysis. Moreover, the single spectral feature was discovered to be actually composed of a few radial harmonics. The radial harmonics play a role in the nonlinear saturation of the instability via three-wave coupling.

  7. Hispanics in the U.S. Military

    DTIC Science & Technology

    2006-09-01

    also noted that “much of the downsizing was set in motion without 1 Isaac Asimov, in The Columbia...1995. Asimov, Isaac. The Columbia World of Quotations. New York: Columbia University Press, 1996, www.bartleby.com/66/, accessed July 2006

  8. Status of water levels and selected water-quality conditions in the Sparta-Memphis aquifer in Arkansas, Spring-Summer 2003

    USGS Publications Warehouse

    Schrader, T.P.

    2006-01-01

    During the spring of 2003, water levels were measured in 341 wells in the Sparta-Memphis aquifer in Arkansas. Water-quality samples were collected for temperature and specific-conductance measurements during the spring-summer of 2003 from 70 wells in Arkansas in the Sparta-Memphis aquifer. Maps of areal distribution of potentiometric surface, change in water-level measurements from 1999 to 2003, and specific-conductance data reveal spatial trends across the study area. The highest water-level altitude measured in Arkansas was 328 feet above National Geodetic Vertical Datum of 1929 (NGVD of 1929) in Craighead County; the lowest water-level altitude was 199 feet below NGVD of 1929 in Union County. Three large cones of depression are shown in the 2003 potentiometric surface map, centered in Columbia, Jefferson, and Union Counties in Arkansas as a result of large withdrawals for industrial and public supplies. A broad depression exists in western Poinsett County in Arkansas. The potentiometric surface indicates that large withdrawals have altered or reversed the natural direction of flow in most areas. In the northern third of the study area the flow is from the east, west, and north towards the broad depression in Poinsett County. In the central third of the study area the flow is dominated by the cone of depression centered in Jefferson County. In the southern third of the study area the flow is dominated by the two cones of depression in Union and Columbia Counties. A map of water-level changes from 1999 to 2003 was constructed using water-level measurements from 281 wells. The largest rise in water level measured was about 57.8 feet in Columbia County. The largest decline in water level measured was about -71.6 feet in Columbia County. Areas with a general rise are shown in Arkansas, Bradley, Calhoun, Cleveland, Columbia, Ouachita, and Union Counties. Areas with a general decline are shown in Craighead, Crittenden, Cross, Desha, Drew, Jefferson, Lonoke, Phillips, Poinsett, Prairie, and Woodruff Counties. Hydrographs were constructed for wells with a minimum of 25 years of water-level measurements. A trend line using a linear regression was calculated for the period of record from spring of 1978 to spring of 2003 to determine the annual decline or rise in feet per year for water levels in each well. The hydrographs were grouped by county. The mean values for county annual water-level decline or rise ranged from -1.42 to 0.27 foot per year. Specific conductance ranged from 82 microsiemens per centimeter at 25 degrees Celsius in Jefferson County to about 1,210 microsiemens per centimeter at 25 degrees Celsius in Lee County. The mean specific conductance was 400 microsiemens per centimeter at 25 degrees Celsius.
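
    The per-well trend described above, a least-squares line through 25 or more years of spring water levels reported in feet per year, amounts to a one-line fit; a sketch with made-up measurements:

```python
# Sketch of the per-well trend calculation described above: a least-squares
# line through annual spring water levels, reported in feet per year.
# The years and levels below are made-up illustrative values.
import numpy as np

years = np.arange(1978, 2004)                          # spring 1978 - spring 2003
levels_ft = (250.0 - 0.8 * (years - 1978)
             + np.random.default_rng(5).normal(0, 2, years.size))

slope_ft_per_yr, intercept = np.polyfit(years, levels_ft, 1)
print(f"water-level trend: {slope_ft_per_yr:+.2f} feet per year")
```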

  9. KSC-99padig006

    NASA Image and Video Library

    1999-09-24

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility, the orbiter Columbia leaves Kennedy Space Center on the back of a Boeing 747 Shuttle Carrier Aircraft on a ferry flight to Palmdale, Calif. Columbia, the oldest of four orbiters in NASA's fleet, will undergo extensive inspections and modifications in Boeing's Orbiter Assembly Facility during a nine-month orbiter maintenance down period (OMDP), the second in its history. Orbiters are periodically removed from flight operations for an OMDP. Columbia's first was in 1994. Along with more than 100 modifications on the vehicle, Columbia will be the second orbiter to be outfitted with the multifunctional electronic display system, or "glass cockpit." Columbia is expected to return to KSC in July 2000

  10. Columbia Quilt

    NASA Image and Video Library

    2018-02-22

    A certificate is on display that confirms the transfer of a giant hand-made quilt in honor of space shuttle Columbia and her crew from the Office of Procurement to the Columbia Preservation Room inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida. The quilt was made by Katherine Walsh, a lifelong NASA and space program fan originally from Kentucky. The quilt will be displayed with its certificate in the preservation room as part of NASA's Apollo, Challenger, Columbia Lessons Learned Program.

  11. KENNEDY SPACE CENTER, FLA. - Storage boxes and other containers of Columbia debris wait in the Columbia Debris Hangar for transfer to storage in the Vehicle Assembly Building. About 83,000 pieces were shipped to KSC during search and recovery efforts in East Texas.

    NASA Image and Video Library

    2003-09-02

    KENNEDY SPACE CENTER, FLA. - Storage boxes and other containers of Columbia debris wait in the Columbia Debris Hangar for transfer to storage in the Vehicle Assembly Building. About 83,000 pieces were shipped to KSC during search and recovery efforts in East Texas.

  12. Parallel algorithms for boundary value problems

    NASA Technical Reports Server (NTRS)

    Lin, Avi

    1990-01-01

    A general approach to solve boundary value problems numerically in a parallel environment is discussed. The basic algorithm consists of two steps: the local step where all the P available processors work in parallel, and the global step where one processor solves a tridiagonal linear system of the order P. The main advantages of this approach are two fold. First, this suggested approach is very flexible, especially in the local step and thus the algorithm can be used with any number of processors and with any of the SIMD or MIMD machines. Secondly, the communication complexity is very small and thus can be used as easily with shared memory machines. Several examples for using this strategy are discussed.
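
    The global step described above reduces to a tridiagonal system of order P, one equation per interface between the P subdomains handled in the local step. The sketch below shows a serial Thomas-algorithm solve of such a system with illustrative coefficients; the local step and the assembly of the interface equations are only indicated in comments, since the abstract does not specify them.

```python
# Serial sketch of the "global step" described above: one processor solving a
# tridiagonal system of order P (one unknown per interface between the P
# subdomains handled in the parallel local step). Coefficients are illustrative.
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system by forward elimination and back substitution.

    lower[i] multiplies x[i-1], diag[i] multiplies x[i], upper[i] multiplies x[i+1].
    """
    n = len(diag)
    c, d = np.empty(n), np.empty(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):                    # forward elimination
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

if __name__ == "__main__":
    P = 8                                    # number of processors / subdomains
    # Illustrative interface equations, as would be assembled from local solves.
    lower = np.full(P, -1.0); lower[0] = 0.0
    upper = np.full(P, -1.0); upper[-1] = 0.0
    diag = np.full(P, 2.5)
    rhs = np.ones(P)
    x = thomas(lower, diag, upper, rhs)
    # Verify against a dense solve.
    A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
    print("residual check:", np.allclose(A @ x, rhs))
```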

  13. A microdynamic version of the tensile test machine

    NASA Technical Reports Server (NTRS)

    Glaser, R. J.

    1991-01-01

    Very large space structures require structural reactions to control forces associated with nanometer-level displacements; JPL has accordingly built a tensile test machine capable of mN-level force measurements and nm-level displacement measurements, with a view to the study of structural linear joining technology at the lower limit of its resolution. The tester is composed of a moving table that is supported by six flexured legs and a test specimen cantilevered off the table to ground. Three vertical legs contain piezoactuators allowing changes in length up to 200 microns while generating axial load and bending moments. Displacements between ground and table are measured by means of three laser-interferometric channels.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Page, Brandi R., E-mail: bpage@wakehealth.edu; Hudson, Alana D.; Brown, Derek W.

    The international growth of cancer and lack of available treatment is en route to become a global crisis. With >60% of cancer patients needing radiation therapy at some point during their treatment course, the lack of available facilities and treatment programs worldwide is extremely problematic. The number of deaths from treatable cancers is projected to increase to 11.5 million deaths in 2030 because the international population is aging and growing. In this review, we present how best to answer the need for radiation therapy facilities from a technical standpoint. Specifically, we examine whether cobalt teletherapy machines or megavoltage linear accelerator machines are best equipped to handle the multitudes in need of radiation therapy treatment in the developing world.

  15. Patient compliance with screening for fecal occult blood in family practice.

    PubMed Central

    Hoogewerf, P E; Hislop, T G; Morrison, B J; Burns, S D; Sizto, R

    1987-01-01

    Thirty-two family physicians in British Columbia collaborated in a study to evaluate their patients' compliance when offered testing for fecal occult blood (FOB) with Hemoccult II as a screening test for asymptomatic colorectal cancer. Of the 5003 eligible patients 71% complied. Thirteen variables were investigated. Compliance was found to be directly related to age in a linear manner (chi-squared value for trend = 180.4, p less than 0.0001), age alone correctly classifying 58.5% of the patients as complying or not complying. The association with other variables was less strong. Restricting the consumption of red meat during the test period had no effect on compliance. PMID:3607662

  16. City of Columbia, Missouri - Clean Water Act Public Notice

    EPA Pesticide Factsheets

    The EPA is providing notice of a proposed Administrative Penalty Assessment against the City of Columbia, MO, regarding alleged violations at the City's Landfill and Yard Waste Compost Facility, located at 5700 Peabody Road, Columbia, Boone County, MO, 652

  17. An improved conjugate gradient scheme to the solution of least squares SVM.

    PubMed

    Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya

    2005-03-01

    The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparison with other existing algorithms.
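
    For context, LS-SVM training reduces to a symmetric linear system in the dual variables, and a common way to solve it is the classical reduction to two positive-definite systems handled by conjugate gradient; the sketch below follows that classical route, not the single reduced system proposed in the letter.

```python
# Sketch of LS-SVM training via conjugate gradient. This follows the classical
# reduction to two positive-definite systems, (K + I/C) eta = 1 and
# (K + I/C) nu = y, rather than the single reduced system proposed in the paper.
import numpy as np
from scipy.sparse.linalg import cg

def rbf_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_lssvm(X, y, C=10.0, gamma=0.5):
    n = len(y)
    A = rbf_kernel(X, X, gamma) + np.eye(n) / C     # K + I/C, symmetric PD
    eta, _ = cg(A, np.ones(n))
    nu, _ = cg(A, y.astype(float))
    b = (np.ones(n) @ nu) / (np.ones(n) @ eta)      # bias from the two solves
    alpha = nu - b * eta                            # dual coefficients
    return alpha, b

def predict(X_train, alpha, b, X_new, gamma=0.5):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0, 1.0, -1.0)
    alpha, b = train_lssvm(X, y)
    print("training accuracy:", np.mean(predict(X, alpha, b, X) == y))
```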

  18. HOM-Free Linear Accelerating Structure for e+ e- Linear Collider at C-Band

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubo, Kiyoshi

    2003-07-07

    A HOM-free linear accelerating structure using the choke mode cavity (damped cavity) is now under design for the e+e- linear collider project at C-band frequency (5712 MHz). Since this structure shows a powerful damping effect on almost all HOMs, there is no multibunch problem due to long-range wakefields. The structure will be equipped with microwave absorbers in each cell and also an in-line dummy load in the last few cells. The straightness tolerance for the 1.8 m long structure is tighter than 30 µm for a 25% emittance dilution limit, which can be achieved by standard machining and brazing techniques. Since it has good vacuum pumping conductance through annular gaps in each cell, instabilities due to the interaction of the beam with residual gas and ions can be minimized.

  19. Field-trip guide to the vents, dikes, stratigraphy, and structure of the Columbia River Basalt Group, eastern Oregon and southeastern Washington

    USGS Publications Warehouse

    Camp, Victor E; Reidel, Stephen P.; Ross, Martin E.; Brown, Richard J.; Self, Stephen

    2017-06-22

    The Columbia River Basalt Group covers an area of more than 210,000 km2 with an estimated volume of 210,000 km3. As the youngest continental flood-basalt province on Earth (16.7–5.5 Ma), it is well preserved, with a coherent and detailed stratigraphy exposed in the deep canyonlands of eastern Oregon and southeastern Washington. The Columbia River flood-basalt province is often cited as a model for the study of similar provinces worldwide.This field-trip guide explores the main source region of the Columbia River Basalt Group and is written for trip participants attending the 2017 International Association of Volcanology and Chemistry of the Earth’s Interior (IAVCEI) Scientific Assembly in Portland, Oregon, USA. The first part of the guide provides an overview of the geologic features common in the Columbia River flood-basalt province and the stratigraphic terminology used in the Columbia River Basalt Group. The accompanying road log examines the stratigraphic evolution, eruption history, and structure of the province through a field examination of the lavas, dikes, and pyroclastic rocks of the Columbia River Basalt Group.

  20. The Columbia Debris Loan Program; Examples of Microscopic Analysis

    NASA Technical Reports Server (NTRS)

    Russell, Rick; Thurston, Scott; Smith, Stephen; Marder, Arnold; Steckel, Gary

    2006-01-01

    Following the tragic loss of the Space Shuttle Columbia, NASA formed the Columbia Recovery Office (CRO). The CRO was initially formed at the Johnson Space Center after the conclusion of recovery operations on May 1, 2003, then transferred to the Kennedy Space Center on October 6, 2003 and renamed the Columbia Recovery Office and Preservation. An integral part of the preservation project was the development of a process to loan Columbia debris to qualified researchers and technical educators. The purposes of this program include aiding the advancement of advanced spacecraft design and flight safety development, advancing the study of hypersonic re-entry to enhance ground safety, training and instructing accident investigators, and establishing an enduring legacy for Space Shuttle Columbia and her crew. Along with a summary of the debris loan process, examples of microscopic analysis of Columbia debris items will be presented. The first example is from the reconstruction following the STS-107 accident and shows how the Materials and Processes team used microscopic analysis to confirm the accident scenario. Additionally, three examples of microstructural results from the debris loan process, from NASA internal, academic, and private-industry researchers, will be presented.
