Li, Ning; Cao, Chao; Wang, Cong
2017-06-15
Supporting simultaneous access by machine-type devices is a critical challenge in machine-to-machine (M2M) communications. In this paper, we propose an optimal scheme to dynamically adjust the Access Class Barring (ACB) factor and the number of random access channel (RACH) resources for clustered M2M communications, in which Delay-Sensitive (DS) devices coexist with Delay-Tolerant (DT) ones. Since delay-sensitive devices share random access resources with delay-tolerant devices, reducing the resources consumed by the former leaves more resources available to the latter. Our goal is to optimize the random access scheme so that it not only satisfies the requirements of delay-sensitive devices but also takes the communication quality of delay-tolerant ones into account. We approach the problem from the perspective of delay-sensitive services by dynamically adjusting the resource allocation and the ACB scheme for these devices. Simulation results show that the proposed scheme performs well both in satisfying delay-sensitive services and in increasing the utilization of the random access resources allocated to them.
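To make the barring mechanism concrete, below is a minimal numpy simulation of a single RACH slot under ACB. It illustrates the generic ACB/RACH collision model, not the authors' optimized scheme; the device count, the ACB factors swept, and the 54 contention preambles (a typical LTE configuration) are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rach_slot(n_devices, acb_factor, n_preambles):
    # Each backlogged device independently passes the barring check with
    # probability acb_factor, then picks one preamble uniformly at random.
    # A device succeeds only if its preamble was chosen by no one else.
    passed = rng.random(n_devices) < acb_factor
    choices = rng.integers(0, n_preambles, size=int(passed.sum()))
    counts = np.bincount(choices, minlength=n_preambles)
    return int((counts == 1).sum())

# Too high a factor causes collisions, too low leaves preambles idle,
# so an intermediate ACB factor maximizes successes per slot.
for p in (0.1, 0.3, 0.5, 1.0):
    mean_success = np.mean([rach_slot(200, p, 54) for _ in range(2000)])
    print(f"ACB factor {p:.1f}: {mean_success:.1f} successes/slot")
```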
LTE-advanced random access mechanism for M2M communication: A review
NASA Astrophysics Data System (ADS)
Mustafa, Rashid; Sarowa, Sandeep; Jaglan, Reena Rathee; Khan, Mohammad Junaid; Agrawal, Sunil
2016-03-01
Machine Type Communication (MTC) enables one or more self-sufficient machines to communicate directly with one another without human intervention. MTC applications include the smart grid, security, e-Health and intelligent automation systems. To support huge numbers of MTC devices, one of the challenging issues is to provide an efficient way for massive numbers of devices to access the network while minimizing network overload. In this article, the different overload control mechanisms for random access are reviewed, with the aim of avoiding the congestion caused by the random access channel (RACH) load of MTC devices. Past and present wireless technologies have been engineered for Human-to-Human (H2H) communications, in particular for the transmission of voice. Nevertheless, Long Term Evolution (LTE)-Advanced is expected to play a central role in Machine-to-Machine (M2M) communication, even though it has been optimized for H2H communications. The distinct characteristics of M2M communications create challenges different from those of H2H communications. We investigate the impact of massive numbers of M2M terminals attempting random access to LTE-Advanced all at once, and we discuss and review the solutions proposed by the Third Generation Partnership Project (3GPP) to alleviate the overload problem. Finally, we evaluate and compare these solutions with respect to how effectively they eliminate congestion on the random access channel for M2M communications without affecting H2H communications.
Paging memory from random access memory to backing storage in a parallel computer
Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E
2013-05-21
Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
Computational work and time on finite machines.
NASA Technical Reports Server (NTRS)
Savage, J. E.
1972-01-01
Measures of the computational work and computational delay required by machines to compute functions are given. Exchange inequalities are developed for random access, tape, and drum machines to show that product inequalities between storage and time, number of drum tracks and time, number of bits in an address and time, etc., must be satisfied to compute finite functions on bounded machines.
Operating System For Numerically Controlled Milling Machine
NASA Technical Reports Server (NTRS)
Ray, R. B.
1992-01-01
OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk
2016-07-18
Super-dense wireless sensor networks (WSNs) have become popular with the development of the Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicle-to-Vehicle (V2V) networks. While highly dense wireless networks provide efficient and sustainable solutions for collecting precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utilization efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2% in large-scale WSNs and that, in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.
Stylianou, Neophytos; Akbarov, Artur; Kontopantelis, Evangelos; Buchan, Iain; Dunn, Ken W
2015-08-01
Predicting mortality from burn injury has traditionally employed logistic regression models. Alternative machine learning methods have been introduced in some areas of clinical prediction as the necessary software and computational facilities have become accessible. Here we compare logistic regression and machine learning predictions of mortality from burn. An established logistic mortality model was compared to machine learning methods (artificial neural network, support vector machine, random forests and naïve Bayes) using a population-based (England & Wales) case-cohort registry. Predictive evaluation used: area under the receiver operating characteristic curve; sensitivity; specificity; positive predictive value; and Youden's index. All methods had comparable discriminatory abilities, with similar sensitivities, specificities and positive predictive values. Although some machine learning methods performed marginally better than logistic regression, the differences were seldom statistically significant and were clinically insubstantial. Random forests were marginally better for high positive predictive value and reasonable sensitivity. Neural networks yielded slightly better prediction overall. Logistic regression gives an optimal mix of performance and interpretability. The established logistic regression model of burn mortality performs well against more complex alternatives. Clinical prediction with a small set of strong, stable, independent predictors is unlikely to gain much from machine learning outside specialist research contexts.
MLACP: machine-learning-based prediction of anticancer peptides
Manavalan, Balachandran; Basith, Shaherin; Shin, Tae Hwan; Choi, Sun; Kim, Myeong Ok; Lee, Gwang
2017-01-01
Cancer is the second leading cause of death globally, and use of therapeutic peptides to target and kill cancer cells has received considerable attention in recent years. Identification of anticancer peptides (ACPs) through wet-lab experimentation is expensive and often time consuming; therefore, development of an efficient computational method is essential to identify potential ACP candidates prior to in vitro experimentation. In this study, we developed support vector machine- and random forest-based machine-learning methods for the prediction of ACPs using the features calculated from the amino acid sequence, including amino acid composition, dipeptide composition, atomic composition, and physicochemical properties. We trained our methods using the Tyagi-B dataset and determined the machine parameters by 10-fold cross-validation. Furthermore, we evaluated the performance of our methods on two benchmarking datasets, with our results showing that the random forest-based method outperformed the existing methods with an average accuracy and Matthews correlation coefficient value of 88.7% and 0.78, respectively. To assist the scientific community, we also developed a publicly accessible web server at www.thegleelab.org/MLACP.html. PMID:29100375
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.
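A compact sketch of Kanerva-style sparse distributed memory helps make the abstract concrete; the parameter values below (256-bit addresses, 1000 hard locations, activation radius 111) are arbitrary choices for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, r = 256, 1000, 111          # address bits, hard locations, Hamming radius

hard_addr = rng.integers(0, 2, size=(M, n))   # fixed random hard locations
counters = np.zeros((M, n), dtype=int)        # one counter vector per location

def active(addr):
    # Activate every hard location within Hamming distance r of the address.
    return (hard_addr != addr).sum(axis=1) <= r

def write(addr, data):
    # Add the bipolar (+1/-1) version of data into all activated counters.
    counters[active(addr)] += 2 * data - 1

def read(addr):
    # Pool the activated counters and threshold back to a binary vector.
    return (counters[active(addr)].sum(axis=0) >= 0).astype(int)

pattern = rng.integers(0, 2, size=n)
write(pattern, pattern)                       # autoassociative store
noisy = pattern.copy()
noisy[rng.choice(n, 20, replace=False)] ^= 1  # corrupt 20 of 256 bits
print((read(noisy) == pattern).mean())        # recall accuracy, typically 1.0
```

Reading with a corrupted address still activates enough of the originally written locations that the stored pattern is usually recovered exactly, which is the associative behavior the abstract refers to.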
Parsing Citations in Biomedical Articles Using Conditional Random Fields
Zhang, Qing; Cao, Yong-Gang; Yu, Hong
2011-01-01
Citations are used ubiquitously in biomedical full-text articles and play an important role for representing both the rhetorical structure and the semantic content of the articles. As a result, text mining systems will significantly benefit from a tool that automatically extracts the content of a citation. In this study, we applied the supervised machine-learning algorithms Conditional Random Fields (CRFs) to automatically parse a citation into its fields (e.g., Author, Title, Journal, and Year). With a subset of html format open-access PubMed Central articles, we report an overall 97.95% F1-score. The citation parser can be accessed at: http://www.cs.uwm.edu/~qing/projects/cithit/index.html. PMID:21419403
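The field-labeling task described can be sketched with python-crfsuite, a successor to the CRF toolkits of that era (the paper does not specify this library; the toy citation, features and hyperparameters below are ours):

```python
# pip install python-crfsuite
import pycrfsuite

def token_features(tokens, i):
    # Minimal surface features; the paper's real feature set is richer.
    w = tokens[i]
    feats = [f"lower={w.lower()}",
             f"is_digit={w.isdigit()}",
             f"is_title={w.istitle()}"]
    feats.append(f"prev={tokens[i-1].lower()}" if i > 0 else "BOS")
    feats.append(f"next={tokens[i+1].lower()}" if i + 1 < len(tokens) else "EOS")
    return feats

tokens = ["Zhang", "Q", ".", "Parsing", "Citations", ".",
          "Bioinformatics", ".", "2011", "."]
labels = ["Author", "Author", "Author", "Title", "Title", "Title",
          "Journal", "Journal", "Year", "Year"]
xseq = [token_features(tokens, i) for i in range(len(tokens))]

trainer = pycrfsuite.Trainer(verbose=False)
trainer.append(xseq, labels)
trainer.set_params({"c1": 1.0, "c2": 1e-3, "max_iterations": 50})
trainer.train("citation.crfsuite")          # writes the trained model to disk

tagger = pycrfsuite.Tagger()
tagger.open("citation.crfsuite")
print(tagger.tag(xseq))                     # per-token field labels
```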
The Talking Dictionary. The Prospectus Series, Paper No. 2.
ERIC Educational Resources Information Center
Ward, Ted
Three talking dictionaries designed to increase independence and resource-use skills of handicapped children have specific advantages and limitations. System I involves a random access tape recorder, a printed or braille dictionary which contains the inquiry numbers for words, a console (similar to an adding machine) on which the number is…
Clustering Single-Cell Expression Data Using Random Forest Graphs.
Pouyan, Maziyar Baran; Nourani, Mehrdad
2017-07-01
Complex tissues such as brain and bone marrow are made up of multiple cell types. As the study of biological tissue structure progresses, the role of cell-type-specific research becomes increasingly important. Novel sequencing technology such as single-cell cytometry provides researchers with access to valuable biological data. Applying machine-learning techniques to these high-throughput datasets provides deep insights into the cellular landscape of the tissue of which those cells are a part. In this paper, we propose the use of random-forest-based single-cell profiling, a new machine-learning-based technique, to profile different cell types of intricate tissues using single-cell cytometry data. Our technique utilizes random forests to capture cell marker dependencies and model the cellular populations using the cell network concept. This cellular network helps us discover what cell types are in the tissue. Our experimental results on public-domain datasets indicate promising performance and accuracy of our technique in extracting cell populations of complex tissues.
Locking devices on cigarette vending machines: evaluation of a city ordinance.
Forster, J L; Hourigan, M E; Kelder, S
1992-01-01
OBJECTIVES. Policymakers, researchers, and citizens are beginning to recognize the need to limit minors' access to tobacco by restricting the sale of cigarettes through vending machines. One policy alternative that has been proposed by the tobacco industry is a requirement that vending machines be fitted with electronic locking devices. This study evaluates such a policy as enacted in St. Paul, Minn. METHODS. A random sample of vending machine locations was selected for cigarette purchase attempts conducted before implementation and at 3 and 12 months postimplementation. RESULTS. The rate of noncompliance by merchants was 34% after 3 months and 30% after 1 year. The effect of the law was to reduce the ability of a minor to purchase cigarettes from locations originally selling cigarettes through vending machines from 86% at baseline to 36% at 3 months. The purchase rate at these locations rose to 48% at 1 year. CONCLUSIONS. Our results suggest that cigarette vending machine locking devices may not be as effective as vending machine bans and require additional enforcement to ensure compliance with the law. PMID:1503160
EXODUS II: A finite element data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoof, L.A.; Yarberry, V.R.
1994-09-01
EXODUS II is a model developed to store and retrieve data for finite element analyses. It is used for preprocessing (problem definition), postprocessing (results visualization), and code-to-code data transfer. An EXODUS II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran library routines which comprise the Application Programming Interface (API).
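Because an EXODUS II file is layered on netCDF, its structure can be inspected even without the native API, using any generic netCDF reader. A minimal sketch (the file name is hypothetical, and the dimension names follow the published EXODUS II conventions as we recall them, so they should be verified against the spec):

```python
from netCDF4 import Dataset

ds = Dataset("mesh.exo", "r")           # random access, machine independent
for name in ("num_dim", "num_nodes", "num_elem", "num_el_blk"):
    if name in ds.dimensions:
        print(name, len(ds.dimensions[name]))
print(list(ds.variables)[:10])          # e.g. coordinate and connectivity arrays
ds.close()
```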
Neumark-Sztainer, Dianne; French, Simone A; Hannan, Peter J; Story, Mary; Fulkerson, Jayne A
2005-01-01
Objectives This study examined associations between high school students' lunch patterns and vending machine purchases and the school food environment and policies. Methods A randomly selected sample of 1088 high school students from 20 schools completed surveys about their lunch practices and vending machine purchases. School food policies were assessed by principal and food director surveys. The number of vending machines and their hours of operation were assessed by trained research staff. Results Students at schools with open campus policies during lunchtime were significantly more likely to eat lunch at a fast food restaurant than students at schools with closed campus policies (0.7 days/week vs. 0.2 days/week, p < .001). Student snack food purchases at school were significantly associated with the number of snack machines at schools (p < .001) and policies about the types of food that can be sold. In schools with policies, students reported making snack food purchases an average of 0.5 ± 1.1 days/week as compared to an average of 0.9 ± 1.3 days/week in schools without policies (p < .001). In schools in which soft drink machines were turned off during lunch time, students purchased soft drinks from vending machines 1.4 ± 1.6 days/week as compared to 1.9 ± 1.8 days/week in schools in which soft drink machines were turned on during lunch (p = .040). Conclusion School food policies that decrease access to foods high in fats and sugars are associated with less frequent purchase of these items in school among high school students. Schools should examine their food-related policies and decrease access to foods that are low in nutrients and high in fats and sugars. PMID:16209716
ERECTING/MACHINE SHOP, CRANE ACCESS GANGWAY BETWEEN ERECTING (L) AND MACHINE (R) SHOPS, LOOKING NORTH. - Southern Pacific, Sacramento Shops, Erecting Shop, 111 I Street, Sacramento, Sacramento County, CA
ERIC Educational Resources Information Center
Wu, Dan; He, Daqing
2012-01-01
Purpose: This paper seeks to examine the further integration of machine translation technologies with cross language information access in providing web users the capabilities of accessing information beyond language barriers. Machine translation and cross language information access are related technologies, and yet they have their own unique…
Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access.
Chacón, Alejandro; Marco-Sola, Santiago; Espinosa, Antonio; Ribeca, Paolo; Moure, Juan Carlos
2015-01-01
The recent advent of high-throughput sequencing machines producing large amounts of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of those and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8× faster than a comparable multi-core CPU version, and about 3× to 5× faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
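For orientation, the search logic whose memory behavior the paper optimizes is the standard FM-index backward search; a pure-Python sketch follows, in which a naive O(n) rank query stands in for the sampled Occ tables that are the real memory hot spot:

```python
def bwt_index(text):
    text += "$"
    sa = sorted(range(len(text)), key=lambda i: text[i:])  # suffix array
    bwt = "".join(text[i - 1] for i in sa)                 # Burrows-Wheeler transform
    # C[c] = number of characters in the text strictly smaller than c
    C, total = {}, 0
    for c in sorted(set(text)):
        C[c] = total
        total += text.count(c)
    return bwt, C, sa

def occ(bwt, c, i):
    # Rank query: occurrences of c in bwt[:i]; real implementations replace
    # this linear scan with precomputed count tables.
    return bwt[:i].count(c)

def backward_search(bwt, C, pattern):
    lo, hi = 0, len(bwt)
    for c in reversed(pattern):
        lo = C[c] + occ(bwt, c, lo)
        hi = C[c] + occ(bwt, c, hi)
        if lo >= hi:
            return 0
    return hi - lo   # number of occurrences of pattern in the text

bwt, C, sa = bwt_index("ACGTACGTGACG")
print(backward_search(bwt, C, "ACG"))   # -> 3
```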
NASA Astrophysics Data System (ADS)
Othman, Arsalan A.; Gloaguen, Richard
2017-09-01
Lithological mapping in mountainous regions is often impeded by limited accessibility due to relief. This study aims to evaluate (1) the performance of different supervised classification approaches using remote sensing data and (2) the use of additional information such as geomorphology. We exemplify the methodology in the Bardi-Zard area in NE Iraq, a part of the Zagros Fold-Thrust Belt, known for its chromite deposits. We highlight the improvement of remote sensing geological classification achieved by integrating geomorphic features and spatial information into the classification scheme. We performed a Maximum Likelihood (ML) classification alongside two Machine Learning Algorithms (MLA), Support Vector Machine (SVM) and Random Forest (RF), to allow the joint use of geomorphic features, Band Ratios (BR), Principal Component Analysis (PCA), spatial information (spatial coordinates) and multispectral data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite. The RF algorithm showed reliable results and discriminated serpentinite; talus and terrace deposits; red argillites with conglomerates and limestone; limy conglomerates and limestone conglomerates; tuffites interbedded with basic lavas; limestone and metamorphosed limestone; and reddish-green shales. The best overall accuracy (∼80%) was achieved by the RF algorithm in the majority of the sixteen tested combination datasets.
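The joint use of spectral, geomorphic and spatial layers reduces, in code, to stacking co-registered rasters into one feature matrix; a scikit-learn sketch with stand-in random arrays (the shapes, class count and sampling fraction are assumed for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-ins for co-registered raster layers (ASTER bands, band ratios, PCA
# components, geomorphic derivatives) on a common (rows, cols) grid, plus a
# training label raster digitized from field observations.
rows, cols, n_layers = 200, 200, 12
rng = np.random.default_rng(0)
layers = [rng.normal(size=(rows, cols)) for _ in range(n_layers)]
labels = rng.integers(0, 8, size=(rows, cols))   # 8 lithology classes

yy, xx = np.mgrid[0:rows, 0:cols]                # spatial coordinates as features
X = np.column_stack([l.ravel() for l in layers] + [yy.ravel(), xx.ravel()])
y = labels.ravel()

train = rng.random(y.size) < 0.05                # sparse field-checked pixels
clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
clf.fit(X[train], y[train])
litho_map = clf.predict(X).reshape(rows, cols)   # full classified map
```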
Adachi-Mejia, A.M.; Longacre, M.R.; Skatrud-Mickelson, M.; Li, Z.; Purvis, L.A.; Titus, L.J.; Beach, M.L.; Dalton, M.A.
2013-01-01
Objectives The 2010 Dietary Guidelines for Americans include reducing consumption of sugar-sweetened beverages. Among the many possible routes of access for youth, school vending machines provide ready availability of sugar-sweetened beverages. The purpose of this study was to determine variation in high school student access to sugar-sweetened beverages through vending machines by geographic location – urban, town or rural – and to offer an approach for analysing school vending machine content. Study design Cross-sectional observational study. Methods Between October 2007 and May 2008, trained coders recorded beverage vending machine content and machine-front advertising in 113 machines across 26 schools in New Hampshire and Vermont, USA. Results Compared with town schools, urban schools were significantly less likely to offer sugar-sweetened beverages (P=0.002). Rural schools also offered more sugar-sweetened beverages than urban schools, but this difference was not significant. Advertisements for sugar-sweetened beverages were highly prevalent in town schools. Conclusions High school students have ready access to sugar-sweetened beverages through their school vending machines. Town schools offer the highest risk of exposure; school vending machines located in towns offer up to twice as much access to sugar-sweetened beverages in both content and advertising compared with urban locations. Variation by geographic region suggests that healthier environments are possible and some schools can lead as inspirational role models. PMID:23498924
Monroy-Parada, Doris Xiomara; Ángeles Moya, María; José Bosqued, María; López, Lázaro; Rodríguez-Artalejo, Fernando; Royo-Bordonada, Miguel Ángel
2016-06-09
Policies restricting access to sugary drinks and unhealthy foods in the school environment are associated with healthier consumption patterns. In 2010, Spain approved a Consensus Document regarding Food at Schools with nutritional criteria to improve the nutritional profile of foods and drinks served at schools. The objective of this study was to describe the frequency of food and drink vending machines at secondary schools in Madrid, the products offered in them, and their nutritional profile. Cross-sectional study of a random sample of 330 secondary schools in Madrid in 2014-2015. The characteristics of the schools and the existence of vending machines were recorded through the internet and by telephone interview. The products offered in a representative sample of 6 vending machines were identified by in situ inspection, and their nutritional composition was taken from their labelling. Finally, the nutritional profile of each product was analyzed with the United Kingdom profile model, which classifies products as healthy or less healthy. The prevalence of vending machines was 17.3%. Among the products offered, 80.5% were less healthy foods and drinks (high in energy, fat or sugar and poor in nutrients) and 10.5% were healthy products. Vending machines are common at secondary schools in Madrid, and most of the products they offer are still less healthy.
A national survey of public support for restrictions on youth access to tobacco.
Bailey, W J; Crowe, J W
1994-10-01
A national telephone survey was conducted to measure public support for seven proposals to restrict youth access to tobacco products, including increases in the cigarette excise tax. A random digit dialing survey, using computer-assisted telephone interviews and a two-stage Mitofsky-Waksberg design, was used to generate and replace telephone numbers and to select individuals from within households. More than 94% of respondents believed cigarette smoking by children and adolescents to be a "very serious" or "somewhat serious" problem. Most respondents expressed support for all the proposed measures to restrict youth access to tobacco products (fines for sellers, fines for youthful violators, licensing of all tobacco vendors, restrictions on cigarette vending machines, ban on sponsorship of youth-oriented events, and ban on all tobacco advertising), and for increases in the cigarette excise tax.
Automatic vetting of planet candidates from ground based surveys: Machine learning with NGTS
NASA Astrophysics Data System (ADS)
Armstrong, David J.; Günther, Maximilian N.; McCormac, James; Smith, Alexis M. S.; Bayliss, Daniel; Bouchy, François; Burleigh, Matthew R.; Casewell, Sarah; Eigmüller, Philipp; Gillen, Edward; Goad, Michael R.; Hodgkin, Simon T.; Jenkins, James S.; Louden, Tom; Metrailler, Lionel; Pollacco, Don; Poppenhaeger, Katja; Queloz, Didier; Raynard, Liam; Rauer, Heike; Udry, Stéphane; Walker, Simon R.; Watson, Christopher A.; West, Richard G.; Wheatley, Peter J.
2018-05-01
State-of-the-art exoplanet transit surveys are producing ever-increasing quantities of data. Making the best use of this resource, whether in detecting interesting planetary systems or in determining accurate planetary population statistics, requires new automated methods. Here we describe a machine learning algorithm that forms an integral part of the pipeline for the NGTS transit survey, demonstrating the efficacy of machine learning in selecting planetary candidates from multi-night ground-based survey data. Our method uses a combination of random forests and self-organising maps to rank planetary candidates, achieving an AUC score of 97.6% in ranking 12368 injected planets against 27496 false positives in the NGTS data. We build on past examples by using injected transit signals to form a training set, a necessary development for applying similar methods to upcoming surveys. We also make the autovet code used to implement the algorithm publicly accessible. autovet is designed to perform machine-learned vetting of planetary candidates, and can utilise a variety of methods. The apparent robustness of machine learning techniques, whether on space-based or the qualitatively different ground-based data, highlights their importance to future surveys such as TESS and PLATO and the need to better understand their advantages and pitfalls in an exoplanetary context.
Council-supported condom vending machines: are they acceptable to rural communities?
Tomnay, Jane E; Hatch, Beth
2013-11-01
Twenty-four hour access to condoms for young people living in rural Victoria is problematic for many reasons, including the fact that condom vending machines are often located in venues and places they cannot access. We partnered with three rural councils to install condom vending machines in locations that provided improved access to condoms for local young people. Councils regularly checked the machines, refilled the condoms and retrieved the money. They also managed the maintenance of the machine and provided monthly data. In total, 1153 condoms were purchased over 12 months, with 924 (80%) obtained from male toilets and 69% (801 out of 1153) purchased in the second half of the study. Revenue of $2626.10 (AUD) was generated and no negative feedback from residents was received by any council nor was there any negative reporting by local media. Vandalism, tampering or damage occurred at all sites; however, only two significant episodes of damage required a machine to be sent away for repairs. Condom vending machines installed in rural towns in north-east Victoria are accessible to young people after business hours, are cost-effective for councils and have not generated any complaints from residents. The machines have not suffered unrepairable damage and were used more frequently as the study progressed.
Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.
Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R
2015-01-05
Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure-activity relationship and quantitative structure-activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from PubChem BioAssay data collections directly from our interface or upload his or her own SD files which contain structures and activity information to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity.
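The listed learners are all available in scikit-learn, so the module's model menu can be sketched as follows (synthetic data stands in for fingerprints derived from SD files or PubChem BioAssay; the hyperparameters are illustrative, not CHARMMing's defaults):

```python
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Stand-in for a compound fingerprint matrix with binary activity labels.
X, y = make_classification(n_samples=500, n_features=64, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Support Vector Machine": SVC(),
    "Stochastic Gradient Descent": SGDClassifier(random_state=0),
    "Gradient Tree Boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f}")
```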
Validation of optical codes based on 3D nanostructures
NASA Astrophysics Data System (ADS)
Carnicer, Artur; Javidi, Bahram
2017-05-01
Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask and a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forests machine learning algorithms.
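The propagation model described, the angular spectrum of plane waves plus a random phase mask for the diffuser, can be sketched in a few lines of numpy (the grid size, pixel pitch, wavelength and propagation distance below are assumed values, not the paper's):

```python
import numpy as np

def angular_spectrum(u0, wavelength, z, dx):
    # Propagate a complex field u0 a distance z via the angular spectrum of
    # plane waves: filter its 2-D spectrum with the free-space transfer function.
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0                          # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

rng = np.random.default_rng(0)
n, dx, lam = 512, 1e-6, 633e-9              # grid, pixel pitch, wavelength
u = np.ones((n, n), dtype=complex)          # plain "plaintext" field
u *= np.exp(2j * np.pi * rng.random((n, n)))    # diffuser as random phase mask
u = angular_spectrum(u, lam, 5e-3, dx)      # propagation through the mask stack
speckle = np.abs(u) ** 2                    # speckle-like intensity; input hidden
```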
Impact of the HEALTHY Study on Vending Machine Offerings in Middle Schools.
Hartstein, Jill; Cullen, Karen W; Virus, Amy; El Ghormli, Laure; Volpe, Stella L; Staten, Myrlene A; Bridgman, Jessica C; Stadler, Diane D; Gillis, Bonnie; McCormick, Sarah B; Mobley, Connie C
2011-01-01
The purpose of this study is to report the impact of the three-year middle school-based HEALTHY study on intervention school vending machine offerings. There were two goals for the vending machines: serve only dessert/snack foods with 200 kilocalories or less per single serving package, and eliminate 100% fruit juice and beverages with added sugar. Six schools in each of seven cities (Houston, TX, San Antonio, TX, Irvine, CA, Portland, OR, Pittsburgh, PA, Philadelphia, PA, and Chapel Hill, NC) were randomized into intervention (n=21 schools) or control (n=21 schools) groups, with three intervention and three control schools per city. All items in vending machine slots were tallied twice in the fall of 2006 for baseline data and twice at the end of the study, in 2009. The percentage of total slots for each food/beverage category was calculated and compared between intervention and control schools at the end of study, using the Pearson chi-square test statistic. At baseline, 15 intervention and 15 control schools had beverage and/or snack vending machines, compared with 11 intervention and 11 control schools at the end of the study. At the end of study, all of the intervention schools with beverage vending machines, but only one out of the nine control schools, met the beverage goal. The snack goal was met by all of the intervention schools and only one of the four control schools with snack vending machines. The HEALTHY study's vending machine beverage and snack goals were successfully achieved in intervention schools, reducing access to less healthy food items outside the school meals program. Although the effect of these changes on student diet, energy balance and growth is unknown, these results suggest that healthier options for snacks can successfully be offered in school vending machines.
Markham, Francis; Doran, Bruce; Young, Martin
2016-08-01
An emerging body of research has documented an association between problem gambling and domestic violence in a range of study populations and locations. Yet little research has analysed this relationship at ecological scales. This study investigates the proposition that gambling accessibility and the incidence of domestic violence might be linked. The association between police-recorded domestic violence and electronic gaming machine (EGM) accessibility is described at the postcode level. Police-recorded family incidents per 10,000 and domestic-violence related physical assault offenses per 10,000 were used as outcome variables. EGM accessibility was measured as electronic gaming machines per 10,000 and gambling venues per 100,000. Bayesian spatio-temporal mixed-effects models were used to estimate the associations between gambling accessibility and domestic violence, using annual postcode-level data in Victoria, Australia between 2005 and 2014, adjusting for a range of covariates. Significant associations of policy-relevant magnitudes were found between all domestic violence and EGM accessibility variables. Postcodes with no electronic gaming machines were associated with 20% (95% credibility interval [C.I.]: 15%, 24%) fewer family incidents per 10,000 and 30% (95% C.I.: 24%, 35%) fewer domestic-violence assaults per 10,000, when compared with postcodes with 75 electronic gaming machines per 10,000. The causal relations underlying these associations are unclear. Quasi-experimental research is required to determine if reducing gambling accessibility is likely to reduce the incidence of domestic violence.
More complete gene silencing by fewer siRNAs: transparent optimized design and biophysical signature
Ladunga, Istvan
2007-01-01
Highly accurate knockdown functional analyses based on RNA interference (RNAi) require the most complete possible hydrolysis of the targeted mRNA while avoiding the degradation of untargeted genes (off-target effects). This in turn requires significant improvements to target selection, for two reasons. First, the average silencing activity of randomly selected siRNAs is as low as 62%. Second, applying more than five different siRNAs may lead to saturation of the RNA-induced silencing complex (RISC) and to the degradation of untargeted genes. Therefore, selecting a small number of highly active siRNAs is critical for maximizing knockdown and minimizing off-target effects. To satisfy these needs, a publicly available and transparent machine learning tool is presented that ranks all possible siRNAs for each targeted gene. Support vector machines (SVMs) with polynomial kernels and constrained optimization models select and utilize the most predictive combinations of 572 sequence, thermodynamic, accessibility and self-hairpin features over 2200 published siRNAs. This tool reaches an accuracy of 92.3% in cross-validation experiments. We fully present the underlying biophysical signature, which involves free energy, accessibility and dinucleotide characteristics. We show that while complete silencing is possible at certain structured target sites, accessibility information improves the prediction of the 90% active siRNA target sites. Fast siRNA activity predictions can be performed on our web server. PMID:17169992
Computer network defense system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urias, Vincent; Stout, William M. S.; Loverro, Caleb
A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines were in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protection of the group of the virtual machines from actions performed by the adversary.
Derailing healthy choices: an audit of vending machines at train stations in NSW.
Kelly, Bridget; Flood, Victoria M; Bicego, Cecilia; Yeatman, Heather
2012-04-01
Train stations provide opportunities for food purchases and many consumers are exposed to these venues daily, on their commute to and from work. This study aimed to describe the food environment that commuters are exposed to at train stations in NSW. One hundred train stations were randomly sampled from the Greater Sydney Metropolitan region, representing a range of demographic areas. A purpose-designed instrument was developed to collect information on the availability, promotion and cost of food and beverages in vending machines. Items were classified as high/low in energy according to NSW school canteen criteria. Of the 206 vending machines identified, 84% of slots were stocked with high-energy food and beverages. The most frequently available items were chips and extruded snacks (33%), sugar-sweetened soft drinks (18%), chocolate (12%) and confectionery (10%). High energy foods were consistently cheaper than lower-energy alternatives. Transport sites may cumulatively contribute to excess energy consumption as the items offered are energy dense. Interventions are required to improve train commuters' access to healthy food and beverages.
Method for forming precision clockplate with pivot pins
Wild, Ronald L [Albuquerque, NM
2010-06-01
Methods are disclosed for producing a precision clockplate with rotational bearing surfaces (e.g. pivot pins). The methods comprise providing an electrically conductive blank, conventionally machining oversize features comprising bearing surfaces into the blank, optionally machining a relief on non-bearing surfaces, providing wire accesses adjacent to the bearing surfaces, threading the wire of an electrical discharge machine through the accesses, and finishing the bearing surfaces by wire electrical discharge machining. The methods have been shown to produce bearing surfaces of comparable dimension and tolerances as those produced by micro-machining methods such as LIGA, at reduced cost and complexity.
Nower, Lia; Blaszczynski, Alex
2010-09-01
Studies attempting to identify the specific 'addictive' features of electronic gaming machines (EGMs) have yielded largely inconclusive results, suggesting that it is the interaction between a gambler's cognitions and the machine, rather than the machine itself, which fuels excessive play. Research has reported that machine players with gambling problems adopt a number of erroneous cognitive perceptions regarding the probability of winning and the nature of randomness. What is unknown, however, is whether motivations for gambling and attitudes toward pre-session monetary limit-setting vary across levels of gambling severity, and whether proposed precommitment strategies would be useful in minimizing excessive gambling expenditures. The current study explored these concepts in a sample of 127 adults, ages 18 to 81, attending one of four gambling venues in Queensland, Australia. The study found that problem gamblers were more likely than other gamblers to play machines to earn income or escape their problems rather than for fun and enjoyment. Similarly, they were less likely to endorse any type of monetary limit-setting prior to play. They were also reticent to adopt the use of a 'smart card' or other strategy to limit access to money during a session, though they indicated they lost track of money while gambling and were rarely aware of whether they were winning or losing during play. Implications for precommitment policies and further research are discussed.
NASA Technical Reports Server (NTRS)
Schwarzenberg, M.; Pippia, P.; Meloni, M. A.; Cossu, G.; Cogoli-Greuter, M.; Cogoli, A.
1998-01-01
The purpose of this paper is to present the results obtained in our laboratory with both instruments, the FFM [free fall machine] and the RPM [random positioning machine], to compare them with the data from earlier experiments with human lymphocytes conducted in the FRC [fast rotating clinostat] and in space. Furthermore, the suitability of the FFM and RPM for research in gravitational cell biology is discussed.
Photonics walking up a human hair
NASA Astrophysics Data System (ADS)
Zeng, Hao; Parmeggiani, Camilla; Martella, Daniele; Wasylczyk, Piotr; Burresi, Matteo; Wiersma, Diederik S.
2016-03-01
While animals have access to sugars as an energy source, this option is generally not available to artificial machines and robots. Energy delivery is thus the bottleneck for creating independent robots and machines, especially on micro- and nanometer length scales. We have found a way to produce polymeric nano-structures with local control over the molecular alignment, which allowed us to solve the above issue. By using a combination of polymers, part of which is optically sensitive, we can create complex functional structures with nanometer accuracy that are responsive to light. In particular, this allowed us to realize a structure that can move autonomously over surfaces (it can "walk") using the environmental light as its energy source. The robot is only 60 μm in total length, smaller than any known terrestrial walking species, and it is capable of random and directional walking and of rotating on different dry surfaces.
Communication overhead on the Intel Paragon, IBM SP2 and Meiko CS-2
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
Interprocessor communication overhead is a crucial measure of the power of parallel computing systems; its impact can severely limit the performance of parallel programs. This report presents measurements of communication overhead on three contemporary commercial multicomputer systems: the Intel Paragon, the IBM SP2 and the Meiko CS-2. In each case the time to communicate between processors is presented as a function of message length. The time for global synchronization and memory access is discussed. The performance of these machines in emulating hypercubes and executing random pairwise exchanges is also investigated. It is shown that the interprocessor communication time depends heavily on the specific communication pattern required. These observations contradict the commonly held belief that communication overhead on contemporary machines is independent of the placement of tasks on processors. The information presented in this report permits the evaluation of the efficiency of parallel algorithm implementations against standard baselines.
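Measurements like these are classically obtained with a ping-pong microbenchmark; a modern mpi4py sketch of the idea (the message sizes and repetition count are arbitrary choices for illustration):

```python
# Run with e.g.: mpirun -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 1000

for nbytes in (8, 1024, 1024 * 1024):
    buf = np.zeros(nbytes, dtype=np.uint8)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    if rank == 0:
        # Half the round-trip time approximates one-way latency plus
        # message size divided by bandwidth.
        print(f"{nbytes:>8} B: {(MPI.Wtime() - t0) / (2 * reps) * 1e6:.1f} us")
```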
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have led people to search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced: a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced better performance than the pure support vector machine and the support vector machine ensemble.
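The four model families map naturally onto scikit-learn building blocks; a sketch under our own naming (this illustrates hybridisation and ensembling in general, not the APMF-WHAE implementation itself):

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Stand-in for financial-ratio features with a failed/healthy label.
X, y = make_classification(n_samples=400, n_features=30, random_state=0)

pure_svm = make_pipeline(StandardScaler(), SVC())
# Hybrid: PCA keeps components explaining 95% of variance before the SVM.
hybrid_svm = make_pipeline(StandardScaler(), PCA(n_components=0.95), SVC())
# Ensemble of hybrids: random sampling of the training data, with the
# majority vote acting as the "group decision".
ensemble = BaggingClassifier(hybrid_svm, n_estimators=25, max_samples=0.8,
                             random_state=0)

for name, model in [("pure SVM", pure_svm), ("hybrid SVM", hybrid_svm),
                    ("ensemble of hybrid SVMs", ensemble)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```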
ERIC Educational Resources Information Center
Kocken, Paul L.; Eeuwijk, Jennifer; van Kesteren, Nicole M.C.; Dusseldorp, Elise; Buijs, Goof; Bassa-Dafesh, Zeina; Snel, Jeltje
2012-01-01
Background: Vending machines account for food sales and revenue in schools. We examined 3 strategies for promoting the sale of lower-calorie food products from vending machines in high schools in the Netherlands. Methods: A school-based randomized controlled trial was conducted in 13 experimental schools and 15 control schools. Three strategies…
The machine/job features mechanism
NASA Astrophysics Data System (ADS)
Alef, M.; Cass, T.; Keijser, J. J.; McNab, A.; Roiser, S.; Schwickerath, U.; Sfiligoi, I.
2017-10-01
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
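A consumer of the mechanism only needs the two environment variables and one file (or URL) per key; a hedged Python sketch follows (the key names "hs06" and "wall_limit_secs" are from our reading of the specification and should be checked against the HSF technical note):

```python
import os

def read_feature(kind, key):
    # kind is "MACHINEFEATURES" or "JOBFEATURES"; the variable points either
    # to a local directory on the worker node or to a base URL, with one
    # plain-text file per feature key in both variants.
    base = os.environ.get(kind)
    if base is None:
        return None
    if base.startswith(("http://", "https://")):
        import urllib.request                 # remote variant over HTTP(S)
        with urllib.request.urlopen(f"{base}/{key}") as r:
            return r.read().decode().strip()
    with open(os.path.join(base, key)) as f:  # local filesystem variant
        return f.read().strip()

print("HS06 power:", read_feature("MACHINEFEATURES", "hs06"))
print("Wall limit:", read_feature("JOBFEATURES", "wall_limit_secs"))
```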
Probability machines: consistent probability estimation using nonparametric learning machines.
Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A
2012-01-01
Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
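The paper's central point, that consistent nonparametric regression machines applied to 0/1 responses estimate conditional probabilities, is easy to check on simulated data where the true probability is known (scikit-learn stands in here for the R packages the authors cite):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 2))
p_true = 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))   # known true probability
y = (rng.random(5000) < p_true).astype(float)     # observed 0/1 outcomes

# Regressing the 0/1 labels (rather than classifying) makes each machine a
# "probability machine": its prediction estimates p(x) directly.
Xtr, ytr, Xte, p_te = X[:4000], y[:4000], X[4000:], p_true[4000:]
for model in (RandomForestRegressor(n_estimators=300, random_state=0),
              KNeighborsRegressor(n_neighbors=100)):
    est = model.fit(Xtr, ytr).predict(Xte)
    print(type(model).__name__, "MAE vs true p:",
          round(float(np.abs(est - p_te).mean()), 3))
```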
Balachandran, Anoop T; Gandia, Kristine; Jacobs, Kevin A; Streiner, David L; Eltoukhy, Moataz; Signorile, Joseph F
2017-11-01
Power training has been shown to be more effective than conventional resistance training for improving physical function in older adults; however, most trials have used pneumatic machines during training. Considering that the general public typically has access to plate-loaded machines, the effectiveness and safety of power training using plate-loaded machines compared to pneumatic machines is an important consideration. The purpose of this investigation was to compare the effects of high-velocity training using pneumatic machines (Pn) versus standard plate-loaded machines (PL). Independently living older adults, 60 years or older, were randomized into two groups: pneumatic machine (Pn, n=19) and plate-loaded machine (PL, n=17). After 12 weeks of high-velocity training twice per week, groups were analyzed using an intention-to-treat approach. Primary outcomes were lower body power measured using a linear transducer and upper body power using a medicine ball throw. Secondary outcomes included lower and upper body muscle strength, the Physical Performance Battery (PPB), the gallon jug test, the timed up-and-go test, and self-reported function using the Patient Reported Outcomes Measurement Information System (PROMIS) and an online video questionnaire. Outcome assessors were blinded to group membership. Lower body power significantly improved in both groups (Pn: 19%, PL: 31%), with no significant difference between the groups (Cohen's d=0.4, 95% CI (-1.1, 0.3)). Upper body power significantly improved only in the PL group, but showed no significant difference between the groups (Pn: 3%, PL: 6%). For balance, there was a significant difference between the groups favoring the Pn group (d=0.7, 95% CI (0.1, 1.4)); however, there were no statistically significant differences between groups for PPB, gallon jug transfer, muscle strength, timed up-and-go or self-reported function. No serious adverse events were reported in either of the groups. Pneumatic and plate-loaded machines were effective in improving lower body power and physical function in older adults. The results suggest that power training can be safely and effectively performed by older adults using either pneumatic or plate-loaded machines.
Syringe vending machines for injection drug users: an experiment in Marseille, France.
Obadia, Y; Feroni, I; Perrin, V; Vlahov, D; Moatti, J P
1999-01-01
OBJECTIVES: This study evaluated the usefulness of vending machines in providing injection drug users with access to sterile syringes in Marseille, France. METHODS: Self-administered questionnaires were offered to 485 injection drug users obtaining syringes from 32 pharmacies, 4 needle exchange programs, and 3 vending machines. RESULTS: Of the 343 respondents (response rate = 70.7%), 21.3% used the vending machines as their primary source of syringes. Primary users of vending machines were more likely than primary users of other sources to be younger than 30 years, to report no history of drug maintenance treatment, and to report no sharing of needles or injection paraphernalia. CONCLUSIONS: Vending machines may be an appropriate strategy for providing access to syringes for younger injection drug users, who have typically avoided needle exchange programs and pharmacies. PMID:10589315
Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-08-04
Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the extraction of a potentially large number of additional hand-crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, in which a pretrained convolutional neural network (CNN) developed for machine vision purposes is repurposed for an exercise classification task. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to meet infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. ©Jose Juan Dominguez Veiga, Martin O'Reilly, Darragh Whelan, Brian Caulfield, Tomas E Ward. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.08.2017.
O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-01-01
Background Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the extraction of a potentially large number of additional hand-crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. Objective The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, in which a pretrained convolutional neural network (CNN) developed for machine vision purposes is repurposed for an exercise classification task. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. Methods We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to meet infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. Results With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. Conclusions The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. PMID:28778851
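A minimal sketch of the retraining pipeline described above, assuming a hypothetical images/<label>/<sample>.png folder layout and using the tf.keras API in place of the original Inception retraining script; figure sizes, hyperparameters, and the five-class head are illustrative rather than the authors' settings:

```python
# Sketch: turn IMU signal windows into plot images, then retrain the top of a
# pretrained Inception network for exercise classification (assumed layout).
import matplotlib
matplotlib.use("Agg")  # headless plotting
import matplotlib.pyplot as plt
import tensorflow as tf

def signal_to_image(signal, path):
    """Save a bare time-series plot of one IMU window as a PNG."""
    fig, ax = plt.subplots(figsize=(3, 3), dpi=100)
    ax.plot(signal)   # one line per sensor axis
    ax.axis("off")    # the classifier needs the waveform, not the axes
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)

# Hypothetical folder layout: images/<exercise_label>/<sample>.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "images", image_size=(299, 299), batch_size=32)

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # transfer learning: keep the ImageNet features frozen
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception expects [-1, 1]
    base,
    tf.keras.layers.Dense(5, activation="softmax"),     # 5 exercise classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```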
Accessible engineering drawings for visually impaired machine operators.
Ramteke, Deepak; Kansal, Gayatri; Madhab, Benu
2014-01-01
An engineering drawing provides manufacturing information to a machine operator. An operator plans and executes machining operations based on this information. A visually impaired (VI) operator does not have direct access to the drawings. Drawing information is provided to them verbally or by using sample parts. Both methods have limitations that affect the quality of output. Use of engineering drawings is standard practice in every industry; this hampers employment of VI operators. Accessible engineering drawings are required to increase both the independence and the employability of VI operators. Today, Computer Aided Design (CAD) software is used for making engineering drawings, which are saved in CAD files. Required information is extracted from the CAD files and converted into Braille or voice. The authors of this article propose a method to make engineering drawing information directly accessible to a VI operator.
Low Cost Comprehensive Microcomputer-Based Medical History Database Acquisition
Buchan, Robert R. C.
1980-01-01
A carefully detailed, comprehensive medical history database is the fundamental essence of patient-physician interaction. Computer-generated medical history acquisition has repeatedly been shown to be highly acceptable to both patient and physician while consistently providing a superior product. Cost justification of machine-derived problem and history databases, however, has in the past been marginal at best. Routine use of the technology has therefore been limited to large clinics, university hospitals and federal installations where feasible volume applications are supported by endowment, research funds or taxes. This paper summarizes the use of a unique low-cost device which marries advanced microprocessor technology with random access, variable-frame film projection techniques to acquire a detailed comprehensive medical history database. Preliminary data are presented which compare patient, physician, and machine-generated histories for content, discovery, compliance and acceptability. Results compare favorably with the findings in similar studies by a variety of authors.
Taber, Daniel R; Chriqui, Jamie F; Vuillaume, Renee; Chaloupka, Frank J
2014-01-01
Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Data on school vending machine access and student diet were obtained as part of the National Youth Physical Activity and Nutrition Study (NYPANS) and linked to state-level data on soda taxes, restaurant taxes, and state laws governing the sale of soda in schools. Regression models were used to: 1) estimate associations between vending machine access and soda consumption, fast food consumption, and lunch source, and 2) determine if associations were modified by state soda taxes, restaurant taxes, laws banning in-school soda sales, or student characteristics (race/ethnicity, sex, home food access, weight loss behaviors). Contrary to the hypothesis, students tended to consume 0.53 fewer servings of soda/week (95% CI: -1.17, 0.11) and consume fast food on 0.24 fewer days/week (95% CI: -0.44, -0.05) if they had in-school access to vending machines. They were also less likely to consume soda daily (23.9% vs. 27.9%, average difference = -4.02, 95% CI: -7.28, -0.76). However, these inverse associations were observed primarily among states with lower soda and restaurant tax rates (relative to general food tax rates) and states that did not ban in-school soda sales. Associations did not vary by any student characteristics except for weight loss behaviors. Isolated changes to the school food environment may have unintended consequences unless policymakers incorporate other initiatives designed to discourage overall soda consumption.
Taber, Daniel R.; Chriqui, Jamie F.; Vuillaume, Renee; Chaloupka, Frank J.
2014-01-01
Background Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Methods Data on school vending machine access and student diet were obtained as part of the National Youth Physical Activity and Nutrition Study (NYPANS) and linked to state-level data on soda taxes, restaurant taxes, and state laws governing the sale of soda in schools. Regression models were used to: 1) estimate associations between vending machine access and soda consumption, fast food consumption, and lunch source, and 2) determine if associations were modified by state soda taxes, restaurant taxes, laws banning in-school soda sales, or student characteristics (race/ethnicity, sex, home food access, weight loss behaviors). Results Contrary to the hypothesis, students tended to consume 0.53 fewer servings of soda/week (95% CI: -1.17, 0.11) and consume fast food on 0.24 fewer days/week (95% CI: -0.44, -0.05) if they had in-school access to vending machines. They were also less likely to consume soda daily (23.9% vs. 27.9%, average difference = -4.02, 95% CI: -7.28, -0.76). However, these inverse associations were observed primarily among states with lower soda and restaurant tax rates (relative to general food tax rates) and states that did not ban in-school soda sales. Associations did not vary by any student characteristics except for weight loss behaviors. Conclusion Isolated changes to the school food environment may have unintended consequences unless policymakers incorporate other initiatives designed to discourage overall soda consumption. PMID:25083906
Schmidt, Johannes; Glaser, Bruno
2016-01-01
Tropical forests are significant carbon sinks and their soils’ carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in the case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying a huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models’ predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Selecting predictors on the basis of their individual performance was outperformed by the two procedures that accounted for predictor interaction. PMID:27128736
Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno
2016-01-01
Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in the case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying a huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models' predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Selecting predictors on the basis of their individual performance was outperformed by the two procedures that accounted for predictor interaction.
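The evaluation protocol described in both records, five repetitions of tenfold cross-validation over several learners, can be sketched with scikit-learn; the simulated matrix below merely stands in for the 236 terrain and spectral predictors:

```python
# Sketch of the model-comparison protocol: 5 x 10-fold CV over several
# regressors. Data are simulated; the study used real SOC-stock predictors.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=150, n_features=236, noise=10.0, random_state=0)
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "boosted trees": GradientBoostingRegressor(random_state=0),
    "svm": SVR(),
    "neural net": MLPRegressor(max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.2f} +/- {scores.std():.2f}")
```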
Analysis of Machine Learning Techniques for Heart Failure Readmissions.
Mortazavi, Bobak J; Downing, Nicholas S; Bucholz, Emily M; Dharmarajan, Kumar; Manhapra, Ajay; Li, Shu-Xia; Negahban, Sahand N; Krumholz, Harlan M
2016-11-01
The current ability to predict readmissions in patients with heart failure is modest at best. It is unclear whether machine learning techniques that address higher dimensional, nonlinear relationships among variables would enhance prediction. We sought to compare the effectiveness of several machine learning algorithms for predicting readmissions. Using data from the Telemonitoring to Improve Heart Failure Outcomes trial, we compared the effectiveness of random forests, boosting, random forests combined hierarchically with support vector machines or logistic regression (LR), and Poisson regression against traditional LR to predict 30- and 180-day all-cause readmissions and readmissions because of heart failure. We randomly selected 50% of patients for a derivation set; the remaining patients formed a validation set, evaluated using 100 bootstrapped iterations. We compared C statistics for discrimination and distributions of observed outcomes in risk deciles for predictive range. In 30-day all-cause readmission prediction, the best performing machine learning model, random forests, provided a 17.8% improvement over LR (mean C statistics, 0.628 and 0.533, respectively). For readmissions because of heart failure, boosting improved the C statistic by 24.9% over LR (mean C statistics, 0.678 and 0.543, respectively). For 30-day all-cause readmission, the observed readmission rates in the lowest and highest deciles of predicted risk with random forests (7.8% and 26.2%, respectively) showed a much wider separation than LR (14.2% and 16.4%, respectively). Machine learning methods improved the prediction of readmission after hospitalization for heart failure compared with LR and provided the greatest predictive range in observed readmission rates. © 2016 American Heart Association, Inc.
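A hedged sketch of the validation design, a 50/50 derivation/validation split with 100 bootstrap iterations of the C statistic, on simulated data rather than the trial's records:

```python
# Sketch: compare C statistics (ROC AUC) of random forest vs. logistic
# regression on a held-out set, bootstrapped 100 times. Data are simulated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

models = {"LR": LogisticRegression(max_iter=1000),
          "RF": RandomForestClassifier(random_state=0)}
rng = np.random.default_rng(0)
for name, model in models.items():
    model.fit(X_dev, y_dev)
    probs = model.predict_proba(X_val)[:, 1]
    aucs = []
    for _ in range(100):                      # bootstrap the validation set
        idx = rng.integers(0, len(y_val), len(y_val))
        if len(np.unique(y_val[idx])) < 2:    # need both classes for an AUC
            continue
        aucs.append(roc_auc_score(y_val[idx], probs[idx]))
    print(f"{name}: mean C statistic = {np.mean(aucs):.3f}")
```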
A novel asynchronous access method with binary interfaces
2008-01-01
Background Traditionally, synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. Then, a recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task requiring a series of target commands to be tracked by a model user. Results When compared to random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion This novel access method offers a variety of advantages over traditional synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797
Boxwala, Aziz A; Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs.
Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
Objective To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. Methods From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. Results The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. Limitations The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. Conclusion The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs. PMID:21672912
ERIC Educational Resources Information Center
Bernard, H. Russell; Jones, Ray
1984-01-01
Focuses on problems in making machine-readable data files (MRDFs) accessible and in using them: quality of data in MRDFs themselves (social scientists' concern) and accessibility--availability of bibliographic control, quality of documentation, level of user skills (librarians' concern). Skills needed by social scientists and librarians are…
Table-driven software architecture for a stitching system
NASA Technical Reports Server (NTRS)
Thrash, Patrick J. (Inventor); Miller, Jeffrey L. (Inventor); Pallas, Ken (Inventor); Trank, Robert C. (Inventor); Fox, Rhoda (Inventor); Korte, Mike (Inventor); Codos, Richard (Inventor); Korolev, Alexandre (Inventor); Collan, William (Inventor)
2001-01-01
Native code for a CNC stitching machine is generated by generating a geometry model of a preform; generating tool paths from the geometry model, the tool paths including stitching instructions for making stitches; and generating additional instructions indicating thickness values. The thickness values are obtained from a lookup table. When the stitching machine runs the native code, it accesses a lookup table to determine a thread tension value corresponding to the thickness value. The stitching machine accesses another lookup table to determine a thread path geometry value corresponding to the thickness value.
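The table-driven idea reduces to a keyed lookup at run time. A minimal sketch with invented thickness-to-tension values, using nearest-lower-entry semantics as a stand-in for whatever rule the actual machine applies:

```python
# Illustrative sketch (hypothetical values): the native code carries only a
# thickness; the machine maps it to a thread tension via a lookup table.
import bisect

# thickness (mm) -> thread tension (arbitrary units); values are invented
TENSION_TABLE = [(1.0, 30), (2.5, 45), (4.0, 60), (6.0, 80)]

def tension_for(thickness):
    keys = [t for t, _ in TENSION_TABLE]
    i = bisect.bisect_right(keys, thickness) - 1  # nearest entry at or below
    return TENSION_TABLE[max(i, 0)][1]            # clamp below the first entry

print(tension_for(3.2))  # -> 45, the entry for 2.5 mm
```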
Islam, Md Mofizul; Conigrave, Katherine M
2007-01-01
Reaching hard-to-reach and high-risk injecting drug users (IDUs) is one of the most important challenges for contemporary needle syringe programs (NSPs). The aim of this review is to examine, based upon the available international experience, the effectiveness of syringe vending machines and mobile van/bus based NSPs in making services more accessible to these hard-to-reach and high-risk groups of IDUs. A literature search revealed 40 papers/reports, of which 18 were on dispensing machines (including vending and exchange machines) and 22 on mobile vans. The findings demonstrate that syringe dispensing machines and mobile vans are promising modalities of NSPs, which can make services more accessible to the target group and in particular to the harder-to-reach and higher-risk groups of IDUs. Their anonymous and confidential approaches make services attractive, accessible and acceptable to these groups. These two outlets were found to be complementary to each other and to other modes of NSPs. Services through dispensing machines and mobile vans in strategically important sites are crucial elements in continuing efforts in reducing the spread of HIV and other blood borne viruses among IDUs. PMID:17958894
Campus-based snack food vending consumption.
Caruso, Michelle L; Klein, Elizabeth G; Kaye, Gail
2014-01-01
To evaluate the purchases of university vending machine clientele and to understand what consumers purchase, purchase motivations, and purchase frequency after implementation of a vending policy designed to promote access to healthier snack options. Cross-sectional data collection from consumers at 8 campus vending machines purposefully selected from a list of highest-grossing machines. Vending machines were stocked with 28.5% green (choose most often), 43% yellow (occasionally), and 28.5% red (least often) food items. Consumers were predominantly students (86%) and persons aged 18-24 years (71%). Red vending choices were overwhelmingly selected over healthier vending options (59%). Vended snack food selections were most influenced by hunger (42%) and convenience (41%). Most consumers (51%) frequented vending machines at least 1 time per week. Despite decreased access to less healthful red snack food choices, consumers chose these snacks more frequently than healthier options in campus vending machines. Copyright © 2014 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Hsieh, Chung-Ho; Lu, Ruey-Hwa; Lee, Nai-Hsin; Chiu, Wen-Ta; Hsu, Min-Huei; Li, Yu-Chuan Jack
2011-01-01
Diagnosing acute appendicitis clinically is still difficult. We developed random forests, support vector machines, and artificial neural network models to diagnose acute appendicitis. Between January 2006 and December 2008, patients who had a consultation session with surgeons for suspected acute appendicitis were enrolled. Seventy-five percent of the data set was used to construct models including random forest, support vector machines, artificial neural networks, and logistic regression. Twenty-five percent of the data set was withheld to evaluate model performance. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance, which was compared with that of the Alvarado score. Data from a total of 180 patients were collected, 135 used for training and 45 for testing. The mean age of patients was 39.4 years (range, 16-85). Final diagnosis revealed 115 patients with and 65 without appendicitis. The AUC of random forest, support vector machines, artificial neural networks, logistic regression, and Alvarado was 0.98, 0.96, 0.91, 0.87, and 0.77, respectively. The sensitivity, specificity, positive, and negative predictive values of random forest were 94%, 100%, 100%, and 87%, respectively. Random forest performed better than artificial neural networks, logistic regression, and Alvarado. We demonstrated that random forest can predict acute appendicitis with good accuracy and, deployed appropriately, can be an effective tool in clinical decision making. Copyright © 2011 Mosby, Inc. All rights reserved.
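The reported sensitivity, specificity, and predictive values all follow from a 2x2 confusion matrix; a toy illustration with invented labels:

```python
# Sketch: derive the operating characteristics of a diagnostic classifier
# from its confusion matrix (labels below are invented, 1 = appendicitis).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])   # final diagnoses
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])   # classifier output
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity", tp / (tp + fn))   # true positive rate
print("specificity", tn / (tn + fp))   # true negative rate
print("PPV", tp / (tp + fp))           # positive predictive value
print("NPV", tn / (tn + fn))           # negative predictive value
```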
Royo-Bordonada, Miguel A; Martínez-Huedo, María A
2014-01-01
To evaluate compliance with the self-regulation agreement of the food and drink vending machine sector in primary schools in Madrid, Spain. Cross-sectional study of the prevalence of vending machines in 558 primary schools in 2008. Using the directory of all registered primary schools in Madrid, we identified the presence of machines by telephone interviews and evaluated compliance with the agreement by visiting the schools and assessing accessibility, type of publicity, the products offered and knowledge of the agreement. The prevalence of schools with vending machines was 5.8%. None of the schools reported knowledge of the agreement or of its nutritional guidelines, and most machines were accessible to primary school pupils (79.3%) and packed with high-calorie, low-nutrient-dense foods (58.6%). Compliance with the self-regulation agreement of the vending machines sector was low. Stricter regulation should receive priority in the battle against the obesity epidemic. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.
Calculating with light using a chip-scale all-optical abacus.
Feldmann, J; Stegmaier, M; Gruhler, N; Ríos, C; Bhaskaran, H; Wright, C D; Pernice, W H P
2017-11-02
Machines that simultaneously process and store multistate data at one and the same location can provide a new class of fast, powerful and efficient general-purpose computers. We demonstrate the central element of an all-optical calculator, a photonic abacus, which provides multistate compute-and-store operation by integrating functional phase-change materials with nanophotonic chips. With picosecond optical pulses we perform the fundamental arithmetic operations of addition, subtraction, multiplication, and division, including a carryover into multiple cells. This basic processing unit is embedded into a scalable phase-change photonic network and addressed optically through a two-pulse random access scheme. Our framework provides first steps towards light-based non-von Neumann arithmetic.
Radioactive hot cell access hole decontamination machine
Simpson, William E.
1982-01-01
Radioactive hot cell access hole decontamination machine. A mobile housing has an opening large enough to encircle the access hole and has a shielding door, with a door opening and closing mechanism, for uncovering and covering the opening. The housing contains a shaft which has an apparatus for rotating the shaft and a device for independently translating the shaft from the housing through the opening and access hole into the hot cell chamber. A properly sized cylindrical pig containing wire brushes and cloth or other disks, with an arrangement for releasably attaching it to the end of the shaft, circumferentially cleans the access hole wall of radioactive contamination and thereafter detaches from the shaft to fall into the hot cell chamber.
Torkzaban, Bahareh; Kayvanjoo, Amir Hossein; Ardalan, Arman; Mousavi, Soraya; Mariotti, Roberto; Baldoni, Luciana; Ebrahimie, Esmaeil; Ebrahimi, Mansour; Hosseini-Mazinani, Mehdi
2015-01-01
Finding efficient analytical techniques is rapidly becoming a bottleneck for the effective use of large biological datasets. Machine learning offers a novel and powerful tool to advance classification and modeling solutions in molecular biology. However, these methods have been less frequently used with empirical population genetics data. In this study, we developed a new combined approach, applying machine learning algorithms to microsatellite marker data from our previous studies of olive populations. Herein, 267 olive accessions of various origins, including 21 reference cultivars, 132 local ecotypes, and 37 wild olive specimens from the Iranian plateau, together with 77 of the most represented Mediterranean varieties, were investigated using a finely selected panel of 11 microsatellite markers. We organized the data into two experiments, '4-targeted' and '16-targeted'. A strategy of assaying different machine-based analyses (i.e., data cleaning, feature selection, and machine learning classification) was devised to identify the most informative loci and the most diagnostic alleles to represent the population and the geography of each olive accession. These analyses revealed the microsatellite markers with the highest differentiating capacity and demonstrated the efficiency of our method for clustering olive accessions by their regions of origin. A highlight of this study was the discovery of the combination of markers that best differentiates populations via machine learning models, an approach that can be exploited to distinguish among other biological populations.
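One way to rank loci by diagnostic value, sketched here with mutual information as the selection criterion on simulated genotype codes; the study's actual feature-selection pipeline may differ:

```python
# Sketch: rank microsatellite loci (toy integer-encoded genotypes) by mutual
# information with a region-of-origin label. All data below are simulated.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 6, size=(267, 11))   # 267 accessions, 11 loci
region = rng.integers(0, 4, size=267)            # '4-targeted' labels
scores = mutual_info_classif(genotypes, region,
                             discrete_features=True, random_state=0)
for locus, s in sorted(enumerate(scores), key=lambda t: -t[1])[:3]:
    print(f"locus {locus}: MI = {s:.3f}")        # top three candidate markers
```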
Health Promotion and Healthier Products Increase Vending Purchases: A Randomized Factorial Trial.
Hua, Sophia V; Kimmel, Lisa; Van Emmenes, Michael; Taherian, Rafi; Remer, Geraldine; Millman, Adam; Ickovics, Jeannette R
2017-07-01
The current food environment has a high prevalence of nutrient-sparse foods and beverages, most starkly seen in vending machine offerings. There are currently few studies that explore different interventions that might lead to healthier vending machine purchases. To examine how healthier product availability, price reductions, and/or promotional signs affect sales and revenue of snack and beverage vending machines. A 2×2×2 factorial randomized controlled trial was conducted. Students, staff, and employees on a university campus. All co-located snack and beverage vending machines (n=56, 28 snack and 28 beverage) were randomized into one of eight conditions: availability of healthier products and/or 25% price reduction for healthier items and/or promotional signs on machines. Aggregate sales and revenue data for the 5-month study period (February to June 2015) were compared with data from the same months 1 year prior. Analyses were conducted July 2015. The change in units sold and revenue between February through June 2014 and 2015. Linear regression models (main effects and interaction effects) and t test analyses were performed. The interaction between healthier product guidelines and promotional signs in snack vending machines documented increased revenue (P<0.05). Beverage machines randomized to meet healthier product guidelines documented increased units sold (P<0.05) with no revenue change. Price reductions alone had no effect, nor were there any effects for the three-way interaction of the factors. Examining top-selling products for all vending machines combined, pre- to postintervention, we found an overall shift to healthier purchasing. When healthier vending snacks are available, promotional signs are also important to ensure consumers purchase those items in greater amounts. Mitigating potential loss in profits is essential for sustainability of a healthier food environment. Copyright © 2017 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
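A sketch of how such a 2x2x2 factorial analysis can be run, with simulated machine-level data standing in for the study's sales records; variable names and effect sizes are invented:

```python
# Sketch: OLS with full interaction terms for a 2x2x2 factorial design,
# estimating main effects and interactions on change in revenue (simulated).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 56                                   # machines randomized to 8 conditions
df = pd.DataFrame({
    "healthy": rng.integers(0, 2, n),    # healthier product availability
    "price": rng.integers(0, 2, n),      # 25% price reduction
    "signs": rng.integers(0, 2, n),      # promotional signs
})
df["d_revenue"] = 5 * df.healthy * df.signs + rng.normal(0, 10, n)
fit = smf.ols("d_revenue ~ healthy * price * signs", data=df).fit()
print(fit.summary().tables[1])           # coefficient table with interactions
```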
Optical alignment of electrodes on electrical discharge machines
NASA Technical Reports Server (NTRS)
Boissevain, A. G.; Nelson, B. W.
1972-01-01
Shadowgraph system projects magnified image on screen so that alignment of small electrodes mounted on electrical discharge machines can be corrected and verified. Technique may be adapted to other machine tool equipment where physical contact cannot be made during inspection and access to tool limits conventional runout checking procedures.
Estelles-Lopez, Lucia; Ropodi, Athina; Pavlidis, Dimitris; Fotopoulou, Jenny; Gkousari, Christina; Peyrodie, Audrey; Panagou, Efstathios; Nychas, George-John; Mohareb, Fady
2017-09-01
Over the past decade, analytical approaches based on vibrational spectroscopy, hyperspectral/multispectral imaging and biomimetic sensors have gained popularity as rapid and efficient methods for assessing food quality, safety and authentication, as a sensible alternative to expensive and time-consuming conventional microbiological techniques. Due to the multi-dimensional nature of the data generated by such analyses, the output needs to be coupled with a suitable statistical approach or machine-learning algorithm before the results can be interpreted. Choosing the optimum pattern recognition or machine learning approach for a given analytical platform is often challenging and involves a comparative analysis of various algorithms in order to achieve the best possible prediction accuracy. In this work, "MeatReg", a web-based application, is presented that automates the procedure of identifying the best machine learning method for comparing data from several analytical techniques, to predict the counts of the microorganisms responsible for meat spoilage regardless of the packaging system applied. In particular, up to 7 regression methods were applied: ordinary least squares regression, stepwise linear regression, partial least squares regression, principal component regression, support vector regression, random forest and k-nearest neighbours. "MeatReg" was tested with minced beef samples stored under aerobic and modified atmosphere packaging and analysed with an electronic nose, HPLC, FT-IR, GC-MS and a multispectral imaging instrument. Populations of the total viable count, lactic acid bacteria, pseudomonads, Enterobacteriaceae and B. thermosphacta were predicted. As a result, recommendations were obtained as to which analytical platforms are suitable for predicting each type of bacteria and which machine learning methods to use in each case. The developed system is accessible via the link: www.sorfml.com. Copyright © 2017 Elsevier Ltd. All rights reserved.
School vending machine purchasing behavior: results from the 2005 YouthStyles survey.
Thompson, Olivia M; Yaroch, Amy L; Moser, Richard P; Finney Rutten, Lila J; Agurs-Collins, Tanya
2010-05-01
Competitive foods are often available in school vending machines. Providing youth with access to school vending machines, and thus competitive foods, is of concern, considering the continued high prevalence of childhood obesity: competitive foods tend to be energy dense and nutrient poor and can contribute to increased energy intake in children and adolescents. To evaluate the relationship between school vending machine purchasing behavior and school vending machine access and individual-level dietary characteristics, we used population-level YouthStyles 2005 survey data to compare nutrition-related policy and behavioral characteristics by the number of weekly vending machine purchases made by public school children and adolescents (N = 869). Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) were computed using age- and race/ethnicity-adjusted logistic regression models that were weighted on age and sex of child, annual household income, head of household age, and race/ethnicity of the adult in study. Data were collected in 2005 and analyzed in 2008. Compared to participants who did not purchase from a vending machine, participants who purchased ≥3 days/week were more likely to (1) have unrestricted access to a school vending machine (OR = 1.71; 95% CI = 1.13-2.59); (2) consume regular soda and chocolate candy ≥1 time/day (OR = 3.21; 95% CI = 1.87-5.51 and OR = 2.71; 95% CI = 1.34-5.46, respectively); and (3) purchase pizza or fried foods from a school cafeteria ≥1 day/week (OR = 5.05; 95% CI = 3.10-8.22). Future studies are needed to establish the contribution that the school-nutrition environment makes on overall youth dietary intake behavior, paying special attention to health disparities between whites and nonwhites.
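Odds ratios of this kind are the exponentiated coefficients of a logistic regression model. A sketch on simulated survey-style data (variable names invented):

```python
# Sketch: odds ratios and 95% CIs from an adjusted logistic regression.
# The data frame below is simulated, not the YouthStyles survey.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "frequent_purchase": rng.integers(0, 2, 500),   # outcome: >=3 days/week
    "unrestricted_access": rng.integers(0, 2, 500),
    "age": rng.integers(9, 19, 500),
})
fit = smf.logit("frequent_purchase ~ unrestricted_access + age",
                data=df).fit(disp=0)
print(np.exp(fit.params))       # odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```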
The Frictionless Data Package: Data Containerization for Automated Scientific Workflows
NASA Astrophysics Data System (ADS)
Shepherd, A.; Fils, D.; Kinkade, D.; Saito, M. A.
2017-12-01
As cross-disciplinary geoscience research increasingly relies on machines to discover and access data, one of the critical questions facing data repositories is how data and supporting materials should be packaged for consumption. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation. In attempts to shorten the time to science and access data resources across many disciplines, the expectation that machines will mediate the process of discovery and access is challenging data repository infrastructure. The challenge is to deliver data and information in ways that enable machines to make better decisions, by enabling them to understand the data and metadata of many data types. Additionally, once machines have recommended a data resource as relevant to an investigator's needs, the data resource should be easy to integrate into that investigator's toolkits for analysis and visualization. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) supports NSF-funded OCE and PLR investigators with their projects' data management needs. These needs involve a number of varying data types, some of which require multiple files with differing formats. Presently, BCO-DMO has described these data types and the important relationships between each type's data files through human-readable documentation on web pages. For machines directly accessing data files from BCO-DMO, this documentation could be overlooked and lead to misinterpreting the data. Instead, BCO-DMO is exploring the idea of data containerization, or packaging data and related information for easier transport, interpretation, and use. In researching the landscape of data containerization, the Frictionless Data Package (http://frictionlessdata.io/) provides a number of valuable advantages over similar solutions. This presentation will focus on these advantages and on how the Frictionless Data Package addresses a number of real-world use cases in data discovery, access, analysis and visualization.
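For concreteness, a minimal datapackage.json descriptor of the kind the Frictionless Data Package specification defines, with invented resource names and fields; it lists each file and its schema so a machine can interpret the data without reading the project's web pages:

```python
# Sketch: write a minimal Frictionless Data Package descriptor. The dataset
# name, file path, and fields are hypothetical examples.
import json

descriptor = {
    "name": "example-cruise-dataset",
    "resources": [
        {
            "name": "ctd-profiles",
            "path": "data/ctd.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "depth_m", "type": "number"},
                    {"name": "temperature_c", "type": "number"},
                ]
            },
        }
    ],
}
with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)  # ships alongside the data files
```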
2017-01-16
ARTICLE Received 24 Sep 2016 | Accepted 29 Nov 2016 | Published 16 Jan 2017 Prediction and real-time compensation of qubit decoherence via machine...information to suppress stochastic, semiclassical decoherence, even when access to measurements is limited. First, we implement a time-division...quantum information experiments. Second, we employ predictive feedback during sequential but time-delayed measurements to reduce the Dick effect as
Code of Federal Regulations, 2012 CFR
2012-01-01
... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...
Code of Federal Regulations, 2014 CFR
2014-01-01
... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...
Code of Federal Regulations, 2011 CFR
2011-01-01
... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...
Code of Federal Regulations, 2013 CFR
2013-01-01
... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...
Do warning signs on electronic gaming machines influence irrational cognitions?
Monaghan, Sally; Blaszczynski, Alex; Nower, Lia
2009-08-01
Electronic gaming machines are popular among problem gamblers; in response, governments have introduced "responsible gaming" legislation incorporating the mandatory display of warning signs on or near electronic gaming machines. These signs are designed to correct irrational and erroneous beliefs through the provision of accurate information on probabilities of winning and the concept of randomness. There is minimal empirical data evaluating the effectiveness of such signs. In this study, 93 undergraduate students were randomly allocated to standard and informative messages displayed on an electronic gaming machine during play in a laboratory setting. Results revealed that a majority of participants incorrectly estimated gambling odds and reported irrational gambling-related cognitions prior to play. In addition, there were no significant between-group differences, and few participants recalled the content of messages or modified their gambling-related cognitions. Signs placed on electronic gaming machines may not modify irrational beliefs or alter gambling behaviour.
School Vending Machine Purchasing Behavior: Results from the 2005 YouthStyles Survey
ERIC Educational Resources Information Center
Thompson, Olivia M.; Yaroch, Amy L.; Moser, Richard P.; Rutten, Lila J. Finney; Agurs-Collins, Tanya
2010-01-01
Background: Competitive foods are often available in school vending machines. Providing youth with access to school vending machines, and thus competitive foods, is of concern, considering the continued high prevalence of childhood obesity: competitive foods tend to be energy dense and nutrient poor and can contribute to increased energy intake in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrubiak, Rostislav; Sinogeikin, Stanislav; Rod, Eric
We have designed and constructed a new system for micro-machining parts and sample assemblies used for diamond anvil cells and general user operations at the High Pressure Collaborative Access Team, sector 16 of the Advanced Photon Source. The new micro-machining system uses a pulsed laser of 400 ps pulse duration, ablating various materials without thermal melting, thus leaving a clean edge. With optics designed for a tight focus, the system can machine holes of any size larger than 3 μm in diameter. Unlike a standard electrical discharge machining drill, the new laser system allows micro-machining of non-conductive materials such as amorphous boron and silicon carbide gaskets, diamond, oxides, and other materials including organic materials such as polyimide films (i.e., Kapton). An important feature of the new system is the use of gas-tight or gas-flow environmental chambers which allow the laser micro-machining to be done in a controlled (e.g., inert gas) atmosphere to prevent oxidation and other chemical reactions in air-sensitive materials. The gas-tight workpiece enclosure is also useful for machining materials with known health risks (e.g., beryllium). Specialized control software with a graphical interface enables micro-machining of custom 2D and 3D shapes. The laser-machining system was designed in a Class 1 laser enclosure, i.e., it includes laser safety interlocks and computer controls and allows for routine operation. Though initially designed mainly for machining of diamond anvil cell gaskets, the laser-machining system has since found many other micro-machining applications, several of which are presented here.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., code, or other means of access to a consumer's account, or any combination thereof, that may be used by..., automated teller machines (ATMs), and cash dispensing machines. (i) “Financial institution” means a bank...
Code of Federal Regulations, 2012 CFR
2012-01-01
..., code, or other means of access to a consumer's account, or any combination thereof, that may be used by..., automated teller machines (ATMs), and cash dispensing machines. (i) “Financial institution” means a bank...
L.R. Iverson; A.M. Prasad; A. Liaw
2004-01-01
More and better machine learning tools are becoming available for landscape ecologists to aid in understanding species-environment relationships and to map probable species occurrence now and potentially into the future. To that end, we evaluated three statistical models: Regression Tree Analysis (RTA), Bagging Trees (BT) and Random Forest (RF) for their utility in...
Evaluating machine learning algorithms estimating tremor severity ratings on the Bain-Findley scale
NASA Astrophysics Data System (ADS)
Yohanandan, Shivanthan A. C.; Jones, Mary; Peppard, Richard; Tan, Joy L.; McDermott, Hugh J.; Perera, Thushara
2016-12-01
Tremor is a debilitating symptom of some movement disorders. Effective treatment, such as deep brain stimulation (DBS), is contingent upon frequent clinical assessments using instruments such as the Bain-Findley tremor rating scale (BTRS). Many patients, however, do not have access to frequent clinical assessments. Wearable devices have been developed to provide patients with access to frequent objective assessments outside the clinic via telemedicine. Nevertheless, the information they report is not in the form of BTRS ratings. One way to transform this information into BTRS ratings is through linear regression models (LRMs). Another, potentially more accurate method is through machine learning classifiers (MLCs). This study aims to compare MLCs and LRMs, and identify the most accurate model that can transform objective tremor information into tremor severity ratings on the BTRS. Nine participants with upper limb tremor had their DBS stimulation amplitude varied while they performed clinical upper-extremity exercises. Tremor features were acquired using the tremor biomechanics analysis laboratory (TREMBAL). Movement disorder specialists rated tremor severity on the BTRS from video recordings. Seven MLCs and 6 LRMs transformed TREMBAL features into tremor severity ratings on the BTRS using the specialists' ratings as training data. The weighted Cohen's kappa (κw) defined the models' rating accuracy. This study shows that the Random Forest MLC was the most accurate model (κw = 0.81) at transforming tremor information into BTRS ratings, thereby improving the clinical interpretation of tremor information obtained from wearable devices.
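The accuracy measure used above is the weighted Cohen's kappa between predicted and clinician-assigned ratings; a toy computation (linear weights assumed, ratings invented):

```python
# Sketch: weighted Cohen's kappa between model-predicted and clinician
# severity ratings on an ordinal scale. Ratings below are invented.
from sklearn.metrics import cohen_kappa_score

clinician = [0, 1, 2, 3, 2, 1, 0, 4]   # BTRS severity ratings
model_out = [0, 1, 2, 2, 2, 1, 1, 4]   # classifier output
print(cohen_kappa_score(clinician, model_out, weights="linear"))
```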
Kringel, Dario; Geisslinger, Gerd; Resch, Eduard; Oertel, Bruno G; Thrun, Michael C; Heinemann, Sarah; Lötsch, Jörn
2018-03-27
Heat pain and its modulation by capsaicin varies among subjects in experimental and clinical settings. A plausible cause is a genetic component, of which TRPV1 ion channels, by their response to both heat and capsaicin, are primary candidates. However, TRPA1 channels can heterodimerize with TRPV1 channels and carry genetic variants reported to modulate heat pain sensitivity. To address the role of these candidate genes in capsaicin-induced hypersensitization to heat, pain thresholds acquired before and after topical application of capsaicin, together with TRPA1/TRPV1 exomic sequences derived by next-generation sequencing, were assessed in n = 75 healthy volunteers; the genetic information comprised 278 loci. Gaussian mixture modeling indicated 2 phenotype groups with high or low capsaicin-induced hypersensitization to heat. Unsupervised machine learning, implemented as swarm-based clustering, hinted at differences in the genetic pattern between these phenotype groups. Several methods of supervised machine learning, implemented as random forests, adaptive boosting, k-nearest neighbors, naive Bayes, support vector machines and, for comparison, binary logistic regression, predicted the phenotype group association consistently better when based on the observed genotypes than when using a random permutation of the exomic sequences. Of note, TRPA1 variants were more important for correct phenotype group association than TRPV1 variants. This indicates a role of the TRPA1 and TRPV1 next-generation sequencing-based genetic pattern in the modulation of the individual response to heat-related pain phenotypes. When considering earlier evidence that topical capsaicin can induce neuropathy-like quantitative sensory testing patterns in healthy subjects, implications for future analgesic treatments with transient receptor potential inhibitors arise. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
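The phenotyping step can be sketched as a two-component Gaussian mixture over the capsaicin-induced change in heat pain threshold; the values below are simulated, not the study's measurements:

```python
# Sketch: split subjects into high- and low-hypersensitization groups with a
# two-component Gaussian mixture over simulated threshold changes (deg C).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
delta = np.concatenate([rng.normal(-1.0, 0.5, 40),    # low responders
                        rng.normal(-4.0, 0.8, 35)]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(delta)
groups = gmm.predict(delta)                           # phenotype group labels
print("group means:", gmm.means_.ravel())
```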
Enhancement of plant metabolite fingerprinting by machine learning.
Scott, Ian M; Vermeer, Cornelia P; Liakata, Maria; Corol, Delia I; Ward, Jane L; Lin, Wanchang; Johnson, Helen E; Whitehead, Lynne; Kular, Baldeep; Baker, John M; Walsh, Sean; Dave, Anuja; Larson, Tony R; Graham, Ian A; Wang, Trevor L; King, Ross D; Draper, John; Beale, Michael H
2010-08-01
Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by ¹H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, ¹H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. Accessible software for use of ML in plant physiology is highlighted.
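The comparison of random-forest importance scores with analysis of variance F values can be sketched as follows on simulated fingerprint features:

```python
# Sketch: correlate RF feature importances with univariate ANOVA F values.
# The feature matrix is simulated, standing in for spectral fingerprints.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif

X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=8, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
F, _ = f_classif(X, y)                               # ANOVA F per feature
corr = np.corrcoef(rf.feature_importances_, F)[0, 1]
print(f"correlation between RF importances and F values: {corr:.2f}")
```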
High coherence plane breaking packaging for superconducting qubits.
Bronn, Nicholas T; Adiga, Vivekananda P; Olivadese, Salvatore B; Wu, Xian; Chow, Jerry M; Pappas, David P
2018-04-01
We demonstrate a pogo pin package for a superconducting quantum processor specifically designed with a nontrivial layout topology (e.g., a center qubit that cannot be accessed from the sides of the chip). Two experiments on two nominally identical superconducting quantum processors in pogo packages, which use commercially available parts and require modest machining tolerances, are performed at low temperature (10 mK) in a dilution refrigerator; both processors are found to behave comparably to processors in standard planar packages with wirebonds, where control and readout signals come in from the edges. Single- and two-qubit gate errors are also characterized via randomized benchmarking, exhibiting error rates similar to those in standard packages and opening the possibility of integrating pogo pin packaging with extensible qubit architectures.
NASA Astrophysics Data System (ADS)
Wu, Yunnan; Luo, Lin; Li, Jin; Zhang, Ya-Qin
2000-05-01
The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast data amount of the concentric mosaics, a compression scheme based on 3D wavelet transform has been proposed in a previous paper. In this work, we investigate the efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide the random data access and to reduce the calculation for the data access requests to a minimum. A mixed cache is used in PIWS, where the entropy decoded wavelet coefficient, intermediate result of lifting and fully synthesized pixel are all stored at the same memory unit because of the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit is attached with a state to indicate what type of content is currently stored. The computational saving achieved by PIWS is demonstrated with extensive experimental results.
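The in-place property that lets PIWS keep coefficients, intermediate lifting results, and pixels in one mixed cache can be illustrated with a one-dimensional Haar inverse lifting; this is my own toy illustration with per-unit state tags, not the paper's 3D algorithm.

```python
import numpy as np

def haar_inverse_inplace(buf, state):
    """buf holds [approx | detail] halves; synthesis overwrites buf in place."""
    n = len(buf) // 2
    a, d = buf[:n].copy(), buf[n:].copy()
    for i in range(n):
        even = a[i] - d[i] / 2.0              # undo the update lifting step
        odd = even + d[i]                     # undo the predict lifting step
        buf[2 * i], buf[2 * i + 1] = even, odd
        state[2 * i] = state[2 * i + 1] = "PIXEL"   # unit now holds a pixel

coeffs = np.array([8.0, 4.0, -2.0, 2.0])      # Haar coefficients of [9, 7, 3, 5]
state = ["COEFF"] * 4
haar_inverse_inplace(coeffs, state)
print(coeffs, state)                          # [9. 7. 3. 5.], all tagged PIXEL
```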
A Study of Multifunctional Document Centers that Are Accessible to People Who Are Visually Impaired
ERIC Educational Resources Information Center
Huffman, Lee A.; Uslan, Mark M.; Burton, Darren M.; Eghtesadi, Caesar
2009-01-01
The capabilities of modern photocopy machines have advanced beyond the simple duplication of documents. In addition to the standard functions of copying, collating, and stapling, such machines can be a part of telecommunication networks and provide printing, scanning, faxing, and e-mailing functions. No longer just copy machines, these devices are…
Calibrating random forests for probability estimation.
Dankowski, Theresa; Ziegler, Andreas
2016-09-30
Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
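A simplified sketch of the re-calibration idea (deliberately not the authors' terminal-node-to-logistic-regression translation, which is more involved): random forest probabilities learned at one center are updated for a drifted population by fitting a logistic regression on the logit scale. All data here are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1200, random_state=1)
X_old, y_old = X[:1000], y[:1000]             # original center
X_new = X[1000:]                              # new center with drifted outcomes
flip = np.random.default_rng(0).random(200) < 0.15
y_new = np.where(flip, 1 - y[1000:], y[1000:])

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_old, y_old)
p = np.clip(rf.predict_proba(X_new)[:, 1], 1e-6, 1 - 1e-6)
logit = np.log(p / (1 - p)).reshape(-1, 1)

recal = LogisticRegression().fit(logit, y_new)   # intercept/slope update
p_updated = recal.predict_proba(logit)[:, 1]     # re-calibrated probabilities
```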
Competitive foods available in Pennsylvania public high schools.
Probart, Claudia; McDonnell, Elaine; Weirich, J Elaine; Hartman, Terryl; Bailey-Davis, Lisa; Prabhakher, Vaheedha
2005-08-01
This study examined the types and extent of competitive foods available in public high schools in Pennsylvania. We developed, pilot tested, and distributed surveys to school foodservice directors in a random sample of 271 high schools in Pennsylvania. Two hundred twenty-eight surveys were returned, for a response rate of 84%. Statistical analyses were performed: Descriptive statistics were used to examine the extent of competitive food sales in Pennsylvania public high schools. The survey data were analyzed using SPSS software version 11.5.1 (2002, SPSS base 11.0 for Windows, SPSS Inc, Chicago, IL). A la carte sales provide almost $700/day to school foodservice programs, almost 85% of which receive no financial support from their school districts. The top-selling a la carte items are "hamburgers, pizza, and sandwiches." Ninety-four percent of respondents indicated that vending machines are accessible to students. The item most commonly offered in vending machines is bottled water (71.5%). While food items are less often available through school stores and club fund-raisers, candy is the item most commonly offered through these sources. Competitive foods are widely available in high schools. Although many of the items available are low in nutritional value, we found several of the top-selling a la carte options to be nutritious and bottled water the item most often identified as available through vending machines.
Taber, Daniel R; Chriqui, Jamie F; Vuillaume, Renee; Kelder, Steven H; Chaloupka, Frank J
2015-07-27
Across the United States, many states have actively banned the sale of soda in high schools, and evidence suggests that students' in-school access to soda has declined as a result. However, schools may be substituting soda with other sugar-sweetened beverages (SSBs), and national trends indicate that adolescents are consuming more sports drinks and energy drinks. This study examined whether students consumed more non-soda SSBs in states that banned the sale of soda in school. Student data on consumption of various SSBs and in-school access to vending machines that sold SSBs were obtained from the National Youth Physical Activity and Nutrition Study (NYPANS), conducted in 2010. Student data were linked to state laws regarding the sale of soda in school in 2010. Students were cross-classified based on their access to vending machines and whether their state banned soda in school, creating 4 comparison groups. Zero-inflated negative binomial models were used to compare these 4 groups with respect to students’ self-reported consumption of diet soda, sports drinks, energy drinks, coffee/tea, or other SSBs. Students who had access to vending machines in a state that did not ban soda were the reference group. Models were adjusted for race/ethnicity, sex, grade, home food access, state median income, and U.S. Census region. Students consumed more servings of sports drinks, energy drinks, coffee/tea, and other SSBs if they resided in a state that banned soda in school but attended a school with vending machines that sold other SSBs. Similar results were observed where schools did not have vending machines but the state allowed soda to be sold in school. Intake was generally not elevated where both states and schools limited SSB availability – i.e., states banned soda and schools did not have SSB vending machines. State laws that ban soda but allow other SSBs may lead students to substitute other non-soda SSBs. Additional longitudinal research is needed to confirm this. Elevated SSB intake was not observed when both states and schools took steps to remove SSBs from school.
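For readers unfamiliar with the modeling step, a hedged sketch of a zero-inflated negative binomial regression on synthetic servings data follows; statsmodels' ZeroInflatedNegativeBinomialP is one available implementation, and the group variable is a placeholder for the study's four comparison groups.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 4, n)                 # 4 ban-by-vending comparison groups
X = sm.add_constant((group[:, None] == [1, 2, 3]).astype(float))
servings = rng.negative_binomial(2, 0.5, n) * (rng.random(n) > 0.3)  # excess zeros

model = ZeroInflatedNegativeBinomialP(servings, X, exog_infl=X, p=2)
result = model.fit(maxiter=200, disp=0)
print(result.params)
```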
Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi
2016-06-21
Genotype imputation is an important tool for prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research, the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time, and the factors affecting imputation accuracy. The methods were applied to real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The tested methods show that imputation of parent-offspring trios can be accurate. The Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. The running times differed between methods: the ELM was always the fastest algorithm, whereas the RBF required a long imputation time as the sample size increased. The tested methods can be an alternative for imputation of un-typed SNPs when the rate of missing data is low. However, it is recommended that other machine learning methods also be evaluated for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.
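A toy sketch of framing imputation as supervised classification, under assumed linkage between markers: an "un-typed" SNP is predicted from flanking SNPs with two of the methods named above. The simulated genotypes are not the study's trio data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
base = rng.integers(0, 2, size=(1000, 1))
h1 = base ^ (rng.random((1000, 21)) < 0.1)    # two haplotypes in strong LD
h2 = base ^ (rng.random((1000, 21)) < 0.1)
G = (h1 + h2).astype(int)                     # diploid genotypes 0/1/2
X, y = np.delete(G, 10, axis=1), G[:, 10]     # SNP 10 plays the un-typed SNP

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=300,
                                                           random_state=0)),
                  ("SVM", SVC())]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```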
Holographic Labeling And Reading Machine For Authentication And Security Applications
Weber, David C.; Trolinger, James D.
1999-07-06
A holographic security label and an automated reading machine for marking and subsequently authenticating any object, such as an identification badge, a pass, a ticket, a manufactured part, or a package, are described. The security label is extremely difficult to copy or even to read by unauthorized persons. The system comprises a holographic security label that has been created with a coded reference wave, whose specification can be kept secret. The label contains information that can be extracted only with the coded reference wave, which is derived from a holographic key; this restricts access to the information to the possessor of the key. A reading machine accesses the information contained in the label and compares it with data stored in the machine through the application of a joint transform correlator, which is also equipped with a reference hologram that adds additional security to the procedure.
Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott;
2010-01-01
This paper describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-29
... INTERNATIONAL TRADE COMMISSION [DN 2859] Certain Dynamic Random Access Memory Devices, and.... International Trade Commission has received a complaint entitled In Re Certain Dynamic Random Access Memory... certain dynamic random access memory devices, and products containing same. The complaint names Elpida...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-01
... Semiconductor Chips Having Synchronous Dynamic Random Access Memory Controllers and Products Containing Same... synchronous dynamic random access memory controllers and products containing same by reason of infringement of... semiconductor chips having synchronous dynamic random access memory controllers and products containing same...
Access, Equity, and Opportunity. Women in Machining: A Model Program.
ERIC Educational Resources Information Center
Warner, Heather
The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…
Enhanced networked server management with random remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2003-08-01
In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure that hooks up long-distance servers within a single network infrastructure. The servers can be represented as "machines"; the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of the auxiliary machines changes with each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
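A back-of-envelope Monte Carlo illustration of the enhanced model's key feature, backup availability that changes per activation (my simplification, not the paper's analytic derivation):

```python
import random

def availability(hours=100_000, fail_p=0.001, repair_h=8, q_range=(0.6, 0.95)):
    up = down_left = 0
    q = 0.0
    for _ in range(hours):
        if down_left:                          # main server is under repair
            up += random.random() < q          # backup carries the load this hour?
            down_left -= 1
        else:
            up += 1
            if random.random() < fail_p:       # main server breaks down
                down_left = repair_h
                q = random.uniform(*q_range)   # fresh availability per activation
    return up / hours

random.seed(0)
print(f"estimated system availability: {availability():.4f}")
```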
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-27
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-821] Certain Dynamic Random Access Memory... importation, and the sale within the United States after importation of certain dynamic random access memory... certain dynamic random access memory devices, and products containing same that infringe one or more of...
Overload Control for Signaling Congestion of Machine Type Communications in 3GPP Networks.
Lu, Zhaoming; Pan, Qi; Wang, Luhan; Wen, Xiangming
2016-01-01
Because of the limited resources on radio access channels of the third generation partnership project (3GPP) network, one of the most challenging tasks posed by 3GPP cellular-based machine type communications (MTC) is congestion due to massive requests for connection to the radio access network (RAN). In this paper, an overload control algorithm in the 3GPP RAN is proposed, which proactively disperses simultaneous access attempts over an evenly distributed time window. Through a periodic reservation strategy, the massive access requests of MTC devices are dispersed in time, which reduces the probability of signaling conflicts. By the compensation and prediction mechanism, each device can communicate with the MTC server under a dynamic air-interface load. Numerical results show that the proposed method makes MTC applications friendly to the 3GPP cellular network.
36 CFR Appendix A to Part 1191 - Table Of Contents
Code of Federal Regulations, 2014 CFR
2014-07-01
... Protruding Objects 205 Operable Parts 206 Accessible Routes 207 Accessible Means of Egress 208 Parking Spaces..., Kitchenettes, and Sinks 213 Toilet Facilities and Bathing Facilities 214 Washing Machines and Clothes Dryers... F205 Operable Parts F206 Accessible Routes F207 Accessible Means of Egress F208 Parking Spaces F209...
ERIC Educational Resources Information Center
Lau, Andrew J.
2013-01-01
This dissertation is an ethnography conducted with the Los Angeles-based community arts organization called Machine Project. Operating both a storefront gallery in Echo Park and as a loose association of contemporary artists, performers, curators, and designers, Machine Project seeks to make "rarefied knowledge accessible" through…
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Defense Logistics Standard Systems Functional Requirements.
1987-03-01
Artificial Intelligence - the development of a machine capability to perform functions normally concerned with human intelligence, such as learning, adapting...Basic Data Base Machine Configurations...PART I: MODELS - DEFENSE LOGISTICS STANDARD SYSTEMS FUNCTIONAL REQUIREMENTS...On-line, Interactive Access. Integrating user input and machine output in a dynamic, real-time, give-and-take process is considered the optimum mode
A microcomputer network for control of a continuous mining machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffbauer, W.H.
1993-12-31
This report details a microcomputer-based control and monitoring network that was developed in-house by the U.S. Bureau of Mines and installed on a continuous mining machine. The network consists of microcomputers that are connected together via a single twisted-pair cable. Each microcomputer was developed to provide a particular function in the control process. Machine-mounted microcomputers, in conjunction with the appropriate sensors, provide closed-loop control of the machine, navigation, and environmental monitoring. Off-the-machine microcomputers provide remote control of the machine, sensor status, and a connection to the network so that external computers can access network data and control the continuous mining machine. Because of the network's generic structure, it can be installed on most mining machines.
Wang, Pin-Chieh; Ritz, Beate R; Janowitz, Ira; Harrison, Robert J; Yu, Fei; Chan, Jacqueline; Rempel, David M
2008-03-01
Determine whether an adjustable chair with a curved or a flat seat pan improved monthly back and hip pain scores in sewing machine operators. This 4-month intervention study randomized 293 sewing machine operators with back and hip pain. The participants in the control group received a placebo intervention, and participants in the intervention groups received the placebo intervention and one of the two intervention chairs. Compared with the control group, mean pain improvement for the flat chair intervention was 0.43 points (95% CI = 0.34, 0.51) per month, and mean pain improvement for the curved chair intervention was 0.25 points (95% CI = 0.16, 0.34) per month. A height-adjustable task chair with a swivel function can reduce back and hip pain in sewing machine operators. The findings may be relevant to workers who perform visual- and hand-intensive manufacturing jobs.
A cooperative effort to pass tobacco control ordinances in Wichita, Kansas.
Pippert, K; Jecha, L; Coen, S; MacDonald, P; Francisco, J; Pickard, S
1995-01-01
In October 1993, the Tobacco-Free Wichita Coalition proposed ordinances to the Wichita City Council that would tightly control access of minors to tobacco and prohibit smoking in public places. The subsequent successful change in local health policy required the collaborative efforts of local and state organizations and health agencies. A simple random telephone survey commissioned and financed by the coalition demonstrated that 76 percent (95 percent CI = 72 percent to 80 percent) of adult Wichita-Sedgwick County residents favored enforced penalties for merchants selling tobacco to minors, and 62 percent (95 percent CI = 58 percent to 66 percent) favored a ban on tobacco vending machines. Fifty-four percent (95 percent CI = 50 percent to 58 percent) favored a smoking ban in all public places.
The influence of negative training set size on machine learning-based virtual screening.
Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J
2014-01-01
The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of applying machine learning methods was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluation parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of the dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with the SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
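The core experiment can be sketched as follows: hold the positive set fixed, grow the randomly drawn negative set, and track precision, recall, and MCC. Synthetic vectors stand in for ZINC fingerprints, and the Random Forest classifier represents one of the tested methods.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, weights=[0.9], random_state=0)
pos, neg = X[y == 1], X[y == 0]               # "actives" and ZINC-like "decoys"

for n_neg in (200, 1000, 5000):               # grow the negative training set
    Xs = np.vstack([pos, neg[:n_neg]])
    ys = np.hstack([np.ones(len(pos)), np.zeros(n_neg)])
    Xtr, Xte, ytr, yte = train_test_split(Xs, ys, stratify=ys, random_state=0)
    pred = RandomForestClassifier(random_state=0).fit(Xtr, ytr).predict(Xte)
    print(n_neg, round(precision_score(yte, pred), 2),
          round(recall_score(yte, pred), 2),
          round(matthews_corrcoef(yte, pred), 2))
```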
A Technique for Machine-Aided Indexing
ERIC Educational Resources Information Center
Klingbiel, Paul H.
1973-01-01
The technique for machine-aided indexing developed at the Defense Documentation Center (DDC) is illustrated on a randomly chosen abstract. Additional text is provided in coded form so that the reader can more fully explore this technique. (2 references) (Author)
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
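As a conceptual sketch only (the actual tool model-checks state machines generated from Plural specifications), the following explores the product of abstract program locations and a typestate automaton, reporting any reachable illegal action; the file-handle automaton and toy program are invented.

```python
from collections import deque

TYPESTATE = {("closed", "open"): "open",       # typestate automaton for a handle
             ("open", "read"): "open",
             ("open", "close"): "closed"}

PROGRAM = {0: [("open", 1)],                   # location -> [(action, next loc)]
           1: [("read", 1), ("close", 2)],     # branching = nondeterminism
           2: [("read", 3)],                   # bug: read after close
           3: []}

def model_check():
    seen = {(0, "closed")}
    queue = deque(seen)
    while queue:
        loc, ts = queue.popleft()
        for action, nxt_loc in PROGRAM[loc]:
            nxt_ts = TYPESTATE.get((ts, action))
            if nxt_ts is None:                 # action illegal in this typestate
                print(f"violation: '{action}' at location {loc} in state '{ts}'")
            elif (nxt_loc, nxt_ts) not in seen:
                seen.add((nxt_loc, nxt_ts))
                queue.append((nxt_loc, nxt_ts))

model_check()   # -> violation: 'read' at location 2 in state 'closed'
```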
Current Developments in Machine Learning Techniques in Biological Data Mining.
Dumancas, Gerard G; Adrianto, Indra; Bello, Ghalib; Dozmorov, Mikhail
2017-01-01
This supplement is intended to focus on the use of machine learning techniques to generate meaningful information from biological data. This supplement under Bioinformatics and Biology Insights aims to provide scientists and researchers working in this rapidly evolving field with online, open-access articles authored by leading international experts. Advances in the field of biology have generated massive opportunities for the implementation of modern computational and statistical techniques. Machine learning methods in particular, a subfield of computer science, have evolved into an indispensable tool applied to a wide spectrum of bioinformatics applications. Thus, machine learning is broadly used to investigate the underlying mechanisms leading to a specific disease, as well as in the biomarker discovery process. With growth in this area of science comes the need to access up-to-date, high-quality scholarly articles that will leverage the knowledge of scientists and researchers in the various applications of machine learning techniques in mining biological data.
Microcomputer network for control of a continuous mining machine. Information circular/1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffbauer, W.H.
1993-01-01
The paper details a microcomputer-based control and monitoring network that was developed in-house by the U.S. Bureau of Mines and installed on a Joy 14 continuous mining machine. The network consists of microcomputers that are connected together via a single twisted-pair cable. Each microcomputer was developed to provide a particular function in the control process. Machine-mounted microcomputers in conjunction with the appropriate sensors provide closed-loop control of the machine, navigation, and environmental monitoring. Off-the-machine microcomputers provide remote control of the machine, sensor status, and a connection to the network so that external computers can access network data and control the continuous mining machine. Although the network was installed on a Joy 14 continuous mining machine, its use extends beyond it. Its generic structure lends itself to installation onto most mining machine types.
Cole-Lewis, Heather; Varghese, Arun; Sanders, Amy; Schwarz, Mary; Pugatch, Jillian; Augustson, Erik
2015-08-25
Electronic cigarettes (e-cigarettes) continue to be a growing topic among social media users, especially on Twitter. The ability to analyze conversations about e-cigarettes in real-time can provide important insight into trends in the public's knowledge, attitudes, and beliefs surrounding e-cigarettes, and subsequently guide public health interventions. Our aim was to establish a supervised machine learning algorithm to build predictive classification models that assess Twitter data for a range of factors related to e-cigarettes. Manual content analysis was conducted for 17,098 tweets. These tweets were coded for five categories: e-cigarette relevance, sentiment, user description, genre, and theme. Machine learning classification models were then built for each of these five categories, and word groupings (n-grams) were used to define the feature space for each classifier. Predictive performance scores for classification models indicated that the models correctly labeled the tweets with the appropriate variables between 68.40% and 99.34% of the time, and the percentage of maximum possible improvement over a random baseline that was achieved by the classification models ranged from 41.59% to 80.62%. Classifiers with the highest performance scores that also achieved the highest percentage of the maximum possible improvement over a random baseline were Policy/Government (performance: 0.94; % improvement: 80.62%), Relevance (performance: 0.94; % improvement: 75.26%), Ad or Promotion (performance: 0.89; % improvement: 72.69%), and Marketing (performance: 0.91; % improvement: 72.56%). The most appropriate word-grouping unit (n-gram) was 1 for the majority of classifiers. Performance continued to marginally increase with the size of the training dataset of manually annotated data, but eventually leveled off. Even at low dataset sizes of 4000 observations, performance characteristics were fairly sound. Social media outlets like Twitter can uncover real-time snapshots of personal sentiment, knowledge, attitudes, and behavior that are not as accessible, at this scale, through any other offline platform. Using the vast data available through social media presents an opportunity for social science and public health methodologies to utilize computational methodologies to enhance and extend research and practice. This study was successful in automating a complex five-category manual content analysis of e-cigarette-related content on Twitter using machine learning techniques. The study details machine learning model specifications that provided the best accuracy for data related to e-cigarettes, as well as a replicable methodology to allow extension of these methods to additional topics.
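The pipeline shape can be sketched in a few lines: unigram (n = 1) features feed a supervised classifier, and performance is reported as the share of the maximum possible improvement over a majority-class baseline. The tweets, labels, and choice of naive Bayes here are placeholders, not the study's data or tuned models.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["quit smoking with ecigs", "vape shop sale today", "new vape flavors out",
          "e-cigs helped me quit", "buy one get one vape deal", "no more cigarettes"]
labels = np.array([0, 1, 1, 0, 1, 0])          # 1 = "Ad or Promotion" (toy labels)

pipe = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
score = cross_val_score(pipe, tweets, labels, cv=3).mean()
baseline = np.bincount(labels).max() / len(labels)   # majority-class accuracy
improvement = (score - baseline) / (1 - baseline)    # share of max possible gain
print(f"accuracy {score:.2f}, improvement over baseline {improvement:.0%}")
```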
Shan, Juan; Alam, S Kaisar; Garra, Brian; Zhang, Yingtao; Ahmed, Tahira
2016-04-01
This work identifies effective computable features from the Breast Imaging Reporting and Data System (BI-RADS) to develop a computer-aided diagnosis (CAD) system for breast ultrasound. Computerized features corresponding to ultrasound BI-RADS categories were designed and tested using a database of 283 pathology-proven benign and malignant lesions. Features were selected based on classification performance using a "bottom-up" approach for different machine learning methods, including decision tree, artificial neural network, random forest and support vector machine. Using 10-fold cross-validation on the database of 283 cases, the highest area under the receiver operating characteristic (ROC) curve (AUC) was 0.84, from a support vector machine with 77.7% overall accuracy; the highest overall accuracy, 78.5%, was from a random forest with an AUC of 0.83. Lesion margin and orientation were the optimum features common to all of the machine learning methods. These features can be used in CAD systems to help distinguish benign from worrisome lesions. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. All rights reserved.
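A hedged sketch of a "bottom-up" (greedy forward) selection loop with 10-fold cross-validated AUC follows; the feature names are invented placeholders for the study's computerized BI-RADS descriptors, and the stopping rule is one plausible choice.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=283, n_features=8, random_state=0)
names = ["margin", "orientation", "shape", "echo_pattern", "posterior_features",
         "boundary", "calcification", "texture"]

chosen, remaining, best_auc = [], list(range(X.shape[1])), 0.0
while remaining:
    auc, j = max((cross_val_score(SVC(), X[:, chosen + [k]], y, cv=10,
                                  scoring="roc_auc").mean(), k) for k in remaining)
    if auc <= best_auc:                        # stop once AUC no longer improves
        break
    best_auc = auc
    chosen.append(j)
    remaining.remove(j)
print([names[j] for j in chosen], round(best_auc, 3))
```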
Chaotic sources of noise in machine acoustics
NASA Astrophysics Data System (ADS)
Moon, F. C., Prof.; Broschart, Dipl.-Ing. T.
1994-05-01
In this paper a model is posited for deterministic, random-like noise in machines with sliding rigid parts impacting linear continuous machine structures. Such problems occur in gear transmission systems. A mathematical model is proposed to explain the random-like structure-borne and air-borne noise from such systems when the input is a periodic deterministic excitation of the quasi-rigid impacting parts. An experimental study is presented which supports the model. A thin circular plate is impacted by a chaotically vibrating mass excited by a sinusoidal moving base. The results suggest that the plate vibrations might be predicted by replacing the chaotic vibrating mass with a probabilistic forcing function. Prechaotic vibrations of the impacting mass show classical period doubling phenomena.
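A toy illustration of how a periodic drive can yield random-like impact sequences (not the paper's plate model) is the dissipative impact map often used for a ball bouncing on a vibrating table; counting distinct rounded velocities gives a crude indicator of periodic versus chaotic response as the drive amplitude grows.

```python
import numpy as np

def impact_map(gamma, alpha=0.9, n=3000, burn=2000):
    """Dissipative impact map: theta = impact phase, v = post-impact velocity."""
    theta, v, out = 0.1, 0.1, []
    for k in range(n):
        theta = (theta + v) % (2 * np.pi)
        v = alpha * v - gamma * np.cos(theta)
        if k >= burn:
            out.append(v)
    return np.array(out)

for gamma in (2.0, 3.5, 5.0):                  # increasing drive amplitude
    v = impact_map(gamma)
    n_distinct = len(np.unique(v.round(6)))    # crude periodicity indicator
    print(f"gamma = {gamma}: {n_distinct} distinct post-impact velocities")
```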
Applications of Database Machines in Library Systems.
ERIC Educational Resources Information Center
Salmon, Stephen R.
1984-01-01
Characteristics and advantages of database machines are summarized and their applications to library functions are described. The ability to attach multiple hosts to the same database and flexibility in choosing operating and database management systems for different functions without loss of access to common database are noted. (EJS)
The Future of Access Technology for Blind and Visually Impaired People.
ERIC Educational Resources Information Center
Schreier, E. M.
1990-01-01
This article describes potential use of new technological products and services by blind/visually impaired people. Items discussed include computer input devices, public telephones, automatic teller machines, airline and rail arrival/departure displays, ticketing machines, information retrieval systems, order-entry terminals, optical character…
Code of Federal Regulations, 2013 CFR
2013-07-01
... operations. Direct competition. The presence and operation of a DoD Component vending machine or a vending... machines or vending facilities operated in areas serving employees, the majority of whom normally do not have access (in terms of uninterrupted ease of approach and the amount of time required to patronize...
Code of Federal Regulations, 2012 CFR
2012-07-01
... operations. Direct competition. The presence and operation of a DoD Component vending machine or a vending... machines or vending facilities operated in areas serving employees, the majority of whom normally do not have access (in terms of uninterrupted ease of approach and the amount of time required to patronize...
Code of Federal Regulations, 2011 CFR
2011-07-01
... operations. Direct competition. The presence and operation of a DoD Component vending machine or a vending... machines or vending facilities operated in areas serving employees, the majority of whom normally do not have access (in terms of uninterrupted ease of approach and the amount of time required to patronize...
A Qualitative Security Analysis of a New Class of 3-D Integrated Crypto Co-processors
2012-01-01
and mobile phones, lottery ticket vending machines, and various electronic payment systems. The main reason for their use in such applications is that...military applications such as secure communication links. However, the proliferation of Automated Teller Machines (ATMs) in the ’80s introduced them to...commercial applications. Today many popular consumer devices have cryptographic processors in them, for example, smart-cards for pay-TV access machines
Kim, Dong Wook; Kim, Hwiyoung; Nam, Woong; Kim, Hyung Jun; Cha, In-Ho
2018-04-23
The aim of this study was to build and validate five types of machine learning models that can predict the occurrence of bisphosphonate-related osteonecrosis of the jaw (BRONJ) associated with dental extraction in patients taking bisphosphonates for the management of osteoporosis. A retrospective review of the medical records was conducted to obtain cases and controls for the study. A total of 125 patients, consisting of 41 cases and 84 controls, were selected. Five machine learning prediction algorithms, including a multivariable logistic regression model, decision tree, support vector machine, artificial neural network, and random forest, were implemented. The outputs of these models were compared with each other and also with conventional methods, such as serum CTX level. The area under the receiver operating characteristic (ROC) curve (AUC) was used to compare the results. The performance of the machine learning models was significantly superior to conventional statistical methods and single predictors. The random forest model yielded the best performance (AUC = 0.973), followed by the artificial neural network (AUC = 0.915), support vector machine (AUC = 0.882), logistic regression (AUC = 0.844), decision tree (AUC = 0.821), drug holiday alone (AUC = 0.810), and CTX level alone (AUC = 0.630). Machine learning methods showed superior performance in predicting BRONJ associated with dental extraction compared to conventional statistical methods using drug holiday and serum CTX level. Machine learning can thus be applied in a wide range of clinical studies. Copyright © 2017. Published by Elsevier Inc.
Development of techniques to enhance man/machine communication
NASA Technical Reports Server (NTRS)
Targ, R.; Cole, P.; Puthoff, H.
1974-01-01
A four-state random stimulus generator, considered to function as an ESP teaching machine, was used to investigate an approach to facilitating interactions between man and machines. A subject tries to guess which of four states the machine is in. The machine offers the user feedback and reinforcement as to the correctness of his choice. Using this machine, 148 volunteer subjects were screened under various protocols. Several subjects whose learning slope and/or mean score departed significantly from chance expectation were identified. Electroencephalographic (EEG) output was also studied for direct physiological evidence of perception of remote stimuli, not presented to any known sense of the percipient, when a light was flashed in a distant room.
On Using Home Networks and Cloud Computing for a Future Internet of Things
NASA Astrophysics Data System (ADS)
Niedermayer, Heiko; Holz, Ralph; Pahl, Marc-Oliver; Carle, Georg
In this position paper we state four requirements for a Future Internet and sketch our initial concept. The requirements are: (1) more comfort, (2) integration of home networks, (3) resources like service clouds in the network, and (4) access anywhere on any machine. A Future Internet needs future quality and future comfort. There need to be new possibilities for everyone. Our focus is on the higher layers and relates to the many overlay proposals; we consider them to run on top of a basic Future Internet core. A new user experience means including all user devices. Home networks and services should be a fundamental part of the Future Internet. Home networks extend access and allow interaction with the environment. Cloud computing can provide reliable resources beyond local boundaries. For access anywhere, we also need secure storage for data and profiles in the network, in particular for access with non-personal devices (Internet terminal, ticket machine, ...).
AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au
In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
Lara-Tejero, María; Bewersdorf, Jörg; Galán, Jorge E.
2017-01-01
Type III protein secretion machines have evolved to deliver bacterially encoded effector proteins into eukaryotic cells. Although electron microscopy has provided a detailed view of these machines in isolation or fixed samples, little is known about their organization in live bacteria. Here we report the visualization and characterization of the Salmonella type III secretion machine in live bacteria by 2D and 3D single-molecule switching superresolution microscopy. This approach provided access to transient components of this machine, which previously could not be analyzed. We determined the subcellular distribution of individual machines, the stoichiometry of the different components of this machine in situ, and the spatial distribution of the substrates of this machine before secretion. Furthermore, by visualizing this machine in Salmonella mutants we obtained major insights into the machine’s assembly. This study bridges a major resolution gap in the visualization of this nanomachine and may serve as a paradigm for the examination of other bacterially encoded molecular machines.
Vending machine assessment methodology. A systematic review.
Matthews, Melissa A; Horacek, Tanya M
2015-07-01
The nutritional quality of food and beverage products sold in vending machines has been implicated as a contributing factor to the development of an obesogenic food environment. How comprehensive, reliable, and valid are the current assessment tools for vending machines to support or refute these claims? A systematic review was conducted to summarize, compare, and evaluate the current methodologies and available tools for vending machine assessment. A total of 24 relevant research studies published between 1981 and 2013 met inclusion criteria for this review. The methodological variables reviewed in this study include assessment tool type, study location, machine accessibility, product availability, healthfulness criteria, portion size, price, product promotion, and quality of scientific practice. There were wide variations in the depth of the assessment methodologies and product healthfulness criteria utilized among the reviewed studies. Of the reviewed studies, 39% evaluated machine accessibility, 91% evaluated product availability, 96% established healthfulness criteria, 70% evaluated portion size, 48% evaluated price, 52% evaluated product promotion, and 22% evaluated the quality of scientific practice. Of all reviewed articles, 87% reached conclusions that provided insight into the healthfulness of vended products and/or vending environment. Product healthfulness criteria and complexity for snack and beverage products was also found to be variable between the reviewed studies. These findings make it difficult to compare results between studies. A universal, valid, and reliable vending machine assessment tool that is comprehensive yet user-friendly is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.
Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series
NASA Astrophysics Data System (ADS)
Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik
2016-06-01
Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available for our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, as well as the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential of using machine learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model biophysical parameters.
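A minimal sketch of the regression setup, with synthetic reflectances standing in for the Landsat 8 OLI bands and FPAR generated from a toy NDVI relation, scored with R² and RMSE as in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
bands = rng.uniform(0, 0.6, size=(400, 6))     # 6 synthetic OLI reflectance bands
ndvi = (bands[:, 4] - bands[:, 3]) / (bands[:, 4] + bands[:, 3] + 1e-9)
fpar = np.clip(1.2 * ndvi + rng.normal(0, 0.05, 400), 0, 1)   # toy FPAR relation

Xtr, Xte, ytr, yte = train_test_split(bands, fpar, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(Xtr, ytr)
pred = rf.predict(Xte)
rmse = np.sqrt(mean_squared_error(yte, pred))
print(f"R2 = {r2_score(yte, pred):.2f}, RMSE = {rmse:.3f}")
```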
Towards large-scale FAME-based bacterial species identification using machine learning techniques.
Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul
2009-05-01
In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have resulted in good species identification results for the three genera. For Bacillus, Paenibacillus and Pseudomonas, random forests have resulted in sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forest models outperform those of the other machine learning techniques, and our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational species identification strategy.
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach
Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One powerful machine learning algorithm is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients used as activation functions in the hidden layer. It is known that the performance of the conventional ELM is not robust due to the random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when used together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint gives the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
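The core ELM mechanics behind WS-ELM/CWS-ELM can be sketched as follows. This is an assumption-laden toy, not the authors' code: the paper's similarity-coefficient activations are replaced by a sigmoid, and the clustering-based weight assignment is shown with k-means only.

```python
# Toy sketch of the ELM idea: hidden weights are either random (conventional
# ELM) or set deterministically from k-means centroids (the CWS-ELM idea);
# output weights are solved in closed form via a pseudoinverse.
import numpy as np
from sklearn.cluster import KMeans

def elm_fit(X, y, W, b):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer activations
    return np.linalg.pinv(H) @ y              # closed-form output weights

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

rng = np.random.default_rng(0)
X = rng.random((300, 64))                     # stand-in for molecular fingerprints
y = (X[:, :8].sum(axis=1) > 4).astype(float)  # toy activity labels
n_hidden = 32
b = rng.normal(size=n_hidden)

# Conventional ELM: random hidden weights (the source of non-robustness).
W_rand = rng.normal(size=(64, n_hidden))
beta_r = elm_fit(X, y, W_rand, b)

# Clustering-based variant: hidden weights taken from k-means centroids.
W_km = KMeans(n_clusters=n_hidden, n_init=10, random_state=0).fit(X).cluster_centers_.T
beta_k = elm_fit(X, y, W_km, b)
print("train acc:", ((elm_predict(X, W_km, b, beta_k) > 0.5) == (y > 0.5)).mean())
```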
Mateen, Bilal Akhter; Bussas, Matthias; Doogan, Catherine; Waller, Denise; Saverino, Alessia; Király, Franz J; Playford, E Diane
2018-05-01
To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls. Prospective cohort study. Tertiary neurological and neurosurgical center. In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care. Measures were a binary outcome (Y/N) for falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function) and the Walk-12 (a patient-reported measure of physical function). The principal outcome was a fall during the in-patient stay (n = 54). The Trail Making Test was identified as the best predictor of falls; moreover, adding other variables did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine learning-based strategies, for example, random forests, neural networks, and support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001), with 68% (±7.7) sensitivity and 90% (±2.3) specificity. This study identifies a simple yet powerful machine learning (random forest) based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making Test.
Models of the solvent-accessible surface of biopolymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.E.
1996-09-01
Many biopolymers such as proteins, DNA, and RNA have been studied because they have important biomedical roles and may be good targets for therapeutic action in treating diseases. This report describes how plastic models of the solvent-accessible surface of biopolymers were made. Computer files containing sets of triangles were calculated and then used on a stereolithography machine to make the models. Small (2 in.) models were made to test whether the computer calculations were done correctly. Also, files of the (.stl) type required by any ISO 9001 rapid prototyping machine were written onto a CD-ROM for distribution to American companies.
NASA Technical Reports Server (NTRS)
Burke, Gary R.; Taft, Stephanie
2004-01-01
State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which when fired at the wrong time will cause a mission failure. Even a well designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding and find which method gives the best fault tolerance, as well as determining the resources needed for each method.
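A rough way to see why encoding matters, sketched under simple assumptions (single bit flips, detection only of illegal states; this is not the paper's analysis): count the flips that land on another valid code word and therefore cannot be flagged.

```python
# Toy comparison: how often does a single bit flip (an SEU) turn one valid
# state encoding into another valid one, i.e. an undetectable error?
def undetectable_flips(codewords):
    valid = set(codewords)
    width = len(codewords[0])
    bad = 0
    for w in codewords:
        for i in range(width):
            flipped = w[:i] + ("1" if w[i] == "0" else "0") + w[i + 1:]
            if flipped in valid:        # flip lands on another legal state
                bad += 1
    return bad

n_states = 8
binary = [format(s, "03b") for s in range(n_states)]             # dense binary
one_hot = ["".join("1" if i == s else "0" for i in range(n_states))
           for s in range(n_states)]                             # one-hot

print("binary :", undetectable_flips(binary), "undetectable single-bit flips")
print("one-hot:", undetectable_flips(one_hot), "undetectable single-bit flips")
# Dense binary: every flip yields another valid state (24 here). One-hot:
# every single flip produces an illegal (detectable) state (0 here).
```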
Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong; Malik, Waqar; Jung, Yoon C.
2016-01-01
Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forests, and neural network models are applied to the actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques provide the most accurate predictions in terms of root-mean-square error. We also discuss the operational complexity and uncertainties that make it difficult to predict taxi times accurately.
Some history and use of the random positioning machine, RPM, in gravity related research
NASA Astrophysics Data System (ADS)
van Loon, Jack J. W. A.
The first experiments using machines and instruments to manipulate gravity, and thus learn about the impact of this force on living systems, were performed by Sir Thomas Andrew Knight in 1806, exactly two centuries ago. What have we learned from these experiments, and in particular what have we learned about the use of instruments to reveal the impact of gravity and rotation on plants and other living systems? In this essay I go into the use of instruments in gravity-related research, with emphasis on the Random Positioning Machine, RPM. Going from the water wheel via the clinostat to the RPM, we address the usefulness and possible working principles of these hypergravity and, as they are mostly called, microgravity (or better, micro-weight) simulation techniques.
Early experiences in developing and managing the neuroscience gateway.
Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T
2015-02-01
The last few decades have seen the emergence of computational neuroscience as a mature field in which researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and the associated cyberinfrastructure to manage computational workflows and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suited to high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with their complex user interfaces, and managing data and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research using high performance computing at the back end. We also look at the parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway.
Predicting healthcare associated infections using patients' experiences
NASA Astrophysics Data System (ADS)
Pratt, Michael A.; Chu, Henry
2016-05-01
Healthcare associated infections (HAI) are a major threat to patient safety and are costly to health systems. Our goal is to predict the HAI performance of a hospital using the patients' experience responses as input. We use four classifiers, viz. random forest, naive Bayes, artificial feedforward neural networks, and the support vector machine, to perform the prediction of six types of HAI. The six types include blood stream, urinary tract, surgical site, and intestinal infections. Experiments show that the random forest and support vector machine perform well across the six types of HAI.
Survey of home hemodialysis patients and nursing staff regarding vascular access use and care.
Spry, Leslie A; Burkart, John M; Holcroft, Christina; Mortier, Leigh; Glickman, Joel D
2015-04-01
Vascular access infections are of concern to hemodialysis patients and nurses. Best demonstrated practices (BDPs) have not been developed for home hemodialysis (HHD) access use, but there are generally accepted practices (GAPs) endorsed by dialysis professionals. We developed a survey to gather information about the training provided and the actual practices of HHD patients using the NxStage System One HHD machine. We used GAPs to assess the training used by nurses to teach HHD access care and then to assess actual practice (adherence) by HHD patients. We also assessed training and adherence where GAPs do not exist. We received a 43% response rate from patients and a 76% response rate from nurses representing 19 randomly selected HHD training centers. We found that nurses were not uniformly instructing HHD patients according to GAPs, that patients were not performing access cannulation according to GAPs, and that patients were not adherent to their training procedures. Identification of the signs and symptoms of infection was commonly trained appropriately, but we observed a reluctance by patients to report some signs and symptoms of infection. Of particular concern, when aggregating all steps surveyed, not a single nurse or patient reported training or performing all steps in accordance with GAPs. We also identified practices for which there are no GAPs that require further study and may or may not impact outcomes such as infection. Further research is needed to develop strategies to implement and expand GAPs, measure outcomes, and ultimately develop BDPs for HHD to improve infectious complications. © 2014 The Authors. Hemodialysis International published by Wiley Periodicals, Inc. on behalf of International Society for Hemodialysis.
VLSI-based video event triggering for image data compression
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1994-02-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may lead to a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long-term (DC-like) or short-term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
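The pre-trigger/post-trigger archiving idea can be sketched with a ring buffer; this is an illustrative toy in Python, not the VLSI design:

```python
# Keep the last PRE frames so that, when a trigger fires, both pre-trigger
# context and POST post-trigger frames are archived; all other frames roll off.
from collections import deque

PRE, POST = 8, 4                       # frames kept before/after the trigger

def capture(frames, trigger):
    """frames: iterable of frames; trigger(frame) -> bool."""
    pre = deque(maxlen=PRE)            # rolling pre-trigger history
    archive, post_left = [], 0
    for f in frames:
        if post_left:                  # still archiving post-trigger frames
            archive.append(f)
            post_left -= 1
        elif trigger(f):               # event onset detected
            archive.extend(pre)        # keep the pre-trigger context
            archive.append(f)
            post_left = POST
        else:
            pre.append(f)              # ordinary frame: rolls off eventually
    return archive

# Toy usage: "frames" are integers, the event is any frame value >= 100.
print(capture(list(range(20)) + [100] + list(range(30)), lambda f: f >= 100))
```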
Android Malware Classification Using K-Means Clustering Algorithm
NASA Astrophysics Data System (ADS)
Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah
2017-08-01
Malware is designed to gain access to or damage a computer system without the user's knowledge, and attackers also exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets were selected to demonstrate the practice of the K-Means clustering algorithm: the VirusTotal and Malgenome datasets. We classify the Android malware into three clusters: ransomware, scareware and goodware. Nine features were considered for each dataset: Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright and Moneypak. We used IBM SPSS Statistics software for data classification and WEKA tools to evaluate the resulting clusters. The proposed K-Means clustering algorithm shows promising results, with high accuracy when tested using the Random Forest algorithm.
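A minimal sketch of the clustering step, assuming scikit-learn and made-up feature scores (the feature names follow the abstract; the data do not):

```python
# Cluster app samples described by nine features into three groups.
import numpy as np
from sklearn.cluster import KMeans

features = ["lock_detected", "text_detected", "text_score", "encryption_detected",
            "threat", "porn", "law", "copyright", "moneypak"]
rng = np.random.default_rng(1)
X = rng.random((150, len(features)))    # stand-in for scored app samples

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
labels = km.labels_                     # candidate ransomware/scareware/goodware groups
for c in range(3):
    print(f"cluster {c}: {np.sum(labels == c)} samples")
```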
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
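The preprocessing step can be sketched as follows, under assumed conventions (unit-norm snapshots, averaged outer products); the array data here are synthetic:

```python
# Form a normalized sample covariance matrix (SCM) from L complex pressure
# snapshots on an N-element vertical array, then flatten it as ML input.
import numpy as np

rng = np.random.default_rng(0)
N, L = 16, 50                                                # elements, snapshots
p = rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L))   # stand-in data

p = p / np.linalg.norm(p, axis=0, keepdims=True)   # unit-norm each snapshot
C = (p @ p.conj().T) / L                           # sample covariance matrix

# Real/imaginary parts of the (Hermitian) SCM as a flat feature vector.
x = np.concatenate([C.real.ravel(), C.imag.ravel()])
print(C.shape, x.shape)                            # (16, 16) (512,)
```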
Machine learning models in breast cancer survival prediction.
Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin
2016-01-01
Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis of breast cancer, survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for its early diagnosis. The proposed model is a combination of rules and different machine learning techniques. Machine learning models can help physicians reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of breast cancer survival. We use a dataset with eight attributes comprising the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with 10-fold cross-validation were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Of the 900 patients, 803 were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. In contrast, the 1NN machine learning technique performed poorly (accuracy 91%, sensitivity 91% and area under the ROC curve 78%). This study demonstrates that the Trees Random Forest (TRF) model, a rule-based classification model, was the best model with the highest level of accuracy. Therefore, this model is recommended as a useful tool for breast cancer survival prediction as well as medical decision making.
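A small sketch of the evaluation protocol, with synthetic data standing in for the patient records and only a subset of the study's classifiers:

```python
# Compare several classifiers with 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=900, n_features=8, weights=[0.89, 0.11],
                           random_state=0)   # imbalance mimics alive/dead ratio
models = {"RF": RandomForestClassifier(random_state=0),
          "NB": GaussianNB(),
          "1NN": KNeighborsClassifier(n_neighbors=1),
          "SVM": SVC()}
for name, m in models.items():
    scores = cross_val_score(m, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name:4s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```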
78 FR 14564 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-06
... due to the fact that the Family Smoking Prevention and Tobacco Control Act, which gives the Food and... actively enforcing the vending machine requirements of the Family Smoking Prevention and Tobacco Control Act''). This new option is included because federal law bans vending machines in youth accessible...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-13
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-792] Certain Static Random Access Memories and Products Containing Same; Commission Determination Affirming a Final Initial Determination..., and the sale within the United States after importation of certain static random access memories and...
2013-11-01
machine learning techniques used in BBAC to make predictions about the intent of actors establishing TCP connections and issuing HTTP requests. We discuss pragmatic challenges and solutions we encountered in implementing and evaluating BBAC, discussing (a) the general concepts underlying BBAC, (b) challenges we have encountered in identifying suitable datasets, (c) mitigation strategies to cope...and describe current plans for transitioning BBAC capabilities into the Department of Defense together with lessons learned for the machine learning
Lifelong personal health data and application software via virtual machines in the cloud.
Van Gorp, Pieter; Comuzzi, Marco
2014-01-01
Personal Health Records (PHRs) should remain the lifelong property of patients, who should be able to show them conveniently and securely to selected caregivers and institutions. In this paper, we present MyPHRMachines, a cloud-based PHR system taking a radically new architectural approach to health record portability. In MyPHRMachines, health-related data and the application software to view and/or analyze them are deployed separately in the PHR system. After uploading their medical data to MyPHRMachines, patients can access them again from remote virtual machines that contain the right software to visualize and analyze them without any need for conversion. Patients can share their remote virtual machine session with selected caregivers, who will need only a Web browser to access the pre-loaded fragments of their lifelong PHR. We discuss a prototype of MyPHRMachines applied to two use cases, i.e., radiology image sharing and personalized medicine.
Park, Sohyun; Sappenfield, William M; Huang, Youjie; Sherry, Bettylou; Bensyl, Diana M
2010-10-01
Childhood obesity is a major public health concern and is associated with substantial morbidities. Access to less-healthy foods might facilitate dietary behaviors that contribute to obesity. However, less-healthy foods are usually available in school vending machines. This cross-sectional study examined the prevalence of students buying snacks or beverages from school vending machines instead of buying school lunch and predictors of this behavior. Analyses were based on the 2003 Florida Youth Physical Activity and Nutrition Survey using a representative sample of 4,322 students in grades six through eight in 73 Florida public middle schools. Analyses included χ2 tests and logistic regression. The outcome measure was buying a snack or beverage from vending machines 2 or more days during the previous 5 days instead of buying lunch. The survey response rate was 72%. Eighteen percent of respondents reported purchasing a snack or beverage from a vending machine 2 or more days during the previous 5 school days instead of buying school lunch. Although healthier options were available, the most commonly purchased vending machine items were chips, pretzels/crackers, candy bars, soda, and sport drinks. More students chose snacks or beverages instead of lunch in schools where beverage vending machines were also available than did students in schools where beverage vending machines were unavailable: 19% and 7%, respectively (P≤0.05). The strongest risk factor for buying snacks or beverages from vending machines instead of buying school lunch was availability of beverage vending machines in schools (adjusted odds ratio=3.5; 95% confidence interval, 2.2 to 5.7). Other statistically significant risk factors were smoking, non-Hispanic black race/ethnicity, Hispanic ethnicity, and older age. Although healthier choices were available, the most common choices were the less-healthy foods. Schools should consider developing policies to reduce the availability of less-healthy choices in vending machines and to reduce access to beverage vending machines. Copyright © 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can serve as a pre-screening tool for therapeutic success. Different drug-drug interactions can be examined via a drug synergy score, which calls for efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet this requirement. However, these techniques individually do not provide significant accuracy for the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques were implemented on the drug synergy data. Based on the accuracy of each model, the four techniques with the highest accuracy were selected to develop an ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning (GFS.GCCL), the Adaptive-Network-Based Fuzzy Inference System (ANFIS) and the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS). Ensembling is achieved by a biased weighted aggregation of the predictions of the selected models (i.e., assigning greater weight to models with higher prediction scores). The proposed and existing machine learning techniques were evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
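The biased weighted aggregation can be sketched in a few lines; model names and scores below are placeholders:

```python
# Weight each base model's prediction by its validation accuracy, so that
# better models contribute more to the ensemble output.
import numpy as np

def weighted_ensemble(preds, scores):
    """preds: (n_models, n_samples) predictions; scores: per-model accuracy."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    return w @ np.asarray(preds)          # weighted average prediction

preds = np.array([[0.9, 0.4, 0.7],        # e.g. random forest
                  [0.8, 0.5, 0.6],        # e.g. ANFIS
                  [0.6, 0.3, 0.9]])       # e.g. DENFIS
scores = [0.92, 0.88, 0.75]               # validation accuracy of each model
print(weighted_ensemble(preds, scores))
```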
Improved Writing-Conductor Designs For Magnetic Memory
NASA Technical Reports Server (NTRS)
Wu, Jiin-Chuan; Stadler, Henry L.; Katti, Romney R.
1994-01-01
Writing currents reduced to practical levels. Improved conceptual designs for writing conductors in micromagnet/Hall-effect random-access integrated-circuit memory reduce the electrical current needed to magnetize the micromagnet in each memory cell. The basic concept of the micromagnet/Hall-effect random-access memory was presented in "Magnetic Analog Random-Access Memory" (NPO-17999).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-02
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-792] Certain Static Random Access Memories and Products Containing Same; Commission Determination To Review in Part a Final Initial... States after importation of certain static random access memories and products containing the same by...
Virtual collaborative environments: programming and controlling robotic devices remotely
NASA Astrophysics Data System (ADS)
Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.
1995-12-01
This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and an actual system have been developed and tested, based on the proposed National Information Infrastructure (NII), or Information Highway, to facilitate programming and control of intelligent programmable machines (like robots, machine tools, etc.). Using appropriate geometric models, integrated sensors, video systems, and computing hardware, computer-controlled resources owned and operated by different entities (in a geographic as well as legal sense) can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited to their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototype parts during product design validation. An existing prototype architecture and system have been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington, D.C., Washington State, and Southern California.
ODISEES: A New Paradigm in Data Access
NASA Astrophysics Data System (ADS)
Huffer, E.; Little, M. M.; Kusterer, J.
2013-12-01
As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
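As a hedged illustration of the triple-store idea (names, URIs and properties below are invented, and rdflib is assumed rather than the ASDC's actual stack):

```python
# Store parameter-to-variable mappings as RDF triples and query them with
# SPARQL, in the spirit of the Semantic Metadata Repository described above.
from rdflib import Graph, Namespace, Literal

ES = Namespace("http://example.org/es-ontology#")   # hypothetical namespace
g = Graph()
g.add((ES.var_toa_sw_flux, ES.measures, ES.TopOfAtmosphereShortwaveFlux))
g.add((ES.var_toa_sw_flux, ES.occursIn, ES.CERES_EBAF_dataset))
g.add((ES.var_toa_sw_flux, ES.units, Literal("W m-2")))

q = """SELECT ?var ?ds WHERE {
         ?var <http://example.org/es-ontology#measures>
              <http://example.org/es-ontology#TopOfAtmosphereShortwaveFlux> .
         ?var <http://example.org/es-ontology#occursIn> ?ds .
       }"""
for var, ds in g.query(q):
    print(var, "occurs in", ds)
```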
NASA Astrophysics Data System (ADS)
Walker, J. I.; Blodgett, D. L.; Suftin, I.; Kunicki, T.
2013-12-01
High-resolution data for use in environmental modeling are increasingly becoming available at broad spatial and temporal scales. Downscaled climate projections, remotely sensed landscape parameters, and land-use/land-cover projections are examples of datasets that may exceed an individual investigation's data management and analysis capacity. To allow projects on limited budgets to work with many of these datasets, the burden of working with them must be reduced. The approach being pursued at the U.S. Geological Survey Center for Integrated Data Analytics uses standard self-describing web services that allow machine-to-machine data access and manipulation. These techniques have been implemented and deployed in production-level server-based Web Processing Services that can be accessed from a web application or scripted workflow. Data publication techniques that allow machine interpretation of large collections of data have also been implemented for numerous datasets at U.S. Geological Survey data centers as well as partner agencies and academic institutions. Discovery of data services is accomplished using a method in which a machine-generated metadata record holds content, derived from the data's source web service, that is intended for human interpretation as well as machine interpretation. A distributed search application has been developed that demonstrates the utility of a decentralized search of data-owner metadata catalogs from multiple agencies. The integrated but decentralized system of metadata, data, and server-based processing capabilities will be presented. The design, utility, and value of these solutions will be illustrated with applied science examples and success stories. Datasets such as the EPA's Integrated Climate and Land Use Scenarios, USGS/NASA MODIS-derived land cover attributes, and downscaled climate projections from several sources are examples of data this system includes. These and other datasets have been published as standard, self-describing web services that provide the ability to inspect and subset the data. This presentation will demonstrate this file-to-web-service concept and how it can be used from script-based workflows or web applications.
Nonvolatile Memory Materials for Neuromorphic Intelligent Machines.
Jeong, Doo Seok; Hwang, Cheol Seong
2018-04-18
Recent progress in deep learning extends the capability of artificial intelligence to various practical tasks, making the deep neural network (DNN) an extremely versatile hypothesis. While such DNN is virtually built on contemporary data centers of the von Neumann architecture, physical (in part) DNN of non-von Neumann architecture, also known as neuromorphic computing, can remarkably improve learning and inference efficiency. Particularly, resistance-based nonvolatile random access memory (NVRAM) highlights its handy and efficient application to the multiply-accumulate (MAC) operation in an analog manner. Here, an overview is given of the available types of resistance-based NVRAMs and their technological maturity from the material- and device-points of view. Examples within the strategy are subsequently addressed in comparison with their benchmarks (virtual DNN in deep learning). A spiking neural network (SNN) is another type of neural network that is more biologically plausible than the DNN. The successful incorporation of resistance-based NVRAM in SNN-based neuromorphic computing offers an efficient solution to the MAC operation and spike timing-based learning in nature. This strategy is exemplified from a material perspective. Intelligent machines are categorized according to their architecture and learning type. Also, the functionality and usefulness of NVRAM-based neuromorphic computing are addressed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
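The analog MAC idea can be sketched numerically; this toy assumes an ideal crossbar with linear cells:

```python
# In a resistive NVRAM crossbar, Ohm's law plus Kirchhoff's current law
# perform a multiply-accumulate (MAC) in one step: column currents I = G^T V.
import numpy as np

G = np.array([[1.0, 0.2],      # conductances (siemens) programmed into cells
              [0.5, 0.9],
              [0.1, 0.4]])     # 3 input rows x 2 output columns
V = np.array([0.3, 0.7, 0.1])  # input voltages applied to the rows

I = G.T @ V                    # currents summed on each column wire
print(I)                       # the MAC result, read out by column ADCs
```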
Bitter or not? BitterPredict, a tool for predicting taste from chemical structure.
Dagan-Wiener, Ayana; Nissim, Ido; Ben Abu, Natalie; Borgonovo, Gigliola; Bassoli, Angela; Niv, Masha Y
2017-09-21
Bitter taste is an innately aversive taste modality that is considered to protect animals from consuming toxic compounds. Yet, bitterness is not always noxious, and some bitter compounds have beneficial effects on health. Hundreds of bitter compounds have been reported (and are accessible via BitterDB, http://bitterdb.agri.huji.ac.il/dbbitter.php ), but numerous additional bitter molecules are still unknown. The dramatic chemical diversity of bitterants makes bitterness prediction a difficult task. Here we present a machine learning classifier, BitterPredict, which predicts whether a compound is bitter or not, based on its chemical structure. BitterDB was used as the positive set, and non-bitter molecules were gathered from the literature to create the negative set. Adaptive Boosting (AdaBoost), a decision-tree-based machine-learning algorithm, was applied to molecules represented using physicochemical and ADME/Tox descriptors. BitterPredict correctly classifies over 80% of the compounds in the hold-out test set, and 70-90% of the compounds in three independent external sets and in sensory test validation, providing a quick and reliable tool for classifying large sets of compounds into bitter and non-bitter groups. BitterPredict suggests that about 40% of random molecules, and large portions of clinical and experimental drugs (66%) and of natural products (77%), are bitter.
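A minimal sketch of the classification setup, assuming scikit-learn and synthetic descriptors in place of the physicochemical/ADME-Tox ones:

```python
# AdaBoost over decision trees (scikit-learn defaults to decision stumps).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```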
ERIC Educational Resources Information Center
Porter, Dennis
One recommendation of the 1989 California Strategic Plan for Adult Education is the use of EduCard. EduCard, the Adult Education Access Card, is a means of giving learners access to information about educational opportunities and providing administrators with machine-readable information on learners' prior education and training. Three models are:…
Machine-z: Rapid Machine-Learned Redshift Indicator for Swift Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Ukwatta, T. N.; Wozniak, P. R.; Gehrels, N.
2016-01-01
Studies of high-redshift gamma-ray bursts (GRBs) provide important information about the early Universe such as the rates of stellar collapsars and mergers, the metallicity content, constraints on the re-ionization period, and probes of the Hubble expansion. Rapid selection of high-z candidates from GRB samples reported in real time by dedicated space missions such as Swift is the key to identifying the most distant bursts before the optical afterglow becomes too dim to warrant a good spectrum. Here, we introduce 'machine-z', a redshift prediction algorithm and a 'high-z' classifier for Swift GRBs based on machine learning. Our method relies exclusively on canonical data commonly available within the first few hours after the GRB trigger. Using a sample of 284 bursts with measured redshifts, we trained a randomized ensemble of decision trees (random forest) to perform both regression and classification. Cross-validated performance studies show that the correlation coefficient between machine-z predictions and the true redshift is nearly 0.6. At the same time, our high-z classifier can achieve 80 per cent recall of true high-redshift bursts, while incurring a false positive rate of 20 per cent. With 40 per cent false positive rate the classifier can achieve approximately 100 per cent recall. The most reliable selection of high-redshift GRBs is obtained by combining predictions from both the high-z classifier and the machine-z regressor.
Random access with adaptive packet aggregation in LTE/LTE-A.
Zhou, Kaijie; Nikaein, Navid
While random access presents a promising solution for efficient uplink channel access, the preamble collision rate can significantly increase when a massive number of devices simultaneously access the channel. To address this issue and improve the reliability of random access, an adaptive packet aggregation method is proposed. With the proposed method, a device does not trigger a random access for every single packet. Instead, it starts a random access when the number of aggregated packets reaches a given threshold. This method reduces the packet collision rate at the expense of extra latency, which is used to accumulate multiple packets into a single transmission unit. Therefore, the tradeoff between packet loss rate and channel access latency has to be carefully selected. We use a semi-Markov model to derive the packet loss rate and channel access latency as functions of the packet aggregation number. Hence, the optimal number of aggregated packets can be found, which keeps the loss rate below the desired value while minimizing the access latency. We also apply the idea of packet aggregation to power saving, where a device aggregates as many packets as possible until the latency constraint is reached. Simulations are carried out to evaluate our methods. We find that the packet loss rate and/or power consumption are significantly reduced with the proposed method.
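A back-of-the-envelope Monte Carlo sketch of the tradeoff (not the paper's semi-Markov analysis; all parameters are illustrative):

```python
# With aggregation threshold K, a device attempts random access once per K
# packets, trading extra aggregation latency for fewer preamble collisions.
import random

def simulate(n_dev=200, n_preambles=54, pkt_prob=0.05, K=1, slots=5000):
    random.seed(0)
    backlog = [0] * n_dev
    attempts = collisions = 0
    for _ in range(slots):
        picks = {}                              # preamble -> attempting devices
        for d in range(n_dev):
            if random.random() < pkt_prob:      # new packet arrives
                backlog[d] += 1
            if backlog[d] >= K:                 # threshold reached: attempt access
                picks.setdefault(random.randrange(n_preambles), []).append(d)
        for devs in picks.values():
            attempts += len(devs)
            if len(devs) == 1:
                backlog[devs[0]] = 0            # success: aggregated packets sent
            else:
                collisions += len(devs)         # collision: all retry next slot
    return collisions / attempts if attempts else 0.0

for K in (1, 2, 4, 8):
    print(f"K={K}: preamble collision rate ~ {simulate(K=K):.3f}")
```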
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability
NASA Astrophysics Data System (ADS)
Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.
2015-09-01
The data access and interoperability module connects observation proposals, data, virtual machines and software. According to the unique identifier of the PI (principal investigator), an email address or an internal ID, data can be collected by the PI's proposals, or via the search interfaces, e.g. cone search. Files associated with the search results can be easily transferred to cloud storage, including the storage attached to virtual machines, or several commercial platforms like Dropbox. Benefiting from the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various kinds of VO software. A later endeavor will try to integrate more data and connect archives and some other astronomical resources.
Support vector machine in machine condition monitoring and fault diagnosis
NASA Astrophysics Data System (ADS)
Widodo, Achmad; Yang, Bo-Suk
2007-08-01
Recently, the issue of machine condition monitoring and fault diagnosis as a part of the maintenance system has become global due to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using the support vector machine (SVM). It attempts to summarize and review the recent research and developments of SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, case-based reasoning, random forests, etc. However, the use of SVM for machine condition monitoring and fault diagnosis is still rare. SVM has excellent generalization performance, so it can produce high classification accuracy for machine condition monitoring and diagnosis. Up to 2006, the use of SVM in machine condition monitoring and fault diagnosis tended to develop towards expertise orientation and problem-oriented domains. Finally, the ability to continually change and obtain novel ideas for machine condition monitoring and fault diagnosis using SVM remains future work.
Creating Web-Based Scientific Applications Using Java Servlets
NASA Technical Reports Server (NTRS)
Palmer, Grant; Arnold, James O. (Technical Monitor)
2001-01-01
There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently. The application can be accessed from a remote location. The application becomes essentially platform-independent because it can be run from any machine that has internet access and can run a web browser. Maintenance and upgrades to the application are simplified, since only one copy of the application exists in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language that is well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines. The servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol will be discussed in depth. Among the topics the paper will discuss are how to write an HTTP servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process will be demonstrated by building a web-based application to compute stagnation point heat transfer.
A machine learning approach for predicting methionine oxidation sites.
Aledo, Juan C; Cantón, Francisco R; Veredas, Francisco J
2017-09-29
The oxidation of protein-bound methionine to form methionine sulfoxide has traditionally been regarded as oxidative damage. However, recent evidence supports the view of this reversible reaction as a regulatory post-translational modification. The perception that methionine sulfoxidation may provide a mechanism for the redox regulation of a wide range of cellular processes has stimulated some proteomic studies. However, these experimental approaches are expensive and time-consuming. Therefore, computational methods designed to predict methionine oxidation sites are an attractive alternative. As a first approach to this matter, we have developed models based on random forests, support vector machines and neural networks, aimed at the accurate prediction of sites of methionine oxidation. Starting from published proteomic data regarding oxidized methionines, we created a hand-curated dataset formed by 113 unique polypeptides of known structure, containing 975 methionyl residues, 122 of which were oxidation-prone (positive dataset) and 853 oxidation-resistant (negative dataset). We used a machine learning approach to generate predictive models from these datasets. Among the multiple features used in the classification task, some contributed substantially to the performance of the predictive models. Thus, (i) the solvent-accessible area of the methionine residue, (ii) the number of residues between the analyzed methionine and the next methionine found towards the N-terminus and (iii) the spatial distance between the sulfur atom of the analyzed methionine and the closest aromatic residue were among the most relevant features. Compared to the other classifiers we evaluated, random forests provided the best performance, with accuracy, sensitivity and specificity of 0.7468±0.0567, 0.6817±0.0982 and 0.7557±0.0721, respectively (mean ± standard deviation). We present the first predictive models aimed at computationally detecting methionine sites that may become oxidized in vivo in response to oxidative signals. These models provide insights into the structural context in which a methionine residue becomes either oxidation-resistant or oxidation-prone. Furthermore, these models should be useful in prioritizing methionyl residues for further studies to determine their potential as regulatory post-translational modification sites.
Machine learning research 1989-90
NASA Technical Reports Server (NTRS)
Porter, Bruce W.; Souther, Arthur
1990-01-01
Multifunctional knowledge bases offer a significant advance in artificial intelligence because they can support numerous expert tasks within a domain. As a result they amortize the costs of building a knowledge base over multiple expert systems and they reduce the brittleness of each system. Due to the inevitable size and complexity of multifunctional knowledge bases, their construction and maintenance require knowledge engineering and acquisition tools that can automatically identify interactions between new and existing knowledge. Furthermore, their use requires software for accessing those portions of the knowledge base that coherently answer questions. Considerable progress was made in developing software for building and accessing multifunctional knowledge bases. A language was developed for representing knowledge, along with software tools for editing and displaying knowledge, a machine learning program for integrating new information into existing knowledge, and a question answering system for accessing the knowledge base.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-13
... DEPARTMENT OF COMMERCE International Trade Administration [C-580-851] Dynamic Random Access Memory... administrative review of the countervailing duty order on dynamic random access memory semiconductors from the... following events have occurred since the publication of the preliminary results of this review. See Dynamic...
ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog
NASA Technical Reports Server (NTRS)
Gray, F. P., Jr. (Editor)
1979-01-01
A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.
32 CFR 655.10 - Use of radiation sources by non-Army entities on Army land (AR 385-11).
Code of Federal Regulations, 2010 CFR
2010-07-01
... radioisotope; or (5) A machine-produced ionizing-radiation source capable of producing an area, accessible to... NARM and machine-produced ionizing radiation sources, the applicant has an appropriate State... 32 National Defense 4 2010-07-01 2010-07-01 true Use of radiation sources by non-Army entities on...
Machine-Aided Indexing in Practice: An Encounter with Automatic Indexing of the Third Kind.
ERIC Educational Resources Information Center
Klingbiel, Paul H.
This three-part report includes a brief history of the Defense Documentation Center (DDC) with a description of the collections and their accessibility; categorization of automatic indexing into three kinds with a brief description of the DDC system of machine-aided indexing; and an indication of some operational experiences with the system.…
Real English: A Translator to Enable Natural Language Man-Machine Conversation.
ERIC Educational Resources Information Center
Gautin, Harvey
This dissertation presents a pragmatic interpreter/translator called Real English to serve as a natural language man-machine communication interface in a multi-mode on-line information retrieval system. This multi-mode feature affords the user a library-like searching tool by giving him access to a dictionary, lexicon, thesaurus, synonym table,…
Web-Based Machine Translation as a Tool for Promoting Electronic Literacy and Language Awareness
ERIC Educational Resources Information Center
Williams, Lawrence
2006-01-01
This article addresses a pervasive problem of concern to teachers of many foreign languages: the use of Web-Based Machine Translation (WBMT) by students who do not understand the complexities of this relatively new tool. Although networked technologies have greatly increased access to many language and communication tools, WBMT is still…
78 FR 19979 - Establishment of the Presidential Commission on Election Administration
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
..., recruitment, and number of poll workers; (iii) voting accessibility for uniformed and overseas voters; (iv) the efficient management of voter rolls and poll books; (v) voting machine capacity and technology; (vi) ballot simplicity and voter education; (vii) voting accessibility for individuals with...
Qcorp: an annotated classification corpus of Chinese health questions.
Guo, Haihong; Na, Xu; Li, Jiao
2018-03-22
Health question-answering (QA) systems have become a typical application scenario of Artificial Intelligence (AI). An annotated question corpus is a prerequisite for training machines to understand the health information needs of users. Thus, we aimed to develop an annotated classification corpus of Chinese health questions (Qcorp) and make it openly accessible. We developed a two-layered classification schema and corresponding annotation rules on the basis of our previous work. Using the schema, we annotated 5000 questions that were randomly selected from 5 Chinese health websites within 6 broad sections. Eight annotators participated in the annotation task, and the inter-annotator agreement was evaluated to ensure the corpus quality. Furthermore, the distribution and relationships of the annotated tags were measured by descriptive statistics and a social network map. The questions were annotated using 7101 tags covering 29 topic categories in the two-layered schema. In our released corpus, the distribution of questions over the top-layer categories was: treatment, 64.22%; diagnosis, 37.14%; epidemiology, 14.96%; healthy lifestyle, 10.38%; and health provider choice, 4.54%. Both the annotated health questions and the annotation schema are openly accessible on the Qcorp website. Users can download the annotated Chinese questions in CSV, XML, and HTML formats. We developed a Chinese health question corpus including 5000 manually annotated questions. It is openly accessible and will contribute to intelligent health QA system development.
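Inter-annotator agreement can be checked, for example, with Cohen's kappa; the abstract does not name the measure used, so this is only an illustrative sketch:

```python
# Cohen's kappa for two annotators labeling the same questions.
from sklearn.metrics import cohen_kappa_score

ann1 = ["treatment", "diagnosis", "treatment", "epidemiology", "treatment"]
ann2 = ["treatment", "diagnosis", "diagnosis", "epidemiology", "treatment"]
print("kappa =", round(cohen_kappa_score(ann1, ann2), 3))
```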
Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D
2012-01-01
This study explores sugar-sweetened beverage (SSB) availability in US secondary school competitive venues during the first 3 years following the school wellness policy requirement (2007-2009). Furthermore, analyses examine associations with school policy and SSB availability. Analyses use questionnaire data from 757 middle and 762 high schools in the nationally representative Youth, Education, and Society study to examine soda and non-soda SSB availability associations with school policy including (1) beverage bottling contracts and related incentives, (2) individuals/organizations responsible for decisions regarding beverages available in vending machines, and (3) school wellness policies and nutrition guidelines. Non-soda SSBs made up the majority of SSBs in both middle and high schools. Soda was especially likely to be found in vending machines; non-soda SSBs were widely available across competitive venues. Access to soda decreased significantly over time; however, non-soda SSB access did not show a similar decrease. School policy allowing beverage supplier contractual involvement (bottling contract incentives and beverage supplier "say" in vending machine beverage choices) was related to increased SSB access. However, the existence of developed nutritional guidelines was associated with lower SSB availability. Students had high access to SSBs across competitive school venues, with non-soda SSBs making up the majority of SSB beverage options. Efforts to reduce access to SSBs in US secondary schools should include a focus on reducing both soda and non-soda SSBs, reducing beverage supplier involvement in school beverage choices, and encouraging the development of targeted nutritional guidelines for all competitive venues. © 2011, American School Health Association.
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
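One of the conventional baselines named above can be sketched in a few lines; the toy corpus is invented:

```python
# Multinomial naive Bayes spam classifier over bag-of-words counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win money now", "cheap meds win big", "meeting at noon",
         "lunch tomorrow?", "claim your free prize", "project update attached"]
labels = [1, 1, 0, 0, 1, 0]                      # 1 = spam, 0 = ham

vec = CountVectorizer().fit(texts)
clf = MultinomialNB().fit(vec.transform(texts), labels)
print(clf.predict(vec.transform(["free money prize", "see you at the meeting"])))
```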
Shamshirsaz, Alireza Abdollah; Kamgar, Mohammad; Bekheirnia, Mir Reza; Ayazi, Farzam; Hashemi, Seyed Reza; Bouzari, Navid; Habibzadeh, Mohammad Reza; Pourzahedgilani, Nima; Broumand, Varshasb; Shamshirsaz, Amirhooshang Abdollah; Moradi, Maziyar; Borghei, Mehrdad; Haghighi, Niloofar Nobakht; Broumand, Behrooz
2004-01-01
Background Hepatitis C virus (HCV) infection is a significant problem among patients undergoing maintenance hemodialysis (HD). We conducted a prospective multi-center study to evaluate the effect of dialysis machine separation on the spread of HCV infection. Methods Twelve randomly selected dialysis centers in Tehran, Iran were randomly divided into two groups: those using dedicated machines (D) for HCV-infected individuals and those using non-dedicated HD machines (ND). 593 HD cases, including 51 HCV-positive (RT-PCR) cases and 542 HCV-negative patients, were enrolled in this study. The prevalence of HCV infection in the D group was 10.1% (range: 4.6%–13.2%) and 7.1% (range: 4.2%–16.8%) in the ND group. During the study, 5 new HCV-positive cases and 169 new HCV-negative cases were added. In the D group, PCR-positive patients were dialyzed on dedicated machines. In the ND group all patients shared the same machines. Results In the first follow-up period, the incidence of HCV infection was 1.6% and 4.7% in the D and ND groups, respectively (p = 0.05). In the second follow-up period, the incidence of HCV infection was 1.3% in the D group and 5.7% in the ND group (p < 0.05). Conclusions In this study the incidence of HCV in HD patients decreased with the use of dedicated HD machines for HCV-infected patients. Additional studies may help to clarify the role of machine dedication in conjunction with the application of universal precautions in reducing HCV transmission. PMID:15469615
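The incidence comparisons reported above can be reproduced in form with a two-proportion z-test. This is a sketch only: the counts below are illustrative stand-ins, not the study's raw follow-up numbers, and the abstract does not state which exact test was used.

```python
from statsmodels.stats.proportion import proportions_ztest

new_cases = [2, 9]     # hypothetical new HCV-positive cases in D and ND groups
at_risk = [150, 158]   # hypothetical susceptible patients per group
stat, pvalue = proportions_ztest(new_cases, at_risk)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")
```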
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
... Random Access Memory Semiconductors and Products Containing Same, Including Memory Modules; Notice of a... importation of certain dynamic random access memory semiconductors and products containing same, including memory modules, by reason of infringement of certain claims of U.S. Patent Nos. 5,480,051; 5,422,309; 5...
NASA Technical Reports Server (NTRS)
Byman, J. E.
1985-01-01
A brief history of aircraft production techniques is given. A flexible machining cell is then described. It is a computer-controlled system capable of performing 4-axis machining, part cleaning, dimensional inspection, and materials handling functions in an unmanned environment. The cell was designed to: allow processing of similar and dissimilar parts in random order without disrupting production; allow serial (one-shipset-at-a-time) manufacturing; reduce work-in-process inventory; maximize machine utilization through remote set-up; and maximize throughput and minimize labor.
A General Method for Predicting Amino Acid Residues Experiencing Hydrogen Exchange
Wang, Boshen; Perez-Rathke, Alan; Li, Renhao; Liang, Jie
2018-01-01
Information on protein hydrogen exchange can help delineate key regions involved in protein-protein interactions and provides important insight towards determining the functional roles of genetic variants and their possible mechanisms in disease processes. Previous studies have shown that the degree of hydrogen exchange is affected by hydrogen bond formation, solvent accessibility, proximity to other residues, and experimental conditions. However, a general predictive method for identifying residues capable of hydrogen exchange that is transferable to a broad set of proteins has been lacking. We have developed a machine learning method based on random forest that can predict whether a residue experiences hydrogen exchange. Using data from the Start2Fold database, which contains information on 13,306 residues (3,790 that experience hydrogen exchange and 9,516 that do not), our method achieves good performance. Specifically, we achieve an overall out-of-bag (OOB) error, an unbiased estimate of the test set error, of 20.3 percent. Using a randomly selected test data set consisting of 500 residues experiencing hydrogen exchange and 500 which do not, our method achieves an accuracy of 0.79, a recall of 0.74, a precision of 0.82, and an F1 score of 0.78.
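A minimal sketch of the evaluation pipeline described above: a random forest with an out-of-bag (OOB) error estimate plus precision/recall/F1, using scikit-learn. The per-residue features and labels are simulated placeholders, not Start2Fold data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # stand-in per-residue features
y = rng.integers(0, 2, size=1000)  # stand-in exchange labels

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"OOB error: {1 - rf.oob_score_:.3f}")  # unbiased test-error estimate

# Precision/recall/F1 from OOB predictions (the paper used a balanced
# 500/500 held-out set instead)
pred = rf.oob_decision_function_.argmax(axis=1)
p, r, f1, _ = precision_recall_fscore_support(y, pred, average="binary")
print(f"precision {p:.2f}, recall {r:.2f}, F1 {f1:.2f}")
```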
A video event trigger for high frame rate, high resolution video technology
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1991-12-01
When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
Prediction of Baseflow Index of Catchments using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Yadav, B.; Hatfield, K.
2017-12-01
We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using surrogate catchment-scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elastic net, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify the dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources, including digital elevation models, soil, land use, and climate data, and other publicly available ancillary and geospatial data. 80% of the catchments were used to train the ML algorithms, and the remaining 20% were used as an independent test set to measure the generalization performance of the fitted models. k-fold cross-validation with exhaustive grid search was used to tune the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking, we generated revised sparse models of BFI prediction based on only six catchment attributes. These key predictive variables, selected after careful evaluation of the bias-variance tradeoff, include average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation. The most promising algorithms, exceeding an accuracy score (r-square) of 0.7 on test data, include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify the extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
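A compact sketch of the protocol described above: an 80/20 split with grid-searched, cross-validated extremely randomized trees. The catchment attributes and BFI values are simulated stand-ins, and the parameter grid is illustrative.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 6))    # six attributes: elevation, slope, sand, ...
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=800)  # stand-in BFI signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
grid = GridSearchCV(ExtraTreesRegressor(random_state=1),
                    param_grid={"n_estimators": [100, 300],
                                "max_depth": [None, 10]},
                    cv=5, scoring="r2")           # k-fold CV over the grid
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("independent test r^2:", round(grid.score(X_te, y_te), 3))
```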
Knowledge-based load leveling and task allocation in human-machine systems
NASA Technical Reports Server (NTRS)
Chignell, M. H.; Hancock, P. A.
1986-01-01
Conventional human-machine systems use task allocation policies which are based on the premise of a flexible human operator. This individual is most often required to compensate for and augment the capabilities of the machine. The development of artificial intelligence and improved technologies have allowed for a wider range of task allocation strategies. In response to these issues a Knowledge Based Adaptive Mechanism (KBAM) is proposed for assigning tasks to human and machine in real time, using a load leveling policy. This mechanism employs an online workload assessment and compensation system which is responsive to variations in load through an intelligent interface. This interface consists of a loading strategy reasoner which has access to information about the current status of the human-machine system as well as a database of admissible human/machine loading strategies. Difficulties standing in the way of successful implementation of the load leveling strategy are examined.
Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G; Ultsch, Alfred
2017-08-16
The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models.
The Photon Shell Game and the Quantum von Neumann Architecture with Superconducting Circuits
NASA Astrophysics Data System (ADS)
Mariantoni, Matteo
2012-02-01
Superconducting quantum circuits have made significant advances over the past decade, allowing more complex and integrated circuits that perform with good fidelity. We have recently implemented a machine comprising seven quantum channels, with three superconducting resonators, two phase qubits, and two zeroing registers. I will explain the design and operation of this machine, first showing how a single microwave photon |1> can be prepared in one resonator and coherently transferred between the three resonators. I will also show how more exotic states such as double photon states |2> and superposition states |0> + |1> can be shuffled among the resonators as well [1]. I will then demonstrate how this machine can be used as the quantum-mechanical analog of the von Neumann computer architecture, which for a classical computer comprises a central processing unit and a memory holding both instructions and data. The quantum version comprises a quantum central processing unit (quCPU) that exchanges data with a quantum random-access memory (quRAM) integrated on one chip, with instructions stored on a classical computer. I will also present a proof-of-concept demonstration of a code that involves all seven quantum elements: (1) preparing an entangled state in the quCPU, (2) writing it to the quRAM, (3) preparing a second state in the quCPU, (4) zeroing it, and (5) reading out the first state stored in the quRAM [2]. Finally, I will demonstrate that the quantum von Neumann machine provides one unit cell of a two-dimensional qubit-resonator array that can be used for surface code quantum computing. This will allow the realization of a scalable, fault-tolerant quantum processor with the most forgiving error rates to date. [1] M. Mariantoni et al., Nature Physics 7, 287-293 (2011). [2] M. Mariantoni et al., Science 334, 61-65 (2011).
Exploring prediction uncertainty of spatial data in geostatistical and machine learning Approaches
NASA Astrophysics Data System (ADS)
Klump, J. F.; Fouedjio, F.
2017-12-01
Geostatistical methods such as kriging with external drift as well as machine learning techniques such as quantile regression forest have been intensively used for modelling spatial data. In addition to providing predictions for target variables, both approaches are able to deliver a quantification of the uncertainty associated with the prediction at a target location. Geostatistical approaches are, by essence, adequate for providing such prediction uncertainties and their behaviour is well understood. However, they often require significant data pre-processing and rely on assumptions that are rarely met in practice. Machine learning algorithms such as random forest regression, on the other hand, require less data pre-processing and are non-parametric. This makes the application of machine learning algorithms to geostatistical problems an attractive proposition. The objective of this study is to compare kriging with external drift and quantile regression forest with respect to their ability to deliver reliable prediction uncertainties of spatial data. In our comparison we use both simulated and real world datasets. Apart from classical performance indicators, comparisons make use of accuracy plots, probability interval width plots, and the visual examinations of the uncertainty maps provided by the two approaches. By comparing random forest regression to kriging we found that both methods produced comparable maps of estimated values for our variables of interest. However, the measure of uncertainty provided by random forest seems to be quite different to the measure of uncertainty provided by kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. These preliminary results raise questions about assessing the risks associated with decisions based on the predictions from geostatistical and machine learning algorithms in a spatial context, e.g. mineral exploration.
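One simple way to obtain the random-forest prediction intervals discussed above is to pool per-tree predictions and take empirical quantiles, in the spirit of quantile regression forests (a true quantile regression forest weights training responses by leaf membership; this is a coarser approximation). The spatial data below are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(500, 2))              # spatial covariates (stand-in)
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=500)

rf = RandomForestRegressor(n_estimators=300, random_state=2).fit(X, y)
X_new = rng.uniform(-3, 3, size=(5, 2))
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])  # (trees, points)
lo, hi = np.percentile(per_tree, [5, 95], axis=0)  # empirical 90% interval
for point, (a, b) in enumerate(zip(lo, hi)):
    print(f"point {point}: [{a:.2f}, {b:.2f}]")
```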
A Virtual Astronomical Research Machine in No Time (VARMiNT)
NASA Astrophysics Data System (ADS)
Beaver, John
2012-05-01
We present early results of using virtual machine software to help make astronomical research computing accessible to a wider range of individuals. Our Virtual Astronomical Research Machine in No Time (VARMiNT) is an Ubuntu Linux virtual machine with free, open-source software already installed and configured (and in many cases documented). The purpose of VARMiNT is to provide a ready-to-go astronomical research computing environment that can be freely shared between researchers, or between amateur and professional, teacher and student, etc., and to circumvent the often-difficult task of configuring a suitable computing environment from scratch. Thus we hope that VARMiNT will make it easier for individuals to engage in research computing even if they have no ready access to the facilities of a research institution. We describe our current version of VARMiNT and some of the ways it is being used at the University of Wisconsin - Fox Valley, a two-year teaching campus of the University of Wisconsin System, as a means to enhance student independent study research projects and to facilitate collaborations with researchers at other locations. We also outline some future plans and prospects.
Dynamically programmable cache
NASA Astrophysics Data System (ADS)
Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas
1998-10-01
Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with this approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create a multi-level reconfigurable machine. As a result, DPC machines have both higher data accessibility and FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement the multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup of DPC machines is shown to be 5X over an Altera FLEX10K FPGA chip and 2X over a Sun Ultra 1 SPARC station for two different algorithms (convolution and motion estimation).
A Boltzmann machine for the organization of intelligent machines
NASA Technical Reports Server (NTRS)
Moed, Michael C.; Saridis, George N.
1990-01-01
A three-tier structure consisting of organization, coordination, and execution levels forms the architecture of an intelligent machine using the principle of increasing precision with decreasing intelligence from a hierarchically intelligent control. This system has been formulated as a probabilistic model, where uncertainty and imprecision can be expressed in terms of entropies. The optimal strategy for decision planning and task execution can be found by minimizing the total entropy in the system. The focus is on the design of the organization level as a Boltzmann machine. Since this level is responsible for planning the actions of the machine, the Boltzmann machine is reformulated to use entropy as the cost function to be minimized. Simulated annealing, expanding subinterval random search, and the genetic algorithm are presented as search techniques to efficiently find the desired action sequence and illustrated with numerical examples.
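A minimal sketch of the simulated-annealing search over action sequences with an entropy cost, as described above. The per-action entropy table, move set, and cooling schedule are all hypothetical.

```python
import math
import random

random.seed(0)
costs = {a: random.uniform(0.1, 1.0) for a in range(8)}  # entropy per primitive action

def entropy(seq):
    return sum(costs[a] for a in seq)

seq = [random.randrange(8) for _ in range(5)]  # initial action sequence
T = 1.0
while T > 1e-3:
    cand = list(seq)
    cand[random.randrange(len(cand))] = random.randrange(8)  # random move
    delta = entropy(cand) - entropy(seq)
    if delta < 0 or random.random() < math.exp(-delta / T):
        seq = cand              # accept improvements, or worse moves by chance
    T *= 0.995                  # geometric cooling schedule
print("sequence:", seq, "entropy:", round(entropy(seq), 3))
```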
Social Science Data Archives and Libraries: A View to the Future.
ERIC Educational Resources Information Center
Clark, Barton M.
1982-01-01
Discusses factors militating against integration of social science data archives and libraries in near future, noting usage of materials, access requisite skills of librarians, economic stability of archives, existing structures which manage social science data archives. Role of librarians, data access tools, and cataloging of machine-readable…
ARL Statement on Unlimited Use and Exchange of Bibliographic Records.
ERIC Educational Resources Information Center
Association of Research Libraries, Washington, DC.
The Association of Research Libraries is fully committed to the principle of unrestricted access to and dissemination of ideas, i.e., member libraries must have unlimited access to the machine-readable bibliographic records which are created by member libraries and maintained in bibliographic utilities. Coordinated collection development programs…
Taxing soft drinks and restricting access to vending machines to curb child obesity.
Fletcher, Jason M; Frisvold, David; Tefft, Nathan
2010-05-01
One of the largest drivers of the current obesity epidemic is thought to be excessive consumption of sugar-sweetened beverages. Some have proposed vending machine restrictions and taxing soft drinks to curb children's consumption of soft drinks; to a large extent, these policies have not been evaluated empirically. We examine these policies using two nationally representative data sets and find no evidence that, as currently practiced, either is effective at reducing children's weight. We conclude by outlining changes that may increase their effectiveness, such as implementing comprehensive restrictions on access to soft drinks in schools and imposing higher tax rates than are currently in place in many jurisdictions.
Machine-z: Rapid machine-learned redshift indicator for Swift gamma-ray bursts
Ukwatta, T. N.; Wozniak, P. R.; Gehrels, N.
2016-03-08
Studies of high-redshift gamma-ray bursts (GRBs) provide important information about the early Universe such as the rates of stellar collapsars and mergers, the metallicity content, constraints on the re-ionization period, and probes of the Hubble expansion. Rapid selection of high-z candidates from GRB samples reported in real time by dedicated space missions such as Swift is the key to identifying the most distant bursts before the optical afterglow becomes too dim to warrant a good spectrum. Here, we introduce 'machine-z', a redshift prediction algorithm and a 'high-z' classifier for Swift GRBs based on machine learning. Our method relies exclusively on canonical data commonly available within the first few hours after the GRB trigger. Using a sample of 284 bursts with measured redshifts, we trained a randomized ensemble of decision trees (random forest) to perform both regression and classification. Cross-validated performance studies show that the correlation coefficient between machine-z predictions and the true redshift is nearly 0.6. At the same time, our high-z classifier can achieve 80 per cent recall of true high-redshift bursts, while incurring a false positive rate of 20 per cent. With a 40 per cent false positive rate, the classifier achieves ~100 per cent recall. As a result, the most reliable selection of high-redshift GRBs is obtained by combining predictions from both the high-z classifier and the machine-z regressor.
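The two-part design described above — a redshift regressor plus a high-z classifier whose threshold trades recall against false positives — can be sketched as follows. The burst features and redshifts are simulated placeholders, and the z > 4 high-z cutoff is an assumption for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(284, 12))               # prompt-emission features (stand-in)
z = np.abs(rng.normal(2.0, 1.5, size=284))   # redshifts (stand-in)

z_pred = cross_val_predict(RandomForestRegressor(random_state=3), X, z, cv=5)
print("corr(machine-z, z):", round(np.corrcoef(z_pred, z)[0, 1], 2))

high_z = (z > 4).astype(int)                 # assumed high-z definition
proba = cross_val_predict(RandomForestClassifier(random_state=3), X, high_z,
                          cv=5, method="predict_proba")[:, 1]
for thr in (0.5, 0.3):                       # lower threshold: recall and FPR both rise
    pred = proba > thr
    recall = pred[high_z == 1].mean()
    fpr = pred[high_z == 0].mean()
    print(f"threshold {thr}: recall {recall:.2f}, FPR {fpr:.2f}")
```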
Parker, David L; Brosseau, Lisa M; Samant, Yogindra; Xi, Min; Pan, Wei; Haugan, David
2009-01-01
Metal fabrication employs an estimated 3.1 million workers in the United States. The absence of machine guarding and related programs such as lockout/tagout may result in serious injury or death. The purpose of this study was to improve machine-related safety in small metal-fabrication businesses. We used a randomized trial with two groups: management only and management-employee. We evaluated businesses for the adequacy of machine guarding (machine scorecard) and related safety programs (safety audit). We provided all businesses with a report outlining deficiencies and prioritizing their remediation. In addition, the management-employee group received four one-hour interactive training sessions from a peer educator. We evaluated 40 metal-fabrication businesses at baseline and 37 (93%) one year later. Of the three nonparticipants, two had gone out of business. More than 40% of devices required for adequate guarding were missing or inadequate, and 35% of required safety programs and practices were absent at baseline. Both measures improved significantly during the course of the intervention. No significant differences in changes occurred between the two intervention groups. Machine-guarding practices and programs improved by up to 13% and safety audit scores by up to 23%. Businesses that added safety committees or those that started with the lowest baseline measures showed the greatest improvements. Simple and easy-to-use assessment tools allowed businesses to significantly improve their safety practices, and safety committees facilitated this process.
Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
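A sketch of the random-key idea described above: each job's continuous key decodes into a machine choice and a dispatch priority, so any crossover or mutation still yields a feasible schedule. Processing times, the evolutionary loop, and the omission of setup times and job splitting are all simplifications, not the authors' GAspLA.

```python
import numpy as np

rng = np.random.default_rng(4)
n_jobs, n_machines = 8, 3
p = rng.uniform(1, 9, size=(n_jobs, n_machines))  # processing time of job j on machine m

def makespan(keys):
    # Decode: the scaled integer part of each key picks a machine; sorting the
    # keys gives the dispatch order (the order would matter once
    # sequence-dependent setups are added; they are omitted here).
    machine = np.minimum((keys * n_machines).astype(int), n_machines - 1)
    loads = np.zeros(n_machines)
    for j in np.argsort(keys):
        loads[machine[j]] += p[j, machine[j]]
    return loads.max()

pop = rng.random((50, n_jobs))                    # random-key population
for _ in range(200):                              # crude evolutionary loop
    fit = np.array([makespan(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:25]]           # truncation selection
    children = np.clip(parents + rng.normal(scale=0.05, size=parents.shape),
                       0.0, 0.999)                # mutation as key jitter
    pop = np.vstack([parents, children])
best = min(pop, key=makespan)
print("best makespan:", round(float(makespan(best)), 2))
```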
2011-01-01
Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC curve of Me = 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forest ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
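The fold-wise comparison with Friedman's test described above can be sketched as follows; three of the ten classifiers stand in for the full set, and the test-score data are simulated.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 10))      # 10 neuropsychological test scores (stand-in)
y = rng.integers(0, 2, size=400)    # progression to dementia (stand-in)

models = {"SVM": SVC(),
          "Random Forest": RandomForestClassifier(random_state=5),
          "LDA": LinearDiscriminantAnalysis()}
fold_scores = {name: cross_val_score(m, X, y, cv=5) for name, m in models.items()}
stat, pval = friedmanchisquare(*fold_scores.values())
for name, s in fold_scores.items():
    print(f"{name}: median accuracy {np.median(s):.3f}")
print(f"Friedman test: p = {pval:.3f}")
```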
Moatti, J P; Vlahov, D; Feroni, I; Perrin, V; Obadia, Y
2001-03-01
In Marseille, southeastern France, HIV prevention programs for injection drug users (IDUs) simultaneously include access to sterile syringes through needle exchange programs (NEPs), legal pharmacy sales and, since 1996, vending machines that mechanically exchange new syringes for used ones. The purpose of this study was to compare the characteristics of IDUs according to the site where they last obtained new syringes. During 3 days in September 1997, all IDUs who obtained syringes from 32 pharmacies, four NEPs and three vending machines were offered the opportunity to complete a self-administered questionnaire on demographics, drug use characteristics and program utilization. Of 485 individuals approached, the number who completed the questionnaire was 141 in pharmacies, 114 in NEPs and 88 at vending machines (response rate = 70.7%). Compared to NEP users, vending machine users were younger and less likely to be enrolled in a methadone program or to report being HIV infected, but more likely to misuse buprenorphine. They also had lower financial resources and were less likely to be heroin injectors than both pharmacy and NEP users. Our results suggest that vending machines attract a very different group of IDUs than NEPs, and that both programs are useful adjuncts to legal pharmacy sales for covering the needs of IDUs for sterile syringes in a single city. Assessment of the effectiveness and cost-effectiveness of combining such programs for the prevention of HIV and other infectious diseases among IDUs requires further comparative research. Copyright 2001 S. Karger AG, Basel
Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines
NASA Astrophysics Data System (ADS)
Ivanovic, Pavle; Richter, Harald
2018-01-01
High-Performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI - the standard HPC communication library. Additionally, MPICH was transparently modified by us to get ivshmem included, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-side MPICH communication mechanism, by an own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
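One-sided MPI communication of the kind the authors wrapped (MPI_PUT) can be sketched with mpi4py. This shows standard MPI RMA, not the authors' ivshmem-backed implementation; run it under an MPI launcher, e.g. mpiexec -n 2 python put_demo.py (a hypothetical filename).

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.zeros(4, dtype="i")          # window memory exposed by every rank
win = MPI.Win.Create(buf, comm=comm)  # register it for one-sided access

win.Fence()                           # open an RMA epoch
if rank == 0:
    data = np.arange(4, dtype="i")
    win.Put(data, target_rank=1)      # write directly into rank 1's window
win.Fence()                           # close the epoch; the Put is now complete

if rank == 1:
    print("rank 1 window contents:", buf)
win.Free()
```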
Johnson, Donna B.; Krieger, James; MacDougall, Erin; Payne, Elizabeth; Chan, Nadine L.
2015-01-01
Policies that change environments are important tools for preventing chronic diseases, including obesity. Boards of health often have authority to adopt such policies, but few do so. This study assesses 1) how one local board of health developed a policy approach for healthy food access through vending machine guidelines (rather than regulations) and 2) the impact of the approach. Using a case study design guided by “three streams” policy theory and RE-AIM, we analyzed data from a focus group, interviews, and policy documents. The guidelines effectively supported institutional policy development in several settings. Recognition of the problem of chronic disease and the policy solution of vending machine guidelines created an opening for the board to influence nutrition environments. Institutions identified a need for support in adopting vending machine policies. Communities could benefit from the study board’s approach to using nonregulatory evidence-based guidelines as a policy tool. PMID:25927606
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
Food labeling; calorie labeling of articles of food in vending machines. Final rule.
2014-12-01
To implement the vending machine food labeling provisions of the Patient Protection and Affordable Care Act of 2010 (ACA), the Food and Drug Administration (FDA or we) is establishing requirements for providing calorie declarations for food sold from certain vending machines. This final rule will ensure that calorie information is available for certain food sold from a vending machine that does not permit a prospective purchaser to examine the Nutrition Facts Panel before purchasing the article, or does not otherwise provide visible nutrition information at the point of purchase. The declaration of accurate and clear calorie information for food sold from vending machines will make calorie information available to consumers in a direct and accessible manner to enable consumers to make informed and healthful dietary choices. This final rule applies to certain food from vending machines operated by a person engaged in the business of owning or operating 20 or more vending machines. Vending machine operators not subject to the rules may elect to be subject to the Federal requirements by registering with FDA.
Approximating prediction uncertainty for random forest regression models
John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne
2016-01-01
Machine learning approaches such as random forest have increased for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
Zhang, A; Critchley, S; Monsour, P A
2016-12-01
The aim of the present study was to assess the current adoption of cone beam computed tomography (CBCT) and panoramic radiography (PR) machines across Australia. Information regarding registered CBCT and PR machines was obtained from radiation regulators across Australia. The number of X-ray machines was correlated with the population size, the number of dentists, and the gross state product (GSP) per capita, to determine the best fitting regression model(s). In 2014, there were 232 CBCT and 1681 PR machines registered in Australia. Based on absolute counts, Queensland had the largest number of CBCT and PR machines whereas the Northern Territory had the smallest number. However, when based on accessibility in terms of the population size and the number of dentists, the Australian Capital Territory had the most CBCT machines and Western Australia had the most PR machines. The number of X-ray machines correlated strongly with both the population size and the number of dentists, but not with the GSP per capita. In 2014, the ratio of PR to CBCT machines was approximately 7:1. Projected increases in either the population size or the number of dentists could positively impact on the adoption of PR and CBCT machines in Australia. © 2016 Australian Dental Association.
Evaluating the Use of Machine Translation Post-Editing in the Foreign Language Class
ERIC Educational Resources Information Center
Nino, Ana
2008-01-01
Generalised access to the Internet and globalisation has led to increased demand for translation services and a resurgence in the use of machine translation (MT) systems. MT post-editing or the correction of MT output to an acceptable standard is known to be one of the ways to face the huge demand on multilingual communication. Given that the use…
Machine Learning Approaches for Clinical Psychology and Psychiatry.
Dwyer, Dominic B; Falkai, Peter; Koutsouleris, Nikolaos
2018-05-07
Machine learning approaches for clinical psychology and psychiatry explicitly focus on learning statistical functions from multidimensional data sets to make generalizable predictions about individuals. The goal of this review is to provide an accessible understanding of why this approach is important for future practice given its potential to augment decisions associated with the diagnosis, prognosis, and treatment of people suffering from mental illness using clinical and biological data. To this end, the limitations of current statistical paradigms in mental health research are critiqued, and an introduction is provided to critical machine learning methods used in clinical studies. A selective literature review is then presented aiming to reinforce the usefulness of machine learning methods and provide evidence of their potential. In the context of promising initial results, the current limitations of machine learning approaches are addressed, and considerations for future clinical translation are outlined.
NASA Astrophysics Data System (ADS)
Arriaza, Mari Carmen; Domínguez-Rodrigo, Manuel
2016-05-01
In the past twenty years, skeletal part profiles, which are prone to equifinality, have not occupied a prominent role in the interpretation of early Pleistocene sites in Africa. Instead, taphonomic studies of bone surface modifications and bone breakage patterns have provided heuristic interpretations of some of the best preserved archaeological record of this period; namely, the Olduvai Bed I sites. The most recent and comprehensive taphonomic study of these sites (Domínguez-Rodrigo et al., 2007a) showed that FLK Zinj was an anthropogenic assemblage in which hominins acquired carcasses via primary access. That study also showed that the other sites were palimpsests with minimal or no intervention by hominins. The FLK N, FLK NN and DK sequence seemed to be dominated by single-agent (mostly felid) or multiple-agent (mostly felid-hyenid) processes. The present study re-analyzes the Bed I sites focusing on skeletal part profiles. Machine learning methods, which incorporate complex algorithms, are powerful predictive and classification tools and have the potential to extract more information from skeletal part representation than past approaches. Here, multiple algorithms (decision trees, neural networks, random forests and support vector machines) are combined to produce a solid interpretation of bone accumulation agency at the Olduvai Bed I sites. This new approach virtually coincides with previous taphonomic interpretations on a site-by-site basis and shows that felids were dominant accumulating agents over hyenas during Bed I times. The recent discovery of a possibly lion-accumulated modern assemblage at Olduvai Gorge (Arriaza et al., submitted) provides a very timely analog for this interpretation.
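The multi-algorithm combination described above can be approximated with a soft-voting ensemble over the four named families. The skeletal-part profiles and class labels below are simulated placeholders, not the Bed I assemblage data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(11)
X = rng.random(size=(120, 9))     # skeletal-part frequency profiles (stand-in)
y = rng.integers(0, 3, size=120)  # 0 = hominin, 1 = felid, 2 = felid-hyenid (stand-in)

ensemble = VotingClassifier([("tree", DecisionTreeClassifier(random_state=11)),
                             ("nn", MLPClassifier(max_iter=2000, random_state=11)),
                             ("rf", RandomForestClassifier(random_state=11)),
                             ("svm", SVC(probability=True, random_state=11))],
                            voting="soft")  # average predicted probabilities
ensemble.fit(X, y)
print("predicted agents for two new profiles:", ensemble.predict(rng.random((2, 9))))
```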
ERIC Educational Resources Information Center
Ture, Ferhan
2013-01-01
With the adoption of web services in daily life, people have access to tremendous amounts of information, beyond any human's reading and comprehension capabilities. As a result, search technologies have become a fundamental tool for accessing information. Furthermore, the web contains information in multiple languages, introducing another barrier…
Microcomputer-Based Access to Machine-Readable Numeric Databases.
ERIC Educational Resources Information Center
Wenzel, Patrick
1988-01-01
Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)
Wireless Computing Architecture III
2013-09-01
MIMO: Multiple-Input and Multiple-Output; MIMO/CON: MIMO with concurrent channel access and estimation; MU-MIMO: Multiuser MIMO; OFDM: Orthogonal Frequency-Division Multiplexing. Topics covered include compressive sensing; a design for concurrent channel estimation in scalable multiuser MIMO networking; and novel networking protocols based on machine… Subject terms: Network, Antenna Arrays, UAV Networking, Angle of Arrival, Localization, MIMO, Access Point, Channel State Information, Compressive Sensing.
Ghosts in the Machine: Incarcerated Students and the Digital University
ERIC Educational Resources Information Center
Hopkins, Susan
2015-01-01
Providing higher education to offenders in custody has become an increasingly complex business in the age of digital learning. Most Australian prisoners still have no direct access to the internet and relatively unreliable access to information technology. As incarceration is now a business, prisons, like universities, are increasingly subject to…
The products, including metadata, are available through an access portal to GEONETCast products: a searchable database that can be found at …-channel. This version will run on a local computer at a user site, but internet links will not function unless the user has internet access on the same machine.
CART (Communication Access Realtime Translation). PEPNet Tipsheet
ERIC Educational Resources Information Center
Larson, Judy, Comp.
1999-01-01
Communication Access Realtime Translation--(CART)--is the instant translation of the spoken word into English text performed by a CART reporter using a stenotype machine, notebook computer and realtime software. The text is then displayed on a computer monitor or other display device for the student who is deaf or hard of hearing to read. This…
[Evaluation of Medical Instruments Cleaning Effect of Fluorescence Detection Technique].
Sheng, Nan; Shen, Yue; Li, Zhen; Li, Huijuan; Zhou, Chaoqun
2016-01-01
To compare the cleaning effects of an automatic cleaning machine and manual cleaning on coupling-type surgical instruments, a total of 32 cleaned medical instruments were randomly sampled from the disinfection supply centers of medical institutions in Putuo District. The Hygiena System SUREII ATP was used to monitor ATP values, and the cleaning effect was evaluated. The surface ATP values of manually cleaned instruments were higher than those of instruments cleaned by the automatic cleaning machine. The automatic cleaning machine achieved a better cleaning effect on coupling-type surgical instruments before disinfection, and its application is recommended.
Operation of micro and molecular machines: a new concept with its origins in interface science.
Ariga, Katsuhiko; Ishihara, Shinsuke; Izawa, Hironori; Xia, Hong; Hill, Jonathan P
2011-03-21
A landmark accomplishment of nanotechnology would be successful fabrication of ultrasmall machines that can work like tweezers, motors, or even computing devices. Now we must consider how operation of micro- and molecular machines might be implemented for a wide range of applications. If these machines function only under limited conditions and/or require specialized apparatus then they are useless for practical applications. Therefore, it is important to carefully consider the access of functionality of the molecular or nanoscale systems by conventional stimuli at the macroscopic level. In this perspective, we will outline the position of micro- and molecular machines in current science and technology. Most of these machines are operated by light irradiation, application of electrical or magnetic fields, chemical reactions, and thermal fluctuations, which cannot always be applied in remote machine operation. We also propose strategies for molecular machine operation using the most conventional of stimuli, that of macroscopic mechanical force, achieved through mechanical operation of molecular machines located at an air-water interface. The crucial roles of the characteristics of an interfacial environment, i.e. connection between macroscopic dimension and nanoscopic function, and contact of media with different dielectric natures, are also described.
Open access to high-level data and analysis tools in the CMS experiment at the LHC
Calderon, A.; Colling, D.; Huffman, A.; ...
2015-12-23
The CMS experiment, in recognition of its commitment to data preservation and open access as well as to education and outreach, has made its first public release of high-level data under the CC0 waiver: up to half of the proton-proton collision data (by volume) at 7 TeV from 2010 in CMS Analysis Object Data format. CMS has prepared, in collaboration with CERN and the other LHC experiments, an open-data web portal based on Invenio. The portal provides access to CMS public data as well as to analysis tools and documentation for the public. The tools include an event display and a histogram application that run in the browser. In addition, a virtual machine containing a CMS software environment along with XRootD access to the data is available. Within the virtual machine the public can analyse CMS data; example code is provided. We describe the accompanying tools and documentation and discuss the first experiences of data use.
Can machine-learning improve cardiovascular risk prediction using routine clinical data?
Kai, Joe; Garibaldi, Jonathan M.; Qureshi, Nadeem
2017-01-01
Background Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Methods Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the ‘receiver operating curve’ (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins). Findings 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Conclusions Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others. PMID:28376093
Can machine-learning improve cardiovascular risk prediction using routine clinical data?
Weng, Stephen F; Reps, Jenna; Kai, Joe; Garibaldi, Jonathan M; Qureshi, Nadeem
2017-01-01
Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10-years. Predictive accuracy was assessed by area under the 'receiver operating curve' (AUC); and sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) to predict 7.5% cardiovascular risk (threshold for initiating statins). 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723-0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739-0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755-0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755-0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759-0.769). The highest achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Machine-learning significantly improves accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
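A sketch of the evaluation protocol used in both records above: discrimination (AUC) plus sensitivity, specificity, PPV and NPV at the 7.5% predicted-risk threshold. The features, event labels, and logistic model are simulated stand-ins for the study's cohort and algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(5000, 8))                            # routine risk factors (stand-in)
y = (X[:, 0] + rng.normal(size=5000) > 2.2).astype(int)   # roughly 6% event rate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)
risk = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, risk), 3))

pred = risk >= 0.075                                      # statin-initiation threshold
tp = (pred & (y_te == 1)).sum(); fp = (pred & (y_te == 0)).sum()
fn = (~pred & (y_te == 1)).sum(); tn = (~pred & (y_te == 0)).sum()
print(f"sensitivity {tp/(tp+fn):.2f}, specificity {tn/(tn+fp):.2f}, "
      f"PPV {tp/(tp+fp):.2f}, NPV {tn/(tn+fn):.2f}")
```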
Kocken, Paul L; Eeuwijk, Jennifer; Van Kesteren, Nicole M C; Dusseldorp, Elise; Buijs, Goof; Bassa-Dafesh, Zeina; Snel, Jeltje
2012-03-01
Vending machines account for food sales and revenue in schools. We examined 3 strategies for promoting the sale of lower-calorie food products from vending machines in high schools in the Netherlands. A school-based randomized controlled trial was conducted in 13 experimental schools and 15 control schools. Three strategies were tested within each experimental school: increasing the availability of lower-calorie products in vending machines, labeling products, and reducing the price of lower-calorie products. The experimental schools introduced the strategies in 3 consecutive phases, with phase 3 incorporating all 3 strategies. The control schools remained the same. The sales volumes from the vending machines were registered. Products were grouped into (1) extra foods containing empty calories, for example, candies and potato chips, (2) nutrient-rich basic foods, and (3) beverages. They were also divided into favorable, moderately unfavorable, and unfavorable products. Total sales volumes for experimental and control schools did not differ significantly for the extra and beverage products. Proportionally, the higher availability of lower-calorie extra products in the experimental schools led to higher sales of moderately unfavorable extra products than in the control schools, and to higher sales of favorable extra products in experimental schools where students have to stay during breaks. Together, availability, labeling, and price reduction raised the proportional sales of favorable beverages. Results indicate that when the availability of lower-calorie foods is increased and is also combined with labeling and reduced prices, students make healthier choices without buying more or fewer products from school vending machines. Changes to school vending machines help to create a healthy school environment. © 2012, American School Health Association.
Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh
2015-04-01
With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
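With only 28 patients, fold-based validation is fragile; leave-one-out cross-validation is one common choice at this scale (the abstract does not state the study's exact scheme). The sketch below computes a leave-one-out AUC on simulated spectroscopy features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(28, 5))       # MR spectroscopy features (stand-in)
y = np.array([0] * 10 + [1] * 18)  # benign vs malignant (stand-in split)

proba = cross_val_predict(RandomForestClassifier(random_state=7), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("leave-one-out AUC:", round(roc_auc_score(y, proba), 3))
```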
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
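Sequence-level protocolling as described above is naturally framed as multi-label text classification: TF-IDF features from the order text, one binary model per MR sequence, and Hamming loss for evaluation. The orders, sequence labels, and training-set evaluation below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import hamming_loss
from sklearn.multiclass import OneVsRestClassifier

orders = ["r/o stroke, sudden weakness", "ms follow up", "pituitary lesion",
          "seizure, first episode", "ms new symptoms", "r/o acute infarct"]
# Label columns: DWI, FLAIR, post-contrast T1 (hypothetical sequence set)
Y = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1],
              [1, 1, 0], [0, 1, 1], [1, 1, 0]])

X = TfidfVectorizer().fit_transform(orders)
clf = OneVsRestClassifier(GradientBoostingClassifier()).fit(X, Y)
print("Hamming loss (training set, for illustration):",
      hamming_loss(Y, clf.predict(X)))
```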
Classification of large-sized hyperspectral imagery using fast machine learning algorithms
NASA Astrophysics Data System (ADS)
Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira
2017-07-01
We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM) and the ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. To provide a quantitative analysis, we compare these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.
2018-01-01
Background: Many studies have tried to develop predictors for return-to-work (RTW). However, since complex factors have been demonstrated to predict RTW, it is difficult to use them practically. This study investigated whether factors used in previous studies could predict whether an individual had returned to his/her original work by four years after termination of the worker's recovery period. Methods: An initial logistic regression analysis of 1,567 participants of the fourth Panel Study of Worker's Compensation Insurance yielded odds ratios. The participants were divided into two subsets, a training dataset and a test dataset. Using the training dataset, logistic regression, decision tree, random forest, and support vector machine models were established, and important variables of each model were identified. The predictive abilities of the different models were compared. Results: The analysis showed that only earned income and company-related factors significantly affected return-to-original-work (RTOW). The random forest model showed the best accuracy among the tested machine learning models; however, the difference was not prominent. Conclusion: It is possible to predict a worker's probability of RTOW using machine learning techniques with moderate accuracy. PMID:29736160
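To illustrate the first analysis step, the snippet below fits a logistic regression with statsmodels and reads odds ratios off the exponentiated coefficients; the two predictors are invented placeholders for income and company-related factors, not the survey's actual variables.

```python
# Illustrative sketch only: odds ratios from a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
income = rng.normal(size=200)                 # placeholder for earned income
company = rng.normal(size=200)                # placeholder for a company factor
logit_p = 0.8 * income + 0.5 * company
returned = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # simulated RTOW outcome

X = sm.add_constant(np.column_stack([income, company]))
fit = sm.Logit(returned, X).fit(disp=0)
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```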
NASA Astrophysics Data System (ADS)
Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei
2017-02-01
Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of whom were randomly selected as the “derivation cohort” to develop the dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline
2014-01-01
Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
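The output filter can be sketched as follows (an illustration with an assumed window length, not the authors' exact filter): per-minute classifier votes are averaged over a short moving window before the final label is taken, which suppresses isolated misclassifications.

```python
# A minimal sketch of moving-average smoothing of predicted class labels.
import numpy as np

def moving_average_filter(labels, n_classes, window=3):
    """Average one-hot votes over a moving window, then take the majority class."""
    onehot = np.eye(n_classes)[labels]
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, onehot)
    return smoothed.argmax(axis=1)

# A stray "vehicle" prediction (class 2) inside a walking bout (class 0).
raw = np.array([0, 0, 2, 0, 0, 1, 1, 0, 1, 1])
print(moving_average_filter(raw, n_classes=3))
```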
A machine learning system to improve heart failure patient assistance.
Guidi, Gabriele; Pettenati, Maria Chiara; Melillo, Paolo; Iadanza, Ernesto
2014-11-01
In this paper, we present a clinical decision support system (CDSS) for the analysis of heart failure (HF) patients, providing various outputs such as an HF severity evaluation, HF-type prediction, as well as a management interface that compares the different patients' follow-ups. The system is composed of an intelligent core and an HF special-purpose management tool that also acts as the interface for training and using the artificial intelligence. To implement the intelligent functions, we adopted a machine learning approach. In this paper, we compare the performance of a neural network (NN), a support vector machine, a system with genetically produced fuzzy rules, and a classification and regression tree and its direct evolution, the random forest, in analyzing our database. Best performances in both HF severity evaluation and HF-type prediction functions are obtained by using the random forest algorithm. The management tool allows the cardiologist to populate a "supervised database" suitable for machine learning during his or her regular outpatient consultations. The idea comes from the fact that there are few databases of this type in the literature, and they are not scalable to our case.
An introduction to quantum machine learning
NASA Astrophysics Data System (ADS)
Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco
2015-04-01
Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated whether quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.
ERIC Educational Resources Information Center
Golino, Hudson F.; Gomes, Cristiano M. A.
2016-01-01
This paper presents a non-parametric imputation technique, named random forest, from the machine learning field. The random forest procedure has two main tuning parameters: the number of trees grown in the prediction and the number of predictors used. Fifty experimental conditions were created in the imputation procedure, with different…
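As a hedged sketch of forest-based imputation in this spirit (missForest-style, not necessarily the paper's exact procedure), scikit-learn's IterativeImputer can regress each incomplete variable on the others with a random forest:

```python
# A minimal sketch of random-forest-based imputation on synthetic data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[rng.random(X.shape) < 0.1] = np.nan   # knock out ~10% of entries

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    random_state=0)
X_imputed = imputer.fit_transform(X)
print(np.isnan(X_imputed).sum())        # 0: all gaps filled
```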
Xing, Haifeng; Hou, Bo; Lin, Zhihui; Guo, Meifeng
2017-10-13
MEMS (Micro Electro Mechanical System) gyroscopes have been widely applied to various fields, but MEMS gyroscope random drift has nonlinear and non-stationary characteristics. Modeling and compensating the random drift has attracted much attention because it can improve the precision of inertial devices. This paper proposes using wavelet filtering to reduce noise in the original data of MEMS gyroscopes, then reconstructing the random drift data with PSR (phase space reconstruction), and establishing the model for the reconstructed data by LSSVM (least squares support vector machine), of which the parameters were optimized using CPSO (chaotic particle swarm optimization). Comparing the modeling of the MEMS gyroscope random drift by BP-ANN (back propagation artificial neural network) and by the proposed method, the results showed that the latter had better prediction accuracy. After compensation of three groups of MEMS gyroscope random drift data, the standard deviation of the three groups of experimental data dropped from 0.00354°/s, 0.00412°/s, and 0.00328°/s to 0.00065°/s, 0.00072°/s and 0.00061°/s, respectively, which demonstrated that the proposed method can reduce the influence of MEMS gyroscope random drift and verified the effectiveness of this method for modeling it.
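A sketch of the modeling chain on a synthetic drift signal: phase space reconstruction by time-delay embedding, then one-step-ahead prediction with a kernel regressor. scikit-learn has no LSSVM, so epsilon-SVR stands in here, and the delay and embedding dimension are illustrative choices rather than values from the paper.

```python
# Phase space reconstruction + kernel regression on a toy drift signal.
import numpy as np
from sklearn.svm import SVR

def embed(series, dim=3, delay=2):
    """Time-delay embedding: row j is [x(j), x(j+delay), ..., x(j+(dim-1)*delay)]."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])

rng = np.random.default_rng(0)
drift = np.cumsum(0.001 * rng.standard_normal(500))  # toy random-drift signal

dim, delay = 3, 2
X = embed(drift, dim, delay)[:-1]          # drop the last row so a target exists
y = drift[(dim - 1) * delay + 1 :]         # one-step-ahead targets

model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print(np.std(y[400:] - pred))              # residual spread after modeling the drift
```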
Washing machine related injuries in children: a continuing threat
Warner, B; Kenney, B; Rice, M
2003-01-01
Objective: To describe washing machine related injuries in children in the United States. Methods: Injury data for 496 washing machine related injuries documented by the Consumer Product Safety Commission's National Electronic Injury Surveillance System and death certificate data files were analyzed. Gender, age, diagnosis, body part injured, disposition, location and mechanism of injury were considered in the analysis of data. Results: The upper extremities were most frequently injured in washing machine related injuries, especially with wringer machines. Fewer than 10% of patients required admission, but automatic washers accounted for most of these and for both of the deaths. Automatic washer injuries involved a wider range of injury mechanism, including 23 children who fell from the machines while in baby seats. Conclusions: Though most injuries associated with washing machines are minor, some are severe and devastating. Many of the injuries could be avoided with improvements in machine design while others suggest a need for increased education of potential dangers and better supervision of children if they are allowed access to areas where washing machines are operating. Furthermore, washing machines should only be used for their intended purpose. Given the limitations of educational efforts to prevent injuries, health professionals should have a major role in public education regarding these seemingly benign household appliances. PMID:14693900
Predictors of return rate discrimination in slot machine play.
Coates, Ewan; Blaszczynski, Alex
2014-09-01
The purpose of this study was to investigate the extent to which accurate estimates of payback percentages and volatility, combined with prior learning, enabled players to successfully discriminate between multi-line/multi-credit slot machines that provided differing rates of reinforcement. The aim was to determine if the capacity to discriminate structural characteristics of gaming machines influenced player choices in selecting 'favourite' slot machines. Slot machine gambling history, gambling beliefs and knowledge, impulsivity, illusions of control, and problem solving style were assessed in a sample of 48 first year undergraduate psychology students. Participants were subsequently exposed to a choice paradigm where they could freely select to play either of two concurrently presented PC-simulated slot machines programmed to randomly differ in expected player return rates (payback percentage) and win frequency (volatility). Results suggest that prior learning and cognitions (particularly the gambler's fallacy), but not payback, were major contributors to the ability of a player to discriminate volatility between slot machines. Participants displayed a general tendency to discriminate payback, but counter-intuitively placed more bets on the slot machine with lower payback percentage rates.
USDA-ARS?s Scientific Manuscript database
Palmer amaranth (Amaranthus palmeri S. Wats.) invasion negatively impacts cotton (Gossypium hirsutum L.) production systems throughout the United States. The objective of this study was to evaluate canopy hyperspectral narrowband data as input into the random forest machine learning algorithm to dis...
Fernandes, Henrique; Zhang, Hai; Figueiredo, Alisson; Malheiros, Fernando; Ignacio, Luis Henrique; Sfarra, Stefano; Ibarra-Castanedo, Clemente; Guimaraes, Gilmar; Maldague, Xavier
2018-01-19
The use of fiber reinforced materials such as randomly-oriented strands has grown in recent years, especially for manufacturing of aerospace composite structures. This growth is mainly due to their advantageous properties: they are lighter and more resistant to corrosion when compared to metals and are more easily shaped than continuous fiber composites. The resistance and stiffness of these materials are directly related to their fiber orientation. Thus, efficient approaches to assess their fiber orientation are in demand. In this paper, a non-destructive evaluation method is applied to assess the fiber orientation on laminates reinforced with randomly-oriented strands. More specifically, a method called pulsed thermal ellipsometry combined with an artificial neural network, a machine learning technique, is used in order to estimate the fiber orientation on the surface of inspected parts. Results showed that the method can be potentially used to inspect large areas with good accuracy and speed.
Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras
NASA Astrophysics Data System (ADS)
Quinn, Mark Kenneth
2018-05-01
Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
Machine learning and data science in soft materials engineering
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.
2018-01-01
In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by ‘de-jargonizing’ data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
SECIMTools: a suite of metabolomics data analysis tools.
Kirpich, Alexander S; Ibarra, Miguel; Moskalenko, Oleksandr; Fear, Justin M; Gerken, Joseph; Mi, Xinlei; Ashrafi, Ali; Morse, Alison M; McIntyre, Lauren M
2018-04-20
Metabolomics has the promise to transform the area of personalized medicine with the rapid development of high throughput technology for untargeted analysis of metabolites. Open access, easy to use, analytic tools that are broadly accessible to the biological community need to be developed. While technology used in metabolomics varies, most metabolomics studies have a set of features identified. Galaxy is an open access platform that enables scientists at all levels to interact with big data. Galaxy promotes reproducibility by saving histories and enabling the sharing of workflows among scientists. SECIMTools (SouthEast Center for Integrated Metabolomics) is a set of Python applications that are available both as standalone tools and wrapped for use in Galaxy. The suite includes a comprehensive set of quality control metrics (retention time window evaluation and various peak evaluation tools), visualization techniques (hierarchical cluster heatmap, principal component analysis, modular modularity clustering), basic statistical analysis methods (partial least squares - discriminant analysis, analysis of variance, t-test, Kruskal-Wallis non-parametric test), advanced classification methods (random forest, support vector machines), and advanced variable selection tools (least absolute shrinkage and selection operator LASSO and Elastic Net). SECIMTools leverages the Galaxy platform and enables integrated workflows for metabolomics data analysis made from building blocks designed for easy use and interpretability. Standard data formats and a set of utilities allow arbitrary linkages between tools to encourage novel workflow designs. The Galaxy framework enables future data integration for metabolomics studies with other omics data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papantoni-Kazakos, P.; Paterakis, M.
1988-07-01
For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels. The noise effect appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window Random-Access Algorithms (RAAs) are considered next. These algorithms constitute an important subclass of Multiple-Access Algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.
Detecting Abnormal Machine Characteristics in Cloud Infrastructures
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.
2011-01-01
In the cloud computing environment resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by the users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines by their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis and at the end of the analysis, our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
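A toy illustration of the ranking idea (not the authors' algorithm): each machine computes an anomaly score locally from its own performance history, and only the scores are compared to rank machines, so no raw performance data is centralized.

```python
# A hedged sketch: local per-machine anomaly scoring, then a global ranking.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
machines = {f"node{i}": rng.normal(0, 1, size=(200, 3)) for i in range(4)}
machines["node3"][-20:] += 4.0   # inject a misbehaving node

scores = {}
for name, history in machines.items():
    # Each node fits a model on its own earlier history (runs locally).
    forest = IsolationForest(random_state=0).fit(history[:-20])
    scores[name] = forest.score_samples(history[-20:]).mean()  # lower = stranger

for name in sorted(scores, key=scores.get):   # most anomalous machine first
    print(name, round(scores[name], 3))
```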
Development of 300 mesh Soy Bean Crusher for Tofu Material Processing
NASA Astrophysics Data System (ADS)
Lee, E. S.; Pratama, P. S.; Supeno, D.; Jeong, S. W.; Byun, J. Y.; Woo, J. H.; Park, C. S.; Choi, W. S.
2018-03-01
A machine such as a bean crusher is subjected to different loads and vibration. Due to this vibration there will be certain deformations which affect the performance of the machine in an adverse manner. This paper proposes a vibration analysis of a bean crusher machine using ANSYS. The effect of vibration on the structure was studied in order to ensure safety using finite element analysis. This research supports the machine designer in creating a better product with lower cost and faster development time. To do this, firstly, a CAD model is prepared using Inventor. Secondly, the analysis is carried out using ANSYS 15. The modal analysis and random vibration analysis of the structure were conducted. The analysis shows that the proposed design exhibits minimal deformation when vibration is applied under normal conditions.
Machine learning algorithms for the creation of clinical healthcare enterprise systems
NASA Astrophysics Data System (ADS)
Mandal, Indrajit
2017-10-01
Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above a 95% confidence interval). The study then extends to an experimental analysis of the clinical recommender system with respect to noisy data environments. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical datasets to reinforce the research findings.
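One plausible reading of the hybrid ensemble, sketched under assumptions: a bagging wrapper that trains each random forest on a random subset of features, i.e., the random subspace method applied over forests. The data and parameter values are illustrative, not the paper's configuration.

```python
# A hedged sketch of a random-subspace-over-random-forest ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=30, random_state=0)

subspace_rf = BaggingClassifier(
    RandomForestClassifier(n_estimators=50, random_state=0),
    n_estimators=10,        # ten forests...
    max_features=0.5,       # ...each seeing a random half of the features
    bootstrap=False,
    bootstrap_features=True,
    random_state=0)
print(cross_val_score(subspace_rf, X, y, cv=5).mean())
```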
Individualized Instruction for Data Access (IIDA). Quarterly Reports No. 5-6.
ERIC Educational Resources Information Center
Drexel Univ., Philadelphia, PA. Graduate School of Library Science.
These quarterly reports for June and September 1979 are two in a series concerned with describing a research project to design, test, and evaluate a machine intermediary which teaches and assists users of the DIALOG system. Report No. 5 explains in detail the instructional aspect of the Individualized Instruction for Data Access (IIDA) project and…
Application of Computer Simulation to Teach ATM Access to Individuals with Intellectual Disabilities
ERIC Educational Resources Information Center
Davies, Daniel K.; Stock, Steven E.; Wehmeyer, Michael L.
2003-01-01
This study investigates use of computer simulation for teaching ATM use to adults with intellectual disabilities. ATM-SIM is a computer-based trainer used for teaching individuals with intellectual disabilities how to use an automated teller machine (ATM) to access their personal bank accounts. In the pilot evaluation, a prototype system was…
ERIC Educational Resources Information Center
Hert, Carol A.; Nilan, Michael S.
1991-01-01
Presents preliminary data that characterizes the relationship between what users say they are trying to accomplish when using an online public access catalog (OPAC) and their perceptions of what input to give the system. Human-machine interaction is discussed, and appropriate methods for evaluating information retrieval systems are considered. (18…
Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo
2008-04-25
A random access memory (RAM) uses n bits to randomly address N = 2^n distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially fewer gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
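A back-of-the-envelope comparison implied by the abstract: addressing one of N = 2^n cells activates on the order of N switches in a conventional layout versus O(log N) = n in the proposed architecture.

```python
# Switch counts per memory call: conventional RAM vs. the O(log N) QRAM claim.
for n in (10, 20, 40):
    N = 2 ** n
    print(f"n = {n:2d} address (qu)bits: conventional ~ {N:,} switches, "
          f"proposed architecture ~ {n} switches")
```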
Slot Machines: Pursuing Responsible Gaming Practices for Virtual Reels and Near Misses
ERIC Educational Resources Information Center
Harrigan, Kevin A.
2009-01-01
Since 1983, slot machines in North America have used a computer and virtual reels to determine the odds. Since at least 1988, a technique called clustering has been used to create a high number of near misses, failures that are close to wins. The result is that what the player sees does not represent the underlying probabilities and randomness,…
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit that intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers and end-users perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof of concept implementation has demonstrated the viability of this approach on high end machines, grid systems and computing clouds.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-25
... Access Memory Semiconductors and Products Containing Same, Including Memory Modules; Notice of... the sale within the United States after importation of certain dynamic random access memory semiconductors and products containing same, including memory modules, by reason of infringement of certain...
Ozmutlu, H. Cenk
2014-01-01
We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and a job-splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204
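An illustrative decoder for a random-key chromosome of the general kind described above: one key per job, where the integer part selects the machine and the fractional part orders jobs within a machine. This follows a common random-key convention and is not the paper's exact encoding; setup times and job splitting are omitted from the toy.

```python
# A hedged sketch of random-key decoding for parallel machine scheduling.
import numpy as np

def decode(keys, n_machines, durations):
    """Map keys in [0, n_machines) to a machine assignment and job order;
    return the per-machine schedule and the resulting makespan."""
    assignment = keys.astype(int)                     # integer part -> machine
    schedule, finish = {}, np.zeros(n_machines)
    for m in range(n_machines):
        jobs = np.where(assignment == m)[0]
        order = jobs[np.argsort(keys[jobs] % 1.0)]    # fractional part -> order
        schedule[m] = order.tolist()
        finish[m] = durations[order].sum()            # setups omitted in this toy
    return schedule, finish.max()

rng = np.random.default_rng(0)
durations = rng.integers(1, 10, size=8)               # 8 jobs...
keys = rng.uniform(0, 3, size=8)                      # ...on 3 machines
print(decode(keys, 3, durations))
```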
Casimir rack and pinion as a miniaturized kinetic energy harvester
NASA Astrophysics Data System (ADS)
Miri, MirFaez; Etesami, Zahra
2016-08-01
We study a nanoscale machine composed of a rack and a pinion with no contact, but intermeshed via the lateral Casimir force. We adopt a simple model for the random velocity of the rack subject to external random forces, namely, a dichotomous noise with zero mean value. We show that the pinion, even when it experiences random thermal torque, can do work against a load. The device thus converts the kinetic energy of the random motions of the rack into useful work.
Rao, Sunil V; Hess, Connie N; Barham, Britt; Aberle, Laura H; Anstrom, Kevin J; Patel, Tejan B; Jorgensen, Jesse P; Mazzaferri, Ernest L; Jolly, Sanjit S; Jacobs, Alice; Newby, L Kristin; Gibson, C Michael; Kong, David F; Mehran, Roxana; Waksman, Ron; Gilchrist, Ian C; McCourt, Brian J; Messenger, John C; Peterson, Eric D; Harrington, Robert A; Krucoff, Mitchell W
2014-08-01
This study sought to determine the effect of radial access on outcomes in women undergoing percutaneous coronary intervention (PCI) using a registry-based randomized trial. Women are at increased risk of bleeding and vascular complications after PCI. The role of radial access in women is unclear. Women undergoing cardiac catheterization or PCI were randomized to radial or femoral arterial access. Data from the CathPCI Registry and trial-specific data were merged into a final study database. The primary efficacy endpoint was Bleeding Academic Research Consortium type 2, 3, or 5 bleeding or vascular complications requiring intervention. The primary feasibility endpoint was access site crossover. The primary analysis cohort was the subgroup undergoing PCI; sensitivity analyses were conducted in the total randomized population. The trial was stopped early for a lower than expected event rate. A total of 1,787 women (691 undergoing PCI) were randomized at 60 sites. There was no significant difference in the primary efficacy endpoint between radial or femoral access among women undergoing PCI (radial 1.2% vs. 2.9% femoral, odds ratio [OR]: 0.39; 95% confidence interval [CI]: 0.12 to 1.27); among women undergoing cardiac catheterization or PCI, radial access significantly reduced bleeding and vascular complications (0.6% vs. 1.7%; OR: 0.32; 95% CI: 0.12 to 0.90). Access site crossover was significantly higher among women assigned to radial access (PCI cohort: 6.1% vs. 1.7%; OR: 3.65; 95% CI: 1.45 to 9.17); total randomized cohort: (6.7% vs. 1.9%; OR: 3.70; 95% CI: 2.14 to 6.40). More women preferred radial access. In this pragmatic trial, which was terminated early, the radial approach did not significantly reduce bleeding or vascular complications in women undergoing PCI. Access site crossover occurred more often in women assigned to radial access. (SAFE-PCI for Women; NCT01406236). Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
A performance comparison of the IBM RS/6000 and the Astronautics ZS-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.M.; Abraham, S.G.; Davidson, E.S.
1991-01-01
Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.
Development of a HIPAA-compliant environment for translational research data and analytics.
Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C
2014-01-01
High-performance computing centers (HPC) traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, researcher count has increased from 6 to 58.
Stochastic scheduling on a repairable manufacturing system
NASA Astrophysics Data System (ADS)
Li, Wei; Cao, Jinhua
1995-08-01
In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later when the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having random due dates that are exponentially distributed, which generalizes some previous results.
Assessment of various supervised learning algorithms using different performance metrics
NASA Astrophysics Data System (ADS)
Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.
2017-11-01
Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration in the following work are Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as Accuracy, F-Measure, G-Measure, Precision, Misclassification Rate, False Positive Rate, True Positive Rate, Specificity, and Prevalence.
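For reference, the sketch below computes several of the listed metrics from a binary confusion matrix; G-measure is taken here as the geometric mean of precision and recall, which is one of several definitions in use.

```python
# Computing the listed evaluation metrics from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)                 # true positive rate / sensitivity
print({
    "accuracy": (tp + tn) / len(y_true),
    "misclassification rate": (fp + fn) / len(y_true),
    "false positive rate": fp / (fp + tn),
    "specificity": tn / (tn + fp),
    "F-measure": 2 * precision * recall / (precision + recall),
    "G-measure": (precision * recall) ** 0.5,   # one common definition
    "prevalence": (tp + fn) / len(y_true),
})
```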
Accurate Diabetes Risk Stratification Using Machine Learning: Role of Missing Value and Outliers.
Maniruzzaman, Md; Rahman, Md Jahanur; Al-MehediHasan, Md; Suri, Harman S; Abedin, Md Menhazul; El-Baz, Ayman; Suri, Jasjit S
2018-04-10
Diabetes mellitus is a group of metabolic diseases in which blood sugar levels are too high. About 8.8% of the world was diabetic in 2017. It is projected that this will reach nearly 10% by 2045. The major challenge is that machine learning-based classifiers applied to such data sets for risk stratification yield lower performance. Thus, our objective is to develop an optimized and robust machine learning (ML) system under the assumption that missing values or outliers, if replaced by a median configuration, will yield higher risk stratification accuracy. This ML-based risk stratification is designed, optimized and evaluated, where: (i) the features are extracted and optimized from six feature selection techniques (random forest, logistic regression, mutual information, principal component analysis, analysis of variance, and Fisher discriminant ratio) and combined with ten different types of classifiers (linear discriminant analysis, quadratic discriminant analysis, naïve Bayes, Gaussian process classification, support vector machine, artificial neural network, Adaboost, logistic regression, decision tree, and random forest) under the hypothesis that both missing values and outliers, when replaced by computed medians, will improve the risk stratification accuracy. The Pima Indian diabetes dataset (768 patients: 268 diabetic and 500 controls) was used. Our results demonstrate that replacing the missing values and outliers by group median and median values, respectively, and further using the combination of random forest feature selection and random forest classification yields an accuracy, sensitivity, specificity, positive predictive value, negative predictive value and area under the curve of 92.26%, 95.96%, 79.72%, 91.14%, 91.20%, and 0.93, respectively. This is an improvement of 10% over previously developed techniques published in the literature. The system was validated for its stability and reliability. The RF-based model showed the best performance when outliers are replaced by median values.
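A condensed sketch of the central idea on synthetic data shaped like the Pima set: fill missing entries and simple IQR outliers with column medians, then classify with a random forest. The columns, thresholds, and labels below are illustrative, not the paper's protocol.

```python
# Median replacement of missing values and IQR outliers, then random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))                         # synthetic feature matrix
y = (X[:, 0] + rng.normal(scale=0.5, size=768) > 0).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan                # simulate missing entries

for j in range(X.shape[1]):
    col = X[:, j]                                     # view: edits modify X
    med = np.nanmedian(col)
    col[np.isnan(col)] = med                          # median for gaps
    q1, q3 = np.percentile(col, [25, 75])
    iqr = q3 - q1
    mask = (col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr)
    col[mask] = med                                   # median for IQR outliers

print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10).mean())
```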
Richardson, Alice; Signor, Ben M; Lidbury, Brett A; Badrick, Tony
2016-11-01
Big Data is having an impact on many areas of research, not the least of which is biomedical science. In this review paper, big data and machine learning are defined in terms accessible to the clinical chemistry community. Seven myths associated with machine learning and big data are then presented, with the aim of managing expectation of machine learning amongst clinical chemists. The myths are illustrated with four examples investigating the relationship between biomarkers in liver function tests, enhanced laboratory prediction of hepatitis virus infection, the relationship between bilirubin and white cell count, and the relationship between red cell distribution width and laboratory prediction of anaemia. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Age verification cards fail to fully prevent minors from accessing tobacco products.
Kanda, Hideyuki; Osaki, Yoneatsu; Ohida, Takashi; Kaneita, Yoshitaka; Munezawa, Takeshi
2011-03-01
Proper age verification can prevent minors from accessing tobacco products. For this reason, electronic locking devices based on a proof-of-age system utilising cards were installed in almost every tobacco vending machine across Japan and Germany to restrict sales to minors. We aimed to clarify the associations between the amount smoked by high school students and the usage of age verification cards by conducting a nationwide cross-sectional survey of students in Japan. This survey was conducted in 2008. We asked high school students, aged 13-18 years, in Japan about their smoking behaviour, where they purchased cigarettes, whether they had used age verification cards, and if so, how they obtained the card. As the amount smoked increased, the prevalence of purchasing cigarettes from vending machines also rose for both males and females. The percentage of those with experience of using an age verification card was also higher among those who smoked more. Somebody outside of the family was the top source for obtaining cards. Surprisingly, around 5% of males and females belonging to the group with the highest smoking levels applied for cards themselves. Age verification cards cannot fully prevent minors from accessing tobacco products. These findings suggest that a total ban on tobacco vending machines, not an age verification system, is needed to prevent sales to minors.
Chen, Gongbo; Li, Shanshan; Knibbs, Luke D; Hamm, N A S; Cao, Wei; Li, Tiantian; Guo, Jianping; Ren, Hongyan; Abramson, Michael J; Guo, Yuming
2018-09-15
Machine learning algorithms have very high predictive ability. However, no study has used machine learning to estimate historical concentrations of PM2.5 (particulate matter with aerodynamic diameter ≤ 2.5 μm) at a daily time scale in China at a national level. To estimate daily concentrations of PM2.5 across China during 2005-2016. Daily ground-level PM2.5 data were obtained from 1479 stations across China during 2014-2016. Data on aerosol optical depth (AOD), meteorological conditions and other predictors were downloaded. A random forests model (a non-parametric machine learning algorithm) and two traditional regression models were developed to estimate ground-level PM2.5 concentrations. The best-fit model was then utilized to estimate the daily concentrations of PM2.5 across China with a resolution of 0.1° (≈10 km) during 2005-2016. The daily random forests model showed much higher predictive accuracy than the other two traditional regression models, explaining the majority of spatial variability in daily PM2.5 [10-fold cross-validation (CV) R² = 83%, root mean squared prediction error (RMSE) = 28.1 μg/m³]. At the monthly and annual time-scale, the explained variability of average PM2.5 increased up to 86% (RMSE = 10.7 μg/m³ and 6.9 μg/m³, respectively). Taking advantage of a novel application of the modeling framework and the most recent ground-level PM2.5 observations, the machine learning method showed higher predictive ability than previous studies. The random forests approach can be used to estimate historical exposure to PM2.5 in China with high accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
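A schematic of the validation set-up with synthetic stand-ins for AOD and meteorological predictors: a random forest regressor scored by 10-fold cross-validated R² and RMSE.

```python
# Random forest regression with 10-fold cross-validated R^2 and RMSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))              # stand-ins for AOD, temperature, etc.
pm25 = 50 + 20 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=5, size=1000)

model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, pm25, cv=10)

ss_res = np.sum((pm25 - pred) ** 2)
ss_tot = np.sum((pm25 - pm25.mean()) ** 2)
print("CV R^2 =", 1 - ss_res / ss_tot)
print("RMSE =", np.sqrt(np.mean((pm25 - pred) ** 2)))
```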
Uniform resolution of compact identifiers for biomedical data
Wimalaratne, Sarala M.; Juty, Nick; Kunze, John; Janée, Greg; McMurry, Julie A.; Beard, Niall; Jimenez, Rafael; Grethe, Jeffrey S.; Hermjakob, Henning; Martone, Maryann E.; Clark, Tim
2018-01-01
Most biomedical data repositories issue locally unique accession numbers, but do not provide globally unique, machine-resolvable, persistent identifiers for their datasets, as required by publishers wishing to implement data citation in accordance with widely accepted principles. Local accessions may however be prefixed with a namespace identifier, providing global uniqueness. Such “compact identifiers” have been widely used in biomedical informatics to support global resource identification with local identifier assignment. We report here on our project to provide robust support for machine-resolvable, persistent compact identifiers in biomedical data citation, by harmonizing the Identifiers.org and N2T.net (Name-To-Thing) meta-resolvers and extending their capabilities. Identifiers.org services hosted at the European Molecular Biology Laboratory - European Bioinformatics Institute (EMBL-EBI), and N2T.net services hosted at the California Digital Library (CDL), can now resolve any given identifier from over 600 source databases to its original source on the Web, using a common registry of prefix-based redirection rules. We believe these services will be of significant help to publishers and others implementing persistent, machine-resolvable citation of research data. PMID:29737976
Pearce, J; Mason, K; Hiscock, R; Day, P
2008-10-01
To investigate associations between neighbourhood accessibility to gambling outlets (non-casino gaming machine locations, sports betting venues and casinos) and individual gambling behaviour in New Zealand. A Geographical Information Systems (GIS) measure of neighbourhood access to gambling venues. Two-level logistic regression models were fitted to examine the effects of neighbourhood access on individual gambling behaviour after controlling for potential individual- and neighbourhood-level confounding factors. 38,350 neighbourhoods across New Zealand. 12,529 respondents of the 2002/03 New Zealand Health Survey. Compared with those living in the quartile of neighbourhoods with the furthest access to a gambling venue, residents living in the quartile of neighbourhoods with the closest access were more likely (adjusted for age, sex, socio-economic status at the individual-level and deprivation, urban/rural status at the neighbourhood-level) to be a gambler (OR 1.60, 95% CI 1.20 to 2.15) or problem gambler (OR 2.70, 95% CI 1.03 to 7.05). When examined independently, neighbourhood access to venues with non-casino gaming machines (gambling: OR 1.67, 95% CI 1.28 to 2.18; problem gambling: OR 2.71, 95% CI 1.45 to 5.07) and sports betting venues (gambling: OR 1.67, 95% CI 1.28 to 2.18; problem gambling: OR 2.71, 95% CI 1.45 to 5.07) were similarly related. Neighbourhood access to opportunities for gambling is related to gambling and problem gambling behaviour, and contributes substantially to neighbourhood inequalities in gambling over and above-individual level characteristics.
Prediction of Protein-Protein Interaction Sites by Random Forest Algorithm with mRMR and IFS
Li, Bi-Qing; Feng, Kai-Yan; Chen, Lei; Huang, Tao; Cai, Yu-Dong
2012-01-01
Prediction of protein-protein interaction (PPI) sites is one of the most challenging problems in computational biology. Although great progress has been made by employing various machine learning approaches with numerous characteristic features, the problem is still far from being solved. In this study, we developed a novel predictor based on Random Forest (RF) algorithm with the Minimum Redundancy Maximal Relevance (mRMR) method followed by incremental feature selection (IFS). We incorporated features of physicochemical/biochemical properties, sequence conservation, residual disorder, secondary structure and solvent accessibility. We also included five 3D structural features to predict protein-protein interaction sites and achieved an overall accuracy of 0.672997 and MCC of 0.347977. Feature analysis showed that 3D structural features such as Depth Index (DPX) and surface curvature (SC) contributed most to the prediction of protein-protein interaction sites. It was also shown via site-specific feature analysis that the features of individual residues from PPI sites contribute most to the determination of protein-protein interaction sites. It is anticipated that our prediction method will become a useful tool for identifying PPI sites, and that the feature analysis described in this paper will provide useful insights into the mechanisms of interaction. PMID:22937126
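A compact sketch of the incremental feature selection (IFS) loop: given a feature ranking (mRMR in the paper; mutual information serves below as an accessible stand-in), grow the feature set one rank at a time and keep the subset with the best cross-validated score.

```python
# Incremental feature selection over a precomputed feature ranking.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

best_score, best_k = -np.inf, 0
for k in range(1, len(ranking) + 1):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    score = cross_val_score(clf, X[:, ranking[:k]], y, cv=5).mean()
    if score > best_score:
        best_score, best_k = score, k
print(f"best subset: top {best_k} features, CV accuracy {best_score:.3f}")
```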
Kevlar: Transitioning Helix for Research to Practice
2016-03-01
Salient features of Helix/Kevlar include developing high-entropy randomization techniques, automated program repairs leveraging highly-optimized virtual machine technology, and developing a novel framework...attacker from exploiting residual vulnerabilities in a wide variety of classes. Helix/Kevlar uses novel, fine-grained, high-entropy diversification...the Air Force, and IARPA).
Integrated semiconductor-magnetic random access memory system
NASA Technical Reports Server (NTRS)
Katti, Romney R. (Inventor); Blaes, Brent R. (Inventor)
2001-01-01
The present disclosure describes a non-volatile magnetic random access memory (RAM) system having a semiconductor control circuit and a magnetic array element. The integrated magnetic RAM system uses a CMOS control circuit to read and write data magnetoresistively. The system provides a fast-access, non-volatile, radiation-hard, high-density RAM for high speed computing.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
... DEPARTMENT OF COMMERCE International Trade Administration [C-580-851] Dynamic Random Access Memory Semiconductors from the Republic of Korea: Extension of Time Limit for Preliminary Results of Countervailing Duty... access memory semiconductors from the Republic of Korea, covering the period January 1, 2008 through...
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, various automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
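A toy successive-elimination sketch of the progressive-sampling idea only: candidate algorithm/hyper-parameter configurations are scored on a small sample first, the better half survives, and survivors are re-scored on progressively larger samples. The paper's Bayesian optimization machinery is considerably more involved than this.

```python
# A hedged sketch: progressive sampling with successive elimination.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

candidates = [LogisticRegression(max_iter=1000),
              SVC(C=1.0), SVC(C=100.0),
              RandomForestClassifier(n_estimators=50, random_state=0)]

for n in (250, 1000, 4000):                  # progressively larger samples
    scores = [cross_val_score(m, X[:n], y[:n], cv=3).mean() for m in candidates]
    ranked = sorted(zip(scores, range(len(candidates))), reverse=True)
    candidates = [candidates[i] for _, i in ranked[: max(1, len(candidates) // 2)]]
print("selected:", candidates[0])
```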
Wullems, Jorgen A; Verschueren, Sabine M P; Degens, Hans; Morse, Christopher I; Onambélé, Gladys L
2017-01-01
Accurate monitoring of sedentary behaviour and physical activity is key to investigate their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this, however, machine learning seems a highly promising future alternative. Hence, the current study compared between cut-off point and machine learning algorithms, for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. Thus, in a heterogeneous sample of forty participants (aged ≥60 years, 50% female) energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust for all individual's physiological and non-physiological characteristics and showing more performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry.
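To make the contrast concrete, the toy sketch below compares a cut-off point model (thresholding a single intensity signal) with a random forest trained on a fuller feature vector; the thresholds, features, and synthetic labels are illustrative, not validated cut-points.

```python
# Cut-off point classification vs. a random forest on synthetic activity data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
intensity = rng.gamma(2.0, 1.0, size=500)          # toy movement-intensity signal
posture = rng.normal(size=500)                     # e.g. thigh inclination
labels = (intensity > 2.5).astype(int) + (intensity > 5.0)  # SB / LPA / MVPA

# Cut-off point model: fixed thresholds on the single intensity signal.
cutoff_pred = (intensity > 2.4).astype(int) + (intensity > 5.2)
print("cut-off accuracy:", (cutoff_pred == labels).mean())

# Machine learning model: a forest over the full feature vector.
X = np.column_stack([intensity, posture])
rf = RandomForestClassifier(random_state=0)
print("random forest CV accuracy:", cross_val_score(rf, X, labels, cv=5).mean())
```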
Effects of promotional materials on vending sales of low-fat items in teachers' lounges.
Fiske, Amy; Cullen, Karen Weber
2004-01-01
This study examined the impact of an environmental intervention in the form of promotional materials and increased availability of low-fat items on vending machine sales. Ten vending machines were selected and randomly assigned to one of three conditions: a control condition or one of two experimental conditions. Vending machines in the two intervention conditions received three additional low-fat selections. Low-fat items were promoted at two levels: labels (intervention I) and labels plus signs (intervention II). The number of individual items sold and the total revenue generated were recorded weekly for each machine for 4 weeks. Use of promotional materials resulted in a small, but not significant, increase in the number of low-fat items sold, and total machine sales were not significantly affected by the change in product selection. Results of this study, although not statistically significant, suggest that environmental change may be a realistic means of positively influencing consumer behavior.
NASA Astrophysics Data System (ADS)
Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan
2018-03-01
High-resolution image change detection is one of the key technologies of remote sensing application, of great significance for resource surveys, environmental monitoring, precision agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN) and AdaBoost models were established to verify the suitability of different machine learning approaches for change detection. To compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM achieves higher overall accuracy with small samples than RF, AdaBoost and DBN for both binary and from-to change detection. As the number of samples increases, RF achieves higher overall accuracy than AdaBoost, SVM and DBN.
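A baseline version of such a comparison can be sketched as follows; features and labels are synthetic stand-ins for per-pixel change features, the small training fraction mimics the small-sample regime, and DBN is omitted since scikit-learn has no implementation. This is an illustration, not the authors' pipeline.

```python
# Minimal sketch: comparing three classifiers for binary change detection
# in a small-sample regime. X stands in for per-pixel difference features
# from two co-registered images; y is a toy change/no-change label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                  # stand-in for image features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)    # toy "change" label

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.05, random_state=0, stratify=y)  # few training samples

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "overall accuracy:", accuracy_score(y_te, model.predict(X_te)))
```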
Predicting the dissolution kinetics of silicate glasses using machine learning
NASA Astrophysics Data System (ADS)
Anoop Krishnan, N. M.; Mangalathu, Sujith; Smedskjaer, Morten M.; Tandia, Adama; Burton, Henry; Bauchy, Mathieu
2018-05-01
Predicting the dissolution rates of silicate glasses in aqueous conditions is a complex task as the underlying mechanism(s) remain poorly understood and the dissolution kinetics can depend on a large number of intrinsic and extrinsic factors. Here, we assess the potential of data-driven models based on machine learning to predict the dissolution rates of various aluminosilicate glasses exposed to a wide range of solution pH values, from acidic to caustic conditions. Four classes of machine learning methods are investigated, namely, linear regression, support vector machine regression, random forest, and artificial neural network. We observe that, although linear methods all fail to describe the dissolution kinetics, the artificial neural network approach offers excellent predictions, thanks to its inherent ability to handle non-linear data. Overall, we suggest that a more extensive use of machine learning approaches could significantly accelerate the design of novel glasses with tailored properties.
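The headline finding, that a non-linear learner succeeds where linear fits fail, is easy to reproduce in miniature; the sketch below uses an invented non-linear target and invented feature meanings, not the glass data.

```python
# Sketch: contrasting a linear model with a small neural network on a
# deliberately non-linear regression target, in the spirit of predicting
# (log) dissolution rates from composition and pH. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))           # e.g. oxide fractions and pH (invented)
y = np.sin(3 * X[:, 0]) + X[:, 3] ** 2   # non-linear target the ANN can capture

for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=1)):
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(type(model).__name__, "mean CV R^2:", round(r2, 3))
```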
ERIC Educational Resources Information Center
Sandler, Mark
1985-01-01
Discusses several concerns about nature of online public access catalogs (OPAC) that have particular import to reference librarians: user passivity and loss of control growing out of "human-machine interface" and the larger social context; and the tendency of computerized bibliographic systems to obfuscate human origins of library…
Open access for ALICE analysis based on virtualization technology
NASA Astrophysics Data System (ADS)
Buncic, P.; Gheata, M.; Schutz, Y.
2015-12-01
Open access is an important lever for long-term data preservation in a HEP experiment. To guarantee the usability of data analysis tools beyond the experiment lifetime it is crucial that third-party users from the scientific community have access to the data and associated software. The ALICE Collaboration has developed a layer of lightweight components built on top of virtualization technology to hide the complexity and details of the experiment-specific software. Users can perform basic analysis tasks within CernVM, a lightweight generic virtual machine, paired with an ALICE-specific contextualization. Once the virtual machine is launched, a graphical user interface is automatically started without any additional configuration. This interface allows downloading the base ALICE analysis software and running a set of ALICE analysis modules. Currently the available tools include fully documented tutorials for ALICE analysis, such as the measurement of strange particle production or the nuclear modification factor in Pb-Pb collisions. The interface can be easily extended to include an arbitrary number of additional analysis modules. We present the current status of the tools used by ALICE through the CERN open access portal, and the plans for future extensions of this system.
An Adaptive Channel Access Method for Dynamic Super Dense Wireless Sensor Networks.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Zhang, Xuekun
2015-12-03
Super dense and distributed wireless sensor networks have become very popular with the development of small cell technology, the Internet of Things (IoT), Machine-to-Machine (M2M) communications, Vehicle-to-Vehicle (V2V) communications and public safety networks. While densely deployed wireless networks provide one of the most important and sustainable solutions for improving sensing accuracy and spectral efficiency, a new channel access scheme needs to be designed to solve the channel congestion introduced by the high dynamics of competing nodes accessing the channel simultaneously. In this paper, we first analyzed the channel contention problem using a novel normalized channel contention analysis model which provides information on how to tune the contention window according to the state of channel contention. We then proposed an adaptive channel contention window tuning algorithm in which the contention window tuning rate is set dynamically based on the estimated channel contention level. Simulation results show that our proposed adaptive channel access algorithm based on fast contention window tuning can achieve more than 95% of the theoretical optimal throughput and a fairness index of 0.97, especially in dynamic and dense networks.
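The core of such a scheme is a controller that enlarges or shrinks the contention window with the estimated number of contenders. The proportional update rule and all constants below are illustrative assumptions, not the algorithm from the paper.

```python
# Illustrative contention-window (CW) controller: the window is tuned
# toward a size proportional to the estimated contention level. The
# update rule and constants are assumptions for illustration only.
CW_MIN, CW_MAX = 16, 1024

def tune_cw(cw, estimated_contenders, target_per_slot=1.0, rate=0.5):
    """Move CW toward a size proportional to the contention level."""
    desired = max(CW_MIN, estimated_contenders / target_per_slot * 2)
    new_cw = cw + rate * (desired - cw)   # larger `rate` => faster tuning
    return int(min(CW_MAX, max(CW_MIN, new_cw)))

cw = CW_MIN
for contenders in [5, 40, 200, 120, 10]:  # channel contention over time
    cw = tune_cw(cw, contenders)
    print(f"contenders={contenders:4d} -> CW={cw}")
```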
Guided Tour of Pythonian Museum
NASA Technical Reports Server (NTRS)
Lee, H. Joe
2017-01-01
At http://hdfeos.org/zoo, we have a large collection of Python examples for dealing with NASA HDF (Hierarchical Data Format) products. During this hands-on Python tutorial session, we'll present a few common hacks to access and visualize local NASA HDF data. We'll also cover how to access remote data served by OPeNDAP (Open-source Project for a Network Data Access Protocol). Since Python is a glue language, we will demonstrate how you can use it for your entire data workflow, from searching for data to analyzing it with machine learning.
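In the spirit of that tutorial, a local HDF5 granule can be opened, inspected and plotted as below. The group and dataset names are placeholders (the sketch first writes a toy file so it runs standalone); real NASA products use mission-specific layouts, and HDF4-era files need pyhdf instead of h5py.

```python
# Sketch: create a tiny HDF5 file, then open and inspect it the way one
# would a real granule. All names here are placeholders, not a real product.
import numpy as np
import h5py
import matplotlib.pyplot as plt

with h5py.File("toy_granule.h5", "w") as f:          # stand-in for a download
    f["/HDFEOS/GRIDS/Field"] = np.random.rand(90, 180)

with h5py.File("toy_granule.h5", "r") as f:
    f.visit(print)                                   # list groups/datasets
    data = f["/HDFEOS/GRIDS/Field"][...]             # read into a numpy array

plt.imshow(data, origin="lower")
plt.colorbar(label="field value")
plt.title("Quick-look of one HDF dataset")
plt.show()
```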
NASA Technical Reports Server (NTRS)
Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.
2016-01-01
In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking tests for future generation quantum annealing machines, classical and quantum mechanical optimization algorithms.
Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline
2014-01-01
Background: Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. Methods: We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. Results: The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Conclusion: Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel. PMID:24795875
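The post-processing idea, classify each one-minute window and then smooth the predictions over time, can be sketched as follows; the data here are synthetic stand-ins and the smoothing width is an illustrative choice, not the paper's setting.

```python
# Sketch: classify windows with a random forest, then smooth the predicted
# class probabilities over time with a moving average before taking labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 49))                 # 49 features per 1-min window
y = rng.integers(0, 5, size=600)               # 5 travel modes (toy labels)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X[:400], y[:400])
proba = clf.predict_proba(X[400:])             # shape (n_windows, n_classes)

k = 5                                          # moving-average width (windows)
kernel = np.ones(k) / k
smoothed = np.apply_along_axis(
    lambda col: np.convolve(col, kernel, mode="same"), 0, proba)
labels = clf.classes_[smoothed.argmax(axis=1)] # temporally smoothed predictions
print(labels[:20])
```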
Zemp, Roland; Tanadini, Matteo; Plüss, Stefan; Schnüriger, Karin; Singh, Navrag B; Taylor, William R; Lorenzetti, Silvio
2016-01-01
Occupational musculoskeletal disorders, particularly chronic low back pain (LBP), are ubiquitous due to prolonged static sitting or nonergonomic sitting positions. Therefore, the aim of this study was to develop an instrumented chair with force and acceleration sensors to determine the accuracy of automatically identifying the user's sitting position by applying five different machine learning methods (Support Vector Machines, Multinomial Regression, Boosting, Neural Networks, and Random Forest). Forty-one subjects were requested to sit four times in seven different prescribed sitting positions (total 1148 samples). Sixteen force sensor values and the backrest angle were used as the explanatory variables (features) for the classification. The different classification methods were compared by means of a Leave-One-Out cross-validation approach. The best performance was achieved using the Random Forest classification algorithm, producing a mean classification accuracy of 90.9% for subjects with which the algorithm was not familiar. The classification accuracy varied between 81% and 98% for the seven different sitting positions. The present study showed the possibility of accurately classifying different sitting positions by means of the introduced instrumented office chair combined with machine learning analyses. The use of such novel approaches for the accurate assessment of chair usage could offer insights into the relationships between sitting position, sitting behaviour, and the occurrence of musculoskeletal disorders.
NASA Astrophysics Data System (ADS)
Chang, Liang-Shun; Lin, Chrong Jung; King, Ya-Chin
2014-01-01
The temperature-dependent characteristics of random telegraph noise (RTN) in contact resistive random access memory (CRRAM) are studied in this work. In addition to the bi-level switching, the occurrences of middle states in the RTN signal are investigated. Based on its unique temperature-dependent characteristics, a new temperature sensing scheme is proposed for applications in ultra-low-power sensor modules.
Determinants of wood dust exposure in the Danish furniture industry.
Mikkelsen, Anders B; Schlunssen, Vivi; Sigsgaard, Torben; Schaumburg, Inger
2002-11-01
This paper investigates the relation between wood dust exposure in the furniture industry and occupational hygiene variables. During the winter of 1997-98, 54 factories were visited and 2362 personal, passive inhalable dust samples were obtained; the geometric mean was 0.95 mg/m³ and the geometric standard deviation was 2.08. In a first measuring round, 1685 dust concentrations were obtained. For some of the workers, repeated measurements were carried out 1 week (351 samples) and 2 weeks (326 samples) after the first measurement. Hygiene variables such as job, exhaust ventilation and cleaning procedures were documented. A multivariate analysis based on mixed effects models was used, with hygiene variables as fixed effects and worker, machine, department and factory as random effects. A modified stepwise strategy of model making was adopted, taking into account the hierarchically structured variables and making possible the exclusion of non-influential random as well as fixed effects. For woodworking, the following determinants of exposure increase the dust concentration: manual and automatic sanding, and use of compressed air with fully automatic and semi-automatic machines and for cleaning of work pieces. Decreased dust exposure resulted from the use of compressed air with manual machines, working at fully automatic or semi-automatic machines, functioning exhaust ventilation, work on the night shift, daily cleaning of rooms, cleaning of work pieces with a brush, vacuum cleaning of machines, supplementary fresh air intake, and a safety representative elected within the last 2 yr. For handling and assembling, increased exposure results from work at automatic machines and the presence of wood dust on the workpieces. Work on the evening shift, supplementary fresh air intake, work in a chair factory and special cleaning staff produced decreased exposure to wood dust. The implications of the results for the prevention of wood dust exposure are discussed.
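The modelling strategy, fixed hygiene effects plus random grouping effects, can be sketched with a single random intercept; variable names are invented, the data are synthetic, and the full crossed worker/machine/department/factory structure is omitted for brevity.

```python
# Sketch: log dust concentration as response, fixed-effect hygiene
# variables, and a random intercept for factory. All names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "log_dust": rng.normal(0, 0.7, n),
    "sanding": rng.integers(0, 2, n),          # manual/automatic sanding
    "ventilation": rng.integers(0, 2, n),      # functioning exhaust ventilation
    "factory": rng.integers(0, 20, n),         # random-effect grouping
})
model = smf.mixedlm("log_dust ~ sanding + ventilation", df,
                    groups=df["factory"])      # random intercept per factory
print(model.fit().summary())
```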
18 CFR 356.2 - General instructions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... procedures that assure the reliability of and ready access to data stored on machine readable media. Internal... of services performed by associated companies. Oil pipeline companies must assure the availability of...
Pseudo-random tool paths for CNC sub-aperture polishing and other applications.
Dunn, Christina R; Walker, David D
2008-11-10
In this paper we first contrast classical and CNC polishing techniques in regard to the repetitiveness of the machine motions. We then present a pseudo-random tool path for use with CNC sub-aperture polishing techniques and report polishing results from equivalent random and raster tool-paths. The random tool-path used - the unicursal random tool-path - employs a random seed to generate a pattern which never crosses itself. Because of this property, this tool-path is directly compatible with dwell time maps for corrective polishing. The tool-path can be used to polish any continuous area of any boundary shape, including surfaces with interior perforations.
Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing
2014-05-01
[Abbreviations recovered from the report: Red Hat Enterprise Linux; SaaS, software as a service; VM, virtual machine; vNUMA, virtual non-uniform memory access; WRF, weather research and forecasting.] The surviving abstract fragments indicate the experiment ran the weather research and forecasting (WRF) model in a standard configuration, comparing a D-VTM setup against a VMware virtualization solution of WRF.
A Unified Access Model for Interconnecting Heterogeneous Wireless Networks
2015-05-01
Defined Networking, OpenFlow, WiFi, LTE 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT UU 18. NUMBER OF PAGES 18 19a. NAME OF...Machine Configurations with WiFi and LTE 4 2.3 Three Virtual Machine Configurations with WiFi and LTE 5 3. Results and Discussion 5 4. Summary and...WiFi and long-term evolution ( LTE ), and created a communication pathway between them via a central controller node. Our simulation serves as a
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... Access Memory and Nand Flash Memory Devices and Products Containing Same; Notice of Institution of... importation, and the sale within the United States after importation of certain dynamic random access memory and NAND flash memory devices and products containing same by reason of infringement of certain claims...
Extending the coverage of the internet of things with low-cost nanosatellite networks
NASA Astrophysics Data System (ADS)
Almonacid, Vicente; Franck, Laurent
2017-09-01
Recent technology advances have made CubeSats not only an affordable means of access to space, but also promising platforms for developing a new variety of space applications. In this paper, we explore the idea of using nanosatellites as access points to provide extended coverage to the Internet of Things (IoT) and Machine-to-Machine (M2M) communications. This study is mainly motivated by two facts: on the one hand, it is already obvious that the number of machine-type devices deployed globally will experience exponential growth over the forthcoming years. This trend is driven by the available terrestrial cellular infrastructure, which allows support for M2M connectivity to be added at marginal cost. On the other hand, the same growth is not observed in remote areas that must rely on space-based connectivity. In such environments, the demand for M2M communications is potentially large, yet it is held back by the lack of cost-effective service providers. The traffic characteristics of typical M2M applications translate into the requirement for an extremely low cost per transmitted message. Under these strong economic constraints, we expect that nanosatellites in low Earth orbit will play a fundamental role in overcoming what we may call the IoT digital divide. The objective of this paper is therefore to provide a general analysis of a nanosatellite-based, global IoT/M2M network. We put emphasis on the engineering challenges faced in designing the Earth-to-space communication link, where the adoption of an efficient multiple-access scheme is paramount for ensuring connectivity to a large number of terminal nodes. In particular, the trade-offs between energy efficiency and access delay, and between energy efficiency and throughput, are discussed, and a novel access approach suitable for delay-tolerant applications is proposed. Thus, by keeping a system-level standpoint, we identify key issues and discuss perspectives towards energy-efficient and cost-effective solutions.
Vending machine policies and practices in Delaware.
Gemmill, Erin; Cotugna, Nancy
2005-04-01
Overweight has reached alarming proportions among America's youth. Although the cause of the rise in overweight rates in children and adolescents is certainly the result of the interaction of a variety of factors, the presence of vending machines in schools is one issue that has recently come to the forefront. Many states have passed or proposed legislation that limits student access to vending machines in schools or require that vending machines in schools offer healthier choices. The purposes of this study were (a) to assess the food and beverage vending machine offerings in the public school districts in the state of Delaware and (b) to determine whether there are any district vending policies in place other than the current U.S. Department of Agriculture regulations. The results of this study indicate the most commonly sold food and drink items in school vending machines are of minimal nutritional value. School administrators are most frequently in charge of the vending contract, as well as setting and enforcing vending machine policies. Suggestions are offered to assist school nurses, often the only health professional in the school, in becoming advocates for changes in school vending practices and policies that promote the health and well-being of children and adolescents.
Bypassing the Kohn-Sham equations with machine learning.
Brockherde, Felix; Vogt, Leslie; Li, Li; Tuckerman, Mark E; Burke, Kieron; Müller, Klaus-Robert
2017-10-11
Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields. Machine learning holds the promise of learning the energy functional via examples, bypassing the need to solve the Kohn-Sham equations. This should yield substantial savings in computer time, allowing larger systems and/or longer time-scales to be tackled, but attempts to machine-learn this functional have been limited by the need to find its derivative. The present work overcomes this difficulty by directly learning the density-potential and energy-density maps for test systems and various molecules. We perform the first molecular dynamics simulation with a machine-learned density functional on malonaldehyde and are able to capture the intramolecular proton transfer process. Learning density models now allows the construction of accurate density functionals for realistic molecular systems. Machine learning allows electronic structure calculations to access larger system sizes and, in dynamical simulations, longer time scales. Here, the authors perform such a simulation using a machine-learned density functional that avoids direct solution of the Kohn-Sham equations.
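Maps of this kind are typically learned with kernel models; as a rough, generic illustration (much simpler than the paper's actual featurization and data, with an invented toy functional), kernel ridge regression can fit a density-to-energy map as follows.

```python
# Generic sketch of learning a map from a discretised density to an
# energy with kernel ridge regression. The "functional" is invented.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(4)
densities = rng.uniform(size=(300, 50))            # toy densities on a grid
energies = (densities ** 2).sum(axis=1) * 0.5      # invented toy functional

krr = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.1)
krr.fit(densities[:200], energies[:200])
err = np.abs(krr.predict(densities[200:]) - energies[200:]).mean()
print("mean absolute error on held-out densities:", err)
```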
Development of machine learning models for diagnosis of glaucoma.
Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong
2017-01-01
The study aimed to develop machine learning models with strong predictive power and interpretability for the diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from examinations of RNFL thickness and VF, and also derived synthesized features from the original ones. We then selected the features best suited for classification (diagnosis) through feature evaluation. We used 100 cases as a test dataset and 399 cases as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). We repeatedly composed a learning model using the training dataset and evaluated it using the validation dataset, finally retaining the learning model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model shows the best performance, and the C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying glaucoma versus healthy eyes, and can be used to predict glaucoma from unseen examination records. Clinicians may reference the prediction results to make better decisions, and multiple learning models may be combined to increase prediction accuracy. The C5.0 model includes decision rules for prediction and can be used to explain the reasons for specific predictions.
Comparison of Random Forest and Support Vector Machine classifiers using UAV remote sensing imagery
NASA Astrophysics Data System (ADS)
Piragnolo, Marco; Masiero, Andrea; Pirotti, Francesco
2017-04-01
In recent years, surveying with unmanned aerial vehicles (UAVs) has attracted great attention due to decreasing costs and higher precision and flexibility of use. UAVs have been applied to geomorphological investigations, forestry, precision agriculture, cultural heritage assessment and archaeological purposes, and can be used for land use and land cover (LULC) classification. In the literature, there are two main approaches for classifying remote sensing imagery: pixel-based and object-based. On the one hand, the pixel-based approach mostly uses training areas to define classes and their respective spectral signatures. On the other hand, object-based classification considers pixels, scale, spatial information and texture information to create homogeneous objects. Machine learning methods have been applied successfully for classification, and their use is increasing with the availability of faster computing; such methods learn a model from previously labelled data. Two machine learning methods which have given good results in previous investigations are Random Forest (RF) and Support Vector Machine (SVM). The goal of this work is to compare the RF and SVM methods for classifying LULC using images collected with a fixed-wing UAV. The classification processing chain uses packages in R, an open source scripting language for data analysis, which provides all necessary algorithms. The imagery was acquired and processed in November 2015 with cameras recording red, blue, green and near-infrared reflectivity over a test area on the Agripolis campus in Italy. Images were elaborated and ortho-rectified with Agisoft Photoscan. The ortho-rectified image is the full data set, and test sets are derived by partial sub-setting of the full data set; different tests were carried out using from 2% to 20% of the total. Ten training sets and ten validation sets are obtained from each test set. The control dataset consists of an independent visual classification done by an expert over the whole area. The classes are (i) broadleaf, (ii) building, (iii) grass, (iv) headland access path, (v) road, (vi) sowed land, (vii) vegetable. RF and SVM are applied to each test set, and their performance is evaluated using three accuracy metrics: Kappa index, classification accuracy and classification error, each calculated in three ways: with K-fold cross-validation, on the validation test set, and on the full test set. The analysis indicates that SVM scores better under K-fold cross-validation and on the validation test set, while RF achieves the better result on the full test set. SVM thus appears to perform better with smaller training sets, whereas RF performs better as training sets get larger.
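The three metrics are straightforward to compute once predicted and reference label maps are in hand; a minimal sketch with toy labels (not the study's data) follows.

```python
# Sketch of the accuracy metrics used: Kappa index, classification accuracy
# and classification error for two sets of predicted LULC labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score, accuracy_score

rng = np.random.default_rng(5)
y_true = rng.integers(0, 7, size=1000)          # 7 LULC classes (toy reference)
y_rf = np.where(rng.random(1000) < 0.85, y_true, rng.integers(0, 7, 1000))
y_svm = np.where(rng.random(1000) < 0.80, y_true, rng.integers(0, 7, 1000))

for name, pred in (("RF", y_rf), ("SVM", y_svm)):
    acc = accuracy_score(y_true, pred)
    print(name, "kappa:", round(cohen_kappa_score(y_true, pred), 3),
          "accuracy:", round(acc, 3), "error:", round(1 - acc, 3))
```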
Recent advances in environmental data mining
NASA Astrophysics Data System (ADS)
Leuenberger, Michael; Kanevski, Mikhail
2016-04-01
Due to the large amount and complexity of data available nowadays in geo- and environmental sciences, we face the need to develop and incorporate more robust and efficient methods for their analysis, modelling and visualization. An important part of these developments deals with the elaboration and application of a contemporary and coherent methodology following the process from data collection to the justification and communication of the results. Recent fundamental progress in machine learning (ML) can considerably contribute to the development of the emerging field of environmental data science. The present research highlights and investigates the different issues that can occur when dealing with environmental data mining using cutting-edge machine learning algorithms. In particular, the main attention is paid to the description of the self-consistent methodology and two efficient algorithms, Random Forest (RF; Breiman, 2001) and Extreme Learning Machines (ELM; Huang et al., 2006), which have recently gained great popularity. Despite being based on two different concepts, decision trees versus artificial neural networks, both produce promising results for complex, high-dimensional and non-linear data modelling. In addition, the study discusses several important issues of data-driven modelling, including feature selection and uncertainties. The approach considered is accompanied by simulated and real data case studies from renewable resources assessment and natural hazards tasks. In conclusion, the current challenges and future developments in statistical environmental data learning are discussed. References: Breiman, L., 2001. Random Forests. Machine Learning 45 (1), 5-32. Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006. Extreme learning machine: theory and applications. Neurocomputing 70 (1-3), 489-501. Kanevski, M., Pozdnoukhov, A., Timonin, V., 2009. Machine Learning for Spatial Environmental Data. EPFL Press, Lausanne, Switzerland, 392 pp. Leuenberger, M., Kanevski, M., 2015. Extreme Learning Machines for spatial environmental data. Computers and Geosciences 85, 64-73.
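Of the two algorithms, ELM is simple enough to sketch in full: a random, untrained hidden layer followed by a least-squares solve for the output weights (following Huang et al., 2006). This toy version omits regularization and uses an invented environmental surface as the target.

```python
# Minimal Extreme Learning Machine: random hidden layer, output weights
# solved analytically by least squares. Educational sketch only.
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])             # toy environmental surface
W, b, beta = elm_fit(X[:400], y[:400])
mse = np.mean((elm_predict(X[400:], W, b, beta) - y[400:]) ** 2)
print("test MSE:", mse)
```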
Tedesco-Silva, Helio; Mello Offerni, Juliano Chrystian; Ayres Carneiro, Vanessa; Ivani de Paula, Mayara; Neto, Elias David; Brambate Carvalhinho Lemos, Francine; Requião Moura, Lúcio Roberto; Pacheco E Silva Filho, Alvaro; de Morais Cunha, Mirian de Fátima; Francisco da Silva, Erica; Miorin, Luiz Antonio; Demetrio, Daniela Priscila; Luconi, Paulo Sérgio; da Silva Luconi, Waldere Tania; Bobbio, Savina Adriana; Kuschnaroff, Liz Milstein; Noronha, Irene Lourdes; Braga, Sibele Lessa; Barsante, Renata Cristina; Mendes Moreira, João Cezar; Fernandes-Charpiot, Ida Maria Maximina; Abbud-Filho, Mario; Modelli de Andrade, Luis Gustavo; Dalsoglio Garcia, Paula; Tanajura Santamaria Saber, Luciana; Fernandes Laurindo, Alan; Chocair, Pedro Renato; Cuvello Neto, Américo Lourenço; Zanocco, Juliana Aparecida; Duboc de Almeida Soares Filho, Antonio Jose; Ferreira Aguiar, Wilson; Medina Pestana, Jose
2017-05-01
This study compared the use of static cold storage versus continuous hypothermic machine perfusion in a cohort of kidney transplant recipients at high risk for delayed graft function (DGF). In this national, multicenter, controlled trial, 80 pairs of kidneys recovered from brain-dead deceased donors were randomized to cold storage or machine perfusion, transplanted, and followed up for 12 months. The primary endpoint was the incidence of DGF. Secondary endpoints included the duration of DGF, hospital stay, primary nonfunction, estimated glomerular filtration rate, acute rejection, and allograft and patient survival. Mean cold ischemia time was high but not different between the 2 groups (25.6 ± 6.6 hours vs 25.05 ± 6.3 hours, P = 0.937). The incidence of DGF was lower in the machine perfusion group than in the cold storage group (cold storage 61% vs machine perfusion 45%, P = 0.031). Machine perfusion was independently associated with a reduced risk of DGF (odds ratio, 0.49; 95% confidence interval, 0.26-0.95). Mean estimated glomerular filtration rate tended to be higher in the machine perfusion group at day 28 (cold storage 40.6 ± 19.9 vs machine perfusion 49.0 ± 26.9 mL/min per 1.73 m²; P = 0.262) and 1 year (48.3 ± 19.8 vs 54.4 ± 28.6 mL/min per 1.73 m²; P = 0.201). No differences were observed in the incidence of acute rejection, primary nonfunction (0% vs 2.5%), graft loss (7.5% vs 10%), or death (8.8% vs 6.3%). In this cohort of recipients of deceased donor kidneys with high mean cold ischemia time and high incidence of DGF, the use of continuous machine perfusion was associated with a reduced risk of DGF compared with the traditional cold storage preservation method.
Secure Autonomous Automated Scheduling (SAAS). Rev. 1.1
NASA Technical Reports Server (NTRS)
Walke, Jon G.; Dikeman, Larry; Sage, Stephen P.; Miller, Eric M.
2010-01-01
This report describes network-centric operations in which a virtual mission operations center autonomously receives sensor triggers and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the UK-DMC, is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.
Performance Analysis of Abrasive Waterjet Machining Process at Low Pressure
NASA Astrophysics Data System (ADS)
Murugan, M.; Gebremariam, MA; Hamedon, Z.; Azhari, A.
2018-03-01
Normally, a commercial waterjet cutting machine can generate water pressure up to 600 MPa, a range used to machine a wide variety of materials; such machines are therefore expensive. There is thus a need to develop a low-cost waterjet machine in order to make the technology more accessible to the masses. Due to its low cost, such a machine may only be able to generate water pressure at a much reduced rate. The present study investigates the performance of the abrasive waterjet machining process at low cutting pressure using a self-developed low-cost waterjet machine. It aims to study the feasibility of machining various materials at low pressure, which can later aid further development of an effective low-cost waterjet machine. Three different materials were machined at a low pressure of 34 MPa: mild steel, aluminium alloy 6061 and Delrin® plastic. Furthermore, the traverse rate was varied between 1 and 3 mm/min. Cutting performance at low pressure was assessed for each material in terms of depth of penetration, kerf taper ratio and surface roughness. It was found that all samples could be machined at low cutting pressure with varied quality. The depth of penetration decreases with an increase in the traverse rate, while the surface roughness and kerf taper ratio increase with it. It can be concluded that a low-cost waterjet machine with a much reduced water pressure can be successfully used for machining certain materials with acceptable quality.
Jeffrey T. Walton
2008-01-01
Three machine learning subpixel estimation methods (Cubist, Random Forests, and support vector regression) were applied to estimate urban cover. Urban forest canopy cover and impervious surface cover were estimated from Landsat-7 ETM+ imagery using a higher resolution cover map resampled to 30 m as training and reference data. Three different band combinations (...
3/29/2018: Making Data Machine-Readable Webinar | National Agricultural Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tal, J.; Lopez, A.; Edwards, J.M.
1995-04-01
In this paper, an alternative to the traditional CNC machine tool controller is introduced. Software and hardware modules are described and their incorporation in a CNC control system is outlined. This type of CNC machine tool controller demonstrates that the technology is accessible and can be readily implemented in an open-architecture machine tool controller. The benefit to the user is greater controller flexibility that remains economically achievable. PC-based motion as well as non-motion features provide flexibility through a Windows environment. Upgrading this type of controller system through software revisions will keep the machine tool in a competitive state with minimal effort. Software and hardware modules are mass produced, permitting competitive procurement and incorporation. Open-architecture CNC systems provide diagnostics, enhancing maintainability and machine tool up-time. A major concern of traditional CNC systems has been operator training time, which can be greatly reduced by making use of Windows environment features.
ERIC Educational Resources Information Center
Buckland, Lawrence F.; Madden, Mary
From experimental work performed, and reported upon in this document, it is concluded that converting the New York State Library (NYSL) shelf list sample to machine readable form, and searching this shelf list using a remote access catalog are technically sound concepts though the capital costs of data conversion and system installation will be…
BCH codes for large IC random-access memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1983-01-01
In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-16
... Static Random Access Memories and Products Containing Same, DN 2816; the Commission is soliciting... importation of certain static random access memories and products containing same. The complaint names as...
NASA Technical Reports Server (NTRS)
Poole, L. R.
1974-01-01
A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
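The storage scheme amounts to fixed-size blocks addressed by seek offsets, so a single module can be read without loading the whole regional array. A small sketch of the idea follows; all sizes and file names are illustrative.

```python
# Sketch of the indexed-module idea: a large grid stored as fixed-size
# blocks in one binary file, with one block read on demand via a seek.
import numpy as np

ROWS, COLS, DTYPE = 64, 64, np.float32            # one module = 64x64 cells
MODULE_BYTES = ROWS * COLS * np.dtype(DTYPE).itemsize

def read_module(path, index):
    """Read only module `index` from the packed bathymetry file."""
    with open(path, "rb") as f:
        f.seek(index * MODULE_BYTES)              # jump straight to the block
        raw = f.read(MODULE_BYTES)
    return np.frombuffer(raw, dtype=DTYPE).reshape(ROWS, COLS)

# Write a toy file of 105 modules, then fetch module 42 without
# touching the other 104.
modules = np.arange(105 * ROWS * COLS, dtype=DTYPE)
modules.tofile("bathymetry.bin")
print(read_module("bathymetry.bin", 42)[0, :4])
```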
Machine Learning Techniques for Prediction of Early Childhood Obesity.
Dugan, T M; Mukhopadhyay, S; Carroll, A; Downs, S
2015-01-01
This paper aims to predict childhood obesity after age two, using only data collected prior to the second birthday by a clinical decision support system called CHICA. Analyses of six different machine learning methods (RandomTree, RandomForest, J48, ID3, Naïve Bayes, and Bayes) trained on CHICA data show that an accurate, sensitive model can be created. Of the methods analyzed, the ID3 model trained on the CHICA dataset showed the best overall performance, with an accuracy of 85% and a sensitivity of 89%. Additionally, the ID3 model had a positive predictive value of 84% and a negative predictive value of 88%. The structure of the tree also gives insight into the strongest predictors of future obesity in children. Many of the strongest predictors seen in the ID3 modeling of the CHICA dataset have been independently validated in the literature as correlated with obesity, thereby supporting the validity of the model. This study demonstrated that data from a production clinical decision support system can be used to build an accurate machine learning model to predict obesity in children after age two.
A tale of two "forests": random forest machine learning aids tropical forest carbon mapping.
Mascaro, Joseph; Asner, Gregory P; Knapp, David E; Kennedy-Bowdoin, Ty; Martin, Roberta E; Anderson, Christopher; Higgins, Mark; Chadwick, K Dana
2014-01-01
Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, both with and without spatial contextual modeling, by including (in the latter case) x and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (the so-called "out-of-bag" estimate), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best performing run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% for Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha(-1) when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.
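The "spatial context" variant amounts to appending the sample coordinates to the predictor matrix; a hedged sketch on synthetic data (not the LiDAR data set) follows.

```python
# Sketch: fit the same random forest with and without sample coordinates
# (x, y) as extra predictors. Covariates and carbon values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
n = 3000
xy = rng.uniform(0, 100, size=(n, 2))            # sample coordinates
covars = rng.normal(size=(n, 5))                 # remotely sensed layers (toy)
carbon = (50 + 10 * np.sin(xy[:, 0] / 15)        # spatially structured signal
          + 5 * covars[:, 0] + rng.normal(0, 3, n))

train, test = slice(0, 1500), slice(1500, None)
for label, X in (("without x,y", covars),
                 ("with x,y", np.hstack([covars, xy]))):
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X[train], carbon[train])
    r2 = r2_score(carbon[test], rf.predict(X[test]))
    print(label, "held-out R^2:", round(r2, 3))
```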
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
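The flavor of inverse-probability resampling can be sketched generically: draw training points with weights inversely proportional to their sampling (inclusion) probabilities, so that resamples resemble the population. This illustrates the idea only; the paper's stochastic oversampling and parametric bagging estimators add noise and distribution modeling beyond this sketch.

```python
# Generic sketch: resample each training point with weight 1/p_i, where
# p_i is its known inclusion probability under the stratified design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ip_resample(X, y, incl_prob, rng):
    w = 1.0 / incl_prob
    idx = rng.choice(len(y), size=len(y), replace=True, p=w / w.sum())
    return X[idx], y[idx]

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 1.2).astype(int)                 # rare outcome
incl_prob = np.where(y == 1, 1.0, 0.2)          # cases oversampled by design

X_rs, y_rs = ip_resample(X, y, incl_prob, rng)  # resembles the population
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_rs, y_rs)
```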
Applications of random forest feature selection for fine-scale genetic population assignment.
Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G
2018-02-01
Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
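A plain (non-regularized) version of random-forest marker ranking can be sketched on simulated genotypes as below; the guided and regularized variants used in the paper are separate R packages and are not reproduced here. For brevity the panel is selected on the full data before cross-validation, which inflates accuracy relative to the paper's protocol.

```python
# Sketch: rank SNPs by random-forest importance, keep the top-k as a
# panel, then score self-assignment accuracy. Genotypes are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n, n_snps = 400, 2000
pop = rng.integers(0, 4, size=n)                       # 4 populations
geno = rng.integers(0, 3, size=(n, n_snps)).astype(float)
geno[:, :30] += pop[:, None] * 0.6                     # 30 informative SNPs

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(geno, pop)
panel = np.argsort(rf.feature_importances_)[::-1][:300]  # top-300 SNP panel
acc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                      geno[:, panel], pop, cv=5).mean()
print("self-assignment accuracy with 300-SNP panel:", round(acc, 3))
```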
Method for Evaluation of Outage Probability on Random Access Channel in Mobile Communication Systems
NASA Astrophysics Data System (ADS)
Kollár, Martin
2012-05-01
In order to access the cell, all mobile communication technologies use a so-called random-access procedure. For example, in GSM this is represented by sending the CHANNEL REQUEST message from the Mobile Station (MS) to the Base Transceiver Station (BTS), which is consequently forwarded as a CHANNEL REQUIRED message to the Base Station Controller (BSC). If the BTS decodes some noise on the Random Access Channel (RACH) as a random access by mistake (a so-called 'phantom RACH'), then it is pure coincidence which 'establishment cause' the BTS thinks it has recognized. A typical invalid channel access request, or phantom RACH, is characterized by an IMMEDIATE ASSIGNMENT procedure (assignment of an SDCCH or TCH) which is not followed by an ESTABLISH INDICATION from MS to BTS. In this paper, a mathematical model for evaluating the Power RACH Busy Threshold (RACHBT) so as to guarantee a predetermined outage probability on the RACH is described and discussed. It focuses on the Global System for Mobile Communications (GSM); however, the obtained results can be generalized to the remaining mobile technologies.
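If the noise-level statistics are known, setting a busy threshold for a target phantom-RACH probability reduces to a quantile computation. A minimal sketch follows, assuming Gaussian noise in dB purely for illustration; the paper derives its own model and the numbers here are invented.

```python
# Illustrative threshold setting: with noise (assumed) Gaussian in dB, the
# busy threshold that keeps the phantom-RACH probability below a target is
# the corresponding upper quantile of the noise distribution.
from scipy.stats import norm

noise_mean_dbm = -110.0       # assumed mean noise level (illustrative)
noise_std_db = 3.0            # assumed standard deviation (illustrative)
target_phantom_prob = 1e-3    # allowed probability of noise crossing threshold

rachbt = norm.isf(target_phantom_prob, loc=noise_mean_dbm, scale=noise_std_db)
print(f"RACH busy threshold: {rachbt:.1f} dBm")   # roughly -100.7 dBm here
```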
Achieving human and machine accessibility of cited data in scholarly publications
Starr, Joan; Castro, Eleni; Crosas, Mercè; Dumontier, Michel; Downs, Robert R.; Duerr, Ruth; Haak, Laurel L.; Haendel, Melissa; Herman, Ivan; Hodson, Simon; Hourclé, Joe; Kratz, John Ernest; Lin, Jennifer; Nielsen, Lars Holm; Nurnberger, Amy; Proell, Stefan; Rauber, Andreas; Sacchi, Simone; Smith, Arthur; Taylor, Mike; Clark, Tim
2015-01-01
Reproducibility and reusability of research results is an important concern in scientific communication and science policy. A foundational element of reproducibility and reusability is the open and persistently available presentation of research data. However, many common approaches for primary data publication in use today do not achieve sufficient long-term robustness, openness, accessibility or uniformity. Nor do they permit comprehensive exploitation by modern Web technologies. This has led to several authoritative studies recommending uniform direct citation of data archived in persistent repositories. Data are to be considered as first-class scholarly objects, and treated similarly in many ways to cited and archived scientific and scholarly literature. Here we briefly review the most current and widely agreed set of principle-based recommendations for scholarly data citation, the Joint Declaration of Data Citation Principles (JDDCP). We then present a framework for operationalizing the JDDCP; and a set of initial recommendations on identifier schemes, identifier resolution behavior, required metadata elements, and best practices for realizing programmatic machine actionability of cited data. The main target audience for the common implementation guidelines in this article consists of publishers, scholarly organizations, and persistent data repositories, including technical staff members in these organizations. But ordinary researchers can also benefit from these recommendations. The guidance provided here is intended to help achieve widespread, uniform human and machine accessibility of deposited data, in support of significantly improved verification, validation, reproducibility and re-use of scholarly/scientific data. PMID:26167542
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically select these computers instantiating Virtual Machines, according to the requirements (computing, storage and network resources) of users through either the Open Cloud Computing Interface API, or through a web console. An interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In some other instances the activity concerns development and testing of services and thus implies the modification of the system configuration (and, therefore, root-access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
Improving Hip-Worn Accelerometer Estimates of Sitting Using Machine Learning Methods.
Kerr, Jacqueline; Carlson, Jordan; Godbole, Suneeta; Cadmus-Bertram, Lisa; Bellettiere, John; Hartman, Sheri
2018-02-13
To improve estimates of sitting time from hip-worn accelerometers used in large cohort studies by employing machine learning methods developed on free-living activPAL data. Thirty breast cancer survivors concurrently wore a hip-worn accelerometer and a thigh-worn activPAL for 7 days. A random forest classifier, trained on the activPAL data, was employed to detect sitting, standing and sit-stand transitions in 5-second windows of the hip-worn accelerometer data. The classifier estimates were compared to the standard accelerometer cut point, and significant differences across different bout lengths were investigated using mixed effect models. Overall, the algorithm predicted the postures with moderate accuracy (stepping 77%, standing 63%, sitting 67%, sit-to-stand 52% and stand-to-sit 51%). Daily-level analyses indicated that errors in transition estimates occurred only during sitting bouts of 2 minutes or less. The standard cut point was significantly different from the activPAL across all bout lengths, overestimating short bouts and underestimating long bouts. This is among the first algorithms for sitting and standing for hip-worn accelerometer data to be trained entirely on free-living activPAL data. The new algorithm detected prolonged sitting, which has been shown to be most detrimental to health. Further validation and training in larger cohorts is warranted.
CD-REST: a system for extracting chemical-induced disease relation in literature.
Xu, Jun; Wu, Yonghui; Zhang, Yaoyun; Wang, Jingqi; Lee, Hee-Jin; Xu, Hua
2016-01-01
Mining chemical-induced disease relations embedded in the vast biomedical literature could facilitate a wide range of computational biomedical applications, such as pharmacovigilance. The BioCreative V organized a Chemical Disease Relation (CDR) Track regarding chemical-induced disease relation extraction from biomedical literature in 2015. We participated in all subtasks of this challenge. In this article, we present our participation system Chemical Disease Relation Extraction SysTem (CD-REST), an end-to-end system for extracting chemical-induced disease relations in biomedical literature. CD-REST consists of two main components: (1) a chemical and disease named entity recognition and normalization module, which employs the Conditional Random Fields algorithm for entity recognition and a Vector Space Model-based approach for normalization; and (2) a relation extraction module that classifies both sentence-level and document-level candidate drug-disease pairs by support vector machines. Our system achieved the best performance on the chemical-induced disease relation extraction subtask in the BioCreative V CDR Track, demonstrating the effectiveness of our proposed machine learning-based approaches for automatic extraction of chemical-induced disease relations in biomedical literature. The CD-REST system provides web services using HTTP POST requests. The web services can be accessed from http://clinicalnlptool.com/cdr. The online CD-REST demonstration system is available at http://clinicalnlptool.com/cdr/cdr.html. Database URL: http://clinicalnlptool.com/cdr; http://clinicalnlptool.com/cdr/cdr.html. © The Author(s) 2016. Published by Oxford University Press.
Use of Online Machine Translation for Nursing Literature: A Questionnaire-Based Survey
Anazawa, Ryoko; Ishikawa, Hirono; Takahiro, Kiuchi
2013-01-01
Background: The language barrier is a significant obstacle for nurses who are not native English speakers to obtain information from international journals. Freely accessible online machine translation (MT) offers a possible solution to this problem. Aim: To explore how Japanese nursing professionals use online MT and perceive its usability in reading English articles, and to discuss what should be considered for better utilisation of online MT in lessening the language barrier. Method: In total, 250 randomly selected assistants and research associates at nursing colleges across Japan answered a questionnaire examining the current use of online MT and its perceived usability among Japanese nurses, along with the number of articles read in English and the perceived language barrier. The items were rated on Likert scales, and t-test, ANOVA, chi-square test, and Spearman's correlation were used for analyses. Results: Of the participants, 73.8% had used online MT. More than half of them felt it was usable. The language barrier was strongly felt, and academic degrees and English proficiency level were associated factors. The perceived language barrier was related to the frequency of online MT use. No associated factor was found for the perceived usability of online MT. Conclusion: Language proficiency is an important factor for optimum utilisation of MT. A need for education in the English language, reading scientific papers, and online MT training was indicated. Cooperation with developers and providers of MT for the improvement of their systems is required. PMID:23459140
Surface wind characteristics of some Aleutian Islands. [for selection of windpowered machine sites
NASA Technical Reports Server (NTRS)
Wentink, T., Jr.
1973-01-01
The wind power potential of Alaska is assessed in order to determine promising windpower sites for construction of wind machines and for shipment of wind-derived energy. Analyses of near-surface wind data from promising Aleutian sites accessible by ocean transport indicate probable velocity regimes and also reveal deficiencies in the available data. It is shown that winds for some degree of power generation are available 77 percent of the time in the Aleutians, with peak velocities depending on location.
Data-Driven Learning of Total and Local Energies in Elemental Boron
NASA Astrophysics Data System (ADS)
Deringer, Volker L.; Pickard, Chris J.; Csányi, Gábor
2018-04-01
The allotropes of boron continue to challenge structural elucidation and solid-state theory. Here we use machine learning combined with random structure searching (RSS) algorithms to systematically construct an interatomic potential for boron. Starting from ensembles of randomized atomic configurations, we use alternating single-point quantum-mechanical energy and force computations, Gaussian approximation potential (GAP) fitting, and GAP-driven RSS to iteratively generate a representation of the element's potential-energy surface. Beyond the total energies of the very different boron allotropes, our model readily provides atom-resolved, local energies and thus deepened insight into the frustrated β -rhombohedral boron structure. Our results open the door for the efficient and automated generation of GAPs, and other machine-learning-based interatomic potentials, and suggest their usefulness as a tool for materials discovery.
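A toy analogue of the iterative loop just described, under loudly stated assumptions: kernel ridge regression stands in for the GAP, a harmonic pair energy stands in for the single-point quantum-mechanical calls, and random generation plus model-guided selection stands in for GAP-driven RSS; none of these stand-ins are the authors' components.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

def true_energy(x):
    # Toy harmonic pair energy standing in for a single-point QM calculation.
    d = np.linalg.norm(x[None, :, :] - x[:, None, :], axis=-1)
    d = d[np.triu_indices(len(x), 1)]
    return np.sum((d - 1.5) ** 2)

def descriptor(x):
    # Crude structural descriptor: sorted pairwise distances.
    d = np.linalg.norm(x[None, :, :] - x[:, None, :], axis=-1)
    return np.sort(d[np.triu_indices(len(x), 1)])

def random_structures(n, atoms=4):
    return [rng.uniform(0.8, 2.5, size=(atoms, 3)) for _ in range(n)]

dataset = random_structures(50)                 # randomized seed configurations
energies = [true_energy(s) for s in dataset]

for it in range(3):                             # iterate: fit, search, relabel
    model = KernelRidge(kernel="rbf", alpha=1e-3).fit(
        [descriptor(s) for s in dataset], energies)
    candidates = random_structures(200)
    preds = model.predict([descriptor(s) for s in candidates])
    best = [candidates[i] for i in np.argsort(preds)[:10]]  # model-guided picks
    dataset += best
    energies += [true_energy(s) for s in best]  # "QM" relabelling of new structures
    print(f"iter {it}: dataset size {len(dataset)}")
```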
NASA Astrophysics Data System (ADS)
Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.
2017-02-01
We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ~60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
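A hedged illustration of the algorithm comparison described, scoring each classifier by the true skill statistic (TSS = TPR - FPR); the feature matrix and flare labels are random stand-ins for the solar feature database.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X = np.random.default_rng(1).normal(size=(2000, 60))   # ~60 AR features per sample
y = (X[:, 0] + X[:, 1] > 0).astype(int)                # stand-in flare/no-flare label
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

for clf in (SVC(), KNeighborsClassifier(), ExtraTreesClassifier()):
    tn, fp, fn, tp = confusion_matrix(yte, clf.fit(Xtr, ytr).predict(Xte)).ravel()
    tss = tp / (tp + fn) - fp / (fp + tn)               # true skill statistic
    print(type(clf).__name__, round(tss, 3))
```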
Fraccaro, Paolo; Nicolo, Massimo; Bonetto, Monica; Giacomini, Mauro; Weller, Peter; Traverso, Carlo Enrico; Prosperi, Mattia; OSullivan, Dympna
2015-01-27
To investigate machine learning methods, ranging from simpler interpretable techniques to complex (non-linear) "black-box" approaches, for automated diagnosis of Age-related Macular Degeneration (AMD). Data from healthy subjects and patients diagnosed with AMD or other retinal diseases were collected during routine visits via an Electronic Health Record (EHR) system. Patients' attributes included demographics and, for each eye, presence/absence of major AMD-related clinical signs (soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal haemorrhage, subretinal fluid, macula thickness, macular scar, subretinal fibrosis). Interpretable techniques known as white-box methods, including logistic regression and decision trees, as well as less interpretable techniques known as black-box methods, such as support vector machines (SVM), random forests and AdaBoost, were used to develop models (trained and validated on unseen data) to diagnose AMD. The gold standard was confirmed diagnosis of AMD by physicians. Sensitivity, specificity and area under the receiver operating characteristic curve (AUC) were used to assess performance. The study population included 487 patients (912 eyes). In terms of AUC, random forests, logistic regression and AdaBoost showed a mean performance of 0.92, followed by SVM and decision trees (0.90). All machine learning models identified soft drusen and age as the most discriminating variables in clinicians' decision pathways to diagnose AMD. Both black-box and white-box methods performed well in identifying diagnoses of AMD and their decision pathways. Machine learning models developed through the proposed approach, relying on clinical signs identified by retinal specialists, could be embedded into EHRs to provide physicians with real-time (interpretable) support.
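A sketch of the white-box versus black-box comparison under stated assumptions: the binary clinical-sign features and AMD labels are synthetic, and cross-validated AUC stands in for the study's train/validate protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(912, 10)).astype(float)  # per-eye presence/absence of signs
y = (X[:, 0] * rng.random(912) > 0.3).astype(int)     # stand-in AMD diagnosis label

models = {
    "logistic (white-box)": LogisticRegression(max_iter=1000),
    "decision-surface SVM": SVC(),
    "random forest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}
for name, m in models.items():
    auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
    print(name, round(auc, 2))
```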
Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris
2016-01-01
Phenotyping local crop cultivars is becoming more and more important, as they are an important genetic source for breeding – especially in regard to inherent root system architectures. Machine learning algorithms are promising tools to assist in the analysis of complex data sets; novel approaches are needed to apply them to root phenotyping data of mature plants. A greenhouse experiment was conducted in large, sand-filled columns to differentiate 16 European Pisum sativum cultivars based on 36 manually derived root traits. By combining random forest and support vector machine models, machine learning algorithms were successfully used for unbiased identification of the most distinguishing root traits and subsequent pairwise cultivar differentiation. Up to 86% of pea cultivar pairs could be distinguished based on the top five important root traits (Timp5); Timp5 differed widely between cultivar pairs. Selecting top important root traits (Timp) provided significantly improved classification compared to using all available traits or randomly selected trait sets. The most frequent Timp of mature pea cultivars was the total surface area of lateral roots originating from tap root segments at 0–5 cm depth. The high classification rate implies that culturing did not lead to a major loss of variability in root system architecture in the studied pea cultivars. Our results illustrate the potential of machine learning approaches for unbiased (root) trait selection and cultivar classification based on rather small, complex phenotypic data sets derived from pot experiments. Powerful statistical approaches are essential to make use of the increasing amount of (root) phenotyping information, integrating the complex trait sets describing crop cultivars. PMID:27999587
Machine learning of big data in gaining insight into successful treatment of hypertension.
Koren, Gideon; Nordon, Galia; Radinsky, Kira; Shalev, Varda
2018-06-01
Despite effective medications, rates of uncontrolled hypertension remain high. Treatment protocols are largely based on randomized trials and meta-analyses of these studies. The objective of this study was to test the utility of machine learning of big data in gaining insight into the treatment of hypertension. We applied machine learning techniques, such as decision trees and neural networks, to a large set of patients to identify determinants that contribute to the success of hypertension drug treatment. We also identified concomitant drugs not considered to have antihypertensive activity that may contribute to lowering blood pressure (BP). Higher initial BP predicts lower success rates. Among the medication options and their combinations, treatment with beta blockers appears to be more commonly effective, which is not reflected in contemporary guidelines. Among numerous concomitant drugs taken by hypertensive patients, proton pump inhibitors (PPIs) and HMG Co-A reductase inhibitors (statins) significantly improved the success rate of hypertension treatment. In conclusion, machine learning of big data is a novel method to identify effective antihypertensive therapy and to repurpose medications already on the market for new indications. Our results related to beta blockers, stemming from machine learning of a large and diverse set of big data, in contrast to the much narrower criteria for randomized clinical trials (RCTs), should be corroborated and affirmed by other methods, as they hold potential promise for an old class of drugs which may be presently underutilized. These previously unrecognized effects of PPIs and statins have been very recently identified as effective in lowering BP in preliminary clinical observations, lending credibility to our big data results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Foster, I.; Gawor, J.
In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit to communicate also with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Learning About Climate and Atmospheric Models Through Machine Learning
NASA Astrophysics Data System (ADS)
Lucas, D. D.
2017-12-01
From the analysis of ensemble variability to improving simulation performance, machine learning algorithms can play a powerful role in understanding the behavior of atmospheric and climate models. To learn about model behavior, we create training and testing data sets through ensemble techniques that sample different model configurations and values of input parameters, and then use supervised machine learning to map the relationships between the inputs and outputs. Following this procedure, we have used support vector machines, random forests, gradient boosting and other methods to investigate a variety of atmospheric and climate model phenomena. We have used machine learning to predict simulation crashes, estimate the probability density function of climate sensitivity, optimize simulations of the Madden Julian oscillation, assess the impacts of weather and emissions uncertainty on atmospheric dispersion, and quantify the effects of model resolution changes on precipitation. This presentation highlights recent examples of our applications of machine learning to improve the understanding of climate and atmospheric models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A
2017-07-25
Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also to show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
Introduction to machine learning for brain imaging.
Lemm, Steven; Blankertz, Benjamin; Dickhaus, Thorsten; Müller, Klaus-Robert
2011-05-15
Machine learning and pattern recognition algorithms have in the past years developed to become a workhorse in brain imaging and the computational neurosciences, as they are instrumental for mining vast amounts of neural data of ever-increasing measurement precision and for detecting minuscule signals from an overwhelming noise floor. They provide the means to decode and characterize task-relevant brain states and to distinguish them from non-informative brain signals. While undoubtedly this machinery has helped to gain novel biological insights, it also holds the danger of potential unintentional abuse. Ideally, machine learning techniques should be usable by any non-expert; unfortunately, they typically are not. Overfitting and other pitfalls may occur and lead to spurious and nonsensical interpretation. The goal of this review is therefore to provide an accessible and clear introduction to the strengths, and also the inherent dangers, of machine learning usage in the neurosciences. Copyright © 2010 Elsevier Inc. All rights reserved.
Using PVM to host CLIPS in distributed environments
NASA Technical Reports Server (NTRS)
Myers, Leonard; Pohl, Kym
1994-01-01
It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributed computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation with heterogeneous distributed computing using multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and enable the full range of PVM facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ukwatta, T. N.; Wozniak, P. R.; Gehrels, N.
Studies of high-redshift gamma-ray bursts (GRBs) provide important information about the early Universe such as the rates of stellar collapsars and mergers, the metallicity content, constraints on the re-ionization period, and probes of the Hubble expansion. Rapid selection of high-z candidates from GRB samples reported in real time by dedicated space missions such as Swift is the key to identifying the most distant bursts before the optical afterglow becomes too dim to warrant a good spectrum. Here, we introduce 'machine-z', a redshift prediction algorithm and a 'high-z' classifier for Swift GRBs based on machine learning. Our method relies exclusively on canonical data commonly available within the first few hours after the GRB trigger. Using a sample of 284 bursts with measured redshifts, we trained a randomized ensemble of decision trees (random forest) to perform both regression and classification. Cross-validated performance studies show that the correlation coefficient between machine-z predictions and the true redshift is nearly 0.6. At the same time, our high-z classifier can achieve 80 per cent recall of true high-redshift bursts, while incurring a false positive rate of 20 per cent. With a 40 per cent false positive rate the classifier can achieve ~100 per cent recall. As a result, the most reliable selection of high-redshift GRBs is obtained by combining predictions from both the high-z classifier and the machine-z regressor.
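A conceptual stand-in for the two-model setup described (random-forest regression for machine-z, random-forest classification for high-z), showing how lowering the decision threshold trades false positives for recall; the GRB features and redshifts below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(284, 12))                  # stand-in prompt-emission features
z = np.abs(X[:, 0]) * 2 + rng.random(284)       # stand-in redshift
Xtr, Xte, ztr, zte = train_test_split(X, z, random_state=3)

machine_z = RandomForestRegressor(random_state=3).fit(Xtr, ztr)
print("corr:", np.corrcoef(machine_z.predict(Xte), zte)[0, 1])

high_z = RandomForestClassifier(random_state=3).fit(Xtr, ztr > 4.0)
p = high_z.predict_proba(Xte)[:, 1]
for thresh in (0.5, 0.3, 0.1):                  # lower threshold -> higher recall, more false positives
    print(thresh, int((p > thresh).sum()), "bursts flagged high-z")
```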
Method and apparatus for precision laser micromachining
Chang, Jim; Warner, Bruce E.; Dragon, Ernest P.
2000-05-02
A method and apparatus for micromachining and microdrilling which results in a machined part of superior surface quality is provided. The system uses a near diffraction limited, high repetition rate, short pulse length, visible wavelength laser. The laser is combined with a high speed precision tilting mirror and suitable beam shaping optics, thus allowing a large amount of energy to be accurately positioned and scanned on the workpiece. As a result of this system, complicated, high resolution machining patterns can be achieved. A cover plate may be temporarily attached to the workpiece. Then as the workpiece material is vaporized during the machining process, the vapors condense on the cover plate rather than the surface of the workpiece. In order to eliminate cutting rate variations as the cutting direction is varied, a randomly polarized laser beam is utilized. A rotating half-wave plate is used to achieve the random polarization. In order to correctly locate the focus at the desired location within the workpiece, the position of the focus is first determined by monitoring the speckle size while varying the distance between the workpiece and the focussing optics. When the speckle size reaches a maximum, the focus is located at the first surface of the workpiece. After the location of the focus has been determined, it is repositioned to the desired location within the workpiece, thus optimizing the quality of the machined area.
Uncertainty in Random Forests: What does it mean in a spatial context?
NASA Astrophysics Data System (ADS)
Klump, Jens; Fouedjio, Francky
2017-04-01
Geochemical surveys are an important part of exploration for mineral resources and of environmental studies. The samples and chemical analyses are often laborious and difficult to obtain and therefore come at a high cost. As a consequence, these surveys are characterised by datasets with large numbers of variables but relatively few data points when compared to conventional big data problems. With more remote sensing platforms and sensor networks being deployed, large volumes of auxiliary data of the surveyed areas are becoming available. The use of these auxiliary data has the potential to improve the prediction of chemical element concentrations over the whole study area. Kriging is a well-established geostatistical method for the prediction of spatial data but requires significant pre-processing and makes some basic assumptions about the underlying distribution of the data. Some machine learning algorithms, on the other hand, may require less data pre-processing and are non-parametric. In this study we used a dataset provided by Kirkwood et al. [1] to explore the potential use of Random Forest in geochemical mapping. We chose Random Forest because it is a well-understood machine learning method and has the advantage that it provides us with a measure of uncertainty. By comparing Random Forest to Kriging we found that both methods produced comparable maps of estimated values for our variables of interest. Kriging outperformed Random Forest for variables of interest with relatively strong spatial correlation. The measure of uncertainty provided by Random Forest seems to be quite different from the measure of uncertainty provided by Kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. In conclusion, our preliminary results show that the model-driven approach in geostatistics gives us more reliable estimates for our target variables than Random Forest for variables with relatively strong spatial correlation. However, in cases of weak spatial correlation, Random Forest, as a non-parametric method, may give better results once we have a better understanding of the meaning of its uncertainty measures in a spatial context. References: [1] Kirkwood, C., M. Cave, D. Beamish, S. Grebby, and A. Ferreira (2016), A machine learning approach to geochemical mapping, Journal of Geochemical Exploration, 163, 28-40, doi:10.1016/j.gexplo.2016.05.003.
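One common way to obtain the kind of Random Forest uncertainty discussed here is the spread of predictions across individual trees, sketched below on synthetic coordinates standing in for geochemical covariates; note, as the abstract cautions, this spread carries no spatial structure, unlike a kriging variance.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 100, size=(300, 2))               # stand-in easting/northing covariates
y = np.sin(X[:, 0] / 10) + rng.normal(0, 0.1, 300)   # stand-in element concentration

rf = RandomForestRegressor(n_estimators=500, random_state=4).fit(X, y)
grid = rng.uniform(0, 100, size=(5, 2))              # unsampled prediction locations
per_tree = np.stack([t.predict(grid) for t in rf.estimators_])
print("mean:", per_tree.mean(axis=0))
print("std :", per_tree.std(axis=0))  # per-location uncertainty from tree disagreement
```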
Assessing the Resource Gap in a Changing Arctic
2013-04-01
[Fragmentary record: only footnote text was extracted.] The Supervisor of Salvage and Diving (SUPSALV) under the US Navy's Naval Sea Systems Command has a robust oil containment system. Surviving citations: ...commercial/aeromagazine/aero_16/polar_story.html (accessed November 29, 2012); Jerry Beilinson, "What if a Cruise Ship Wrecked in Alaska?," January 25, 2012, http://www.popularmechanics.com/technology/engineering/extreme-machines/what-if-a-cruise-ship-wrecked-in-alaska-6645471 (accessed December ...).
Application of data cubes for improving detection of water cycle extreme events
NASA Astrophysics Data System (ADS)
Teng, W. L.; Albayrak, A.
2015-12-01
As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case for our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme (WCE) events, a specific case of anomaly detection, requiring time series data. We investigate the use of the sequential probability ratio test (SPRT) for anomaly detection and support vector machines (SVM) for anomaly classification. We show an example of detection of WCE events, using the Global Land Data Assimilation Systems (GLDAS) data set.
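A minimal SPRT sketch in the spirit of the anomaly-detection step named above, assuming simple Gaussian hypotheses for the "normal" and "shifted" regimes of a time series; the distributions, error rates and data are illustrative, not the project's.

```python
import numpy as np
from scipy.stats import norm

def sprt(series, mu0=0.0, mu1=2.0, sigma=1.0, alpha=0.05, beta=0.05):
    # Wald's thresholds for the cumulative log-likelihood ratio.
    lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for t, x in enumerate(series):
        llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
        if llr >= upper:
            return t, "anomaly"    # accept H1: shifted regime (extreme event)
        if llr <= lower:
            return t, "normal"     # accept H0: background regime
    return len(series) - 1, "undecided"

rng = np.random.default_rng(5)
series = np.concatenate([rng.normal(0, 1, 30), rng.normal(2, 1, 30)])
print(sprt(series))                # detects the shift shortly after sample 30
```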
Application of Data Cubes for Improving Detection of Water Cycle Extreme Events
NASA Technical Reports Server (NTRS)
Albayrak, Arif; Teng, William
2015-01-01
As part of an ongoing NASA-funded project to remove a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series), for the hydrology and other point-time series-oriented communities, "data cubes" are created from which time series files (aka "data rods") are generated on-the-fly and made available as Web services from the Goddard Earth Sciences Data and Information Services Center (GES DISC). Data cubes are data as archived rearranged into spatio-temporal matrices, which allow for easy access to the data, both spatially and temporally. A data cube is a specific case of the general optimal strategy of reorganizing data to match the desired means of access. The gain from such reorganization is greater the larger the data set. As a use case of our project, we are leveraging existing software to explore the application of the data cubes concept to machine learning, for the purpose of detecting water cycle extreme events, a specific case of anomaly detection, requiring time series data. We investigate the use of support vector machines (SVM) for anomaly classification. We show an example of detection of water cycle extreme events, using data from the Tropical Rainfall Measuring Mission (TRMM).
Does providing nutrition information at vending machines reduce calories per item sold?
Dingman, Deirdre A; Schulz, Mark R; Wyrick, David L; Bibeau, Daniel L; Gupta, Sat N
2015-02-01
In 2010, the United States (US) enacted a restaurant menu labeling law. The law also applied to vending machine companies selling food. Research suggested that providing nutrition information on menus in restaurants might reduce the number of calories purchased. We tested the effect of providing nutrition information and 'healthy' designations to consumers where vending machines were located in college residence halls. We conducted our study at one university in Southeast US (October-November 2012). We randomly assigned 18 vending machines locations (residence halls) to an intervention or control group. For the intervention we posted nutrition information, interpretive signage, and sent a promotional email to residents of the hall. For the control group we did nothing. We tracked sales over 4 weeks before and 4 weeks after we introduced the intervention. Our intervention did not change what the residents bought. We recommend additional research about providing nutrition information where vending machines are located, including testing formats used to present information.
Manipulating Tabu List to Handle Machine Breakdowns in Job Shop Scheduling Problems
NASA Astrophysics Data System (ADS)
Nababan, Erna Budhiarti; Sitompul, Opim Salim
2011-06-01
Machine breakdowns in a production schedule may occur on a random basis, which makes the well-known hard combinatorial problem of Job Shop Scheduling Problems (JSSP) even more complex. One of the popular techniques used to solve such combinatorial problems is Tabu Search. In this technique, moves that are not allowed to be revisited are retained in a tabu list in order to avoid regaining solutions that have been obtained previously. In this paper, we propose an algorithm that employs a second tabu list to keep broken machines, in addition to the tabu list that keeps the moves. The period for which the broken machines are kept on the list is categorized using a fuzzy membership function. Our technique is tested on the benchmark data of JSSP available in the OR library. From the experiments, we found that our algorithm is promising in helping a decision maker handle the event of machine breakdowns.
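As a hedged illustration of the two-list idea (not the authors' implementation), the sketch below keeps a conventional move tabu list alongside a second list of broken machines whose tenure is set by a crude stand-in for the paper's fuzzy categorization; all names, tenures and thresholds are invented.

```python
from collections import deque

move_tabu = deque(maxlen=7)   # classic tabu list of recently applied moves
machine_tabu = {}             # second tabu list: machine id -> iterations left

def repair_tenure(downtime):
    # Crude stand-in for fuzzy membership categories (short/medium/long outage).
    return 3 if downtime < 5 else 8 if downtime < 20 else 15

def on_breakdown(machine_id, downtime):
    machine_tabu[machine_id] = repair_tenure(downtime)

def allowed(move, machine_id):
    # A move is admissible only if neither the move nor its machine is tabu.
    return move not in move_tabu and machine_tabu.get(machine_id, 0) == 0

def end_of_iteration(chosen_move):
    move_tabu.append(chosen_move)
    for m in list(machine_tabu):           # age out broken-machine entries
        machine_tabu[m] -= 1
        if machine_tabu[m] <= 0:
            del machine_tabu[m]

on_breakdown("M2", downtime=12)
print(allowed(("J1", "M2"), "M2"))         # False while M2 remains tabu
```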
Application of Machine Learning Approaches for Protein-protein Interactions Prediction.
Zhang, Mengying; Su, Qiang; Lu, Yi; Zhao, Manman; Niu, Bing
2017-01-01
Proteomics endeavors to study the structures, functions and interactions of proteins. Information on protein-protein interactions (PPIs) helps to improve our knowledge of the functions and the 3D structures of proteins. Thus, determining PPIs is essential for the study of proteomics. In this review, in order to study the application of machine learning in predicting PPIs, some machine learning approaches such as support vector machines (SVM), artificial neural networks (ANNs) and random forests (RF) were selected, and examples of their applications to PPIs were listed. SVM and RF are two commonly used methods. Nowadays, more researchers predict PPIs by combining more than two methods. This review presents the application of machine learning approaches to predicting PPIs. Many examples of success in identification and prediction in the area of PPI prediction have been discussed, and PPI research is still in progress. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization.
Nishio, Mizuho; Nishizawa, Mitsuo; Sugiyama, Osamu; Kojima, Ryosuke; Yakami, Masahiro; Kuroda, Tomohiro; Togashi, Kaori
2018-01-01
We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification, focusing on (i) the usefulness of a conventional CADx system (hand-crafted imaging features + a machine learning algorithm), (ii) a comparison between support vector machine (SVM) and gradient tree boosting (XGBoost) as machine learning algorithms, and (iii) the effectiveness of parameter optimization using Bayesian optimization and random search. Data on 99 lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of the local binary pattern was used for calculating a feature vector. SVM or XGBoost was trained using the feature vector and its corresponding label. The Tree Parzen Estimator (TPE) was used as the Bayesian optimization method for the parameters of SVM and XGBoost. Random search was performed for comparison with TPE. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using the area under the curve (AUC) of receiver operating characteristic analysis. The AUC was calculated 10 times, and its average was obtained. The best averaged AUC of SVM and XGBoost was 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters for achieving a high AUC were obtained in fewer trials when using TPE, compared with random search. Bayesian optimization of SVM and XGBoost parameters was thus more efficient than random search. Based on an observer study, the AUC values of two board-certified radiologists were 0.898 and 0.822. The results show that the diagnostic accuracy of our CADx system was comparable to that of radiologists with respect to classifying lung nodules.
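A hedged sketch of TPE versus random search for tuning XGBoost, using the hyperopt package; the dataset, search space and scoring here are illustrative simplifications rather than the paper's leave-one-out protocol, and the hyperopt/xgboost APIs may differ slightly across versions.

```python
import numpy as np
from hyperopt import fmin, tpe, rand, hp, Trials
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=99, n_features=20, random_state=6)

space = {
    "max_depth": hp.quniform("max_depth", 2, 8, 1),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
}

def objective(params):
    clf = XGBClassifier(max_depth=int(params["max_depth"]),
                        learning_rate=params["learning_rate"],
                        n_estimators=100)
    # fmin minimizes, so return the negative cross-validated AUC.
    return -cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

for algo in (tpe.suggest, rand.suggest):   # Bayesian optimization vs random search
    best = fmin(objective, space, algo=algo, max_evals=20, trials=Trials())
    print(best)
```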
DOE Office of Scientific and Technical Information (OSTI.GOV)
1996-05-01
The Network Information System (NWIS) was initially implemented in May 1996 as a system in which computing devices could be recorded so that unique names could be generated for each device. Since then the system has grown into an enterprise-wide information system which is integrated with other systems to provide the seamless flow of data through the enterprise. The system tracks data for two main entities, people and computing devices, and performs the following types of functions. People: provides source information to the enterprise person data repository for select contractors and visitors; generates and tracks unique usernames and Unix user IDs for every individual granted cyber access; and tracks accounts for centrally managed computing resources, monitoring and controlling the reauthorization of the accounts in accordance with the DOE-mandated interval. Computing devices: generates unique names for all computing devices registered in the system; tracks, for each computing device, the manufacturer, make, model, Sandia property number, vendor serial number, operating system and version, owner, device location, amount of memory, amount of disk space, and level of support provided for the machine; tracks the hardware address for network cards; tracks the IP address registered to computing devices along with the canonical and alias names for each address; updates the Dynamic Domain Name Service (DDNS) for canonical and alias names; creates the configuration files for DHCP to control the DHCP ranges and allow access to only properly registered computers; tracks and monitors classified security plans for stand-alone computers; tracks the configuration requirements used to set up the machine; tracks the roles people have on machines (system administrator, administrative access, user, etc.); allows system administrators to track changes made on the machine (both hardware and software); and generates an adjustment history of changes on selected fields.
Bagheri, Hossein; Hooshmand, Tabassom; Aghajani, Farzaneh
2015-09-01
This study aimed to evaluate the effect of different ceramic surface treatments after machine grinding on the biaxial flexural strength (BFS) of machinable dental ceramics with different crystalline phases. Disk-shaped specimens (10 mm in diameter and 1.3 mm in thickness) of machinable ceramic cores (two silica-based and one zirconia-based ceramics) were prepared. Each type of ceramic surface was then randomly treated (n=15) as follows: for the leucite and lithium disilicate-based ceramics, 1) machined finish as control, 2) machined finish and sandblasting with alumina, and 3) machined finish and hydrofluoric acid etching; for the zirconia, 1) machined finish and post-sintered as control, 2) machined finish, post-sintered, and sandblasting, and 3) machined finish, post-sintered, and Nd:YAG laser irradiation. The BFS was measured in a universal testing machine. Data were analyzed by ANOVA and Tukey's multiple comparisons post-hoc test (α=0.05). The mean BFS of machined-finish-only surfaces for the leucite ceramic was significantly higher than that of sandblasted (P=0.001) and acid-etched surfaces (P=0.005). A significantly lower BFS was found after sandblasting for lithium disilicate compared with that of the other groups (P<0.05). Sandblasting significantly increased the BFS for the zirconia (P<0.05), but the BFS was significantly decreased after laser irradiation (P<0.05). The BFS of the machinable ceramics was affected by the type of ceramic material and the surface treatment method. Sandblasting with alumina was detrimental to the strength of only the silica-based ceramics. Nd:YAG laser irradiation may lead to substantial strength degradation of zirconia.
Machine Learning in the Big Data Era: Are We There Yet?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas Rangan
In this paper, we discuss the machine learning challenges of the Big Data era. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical machine learning under more scrutiny and evaluation for gleaning insights from the data than ever before. In that context, we pose and debate the question: are machine learning algorithms scaling with the ability to store and compute? If yes, how? If not, why not? We survey recent developments in the state of the art to discuss emerging and outstanding challenges in the design and implementation of machine learning algorithms at scale. We leverage experience from real-world Big Data knowledge discovery projects across domains of national security and healthcare to suggest our efforts be focused along the following axes: (i) the data science challenge: designing scalable and flexible computational architectures for machine learning (beyond just data retrieval); (ii) the science of data challenge: the ability to understand characteristics of data before applying machine learning algorithms and tools; and (iii) the scalable predictive functions challenge: the ability to construct, learn and infer with increasing sample size, dimensionality, and categories of labels. We conclude with a discussion of opportunities and directions for future research.
Development of testing machine for tunnel inspection using multi-rotor UAV
NASA Astrophysics Data System (ADS)
Iwamoto, Tatsuya; Enaka, Tomoya; Tada, Keijirou
2017-05-01
Many concrete structures throughout Japan are deteriorating to dangerous levels. These concrete structures need to be inspected regularly to ensure that they are safe enough to be used. The typical inspection method for these concrete structures is the impact acoustic method, in which the worker taps the surface of the concrete with a hammer. Thus, it is necessary to set up scaffolding to access tunnel walls for inspection; alternatively, aerial work platforms can be used. However, setting up scaffolding or aerial work platforms is not economical with regard to time or money. Therefore, we developed a testing machine using a multi-rotor UAV for tunnel inspection. The testing machine flies by means of multiple rotors; it is pressed against the concrete wall and moved along it using rubber crawlers. The impact acoustic method is used in this testing machine: it has a hammer to strike the surface and a microphone to acquire the impact sound. The impact sound is converted into an electrical signal and wirelessly transmitted to the computer. At the same time, the position of the testing machine is measured by image processing using a camera. The weight and dimensions of the testing machine are approximately 1.25 kg and 500 mm by 500 mm by 250 mm, respectively.
A global characterization and identification of multifunctional enzymes.
Cheng, Xian-Ying; Huang, Wei-Juan; Hu, Shi-Chang; Zhang, Hai-Lei; Wang, Hao; Zhang, Jing-Xian; Lin, Hong-Huang; Chen, Yu-Zong; Zou, Quan; Ji, Zhi-Liang
2012-01-01
Multi-functional enzymes are enzymes that perform multiple physiological functions. Characterization and identification of multi-functional enzymes are critical for communication and cooperation between different functions and pathways within a complex cellular system or between cells. In the present study, we collected 6,799 literature-reported multi-functional enzymes and systematically characterized them in structural, functional, and evolutionary aspects. It was found that four physiochemical properties, that is, charge, polarizability, hydrophobicity, and solvent accessibility, are important for the characterization of multi-functional enzymes. Accordingly, a combined support vector machine and random forest model was constructed, based on which 6,956 potential novel multi-functional enzymes were successfully identified from the ENZYME database. Moreover, it was observed that multi-functional enzymes are unevenly distributed across species, and that Bacteria have relatively more multi-functional enzymes than Archaebacteria and Eukaryota. Comparative analysis indicated that multi-functional enzymes experienced a fluctuation of gene gain and loss during the evolution from S. cerevisiae to H. sapiens. Further pathway analyses indicated that a majority of multi-functional enzymes are well preserved in catalyzing several essential cellular processes, for example, the metabolism of carbohydrates, nucleotides, and amino acids. A database of known multi-functional enzymes and a server for novel multi-functional enzyme prediction were also constructed for free access at http://bioinf.xmu.edu.cn/databases/MFEs/index.htm.
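One plausible reading of the combined SVM plus random forest model, sketched as soft voting over the two classifiers' probabilities; the four physiochemical features and labels are mocked with random numbers, and the authors' actual combination scheme may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Stand-ins for charge, polarizability, hydrophobicity, solvent accessibility.
X = rng.normal(size=(600, 4))
y = (X[:, 0] + X[:, 3] > 0).astype(int)    # stand-in multi-functional label

combo = VotingClassifier(
    [("svm", SVC(probability=True)), ("rf", RandomForestClassifier())],
    voting="soft")                          # average the two models' probabilities
print(cross_val_score(combo, X, y, cv=5).mean())
```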
Seizure Forecasting and the Preictal State in Canine Epilepsy.
Varatharajah, Yogatheesan; Iyer, Ravishankar K; Berry, Brent M; Worrell, Gregory A; Brinkmann, Benjamin H
2017-02-01
The ability to predict seizures may enable patients with epilepsy to better manage their medications and activities, potentially reducing side effects and improving quality of life. Forecasting epileptic seizures remains a challenging problem, but machine learning methods using intracranial electroencephalographic (iEEG) measures have shown promise. A machine-learning-based pipeline was developed to process iEEG recordings and generate seizure warnings. Results support the ability to forecast seizures at rates greater than a Poisson random predictor for all feature sets and machine learning algorithms tested. In addition, subject-specific neurophysiological changes in multiple features are reported preceding lead seizures, providing evidence supporting the existence of a distinct and identifiable preictal state.
The value of prior knowledge in machine learning of complex network systems.
Ferranti, Dana; Krane, David; Craft, David
2017-11-15
Our overall goal is to develop machine-learning approaches based on genomics and other relevant accessible information for use in predicting how a patient will respond to a given proposed drug or treatment. Given the complexity of this problem, we begin by developing, testing and analyzing learning methods using data from simulated systems, which allows us access to a known ground truth. We examine the benefits of using prior system knowledge and investigate how learning accuracy depends on various system parameters as well as the amount of training data available. The simulations are based on Boolean networks (directed graphs with 0/1 node states and logical node update rules), which are the simplest computational systems that can mimic the dynamic behavior of cellular systems. Boolean networks can be generated and simulated at scale, have complex yet cyclical dynamics, and as such provide a useful framework for developing machine-learning algorithms for modular and hierarchical networks such as biological systems in general and cancer in particular. We demonstrate that utilizing prior knowledge (in the form of network connectivity information), without detailed state equations, greatly increases the power of machine-learning algorithms to predict network steady-state node values ('phenotypes') and perturbation responses ('drug effects'). Links to codes and datasets here: https://gray.mgh.harvard.edu/people-directory/71-david-craft-phd. dcraft@broadinstitute.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
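A tiny Boolean-network simulation of the kind used as ground truth here: 0/1 node states updated by logical rules until the trajectory re-enters a previously seen state (a fixed point or cycle); the three-node network and its rules are invented for illustration.

```python
def step(state):
    a, b, c = state
    return (int(a and not c),  # A' = A AND NOT C
            int(a or c),       # B' = A OR C
            int(b))            # C' = B

state, seen = (1, 0, 0), []
while state not in seen:       # iterate until a state repeats
    seen.append(state)
    state = step(state)
print("trajectory:", seen, "-> re-enters at", state)
```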
A machine learning approach for viral genome classification.
Remita, Mohamed Amine; Halioui, Ahmed; Malick Diouara, Abou Abdallah; Daigle, Bruno; Kiani, Golrokh; Diallo, Abdoulaye Baniré
2017-04-11
Advances in cloning and sequencing technology are yielding a massive number of viral genomes. The classification and annotation of these genomes constitute important assets in the discovery of genomic variability, taxonomic characteristics and disease mechanisms. Existing classification methods are often designed for specific, well-studied families of viruses. Thus, viral comparative genomic studies could benefit from more generic, fast and accurate tools for classifying and typing newly sequenced strains of diverse virus families. Here, we introduce a virus classification platform, CASTOR, based on machine learning methods. CASTOR is inspired by a well-known technique in molecular biology: restriction fragment length polymorphism (RFLP). It simulates, in silico, the restriction digestion of genomic material by different enzymes into fragments. It uses two metrics to construct feature vectors for machine learning algorithms in the classification step. We benchmark CASTOR for the classification of distinct datasets of human papillomaviruses (HPV), hepatitis B viruses (HBV) and human immunodeficiency viruses type 1 (HIV-1). Results reveal true positive rates of 99%, 99% and 98% for HPV Alpha species, HBV genotyping and HIV-1 M subtyping, respectively. Furthermore, CASTOR shows competitive performance compared to well-known HIV-1-specific classifiers (REGA and COMET) on whole genomes and pol fragments. The performance of CASTOR, its genericity and robustness could permit novel and accurate large-scale virus studies. The CASTOR web platform provides open-access, collaborative and reproducible machine learning classifiers. CASTOR can be accessed at http://castor.bioinfo.uqam.ca.
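A sketch of CASTOR's core idea under stated assumptions: in silico restriction digestion producing fragment-based features; the enzyme sites, toy sequence and the two summary features are illustrative and simpler than CASTOR's actual metrics.

```python
import re

ENZYMES = {"EcoRI": "GAATTC", "HindIII": "AAGCTT"}   # restriction recognition sites

def digest_features(genome):
    feats = {}
    for name, site in ENZYMES.items():
        cuts = [m.start() for m in re.finditer(site, genome)]
        bounds = [0] + cuts + [len(genome)]
        frags = [b - a for a, b in zip(bounds, bounds[1:])]  # fragment lengths
        feats[f"{name}_n_frags"] = len(frags)
        feats[f"{name}_max_len"] = max(frags)
    return feats  # feature vector fed to a downstream classifier

print(digest_features("ATGAATTCGGAAGCTTACGAATTCTT"))
```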
Ali, Habiba I; Jarrar, Amjad H; Abo-El-Enen, Mostafa; Al Shamsi, Mariam; Al Ashqar, Huda
2015-05-28
Increasing the healthfulness of campus food environments is an important step in promoting healthful food choices among college students. This study explored university students' suggestions on promoting healthful food choices from campus vending machines. It also examined factors influencing students' food choices from vending machines. Peer-led semi-structured individual interviews were conducted with 43 undergraduate students (33 females and 10 males) recruited from students enrolled in an introductory nutrition course in a large national university in the United Arab Emirates. Interviews were audiotaped, transcribed, and coded to generate themes using N-Vivo software. Accessibility, peer influence, and busy schedules were the main factors influencing students' food choices from campus vending machines. Participants expressed the need to improve the nutritional quality of the food items sold in the campus vending machines. Recommendations for students' nutrition educational activities included placing nutrition tips on or beside the vending machines and using active learning methods, such as competitions on nutrition knowledge. The results of this study have useful applications in improving the campus food environment and nutrition education opportunities at the university to assist students in making healthful food choices.
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases version 10 (ICD-10) system] through five automated feature selection schemes, the proposed expert-driven approach, five subset sizes of features, and five machine learning classifiers. Model performance was evaluated using macro-averaged precision, recall and F-measure, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90% for most metrics) using a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and the four baselines. The proposed system is feasible and practical to use for automatic classification of ICD-10-related causes of death from autopsy reports. The proposed system assists pathologists to accurately and rapidly determine the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports.
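A minimal stand-in for the pipeline described (unigram master feature vector, automated feature selection, decision-model classifier); the two-class toy snippets replace the nine ICD-10 accident categories, and the components are scikit-learn's, not the paper's exact tooling.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

reports = [
    "blunt force trauma consistent with motor vehicle collision",
    "submersion findings consistent with drowning",
    "multiple rib fractures from vehicle impact",
    "water in airways, death by drowning",
]
causes = ["transport", "drowning", "transport", "drowning"]

pipe = make_pipeline(CountVectorizer(),            # unigram master feature vector
                     SelectKBest(chi2, k=5),       # automated feature selection
                     RandomForestClassifier(random_state=8))
pipe.fit(reports, causes)
print(pipe.predict(["pedestrian struck by vehicle"]))
```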
Gaming machine addiction: the role of avoidance, accessibility and social support.
Thomas, Anna C; Allen, Felicity L; Phillips, James; Karantzas, Gery
2011-12-01
Commonality in etiology and clinical expression, plus high comorbidity between pathological gambling and substance use disorders, suggest common underlying motives. It is important to understand common motivators and differentiating factors. An overarching framework of addiction was used to examine predictors of problem gambling in current electronic gaming machine (EGM) gamblers. Path analysis was used to examine the relationships between antecedent factors (stressors, coping habits, social support), gambling motivations (avoidance, accessibility, social) and gambling behavior. Three hundred and forty-seven people (229 females: M = 29.20 years, SD = 14.93; 118 males: M = 29.64 years, SD = 12.49) participated. Consistent with stress, coping and addiction theory, situational life stressors and general avoidance coping were positively related to avoidance-motivated gambling. In turn, avoidance-motivated gambling was positively related to EGM gambling frequency and problems. Consistent with exposure theory, life stressors were positively related to accessibility-motivated gambling, and accessibility-motivated gambling was positively related to EGM gambling frequency and gambling problems. These findings are consistent with other addiction research and suggest avoidance-motivated gambling is part of a more generalized pattern of avoidance coping, with relative accessibility to EGM gambling explaining its choice as a method of avoidance. Findings also showed social support acted as a protective factor, directly in relation to gambling frequency and problems and indirectly via avoidance and accessibility gambling motivations. Finally, life stressors were positively related to socially motivated gambling, but this motivation was not related to either social support or gambling behavior, suggesting it has little direct influence on gambling problems.
77 FR 12284 - Access to Confidential Business Information; Protection Strategies Incorporated
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-29
... (202) 566-0280. Docket visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject...
76 FR 77224 - Access to Confidential Business Information by Primus Solutions, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-12
... are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will...
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
NASA Technical Reports Server (NTRS)
Katti, Romney R.
1995-01-01
Random-access memory (RAM) devices of the proposed type exploit the magneto-optical properties of magnetic garnets exhibiting perpendicular anisotropy. Magnetic writing and optical readout are used. The devices provide nonvolatile storage and resist damage by ionizing radiation. Because of basic architecture and pinout requirements, they are most likely useful as small-capacity memory devices.
Development of Curie point switching for thin film, random access, memory device
NASA Technical Reports Server (NTRS)
Lewicki, G. W.; Tchernev, D. I.
1967-01-01
Manganese bismuthide films are used in the development of a random access memory device with high packing density and nondestructive readout capability. Memory entry is by Curie point switching using a laser beam. Readout is accomplished by microoptical or micromagnetic scanning.
Hernández-Ramos, José L.; Bernabe, Jorge Bernal; Moreno, M. Victoria; Skarmeta, Antonio F.
2015-01-01
As we enter the Internet of Things era, security and privacy concerns remain the main obstacles in the development of innovative and valuable services to be exploited by society. Given the Machine-to-Machine (M2M) nature of these emerging scenarios, the application of current privacy-friendly technologies needs to be reconsidered and adapted for deployment in such a global ecosystem. This work proposes different privacy-preserving mechanisms through the application of anonymous credential systems and certificateless public key cryptography. The resulting alternatives are intended to enable an anonymous and accountable access control approach to be deployed in large-scale scenarios, such as Smart Cities. Furthermore, the proposed mechanisms have been deployed on constrained devices in order to assess their suitability for a secure and privacy-preserving M2M-enabled Internet of Things. PMID:26140349
Turner, Anne M; Mandel, Hannah; Capurro, Daniel
2013-01-01
Limited English proficiency (LEP), defined as a limited ability to read, speak, write, or understand English, is associated with health disparities. Despite federal and state requirements to translate health information, the vast majority of health materials are available solely in English. This project investigates barriers to translation of health information and explores new technologies to improve access to multilingual public health materials. We surveyed all 77 local health departments (LHDs) in the Northwest about translation needs, practices, barriers, and attitudes towards machine translation (MT). We received 67 responses from 45 LHDs. Translation of health materials is the principal strategy used by LHDs to reach LEP populations. Cost and access to qualified translators are the principal barriers to producing multilingual materials. Thirteen LHDs have used online MT tools. Many respondents expressed concerns about the accuracy of MT. Overall, respondents were positive about its potential use, if low cost and quality could be assured.
Turner, Anne M.; Mandel, Hannah; Capurro, Daniel
2013-01-01
Limited English proficiency (LEP), defined as a limited ability to read, speak, write, or understand English, is associated with health disparities. Despite federal and state requirements to translate health information, the vast majority of health materials are available solely in English. This project investigates barriers to translation of health information and explores new technologies to improve access to multilingual public health materials. We surveyed all 77 local health departments (LHDs) in the Northwest about translation needs, practices, barriers, and attitudes towards machine translation (MT). We received 67 responses from 45 LHDs. Translation of health materials is the principal strategy used by LHDs to reach LEP populations. Cost and access to qualified translators are the principal barriers to producing multilingual materials. Thirteen LHDs have used online MT tools. Many respondents expressed concerns about the accuracy of MT. Overall, respondents were positive about its potential use, if low cost and quality could be assured. PMID:24551414
A Cloud-based Approach to Medical NLP
Chard, Kyle; Russell, Michael; Lussier, Yves A.; Mendonça, Eneida A; Silverstein, Jonathan C.
2011-01-01
Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN. PMID:22195072
Design and development of linked data from the National Map
Usery, E. Lynn; Varanka, Dalia E.
2012-01-01
The development of linked data on the World-Wide Web provides the opportunity for the U.S. Geological Survey (USGS) to supply its extensive volumes of geospatial data, information, and knowledge in a machine interpretable form and reach users and applications that heretofore have been unavailable. To pilot a process to take advantage of this opportunity, the USGS is developing an ontology for The National Map and converting selected data from nine research test areas to a Semantic Web format to support machine processing and linked data access. In a case study, the USGS has developed initial methods for legacy vector and raster formatted geometry, attributes, and spatial relationships to be accessed in a linked data environment maintaining the capability to generate graphic or image output from semantic queries. The description of an initial USGS approach to developing ontology, linked data, and initial query capability from The National Map databases is presented.
Hernández-Ramos, José L; Bernabe, Jorge Bernal; Moreno, M Victoria; Skarmeta, Antonio F
2015-07-01
As we enter the Internet of Things era, security and privacy concerns remain the main obstacles in the development of innovative and valuable services to be exploited by society. Given the Machine-to-Machine (M2M) nature of these emerging scenarios, the application of current privacy-friendly technologies needs to be reconsidered and adapted for deployment in such a global ecosystem. This work proposes different privacy-preserving mechanisms through the application of anonymous credential systems and certificateless public key cryptography. The resulting alternatives are intended to enable an anonymous and accountable access control approach to be deployed in large-scale scenarios, such as Smart Cities. Furthermore, the proposed mechanisms have been deployed on constrained devices in order to assess their suitability for a secure and privacy-preserving M2M-enabled Internet of Things.
A cloud-based approach to medical NLP.
Chard, Kyle; Russell, Michael; Lussier, Yves A; Mendonça, Eneida A; Silverstein, Jonathan C
2011-01-01
Natural Language Processing (NLP) enables access to deep content embedded in medical texts. To date, NLP has not fulfilled its promise of enabling robust clinical encoding, clinical use, quality improvement, and research. We submit that this is in part due to poor accessibility, scalability, and flexibility of NLP systems. We describe here an approach and system which leverages cloud-based approaches such as virtual machines and Representational State Transfer (REST) to extract, process, synthesize, mine, compare/contrast, explore, and manage medical text data in a flexibly secure and scalable architecture. Available architectures in which our Smntx (pronounced as semantics) system can be deployed include: virtual machines in a HIPAA-protected hospital environment, brought up to run analysis over bulk data and destroyed in a local cloud; a commercial cloud for a large complex multi-institutional trial; and within other architectures such as caGrid, i2b2, or NHIN.
Hybrid computing using a neural network with dynamic external memory.
Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis
2016-10-27
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
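For intuition, the content-based addressing at the heart of the DNC's read mechanism can be reproduced in a few lines of numpy: read weights are a softmax over strength-scaled cosine similarities between a controller-emitted key and the rows of the external memory matrix. The dimensions and key-strength value below are illustrative, not those of the paper.

```python
# Minimal numpy sketch of DNC-style content-based reading from external memory.
import numpy as np

rng = np.random.default_rng(0)
memory = rng.normal(size=(16, 8))   # N=16 memory slots, word size W=8 (illustrative)
key = rng.normal(size=8)            # read key emitted by the controller
beta = 5.0                          # key strength; sharpens the read distribution

# Cosine similarity between the key and every memory row.
cos = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
weights = np.exp(beta * cos)
weights /= weights.sum()            # softmax -> differentiable read weights
read_vector = weights @ memory      # weighted read returned to the controller
print(read_vector)
```

Because every step is differentiable, gradients flow from the read vector back into the key and the memory contents, which is what lets the whole system be trained end to end.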
Machine Learning Methods for Production Cases Analysis
NASA Astrophysics Data System (ADS)
Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.
2018-03-01
An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of internal production network data were used for training and testing the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics such as precision, recall, and accuracy.
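A hedged sketch of the comparison this abstract outlines, with synthetic data standing in for the internal production-network descriptors and the named metrics computed for both classifiers:

```python
# k-Nearest Neighbors vs. random forest on stand-in data, scored with the
# precision/recall/accuracy metrics named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (KNeighborsClassifier(n_neighbors=5),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          f"precision={precision_score(y_te, pred):.2f}",
          f"recall={recall_score(y_te, pred):.2f}",
          f"accuracy={accuracy_score(y_te, pred):.2f}")
```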
Vierhout, Bastiaan P; Saleem, Ben R; Ott, Alewijn; van Dijl, Jan Maarten; de Kempenaer, Ties D van Andringa; Pierie, Maurice E N; Bottema, Jan T; Zeebregts, Clark J
2015-09-14
Access for endovascular repair of abdominal aortic aneurysms (EVAR) is obtained through surgical cutdown or percutaneously. The only devices suitable for percutaneous closure of the 20 French arteriotomies of the common femoral artery (CFA) are the Prostar™ and Proglide™ devices (Abbott Vascular). The positive effects of these devices appear to be a lower infection rate and shorter operation time and hospital stay, a conclusion drawn in previous reports comparing the techniques between two different groups of patients (cohort or randomized). Access techniques have never been compared within one and the same patient; such a design simplifies comparison because patient characteristics are identical in both groups. In this study, percutaneous access of the CFA is compared to surgical cutdown within a single patient; in EVAR surgery, access is necessary in both groins of each patient. Randomization is performed on the introduction site of the larger main device of the endoprosthesis; the contralateral device of the endoprosthesis is smaller. With this type of randomization, both groups will contain a similar number of main and contralateral devices. Preoperative nose and perineal cultures are obtained to compare colonization with postoperative wound cultures (in case of a surgical site infection). Furthermore, patient comfort will be assessed using visual analog scale (VAS) scores. Punch biopsies of the groin will be harvested to retrospectively compare the skin of patients who suffered a surgical site infection (SSI) to that of patients who did not. The PiERO trial is a multicenter randomized controlled clinical trial designed to show the consequences of using percutaneous access in EVAR surgery and focuses on the occurrence of surgical site infections. NTR4257 10 November 2013, NL44578.042.13.
Vienna Fortran - A Language Specification. Version 1.1
1992-03-01
other computer architectures is the fact that the memory is physically distributed among the processors; the time required to access a non-local...datum may be an order of magnitude higher than the time taken to access locally stored data. This has important consequences for program efficiency. In...machine in many aspects. It is tedious, time-consuming and error prone. It has led to particularly slow software development cycles and, in consequence
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI on the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI on the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.
Hands-free human-machine interaction with voice
NASA Astrophysics Data System (ADS)
Juang, B. H.
2004-05-01
Voice is a natural communication interface between a human and a machine. The machine, when placed in today's communication networks, may be configured to provide automation to save substantial operating cost, as demonstrated in AT&T's VRCP (Voice Recognition Call Processing), or to facilitate intelligent services, such as virtual personal assistants, to enhance individual productivity. These intelligent services often need to be accessible anytime, anywhere (e.g., in cars when the user is in a hands-busy-eyes-busy situation, or during meetings where constantly talking to a microphone is either undesirable or impossible), and thus call for advanced signal processing and automatic speech recognition techniques which support what we call ``hands-free'' human-machine communication. These techniques entail a broad spectrum of technical ideas, ranging from the use of directional microphones and acoustic echo cancellation to robust speech recognition. In this talk, we highlight a number of key techniques that were developed for hands-free human-machine communication in the mid-1990s after Bell Labs became a unit of Lucent Technologies. A video clip will be played to demonstrate the accomplishment.
[Effect of manual cleaning and machine cleaning for dental handpiece].
Zhou, Xiaoli; Huang, Hao; He, Xiaoyan; Chen, Hui; Zhou, Xiaoying
2013-08-01
To compare the cleaning effect on dental handpieces between manual cleaning and machine cleaning. Eighty identically contaminated dental handpieces were randomly divided into an experimental group and a control group of 40 pieces each. The experimental group was treated with a fully automatic washing machine, and the control group was cleaned manually. The cleaning was conducted according to the standard operating procedure, and ATP bioluminescence was then used to test the cleaning results. Average relative light units (RLU) by ATP bioluminescence detection were as follows: experimental group, 9; control group, 41. Both groups were below the RLU value recommended by the instrument manufacturer (RLU ≤ 45). There was a significant difference between the two groups (P < 0.05). The cleaning quality of the experimental group was better than that of the control group. It is recommended that the central sterile supply department clean dental handpieces by machine to ensure the cleaning effect and maintain quality.
Learning molecular energies using localized graph kernels.
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-21
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
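The similarity measure named here, a random walk kernel between adjacency matrices, can be illustrated with the standard direct-product construction: walks taken simultaneously in both graphs are walks in the Kronecker-product graph, counted with a decaying weight per step. The sketch below follows that textbook formulation and is not necessarily the exact GRAPE kernel; the toy adjacency matrices are invented.

```python
# p-step random-walk kernel between two local atomic environments, each given
# as an adjacency matrix, via the direct-product (Kronecker) graph.
import numpy as np

def random_walk_kernel(A1, A2, steps=4, decay=0.1):
    W = np.kron(A1, A2)                    # simultaneous walks live in A1 (x) A2
    ones = np.ones(W.shape[0])
    k, Wp = 0.0, np.eye(W.shape[0])
    for p in range(1, steps + 1):
        Wp = Wp @ W                        # number of matched walks of length p
        k += decay**p * ones @ Wp @ ones   # decaying weight keeps the sum finite
    return k

A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # toy 3-atom environment
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(random_walk_kernel(A1, A2))
```

Because the kernel depends on the adjacency matrices only through walk counts, relabeling (permuting) the atoms of either environment leaves its value unchanged, which is the permutation symmetry the abstract emphasizes.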
Learning molecular energies using localized graph kernels
NASA Astrophysics Data System (ADS)
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
Machine learning methods in chemoinformatics
Mitchell, John B O
2014-01-01
Machine learning algorithms are generally developed in computer science or adjacent disciplines and find their way into chemical modeling by a process of diffusion. Though particular machine learning methods are popular in chemoinformatics and quantitative structure–activity relationships (QSAR), many others exist in the technical literature. This discussion is methods-based and focused on some algorithms that chemoinformatics researchers frequently use. It makes no claim to be exhaustive. We concentrate on methods for supervised learning, predicting the unknown property values of a test set of instances, usually molecules, based on the known values for a training set. Particularly relevant approaches include Artificial Neural Networks, Random Forest, Support Vector Machine, k-Nearest Neighbors and naïve Bayes classifiers. WIREs Comput Mol Sci 2014, 4:468–481. doi:10.1002/wcms.1183 PMID:25285160
Machine printed text and handwriting identification in noisy document images.
Zheng, Yefeng; Li, Huiping; Doermann, David
2004-03-01
In this paper, we address the problem of the identification of text in noisy document images. We are especially focused on segmenting and discriminating between handwriting and machine printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content, and 2) the segmentation and recognition techniques required for machine printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model noise based on selected features. Trained Fisher classifiers are used to identify machine printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov Random Field-based (MRF) approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.
A Comparison of Machine Learning Approaches for Corn Yield Estimation
NASA Astrophysics Data System (ADS)
Kim, N.; Lee, Y. W.
2017-12-01
Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
Bahl, Manisha; Barzilay, Regina; Yedidia, Adam B; Locascio, Nicholas J; Yu, Lili; Lehman, Constance D
2018-03-01
Purpose To develop a machine learning model that allows high-risk breast lesions (HRLs) diagnosed with image-guided needle biopsy that require surgical excision to be distinguished from HRLs that are at low risk for upgrade to cancer at surgery and thus could be surveilled. Materials and Methods Consecutive patients with biopsy-proven HRLs who underwent surgery or at least 2 years of imaging follow-up from June 2006 to April 2015 were identified. A random forest machine learning model was developed to identify HRLs at low risk for upgrade to cancer. Traditional features such as age and HRL histologic results were used in the model, as were text features from the biopsy pathologic report. Results One thousand six HRLs were identified, with a cancer upgrade rate of 11.4% (115 of 1006). A machine learning random forest model was developed with 671 HRLs and tested with an independent set of 335 HRLs. Among the most important traditional features were age and HRL histologic results (eg, atypical ductal hyperplasia). An important text feature from the pathologic reports was "severely atypical." Instead of surgical excision of all HRLs, if those categorized with the model to be at low risk for upgrade were surveilled and the remainder were excised, then 97.4% (37 of 38) of malignancies would have been diagnosed at surgery, and 30.6% (91 of 297) of surgeries of benign lesions could have been avoided. Conclusion This study provides proof of concept that a machine learning model can be applied to predict the risk of upgrade of HRLs to cancer. Use of this model could decrease unnecessary surgery by nearly one-third and could help guide clinical decision making with regard to surveillance versus surgical excision of HRLs. © RSNA, 2017.
Lei, Tailong; Sun, Huiyong; Kang, Yu; Zhu, Feng; Liu, Hui; Zhou, Wenfang; Wang, Zhe; Li, Dan; Li, Youyong; Hou, Tingjun
2017-11-06
Xenobiotic chemicals and their metabolites are mainly excreted out of our bodies by the urinary tract through the urine. Chemical-induced urinary tract toxicity is one of the main reasons that cause failure during drug development, and it is a common adverse event for medications, natural supplements, and environmental chemicals. Despite its importance, there are only a few in silico models for assessing urinary tract toxicity for a large number of compounds with diverse chemical structures. Here, we developed a series of qualitative and quantitative structure-activity relationship (QSAR) models for predicting urinary tract toxicity. In our study, the recursive feature elimination method incorporated with random forests (RFE-RF) was used for dimension reduction, and then eight machine learning approaches were used for QSAR modeling, i.e., relevance vector machine (RVM), support vector machine (SVM), regularized random forest (RRF), C5.0 trees, eXtreme gradient boosting (XGBoost), AdaBoost.M1, SVM boosting (SVMBoost), and RVM boosting (RVMBoost). For building classification models, the synthetic minority oversampling technique was used to handle the imbalanced data set problem. Among all the machine learning approaches, SVMBoost based on the RBF kernel achieves both the best quantitative (q_ext^2 = 0.845) and qualitative predictions for the test set (MCC of 0.787, AUC of 0.893, sensitivity of 89.6%, specificity of 94.1%, and global accuracy of 90.8%). The application domains were then analyzed, and all of the tested chemicals fall within the application domain coverage. We also examined the structure features of the chemicals with large prediction errors. In brief, both the regression and classification models developed by the SVMBoost approach have reliable prediction capability for assessing chemical-induced urinary tract toxicity.
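As a rough illustration of the RFE-RF step described above, the sketch below runs recursive feature elimination ranked by random-forest importances and feeds the surviving features to an RBF-kernel SVM. scikit-learn has no RVM or SVMBoost estimator, so a plain SVC stands in, and the data are synthetic.

```python
# RFE driven by random-forest feature importances, followed by an RBF SVM.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=50,
                           n_informative=10, random_state=1)
model = make_pipeline(
    # RFE repeatedly drops the least important features per the forest's ranking.
    RFE(RandomForestClassifier(n_estimators=100, random_state=1),
        n_features_to_select=10),
    SVC(kernel="rbf", probability=True),   # stand-in for the paper's SVMBoost
)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.3f}")
```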
A machine learning-based framework to identify type 2 diabetes through electronic health records
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2016-01-01
Objective To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association study (GWAS) and phenome-wide association study (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning, as a pilot study, to liberalize filtering criteria to improve the recall rate while keeping a low false positive rate. Materials and methods We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was conducted on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a diabetes-related cohort of 23,281 subjects retrieved from a regional distributed EHR repository covering 2012 to 2014. Results We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 in average AUC), much higher than the state-of-the-art algorithm (0.71 in AUC). Discussion Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature engineering to loosen such selection criteria to achieve a high identification rate of cases and controls. Conclusions Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. PMID:27919371
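The benchmark the Methods section describes, the same six model families compared on a common feature matrix, looks roughly like this in scikit-learn. Synthetic data stand in for the engineered EHR features, and the class imbalance below is arbitrary rather than the study's.

```python
# Cross-validated AUC comparison of the six classifiers named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=25,
                           weights=[0.3, 0.7], random_state=2)
models = {
    "kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=2),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=2),
    "SVM": SVC(probability=True, random_state=2),
    "LogReg": LogisticRegression(max_iter=1000),
}
for name, m in models.items():
    auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC={auc:.3f}")
```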
NASA Astrophysics Data System (ADS)
Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti
2016-07-01
In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during the observation period.
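A scaled-down sketch of this inversion comparison: the four machine-learning regressors named in the abstract are trained to map SSR-style predictors to AOD and scored by the correlation coefficient. The synthetic generator below is an invented stand-in for the Thessaloniki pyranometer and AERONET series.

```python
# Four ML regressors mapping surface-solar-radiation-style predictors to AOD,
# scored by correlation coefficient against held-out "observations".
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(size=(600, 4))   # e.g. normalized SSR, solar zenith, water vapour, season
aod = 0.8 * np.exp(-2 * X[:, 0]) + 0.1 * X[:, 2] + rng.normal(0, 0.02, 600)
X_tr, X_te, y_tr, y_te = train_test_split(X, aod, random_state=3)

for model in (GaussianProcessRegressor(),
              MLPRegressor(max_iter=2000, random_state=3),
              RandomForestRegressor(random_state=3),
              SVR()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r = np.corrcoef(y_te, pred)[0, 1]
    print(type(model).__name__, f"r={r:.3f}")
```

One point the abstract makes carries over directly: none of these regressors hard-codes an assumed single scattering albedo, whereas a look-up-table inversion must fix one in advance.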
A machine learning-based framework to identify type 2 diabetes through electronic health records.
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2017-01-01
To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association study (GWAS) and phenome-wide association study (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning, as a pilot study, to liberalize filtering criteria to improve the recall rate while keeping a low false positive rate. We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was conducted on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a diabetes-related cohort of 23,281 subjects retrieved from a regional distributed EHR repository covering 2012 to 2014. We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 in average AUC), much higher than the state-of-the-art algorithm (0.71 in AUC). Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature engineering to loosen such selection criteria to achieve a high identification rate of cases and controls. Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
2016-01-01
Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco
2014-06-01
Radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. Our aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes between transradial and transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, thorax and eye level. Consequently, the operator is equipped with three sets (transfemoral, right transradial or left transradial access) of three different dosimeters (wrist, thorax and eye dosimeters). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approach. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization is performed for the radial approach to compare right and left transradial access. The RAD-MATRIX study will likely help clarify the radiation issue for interventional cardiologists by comparing transradial and transfemoral access in the setting of acute coronary syndromes. Copyright © 2014 Elsevier Inc. All rights reserved.
Prediction of antiepileptic drug treatment outcomes using machine learning.
Colic, Sinisa; Wither, Robert G; Lang, Min; Zhang, Liang; Eubanks, James H; Bardakjian, Berj L
2017-02-01
Antiepileptic drug (AED) treatments produce inconsistent outcomes, often necessitating patients to go through several drug trials until a successful treatment can be found. This study proposes the use of machine learning techniques to predict epilepsy treatment outcomes of commonly used AEDs. Machine learning algorithms were trained and evaluated using features obtained from intracranial electroencephalogram (iEEG) recordings of the epileptiform discharges observed in the Mecp2-deficient mouse model of Rett Syndrome. Previous work has linked the presence of cross-frequency coupling (I_CFC) of the delta (2-5 Hz) rhythm with the fast ripple (400-600 Hz) rhythm in epileptiform discharges. Using the I_CFC to label post-treatment outcomes, we compared support vector machine (SVM) and random forest (RF) machine learning classifiers for providing likelihood scores of successful treatment outcomes. (a) There was heterogeneity in AED treatment outcomes, (b) machine learning techniques could be used to rank the efficacy of AEDs by estimating likelihood scores for successful treatment outcome, (c) I_CFC features yielded the most effective a priori identification of appropriate AED treatment, and (d) both classifiers performed comparably. Machine learning approaches yielded predictions of successful drug treatment outcomes which in turn could reduce the burden of drug trials and lead to substantial improvements in patient quality of life.
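The likelihood-scoring step described here amounts to training probabilistic classifiers on labeled discharge features and reading off the probability of the successful-outcome class. A minimal sketch, with synthetic features standing in for the I_CFC measures:

```python
# SVM and random forest emitting likelihood scores of successful treatment.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Stand-in for I_CFC-derived features labeled by post-treatment outcome.
X, y = make_classification(n_samples=200, n_features=6, random_state=4)
new_cases = X[:3]   # discharges recorded under candidate AEDs (illustrative)

svm = SVC(probability=True, random_state=4).fit(X, y)
rf = RandomForestClassifier(n_estimators=100, random_state=4).fit(X, y)
for name, m in (("SVM", svm), ("RF", rf)):
    # predict_proba[:, 1] is the likelihood score of a successful outcome,
    # which can be used to rank candidate AEDs per animal or patient.
    print(name, m.predict_proba(new_cases)[:, 1])
```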
A comparison of free weight squat to Smith machine squat using electromyography.
Schwanbeck, Shane; Chilibeck, Philip D; Binsted, Gordon
2009-12-01
The purpose of this experiment was to determine whether free weight or Smith machine squats were optimal for activating the prime movers of the legs and the stabilizers of the legs and the trunk. Six healthy participants performed 1 set of 8 repetitions (using a weight they could lift 8 times, i.e., 8RM, or 8-repetition maximum) for each of the free weight squat and the Smith machine squat in a randomized order, with a minimum of 3 days between sessions, while electromyographic (EMG) activity of the tibialis anterior, gastrocnemius, vastus medialis, vastus lateralis, biceps femoris, lumbar erector spinae, and rectus abdominus was simultaneously measured. Electromyographic activity was significantly higher, by 34%, 26%, and 49% in the gastrocnemius, biceps femoris, and vastus medialis, respectively, during the free weight squat compared to the Smith machine squat (p < 0.05). There were no significant differences between the free weight and Smith machine squat for any of the other muscles; however, the EMG averaged over all muscles during the free weight squat was 43% higher than during the Smith machine squat (p < 0.05). The free weight squat may be more beneficial than the Smith machine squat for individuals who are looking to strengthen the plantar flexors, knee flexors, and knee extensors.
Prediction of antiepileptic drug treatment outcomes using machine learning
NASA Astrophysics Data System (ADS)
Colic, Sinisa; Wither, Robert G.; Lang, Min; Zhang, Liang; Eubanks, James H.; Bardakjian, Berj L.
2017-02-01
Objective. Antiepileptic drug (AED) treatments produce inconsistent outcomes, often necessitating patients to go through several drug trials until a successful treatment can be found. This study proposes the use of machine learning techniques to predict epilepsy treatment outcomes of commonly used AEDs. Approach. Machine learning algorithms were trained and evaluated using features obtained from intracranial electroencephalogram (iEEG) recordings of the epileptiform discharges observed in the Mecp2-deficient mouse model of Rett Syndrome. Previous work has linked the presence of cross-frequency coupling (I_CFC) of the delta (2-5 Hz) rhythm with the fast ripple (400-600 Hz) rhythm in epileptiform discharges. Using the I_CFC to label post-treatment outcomes, we compared support vector machine (SVM) and random forest (RF) machine learning classifiers for providing likelihood scores of successful treatment outcomes. Main results. (a) There was heterogeneity in AED treatment outcomes, (b) machine learning techniques could be used to rank the efficacy of AEDs by estimating likelihood scores for successful treatment outcome, (c) I_CFC features yielded the most effective a priori identification of appropriate AED treatment, and (d) both classifiers performed comparably. Significance. Machine learning approaches yielded predictions of successful drug treatment outcomes which in turn could reduce the burden of drug trials and lead to substantial improvements in patient quality of life.
Lenhard, Fabian; Sauer, Sebastian; Andersson, Erik; Månsson, Kristoffer Nt; Mataix-Cols, David; Rück, Christian; Serlachius, Eva
2018-03-01
There are no consistent predictors of treatment outcome in paediatric obsessive-compulsive disorder (OCD). One reason for this might be the use of suboptimal statistical methodology. Machine learning is an approach to efficiently analyse complex data. Machine learning has been widely used within other fields, but has rarely been tested in the prediction of paediatric mental health treatment outcomes. To test four different machine learning methods in the prediction of treatment response in a sample of paediatric OCD patients who had received Internet-delivered cognitive behaviour therapy (ICBT). Participants were 61 adolescents (12-17 years) who enrolled in a randomized controlled trial and received ICBT. All clinical baseline variables were used to predict strictly defined treatment response status three months after ICBT. Four machine learning algorithms were implemented. For comparison, we also employed a traditional logistic regression approach. Multivariate logistic regression could not detect any significant predictors. In contrast, all four machine learning algorithms performed well in the prediction of treatment response, with 75 to 83% accuracy. The results suggest that machine learning algorithms can successfully be applied to predict paediatric OCD treatment outcome. Validation studies and studies in other disorders are warranted. Copyright © 2017 John Wiley & Sons, Ltd.
Effective Dust Control Systems on Concrete Dowel Drilling Machinery
Echt, Alan S.; Sanderson, Wayne T.; Mead, Kenneth R.; Feng, H. Amy; Farwick, Daniel R.; Farwick, Dawn Ramsey
2016-01-01
Rotary-type percussion dowel drilling machines, which drill horizontal holes in concrete pavement, have been documented to produce respirable crystalline silica concentrations above recommended exposure criteria. This places operators at potential risk for developing health effects from exposure. United States manufacturers of these machines offer optional dust control systems. The effectiveness of the dust control systems to reduce respirable dust concentrations on two types of drilling machines was evaluated under controlled conditions with the machines operating inside large tent structures in an effort to eliminate secondary exposure sources not related to the dowel-drilling operation. Area air samples were collected at breathing zone height at three locations around each machine. Through equal numbers of sampling rounds with the control systems randomly selected to be on or off, the control systems were found to significantly reduce respirable dust concentrations from a geometric mean of 54 mg per cubic meter to 3.0 mg per cubic meter on one machine and 57 mg per cubic meter to 5.3 mg per cubic meter on the other machine. This research shows that the dust control systems can dramatically reduce respirable dust concentrations by over 90% under controlled conditions. However, these systems need to be evaluated under actual work conditions to determine their effectiveness in reducing worker exposures to crystalline silica below hazardous levels. PMID:27074062
77 FR 68769 - Access to Confidential Business Information by Eastern Research Group
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-16
... (202) 566-0280. Docket visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject...
76 FR 69722 - Access to Confidential Business Information by Protection Strategies Incorporated
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-09
... visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-15
... show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an...
76 FR 23586 - Access to Confidential Business Information by Syracuse Research Corporation
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-27
... (202) 566-0280. Docket visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject...
78 FR 66696 - Access to Confidential Business Information by Arcadis U.S., Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-06
... (202) 566-0280. Docket visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject...
A Decision-Making Tools Workshop
1999-08-01
California Polytechnic State University, San Luis Obispo, CA 47 Distributed Intelligent Agents Katia Sycara, Keith Decker, Anandeep Pannu, Mike...Anandeep Pannu and Katia Sycara. Learning text filtering preferences. In 1996 AAAI Symposium on Machine Learning and Information Access, 1996. [19] Anand
Creating Library Interiors: Planning and Design Considerations.
ERIC Educational Resources Information Center
Jones, Plummer Alston, Jr.; Barton, Phillip K.
1997-01-01
Examines design considerations for public library interiors: access; acoustical treatment; assignable and nonassignable space; building interiors: ceilings, clocks, color, control, drinking fountains; exhibit space: slotwall display, floor coverings, floor loading, furniture, lighting, mechanical systems, public address, copying machines,…
Underestimating extreme events in power-law behavior due to machine-dependent cutoffs
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2014-11-01
Power-law distributions are typical macroscopic features occurring in almost all complex systems observable in nature. As a result, researchers in quantitative analyses must often generate random synthetic variates obeying power-law distributions. The task is usually performed through standard methods that map uniform random variates into the desired probability space. Whereas all these algorithms are theoretically solid, in this paper we show that they are subject to severe machine-dependent limitations. As a result, two dramatic consequences arise: (i) the sampling in the tail of the distribution is not random but deterministic; (ii) the moments of the sample distribution, which are theoretically expected to diverge as functions of the sample sizes, converge instead to finite values. We provide quantitative indications for the range of distribution parameters that can be safely handled by standard libraries used in computational analyses. Whereas our findings indicate possible reinterpretations of numerical results obtained through flawed sampling methodologies, they also pave the way for the search for a concrete solution to this central issue shared by all quantitative sciences dealing with complexity.
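The mechanism is easy to demonstrate: with inverse-transform sampling, a power-law variate is a deterministic function of a uniform variate, so the finite resolution of floating-point uniforms imposes a hard, machine-dependent ceiling on the tail. A short numpy illustration follows; the cutoff formula assumes 53-bit double-precision uniforms, as numpy produces.

```python
# Machine-dependent cutoff in standard inverse-transform power-law sampling.
import numpy as np

alpha, x_min = 2.5, 1.0
u = np.random.default_rng(0).random(10**7)        # uniforms on [0, 1), doubles
x = x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))   # power-law variates, exponent alpha

# Uniforms are multiples of 2**-53, so (1 - u) can never be smaller than 2**-53
# (for u < 1); the largest sample the generator can ever return is therefore:
u_min = 2.0**-53
hard_cutoff = x_min * u_min ** (-1.0 / (alpha - 1.0))
print(f"max sample: {x.max():.3e}, hard cutoff: {hard_cutoff:.3e}")
```

Every sample, no matter how many are drawn, sits below the cutoff, which is why sample moments converge to finite values even when the theoretical moments diverge.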
NASA Astrophysics Data System (ADS)
Chandler, C. L.; Groman, R. C.; Shepherd, A.; Allison, M. D.; Kinkade, D.; Rauch, S.; Wiebe, P. H.; Glover, D. M.
2014-12-01
The ability to reproduce scientific results is a cornerstone of the scientific method, and access to the data upon which results are based is essential to reproducibility. Access to the data alone is not enough, though, and research communities have recognized the importance of metadata (data documentation) to enable discovery and data access and to facilitate interpretation and accurate reuse. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) was first funded in late 2006 by the National Science Foundation (NSF) Division of Ocean Sciences (OCE) Biology and Chemistry Sections to help ensure that data generated during NSF OCE funded research would be preserved and available for future use. The BCO-DMO was formed by combining the formerly independent data management offices of two marine research programs: the United States Joint Global Ocean Flux Study (US JGOFS) and the US GLOBal Ocean ECosystems Dynamics (US GLOBEC) program. Since the years when the US JGOFS and US GLOBEC programs were active (the 1990s), there have been significant changes in all aspects of the research data life cycle, and the staff at BCO-DMO have modified the way in which data contributed to the office are managed. The supporting documentation that describes each dataset was originally displayed as a human-readable text file retrievable via a Web browser. BCO-DMO still offers that form, because our primary audience is marine researchers using Web browser clients; however, we are seeing an increased demand to support machine-client access. Metadata records from the BCO-DMO data system are now extracted and published in a variety of formats. The system supports ISO 19115, FGDC, GCMD DIF, the schema.org Dataset extension, formal publication with a DOI, and RDF with semantic markup including PROV-O, FOAF and more. In the 1990s, data documentation helped researchers locate data of interest and understand provenance sufficiently to determine fitness for purpose. Today, providing data documentation in a machine-interpretable form enables researchers to make more effective use of machine clients to discover and access data. This presentation will describe the challenges associated with, and the benefits realized from, layering modern Semantic Web technologies on top of a legacy data system. http://bco-dmo.org/
Microcompartments and Protein Machines in Prokaryotes
Saier, Milton H.
2013-01-01
The prokaryotic cell was once thought of as a “bag of enzymes” with little or no intracellular compartmentalization. In this view, most reactions essential for life occurred as a consequence of random molecular collisions involving substrates, cofactors and cytoplasmic enzymes. Our current conception of a prokaryote is far from this view. We now consider a bacterium or an archaeon as a highly structured, non-random collection of functional membrane-embedded and proteinaceous molecular machines, each of which serves a specialized function. In this article we shall present an overview of such microcompartments including (i) the bacterial cytoskeleton and the apparatus allowing DNA segregation during cell division, (ii) energy transduction apparatus involving light-driven proton pumping and ion gradient-driven ATP synthesis, (iii) prokaryotic motility and taxis machines that mediate cell movements in response to gradients of chemicals and physical forces, (iv) machines of protein folding, secretion and degradation, (v) metabolasomes carrying out specific chemical reactions, (vi) 24-hour clocks allowing bacteria to coordinate their metabolic activities with the daily solar cycle, and (vii) proteinaceous membrane-compartmentalized structures such as sulfur granules and gas vacuoles. Membrane-bounded prokaryotic organelles were considered in a recent JMMB written symposium concerned with membranous compartmentalization in bacteria [Saier and Bogdanov, 2013]. By contrast, in this symposium, we focus on proteinaceous microcompartments. These two symposia, taken together, provide the interested reader with an objective view of the remarkable complexity of what was once thought of as a simple non-compartmentalized cell. PMID:23920489
Big Data Tools as Applied to ATLAS Event Data
NASA Astrophysics Data System (ADS)
Vukotic, I.; Gardner, R. W.; Bryant, L. A.
2017-10-01
Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and the associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and tools like Spark, Jupyter, R, SciPy, Caffe, TensorFlow, etc. Machine learning challenges such as the Higgs Boson Machine Learning Challenge and the Tracking challenge, event viewers (VP1, ATLANTIS, ATLASrift), and still-to-be-developed educational and outreach tools would be able to access the data through a simple REST API. In this preliminary investigation we focus on derived xAOD data sets. These are much smaller than the primary xAODs, containing only the containers, variables, and events of interest to a particular analysis. Encouraged by the performance of Elasticsearch for the ADC analytics platform, we developed an algorithm for indexing derived xAOD event data. We have designed an appropriate document mapping and have imported a full set of standard model W/Z datasets. We compare the disk space efficiency of this approach to that of standard ROOT files, measure the performance in simple cut-flow style data analysis, and present preliminary results on its scaling characteristics with different numbers of clients, query complexity, and size of the data retrieved.
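To make the indexing idea concrete, here is a minimal sketch of creating an event index and inserting one per-event document with the elasticsearch Python client (8.x style calls); the index name, mapping fields, and event values are hypothetical and do not reflect the actual xAOD document mapping.

```python
from elasticsearch import Elasticsearch

# Connect to a local Elasticsearch node (address is a placeholder).
es = Elasticsearch("http://localhost:9200")

# Create an index whose mapping flattens a few per-event variables.
# Field names here are illustrative, not the real xAOD schema.
es.indices.create(
    index="xaod-events",
    mappings={
        "properties": {
            "run_number": {"type": "long"},
            "event_number": {"type": "long"},
            "jet_pt": {"type": "float"},
        }
    },
)

# Index a single event document; bulk helpers would be used at scale.
doc = {"run_number": 358031, "event_number": 1234567, "jet_pt": 52.3}
es.index(index="xaod-events", document=doc)
```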
Quantum Entanglement in Neural Network States
NASA Astrophysics Data System (ADS)
Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.
2017-04-01
Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, while at the same time strongly deviating from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves the way to bridging computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems.
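For reference, a short sketch of the standard RBM parametrization assumed in this literature and the area-law statement, in notation that is ours rather than the paper's:

```latex
% Standard RBM amplitude (notation assumed): visible spins v_j, hidden spins h_k,
% biases a_j, b_k, and couplings W_{jk}.
\Psi(v) \;=\; \sum_{\{h_k = \pm 1\}}
  \exp\!\Big( \sum_j a_j v_j \;+\; \sum_k b_k h_k \;+\; \sum_{jk} W_{jk} h_k v_j \Big)

% Area law for short-range RBM states: for any bipartition A|B,
S(A) \;\le\; c\,|\partial A|,
% with c a constant and |\partial A| the boundary area of region A,
% independent of the volume of A.
```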
Price, John M.; Colflesh, Gregory J. H.; Cerella, John; Verhaeghen, Paul
2014-01-01
We investigated the effects of 10 hours of practice on variations of the N-Back task to examine the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed), whereas the processes of forward access (i.e., retrieval in order, as in a standard N-Back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity, as measured by the size of the focus of attention, for the forward-access task, but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others. PMID:24486803
Electrical Evaluation of RCA MWS5501D Random Access Memory, Volume 2, Appendix a
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5501D random access memory. The tests included functional tests, AC and DC parametric tests, an AC parametric worst-case pattern selection test, determination of the worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, data hold time, and data setup time are among the results surveyed.
Livingstone, Charles; Adams, Peter J
2011-01-01
To illustrate ways in which industry control over the gambling market and its regulatory system has enabled rapid proliferation in gambling consumption and harm, and to discuss the relationship between government regulation and the accessibility, marketing and technologies of electronic gambling machines in Australia and New Zealand. The regulatory framework for gambling in both countries has encouraged highly accessible, regressively distributed and heavily marketed high-impact electronic gambling machines. This framework has developed in large part through the conjunction of government revenue needs and the adaptation of a folk model of gambling, appropriated by gambling businesses and engineered to incorporate a discourse that legitimates their gambling businesses. Governments should be encouraged to invest in 'upstream' public health strategies that contain the economic and social drivers of intensifying gambling consumption. One key aspect involves questioning the most suitable scale, location and marketing of gambling operations, and the reliance of government on gambling revenues (whether directly or as substitution for other government expenditure). Technological solutions to disrupt the development of obsessive gambling habits are also available and are likely to reduce gambling-related harm.
Teaching medical students ultrasound-guided vascular access - which learning method is best?
Lian, Alwin; Rippey, James C R; Carr, Peter J
2017-05-15
Ultrasound is recommended to guide insertion of peripheral intravenous vascular cannulae (PIVC) where difficulty is experienced. Ultrasound machines are now commonplace, and junior doctors are often expected to be able to use them. The educational standards for this skill are highly varied, ranging from no education, to self-guided internet-based education, to formal, face-to-face traditional education. In an attempt to decide which educational technique our institution should introduce, a small pilot trial comparing educational techniques was designed. Thirty medical students were enrolled and allocated to one of three groups. PIVC placing ability was then observed, tested and graded on vascular access phantoms. The formal, face-to-face traditional education was rated best by the students and had the highest success rate in PIVC placement, with the improvement statistically significant compared to no education (p = 0.01) and trending towards significance compared to self-directed internet-based education (p < 0.06). The group receiving traditional face-to-face teaching on ultrasound-guided vascular access performed significantly better than those receiving no education. As the number of ultrasound machines in clinical areas increases, it is important that education programs to support their safe and appropriate use are developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Gawor, J.; Lane, P.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
NASA Astrophysics Data System (ADS)
Lary, D. J.
2013-12-01
A BigData case study is described in which multiple datasets from several satellites, high-resolution global meteorological data, social media and in-situ observations are combined using machine learning on a distributed cluster with an automated workflow. The resulting global particulate dataset is relevant to global public health studies and would not be possible to produce without the use of multiple big datasets, in-situ data and machine learning. To greatly reduce the development time and enhance the functionality, a high-level language capable of parallel processing (Matlab) has been used. Key considerations for the system are high-speed access due to the large data volume, persistence of the large data volumes, and a precise process time scheduling capability.
Manual actuator. [for spacecraft exercising machines
NASA Technical Reports Server (NTRS)
Gause, R. L.; Glenn, C. G. (Inventor)
1974-01-01
An actuator for an exercising machine employable by a crewman aboard a manned spacecraft is presented. The actuator is characterized by a force delivery arm projected from a rotary input shaft of an exercising machine and having a force input handle extended orthogonally from its distal end. The handle includes a hand-grip configured to be received within the palm of the crewman's hand and a grid pivotally supported for angular displacement between a first position, wherein the grid is disposed in an overlying juxtaposition with the hand-grip, and a second position, angularly displaced from the first position, for affording access to the hand-grip, and a latching mechanism fixed to the sole of a shoe worn by the crewman for latching the shoe to the grid when the grid is in the first position.
Computer access security code system
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr. (Inventor)
1990-01-01
A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
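A minimal sketch of the rectangle challenge-response idea described above, with hypothetical matrix size and subset length; this is an illustration of the scheme, not the patented implementation.

```python
import random
import string

# Hypothetical parameters: a 6x6 matrix of 3-character subsets.
ROWS, COLS, SUBSET_LEN = 6, 6, 3
rng = random.SystemRandom()

# Alpha-numeric character subsets disposed in random order in a 2-D matrix.
matrix = [
    [''.join(rng.choices(string.ascii_uppercase + string.digits, k=SUBSET_LEN))
     for _ in range(COLS)]
    for _ in range(ROWS)
]

used = set()  # once used, subsets are never reused, defeating eavesdroppers

def challenge():
    """Pick two unused subsets not sharing a row or column; transmit them."""
    while True:
        r1, c1 = rng.randrange(ROWS), rng.randrange(COLS)
        r2, c2 = rng.randrange(ROWS), rng.randrange(COLS)
        if r1 != r2 and c1 != c2 and not {(r1, c1), (r2, c2)} & used:
            # Retire all four rectangle corners after this exchange.
            used.update({(r1, c1), (r2, c2), (r1, c2), (r2, c1)})
            return (r1, c1), (r2, c2)

def expected_response(p1, p2):
    """The correct response completes the rectangle defined by the challenge."""
    (r1, c1), (r2, c2) = p1, p2
    return {matrix[r1][c2], matrix[r2][c1]}

corners = challenge()
print("challenge:", matrix[corners[0][0]][corners[0][1]],
      matrix[corners[1][0]][corners[1][1]])
print("expected response:", expected_response(*corners))
```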
Moore, Jason H; Shestov, Maksim; Schmitt, Peter; Olson, Randal S
2018-01-01
A central challenge of developing and evaluating artificial intelligence and machine learning methods for regression and classification is access to data that illuminates the strengths and weaknesses of different methods. Open data plays an important role in this process by making it easy for computational researchers to access real data for this purpose. Genomics has in some respects taken a leading role in the open data effort, starting with DNA microarrays. While real data from experimental and observational studies is necessary for developing computational methods, it is not sufficient, because it is not possible to know what the ground truth is in real data. It must be accompanied by simulated data, where the balance between signal and noise is known and can be directly evaluated. Unfortunately, there is a lack of methods and software for simulating data with the kind of complexity found in real biological and biomedical systems. We present here the Heuristic Identification of Biological Architectures for simulating Complex Hierarchical Interactions (HIBACHI) method and prototype software for simulating complex biological and biomedical data. Further, we introduce new methods for developing simulation models that generate data that specifically allows discrimination between different machine learning methods.
Could EBT Machines Increase Fruit and Vegetable Purchases at New York City Green Carts?
Breck, Andrew; Kiszko, Kamila; Martinez, Olivia; Abrams, Courtney; Elbel, Brian
2017-09-21
Residents of some low-income neighborhoods have limited access to fresh fruits and vegetables. In 2008, New York City issued new mobile fruit and vegetable cart licenses for neighborhoods with inadequate availability of fresh produce. Some of these carts were equipped with electronic benefit transfer (EBT) machines, allowing them to accept Supplemental Nutrition Assistance Program (SNAP) benefits. This article examines the association between type and quantities of fruits and vegetables purchased from mobile fruit and vegetable vendors and consumer characteristics, including payment method. Customers at 4 produce carts in the Bronx, New York, were surveyed during 3 periods in 2013 and 2014. Survey data, including purchased fruit and vegetable quantities, were analyzed using multivariable negative binomial regressions, with payment method (cash only vs EBT or EBT and cash) as the primary independent variable. Covariates included availability of EBT, vendor, and customer sociodemographic characteristics. A total of 779 adults participated in this study. Shoppers who used SNAP benefits purchased an average of 5.4 more cup equivalents of fruits and vegetables than did shoppers who paid with cash. Approximately 80% of this difference was due to higher quantities of purchased fruits. Expanding access to EBT machines at mobile produce carts may increase purchases of fruits and vegetables from these vendors.
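A minimal sketch of the kind of model the methods describe, using the statsmodels formula interface; the file and column names (cup_equivalents, payment, vendor, age_group) are hypothetical stand-ins for the survey variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Load the (hypothetical) survey data: one row per surveyed customer.
df = pd.read_csv("green_cart_survey.csv")

# Multivariable negative binomial regression of purchased cup equivalents
# on payment method, with vendor and sociodemographic covariates.
model = smf.negativebinomial(
    "cup_equivalents ~ C(payment) + C(vendor) + C(age_group)", data=df
).fit()
print(model.summary())
```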
Barriers to radiotherapy access at the University College Hospital in Ibadan, Nigeria.
Anakwenze, Chidinma P; Ntekim, Atara; Trock, Bruce; Uwadiae, Iyobosa B; Page, Brandi R
2017-08-01
Nigeria has the biggest gap between radiotherapy availability and need, with one machine per 19.4 million people, compared to one machine per 250,000 people in high-income countries. This study aims to identify patient-level barriers to radiotherapy access. This was a cross-sectional study consisting of patient questionnaires (n = 50) conducted in January 2016 to assess patient demographics, types of cancers seen, barriers to receiving radiotherapy, health beliefs and practices, and factors leading to treatment delay. Eighty percent of patients could not afford radiotherapy without financial assistance, and only 6% of the patients had federal insurance, which did not cover radiotherapy services. Of the patients who had completed radiotherapy treatment, 91.3% had experienced treatment delays, and often cancellations, due to healthcare worker strikes, power failures, machine breakdowns, or prolonged wait times. The timeliness of a patient's radiotherapy care correlated with their employment status and distance from the radiotherapy center (p < 0.05). Barriers to care at a radiotherapy center in a low- and middle-income country (LMIC) have previously not been well characterized. These findings can be used to inform efforts to expand the availability of radiotherapy and improve current treatment capacity in Nigeria and in other LMICs.
NASA Astrophysics Data System (ADS)
Yang, Jyun-Bao; Chang, Ting-Chang; Huang, Jheng-Jie; Chen, Yu-Chun; Chen, Yu-Ting; Tseng, Hsueh-Chih; Chu, Ann-Kuo; Sze, Simon M.
2014-04-01
In this study, we show that indium-gallium-zinc-oxide thin film transistors can be operated either as transistors or as resistance random access memory devices. Before the forming process, current-voltage transfer characteristics are observed; after a forming process, resistance switching characteristics are measured. These resistance switching characteristics exhibit two behaviors, dominated by different mechanisms: the mode 1 resistance switching behavior is due to oxygen vacancies, while mode 2 is dominated by the formation of an oxygen-rich layer. Furthermore, an easy approach is proposed to reduce power consumption when using these resistance random access memory devices with the amorphous indium-gallium-zinc-oxide thin film transistor.
75 FR 8330 - Access to Confidential Business Information by Eastern Research Group
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-24
... identification, pass through a metal detector, and sign the EPA visitor log. All visitors' bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be...
77 FR 68769 - Access to Confidential Business Information by Abt Associates, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-16
... identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-08
... (202) 566-0280. Docket visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an...
77 FR 21766 - Access to Confidential Business Information by CGI Federal Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-14
... (202) 566-0280. Docket visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject...
75 FR 56096 - Access to Confidential Business Information by Industrial Economics Incorporated
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-15
... photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that...
78 FR 20101 - Access to Confidential Business Information by Chemical Abstract Services
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be...
Protecting Files Hosted on Virtual Machines With Out-of-Guest Access Control
2017-12-01
... analyzes the design and methodology of the implemented mechanism, while Chapter 4 explains the test methodology, test cases, and performance testing ... SACL, we verify that the user or group accessing the file has sufficient permissions. If that is correct, the callback function returns control to ... ferify. In the first section, we validate our design of ferify. Next, we explain the tests we performed to verify that ferify has the results we expected
Materiel Readiness Support Activity Automation Plan
1986-09-01
... Hardwire leased lines; Sytek RF broadband cable modems; digital phone switched service. Medium speed (up to 56k baud): RF modems; digital phone service. High ... Medium speed (up to 56k baud): RF modems; up to 56k baud sync modem, $2070 plus installation, $25 per month maintenance, $1200 per ... security is to disconnect network, modem, and hardwire access (that is, all external access to the machine) after 5 p.m. (normal business hours
High-speed ultrafast laser machining with tertiary beam positioning (Conference Presentation)
NASA Astrophysics Data System (ADS)
Yang, Chuan; Zhang, Haibin
2017-03-01
For an industrial laser application, high process throughput and low average cost of ownership are critical to commercial success. Benefiting from high peak power, nonlinear absorption and small achievable spot size, ultrafast lasers offer the advantages of minimal heat-affected zone, good taper and sidewall quality, and a small-via capability that exceeds the limits of their predecessors in via drilling for electronic packaging. In the past decade, ultrafast lasers have both grown in power and come down in cost. For example, disk and fiber technology have recently both shown stable operation in the 50 W to 200 W range, mostly at high repetition rate (beyond 500 kHz), which helps avoid detrimental nonlinear effects. However, effectively and efficiently scaling the throughput with the fast-growing power capability of ultrafast lasers, while keeping the beneficial laser-material interactions, is very challenging, mainly because of the bottleneck imposed by the inertia-related acceleration limit and servo gain bandwidth when only stages and galvanometers are used. On the other hand, inertia-free scanning solutions such as acousto-optic and electro-optic deflectors have a small scan field and are therefore not suitable for large-panel processing. Our recent system developments combine stages, galvanometers, and AODs into a coordinated tertiary architecture for high-bandwidth and, at the same time, large-field beam positioning. Synchronized three-level movement allows extremely fast local speed and continuous motion over the whole stage travel range. We present via drilling results from such an ultrafast system with up to 3 MHz pulse-to-pulse random access, enabling high-quality, low-cost ultrafast machining with emerging high-average-power laser sources.
Cavallo, Filippo; Sinigaglia, Stefano; Megali, Giuseppe; Pietrabissa, Andrea; Dario, Paolo; Mosca, Franco; Cuschieri, Alfred
2014-10-01
The uptake of minimal access surgery (MAS) has, by virtue of its clinical benefits, become widespread across the surgical specialties. However, despite its advantages in reducing traumatic insult to the patient, it imposes significant ergonomic restrictions on operating surgeons, who require training for its safe execution. Recent progress in manipulator technologies (robotic or mechanical) has certainly reduced the level of difficulty; however, it requires information for a complete gesture analysis of surgical performance. This article reports on the development and evaluation of such a system, capable of full biomechanical analysis and machine learning. The system for gesture analysis comprises 5 principal modules, which permit synchronous acquisition of multimodal surgical gesture signals from different sources and settings. The acquired signals are used to perform a biomechanical analysis for investigation of kinematics, dynamics, and muscle parameters of surgical gestures, and to build a machine learning model for segmentation and recognition of the principal phases of a surgical gesture. The biomechanical system is able to estimate the level of expertise of subjects and the ergonomics of using different instruments. The machine learning approach is able to ascertain the level of expertise of subjects and has the potential for automatic recognition of surgical gestures for surgeon-robot interactions. Preliminary tests have confirmed the efficacy of the system for surgical gesture analysis, providing an objective evaluation of progress during training of surgeons in their acquisition of proficiency in the MAS approach and highlighting useful information for the design and evaluation of master-slave manipulator systems. © The Author(s) 2013.
Plated wire random access memories
NASA Technical Reports Server (NTRS)
Gouldin, L. D.
1975-01-01
A program was conducted to construct 4096-word by 18-bit random access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.
USDA-ARS?s Scientific Manuscript database
This study examined weight loss differences between a community-based, intensive behavioral counseling program (Weight Watchers PointsPlus) that included three treatment access modes, and a self-help condition. A total of 292 participants were randomized to a Weight Watchers (WW; n=147) or a self-help condition (...
Shi, Lu
2010-01-01
There is controversy over the degree to which banning sugar-sweetened beverage (SSB) sales at schools could decrease SSB intake. This paper uses the adolescent sample of the 2005 California Health Interview Survey to estimate the association between the availability of SSBs from school vending machines and the amount of SSB consumption. Propensity score stratification and kernel-based propensity score matching are used to address the selection bias issue in cross-sectional data. Propensity score stratification shows that adolescents who had access to SSBs through their school vending machines consumed 0.170 more drinks of SSB than those who did not (P < .05). Kernel-based propensity score matching shows the SSB consumption difference on the prior day to be 0.158 (P < .05). This paper strengthens the evidence for the association between SSB availability via school vending machines and actual SSB consumption, while future studies are needed to explore changes in other beverages after SSBs become less available.
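A minimal sketch of propensity score stratification as described, assuming a hypothetical data frame with a binary exposure (vending_access), covariates, and an SSB outcome; this illustrates the estimator, not the paper's analysis code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical survey extract: exposure, covariates, and outcome columns.
df = pd.read_csv("chis_teens.csv")
covariates = ["age", "sex", "income"]  # stand-in covariate names

# Step 1: estimate each adolescent's propensity to have vending access.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates],
                                                 df["vending_access"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# Step 2: stratify on propensity-score quintiles and average the
# within-stratum exposed-minus-unexposed differences in SSB drinks.
df["stratum"] = pd.qcut(df["pscore"], 5, labels=False)
diffs = df.groupby("stratum").apply(
    lambda g: g.loc[g.vending_access == 1, "ssb_drinks"].mean()
            - g.loc[g.vending_access == 0, "ssb_drinks"].mean()
)
print("Stratified effect estimate:", np.mean(diffs))
```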
Complementarity between entanglement-assisted and quantum distributed random access code
NASA Astrophysics Data System (ADS)
Hameedi, Alley; Saha, Debashis; Mironowicz, Piotr; Pawłowski, Marcin; Bourennane, Mohamed
2017-05-01
Collaborative communication tasks such as random access codes (RACs) employing quantum resources have manifested great potential in enhancing information processing capabilities beyond the classical limitations. The two quantum variants of RACs, namely, the quantum random access code (QRAC) and the entanglement-assisted random access code (EARAC), have demonstrated equal prowess for a number of tasks. However, there do exist specific cases where one outperforms the other. In this article, we study a family of 3→1 distributed RACs [J. Bowles, N. Brunner, and M. Pawłowski, Phys. Rev. A 92, 022351 (2015), 10.1103/PhysRevA.92.022351] and present its general construction of both the QRAC and the EARAC. We demonstrate that, depending on the function of inputs that is sought, if QRAC achieves the maximal success probability then EARAC fails to do so and vice versa. Moreover, a tripartite Bell-type inequality associated with the EARAC variants reveals the genuine multipartite nonlocality exhibited by our protocol. We conclude with an experimental realization of the 3→1 distributed QRAC that achieves higher success probabilities than the maximum possible with EARACs for a number of tasks.
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
Machine Learning Predictions of a Multiresolution Climate Model Ensemble
NASA Astrophysics Data System (ADS)
Anderson, Gemma J.; Lucas, Donald D.
2018-05-01
Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
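A minimal sketch of the multiresolution idea with synthetic stand-ins: many cheap low-resolution runs and a few high-resolution runs are pooled, with a resolution indicator as an extra feature; the data and model settings are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_lo, n_hi, n_params = 200, 20, 5  # many low-res runs, few high-res runs

# Perturbed-parameter samples and toy responses (low-res response is biased).
X_lo = rng.uniform(size=(n_lo, n_params))
X_hi = rng.uniform(size=(n_hi, n_params))
y_lo = X_lo.sum(axis=1) + 0.3
y_hi = X_hi.sum(axis=1)

# Pool both ensembles; the last column flags resolution (0 = low, 1 = high).
X = np.vstack([np.c_[X_lo, np.zeros(n_lo)],
               np.c_[X_hi, np.ones(n_hi)]])
y = np.concatenate([y_lo, y_hi])

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Predict at high resolution for new parameter settings.
X_new = np.c_[rng.uniform(size=(3, n_params)), np.ones(3)]
print(rf.predict(X_new))
```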
NASA Astrophysics Data System (ADS)
Wu, Qi
2010-03-01
Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonal character, nonlinearity, randomness and fuzziness, existing support vector kernels do not approximate the random curve of the sales time series well in the quadratic continuous integral space (L2 space). In this paper, we present a hybrid intelligent system combining the wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of its application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible; a comparison between the method proposed in this paper and others is also given, which shows that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.
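For reference, the translation-invariant wavelet kernel commonly used in wavelet-kernel SVM work takes the following form (notation assumed; a is the dilation parameter and h is a Morlet-type mother wavelet):

```latex
K(x, x') \;=\; \prod_{i=1}^{d} h\!\left( \frac{x_i - x_i'}{a} \right),
\qquad
h(u) \;=\; \cos(1.75\,u)\, \exp\!\left( -\frac{u^2}{2} \right)
```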
NASA Astrophysics Data System (ADS)
Pavlichin, Dmitri S.; Mabuchi, Hideo
2014-06-01
Nanoscale integrated photonic devices and circuits offer a path to ultra-low power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2π and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource, and the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products, and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g. statistical classification and some machine learning algorithms.
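In symbols, the readout the circuit computes can be written as follows (notation ours, not the paper's):

```latex
% phi_i are the stored memory phases, s is the binary selector vector,
% and the result is encoded in the phase of the output coherent field.
\theta_{\text{out}} \;=\; \sum_{i=1}^{n} s_i\,\phi_i \pmod{2\pi},
\qquad s_i \in \{0, 1\}
```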
Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun
2016-01-01
The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks. PMID:27754380
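A minimal sketch of the cluster-then-classify idea under simplifying assumptions: k-means stands in for spectral clustering (which does not expose cluster centres directly), and a small MLP stands in for the deep network; this is an illustration of the routing scheme, not the authors' SCDNN code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def fit_scdnn_like(X_train, y_train, k=4):
    """Partition the training set into k clusters and train one network each."""
    clusterer = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train)
    experts = []
    for j in range(k):
        mask = clusterer.labels_ == j
        experts.append(MLPClassifier(hidden_layer_sizes=(64, 32),
                                     max_iter=500).fit(X_train[mask],
                                                       y_train[mask]))
    return clusterer, experts

def predict_scdnn_like(clusterer, experts, X_test):
    """Route each test point to the network of its nearest cluster centre."""
    nearest = clusterer.predict(X_test)
    return np.array([experts[j].predict(x.reshape(1, -1))[0]
                     for j, x in zip(nearest, X_test)])
```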
DiFranza, J R; Savageau, J A; Aisquith, B F
1996-01-01
OBJECTIVES. This study evaluated the influence of age, gender, vending machine lockout devices, and tobacco industry-sponsored voluntary compliance programs ("It's the Law" programs) on underage youths' ability to purchase tobacco. METHODS. Twelve youths made 480 attempts to purchase tobacco in Massachusetts from over-the-counter retailers and vending machines with and without remote control lockout devices. Half the vendors were participating in It's the Law programs. RESULTS. In communities with no requirements for lockout devices, illegal sales were far more likely from vending machines than from over-the-counter sources (odds ratio [OR] = 5.9, 95% confidence interval [CI] = 3.3, 10.3). Locks on vending machines made them equivalent to over-the-counter sources in terms of illegal sales to youths. Vendors participating in It's the Law programs were as likely to make illegal sales as nonparticipants (OR = 0.87, 95% CI = 0.57, 1.35). Girls and youths 16 years of age and older were more successful at purchasing tobacco. CONCLUSIONS. The It's the Law programs are ineffective in preventing illegal sales. While locks made vending machines equivalent to over-the-counter sources in their compliance with the law, they are not a substitute for law enforcement. PMID:8633739
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
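A minimal sketch of pulling one PMLB benchmark and scoring a classifier on it with the pmlb Python package; the dataset name below is one benchmark identifier, chosen for illustration.

```python
from pmlb import fetch_data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Download one benchmark dataset (cached locally after the first call).
X, y = fetch_data("mushroom", return_X_y=True)

# Cross-validate a baseline learner on it, as the benchmarking loop would.
scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```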
Park, Hanla; Papadaki, Angeliki
2016-01-01
Vending machine use has been associated with low dietary quality among children but there is limited evidence on its role in food habits of University students. We aimed to examine the nutritional value of foods sold in vending machines in a UK University and conduct formative research to investigate differences in food intake and body weight by vending machine use among 137 University students. The nutrient content of snacks and beverages available at nine campus vending machines was assessed by direct observation in May 2014. Participants (mean age 22.5 years; 54% males) subsequently completed a self-administered questionnaire to assess vending machine behaviours and food intake. Self-reported weight and height were collected. Vending machine snacks were generally high in sugar, fat and saturated fat, whereas most beverages were high in sugar. Seventy three participants (53.3%) used vending machines more than once per week and 82.2% (n 60) of vending machine users used them to snack between meals. Vending machine accessibility was positively correlated with vending machine use (r = 0.209, P = 0.015). Vending machine users, compared to non-users, reported a significantly higher weekly consumption of savoury snacks (5.2 vs. 2.8, P = 0.014), fruit juice (6.5 vs. 4.3, P = 0.035), soft drinks (5.1 vs. 1.9, P = 0.006), meat products (8.3 vs. 5.6, P = 0.029) and microwave meals (2.0 vs. 1.3, P = 0.020). No between-group differences were found in body weight. Most foods available from vending machines in this UK University were of low nutritional quality. In this sample of University students, vending machine users displayed several unfavourable dietary behaviours, compared to non-users. Findings can be used to inform the development of an environmental intervention that will focus on vending machines to improve dietary behaviours in University students in the UK. Copyright © 2015 Elsevier Ltd. All rights reserved.
Oswald, Tasha M; Winder-Patel, Breanna; Ruder, Steven; Xing, Guibo; Stahmer, Aubyn; Solomon, Marjorie
2018-05-01
The purpose of this pilot randomized controlled trial was to investigate the acceptability and efficacy of the Acquiring Career, Coping, Executive control, Social Skills (ACCESS) Program, a group intervention tailored for young adults with autism spectrum disorder (ASD) to enhance critical skills and beliefs that promote adult functioning, including social and adaptive skills, self-determination skills, and coping self-efficacy. Forty-four adults with ASD (ages 18-38; 13 females) and their caregivers were randomly assigned to treatment or waitlist control. Compared to controls, adults in treatment significantly improved in adaptive and self-determination skills, per caregiver report, and self-reported greater belief in their ability to access social support to cope with stressors. Results provide evidence for the acceptability and efficacy of the ACCESS Program.
AVE-SESAME program for the REEDA System
NASA Technical Reports Server (NTRS)
Hickey, J. S.
1981-01-01
The REEDA system software was modified and improved to process the AVE/SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers, and was interfaced with the existing graphics hardware/software available on the REEDA system. Software documentation was provided for existing AVE/SESAME programs, outlining functional flow charts and interactive questions. All AVE/SESAME data sets were processed into random access format to allow the developed software to access the entire AVE/SESAME data base. The existing software was modified to allow for processing of different AVE/SESAME data set types, including satellite, surface, and radar data.
Lawrence, Sally; Boyle, Maria; Craypo, Lisa; Samuels, Sarah
2009-06-01
Little has been done to ensure that the foods sold within health care facilities promote healthy lifestyles. Policies to improve school nutrition environments can serve as models for health care organizations. This study was designed to assess the healthfulness of foods sold in health care facility vending machines as well as how health care organizations are using policies to create healthy food environments. Food and beverage assessments were conducted in 19 California health care facilities that serve children in the Healthy Eating, Active Communities sites. Items sold in vending machines were inventoried at each facility and interviews conducted for information on vending policies. Analyses examined the types of products sold and the healthfulness of these products. Ninety-six vending machines were observed in 15 (79%) of the facilities. Hospitals averaged 9.3 vending machines per facility compared with 3 vending machines per health department and 1.4 per clinic. Sodas comprised the greatest percentage of all beverages offered for sale: 30% in hospital vending machines and 38% in clinic vending machines. Water (20%) was the most prevalent in health departments. Candy comprised the greatest percentage of all foods offered in vending machines: 31% in clinics, 24% in hospitals, and 20% in health department facilities. Across all facilities, 75% of beverages and 81% of foods sold in vending machines did not adhere to the California school nutrition standards (Senate Bill 12). Nine (47%) of the health care facilities had adopted, or were in the process of adopting, policies that set nutrition standards for vending machines. According to the California school nutrition standards, the majority of items found in the vending machines in participating health care facilities were unhealthy. Consumption of sweetened beverages and high-energy-density foods has been linked to increased prevalence of obesity. Some health care facilities are developing policies that set nutrition standards for vending machines. These policies could be effective in increasing access to healthy foods and beverages in institutional settings.
Overview of emerging nonvolatile memory technologies.
Meena, Jagan Singh; Sze, Simon Min; Chand, Umesh; Tseng, Tseung-Yuen
2014-01-01
Nonvolatile memory technologies in Si-based electronics date back to the 1990s. The ferroelectric field-effect transistor (FeFET) was one of the most promising devices for replacing conventional Flash memory, which was already facing physical scaling limitations at that time. A variant of charge storage memory referred to as Flash memory is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memories is approaching, and many new types of memory to replace conventional Flash memories have been proposed. Emerging memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets, including digital cameras, cell phones and portable music players. They are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory, and so become very attractive candidates for future memory hierarchies. Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum dot memory technologies, have also gained tremendous popularity in recent years. Consequently, it is no exaggeration to say that computer memory could soon earn the ultimate commercial validation for scale-up and production: the cheap plastic knockoff. Therefore, this review is devoted to this rapidly developing new class of memory technologies, based on an investigation of recent progress in advanced Flash memory devices.
Exploring a potential energy surface by machine learning for characterizing atomic transport
NASA Astrophysics Data System (ADS)
Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro
2018-03-01
We propose a machine-learning method for evaluating the potential barrier governing atomic transport, based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points. The robustness and efficiency of the method are demonstrated on a dozen model cases of proton diffusion in oxides, in comparison with a conventional nudged elastic band method.
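A minimal one-dimensional sketch of the idea: fit a Gaussian process to a few PES evaluations, draw random posterior surfaces, and examine the resulting distribution of the barrier; the toy potential and parameters are hypothetical, not the paper's model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in 1-D potential
X_train = rng.uniform(0, 3, size=(8, 1))       # a few evaluated PES points
y_train = f(X_train).ravel()

# Probabilistic surrogate of the PES.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
gp.fit(X_train, y_train)

# Draw many random PES realizations along the migration path.
X_path = np.linspace(0, 3, 200).reshape(-1, 1)
samples = gp.sample_y(X_path, n_samples=100, random_state=0)  # (200, 100)

# Barrier of each sampled surface, measured from the path's starting point.
barriers = samples.max(axis=0) - samples[0, :]
print("barrier estimate: %.3f +/- %.3f" % (barriers.mean(), barriers.std()))
```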
Fiber tractography using machine learning.
Neher, Peter F; Côté, Marc-Alexandre; Houde, Jean-Christophe; Descoteaux, Maxime; Maier-Hein, Klaus H
2017-09-01
We present a fiber tractography approach based on a random forest classification and voting process, guiding each step of the streamline progression by directly processing raw diffusion-weighted signal intensities. For comparison to the state-of-the-art, i.e. tractography pipelines that rely on mathematical modeling, we performed a quantitative and qualitative evaluation with multiple phantom and in vivo experiments, including a comparison to the 96 submissions of the ISMRM tractography challenge 2015. The results demonstrate the vast potential of machine learning for fiber tractography. Copyright © 2017 Elsevier Inc. All rights reserved.
Epidermis area detection for immunofluorescence microscopy
NASA Astrophysics Data System (ADS)
Dovganich, Andrey; Krylov, Andrey; Nasonov, Andrey; Makhneva, Natalia
2018-04-01
We propose a novel image segmentation method for immunofluorescence microscopy images of skin tissue for the diagnosis of various skin diseases. The segmentation is based on machine learning algorithms. The feature vector comprises three groups of features: statistical features, Laws' texture energy measures, and local binary patterns. The images are preprocessed for better learning. Different machine learning algorithms have been evaluated, and the best results have been obtained with the random forest algorithm. We use the proposed method to detect the epidermis region as part of a pemphigus diagnosis system.
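A minimal sketch of one of the three feature groups (local binary patterns) feeding a random forest; the patch size, LBP parameters, and stand-in data are illustrative, not the paper's pipeline.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1  # LBP neighbours and radius (illustrative values)

def lbp_histogram(patch):
    """Uniform-LBP code histogram for one grayscale patch."""
    img = (patch * 255).astype(np.uint8)   # LBP expects an integer image
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

# Stand-in patches and epidermis/background labels.
patches = np.random.rand(100, 32, 32)
labels = np.random.randint(0, 2, size=100)

X = np.array([lbp_histogram(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```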
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-16
... visitors are required to show photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-22
... photographic identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that...
76 FR 9012 - Access to Confidential Business Information by Electronic Consulting Services, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-16
... identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be...
A Mechanized Information Services Catalog.
ERIC Educational Resources Information Center
Marron, Beatrice; And Others
The National Bureau of Standards is mechanizing a catalog of currently available information sources and services. Information from recent surveys of machine-readable, commercially-available bibliographic data bases, and the various current awareness, batch retrospective, and interactive retrospective services which can access them have been…
Content Classification: Leveraging New Tools and Librarians' Expertise.
ERIC Educational Resources Information Center
Starr, Jennie
1999-01-01
Presents factors for librarians to consider when decision-making about information retrieval. Discusses indexing theory; thesauri aids; controlled vocabulary or thesauri to increase access; humans versus machines; automated tools; product evaluations and evaluation criteria; automated classification tools; content server products; and document…
Kuo, Ching-Yen; Yu, Liang-Chin; Chen, Hou-Chaung; Chan, Chien-Lung
2018-01-01
The aims of this study were to compare the performance of machine learning methods for the prediction of the medical costs associated with spinal fusion in terms of profit or loss in Taiwan Diagnosis-Related Groups (Tw-DRGs) and to apply these methods to explore the important factors associated with the medical costs of spinal fusion. A data set was obtained from a regional hospital in Taoyuan city in Taiwan, which contained data from 2010 to 2013 on patients of Tw-DRG49702 (posterior and other spinal fusion without complications or comorbidities). Naïve-Bayesian, support vector machines, logistic regression, C4.5 decision tree, and random forest methods were employed for prediction using WEKA 3.8.1. Five hundred thirty-two cases were categorized as belonging to the Tw-DRG49702 group. The mean medical cost was US $4,549.7, and the mean age of the patients was 62.4 years. The mean length of stay was 9.3 days. The length of stay was an important variable in terms of determining medical costs for patients undergoing spinal fusion. The random forest method had the best predictive performance in comparison to the other methods, achieving an accuracy of 84.30%, a sensitivity of 71.4%, a specificity of 92.2%, and an AUC of 0.904. Our study demonstrated that the random forest model can be employed to predict the medical costs of Tw-DRG49702, and could inform hospital strategy in terms of increasing the financial management efficiency of this operation.
Trends in communicative access solutions for children with cerebral palsy.
Myrden, Andrew; Schudlo, Larissa; Weyand, Sabine; Zeyl, Timothy; Chau, Tom
2014-08-01
Access solutions may facilitate communication in children with limited functional speech and motor control. This study reviews current trends in access solution development for children with cerebral palsy, with particular emphasis on the access technology that harnesses a control signal from the user (eg, movement or physiological change) and the output device (eg, augmentative and alternative communication system) whose behavior is modulated by the user's control signal. Access technologies have advanced from simple mechanical switches to machine vision (eg, eye-gaze trackers), inertial sensing, and emerging physiological interfaces that require minimal physical effort. Similarly, output devices have evolved from bulky, dedicated hardware with limited configurability, to platform-agnostic, highly personalized mobile applications. Emerging case studies encourage the consideration of access technology for all nonverbal children with cerebral palsy with at least nascent contingency awareness. However, establishing robust evidence of the effectiveness of the aforementioned advances will require more expansive studies. © The Author(s) 2014.
Lou, Wangchao; Wang, Xiaoqing; Chen, Fan; Chen, Yixiao; Jiang, Bo; Zhang, Hua
2014-01-01
Developing an efficient method for determination of the DNA-binding proteins, due to their vital roles in gene regulation, is becoming highly desired since it would be invaluable to advance our understanding of protein functions. In this study, we proposed a new method for the prediction of the DNA-binding proteins, by performing feature ranking using random forest and wrapper-based feature selection using a forward best-first search strategy. The features comprise information from the primary sequence, predicted secondary structure, predicted relative solvent accessibility, and position specific scoring matrix. The proposed method, called DBPPred, used Gaussian naïve Bayes as the underlying classifier since it outperformed five other classifiers, including decision tree, logistic regression, k-nearest neighbor, support vector machine with polynomial kernel, and support vector machine with radial basis function. As a result, the proposed DBPPred yields the highest average accuracy of 0.791 and average MCC of 0.583 according to the five-fold cross validation with ten runs on the training benchmark dataset PDB594. Subsequently, blind tests on the independent dataset PDB186 were performed with the proposed model trained on the entire PDB594 dataset and with five other existing methods (including iDNA-Prot, DNA-Prot, DNAbinder, DNABIND and DBD-Threader), showing that the proposed DBPPred yielded the highest accuracy of 0.769, MCC of 0.538, and AUC of 0.790. Independent tests performed by the proposed DBPPred on a large non-DNA-binding protein dataset and two RNA-binding protein datasets also showed improved or comparable quality when compared with the relevant prediction methods. Moreover, we observed that the majority of the selected features differ significantly in mean value between the DNA-binding and non-DNA-binding proteins. All of the experimental results indicate that the proposed DBPPred can serve as an alternative predictor for large-scale determination of DNA-binding proteins. PMID:24475169
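The two-stage selection scheme above can be sketched as follows; this is a hedged illustration using scikit-learn, whose greedy SequentialFeatureSelector stands in for the paper's best-first search, and the cut-off values are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.naive_bayes import GaussianNB

def select_features(X, y, keep_top=50, n_final=20):
    # Stage 1: random-forest importance ranking, keep the top features.
    rank = RandomForestClassifier(n_estimators=500, n_jobs=-1)
    rank.fit(X, y)
    top = np.argsort(rank.feature_importances_)[::-1][:keep_top]
    # Stage 2: greedy forward wrapper search around the final classifier.
    sfs = SequentialFeatureSelector(GaussianNB(),
                                    n_features_to_select=n_final,
                                    direction="forward", cv=5)
    sfs.fit(X[:, top], y)
    return top[sfs.get_support()]   # indices of the retained features
```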
Parallel Processing and Scientific Applications
1992-11-30
Lattice QCD Calculations on the Connection Machine), SIAM News 24, 1 (May 1991) 5. C. F. Baillie and D. A. Johnston, Crumpling Dynamically Triangulated...hypercubic lattice; in the second, the surface is randomly triangulated once at the beginning of the simulation; and in the third the random...Sharpe, QCD with Dynamical Wilson Fermions II, Phys. Rev. D44, 3272 (1991), 8. R. Gupta and C. F. Baillie, Critical Behavior of the 2D XY Model, Phys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yahya, Noorazrul, E-mail: noorazrul.yahya@research.uwa.edu.au; Ebert, Martin A.; Bulsara, Max
Purpose: Given the paucity of available data concerning radiotherapy-induced urinary toxicity, it is important to ensure derivation of the most robust models with superior predictive performance. This work explores multiple statistical-learning strategies for prediction of urinary symptoms following external beam radiotherapy of the prostate. Methods: The performance of logistic regression, elastic-net, support-vector machine, random forest, neural network, and multivariate adaptive regression splines (MARS) to predict urinary symptoms was analyzed using data from 754 participants accrued by TROG03.04-RADAR. Predictive features included dose-surface data, comorbidities, and medication intake. Four symptoms were analyzed: dysuria, haematuria, incontinence, and frequency, each with three definitions (grade ≥ 1, grade ≥ 2 and longitudinal) with event rates between 2.3% and 76.1%. Repeated cross-validations producing matched models were implemented. A synthetic minority oversampling technique was utilized in endpoints with rare events. Parameter optimization was performed on the training data. Area under the receiver operating characteristic curve (AUROC) was used to compare performance, using a sample size to detect differences of ≥0.05 at the 95% confidence level. Results: Logistic regression, elastic-net, random forest, MARS, and support-vector machine were the highest-performing statistical-learning strategies in 3, 3, 3, 2, and 1 endpoints, respectively. Logistic regression, MARS, elastic-net, random forest, neural network, and support-vector machine were the best, or were not significantly worse than the best, in 7, 7, 5, 5, 3, and 1 endpoints. The best-performing statistical model was for dysuria grade ≥ 1 with AUROC ± standard deviation of 0.649 ± 0.074 using MARS. For longitudinal frequency and dysuria grade ≥ 1, all strategies produced AUROC > 0.6, while all haematuria endpoints and longitudinal incontinence models produced AUROC < 0.6. Conclusions: Logistic regression and MARS were most likely to be the best-performing strategy for the prediction of urinary symptoms, with elastic-net and random forest producing competitive results. The predictive power of the models was modest and endpoint-dependent. New features, including spatial dose maps, may be necessary to achieve better models.
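A minimal sketch of this evaluation protocol, assuming scikit-learn and imbalanced-learn; the model choices, fold counts, and synthetic data are illustrative, and placing SMOTE inside the pipeline confines oversampling to the training folds.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for a rare-event endpoint (about 5% positives).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.95], random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("random_forest", RandomForestClassifier(n_estimators=500))]:
    # SMOTE inside the pipeline is applied to training folds only.
    pipe = Pipeline([("smote", SMOTE(random_state=0)), ("clf", clf)])
    scores = cross_val_score(pipe, X, y, scoring="roc_auc", cv=cv)
    print(f"{name}: AUROC = {scores.mean():.3f} +/- {scores.std():.3f}")
```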
Effective dust control systems on concrete dowel drilling machinery.
Echt, Alan S; Sanderson, Wayne T; Mead, Kenneth R; Feng, H Amy; Farwick, Daniel R; Farwick, Dawn Ramsey
2016-09-01
Rotary-type percussion dowel drilling machines, which drill horizontal holes in concrete pavement, have been documented to produce respirable crystalline silica concentrations above recommended exposure criteria. This places operators at potential risk for developing health effects from exposure. United States manufacturers of these machines offer optional dust control systems. The effectiveness of these dust control systems in reducing respirable dust concentrations on two types of drilling machines was evaluated under controlled conditions, with the machines operating inside large tent structures in an effort to eliminate secondary exposure sources not related to the dowel-drilling operation. Area air samples were collected at breathing zone height at three locations around each machine. Across equal numbers of sampling rounds in which the control systems were randomly set on or off, the controls significantly reduced respirable dust concentrations from a geometric mean of 54 mg per cubic meter to 3.0 mg per cubic meter on one machine and from 57 mg per cubic meter to 5.3 mg per cubic meter on the other. This research shows that the dust control systems can dramatically reduce respirable dust concentrations, by over 90% under controlled conditions. However, these systems need to be evaluated under actual work conditions to determine their effectiveness in reducing worker exposures to crystalline silica below hazardous levels.
Risk estimation using probability machines.
Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D
2014-03-01
Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", shares the properties of the statistical machine from which it is derived.
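A hedged sketch of the risk-machine idea with a random forest: predicted probabilities serve as consistent risk estimates, and a counterfactual effect size for one binary predictor is obtained by toggling it for every subject. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def risk_machine_effect(X, y, j):
    """Average difference in predicted risk when predictor j is 1 vs 0."""
    rf = RandomForestClassifier(n_estimators=1000, n_jobs=-1)
    rf.fit(X, y)
    X1, X0 = X.copy(), X.copy()
    X1[:, j], X0[:, j] = 1, 0            # counterfactual settings
    p1 = rf.predict_proba(X1)[:, 1]      # risk with predictor j "on"
    p0 = rf.predict_proba(X0)[:, 1]      # risk with predictor j "off"
    return (p1 - p0).mean()              # average risk difference
```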
Advantages of Synthetic Noise and Machine Learning for Analyzing Radioecological Data Sets.
Shuryak, Igor
2017-01-01
The ecological effects of accidental or malicious radioactive contamination are insufficiently understood because of the hazards and difficulties associated with conducting studies in radioactively-polluted areas. Data sets from severely contaminated locations can therefore be small. Moreover, many potentially important factors, such as soil concentrations of toxic chemicals, pH, and temperature, can be correlated with radiation levels and with each other. In such situations, commonly-used statistical techniques like generalized linear models (GLMs) may not be able to provide useful information about how radiation and/or these other variables affect the outcome (e.g. abundance of the studied organisms). Ensemble machine learning methods such as random forests offer powerful alternatives. We propose that analysis of small radioecological data sets by GLMs and/or machine learning can be made more informative by using the following techniques: (1) adding synthetic noise variables to provide benchmarks for distinguishing the performances of valuable predictors from irrelevant ones; (2) adding noise directly to the predictors and/or to the outcome to test the robustness of analysis results against random data fluctuations; (3) adding artificial effects to selected predictors to test the sensitivity of the analysis methods in detecting predictor effects; (4) running a selected machine learning method multiple times (with different random-number seeds) to test the robustness of the detected "signal"; (5) using several machine learning methods to test the "signal's" sensitivity to differences in analysis techniques. Here, we applied these approaches to simulated data, and to two published examples of small radioecological data sets: (I) counts of fungal taxa in samples of soil contaminated by the Chernobyl nuclear power plant accident (Ukraine), and (II) bacterial abundance in soil samples under a ruptured nuclear waste storage tank (USA). We show that the proposed techniques were advantageous compared with the methodology used in the original publications where the data sets were presented. Specifically, our approach identified a negative effect of radioactive contamination in data set I, and suggested that in data set II stable chromium could have been a stronger limiting factor for bacterial abundance than the radionuclides 137Cs and 99Tc. This new information, which was extracted from these data sets using the proposed techniques, can potentially enhance the design of radioactive waste bioremediation.
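Technique (1) can be sketched in a few lines; this assumed implementation appends Gaussian noise columns and uses the best noise importance as the null benchmark for judging real predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def noise_benchmark(X, y, n_noise=10, seed=0):
    """Return indices of predictors whose random-forest importance
    exceeds that of the best synthetic noise variable."""
    rng = np.random.default_rng(seed)
    Xn = np.hstack([X, rng.standard_normal((X.shape[0], n_noise))])
    rf = RandomForestRegressor(n_estimators=1000, random_state=seed)
    rf.fit(Xn, y)
    imp = rf.feature_importances_
    threshold = imp[X.shape[1]:].max()    # best pure-noise variable
    return np.flatnonzero(imp[:X.shape[1]] > threshold)
```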
Support vector machine multiuser receiver for DS-CDMA signals in multipath channels.
Chen, S; Samingan, A K; Hanzo, L
2001-01-01
The problem of constructing an adaptive multiuser detector (MUD) is considered for direct sequence code division multiple access (DS-CDMA) signals transmitted through multipath channels. The emerging learning technique, called support vector machines (SVM), is proposed as a method of obtaining a nonlinear MUD from a relatively small training data block. Computer simulation is used to study this SVM MUD, and the results show that it can closely match the performance of the optimal Bayesian one-shot detector. Comparisons with an adaptive radial basis function (RBF) MUD trained by an unsupervised clustering algorithm are discussed.
Expert opinions on optimal enforcement of minimum purchase age laws for tobacco.
Levy, D T; Chaloupka, F; Slater, S
2000-05-01
A questionnaire on how youth access laws should be enforced was sent to 20 experts who had administered and/or evaluated a youth access enforcement program. Respondents agreed on the need for a high level of retail compliance, checkers representative of the community, checks at least twice per year, a graduated penalty structure with license revocation, and bans on self-service and vending machines. Respondents indicated the need for research on the effects of ID use, frequency of checks, penalty structures, and the effects on smoking rates of youth access policies alone and in conjunction with other tobacco control policies.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental result shows that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code. PMID:25110745
SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)
Zhang, Xiang; Chen, Zhangwei
2013-01-01
This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled by the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With an FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be computed per second using a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
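For reference, the SAD matching at the core of the machine can be written directly in Python (numpy); this plain software sketch mirrors the quoted 5 × 5 window and 64-disparity search but makes no attempt to model the FPGA implementation.

```python
import numpy as np

def sad_disparity(left, right, window=5, max_disp=64):
    """Dense disparity by SAD block matching on rectified grayscale pairs."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(int)
            # Sum of absolute differences for each candidate disparity.
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1].astype(int)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```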
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayman, Ken J; Ade, Brian J; Weber, Charles F
High-dimensional, nonlinear function estimation using large datasets is a current area of interest in the machine learning community, and applications may be found throughout the analytical sciences, where ever-growing datasets are making more information available to the analyst. In this paper, we leverage the existing relevance vector machine, a sparse Bayesian version of the well-studied support vector machine, and expand the method to include integrated feature selection and automatic function shaping. These innovations produce an algorithm that is able to distinguish variables that are useful for making predictions of a response from variables that are unrelated or confusing. We test the technology using synthetic data, conduct initial performance studies, and develop a model capable of making position-independent predictions of the core-averaged burnup using a single specimen drawn randomly from a nuclear reactor core.
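scikit-learn has no relevance vector machine, so the following hedged sketch uses ARDRegression, a closely related sparse Bayesian linear model whose automatic relevance determination prior prunes unrelated inputs, to illustrate the kind of integrated feature selection described above; the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
# Only features 0 and 3 actually drive the response.
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)

ard = ARDRegression()
ard.fit(X, y)
# The ARD prior drives the weights of the eight irrelevant inputs
# toward zero, leaving large coefficients only on features 0 and 3.
print(np.round(ard.coef_, 2))
```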
Grau, Cai; Defourny, Noémie; Malicki, Julian; Dunscombe, Peter; Borras, Josep M; Coffey, Mary; Slotman, Ben; Bogusz, Marta; Gasparotto, Chiara; Lievens, Yolande; Kokobobo, Arianit; Sedlmayer, Felix; Slobina, Elena; Feyen, Karen; Hadjieva, Tatiana; Odrazka, Karel; Grau Eriksen, Jesper; Jaal, Jana; Bly, Ritva; Chauvet, Bruno; Willich, Normann; Polgar, Csaba; Johannsson, Jakob; Cunningham, Moya; Magrini, Stefano; Atkocius, Vydmantas; Untereiner, Michel; Pirotta, Martin; Karadjinovic, Vanja; Levernes, Sverre; Sladowski, Krystol; Lurdes Trigo, Maria; Šegedin, Barbara; Rodriguez, Aurora; Lagerlund, Magnus; Pastoors, Bert; Hoskin, Peter; Vaarkamp, Jaap; Cleries Soler, Ramon
2014-08-01
Documenting the distribution of radiotherapy departments and the availability of radiotherapy equipment in the European countries is an important part of HERO - the ESTRO Health Economics in Radiation Oncology project. HERO has the overall aim to develop a knowledge base of the provision of radiotherapy in Europe and build a model for health economic evaluation of radiation treatments at the European level. The aim of the current report is to describe the distribution of radiotherapy equipment in European countries. An 84-item questionnaire was sent out to European countries, principally through their national societies. The current report includes a detailed analysis of radiotherapy departments and equipment (questionnaire items 26-29), analyzed in relation to the annual number of treatment courses and the socio-economic status of the countries. The analysis is based on validated responses from 28 of the 40 European countries defined by the European Cancer Observatory (ECO). A large variation between countries was found for most parameters studied. There were 2192 linear accelerators, 96 dedicated stereotactic machines, and 77 cobalt machines reported in the 27 countries where this information was available. A total of 12 countries had at least one cobalt machine in use. There was a median of 0.5 simulator per MV unit (range 0.3-1.5) and 1.4 (range 0.4-4.4) simulators per department. Of the 874 simulators, a total of 654 (75%) were capable of 3D imaging (CT-scanner or CBCT-option). The number of MV machines (cobalt, linear accelerators, and dedicated stereotactic machines) per million inhabitants ranged from 1.4 to 9.5 (median 5.3) and the average number of MV machines per department from 0.9 to 8.2 (median 2.6). The average number of treatment courses per year per MV machine varied from 262 to 1061 (median 419). While 69% of MV units were capable of IMRT, only 49% were equipped for image guidance (IGRT). There was a clear relation between socio-economic status, as measured by GNI per capita, and availability of radiotherapy equipment in the countries. In many low-income countries in Southern and Central-Eastern Europe there was very limited access to radiotherapy, especially to equipment for IMRT or IGRT. The European average number of MV machines per million inhabitants and per department is now better in line with QUARTS recommendations from 2005, but the survey also showed a significant heterogeneity in access to modern radiotherapy equipment in Europe. High-income countries, especially in Northern-Western Europe, are well served with radiotherapy resources, while other countries face substantial shortages of both equipment in general and machines capable of delivering high-precision conformal treatments (IMRT, IGRT) in particular. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare.
Mozaffari-Kermani, Mehran; Sur-Kolay, Susmita; Raghunathan, Anand; Jha, Niraj K
2015-11-01
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. A hindered diagnosis may have life-threatening consequences, while a false diagnosis may cause patient distress and prompt users to distrust, or even abandon, the machine-learning system. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
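A toy label-flipping attack, one simple member of the poisoning family discussed above, makes the threat concrete; the dataset, model, and poisoning fraction below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

# Poison: append copies of 10% of the training points with flipped labels.
idx = np.random.default_rng(0).choice(len(Xtr), len(Xtr) // 10, replace=False)
Xp = np.vstack([Xtr, Xtr[idx]])
yp = np.concatenate([ytr, 1 - ytr[idx]])
poisoned = LogisticRegression(max_iter=1000).fit(Xp, yp)

print("clean accuracy:   ", clean.score(Xte, yte))
print("poisoned accuracy:", poisoned.score(Xte, yte))
```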
CFCC: A Covert Flows Confinement Mechanism for Virtual Machine Coalitions
NASA Astrophysics Data System (ADS)
Cheng, Ge; Jin, Hai; Zou, Deqing; Shi, Lei; Ohoussou, Alex K.
Virtualization technology is commonly adopted to construct the infrastructure of cloud computing environments. Resources are managed and organized dynamically through virtual machine (VM) coalitions in accordance with the requirements of applications. Enforcing mandatory access control (MAC) on VM coalitions greatly improves the security of VM-based cloud computing. However, existing MAC models lack a mechanism to confine covert flows and cannot easily eliminate covert channels. In this paper, we propose a covert flows confinement mechanism for virtual machine coalitions (CFCC), which introduces dynamic conflicts of interest based on the activity history of VMs, each of which is attached with a label. The proposed mechanism can be used to confine covert flows between VMs in different coalitions. We implement a prototype system, evaluate its performance, and show that our mechanism is practical.
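The dynamic conflict-of-interest idea can be sketched as a small policy object, in the spirit of Chinese-Wall access control; the class and method names below are hypothetical, and the paper's label model is richer than this.

```python
class CovertFlowGuard:
    def __init__(self, conflict_sets):
        # conflict_sets: list of sets of coalition labels that must never
        # be mixed through a single VM, e.g. [{"coalitionA", "coalitionB"}].
        self.conflict_sets = conflict_sets
        self.history = {}            # vm_id -> set of coalitions touched

    def may_access(self, vm_id, coalition):
        """Allow access only if no past coalition conflicts with this one."""
        touched = self.history.get(vm_id, set())
        for cs in self.conflict_sets:
            if coalition in cs and touched & (cs - {coalition}):
                return False         # would open a covert flow path
        return True

    def record_access(self, vm_id, coalition):
        if not self.may_access(vm_id, coalition):
            raise PermissionError(f"{vm_id} blocked from {coalition}")
        self.history.setdefault(vm_id, set()).add(coalition)
```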
Machine-learned and codified synthesis parameters of oxide materials
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Kevin; Tomala, Alex; Matthews, Sara; Strubell, Emma; Saunders, Adam; McCallum, Andrew; Olivetti, Elsa
2017-09-01
Predictive materials design has rapidly accelerated in recent years with the advent of large-scale resources, such as materials structure and property databases generated by ab initio computations. In the absence of analogous ab initio frameworks for materials synthesis, high-throughput and machine learning techniques have recently been harnessed to generate synthesis strategies for select materials of interest. Still, a community-accessible, autonomously compiled synthesis planning resource that spans materials systems has not yet been developed. In this work, we present a collection of aggregated synthesis parameters computed using the text contained within over 640,000 journal articles using state-of-the-art natural language processing and machine learning techniques. We provide a dataset of synthesis parameters, compiled autonomously across 30 different oxide systems, in a format optimized for planning novel syntheses of materials.
Machine learning for micro-tomography
NASA Astrophysics Data System (ADS)
Parkinson, Dilworth Y.; Pelt, Daniël. M.; Perciano, Talita; Ushizima, Daniela; Krishnan, Harinarayan; Barnard, Harold S.; MacDowell, Alastair A.; Sethian, James
2017-09-01
Machine learning has revolutionized a number of fields, but many micro-tomography users have never used it for their work. The micro-tomography beamline at the Advanced Light Source (ALS), in collaboration with the Center for Applied Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory, has now deployed a series of tools to automate data processing for ALS users using machine learning. This includes new reconstruction algorithms, feature extraction tools, and image classification and recommendation systems for scientific images. Some of these tools run either in automated pipelines that operate on data as it is collected or as stand-alone software. Others are deployed on computing resources at Berkeley Lab, from workstations to supercomputers, and made accessible to users through either scripting or easy-to-use graphical interfaces. This paper presents a progress report on this work.
47 CFR 76.309 - Customer service obligations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... representatives will be available to respond to customer telephone inquiries during normal business hours. (B) After normal business hours, the access line may be answered by a service or an automated response system, including an answering machine. Inquiries received after normal business hours must be responded...
U-View: Student Access to Information Using ATMs.
ERIC Educational Resources Information Center
Springfield, John J.
1990-01-01
A discussion of Boston College's system allowing students to display and print their campus records at automated teller machines (ATMs) around the institution looks at the system's evolution, current operations, human factors affecting system design and operation, shared responsibility, campus acceptance, future enhancements, and cost…
NASA Astrophysics Data System (ADS)
Iwai, Go
2015-12-01
We describe the development of an environment for Geant4 consisting of an application and data that provide users with a more efficient way to access Geant4 applications without having to download and build the software locally. The environment is platform neutral and offers users near-real-time performance. In addition, the environment consists of data and Geant4 libraries built using low-level virtual machine (LLVM) tools, which can produce bitcode that can be embedded in HTML and accessed via a browser. The bitcode is downloaded to the local machine via the browser and can then be configured by the user. This approach minimises the risk of leaking potentially sensitive data used to construct the Geant4 model and application in the medical domain for treatment planning. We describe several applications that have used this approach and compare their performance with that of native applications. We also describe potential user communities that could benefit from this approach.
Vita, Randi; Overton, James A; Mungall, Christopher J; Sette, Alessandro
2018-01-01
The Immune Epitope Database (IEDB), at www.iedb.org, has the mission to make published experimental data relating to the recognition of immune epitopes easily available to the scientific public. By presenting curated data in a searchable database, we have liberated it from the tables and figures of journal articles, making it more accessible and usable by immunologists. Recently, the principles of Findability, Accessibility, Interoperability and Reusability have been formulated as goals that data repositories should meet to enhance the usefulness of their data holdings. We here examine how the IEDB complies with these principles and identify broad areas of success, but also areas for improvement. We describe short-term improvements to the IEDB that are being implemented now, as well as a long-term vision of true ‘machine-actionable interoperability’, which we believe will require community agreement on standardization of knowledge representation that can be built on top of the shared use of ontologies. PMID:29688354
Machine learning assembly landscapes from particle tracking data.
Long, Andrew W; Zhang, Jie; Granick, Steve; Ferguson, Andrew L
2015-11-07
Bottom-up self-assembly offers a powerful route for the fabrication of novel structural and functional materials. Rational engineering of self-assembling systems requires understanding of the accessible aggregation states and the structural assembly pathways. In this work, we apply nonlinear machine learning to experimental particle tracking data to infer low-dimensional assembly landscapes mapping the morphology, stability, and assembly pathways of accessible aggregates as a function of experimental conditions. To the best of our knowledge, this represents the first time that collective order parameters and assembly landscapes have been inferred directly from experimental data. We apply this technique to the nonequilibrium self-assembly of metallodielectric Janus colloids in an oscillating electric field, and quantify the impact of field strength, oscillation frequency, and salt concentration on the dominant assembly pathways and terminal aggregates. This combined computational and experimental framework furnishes new understanding of self-assembling systems, and quantitatively informs rational engineering of experimental conditions to drive assembly along desired aggregation pathways.
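A hedged sketch of the landscape-inference step: descriptor vectors computed from particle tracks are embedded with a nonlinear manifold learner. Isomap is used here only as a stand-in for the study's nonlinear method, and the input file and descriptor names are hypothetical.

```python
import numpy as np
from sklearn.manifold import Isomap

# Rows = aggregate snapshots; columns = assumed descriptors such as
# cluster size, radius of gyration, and bond-orientational order.
features = np.load("aggregate_descriptors.npy")   # hypothetical file

embedding = Isomap(n_components=2, n_neighbors=10)
landscape = embedding.fit_transform(features)
# 'landscape' now holds low-dimensional collective coordinates in which
# assembly pathways appear as trajectories between basins.
```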
Osteoporosis risk prediction using machine learning and conventional methods.
Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won
2013-01-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared them with a conventional clinical decision tool, the osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR) based on various predictors associated with low bone density. The learning models were compared with OST. SVM had a significantly better area under the curve (AUC) of the receiver operating characteristic (ROC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. To our knowledge, this is the first comparison of the performance of machine learning and conventional methods for osteoporosis prediction using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
Machine vision based quality inspection of flat glass products
NASA Astrophysics Data System (ADS)
Zauner, G.; Schagerl, M.
2014-03-01
This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (standard deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu-moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. In this way, the following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
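As one concrete example of the texture group above, grey-level co-occurrence features for a defect patch might be computed as follows (function names per scikit-image 0.19+, which spells them grayco*; older releases use greyco*). The distances and angles are illustrative choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    """Co-occurrence texture features for a uint8 grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the distance/angle combinations.
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]
```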
Price, John M; Colflesh, Gregory J H; Cerella, John; Verhaeghen, Paul
2014-05-01
We investigated the effects of 10h of practice on variations of the N-Back task to investigate the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed) whereas the processes of forward access (i.e., retrieval in order, as in a standard N-Back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity as measured by the size of the focus of attention for the forward-access task, but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others. Copyright © 2014 Elsevier B.V. All rights reserved.
Schnell, R J; Ronning, C M; Knight, R J
1995-02-01
Twenty-five accessions of mango were examined for random amplified polymorphic DNA (RAPD) genetic markers with 80 10-mer random primers. Of the 80 primers screened, 33 did not amplify, 19 were monomorphic, and 28 gave reproducible, polymorphic DNA amplification patterns. Eleven primers were selected from the 28 for the study. The number of bands generated was primer- and genotype-dependent, and ranged from 1 to 10. No primer gave unique banding patterns for each of the 25 accessions; however, ten different combinations of two-primer banding patterns produced unique fingerprints for each accession. A maternal half-sib (MHS) family was included among the 25 accessions to see if genetic relationships could be detected. RAPD data were used to generate simple matching coefficients, which were analyzed phenetically and by means of principal coordinate analysis (PCA). The MHS family clustered together in both the phenetic analysis and the PCA, while the randomly selected accessions were scattered with no apparent pattern. The uses of RAPD analysis for Mangifera germplasm classification and clonal identification are discussed.
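The marker analysis can be sketched as follows, assuming a 0/1 band-score matrix; simple matching similarity is the fraction of shared band states, and the principal coordinate analysis here is classical metric multidimensional scaling on the derived dissimilarities. Names are illustrative.

```python
import numpy as np

def simple_matching(bands):
    """bands: accessions x bands 0/1 matrix -> similarity matrix."""
    n = bands.shape[0]
    return np.array([[(bands[i] == bands[j]).mean() for j in range(n)]
                     for i in range(n)])

def pcoa(S, k=2):
    """Principal coordinates from a similarity matrix S (classical MDS)."""
    D2 = (1.0 - S) ** 2                       # squared dissimilarities
    n = S.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ D2 @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```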
Ntozini, Robert; Marks, Sara J; Mangwadu, Goldberg; Mbuya, Mduduzi N N; Gerema, Grace; Mutasa, Batsirai; Julian, Timothy R; Schwab, Kellogg J; Humphrey, Jean H; Zungu, Lindiwe I
2015-12-15
Access to water and sanitation is an important determinant of behavioral responses to hygiene and sanitation interventions. We estimated cluster-specific water access and sanitation coverage to inform a constrained randomization technique in the SHINE trial. Technicians and engineers inspected all public-access water sources to ascertain seasonality, function, and geospatial coordinates. Households and water sources were mapped using open-source geospatial software. The distance from each household to the nearest perennial, functional, protected water source was calculated, and for each cluster, the median distance and the proportions of households within <500 m and >1500 m of such a water source were computed. Cluster-specific sanitation coverage was ascertained using a random sample of 13 households per cluster. These parameters were included as covariates in randomization to optimize balance in water and sanitation access across treatment arms at the start of the trial. The observed high variability between clusters in both parameters suggests that constraining on these factors was needed to reduce the risk of bias. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America.
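A hedged sketch of constrained randomization in this spirit: candidate allocations are generated at random and only those balanced on the cluster-level covariates are retained, with the final allocation drawn from the accepted set. The balance metric and tolerance are assumptions, not the trial's exact constraints.

```python
import numpy as np

def constrained_allocation(covariates, n_arms=2, n_candidates=100000,
                           tol=0.1, seed=0):
    """covariates: clusters x covariates matrix (standardized)."""
    rng = np.random.default_rng(seed)
    n = covariates.shape[0]
    accepted = []
    for _ in range(n_candidates):
        arms = rng.permutation(np.arange(n) % n_arms)   # balanced arm sizes
        means = np.array([covariates[arms == a].mean(axis=0)
                          for a in range(n_arms)])
        # Keep allocations whose arm means agree within the tolerance.
        if np.abs(means - means.mean(axis=0)).max() < tol:
            accepted.append(arms)
    # Loosen tol if the accepted set is empty for strict constraints.
    return accepted[rng.integers(len(accepted))]
```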
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Prakash, Indra; Tien Bui, Dieu
2018-02-01
A hybrid machine learning approach combining Random Subspace (RSS) and Classification And Regression Trees (CART) is proposed to develop a model named RSSCART for spatial prediction of landslides. This model combines the RSS method, an efficient ensemble technique, with CART, a state-of-the-art classifier. The Luc Yen district of Yen Bai province, a prominent landslide-prone area of Viet Nam, was selected for the model development. Performance of the RSSCART model was evaluated through the Receiver Operating Characteristic (ROC) curve, statistical analysis methods, and the Chi Square test. Results were compared with other benchmark landslide models, namely Support Vector Machines (SVM), single CART, Naïve Bayes Trees (NBT), and Logistic Regression (LR). In developing the model, ten important landslide-affecting factors related to geomorphology, geology, and geo-environment were considered, namely slope angle, elevation, slope aspect, curvature, lithology, distance to faults, distance to rivers, distance to roads, and rainfall. Performance of the RSSCART model (AUC = 0.841) is the best compared with the other landslide models, namely SVM (0.835), single CART (0.822), NBT (0.821), and LR (0.723). These results indicate that the RSSCART is a promising method for spatial landslide prediction.
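An RSS-plus-CART ensemble of this general shape can be assembled from scikit-learn primitives: bagging with per-tree feature sampling and no sample bootstrapping reduces to the random subspace method. Hyperparameters are illustrative, not the tuned values of the study.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# 'estimator=' is 'base_estimator=' in scikit-learn releases before 1.2.
rsscart = BaggingClassifier(
    estimator=DecisionTreeClassifier(),   # CART-style base learner
    n_estimators=100,
    bootstrap=False,                      # keep all training samples
    bootstrap_features=False,             # sample features w/o replacement
    max_features=0.5,                     # random half of the factors per tree
    random_state=0,
)
# rsscart.fit(X_train, y_train)
# susceptibility = rsscart.predict_proba(X_grid)[:, 1]
```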
Predicting human liver microsomal stability with machine learning techniques.
Sakiyama, Yojiro; Yuki, Hitomi; Moriya, Takashi; Hattori, Kazunari; Suzuki, Misaki; Shimada, Kaoru; Honma, Teruki
2008-02-01
To ensure a continuing pipeline in pharmaceutical research, lead candidates must possess appropriate metabolic stability in the drug discovery process. In vitro ADMET (absorption, distribution, metabolism, elimination, and toxicity) screening provides us with useful information regarding the metabolic stability of compounds. However, before the synthesis stage, an efficient process is required in order to deal with the vast quantity of data from large compound libraries and high-throughput screening. Here we have derived a relationship between chemical structure and metabolic stability for a data set of in-house compounds by means of various in silico machine learning methods such as random forest, support vector machine (SVM), logistic regression, and recursive partitioning. For model building, 1952 proprietary compounds comprising two classes (stable/unstable) were used, with 193 descriptors calculated by Molecular Operating Environment. The results using test compounds demonstrated that all classifiers yielded satisfactory results (accuracy > 0.8, sensitivity > 0.9, specificity > 0.6, and precision > 0.8). Above all, classification by random forest as well as SVM yielded kappa values of approximately 0.7 in an independent validation set, slightly higher than the other classification tools. These results suggest that nonlinear/ensemble-based classification methods might prove useful in the area of in silico ADME modeling.
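For reference, the agreement statistic quoted above is Cohen's kappa, which corrects raw accuracy for chance agreement and so is informative when the stable/unstable classes are imbalanced; a minimal sketch, with y_val and y_pred assumed to hold the validation labels and model predictions:

```python
from sklearn.metrics import cohen_kappa_score

# y_val: true stable/unstable labels of the validation set (assumed);
# y_pred: labels predicted by, e.g., the random forest model (assumed).
kappa = cohen_kappa_score(y_val, y_pred)
print(f"kappa = {kappa:.2f}")   # values near 0.7 were reported for RF and SVM
```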
Mortality risk score prediction in an elderly population using machine learning.
Rose, Sherri
2013-03-01
Standard practice for prediction often relies on parametric regression methods. Interesting new methods from the machine learning literature have been introduced in epidemiologic studies, such as random forest and neural networks. However, a priori, an investigator will not know which algorithm to select and may wish to try several. Here I apply the super learner, an ensembling machine learning approach that combines multiple algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. Super learning is a generalization of stacking methods. I used super learning in the Study of Physical Performance and Age-Related Changes in Sonomans (SPPARCS) to predict death among 2,066 residents of Sonoma, California, aged 54 years or more during the period 1993-1999. The super learner for predicting death (risk score) improved upon all single algorithms in the collection of algorithms, although its performance was similar to that of several algorithms. Super learner outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error and had an R2 value of 0.201. The improvement of super learner over random forest with respect to R2 was approximately 2-fold. Alternatives for risk score prediction include the super learner, which can provide improved performance.
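The super learner generalizes stacking, so scikit-learn's cross-validated stacking estimator conveys the flavor of the method; the library below is an illustrative assumption, and the published super learner instead selects convex weights minimizing cross-validated squared error over the library.

```python
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# An illustrative library of candidate learners.
library = [
    ("ols", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=500)),
    ("nnet", MLPRegressor(max_iter=2000)),
]
super_learner = StackingRegressor(estimators=library,
                                  final_estimator=LinearRegression(),
                                  cv=10)
# super_learner.fit(X, y) combines the library's cross-validated
# predictions into a single risk-score predictor.
```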
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos
2017-03-21
We report that recent machine learning methods make it possible to model the potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
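A minimal sketch of a geometric random walk kernel of the kind named above: walks of all lengths are counted on the direct product of the two graphs, with a decay factor lam chosen small enough for the series to converge. This is a generic textbook form, not the GRAPE implementation.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random walk kernel between adjacency matrices A1, A2."""
    W = np.kron(A1, A2)                 # adjacency of the product graph
    n = W.shape[0]
    # Sum over walk lengths: 1^T (I - lam*W)^(-1) 1; requires
    # lam < 1 / spectral_radius(W) for the series to converge.
    return np.ones(n) @ np.linalg.solve(np.eye(n) - lam * W, np.ones(n))

# Two toy local environments: a 3-atom star and a 2-atom pair.
A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
A2 = np.array([[0, 1], [1, 0]], dtype=float)
print(random_walk_kernel(A1, A2))
```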