Sample records for automatic item generation

  1. The Role of Item Models in Automatic Item Generation

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
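
    The item-model idea described in this record (a template whose variable elements are systematically instantiated) can be illustrated with a minimal sketch; the item model, variable ranges, and answer rule below are invented for illustration and are not taken from the paper.

    ```python
    import itertools

    # Hypothetical item model: a stem with placeholders, the allowed values of
    # each variable, and a rule for computing the correct answer.
    ITEM_MODEL = {
        "stem": "A car travels {speed} km/h for {hours} hours. How far does it travel?",
        "variables": {"speed": [40, 60, 80], "hours": [2, 3, 4]},
        "key": lambda speed, hours: speed * hours,
    }

    def generate_items(model):
        """Instantiate every combination of variable values defined by the model."""
        names = list(model["variables"])
        for values in itertools.product(*(model["variables"][n] for n in names)):
            bindings = dict(zip(names, values))
            yield {
                "stem": model["stem"].format(**bindings),
                "key": model["key"](**bindings),
            }

    if __name__ == "__main__":
        for item in generate_items(ITEM_MODEL):
            print(item["stem"], "->", item["key"])
    ```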

  2. Applying Hierarchical Model Calibration to Automatically Generated Items.

    ERIC Educational Resources Information Center

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  3. Automatic Item Generation of Probability Word Problems

    ERIC Educational Resources Information Center

    Holling, Heinz; Bertling, Jonas P.; Zeuch, Nina

    2009-01-01

    Mathematical word problems represent a common item format for assessing student competencies. Automatic item generation (AIG) is an effective way of constructing many items with predictable difficulties, based on a set of predefined task parameters. The current study presents a framework for the automatic generation of probability word problems…
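
    As a rough illustration of generating word problems from predefined task parameters with a computed key, the following sketch builds urn-style probability items; the parameters, wording, and the crude difficulty band are invented here and do not reproduce the framework described in the paper.

    ```python
    from fractions import Fraction

    def generate_probability_item(n_red, n_blue, draws=1):
        """Generate one urn-style probability word problem from task parameters."""
        total = n_red + n_blue
        stem = (f"An urn contains {n_red} red and {n_blue} blue balls. "
                f"What is the probability of drawing {draws} red ball(s) in a row "
                f"without replacement?")
        p = Fraction(1)
        for i in range(draws):
            p *= Fraction(n_red - i, total - i)
        # Crude stand-in for a difficulty model: more draws -> harder item.
        difficulty_band = "easy" if draws == 1 else "hard"
        return {"stem": stem, "key": p, "difficulty_band": difficulty_band}

    print(generate_probability_item(3, 5, draws=2))   # key = 3/28
    ```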

  4. A Model-Based Method for Content Validation of Automatically Generated Test Items

    ERIC Educational Resources Information Center

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  5. An Application of Reverse Engineering to Automatic Item Generation: A Proof of Concept Using Automatically Generated Figures

    ERIC Educational Resources Information Center

    Lorié, William A.

    2013-01-01

    A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…

  6. Automatic Item Generation: A More Efficient Process for Developing Mathematics Achievement Items?

    ERIC Educational Resources Information Center

    Embretson, Susan E.; Kingston, Neal M.

    2018-01-01

    The continual supply of new items is crucial to maintaining quality for many tests. Automatic item generation (AIG) has the potential to rapidly increase the number of items that are available. However, the efficiency of AIG will be mitigated if the generated items must be submitted to traditional, time-consuming review processes. In two studies,…

  7. Using Automatic Item Generation to Meet the Increasing Item Demands of High-Stakes Educational and Occupational Assessment

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2012-01-01

    The use of new test administration technologies such as computerized adaptive testing in high-stakes educational and occupational assessments demands large item pools. Classic item construction processes and previous approaches to automatic item generation faced the problems of a considerable loss of items after the item calibration phase. In this…

  8. Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.

    ERIC Educational Resources Information Center

    Deane, Paul; Sheehan, Kathleen

    This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…

  9. The Effect of Different Types of Perceptual Manipulations on the Dimensionality of Automatically Generated Figural Matrices

    ERIC Educational Resources Information Center

    Arendasy, M.; Sommer, M.

    2005-01-01

    Two pilot studies (n1 = 155, n2 = 451), carried out during the development of an item generator for the automatic generation of figural matrices items, are presented in this article. The focus of the presented studies was to compare two types of item designs with regard to the effect of variations of the property "perceptual…

  10. Using Psychometric Technology in Educational Assessment: The Case of a Schema-Based Isomorphic Approach to the Automatic Generation of Quantitative Reasoning Items

    ERIC Educational Resources Information Center

    Arendasy, Martin; Sommer, Markus

    2007-01-01

    This article investigates the psychometric quality and construct validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on a review of the research literature on algebra word problem solving and automatic item generation, this new approach is introduced as a…

  11. Evaluating the Psychometric Characteristics of Generated Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André

    2016-01-01

    Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…

  12. Applying automatic item generation to create cohesive physics testlets

    NASA Astrophysics Data System (ADS)

    Mindyarto, B. N.; Nugroho, S. E.; Linuwih, S.

    2018-03-01

    Computer-based testing has created the demand for large numbers of items. This paper discusses the production of cohesive physics testlets using automatic item generation concepts and procedures. The testlets were composed by restructuring physics problems to probe deeper understanding of the underlying physical concepts, inserting a qualitative question and an associated scientific-reasoning question. A template-based testlet generator was used to generate the testlet variants. Using this methodology, 1248 testlet variants were generated from 25 testlet templates. Some issues related to the effective application of the generated physics testlets in practical assessments are discussed.
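
    A minimal sketch of a template-based testlet generator in the spirit described above (one scenario feeding a quantitative question, a qualitative question, and a reasoning prompt); the template, variables, and physics content are invented and are not the paper's actual templates.

    ```python
    import itertools

    # Hypothetical testlet template: one physical scenario shared by three
    # linked questions; only the quantitative key is computed here.
    TEMPLATE = {
        "scenario": "A {mass} kg cart is pushed with a constant {force} N force.",
        "questions": [
            "What is the cart's acceleration?",
            "If the force doubles, does the acceleration increase, decrease, or stay the same?",
            "Explain your answer using Newton's second law.",
        ],
        "variables": {"mass": [2, 4, 5], "force": [10, 20]},
    }

    def generate_testlets(template):
        names = list(template["variables"])
        for values in itertools.product(*(template["variables"][n] for n in names)):
            bindings = dict(zip(names, values))
            yield {
                "scenario": template["scenario"].format(**bindings),
                "questions": list(template["questions"]),
                "key_acceleration": bindings["force"] / bindings["mass"],
            }

    testlets = list(generate_testlets(TEMPLATE))
    print(len(testlets), "testlet variants from one template")
    ```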

  13. Automatic Generation of Rasch-Calibrated Items: Figural Matrices Test GEOM and Endless-Loops Test EC

    ERIC Educational Resources Information Center

    Arendasy, Martin

    2005-01-01

    The future of test construction for certain psychological ability domains that can be analyzed well in a structured manner may lie--at the very least for reasons of test security--in the field of automatic item generation. In this context, a question that has not been explicitly addressed is whether it is possible to embed an item response theory…

  14. Instructional Topics in Educational Measurement (ITEMS) Module: Using Automated Processes to Generate Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2013-01-01

    Changes to the design and development of our educational assessments are resulting in the unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…

  15. Automatic Association of News Items.

    ERIC Educational Resources Information Center

    Carrick, Christina; Watters, Carolyn

    1997-01-01

    Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)

  16. Calibrating Item Families and Summarizing the Results Using Family Expected Response Functions

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M.

    2003-01-01

    Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…

  17. Modeling the Hyperdistribution of Item Parameters To Improve the Accuracy of Recovery in Estimation Procedures.

    ERIC Educational Resources Information Center

    Matthews-Lopez, Joy L.; Hombo, Catherine M.

    The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…

  18. Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.

    ERIC Educational Resources Information Center

    Craven, Timothy C.

    1982-01-01

    Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)

  19. Automatic item generation implemented for measuring artistic judgment aptitude.

    PubMed

    Bezruczko, Nikolaus

    2014-01-01

    Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.

  20. Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.

    ERIC Educational Resources Information Center

    Johnson, Matthew S.; Sinharay, Sandip

    For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…
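
    For context, a Bayesian hierarchical ("related siblings") formulation for item families is often written along the following lines, here sketched for a 2PL response model; this is a generic form, not necessarily the exact specification calibrated in this report.

    ```latex
    % Generic related-siblings form for a 2PL item i belonging to family f(i);
    % the family-level parameters receive priors in the Bayesian formulation.
    P(X_{pi} = 1 \mid \theta_p, a_i, b_i)
      = \frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}},
    \qquad
    (a_i, b_i) \sim \mathcal{N}\!\left(\boldsymbol{\mu}_{f(i)}, \boldsymbol{\Sigma}_{f(i)}\right)
    ```

    Here f(i) indexes the item family of item i, so sibling items generated from the same item model share a common family-level distribution, which captures the within-family dependence that ordinary item-by-item calibration ignores.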

  1. Automatic Generation of Mashups for Personalized Commerce in Digital TV by Semantic Reasoning

    NASA Astrophysics Data System (ADS)

    Blanco-Fernández, Yolanda; López-Nores, Martín; Pazos-Arias, José J.; Martín-Vicente, Manuela I.

    The evolution of information technologies is consolidating recommender systems as essential tools in e-commerce. To date, these systems have focused on discovering the items that best match the preferences, interests and needs of individual users, to end up listing those items by decreasing relevance in some menus. In this paper, we propose extending the current scope of recommender systems to better support trading activities, by automatically generating interactive applications that provide the users with personalized commercial functionalities related to the selected items. We explore this idea in the context of Digital TV advertising, with a system that brings together semantic reasoning techniques and new architectural solutions for web services and mashups.

  2. A Method for Generating Educational Test Items That Are Aligned to the Common Core State Standards

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis; Hogan, James B.; Matovinovic, Donna

    2015-01-01

    The demand for test items far outstrips the current supply. This increased demand can be attributed, in part, to the transition to computerized testing, but, it is also linked to dramatic changes in how 21st century educational assessments are designed and administered. One way to address this growing demand is with automatic item generation.…

  3. Designing a Virtual Item Bank Based on the Techniques of Image Processing

    ERIC Educational Resources Information Center

    Liao, Wen-Wei; Ho, Rong-Guey

    2011-01-01

    One of the major weaknesses of the item exposure rates of figural items in Intelligence Quotient (IQ) tests lies in their inaccuracy. In this study, a new approach is proposed and a useful test tool known as the Virtual Item Bank (VIB) is introduced. The VIB combines Automatic Item Generation theory and image processing theory with the concepts of…

  4. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    PubMed

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.

  5. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper presents a dynamic model-based test document generation method for embedded software that automatically produces two documents: the test requirements specification and the configuration item test documentation. The method allows dynamic test requirements to be expressed in dynamic models, so that dynamic test demand tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addresses inconsistency and lack of integrity in document-related content, and improves efficiency.

  6. Using the Free-Response Scoring Tool To Automatically Score the Formulating-Hypotheses Item. GRE Board Professional Report No. 90-02bP.

    ERIC Educational Resources Information Center

    Kaplan, Randy M.; Bennett, Randy Elliot

    This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were…

  7. Evaluating the Contribution of Different Item Features to the Effect Size of the Gender Difference in Three-Dimensional Mental Rotation Using Automatic Item Generation

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus

    2010-01-01

    In complex three-dimensional mental rotation tasks males have been reported to score up to one standard deviation higher than females. However, this effect size estimate could be compromised by the presence of gender bias at the item level, which calls the validity of purely quantitative performance comparisons into question. We hypothesized that…

  8. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
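
    One common ontology-based MCQ strategy in this literature (not necessarily OntoQue's exact algorithm) is to form the stem from an ontology assertion, take the asserted class as the key, and draw distractors from sibling classes. A toy sketch with a hand-built class hierarchy standing in for a real OWL ontology:

    ```python
    import random

    # Toy "ontology" as a class -> parent map; a real system would query OWL/RDF.
    IS_A = {
        "Penicillin": "Antibiotic",
        "Ibuprofen": "Anti-inflammatory",
        "Paracetamol": "Analgesic",
        "Antibiotic": "Drug",
        "Anti-inflammatory": "Drug",
        "Analgesic": "Drug",
    }

    def siblings(cls):
        """Classes sharing the same parent as cls (candidate distractors)."""
        parent = IS_A[cls]
        return [c for c, p in IS_A.items() if p == parent and c != cls]

    def generate_mcq(instance, rng=random.Random(0)):
        key = IS_A[instance]                      # asserted class = correct answer
        pool = siblings(key)
        options = rng.sample(pool, k=min(3, len(pool))) + [key]
        rng.shuffle(options)
        return {"stem": f"{instance} is a kind of:", "options": options, "key": key}

    print(generate_mcq("Penicillin"))
    ```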

  9. Personalized professional content recommendation

    DOEpatents

    Xu, Songhua

    2015-10-27

    A personalized content recommendation system includes a client interface configured to automatically monitor a user's information data stream transmitted on the Internet. A hybrid contextual behavioral and collaborative personal interest inference engine resident to a non-transient media generates automatic predictions about the interests of individual users of the system. A database server retains the user's personal interest profile based on a plurality of monitored information. The system also includes a server programmed to filter items in an incoming information stream with the personal interest profile and is further programmed to identify only those items of the incoming information stream that substantially match the personal interest profile.

  10. Automated Item Generation with Recurrent Neural Networks.

    PubMed

    von Davier, Matthias

    2018-03-12

    Utilizing technology for automated item generation is not a new idea. However, test items used in commercial testing programs or in research are still predominantly written by humans, in most cases by content experts or professional item writers. Human experts are a limited resource and testing agencies incur high costs in the process of continuous renewal of item banks to sustain testing programs. Using algorithms instead holds the promise of providing unlimited resources for this crucial part of assessment development. The approach presented here deviates in several ways from previous attempts to solve this problem. In the past, automatic item generation relied either on generating clones of narrowly defined item types such as those found in language-free intelligence tests (e.g., Raven's progressive matrices) or on an extensive analysis of task components and derivation of schemata to produce items with pre-specified variability that are hoped to have predictable levels of difficulty. It is somewhat unlikely that researchers utilizing these previous approaches would look at the proposed approach with favor; however, recent applications of machine learning show success in solving tasks that seemed impossible for machines not too long ago. The proposed approach uses deep learning to implement probabilistic language models, not unlike what Google Brain and Amazon Alexa use for language processing and generation.
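
    The generation loop of such a language-model-based approach can be sketched with a small character-level LSTM in PyTorch; the architecture, vocabulary, and prompt below are placeholders (and the model is untrained here), not the models or data used in the paper.

    ```python
    import torch
    import torch.nn as nn

    VOCAB = list("abcdefghijklmnopqrstuvwxyz ?.,0123456789")
    stoi = {c: i for i, c in enumerate(VOCAB)}

    class CharLM(nn.Module):
        """Minimal character-level LSTM language model."""
        def __init__(self, vocab_size, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, x, state=None):
            h, state = self.rnn(self.embed(x), state)
            return self.out(h), state

    @torch.no_grad()
    def sample(model, prompt="what is ", length=40, temperature=1.0):
        """Autoregressively sample characters after a prompt (untrained = gibberish)."""
        model.eval()
        idx = torch.tensor([[stoi[c] for c in prompt]])
        logits, state = model(idx)
        chars = list(prompt)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            nxt = torch.multinomial(probs, 1)
            chars.append(VOCAB[nxt.item()])
            logits, state = model(nxt.view(1, 1), state)
        return "".join(chars)

    model = CharLM(len(VOCAB))
    print(sample(model))  # would need training on an item corpus to be useful
    ```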

  11. Combining Automatic Item Generation and Experimental Designs to Investigate the Contribution of Cognitive Components to the Gender Difference in Mental Rotation

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Sommer, Markus; Gittler, Georg

    2010-01-01

    Marked gender differences in three-dimensional mental rotation have been broadly reported in the literature in the last few decades. Various theoretical models and accounts were used to explain the observed differences. Within the framework of linking item design features of mental rotation tasks to cognitive component processes associated with…

  12. Evaluating the Impact of Depth Cue Salience in Working Three-Dimensional Mental Rotation Tasks by Means of Psychometric Experiments

    ERIC Educational Resources Information Center

    Arendasy, Martin; Sommer, Markus; Hergovich, Andreas; Feldhammer, Martina

    2011-01-01

    The gender difference in three-dimensional mental rotation is well documented in the literature. In this article we combined automatic item generation, (quasi-)experimental research designs and item response theory models of change measurement to evaluate the effect of the ability to extract the depth information conveyed in the two-dimensional…

  13. Learning Factors Transfer Analysis: Using Learning Curve Analysis to Automatically Generate Domain Models

    ERIC Educational Resources Information Center

    Pavlik, Philip I. Jr.; Cen, Hao; Koedinger, Kenneth R.

    2009-01-01

    This paper describes a novel method to create a quantitative model of an educational content domain of related practice item-types using learning curves. By using a pairwise test to search for the relationships between learning curves for these item-types, we show how the test results in a set of pairwise transfer relationships that can be…

  14. Active suppression of distractors that match the contents of visual working memory.

    PubMed

    Sawaki, Risa; Luck, Steven J

    2011-08-01

    The biased competition theory proposes that items matching the contents of visual working memory will automatically have an advantage in the competition for attention. However, evidence for an automatic effect has been mixed, perhaps because the memory-driven attentional bias can be overcome by top-down suppression. To test this hypothesis, the Pd component of the event-related potential waveform was used as a marker of attentional suppression. While observers maintained a color in working memory, task-irrelevant probe arrays were presented that contained an item matching the color being held in memory. We found that the memory-matching probe elicited a Pd component, indicating that it was being actively suppressed. This result suggests that sensory inputs matching the information being held in visual working memory are automatically detected and generate an "attend-to-me" signal, but this signal can be overridden by an active suppression mechanism to prevent the actual capture of attention.

  15. QA-driven Guidelines Generation for Bacteriotherapy

    PubMed Central

    Pasche, Emilie; Teodoro, Douglas; Gobeill, Julien; Ruch, Patrick; Lovis, Christian

    2009-01-01

    PURPOSE We propose a question-answering (QA) driven generation approach for automatic acquisition of structured rules that can be used in a knowledge authoring tool for antibiotic prescription guidelines management. METHODS: The rule generation is seen as a question-answering problem, where the parameters of the questions are known items of the rule (e.g. an infectious disease, caused by a given bacterium) and answers (e.g. some antibiotics) are obtained by a question-answering engine. RESULTS: When looking for a drug given a pathogen and a disease, top-precision of 0.55 is obtained by the combination of the Boolean engine (PubMed) and the relevance-driven engine (easyIR), which means that for more than half of our evaluation benchmark at least one of the recommended antibiotics was automatically acquired by the rule generation method. CONCLUSION: These results suggest that such an automatic text mining approach could provide a useful tool for guidelines management, by improving knowledge update and discovery. PMID:20351908

  16. Automatic Scoring of Paper-and-Pencil Figural Responses. Research Report.

    ERIC Educational Resources Information Center

    Martinez, Michael E.; And Others

    Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…

  17. Automatically Scoring Short Essays for Content. CRESST Report 836

    ERIC Educational Resources Information Center

    Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.

    2013-01-01

    The Common Core assessments emphasize short essay constructed response items over multiple choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…

  18. Re-engineering Ammunition Residue Management in IMCOM-SE

    DTIC Science & Technology

    2008-06-01

    …Max Accountability); Non-Automatic Return Item Recycling (OBJ: Max Items, Max Profit); Regionalize Store Brass (OBJ: Min Time); Demilitarize Brass (OBJ: Min…

  19. Active suppression of distractors that match the contents of visual working memory

    PubMed Central

    Sawaki, Risa; Luck, Steven J.

    2011-01-01

    The biased competition theory proposes that items matching the contents of visual working memory will automatically have an advantage in the competition for attention. However, evidence for an automatic effect has been mixed, perhaps because the memory-driven attentional bias can be overcome by top-down suppression. To test this hypothesis, the Pd component of the event-related potential waveform was used as a marker of attentional suppression. While observers maintained a color in working memory, task-irrelevant probe arrays were presented that contained an item matching the color being held in memory. We found that the memory-matching probe elicited a Pd component, indicating that it was being actively suppressed. This result suggests that sensory inputs matching the information being held in visual working memory are automatically detected and generate an “attend-to-me” signal, but this signal can be overridden by an active suppression mechanism to prevent the actual capture of attention. PMID:22053147

  20. Enhanced Automatic Question Creator--EAQC: Concept, Development and Evaluation of an Automatic Test Item Creation Tool to Foster Modern e-Education

    ERIC Educational Resources Information Center

    Gutl, Christian; Lankmayr, Klaus; Weinhofer, Joachim; Hofler, Margit

    2011-01-01

    Research in automated creation of test items for assessment purposes became increasingly important during the recent years. Due to automatic question creation it is possible to support personalized and self-directed learning activities by preparing appropriate and individualized test items quite easily with relatively little effort or even fully…

  1. Automatic Short Essay Scoring Using Natural Language Processing to Extract Semantic Information in the Form of Propositions. CRESST Report 831

    ERIC Educational Resources Information Center

    Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R.

    2013-01-01

    The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques…

  2. Integrating personalized medical test contents with XML and XSL-FO.

    PubMed

    Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas

    2011-03-01

    In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how the organizational challenges of realizing faculty-wide personalized tests were addressed by implementing a specialized software module that automatically generates test sheets from individual test registrations and MCQ contents. Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. Using an open source print formatter, the software module consistently produced high-quality test sheets, while the blending of vectorized textual content and pixel graphics resulted in efficient output file sizes. Concomitantly, the module permitted individual randomization of item sequences to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries and can be efficiently deployed on a faculty-wide scale.
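
    Step (2) of the workflow described above (building a per-student XML intermediate, here with an individually randomized item order) might look roughly like the sketch below; the element names and items are illustrative rather than the paper's schema, and the resulting XML would then be rendered to paginated sheets by an XSL-FO formatter such as Apache FOP.

    ```python
    import random
    import xml.etree.ElementTree as ET

    def build_test_sheet(student_id, items, seed):
        """Build one student's XML intermediate with a randomized item order."""
        order = list(items)
        random.Random(seed).shuffle(order)       # per-student randomization
        root = ET.Element("testSheet", attrib={"student": student_id})
        for pos, (item_id, stem, options) in enumerate(order, start=1):
            q = ET.SubElement(root, "question", attrib={"pos": str(pos), "id": item_id})
            ET.SubElement(q, "stem").text = stem
            for opt in options:
                ET.SubElement(q, "option").text = opt
        return ET.tostring(root, encoding="unicode")

    items = [("Q1", "Which vessel carries oxygenated blood?", ["Aorta", "Vena cava"]),
             ("Q2", "Normal resting heart rate (bpm)?", ["60-100", "120-160"])]
    print(build_test_sheet("s001", items, seed=42))
    ```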

  3. Automatic calculation of the nine equivalents of nursing manpower use score (NEMS) using a patient data management system.

    PubMed

    Junger, Axel; Brenck, Florian; Hartmann, Bernd; Klasen, Joachim; Quinzio, Lorenzo; Benson, Matthias; Michel, Achim; Röhrig, Rainer; Hempelmann, Gunter

    2004-07-01

    The most recent approach to estimating nursing resource consumption has led to the generation of the Nine Equivalents of Nursing Manpower use Score (NEMS). The objective of this prospective study was to establish a fully automatic calculation of the NEMS using a patient data management system (PDMS) database and to validate this approach by comparing the results with those of the conventional manual method. Prospective study. Operative intensive care unit of a university hospital. Patients admitted to the ICU between 24 July 2002 and 22 August 2002. Patients under the age of 16 years, and patients undergoing cardiovascular surgery or with burn injuries were excluded. None. The NEMS of all patients was calculated automatically with the PDMS and manually by a physician in parallel. The results of the two methods were compared using the Bland-Altman approach, the intraclass correlation coefficient (ICC), and the kappa statistic. On 20 consecutive working days, the NEMS was calculated in 204 cases. The Bland-Altman analysis did not show significant differences in NEMS scoring between the two methods. The ICC of 0.87 (95% confidence interval 0.84-0.90) revealed high inter-rater agreement between the PDMS and the physician. The kappa statistic showed good results (kappa > 0.55) for all NEMS items apart from the item "supplementary ventilatory care". This study demonstrates that automatic calculation of the NEMS is possible with high accuracy by means of a PDMS. This may lead to a decrease in the consumption of nursing resources.

  4. Evaluation of Automatic Item Generation Utilities in Formative Assessment Application for Korean High School Students

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, HeeKyoung; Pak, Seohong

    2018-01-01

    The recent interests in research in the assessment field have been rapidly shifting from decision-maker-centered assessments to learner-centered assessments (i.e., diagnostic and/or formative assessments). In particular, it is a very important research topic in this field to analyze how these learner-centered assessments are developed more…

  5. Investigating the "g"-Saturation of Various Stratum-Two Factors Using Automatic Item Generation

    ERIC Educational Resources Information Center

    Arendasy, Martin E.; Hergovich, Andreas; Sommer, Markus

    2008-01-01

    Even though researchers agree on the hierarchical structure of intelligence there is considerable disagreement on the g-saturation of the lower stratum-two factors. In this article it is argued that the mixed evidence in the research literature can be at least partially attributed to the construct representation of the individual tests used to…

  6. Why does lag affect the durability of memory-based automaticity: loss of memory strength or interference?

    PubMed

    Wilkins, Nicolas J; Rawson, Katherine A

    2013-10-01

    In Rickard, Lau, and Pashler's (2008) investigation of the lag effect on memory-based automaticity, response times were faster and proportion of trials retrieved was higher at the end of practice for short lag items than for long lag items. However, during testing after a delay, response times were slower and proportion of trials retrieved was lower for short lag items than for long lag items. The current study investigated the extent to which the lag effect on the durability of memory-based automaticity is due to interference or to the loss of memory strength with time. Participants repeatedly practiced alphabet subtraction items in short lag and long lag conditions. After practice, half of the participants were immediately tested and the other half were tested after a 7-day delay. Results indicate that the lag effect on the durability of memory-based automaticity is primarily due to interference. We discuss potential modification of current memory-based processing theories to account for these effects. © 2013.

  7. What do firefighters desire from the next generation of personal protective equipment? Outcomes from an international survey

    PubMed Central

    LEE, Joo-Young; PARK, Joonhee; PARK, Huiju; COCA, Aitor; KIM, Jung-Hyun; TAYLOR, Nigel A.S.; SON, Su-Young; TOCHIHARA, Yutaka

    2015-01-01

    The purpose of this study was to investigate smart features required for the next generation of personal protective equipment (PPE) for firefighters in Australia, Korea, Japan, and the USA. Questionnaire responses were obtained from 167 Australian, 351 Japanese, 413 Korean, and 763 U.S. firefighters (1,611 males and 61 females). Preferences concerning smart features varied among countries, with 27% of Korean and 30% of U.S. firefighters identifying ‘a location monitoring system’ as the most important element. On the other hand, 43% of Japanese firefighters preferred ‘an automatic body cooling system’ while 21% of the Australian firefighters selected equally ‘an automatic body cooling system’ and ‘a wireless communication system’. When asked to rank these elements in descending priority, responses across these countries were very similar with the following items ranked highest: ‘a location monitoring system’, ‘an automatic body cooling system’, ‘a wireless communication system’, and ‘a vision support system’. The least preferred elements were ‘an automatic body warming system’ and ‘a voice recording system’. No preferential relationship was apparent for age, work experience, gender or anthropometric characteristics. These results have implications for the development of the next generation of PPE along with the international standardisation of the smart PPE. PMID:26027710

  8. Computer based interpretation of infrared spectra-structure of the knowledge-base, automatic rule generation and interpretation

    NASA Astrophysics Data System (ADS)

    Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.

    1995-04-01

    A main task within the SpecInfo project is to develop interpretation tools that can handle many more of the complicated, more specific spectrum-structure correlations. In the first step, the empirical knowledge about the assignment of structural groups and their characteristic IR bands was collected from the literature and represented in a computer-readable, well-structured form. Vague verbal rules are managed by introducing linguistic variables. The next step was the development of automatic rule-generating procedures: we combined and extended the IDIOTS algorithm with the set-theoretic algorithm by Blaffert. The procedures were successfully applied to the SpecInfo database. Realizing the preceding items is a prerequisite for improving the computerized structure elucidation procedure.

  9. Connecting Lines of Research on Task Model Variables, Automatic Item Generation, and Learning Progressions in Game-Based Assessment

    ERIC Educational Resources Information Center

    Graf, Edith Aurora

    2014-01-01

    In "How Task Features Impact Evidence from Assessments Embedded in Simulations and Games," Almond, Kim, Velasquez, and Shute have prepared a thought-provoking piece contrasting the roles of task model variables in a traditional assessment of mathematics word problems to their roles in "Newton's Playground," a game designed…

  10. The Effectiveness of Computer-Based Spaced Repetition in Foreign Language Vocabulary Instruction: A Double-Blind Study

    ERIC Educational Resources Information Center

    Chukharev-Hudilainen, Evgeny; Klepikova, Tatiana A.

    2016-01-01

    The purpose of the present paper is twofold; first, we present an empirical study evaluating the effectiveness of a novel CALL tool for foreign language vocabulary instruction based on spaced repetition of target vocabulary items. The study demonstrates that by spending an average of three minutes each day on automatically generated vocabulary…

  11. Using a MaxEnt Classifier for the Automatic Content Scoring of Free-Text Responses

    NASA Astrophysics Data System (ADS)

    Sukkarieh, Jana Z.

    2011-03-01

    Criticisms of multiple-choice item assessments in the USA have prompted researchers and organizations to move towards constructed-response (free-text) items. Constructed-response (CR) items pose many challenges to the education community, one of which is that they are expensive for humans to score. At the same time, there has been widespread movement towards computer-based assessment, and assessment organizations are therefore competing to develop automatic content scoring engines for such item types, which we view as a textual entailment task. This paper describes how MaxEnt modeling is used to help solve the task. MaxEnt has been used in many natural language tasks, but this is the first application of the MaxEnt approach to textual entailment and automatic content scoring.
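
    In its simplest form, a maximum-entropy content scorer reduces to multinomial logistic regression over features of the response/reference pair. The sketch below uses invented lexical-overlap features and toy training data, far simpler than the engine described in the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression  # multinomial logit == MaxEnt

    REFERENCE = {"evaporation", "condensation", "precipitation"}

    def features(response):
        """Crude lexical-overlap features between a response and the reference key."""
        tokens = set(response.lower().split())
        overlap = len(tokens & REFERENCE)
        return [overlap, overlap / len(REFERENCE), len(tokens)]

    train_responses = [
        ("water evaporates then condenses and falls as precipitation", 2),
        ("evaporation and condensation happen", 1),
        ("clouds are white", 0),
        ("rain comes from precipitation after evaporation and condensation", 2),
        ("the water evaporates", 1),
        ("i do not know", 0),
    ]
    X = np.array([features(r) for r, _ in train_responses])
    y = np.array([score for _, score in train_responses])

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict(np.array([features("evaporation then condensation then precipitation")])))
    ```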

  12. Automatic Identification of Critical Data Items in a Database to Mitigate the Effects of Malicious Insiders

    NASA Astrophysics Data System (ADS)

    White, Jonathan; Panda, Brajendra

    A major concern for computer system security is the threat from malicious insiders who target and abuse critical data items in the system. In this paper, we propose a solution that enables automatic identification of critical data items in a database by way of data dependency relationships. This identification of critical data items is necessary because insider threats often target mission-critical data in order to accomplish malicious tasks. Unfortunately, currently available systems fail to address this problem in a comprehensive manner. It is more difficult for non-experts to identify these critical data items because of their lack of familiarity and because data systems are constantly changing. By identifying the critical data items automatically, security engineers will be better prepared to protect what is critical to the mission of the organization and will also be able to focus their security efforts on these critical data items. We have developed an algorithm that scans the database logs and forms a directed graph showing which items influence a large number of other items and at what frequency this influence occurs. This graph is traversed, using a novel metric-based formula, to reveal the data items that have a large influence throughout the database system. These items are critical to the system because if they are maliciously altered or stolen, the malicious alterations will spread throughout the system, delaying recovery and causing a much more malignant effect. As these items have significant influence, they are deemed critical and worthy of extra security measures. Our proposal is not intended to replace existing intrusion detection systems, but rather to complement current and future technologies. This approach has not been attempted before, and our experimental results show that it is very effective in revealing critical data items automatically.
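
    The log-scanning and graph-traversal idea can be sketched as follows; the log format and the frequency-weighted reachability score are simple stand-ins for, not reproductions of, the paper's metric-based formula.

    ```python
    from collections import defaultdict

    # Each simplified log entry: (set of items read, item written).
    log = [
        ({"a", "b"}, "c"),
        ({"c"}, "d"),
        ({"c"}, "e"),
        ({"a"}, "c"),
    ]

    edges = defaultdict(lambda: defaultdict(int))   # edges[src][dst] = frequency
    for reads, write in log:
        for r in reads:
            edges[r][write] += 1

    def influence(item, seen=None):
        """Sum of edge frequencies over everything transitively reachable from item."""
        seen = {item} if seen is None else seen
        total = 0
        for dst, freq in list(edges[item].items()):
            total += freq
            if dst not in seen:
                seen.add(dst)
                total += influence(dst, seen)
        return total

    scores = {item: influence(item) for item in sorted({"a", "b", "c", "d", "e"})}
    print(scores)   # items with the highest scores are candidate critical data items
    ```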

  13. Algorithm for designing smart factory Industry 4.0

    NASA Astrophysics Data System (ADS)

    Gurjanov, A. V.; Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.

    2018-03-01

    The task of designing the production division of an Industry 4.0 item-designing company is studied. The authors propose an algorithm based on a modified V. L. Volkovich method. The algorithm generates options for arranging production with robotized technological equipment operating in automatic mode. The basis of the algorithm is the optimization solution of the multi-criteria task for certain additive criteria.

  14. Intentional Subitizing: Exploring the Role of Automaticity in Enumeration

    ERIC Educational Resources Information Center

    Pincham, Hannah L.; Szucs, Denes

    2012-01-01

    Subitizing is traditionally described as the rapid, preattentive and automatic enumeration of up to four items. Counting, by contrast, describes the enumeration of larger sets of items and requires slower serial shifts of attention. Although recent research has called into question the preattentive nature of subitizing, whether or not numerosities…

  15. Validation of the Automatic Thoughts Questionnaire (ATQ) Among Mainland Chinese Students in Hong Kong.

    PubMed

    Pan, Jia-Yan; Ye, Shengquan; Ng, Petrus

    2016-01-01

    The present study validated the combined version of the 8-item Automatic Thought Questionnaire (ATQ) and 10 positive items from the ATQ-revised among Chinese university students. A total of 412 Mainland Chinese university students were recruited in Hong Kong by an online survey. A 14-item Chinese ATQ was derived via item analysis. Satisfactory internal consistency reliability and good split-half reliability were obtained. Exploratory and confirmatory factor analysis revealed a 3-correlated-factor solution for the Chinese ATQ: negative thought, positive thought (emotional), and positive thought (cognitive). The negative ATQ subscale score was positively correlated with negative affect, and negatively correlated with positive affect and life satisfaction. The two positive ATQ subscale scores were negatively correlated with negative affect, and positively correlated with positive affect and life satisfaction. The 14-item ATQ is a valid and reliable instrument for measuring automatic thoughts in the Chinese context of Hong Kong. © 2015 Wiley Periodicals, Inc.

  16. Bridging Media with the Help of Players

    NASA Astrophysics Data System (ADS)

    Nitsche, Michael; Drake, Matthew; Murray, Janet

    We suggest harvesting the power of multiplayer design to bridge content across different media platforms and develop player-driven cross-media experiences. This paper first argues to partially replace complex AI systems with multiplayer design strategies to provide the necessary level of flexibility in the content generation for cross-media applications. The second part describes one example project - the Next Generation Play (NGP) project - that illustrates one practical approach of such a player-driven cross-media content generation. NGP allows players to collect virtual items while watching a TV show. These items are re-used in a multiplayer casual game that automatically generates new game worlds based on the various collections of active players joining a game session. While the TV experience is designed for the single big screen, the game executes on multiple mobile phones. Design and technical implementation of the prototype are explained in more detail to clarify how players carry elements of television narratives into a non-linear handheld gaming experience. The system describes a practical way to create casual game adaptations based on players' personal preferences in a multi-user environment.

  17. ODM Data Analysis-A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data.

    PubMed

    Brix, Tobias Johannes; Bruland, Philipp; Sarfraz, Saad; Ernsting, Jan; Neuhaus, Philipp; Storck, Michael; Doods, Justin; Ständer, Sonja; Dugas, Martin

    2018-01-01

    A required step in presenting the results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow to accomplish this task is to export the clinical data from the electronic data capture system used and import it into statistical software such as SAS or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data. The system requires clinical data in the CDISC Operational Data Model format. After a file is uploaded, its syntax and the data-type conformity of the collected data are validated. The completeness of the study data is determined and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality. The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and is also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is only stored in the application while the calculations are performed, which complies with data protection requirements. Analysis times are below half an hour, even for larger studies with over 6000 subjects. Medical experts have confirmed the usefulness of this application for obtaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analyses of statisticians, but it can be used as a starting point for their examination and reporting.
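
    A much-reduced sketch of the described pipeline (read item-level values from an ODM file, check numeric conformity, and report completeness plus a basic statistic per item); real ODM files carry namespaces, metadata, and far more structure than is handled here, and the item OIDs below are hypothetical.

    ```python
    import statistics
    import xml.etree.ElementTree as ET

    def summarize_odm(path, expected_items):
        """Collect ItemData values per item OID, then report completeness and means."""
        values = {oid: [] for oid in expected_items}
        for _, elem in ET.iterparse(path):
            if elem.tag.endswith("ItemData"):        # tolerate namespaced tags
                oid = elem.get("ItemOID")
                if oid in values and elem.get("Value") is not None:
                    values[oid].append(elem.get("Value"))
        report = {}
        for oid, vals in values.items():
            numeric = []
            for v in vals:
                try:
                    numeric.append(float(v))
                except ValueError:
                    pass                              # data-type violation: skip value
            report[oid] = {
                "n_recorded": len(vals),
                "n_numeric": len(numeric),
                "mean": statistics.mean(numeric) if numeric else None,
            }
        return report

    # print(summarize_odm("study.xml", expected_items=["I.AGE", "I.WEIGHT"]))
    ```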

  18. Evaluation of the automatic optical authentication technologies for control systems of objects

    NASA Astrophysics Data System (ADS)

    Averkin, Vladimir V.; Volegov, Peter L.; Podgornov, Vladimir A.

    2000-03-01

    The report considers the evaluation of automatic optical authentication technologies for the automated integrated system of physical protection, control and accounting of nuclear materials at RFNC-VNIITF, and for supporting the nuclear materials nonproliferation regime. The report presents the nuclear object authentication objectives and strategies, the methodology of automatic optical authentication, and the results of the development of pattern recognition techniques carried out under ISTC project #772 with the purpose of identifying unique features of the surface structure of a controlled object and the effects of its random treatment. The report describes the current solution of the following functional control tasks: confirmation of item authenticity (proof of the absence of its substitution by an item of similar shape), control over unforeseen changes of item state, and control over unauthorized access to the item. The most important distinctive feature of all the techniques is that they aim not at a comprehensive description of some properties of the controlled item, but at unique identification of the item using the minimum necessary set of parameters, which together constitute the item's identification attribute. The main emphasis in the technical approach is on the development of rather simple technological methods intended, for the first time, for use in systems of physical protection, control and accounting of nuclear materials. The developed authentication devices and system are described.

  19. Towards parsimony in habit measurement: Testing the convergent and predictive validity of an automaticity subscale of the Self-Report Habit Index

    PubMed Central

    2012-01-01

    Background The twelve-item Self-Report Habit Index (SRHI) is the most popular measure of energy-balance related habits. This measure characterises habit by automatic activation, behavioural frequency, and relevance to self-identity. Previous empirical research suggests that the SRHI may be abbreviated with no losses in reliability or predictive utility. Drawing on recent theorising suggesting that automaticity is the ‘active ingredient’ of habit-behaviour relationships, we tested whether an automaticity-specific SRHI subscale could capture habit-based behaviour patterns in self-report data. Methods A content validity task was undertaken to identify a subset of automaticity indicators within the SRHI. The reliability, convergent validity and predictive validity of the automaticity item subset was subsequently tested in secondary analyses of all previous SRHI applications, identified via systematic review, and in primary analyses of four raw datasets relating to energy‐balance relevant behaviours (inactive travel, active travel, snacking, and alcohol consumption). Results A four-item automaticity subscale (the ‘Self-Report Behavioural Automaticity Index’; ‘SRBAI’) was found to be reliable and sensitive to two hypothesised effects of habit on behaviour: a habit-behaviour correlation, and a moderating effect of habit on the intention-behaviour relationship. Conclusion The SRBAI offers a parsimonious measure that adequately captures habitual behaviour patterns. The SRBAI may be of particular utility in predicting future behaviour and in studies tracking habit formation or disruption. PMID:22935297

  20. Automatic and strategic effects in the guidance of attention by working memory representations

    PubMed Central

    Carlisle, Nancy B.; Woodman, Geoffrey F.

    2010-01-01

    Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions. PMID:20643386

  1. Automatic and strategic effects in the guidance of attention by working memory representations.

    PubMed

    Carlisle, Nancy B; Woodman, Geoffrey F

    2011-06-01

    Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions. Copyright © 2010 Elsevier B.V. All rights reserved.
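
    The cost/benefit logic used in these two records can be written down directly: benefit is the neutral-cue minus valid-cue reaction time, and cost is the invalid-cue minus neutral-cue reaction time. A tiny sketch with invented mean RTs:

    ```python
    def cueing_effects(rt_valid, rt_neutral, rt_invalid):
        """Posner-style cueing effects from mean reaction times (ms)."""
        return {"benefit_ms": rt_neutral - rt_valid,
                "cost_ms": rt_invalid - rt_neutral}

    # e.g., comparing a low-validity block with a high-validity block
    print(cueing_effects(rt_valid=612, rt_neutral=630, rt_invalid=655))  # 20% valid
    print(cueing_effects(rt_valid=585, rt_neutral=640, rt_invalid=701))  # 80% valid
    ```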

  2. Adaptable Learning Assistant for Item Bank Management

    ERIC Educational Resources Information Center

    Nuntiyagul, Atorn; Naruedomkul, Kanlaya; Cercone, Nick; Wongsawang, Damras

    2008-01-01

    We present PKIP, an adaptable learning assistant tool for managing question items in item banks. PKIP is not only able to automatically assist educational users to categorize the question items into predefined categories by their contents but also to correctly retrieve the items by specifying the category and/or the difficulty level. PKIP adapts…

  3. The Color Red Supports Avoidance Reactions to Unhealthy Food.

    PubMed

    Rohr, Michaela; Kamm, Friederike; Koenigstorfer, Joerg; Groeppel-Klein, Andrea; Wentura, Dirk

    2015-01-01

    Empirical evidence suggests that the color red acts like an implicit avoidance cue in food contexts. Thus specific colors seem to guide the implicit evaluation of food items. We built upon this research by investigating the implicit meaning of color (red vs. green) in an approach-avoidance task with healthy and unhealthy food items. Thus, we examined the joint evaluative effects of color and food: Participants had to categorize food items by approach-avoidance reactions, according to their healthfulness. Items were surrounded by task-irrelevant red or green circles. We found that the implicit meaning of the traffic light colors influenced participants' reactions to the food items. The color red (compared to green) facilitated automatic avoidance reactions to unhealthy foods. By contrast, approach behavior toward healthy food items was not moderated by color. Our findings suggest that traffic light colors can act as implicit cues that guide automatic behavioral reactions to food.

  4. Central receiver solar thermal power system, Phase 1. CDRL item 2. Pilot plant preliminary design report. Volume VI. Electrical power generation and master control subsystems and balance of plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallet, Jr., R. W.; Gervais, R. L.

    1977-10-01

    The requirements, performance, and subsystem configuration for both the Commercial and Pilot Plant electrical power generation subsystems (EPGS) and balance of plants are presented. The EPGS for both the Commercial Plant and Pilot Plant make use of conventional, proven equipment consistent with good power plant design practices in order to minimize risk and maximize reliability. The basic EPGS cycle selected is a regenerative cycle that uses a single automatic admission, condensing, tandem-compound double-flow turbine. Specifications, performance data, drawings, and schematics are included. (WHK)

  5. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

    ERIC Educational Resources Information Center

    Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

    2013-01-01

    In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…

  6. Children's associative learning: automatic and deliberate encoding of meaningful associations.

    PubMed

    Guttentag, R

    1995-01-01

    Three experiments were conducted examining 10- and 11-year-old children's deliberate and automatic encoding of meaningful associative relationships on a paired-associate learning task. Subjects in Experiment 1 were presented pairs of related and unrelated words under deliberate memorization and item-specific incidental-learning conditions. Cued-recall performance was superior with related relative to unrelated pairs under both instructional conditions, suggesting that the encoding of an association between items occurred automatically with meaningfully related words. In Experiment 2, it was found that execution of a verbal elaboration strategy required more time with unrelated than with related pairs, suggesting greater ease of elaboration strategy execution with related materials. Experiment 3 monitored strategy use online using a think-aloud procedure. Cued-recall performance was superior with related pairs when subjects used rehearsal. In contrast, elaboration produced equivalent levels of recall with both types of items, but subjects executed the strategy successfully more often with related than with unrelated pairs. These findings are discussed in terms of the role of automatic processes and the effort demands of strategy execution in children's strategy use.

  7. Using Bayesian networks to support decision-focused information retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehner, P.; Elsaesser, C.; Seligman, L.

    This paper describes an approach to controlling the process of pulling data/information from distributed databases in a way that is specific to a person's decision-making context. Our prototype implementation of this approach uses a knowledge-based planner to generate a plan, an automatically constructed Bayesian network to evaluate the plan, specialized processing of the network to derive key information items that would substantially impact the evaluation of the plan (e.g., determine that replanning is needed), and automated construction of Standing Requests for Information (SRIs), which are automated functions that monitor changes and trends in the distributed databases that are relevant to the key information items. The emphasis of this paper is on how Bayesian networks are used.

  8. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  9. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  10. 48 CFR 252.211-7003 - Item identification and valuation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., used to retrieve data encoded on machine-readable media. Concatenated unique item identifier means— (1... (or controlling) authority for the enterprise identifier. Item means a single hardware article or a...-readable means an automatic identification technology media, such as bar codes, contact memory buttons...

  11. Decomposing the relation between Rapid Automatized Naming (RAN) and reading ability.

    PubMed

    Arnell, Karen M; Joanisse, Marc F; Klein, Raymond M; Busseri, Michael A; Tannock, Rosemary

    2009-09-01

    The Rapid Automatized Naming (RAN) test involves rapidly naming sequences of items presented in a visual array. RAN has generated considerable interest because RAN performance predicts reading achievement. This study sought to determine what elements of RAN are responsible for the shared variance between RAN and reading performance using a series of cognitive tasks and a latent variable modelling approach. Participants performed RAN measures, a test of reading speed and comprehension, and six tasks, which tapped various hypothesised components of the RAN. RAN shared 10% of the variance with reading comprehension and 17% with reading rate. Together, the decomposition tasks explained 52% and 39% of the variance shared between RAN and reading comprehension and between RAN and reading rate, respectively. Significant predictors suggested that working memory encoding underlies part of the relationship between RAN and reading ability.

  12. 48 CFR 252.211-7003 - Item unique identification and valuation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... reader or interrogator, used to retrieve data encoded on machine-readable media. Concatenated unique item... identifier. Item means a single hardware article or a single unit formed by a grouping of subassemblies... manufactured under identical conditions. Machine-readable means an automatic identification technology media...

  13. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection During Visual Search?

    PubMed Central

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing. PMID:17469973

  14. Do the contents of visual working memory automatically influence attentional selection during visual search?

    PubMed

    Woodman, Geoffrey F; Luck, Steven J

    2007-04-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing.

  15. Automatic processing influences free recall: converging evidence from the process dissociation procedure and remember-know judgments.

    PubMed

    McCabe, David P; Roediger, Henry L; Karpicke, Jeffrey D

    2011-04-01

    Dual-process theories of retrieval suggest that controlled and automatic processing contribute to memory performance. Free recall tests are often considered pure measures of recollection, assessing only the controlled process. We report two experiments demonstrating that automatic processes also influence free recall. Experiment 1 used inclusion and exclusion tasks to estimate recollection and automaticity in free recall, adopting a new variant of the process dissociation procedure. Dividing attention during study selectively reduced the recollection estimate but did not affect the automatic component. In Experiment 2, we replicated the results of Experiment 1, and subjects additionally reported remember-know-guess judgments during recall in the inclusion condition. In the latter task, dividing attention during study reduced remember judgments for studied items, but know responses were unaffected. Results from both methods indicated that free recall is partly driven by automatic processes. Thus, we conclude that retrieval in free recall tests is not driven solely by conscious recollection (or remembering) but also by automatic influences of the same sort believed to drive priming on implicit memory tests. Sometimes items come to mind without volition in free recall.

  16. Subliminal gaze cues increase preference levels for items in the gaze direction.

    PubMed

    Mitsuda, Takashi; Masaki, Syuta

    2017-08-29

    Another individual's gaze automatically shifts an observer's attention to a location. This reflexive response occurs even when the gaze is presented subliminally over a short period. Another's gaze also increases the preference level for items in the gaze direction; however, it was previously unclear if this effect occurs when the gaze is presented subliminally. This study showed that the preference levels for nonsense figures looked at by a subliminal gaze were significantly greater than those for items that were subliminally looked away from (Task 1). Targets that were looked at by a subliminal gaze were detected faster (Task 2); however, the participants were unable to detect the gaze direction (Task 3). These results indicate that another individual's gaze automatically increases the preference levels for items in the gaze direction without conscious awareness.

  17. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, A.J.

    1994-05-10

    Disclosed are a method and apparatus for automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly. 10 figures.

  18. Automatic rapid attachable warhead section

    DOEpatents

    Trennel, Anthony J.

    1994-05-10

    Disclosed are a method and apparatus for (1) automatically selecting warheads or reentry vehicles from a storage area containing a plurality of types of warheads or reentry vehicles, (2) automatically selecting weapon carriers from a storage area containing at least one type of weapon carrier, (3) manipulating and aligning the selected warheads or reentry vehicles and weapon carriers, and (4) automatically coupling the warheads or reentry vehicles with the weapon carriers such that coupling of improperly selected warheads or reentry vehicles with weapon carriers is inhibited. Such inhibition enhances safety of operations and is achieved by a number of means including computer control of the process of selection and coupling and use of connectorless interfaces capable of assuring that improperly selected items will be rejected or rendered inoperable prior to coupling. Also disclosed are a method and apparatus wherein the stated principles pertaining to selection, coupling and inhibition are extended to apply to any item-to-be-carried and any carrying assembly.

  19. Automatic food detection in egocentric images using artificial intelligence technology.

    PubMed

    Jia, Wenyan; Li, Yuecheng; Qu, Ruowei; Baranowski, Thomas; Burke, Lora E; Zhang, Hong; Bai, Yicheng; Mancino, Juliet M; Xu, Guizhi; Mao, Zhi-Hong; Sun, Mingui

    2018-03-26

    To develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, which formed eButton data set 1, were manually selected from thirty subjects. eButton data set 2 contained 29 515 images acquired from a research participant in a week-long unrestricted recording. They included both food- and non-food-related real-life activities, such as dining at both home and restaurants, cooking, shopping, gardening, housekeeping chores, taking classes, gym exercise, etc. All images in these data sets were classified as food/non-food images based on their tags generated by a convolutional neural network. A cross data-set test was conducted on eButton data set 1. The overall accuracy of food detection was 91·5 and 86·4 %, respectively, when one-half of data set 1 was used for training and the other half for testing. For eButton data set 2, 74·0 % sensitivity and 87·0 % specificity were obtained if both 'food' and 'drink' were considered as food images. Alternatively, if only 'food' items were considered, the sensitivity and specificity reached 85·0 and 85·8 %, respectively. The AI technology can automatically detect foods from low-quality, wearable camera-acquired real-world egocentric images with reasonable accuracy, reducing both the burden of data processing and privacy concerns.
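
    The headline numbers in this record (sensitivity, specificity, overall accuracy) come from a standard confusion-table calculation. A minimal sketch, using hypothetical counts rather than the eButton data:

```python
# Hypothetical confusion counts for food/non-food image classification;
# sensitivity, specificity and accuracy are derived as in the record above.

tp, fn = 740, 260   # food images correctly / incorrectly classified (hypothetical)
tn, fp = 870, 130   # non-food images correctly / incorrectly classified (hypothetical)

sensitivity = tp / (tp + fn)   # proportion of food images detected
specificity = tn / (tn + fp)   # proportion of non-food images rejected
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, accuracy={accuracy:.1%}")
```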

  20. A method for feature selection of APT samples based on entropy

    NASA Astrophysics Data System (ADS)

    Du, Zhenyu; Li, Yihong; Hu, Jinsong

    2018-05-01

    Based on an in-depth study of known APT attack events, this paper proposes a feature selection method for APT samples and a logic expression generation algorithm, IOCG (Indicator of Compromise Generate). The algorithm automatically generates machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOC expressions, in which the logical relationships are fixed, the number of logical items cannot change, and the scale is large, and which cannot be generated from samples. At the same time, it reduces redundant and useless processing time for APT samples, improves the sharing rate of information analysis, and supports an active response to complex and volatile APT attack situations. The samples were divided into an experimental set and a training set, and the algorithm was then used to generate logical expressions for the training set with the IOC_Aware plug-in; the generated expressions were compared against the detection results. The experimental results show that the algorithm is effective and can improve detection.
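
    The record does not spell out its entropy criterion, so the sketch below only shows one generic possibility: scoring boolean sample features by Shannon-entropy information gain. The feature names, samples, and labels are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(samples, labels, feature):
    """Reduction in label entropy when samples are split by a boolean feature."""
    gain = entropy(labels)
    for value in (True, False):
        subset = [lab for s, lab in zip(samples, labels) if s.get(feature, False) == value]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Hypothetical APT samples described by boolean indicators (names invented).
samples = [{"uses_powershell": True}, {"uses_powershell": False},
           {"uses_powershell": True}, {"uses_powershell": True}]
labels = ["APT", "benign", "APT", "benign"]
print(information_gain(samples, labels, "uses_powershell"))
```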

  1. AN EVALUATION OF ANTECEDENT EXERCISE ON BEHAVIOR MAINTAINED BY AUTOMATIC REINFORCEMENT USING A THREE-COMPONENT MULTIPLE SCHEDULE

    PubMed Central

    Morrison, Heather; Roscoe, Eileen M; Atwell, Amy

    2011-01-01

    We evaluated antecedent exercise for treating the automatically reinforced problem behavior of 4 individuals with autism. We conducted preference assessments to identify leisure and exercise items that were associated with high levels of engagement and low levels of problem behavior. Next, we conducted three 3-component multiple-schedule sequences: an antecedent-exercise test sequence, a noncontingent leisure-item control sequence, and a social-interaction control sequence. Within each sequence, we used a 3-component multiple schedule to evaluate preintervention, intervention, and postintervention effects. Problem behavior decreased during the postintervention component relative to the preintervention component for 3 of the 4 participants during the exercise-item assessment; however, the effects could not be attributed solely to exercise for 1 of these participants. PMID:21941383

  2. A guidebook for using automatic passenger counter data for National Transit Database (NTD) reporting

    DOT National Transportation Integrated Search

    2010-12-01

    This document provides guidance for transit agencies to use data from their automatic passenger counters (APCs) for reporting to the National Transit Database (NTD). It first reviews both the traditional data requirements on the data items to be repo...

  3. Identifying predictors of physics item difficulty: A linear regression approach

    NASA Astrophysics Data System (ADS)

    Mesic, Vanes; Muratovic, Hasnija

    2011-06-01

    Large-scale assessments of student achievement in physics are often approached with an intention to discriminate students based on the attained level of their physics competencies. Therefore, for purposes of test design, it is important that items display an acceptable discriminatory behavior. To that end, it is recommended to avoid extraordinary difficult and very easy items. Knowing the factors that influence physics item difficulty makes it possible to model the item difficulty even before the first pilot study is conducted. Thus, by identifying predictors of physics item difficulty, we can improve the test-design process. Furthermore, we get additional qualitative feedback regarding the basic aspects of student cognitive achievement in physics that are directly responsible for the obtained, quantitative test results. In this study, we conducted a secondary analysis of data that came from two large-scale assessments of student physics achievement at the end of compulsory education in Bosnia and Herzegovina. Foremost, we explored the concept of “physics competence” and performed a content analysis of 123 physics items that were included within the above-mentioned assessments. Thereafter, an item database was created. Items were described by variables which reflect some basic cognitive aspects of physics competence. For each of the assessments, Rasch item difficulties were calculated in separate analyses. In order to make the item difficulties from different assessments comparable, a virtual test equating procedure had to be implemented. Finally, a regression model of physics item difficulty was created. It has been shown that 61.2% of item difficulty variance can be explained by factors which reflect the automaticity, complexity, and modality of the knowledge structure that is relevant for generating the most probable correct solution, as well as by the divergence of required thinking and interference effects between intuitive and formal physics knowledge structures. Identified predictors point out the fundamental cognitive dimensions of student physics achievement at the end of compulsory education in Bosnia and Herzegovina, whose level of development influenced the test results within the conducted assessments.
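
    The final step described in this record is an ordinary least-squares regression of Rasch item difficulties on item features. A minimal sketch of that kind of model, with simulated binary features and difficulties standing in for the real item database:

```python
# Sketch of regressing Rasch item difficulties on binary item features;
# the feature semantics and all data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_items = 123
X = rng.integers(0, 2, size=(n_items, 3)).astype(float)   # e.g. automaticity, complexity, modality flags
beta_true = np.array([-0.8, 1.1, 0.5])
difficulty = X @ beta_true + rng.normal(0, 0.6, n_items)    # simulated Rasch difficulties

X_design = np.column_stack([np.ones(n_items), X])           # add intercept
beta_hat, *_ = np.linalg.lstsq(X_design, difficulty, rcond=None)
pred = X_design @ beta_hat
r2 = 1 - np.sum((difficulty - pred) ** 2) / np.sum((difficulty - difficulty.mean()) ** 2)
print("coefficients:", beta_hat.round(2), "R^2:", round(r2, 3))
```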

  4. Automatic HDL firmware generation for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    NASA Astrophysics Data System (ADS)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of automatic firmware generation for reconfigurable measurement systems that use FPGA devices and measurement cards in the FMC standard. The following topics are described in detail: automatic HDL code generation for FPGA devices, automatic implementation of communication interfaces, HDL drivers for measurement cards, automatic serial connection between multiple measurement backplane boards, automatic building of the memory map (address space), and management of the automatically generated firmware. The presented solutions are required in many advanced measurement systems, such as Beam Position Monitors or GEM detectors. This work is part of a wider project for automatic firmware generation and management of reconfigurable systems. The solutions presented in this paper build on a previous SPIE publication.
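
    One building block named in this record is automatic HDL code generation. A toy, hedged illustration of template-driven generation in Python follows; the register map, signal names, and the VHDL-like output text are all invented and are not the authors' framework.

```python
# Toy sketch of template-driven HDL generation: a register-map description
# is turned into a VHDL-like address-decode snippet. All names are invented.

registers = [
    {"name": "status", "addr": 0x00},
    {"name": "control", "addr": 0x04},
    {"name": "adc_data", "addr": 0x08},
]

def generate_readback(regs):
    lines = ["case addr is"]
    for r in regs:
        lines.append(f'  when x"{r["addr"]:02X}" => data_out <= {r["name"]}_reg;')
    lines.append("  when others => data_out <= (others => '0');")
    lines.append("end case;")
    return "\n".join(lines)

print(generate_readback(registers))
```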

  5. Smartphone data as an electronic biomarker of illness activity in bipolar disorder.

    PubMed

    Faurholt-Jepsen, Maria; Vinberg, Maj; Frost, Mads; Christensen, Ellen Margrethe; Bardram, Jakob E; Kessing, Lars Vedel

    2015-11-01

    Objective methods are lacking for continuous monitoring of illness activity in bipolar disorder. Smartphones offer unique opportunities for continuous monitoring and automatic collection of real-time data. The objectives of the paper were to test the hypotheses that (i) daily electronic self-monitored data and (ii) automatically generated objective data collected using smartphones correlate with clinical ratings of depressive and manic symptoms in patients with bipolar disorder. Software for smartphones (the MONARCA I system) that collects automatically generated objective data and self-monitored data on illness activity in patients with bipolar disorder was developed by the authors. A total of 61 patients aged 18-60 years and with a diagnosis of bipolar disorder according to ICD-10 used the MONARCA I system for six months. Depressive and manic symptoms were assessed monthly using the Hamilton Depression Rating Scale 17-item (HDRS-17) and the Young Mania Rating Scale (YMRS), respectively. Data are representative of over 400 clinical ratings. Analyses were computed using linear mixed-effect regression models allowing for both between individual variation and within individual variation over time. Analyses showed significant positive correlations between the duration of incoming and outgoing calls/day and scores on the HDRS-17, and significant positive correlations between the number and duration of incoming calls/day and scores on the YMRS; the number of and duration of outgoing calls/day and scores on the YMRS; and the number of outgoing text messages/day and scores on the YMRS. Analyses showed significant negative correlations between self-monitored data (i.e., mood and activity) and scores on the HDRS-17, and significant positive correlations between self-monitored data (i.e., mood and activity) and scores on the YMRS. Finally, the automatically generated objective data were able to discriminate between affective states. Automatically generated objective data and self-monitored data collected using smartphones correlate with clinically rated depressive and manic symptoms and differ between affective states in patients with bipolar disorder. Smartphone apps represent an easy and objective way to monitor illness activity with real-time data in bipolar disorder and may serve as an electronic biomarker of illness activity. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
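
    The analysis described here is a linear mixed-effects regression with repeated ratings nested within patients. A minimal sketch of that model class using statsmodels, with synthetic data standing in for the MONARCA I measurements (all variable names and values are invented):

```python
# Random-intercept mixed model relating a smartphone-derived measure to a
# clinician rating. Data are synthetic; this is not the MONARCA analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_visits = 20, 6
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_visits),
    "call_minutes": rng.gamma(shape=2.0, scale=10.0, size=n_patients * n_visits),
})
df["hdrs17"] = 8 + 0.15 * df["call_minutes"] + rng.normal(0, 3, len(df))

model = smf.mixedlm("hdrs17 ~ call_minutes", data=df, groups=df["patient"])
print(model.fit().summary())
```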

  6. A Recommendation Algorithm for Automating Corollary Order Generation

    PubMed Central

    Klann, Jeffrey; Schadow, Gunther; McCoy, JM

    2009-01-01

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875

  7. A recommendation algorithm for automating corollary order generation.

    PubMed

    Klann, Jeffrey; Schadow, Gunther; McCoy, J M

    2009-11-14

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards.
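
    The core of the approach in the two records above is item-based collaborative filtering over order histories. A bare-bones sketch of the item-item cosine-similarity step, omitting the association-rule interestingness measures the authors add, and using invented orders and item names:

```python
# Item-based collaborative filtering over binary order histories: co-occurrence
# counts become cosine similarities, and the most similar orderable items are
# suggested as corollary orders. Order data and item names are invented.
import numpy as np

orders = np.array([          # rows = patient encounters, cols = orderable items
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
], dtype=float)
item_names = ["heparin", "aPTT", "CBC", "warfarin"]

co = orders.T @ orders                       # item-item co-occurrence counts
norms = np.sqrt(np.diag(co))
similarity = co / np.outer(norms, norms)     # cosine similarity between items
np.fill_diagonal(similarity, 0.0)

seed = item_names.index("heparin")
ranked = np.argsort(similarity[seed])[::-1]
print("suggested corollary orders for heparin:",
      [item_names[i] for i in ranked[:2]])
```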

  8. Automatic food detection in egocentric images using artificial intelligence technology

    USDA-ARS?s Scientific Manuscript database

    Our objective was to develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable devic...

  9. 38 CFR 36.4353 - Withdrawal of authority to close loans on the automatic basis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... imprudent from a lending standpoint or which are prejudicial to the interests of veterans or the Government...) Automatic loan submissions show deficiencies in credit underwriting, such as use of unstable sources of income to qualify the borrower, ignoring significant adverse credit items affecting the applicant's...

  10. Automatic Conceptual Encoding of Printed Verbal Material: Assessment of Population Differences.

    ERIC Educational Resources Information Center

    Kee, Daniel W.; And Others

    1984-01-01

    The release from proactive interference task was used to investigate categorical encoding of items. Low socioeconomic status Black and middle socioeconomic status White children were compared. Conceptual encoding differences between these populations were not detected in automatic conceptual encoding but were detected when the free recall method…

  11. Time Requirements for the Different Item Types Proposed for Use in the Revised SAT®. Research Report No. 2007-3. ETS RR-07-35

    ERIC Educational Resources Information Center

    Bridgeman, Brent; Laitusis, Cara Cahalan; Cline, Frederick

    2007-01-01

    The current study used three data sources to estimate time requirements for different item types on the now current SAT Reasoning Test™. First, we estimated times from a computer-adaptive version of the SAT® (SAT CAT) that automatically recorded item times. Second, we observed students as they answered SAT questions under strict time limits and…

  12. PFP Public Automatic Exchange (PAX) Commercial Grade Item (CGI) Critical Characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WHITE, W.F.

    2000-04-04

    This document specifies the critical characteristics for Commercial Grade Items (CGI) procured for use within the safety envelope of PFP's PAX system as required by HNF-PRO-268 and HNF-PRO-1819. These are the minimum specifications that the equipment must meet in order to properly perform its safety function. There may be several manufacturers or models that meet the critical characteristics for any one item.

  13. Method for automatic measurement of second language speaking proficiency

    NASA Astrophysics Data System (ADS)

    Bernstein, Jared; Balogh, Jennifer

    2005-04-01

    Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically-scored and primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.

  14. Transport aircraft loading and balancing system: Using a CLIPS expert system for military aircraft load planning

    NASA Technical Reports Server (NTRS)

    Richardson, J.; Labbe, M.; Belala, Y.; Leduc, Vincent

    1994-01-01

    The requirement for improving aircraft utilization and responsiveness in airlift operations has been recognized for quite some time by the Canadian Forces. To date, the utilization of scarce airlift resources has been planned mainly through the employment of manpower-intensive manual methods in combination with the expertise of highly qualified personnel. In this paper, we address the problem of facilitating the load planning process for military cargo aircraft through the development of a computer-based system. We introduce TALBAS (Transport Aircraft Loading and BAlancing System), a knowledge-based system designed to assist personnel involved in preparing valid load plans for the C130 Hercules aircraft. The main features of this system, which are accessible through a convivial graphical user interface, consist of the automatic generation of valid cargo arrangements given a list of items to be transported, the user definition of load plans, and the automatic validation of such load plans.

  15. Research on Automatic Classification, Indexing and Extracting. Annual Progress Report.

    ERIC Educational Resources Information Center

    Baker, F.T.; And Others

    In order to contribute to the success of several studies for automatic classification, indexing and extracting currently in progress, as well as to further the theoretical and practical understanding of textual item distributions, the development of a frequency program capable of supplying these types of information was undertaken. The program…

  16. Breaking the Cost Barrier in Automatic Classification.

    ERIC Educational Resources Information Center

    Doyle, L. B.

    A low-cost automatic classification method is reported that uses computer time in proportion to N log N, where N is the number of information items and the base of the logarithm is a parameter. Some barriers besides cost are treated briefly in the opening section, including types of intellectual resistance to the idea of doing classification by content-word…

  17. MR imaging and proton spectroscopy of the breast: how to select the images useful to convey the diagnostic message.

    PubMed

    Fausto, A; Magaldi, A; Babaei Paskeh, B; Menicagli, L; Lupo, E N; Sardanelli, F

    2007-10-01

    The purpose of this study was to propose a short way to summarise a breast magnetic resonance (MR) examination including a precontrast and contrast-enhanced dynamic study and proton spectroscopy (1H-MRS) in order to convey the diagnostic message. At the Department of Radiology of the Policlinico San Donato (University of Milan), breast MR is routinely performed at 1.5 T as follows: 36-slice axial 2D short-time inversion-recovery (STIR) sequence; 128-partition 3D gradient-echo coronal sequence (1-mm3 isotropic voxel) before and after rapid automatic intravenous injection of 0.1 mmol/kg of Gd-DOTA (one precontrast and four postcontrast phases). Postprocessing includes temporal subtraction (postcontrast minus precontrast), maximum intensity projections (MIPs), percent enhancement-to-time curves for small regions of interest, and axial and/or sagittal multiplanar reconstructions. Single-voxel 1H-MRS is acquired to characterise focal lesions. Applying this protocol, more than 1,200 images are generated for each examination. We select only four MIPs of an early subtracted dynamic phase: one axial similar to craniocaudal x-ray mammographic views, one coronal, and two lateral similar to lateral 90 degrees x-ray mammographic views. For each lesion described in the report, we select five items, including three images, one graph, and one table: STIR image, precontrast and subtracted postcontrast images (morphology), percent enhancement-to-time curves and a table of raw data generating the curves (dynamics). If 1H-MRS has been performed, we add another five items: two postprocessed spectra (metabolism) and three images localising the volume of interest. Only the selected items are printed on films and attached to the report. The selected items usually range from four (no detected lesion) to 14 (one lesion, studied also with 1H-MRS), to 34 (five lesions, one of them studied also with 1H-MRS). The percentage of items presented with the report, compared with the total number of generated images, is equal to 0.33% (4/1,200), 1.17% (14/1,200), and 2.83% (34/1,200), respectively. Breast MR imaging and 1H-MRS can be effectively summarised by presenting only a minimal fraction of all generated images.
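
    The item counts reported above follow a simple rule: 4 MIP overviews per examination, 5 items per reported lesion, and 5 additional items per lesion also studied with 1H-MRS. A worked check of that arithmetic, assuming roughly 1,200 generated images per examination as stated in the record:

```python
# Worked count of selected items per examination, following the selection rule
# described in the record above (4 MIPs + 5 per lesion + 5 per lesion with MRS).

def selected_items(n_lesions, n_lesions_with_mrs):
    return 4 + 5 * n_lesions + 5 * n_lesions_with_mrs

total_generated = 1200
for lesions, mrs in [(0, 0), (1, 1), (5, 1)]:
    n = selected_items(lesions, mrs)
    print(f"{lesions} lesion(s), {mrs} with MRS: {n} items "
          f"({n / total_generated:.2%} of ~{total_generated} images)")
```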

  18. [Development of a Software for Automatically Generated Contours in Eclipse TPS].

    PubMed

    Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin

    2015-03-01

    The automatic generation of planning targets and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop software for automatically generating contours in Eclipse TPS. This software, named Contour Auto Margin (CAM), is composed of contour operation functions, script-generated visualization, and script file operations. Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For different cancers, there was no difference between automatically generated and manually created contours. CAM is a user-friendly and powerful piece of software that can quickly generate contours automatically in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists is improved.

  19. [Food and beverages available in automatic food dispensers in health care facilities of the Portugal North Health Region].

    PubMed

    Rodrigues, Filipa Gomes; Ramos, Elisabete; Freitas, Mário; Neto, Maria

    2010-01-01

    Patients and health staff frequently need to stay in health care facilities for quite a long time. Therefore, it's necessary to create the conditions that allow the ingestion of food during those periods, namely through the existence of automatic food dispensers. However, the available food and beverages might not always be compatible with a healthy diet. The aim of this work was to evaluate whether the food and beverages available in automatic food dispensers in public Ambulatory Care Facilities (ACF) and Hospitals of the Portugal North Health Region were contributing to a healthy diet during 2007. A questionnaire was developed and sent to the Coordinators of the Health Sub-Regions and to the Hospital Administrators. The questionnaire requested information about the existence of automatic food dispensers in the several departments of each health care facility, as well as which food and beverages were available and most sold. Afterwards, the pre-processing of the results involved the classification of the food and beverages in three categories: recommended, sometimes recommended and not recommended. The questionnaire reply ratio was 71% in ACF and 83% in Hospitals. Automatic food dispensers were available in all the Hospitals and 86.5% of ACF. It wasn't possible to acquire food in 37% of the health facility departments. These departments were all located in ACF. The more frequently available beverages in departments with automatic food dispensers were coffee, still water, tea, juices and nectars and soft drinks. Still water, coffee, yogurt, juices and nectars and soft drinks were reported as the most sold. The more frequently available food items were chocolate, recommended cookies, not recommended cakes, recommended sandwiches and sometimes recommended croissants. The food items reported as being the most sold were recommended sandwiches, chocolate, recommended cookies, sometimes recommended croissants and not recommended cookies. The beverages in the recommended and sometimes recommended groups were the most frequently available and sold. The not recommended food items were reported as being the most available, while both the recommended and not recommended food items were equally reported as being the most sold. Results show that unhealthy food and beverages are widely available in public health care facilities of the Portugal North Health Region.

  20. 46 CFR 112.05-5 - Emergency power source.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... generator must be either a diesel engine or a gas turbine. [CGD 74-125A, 47 FR 15267, Apr. 8, 1982, as... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...

  1. 46 CFR 112.05-5 - Emergency power source.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... generator must be either a diesel engine or a gas turbine. [CGD 74-125A, 47 FR 15267, Apr. 8, 1982, as... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...

  2. Application of NASA-developed technology to the automatic control of municipal sewage treatment plants

    NASA Technical Reports Server (NTRS)

    Hiser, L. L.; Herrera, W. R.

    1973-01-01

    A search was made of NASA-developed technology and commercial technology for process control sensors and instrumentation applicable to the operation of municipal sewage treatment plants. Several notable items were found, and process control concepts were formulated that incorporated these items into systems to automatically operate municipal sewage treatment plants. A preliminary design of the most promising concept was developed into a process control scheme for an activated sludge treatment plant. This design included process control mechanisms for maintaining a constant food-to-sludge mass (F/M) ratio, and for such unit processes as primary sedimentation, sludge wastage, and underflow control from the final clarifier.

  3. Development and Appraisal of Multiple Accounting Record System (Mars).

    PubMed

    Yu, H C; Chen, M C

    2016-01-01

    The aim of the system is to simplify workflow, reduce recording time, and increase income for the study hospital. The project team decided to develop a multiple accounting record system that automatically generates the account records from the nursing records, reducing the time and effort nurses spend reviewing procedures and writing a separate note of material consumption. Three configuration files were defined to express the relationship between treatments and reimbursement items. The workflow was simplified: the nurses reduced daily recording time by an average of 10 minutes, and reimbursement points increased by 7.49%. The project streamlined the workflow and gives the institution a better approach to financial management.
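
    The record describes configuration files that map treatments to reimbursement items so that accounting records can be generated from nursing records. A hedged sketch of that idea follows; the treatment names, codes, and point values are invented, and the actual MARS configuration format is not described in the record.

```python
# Generating billing charges from nursing records via a configuration mapping
# of treatments to reimbursement items (all names and codes are invented).

treatment_to_items = {            # simplified stand-in for a configuration file
    "wound dressing": [{"code": "RX100", "points": 120}],
    "IV infusion": [{"code": "RX200", "points": 80}, {"code": "MAT05", "points": 15}],
}

def generate_accounting(nursing_records):
    charges = []
    for record in nursing_records:
        charges.extend(treatment_to_items.get(record["treatment"], []))
    return charges

nursing_records = [{"treatment": "IV infusion"}, {"treatment": "wound dressing"}]
print(generate_accounting(nursing_records))
```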

  4. The Complete Automation of the Minnesota Multiphasic Personality Inventory and a Study of its Response Latency.

    ERIC Educational Resources Information Center

    Dunn, Thomas G.; And Others

    The feasibility of completely automating the Minnesota Multiphasic Personality Inventory (MMPI) was tested, and item response latencies were compared with other MMPI item characteristics. A total of 26 scales were successfully scored automatically for 165 subjects. The program also typed a Mayo Clinic interpretive report on a computer terminal,…

  5. Automatically Generated Vegetation Density Maps with LiDAR Survey for Orienteering Purpose

    NASA Astrophysics Data System (ADS)

    Petrovič, Dušan

    2018-05-01

    The focus of our research was to automatically generate the most adequate vegetation density maps for orienteering purposes. The Karttapullautin application, which requires LiDAR data as input, was used for the automated generation of vegetation density maps. A part of the orienteering map of the Kazlje-Tomaj area was used to compare the graphical display of vegetation density. By varying the parameter settings in the Karttapullautin application we changed how vegetation density was presented on the automatically generated map and tried to match it as closely as possible to the orienteering map of Kazlje-Tomaj. By comparing several of the generated vegetation density maps, the most suitable parameter settings for automatically generating maps of other areas were also proposed.
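
    Karttapullautin's internals are not described in the record, so the sketch below only illustrates the general idea of a vegetation-density raster derived from classified LiDAR returns: count vegetation points per grid cell and bin the counts into density classes. The points, cell size, and class breaks are invented.

```python
# Generic vegetation-density grid from classified LiDAR returns (synthetic data).
import numpy as np

# columns: x, y, is_vegetation (1/0)
points = np.array([[0.5, 0.2, 1], [0.7, 0.4, 1], [1.5, 0.3, 0],
                   [1.6, 1.7, 1], [0.2, 1.8, 1], [0.3, 1.9, 1]])
cell_size = 1.0

ix = (points[:, 0] // cell_size).astype(int)
iy = (points[:, 1] // cell_size).astype(int)
grid = np.zeros((2, 2))
for x, y, veg in zip(ix, iy, points[:, 2]):
    grid[y, x] += veg                               # vegetation returns per cell

density_class = np.digitize(grid, bins=[1, 2, 3])   # 0 = open ... 3 = densest
print(density_class)
```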

  6. From Biology to Education: Scoring and Clustering Multilingual Text Sequences and Other Sequential. Research Report. ETS RR-12-25

    ERIC Educational Resources Information Center

    Sukkarieh, Jane Z.; von Davier, Matthias; Yamamoto, Kentaro

    2012-01-01

    This document describes a solution to a problem in the automatic content scoring of the multilingual character-by-character highlighting item type. This solution is language independent and represents a significant enhancement. This solution not only facilitates automatic scoring but plays an important role in clustering students' responses;…

  7. Electric Commerce

    DTIC Science & Technology

    1989-10-01

    risk management, such as the coordination of letters of credit, shipping, payments, delivery, and insurance. All of these necessary steps require...vendor to conduct business with a human customer, at a dumb terminal. In contrast, we want to computerize both. ATMs (Automatic Teller Machines) and...entered the store. Distributors with physical showrooms will always cater to the impulse buyer. Many supermarket items could be automatically procured

  8. The Development of a Web-Based Assessment System to Identify Students' Misconception Automatically on Linear Kinematics with a Four-Tier Instrument Test

    ERIC Educational Resources Information Center

    Pujayanto, Pujayanto; Budiharti, Rini; Adhitama, Egy; Nuraini, Niken Rizky Amalia; Putri, Hanung Vernanda

    2018-01-01

    This research proposes the development of a web-based assessment system to identify students' misconception. The system, named WAS (web-based assessment system), can identify students' misconception profile on linear kinematics automatically after the student has finished the test. The test instrument was developed and validated. Items were…

  9. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classifications from a human-amended version of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
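
    Character accuracy of the kind quoted above is commonly computed from an edit distance between the OCR output and a corrected reference. A self-contained sketch (the report strings are invented examples, not data from the study):

```python
# Character accuracy via Levenshtein edit distance between OCR output and a
# human-corrected reference string.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

reference = "invasive ductal carcinoma, grade 2"
ocr_output = "invasive ducta1 carcinoma. grade 2"
char_accuracy = 1 - levenshtein(ocr_output, reference) / len(reference)
print(f"character accuracy: {char_accuracy:.2%}")
```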

  10. Focus of attention and automaticity in handwriting.

    PubMed

    MacMahon, Clare; Charness, Neil

    2014-04-01

    This study investigated the nature of automaticity in everyday tasks by testing handwriting performance under single and dual-task conditions. Item familiarity and hand dominance were also manipulated to understand both cognitive and motor components of the task. In line with previous literature, performance was superior in an extraneous focus of attention condition compared to two different skill focus conditions. This effect was found only when writing with the dominant hand. In addition, performance was superior for high familiarity compared to low familiarity items. These findings indicate that motor and cognitive familiarity are related to the degree of automaticity of motor skills and can be manipulated to produce different performance outcomes. The findings also imply that the progression of skill acquisition from novel to novice to expert levels can be traced using different dual-task conditions. The separation of motor and cognitive familiarity is a new approach in the handwriting domain, and provides insight into the nature of attentional demands during performance. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Automatic identification of the number of food items in a meal using clustering techniques based on the monitoring of swallowing and chewing.

    PubMed

    Lopez-Meyer, Paulo; Schuckers, Stephanie; Makeyev, Oleksandr; Fontana, Juan M; Sazonov, Edward

    2012-09-01

    The number of distinct foods consumed in a meal is of significant clinical concern in the study of obesity and other eating disorders. This paper proposes the use of information contained in chewing and swallowing sequences for meal segmentation by food types. Data collected from experiments of 17 volunteers were analyzed using two different clustering techniques. First, an unsupervised clustering technique, Affinity Propagation (AP), was used to automatically identify the number of segments within a meal. Second, performance of the unsupervised AP method was compared to a supervised learning approach based on Agglomerative Hierarchical Clustering (AHC). While the AP method was able to obtain 90% accuracy in predicting the number of food items, the AHC achieved an accuracy >95%. Experimental results suggest that the proposed models of automatic meal segmentation may be utilized as part of an integral application for objective Monitoring of Ingestive Behavior in free living conditions.
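
    Affinity Propagation is attractive here because it selects the number of clusters itself, which is exactly the quantity of interest: the number of food items in the meal. A minimal scikit-learn sketch on synthetic chew/swallow features (the study's real feature set is not reproduced here):

```python
# Unsupervised segmentation of a meal into food items by clustering synthetic
# per-swallow feature vectors with Affinity Propagation.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(2)
# e.g. [chews per swallow, swallow duration] for three simulated food items
features = np.vstack([
    rng.normal([10, 1.0], 0.5, size=(20, 2)),
    rng.normal([25, 1.8], 0.5, size=(20, 2)),
    rng.normal([5, 0.6], 0.5, size=(20, 2)),
])

# The preference parameter controls how many exemplars (clusters) emerge.
labels = AffinityPropagation(random_state=0).fit_predict(features)
print("estimated number of food items:", len(set(labels)))
```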

  12. Expert system development for commonality analysis in space programs

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1987-01-01

    This report is a combination of foundational mathematics and software design. A mathematical model of the Commonality Analysis problem was developed and some important properties discovered. The complexity of the problem is described herein and techniques, both deterministic and heuristic, for reducing that complexity are presented. Weaknesses are pointed out in the existing software (System Commonality Analysis Tool) and several improvements are recommended. It is recommended that: (1) an expert system for guiding the design of new databases be developed; (2) a distributed knowledge base be created and maintained for the purpose of encoding the commonality relationships between design items in commonality databases; (3) a software module be produced which automatically generates commonality alternative sets from commonality databases using the knowledge associated with those databases; and (4) a more complete commonality analysis module be written which is capable of generating any type of feasible solution.

  13. Automatic Text Structuring and Summarization.

    ERIC Educational Resources Information Center

    Salton, Gerard; And Others

    1997-01-01

    Discussion of the use of information retrieval techniques for automatic generation of semantic hypertext links focuses on automatic text summarization. Topics include World Wide Web links, text segmentation, and evaluation of text summarization by comparing automatically generated abstracts with manually prepared abstracts. (Author/LRW)

  14. Worklist handling in workflow-enabled radiological application systems

    NASA Astrophysics Data System (ADS)

    Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim; von Berg, Jens

    2000-05-01

    For the next generation of integrated information systems for health care applications, more emphasis has to be put on systems which, by design, support the reduction of cost, the increase in efficiency, and the improvement of the quality of services. A substantial contribution to this will be the modeling, optimization, automation, and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management, the generation and handling of worklists, which automatically provide workflow participants with work items that reflect tasks to be performed. The display of worklists and the functions associated with work items are the visible part for the end-users of an information system using a workflow management approach. Appropriate worklist design and implementation will influence the user friendliness of a system and will largely influence work efficiency. Technically, in current imaging department information system environments (modality-PACS-RIS installations), a data-driven approach has been taken: worklists -- if present at all -- are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide us with an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.
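
    The contrast drawn in this record is between data-driven worklists (filtered views over application data) and process-driven worklists emitted by a workflow service from an explicit process model. A schematic sketch of that contrast, with invented data structures and task names:

```python
# Data-driven vs. process-driven worklist generation (illustrative only).

studies = [
    {"id": "S1", "status": "acquired", "modality": "CT"},
    {"id": "S2", "status": "reported", "modality": "MR"},
    {"id": "S3", "status": "acquired", "modality": "MR"},
]

# Data-driven: the worklist is a filtered view on the application database.
reading_worklist = [s for s in studies if s["status"] == "acquired"]

# Process-driven: a workflow service emits explicit work items per process step.
def emit_work_items(study):
    if study["status"] == "acquired":
        yield {"task": "read study", "study": study["id"]}
        yield {"task": "dictate report", "study": study["id"]}

workflow_worklist = [item for s in studies for item in emit_work_items(s)]
print(reading_worklist, workflow_worklist, sep="\n")
```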

  15. Approaches to the automatic generation and control of finite element meshes

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    The algorithmic approaches being taken to the development of finite element mesh generators capable of automatically discretizing general domains without the need for user intervention are discussed. It is demonstrated that, because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators as well as their integration with geometric modeling and adaptive analysis procedures.
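
    As a concrete, generic illustration of unstructured mesh generation (not the specific algorithms surveyed in the report), the sketch below triangulates a 2-D point set with a Delaunay triangulation:

```python
# Unstructured triangular mesh over a 2-D domain via Delaunay triangulation,
# a common building block of automatic mesh generators.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
boundary = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
interior = rng.random((30, 2))                  # interior nodes in the unit square
nodes = np.vstack([boundary, interior])

mesh = Delaunay(nodes)
print(f"{len(nodes)} nodes, {len(mesh.simplices)} triangular elements")
```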

  16. Automated Scoring of Speaking Tasks in the Test of English-for-Teaching ("TEFT"™). Research Report. ETS RR-15-31

    ERIC Educational Resources Information Center

    Zechner, Klaus; Chen, Lei; Davis, Larry; Evanini, Keelan; Lee, Chong Min; Leong, Chee Wee; Wang, Xinhao; Yoon, Su-Youn

    2015-01-01

    This research report presents a summary of research and development efforts devoted to creating scoring models for automatically scoring spoken item responses of a pilot administration of the Test of English-for-Teaching ("TEFT"™) within the "ELTeach"™ framework. The test consists of items for all four language modalities:…

  17. The Development of Automaticity in Short-Term Memory Search: Item-Response Learning and Category Learning

    ERIC Educational Resources Information Center

    Cao, Rui; Nosofsky, Robert M.; Shiffrin, Richard M.

    2017-01-01

    In short-term-memory (STM)-search tasks, observers judge whether a test probe was present in a short list of study items. Here we investigated the long-term learning mechanisms that lead to the highly efficient STM-search performance observed under conditions of consistent-mapping (CM) training, in which targets and foils never switch roles across…

  18. Pushing typists back on the learning curve: revealing chunking in skilled typewriting.

    PubMed

    Yamaguchi, Motonori; Logan, Gordon D

    2014-04-01

    Theories of skilled performance propose that highly trained skills involve hierarchically structured control processes. The present study examined and demonstrated hierarchical control at several levels of processing in skilled typewriting. In the first two experiments, we scrambled the order of letters in words to prevent skilled typists from chunking letters, and compared typing words and scrambled words. Experiment 1 manipulated stimulus quality to reveal chunking in perception, and Experiment 2 manipulated concurrent memory load to reveal chunking in short-term memory (STM). Both experiments manipulated the number of letters in words and nonwords to reveal chunking in motor planning. In the next two experiments, we degraded typing skill by altering the usual haptic feedback by using a laser-projection keyboard, so that typists had to monitor keystrokes. Neither the number of motor chunks (Experiment 3) nor the number of STM items (Experiment 4) was influenced by the manipulation. The results indicate that the utilization of hierarchical control depends on whether the input allows chunking but not on whether the output is generated automatically. We consider the role of automaticity in hierarchical control of skilled performance.

  19. Validation of an automatically generated screening score for frailty: the care assessment need (CAN) score.

    PubMed

    Ruiz, Jorge G; Priyadarshni, Shivani; Rahaman, Zubair; Cabrera, Kimberly; Dang, Stuti; Valencia, Willy M; Mintzer, Michael J

    2018-05-04

    Frailty is a state of vulnerability to stressors that is prevalent in older adults and is associated with higher morbidity, mortality and healthcare utilization. Multiple instruments are used to measure frailty; most are time-consuming. The Care Assessment Need (CAN) score is automatically generated from electronic health record data using a statistical model. The methodology for calculation of the CAN score is consistent with the deficit accumulation model of frailty. At the 95th percentile, the CAN score is a predictor of hospitalization and mortality in Veteran populations. The purpose of this study was to validate the CAN score as a screening tool for frailty in primary care. This cross-sectional validation study compared the CAN score with a 40-item Frailty Index reference standard based on a comprehensive geriatric assessment. We included community-dwelling male patients over age 65 from an outpatient geriatric medicine clinic. We calculated the sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of the CAN score. 184 patients over age 65 were included in the study: 97.3% male, 64.2% White, 80.9% non-Hispanic. The CGA-based Frailty Index classified 14.1% as robust, 53.3% as prefrail and 32.6% as frail. For the frail, statistical analysis demonstrated that a CAN score cutoff of 55 provided sensitivity, specificity, PPV and NPV of 91.67%, 40.32%, 42.64% and 90.91%, respectively, whereas at a cutoff of 95 these values were 43.33%, 88.81%, 63.41% and 77.78%, respectively. The area under the receiver operating characteristic curve was 0.736 (95% CI = .661-.811). The CAN score is a potential screening tool for frailty among older adults; it is generated automatically and provides acceptable diagnostic accuracy. Hence, the CAN score may be a useful tool for primary care providers for detecting frailty in their patient panels.
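    The four screening measures reported above follow directly from a 2x2 table of the CAN classification against the Frailty Index reference standard. The Python sketch below illustrates the calculation; the cell counts are reconstructed from the published summary statistics at the cutoff of 55 (roughly 60 frail and 124 non-frail patients) and are not the study's raw data.

        # Illustration of the screening metrics; counts are reconstructed for
        # demonstration and are NOT the study's raw data.

        def screening_metrics(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)       # frail patients correctly flagged
            specificity = tn / (tn + fp)       # non-frail patients correctly passed
            ppv = tp / (tp + fp)               # flagged patients who are truly frail
            npv = tn / (tn + fn)               # passed patients who are truly non-frail
            return sensitivity, specificity, ppv, npv

        # Hypothetical 2x2 table at a CAN cutoff of 55 (reference = Frailty Index),
        # chosen to approximately reproduce the reported 91.67/40.32/42.64/90.91%
        tp, fp, fn, tn = 55, 74, 5, 50
        for name, value in zip(("sensitivity", "specificity", "PPV", "NPV"),
                               screening_metrics(tp, fp, fn, tn)):
            print(f"{name}: {100 * value:.1f}%")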

  20. Automatic 3d Building Model Generations with Airborne LiDAR Data

    NASA Astrophysics Data System (ADS)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become increasingly popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is the aim. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in a study area in Zekeriyakoy, Istanbul, which has partly open areas, forest areas and many types of buildings, using the TerraScan module of TerraSolid. The 3D building model was generated automatically from the results of the automatic point-based classification. The results obtained for the study area verified that 3D building models can be generated successfully and automatically from raw LiDAR point cloud data.
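    As a rough illustration of what hierarchical, point-based classification rules can look like, the Python sketch below labels points as ground, building, or vegetation from height and a crude planarity proxy. The rules and thresholds are invented for demonstration only; the study itself used TerraScan with its own rule parameters.

        # Simplified, hypothetical hierarchical rules; not the study's actual rules.
        import numpy as np

        def classify_points(points, ground_tol=0.3, height_min=2.5, planarity_max=0.05):
            """points: (N, 3) array of x, y, z coordinates; returns one label per point."""
            z = points[:, 2]
            ground_level = np.percentile(z, 5)              # crude ground estimate
            labels = np.full(len(points), "unclassified", dtype=object)

            # Rule 1: points close to the ground surface -> ground
            labels[z < ground_level + ground_tol] = "ground"

            # Rule 2: elevated points -> building if locally planar (roofs), else vegetation
            elevated = (labels == "unclassified") & (z > ground_level + height_min)
            if elevated.sum() >= 3:
                cov = np.cov(points[elevated].T)
                planarity = np.linalg.eigvalsh(cov)[0] / max(np.trace(cov), 1e-9)
                labels[elevated] = "building" if planarity < planarity_max else "vegetation"
            return labels

        # Tiny synthetic cloud: a flat "roof" 5 m above a flat "ground" patch
        rng = np.random.default_rng(0)
        ground = np.column_stack([rng.uniform(0, 10, (200, 2)), rng.normal(0.0, 0.05, 200)])
        roof = np.column_stack([rng.uniform(2, 6, (100, 2)), rng.normal(5.0, 0.05, 100)])
        cloud = np.vstack([ground, roof])
        print(dict(zip(*np.unique(classify_points(cloud), return_counts=True))))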

  1. Directed forgetting and aging: the role of retrieval processes, processing speed, and proactive interference.

    PubMed

    Hogge, Michaël; Adam, Stéphane; Collette, Fabienne

    2008-07-01

    The directed forgetting effect obtained with the item method is supposed to depend on both selective rehearsal of to-be-remembered (TBR) items and attentional inhibition of to-be-forgotten (TBF) items. In this study, we investigated the locus of the directed forgetting deficit in older adults by exploring the influence of recollection and familiarity-based retrieval processes on age-related differences in directed forgetting. Moreover, we explored the influence of processing speed, short-term memory capacity, thought suppression tendencies, and sensitivity to proactive interference on performance. The results indicated that older adults' directed forgetting difficulties are due to decreased recollection of TBR items, associated with increased automatic retrieval of TBF items. Moreover, processing speed and proactive interference appeared to be responsible for the decreased recall of TBR items.

  2. Role of attentional tags in working memory-driven attentional capture.

    PubMed

    Kuo, Chun-Yu; Chao, Hsuan-Fu

    2014-08-01

    Recent studies have demonstrated that the contents of working memory capture attention when performing a visual search task. However, it remains an intriguing and unresolved question whether all kinds of items stored in working memory capture attention. The present study investigated this issue by manipulating the attentional tags (target or distractor) associated with information maintained in working memory. The results showed that working memory-driven attentional capture is a flexible process, and that attentional tags associated with items stored in working memory do modulate attentional capture. When items were tagged as a target, they automatically captured attention; however, when items were tagged as a distractor, attentional capture was reduced.

  3. 46 CFR 112.05-5 - Emergency power source.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... with § 112.05-1(c). Table 112.05-5(a) Size of vessel and service Type of emergency power source or... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...

  4. 46 CFR 112.05-5 - Emergency power source.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... with § 112.05-1(c). Table 112.05-5(a) Size of vessel and service Type of emergency power source or... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...

  5. 46 CFR 112.05-5 - Emergency power source.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... with § 112.05-1(c). Table 112.05-5(a) Size of vessel and service Type of emergency power source or... power source (automatically connected storage battery or an automatically started generator) 36 hours.1... power source (automatically connected storage battery or an automatically started generator) 8 hours or...

  6. Performance of Automated Speech Scoring on Different Low- to Medium-Entropy Item Types for Low-Proficiency English Learners. Research Report. ETS RR-17-12

    ERIC Educational Resources Information Center

    Loukina, Anastassia; Zechner, Klaus; Yoon, Su-Youn; Zhang, Mo; Tao, Jidong; Wang, Xinhao; Lee, Chong Min; Mulholland, Matthew

    2017-01-01

    This report presents an overview of the "SpeechRater" automated scoring engine model building and evaluation process for several item types with a focus on a low-English-proficiency test-taker population. We discuss each stage of speech scoring, including automatic speech recognition, filtering models for nonscorable responses, and…

  7. What Does it Mean to Know a Language, Or How Do You Get Someone to Perform His Competence?

    ERIC Educational Resources Information Center

    Spolsky, Bernard

    Fries' definition of knowing a language rejects the layman's notion that the criterion is knowing a certain number of words. It involves, rather, knowing a set of items--sound segments, sentence patterns, lexical items--which must be made a matter of automatic habit. Various approaches to testing someone's use of a language have failed to take…

  8. Automated Data Base Implementation Requirements for the Avionics Planning Baseline - Army

    DTIC Science & Technology

    1983-07-01

    [OCR fragment: the retrievable text consists of CODASYL-style data definition statements for the automated database (item declarations such as EFT, ESFT, ALCPOC and LPHONE; record declarations such as ESFR with LOCATION MODE, MANDATORY AUTOMATIC set membership, ascending keys and DUPLICATES NOT ALLOWED clauses; a set declaration ESEQ with MODE CHAIN). The surrounding prose is not recoverable.]

  9. Automatic Registration of Scanned Satellite Imagery with a Digital Map Data Base.

    DTIC Science & Technology

    1980-11-01

    [OCR fragment: the retrievable text describes defining the corresponding map window (procedure TRANSFORM WINDOW MAP) and linked-list data structures for line data: LIN items carrying a numeric code, a pointer LA to the line list and pointers PR1 and PR2, and LIST items carrying a pointer PL to a LIN item, with PL pointers replaced by numeric codes marking the beginning and end of a list; a figure is referenced. The surrounding prose is not recoverable.]

  10. Automatic Certification of Kalman Filters for Reliable Code Generation

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian

    2005-01-01

    AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.

  11. Applying Independent Verification and Validation to Automatic Test Equipment

    NASA Technical Reports Server (NTRS)

    Calhoun, Cynthia C.

    1997-01-01

    This paper describes a general overview of applying Independent Verification and Validation (IV&V) to Automatic Test Equipment (ATE). The overview is not inclusive of all IV&V activities that can occur or of all development and maintenance items that can be validated and verified, during the IV&V process. A sampling of possible IV&V activities that can occur within each phase of the ATE life cycle are described.

  12. Design of efficient and simple interface testing equipment for opto-electric tracking system

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Deng, Chao; Tian, Jing; Mao, Yao

    2016-10-01

    Interface testing for an opto-electric tracking system is an important activity for assuring system performance; its aim is to verify, at different levels, whether the implemented electronic interfaces match their communication protocols. Opto-electric tracking systems are nowadays complex, composed of many functional units. Usually, interface testing is executed between completely manufactured units, so it depends heavily on the design and manufacturing progress of each unit as well as on the people involved; as a result, it often takes days or weeks and is inefficient. To solve this problem, this paper presents efficient and simple interface testing equipment for opto-electric tracking systems, consisting of optional interface circuit cards, a processor and a test program. The hardware cards provide the matched hardware interface(s), which are easily supplied by the hardware engineer. An automatic code generation technique is introduced to provide adaptation to new communication protocols: items are acquired automatically, the code architecture is constructed automatically, and the code itself is generated automatically, so that a new, adapted test program is formed quickly. After a few simple steps, customized interface testing equipment with a matching test program and interface(s) is ready for a system awaiting test within minutes. The equipment has been used to test all or part of the interfaces of many opto-electric tracking systems, reducing test time from days to hours and greatly improving test efficiency, with high software quality and stability and without manual coding. Used as a common tool, the interface testing equipment presented in this paper has changed the traditional interface testing method and achieved much higher efficiency.

  13. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  14. A procedure for automating CFD simulations of an inlet-bleed problem

    NASA Technical Reports Server (NTRS)

    Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.

    1995-01-01

    A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. With AUTOMAT, once data for the geometry and flow conditions have been specified (either interactively or via a namelist), it will automatically generate all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem by using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.

  15. Experience in connecting the power generating units of thermal power plants to automatic secondary frequency regulation within the united power system of Russia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhukov, A. V.; Komarov, A. N.; Safronov, A. N.

    The principles of central control of the power generating units of thermal power plants by automatic secondary frequency and active power overcurrent regulation systems, and the algorithms for interactions between automatic power control systems for the power production units in thermal power plants and centralized systems for automatic frequency and power regulation, are discussed. The order of switching the power generating units of thermal power plants over to control by a centralized system for automatic frequency and power regulation and by the Central Coordinating System for automatic frequency and power regulation is presented. The results of full-scale system tests of the control of power generating units of the Kirishskaya, Stavropol, and Perm GRES (State Regional Electric Power Plants) by the Central Coordinating System for automatic frequency and power regulation at the United Power System of Russia on September 23-25, 2008, are reported.

  16. Software For Nearly Optimal Packing Of Cargo

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Daughtrey, Rodney S.; Schwaab, Doug G.

    1994-01-01

    PACKMAN is a computer program used to find nearly optimal arrangements of cargo items in storage containers, subject to such multiple packing objectives as utilization of container volume, utilization of containers up to their weight limits, and other considerations. The automatic packing algorithm attempts to find the best positioning of cargo items in a container such that both the volume and the weight capacity of the container are utilized to the maximum extent possible. Written in Common LISP.
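    The abstract does not give PACKMAN's algorithm (which is written in Common LISP), so the Python sketch below only illustrates the stated objectives with a generic first-fit-decreasing heuristic under volume and weight limits; it is not the PACKMAN algorithm.

        # Generic first-fit-decreasing sketch with volume and weight limits.

        def pack(items, containers):
            """items: list of (name, volume, weight); containers: list of dicts with
            'max_volume' and 'max_weight'. Returns container index per item (or None)."""
            placement = {}
            used = [{"volume": 0.0, "weight": 0.0} for _ in containers]
            # Pack large items first so small ones can fill the remaining gaps
            for name, vol, wt in sorted(items, key=lambda it: it[1], reverse=True):
                for i, c in enumerate(containers):
                    if (used[i]["volume"] + vol <= c["max_volume"] and
                            used[i]["weight"] + wt <= c["max_weight"]):
                        used[i]["volume"] += vol
                        used[i]["weight"] += wt
                        placement[name] = i
                        break
                else:
                    placement[name] = None        # item could not be placed
            return placement

        containers = [{"max_volume": 1.0, "max_weight": 50.0}]
        items = [("pump", 0.4, 30.0), ("rack", 0.5, 15.0), ("kit", 0.3, 10.0)]
        print(pack(items, containers))   # {'rack': 0, 'pump': 0, 'kit': None}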

  17. Task Versus Component Consistency in the Development of Automatic Processes: Consistent Attending Versus Consistent Responding.

    DTIC Science & Technology

    1982-03-01

    [OCR fragment: the retrievable text states that "…are two qualitatively different forms of human information processing (James, 1890; Hasher & Zacks, 1979; LaBerge, 1973, 1975; Logan, 1978, 1979…)"; the remainder consists of reference-list fragments (Kristofferson, 1972, Perception & Psychophysics, 12, 379-384; LaBerge, 1973, Memory and Cognition, 1, 263-276; LaBerge, Acquisition of automatic processing in perceptual and …).]

  18. Item Difficulty Modeling of Paragraph Comprehension Items

    ERIC Educational Resources Information Center

    Gorin, Joanna S.; Embretson, Susan E.

    2006-01-01

    Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…

  19. A strategy for automatically generating programs in the lucid programming language

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1987-01-01

    A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is described and applied to several example problems.

  20. [Development of a Compared Software for Automatically Generated DVH in Eclipse TPS].

    PubMed

    Xie, Zhao; Luo, Kelin; Zou, Lian; Hu, Jinyou

    2016-03-01

    The aim of this study is to automatically calculate the dose volume histogram (DVH) for a treatment plan and then compare it with the requirements of the doctor's prescription. The scripting language AutoHotkey and the programming language C# were used to develop comparison software for automatically generated DVHs in Eclipse TPS. This software, named Show Dose Volume Histogram (ShowDVH), is composed of prescription document generation, DVH operation functions, software visualization and DVH comparison report generation. Ten cases of different cancers were selected. In Eclipse TPS 11.0, ShowDVH could not only automatically generate DVH reports but also accurately determine whether treatment plans meet the requirements of the doctor's prescription; the reports then gave direction for setting the optimization parameters of intensity-modulated radiation therapy. ShowDVH is user-friendly and powerful software that can quickly and automatically generate DVH comparison reports in Eclipse TPS 11.0. With the help of ShowDVH, plan design time is greatly reduced and the working efficiency of radiation therapy physicists is improved.
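    The internals of ShowDVH are not described beyond the implementation languages, so the Python sketch below only illustrates the underlying comparison it automates: deriving a simple DVH metric and checking it against a prescription constraint. The structure names, constraint format, and values are hypothetical.

        # Minimal sketch of a DVH-versus-prescription check; names and values are invented.
        import numpy as np

        def v_dose(doses, threshold):
            """Fraction of a structure's volume receiving at least `threshold` Gy."""
            doses = np.asarray(doses, dtype=float)
            return float((doses >= threshold).mean())

        def check_plan(structure_doses, constraints):
            report = {}
            for structure, (threshold, max_fraction) in constraints.items():
                actual = v_dose(structure_doses[structure], threshold)
                report[structure] = (actual, actual <= max_fraction)   # (value, passes?)
            return report

        # Per-voxel dose samples (Gy) and a constraint like "V20 of lung <= 30%"
        structure_doses = {"lung": np.random.default_rng(0).uniform(0, 40, 1000)}
        constraints = {"lung": (20.0, 0.30)}
        print(check_plan(structure_doses, constraints))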

  1. Memory for pictures, words, and spatial location in older adults: evidence for pictorial superiority.

    PubMed

    Park, D C; Puglisi, J T; Sovacool, M

    1983-09-01

    In the present study the spatial location of picture and word stimuli was varied across four quadrants of photographic slides. Young and old people received either pictures or words to study and were told to remember either just the item or the item and its location. Recognition memory for items and memory for spatial location were tested. A pictorial superiority effect occurred for both old and young people's item recognition. Additionally, instructions to study position decreased item memory and facilitated position memory in both age groups. Spatial memory was markedly superior for pictures compared with matched words for old and young adults. The results are interpreted within the Hasher and Zacks framework of automatic processing. The implications of the data for designing mnemonic aids for elderly persons are considered.

  2. How generation affects source memory.

    PubMed

    Geghman, Kindiya D; Multhaup, Kristi S

    2004-07-01

    Generation effects (better memory for self-produced items than for provided items) typically occur in item memory. Jurica and Shimamura (1999) reported a negative generation effect in source memory, but their procedure did not test participants on the items they had generated. In Experiment 1, participants answered questions and read statements made by a face on a computer screen. The target word was unscrambled, or letters were filled in. Generation effects were found for target recall and source recognition (which person did which task). Experiment 2 extended these findings to a condition in which the external sources were two different faces. Generation had a positive effect on source memory, supporting an overlap in the underlying mechanisms of item and source memory.

  3. Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Staats, Matt

    2009-01-01

    We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
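    As a reminder of what an MC/DC obligation set looks like, the Python sketch below hand-builds a four-test suite for the decision a and (b or c) and verifies that every condition is shown to independently affect the outcome. It is a generic illustration, not output of the JPF-based prototype.

        # Hand-built MC/DC example for the decision `a and (b or c)`.
        decision = lambda a, b, c: a and (b or c)
        tests = [(True, True, False), (False, True, False),
                 (True, False, False), (True, False, True)]

        def mcdc_satisfied(decision, tests, n_conditions=3):
            # Each condition needs a pair of tests that differ only in that condition
            # and produce different decision outcomes (independent effect shown).
            for i in range(n_conditions):
                shown = any(
                    t1[i] != t2[i]
                    and all(t1[j] == t2[j] for j in range(n_conditions) if j != i)
                    and decision(*t1) != decision(*t2)
                    for t1 in tests for t2 in tests)
                if not shown:
                    return False
            return True

        print(mcdc_satisfied(decision, tests))   # True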

  4. Development of an Automatic Differentiation Version of the FPX Rotor Code

    NASA Technical Reports Server (NTRS)

    Hu, Hong

    1996-01-01

    The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. The FPX is an eXtended Full-Potential CFD code for rotor calculations. The automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference-generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
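    The Python sketch below illustrates the general idea being compared in the report: forward-mode automatic differentiation (here via dual numbers) yields derivatives that are exact up to round-off, whereas a divided-difference approximation carries truncation error. It only illustrates the technique and is unrelated to the ADIFOR-generated Fortran code.

        # Forward-mode AD with dual numbers versus a divided-difference approximation.

        class Dual:
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.value * other.deriv + self.deriv * other.value)
            __rmul__ = __mul__

        def f(x):
            return x * x * x + 2 * x          # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

        x0, h = 1.5, 1e-6
        ad = f(Dual(x0, 1.0)).deriv           # exact up to round-off: 8.75
        dd = (f(x0 + h) - f(x0)) / h          # carries truncation error: ~8.7500045
        print(ad, dd)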

  5. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    A new fuzzy set based technique that was developed for decision making is discussed. It is a method to generate fuzzy decision rules automatically for image analysis. This paper proposes a method to generate rule-based approaches to solve problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

  6. Analyzing Item Generation with Natural Language Processing Tools for the "TOEIC"® Listening Test. Research Report. ETS RR-17-52

    ERIC Educational Resources Information Center

    Yoon, Su-Youn; Lee, Chong Min; Houghton, Patrick; Lopez, Melissa; Sakano, Jennifer; Loukina, Anastasia; Krovetz, Bob; Lu, Chi; Madani, Nitin

    2017-01-01

    In this study, we developed assistive tools and resources to support TOEIC® Listening test item generation. There has recently been an increased need for a large pool of items for these tests. This need has, in turn, inspired efforts to increase the efficiency of item generation while maintaining the quality of the created items. We aimed to…

  7. Preschool Personality Antecedents of Narcissism in Adolescence and Emergent Adulthood: A 20-Year Longitudinal Study

    PubMed Central

    Carlson, Kevin S.; Gjerde, Per F.

    2009-01-01

    This prospective study examined relations between preschool personality attributes and narcissism during adolescence and emerging adulthood. We created five a priori preschool scales anticipated to foretell future narcissism. Independent assessors evaluated the participants' personality at ages 14, 18, and 23. Based upon these evaluations, we generated observer-based narcissism scales for each of these three ages. All preschool scales predicted subsequent narcissism, except Interpersonal Antagonism at age 23. According to mean scale and item scores analyses, narcissism increased significantly from age 14 to 18, followed by a slight but non-significant decline from age 18 to 23. The discussion focused on a developmental view of narcissism, the need for research on automatic processing and psychological defenses, and links between narcissism and attachment. PMID:20161614

  8. Lightweight Trauma Module - LTM

    NASA Technical Reports Server (NTRS)

    Hatfield, Thomas

    2008-01-01

    Current patient movement items (PMI) supporting the military's Critical Care Air Transport Team (CCATT) mission as well as the Crew Health Care System for space (CHeCS) have significant limitations: size, weight, battery duration, and dated clinical technology. The LTM is a small, 20 lb., system integrating diagnostic and therapeutic clinical capabilities along with onboard data management, communication services and automated care algorithms to meet new Aeromedical Evacuation requirements. The Lightweight Trauma Module is an Impact Instrumentation, Inc. project with strong Industry, DoD, NASA, and Academia partnerships aimed at developing the next generation of smart and rugged critical care tools for hazardous environments ranging from the battlefield to space exploration. The LTM is a combination ventilator/critical care monitor/therapeutic system with integrated automatic control systems. Additional capabilities are provided with small external modules.

  9. User-Assisted Store Recycling for Dynamic Task Graph Schedulers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan

    The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because the recycling function can be input data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.

  10. Reducing the cost of dietary assessment: self-completed recall and analysis of nutrition for use with children (SCRAN24).

    PubMed

    Foster, E; Hawkins, A; Delve, J; Adamson, A J

    2014-01-01

    Self-Completed Recall and Analysis of Nutrition (scran24) is a prototype computerised 24-h recall system for use with 11-16 year olds. It is based on the Multiple Pass 24-h Recall method and includes prompts and checks throughout the system for forgotten food items. The development of scran24 was informed by an extensive literature review, a series of focus groups and usability testing. The first stage of the recall is a quick list where the user is asked to input all the foods and drinks they remember consuming the previous day. The quick list is structured into meals and snacks. Once the quick list is complete, additional information is collected on each food to determine food type and to obtain an estimate of portion size using digital images of food. Foods are located within the system using a free text search, which is linked to the information entered into the quick list. A time is assigned to each eating occasion using drag and drop onto a timeline. The system prompts the user if no foods or drinks have been consumed within a 3-h time frame, or if fewer than three drinks have been consumed throughout the day. The food composition code and weight (g) of all items selected are automatically allocated and stored. Nutritional information can be generated automatically via the scran24 companion Access database. scran24 was very well received by young people and was relatively quick to complete. The accuracy and precision was close to that of similar computer-based systems currently used in dietary studies. © 2013 The Authors Journal of Human Nutrition and Dietetics © 2013 The British Dietetic Association Ltd.

  11. Using Conditional Percentages During Free-Operant Stimulus Preference Assessments to Predict the Effects of Preferred Items on Stereotypy: Preliminary Findings.

    PubMed

    Frewing, Tyla M; Rapp, John T; Pastrana, Sarah J

    2015-09-01

    To date, researchers have not identified an efficient methodology for selecting items that will compete with automatically reinforced behavior. In the present study, we identified high preference, high stereotypy (HP-HS), high preference, low stereotypy (HP-LS), low preference, high stereotypy (LP-HS), and low preference, low stereotypy (LP-LS) items based on response allocation to items and engagement in stereotypy during one to three, 30-min free-operant competing stimulus assessments (CSAs). The results showed that access to HP-LS items decreased stereotypy for all four participants; however, the results for other items were only predictive for one participant. Reanalysis of the CSA results revealed that the HP-LS item was typically identified by (a) the combined results of the first 10 min of the three 30-min assessments or (b) the results of one 30-min assessment. The clinical implications for the use of this method, as well as future directions for research, are briefly discussed. © The Author(s) 2015.

  12. A Unified Overset Grid Generation Graphical Interface and New Concepts on Automatic Gridding Around Surface Discontinuities

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Akien, Edwin (Technical Monitor)

    2002-01-01

    For many years, generation of overset grids for complex configurations has required the use of a number of different independently developed software utilities. Results created by each step were then visualized using a separate visualization tool before moving on to the next. A new software tool called OVERGRID was developed which allows the user to perform all the grid generation steps and visualization under one environment. OVERGRID provides grid diagnostic functions such as surface tangent and normal checks as well as grid manipulation functions such as extraction, extrapolation, concatenation, redistribution, smoothing, and projection. Moreover, it also contains hyperbolic surface and volume grid generation modules that are specifically suited for overset grid generation. It is the first time that such a unified interface existed for the creation of overset grids for complex geometries. New concepts on automatic overset surface grid generation around surface discontinuities will also be briefly presented. Special control curves on the surface such as intersection curves, sharp edges, open boundaries, are called seam curves. The seam curves are first automatically extracted from a multiple panel network description of the surface. Points where three or more seam curves meet are automatically identified and are called seam corners. Seam corner surface grids are automatically generated using a singular axis topology. Hyperbolic surface grids are then grown from the seam curves that are automatically trimmed away from the seam corners.

  13. Automatic Thesaurus Generation for an Electronic Community System.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; And Others

    1995-01-01

    This research reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used include term filtering, automatic indexing, and cluster analysis. The Worm Community System, used by molecular biologists studying the nematode worm C. elegans, was used as the testbed for this research.…
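    The exact algorithm is not given in the abstract, so the Python sketch below is only a toy illustration of the named ingredients: term filtering, indexing, and grouping terms by co-occurrence to suggest related terms. The example documents and stopword list are invented.

        # Toy co-occurrence-based thesaurus sketch; not the Worm Community System algorithm.
        from collections import Counter
        from itertools import combinations

        docs = [
            "gene expression in the nematode worm",
            "worm gene mutation and expression",
            "mutation screening of nematode strains",
        ]
        stopwords = {"in", "the", "and", "of"}

        # Term filtering + indexing: keep informative terms per document
        indexed = [[w for w in d.split() if w not in stopwords] for d in docs]

        # Co-occurrence counts as a crude similarity measure
        cooc = Counter()
        for terms in indexed:
            for a, b in combinations(sorted(set(terms)), 2):
                cooc[(a, b)] += 1

        def related(term, k=3):
            scores = Counter()
            for (a, b), c in cooc.items():
                if term == a:
                    scores[b] += c
                elif term == b:
                    scores[a] += c
            return [t for t, _ in scores.most_common(k)]

        print(related("gene"))   # ['expression', 'worm', 'nematode']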

  14. Automatic Semantic Generation and Arabic Translation of Mathematical Expressions on the Web

    ERIC Educational Resources Information Center

    Doush, Iyad Abu; Al-Bdarneh, Sondos

    2013-01-01

    Automatic processing of mathematical information on the web poses some difficulties. This paper presents a novel technique for automatically generating the semantics of mathematical equations and their Arabic translation on the web. The proposed system facilitates unambiguous representation of mathematical equations by correlating equations to their known…

  15. Destination memory for self-generated actions.

    PubMed

    El Haj, Mohamad

    2016-10-01

    There is a substantial body of literature showing memory enhancement for self-generated information in normal aging. The present paper investigated this outcome for destination memory or memory for outputted information. In Experiment 1, younger adults and older adults had to place (self-generated actions) and observe an experimenter placing (experiment-generated actions) items into two different destinations (i.e., a black circular box and a white square box). On a subsequent recognition task, the participants had to decide into which box each item had originally been placed. These procedures showed better destination memory for self- than experimenter-generated actions. In Experiment 2, destination and source memory were assessed for self-generated actions. Younger adults and older adults had to place items into the two boxes (self-generated actions), take items out of the boxes (self-generated actions), and observe an experimenter taking items out of the boxes (experiment-generated actions). On a subsequent recognition task, they had to decide into which box (destination memory)/from which box (source memory) each item had originally been placed/taken. For both populations, source memory was better than destination memory for self-generated actions, and both were better than source memory for experimenter-generated actions. Taken together, these findings highlight the beneficial effect of self-generation on destination memory in older adults.

  16. Executive Functions Are Employed to Process Episodic and Relational Memories in Children With Autism Spectrum Disorders

    PubMed Central

    2013-01-01

    Objective: Long-term memory functioning in autism spectrum disorders (ASDs) is marked by a characteristic pattern of impairments and strengths. Individuals with ASD show impairment in memory tasks that require the processing of relational and contextual information, but spared performance on tasks requiring more item-based, acontextual processing. Two experiments investigated the cognitive mechanisms underlying this memory profile. Method: A sample of 14 children with a diagnosis of high-functioning ASD (age: M = 12.2 years), and a matched control group of 14 typically developing (TD) children (age: M = 12.1 years), participated in a range of behavioral memory tasks in which we measured both relational and item-based memory abilities. They also completed a battery of executive function measures. Results: The ASD group showed specific deficits in relational memory, but spared or superior performance in item-based memory, across all tasks. Importantly, for ASD children, executive ability was significantly correlated with relational memory but not with item-based memory. No such relationship was present in the control group. This suggests that children with ASD atypically employed effortful, executive strategies to retrieve relational (but not item-specific) information, whereas TD children appeared to use more automatic processes. Conclusions: The relational memory impairment in ASD may result from a specific impairment in automatic associative retrieval processes with an increased reliance on effortful and strategic retrieval processes. Our findings allow specific neural predictions to be made regarding the interactive functioning of the hippocampus, prefrontal cortex, and posterior parietal cortex in ASD as a neural network supporting relational memory processing. PMID:24245930

  17. 78 FR 13213 - Regional Reliability Standard PRC-006-NPCC-1- Automatic Underfrequency Load Shedding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-27

    ...; Order No. 775] Regional Reliability Standard PRC-006-NPCC-1--Automatic Underfrequency Load Shedding... transferred to the system upon loss of the facility.'' \\27\\ Compensatory load shedding is automatic shedding of load adequate to compensate for the loss of a generator due to the generator tripping early (i.e...

  18. System for Automatic Generation of Examination Papers in Discrete Mathematics

    ERIC Educational Resources Information Center

    Fridenfalk, Mikael

    2013-01-01

    A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…
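    The abstract does not describe the system's problem templates, so the Python sketch below shows a hypothetical example of the general approach: a parameterized template that generates a discrete-mathematics item together with its solution, reproducibly from a seed.

        # Hypothetical template-based generation of a problem plus its solution.
        import random

        def generate_modular_item(rng):
            a, b, m = rng.randint(2, 9), rng.randint(10, 99), rng.choice([5, 7, 11, 13])
            question = f"Compute ({a} * {b}) mod {m}."
            solution = (a * b) % m
            return question, solution

        rng = random.Random(42)            # fixed seed -> reproducible exam versions
        for _ in range(3):
            q, s = generate_modular_item(rng)
            print(q, "->", s)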

  19. Neural correlates of economic value and valuation context: an event-related potential study.

    PubMed

    Tyson-Carr, John; Kokmotou, Katerina; Soto, Vicente; Cook, Stephanie; Fallon, Nicholas; Giesbrecht, Timo; Stancak, Andrej

    2018-05-01

    The value of environmental cues and internal states is continuously evaluated by the human brain, and it is this subjective value that largely guides decision making. The present study aimed to investigate the initial value attribution process, specifically the spatiotemporal activation patterns associated with values and valuation context, using electroencephalographic event-related potentials (ERPs). Participants completed a stimulus rating task in which everyday household items marketed up to a price of £4 were evaluated with respect to their desirability or material properties. The subjective values of items were evaluated as willingness to pay (WTP) in a Becker-DeGroot-Marschak auction. On the basis of the individual's subjective WTP values, the stimuli were divided into high- and low-value items. Source dipole modeling was applied to estimate the cortical sources underlying ERP components modulated by subjective values (high vs. low WTP) and the evaluation condition (value-relevant vs. value-irrelevant judgments). Low-WTP items and value-relevant judgments both led to a more pronounced N2 visual evoked potential at right frontal scalp electrodes. Source activity in right anterior insula and left orbitofrontal cortex was larger for low vs. high WTP at ∼200 ms. At a similar latency, source activity in right anterior insula and right parahippocampal gyrus was larger for value-relevant vs. value-irrelevant judgments. A stronger response for low- than high-value items in anterior insula and orbitofrontal cortex appears to reflect aversion to low-valued item acquisition, which in an auction experiment would be perceived as a relative loss. This initial low-value bias occurs automatically irrespective of the valuation context. NEW & NOTEWORTHY We demonstrate the spatiotemporal characteristics of the brain valuation process using event-related potentials and willingness to pay as a measure of subjective value. The N2 component resolves values of objects with a bias toward low-value items. The value-related changes of the N2 component are part of an automatic valuation process.

  20. 75 FR 54001 - Fifty-Second Meeting: RTCA Special Committee 186: Automatic Dependent Surveillance-Broadcast (ADS-B)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... unless stated otherwise. ADDRESSES: The meeting will be held at the Dutch National Aerospace Laboratory... Items/Work Programs. Adjourn Plenary. Attendance is open to the interested public but limited to space...

  1. Generation and memory for contextual detail.

    PubMed

    Mulligan, Neil W

    2004-07-01

    Generation enhances item memory but may not enhance other aspects of memory. In 12 experiments, the author investigated the effect of generation on context memory, motivated in part by the hypothesis that generation produces a trade-off in encoding item and contextual information. Participants generated some study words (e.g., hot-c__) and read others (e.g., hot-cold). Generation consistently enhanced item memory but did not enhance context memory. More specifically, generation disrupted context memory for the color of the target word but did not affect context memory for location, background color, and cue-word color. The specificity of the negative generation effect in context memory argues against a general item-context trade-off. A processing account of generation meets greater success. In addition, the results provide no evidence that generation enhances recollection of contextual details. Copyright 2004 APA, all rights reserved

  2. The SIETTE Automatic Assessment Environment

    ERIC Educational Resources Information Center

    Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica

    2016-01-01

    This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…

  3. One Idea for a Next Generation Shuttle

    NASA Technical Reports Server (NTRS)

    MacConochie, Ian O.; Cerro, Jeffrey A.

    2004-01-01

    In this configuration, the current Shuttle External Tank serves as core structure for a fully reusable second stage. This stage is equipped with wings, vertical fin, landing gear, and thermal protection. The stage is geometrically identical to (but smaller than) a single stage that has been tested hyper-sonically, super-sonically, and sub-sonically in the NASA Langley Research Center wind tunnels. The three LOX/LH engines that currently serve as main propulsion for the Shuttle Orbiter, serve as main propulsion on the new stage. The new stage is unmanned but is equipped with the avionics needed for automatic maneuvering on orbit and for landing on a runway. Three rails are installed along the top surface of the vehicle for attachment of various payloads. Payloads might include third stages with satellites attached, personnel pods, propellants, or other items.

  4. Evaluation of an automated knowledge-based textual summarization system for longitudinal clinical data, in the intensive care domain.

    PubMed

    Goldstein, Ayelet; Shahar, Yuval; Orenbuch, Efrat; Cohen, Matan J

    2017-10-01

    To examine the feasibility of the automated creation of meaningful free-text summaries of longitudinal clinical records, using a new general methodology that we had recently developed; and to assess the potential benefits to the clinical decision-making process of using such a method to generate draft letters that can be further manually enhanced by clinicians. We had previously developed a system, CliniText (CTXT), for automated summarization in free text of longitudinal medical records, using a clinical knowledge base. In the current study, we created an Intensive Care Unit (ICU) clinical knowledge base, assisted by two ICU clinical experts in an academic tertiary hospital. The CTXT system generated free-text summary letters from the data of 31 different patients, which were compared to the respective original physician-composed discharge letters. The main evaluation measures were (1) relative completeness, quantifying the data items missed by one of the letters but included by the other, and their importance; (2) quality parameters, such as readability; (3) functional performance, assessed by the time needed, by three clinicians reading each of the summaries, to answer five key questions, based on the discharge letter (e.g., "What are the patient's current respiratory requirements?"), and by the correctness of the clinicians' answers. Completeness: In 13/31 (42%) of the letters the number of important items missed in the CTXT-generated letter was actually less than or equal to the number of important items missed by the MD-composed letter. In each of the MD-composed letters, at least two important items that were mentioned by the CTXT system were missed (a mean of 7.2±5.74). In addition, the standard deviation in the number of missed items in the MD letters (STD=15.4) was much higher than the standard deviation in the number of missed items in the CTXT-generated letters (STD=5.3). Quality: The MD-composed letters obtained a significantly better grade in three out of four measured parameters. However, the standard deviation in the quality of the MD-composed letters was much greater than the standard deviation in the quality of the CTXT-generated letters (STD=6.25 vs. STD=2.57, respectively). Functional evaluation: The clinicians answered the five questions on average 40% faster (p<0.001) when using the CTXT-generated letters than when using the MD-composed letters. In four out of the five questions the clinicians' correctness was equal to or significantly better (p<0.005) when using the CTXT-generated letters than when using the MD-composed letters. An automatic knowledge-based summarization system, such as the CTXT system, has the capability to model complex clinical domains, such as the ICU, and to support interpretation and summarization tasks such as the creation of a discharge summary letter. Based on the results, we suggest that the use of such systems could potentially enhance the standardization of the letters, significantly increase their completeness, and reduce the time to write the discharge summary. The results also suggest that using the resultant structured letters might reduce the decision time, and enhance the decision quality, of decisions made by other clinicians. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Automatic capture of attention by conceptually generated working memory templates.

    PubMed

    Sun, Sol Z; Shen, Jenny; Shaw, Mark; Cant, Jonathan S; Ferber, Susanne

    2015-08-01

    Many theories of attention propose that the contents of working memory (WM) can act as an attentional template, which biases processing in favor of perceptually similar inputs. While support has been found for this claim, it is unclear how attentional templates are generated when searching real-world environments. We hypothesized that in naturalistic settings, attentional templates are commonly generated from conceptual knowledge, an idea consistent with sensorimotor models of knowledge representation. Participants performed a visual search task in the delay period of a WM task, where the item in memory was either a colored disk or a word associated with a color concept (e.g., "Rose," associated with red). During search, we manipulated whether a singleton distractor in the array matched the contents of WM. Overall, we found that search times were impaired in the presence of a memory-matching distractor. Furthermore, the degree of impairment did not differ based on the contents of WM. Put differently, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor, or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Our results suggest that attentional templates can be generated from conceptual knowledge, in the physical absence of the visual feature.

  6. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is shown.
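    The LCS domain-specific language and its ladder-logic target are not published in this abstract, so the Python sketch below is a purely hypothetical illustration of the general translation idea: rows of a tabular specification are turned into executable checkout steps. All keywords, end-item names, and parameters are invented.

        # Hypothetical tabular-spec-to-code translation sketch (not the LCS DSL).

        TABULAR_SPEC = [
            # (step, command,          end_item,       parameters)
            (1, "verify_discrete",     "LOX_VENT_VLV", {"expected": "CLOSED"}),
            (2, "set_analog",          "LOX_FILL_VLV", {"position_pct": 25}),
            (3, "verify_analog",       "LOX_TANK_PSI", {"min": 14.0, "max": 18.0}),
        ]

        def generate_code(spec):
            """Emit a simple imperative check sequence from the tabular rows."""
            lines = ["def checkout_sequence(io):"]
            for step, command, item, params in spec:
                if command == "verify_discrete":
                    lines.append(f"    assert io.read('{item}') == '{params['expected']}'"
                                 f"  # step {step}")
                elif command == "set_analog":
                    lines.append(f"    io.write('{item}', {params['position_pct']})  # step {step}")
                elif command == "verify_analog":
                    lines.append(f"    assert {params['min']} <= io.read('{item}')"
                                 f" <= {params['max']}  # step {step}")
            return "\n".join(lines)

        print(generate_code(TABULAR_SPEC))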

  7. Flexible Energy Scheduling Tool for Integrating Variable Generation | Grid

    Science.gov Websites

    …commitment, security-constrained economic dispatch, and automatic generation control sub-models. Each sub-model's resolutions and operating strategies can be explored. FESTIV produces not only economic metrics but also…

  8. Application of latent variable model in Rosenberg self-esteem scale.

    PubMed

    Leung, Shing-On; Wu, Hui-Ping

    2013-01-01

    Latent variable models (LVM) are applied to the Rosenberg Self-Esteem Scale (RSES). The parameter estimates automatically take negative signs, so no recoding is necessary for negatively scored items. Bad items can be located through the parameter estimates, item characteristic curves and other measures. Two factors are extracted, one on self-esteem and the other on the tendency to take moderate views, with the latter not often being covered in previous studies. A goodness-of-fit measure based on two-way margins is used, but more work is needed. Results show that the scaling provided by models with a more formal statistical grounding correlates highly with the conventional method, which may provide justification for usual practice.
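    The abstract does not specify the model family, so as a generic illustration the Python sketch below evaluates the kind of item characteristic curve that can be used to flag badly behaving items: a two-parameter logistic curve, with a weakly discriminating item shown alongside a well-behaved one. The parameter values are invented, not estimates from the RSES data.

        # Generic 2PL item characteristic curves with invented parameters.
        import math

        def icc(theta, discrimination, difficulty):
            """P(endorse item | latent trait theta) under a 2PL model."""
            return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

        # A well-behaved item versus a weakly discriminating ("bad") item
        for theta in (-2, -1, 0, 1, 2):
            good = icc(theta, discrimination=1.8, difficulty=0.0)
            poor = icc(theta, discrimination=0.3, difficulty=0.0)
            print(f"theta={theta:+d}  good={good:.2f}  poor={poor:.2f}")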

  9. Attention capture by abrupt onsets: re-visiting the priority tag model.

    PubMed

    Sunny, Meera M; von Mühlenen, Adrian

    2013-01-01

    Abrupt onsets have been shown to strongly attract attention in a stimulus-driven, bottom-up manner. However, the precise mechanism that drives capture by onsets is still debated. According to the new object account, abrupt onsets capture attention because they signal the appearance of a new object. Yantis and Johnson (1990) used a visual search task and showed that up to four onsets can be automatically prioritized. However, in their study the number of onsets co-varied with the total number of items in the display, allowing for a possible confound between these two variables. In the present study, display size was fixed at eight items while the number of onsets was systematically varied between zero and eight. Experiment 1 showed a systematic increase in reaction times with increasing number of onsets. This increase was stronger when the target was an onset than when it was a no-onset item, a result that is best explained by a model according to which only one onset is automatically prioritized. Even when the onsets were marked in red (Experiment 2), nearly half of the participants continued to prioritize only one onset item. Only when onset and no-onset targets were blocked (Experiment 3), participants started to search selectively through the set of only the relevant target type. These results further support the finding that only one onset captures attention. Many bottom-up models of attention capture, like masking or saliency accounts, can efficiently explain this finding.

  10. Equipment and New Products

    ERIC Educational Resources Information Center

    Poitras, Adrian W., Ed.

    1973-01-01

    The following items are discussed: Digital Counters and Readout Devices, Automatic Burette Outfits, Noise Exposure System, Helium-Cadmium Laser, New pH Buffers and Flip-Top Dispenser, Voltage Calibrator Transfer Standard, Photomicrographic Stereo Zoom Microscope, Portable pH Meter, Micromanipulators, The Snuffer, Electronic Top-Loading Balances,…

  11. The content and process of self-stigma in people with mental illness.

    PubMed

    Chan, Kevin K S; Mak, Winnie W S

    2017-01-01

    Although many individuals with mental illness may self-concur with the "content" of stigmatizing thoughts at some point in their lives, they may have varying degrees of habitual recurrence of such thoughts, which could exacerbate their experience of self-stigma and perpetuate its damaging effects on their mental health. Although it is important to understand the "process" of how self-stigmatizing thoughts are sustained and perpetuated over time, no research to date has conceptualized and distinguished the habitual process of self-stigma from its cognitive content. Thus, the present study aims to develop and validate a measure of the habitual process of self-stigma-the Self-stigmatizing Thinking's Automaticity and Repetition Scale (STARS). In this study, 189 individuals with mental illness completed the STARS, along with several explicit (self-report) and implicit (response latency) measures of theoretically related constructs. Consistent with theories of mental habit, an exploratory factor analysis of the STARS items identified a 2-factor structure that represents the repetition (4 items) and automaticity (4 items) of self-stigmatization. The reliability of the STARS was supported by a Cronbach's α of .90, and its validity was supported by its significant correlations with theoretical predictors (content of self-stigma, experiential avoidance, and lack of mindfulness), expected outcomes (decreased self-esteem, life satisfaction, and recovery), and the Brief Implicit Association Tests measuring the automatic processing of self-stigmatizing information. With the validation of the STARS, future research can consider both the content and process of self-stigma so that a richer picture of its development, perpetuation, and influence can be captured. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. 46 CFR 63.01-3 - Scope and applicability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING AUTOMATIC AUXILIARY... automatic auxiliary boilers, automatic heating boilers, automatic waste heat boilers, donkey boilers... control systems) used for the generation of steam and/or oxidation of ordinary waste materials and garbage...

  13. 46 CFR 63.01-3 - Scope and applicability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING AUTOMATIC AUXILIARY... automatic auxiliary boilers, automatic heating boilers, automatic waste heat boilers, donkey boilers... control systems) used for the generation of steam and/or oxidation of ordinary waste materials and garbage...

  14. Exogenous temporal cues enhance recognition memory in an object-based manner.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2010-11-01

    Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.

  15. Development of an Automatic Grid Generator for Multi-Element High-Lift Wings

    NASA Technical Reports Server (NTRS)

    Eberhardt, Scott; Wibowo, Pratomo; Tu, Eugene

    1996-01-01

    The procedure to generate the grid around a complex wing configuration is presented in this report. The automatic grid generation utilizes the Modified Advancing Front Method as a predictor and an elliptic scheme as a corrector. The scheme will advance the surface grid one cell outward and the newly obtained grid is corrected using the Laplace equation. The predictor-corrector step ensures that the grid produced will be smooth for every configuration. The predictor-corrector scheme is extended for a complex wing configuration. A new technique is developed to deal with the grid generation in the wing-gaps and on the flaps. It will create the grids that fill the gap on the wing surface and the gap created by the flaps. The scheme recognizes these configurations automatically so that minimal user input is required. By utilizing an appropriate sequence in advancing the grid points on a wing surface, the automatic grid generation for complex wing configurations is achieved.

  16. Remembering spatial locations: effects of material and intelligence.

    PubMed

    Zucco, G M; Tessari, A; Soresi, S

    1995-04-01

    The aim of the present work was to test some of the criteria for automaticity of spatial-location coding claimed by Hasher and Zacks, particularly individual differences (as intelligence invariance) and effortful encoding strategies. Two groups of subjects, 15 with mental retardation (Down Syndrome, mean chronological age, 20.9 yr.; mean mental age, 11.6 yr.) and 15 normal children (mean age, 11.5 yr.), were administered four kinds of stimuli (pictures, concrete words, nonsense pictures, and abstract words) at one location on a card. Subsequently, subjects were presented the items on the card's centre and were required to place the items in their original locations. Analysis indicated that those with Down Syndrome scored lower than normal children on the four tasks and that stimuli were better or worse remembered according to their characteristics, e.g., their imaginability. Results do not support some of the conditions claimed to be necessary criteria for automaticity in the recall of spatial locations as stated by Hasher and Zacks.

  17. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  18. 26 CFR 26.6081-1 - Automatic extension of time for filing generation-skipping transfer tax returns.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 14 2011-04-01 2010-04-01 true Automatic extension of time for filing generation-skipping transfer tax returns. 26.6081-1 Section 26.6081-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) ESTATE AND GIFT TAXES GENERATION-SKIPPING TRANSFER TAX...

  19. 26 CFR 26.6081-1 - Automatic extension of time for filing generation-skipping transfer tax returns.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Automatic extension of time for filing generation-skipping transfer tax returns. 26.6081-1 Section 26.6081-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) ESTATE AND GIFT TAXES GENERATION-SKIPPING TRANSFER TAX...

  20. 26 CFR 26.6081-1 - Automatic extension of time for filing generation-skipping transfer tax returns.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 14 2013-04-01 2013-04-01 false Automatic extension of time for filing generation-skipping transfer tax returns. 26.6081-1 Section 26.6081-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) ESTATE AND GIFT TAXES GENERATION-SKIPPING TRANSFER TAX...

  1. 26 CFR 26.6081-1 - Automatic extension of time for filing generation-skipping transfer tax returns.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 14 2014-04-01 2013-04-01 true Automatic extension of time for filing generation-skipping transfer tax returns. 26.6081-1 Section 26.6081-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) ESTATE AND GIFT TAXES GENERATION-SKIPPING TRANSFER TAX...

  2. 26 CFR 26.6081-1 - Automatic extension of time for filing generation-skipping transfer tax returns.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 14 2012-04-01 2012-04-01 false Automatic extension of time for filing generation-skipping transfer tax returns. 26.6081-1 Section 26.6081-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) ESTATE AND GIFT TAXES GENERATION-SKIPPING TRANSFER TAX...

  3. Explaining and Controlling for the Psychometric Properties of Computer-Generated Figural Matrix Items

    ERIC Educational Resources Information Center

    Freund, Philipp Alexander; Hofer, Stefan; Holling, Heinz

    2008-01-01

    Figural matrix items are a popular task type for assessing general intelligence (Spearman's g). Items of this kind can be constructed rationally, allowing the implementation of computerized generation algorithms. In this study, the influence of different task parameters on the degree of difficulty in matrix items was investigated. A sample of N =…

  4. Perceiving pain in others: validation of a dual processing model.

    PubMed

    McCrystal, Kalie N; Craig, Kenneth D; Versloot, Judith; Fashler, Samantha R; Jones, Daniel N

    2011-05-01

    Accurate perception of another person's painful distress would appear to be accomplished through sensitivity to both automatic (unintentional, reflexive) and controlled (intentional, purposive) behavioural expression. We examined whether observers would construe diverse behavioural cues as falling within these domains, consistent with cognitive neuroscience findings describing activation of both automatic and controlled neuroregulatory processes. Using online survey methodology, 308 research participants rated behavioural cues as "goal directed vs. non-goal directed," "conscious vs. unconscious," "uncontrolled vs. controlled," "fast vs. slow," "intentional (deliberate) vs. unintentional," "stimulus driven (obligatory) vs. self driven," and "requiring contemplation vs. not requiring contemplation." The behavioural cues were the 39 items provided by the PROMIS pain behaviour bank, constructed to be representative of the diverse possibilities for pain expression. Inter-item correlations among rating scales provided evidence of sufficient internal consistency justifying a single score on an automatic/controlled dimension (excluding the inconsistent fast vs. slow scale). An initial exploratory factor analysis on 151 participant data sets yielded factors consistent with "controlled" and "automatic" actions, as well as behaviours characterized as "ambiguous." A confirmatory factor analysis using the remaining 151 data sets replicated EFA findings, supporting theoretical predictions that observers would distinguish immediate, reflexive, and spontaneous reactions (primarily facial expression and paralinguistic features of speech) from purposeful and controlled expression (verbal behaviour, instrumental behaviour requiring ongoing, integrated responses). There are implicit dispositions to organize cues signaling pain in others into the well-defined categories predicted by dual process theory. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  5. Relational and item-specific influences on generate-recognize processes in recall.

    PubMed

    Guynn, Melissa J; McDaniel, Mark A; Strosser, Garrett L; Ramirez, Juan M; Castleberry, Erica H; Arnett, Kristen H

    2014-02-01

    The generate-recognize model and the relational-item-specific distinction are two approaches to explaining recall. In this study, we consider the two approaches in concert. Following Jacoby and Hollingshead (Journal of Memory and Language 29:433-454, 1990), we implemented a production task and a recognition task following production (1) to evaluate whether generation and recognition components were evident in cued recall and (2) to gauge the effects of relational and item-specific processing on these components. An encoding task designed to augment item-specific processing (anagram-transposition) produced a benefit on the recognition component (Experiments 1-3) but no significant benefit on the generation component (Experiments 1-3), in the context of a significant benefit to cued recall. By contrast, an encoding task designed to augment relational processing (category-sorting) did produce a benefit on the generation component (Experiment 3). These results converge on the idea that in recall, item-specific processing impacts a recognition component, whereas relational processing impacts a generation component.

  6. Automatic query formulations in information retrieval.

    PubMed

    Salton, G; Buckley, C; Fox, E A

    1983-07-01

    Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice.
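
    The frequency-based construction of term combinations and connectives described above can be illustrated with a minimal sketch. This is not Salton, Buckley and Fox's actual procedure (whose details are not given in this record); it simply treats collection-rare terms as discriminating terms to be ANDed and groups common terms with OR. The function and variable names (formulate_query, doc_freq) and the 10% rarity cut-off are assumptions for illustration.

    ```python
    import re

    STOPWORDS = {"the", "a", "an", "of", "and", "or", "in", "on", "for", "to", "with"}

    def formulate_query(request, doc_freq, n_docs, rare_cutoff=0.10):
        """Turn a natural-language request into a Boolean query string.

        doc_freq maps each term to the number of collection documents containing it.
        Terms occurring in fewer than `rare_cutoff` of the documents are treated as
        discriminating and joined with AND; the remaining terms are joined with OR.
        """
        terms = [t for t in re.findall(r"[a-z]+", request.lower()) if t not in STOPWORDS]
        terms = list(dict.fromkeys(terms))          # keep order, drop duplicates
        rare = [t for t in terms if doc_freq.get(t, 0) / n_docs < rare_cutoff]
        broad = [t for t in terms if t not in rare]
        clauses = []
        if rare:
            clauses.append(" AND ".join(rare))
        if broad:
            clauses.append("(" + " OR ".join(broad) + ")")
        return " AND ".join(clauses) if clauses else ""

    # Example usage with a toy collection of 100 documents.
    df = {"retrieval": 4, "boolean": 6, "information": 55, "systems": 40}
    print(formulate_query("Boolean retrieval in information systems", df, n_docs=100))
    # -> "boolean AND retrieval AND (information OR systems)"
    ```

    A real system would additionally weight terms and relax the strict interpretation of the connectives, as the extended Boolean model mentioned in the abstract does.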
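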

  7. Behavioral decoding of working memory items inside and outside the focus of attention.

    PubMed

    Mallett, Remington; Lewis-Peacock, Jarrod A

    2018-03-31

    How we attend to our thoughts affects how we attend to our environment. Holding information in working memory can automatically bias visual attention toward matching information. By observing attentional biases on reaction times to visual search during a memory delay, it is possible to reconstruct the source of that bias using machine learning techniques and thereby behaviorally decode the content of working memory. Can this be done when more than one item is held in working memory? There is some evidence that multiple items can simultaneously bias attention, but the effects have been inconsistent. One explanation may be that items are stored in different states depending on the current task demands. Recent models propose functionally distinct states of representation for items inside versus outside the focus of attention. Here, we use behavioral decoding to evaluate whether multiple memory items-including temporarily irrelevant items outside the focus of attention-exert biases on visual attention. Only the single item in the focus of attention was decodable. The other item showed a brief attentional bias that dissipated until it returned to the focus of attention. These results support the idea of dynamic, flexible states of working memory across time and priority. © 2018 New York Academy of Sciences.

  8. Attention capture by contour onsets and offsets: no special role for onsets.

    PubMed

    Watson, D G; Humphreys, G W

    1995-07-01

    In five experiments, we investigated the power of targets defined by the onset or offset of one of an object's parts (contour onsets and offsets) either to guide or to capture visual attention. In Experiment 1, search for a single contour onset target was compared with search for a single contour offset target against a static background of distractors; no difference was found between the efficiency with which each could be detected. In Experiment 2, onsets and offsets were compared for automatic attention capture, when both occurred simultaneously. Unlike in previous studies, the effects of overall luminance change, new-object creation, and number of onset and offset items were controlled. It was found that contour onset and offset items captured attention equally well. However, display size effects on both target types were also apparent. Such effects may have been due to competition for selection between multiple onset and offset stimuli. In Experiments 3 and 4, single onset and offset stimuli were presented simultaneously and pitted directly against one another among a background of static distractors. In Experiment 3, we examined "guided search," for a target that was formed either from an onset or from an offset among static items. In Experiment 4, the onsets and offsets were uncorrelated with the target location. Similar results occurred in both experiments: target onsets and offsets were detected more efficiently than static stimuli which needed serial search; there remained effects of display size on performance; but there was still no advantage for onsets. In Experiment 5, we examined automatic attention capture by single onset and offset stimuli presented individually among static distractors. Again, there was no advantage for onset over offset targets and a display size effect was also present. These results suggest that, both in isolation and in competition, onsets that do not form new objects neither guide nor gain automatic attention more efficiently than offsets. In addition, in contrast to previous studies in which onsets formed new objects, contour onsets and offsets did not reliably capture attention automatically.

  9. Chance of Tweetstorms

    ERIC Educational Resources Information Center

    Harney, John O.

    2017-01-01

    Every "New England Journal of Higher Education" ("NEJHE") item automatically posts to Twitter, but Twitter is also used to disseminate interesting news or opinion pieces from elsewhere. These are often juxtaposed with something the New England Board of Education (NEBHE) has worked on in the past and sometimes presented with an…

  10. The Future of Access Technology for Blind and Visually Impaired People.

    ERIC Educational Resources Information Center

    Schreier, E. M.

    1990-01-01

    This article describes potential use of new technological products and services by blind/visually impaired people. Items discussed include computer input devices, public telephones, automatic teller machines, airline and rail arrival/departure displays, ticketing machines, information retrieval systems, order-entry terminals, optical character…

  11. Braking mechanism is self actuating and bidirectional

    NASA Technical Reports Server (NTRS)

    Pizzo, J.

    1966-01-01

    Mechanism automatically applies a braking action on a moving item, in either direction of motion, immediately upon removal of the driving force and with no human operator involvement. This device would be useful wherever free movement is undesirable after an object has been guided into a precise position.

  12. USSR Report, Consumer Goods and Domestic Trade, No. 62.

    DTIC Science & Technology

    1983-04-28

    dough preparation, automatic dough make-up and rolling machines and 3 others) is the most important task when producing equipment for the baking...candy production. It is planned to provide the production of flour confectionary items with completely mechanized lines for elongated types of cookies and

  13. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian; Myers, Cornelia A; Podowski, Raf M

    2015-05-05

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.

  14. System and method for generating a relationship network

    DOEpatents

    Franks, Kasian [Kensington, CA; Myers, Cornelia A [St. Louis, MO; Podowski, Raf M [Pleasant Hill, CA

    2011-07-26

    A computer-implemented system and process for generating a relationship network is disclosed. The system provides a set of data items to be related and generates variable length data vectors to represent the relationships between the terms within each data item. The system can be used to generate a relationship network for documents, images, or any other type of file. This relationship network can then be queried to discover the relationships between terms within the set of data items.

  15. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints-in particular, not allowing immediately repeated items-which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items. The algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
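
    As a rough illustration of the Monte Carlo approach described above (not the authors' Excel implementation), the sketch below shuffles a multiset of items with given frequencies until an ordering with no immediate repetitions is found, and then tallies the transitions that actually occurred. The item frequencies in the example are invented.

    ```python
    import random
    from collections import Counter

    def constrained_sequence(freqs, max_tries=10000, rng=random.Random(0)):
        """Return a random ordering of items (given as {item: count}) with no
        immediate repetitions, using simple Monte Carlo rejection sampling."""
        pool = [item for item, n in freqs.items() for _ in range(n)]
        for _ in range(max_tries):
            rng.shuffle(pool)
            if all(a != b for a, b in zip(pool, pool[1:])):
                return pool
        raise RuntimeError("no valid sequence found; frequencies may be too skewed")

    def transition_counts(seq):
        """Tally how often each item-pair transition occurs in the sequence."""
        return Counter(zip(seq, seq[1:]))

    seq = constrained_sequence({"A": 6, "B": 6, "C": 4})
    print(seq)
    print(transition_counts(seq))
    ```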

  16. Automatic finite element generators

    NASA Technical Reports Server (NTRS)

    Wang, P. S.

    1984-01-01

    The design and implementation of a software system for generating finite elements and related computations are described. Exact symbolic computational techniques are employed to derive strain-displacement matrices and element stiffness matrices. Methods for dealing with the excessive growth of symbolic expressions are discussed. Automatic FORTRAN code generation is described with emphasis on improving the efficiency of the resultant code.

  17. Automatic Commercial Permit Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grana, Paul

    Final report for Folsom Labs’ Solar Permit Generator project, which was successfully completed, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  18. Generation and Memory for Contextual Detail

    ERIC Educational Resources Information Center

    Mulligan, Neil W.

    2004-01-01

    Generation enhances item memory but may not enhance other aspects of memory. In 12 experiments, the author investigated the effect of generation on context memory, motivated in part by the hypothesis that generation produces a trade-off in encoding item and contextual information. Participants generated some study words (e.g., hot-___) and read…

  19. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  20. Automatic mathematical modeling for real time simulation program (AI application)

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1989-01-01

    A methodology is described for automatic mathematical modeling and generating simulation models. The major objective was to create a user friendly environment for engineers to design, maintain, and verify their models; to automatically convert the mathematical models into conventional code for computation; and finally, to document the model automatically.

  1. Evaluating the healthiness of chain-restaurant menu items using crowdsourcing: a new method.

    PubMed

    Lesser, Lenard I; Wu, Leslie; Matthiessen, Timothy B; Luft, Harold S

    2017-01-01

    The objective was to develop a technology-based method for evaluating the nutritional quality of chain-restaurant menus, to increase the efficiency and lower the cost of large-scale data analysis of food items. Using a Modified Nutrient Profiling Index (MNPI), we assessed chain-restaurant items from the MenuStat database with a process involving three steps: (i) testing 'extreme' scores; (ii) crowdsourcing to analyse fruit, nut and vegetable (FNV) amounts; and (iii) analysis of the ambiguous items by a registered dietitian. In applying the approach to assess 22 422 foods, only 3566 could not be scored automatically based on MenuStat data and required further evaluation to determine healthiness. Items for which there was low agreement between trusted crowd workers, or where the FNV amount was estimated to be >40 %, were sent to a registered dietitian. Crowdsourcing was able to evaluate 3199, leaving only 367 to be reviewed by the registered dietitian. Overall, 7 % of items were categorized as healthy. The healthiest category was soups (26 % healthy), while desserts were the least healthy (2 % healthy). An algorithm incorporating crowdsourcing and a dietitian can quickly and efficiently analyse restaurant menus, allowing public health researchers to analyse the healthiness of menu items.
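
    The three-step triage described above can be sketched as a simple routing function. The cut-offs for 'extreme' scores and the crowd-agreement tolerance below are illustrative placeholders, not the published MNPI thresholds; only the >40 % FNV rule comes from the abstract itself.

    ```python
    def route_item(mnpi_score, crowd_fnv_estimates=None,
                   extreme_low=2, extreme_high=70, agreement_tol=10):
        """Decide how a menu item is scored: automatically, by crowd, or by dietitian.

        mnpi_score          -- nutrient-profiling score computed from MenuStat data
        crowd_fnv_estimates -- crowd-worker estimates of % fruit/nut/vegetable content
        The extreme_* cut-offs and agreement tolerance are illustrative placeholders.
        """
        # Step 1: clearly healthy / clearly unhealthy items are scored automatically.
        if mnpi_score <= extreme_low or mnpi_score >= extreme_high:
            return "automatic"
        # Step 2: otherwise ask crowd workers to estimate FNV content.
        if not crowd_fnv_estimates:
            return "send to crowd"
        spread = max(crowd_fnv_estimates) - min(crowd_fnv_estimates)
        mean_fnv = sum(crowd_fnv_estimates) / len(crowd_fnv_estimates)
        # Step 3: low agreement or FNV estimated above 40% goes to the dietitian.
        if spread > agreement_tol or mean_fnv > 40:
            return "dietitian review"
        return "crowd-scored"

    print(route_item(80))                   # -> automatic
    print(route_item(30))                   # -> send to crowd
    print(route_item(30, [45, 50, 48]))     # -> dietitian review
    print(route_item(30, [10, 12, 11]))     # -> crowd-scored
    ```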

  2. Using Web-Based Practice to Enhance Mathematics Learning and Achievement

    ERIC Educational Resources Information Center

    Nguyen, Diem M.; Kulm, Gerald

    2005-01-01

    This article describes 1) the special features and accessibility of an innovative web-based practice instrument (WebMA) designed with randomized short-answer, matching and multiple choice items incorporated with automatically adapted feedback for middle school students; and 2) an exploratory study that compares the effects and contributions of…

  3. Testing methods and techniques: Environmental testing: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Various devices and techniques are described for testing hardware and components in four special environments: low temperature, high temperature, high pressure, and vibration. Items ranging from an automatic calibrator for pressure transducers to a fixture for testing the susceptibility of materials to ignition by electric spark are included.

  4. Language Assessment in a Snap: Monitoring Progress up to 36 Months

    ERIC Educational Resources Information Center

    Gilkerson, Jill; Richards, Jeffrey A.; Greenwood, Charles R.; Montgomery, Judy K.

    2017-01-01

    This article describes the development and validation of the Developmental Snapshot, a 52-item parent questionnaire on child language and vocal communication development that can be administered monthly and scored automatically. The Snapshot was created to provide an easily administered monthly progress monitoring tool that enables parents to…

  5. 78 FR 12136 - Fifty Eighth Meeting: RTCA Special Committee 186, Automatic Dependent Surveillance-Broadcast (ADS-B)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    .... Issued in Washington, DC, on February 19, 2013. Paige Williams, Management Analyst, Business Operations... Trajectory Management Other? Other Business. None Identified Review Action Items/Work Programs. Adjourn...) • Flight-deck Interval Management (FIM) • CAVS and CDTI Assisted Pilot Procedures (CAPP...

  6. Free Recall Test Experience Potentiates Strategy-Driven Effects of Value on Memory

    ERIC Educational Resources Information Center

    Cohen, Michael S.; Rissman, Jesse; Hovhannisyan, Mariam; Castel, Alan D.; Knowlton, Barbara J.

    2017-01-01

    People tend to show better memory for information that is deemed valuable or important. By one mechanism, individuals selectively engage deeper, semantic encoding strategies for high value items (Cohen, Rissman, Suthana, Castel, & Knowlton, 2014). By another mechanism, information paired with value or reward is automatically strengthened in…

  7. 32 CFR 552.100 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... designed or redesigned, made or remade, modified or remodified to automatically fire more than one shot by..., incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition when loaded... smooth bore either a number of ball shot or a single projectile for each single pull of the trigger. (j...

  8. 32 CFR 552.100 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... designed or redesigned, made or remade, modified or remodified to automatically fire more than one shot by..., incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition when loaded... smooth bore either a number of ball shot or a single projectile for each single pull of the trigger. (j...

  9. 32 CFR 552.100 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... designed or redesigned, made or remade, modified or remodified to automatically fire more than one shot by..., incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition when loaded... smooth bore either a number of ball shot or a single projectile for each single pull of the trigger. (j...

  10. 32 CFR 552.100 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... designed or redesigned, made or remade, modified or remodified to automatically fire more than one shot by..., incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition when loaded... smooth bore either a number of ball shot or a single projectile for each single pull of the trigger. (j...

  11. 32 CFR 552.126 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... designed to shoot, or can be readily restored to shoot automatically more than one shot without manual..., trace, incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition... one or more barrels when held in one hand, and having: (1) A chamber(s) as an integral part(s) of, or...

  12. 32 CFR 552.126 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... designed to shoot, or can be readily restored to shoot automatically more than one shot without manual..., trace, incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition... one or more barrels when held in one hand, and having: (1) A chamber(s) as an integral part(s) of, or...

  13. 32 CFR 552.126 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... designed to shoot, or can be readily restored to shoot automatically more than one shot without manual..., trace, incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition... one or more barrels when held in one hand, and having: (1) A chamber(s) as an integral part(s) of, or...

  14. 32 CFR 552.126 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... designed to shoot, or can be readily restored to shoot automatically more than one shot without manual..., trace, incendiary, blank, shotgun, black powder, and shot). Items shall only be considered as ammunition... one or more barrels when held in one hand, and having: (1) A chamber(s) as an integral part(s) of, or...

  15. 76 FR 79754 - Twelfth Meeting: RTCA Special Committee 220, Automatic Flight Guidance and Control

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-22

    ... technologies Administrative items (meeting schedule, location, and next meeting agenda) Any other business... 2 status--progress, issues and plan Review of WG 3 status--progress, issues and plans Review action.... Issued in Washington, DC, on December 15, 2011. Robert L. Bostiga, Manager, Business Operations Branch...

  16. 77 FR 12105 - 56th Meeting: RTCA Special Committee 186, Automatic Dependent Surveillance-Broadcast (ADS-B)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-28

    ... Existing Traffic Safety Nets EUROCAE WG51 TSAA Perspective Flight-deck Interval Management (FIM)--Status... Only Agenda Items Document Approval: DO-xxx--Minimum Aviation System Performance Standards (MASPS) for... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 56th Meeting: RTCA Special Committee...

  17. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.

  18. Research-oriented image registry for multimodal image integration.

    PubMed

    Tanaka, M; Sadato, N; Ishimori, Y; Yonekura, Y; Yamashita, Y; Komuro, H; Hayahsi, N; Ishii, Y

    1998-01-01

    To provide multimodal biomedical images automatically, we constructed the research-oriented image registry, Data Delivery System (DDS). DDS was constructed on the campus local area network. Machines which generate images (imagers: DSA, ultrasound, PET, MRI, SPECT and CT) were connected to the campus LAN. Once a patient is registered, all his images are automatically picked up by DDS as they are generated, transferred through the gateway server to the intermediate server, and copied into the directory of the user who registered the patient. DDS informs the user through e-mail that new data have been generated and transferred. The data format is automatically converted into one which is chosen by the user. Data inactive for a certain period in the intermediate server are automatically archived into the final and permanent data server based on compact disk. As a soft link is automatically generated through this step, a user has access to all (old or new) image data of the patient of his interest. As DDS runs with minimal maintenance, the cost and time for data transfer are significantly reduced. By making the complex process of data transfer and conversion invisible, DDS has made it easy for naive-to-computer researchers to concentrate on their biomedical interest.
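
    The pick-up, convert, copy and notify pipeline described above can be caricatured with a small polling sketch. This is only a rough analogue under stated assumptions: the directory layout, the registry structure, the convert() stub and the printed notification are placeholders, not the DDS implementation.

    ```python
    import shutil
    from pathlib import Path

    # Illustrative registry: which user registered which patient, and the format
    # that user wants the data converted into.
    REGISTRY = {"patient_001": {"user_dir": Path("users/alice"), "format": "nii"}}

    def convert(src: Path, fmt: str) -> Path:
        """Placeholder for the format-conversion step; here it only changes the suffix."""
        return src.with_suffix("." + fmt)

    def poll_imagers(incoming: Path, seen: set) -> None:
        """Pick up newly generated image files, copy them into the registering
        user's directory in the requested format, and flag a notification."""
        for f in sorted(incoming.glob("*")):
            if f in seen or f.is_dir():
                continue
            patient = f.stem.split("__")[0]               # e.g. "patient_001__MRI_T1"
            entry = REGISTRY.get(patient)
            if entry:
                dest = entry["user_dir"] / convert(f, entry["format"]).name
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy(f, dest)
                print(f"notify user: new data at {dest}")  # stand-in for the e-mail step
            seen.add(f)

    # In a real watcher this call would run periodically (e.g., every minute).
    inbox = Path("imagers/outbox")
    if inbox.is_dir():
        poll_imagers(inbox, set())
    ```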

  19. Validity Evidence of the Spanish Version of the Automatic Thoughts Questionnaire-8 in Colombia.

    PubMed

    Ruiz, Francisco J; Suárez-Falcón, Juan C; Riaño-Hernández, Diana

    2017-02-13

    The Automatic Thoughts Questionnaire (ATQ) is a widely used, 30-item, 5-point Likert-type scale that measures the frequency of negative automatic thoughts as experienced by individuals suffering from depression. However, there is some controversy about the factor structure of the ATQ, and its application can be too time-consuming for survey research. Accordingly, an abbreviated, 8-item version of the ATQ has been proposed. The aim of this study was to analyze the validity evidence of the Spanish version of the ATQ-8 in Colombia. The ATQ-8 was administered to a total of 1587 participants, including a sample of undergraduates, one of general population, and a clinical sample. The internal consistency across the different samples was good (α = .89). The one-factor model found in the original scale showed a good fit to the data (RMSEA = .083, 90% CI [.074, .092]; CFI = .96; NNFI = .95). The clinical sample's mean score on the ATQ-8 was significantly higher than the scores of the nonclinical samples. The ATQ-8 was sensitive to the effects of a 1-session acceptance and commitment therapy focused on disrupting negative repetitive thinking. ATQ-8 scores were significantly related to dysfunctional schemas, emotional symptoms, mindfulness, experiential avoidance, satisfaction with life, and dysfunctional attitudes. In conclusion, the Spanish version of the ATQ-8 showed good psychometric properties in Colombia.

  20. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    NASA Technical Reports Server (NTRS)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  1. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
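
    As a rough illustration of the parametric-fitting-plus-design-rules idea described above (not the authors' algorithm), the sketch below fits cubic splines to a 2D centerline, offsets them along the local normal to obtain lane edges, and applies a constant cross slope. The 3.5 m half-width and 2% cross slope are assumed values.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def road_surface(centerline_xy, half_width=3.5, cross_slope=0.02, samples=200):
        """Build a simple crowned road surface from 2D centerline points.

        Fits parametric cubic splines x(t), y(t), offsets them by +/- half_width
        along the local normal, and lowers the edges by cross_slope * half_width.
        Returns (center, left_edge, right_edge) as (samples, 3) arrays.
        """
        pts = np.asarray(centerline_xy, dtype=float)
        t = np.linspace(0.0, 1.0, len(pts))
        sx, sy = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])
        ts = np.linspace(0.0, 1.0, samples)
        x, y = sx(ts), sy(ts)
        dx, dy = sx(ts, 1), sy(ts, 1)                  # first derivatives (tangent)
        norm = np.hypot(dx, dy)
        nx, ny = -dy / norm, dx / norm                 # unit normal
        drop = cross_slope * half_width                # edge height below the crown
        center = np.column_stack([x, y, np.zeros_like(x)])
        left = np.column_stack([x + nx * half_width, y + ny * half_width,
                                np.full_like(x, -drop)])
        right = np.column_stack([x - nx * half_width, y - ny * half_width,
                                 np.full_like(x, -drop)])
        return center, left, right

    center, left, right = road_surface([(0, 0), (50, 10), (100, 40), (150, 45)])
    ```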

  2. Automating Traceability for Generated Software Artifacts

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Green, Jeffrey

    2004-01-01

    Program synthesis automatically derives programs from specifications of their behavior. One advantage of program synthesis, as opposed to manual coding, is that there is a direct link between the specification and the derived program. This link is, however, not very fine-grained: it can be best characterized as "Program is-derived-from Specification." When the generated program needs to be understood or modified, more fine-grained linking is useful. In this paper, we present a novel technique for automatically deriving traceability relations between parts of a specification and parts of the synthesized program. The technique is very lightweight and works, with varying degrees of success, for any process in which one artifact is automatically derived from another. We illustrate the generality of the technique by applying it to two kinds of automatic generation: synthesis of Kalman filter programs from specifications using the AutoFilter program synthesis system, and generation of assembly language programs from C source code using the GCC C compiler. We evaluate the effectiveness of the technique in the latter application.

  3. Tuned grid generation with ICEM CFD

    NASA Technical Reports Server (NTRS)

    Wulf, Armin; Akdag, Vedat

    1995-01-01

    ICEM CFD is a CAD based grid generation package that supports multiblock structured, unstructured tetrahedral and unstructured hexahedral grids. Major development efforts have been spent to extend ICEM CFD's multiblock structured and hexahedral unstructured grid generation capabilities. The modules added are: a parametric grid generation module and a semi-automatic hexahedral grid generation module. A fully automatic version of the hexahedral grid generation module for around a set of predefined objects in rectilinear enclosures has been developed. These modules will be presented and the procedures used will be described, and examples will be discussed.

  4. Gaussian curvature analysis allows for automatic block placement in multi-block hexahedral meshing.

    PubMed

    Ramme, Austin J; Shivanna, Kiran H; Magnotta, Vincent A; Grosland, Nicole M

    2011-10-01

    Musculoskeletal finite element analysis (FEA) has been essential to research in orthopaedic biomechanics. The generation of a volumetric mesh is often the most challenging step in a FEA. Hexahedral meshing tools that are based on a multi-block approach rely on the manual placement of building blocks for their mesh generation scheme. We hypothesise that Gaussian curvature analysis could be used to automatically develop a building block structure for multi-block hexahedral mesh generation. The Automated Building Block Algorithm incorporates principles from differential geometry, combinatorics, statistical analysis and computer science to automatically generate a building block structure to represent a given surface without prior information. We have applied this algorithm to 29 bones of varying geometries and successfully generated a usable mesh in all cases. This work represents a significant advancement in automating the definition of building blocks.
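
    A common discrete proxy for Gaussian curvature on a triangle mesh is the angle deficit at a vertex, 2π minus the sum of the incident triangle angles. The sketch below computes this proxy and flags high-deficit vertices as candidate block corners; it is a generic illustration of the idea, not the Automated Building Block Algorithm itself, and the 0.5 rad threshold is an assumption.

    ```python
    import numpy as np

    def angle_deficits(vertices, triangles):
        """Discrete Gaussian-curvature proxy: 2*pi minus the sum of the incident
        triangle angles at each (interior) vertex of a triangle mesh."""
        V = np.asarray(vertices, dtype=float)
        deficits = np.full(len(V), 2.0 * np.pi)
        for i, j, k in triangles:
            for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
                u, w = V[b] - V[a], V[c] - V[a]
                cosang = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
                deficits[a] -= np.arccos(np.clip(cosang, -1.0, 1.0))
        return deficits

    def candidate_block_corners(vertices, triangles, threshold=0.5):
        """Vertices whose angle deficit exceeds the threshold (in radians) are
        treated as candidate corners for the multi-block structure."""
        return [i for i, d in enumerate(angle_deficits(vertices, triangles))
                if abs(d) > threshold]

    # Toy check: three unit-square-corner triangles meeting at vertex 0.
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    tris = [(0, 1, 2), (0, 2, 3), (0, 3, 1)]
    print(angle_deficits(verts, tris)[0])   # pi/2 at the sharp corner (vertex 0)
    ```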

  5. Gas turbine engines and transmissions for bus demonstration program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigro, D.N.

    1981-11-01

    This final report is to fulfill the contractual requirements of Contract DE-AC02-78CS54867, which required the delivery of 11 Allison GT 404-4 Industrial Gas Turbine Engines and five HT740CT and six V730CT Allison Automatic Transmissions for the Greyhound and Transit Coaches, respectively. In addition, software items such as cost reports, technical reports, installation drawings, acceptance test data and parts lists were required. Engine and transmission deliveries were completed with shipment of the last power package on 11 April 1980. Software items were submitted when required during the performance period of this contract.

  6. ADMAP (automatic data manipulation program)

    NASA Technical Reports Server (NTRS)

    Mann, F. I.

    1971-01-01

    Instructions are presented on the use of ADMAP (automatic data manipulation program), an aerospace data manipulation computer program. The program was developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low thrust optimization program, HILTOP. The program has the option of generating SC4020 electric plots, and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash line drawing routine, and single parameter and double parameter sorting routines. Many routines are tailored for the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.
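
    Automatic scale selection of the kind mentioned above is often done with a "nice numbers" rule: round the raw tick spacing up to 1, 2 or 5 times a power of ten. The sketch below shows that generic rule; whether ADMAP used exactly this scheme is not stated in the record.

    ```python
    import math

    def nice_scale(lo, hi, max_ticks=10):
        """Pick rounded axis limits and a tick step of 1, 2, or 5 x 10^n that
        covers [lo, hi] with at most max_ticks intervals."""
        span = hi - lo
        raw_step = span / max_ticks
        mag = 10 ** math.floor(math.log10(raw_step))
        for mult in (1, 2, 5, 10):
            step = mult * mag
            if span / step <= max_ticks:
                break
        nice_lo = math.floor(lo / step) * step
        nice_hi = math.ceil(hi / step) * step
        return nice_lo, nice_hi, step

    print(nice_scale(0.37, 9.2))   # -> (0.0, 10.0, 1.0)
    ```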

  7. 2D Automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS?s Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  8. 2D automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS?s Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  9. Installation and Testing Instructions for the Sandia Automatic Report Generator (ARG).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clay, Robert L.

    In this report, we provide detailed and reproducible installation instructions for the Automatic Report Generator (ARG), for both Linux and macOS target platforms.

  10. Generative and Item-Specific Knowledge of Language

    ERIC Educational Resources Information Center

    Morgan, Emily Ida Popper

    2016-01-01

    The ability to generate novel utterances compositionally using generative knowledge is a hallmark property of human language. At the same time, languages contain non-compositional or idiosyncratic items, such as irregular verbs, idioms, etc. This dissertation asks how and why language achieves a balance between these two systems--generative and…

  11. Monitoring item and source information: evidence for a negative generation effect in source memory.

    PubMed

    Jurica, P J; Shimamura, A P

    1999-07-01

    Item memory and source memory were assessed in a task that simulated a social conversation. Participants generated answers to questions or read statements presented by one of three sources (faces on a computer screen). Positive generation effects were observed for item memory. That is, participants remembered topics of conversation better if they were asked questions about the topics than if they simply read statements about topics. However, a negative generation effect occurred for source memory. That is, remembering the source of some information was disrupted if participants were required to answer questions pertaining to that information. These findings support the notion that item and source memory are mediated, at least in part, by different processes during encoding.

  12. Autonomously generating operations sequences for a Mars Rover using AI-based planning

    NASA Technical Reports Server (NTRS)

    Sherwood, Rob; Mishkin, Andrew; Estlin, Tara; Chien, Steve; Backes, Paul; Cooper, Brian; Maxwell, Scott; Rabideau, Gregg

    2001-01-01

    This paper discusses a proof-of-concept prototype for ground-based automatic generation of validated rover command sequences from high-level science and engineering activities. This prototype is based on ASPEN, the Automated Scheduling and Planning Environment. This Artificial Intelligence (AI) based planning and scheduling system will automatically generate a command sequence that will execute within resource constraints and satisfy flight rules.
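
    A toy analogue of sequencing within resource constraints is sketched below: activities are placed greedily at the earliest start time at which a single shared resource (power, here) stays under its cap. This is only an illustration of constraint-checked sequence generation, not ASPEN's planning algorithm; the activities and the 60 W cap are invented.

    ```python
    def schedule(activities, power_cap, horizon):
        """Greedy placement of activities on a timeline so that the summed power
        draw never exceeds power_cap. Each activity is (name, duration, power).
        Returns a list of (start, name) or raises if an activity cannot fit."""
        usage = [0.0] * horizon                       # power draw per time step
        plan = []
        for name, duration, power in activities:
            placed = False
            for start in range(horizon - duration + 1):
                window = usage[start:start + duration]
                if all(u + power <= power_cap for u in window):
                    for t in range(start, start + duration):
                        usage[t] += power
                    plan.append((start, name))
                    placed = True
                    break
            if not placed:
                raise ValueError(f"cannot schedule {name} within constraints")
        return sorted(plan)

    acts = [("warmup_camera", 3, 20.0), ("image_target", 2, 45.0), ("downlink", 4, 30.0)]
    print(schedule(acts, power_cap=60.0, horizon=12))
    # -> [(0, 'warmup_camera'), (3, 'image_target'), (5, 'downlink')]
    ```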

  13. SU-G-TeP1-05: Development and Clinical Introduction of Automated Radiotherapy Treatment Planning for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkel, D; Bol, GH; Asselen, B van

    Purpose: To develop an automated radiotherapy treatment planning and optimization workflow for prostate cancer in order to generate clinical treatment plans. Methods: A fully automated radiotherapy treatment planning and optimization workflow was developed based on the treatment planning system Monaco (Elekta AB, Stockholm, Sweden). To evaluate our method, a retrospective planning study (n=100) was performed on patients treated for prostate cancer with 5-field intensity-modulated radiotherapy, receiving a dose of 35×2Gy to the prostate and vesicles and a simultaneous integrated boost of 35×0.2Gy to the prostate only. A comparison was made between the dosimetric values of the automatically and manually generated plans. Operator time to generate a plan and plan efficiency were measured. Results: A comparison of the dosimetric values shows that automatically generated plans yield more beneficial dosimetric values. In automatic plans, reductions of 43% in the V72Gy of the rectum and 13% in the V72Gy of the bladder are observed when compared to the manually generated plans. Smaller variance in dosimetric values is seen, i.e., the intra- and interplanner variability is decreased. For 97% of the automatically generated plans and 86% of the clinical plans, all criteria for target coverage and organs-at-risk constraints are met. The number of plan segments and monitor units is reduced by 13% and 9%, respectively. Automated planning requires less than one minute of operator time, compared to over an hour for manual planning. Conclusion: The automatically generated plans are highly suitable for clinical use. The plans have less variance, and a large gain in time efficiency has been achieved. Currently, a pilot study is being performed comparing the preference of the clinician and clinical physicist for the automatic versus the manual plan. Future work will include expanding our automated treatment planning method to other tumor sites and developing other automated radiotherapy workflows.

  14. Puzzle test: A tool for non-analytical clinical reasoning assessment.

    PubMed

    Monajemi, Alireza; Yaghmaei, Minoo

    2016-01-01

    Most contemporary clinical reasoning tests typically assess non-automatic thinking. Therefore, a test is needed to measure automatic reasoning or pattern recognition, which has been largely neglected in clinical reasoning tests. The Puzzle Test (PT) is dedicated to assessing automatic clinical reasoning in routine situations. This test was first introduced in 2009 by Monajemi et al in the Olympiad for Medical Sciences Students. PT is an item format that has gained acceptance in medical education, but no detailed guidelines exist for this test's format, construction and scoring. In this article, a format is described and the steps to prepare and administer valid and reliable PTs are presented. PT examines a specific clinical reasoning task: pattern recognition. PT does not replace other clinical reasoning assessment tools. However, it complements them in strategies for assessing comprehensive clinical reasoning.

  15. Development of an Automatic Dispensing System for Traditional Chinese Herbs.

    PubMed

    Lin, Chi-Ying; Hsieh, Ping-Jung

    2017-01-01

    The gathering of ingredients for decoctions of traditional Chinese herbs still relies on manual dispensation, due to the irregular shape of many items and inconsistencies in weights. In this study, we developed an automatic dispensing system for Chinese herbal decoctions with the aim of reducing manpower costs and the risk of mistakes. We employed machine vision in conjunction with a robot manipulator to facilitate the grasping of ingredients. The name and formulation of the decoction are input via a human-computer interface, and the dispensing of multiple medicine packets is performed automatically. An off-line least-squared curve fitting method was used to calculate the amount of material grasped by the claws and thereby improve system efficiency as well as the accuracy of individual dosages. Experiments on the dispensing of actual ingredients demonstrate the feasibility of the proposed system.
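
    The off-line least-squares calibration mentioned above can be sketched with a simple polynomial fit relating claw opening to grasped weight; the calibration pairs and the quadratic degree below are assumptions, not data from the paper.

    ```python
    import numpy as np

    # Calibration pairs (claw opening in mm, grasped herb weight in grams);
    # these values are illustrative, not measured data from the study.
    opening_mm = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
    weight_g = np.array([1.1, 2.4, 4.0, 6.1, 8.5, 11.2])

    # Off-line least-squares fit of a quadratic weight-vs-opening curve.
    coeffs = np.polyfit(opening_mm, weight_g, deg=2)
    predict_weight = np.poly1d(coeffs)

    def opening_for(target_g, lo=4.0, hi=14.0, steps=1000):
        """Invert the fitted curve numerically: find the claw opening whose
        predicted weight is closest to the requested dose."""
        candidates = np.linspace(lo, hi, steps)
        return candidates[np.argmin(np.abs(predict_weight(candidates) - target_g))]

    print(round(opening_for(5.0), 1))   # claw opening (mm) expected to yield ~5 g
    ```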

  16. Computerized Adaptive Testing with Item Clones. Research Report.

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; van der Linden, Wim J.

    To reduce the cost of item writing and to enhance the flexibility of item presentation, items can be generated by item-cloning techniques. An important consequence of cloning is that it may cause variability on the item parameters. Therefore, a multilevel item response model is presented in which it is assumed that the item parameters of a…

  17. A Study of the Homogeneity of Items Produced From Item Forms Across Different Taxonomic Levels.

    ERIC Educational Resources Information Center

    Weber, Margaret B.; Argo, Jana K.

    This study determined whether item forms (rules for constructing items related to a domain or set of tasks) would enable naive item writers to generate multiple-choice items at three taxonomic levels--knowledge, comprehension, and application. Students wrote 120 multiple-choice items from 20 item forms, corresponding to educational objectives…

  18. Unsupervised MDP Value Selection for Automating ITS Capabilities

    ERIC Educational Resources Information Center

    Stamper, John; Barnes, Tiffany

    2009-01-01

    We seek to simplify the creation of intelligent tutors by using student data acquired from standard computer aided instruction (CAI) in conjunction with educational data mining methods to automatically generate adaptive hints. In our previous work, we have automatically generated hints for logic tutoring by constructing a Markov Decision Process…
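
    For context, the sketch below shows plain value iteration over a toy graph of observed student states, with the highest-valued successor offered as the hint. The states, reward scheme and discount factor are illustrative assumptions, not the authors' settings.

    ```python
    def value_iteration(successors, reward, gamma=0.9, iters=200):
        """Compute state values over a graph of observed student solution states.

        successors[s] -- states reachable from s in the student data
        reward[s]     -- immediate reward for being in s (e.g., 100 at the goal)
        Deterministic transitions are assumed, so V(s) = R(s) + gamma * max V(s').
        """
        V = {s: 0.0 for s in successors}
        for _ in range(iters):
            V = {s: reward.get(s, 0.0) +
                    (gamma * max(V[t] for t in nxt) if nxt else 0.0)
                 for s, nxt in successors.items()}
        return V

    def best_hint(state, successors, V):
        """Suggest the successor state with the highest value as the next step."""
        return max(successors[state], key=lambda t: V[t])

    # Toy solution graph: start -> {stepA, stepB}; stepA -> goal; stepB -> stepA.
    graph = {"start": ["stepA", "stepB"], "stepA": ["goal"],
             "stepB": ["stepA"], "goal": []}
    values = value_iteration(graph, {"goal": 100.0})
    print(best_hint("start", graph, values))   # -> "stepA"
    ```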

  19. Automatic Digital Content Generation System for Real-Time Distance Lectures

    ERIC Educational Resources Information Center

    Iwatsuki, Masami; Takeuchi, Norio; Kobayashi, Hisato; Yana, Kazuo; Takeda, Hiroshi; Yaginuma, Hisashi; Kiyohara, Hajime; Tokuyasu, Akira

    2007-01-01

    This article describes a new automatic digital content generation system we have developed. Recently some universities, including Hosei University, have been offering students opportunities to take distance interactive classes over the Internet from overseas. When such distance lectures are delivered in English to Japanese students, there is a…

  20. The Development and Validation of the Intercultural Sensitivity Scale.

    ERIC Educational Resources Information Center

    Chen, Guo-Ming; Starosta, William J.

    The present study developed and assessed reliability and validity of a new instrument, the Intercultural Sensitivity Scale (ISS). Based on a review of the literature, 44 items thought to be important for intercultural sensitivity were generated. A sample of 414 college students rated these items and generated a 24-item final version of the…

  1. FIM-Minimum Data Set Motor Item Bank: Short Forms Development and Precision Comparison in Veterans.

    PubMed

    Li, Chih-Ying; Romero, Sergio; Simpson, Annie N; Bonilha, Heather S; Simpson, Kit N; Hong, Ickpyo; Velozo, Craig A

    2018-03-01

    To improve the practical use of the short forms (SFs) developed from the item bank, we compared the measurement precision of the 4- and 8-item SFs generated from a motor item bank composed of the FIM and the Minimum Data Set (MDS). The FIM-MDS motor item bank allowed scores generated from different instruments to be co-calibrated. The 4- and 8-item SFs were developed based on Rasch analysis procedures. This article compared person strata, ceiling/floor effects, and test SE plots for each administration form and examined 95% confidence interval error bands of anchored person measures with the corresponding SFs. We used 0.3 SE as a criterion to reflect a reliability level of .90. Veterans' inpatient rehabilitation facilities and community living centers. Veterans (N=2500) who had both FIM and the MDS data within 6 days during 2008 through 2010. Not applicable. Four- and 8-item SFs of FIM, MDS, and FIM-MDS motor item bank. Six SFs were generated with 4 and 8 items across a range of difficulty levels from the FIM-MDS motor item bank. The three 8-item SFs all had higher correlations with the item bank (r=.82-.95), higher person strata, and less test error than the corresponding 4-item SFs (r=.80-.90). The three 4-item SFs did not meet the criteria of SE <0.3 for any theta values. Eight-item SFs could improve clinical use of the item bank composed of existing instruments across the continuum of care in veterans. We also found that the number of items, not test specificity, determines the precision of the instrument. Copyright © 2017 American Congress of Rehabilitation Medicine. All rights reserved.
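
    The 0.3 SE cut-off corresponds to a reliability of roughly .90 under the usual convention relating the standard error of measurement to reliability, assuming here (for illustration) that person measures are scaled to unit variance:

    r = 1 - \frac{\mathrm{SEM}^{2}}{\sigma^{2}_{\theta}}, \qquad \sigma^{2}_{\theta} = 1,\ \mathrm{SEM} = 0.3 \;\Rightarrow\; r = 1 - 0.09 = 0.91 \approx .90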

  2. 38 CFR 36.4226 - Withdrawal of authority to close manufactured home loans on the automatic basis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... imprudent from a lending standpoint or which are prejudicial to the interests of veterans or the Government... show deficiencies in credit underwriting, such as use of unstable sources of income to qualify the borrower, ignoring significant adverse credit items affecting the applicant's creditworthiness, etc., after...

  3. Librarian of the Year 2009: Team Cedar Rapids

    ERIC Educational Resources Information Center

    Berry, John N., III

    2009-01-01

    When the flood hit Cedar Rapids, the Cedar Rapids Public Library (CRPL), IA, lost 160,000 items, including large parts of its adult and youth collections, magazines, newspapers, reference materials, CDs, and DVDs. Most of its public access computers were destroyed, as were its computer lab and microfilm equipment. The automatic circulation and…

  4. Bees Algorithm for Construction of Multiple Test Forms in E-Testing

    ERIC Educational Resources Information Center

    Songmuang, Pokpong; Ueno, Maomi

    2011-01-01

    The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…

  5. The Promise of NLP and Speech Processing Technologies in Language Assessment

    ERIC Educational Resources Information Center

    Chapelle, Carol A.; Chung, Yoo-Ree

    2010-01-01

    Advances in natural language processing (NLP) and automatic speech recognition and processing technologies offer new opportunities for language testing. Despite their potential uses on a range of language test item types, relatively little work has been done in this area, and it is therefore not well understood by test developers, researchers or…

  6. Periodic, On-Demand, and User-Specified Information Reconciliation

    NASA Technical Reports Server (NTRS)

    Kolano, Paul

    2007-01-01

    Automated sequence generation (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences. APGEN includes a graphical user interface that facilitates scheduling of activities on a time line and affords a capability to automatically expand, decompose, and schedule activities.

  7. Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.

    PubMed

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy

    2012-11-01

    Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%), and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible automatically to generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.
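
    The abstract above describes data-to-text generation only at a high level. The minimal Python sketch below illustrates the general idea of rule- and template-based data-to-text summarization over time-series observations; the field names, thresholds, and phrasing are illustrative assumptions and not the BT-Nurse system's actual rules or data model.

```python
# Minimal, illustrative data-to-text sketch (not the BT-Nurse system):
# turn a short series of vital-sign readings into one summary sentence.

from statistics import mean

def summarize_vitals(readings, name="heart rate", unit="bpm",
                     low=100, high=180):
    """Map numeric readings to a short natural-language statement.

    `readings` is a list of (hour, value) tuples; the thresholds are
    hypothetical limits used only for illustration.
    """
    values = [v for _, v in readings]
    avg = mean(values)
    trend = "rising" if values[-1] > values[0] else "falling or stable"
    flags = [f"{v} {unit} at hour {h}" for h, v in readings
             if v < low or v > high]

    sentence = f"Mean {name} over the shift was {avg:.0f} {unit} ({trend})."
    if flags:
        sentence += " Out-of-range readings: " + "; ".join(flags) + "."
    return sentence

if __name__ == "__main__":
    hr = [(1, 152), (2, 149), (3, 188), (4, 161)]
    print(summarize_vitals(hr))
```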

  8. Automatic Query Formulations in Information Retrieval.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1983-01-01

    Introduces methods designed to reduce role of search intermediaries by generating Boolean search formulations automatically using term frequency considerations from natural language statements provided by system patrons. Experimental results are supplied and methods are described for applying automatic query formulation process in practice.…

  9. Payload accommodation and development planning tools - A Desktop Resource Leveling Model (DRLM)

    NASA Technical Reports Server (NTRS)

    Hilchey, John D.; Ledbetter, Bobby; Williams, Richard C.

    1989-01-01

    The Desktop Resource Leveling Model (DRLM) has been developed as a tool to rapidly structure and manipulate accommodation, schedule, and funding profiles for any kind of experiments, payloads, facilities, and flight systems or other project hardware. The model creates detailed databases describing 'end item' parameters, such as mass, volume, power requirements or costs and schedules for payload, subsystem, or flight system elements. It automatically spreads costs by calendar quarters and sums costs or accommodation parameters by total project, payload, facility, payload launch, or program phase. Final results can be saved or printed out, automatically documenting all assumptions, inputs, and defaults.

  10. Study on the Automatic Detection Method and System of Multifunctional Hydrocephalus Shunt

    NASA Astrophysics Data System (ADS)

    Sun, Xuan; Wang, Guangzhen; Dong, Quancheng; Li, Yuzhong

    2017-07-01

    Addressing the difficulties of micro-pressure detection and micro-flow control in the testing of hydrocephalus shunts, the principle of shunt performance detection was analyzed. In this study, the authors analyzed the principles underlying several items of shunt performance detection and used an advanced micro-pressure sensor and a micro-flow peristaltic pump to overcome the micro-pressure detection and micro-flow control problems. The study also integrated many common experimental procedures and successfully developed an automatic detection system covering the shunt performance detection functions, achieving testing with high precision, high efficiency, and automation.

  11. A project of upgrading the operations control system of the Hungarian electric power system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oroszki, L.; Kovacs, G.

    About 20 years ago an on-line EMS/SCADA system replaced the previously used off-line control system in the Hungarian power system. The system, which met the technological requirements of that time, has now become obsolete. A project started in 1995 by the Hungarian Power Companies, Ltd. (MVM Rt.), the regional utility companies and the power plant companies, with funding through a World Bank loan to cover international procurement, aims to upgrade that system into a complex, intelligent and state-of-the-art process control system. The new hierarchical system will rely on a distributed computer network structure, universally accepted hardware/software interface standards and communication protocols, and use hardware platform independent software. The automatic generation control, performed from the National Dispatch Centre, will have expanded functionality, the most important single item of which will be the inclusion of automatic voltage/var control. The upgrading project includes the replacement of the substation and power plant remote terminal units and the installation of a telecommunication network to provide this telecontrol system with the necessary communications links. The supply contracts for both the master station and the remote terminal unit parts were awarded to the winners of open international bidding processes. In the project implementation MVM has the overall responsibility and works with assistance from international and Hungarian engineering firms.

  12. Earth Science Datacasting v2.0

    NASA Technical Reports Server (NTRS)

    Bingham, Andrew W.; Deen, Robert G.; Hussey, Kevin J.; Stough, Timothy M.; McCleese, Sean W.; Toole, Nicholas T.

    2012-01-01

    The Datacasting software, which consists of a server and a client, has been developed as part of the Earth Science (ES) Datacasting project. The goal of ES Datacasting is to provide scientists the ability to automatically and continuously download Earth science data that meets a precise, predefined need, and then to instantaneously visualize it on a local computer. This is achieved by applying the concept of podcasting to deliver science data over the Internet using RSS (Really Simple Syndication) XML feeds. By extending the RSS specification, scientists can filter a feed and only download the files that are required for a particular application (for example, only files that contain information about a particular event, such as a hurricane or flood). The extension also provides the ability for the client to understand the format of the data and visualize the information locally. The server part enables a data provider to create and serve basic Datacasting (RSS-based) feeds. The user can subscribe to any number of feeds, view the information related to each item contained within a feed (including browse pre-made images), manually download files associated with items, and place these files in a local store. The client-server architecture enables users to: a) Subscribe and interpret multiple Datacasting feeds (same look and feel as a typical mail client), b) Maintain a list of all items within each feed, c) Enable filtering on the lists based on different metadata attributes contained within the feed (list will reference only data files of interest), d) Visualize the reference data and associated metadata, e) Download files referenced within the list, and f) Automatically download files as new items become available.

  13. MeSH indexing based on automatically generated summaries.

    PubMed

    Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto

    2013-06-26

    MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading.
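
    As a rough illustration of the pipeline described above (summarize the full text, then index the summary instead of the full text), the sketch below uses a simple frequency-based extractive summarizer as a stand-in. It is not MTI or the summarizers used in the study; the tokenization, stop list, and summary length are assumptions.

```python
# Toy extractive summarizer standing in for the summarization step
# described above; the real study used dedicated summarizers and MTI.

import re
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on",
        "with", "that", "are", "was", "we", "this", "by", "as"}

def summarize(full_text, n_sentences=3):
    """Score sentences by the frequency of their content words and
    return the top-n sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", full_text.strip())
    words = [w for w in re.findall(r"[a-z]+", full_text.lower())
             if w not in STOP]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z]+", sentence.lower())
                  if w not in STOP]
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)[:n_sentences]
    return " ".join(sentences[i] for i in sorted(ranked))

text = ("MEDLINE citations are indexed with MeSH. Full text improves recall "
        "but hurts precision. Summaries may balance recall and precision. "
        "This sentence is filler.")
# The returned summary would then be passed to an indexer (e.g. MTI)
# in place of the full text, as the study proposes.
print(summarize(text, n_sentences=2))
```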

  14. Automatic generation of user material subroutines for biomechanical growth analysis.

    PubMed

    Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato

    2010-10-01

    The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.

  15. Automatic Dance Lesson Generation

    ERIC Educational Resources Information Center

    Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun

    2012-01-01

    In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation…

  16. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

  17. Automatic Generation of Tests from Domain and Multimedia Ontologies

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…

  18. Automatic Generation and Ranking of Questions for Critical Review

    ERIC Educational Resources Information Center

    Liu, Ming; Calvo, Rafael A.; Rus, Vasile

    2014-01-01

    Critical review skill is one important aspect of academic writing. Generic trigger questions have been widely used to support this activity. When students have a concrete topic in mind, trigger questions are less effective if they are too general. This article presents a learning-to-rank based system which automatically generates specific trigger…

  19. Use of an Automatic Problem Generator to Teach Basic Skills in a First Course in Assembly Language.

    ERIC Educational Resources Information Center

    Benander, Alan; And Others

    1989-01-01

    Discussion of the use of computer aided instruction (CAI) and instructional software in college level courses highlights an automatic problem generator, AUTOGEN, that was written for computer science students learning assembly language. Design of the software is explained, and student responses are reported. (nine references) (LRW)

  20. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    NASA Astrophysics Data System (ADS)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.

  1. Automatic analysis of medical dialogue in the home hemodialysis domain: structure induction and summarization.

    PubMed

    Lacson, Ronilda C; Barzilay, Regina; Long, William J

    2006-10-01

    Spoken medical dialogue is a valuable source of information for patients and caregivers. This work presents a first step towards automatic analysis and summarization of spoken medical dialogue. We first abstract a dialogue into a sequence of semantic categories using linguistic and contextual features integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). We then describe and implement a summarizer that utilizes this automatically induced structure. Our evaluation results indicate that automatically generated summaries exhibit high resemblance to summaries written by humans. In addition, task-based evaluation shows that physicians can reasonably answer questions related to patient care by looking at the automatically generated summaries alone, in contrast to the physicians' performance when they were given summaries from a naïve summarizer (p<0.05). This work demonstrates the feasibility of automatically structuring and summarizing spoken medical dialogue.

  2. To develop a flying fish egg inspection system by a digital imaging base system

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jen; Jywe, Wenyuh; Hsieh, Tung-Hsien; Chen, Chien Hung

    2015-07-01

    This paper develops an automatic optical inspection system for flying fish egg quality inspection. The system consists of a two-axis stage, a digital camera, a lens, an LED light source, a vacuum generator, a tube and a tray. It can automatically find a particle on the flying fish egg tray, use the stage to drive the tube onto the particle, and then use the straw and vacuum generator to pick the particle up. The picking rate of the system is about 30 particles per minute.

  3. The development of a computer assisted instruction and assessment system in pharmacology.

    PubMed

    Madsen, B W; Bell, R C

    1977-01-01

    We describe the construction of a computer based system for instruction and assessment in pharmacology, utilizing a large bank of multiple choice questions. Items were collected from many sources, edited, and coded for student suitability, topic, taxonomy, difficulty, and text references. Students reserve a time during the day and specify the type of test desired, and questions are presented randomly from the subset satisfying their criteria. Answers are scored after each question and a summary is given at the end of every test; details on item performance are recorded automatically. The biggest hurdle in implementation was the assembly, review, classification and editing of items, while the programming was relatively straightforward. A number of modifications had to be made to the initial plans, and changes will undoubtedly continue with further experience. When fully operational the system will possess a number of advantages, including elimination of test preparation, editing and marking; facilitated item review opportunities; and increased objectivity, feedback and flexibility, with decreased anxiety in students.
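
    The delivery logic described above (filter the item bank by the student's criteria, present items at random, score each answer, and log item performance) can be sketched as follows; the item fields, criteria names, and sample items are hypothetical.

```python
# Hypothetical sketch of criteria-based random test delivery from an
# item bank, in the spirit of the system described above.

import random

ITEM_BANK = [
    {"id": 1, "topic": "pharmacokinetics", "difficulty": "easy",
     "question": "...", "answer": "a"},
    {"id": 2, "topic": "pharmacokinetics", "difficulty": "hard",
     "question": "...", "answer": "c"},
    {"id": 3, "topic": "autonomic drugs", "difficulty": "easy",
     "question": "...", "answer": "b"},
]

def run_test(topic, difficulty, n_items, respond, log):
    """Present a random subset matching the student's criteria, score
    each answer, and record per-item performance in `log`."""
    pool = [it for it in ITEM_BANK
            if it["topic"] == topic and it["difficulty"] == difficulty]
    chosen = random.sample(pool, min(n_items, len(pool)))
    correct = 0
    for item in chosen:
        ok = respond(item) == item["answer"]       # ask the student
        correct += ok
        log.setdefault(item["id"], []).append(ok)  # item performance record
    return correct, len(chosen)

score, total = run_test("pharmacokinetics", "easy", 1,
                        respond=lambda item: "a", log={})
print(score, "of", total)
```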

  4. An analysis of the optimal multiobjective inventory clustering decision with small quantity and great variety inventory by applying a DPSO.

    PubMed

    Wang, Shen-Tsu; Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of item varieties in its inventory, a single management method may not be feasible. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically grouped into a near-optimal number of clusters. The resulting clusters should satisfy the inventory objective equation, which combines objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates these four objectives into a single multiobjective equation and inputs the enterprise's actual inventory items into the DPSO. In comparison with other clustering methods, the proposed method can consider different objectives simultaneously and obtain an overall better solution, yielding better convergence results and inventory decisions.
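
    The study combines total cost, backorder rate, demand relevance, and inventory turnover into one multiobjective equation evaluated by the DPSO. The weighted-sum sketch below shows one common way such objectives are combined into a single fitness value; the weights, normalization, and per-cluster statistics are assumptions, not the authors' actual formulation.

```python
# Illustrative weighted-sum fitness for an inventory clustering, as one
# way to combine the four objectives named above (assumed formulation).

def cluster_fitness(clusters, weights=(0.4, 0.2, 0.2, 0.2)):
    """`clusters` maps a cluster id to a dict of normalized statistics
    in [0, 1]: total_cost, backorder_rate, demand_relevance, turnover.
    Lower fitness is better, so benefits enter with a negative sign."""
    w_cost, w_back, w_rel, w_turn = weights
    total = 0.0
    for stats in clusters.values():
        total += (w_cost * stats["total_cost"]
                  + w_back * stats["backorder_rate"]
                  - w_rel * stats["demand_relevance"]
                  - w_turn * stats["turnover"])
    return total

# A DPSO would move candidate cluster assignments through the search
# space and keep the assignment that minimizes cluster_fitness().
example = {0: {"total_cost": 0.6, "backorder_rate": 0.1,
               "demand_relevance": 0.8, "turnover": 0.7},
           1: {"total_cost": 0.3, "backorder_rate": 0.4,
               "demand_relevance": 0.5, "turnover": 0.2}}
print(cluster_fitness(example))
```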

  5. [Alexithymia and automatic activation of emotional-evaluative information].

    PubMed

    Suslow, T; Arolt, V; Junghanns, K

    1998-05-01

    The emotional valence of stimuli appears to be stored in the associative network and to be activated automatically upon mere observation of a stimulus. A principal characteristic of alexithymia is difficulty in symbolizing emotions verbally. The present study examines the relationship between the dimensions of the alexithymia construct and emotional priming effects in a word-word paradigm. The 20-Item Toronto Alexithymia Scale was administered to 32 subjects along with two word reading tasks as measures of emotional and semantic priming effects. The subscale "difficulty describing feelings" correlated, as expected, negatively with the negative inhibition effect. The subscale "externally oriented thinking" tended to correlate negatively with the negative facilitation effect. Thus, these dimensions of alexithymia are inversely related to the degree of automatic emotional priming. In summary, there is evidence for an impaired structural integration of emotion and language in persons with difficulties in describing feelings. Poor "symbolization" of emotions in alexithymia is discussed from a cognitive perspective.

  6. Natural language processing of spoken diet records (SDRs).

    PubMed

    Lacson, Ronilda; Long, William

    2006-01-01

    Dietary assessment is a fundamental aspect of nutritional evaluation that is essential for management of obesity as well as for assessing dietary impact on chronic diseases. Various methods have been used for dietary assessment including written records, 24-hour recalls, and food frequency questionnaires. The use of mobile phones to provide real-time dietary records provides potential advantages for accessibility, ease of use and automated documentation. However, understanding even a perfect transcript of spoken dietary records (SDRs) is challenging for people. This work presents a first step towards automatic analysis of SDRs. Our approach consists of four steps - identification of food items, identification of food quantifiers, classification of food quantifiers and temporal annotation. Our method enables automatic extraction of dietary information from SDRs, which in turn allows automated mapping to a Diet History Questionnaire dietary database. Our model has an accuracy of 90%. This work demonstrates the feasibility of automatically processing SDRs.
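
    The four-step extraction pipeline described above (identify food items, identify food quantifiers, classify the quantifiers, and add temporal annotation) can be illustrated with a small rule-based sketch; the lexicons and patterns below are toy assumptions and not the authors' model.

```python
# Toy rule-based sketch of the four extraction steps for a spoken diet
# record; the lexicons are illustrative, not the study's actual model.

import re

FOODS = {"coffee", "toast", "apple", "rice", "chicken"}
UNITS = {"cup": "volume", "cups": "volume", "slice": "count",
         "slices": "count", "bowl": "volume", "piece": "count"}
TIMES = {"breakfast", "lunch", "dinner", "morning", "evening"}

def parse_sdr(utterance):
    tokens = re.findall(r"[a-z]+|\d+", utterance.lower())
    record = []
    for i, tok in enumerate(tokens):
        if tok not in FOODS:                       # step 1: food item
            continue
        qty, qty_class = None, None
        # step 2: look a few tokens back for a unit ("2 slices of toast")
        for j in range(max(0, i - 3), i):
            if tokens[j] in UNITS:
                amount = tokens[j - 1] if j > 0 and tokens[j - 1].isdigit() else "1"
                qty = f"{amount} {tokens[j]}"
                qty_class = UNITS[tokens[j]]       # step 3: classify quantifier
        when = next((t for t in tokens if t in TIMES), None)  # step 4: time
        record.append({"food": tok, "quantity": qty,
                       "unit_class": qty_class, "time": when})
    return record

print(parse_sdr("I had 2 slices of toast and a cup of coffee at breakfast"))
```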

  7. Psychometric Properties of the Children's Automatic Thoughts Scale (CATS) in Chinese Adolescents.

    PubMed

    Sun, Ling; Rapee, Ronald M; Tao, Xuan; Yan, Yulei; Wang, Shanshan; Xu, Wei; Wang, Jianping

    2015-08-01

    The Children's Automatic Thoughts Scale (CATS) is a 40-item self-report questionnaire designed to measure children's negative thoughts. This study examined the psychometric properties of the Chinese translation of the CATS. Participants included 1,993 students (average age = 14.73) from three schools in Mainland China. A subsample of the participants was retested after 4 weeks. Confirmatory factor analysis replicated the original structure with four first-order factors loading on a single higher-order factor. The convergent and divergent validity of the CATS were good. The CATS demonstrated high internal consistency and test-retest reliability. Boys scored higher on the CATS-hostility subscale, but there were no other gender differences. Older adolescents (15-18 years) reported higher scores than younger adolescents (12-14 years) on the total score and on the physical threat, social threat, and hostility subscales. The CATS proved to be a reliable and valid measure of automatic thoughts in Chinese adolescents.

  8. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8-months post-injury, as well as 40 age and education matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration were also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8-months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance or more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  9. Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription

    NASA Astrophysics Data System (ADS)

    Kabir, A.; Barker, J.; Giurgiu, M.

    2010-09-01

    An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is especially useful for generating robust automatic transcriptions and can produce phone-level transcriptions using speaker-independent as well as speaker-dependent models without manual intervention. The system is based on the standard Hidden Markov Model (HMM) approach and was successfully tested on a large audiovisual speech corpus, the GRID corpus. One of the most powerful features of the toolbox is its increased flexibility in speech processing: the speech community can import the automatic transcription generated by the HMM Toolkit (HTK) into the popular transcription software PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates by an average of 20 ms from manual transcription.

  10. 19 CFR 19.35 - Establishment of duty-free stores (Class 9 warehouses).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... merchandise departs the Customs territory; (2) Within 25 statute miles from the exit point through which a... on-hand balance of each inventory item in each storage location, sales room, crib, mobile crib... centralized up to the point where a sale is made so as to automatically reduce the sale quantity by location...

  11. Real Tweets, Fake News … and More from the NEJHE Beat …

    ERIC Educational Resources Information Center

    Harney, John O.

    2017-01-01

    Twitter is the closest thing that New England Higher Education has to a news service. Every New England Journal of Higher Education (NEJHE) item automatically posts to Twitter. But NEJHE also uses Twitter to disseminate relevant stories from outside. Not so much communicating personally, but aggregating interesting news or opinion from elsewhere,…

  12. Coding hazardous tree failures for a data management system

    Treesearch

    Lee A. Paine

    1978-01-01

    Codes for automatic data processing (ADP) are provided for hazardous tree failure data submitted on Report of Tree Failure forms. Definitions of data items and suggestions for interpreting ambiguously worded reports are also included. The manual is intended to insure the production of accurate and consistent punched ADP cards which are used in transfer of the data to...

  13. 10 CFR Appendix O to Part 110 - Illustrative List of Fuel Element Fabrication Plant Equipment and Components Under NRC's Export...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... performance and safety during reactor operation. Also, in all cases precise control of processes, procedures... performance. (a) Items that are considered especially designed or prepared for the fabrication of fuel... pellets; (2) Automatic welding machines especially designed or prepared for welding end caps onto the fuel...

  14. 2D automatic body-fitted structured mesh generation using advancing extraction method

    NASA Astrophysics Data System (ADS)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsula or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain in convex polygon shape in each level can be extracted in an advancing scheme. In this paper, several examples were used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  15. Automatic control system generation for robot design validation

    NASA Technical Reports Server (NTRS)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating robot design parameters of the robotic system with an analysis tool using both the generic robot description and the control system.

  16. Automated UMLS-Based Comparison of Medical Forms

    PubMed Central

    Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard

    2013-01-01

    Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far – to our knowledge – an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and, especially, items with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical if item name, concept code and value domain are the same. Two items are called matching if only concept code and value domain are the same. Two items are called similar if their concept codes are the same but the value domains are different. Based on these definitions an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms provide a view of clusters of similar forms. The approach is scalable for a large set of real medical forms. PMID:23861827
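
    The comparison rules stated in the abstract (identical = same item name, concept code, and value domain; matching = same concept code and value domain; similar = same concept code only) translate directly into a small function. The item representation below is an assumption for illustration and is not the compareODM package's actual data structure.

```python
# Direct transcription of the three comparison categories defined above;
# the tuple representation of an item is assumed for illustration only.

from typing import NamedTuple, Optional

class Item(NamedTuple):
    name: str          # item name as shown on the form
    concept: str       # UMLS concept unique identifier (CUI)
    domain: str        # value domain, e.g. "boolean", "integer", "text"

def compare_items(a: Item, b: Item) -> Optional[str]:
    if a.concept != b.concept:
        return None                    # no semantic relation detected
    if a.domain != b.domain:
        return "similar"               # same concept, different value domain
    if a.name != b.name:
        return "matching"              # same concept and domain, name differs
    return "identical"                 # name, concept and domain all equal

# CUI shown is for illustration only.
print(compare_items(Item("Body weight", "C0005910", "float"),
                    Item("Weight", "C0005910", "float")))     # -> matching
```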

  17. Semi-Automatic Modelling of Building FAÇADES with Shape Grammars Using Historic Building Information Modelling

    NASA Astrophysics Data System (ADS)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  18. LIMSI @ 2014 Clinical Decision Support Track

    DTIC Science & Technology

    2014-11-01

    Query expansion (for both MeSH and BoW runs) was based on the automatic generation of disease hypotheses, for which we used data from OrphaNet [4] and the Disease Symptom Knowledge… with the MeSH terms of the top 5 disease hypotheses generated for the case reports. Compared to the other participants we achieved low scores… clinical question types.

  19. Testing the Invariance of the National Health and Nutrition Examination Survey's Sexual Behavior Questionnaire Across Gender, Ethnicity/Race, and Generation.

    PubMed

    Zhou, Anne Q; Hsueh, Loretta; Roesch, Scott C; Vaughn, Allison A; Sotelo, Frank L; Lindsay, Suzanne; Klonoff, Elizabeth A

    2016-02-01

    Federal and state policies are based on data from surveys that examine sexual-related cognitions and behaviors through self-reports of attitudes and actions. No study has yet examined their factorial invariance--specifically, whether the relationship between items assessing sexual behavior and their underlying construct differ depending on gender, ethnicity/race, or age. This study examined the factor structure of four items from the sexual behavior questionnaire part of the National Health and Nutrition Examination Survey (NHANES). As NHANES provided different versions of the survey per gender, invariance was tested across gender to determine whether subsequent tests across ethnicity/race and generation could be done across gender. Items were not invariant across gender groups so data files for women and men were not collapsed. Across ethnicity/race for both genders, and across generation for women, items were configurally invariant, and exhibited metric invariance across Latino/Latina and Black participants for both genders. Across generation for men, the configural invariance model could not be identified so the baseline models were examined. The four item one factor model fit well for the Millennial and GenerationX groups but was a poor fit for the baby boomer and silent generation groups, suggesting that gender moderated the invariance across generation. Thus, comparisons between ethnic/racial and generational groups should not be made between the genders or even within gender. Findings highlight the need for programs and interventions that promote a more inclusive definition of "having had sex."

  20. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such automatically generated point cloud on a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point to point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of roughness histograms are calculated as (μ1 = 0.44 m., σ1 = 0.071 m.) and (μ2 = 0.025 m., σ2 = 0.037 m.) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
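
    The evaluation quantities reported above (the fraction of outliers and the mean point-to-point distance from the iPhone cloud to the TLS reference) can be computed with nearest-neighbour queries. The sketch below is a generic calculation using SciPy; the 0.5 m outlier threshold is an assumption, since the paper's exact criterion is not given in the abstract.

```python
# Generic point-to-point accuracy check of a test cloud against a
# reference cloud; the outlier threshold is an assumed value.

import numpy as np
from scipy.spatial import cKDTree

def cloud_accuracy(test_xyz, reference_xyz, outlier_threshold=0.5):
    """Return (outlier percentage, mean inlier distance in metres).

    Both inputs are (N, 3) arrays of x, y, z coordinates.
    """
    tree = cKDTree(reference_xyz)
    distances, _ = tree.query(test_xyz, k=1)   # nearest reference point per test point
    outliers = distances > outlier_threshold
    mean_inlier = distances[~outliers].mean() if (~outliers).any() else np.nan
    return 100.0 * outliers.mean(), mean_inlier

# Example with random data standing in for the iPhone and TLS clouds.
rng = np.random.default_rng(0)
iphone = rng.normal(size=(1000, 3))
tls = iphone + rng.normal(scale=0.05, size=(1000, 3))
print(cloud_accuracy(iphone, tls))
```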

  1. Automatic Implementation of Ttethernet-Based Time-Triggered Avionics Applications

    NASA Astrophysics Data System (ADS)

    Gorcitz, Raul Adrian; Carle, Thomas; Lesens, David; Monchaux, David; Potop-Butucaruy, Dumitru; Sorel, Yves

    2015-09-01

    The design of safety-critical embedded systems such as those used in avionics still involves largely manual phases. But in avionics the definition of standard interfaces embodied in standards such as ARINC 653 or TTEthernet should allow the definition of fully automatic code generation flows that reduce the costs while improving the quality of the generated code, much like compilers have done when replacing manual assembly coding. In this paper, we briefly present such a fully automatic implementation tool, called Lopht, for ARINC653-based time-triggered systems, and then explain how it is currently extended to include support for TTEthernet networks.

  2. Automated platform for designing multiple robot work cells

    NASA Astrophysics Data System (ADS)

    Osman, N. S.; Rahman, M. A. A.; Rahman, A. A. Abdul; Kamsani, S. H.; Bali Mohamad, B. M.; Mohamad, E.; Zaini, Z. A.; Rahman, M. F. Ab; Mohamad Hatta, M. N. H.

    2017-06-01

    Designing multiple robot work cells is a knowledge-intensive, intricate, and time-consuming process. This paper elaborates the development of a computer-aided design program for generating multiple robot work cells through a user-friendly interface. The primary purpose of this work is to provide a fast and easy platform that reduces cost and human involvement and requires minimal trial-and-error adjustment. The automated platform is constructed based on the variant-shaped configuration concept with its mathematical model. A robot work cell layout, the system components, and the construction procedure of the automated platform are discussed in this paper; integration of these items automatically provides the optimum robot work cell design according to the information set by the user. The system is implemented on top of CATIA V5 software and utilises its Part Design, Assembly Design, and Macro tools. The current outcomes of this work provide a basis for future investigation in developing a flexible configuration system for multiple robot work cells.

  3. SVGMap: configurable image browser for experimental data.

    PubMed

    Rafael-Palou, Xavier; Schroeder, Michael P; Lopez-Bigas, Nuria

    2012-01-01

    Spatial data visualization is very useful to represent biological data and quickly interpret the results. For instance, to show the expression pattern of a gene in different tissues of a fly, an intuitive approach is to draw the fly with the corresponding tissues and color the expression of the gene in each of them. However, the creation of these visual representations may be a burdensome task. Here we present SVGMap, a java application that automatizes the generation of high-quality graphics for singular data items (e.g. genes) and biological conditions. SVGMap contains a browser that allows the user to navigate the different images created and can be used as a web-based results publishing tool. SVGMap is freely available as precompiled java package as well as source code at http://bg.upf.edu/svgmap. It requires Java 6 and any recent web browser with JavaScript enabled. The software can be run on Linux, Mac OS X and Windows systems. nuria.lopez@upf.edu

  4. Enhancing the Automatic Generation of Hints with Expert Seeding

    ERIC Educational Resources Information Center

    Stamper, John; Barnes, Tiffany; Croy, Marvin

    2011-01-01

    The Hint Factory is an implementation of our novel method to automatically generate hints using past student data for a logic tutor. One disadvantage of the Hint Factory is the time needed to gather enough data on new problems in order to provide hints. In this paper we describe the use of expert sample solutions to "seed" the hint generation…

  5. The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library

    NASA Astrophysics Data System (ADS)

    Demidova, Anastasya; Gevorkyan, Migran; Kulyabov, Dmitry; Korolkova, Anna; Sevastianov, Leonid

    2018-02-01

    SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from the schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological and technical models. This paper describes the automatic generation algorithm with an emphasis on application details.
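
    To make the idea of deriving differential equations from a kinetic interaction scheme concrete, the sketch below builds mass-action ODE right-hand sides with SymPy from a list of reactions. This is a generic construction under the law of mass action, not the authors' stochastization algorithm, which additionally derives the stochastic terms.

```python
# Generic mass-action ODE generation from a kinetic scheme with SymPy;
# this illustrates the deterministic part only, not the authors' full
# stochastization algorithm.

import sympy as sp

def kinetic_to_odes(species, reactions):
    """`reactions` is a list of (rate_constant_name, reactants, products),
    where reactants/products map species names to stoichiometric
    coefficients. Returns {species symbol: d/dt expression}."""
    syms = {s: sp.Symbol(s, positive=True) for s in species}
    rhs = {s: sp.Integer(0) for s in species}
    for k_name, reactants, products in reactions:
        k = sp.Symbol(k_name, positive=True)
        rate = k * sp.Mul(*[syms[s] ** n for s, n in reactants.items()])
        for s, n in reactants.items():
            rhs[s] -= n * rate
        for s, n in products.items():
            rhs[s] += n * rate
    return {syms[s]: sp.simplify(expr) for s, expr in rhs.items()}

# Lotka-Volterra style scheme: X -> 2X, X + Y -> 2Y, Y -> 0
odes = kinetic_to_odes(
    ["X", "Y"],
    [("k1", {"X": 1}, {"X": 2}),
     ("k2", {"X": 1, "Y": 1}, {"Y": 2}),
     ("k3", {"Y": 1}, {})])
for s, expr in odes.items():
    print(f"d{s}/dt = {expr}")
```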

  6. Design and development of a prototypical software for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small- and medium-sized enterprises (SME)

    NASA Astrophysics Data System (ADS)

    Möller, Thomas; Bellin, Knut; Creutzburg, Reiner

    2015-03-01

    The aim of this paper is to show the recent progress in the design and prototypical development of a software suite Copra Breeder* for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small and medium-sized enterprises.

  7. Automatic rule generation for high-level vision

    NASA Technical Reports Server (NTRS)

    Rhee, Frank Chung-Hoon; Krishnapuram, Raghu

    1992-01-01

    Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.

  8. Regulation and Measurement of the Heat Generated by Automatic Tooth Preparation in a Confined Space.

    PubMed

    Yuan, Fusong; Zheng, Jianqiao; Sun, Yuchun; Wang, Yong; Lyu, Peijun

    2017-06-01

    The aim of this study was to assess and regulate heat generation in the dental pulp cavity and the circumambient temperature around a tooth during laser ablation with a femtosecond laser in a confined space. The automatic tooth preparation technique is one of the innovations to traditional oral clinical technology: a robot controls an ultrashort-pulse laser to complete three-dimensional tooth preparation automatically in a confined space, and temperature control is the main measure for protecting the dental nerve. Ten tooth specimens were irradiated with a femtosecond laser controlled by a robot in a confined space to generate 10 tooth preparations. During the process, four thermocouple sensors were used to record the pulp cavity and circumambient environment temperatures with and without air cooling. A statistical analysis of the temperatures was performed between the conditions with and without air cooling (p < 0.05). The recordings showed that the temperature with air cooling was lower than that without air cooling and that the heat generated in the pulp cavity was below the threshold for dental pulp damage. These results indicate that femtosecond laser ablation with air cooling might be an appropriate method for automatic tooth preparation.

  9. MeSH indexing based on automatically generated summaries

    PubMed Central

    2013-01-01

    Background MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading. PMID:23802936

  10. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  11. Changing value through cued approach: An automatic mechanism of behavior change

    PubMed Central

    Schonberg, Tom; Bakkour, Akram; Hover, Ashleigh M.; Mumford, Jeanette A.; Nagar, Lakshya; Perez, Jacob; Poldrack, Russell A.

    2014-01-01

    It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e. cue-approach training). Follow-up tests show that the effects of this pairing on choice lasted at least two months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex. PMID:24609465

  12. Formalizing Evidence Type Definitions for Drug-Drug Interaction Studies to Improve Evidence Base Curation.

    PubMed

    Utecht, Joseph; Brochhausen, Mathias; Judkins, John; Schneider, Jodi; Boyce, Richard D

    2017-01-01

    In this research we aim to demonstrate that an ontology-based system can categorize potential drug-drug interaction (PDDI) evidence items into complex types based on a small set of simple questions. Such a method could increase the transparency and reliability of PDDI evidence evaluation, while also reducing the variations in content and seriousness ratings present in PDDI knowledge bases. We extended the DIDEO ontology with 44 formal evidence type definitions. We then manually annotated the evidence types of 30 evidence items. We tested an RDF/OWL representation of answers to a small number of simple questions about each of these 30 evidence items and showed that automatic inference can determine the detailed evidence types based on this small number of simpler questions. These results show proof-of-concept for a decision support infrastructure that frees the evidence evaluator from mastering relatively complex written evidence type definitions.

  13. The five item Barthel index

    PubMed Central

    Hobart, J; Thompson, A

    2001-01-01

    OBJECTIVES—Routine data collection is now considered mandatory. Therefore, staff rated clinical scales that consist of multiple items should have the minimum number of items necessary for rigorous measurement. This study explores the possibility of developing a short form Barthel index, suitable for use in clinical trials, epidemiological studies, and audit, that satisfies criteria for rigorous measurement and is psychometrically equivalent to the 10 item instrument.
METHODS—Data were analysed from 844 consecutive admissions to a neurological rehabilitation unit in London. Random half samples were generated. Short forms were developed in one sample (n=419), by selecting items with the best measurement properties, and tested in the other (n=418). For each of the 10 items of the BI, item total correlations and effect sizes were computed and rank ordered. The best items were defined as those with the lowest cross product of these rank orderings. The acceptability, reliability, validity, and responsiveness of three short form BIs (five, four, and three item) were determined and compared with the 10 item BI. Agreement between scores generated by short forms and 10 item BI was determined using intraclass correlation coefficients and the method of Bland and Altman.
RESULTS—The five best items in this sample were transfers, bathing, toilet use, stairs, and mobility. Of the three short forms examined, the five item BI had the best measurement properties and was psychometrically equivalent to the 10 item BI. Agreement between scores generated by the two measures for individual patients was excellent (ICC=0.90) but not identical (limits of agreement=1.84±3.84).
CONCLUSIONS—The five item short form BI may be a suitable outcome measure for group comparison studies in comparable samples. Further evaluations are needed. Results demonstrate a fundamental difference between assessment and measurement and the importance of incorporating psychometric methods in the development and evaluation of health measures.

 PMID:11459898
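    As a side note on the agreement statistics reported above, the following minimal sketch computes Bland-Altman 95% limits of agreement (mean difference plus or minus 1.96 standard deviations of the differences) for paired scale scores; the score arrays are hypothetical, not the study data.

```python
# Sketch: Bland-Altman limits of agreement for paired scale scores.
# The score arrays below are hypothetical, not the study data.
import numpy as np

full_form = np.array([18, 15, 20, 12, 17, 19, 14, 16], dtype=float)
short_form = np.array([17, 16, 20, 11, 18, 19, 13, 15], dtype=float)

diff = short_form - full_form
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)

# 95% limits of agreement: mean difference +/- 1.96 * SD of differences
loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
print(f"bias = {mean_diff:.2f}, limits of agreement = [{loa_low:.2f}, {loa_high:.2f}]")
```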

  14. Processing Strategy and PI Effects in Recognition Memory of Word Lists.

    ERIC Educational Resources Information Center

    Hodge, Milton H.; Britton, Bruce K.

    Previous research by A. I. Schulman argued that an observed systematic decline in recognition memory in long word lists was due to the build-up of input and output proactive interference (PI). It also suggested that input PI resulted from process automatization; that is, each list item was processed or encoded in much the same way, producing a set…

  15. Relative Recency Judgments in Learning Disabled Children: A Semi-Automatic Process.

    ERIC Educational Resources Information Center

    Stein, Debra K.; And Others

    The ability of 20 learning disabled (LD) and 20 non-LD students (mean age of 9 years) to process temporal order information was assessed by employing a relative recency judgment task. Ss were administered lists composed of pictures of everyday objects and were then asked to indicate which item appeared latest on the list (that is, most recently).…

  16. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the 'randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
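    A minimal sketch of the 'calculated data' step is given below, assuming NumPy/SciPy in place of MATLAB: NTCP samples are drawn around a stand-in fitted curve with a standard deviation of 1%, and their probability density is estimated with a Gaussian kernel. The fitted-curve function and the chosen V%ratio are placeholders, not the clinical fit from the study.

```python
# Sketch of the "calculated data" generation and kernel density step described
# above, using NumPy/SciPy in place of MATLAB's randn. The fitted NTCP-vs-V%ratio
# curve below is a hypothetical placeholder, not the clinical fit from the study.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def fitted_ntcp(v_ratio_percent: float) -> float:
    """Placeholder for the clinically fitted NTCP curve (monotone in V%ratio)."""
    return 0.05 + 0.6 * (v_ratio_percent / 100.0)

v_ratio = 30.0                         # percent overlap of rectum with PTV
n_samples = 500
# random NTCPs whose mean lies on the fitted curve, SD = 1 percentage point
ntcp_samples = fitted_ntcp(v_ratio) + rng.normal(0.0, 0.01, n_samples)

# probability density function of the rectal NTCP via a Gaussian kernel
pdf = gaussian_kde(ntcp_samples)
grid = np.linspace(ntcp_samples.min(), ntcp_samples.max(), 200)
print("NTCP at density peak:", grid[np.argmax(pdf(grid))])
```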

  17. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.

  18. A knowledge-base generating hierarchical fuzzy-neural controller.

    PubMed

    Kandadai, R M; Tien, J M

    1997-01-01

    We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.

  19. The initial development of the WebMedQual scale: domain assessment of the construct of quality of health web sites.

    PubMed

    Provost, Mélanie; Koompalum, Dayin; Dong, Diane; Martin, Bradley C

    2006-01-01

    To develop a comprehensive instrument assessing quality of health-related web sites. Phase I consisted of a literature review to identify constructs thought to indicate web site quality and to identify items. During content analysis, duplicate items were eliminated and items that were not clear, meaningful, or measurable were reworded or removed. Some items were generated by the authors. In Phase II, a panel of six healthcare and MIS reviewers was convened to assess each item for its relevance and importance to the construct and to assess item clarity and measurement feasibility. Three hundred and eighty-four items were generated from 26 sources. The initial content analysis reduced the scale to 104 items. Four of the six expert reviewers responded; concordance on the relevance, importance, and measurement feasibility of each item was high, with either three or all four raters agreeing on 76-85% of items. Based on the panel ratings, 9 items were removed, 3 added, and 10 revised. The WebMedQual consists of 8 categories, 8 sub-categories, 95 items and 3 supplemental items to assess web site quality. The constructs are: content (19 items), authority of source (18 items), design (19 items), accessibility and availability (6 items), links (4 items), user support (9 items), confidentiality and privacy (17 items), and e-commerce (6 items). The "WebMedQual" represents a first step toward a comprehensive and standard quality assessment of health web sites. This scale will allow relatively easy assessment of quality with possible numeric scoring.

  20. Creating a medical dictionary using word alignment: the influence of sources and resources.

    PubMed

    Nyström, Mikael; Merkel, Magnus; Petersson, Håkan; Ahlfeldt, Hans

    2007-11-23

    Automatic word alignment of parallel texts with the same content in different languages is used, among other things, to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH and how the terminology systems and type of resources influence the quality. We automatically word aligned the terminology systems using static resources such as dictionaries, statistical resources such as statistically derived dictionaries, and training resources generated from manual word alignment. We varied which parts of the terminology systems we used to generate the resources, which parts we word aligned, and which types of resources we used in the alignment process, in order to explore the influence the different terminology systems and resources have on recall and precision. After the analysis, we used the best configuration of the automatic word alignment for generation of candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. The results indicate that more resources and resource types give better results, but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10, and resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static, statistical, and training resources, and noticeably better results than a union of static and statistical resources. The verified English-Swedish dictionary contains 24,000 term pairs in base forms. More resources give better results in the automatic word alignment, but some resources give only small improvements. The most important type of resource is training, and the most general resources were generated from ICD-10.
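    The sketch below is not the authors' alignment pipeline; it only illustrates, with a generic Dice-coefficient co-occurrence score, how candidate term pairs can be proposed from parallel rubrics before manual verification. The toy rubrics are hypothetical.

```python
# Generic illustration (not the authors' pipeline): score candidate English-Swedish
# term pairs from parallel rubrics by co-occurrence using the Dice coefficient.
from collections import defaultdict
from itertools import product

# toy parallel rubrics (hypothetical examples)
parallel = [
    ("acute appendicitis", "akut appendicit"),
    ("acute bronchitis", "akut bronkit"),
    ("chronic bronchitis", "kronisk bronkit"),
]

count_en, count_sv, count_pair = defaultdict(int), defaultdict(int), defaultdict(int)
for en, sv in parallel:
    en_tokens, sv_tokens = set(en.split()), set(sv.split())
    for tok in en_tokens:
        count_en[tok] += 1
    for tok in sv_tokens:
        count_sv[tok] += 1
    for pair in product(en_tokens, sv_tokens):
        count_pair[pair] += 1

def dice(pair):
    en, sv = pair
    return 2 * count_pair[pair] / (count_en[en] + count_sv[sv])

# best Swedish candidate for each English token
for en in count_en:
    best = max(count_sv, key=lambda sv: dice((en, sv)))
    print(f"{en:12s} -> {best:10s} (Dice = {dice((en, best)):.2f})")
```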

  1. Creating a medical dictionary using word alignment: The influence of sources and resources

    PubMed Central

    Nyström, Mikael; Merkel, Magnus; Petersson, Håkan; Åhlfeldt, Hans

    2007-01-01

    Background: Automatic word alignment of parallel texts with the same content in different languages is used, among other things, to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH and how the terminology systems and type of resources influence the quality. Methods: We automatically word aligned the terminology systems using static resources such as dictionaries, statistical resources such as statistically derived dictionaries, and training resources generated from manual word alignment. We varied which parts of the terminology systems we used to generate the resources, which parts we word aligned, and which types of resources we used in the alignment process, in order to explore the influence the different terminology systems and resources have on recall and precision. After the analysis, we used the best configuration of the automatic word alignment for generation of candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. Results: The results indicate that more resources and resource types give better results, but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10, and resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static, statistical, and training resources, and noticeably better results than a union of static and statistical resources. The verified English-Swedish dictionary contains 24,000 term pairs in base forms. Conclusion: More resources give better results in the automatic word alignment, but some resources give only small improvements. The most important type of resource is training, and the most general resources were generated from ICD-10. PMID:18036221

  2. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
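    To make the forward-mode versus reverse-mode distinction concrete, the toy sketch below differentiates one scalar output with respect to many inputs via a single hand-written backward (adjoint) sweep and checks one component against a finite difference; it is a generic illustration, not ADJIFOR.

```python
# Toy illustration of the reverse-mode (adjoint) idea described above: the
# gradient of one scalar output with respect to many inputs is obtained from a
# single backward sweep. This is a generic sketch, not ADJIFOR itself.
import numpy as np

def forward(x):
    """Scalar objective of many design variables (stand-in for lift/drag)."""
    a = np.sin(x)          # intermediate stage 1
    b = a @ a              # intermediate stage 2 (scalar)
    return np.sqrt(b), (x, a, b)

def adjoint(tape):
    """Backward sweep: apply the chain rule in reverse through the tape."""
    x, a, b = tape
    dJ_db = 0.5 / np.sqrt(b)
    dJ_da = dJ_db * 2.0 * a
    dJ_dx = dJ_da * np.cos(x)
    return dJ_dx

x = np.linspace(0.1, 1.0, 1000)          # "hundreds to thousands" of variables
J, tape = forward(x)
grad = adjoint(tape)

# check one component against a finite difference
i, h = 3, 1e-6
x_pert = x.copy()
x_pert[i] += h
print(grad[i], (forward(x_pert)[0] - J) / h)
```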

  3. The development of automaticity in short-term memory search: Item-response learning and category learning.

    PubMed

    Cao, Rui; Nosofsky, Robert M; Shiffrin, Richard M

    2017-05-01

    In short-term-memory (STM)-search tasks, observers judge whether a test probe was present in a short list of study items. Here we investigated the long-term learning mechanisms that lead to the highly efficient STM-search performance observed under conditions of consistent-mapping (CM) training, in which targets and foils never switch roles across trials. In item-response learning, subjects learn long-term mappings between individual items and target versus foil responses. In category learning, subjects learn high-level codes corresponding to separate sets of items and learn to attach old versus new responses to these category codes. To distinguish between these 2 forms of learning, we tested subjects in categorized varied mapping (CV) conditions: There were 2 distinct categories of items, but the assignment of categories to target versus foil responses varied across trials. In cases involving arbitrary categories, CV performance closely resembled standard varied-mapping performance without categories and departed dramatically from CM performance, supporting the item-response-learning hypothesis. In cases involving prelearned categories, CV performance resembled CM performance, as long as there was sufficient practice or steps taken to reduce trial-to-trial category-switching costs. This pattern of results supports the category-coding hypothesis for sufficiently well-learned categories. Thus, item-response learning occurs rapidly and is used early in CM training; category learning is much slower but is eventually adopted and is used to increase the efficiency of search beyond that available from item-response learning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Examination of Polytomous Items' Psychometric Properties According to Nonparametric Item Response Theory Models in Different Test Conditions

    ERIC Educational Resources Information Center

    Sengul Avsar, Asiye; Tavsancil, Ezel

    2017-01-01

    This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three samples sizes (100, 250 and 500)--were generated by conducting 20…

  5. A Process for Reviewing and Evaluating Generated Test Items

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis

    2016-01-01

    Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs, but the current demand for items far exceeds the supply. Test items, as they are currently written, involve a process that is both time-consuming and expensive because each item is written,…

  6. The use of automatic programming techniques for fault tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  7. An empirical examination of the factor structure of compassion.

    PubMed

    Gu, Jenny; Cavanagh, Kate; Baer, Ruth; Strauss, Clara

    2017-01-01

    Compassion has long been regarded as a core part of our humanity by contemplative traditions, and in recent years, it has received growing research interest. Following a recent review of existing conceptualisations, compassion has been defined as consisting of the following five elements: 1) recognising suffering, 2) understanding the universality of suffering in human experience, 3) feeling moved by the person suffering and emotionally connecting with their distress, 4) tolerating uncomfortable feelings aroused (e.g., fear, distress) so that we remain open to and accepting of the person suffering, and 5) acting or being motivated to act to alleviate suffering. As a prerequisite to developing a high quality compassion measure and furthering research in this field, the current study empirically investigated the factor structure of the five-element definition using a combination of existing and newly generated self-report items. This study consisted of three stages: a systematic consultation with experts to review items from existing self-report measures of compassion and generate additional items (Stage 1), exploratory factor analysis of items gathered from Stage 1 to identify the underlying structure of compassion (Stage 2), and confirmatory factor analysis to validate the identified factor structure (Stage 3). Findings showed preliminary empirical support for a five-factor structure of compassion consistent with the five-element definition. However, findings indicated that the 'tolerating' factor may be problematic and not a core aspect of compassion. This possibility requires further empirical testing. Limitations with items from included measures lead us to recommend against using these items collectively to assess compassion. Instead, we call for the development of a new self-report measure of compassion, using the five-element definition to guide item generation. We recommend including newly generated 'tolerating' items in the initial item pool, to determine whether or not factor-level issues are resolved once item-level issues are addressed.

  8. Automatic Generation of English-Japanese Translation Pattern Utilizing Genetic Programming Technique

    NASA Astrophysics Data System (ADS)

    Matsumura, Koki; Tamekuni, Yuji; Kimura, Shuhei

    There are many constructional differences between English and Japanese phrase templates, which often makes translation difficult. Moreover, the phrase templates and sentences that must be referred to are numerous and varied, and it is not easy to prepare a corpus that covers them all. Automatically generating translation patterns is therefore significant from the viewpoint of both the translation success rate and the capacity of the pattern dictionary. To realize such automatic generation, this paper proposes a new method for generating translation patterns using the genetic programming (GP) technique. The technique attempts to automatically generate translation patterns for sentences that are not registered in the phrase template dictionary by applying genetic operations to the parse trees of basic patterns; each tree consists of a paired English-Japanese sentence and forms part of the first-stage population. Analysis-tree databases with 50, 100, 150, and 200 pairs were prepared as the first-stage population, and the system was applied to an English input of 1,555 sentences. As a result, the number of analysis trees increased from 200 to 517, and the accuracy rate of the translation patterns improved from 42.57% to 70.10%. Moreover, 86.71% of the generated translations were successful, with meanings that were acceptable and understandable. The proposed technique thus appears to be a promising way to raise the translation success rate and to reduce the size of the analysis-tree database.
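    As a generic illustration of the genetic operation applied to parse trees, the sketch below performs subtree crossover on small nested-list trees; it is not the authors' English-Japanese pattern generator, and the example patterns are invented.

```python
# Minimal generic sketch of subtree crossover, the core genetic operation applied
# to parse trees in the approach described above. Trees here are nested lists;
# this is an illustration, not the authors' English-Japanese pattern generator.
import random
import copy

def all_subtree_paths(tree, path=()):
    """Enumerate paths to every node (each path is a tuple of child indices)."""
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_subtree_paths(child, path + (i,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def set_subtree(tree, path, new_subtree):
    for i in path[:-1]:
        tree = tree[i]
    tree[path[-1]] = new_subtree

def crossover(parent_a, parent_b, rng):
    """Swap a randomly chosen subtree of parent_a with one of parent_b."""
    child_a, child_b = copy.deepcopy(parent_a), copy.deepcopy(parent_b)
    path_a = rng.choice([p for p in all_subtree_paths(child_a) if p])
    path_b = rng.choice([p for p in all_subtree_paths(child_b) if p])
    sub_a, sub_b = get_subtree(child_a, path_a), get_subtree(child_b, path_b)
    set_subtree(child_a, path_a, sub_b)
    set_subtree(child_b, path_b, sub_a)
    return child_a, child_b

rng = random.Random(1)
pattern_1 = ["S", ["NP", "the cat"], ["VP", "sleeps"]]
pattern_2 = ["S", ["NP", "a dog"], ["VP", "runs", ["ADV", "fast"]]]
print(crossover(pattern_1, pattern_2, rng))
```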

  9. Time to first cigarette in the morning as an index of ability to quit smoking: Implications for nicotine dependence

    PubMed Central

    Baker, Timothy B.; Piper, Megan E.; McCarthy, Danielle E.; Bolt, Daniel M.; Smith, Stevens S.; Kim, Su-Young; Colby, Suzanne; Conti, David; Giovino, Gary A.; Hatsukami, Dorothy; Hyland, Andrew; Krishnan-Sarin, Suchitra; Niaura, Raymond; Perkins, Kenneth A.; Toll, Benjamin A.

    2010-01-01

    An inability to maintain abstinence is a key indicator of tobacco dependence. Unfortunately, little evidence exists regarding the ability of the major tobacco dependence measures to predict smoking cessation outcome. This paper used data from four placebo-controlled smoking cessation trials and one international epidemiologic study to determine relations between the Fagerström Test for Nicotine Dependence (FTND; Heatherton et al., 1991), the Heaviness of Smoking Index (HSI; Kozlowski et al., 1994), the Nicotine Dependence Syndrome Scale (NDSS; Shiffman et al., 2004) and the Wisconsin Inventory of Smoking Dependence Motives (WISDM; Piper et al. 2004) with cessation success. Results showed that much of the predictive validity of the FTND could be attributed to its first item, time to first cigarette in the morning, and this item had greater validity than any other single measure. Thus, the time to first cigarette item appears to tap a pattern of heavy, uninterrupted, and automatic smoking and may be a good single-item measure of nicotine dependence. PMID:18067032

  10. An Analysis of the Optimal Multiobjective Inventory Clustering Decision with Small Quantity and Great Variety Inventory by Applying a DPSO

    PubMed Central

    Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of varieties in its inventory, the use of a single management method is unlikely to be feasible. A better way to manage this problem would be to categorise inventory items into several clusters according to inventory decisions and to use different management methods for managing different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to a problem of clustering of inventory items. Without the requirement of prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation, and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtain an overall better solution, with better convergence results and inventory decisions. PMID:25197713
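    The sketch below shows one way such a combined objective could be scored for a candidate clustering, as a weighted sum over the four objectives named above; the weights, item attributes, and normalisation are illustrative placeholders rather than the paper's equation.

```python
# Sketch of a weighted multiobjective evaluation for an inventory clustering,
# combining the four objectives named above. Weights, item attributes and the
# normalisation are hypothetical placeholders, not the paper's equation.
import numpy as np

# each row: [total_cost, backorder_rate, demand_relevance, turnover_rate]
items = np.array([
    [120.0, 0.05, 0.8, 6.0],
    [ 80.0, 0.10, 0.6, 4.0],
    [200.0, 0.02, 0.9, 8.0],
    [ 60.0, 0.20, 0.3, 2.0],
])
labels = np.array([0, 0, 1, 1])          # a candidate clustering of the items
weights = np.array([0.4, 0.3, 0.2, 0.1]) # relative importance of each objective

def cluster_score(items, labels, weights):
    """Lower is better: weighted relative spread of objective values within each cluster."""
    score = 0.0
    for k in np.unique(labels):
        members = items[labels == k]
        spread = members.std(axis=0) / (members.mean(axis=0) + 1e-9)
        score += float(weights @ spread)
    return score

print(f"clustering score: {cluster_score(items, labels, weights):.3f}")
```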

  11. Multilingual Generalization of the ModelCreator Software for Math Item Generation. Research Report. ETS RR-05-02

    ERIC Educational Resources Information Center

    Higgins, Derrick; Futagi, Yoko; Deane, Paul

    2005-01-01

    This paper reports on the process of modifying the ModelCreator item generation system to produce output in multiple languages. In particular, Japanese and Spanish are now supported in addition to English. The addition of multilingual functionality was considerably facilitated by the general formulation of our natural language generation system,…

  12. Geometry modeling and multi-block grid generation for turbomachinery configurations

    NASA Technical Reports Server (NTRS)

    Shih, Ming H.; Soni, Bharat K.

    1992-01-01

    An interactive 3D grid generation code, Turbomachinery Interactive Grid genERation (TIGER), was developed for general turbomachinery configurations. TIGER features the automatic generation of multi-block structured grids around multiple blade rows for either internal, external, or internal-external turbomachinery flow fields. Utilization of Bézier curves achieves a smooth grid and better orthogonality. TIGER generates the algebraic grid automatically based on geometric information provided by its built-in pseudo-AI algorithm. However, due to the large variation of turbomachinery configurations, this initial grid may not always be as good as desired. TIGER therefore provides graphical user interactions during the process which allow the user to design, modify, and manipulate the grid, including the capability of elliptic surface grid generation.
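    Since the abstract credits Bézier curves for grid smoothness, the short sketch below evaluates a Bézier curve by the standard de Casteljau construction; the control points are arbitrary illustrative values.

```python
# Sketch: de Casteljau evaluation of a Bezier curve, the kind of construction
# used above to obtain smooth grid lines with better orthogonality. Control
# points are arbitrary illustrative values.
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated interpolation."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ctrl = [(0.0, 0.0), (0.3, 1.0), (0.7, 1.0), (1.0, 0.0)]
curve = [bezier_point(ctrl, t) for t in np.linspace(0.0, 1.0, 5)]
print(np.round(curve, 3))
```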

  13. Efficiently measuring dimensions of the externalizing spectrum model: Development of the Externalizing Spectrum Inventory-Computerized Adaptive Test (ESI-CAT).

    PubMed

    Sunderland, Matthew; Slade, Tim; Krueger, Robert F; Markon, Kristian E; Patrick, Christopher J; Kramer, Mark D

    2017-07-01

    The development of the Externalizing Spectrum Inventory (ESI) was motivated by the need to comprehensively assess the interrelated nature of externalizing psychopathology and personality using an empirically driven framework. The ESI measures 23 theoretically distinct yet related unidimensional facets of externalizing, which are structured under 3 superordinate factors representing general externalizing, callous aggression, and substance abuse. One limitation of the ESI is its length at 415 items. To facilitate the use of the ESI in busy clinical and research settings, the current study sought to examine the efficiency and accuracy of a computerized adaptive version of the ESI. Data were collected over 3 waves and totaled 1,787 participants recruited from undergraduate psychology courses as well as male and female state prisons. A series of 6 algorithms with different termination rules were simulated to determine the efficiency and accuracy of each test under 3 different assumed distributions. Scores generated using an optimal adaptive algorithm evidenced high correlations (r > .9) with scores generated using the full ESI, brief ESI item-based factor scales, and the 23 facet scales. The adaptive algorithms for each facet administered a combined average of 115 items, a 72% decrease in comparison to the full ESI. Similarly, scores on the item-based factor scales of the ESI-brief form (57 items) were generated using on average of 17 items, a 70% decrease. The current study successfully demonstrates that an adaptive algorithm can generate similar scores for the ESI and the 3 item-based factor scales using a fraction of the total item pool. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
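    A minimal sketch of the underlying adaptive-testing loop is shown below, assuming a 2PL IRT model: the item with maximum Fisher information at the current ability estimate is administered and the estimate is updated by a coarse grid search. The item parameters are hypothetical, not the ESI calibration.

```python
# Sketch of a basic CAT loop under a 2PL IRT model: administer the item with
# maximum Fisher information at the current theta estimate, then update theta
# by a coarse grid-search maximum likelihood. Item parameters are hypothetical,
# not the ESI calibration.
import numpy as np

rng = np.random.default_rng(42)
a = rng.uniform(0.8, 2.0, size=50)      # discriminations
b = rng.normal(0.0, 1.0, size=50)       # difficulties
true_theta = 0.7

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

theta_grid = np.linspace(-4, 4, 161)
theta_hat, administered, responses = 0.0, [], []

for _ in range(10):                      # fixed-length test of 10 items
    info = fisher_info(theta_hat, a, b)
    info[administered] = -np.inf         # never reuse an item
    item = int(np.argmax(info))
    resp = rng.random() < p_correct(true_theta, a[item], b[item])
    administered.append(item)
    responses.append(resp)
    # grid-search ML estimate of theta given responses so far
    ll = np.zeros_like(theta_grid)
    for it, r in zip(administered, responses):
        p = p_correct(theta_grid, a[it], b[it])
        ll += np.log(p if r else 1 - p)
    theta_hat = float(theta_grid[np.argmax(ll)])

print(f"true theta = {true_theta}, estimate after 10 items = {theta_hat:.2f}")
```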

  14. Use of Item Models in a Large-Scale Admissions Test: A Case Study

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.

    2008-01-01

    "Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996, 2002). They have the potential to produce large numbers of high-quality items at reduced cost. This article introduces data from an…

  15. Selective loss of verbal imagery.

    PubMed

    Mehta, Z; Newcombe, F

    1996-05-01

    This single case study of the ability to generate verbal and non-verbal imagery in a woman who sustained a gunshot wound to the brain reports a significant difficulty in generating images of word shapes but not a significant problem in generating object images. Further dissociation, however, was observed in her ability to generate images of living vs non-living material. She made more errors in imagery and factual information tasks for non-living items than for living items. This pattern contrasts with our previous report of the agnosic patient, M.S., who had severe difficulty in generating images of living material, whereas his ability to image the shape of words was comparable to that of normal control subjects. Furthermore, with regard to the generation of images of living compared with non-living material, M.S. shows more errors with living than nonliving items. In contrast, the present patient, S.M., made significantly more errors with non-living relative to living items. There appear to be two types of double dissociation which reinforce the growing evidence of dissociable impairments in the ability to generate images for different types of verbal and non-verbal material. Such dissociations, presumably related to sensory and cognitive processing demands, address the problem of the neural basis of imagery.

  16. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    PubMed

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92·18 %, while the food matching was performed with a classification accuracy of 93 %. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
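    The two reported metrics can be computed from per-pixel label maps as in the short sketch below; the tiny label arrays are toy examples, not the study's images.

```python
# Sketch of the two segmentation metrics mentioned above, computed from
# per-pixel label maps with NumPy. The label arrays are tiny toy examples.
import numpy as np

ground_truth = np.array([[0, 0, 1],
                         [0, 1, 1],
                         [2, 2, 2]])
prediction   = np.array([[0, 1, 1],
                         [0, 1, 1],
                         [2, 2, 0]])

# pixel accuracy: fraction of pixels whose predicted class matches the label
pixel_accuracy = np.mean(prediction == ground_truth)

# Intersection over Union, averaged over the classes present in the ground truth
ious = []
for cls in np.unique(ground_truth):
    inter = np.logical_and(prediction == cls, ground_truth == cls).sum()
    union = np.logical_or(prediction == cls, ground_truth == cls).sum()
    ious.append(inter / union)

print(f"pixel accuracy = {pixel_accuracy:.2f}, mean IoU = {np.mean(ious):.2f}")
```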

  17. An Interactive Decision Support System for Scheduling Fighter Pilot Training

    DTIC Science & Technology

    2002-03-26

    Excerpts from the report's reference list and text: Deitel, H.M. and Deitel, P.J., C: How to Program, 2nd ed., Prentice Hall, 1994; Deitel, H.M. and Deitel, P.J., How to Program Java… Using the Visual Basic programming language, the Excel tool was modified in several ways; scheduling dispatch rules were implemented to automatically generate …

  18. A semi-automatic computer-aided method for surgical template design

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-01

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.
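    A two-dimensional analogue of the offset-surface step is sketched below: the shape is thickened by thresholding the Euclidean distance field of its exterior. The paper operates on 3D surface meshes; this raster example only illustrates the idea of contouring a distance field, and the offset value is arbitrary.

```python
# 2D analogue of the offset-surface step described above: dilate a shape by
# thresholding the Euclidean distance field of its exterior. The paper operates
# on 3D surface meshes; this toy raster example only illustrates the idea.
import numpy as np
from scipy.ndimage import distance_transform_edt

# toy "inner surface": a filled disc on a raster grid
y, x = np.mgrid[0:64, 0:64]
inner = (x - 32) ** 2 + (y - 32) ** 2 <= 15 ** 2

# distance of every background pixel to the nearest shape pixel
dist_outside = distance_transform_edt(~inner)

offset = 5.0                          # desired template thickness (grid units)
outer = dist_outside <= offset        # offset region = inner shape plus margin

print("inner area:", int(inner.sum()), "offset area:", int(outer.sum()))
```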

  19. Intelligent automated surface grid generation

    NASA Technical Reports Server (NTRS)

    Yao, Ke-Thia; Gelsey, Andrew

    1995-01-01

    The goal of our research is to produce a flexible, general grid generator for automated use by other programs, such as numerical optimizers. The current trend in the gridding field is toward interactive gridding. Interactive gridding more readily taps into the spatial reasoning abilities of the human user through the use of a graphical interface with a mouse. However, a sometimes fruitful approach to generating new designs is to apply an optimizer with shape modification operators to improve an initial design. In order for this approach to be useful, the optimizer must be able to automatically grid and evaluate the candidate designs. This paper describes an intelligent gridder that is capable of analyzing the topology of the spatial domain and predicting approximate physical behaviors based on the geometry of the spatial domain to automatically generate grids for computational fluid dynamics simulators. Typically, gridding programs are given a partitioning of the spatial domain to assist the gridder. Our gridder is capable of performing this partitioning. This enables the gridder to automatically grid spatial domains with a wide range of configurations.

  20. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  1. A semi-automatic computer-aided method for surgical template design

    PubMed Central

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-01-01

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method. PMID:26843434

  2. A semi-automatic computer-aided method for surgical template design.

    PubMed

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-04

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  3. Automatic Evolution of Molecular Nanotechnology Designs

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper describes strategies for automatically generating designs for analog circuits at the molecular level. Software maps out the edges and vertices of potential nanotechnology systems on graphs, then selects appropriate ones through evolutionary or genetic paradigms.

  4. Automation in visual inspection tasks: X-ray luggage screening supported by a system of direct, indirect or adaptable cueing with low and high system reliability.

    PubMed

    Chavaillaz, Alain; Schwaninger, Adrian; Michel, Stefan; Sauer, Juergen

    2018-05-25

    The present study evaluated three automation modes for improving performance in an X-ray luggage screening task. 140 participants were asked to detect the presence of prohibited items in X-ray images of cabin luggage. Twenty participants conducted this task without automatic support (control group), whereas the others worked with either indirect cues (system indicated the target presence without specifying its location), or direct cues (system pointed out the exact target location) or adaptable automation (participants could freely choose between no cue, direct and indirect cues). Furthermore, automatic support reliability was manipulated (low vs. high). The results showed a clear advantage for direct cues regarding detection performance and response time. No benefits were observed for adaptable automation. Finally, high automation reliability led to better performance and higher operator trust. The findings overall confirmed that automatic support systems for luggage screening should be designed such that they provide direct, highly reliable cues.

  5. [Relationship between cognitive content and emotions following dilatory behavior: considering the level of trait procrastination].

    PubMed

    Hayashi, Junichiro

    2009-02-01

    The present study developed and evaluated the Automatic Thoughts List following Dilatory Behavior (ATL-DB) to explore the mediation hypothesis and the content-specificity hypothesis about the automatic thoughts with trait procrastination and emotions. In Study 1, data from 113 Japanese college students were used to choose 22 items to construct the ATL-DB. Two factors were identified: I. Criticism of Self and Behavior, and II. Difficulty in Achievement. These factors had high degrees of internal consistency and positive correlations with trait procrastination. In Study 2, the relationships among trait procrastination, the automatic thoughts, depression, and anxiety were examined in 261 college students by using Structural Equation Modeling. The results showed that the influence of trait procrastination on depression was mainly mediated through Criticism of Self and Behavior only, while the influence of trait procrastination on anxiety was mediated through Criticism of Self and Behavior and Difficulty in Achievement. Therefore, the mediation hypothesis was supported and the content-specificity hypothesis was partially supported.

  6. Automatic portion estimation and visual refinement in mobile dietary assessment

    PubMed Central

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2011-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198

  7. Automatic portion estimation and visual refinement in mobile dietary assessment

    NASA Astrophysics Data System (ADS)

    Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.

    2010-01-01

    As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These "portion volumes" utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach.

  8. Definite Integral Automatic Analysis Mechanism Research and Development Using the "Find the Area by Integration" Unit as an Example

    ERIC Educational Resources Information Center

    Ting, Mu Yu

    2017-01-01

    Using the capabilities of expert knowledge structures, the researcher prepared test questions on the university calculus topic of "finding the area by integration." The quiz is divided into two types of multiple choice items (one out of four and one out of many). After the calculus course was taught and tested, the results revealed that…

  9. Processes in the Resolution of Ambiguous Words: Towards a Model of Selective Inhibition. Cognitive Science Program, Technical Report No. 86-6.

    ERIC Educational Resources Information Center

    Yee, Penny L.

    This study investigates the role of specific inhibitory processes in lexical ambiguity resolution. An attentional view of inhibition and a view based on specific automatic inhibition between nodes predict different results when a neutral item is processed between an ambiguous word and a related target. Subjects were 32 English speakers with normal…

  10. Thai Automatic Speech Recognition

    DTIC Science & Technology

    2005-01-01

    … used in an external DARPA evaluation involving medical scenarios between an American doctor and a naïve monolingual Thai patient. … dictionary generation more challenging, and (3) the lack of word segmentation, which calls for automatic segmentation approaches to make n-gram language … requires a dictionary and provides various segmentation algorithms to automatically select suitable segmentations. Here we used a maximal matching …
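    The maximal-matching idea mentioned above is a standard dictionary-based greedy segmentation; a generic sketch follows, using a Latin-alphabet toy lexicon purely for illustration rather than a Thai one.

```python
# Generic greedy maximal-matching word segmentation, the kind of dictionary-based
# approach referred to above. The dictionary and input use Latin letters purely
# for illustration; a real Thai segmenter would use a Thai lexicon.
def maximal_matching(text, dictionary, max_word_len=12):
    """At each position, take the longest dictionary word that matches."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_word_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                tokens.append(candidate)
                i += length
                break
    return tokens

lexicon = {"speech", "recognition", "auto", "automatic"}
print(maximal_matching("automaticspeechrecognition", lexicon))
# -> ['automatic', 'speech', 'recognition']
```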

  11. Halbach array generator/motor having an automatically regulated output voltage and mechanical power output

    DOEpatents

    Post, Richard F.

    2005-02-22

    A motor/generator having its stationary portion, i.e., the stator, positioned concentrically within its rotatable element, i.e., the rotor, along its axis of rotation. The rotor includes a Halbach array. The stator windings are switched or commutated to provide a DC motor/generator much the same as in a conventional DC motor/generator. The voltage and power are automatically regulated by using centrifugal force to change the diameter of the rotor, and thereby vary the radial gap in between the stator and the rotating Halbach array, as a function of the angular velocity of the rotor.

  12. Frequency control of wind turbine in power system

    NASA Astrophysics Data System (ADS)

    Xu, Huawei

    2018-06-01

    To improve the overall frequency stability of the power system, automatic generation control and secondary frequency regulation were applied. Automatic generation control was introduced into generation planning, and a doubly-fed wind generator power-regulation model suitable for secondary frequency regulation was established. The results showed that this method satisfies the basic requirements for frequency-regulation control of power systems with large-scale wind power integration and improves the stability and reliability of power system operation. The frequency-control method and strategy are relatively simple, the effect is significant, and the system frequency quickly reaches a steady state, making the approach worth applying and promoting.
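    For orientation, the sketch below simulates a single-area system in which primary droop response is supplemented by an integral secondary controller that restores the frequency to nominal after a load step; all constants are illustrative placeholders, not the paper's doubly-fed wind-generator model.

```python
# Minimal single-area sketch of primary (droop) plus secondary (integral, AGC-style)
# frequency control after a load step. Constants are illustrative placeholders,
# not the paper's doubly-fed wind-generator model.
import numpy as np

H, D = 5.0, 1.0      # inertia constant and load damping (per unit)
R = 0.05             # primary droop
Ki = 0.5             # secondary-control integral gain
dt, T = 0.05, 120.0

delta_f = 0.0        # frequency deviation (pu)
p_secondary = 0.0
load_step = 0.1      # sudden 0.1 pu load increase

for _ in np.arange(0.0, T, dt):
    p_primary = -delta_f / R                      # droop: more power if frequency drops
    p_secondary += -Ki * delta_f * dt             # integral action restores nominal frequency
    accel_power = p_primary + p_secondary - load_step
    delta_f += (accel_power - D * delta_f) / (2 * H) * dt

print(f"frequency deviation after {T:.0f} s: {delta_f:.5f} pu")
```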

  13. ELECTROMAGNETIC AND ELECTROSTATIC GENERATORS: ANNOTATED BIBLIOGRAPHY.

    DTIC Science & Technology

    generator with split poles, ultrasonic-frequency generator, unipolar generator, single-phase micromotors, synchronous motor, asynchronous motor … asymmetrical rotor, magnetic circuit, dc micromotors, circuit for the automatic control of synchronized induction motors, induction torque micromotors, electric …

  14. Commercial grade item (CGI) dedication of generators for nuclear safety related applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, R.K.; Hajos, L.G.

    1993-03-01

    The number of nuclear safety related equipment suppliers and the availability of spare and replacement parts designed specifically for nuclear safety related application are shrinking rapidly. These trends have made it necessary for utilities to apply commercial grade spare and replacement parts in nuclear safety related applications after implementing a proper acceptance and dedication process to verify that such items conform with the requirements of their use in nuclear safety related application. The general guidelines for commercial grade item (CGI) acceptance and dedication are provided in US Nuclear Regulatory Commission (NRC) Generic Letters and Electric Power Research Institute (EPRI) Report NP-5652, Guideline for the Utilization of Commercial Grade Items in Nuclear Safety Related Applications. This paper presents an application of these generic guidelines for procurement, acceptance, and dedication of a commercial grade generator for use as a standby generator at Salem Generating Station Units 1 and 2. The paper identifies the critical characteristics of the generator which, once verified, will provide reasonable assurance that the generator will perform its intended safety function. The paper also delineates the method of verification of the critical characteristics through tests and provides acceptance criteria for the test results. The methodology presented in this paper may be used as specific guidelines for reliable and cost effective procurement and dedication of commercial grade generators for use as standby generators at nuclear power plants.

  15. Forgotten but not gone: Retro-cue costs and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory.

    PubMed

    van Moorselaar, Dirk; Olivers, Christian N L; Theeuwes, Jan; Lamme, Victor A F; Sligte, Ilja G

    2015-11-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM when made relevant again by a subsequent second cue. We presented either 1 or 2 consecutive retro-cues (80% valid) during the retention interval of a change-detection task. Relative to no cue, a valid cue increased VSTM capacity by 2 items, while an invalid cue decreased capacity by 2. Importantly, when a second, valid cue followed an invalid cue, capacity regained 2 items, so that performance was back on par. In addition, when the second cue was also invalid, there was no extra loss of information from VSTM, suggesting that items that survived a first invalid cue automatically also survived a second. We conclude that these results are in support of a very versatile VSTM system, in which memoranda adopt different representational states depending on whether they are deemed relevant now, in the future, or not at all. We discuss a neural model that is consistent with this conclusion. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  16. A Comparison of Traditional Test Blueprinting and Item Development to Assessment Engineering in a Licensure Context

    ERIC Educational Resources Information Center

    Masters, James S.

    2010-01-01

    With the need for larger and larger banks of items to support adaptive testing and to meet security concerns, large-scale item generation is a requirement for many certification and licensure programs. As part of the mass production of items, it is critical that the difficulty and the discrimination of the items be known without the need for…

  17. A Generative Approach to the Development of Hidden-Figure Items.

    ERIC Educational Resources Information Center

    Bejar, Issac I.; Yocom, Peter

    This report explores an approach to item development and psychometric modeling which explicitly incorporates knowledge about the mental models used by examinees in the solution of items into a psychometric model that characterizes performance on a test, as well as incorporating that knowledge into the item development process. The paper focuses on…

  18. A Methodology for Zumbo's Third Generation DIF Analyses and the Ecology of Item Responding

    ERIC Educational Resources Information Center

    Zumbo, Bruno D.; Liu, Yan; Wu, Amery D.; Shear, Benjamin R.; Olvera Astivia, Oscar L.; Ark, Tavinder K.

    2015-01-01

    Methods for detecting differential item functioning (DIF) and item bias are typically used in the process of item analysis when developing new measures; adapting existing measures for different populations, languages, or cultures; or more generally validating test score inferences. In 2007 in "Language Assessment Quarterly," Zumbo…

  19. The effects of prior knowledge on study-time allocation and free recall: investigating the discrepancy reduction model.

    PubMed

    Verkoeijen, Peter P J L; Rikers, Remy M J P; Schmidt, Henk G

    2005-01-01

    In this study, the authors examined the influence of prior knowledge activation on information processing by means of a prior knowledge activation procedure adopted from the read-generate paradigm. On the basis of cue-target pairs, participants in the experimental groups generated two different sets of items before studying a relevant list. Subsequently, participants were informed that they had to study the items in the list and that they should try to remember as many items as possible. The authors assessed the processing time allocated to the items in the list and free recall of those items. The results revealed that the experimental groups spent less time on items that had already been activated. In addition, the experimental groups outperformed the control group in overall free recall and in free recall of the activated items. Between-group comparisons did not demonstrate significant effects with respect to the processing time and free recall of nonactivated items. The authors interpreted these results in terms of the discrepancy reduction model of regulating the amount of processing time allocated to different parts of the list.

  20. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    PubMed

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.

  1. Automatic textual annotation of video news based on semantic visual object extraction

    NASA Astrophysics Data System (ADS)

    Boujemaa, Nozha; Fleuret, Francois; Gouet, Valerie; Sahbi, Hichem

    2003-12-01

    In this paper, we present our work on automatic generation of textual metadata based on visual content analysis of video news. We present two methods for semantic object detection and recognition from a cross-modal image-text thesaurus. These thesauri represent a supervised association between models and semantic labels. This paper is concerned with two semantic objects: faces and TV logos. In the first part, we present our work on efficient face detection and recognition with automatic name generation. This method also allows us to suggest textual annotation of shots via close-up estimation. In addition, we were interested in automatically detecting and recognizing the different TV logos present in incoming news from different TV channels. This work was done jointly with the French TV channel TF1 within the "MediaWorks" project, which consists of a hybrid text-image indexing and retrieval platform for video news.

  2. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
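
    For readers unfamiliar with this workflow, the sketch below shows the general shape of a high-level forward specification whose adjoint-based gradient is obtained automatically through the dolfin-adjoint/pyadjoint interface; the mesh, PDE, functional, and control are illustrative assumptions, not the ocean model described in the abstract.

    ```python
    # Minimal sketch, assuming the FEniCS + dolfin-adjoint (pyadjoint) stack is installed.
    # The PDE, functional, and control below are placeholders, not the shallow water model.
    from fenics import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                        Function, Constant, DirichletBC, inner, grad, dx, solve, assemble)
    from fenics_adjoint import Control, compute_gradient   # records forward solves on a tape

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)

    f = Constant(1.0)                        # control parameter (e.g., a forcing term)
    u, v = TrialFunction(V), TestFunction(V)
    a = inner(grad(u), grad(v)) * dx         # high-level UFL specification of the forward problem
    L = f * v * dx
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    u_sol = Function(V)
    solve(a == L, u_sol, bc)                 # forward solve; the solve is taped automatically

    J = assemble(u_sol * u_sol * dx)         # scalar functional of interest
    dJdf = compute_gradient(J, Control(f))   # adjoint model derived and run without hand coding
    ```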

  3. Identification and Development of Items Comprising Organizational Citizenship Behaviors Among Pharmacy Faculty

    PubMed Central

    Semsick, Gretchen R.

    2016-01-01

    Objective. Identify behaviors that can compose a measure of organizational citizenship by pharmacy faculty. Methods. A four-round, modified Delphi procedure using open-ended questions (Round 1) was conducted with 13 panelists from pharmacy academia. The items generated were evaluated and refined for inclusion in subsequent rounds. A consensus was reached after completing four rounds. Results. The panel produced a set of 26 items indicative of extra-role behaviors by faculty colleagues considered to compose a measure of citizenship, which is an expressed manifestation of collegiality. Conclusions. The items generated require testing for validation and reliability in a large sample to create a measure of organizational citizenship. Even prior to doing so, the list of items can serve as a resource for mentorship of junior and senior faculty alike. PMID:28179717

  4. Patients with schizophrenia do not preserve automatic grouping when mentally re-grouping figures: shedding light on an ignored difficulty.

    PubMed

    Giersch, Anne; van Assche, Mitsouko; Capa, Rémi L; Marrer, Corinne; Gounot, Daniel

    2012-01-01

    Looking at a pair of objects is easy when automatic grouping mechanisms bind these objects together, but visual exploration can also be more flexible. It is possible to mentally "re-group" two objects that are not only separate but belong to different pairs of objects. "Re-grouping" is in conflict with automatic grouping, since it entails a separation of each item from the set it belongs to. This ability appears to be impaired in patients with schizophrenia. Here we check if this impairment is selective, which would suggest a dissociation between grouping and "re-grouping," or if it impacts on usual, automatic grouping, which would call for a better understanding of the interactions between automatic grouping and "re-grouping." Sixteen outpatients with schizophrenia and healthy controls had to identify two identical and contiguous target figures within a display of circles and squares alternating around a fixation point. Eye-tracking was used to check central fixation. The target pair could be located in the same or separate hemifields. Identical figures were grouped by a connector (grouped automatically) or not (to be re-grouped). Attention modulation of automatic grouping was tested by manipulating the proportion of connected and unconnected targets, thus prompting subjects to focalize on either connected or unconnected pairs. Both groups were sensitive to automatic grouping in most conditions, but patients were unusually slowed down for connected targets while focalizing on unconnected pairs. In addition, this unusual effect occurred only when targets were presented within the same hemifield. Patients and controls differed on this asymmetry between within- and across-hemifield presentation, suggesting that patients with schizophrenia do not re-group figures in the same way as controls do. We discuss possible implications on how "re-grouping" ties in with ongoing, automatic perception in healthy volunteers.

  6. Random Item Generation Is Affected by Age

    ERIC Educational Resources Information Center

    Multani, Namita; Rudzicz, Frank; Wong, Wing Yiu Stephanie; Namasivayam, Aravind Kumar; van Lieshout, Pascal

    2016-01-01

    Purpose: Random item generation (RIG) involves central executive functioning. Measuring aspects of random sequences can therefore provide a simple method to complement other tools for cognitive assessment. We examine the extent to which RIG relates to specific measures of cognitive function, and whether those measures can be estimated using RIG…

  7. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information about the terrain surface, and LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is an essential step in numerous applications such as 3D city modelling, derivation of data for geographical information systems (GIS), mapping, and navigation. Regardless of how the scan data will be used, an automatic process is needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects, and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are used to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification serves, on the one hand, to identify the upper and lower parts of each building in an urban scene, which are needed to model building façades, and, on the other hand, to extract the point cloud of uniform surfaces containing the roofs, roads, and ground used in the second phase of classification. A second algorithm, also based on topological relationships and height variation analysis, segments the uniform surfaces into building roofs, roads, and ground. The proposed approach was tested on two areas, a housing complex and a primary school, and led to successful classification of the building, vegetation, and road classes.
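
    As a rough illustration of the height-variation step only (not the authors' implementation), the sketch below grids a point cloud and flags cells with a small vertical range as candidate uniform surfaces (roofs, roads, ground); the cell size, threshold, and synthetic data are assumptions.

    ```python
    # Illustrative height-variation analysis on an N x 3 LiDAR point array (x, y, z).
    # Cell size and threshold are arbitrary assumptions, not values from the paper.
    import numpy as np

    def uniform_surface_mask(points, cell_size=1.0, max_height_range=0.2):
        """Mark points lying in grid cells whose height range is small (candidate roof/road/ground)."""
        ij = np.floor(points[:, :2] / cell_size).astype(np.int64)
        keys = ij[:, 0] * 1_000_000 + ij[:, 1]          # simple cell key (assumes |index| < 1e6)
        mask = np.zeros(len(points), dtype=bool)
        for key in np.unique(keys):
            in_cell = keys == key
            z = points[in_cell, 2]
            if z.max() - z.min() <= max_height_range:   # small vertical spread -> uniform surface
                mask[in_cell] = True
        return mask

    # Synthetic example: flat ground plus a tall, irregular cluster standing in for a tree.
    rng = np.random.default_rng(1)
    ground = np.c_[rng.uniform(0, 10, 500), rng.uniform(0, 10, 500), np.zeros(500)]
    tree = np.c_[rng.uniform(5, 6, 50), rng.uniform(5, 6, 50), rng.uniform(0, 5, 50)]
    pts = np.vstack([ground, tree])
    print(f"{uniform_surface_mask(pts).sum()} of {len(pts)} points lie on locally uniform surfaces")
    ```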

  8. Generation and context memory.

    PubMed

    Mulligan, Neil W; Lozito, Jeffrey P; Rosner, Zachary A

    2006-07-01

    Generation enhances memory for occurrence but may not enhance other aspects of memory. The present study further delineates the negative generation effect in context memory reported in N. W. Mulligan (2004). First, the negative generation effect occurred for perceptual attributes of the target item (its color and font) but not for extratarget aspects of context (location and background color). Second, nonvisual generation tasks with either semantic or nonsemantic generation rules (antonym and rhyme generation, respectively) produced the same pattern of results. In contrast, a visual (or data-driven) generation task (letter transposition) did not disrupt context memory for color. Third, generating nonwords produced no effect on item memory but persisted in producing a negative effect on context memory for target attributes, implying that (a) the negative generation effect in context memory is not mediated by semantic encoding, and (b) the negative effect on context memory can be dissociated from the positive effect on item memory. The results are interpreted in terms of the processing account of generation. The original, perceptual-conceptual version of this account is too narrow, but a modified processing account, based on a more generic visual versus nonvisual processing distinction, accommodates the results. Copyright 2006 APA, all rights reserved.

  9. Measuring the effects of online health information for patients: Item generation for an e-health impact questionnaire

    PubMed Central

    Kelly, Laura; Jenkinson, Crispin; Ziebland, Sue

    2013-01-01

    Objective The internet is a valuable resource for accessing health information and support. We are developing an instrument to assess the effects of websites with experiential and factual health information. This study aimed to inform an item pool for the proposed questionnaire. Methods Items were informed through a review of relevant literature and secondary qualitative analysis of 99 narrative interviews relating to patient and carer experiences of health. Statements relating to identified themes were re-cast as questionnaire items and shown for review to an expert panel. Cognitive debrief interviews (n = 21) were used to assess items for face and content validity. Results Eighty-two generic items were identified following secondary qualitative analysis and expert review. Cognitive interviewing confirmed that the questionnaire instructions, 62 items, and the response options were acceptable to patients and carers. Conclusion Using a clear conceptual basis to inform item generation, 62 items have been identified as suitable to undergo further psychometric testing. Practice implications The final questionnaire will initially be used in a randomized controlled trial examining the effects of online patients' experiences. This will inform recommendations on the best way to present patients' experiences within health information websites. PMID:23598293

  10. Automatic Title Generation for Spoken Broadcast News

    DTIC Science & Technology

    2001-01-01

    degrades much less with speech-recognized transcripts. Meanwhile, even though KNN performance not as well as TF.IDF and NBL in terms of F1 metric, it...test corpus of 1006 broadcast news documents, comparing the results over manual transcription to the results over automatically recognized speech. We...use both F1 and the average number of correct title words in the correct order as metric. Overall, the results show that title generation for speech

  11. Automatic Generation of Heuristics for Scheduling

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.

    1997-01-01

    This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances, and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where the evaluation of a candidate heuristic is based on the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of a heuristic scheduler.
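
    The hill-climbing idea can be sketched in a few lines: encode a candidate heuristic as a weight vector over item attributes, score it by the quality of the schedule produced under its guidance, and keep any neighboring weight vector that scores better. The toy dispatch rule, attributes, and quality measure below are invented for illustration and are not GenH itself.

    ```python
    # Toy hill-climbing search over multi-attribute dispatch heuristics (illustrative only).
    import random

    random.seed(0)
    tasks = [{"duration": random.randint(1, 9), "priority": random.randint(1, 5)} for _ in range(30)]

    def schedule_quality(weights):
        """Greedily dispatch tasks by weighted score; reward early completion of high-priority tasks."""
        order = sorted(tasks, key=lambda t: -(weights[0] * t["priority"] - weights[1] * t["duration"]))
        elapsed, reward = 0, 0.0
        for t in order:
            elapsed += t["duration"]
            reward += t["priority"] / elapsed
        return reward

    def hill_climb(steps=200, step_size=0.1):
        w = [random.random(), random.random()]
        best = schedule_quality(w)
        for _ in range(steps):
            candidate = [max(0.0, wi + random.uniform(-step_size, step_size)) for wi in w]
            quality = schedule_quality(candidate)
            if quality > best:                 # accept only improving neighbors
                w, best = candidate, quality
        return w, best

    print(hill_climb())
    ```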

  12. Using automatic generation of Labanotation to protect folk dance

    NASA Astrophysics Data System (ADS)

    Wang, Jiaji; Miao, Zhenjiang; Guo, Hao; Zhou, Ziming; Wu, Hao

    2017-01-01

    Labanotation uses symbols to describe human motion and is an effective means of protecting folk dance. We use motion capture data to automatically generate Labanotation. First, we convert the motion capture data of the biovision hierarchy file into three-dimensional coordinate data. Second, we divide human motion into element movements. Finally, we analyze each movement and find the corresponding notation. Our work has been supervised by an expert in Labanotation to ensure the correctness of the results. At present, the work deals with a subset of symbols in Labanotation that correspond to several basic movements. Labanotation contains many symbols and several new symbols may be introduced for improvement in the future. We will refine our work to handle more symbols. The automatic generation of Labanotation can greatly improve the work efficiency of documenting movements. Thus, our work will significantly contribute to the protection of folk dance and other action arts.

  13. Fully Automated Single-Zone Elliptic Grid Generation for Mars Science Laboratory (MSL) Aeroshell and Canopy Geometries

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2008-01-01

    A procedure for generating smooth uniformly clustered single-zone grids using enhanced elliptic grid generation has been demonstrated here for the Mars Science Laboratory (MSL) geometries such as aeroshell and canopy. The procedure obviates the need for generating multizone grids for such geometries, as reported in the literature. This has been possible because the enhanced elliptic grid generator automatically generates clustered grids without manual prescription of decay parameters needed with the conventional approach. In fact, these decay parameters are calculated as decay functions as part of the solution, and they are not constant over a given boundary. Since these decay functions vary over a given boundary, orthogonal grids near any arbitrary boundary can be clustered automatically without having to break up the boundaries and the corresponding interior domains into various zones for grid generation.

  14. Development of a brief measure of generativity and ego-integrity for use in palliative care settings.

    PubMed

    Vuksanovic, Dean; Dyck, Murray; Green, Heather

    2015-10-01

    Our aim was to develop and test a brief measure of generativity and ego-integrity that is suitable for use in palliative care settings. Two measures of generativity and ego-integrity were modified and combined to create a new 11-item questionnaire, which was then administered to 143 adults. A principal-component analysis with oblique rotation was performed in order to identify underlying components that can best account for variation in the 11 questionnaire items. The two-component solution was consistent with the items that, on conceptual grounds, were intended to comprise the two constructs assessed by the questionnaire. Results suggest that the selected 11 items were good representatives of the larger scales from which they were selected, and they are expected to provide a useful means of measuring these concepts near the end of life.

  15. CT-based patient modeling for head and neck hyperthermia treatment planning: manual versus automatic normal-tissue-segmentation.

    PubMed

    Verhaart, René F; Fortunati, Valerio; Verduijn, Gerda M; van Walsum, Theo; Veenland, Jifke F; Paulides, Margarethus M

    2014-04-01

    Clinical trials have shown that hyperthermia, as adjuvant to radiotherapy and/or chemotherapy, improves treatment of patients with locally advanced or recurrent head and neck (H&N) carcinoma. Hyperthermia treatment planning (HTP) guided H&N hyperthermia is being investigated, which requires patient specific 3D patient models derived from Computed Tomography (CT)-images. To decide whether a recently developed automatic-segmentation algorithm can be introduced in the clinic, we compared the impact of manual- and automatic normal-tissue-segmentation variations on HTP quality. CT images of seven patients were segmented automatically and manually by four observers, to study inter-observer and intra-observer geometrical variation. To determine the impact of this variation on HTP quality, HTP was performed using the automatic and manual segmentation of each observer, for each patient. This impact was compared to other sources of patient model uncertainties, i.e. varying gridsizes and dielectric tissue properties. Despite geometrical variations, manual and automatic generated 3D patient models resulted in an equal, i.e. 1%, variation in HTP quality. This variation was minor with respect to the total of other sources of patient model uncertainties, i.e. 11.7%. Automatically generated 3D patient models can be introduced in the clinic for H&N HTP. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. SU-F-T-423: Automating Treatment Planning for Cervical Cancer in Low- and Middle- Income Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisling, K; Zhang, L; Yang, J

    Purpose: To develop and test two independent algorithms that automatically create the photon treatment fields for a four-field box beam arrangement, a common treatment technique for cervical cancer in low- and middle-income countries. Methods: Two algorithms were developed and integrated into Eclipse using its Advanced Programming Interface. 3D Method: We automatically segment bony anatomy on CT using an in-house multi-atlas contouring tool and project the structures into the beam’s-eye-view. We identify anatomical landmarks on the projections to define the field apertures. 2D Method: We generate DRRs for all four beams. An atlas of DRRs for six standard patients with corresponding field apertures is deformably registered to the test patient DRRs. The set of deformed atlas apertures is fitted to an expected shape to define the final apertures. Both algorithms were tested on 39 patient CTs, and the resulting treatment fields were scored by a radiation oncologist. We also investigated the feasibility of using one algorithm as an independent check of the other algorithm. Results: 96% of the 3D-Method-generated fields and 79% of the 2D-Method-generated fields were scored acceptable for treatment (“Per Protocol” or “Acceptable Variation”). The 3D Method generated more fields scored “Per Protocol” than the 2D Method (62% versus 17%). The 4% of the 3D-Method-generated fields that were scored “Unacceptable Deviation” were all due to an improper L5 vertebra contour resulting in an unacceptable superior jaw position. When these same patients were planned with the 2D Method, the superior jaw was acceptable, suggesting that the 2D Method can be used to independently check the 3D Method. Conclusion: Our results show that our 3D Method is feasible for automatically generating cervical treatment fields. Furthermore, the 2D Method can serve as an automatic, independent check of the automatically generated treatment fields. These algorithms will be implemented for fully automated cervical treatment planning.

  17. Pediatric post-thrombotic syndrome in children: Toward the development of a new diagnostic and evaluative measurement tool.

    PubMed

    Avila, M L; Brandão, L R; Williams, S; Ward, L C; Montoya, M I; Stinson, J; Kiss, A; Lara-Corrales, I; Feldman, B M

    2016-08-01

    Our goal was to conduct the item generation and piloting phases of a new discriminative and evaluative tool for pediatric post-thrombotic syndrome. We followed a formative model for the development of the tool, focusing on the signs/symptoms (items) that define post-thrombotic syndrome. For item generation, pediatric thrombosis experts and subjects diagnosed with extremity post-thrombotic syndrome during childhood nominated items. In the piloting phase, items were cross-sectionally measured in children with limb deep vein thrombosis to examine item performance. Twenty-three experts and 16 subjects listed 34 items, which were then measured in 140 subjects with previous diagnosis of limb deep vein thrombosis (70 upper extremity and 70 lower extremity). The items with strongest correlation with post-thrombotic syndrome severity and largest area under the curve were pain (in older children), paresthesia, and swollen limb for the upper extremity group, and pain (in older children), tired limb, heaviness, tightness and paresthesia for the lower extremity group. The diagnostic properties of the items and their correlations with post-thrombotic syndrome severity varied according to the assessed venous territory. The information gathered in this study will help experts decide which item should be considered for inclusion in the new tool. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, an in-house evaluation against manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.

  19. Automatic structured grid generation using Gridgen (some restrictions apply)

    NASA Technical Reports Server (NTRS)

    Chawner, John R.; Steinbrenner, John P.

    1995-01-01

    The authors have noticed in the recent grid generation literature an emphasis on the automation of structured grid generation. The motivation behind such work is clear; grid generation is easily the most despised task in the grid-analyze-visualize triad of computational analysis (CA). However, because grid generation is closely coupled to both the design and analysis software and because quantitative measures of grid quality are lacking, 'push button' grid generation usually results in a compromise between speed, control, and quality. Overt emphasis on automation obscures the substantive issues of providing users with flexible tools for generating and modifying high quality grids in a design environment. In support of this paper's tongue-in-cheek title, many features of the Gridgen software are described. Gridgen is by no stretch of the imagination an automatic grid generator. Despite this fact, the code does utilize many automation techniques that permit interesting regenerative features.

  20. Modeling Local Item Dependence in Cloze and Reading Comprehension Test Items Using Testlet Response Theory

    ERIC Educational Resources Information Center

    Baghaei, Purya; Ravand, Hamdollah

    2016-01-01

    In this study the magnitudes of local dependence generated by cloze test items and reading comprehension items were compared and their impact on parameter estimates and test precision was investigated. An advanced English as a foreign language reading comprehension test containing three reading passages and a cloze test was analyzed with a…

  1. Integrating Test-Form Formatting into Automated Test Assembly

    ERIC Educational Resources Information Center

    Diao, Qi; van der Linden, Wim J.

    2013-01-01

    Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
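
    A minimal sketch of the item-selection step is shown below using the open-source PuLP modeler: binary decision variables pick items to maximize information at a target ability level subject to length and content constraints. The item bank, information values, and constraints are invented, and the test-form formatting stage discussed in the abstract is not shown.

    ```python
    # Minimal automated test assembly via mixed integer programming (illustrative data only).
    import random
    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

    random.seed(0)
    n_items, test_length = 50, 10
    info = [random.random() for _ in range(n_items)]                  # item information at a target theta
    content = [random.choice(["algebra", "geometry"]) for _ in range(n_items)]

    prob = LpProblem("automated_test_assembly", LpMaximize)
    x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(n_items)]  # 1 if item i is selected

    prob += lpSum(info[i] * x[i] for i in range(n_items))             # objective: maximize information
    prob += lpSum(x) == test_length                                   # fixed test length
    prob += lpSum(x[i] for i in range(n_items) if content[i] == "algebra") >= 4   # content constraint

    prob.solve()
    print("selected items:", [i for i in range(n_items) if x[i].value() == 1])
    ```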

  2. SYMBOD - A computer program for the automatic generation of symbolic equations of motion for systems of hinge-connected rigid bodies

    NASA Technical Reports Server (NTRS)

    Macala, G. A.

    1983-01-01

    A computer program is described that can automatically generate symbolic equations of motion for systems of hinge-connected rigid bodies with tree topologies. The dynamical formulation underlying the program is outlined, and examples are given to show how a symbolic language is used to code the formulation. The program is applied to generate the equations of motion for a four-body model of the Galileo spacecraft. The resulting equations are shown to be a factor of three faster in execution time than conventional numerical subroutines.
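
    To give a concrete sense of symbolic equation-of-motion generation, the sketch below derives the equation of motion of a single planar pendulum from its Lagrangian with SymPy; this is a generic illustration, not the SYMBOD formulation for trees of hinge-connected rigid bodies.

    ```python
    # Generic illustration of symbolic equation-of-motion generation (simple pendulum, not SYMBOD).
    import sympy as sp

    t, m, l, g = sp.symbols("t m l g", positive=True)
    theta = sp.Function("theta")(t)

    # Lagrangian L = T - V for a point mass on a massless rod of length l.
    T = sp.Rational(1, 2) * m * (l * theta.diff(t)) ** 2
    V = -m * g * l * sp.cos(theta)
    L = T - V

    # Euler-Lagrange equation: d/dt( dL/d(theta_dot) ) - dL/d(theta) = 0
    eom = sp.diff(L, theta.diff(t)).diff(t) - sp.diff(L, theta)
    print(sp.simplify(eom))    # expected: m*l**2*theta'' + g*l*m*sin(theta)
    ```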

  3. Model-Based GUI Testing Using Uppaal at Novo Nordisk

    NASA Astrophysics Data System (ADS)

    Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne

    This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and reduced the number of test scripts while increasing the coverage.
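
    The core of such test generation can be sketched as a graph search that covers every transition of the state machine at least once and emits each covered path as an executable event sequence. The tiny state machine and event names below are invented for illustration and bear no relation to the actual device model.

    ```python
    # Illustrative edge-coverage test generation from a state machine (not the Uppaal-based tool).
    from collections import deque

    # Transitions: (source_state, event, target_state) for an invented GUI model.
    transitions = [("Idle", "press_menu", "Menu"), ("Menu", "press_up", "Menu"),
                   ("Menu", "press_ok", "Dosing"), ("Dosing", "press_back", "Menu"),
                   ("Menu", "press_back", "Idle")]

    def edge_coverage_tests(initial="Idle"):
        """Return one shortest event sequence per transition, so every edge is covered."""
        tests = []
        for target_edge in transitions:
            queue, seen = deque([(initial, [])]), {initial}
            while queue:
                state, path = queue.popleft()
                found = False
                for (src, event, dst) in transitions:
                    if src != state:
                        continue
                    if (src, event, dst) == target_edge:
                        tests.append(path + [event])
                        found = True
                        break
                    if dst not in seen:
                        seen.add(dst)
                        queue.append((dst, path + [event]))
                if found:
                    break
            # Unreachable edges are simply skipped in this sketch.
        return tests

    for i, script in enumerate(edge_coverage_tests(), 1):
        print(f"test {i}: " + " -> ".join(script))
    ```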

  4. Metacognitive unawareness of the errorful generation benefit and its effects on self-regulated learning.

    PubMed

    Yang, Chunliang; Potts, Rosalind; Shanks, David R

    2017-07-01

    Generating errors followed by corrective feedback enhances retention more effectively than does reading (the benefit of errorful generation), but people tend to be unaware of this benefit. The current research explored this metacognitive unawareness, its effect on self-regulated learning, and how to alleviate or reverse it. People's beliefs about the relative learning efficacy of generating errors followed by corrective feedback compared to reading, and the effects of generation fluency, are also explored. In Experiments 1 and 2, lower judgments of learning (JOLs) were consistently given to incorrectly generated word pairs than to studied (read) pairs and led participants to distribute more study resources to incorrectly generated pairs, even though superior recall of these pairs was exhibited in the final test. In Experiment 3, a survey revealed that people believe that generating errors followed by corrective feedback is inferior to reading. Experiment 4 was designed to alter participants' metacognition by informing them of the errorful generation benefit prior to study. Although metacognitive misalignment was partly countered, participants still tended to be unaware of this benefit when making item-by-item JOLs. In Experiment 5, in a delayed JOL condition, higher JOLs were given to incorrectly generated pairs and read pairs were more likely to be selected for restudy. The current research reveals that people tend to underestimate the learning efficiency of generating errors followed by corrective feedback relative to reading when making immediate item-by-item JOLs. Informing people of the errorful generation benefit prior to study and asking them to make delayed JOLs are effective ways to alleviate this metacognitive miscalibration. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Valve technology: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A technical compilation on the types, applications and modifications to certain valves is presented. Data cover the following: (1) valves that feature automatic response to stimuli (thermal, electrical, fluid pressure, etc.), (2) modified valves changed by redesign of components to increase initial design effectiveness or give the item versatility beyond its basic design capability, and (3) special purpose valves with limited application as presented, but lending themselves to other uses with minor changes.

  6. Divided attention enhances the recognition of emotional stimuli: evidence from the attentional boost effect.

    PubMed

    Rossi-Arnaud, Clelia; Spataro, Pietro; Costanzi, Marco; Saraulli, Daniele; Cestari, Vincenzo

    2018-01-01

    The present study examined predictions of the early-phase-elevated-attention hypothesis of the attentional boost effect (ABE), which suggests that transient increases in attention at encoding, as instantiated in the ABE paradigm, should enhance the recognition of neutral and positive items (whose encoding is mostly based on controlled processes), while having small or null effects on the recognition of negative items (whose encoding is primarily based on automatic processes). Participants were presented with a sequence of negative, neutral and positive stimuli (pictures in Experiment 1, words in Experiment 2) associated to target (red) squares, distractor (green) squares or no squares (baseline condition). They were told to attend to the pictures/words and simultaneously press the spacebar of the computer when a red square appeared. In a later recognition task, stimuli associated to target squares were recognised better than stimuli associated to distractor squares, replicating the standard ABE. More importantly, we also found that: (a) the memory enhancement following target detection occurred with all types of stimuli (neutral, negative and positive) and (b) the advantage of negative stimuli over neutral stimuli was intact in the divided-attention (DA) condition. These findings suggest that the encoding of negative stimuli depends on both controlled (attention-dependent) and automatic (attention-independent) processes.

  7. Giro form reading machine

    NASA Astrophysics Data System (ADS)

    Minh Ha, Thien; Niggeler, Dieter; Bunke, Horst; Clarinval, Jose

    1995-08-01

    Although giro forms are used by many people in daily life for money remittance in Switzerland, the processing of these forms at banks and post offices is only partly automated. We describe an ongoing project for building an automatic system that is able to recognize various items printed or written on a giro form. The system comprises three main components, namely, an automatic form feeder, a camera system, and a computer. These components are connected in such a way that the system is able to process a batch of forms without any human interaction. We present two real applications of our system in the field of payment services, which require the reading of both machine-printed and handwritten information that may appear on a giro form. One particular feature of giro forms is their flexible layout, i.e., information items are located differently from one form to another, thus requiring an additional analysis step to localize them before recognition. A commercial optical character recognition software package is used for recognition of machine-printed information, whereas handwritten information is read by our own algorithms, the details of which are presented. The system is implemented by using a client/server architecture providing a high degree of flexibility for change. Preliminary results are reported supporting our claim that the system is usable in practice.

  8. Ultrasound Evaluation of the Abdominal Wall and Lumbar Multifidus Muscles in Participants Who Practice Pilates: A 1-year Follow-up Case Series.

    PubMed

    Gala-Alarcón, Paula; Calvo-Lobo, César; Serrano-Imedio, Ana; Garrido-Marín, Alejandro; Martín-Casas, Patricia; Plaza-Manzano, Gustavo

    2018-04-18

    The purpose of this study was to describe ultrasound (US) changes in muscle thickness produced during automatic activation of the transversus abdominis (TrAb), internal oblique (IO), external oblique (EO), and rectus abdominis (RA), as well as the cross-sectional area (CSA) of the lumbar multifidus (LM), after 1 year of Pilates practice. A 1-year follow-up case series study with a convenience sample of 17 participants was performed. Indeed, TrAb, IO, EO, and RA thickness, as well as LM CSA changes during automatic tests were measured by US scanning before and after 1 year of Pilates practice twice per week. Furthermore, quality of life changes using the 36-Item Short Form Health Survey and US measurement comparisons of participants who practiced exercises other than Pilates were described. Statistically significant changes were observed for the RA muscle thickness reduction during the active straight leg raise test (P = .007). Participants who practiced other exercises presented a larger LM CSA and IO thickness, which was statistically significant (P < .05). Statistically significant changes were not observed for the domains of the analyzed 36-Item Short Form Health Survey (P > .05). A direct moderate correlation was observed (r = 0.562, P = .019) between the TrAb thickness before and after a 1-year follow-up. Long-term Pilates practice may reduce the RA thickness automatic activation during active straight leg raise. Furthermore, LM CSA and IO thickness increases were observed in participants who practice other exercise types in conjunction with Pilates. Despite a moderate positive correlation observed for TrAb thickness, the quality of life did not seem to be modified after long-term Pilates practice. Copyright © 2018. Published by Elsevier Inc.

  9. Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Omitaomu, Olufemi A; Kotikot, Susan

    This software provides a computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The code allows the user to specify an input dataset containing parcels and detected solar panels, and then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.
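
    A minimal sketch of the classification step is given below using scikit-learn; the per-parcel features, synthetic labels, and choice of a random forest are assumptions for illustration and are not the features or algorithm of the released code.

    ```python
    # Illustrative residential/commercial parcel classifier on synthetic features (not the released code).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    n = 500
    # Assumed per-parcel features: parcel area (m^2), building footprint (m^2), detected panel count.
    X = np.column_stack([rng.uniform(200, 20000, n), rng.uniform(50, 5000, n), rng.integers(1, 40, n)])
    y = (X[:, 0] > 5000).astype(int)                     # fake labels: 1 = commercial, 0 = residential

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test), target_names=["residential", "commercial"]))
    ```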

  10. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  11. Generating Models of Surgical Procedures using UMLS Concepts and Multiple Sequence Alignment

    PubMed Central

    Meng, Frank; D’Avolio, Leonard W.; Chen, Andrew A.; Taira, Ricky K.; Kangarloo, Hooshang

    2005-01-01

    Surgical procedures can be viewed as a process composed of a sequence of steps performed on, by, or with the patient’s anatomy. This sequence is typically the pattern followed by surgeons when generating surgical report narratives for documenting surgical procedures. This paper describes a methodology for semi-automatically deriving a model of conducted surgeries, utilizing a sequence of derived Unified Medical Language System (UMLS) concepts for representing surgical procedures. A multiple sequence alignment was computed from a collection of such sequences and was used for generating the model. These models have the potential of being useful in a variety of informatics applications such as information retrieval and automatic document generation. PMID:16779094
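
    To make the alignment step concrete, the sketch below globally aligns two sequences of concept codes with a standard Needleman-Wunsch dynamic program; the concept labels and scoring values are placeholders, and the multiple sequence alignment used in the paper would extend such pairwise alignments (for example, progressively) across many reports.

    ```python
    # Pairwise global alignment of two concept-code sequences (Needleman-Wunsch sketch;
    # the codes and scoring scheme are illustrative, not taken from the paper).
    def align(a, b, match=2, mismatch=-1, gap=-1):
        n, m = len(a), len(b)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        # Traceback to recover the two gapped sequences.
        out_a, out_b, i, j = [], [], n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
                out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
            elif i > 0 and score[i][j] == score[i - 1][j] + gap:
                out_a.append(a[i - 1]); out_b.append("-"); i -= 1
            else:
                out_a.append("-"); out_b.append(b[j - 1]); j -= 1
        return out_a[::-1], out_b[::-1]

    # Two invented surgical-report concept sequences (placeholder codes, not real UMLS CUIs).
    print(align(["incision", "dissection", "hemostasis", "closure"],
                ["incision", "hemostasis", "irrigation", "closure"]))
    ```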

  12. Activity classification using realistic data from wearable sensors.

    PubMed

    Pärkkä, Juha; Ermes, Miikka; Korpipää, Panu; Mäntyjärvi, Jani; Peltola, Johannes; Korhonen, Ilkka

    2006-01-01

    Automatic classification of everyday activities can be used for promotion of health-enhancing physical activities and a healthier lifestyle. In this paper, methods used for classification of everyday activities like walking, running, and cycling are described. The aim of the study was to find out how to recognize activities, which sensors are useful and what kind of signal processing and classification is required. A large and realistic data library of sensor data was collected. Sixteen test persons took part in the data collection, resulting in approximately 31 h of annotated, 35-channel data recorded in an everyday environment. The test persons carried a set of wearable sensors while performing several activities during the 2-h measurement session. Classification results of three classifiers are shown: custom decision tree, automatically generated decision tree, and artificial neural network. The classification accuracies using leave-one-subject-out cross validation range from 58 to 97% for custom decision tree classifier, from 56 to 97% for automatically generated decision tree, and from 22 to 96% for artificial neural network. Total classification accuracy is 82 % for custom decision tree classifier, 86% for automatically generated decision tree, and 82% for artificial neural network.
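
    Leave-one-subject-out evaluation of a decision tree, the protocol described above, can be sketched with scikit-learn as follows; the synthetic feature matrix, activity labels, and subject identifiers are placeholders, not the 35-channel recordings from the study.

    ```python
    # Leave-one-subject-out cross-validation of a decision tree on synthetic stand-in data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(42)
    n_subjects, windows_per_subject, n_features = 16, 60, 8
    X = rng.normal(size=(n_subjects * windows_per_subject, n_features))        # e.g., windowed signal features
    y = rng.integers(0, 3, size=n_subjects * windows_per_subject)              # activity labels (walk/run/cycle)
    groups = np.repeat(np.arange(n_subjects), windows_per_subject)             # subject ID for each window

    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())  # one accuracy per held-out subject
    print("per-subject accuracy:", np.round(scores, 2))
    ```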

  13. Source misattributions and false recognition errors: examining the role of perceptual resemblance and imagery generation processes.

    PubMed

    Foley, Mary Ann; Bays, Rebecca Brooke; Foy, Jeffrey; Woodfield, Mila

    2015-01-01

    In three experiments, we examine the extent to which participants' memory errors are affected by the perceptual features of an encoding series and imagery generation processes. Perceptual features were examined by manipulating the features associated with individual items as well as the relationships among items. An encoding instruction manipulation was included to examine the effects of explicit requests to generate images. In all three experiments, participants falsely claimed to have seen pictures of items presented as words, committing picture misattribution errors. These misattribution errors were exaggerated when the perceptual resemblance between pictures and images was relatively high (Experiment 1) and when explicit requests to generate images were omitted from encoding instructions (Experiments 1 and 2). When perceptual cues made the thematic relationships among items salient, the level and pattern of misattribution errors were also affected (Experiments 2 and 3). Results address alternative views about the nature of internal representations resulting in misattribution errors and refute the idea that these errors reflect only participants' general impressions or beliefs about what was seen.

  14. Estimated content percentages of volatile liquids and fat extractables in ready-to-eat foods.

    PubMed

    Daft, J L; Cline, J K; Palmer, R E; Sisk, R L; Griffitt, K R

    1996-01-01

    Content percentages of volatile liquids and fat extractables in 340 samples of ready-to-eat foods were determined gravimetrically. Volatile liquids were determined by drying samples in a microwave oven with a self-contained balance; results were printed out automatically. Fat extractables were extracted from the samples with mixed ethers; extracts were dried and weighed manually. The samples, 191 nonfat and 149 fatty (containing ca 2% or more fat) foods, represent about 5000 different food items and include infant and toddler, ethnic, fast, and imported items. Samples were initially prepared for screening of essential and toxic elements and chemical contamination by chopping and mixing into homogenous composites. Content determinations were then made on separate portions from each composite. Content results were put into a database for evaluation. Overall, mean results from both determinations agree with published data for moisture and fat contents of similar food items. Coefficients of variation, however, were lower for determination of volatile liquids than for that of fat extractables.

  15. Purchasing Nonprescription Contraceptives: The Underlying Structure of a Multi-Item Scale.

    ERIC Educational Resources Information Center

    Manolis, Chris; Winsor, Robert D.; True, Sheb L.

    1999-01-01

    Developed a multi-item scale for measuring attitudes associated with purchasing nonprescription contraceptives using construct specification and item generation and confirmatory factor analysis. Demonstrated a high degree of invariance across samples of 81 female and 115 male adult consumers. (SLD)

  16. Automatic generation of pictorial transcripts of video programs

    NASA Astrophysics Data System (ADS)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lower-case with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
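
    As a rough sketch of the two ingredients, a scene boundary can be flagged wherever consecutive frame histograms differ strongly, and each caption can then be attached to the scene whose time span contains its timestamp; the threshold, data layout, and example values below are assumptions, not the authors' algorithm.

    ```python
    # Toy content-based sampling plus caption-to-scene assignment (illustrative only).
    import numpy as np

    def scene_boundaries(frame_histograms, threshold=0.4):
        """Frame indices where the L1 distance between successive normalized histograms exceeds a threshold."""
        diffs = np.abs(np.diff(frame_histograms, axis=0)).sum(axis=1)
        return [0] + [i + 1 for i, d in enumerate(diffs) if d > threshold]

    def assign_captions(boundaries, frame_times, captions):
        """captions: list of (timestamp_seconds, text); returns one concatenated text block per scene."""
        starts = [frame_times[b] for b in boundaries] + [float("inf")]
        blocks = ["" for _ in boundaries]
        for ts, text in captions:
            for k in range(len(boundaries)):
                if starts[k] <= ts < starts[k + 1]:
                    blocks[k] += text + " "
                    break
        return blocks

    # Synthetic example: six frames with a cut before frame 3, and three caption lines.
    hists = np.array([[1, 0], [1, 0], [0.9, 0.1], [0, 1], [0, 1], [0.1, 0.9]], dtype=float)
    times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    caps = [(0.5, "Good evening."), (2.5, "Markets fell today."), (4.5, "Weather is next.")]
    print(assign_captions(scene_boundaries(hists), times, caps))
    ```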

  17. Representation of research hypotheses

    PubMed Central

    2011-01-01

    Background Hypotheses are now being automatically produced on an industrial scale by computers in biology, e.g. the annotation of a genome is essentially a large set of hypotheses generated by sequence similarity programs; and robot scientists enable the full automation of a scientific investigation, including generation and testing of research hypotheses. Results This paper proposes a logically defined formalism for recording automatically generated hypotheses in a machine-amenable way. The proposed formalism allows the description of complete hypotheses sets as specified input and output for scientific investigations. The formalism supports the decomposition of research hypotheses into more specialised hypotheses if that is required by an application. Hypotheses are represented in an operational way – it is possible to design an experiment to test them. The explicit formal description of research hypotheses promotes the explicit formal description of the results and conclusions of an investigation. The paper also proposes a framework for automated hypotheses generation. We demonstrate how the key components of the proposed framework are implemented in the Robot Scientist “Adam”. Conclusions A formal representation of automatically generated research hypotheses can help to improve the way humans produce, record, and validate research hypotheses. Availability http://www.aber.ac.uk/en/cs/research/cb/projects/robotscientist/results/ PMID:21624164

  18. Run-Time Support for Rapid Prototyping

    DTIC Science & Technology

    1988-12-01

    prototyping. One such system is the Computer-Aided Prototyping System (CAPS). It combines rapid prototyping with automatic program generation. Some of the...a design database, and a design management system [Ref. 3: p. 66]. By using both rapid prototyping and automatic program generation, CAPS will be...Most prototyping systems perform these functions. CAPS is different in that it combines rapid prototyping with a variant of automatic program

  19. The role of the P3 and CNV components in voluntary and automatic temporal orienting: A high spatial-resolution ERP study.

    PubMed

    Mento, Giovanni

    2017-12-01

    A main distinction has been proposed between voluntary and automatic mechanisms underlying temporal orienting (TO) of selective attention. Voluntary TO implies the endogenous directing of attention induced by symbolic cues. Conversely, automatic TO is exogenously instantiated by the physical properties of stimuli. A well-known example of automatic TO is sequential effects (SEs), which refer to the adjustments in participants' behavioral performance as a function of the trial-by-trial sequential distribution of the foreperiod between two stimuli. In this study a group of healthy adults underwent a cued reaction time task purposely designed to assess both voluntary and automatic TO. During the task, both post-cue and post-target event-related potentials (ERPs) were recorded by means of a high spatial resolution EEG system. In the results of the post-cue analysis, the P3a and P3b were identified as two distinct ERP markers showing distinguishable spatiotemporal features and reflecting automatic and voluntary a priori expectancy generation, respectively. The brain source reconstruction further revealed that distinct cortical circuits supported these two temporally dissociable components. Namely, the voluntary P3b was supported by a left sensorimotor network, while the automatic P3a was generated by a more distributed frontoparietal circuit. Additionally, post-cue contingent negative variation (CNV) and post-target P3 modulations were observed as common markers of voluntary and automatic expectancy implementation and response selection, although partially dissociable neural networks subserved these two mechanisms. Overall, these results provide new electrophysiological evidence suggesting that distinct neural substrates can be recruited depending on the voluntary or automatic cognitive nature of the cognitive mechanisms subserving TO. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Development of a questionnaire to measure consumers' perceptions of service quality in Australian community pharmacies.

    PubMed

    Mirzaei, Ardalan; Carter, Stephen R; Chen, Jenny Yimin; Rittsteuer, Claudia; Schneider, Carl R

    2018-06-11

    Recent changes within community pharmacy have seen a shift towards some pharmacies providing "value-added" services. However, providing high levels of service is resource intensive yet revenues from dispensing are declining. Of significance therefore, is how consumers perceive service quality (SQ). However, at present there are no validated and reliable instruments to measure consumers' perceptions of SQ in Australian community pharmacies. The aim of this study was to build a theory-grounded model of service quality (SQ) in community pharmacies and to create a valid survey instrument to measure consumers' perceptions of service quality. Stage 1 dealt with item generation using theory, prior research and qualitative interviews with pharmacy consumers. Selected items were then subjected to content validity and face validity. Stages 2 and 3 included psychometric testing among English-speaking adult consumers of Australian pharmacies. Exploratory factor analysis was used for item reduction and to explain the domains of SQ. In stage 1, item generation for SQ initially generated 113 items which were then refined, through content and face validity, down to 61 items. In stage 2, after subjecting the questionnaire to psychometric testing on the data from the first pharmacy (n = 374), the use of the primary dimensions of SQ was abandoned leaving 32 items representing 5 domains of SQ. In stage 3, the questionnaire was subject to further testing and item reduction in 3 other pharmacies (n = 320). SQ was best described using 23 items representing 6 domains: 'health and medicines advice', 'relationship quality', 'technical quality', 'environmental quality', 'non-prescription service', and 'health outcomes'. This research presents a theoretically-grounded and robust measurement scale developed for consumer perceptions of SQ in a community pharmacy. Copyright © 2018. Published by Elsevier Inc.

  1. An Analysis of Serial Number Tracking Automatic Identification Technology as Used in Naval Aviation Programs

    NASA Astrophysics Data System (ADS)

    Csorba, Robert

    2002-09-01

    The Government Accounting Office found that the Navy, between 1996 and 1998, lost $3 billion in materiel in-transit. This thesis explores the benefits and costs of automatic identification and serial number tracking technologies under consideration by the Naval Supply Systems Command and the Naval Air Systems Command. Detailed cost-savings estimates are made for each aircraft type in the Navy inventory. Project and item managers of repairable components using Serial Number Tracking were surveyed as to the value of this system. It concludes that two-thirds of the in-transit losses can be avoided with implementation of effective information technology-based logistics and maintenance tracking systems. Recommendations are made for specific steps and components of such an implementation. Suggestions are made for further research.

  2. Use of the nominal group technique to identify stakeholder priorities and inform survey development: an example with informal caregivers of people with scleroderma.

    PubMed

    Rice, Danielle B; Cañedo-Ayala, Mara; Turner, Kimberly A; Gumuchian, Stephanie T; Malcarne, Vanessa L; Hagedoorn, Mariët; Thombs, Brett D

    2018-03-02

    The nominal group technique (NGT) allows stakeholders to directly generate items for needs assessment surveys. The objective was to demonstrate the use of NGT discussions to develop survey items on (1) challenges experienced by informal caregivers of people living with systemic sclerosis (SSc) and (2) preferences for support services. Three NGT groups were conducted. In each group, participants generated lists of challenges and preferred formats for support services. Participants shared items, and a master list was compiled, then reviewed by participants to remove or merge overlapping items. Once a final list of items was generated, participants independently rated challenges on a scale from 1 (not at all important) to 10 (extremely important) and support services on a scale from 1 (not at all likely to use) to 10 (very likely to use). Lists generated in the NGT discussions were subsequently reviewed and integrated into a single list by research team members. The groups were held at SSc patient conferences in the USA and Canada, and participants were informal caregivers who were currently providing, or had previously provided, care for a family member or friend with SSc. A total of six men and seven women participated in the NGT discussions. Mean age was 59.8 years (SD=12.6). Participants provided care for a partner (n=8), parent (n=1), child (n=2) or friend (n=2). A list of 61 unique challenges was generated, with challenges related to gaps in information, resources and support needs identified most frequently. A list of 18 unique support services was generated; most involved online or in-person delivery of emotional support and educational material about SSc. The NGT was an efficient method for obtaining survey items directly from SSc caregivers on important challenges and preferences for support services. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Applying reliability analysis to design electric power systems for More-electric aircraft

    NASA Astrophysics Data System (ADS)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
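
    To make the path-set idea concrete, the following is a minimal sketch that computes system reliability by inclusion-exclusion over minimal path sets, assuming independent component failures. The component names, reliability values, and two-path topology are illustrative only and are not taken from the thesis.

    ```python
    # Path-set reliability: P(system works) = P(at least one minimal path has
    # all of its components working), via inclusion-exclusion over path sets.
    from itertools import combinations

    def system_reliability(path_sets, component_reliability):
        total = 0.0
        n = len(path_sets)
        for k in range(1, n + 1):
            for subset in combinations(path_sets, k):
                components = set().union(*subset)      # union of the chosen paths
                prob = 1.0
                for c in components:
                    prob *= component_reliability[c]   # independence assumption
                total += (-1) ** (k + 1) * prob
        return total

    # Illustrative topology: two generators feeding a bus through separate contactors.
    reliability = {"gen1": 0.99, "gen2": 0.99, "contactor1": 0.995, "contactor2": 0.995}
    paths = [{"gen1", "contactor1"}, {"gen2", "contactor2"}]
    print(f"system reliability: {system_reliability(paths, reliability):.6f}")
    ```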

  4. Die Starter: A New System to Manage Early Feasibility in Sheet Metal Forming

    NASA Astrophysics Data System (ADS)

    Narainen, Rodrigue; Porzner, Harald

    2016-08-01

    Die Starter, a new system developed by ESI Group, allows the user to drastically reduce the number of iterations during early tool process feasibility. This innovative system automatically designs the first quick die face, generating binder and addendum surfaces (NURBS surfaces) by taking into account the full die process. Die Starter also improves the initial die face based on feasibility criteria (avoiding splits and wrinkles) by automatically generating the geometrical modifications of the binder and addendum and the bead restraining forces with minimal material usage. This paper presents a description of the new system and the methodology of Die Starter. Some industrial examples are presented, from the part geometry to the final die face, including automatically developed flanges, part on binder and inner binder.

  5. Automated quadrilateral surface discretization method and apparatus usable to generate mesh in a finite element analysis system

    DOEpatents

    Blacker, Teddy D.

    1994-01-01

    An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all quadrilateral elements which is particularly useful in finite element analysis. The generated mesh of all quadrilateral elements is boundary sensitive, orientation insensitive and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Also, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input and the rows are iteratively layered inward from the exterior boundary in a first counterclockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second clockwise direction. As a result, a high quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.

  6. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
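
    As a rough illustration of how SPARQL can be assembled from an annotated relational schema, the sketch below maps a hypothetical gene table, whose class and property annotations are invented for the example, to a simple SELECT query; it does not reproduce the BioSemantic annotation format or its generated Web Services.

    ```python
    # Build a basic SPARQL SELECT from column -> ontology-property annotations.
    def build_sparql(table_annotations, wanted_columns):
        prefixes = "PREFIX ex: <http://example.org/ontology#>\n"
        select_vars = " ".join(f"?{c}" for c in wanted_columns)
        patterns = [f"  ?row {table_annotations['columns'][c]} ?{c} ." for c in wanted_columns]
        return (f"{prefixes}SELECT {select_vars} WHERE {{\n"
                f"  ?row a {table_annotations['class']} .\n" + "\n".join(patterns) + "\n}")

    # Hypothetical annotation of a 'gene' table (class and properties are made up).
    gene_table = {
        "class": "ex:Gene",
        "columns": {"name": "ex:hasName", "chromosome": "ex:locatedOn"},
    }
    print(build_sparql(gene_table, ["name", "chromosome"]))
    ```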

  7. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background: In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results: We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions: BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  8. Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
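
    For readers unfamiliar with how guessing enters the item response function, the sketch below evaluates the three-parameter logistic (3PL) IRF and the 2PL curve obtained by fixing the guessing parameter at zero; the parameter values are illustrative and are not the simulation conditions of the study.

    ```python
    # 3PL item response function: the guessing parameter c raises the lower asymptote.
    import math

    def irf_3pl(theta, a, b, c):
        """Probability of a correct response at ability theta."""
        return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

    # Illustrative item: discrimination a=1.2, difficulty b=0.0, guessing c=0.2.
    for theta in (-2.0, 0.0, 2.0):
        p3 = irf_3pl(theta, a=1.2, b=0.0, c=0.2)   # guessing modeled
        p2 = irf_3pl(theta, a=1.2, b=0.0, c=0.0)   # guessing ignored (2PL)
        print(f"theta={theta:+.1f}  3PL={p3:.3f}  2PL={p2:.3f}")
    ```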

  9. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscapes). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  10. Data mining learning bootstrap through semantic thumbnail analysis

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Farinella, Giovanni Maria; Giuffrida, Giovanni; Tribulato, Giuseppe

    2007-01-01

    The rapid increase of technological innovations in the mobile phone industry induces the research community to develop new and advanced systems to optimize the services offered by mobile phone operators (telcos), to maximize their effectiveness and improve their business. Data mining algorithms can run over data produced by mobile phone usage (e.g. image, video, text and log files) to discover users' preferences and predict the most likely (to be purchased) offer for each individual customer. One of the main challenges is the reduction of the learning time and cost of these automatic tasks. In this paper we discuss an experiment where a commercial offer is composed of a small picture augmented with a short text describing the offer itself. Each customer's purchase is properly logged with all relevant information. Upon arrival of new items we need to learn who the best customers (prospects) for each item are, that is, the ones most likely to be interested in purchasing that specific item. Such learning activity is time consuming and, in our specific case, is not applicable given the large number of new items arriving every day. Basically, given the current customer base we are not able to learn on all new items. Thus, we need to select among those new items to identify the best candidates. We do so by using a joint analysis of visual features and text to estimate how good each new item could be, that is, whether or not it is worth learning on it. Preliminary results show the effectiveness of the proposed approach in improving classical data mining techniques.

  11. Creation and Delivery of New Superpixelized DIRBE Map Products

    NASA Technical Reports Server (NTRS)

    Weiland, J.

    1998-01-01

    Phase 1 called for the following tasks: (1) completion of code to generate intermediate files containing the individual DIRBE observations which would be used to make the superpixelized maps; (2) completion of code necessary to generate the maps themselves; and (3) quality control on test-case maps in the form of point-source extraction and photometry. Items 1 and 2 are well in hand and the tested code is nearly complete. A few test maps have been generated for the tests mentioned in item 3. Map generation is not in production mode yet.

  12. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    PubMed

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. A peer-to-peer music sharing system based on query-by-humming

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Chang, Xinglong; Zhao, Zheng; Zhang, Yebin; Shi, Qingwei

    2007-09-01

    Today, the main traffic in peer-to-peer (P2P) networks still consists of multimedia files, including large numbers of music files. The study of Music Information Retrieval (MIR) has brought many encouraging achievements in the music search area. Nevertheless, research on music search based on MIR in P2P networks is still insufficient. Query by Humming (QBH) is one MIR technology that has been studied for years. In this paper, we present a server-based P2P music sharing system which is based on QBH and integrated with a Hierarchical Index Structure (HIS) to enhance the relation between surface data and potential information. The HIS evolves automatically depending on the music-related items carried by each peer, such as MIDI files, lyrics and so forth. Instead of adding a large amount of redundancy, the system generates a compact index for multiple search inputs, which largely improves on the traditional keyword-based text search mode. As network bandwidth, speed, etc. are no longer bottlenecks of Internet services, end users become more concerned with the accessibility and accuracy of the information the Internet provides.

  14. Automatic computation and solution of generalized harmonic balance equations

    NASA Astrophysics Data System (ADS)

    Peyton Jones, J. C.; Yaser, K. S. A.; Stevenson, J.

    2018-02-01

    Generalized methods are presented for generating and solving the harmonic balance equations for a broad class of nonlinear differential or difference equations and for a general set of harmonics chosen by the user. In particular, a new algorithm for automatically generating the Jacobian of the balance equations enables efficient solution of these equations using continuation methods. Efficient numeric validation techniques are also presented, and the combined algorithm is applied to the analysis of dc, fundamental, second and third harmonic response of a nonlinear automotive damper.
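
    As a minimal, self-contained illustration of harmonic balance, the sketch below balances only the fundamental harmonic of a Duffing-type oscillator x'' + c x' + k x + alpha x^3 = F cos(w t) and solves the resulting algebraic equations with Newton iterations. The Jacobian is approximated by finite differences here rather than generated automatically as in the paper, and all coefficients are invented.

    ```python
    import numpy as np

    def residual(z, w, c=0.1, k=1.0, alpha=0.5, F=1.0):
        """Balance equations for x(t) ~ A cos(w t) + B sin(w t)."""
        A, B = z
        amp2 = A * A + B * B
        r_cos = -w * w * A + c * w * B + k * A + 0.75 * alpha * amp2 * A - F
        r_sin = -w * w * B - c * w * A + k * B + 0.75 * alpha * amp2 * B
        return np.array([r_cos, r_sin])

    def solve_newton(w, z0, tol=1e-10, max_iter=50):
        z = np.array(z0, dtype=float)
        for _ in range(max_iter):
            r = residual(z, w)
            if np.linalg.norm(r) < tol:
                break
            J = np.zeros((2, 2))                       # finite-difference Jacobian
            h = 1e-7
            for j in range(2):
                dz = z.copy()
                dz[j] += h
                J[:, j] = (residual(dz, w) - r) / h
            z = z - np.linalg.solve(J, r)
        return z

    A, B = solve_newton(w=1.2, z0=(0.5, 0.0))
    print(f"fundamental amplitude at w=1.2: {np.hypot(A, B):.4f}")
    ```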

  15. The medial temporal lobes distinguish between within-item and item-context relations during autobiographical memory retrieval.

    PubMed

    Sheldon, Signy; Levine, Brian

    2015-12-01

    During autobiographical memory retrieval, the medial temporal lobes (MTL) relate together multiple event elements, including object (within-item relations) and context (item-context relations) information, to create a cohesive memory. There is consistent support for a functional specialization within the MTL according to these relational processes, much of which comes from recognition memory experiments. In this study, we compared brain activation patterns associated with retrieving within-item relations (i.e., associating conceptual and sensory-perceptual object features) and item-context relations (i.e., spatial relations among objects) with respect to naturalistic autobiographical retrieval. We developed a novel paradigm that cued participants to retrieve information about past autobiographical events, non-episodic within-item relations, and non-episodic item-context relations with the perceptuomotor aspects of retrieval equated across these conditions. We used multivariate analysis techniques to extract common and distinct patterns of activity among these conditions within the MTL and across the whole brain, both in terms of spatial and temporal patterns of activity. The anterior MTL (perirhinal cortex and anterior hippocampus) was preferentially recruited for generating within-item relations later in retrieval whereas the posterior MTL (posterior parahippocampal cortex and posterior hippocampus) was preferentially recruited for generating item-context relations across the retrieval phase. These findings provide novel evidence for functional specialization within the MTL with respect to naturalistic memory retrieval. © 2015 Wiley Periodicals, Inc.

  16. Ordinal-To-Interval Scale Conversion Tables and National Items for the New Zealand Version of the WHOQOL-BREF

    PubMed Central

    Billington, D. Rex; Hsu, Patricia Hsien-Chuan; Feng, Xuan Joanna; Medvedev, Oleg N.; Kersten, Paula; Landon, Jason; Siegert, Richard J.

    2016-01-01

    The World Health Organisation Quality of Life (WHOQOL) questionnaires are widely used around the world and can claim strong cross-cultural validity due to their development in collaboration with international field centres. To enhance conceptual equivalence of quality of life across cultures, optional national items are often developed for use alongside the core instrument. The present study outlines the development of national items for the New Zealand WHOQOL-BREF. Focus groups with members of the community as well as health experts discussed what constitutes quality of life in their opinion. Based on themes reflecting aspects not contained in the existing WHOQOL instrument, 46 candidate items were generated and subsequently rated for their importance by a random sample of 585 individuals from the general population. Applying importance criteria reduced these items to 24, which were then sent to another large random sample (n = 808) to be rated alongside the existing WHOQOL-BREF. A final set of five items met the criteria for national items. Confirmatory factor analysis identified four national items as belonging to the psychological domain of quality of life, and one item to the social domain. Rasch analysis validated these results and generated ordinal-to-interval conversion algorithms to allow use of parametric statistics for domain scores with and without national items. PMID:27812203

  17. Fully Automatic Speech-Based Analysis of the Semantic Verbal Fluency Task.

    PubMed

    König, Alexandra; Linz, Nicklas; Tröger, Johannes; Wolters, Maria; Alexandersson, Jan; Robert, Phillipe

    2018-06-08

    Semantic verbal fluency (SVF) tests are routinely used in screening for mild cognitive impairment (MCI). In this task, participants name as many items as possible of a semantic category under a time constraint. Clinicians measure task performance manually by summing the number of correct words and errors. More fine-grained variables add valuable information to clinical assessment, but are time-consuming to derive manually. Therefore, the aim of this study was to investigate whether automatic analysis of the SVF could provide these measures as accurately as manual annotation and thus support qualitative screening of neurocognitive impairment. SVF data were collected from 95 older people with MCI (n = 47), Alzheimer's or related dementias (ADRD; n = 24), and healthy controls (HC; n = 24). All data were annotated manually and automatically with clusters and switches. The obtained metrics were validated using a classifier to distinguish HC, MCI, and ADRD. Automatically extracted clusters and switches were highly correlated (r = 0.9) with manually established values, and performed as well on the classification task separating HC from persons with ADRD (area under curve [AUC] = 0.939) and MCI (AUC = 0.758). The results show that it is possible to automate fine-grained analyses of SVF data for the assessment of cognitive decline. © 2018 S. Karger AG, Basel.
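
    The sketch below shows one simplified way clusters and switches can be counted from a fluency response, assuming a predefined word-to-subcategory lookup. The study's pipeline derives semantic relatedness automatically from the responses, so the hand-written lookup table here is only a stand-in.

    ```python
    # Count clusters (maximal runs of words sharing a subcategory) and switches
    # (transitions between consecutive clusters) in an "animals" fluency response.
    SUBCATEGORY = {  # illustrative subcategories, not the study's semantic model
        "dog": "pets", "cat": "pets", "hamster": "pets",
        "lion": "savanna", "zebra": "savanna", "elephant": "savanna",
        "salmon": "fish", "trout": "fish",
    }

    def clusters_and_switches(words):
        runs = []
        for w in words:
            cat = SUBCATEGORY.get(w, "unknown")
            if runs and runs[-1][0] == cat:
                runs[-1][1] += 1          # extend the current cluster
            else:
                runs.append([cat, 1])     # start a new cluster
        return len(runs), max(0, len(runs) - 1)

    print(clusters_and_switches(["dog", "cat", "lion", "zebra", "salmon", "hamster"]))
    # -> (4, 3): {dog, cat} | {lion, zebra} | {salmon} | {hamster}
    ```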

  18. Nondestructive Vibratory Testing and Evaluation Procedure for Military Roads and Streets.

    DTIC Science & Technology

    1984-07-01

    the addition of an automatic data acquisition system to the instrumentation control panel. This system, presently available, would automatically ...the data used to further develop and define the basic correlations. c. Consideration be given to installing an automatic data acquisition system to...glows red any time the force generator is not fully elevated. Depressing this switch will stop the automatic cycle at any point and clear all system

  19. Undergraduate Lab Project in Personality Assessment: Measurement of Anal Character.

    ERIC Educational Resources Information Center

    Davidson, William B.

    1987-01-01

    This article describes a project which required students to write assessment items for a personality inventory. The 104 items generated were administered to 126 subjects. Results showed the items were reasonably reliable and valid. The pedagogical value of the project is discussed. (Author/JDH)

  20. Independent Orbiter Assessment (IOA): Assessment of the EPD and C/remote manipulator system FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Robinson, W. W.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Electrical Power Distribution and Control (EPD and C)/Remote Manipulator System (RMS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA analysis of the EPD and C/RMS hardware initially generated 345 failure mode worksheets and identified 117 Potential Critical Items (PCIs) before starting the assessment process. These analysis results were compared to the proposed NASA Post 51-L baseline of 132 FMEAs and 66 CIL items.

  1. Commercial Digital/ADP Equipment in the Ocean Environment. Volume 2. User Appendices

    DTIC Science & Technology

    1978-12-15

    is that the LINDA system uses a minicomputer with time-sharing system software which allows several terminals to be operated at the same time...Acquisition System (ODAS) consists of sensors, computer hardware and computer software. Certain sensors are interfaced to the computers for real time...on USNS KANE, USNS BENT, and USNS WILKES. Commercial automatic data processing equipment used in ODAS includes: Item Model Computer PDP-9 Tape

  2. SU-E-T-362: Automatic Catheter Reconstruction of Flap Applicators in HDR Surface Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzurovic, I; Devlin, P; Hansen, J

    2014-06-01

    Purpose: Catheter reconstruction is crucial for the accurate delivery of radiation dose in HDR brachytherapy. The process becomes complicated and time-consuming for large superficial clinical targets with a complex topology. A novel method for the automatic catheter reconstruction of flap applicators is proposed in this study. Methods: We have developed a program package capable of image manipulation, using C++ class libraries of the Visualization Toolkit (VTK) software system. The workflow for automatic catheter reconstruction is: a) an anchor point is placed in 3D or in the axial view of the first slice at the tip of the first, last and middle points for the curved surface; b) similar points are placed on the last slice of the image set; c) the surface detection algorithm automatically registers the points to the images and applies the surface reconstruction filter; d) a structured grid surface is then generated through the center of the treatment catheters placed at a distance of 5 mm from the patient's skin. As a result, a mesh-style plane is generated with the reconstructed catheters placed 10 mm apart. To demonstrate automatic catheter reconstruction, we used CT images of patients diagnosed with cutaneous T-cell lymphoma and imaged with Freiburg Flap Applicators (Nucletron™ Elekta, Netherlands). The coordinates for each catheter were generated and compared to the control points selected during the manual reconstruction for 16 catheters and 368 control points. Results: The variation of the catheter tip positions between the automatically and manually reconstructed catheters was 0.17 mm (SD = 0.23 mm). The position difference between the manually selected catheter control points and the corresponding points obtained automatically was 0.17 mm in the x-direction (SD = 0.23 mm), 0.13 mm in the y-direction (SD = 0.22 mm), and 0.14 mm in the z-direction (SD = 0.24 mm). Conclusion: This study shows the feasibility of the automatic catheter reconstruction of flap applicators with a high level of positioning accuracy. Implementation of this technique has the potential to decrease planning time and may improve overall quality in superficial brachytherapy.

  3. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  4. Automatic Generation of Supervisory Control System Software Using Graph Composition

    NASA Astrophysics Data System (ADS)

    Nakata, Hideo; Sano, Tatsuro; Kojima, Taizo; Seo, Kazuo; Uchida, Tomoyuki; Nakamura, Yasuaki

    This paper describes the automatic generation of system descriptions for SCADA (Supervisory Control And Data Acquisition) systems. The proposed method produces various types of data and programs for SCADA systems from equipment definitions using conversion rules. First, the method builds directed graphs, which represent connections between the equipment, from the equipment definitions. System descriptions are then generated by analyzing these directed graphs with the conversion rules and finding the groups of equipment that involve similar operations. The method can make the conversion rules multi-level by using graph composition, which reduces the number of rules, so the developer can define and manage these rules efficiently.

  5. Automatic generation of stop word lists for information retrieval and analysis

    DOEpatents

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
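
    A minimal sketch of the selection rule described above is given below: terms that occur adjacent to keywords far more often than they occur within keywords are retained as stop-word candidates, and the list is then truncated. The tokenization, the one-word adjacency window, and the truncation criterion are simplified assumptions, not the patented procedure in full.

    ```python
    from collections import Counter

    def generate_stop_words(documents, keyword_phrases, min_ratio=1.0, max_size=50):
        """documents: list of token lists; keyword_phrases: list of token lists."""
        keyword_terms = {tok for phrase in keyword_phrases for tok in phrase}
        in_keyword = Counter()   # keyword frequency: occurrences inside keyword phrases
        adjacent = Counter()     # keyword adjacency frequency
        for phrase in keyword_phrases:
            in_keyword.update(phrase)
        for doc in documents:
            for i, term in enumerate(doc):
                if term in keyword_terms:
                    continue
                neighbours = doc[max(0, i - 1):i] + doc[i + 1:i + 2]
                if any(n in keyword_terms for n in neighbours):
                    adjacent[term] += 1
        # Exclude terms whose adjacency/keyword ratio falls below the cutoff,
        # then truncate the remaining list.
        kept = [t for t, f in adjacent.items() if f / max(in_keyword[t], 1) >= min_ratio]
        kept.sort(key=lambda t: adjacent[t], reverse=True)
        return kept[:max_size]

    docs = [["the", "rapid", "automatic", "keyword", "extraction", "of", "text"]]
    print(generate_stop_words(docs, [["automatic", "keyword", "extraction"]]))
    ```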

  6. Algorithms for the automatic generation of 2-D structured multi-block grids

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.

    1995-01-01

    Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
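
    Since both approaches mesh each resulting block with transfinite interpolation, the sketch below implements the standard Coons-patch form of transfinite interpolation for a single block bounded by four curves. The quarter-annulus boundary curves are illustrative, and the elliptic smoothing step is omitted.

    ```python
    import math
    import numpy as np

    def transfinite_grid(bottom, top, left, right, ni=11, nj=11):
        """bottom/top map u in [0,1] to (x, y); left/right map v in [0,1] to (x, y).
        The four curves must share consistent corner points."""
        c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
        c01, c11 = np.array(top(0.0)), np.array(top(1.0))
        grid = np.zeros((ni, nj, 2))
        for i, u in enumerate(np.linspace(0.0, 1.0, ni)):
            for j, v in enumerate(np.linspace(0.0, 1.0, nj)):
                edge = ((1 - v) * np.array(bottom(u)) + v * np.array(top(u))
                        + (1 - u) * np.array(left(v)) + u * np.array(right(v)))
                corner = ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
                          + (1 - u) * v * c01 + u * v * c11)
                grid[i, j] = edge - corner      # Coons patch formula
        return grid

    # Quarter-annulus block: inner/outer arcs as bottom/top, radial lines as sides.
    bottom = lambda u: (math.cos(u * math.pi / 2), math.sin(u * math.pi / 2))
    top    = lambda u: (2 * math.cos(u * math.pi / 2), 2 * math.sin(u * math.pi / 2))
    left   = lambda v: (1 + v, 0.0)
    right  = lambda v: (0.0, 1 + v)
    print(transfinite_grid(bottom, top, left, right)[5, 5])  # an interior grid point
    ```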

  7. Automated Sequence Generation Process and Software

    NASA Technical Reports Server (NTRS)

    Gladden, Roy

    2007-01-01

    "Automated sequence generation" (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences.

  8. Strategy combination during execution of memory strategies in young and older adults.

    PubMed

    Hinault, Thomas; Lemaire, Patrick; Touron, Dayna

    2017-05-01

    The present study investigated whether people can combine two memory strategies to encode pairs of words more efficiently than with a single strategy, and age-related differences in such strategy combination. Young and older adults were asked to encode pairs of words (e.g., satellite-tunnel). For each item, participants were told to use either the interactive-imagery strategy (e.g., mentally visualising the two words and making them interact), the sentence-generation strategy (i.e., generate a sentence linking the two words), or with strategy combination (i.e., generating a sentence while mentally visualising it). Participants obtained better recall performance on items encoded with strategy combination than on items encoded with interactive-imagery or sentence-generation strategies. Moreover, we found age-related decline in such strategy combination. These findings have important implications to further our understanding of execution of memory strategies, and suggest that strategy combination occurs in a variety of cognitive domains.

  9. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to the segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated by an important NASA Mars rover mission task: replacing laborious manual path planning with automatic navigation of the rover on the Martian terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation agreed fairly well with the manual segmentation. This is a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
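
    The sketch below illustrates the combination idea in toy form: a luminance threshold and an altitude threshold are fused into one obstacle mask and compared against a reference mask. Because the abstract does not define the SAB metric, a simple intersection-over-union is used here as a stand-in similarity measure, and the image data are synthetic.

    ```python
    import numpy as np

    def combined_segmentation(luminance, altitude, lum_thresh, alt_thresh):
        """Mark a pixel as obstacle if either cue exceeds its threshold."""
        return (luminance > lum_thresh) | (altitude > alt_thresh)

    def similarity(mask_a, mask_b):
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return inter / union if union else 1.0      # stand-in for S_AB

    rng = np.random.default_rng(0)
    lum = rng.random((64, 64))                      # synthetic luminance image
    alt = rng.random((64, 64)) * 0.5                # synthetic altitude map
    auto = combined_segmentation(lum, alt, lum_thresh=0.8, alt_thresh=0.4)
    manual = lum > 0.75                             # stand-in for a human segmentation
    print(f"similarity to reference mask: {similarity(auto, manual):.3f}")
    ```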

  10. Independent Orbiter Assessment (IOA): Assessment of the Electrical Power Distribution and Control/Electrical Power Generation (EPD and C/EPG) FMEA/CIL

    NASA Technical Reports Server (NTRS)

    Mccants, C. N.; Bearrow, M.

    1988-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Electrical Power Distribution and Control/Electrical Power Generation (EPD and C/EPG) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the NASA FMEA/CIL baseline with proposed Post 51-L updates included. A resolution of each discrepancy from the comparison was provided through additional analysis as required. The results of that comparison are documented for the Orbiter EPD and C/EPG hardware. The IOA product for the EPD and C/EPG analysis consisted of 263 failure mode worksheets that resulted in 42 potential critical items being identified. Comparison was made to the NASA baseline, which consisted of 211 FMEA and 47 CIL items.

  11. Attitudes Toward Transgender Men and Women: Development and Validation of a New Measure

    PubMed Central

    Billard, Thomas J

    2018-01-01

    A series of three studies were conducted to generate, develop, and validate the Attitudes toward Transgender Men and Women (ATTMW) scale. In Study 1, 120 American adults responded to an open-ended questionnaire probing various dimensions of their perceptions of transgender individuals and identity. Qualitative thematic analysis generated 200 items based on their responses. In Study 2, 238 American adults completed a questionnaire consisting of the generated items. Exploratory factor analysis (EFA) revealed two non-identical 12-item subscales (ATTM and ATTW) of the full 24-item scale. In Study 3, 150 undergraduate students completed a survey containing the ATTMW and a number of validity-testing variables. Confirmatory factor analysis (CFA) verified the single-factor structures of the ATTM and ATTW subscales, and the convergent, discriminant, predictive, and concurrent validities of the ATTMW were also established. Together, our results demonstrate that the ATTMW is a reliable and valid measure of attitudes toward transgender individuals. PMID:29666595

  12. Difficulty identifying feelings and automatic activation in the fusiform gyrus in response to facial emotion.

    PubMed

    Eichmann, Mischa; Kugel, Harald; Suslow, Thomas

    2008-12-01

    Difficulties in identifying and differentiating one's emotions are a central characteristic of alexithymia. In the present study, automatic activation of the fusiform gyrus to facial emotion was investigated as a function of alexithymia as assessed by the 20-item Toronto Alexithymia Scale. During 3 Tesla fMRI scanning, pictures of faces bearing sad, happy, and neutral expressions masked by neutral faces were presented to 22 healthy adults who also responded to the Toronto Alexithymia Scale. The fusiform gyrus was selected as the region of interest, and voxel values of this region were extracted, summarized as means, and tested among the different conditions (sad, happy, and neutral faces). Masked sad facial emotions were associated with greater bilateral activation of the fusiform gyrus than masked neutral faces. The subscale, Difficulty Identifying Feelings, was negatively correlated with the neural response of the fusiform gyrus to masked sad faces. The correlation results suggest that automatic hyporesponsiveness of the fusiform gyrus to negative emotion stimuli may reflect problems in recognizing one's emotions in everyday life.

  13. Integrated data management for clinical studies: automatic transformation of data models with semantic annotations for principal investigators, data managers and statisticians.

    PubMed

    Dugas, Martin; Dugas-Breit, Susanne

    2014-01-01

    Design, execution and analysis of clinical studies involves several stakeholders with different professional backgrounds. Typically, principal investigators are familiar with standard office tools, data managers apply electronic data capture (EDC) systems and statisticians work with statistics software. Case report forms (CRFs) specify the data model of study subjects, evolve over time and consist of hundreds to thousands of data items per study. To avoid erroneous manual transformation work, a conversion tool for different representations of study data models was designed. It can convert between office formats, EDC and statistics formats. In addition, it supports semantic annotations, which enable precise definitions for data items. A reference implementation is available as the open source package ODMconverter at http://cran.r-project.org.

  14. Detecting alerts, notifying the physician, and offering action items: a comprehensive alerting system.

    PubMed Central

    Kuperman, G. J.; Teich, J. M.; Bates, D. W.; Hiltz, F. L.; Hurley, J. M.; Lee, R. Y.; Paterno, M. D.

    1996-01-01

    We developed and evaluated a system to automatically identify serious clinical conditions in inpatients. The system notifies the patient's covering physician via his pager that an alert is present and offers potential therapies for the patient's condition (action items) at the time he views the alert information. Over a 6 month period, physicians responded to 1214 (70.2%) of 1730 alerts for which they were paged; they responded to 1002 (82.5% of the 1214) in less than 15 minutes. They said they would take action in 71.5% of the alerts, and they placed an order directly from the alert display screen in 39.4%. Further study is needed to determine if this alerting system improves processes or outcomes of care. PMID:8947756

  15. Lexicon generation methods, lexicon generation devices, and lexicon generation articles of manufacture

    DOEpatents

    Carter, Richard J [Richland, WA; McCall, Jonathon D [West Richland, WA; Whitney, Paul D [Richland, WA; Gregory, Michelle L [Richland, WA; Turner, Alan E [Kennewick, WA; Hetzler, Elizabeth G [Kennewick, WA; White, Amanda M [Kennewick, WA; Posse, Christian [Seattle, WA; Nakamura, Grant C [Kennewick, WA

    2010-10-26

    Lexicon generation methods, computer implemented lexicon editing methods, lexicon generation devices, lexicon editors, and articles of manufacture are described according to some aspects. In one aspect, a lexicon generation method includes providing a seed vector indicative of occurrences of a plurality of seed terms within a plurality of text items, providing a plurality of content vectors indicative of occurrences of respective ones of a plurality of content terms within the text items, comparing individual ones of the content vectors with respect to the seed vector, and responsive to the comparing, selecting at least one of the content terms as a term of a lexicon usable in sentiment analysis of text.
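
    A minimal sketch of the comparison step is shown below: the occurrence vector of each candidate content term across text items is compared with the seed-term occurrence vector, and sufficiently similar terms are added to the lexicon. Cosine similarity, the threshold, and the toy texts are assumptions; the patent does not prescribe a specific similarity measure.

    ```python
    import numpy as np

    def build_lexicon(texts, seed_terms, candidate_terms, min_similarity=0.4):
        def occurrence_vector(terms):
            # count how many of the given terms occur in each text item
            return np.array([sum(t in text for t in terms) for text in texts], float)

        seed_vec = occurrence_vector(seed_terms)
        lexicon = []
        for term in candidate_terms:
            vec = occurrence_vector([term])
            denom = np.linalg.norm(seed_vec) * np.linalg.norm(vec)
            if denom and np.dot(seed_vec, vec) / denom >= min_similarity:
                lexicon.append(term)
        return lexicon

    # Toy text items represented as sets of tokens.
    texts = [{"great", "love", "fast"}, {"terrible", "slow"}, {"love", "excellent"}]
    print(build_lexicon(texts, seed_terms=["love", "great"],
                        candidate_terms=["excellent", "fast", "slow"]))
    ```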

  16. The guitar chord-generating algorithm based on complex network

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais

    2016-02-01

    This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all networks are summarized. By analyzing the diverse chord networks, the accompaniment rules and features are revealed, with which chords can be generated automatically. Secondly, in terms of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed separately with the random walk algorithm. Thirdly, the musical motif is considered when generating chords, with which poor chord progressions can be revised. This makes the accompaniments sound more melodious. Finally, a popular song is chosen for chord generation, and the newly generated accompaniment sounds better than those written by the composers.
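
    The sketch below generates a short progression by a weighted random walk on a chord-transition network, which is the core generation step described above. The tiny hand-written network and its weights are illustrative and are not derived from the six singers' tablature data; the two-tiered verse/chorus structure and motif-based revision are omitted.

    ```python
    import random

    CHORD_NETWORK = {           # chord -> {next chord: transition weight}
        "C":  {"G": 4, "Am": 3, "F": 3},
        "G":  {"C": 4, "Am": 2, "Em": 2},
        "Am": {"F": 3, "G": 2, "C": 1},
        "F":  {"C": 4, "G": 3},
        "Em": {"Am": 2, "F": 1},
    }

    def random_walk_progression(start, length, seed=None):
        rng = random.Random(seed)
        progression = [start]
        current = start
        for _ in range(length - 1):
            chords = list(CHORD_NETWORK[current])
            weights = [CHORD_NETWORK[current][c] for c in chords]
            current = rng.choices(chords, weights=weights, k=1)[0]
            progression.append(current)
        return progression

    print(random_walk_progression("C", 8, seed=42))
    ```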

  17. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method to calculate these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
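
    As a small illustration of the regression step, the sketch below fits a quadratic least-squares model between a synthetic AGC performance index and the load command and uses it for prediction. The index definition, the data, and the model order are assumptions, not those of the two detailed rules.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    load_command = np.linspace(100.0, 600.0, 40)                  # MW, illustrative
    index = 0.9 - 1.5e-4 * (load_command - 350.0) ** 2 / 350.0    # synthetic index
    index += rng.normal(0.0, 0.01, load_command.size)             # measurement noise

    coeffs = np.polyfit(load_command, index, deg=2)               # least-squares fit
    predict = np.poly1d(coeffs)
    print(f"predicted performance index at 450 MW: {predict(450.0):.3f}")
    ```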

  18. Social Desirability Bias Against Admitting Anger: Bias in the Test-Taker or Bias in the Test?

    PubMed

    Fernandez, Ephrem; Woldgabreal, Yilma; Guharajan, Deepan; Day, Andrew; Kiageri, Vasiliki; Ramtahal, Nirvana

    2018-05-09

    The veracity of self-report is often questioned, especially in anger, which is particularly susceptible to social desirability bias (SDB). However, could tests of SDB be themselves susceptible to bias? This study aimed to replicate the inverse correlation between a common test of SDB and a test of anger, to deconstruct this relationship according to anger-related versus non-anger-related items, and to reevaluate factor structure and reliability of the SDB test. More than 200 students were administered the Marlowe-Crowne Social Desirability Scale Short Version [M-C1(10)] and the Anger Parameters Scale (APS). Results confirmed that anger and SDB scores were significantly and inversely correlated. This intercorrelation became nonsignificant when the 4 anger-related items were omitted from the M-C1(10). Confirmatory factor analyses showed excellent fit for a model comprising anger items of the M-C1(10) but not for models of the entire instrument or nonanger items. The first model also attained high internal consistency. Thus, the significant negative correlation between anger and SDB is attributable to 4 M-C1(10) anger items, for which low ratings are automatically scored as high SDB; this stems from a tenuous assumption that low anger reports are invariably biased. The SDB test risks false positives of faking good and should be used with caution.

  19. Automated Guidance for Student Inquiry

    ERIC Educational Resources Information Center

    Gerard, Libby F.; Ryoo, Kihyun; McElhaney, Kevin W.; Liu, Ou Lydia; Rafferty, Anna N.; Linn, Marcia C.

    2016-01-01

    In 4 classroom experiments we investigated uses for technologies that automatically score student generated essays, concept diagrams, and drawings in inquiry curricula. We used the automatic scores to assign typical and research-based guidance and studied the impact of the guidance on student progress. Seven teachers and their 897 students…

  20. Automatic, nondestructive test monitors in-process weld quality

    NASA Technical Reports Server (NTRS)

    Deal, F. C.

    1968-01-01

    Instrument automatically and nondestructively monitors the quality of welds produced in microresistance welding. It measures the infrared energy generated in the weld as the weld is made and compares this energy with maximum and minimum limits of infrared energy values previously correlated with acceptable weld-strength tolerances.

  1. An algorithm for generating data accessibility recommendations for flight deck Automatic Dependent Surveillance-Broadcast (ADS-B) applications

    DOT National Transportation Integrated Search

    2014-09-09

    Automatic Dependent Surveillance-Broadcast (ADS-B) In technology supports the display of traffic data on Cockpit Displays of Traffic Information (CDTIs). The data are used by flightcrews to perform defined self-separation procedures, such as the in-t...

  2. RAT Requisition Approval Team - A L6S Initiative

    NASA Technical Reports Server (NTRS)

    Hall, Valerie

    2004-01-01

    L6S Project Description - Problem: The current cycle time for generating and approving Requisitions does not meet "Best-In-Class." Scope: Only looking at the Florida Requisition Approval process for Orbiter (ORBF & ORBG) and Ground (GFAC) stocked items. This includes the time from when a requirement is generated by Logistics Planning and Supportability in Florida until it is approved and received by Procurement. Requisitions generated at other sites or for non-stocked items are out of the scope of this project.

  3. Automatic Detection and Positioning of Ground Control Points Using TerraSAR-X Multiaspect Acquisitions

    NASA Astrophysics Data System (ADS)

    Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao xiang

    2018-05-01

    Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for the automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete workflow for the automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high-resolution spotlight images over the city of Oulu, Finland, and a test site in Berlin, Germany.

  4. When generating answers benefits arithmetic skill: the importance of prior knowledge.

    PubMed

    Rittle-Johnson, Bethany; Kmicikewycz, Alexander Oleksij

    2008-09-01

    People remember information better if they generate the information while studying rather than read the information. However, prior research has not investigated whether this generation effect extends to related but unstudied items and has not been conducted in classroom settings. We compared third graders' success on studied and unstudied multiplication problems after they spent a class period generating answers to problems or reading the answers from a calculator. The effect of condition interacted with prior knowledge. Students with low prior knowledge had higher accuracy in the generate condition, but as prior knowledge increased, the advantage of generating answers decreased. The benefits of generating answers may extend to unstudied items and to classroom settings, but only for learners with low prior knowledge.

  5. Generating Customized Verifiers for Automatically Generated Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2008-01-01

    Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.

  6. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  7. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  8. Development and clinical introduction of automated radiotherapy treatment planning for prostate cancer

    NASA Astrophysics Data System (ADS)

    Winkel, D.; Bol, G. H.; van Asselen, B.; Hes, J.; Scholten, V.; Kerkmeijer, L. G. W.; Raaymakers, B. W.

    2016-12-01

    To develop an automated radiotherapy treatment planning and optimization workflow to efficiently create patient-specifically optimized clinical grade treatment plans for prostate cancer and to implement it in clinical practice. A two-phased planning and optimization workflow was developed to automatically generate 77 Gy 5-field simultaneously integrated boost intensity modulated radiation therapy (SIB-IMRT) plans for prostate cancer treatment. A retrospective planning study (n = 100) was performed in which automatically and manually generated treatment plans were compared. A clinical pilot (n = 21) was performed to investigate the usability of our method. Operator time for the planning process was reduced to <5 min. The retrospective planning study showed that 98 plans met all clinical constraints. Significant improvements were made in the volume receiving 72 Gy (V72Gy) for the bladder and rectum and in the mean dose of the bladder and the body. A reduced plan variance was observed. During the clinical pilot, 20 automatically generated plans met all constraints and 17 plans were selected for treatment. The automated radiotherapy treatment planning and optimization workflow is capable of efficiently generating patient-specifically optimized and improved clinical grade plans. It has now been adopted as the current standard workflow in our clinic to generate treatment plans for prostate cancer.

  9. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide a large amount of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation, and point cloud filtering. The image radiation pre-processing is used to reduce the effects of the inherent radiometric problems and to optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure, and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify some inaccurate and possibly false matches. The feasibility of the method has been tested on images of different aerial scales with different landcover types. The accuracy evaluation is based on a comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
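
    As an illustration of the kind of matching primitive such a pipeline builds on, the sketch below implements plain normalized cross-correlation (NCC) matching of a patch along a horizontal search line. It is not the paper's MIG3C or MIGCLSM code, and the synthetic images are assumptions.

```python
# Minimal sketch of one building block of a dense matching pipeline: normalized
# cross-correlation (NCC) matching of a template patch along a search line.
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_row(left, right, x, y, half=5, search=30):
    """Find the best-matching column in `right` for the patch centred at (x, y) in `left`."""
    tpl = left[y - half:y + half + 1, x - half:x + half + 1]
    best_x, best_score = None, -1.0
    for xr in range(max(half, x - search), min(right.shape[1] - half, x + search)):
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
        score = ncc(tpl, cand)
        if score > best_score:
            best_x, best_score = xr, score
    return best_x, best_score

rng = np.random.default_rng(0)
left = rng.random((100, 120))
right = np.roll(left, 7, axis=1)                 # synthetic horizontal shift
print(match_along_row(left, right, x=60, y=50))  # expect a match near column 67
```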

  10. Advising on Preferred Reporting Items for patient-reported outcome instrument development: the PRIPROID.

    PubMed

    Hou, Zheng-Kun; Liu, Feng-Bin; Fang, Ji-Qian; Li, Xiao-Ying; Li, Li-Juan; Lin, Chu-Hua

    2013-03-01

    The reporting of patient-reported outcome (PRO) instrument development is vital for both researchers and clinicians to determine its validity; thus, we propose the Preferred Reporting Items for PRO Instrument Development (PRIPROID) to improve the quality of reports. Following the guidance published by the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, we performed six steps for item development: identifying the need for a guideline, performing a literature review, obtaining funding for the guideline initiative, identifying participants, conducting a Delphi exercise, and generating a list of PRIPROID items for consideration at the face-to-face meeting. Twenty-three item subheadings under seven topics were included: title and structured abstract, rationale, objectives, intention, eligibility criteria, conceptual framework, item generation, response options, scoring, times, administrative modes, burden assessment, properties assessment, statistical methods, participants, main results, additional analysis, summary of evidence, limitations, clinical attentions, conclusions, item pools or final form, and funding. The PRIPROID covers many elements of PRO research; it assists researchers in reporting their results more accurately and can, to a certain degree, be used to evaluate the quality of the research methods.

  11. Comparison of the class and individual characteristics of Turkish 7.65 mm Browning/.32 Automatic caliber self-loading pistols with consecutive serial numbers.

    PubMed

    Sarıbey, Aylin Yalçin; Hannam, Abigail Grace

    2013-01-01

    Firearms identification is based on the fundamental principle that it is impossible to manufacture two identical items at the microscopic level. As firearm manufacturing technologies and quality assurance are improving, it is necessary to continually challenge this principle. In this study, two different makes of 7.65 mm Browning/.32 Automatic caliber self-loading pistols of Turkish manufacture were selected and examined. Ten pistols with consecutive serial numbers were examined, and each was fired 10 times. The fired cartridge cases were recovered for comparison purposes. For each make of pistol, the individual characteristics within the firing pin impression, ejector, and breech face marks of all 10 pistols were found to be significantly different. © 2012 American Academy of Forensic Sciences.

  12. Components of a Measure to Describe Organizational Culture in Academic Pharmacy.

    PubMed

    Desselle, Shane; Rosenthal, Meagen; Holmes, Erin R; Andrews, Brienna; Lui, Julia; Raja, Leela

    2017-12-01

    Objective. To develop a measure of organizational culture in academic pharmacy and identify characteristics of an academic pharmacy program that would be impactful for internal (eg, students, employees) and external (eg, preceptors, practitioners) clients of the program. Methods. A three-round Delphi procedure of 24 panelists from pharmacy schools in the U.S. and Canada generated items based on the Organizational Culture Profile (OCP), which were then evaluated and refined for inclusion in subsequent rounds. Items were assessed for appropriateness and impact. Results. The panel produced 35 items across six domains that measured organizational culture in academic pharmacy: competitiveness, performance orientation, social responsibility, innovation, emphasis on collegial support, and stability. Conclusion. The items generated require testing for validation and reliability in a large sample to finalize this measure of organizational culture.

  13. Automatic Diagnosis of Fetal Heart Rate: Comparison of Different Methodological Approaches

    DTIC Science & Technology

    2001-10-25

    Apgar score). Each recording lasted at least 30 minutes and contained both the cardiographic series and the toco trace. We focused on four... inference rules automatically generated by the learning procedure showed that the number of rules can be manually reduced to 37 without deteriorating so much the

  14. EPA and California Air Resources Board Approve Remedy to Reduce Excess NOx Emissions from Automatic Transmission “Generation 2” 2.0-Liter Diesel Vehicles

    EPA Pesticide Factsheets

    On May 17, 2017, EPA and the California Air Resources Board (CARB) approved an emissions modification proposed by Volkswagen that will reduce NOx emissions from automatic transmission diesel Passats for model years 2012-2014.

  15. Use of Automated Scoring Features to Generate Hypotheses Regarding Language-Based DIF

    ERIC Educational Resources Information Center

    Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent

    2017-01-01

    This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…

  16. Library Specifications for a New Circulation System for Concordia University Libraries.

    ERIC Educational Resources Information Center

    Tallon, James

    This study of library requirements for a new circulation system is organized into three sections: (1) items required for initial implementation in July 1982; (2) items relating to notice generation and activity statistics, with implementation expected by fall 1982; and (3) items provided in the system as initially implemented, with additional…

  17. Automated Test-Form Generation

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…

  18. iBank

    ERIC Educational Resources Information Center

    Bermundo, Cesar B.; Bermundo, Alex B.; Ballester, Rex C.

    2012-01-01

    iBank is a project that utilizes software to create an item bank that stores quality questions, generates tests, and prints exams. The items come from analyzing teacher-constructed test questions, which provides the basis for discussing test results, by determining why a test item is or is not discriminating between the better and poorer students, and by…

  19. A framework for diversifying recommendation lists by user interest expansion.

    PubMed

    Zhang, Zhu; Zheng, Xiaolong; Zeng, Daniel Dajun

    2016-08-01

    Recommender systems have been widely used to discover users' preferences and recommend interesting items to users in this age of information overload. Researchers in the field of recommender systems have realized that the quality of a top-N recommendation list involves not only relevance but also diversity. It is difficult for most traditional recommendation algorithms to generate a diverse item list that covers most of each user's interests, since they mainly focus on predicting accurate items similar to the dominant interests of users. Additionally, they seldom exploit semantic information such as item tags and users' interest labels to improve recommendation diversity. In this paper, we propose a novel recommendation framework which mainly adopts an expansion strategy of user interests based on social tagging information. The framework enhances the diversity of users' preferences by expanding the sizes and categories of the original user-item interaction records, and then adopts traditional recommendation models to generate recommendation lists. Empirical evaluations on three real-world data sets show that our method can effectively improve the accuracy and diversity of item recommendation.
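
    A minimal sketch of the general strategy described above: expand a user's profile with the tags of consumed items, then pick recommendations that both match the expanded interests and cover new tags. The data model and scoring rule are simplified assumptions, not the authors' framework.

```python
# Minimal sketch (hypothetical data model, not the published framework): expand
# a user's interaction history with item tags, then recommend items that match
# the expanded interest set while covering distinct tags.
from collections import Counter

def expand_interests(user_items, item_tags):
    """Expanded interest profile = tag frequencies over the user's items."""
    return Counter(tag for it in user_items for tag in item_tags[it])

def diversified_top_n(user_items, item_tags, n=3):
    interests = expand_interests(user_items, item_tags)
    covered, recs = set(), []
    candidates = [i for i in item_tags if i not in user_items]
    while candidates and len(recs) < n:
        # score = interest match, with a bonus for tags not yet covered
        def score(i):
            return sum(interests[t] + (1 if t not in covered else 0)
                       for t in item_tags[i])
        best = max(candidates, key=score)
        recs.append(best)
        covered.update(item_tags[best])
        candidates.remove(best)
    return recs

item_tags = {"i1": {"rock"}, "i2": {"rock", "live"}, "i3": {"jazz"},
             "i4": {"jazz", "piano"}, "i5": {"electronic"}}
print(diversified_top_n(user_items={"i1", "i3"}, item_tags=item_tags, n=3))
```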

  20. A framework for diversifying recommendation lists by user interest expansion

    PubMed Central

    Zhang, Zhu; Zeng, Daniel Dajun

    2017-01-01

    Recommender systems have been widely used to discover users’ preferences and recommend interesting items to users in this age of information overload. Researchers in the field of recommender systems have realized that the quality of a top-N recommendation list involves not only relevance but also diversity. It is difficult for most traditional recommendation algorithms to generate a diverse item list that covers most of each user’s interests, since they mainly focus on predicting accurate items similar to the dominant interests of users. Additionally, they seldom exploit semantic information such as item tags and users’ interest labels to improve recommendation diversity. In this paper, we propose a novel recommendation framework which mainly adopts an expansion strategy of user interests based on social tagging information. The framework enhances the diversity of users’ preferences by expanding the sizes and categories of the original user-item interaction records, and then adopts traditional recommendation models to generate recommendation lists. Empirical evaluations on three real-world data sets show that our method can effectively improve the accuracy and diversity of item recommendation. PMID:28959089

  1. AIRSAR Web-Based Data Processing

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne

    2007-01-01

    The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.

  2. A Novel Recommendation System to Match College Events and Groups to Students

    NASA Astrophysics Data System (ADS)

    Qazanfari, K.; Youssef, A.; Keane, K.; Nelson, J.

    2017-10-01

    With the recent increase in data online, discovering meaningful opportunities can be time-consuming and complicated for many individuals. To overcome this data overload challenge, we present a novel text-content-based recommender system as a valuable tool to predict user interests. To that end, we develop a specific procedure to create user models and item feature-vectors, where items are described in free text. The user model is generated by soliciting from a user a few keywords and expanding those keywords into a list of weighted near-synonyms. The item feature-vectors are generated from the textual descriptions of the items, using modified tf-idf values of the users’ keywords and their near-synonyms. Once the users are modeled and the items are abstracted into feature vectors, the system returns the maximum-similarity items as recommendations to that user. Our experimental evaluation shows that our method of creating the user models and item feature-vectors resulted in higher precision and accuracy in comparison to well-known feature-vector-generating methods like Glove and Word2Vec. It also shows that stemming and the use of a modified version of tf-idf increase the accuracy and precision by 2% and 3%, respectively, compared to non-stemming and the standard tf-idf definition. Moreover, the evaluation results show that updating the user model from usage histories improves the precision and accuracy of the system. This recommender system has been developed as part of the Agnes application, which runs on iOS and Android platforms and is accessible through the Agnes website.
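
    A minimal sketch of the general recipe described above: a keyword-plus-near-synonym user model matched against tf-idf item vectors by cosine similarity. The example items, keyword weights, and the plain tf-idf weighting are assumptions, not the Agnes implementation or its modified tf-idf.

```python
# Minimal sketch of keyword-based user modelling with tf-idf item vectors and
# cosine similarity; weights and near-synonym lists here are hypothetical.
import math
from collections import Counter

def tfidf_vectors(item_texts):
    docs = {i: Counter(t.lower().split()) for i, t in item_texts.items()}
    n = len(docs)
    df = Counter(w for d in docs.values() for w in d)     # document frequency
    return {i: {w: tf * math.log(n / df[w]) for w, tf in d.items()}
            for i, d in docs.items()}

def cosine(u, v):
    common = set(u) & set(v)
    num = sum(u[w] * v[w] for w in common)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(user_keywords, item_texts, top_n=2):
    user_vec = dict(user_keywords)        # keyword -> weight (near-synonyms included)
    vectors = tfidf_vectors(item_texts)
    ranked = sorted(vectors, key=lambda i: cosine(user_vec, vectors[i]), reverse=True)
    return ranked[:top_n]

items = {"a": "robotics club builds autonomous robots",
         "b": "poetry night open mic reading",
         "c": "machine learning and robotics seminar"}
user = {"robotics": 1.0, "robots": 0.8, "machine": 0.6}   # expanded keyword list
print(recommend(user, items))
```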

  3. Towards automatic planning for manufacturing generative processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CALTON,TERRI L.

    2000-05-24

    Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may result from the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or omitted from the original design. As a result, process engineers are forced to create new plans. This is further complicated by the fact that the process engineer is forced to manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.

  4. Fuel cell generator energy dissipator

    DOEpatents

    Veyo, Stephen Emery; Dederer, Jeffrey Todd; Gordon, John Thomas; Shockling, Larry Anthony

    2000-01-01

    An apparatus and method are disclosed for eliminating the chemical energy of fuel remaining in a fuel cell generator when the electrical power output of the fuel cell generator is terminated. During a generator shut down condition, electrically resistive elements are automatically connected across the fuel cell generator terminals in order to draw current, thereby depleting the fuel

  5. Steam generator on-line efficiency monitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, R.K.; Kaya, A.; Keyes, M.A. IV

    1987-08-04

    This patent describes a system for automatically and continuously determining the efficiency of a combustion process in a fossil-fuel fired vapor generator for utilization by an automatic load control system that controls the distribution of a system load among a plurality of vapor generators, comprising: a first function generator, connected to an oxygen transducer for sensing the level of excess air in the flue gas, for generating a first signal indicative of the total air supplied for combustion in percent by weight; a second function generator, connected to a combustibles transducer for sensing the level of combustibles in the flue gas, for generating a second signal indicative of the percent combustibles present in the flue gas; means for correcting the first signal, connected to the first and second function generators, when the oxygen transducer is of a type that operates at a temperature level sufficient to cause the unburned combustibles to react with the oxygen present in the flue gas; an ambient air temperature transducer for generating a third signal indicative of the temperature of the ambient air supplied to the vapor generator for combustion.

  6. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingently on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  7. Testing methods and techniques: Testing electrical and electronic devices: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The methods, techniques, and devices used in testing various electrical and electronic apparatus are presented. The items described range from semiconductor package leak detectors to automatic circuit analyzer and antenna simulators for system checkout. In many cases the approaches can result in considerable cost savings and improved quality control. The testing of various electronic components, assemblies, and systems; the testing of various electrical devices; and the testing of cables and connectors are explained.

  8. An Analysis of Automated Solutions for the Certification and Accreditation of Navy Medicine Information Assets

    DTIC Science & Technology

    2005-09-01

    discovery of network security threats and vulnerabilities will be done by doing penetration testing during the C&A process. This can be done on a... discovery, inventory, scanning and loading of C&A information in its central database, (2) automatic generation of the SRTM, (3) automatic generation

  9. Minimal-resource computer program for automatic generation of ocean wave ray or crest diagrams in shoaling waters

    NASA Technical Reports Server (NTRS)

    Poole, L. R.; Lecroy, S. R.; Morris, W. D.

    1977-01-01

    A computer program for studying linear ocean wave refraction is described. The program features random-access modular bathymetry data storage. Three bottom topography approximation techniques are available in the program which provide varying degrees of bathymetry data smoothing. Refraction diagrams are generated automatically and can be displayed graphically in three forms: Ray patterns with specified uniform deepwater ray density, ray patterns with controlled nearshore ray density, or crest patterns constructed by using a cubic polynomial to approximate crest segments between adjacent rays.
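
    A minimal sketch of the refraction step such a program performs, assuming shore-parallel depth contours, the shallow-water phase speed c = sqrt(g·d), and Snell's law; the depth profile and starting angle are illustrative, and this is not the NASA program itself.

```python
# Minimal sketch: refract a single wave ray across shore-parallel depth
# contours using Snell's law with the shallow-water phase speed c = sqrt(g*d).
import math

G = 9.81  # gravitational acceleration (m/s^2)

def trace_ray(depths_m, theta0_deg):
    """Return the ray angle (from shore-normal) at each depth contour."""
    c0 = math.sqrt(G * depths_m[0])
    s = math.sin(math.radians(theta0_deg)) / c0   # Snell invariant sin(theta)/c
    angles = []
    for d in depths_m:
        c = math.sqrt(G * d)
        angles.append(math.degrees(math.asin(min(1.0, s * c))))
    return angles

# A ray approaching the shore at 40 degrees in 50 m of water bends toward the
# shore-normal as the water shoals.
print(trace_ray([50, 30, 20, 10, 5, 2], theta0_deg=40.0))
```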

  10. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  11. An engineering approach to automatic programming

    NASA Technical Reports Server (NTRS)

    Rubin, Stuart H.

    1990-01-01

    An exploratory study of the automatic generation and optimization of symbolic programs using DECOM - a prototypical requirement specification model implemented in pure LISP - was undertaken. It was concluded, on the basis of this study, that symbolic processing languages such as LISP can support a style of programming based upon formal transformation and dependent upon the expression of constraints in an object-oriented environment. Such languages can represent all aspects of the software generation process (including heuristic algorithms for effecting parallel search) as dynamic processes since data and program are represented in a uniform format.

  12. Parallel scheduling of recursively defined arrays

    NASA Technical Reports Server (NTRS)

    Myers, T. J.; Gokhale, M. B.

    1986-01-01

    A new method of automatic generation of concurrent programs which constructs arrays defined by sets of recursive equations is described. It is assumed that the time of computation of an array element is a linear combination of its indices, and integer programming is used to seek a succession of hyperplanes along which array elements can be computed concurrently. The method can be used to schedule equations involving variable length dependency vectors and mutually recursive arrays. Portions of the work reported here have been implemented in the PS automatic program generation system.
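
    A minimal sketch of the hyperplane idea referred to above: a schedule vector pi is admissible when every dependency vector d satisfies pi·d >= 1, so all array elements on the same hyperplane pi·i = t can be computed concurrently. The dependency vectors and search range below are illustrative assumptions, not the PS system.

```python
# Minimal sketch of hyperplane scheduling for a recursively defined array.
from itertools import product

def valid_schedule(pi, dependencies):
    """pi is admissible if it strictly advances time along every dependency."""
    return all(sum(p * d for p, d in zip(pi, dep)) >= 1 for dep in dependencies)

# Dependencies of a 2-D recurrence, e.g. A[i][j] depends on A[i-1][j] and A[i][j-1].
deps = [(1, 0), (0, 1)]
candidates = [pi for pi in product(range(0, 3), repeat=2) if valid_schedule(pi, deps)]
print(candidates)   # e.g. (1, 1): anti-diagonals can be computed in parallel
```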

  13. Using CASE tools to write engineering specifications

    NASA Astrophysics Data System (ADS)

    Henry, James E.; Howard, Robert W.; Iveland, Scott T.

    1993-08-01

    There are always a wide variety of obstacles to writing and maintaining engineering documentation. To combat these problems, documentation generation can be linked to the process of engineering development. The same graphics and communication tools used for structured system analysis and design (SSA/SSD) also form the basis for the documentation. The goal is to build a living document, such that as an engineering design changes, the documentation will `automatically' revise. `Automatic' is qualified by the need to maintain textual descriptions associated with the SSA/SSD graphics, and the need to generate new documents. This paper describes a methodology and a computer aided system engineering toolset that enables a relatively seamless transition into document generation for the development engineering team.

  14. Automatic generation of randomized trial sequences for priming experiments.

    PubMed

    Ihrke, Matthias; Behrendt, Jörg

    2011-01-01

    In most psychological experiments, a randomized presentation of successive displays is crucial for the validity of the results. For some paradigms, this is not a trivial issue because trials are interdependent, e.g., priming paradigms. We present a software that automatically generates optimized trial sequences for (negative-) priming experiments. Our implementation is based on an optimization heuristic known as genetic algorithms that allows for an intuitive interpretation due to its similarity to natural evolution. The program features a graphical user interface that allows the user to generate trial sequences and to interactively improve them. The software is based on freely available software and is released under the GNU General Public License.
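
    A minimal sketch of the genetic-algorithm idea, assuming a simplified fitness function (balanced transition counts between conditions) rather than the published software's actual optimization criteria.

```python
# Minimal sketch: a genetic algorithm that searches for a trial order with
# balanced transitions between conditions (stand-in fitness function).
import random
from collections import Counter

def fitness(order, conditions):
    """Penalize unequal transition counts between successive conditions."""
    transitions = Counter((conditions[a], conditions[b]) for a, b in zip(order, order[1:]))
    counts = list(transitions.values())
    return -(max(counts) - min(counts))           # 0 is best (perfectly balanced)

def mutate(order):
    i, j = random.sample(range(len(order)), 2)
    child = order[:]
    child[i], child[j] = child[j], child[i]
    return child

def evolve(conditions, pop_size=50, generations=200):
    trials = list(range(len(conditions)))
    pop = [random.sample(trials, len(trials)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, conditions), reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=lambda o: fitness(o, conditions))

conditions = ["prime", "probe"] * 20              # 40 trials, two conditions
best = evolve(conditions)
print(fitness(best, conditions), best[:10])
```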

  15. Development of the Oxford Participation and Activities Questionnaire: constructing an item pool

    PubMed Central

    Kelly, Laura; Jenkinson, Crispin; Dummett, Sarah; Dawson, Jill; Fitzpatrick, Ray; Morley, David

    2015-01-01

    Purpose The Oxford Participation and Activities Questionnaire is a patient-reported outcome measure in development that is grounded on the World Health Organization International Classification of Functioning, Disability, and Health (ICF). The study reported here aimed to inform and generate an item pool for the new measure, which is specifically designed for the assessment of participation and activity in patients experiencing a range of health conditions. Methods Items were informed through in-depth interviews conducted with 37 participants spanning a range of conditions. Interviews aimed to identify how their condition impacted their ability to participate in meaningful activities. Conditions included arthritis, cancer, chronic back pain, diabetes, motor neuron disease, multiple sclerosis, Parkinson’s disease, and spinal cord injury. Transcripts were analyzed using the framework method. Statements relating to ICF themes were recast as questionnaire items and shown for review to an expert panel. Cognitive debrief interviews (n=13) were used to assess items for face and content validity. Results ICF themes relevant to activities and participation in everyday life were explored, and a total of 222 items formed the initial item pool. This item pool was refined by the research team and 28 generic items were mapped onto all nine chapters of the ICF construct, detailing activity and participation. Cognitive interviewing confirmed the questionnaire instructions, items, and response options were acceptable to participants. Conclusion Using a clear conceptual basis to inform item generation, 28 items have been identified as suitable to undergo further psychometric testing. A large-scale postal survey will follow in order to refine the instrument further and to assess its psychometric properties. The final instrument is intended for use in clinical trials and interventions targeted at maintaining or improving activity and participation. PMID:26056503

  16. Development of the Oxford Participation and Activities Questionnaire: constructing an item pool.

    PubMed

    Kelly, Laura; Jenkinson, Crispin; Dummett, Sarah; Dawson, Jill; Fitzpatrick, Ray; Morley, David

    2015-01-01

    The Oxford Participation and Activities Questionnaire is a patient-reported outcome measure in development that is grounded on the World Health Organization International Classification of Functioning, Disability, and Health (ICF). The study reported here aimed to inform and generate an item pool for the new measure, which is specifically designed for the assessment of participation and activity in patients experiencing a range of health conditions. Items were informed through in-depth interviews conducted with 37 participants spanning a range of conditions. Interviews aimed to identify how their condition impacted their ability to participate in meaningful activities. Conditions included arthritis, cancer, chronic back pain, diabetes, motor neuron disease, multiple sclerosis, Parkinson's disease, and spinal cord injury. Transcripts were analyzed using the framework method. Statements relating to ICF themes were recast as questionnaire items and shown for review to an expert panel. Cognitive debrief interviews (n=13) were used to assess items for face and content validity. ICF themes relevant to activities and participation in everyday life were explored, and a total of 222 items formed the initial item pool. This item pool was refined by the research team and 28 generic items were mapped onto all nine chapters of the ICF construct, detailing activity and participation. Cognitive interviewing confirmed the questionnaire instructions, items, and response options were acceptable to participants. Using a clear conceptual basis to inform item generation, 28 items have been identified as suitable to undergo further psychometric testing. A large-scale postal survey will follow in order to refine the instrument further and to assess its psychometric properties. The final instrument is intended for use in clinical trials and interventions targeted at maintaining or improving activity and participation.

  17. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowling, Jason A., E-mail: jason.dowling@csiro.au; University of Newcastle, Callaghan, New South Wales; Sun, Jidi

    Purpose: To validate automatic substitute computed tomography (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation, a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic sCT generation methods using standard MR sequences generate realistic contours and electron densities for prostate cancer radiation therapy dose planning and digitally reconstructed radiograph generation.
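
    A minimal sketch of local weighted voting for sCT fusion, assuming simple per-voxel Gaussian intensity weights and synthetic arrays rather than the paper's patch-based scheme and registered patient data.

```python
# Minimal sketch of local weighted voting for substitute-CT generation
# (illustrative per-voxel Gaussian weights; all arrays are synthetic).
import numpy as np

def local_weighted_voting(patient_mr, atlas_mrs, atlas_cts, sigma=25.0):
    """Fuse registered atlas CTs into a substitute CT, voxel by voxel."""
    weights = [np.exp(-((patient_mr - mr) ** 2) / (2 * sigma ** 2)) for mr in atlas_mrs]
    weights = np.stack(weights)                     # (n_atlases, *volume_shape)
    weights /= weights.sum(axis=0, keepdims=True)   # normalize per voxel
    return (weights * np.stack(atlas_cts)).sum(axis=0)

rng = np.random.default_rng(1)
patient = rng.normal(100, 20, (8, 8, 8))            # fake T2w intensities
atl_mr = [patient + rng.normal(0, 10, patient.shape) for _ in range(3)]
atl_ct = [rng.normal(0, 200, patient.shape) for _ in range(3)]
sct = local_weighted_voting(patient, atl_mr, atl_ct)
print(sct.shape, float(sct.mean()))
```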

  18. Generation and associative encoding in young and old adults: the effect of the strength of association between cues and targets on a cued recall task.

    PubMed

    Taconnat, Laurence; Froger, Charlotte; Sacher, Mathilde; Isingrini, Michel

    2008-01-01

    The generation effect (i.e., better recall of the generated items than the read items) was investigated with a between-list design in young and elderly participants. The generation task difficulty was manipulated by varying the strength of association between cues and targets. Overall, strong associates were better recalled than weak associates. However, the results showed different generation effect patterns according to strength of association and age, with a greater generation effect for weak associates in younger adults only. These findings suggest that generating weak associates leads to more elaborated encoding, but that elderly adults cannot use this elaborated encoding as well as younger adults to recall the target words at test.

  19. XML-Based Generator of C++ Code for Integration With GUIs

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increases in susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit for XML is storing input data in a structured manner. More importantly, XML allows not just storing of data but also describing what each of the data items are. That XML file contains information useful for rendering the data by other applications. It also then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: 1. As an executable, it generates the corresponding C++ classes and 2. As a library, it automatically fills the objects with the input data values.
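
    A minimal sketch of the XML-to-code idea, assuming a hypothetical XML schema and an emitted C++ snippet; it is not the described tool's input format or output.

```python
# Minimal sketch: parse a small XML input specification and emit a C++ data
# structure (schema and generated code are hypothetical).
import xml.etree.ElementTree as ET

XML_SPEC = """
<inputs name="SimulationInput">
  <param name="temperature" type="double" default="300.0"/>
  <param name="steps" type="int" default="1000"/>
</inputs>
"""

def generate_cpp(xml_text):
    root = ET.fromstring(xml_text)
    lines = [f"struct {root.get('name')} {{"]
    for p in root.findall("param"):
        lines.append(f"    {p.get('type')} {p.get('name')} = {p.get('default')};")
    lines.append("};")
    return "\n".join(lines)

print(generate_cpp(XML_SPEC))
```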

  20. Categorical and associative relations increase false memory relative to purely associative relations.

    PubMed

    Coane, Jennifer H; McBride, Dawn M; Termonen, Miia-Liisa; Cutting, J Cooper

    2016-01-01

    The goal of the present study was to examine the contributions of associative strength and similarity in terms of shared features to the production of false memories in the Deese/Roediger-McDermott list-learning paradigm. Whereas the activation/monitoring account suggests that false memories are driven by automatic associative activation from list items to nonpresented lures, combined with errors in source monitoring, other accounts (e.g., fuzzy trace theory, global-matching models) emphasize the importance of semantic-level similarity, and thus predict that shared features between list and lure items will increase false memory. Participants studied lists of nine items related to a nonpresented lure. Half of the lists consisted of items that were associated but did not share features with the lure, and the other half included items that were equally associated but also shared features with the lure (in many cases, these were taxonomically related items). The two types of lists were carefully matched in terms of a variety of lexical and semantic factors, and the same lures were used across list types. In two experiments, false recognition of the critical lures was greater following the study of lists that shared features with the critical lure, suggesting that similarity at a categorical or taxonomic level contributes to false memory above and beyond associative strength. We refer to this phenomenon as a "feature boost" that reflects additive effects of shared meaning and association strength and is generally consistent with accounts of false memory that have emphasized thematic or feature-level similarity among studied and nonstudied representations.

  1. Getting the Most from the Twin Mars Rovers

    NASA Technical Reports Server (NTRS)

    Laufenberg, Larry

    2003-01-01

    The report discusses the Mixed-initiative Activity Planning GENerator (MARGEN) automatically generates activity plans for rovers. Decision support system mixes autonomous planning/scheduling with user modifications. Accommodating change. Technology spotlight

  2. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    PubMed

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separated blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  3. Computer Applications in Teaching and Learning.

    ERIC Educational Resources Information Center

    Halley, Fred S.; And Others

    Some examples of the usage of computers in teaching and learning are examination generation, automatic exam grading, student tracking, problem generation, computational examination generators, program packages, simulation, and programing skills for problem solving. These applications are non-trivial and do fulfill the basic assumptions necessary…

  4. Using the Item Response Theory (IRT) for Educational Evaluation through Games

    ERIC Educational Resources Information Center

    Euzébio Batista, Marcelo Henrique; Victória Barbosa, Jorge Luis; da Rosa Tavares, João Elison; Hackenhaar, Jonathan Luis

    2013-01-01

    This article shows the application of Item Response Theory (IRT) for educational evaluation using games. The article proposes a computational model to create user profiles, called Psychometric Profile Generator (PPG). PPG uses the IRT mathematical model for exploring the levels of skills and behaviors in the form of items and/or stimuli. The model…
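
    For reference, a minimal sketch of the two-parameter logistic IRT response function that such profile generators typically build on; the parameter values are illustrative, and the PPG model itself is not reproduced here.

```python
# Minimal sketch of the 2-parameter-logistic (2PL) IRT model.
import math

def p_correct(theta, a, b):
    """Probability of a correct response for ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

for theta in (-2, 0, 2):
    print(theta, round(p_correct(theta, a=1.2, b=0.5), 3))
```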

  5. Multi-Item Direct Behavior Ratings: Dependability of Two Levels of Assessment Specificity

    ERIC Educational Resources Information Center

    Volpe, Robert J.; Briesch, Amy M.

    2015-01-01

    Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment…

  6. Bubble vector in automatic merging

    NASA Technical Reports Server (NTRS)

    Pamidi, P. R.; Butler, T. G.

    1987-01-01

    It is shown that it is within the capability of the DMAP language to build a set of vectors that can grow incrementally to be applied automatically and economically within a DMAP loop that serves to append sub-matrices that are generated within a loop to a core matrix. The method of constructing such vectors is explained.

  7. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor

    ERIC Educational Resources Information Center

    Rus, Vasile; Lintean, Mihai; Azevedo, Roger

    2009-01-01

    This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…

  8. Automatic Identification and Organization of Index Terms for Interactive Browsing.

    ERIC Educational Resources Information Center

    Wacholder, Nina; Evans, David K.; Klavans, Judith L.

    The potential of automatically generated indexes for information access has been recognized for several decades, but the quantity of text and the ambiguity of natural language processing have made progress at this task more difficult than was originally foreseen. Recently, a body of work on development of interactive systems to support phrase…

  9. Automatic Generation of Customized, Model Based Information Systems for Operations Management.

    DTIC Science & Technology

    The paper discusses the need for developing a customized, model based system to support management decision making in the field of operations ... management. It provides a critique of the current approaches available, formulates a framework to classify logistics decisions, and suggests an approach for the automatic development of logistics systems. (Author)

  10. Context reinstatement and memory for intrinsic versus extrinsic context: the role of item generation at encoding or retrieval.

    PubMed

    Nieznański, Marek

    2014-10-01

    According to many theoretical accounts, reinstating study context at the time of test creates optimal circumstances for item retrieval. The role of context reinstatement was tested in reference to context memory in several experiments. In the encoding phase, participants were presented with words printed in two different font colors (intrinsic context) or on two different sides of the computer screen (extrinsic context). At test, the context was reinstated or changed and participants were asked to recognize words and recollect their study context. Moreover, a read-generate manipulation was introduced at encoding and retrieval, which was intended to influence the relative salience of item and context information. The results showed that context reinstatement had no effect on memory for extrinsic context but affected memory for intrinsic context when the item was generated at encoding and read at test. These results supported the hypothesis that context information is reconstructed at retrieval only when context was poorly encoded at study. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  11. Creative Test Generators

    ERIC Educational Resources Information Center

    Vickers, F. D.

    1973-01-01

    A brief description of a test-generating program which generates questions concerning the Fortran programming language in a random but guided fashion and without resorting to an item bank. (Author/AK)

  12. Convolutional neural networks for an automatic classification of prostate tissue slides with high-grade Gleason score

    NASA Astrophysics Data System (ADS)

    Jiménez del Toro, Oscar; Atzori, Manfredo; Otálora, Sebastian; Andersson, Mats; Eurén, Kristian; Hedlund, Martin; Rönnquist, Peter; Müller, Henning

    2017-03-01

    The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer-aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the daily generated large amounts of pathology data. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions-of-interest rather than from manually segmented ones. Relevant parameters for training the deep learning model, such as the size and number of patches as well as the inclusion or exclusion of data augmentation, are compared between the tested deep learning architectures. A total of 235 prostate tissue WSIs with their pathology report from the publicly available TCGA data set were used. An accuracy of 78% was obtained in a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks for big data sets and for guiding the visual inspection of these images.
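
    A minimal sketch of one step described above, sampling fixed-size training patches whose centres fall inside automatically generated regions of interest; the arrays are synthetic and the function is not the authors' pipeline or tied to the TCGA data.

```python
# Minimal sketch: sample fixed-size training patches from an ROI mask.
import numpy as np

def sample_patches(slide, roi_mask, patch=64, n=16, seed=0):
    """Sample n patches whose centres lie inside the ROI mask."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(roi_mask)
    half = patch // 2
    keep = (ys > half) & (ys < slide.shape[0] - half) & \
           (xs > half) & (xs < slide.shape[1] - half)
    ys, xs = ys[keep], xs[keep]
    idx = rng.choice(len(ys), size=min(n, len(ys)), replace=False)
    return [slide[y - half:y + half, x - half:x + half] for y, x in zip(ys[idx], xs[idx])]

slide = np.random.rand(512, 512)                 # synthetic stand-in for a WSI tile
roi = np.zeros((512, 512), dtype=bool)
roi[200:300, 200:300] = True                     # synthetic region of interest
patches = sample_patches(slide, roi)
print(len(patches), patches[0].shape)
```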

  13. Components of a Measure to Describe Organizational Culture in Academic Pharmacy

    PubMed Central

    Rosenthal, Meagen; Holmes, Erin R.; Andrews, Brienna; Lui, Julia; Raja, Leela

    2017-01-01

    Objective. To develop a measure of organizational culture in academic pharmacy and identify characteristics of an academic pharmacy program that would be impactful for internal (eg, students, employees) and external (eg, preceptors, practitioners) clients of the program. Methods. A three-round Delphi procedure of 24 panelists from pharmacy schools in the U.S. and Canada generated items based on the Organizational Culture Profile (OCP), which were then evaluated and refined for inclusion in subsequent rounds. Items were assessed for appropriateness and impact. Results. The panel produced 35 items across six domains that measured organizational culture in academic pharmacy: competitiveness, performance orientation, social responsibility, innovation, emphasis on collegial support, and stability. Conclusion. The items generated require testing for validation and reliability in a large sample to finalize this measure of organizational culture. PMID:29367768

  14. Orienting attention within visual short-term memory: development and mechanisms.

    PubMed

    Shimi, Andria; Nobre, Anna C; Astle, Duncan; Scerif, Gaia

    2014-01-01

    How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to encoding or during maintenance. Cues improved memory regardless of their position, but younger children benefited less from cues presented during maintenance, and these benefits related to VSTM span over and above basic memory in uncued trials. In Experiment 2, cues of low validity eliminated benefits, suggesting that even the youngest children use cues voluntarily, rather than automatically. These findings elucidate the close coupling between developing visuospatial attentional control and VSTM. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  15. Space shuttle environmental control/life support systems

    NASA Technical Reports Server (NTRS)

    1972-01-01

    This study analyzes and defines a baseline Environmental Control/Life Support System (EC/LSS) for a four-man, seven-day orbital shuttle. In addition, the impact of various mission parameters, crew size, mission length, etc. are examined for their influence on the selected system. Pacing technology items are identified to serve as a guide for application of effort to enhance the total system optimization. A fail safe-fail operation philosophy was utilized in designing the system. This has resulted in a system that requires only one daily routine operation. All other critical item malfunctions are automatically resolved by switching to redundant modes of operation. As a result of this study, it is evident that a practical, flexible, simple and long life, EC/LSS can be designed and manufactured for the shuttle orbiter within the time phase required.

  16. Analysis of Content Shared in Online Cancer Communities: Systematic Review

    PubMed Central

    van de Poll-Franse, Lonneke V; Krahmer, Emiel; Verberne, Suzan; Mols, Floortje

    2018-01-01

    Background The content that cancer patients and their relatives (ie, posters) share in online cancer communities has been researched in various ways. In the past decade, researchers have used automated analysis methods in addition to manual coding methods. Patients, providers, researchers, and health care professionals can learn from experienced patients, provided that their experience is findable. Objective The aim of this study was to systematically review all relevant literature that analyzes user-generated content shared within online cancer communities. We reviewed the quality of available research and the kind of content that posters share with each other on the internet. Methods A computerized literature search was performed via PubMed (MEDLINE), PsycINFO (5 and 4 stars), Cochrane Central Register of Controlled Trials, and ScienceDirect. The last search was conducted in July 2017. Papers were selected if they included the following terms: (cancer patient) and (support group or health communities) and (online or internet). We selected 27 papers and then subjected them to a 14-item quality checklist independently scored by 2 investigators. Results The methodological quality of the selected studies varied: 16 were of high quality and 11 were of adequate quality. Of those 27 studies, 15 were manually coded, 7 automated, and 5 used a combination of methods. The best results can be seen in the papers that combined both analytical methods. The number of analyzed posts ranged from 200 to 1,500,000; the number of analyzed posters ranged from 75 to 90,000. The studies analyzing large numbers of posts mainly related to breast cancer, whereas those analyzing small numbers were related to other types of cancers. A total of 12 studies involved some or entirely automatic analysis of the user-generated content. All the authors referred to two main content categories: informational support and emotional support. In all, 15 studies reported only on the content, 6 studies explicitly reported on content and social aspects, and 6 studies focused on emotional changes. Conclusions In the future, increasing amounts of user-generated content will become available on the internet. The results of content analysis, especially of the larger studies, give detailed insights into patients’ concerns and worries, which can then be used to improve cancer care. To make the results of such analyses as usable as possible, automatic content analysis methods will need to be improved through interdisciplinary collaboration. PMID:29615384

  17. Generating Text from Functional Brain Images

    PubMed Central

    Pereira, Francisco; Detre, Greg; Botvinick, Matthew

    2011-01-01

    Recent work has shown that it is possible to take brain images acquired during viewing of a scene and reconstruct an approximation of the scene from those images. Here we show that it is also possible to generate text about the mental content reflected in brain images. We began with images collected as participants read names of concrete items (e.g., “Apartment”) while also seeing line drawings of the item named. We built a model of the mental semantic representation of concrete concepts from text data and learned to map aspects of such representation to patterns of activation in the corresponding brain image. In order to validate this mapping, without accessing information about the items viewed for left-out individual brain images, we were able to generate from each one a collection of semantically pertinent words (e.g., “door,” “window” for “Apartment”). Furthermore, we show that the ability to generate such words allows us to perform a classification task and thus validate our method quantitatively. PMID:21927602

  18. Prospective clinical validation of independent DVH prediction for plan QA in automatic treatment planning for prostate cancer patients.

    PubMed

    Wang, Yibing; Heijmen, Ben J M; Petit, Steven F

    2017-12-01

    To prospectively investigate the use of an independent DVH prediction tool to detect outliers in the quality of fully automatically generated treatment plans for prostate cancer patients. A plan QA tool was developed to predict rectum, anus and bladder DVHs, based on overlap volume histograms and principal component analysis (PCA). The tool was trained with 22 automatically generated, clinical plans, and independently validated with 21 plans. Its use was prospectively investigated for 50 new plans by replanning in case of detected outliers. For rectum Dmean, V65Gy, V75Gy, anus Dmean, and bladder Dmean, the difference between predicted and achieved was within 0.4 Gy or 0.3% (SD within 1.8 Gy or 1.3%). Thirteen detected outliers were re-planned, leading to moderate but statistically significant improvements (mean, max): rectum Dmean (1.3 Gy, 3.4 Gy), V65Gy (2.7%, 4.2%), anus Dmean (1.6 Gy, 6.9 Gy), and bladder Dmean (1.5 Gy, 5.1 Gy). The rectum V75Gy of the new plans slightly increased (0.2%, p = 0.087). A high-accuracy DVH prediction tool was developed and used for independent QA of automatically generated plans. In 28% of plans, minor dosimetric deviations were observed that could be improved by plan adjustments. Larger gains are expected for manually generated plans. Copyright © 2017 Elsevier B.V. All rights reserved.
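
    A minimal sketch of the general idea (geometry-derived features reduced by PCA and regressed onto DVH points), using synthetic data and scikit-learn; the feature definition and model are assumptions, not the clinical tool.

```python
# Minimal sketch: overlap-volume-histogram features -> PCA -> regression of DVH
# points, on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_plans, n_ovh_bins, n_dvh_points = 22, 40, 20

# Features: OVH bins per plan; targets: achieved DVH points (synthetic here).
ovh = rng.random((n_plans, n_ovh_bins))
dvh = np.cumsum(rng.random((n_plans, n_dvh_points)), axis=1)

model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(ovh, dvh)

new_plan_ovh = rng.random((1, n_ovh_bins))
predicted_dvh = model.predict(new_plan_ovh)[0]
print(predicted_dvh[:5])      # predicted DVH points; compare to achieved DVH for QA
```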

  19. Using Explanatory Item Response Models to Evaluate Complex Scientific Tasks Designed for the Next Generation Science Standards

    NASA Astrophysics Data System (ADS)

    Chiu, Tina

    This dissertation includes three studies that analyze a new set of assessment tasks developed by the Learning Progressions in Middle School Science (LPS) Project. These assessment tasks were designed to measure science content knowledge on the structure of matter domain and scientific argumentation, while following the goals from the Next Generation Science Standards (NGSS). The three studies focus on the evidence available for the success of this design and its implementation, generally labelled as "validity" evidence. I use explanatory item response models (EIRMs) as the overarching framework to investigate these assessment tasks. These models can be useful when gathering validity evidence for assessments as they can help explain student learning and group differences. In the first study, I explore the dimensionality of the LPS assessment by comparing the fit of unidimensional, between-item multidimensional, and Rasch testlet models to see which is most appropriate for this data. By applying multidimensional item response models, multiple relationships can be investigated, and in turn, allow for a more substantive look into the assessment tasks. The second study focuses on person predictors through latent regression and differential item functioning (DIF) models. Latent regression models show the influence of certain person characteristics on item responses, while DIF models test whether one group is differentially affected by specific assessment items, after conditioning on latent ability. Finally, the last study applies the linear logistic test model (LLTM) to investigate whether item features can help explain differences in item difficulties.

  20. Preservation of memory-based automaticity in reading for older adults.

    PubMed

    Rawson, Katherine A; Touron, Dayna R

    2015-12-01

    Concerning age-related effects on cognitive skill acquisition, the modal finding is that older adults do not benefit from practice to the same extent as younger adults in tasks that afford a shift from slower algorithmic processing to faster memory-based processing. In contrast, Rawson and Touron (2009) demonstrated a relatively rapid shift to memory-based processing in the context of a reading task. The current research extended beyond this initial study to provide more definitive evidence for relative preservation of memory-based automaticity in reading tasks for older adults. Younger and older adults read short stories containing unfamiliar noun phrases (e.g., skunk mud) followed by disambiguating information indicating the combination's meaning (either the normatively dominant meaning or an alternative subordinate meaning). Stories were repeated across practice blocks, and then the noun phrases were presented in novel sentence frames in a transfer task. Both age groups shifted from computation to retrieval after relatively few practice trials (as evidenced by convergence of reading times for dominant and subordinate items). Most important, both age groups showed strong evidence for memory-based processing of the noun phrases in the transfer task. In contrast, older adults showed minimal shifting to retrieval in an alphabet arithmetic task, indicating that the preservation of memory-based automaticity in reading was task-specific. Discussion focuses on important implications for theories of memory-based automaticity in general and for specific theoretical accounts of age effects on memory-based automaticity, as well as fruitful directions for future research. (c) 2015 APA, all rights reserved.

  1. Automatic Hidden-Web Table Interpretation by Sibling Page Comparison

    NASA Astrophysics Data System (ADS)

    Tao, Cui; Embley, David W.

    The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains—car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
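
    The core sibling-comparison idea lends itself to a toy illustration; the sketch below is not the published system (which also generates and adjusts structure patterns), but it shows the basic contrast: cells that stay constant across aligned sibling tables are treated as category labels, while cells that vary are treated as data values.

```python
# Toy sketch of sibling-page comparison: constant cells -> category labels,
# varying cells -> data values. Tables are lists of rows of equal shape.
def classify_cells(table_a, table_b):
    labels, values = [], []
    for r, (row_a, row_b) in enumerate(zip(table_a, table_b)):
        for c, (cell_a, cell_b) in enumerate(zip(row_a, row_b)):
            (labels if cell_a == cell_b else values).append((r, c, cell_a, cell_b))
    return labels, values

# Two hypothetical sibling pages from a car-advertisement site:
car_page_1 = [["Make", "Toyota"], ["Year", "2004"], ["Price", "$7,995"]]
car_page_2 = [["Make", "Honda"],  ["Year", "2001"], ["Price", "$4,500"]]
labels, values = classify_cells(car_page_1, car_page_2)
# labels -> the nonvarying cells ("Make", "Year", "Price")
# values -> the varying cells, i.e. the data to extract
```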

  2. Automatic Generation of Wide Dynamic Range Image without Pseudo-Edge Using Integration of Multi-Steps Exposure Images

    NASA Astrophysics Data System (ADS)

    Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi

    Digital cameras have been advancing rapidly in recent years. However, the captured image still differs from the sight image perceived when the same scene is viewed with the naked eye. When a scene with a wide dynamic range is photographed, the captured image contains blown-out highlights and crushed blacks, whereas these problems hardly occur in the sight image; this is a major cause of the difference between the two. Blown-out highlights and crushed blacks arise because the dynamic range of the image sensor in a digital camera, such as a CCD or CMOS sensor, is narrower than that of the human visual system. To solve this problem, we propose an automatic method that decides an effective exposure range based on the superposition of edges and integrates multi-step exposure images accordingly. In addition, pseudo-edges are suppressed by a process that blends exposure values. As a result, a pseudo wide dynamic range image is obtained automatically.
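
    As a rough illustration of integrating multi-step exposures, the sketch below blends grayscale exposures weighted by local edge strength. It is a simplification invented for illustration and omits the paper's exposure-range selection and pseudo-edge suppression steps.

```python
# Simplified edge-weighted integration of multiple exposures of one scene.
import numpy as np

def edge_strength(img):
    """Gradient-magnitude map used as a per-pixel weight."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) + 1e-6            # small offset avoids zero weights

def fuse_exposures(images):
    """Blend grayscale exposures according to their local edge strength."""
    weights = np.stack([edge_strength(im) for im in images])
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * np.stack([im.astype(float) for im in images])).sum(axis=0)

# Usage with three hypothetical exposures of the same scene:
exposures = [np.random.rand(64, 64) * s for s in (0.3, 1.0, 3.0)]
fused = fuse_exposures(exposures)
```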

  3. Development of a quality of life instrument for children with advanced cancer: the pediatric advanced care quality of life scale (PAC-QoL).

    PubMed

    Cataudella, Danielle; Morley, Tara Elise; Nesin, April; Fernandez, Conrad V; Johnston, Donna Lynn; Sung, Lillian; Zelcer, Shayna

    2014-10-01

    There are currently no published, validated measures available that comprehensively capture quality of life (QoL) symptoms for children with poor-prognosis malignancies. The pediatric advanced care-quality of life scale (PAC-QoL) has been developed to address this gap. The current paper describes the first two phases in the development of this measure: (1) construct and item generation, and (2) preliminary content validation. Domains of QoL relevant to this population were identified from the literature and items generated to capture each; items were then adapted to create versions sensitive to age/developmental differences. Two types of experts reviewed the draft PAC-QoL and rated items for relevance, understandability, and sensitivity of wording: bereaved parents (n = 8) and health care professionals (HCP; n = 7). Content validity was calculated using the index of content validity (CVI [Lynn, Nurs Res 1986;35:382-385]). One hundred and forty-one candidate items congruent with the domains identified as relevant to children with advanced malignancies were generated, and four report versions with a 5-choice response scale were created. Parent mean scores for importance, understandability, and sensitivity of wording ranged from 4.29 (SD = 0.52) to 4.66 (SD = 0.50). The CVI ranged from 95% to 100%. These steps resulted in reductions of the PAC-QoL to 57-65 items, as well as a modification of the response scale to a 4-choice option with new anchors. The next phase of this study will be to conduct cognitive probing with the intended population to further modify and reduce candidate items prior to psychometric evaluation. © 2014 Wiley Periodicals, Inc.
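
    The item-level content validity index referenced above is, per Lynn (1986), typically the proportion of experts rating an item as relevant (e.g., 3 or 4 on a 4-point relevance scale). The sketch below illustrates that calculation with invented ratings, not the study's data.

```python
# Item-level CVI (Lynn, 1986): proportion of experts rating the item as
# relevant. The ratings below are made up for illustration only.
def item_cvi(ratings, relevant=(3, 4)):
    return sum(r in relevant for r in ratings) / len(ratings)

expert_ratings = [4, 4, 3, 4, 4, 3, 4, 4]          # 8 experts, one candidate item
print(f"CVI = {item_cvi(expert_ratings):.0%}")     # -> CVI = 100%
```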

  4. Automated consensus contour building for prostate MRI.

    PubMed

    Khalvati, Farzad

    2014-01-01

    Inter-observer variability is the lack of agreement among clinicians in contouring a given organ or tumour in a medical image. The variability in medical image contouring is a source of uncertainty in radiation treatment planning. Consensus contour of a given case, which was proposed to reduce the variability, is generated by combining the manually generated contours of several clinicians. However, having access to several clinicians (e.g., radiation oncologists) to generate a consensus contour for one patient is costly. This paper presents an algorithm that automatically generates a consensus contour for a given case using the atlases of different clinicians. The algorithm was applied to prostate MR images of 15 patients manually contoured by 5 clinicians. The automatic consensus contours were compared to manual consensus contours where a median Dice similarity coefficient (DSC) of 88% was achieved.
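
    Two building blocks referenced in the abstract can be illustrated compactly: a majority-vote consensus of binary contour masks and the Dice similarity coefficient (DSC) used to compare contours. The sketch below uses random masks and no atlas registration; it is not the paper's atlas-based algorithm.

```python
# Majority-vote consensus of binary contour masks and the DSC metric.
import numpy as np

def consensus(masks):
    """Majority vote over binary masks of identical shape."""
    stack = np.stack(masks).astype(int)
    return (stack.sum(axis=0) >= (len(masks) + 1) // 2).astype(int)

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Five hypothetical observer masks for one image slice:
observers = [np.random.rand(128, 128) > 0.5 for _ in range(5)]
manual_consensus = consensus(observers)
print(f"DSC vs. observer 1: {dice(manual_consensus, observers[0]):.2f}")
```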

  5. Temporal Cyber Attack Detection.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, Joey Burton; Draelos, Timothy J.; Galiardi, Meghan

    Rigorous characterization of the performance and generalization ability of cyber defense systems is extremely difficult, making it hard to gauge uncertainty, and thus, confidence. This difficulty largely stems from a lack of labeled attack data that fully explores the potential adversarial space. Currently, performance of cyber defense systems is typically evaluated in a qualitative manner by manually inspecting the results of the system on live data and adjusting as needed. Additionally, machine learning has shown promise in deriving models that automatically learn indicators of compromise that are more robust than analyst-derived detectors. However, to generate these models, most algorithms require large amounts of labeled data (i.e., examples of attacks). Algorithms that do not require annotated data to derive models are similarly at a disadvantage, because labeled data is still necessary when evaluating performance. In this work, we explore the use of temporal generative models to learn cyber attack graph representations and automatically generate data for experimentation and evaluation. Training and evaluating cyber systems and machine learning models requires significant, annotated data, which is typically collected and labeled by hand for one-off experiments. Automatically generating such data helps derive/evaluate detection models and ensures reproducibility of results. Experimentally, we demonstrate the efficacy of generative sequence analysis techniques on learning the structure of attack graphs, based on a realistic example. These derived models can then be used to generate more data. Additionally, we provide a roadmap for future research efforts in this area.
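
    The general idea of learning sequence structure and then sampling synthetic attack data can be illustrated with a deliberately simple first-order Markov chain over attack events. The report's temporal generative models are more elaborate, and the event labels below are invented.

```python
# Minimal illustration: fit a first-order Markov chain over attack events and
# sample synthetic sequences for experimentation and evaluation.
import random
from collections import defaultdict

def fit_markov(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def sample(model, start, length=5):
    seq = [start]
    while len(seq) < length and seq[-1] in model:
        nxt = model[seq[-1]]
        seq.append(random.choices(list(nxt), weights=nxt.values())[0])
    return seq

# Hypothetical labeled attack sequences:
observed = [["recon", "exploit", "persist", "exfiltrate"],
            ["recon", "exploit", "lateral_move", "exfiltrate"]]
model = fit_markov(observed)
print(sample(model, "recon"))
```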

  6. 7 CFR 1710.251 - Construction work plans-distribution borrowers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... generation facilities; (11) Load management equipment, automatic sectionalizing facilities, and centralized... transmission plant, and improvements, replacements, and retirements of any generation plant. Construction of new generation capacity need not be included in a CWP but must be specified and supported by specific engineering...

  7. Cerebellum engages in automation of verb-generation skill.

    PubMed

    Yang, Zhi; Wu, Paula; Weng, Xuchu; Bandettini, Peter A

    2014-03-01

    Numerous studies have shown cerebellar involvement in item-specific association, a form of explicit learning. However, very few have demonstrated cerebellar participation in automation of non-motor cognitive tasks. Applying fMRI to a repeated verb-generation task, we sought to distinguish cerebellar involvement in learning of item-specific noun-verb association and automation of verb generation skill. The same set of nouns was repeated in six verb-generation blocks so that subjects practiced generating verbs for the nouns. The practice was followed by a novel block with a different set of nouns. The cerebellar vermis (IV/V) and the right cerebellar lobule VI showed decreased activation following practice; activation in the right cerebellar Crus I was significantly lower in the novel challenge than in the initial verb-generation task. Furthermore, activation in this region during well-practiced blocks strongly correlated with improvement of behavioral performance in both the well-practiced and the novel blocks, suggesting its role in the learning of general mental skills not specific to the practiced noun-verb pairs. Therefore, the cerebellum processes both explicit verbal associative learning and automation of cognitive tasks. Different cerebellar regions predominate in this processing: lobule VI during the acquisition of item-specific association, and Crus I during automation of verb-generation skills through practice.

  8. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
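
    The benefit measure defined in the abstract can be written compactly (SRTs are in dB SNR, and lower SRTs indicate better speech reception, so a positive benefit means the subtitles helped):

```latex
% Audiovisual benefit as defined in the abstract (SRTs in dB SNR):
\begin{equation}
  \mathrm{benefit}_{\mathrm{AV}} = \mathrm{SRT}_{\mathrm{auditory}} - \mathrm{SRT}_{\mathrm{audiovisual}}
\end{equation}
```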

  9. The flavor-locked flavorful two Higgs doublet model

    NASA Astrophysics Data System (ADS)

    Altmannshofer, Wolfgang; Gori, Stefania; Robinson, Dean J.; Tuckler, Douglas

    2018-03-01

    We propose a new framework to generate the Standard Model (SM) quark flavor hierarchies in the context of two Higgs doublet models (2HDM). The 'flavorful' 2HDM couples the SM-like Higgs doublet exclusively to the third quark generation, while the first two generations couple exclusively to an additional source of electroweak symmetry breaking, potentially generating striking collider signatures. We synthesize the flavorful 2HDM with the 'flavor-locking' mechanism, which dynamically generates large quark mass hierarchies through a flavor-blind portal to distinct flavon and hierarchon sectors: dynamical alignment of the flavons allows a unique hierarchon to control the respective quark masses. We further develop the theoretical construction of this mechanism, and show that in the context of a flavorful 2HDM-type setup, it can automatically achieve realistic flavor structures: the CKM matrix is automatically hierarchical with |V_cb| and |V_ub| generically of the observed size. Exotic contributions to meson oscillation observables may also be generated, which may accommodate current data mildly better than the SM itself.
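
    The coupling structure described above can be written schematically for the up-type quarks. This is a generic form consistent with the abstract's description, not the paper's exact conventions or normalizations: the SM-like doublet H_1 couples only to the third generation, while the second doublet H_2 couples only to the first two.

```latex
% Schematic up-type Yukawa structure implied by the abstract (generic form,
% not the paper's exact conventions): H_1 is the SM-like doublet, H_2 the
% additional source of electroweak symmetry breaking.
\begin{equation}
  -\mathcal{L}_Y \supset
  y_t\, \bar{Q}_3 \tilde{H}_1 u_{R,3}
  + \sum_{i,j=1}^{2} \lambda^{u}_{ij}\, \bar{Q}_i \tilde{H}_2 u_{R,j}
  + (\text{down-type and lepton terms}) + \mathrm{h.c.}
\end{equation}
```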

  10. Conversion of Propellant Grade Picrite to Spherical Nitroguanidine, an Insensitive Filler for Melt-Cast TNT Formulations

    DTIC Science & Technology

    1991-09-01

    problem with the solvent/non-solvent process reported by ICT is the inability to recycle the mother liquors. Apparently "strawberries" or "sea urchins" ...inevitable for the foreseeable future. Exceptions could include lower production rate items such as sea mines or missile warheads, or specific munitions where...Leitz Orthomat 35 mm automatic camera on Polaroid film type 667, Magnification ranged up to X42. Scanning Electron Microscopy (SEM) SEM was

  11. Effect of Automatic Processing on Specification of Problem Solutions for Computer Programs.

    DTIC Science & Technology

    1981-03-01

    Number 7 ± 2" item limitation on human short-term memory capability (Miller, 1956) should be a guiding principle in program design. Yourdon and...input either a single example solution or multiple example solutions in sequence. If a participant’s P1 has a low value - near 0 - it may be concluded... Principles in Experimental Design, Winer, 1971). 55 Table 12 ANOVA Results For Performance Measure 2 Sb DF MS F Source of Variation Between Subjects

  12. Marine Corps Systems Command (MCSC) Program Executive Officer Land Systems (PEO LS) 2010 Advanced Planning Briefing to Industry (APBI) (BRIEFING CHARTS)

    DTIC Science & Technology

    2010-04-07

    Commercialization Pilot Programs – Portable Fuel Analyzer – Non-woven FR Materials – Automatic Test Equipment – Night Vision Fusion • Significant efforts – Sensing...contract with the government". Advertising material , commercial item offer, or contribution, as defined in FAR 15.601 shall not be considered to...systems through the entire lifecycle. Our portfolio includes; •Individual & crew-served weapons ranging from 9 mm handguns to 87mm mortar systems

  13. Emotion impairs extrinsic source memory--An ERP study.

    PubMed

    Mao, Xinrui; You, Yuqi; Li, Wen; Guo, Chunyan

    2015-09-01

    Substantial advancements in understanding emotional modulation of item memory notwithstanding, controversies remain as to how emotion influences source memory. Using an emotional extrinsic source memory paradigm combined with remember/know judgments and two key event-related potentials (ERPs), the FN400 (a frontal potential at 300-500 ms related to familiarity) and the LPC (a later parietal potential at 500-700 ms related to recollection), our research investigated the impact of emotion on extrinsic source memory and the underlying processes. We varied a semantic prompt (either "people" or "scene") preceding a study item to manipulate the extrinsic source. Behavioral data indicated a significant effect of emotion on "remember" responses to extrinsic source details, suggesting impaired recollection-based source memory in emotional (both positive and negative) relative to neutral conditions. In parallel, differential FN400 and LPC amplitudes (correctly remembered - incorrectly remembered sources) revealed emotion-related interference, suggesting impaired familiarity and recollection memory of extrinsic sources associated with positive or negative items. These findings thus lend support to the notion of an emotion-induced memory trade-off: while enhancing memory of central items and intrinsic/integral source details, emotion nevertheless disrupts memory of peripheral contextual details, potentially impairing both familiarity and recollection. Importantly, that positive and negative items result in comparable memory impairment suggests that arousal (vs. affective valence) plays a critical role in modulating dynamic interactions among automatic and elaborate processes involved in memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. The Validity of the 16-Item Version of the Prodromal Questionnaire (PQ-16) to Screen for Ultra High Risk of Developing Psychosis in the General Help-Seeking Population

    PubMed Central

    Ising, Helga K.; Veling, Wim; Loewy, Rachel L.; Rietveld, Marleen W.; Rietdijk, Judith; Dragt, Sara; Klaassen, Rianne M. C.; Nieman, Dorien H.; Wunderink, Lex; Linszen, Don H.; van der Gaag, Mark

    2012-01-01

    In order to bring about implementation of routine screening for psychosis risk, a brief version of the Prodromal Questionnaire (PQ; Loewy et al., 2005) was developed and tested in a general help-seeking population. We assessed a consecutive patient sample of 3533 young adults who were help-seeking for nonpsychotic disorders at the secondary mental health services in The Hague with the PQ. We performed logistic regression analyses and Chi-squared Automatic Interaction Detector decision tree analysis to shorten the original 92 items. Receiver operating characteristic curves were used to examine the psychometric properties of the PQ-16. In the general help-seeking population, a cutoff score of 6 or more positively answered items on the 16-item version of the PQ produced correct classification of Comprehensive Assessment of At-Risk Mental State (Yung et al., 2005) psychosis risk/clinical psychosis in 44% of the cases, distinguishing Comprehensive Assessment of At-Risk Mental States (CAARMS) diagnosis from no CAARMS diagnosis with high sensitivity (87%) and specificity (87%). These results were comparable to the PQ-92. The PQ-16 is a good self-report screen for use in secondary mental health care services to select subjects for interviewing for psychosis risk. The low number of items makes it quite appropriate for screening large help-seeking populations, thus enhancing the feasibility of detection and treatment of ultra high-risk patients in routine mental health services. PMID:22516147
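
    The reported operating point (a cutoff of six or more positively answered items, evaluated against the CAARMS reference) amounts to a standard sensitivity/specificity calculation. The sketch below shows that calculation on invented scores and diagnoses, not the study's data.

```python
# Toy sensitivity/specificity calculation for a screening cutoff of >= 6
# positive PQ-16 items against a reference (CAARMS) diagnosis. The scores and
# diagnoses below are invented for illustration only.
def screen_performance(scores, reference_positive, cutoff=6):
    tp = sum(s >= cutoff and d for s, d in zip(scores, reference_positive))
    fn = sum(s < cutoff and d for s, d in zip(scores, reference_positive))
    tn = sum(s < cutoff and not d for s, d in zip(scores, reference_positive))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, reference_positive))
    return tp / (tp + fn), tn / (tn + fp)      # sensitivity, specificity

scores = [2, 9, 7, 1, 6, 3, 11, 0]
caarms = [False, True, True, False, False, False, True, False]
sensitivity, specificity = screen_performance(scores, caarms)
```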

  15. Competing stimuli in the treatment of multiply controlled problem behavior during hygiene routines.

    PubMed

    Long, Ethan S; Hagopian, Louis P; Deleon, Iser G; Marhefka, Jean Marie; Resau, Dawn

    2005-01-01

    The current study describes the use of noncontingent competing stimuli in the treatment of problem behavior exhibited by three individuals during staff-assisted hygiene routines. Functional analyses revealed that particular topographies of problem behaviors appeared to be maintained by their own sensory consequences, whereas other topographies appeared to be maintained by escape from demands. Competing stimulus assessments were then conducted to identify items associated with low levels of automatically-maintained problem behavior and high levels of stimulus engagement. Stimuli associated with low levels of automatically-maintained problem behavior (competing stimuli) were then delivered noncontingently during staff-assisted hygiene routines that were problematic for each participant. In all three cases, substantial reductions in all problem behaviors were observed. These results are discussed in terms of the relative ease of this intervention and possible mechanisms underlying the effects of competing stimuli on behaviors maintained by different types of reinforcement.

  16. Geometrical pose and structural estimation from a single image for automatic inspection of filter components

    NASA Astrophysics Data System (ADS)

    Liu, Yonghuai; Rodrigues, Marcos A.

    2000-03-01

    This paper describes research on the application of machine vision techniques to a real-time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame, and provides a closed-form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.

  17. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    PubMed

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important but highly complicated and laborious task. To compare and evaluate different strategies, a framework with a standardized web-interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. Accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.
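
    The described pattern, an HTTP request with parameters that returns JSON, can be sketched from the client side. The endpoint URL and parameter names below are placeholders invented for illustration; they are not the framework's documented API.

```python
# Hypothetical client call illustrating the described request/response pattern.
import json
import urllib.parse
import urllib.request

def map_term(term, mapper="similarity"):
    """Query a (hypothetical) mapping endpoint and return the decoded JSON."""
    params = urllib.parse.urlencode({"term": term, "mapper": mapper})
    url = f"https://example.org/mapping-framework/map?{params}"   # placeholder URL
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Example (would require a real deployment of such a service):
# candidates = map_term("myocardial infarction")
```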

  18. When Does Memory Monitoring Succeed versus Fail? Comparing Item-Specific and Relational Encoding in the DRM Paradigm

    ERIC Educational Resources Information Center

    Huff, Mark J.; Bodner, Glen E.

    2013-01-01

    We compared the effects of item-specific versus relational encoding on recognition memory in the Deese-Roediger-McDermott paradigm. In Experiment 1, we directly compared item-specific and relational encoding instructions, whereas in Experiments 2 and 3 we biased pleasantness and generation tasks, respectively, toward one or the other type of…

  19. The Effect of Sequential Dependence on the Sampling Distributions of KR-20, KR-21, and Split-Halves Reliabilities.

    ERIC Educational Resources Information Center

    Sullins, Walter L.

    Five hundred dichotomously scored response patterns were generated with sequentially independent (SI) items and 500 with dependent (SD) items for each of thirty-six combinations of sampling parameters (i.e., three test lengths, three sample sizes, and four item difficulty distributions). KR-20, KR-21, and Split-Half (S-H) reliabilities were…
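
    For reference, the two internal-consistency coefficients named above have standard closed forms for a k-item dichotomously scored test:

```latex
% KR-20 and KR-21 for a k-item dichotomously scored test: p_i is the
% proportion passing item i, q_i = 1 - p_i, \bar{X} the mean total score,
% and \sigma_X^2 the total-score variance.
\begin{align}
  \mathrm{KR\text{-}20} &= \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right), &
  \mathrm{KR\text{-}21} &= \frac{k}{k-1}\left(1 - \frac{\bar{X}\,(k - \bar{X})}{k\,\sigma_X^2}\right).
\end{align}
```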

  20. Rasch Model Parameter Estimation in the Presence of a Nonnormal Latent Trait Using a Nonparametric Bayesian Approach

    ERIC Educational Resources Information Center

    Finch, Holmes; Edwards, Julianne M.

    2016-01-01

    Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
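
    The contrast investigated in the abstract can be stated compactly: standard estimation assumes a normally distributed latent trait, whereas a nonparametric Bayesian approach places a prior, commonly a Dirichlet process, on the latent-trait distribution itself. This is a generic form; the study's exact specification may differ.

```latex
% Latent-trait assumptions contrasted in the abstract (generic forms):
% standard IRT estimation assumes a normal trait, while a nonparametric
% Bayesian approach places a Dirichlet process (DP) prior on the trait
% distribution itself.
\begin{align}
  \text{standard:}\quad      & \theta_p \sim N(\mu, \sigma^2), \\
  \text{nonparametric:}\quad & \theta_p \sim G, \qquad G \sim \mathrm{DP}(\alpha,\, G_0).
\end{align}
```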
