Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... of multiple mandatory documents including: (1) a PDF fillable Applicant intake form; (2) a Microsoft Excel Workbook; (3) a Microsoft Word Narrative template; and (4) other mandatory attachments. (Applicants must use the Microsoft Word Narrative template the CDFI Fund provides; alternative templates...
A Multiple-Representation Paradigm for Document Development
1988-07-05
Write [10], Microsoft Word [99], PageMaker [4], Ventura Publisher [135], Interleaf Publishing System [78], FrameMaker [52] and more have already...processing in FrameMaker, Microsoft Word, and Ventura Publisher are all handled by a noninteractive off-line program. Direct manipulation, from the
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-23
... a separate document, our preferred file format is Microsoft Word. If you attach multiple comments (such as form letters), our preferred format is a Microsoft Excel spreadsheet. (2) By Hard Copy: Submit...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-24
... Microsoft Excel. By Hard Copy: U.S. mail or hand-delivery: Public Comments Processing, Attn: FWS-HQ-ES-2013... procedures. If you attach your comments as a separate document, our preferred file format is Microsoft Word...
Creating Printed Materials for Mathematics with a Macintosh Computer.
ERIC Educational Resources Information Center
Mahler, Philip
This document gives instructions on how to use a Macintosh computer to create printed materials for mathematics. A Macintosh computer, Microsoft Word, an object-oriented (Draw-type) art program, and a function-graphing program are capable of producing high-quality printed instructional materials for mathematics. Word 5.1 has an equation editor…
Automated software system for checking the structure and format of ACM SIG documents
NASA Astrophysics Data System (ADS)
Mirza, Arsalan Rahman; Sah, Melike
2017-04-01
Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents are automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents using OWL (Web Ontology Language). Then, the metadata is extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, the system evaluation and user study evaluations.
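For illustration only (this is not the ADFCS implementation): a minimal Python sketch of reading the OOXML structure the abstract refers to, pulling each paragraph's style name out of word/document.xml so that formatting rules could be checked against it. The file name sample.docx is a placeholder.

import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used by OOXML (.docx) files
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def paragraph_styles(path):
    """Yield (style, text) for each paragraph in a .docx file."""
    with zipfile.ZipFile(path) as docx:
        root = ET.fromstring(docx.read("word/document.xml"))
    for p in root.iter(W + "p"):
        style_elem = p.find(f"{W}pPr/{W}pStyle")
        style = style_elem.get(W + "val") if style_elem is not None else "Normal"
        text = "".join(t.text or "" for t in p.iter(W + "t"))
        yield style, text

# Example: list each paragraph's style so it can be compared against expected ACM styles.
for style, text in paragraph_styles("sample.docx"):   # "sample.docx" is a placeholder
    print(style, text[:60])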
76 FR 10405 - Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
... file in either the Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., comments may be delivered in hard copy. If hand delivered by a private party, an original [[Page 10406...
Using OpenOffice as a Portable Interface to JAVA-Based Applications
NASA Astrophysics Data System (ADS)
Comeau, T.; Garrett, B.; Richon, J.; Romelfanger, F.
2004-07-01
STScI previously used Microsoft Word and Microsoft Access, a Sybase ODBC driver, and the Adobe Acrobat PDF writer, along with a substantial amount of Visual Basic, to generate a variety of documents for the internal Space Telescope Grants Administration System (STGMS). While investigating an upgrade to Microsoft Office XP, we began considering alternatives, ultimately selecting an open source product, OpenOffice.org. This reduces the total number of products required to operate the internal STGMS system, simplifies the build system, and opens the possibility of moving to a non-Windows platform. We describe the experience of moving from Microsoft Office to OpenOffice.org, and our other internal uses of OpenOffice.org in our development environment.
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2008-01-01
There are many ways to begin a PDF document using Adobe Acrobat. The easiest and most popular way is to create the document in another application (such as Microsoft Word) and then use the Adobe Acrobat software to convert it to a PDF. In this article, the author describes how he used Acrobat's many tools in his project--an interactive…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-20
... on reh'g & compliance, 117 FERC ] 61,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D... persons an opportunity to view and/or print the contents of this document via the Internet through FERC's... document is available on eLibrary in PDF and Microsoft Word format for viewing, printing, and/or...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... form; (2) a Microsoft Excel Workbook; (3) a Microsoft Word Narrative template; and (4) other mandatory attachments. (Applicants must use the Microsoft Word Narrative template the CDFI Fund provides; alternative...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-25
... were created, such as Microsoft Excel, Microsoft Word, or Microsoft PowerPoint (``native format'')? We... (condensed) or expanded (detailed) format Export search results to Excel or PDF As noted above, system is...., Microsoft Word ``.doc'' format or non-copy protected text-searchable ``.pdf'' format)? Should submissions...
Master's Students' Perceptions of Microsoft Word for Mathematical Typesetting
ERIC Educational Resources Information Center
Loch, Birgit; Lowe, Tim W.; Mestel, Ben D.
2015-01-01
It is widely recognized that mathematical typesetting is more difficult than typesetting in most other disciplines due to the need for specialized mathematical notation and symbols. While most mathematicians type mathematical documents using LaTeX, with varying levels of proficiency, students often use other options or handwrite mathematics. Here,…
Working Together: Google Apps Goes to School
ERIC Educational Resources Information Center
Oishi, Lindsay
2007-01-01
Online collaboration and project-management tools allow people to work together without being in the same place at the same time. That is not all, however: Google Docs & Spreadsheets, for example, allows the creation of documents and spreadsheets just as in Microsoft Word and Excel, but with more collaborative capacity. Google Calendar lets…
77 FR 4891 - Technical Corrections to Commission Regulations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-01
... removes 18 CFR 4.34(b)(5)(iv) concerning new requests for water quality certification if an application to... material adverse impact on the water quality in the discharge from the project or proposed project. The... PDF and Microsoft Word format for viewing, printing, and/or downloading. To access this document in e...
BioWord: A sequence manipulation suite for Microsoft Word
2012-01-01
Background The ability to manipulate, edit and process DNA and protein sequences has rapidly become a necessary skill for practicing biologists across a wide swath of disciplines. In spite of this, most everyday sequence manipulation tools are distributed across several programs and web servers, sometimes requiring installation and typically involving frequent switching between applications. To address this problem, here we have developed BioWord, a macro-enabled self-installing template for Microsoft Word documents that integrates an extensive suite of DNA and protein sequence manipulation tools. Results BioWord is distributed as a single macro-enabled template that self-installs with a single click. After installation, BioWord will open as a tab in the Office ribbon. Biologists can then easily manipulate DNA and protein sequences using a familiar interface and minimize the need to switch between applications. Beyond simple sequence manipulation, BioWord integrates functionality ranging from dyad search and consensus logos to motif discovery and pair-wise alignment. Written in Visual Basic for Applications (VBA) as an open source, object-oriented project, BioWord allows users with varying programming experience to expand and customize the program to better meet their own needs. Conclusions BioWord integrates a powerful set of tools for biological sequence manipulation within a handy, user-friendly tab in a widely used word processing software package. The use of a simple scripting language and an object-oriented scheme facilitates customization by users and provides a very accessible educational platform for introducing students to basic bioinformatics algorithms. PMID:22676326
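BioWord itself is written in VBA inside a Word template; purely as an illustration of the kind of everyday sequence manipulation it bundles, here is a minimal Python sketch of one such operation (reverse complement).

# BioWord is VBA embedded in a Word template; this is only a Python
# illustration of one of the simple sequence operations it bundles.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(dna):
    """Return the reverse complement of a DNA sequence."""
    return dna.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGGCGTAA"))  # -> TTACGCCAT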
BioWord: a sequence manipulation suite for Microsoft Word.
Anzaldi, Laura J; Muñoz-Fernández, Daniel; Erill, Ivan
2012-06-07
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-06
... to electronic comments will be accepted in Microsoft Word, Excel, or Adobe PDF file formats only... this document, identified by the code NOAA-NMFS-2013-0150, by any of the following methods: Electronic Submissions: Submit all electronic comments via the Federal eRulemaking Portal. Go to www.regulations.gov...
Beyond Word Processing. In Microsoft Word 5.0 with Word 5.1 Addendum.
ERIC Educational Resources Information Center
Hall, Jean Marie; Yoder, Sharon
This book is designed for use with Microsoft Word 5.0; it includes an addendum covering the new features of Word 5.1. The contents of the book are designed to get individuals started using some of the more advanced techniques available in Word. Some of the ideas presented in the book will be of immediate use and others are more complex. All of the…
The Case for an Open Data Model
1998-08-01
Microsoft Word, PageMaker, and FrameMaker, and the drawing programs MacDraw, Adobe Illustrator, and Microsoft PowerPoint, use their own proprietary...needs a custom word counting tool, since no utility could work in Word and other word processors. FrameMaker for Windows does not have a word counting...supplied in 2 At least none that I could find in FrameMaker 5.5 for Windows. Another problem with
77 FR 7526 - Interpretation of Protection System Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
... reh'g & compliance, 117 FERC ] 61,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C... opportunity to view and/or print the contents of this document via the Internet through FERC's Home Page... available on eLibrary in PDF and Microsoft Word format for viewing, printing, and/or downloading. To access...
75 FR 7648 - Agency Information Collection Activities: Emergency Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-22
..., recipients, and representative payees: Braille and Microsoft Word files (on data compact discs). Current...) Braille, or (5) Microsoft Word. This call did not require OMB clearance. However, there may be respondents...
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
Knauff, Markus; Nejasmic, Jelica
2014-01-01
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors. PMID:25526083
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-29
... submitted electronically in Microsoft Excel or Word formats to [email protected] . FOR FURTHER... recommendations should be submitted electronically in Microsoft Excel or Word format. Respondents to this request...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-26
... formatted as Microsoft Word. Please make reference to CDC-2013-0021 and Docket Number NIOSH 245-A. To access... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention [CDC-2013-0021... materials, visit http://www.regulations.gov and enter CDC-2013- 0021 in the search field and click ``Search...
NASA Technical Reports Server (NTRS)
Messaro, Semma; Harrison, Phillip
2010-01-01
Ares I Zonal Random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent and reentry. Random Vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random Vibration test criteria for Ares I Upper Stage pyrotechnic components were desired that would envelope all the applicable environments where each component was located. Applicable Ares I Vehicle drawings and design information needed to be assessed to determine the location(s) for each component on the Ares I Upper Stage. Design and test criteria needed to be developed by plotting and enveloping the applicable environments using Microsoft Excel spreadsheet software and documenting them in a report using Microsoft Word word-processing software. Conclusion: Random vibration liftoff, ascent, and green run design & test criteria for the Upper Stage pyrotechnic components were developed by using Microsoft Excel to envelope zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor, it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.
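As a rough sketch of the enveloping step described above (the report itself used Microsoft Excel), the fragment below takes, at each frequency, the maximum of the applicable zonal random vibration spectra. The frequencies and levels are placeholders, not Ares I data.

import numpy as np

# Placeholder frequency grid (Hz) and zonal acceleration spectral densities (g^2/Hz).
freqs = np.array([20.0, 50.0, 100.0, 500.0, 2000.0])
zone_a = np.array([0.01, 0.04, 0.08, 0.08, 0.02])
zone_b = np.array([0.02, 0.03, 0.10, 0.06, 0.03])

# Envelope: at each frequency take the maximum level among all applicable zones.
envelope = np.maximum.reduce([zone_a, zone_b])

for f, g2 in zip(freqs, envelope):
    print(f"{f:7.1f} Hz  {g2:.3f} g^2/Hz")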
Improving Quality Using Architecture Fault Analysis with Confidence Arguments
2015-03-01
the same time, text, diagram, and table-based requirements documentation and the use of Microsoft Word and Dynamic Object-Oriented Requirements...Lamsweerde 2003] Van Lamsweerde, Axel & Letier, Emmanuel. "From Object Orientation to Goal Orientation: A Paradigm Shift for Requirements Engineering," 4–8...Introduction 1 Approach, Concepts, and Notations 5 2.1 Requirement Specification and Architecture Design 5 2.2 AADL Concepts Supporting Architecture
CDC Vital Signs: Adult Smoking among People with Mental Illness
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-31
... Microsoft Word, Microsoft Excel, WordPerfect, or Adobe PDF file formats only. FOR FURTHER INFORMATION...). The closure was implemented based on advice from the U.S. Food and Drug Administration (FDA) after... Management Plan (FMP). Since the implementation of the closure, NOAA's National Ocean Service has provided...
47 CFR 61.22 - Composition of tariffs.
Code of Federal Regulations, 2010 CFR
2010-10-01
...Perfect 5.1, Microsoft Word 6, or Microsoft Word 97 software. No diskettes shall contain more than one... clearly labelled with the carrier's name, Tariff Number, software used, and the date of submission. When... defined in § 1.4(e)(2) of this chapter. (d) Domestic and international nondominant carriers subject to the...
14 CFR 302.603 - Contents of complaint or request for determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... determination. 302.603 Section 302.603 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF...: Microsoft Word (or RTF), Word Perfect, Ami Pro, Microsoft Excel, Lotus 123, Quattro Pro, or ASCII tab...: one copy for the docket, one copy for the Office of Hearings, and one copy for the Office of Aviation...
Scabies: Workplace Frequently Asked Questions (FAQs)
ERIC Educational Resources Information Center
Butler, E. Sonny
Much of what librarians do today requires adeptness in creating and manipulating databases. Many new computers bought by libraries every year come packaged with Microsoft Office and include Microsoft Access. This database program features a seamless interface between Microsoft Office's other programs like Word, Excel, and PowerPoint. This book…
FastStats: Chronic Liver Disease and Cirrhosis
A simple procedure for retrieval of a cement-retained implant-supported crown: a case report.
Buzayan, Muaiyed Mahmoud; Mahmood, Wan Adida; Yunus, Norsiah Binti
2014-02-01
Retrieval of cement-retained implant prostheses can be more demanding than retrieval of screw-retained prostheses. This case report describes a simple and predictable procedure to locate the abutment screw access openings of cement-retained implant-supported crowns in cases of fractured ceramic veneer. A conventional periapical radiography image was captured using a digital camera, transferred to a computer, and manipulated using Microsoft Word document software to estimate the location of the abutment screw access.
Walker Ranch 3D seismic images
Robert J. Mellors
2016-03-01
Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.
76 FR 42164 - Announcement of Competition Under the America COMPETES Reauthorization Act of 2011
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-18
... listing will be in a format completely compatible with Microsoft Excel 2007 and contain the information... by VA. The narrative will be in a format completely compatible with Microsoft Word 2007, not...
New Tools to Convert PDF Math Contents into Accessible e-Books Efficiently.
Suzuki, Masakazu; Terada, Yugo; Kanahori, Toshihiro; Yamaguchi, Katsuhito
2015-01-01
New features in our math-OCR software to convert PDF math contents into accessible e-books are shown. A method for recognizing PDF is thoroughly improved. In addition, contents in any selected area including math formulas in a PDF file can be cut and pasted into a document in various accessible formats, which is automatically recognized and converted into texts and accessible math formulas through this process. Combining it with our authoring tool for a technical document, one can easily produce accessible e-books in various formats such as DAISY, accessible EPUB3, DAISY-like HTML5, Microsoft Word with math objects and so on. Those contents are useful for various print-disabled students ranging from the blind to the dyslexic.
Customising Microsoft Office to Develop a Tutorial Learning Environment
ERIC Educational Resources Information Center
Deacon, Andrew; Jaftha, Jacob; Horwitz, David
2004-01-01
Powerful applications such as Microsoft Office's Excel and Word are widely used to perform common tasks in the workplace and in education. Scripting within these applications allows unanticipated user requirements to be addressed. We show that such extensibility, intended to support office automation-type applications, is well suited to the…
Cryptanalysis on classical cipher based on Indonesian language
NASA Astrophysics Data System (ADS)
Marwati, R.; Yulianti, K.
2018-05-01
Cryptanalysis is the process of breaking a cipher through illegal (unauthorized) decryption. This paper discusses the encryption of text with some classical cryptographic schemes, the breaking of substitution and stream ciphers, and ways of increasing their security. Encryption and cipher-breaking are based on Indonesian-language text. Microsoft Word and Microsoft Excel were chosen as the ciphering and breaking tools.
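The paper carries out the attack in Microsoft Word and Excel; as a hedged Python sketch of the same frequency-analysis idea for a monoalphabetic substitution cipher, the fragment below matches ciphertext letter-frequency ranks against an assumed ranking for Indonesian text. The ranking and the example ciphertext are placeholders, not data from the paper.

from collections import Counter
import string

# Assumed relative letter ranking for Indonesian text (placeholder ordering;
# a real attack would use frequencies measured from an Indonesian corpus).
EXPECTED_ORDER = "aneitrkdsmulgpbohycjwfvzqx"

def frequency_guess(ciphertext):
    """Guess a substitution key by matching letter-frequency rank orders."""
    letters = [c for c in ciphertext.lower() if c in string.ascii_lowercase]
    ranked = [c for c, _ in Counter(letters).most_common()]
    # Map the i-th most common ciphertext letter to the i-th expected plaintext letter.
    key = {c: EXPECTED_ORDER[i] for i, c in enumerate(ranked) if i < len(EXPECTED_ORDER)}
    return "".join(key.get(c, c) for c in ciphertext.lower())

print(frequency_guess("wvhv lvwv wvhv"))  # placeholder ciphertext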
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-21
... may submit attachments to electronic comments in Microsoft Word or Excel, WordPerfect, or Adobe PDF... categories per the percentages outlined in the 2006 Consolidated HMS FMP. In other words, the combined effect... other words, if a vessel is not allowed access to the Cape Hatteras Gear Restricted Area due to the...
Tools for Requirements Management: A Comparison of Telelogic DOORS and the HiVe
2006-07-01
types DOORS deals with are text files, spreadsheets, FrameMaker, rich text, Microsoft Word and Microsoft Project. 2.5.1 Predefined file formats DOORS...during the export. DOORS exports FrameMaker files in an incomplete format, meaning DOORS-exported files will have to be opened in FrameMaker and saved
Sending Foreign Language Word Processor Files over Networks.
ERIC Educational Resources Information Center
Feustle, Joseph A., Jr.
1992-01-01
Advantages of using online systems are outlined, and specific techniques for successfully transmitting computer text files are described. Topics covered include Microsoft's Rich Text File, WordPerfect encoding, text compression, and especially encoding and decoding with UNIX programs. (LB)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-12
... comments will only be accepted in Microsoft Word or Excel, WordPerfect, or Adobe PDF file formats. Written... membership includes: Belize, Canada, China, Chinese Taipei (Taiwan), Colombia, Costa Rica, Ecuador, El...
Software Reviews: Programs Worth a Second Look.
ERIC Educational Resources Information Center
Classroom Computer Learning, 1989
1989-01-01
Reviews three software programs: (1) "Microsoft Works 2.0": word processing, data processing, and telecommunications, grades 7 and up; (2) "AppleWorks GS": word processor, database, spreadsheet, graphics, and telecommunications, grades 3-12, Apple IIGS; (3) "Choices, Choices: On the Playground, Taking Responsibility":…
How to Create a Navigational Wireframe in Word, With Site Map Example
Use Microsoft Word's graphic tools to create a wireframe: an organization chart showing the top three levels of HTML content (Home page, secondary pages, and tertiary pages). This is an important step in planning the structure of a website.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-07
.... NMFS will accept anonymous comments. Enter ``N/A'' in the required fields, if you wish to remain anonymous. Attachments to electronic comments will be accepted in Microsoft Word, Excel, WordPerfect, or...
76 FR 9210 - Draft DOC National Aquaculture Policy
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-16
... confidential business information or otherwise sensitive or protected information. All comments and attachments... and NOAA's Online Privacy Policy, we treat your name, city, state, and any comments you provide as... accusations. You may submit attachments to electronic comments in Microsoft Word, Excel, WordPerfect, or Adobe...
Microsoft Office 365 Deployment Continues through June at NCI at Frederick | Poster
The latest Microsoft suite, Office 365 (O365), is being deployed to all NCI at Frederick computers during the months of May and June to comply with federal mandates. The suite includes the latest versions of Word, Excel, Outlook, PowerPoint, and Skype for Business, along with cloud-based capabilities. These cloud-based capabilities will help meet the federal mandates that
Galdino, Greg M; Gotway, Michael
2005-02-01
The curriculum vitae (CV) has been the traditional method for radiologists to illustrate their accomplishments in the field of medicine. Despite its presence in medicine as a standard, widely accepted means to describe one's professional career and its use for decades as an accompaniment to most applications and interviews, there is relatively little written in the medical literature regarding the CV. Misrepresentation on medical students', residents', and fellows' applications has been reported. Using digital technology, CVs have the potential to be much more than printed words on paper and offer a solution to misrepresentation. Digital CVs may incorporate full-length articles, graphics, presentations, clinical images, and video. Common formats for digital CVs include CD-ROMs or DVD-ROMs containing articles (in Adobe Portable Document Format) and presentations (in Microsoft PowerPoint format) accompanying printed CVs, word processing documents with hyperlinks to articles and presentations either locally (on CD-ROMs or DVD-ROMs) or remotely (via the Internet), or hypertext markup language documents. Digital CVs afford the ability to provide more information that is readily accessible to those receiving and reviewing them. Articles, presentations, videos, images, and Internet links can be illustrated using standard file formats commonly available to all radiologists. They can be easily updated and distributed on inexpensive media, such as a CD-ROM or DVD-ROM. With the availability of electronic articles, presentations, and information via the Internet, traditional paper CVs may soon be superseded by their electronic successors.
ERIC Educational Resources Information Center
King, Marianne
1988-01-01
Discusses the advantages and disadvantages of using an Apple Macintosh in a high school journalism department. Details the software available in the categories of layout ("Xpress" and "Pagemaker"), word processing ("Microsoft Word"), and graphics ("MacDraw,""Cricket Draw,""MacPaint," and…
Don't Just Do the Math--Type It!
ERIC Educational Resources Information Center
Stephens, Greg
2016-01-01
Most word processors, including Google Docs™ and Microsoft® Word, include an equation editor. These are great tools for the occasional homework problem or project assignment. Getting the mathematics to display correctly means making decisions about exactly which elements of an expression go where. The feedback is immediate: Students can see…
Reviewing Student Papers Electronically
ERIC Educational Resources Information Center
Dunford, Spencer
2011-01-01
In order to consistently give quality feedback to students, the author introduces the revision and automation tools in Microsoft Word 2007. These features, Comments, Tracking, and Changes, are part of the Review group in MS Word 2007. Additionally, the AutoCorrect feature can be used to enhance and support editing endeavors. This article offers a…
TurboTech Technical Evaluation Automated System
NASA Technical Reports Server (NTRS)
Tiffany, Dorothy J.
2009-01-01
TurboTech software is a Web-based process that simplifies and semiautomates technical evaluation of NASA proposals for Contracting Officer's Technical Representatives (COTRs). At the time of this reporting, there have been no set standards or systems for training new COTRs in technical evaluations. This new process provides boilerplate text in response to interview-style questions. This text is collected into a Microsoft Word document that can then be further edited to conform to specific cases. By providing technical language and a structured format, TurboTech allows the COTRs to concentrate more on the actual evaluation, and less on deciding what language would be most appropriate. Since the actual word choice is one of the more time-consuming parts of a COTR's job, this process should allow for an increase in the quantity of proposals evaluated. TurboTech is applicable to composing technical evaluations of contractor proposals, task and delivery orders, change order modifications, requests for proposals, new work modifications, task assignments, as well as any changes to existing contracts.
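TurboTech is a Web-based NASA tool; the sketch below is not its code, only a minimal Python illustration (using the third-party python-docx package) of the general idea of assembling canned boilerplate, selected by interview-style answers, into a Word document for later editing. The boilerplate strings and section names are invented for the example.

from docx import Document  # third-party python-docx package, used here only for illustration

# Hypothetical boilerplate keyed by an interview-style answer.
BOILERPLATE = {
    "exceeds": "The proposed technical approach exceeds the requirements of the statement of work.",
    "meets": "The proposed technical approach meets the requirements of the statement of work.",
    "weak": "The proposed technical approach contains significant weaknesses that increase performance risk.",
}

def build_evaluation(answers, path="evaluation_draft.docx"):
    """Assemble canned evaluation language into a Word document for later editing."""
    doc = Document()
    doc.add_heading("Technical Evaluation (draft)", level=1)
    for section, rating in answers.items():
        doc.add_heading(section, level=2)
        doc.add_paragraph(BOILERPLATE[rating])
    doc.save(path)

build_evaluation({"Technical Approach": "meets", "Staffing Plan": "exceeds"})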
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-30
... anonymous). Attachments to electronic comments will be accepted in Microsoft Word or Excel, WordPerfect, or Adobe PDF file formats only. Amendment 2 to the Fishery Ecosystem Plan for the Hawaiian Archipelago....gpoaccess.gov/fr . Fishing for pelagic armorhead is managed under the Fishery Ecosystem Plan for the...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... proposed rule to implement a regulatory amendment (Regulatory Amendment 12) to the Fishery Management Plan... can also attach additional files (up to 10MB) in Microsoft Word, Excel, WordPerfect, or Adobe PDF file... benefit to the nation, particularly with respect to providing food production and recreational...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... comments in Microsoft Word, Excel, WordPerfect, or Adobe PDF file formats only. FOR FURTHER INFORMATION... butterfish stock was overfished. The Mid-Atlantic Fishery Management Council (Council) developed Amendment 10 to the FMP in response to SAW 38; Amendment 10 enacted a rebuilding program for butterfish, as well...
A Three-fold Outlook of the Ultra-Efficient Engine Technology Program Office (UEET)
NASA Technical Reports Server (NTRS)
Graham, La Quilia E.
2004-01-01
The Ultra-Efficient Engine Technology (UEET) Office at NASA Glenn Research Center is a part of the Aeronautics Directorate. Its vision is to develop and hand off revolutionary turbine engine propulsion technologies that will enable future generation vehicles over a wide range of flight speeds. There are seven different technology area projects of UEET. During my tenure at NASA Glenn Research Center, my assignment was to assist three different areas of UEET simultaneously. I worked with Kathy Zona in Education Outreach, Lynn Boukalik in Knowledge Management, and Denise Busch with Financial Management. All of my tasks were related to the business side of UEET. As an intern with Education Outreach I created a word search to partner with an exhibit of a turbine engine developed out of the UEET office. This exhibit is a portable model that is presented to students of varying ages. The word search complies with National Standards for Education, which are part of every science, engineering, and technology teacher's curriculum. I also updated a Conference Planning/Workshop Excel spreadsheet for the UEET Office. I collected and inputted facility overviews from various venues, both on and off site, to determine where to hold upcoming conferences. I then documented which facilities were compliant with the Federal Emergency Management Agency's (FEMA) Hotel and Motel Fire Safety Act of 1990. The second area in which I worked was Knowledge Management. UEET maintains a large online knowledge management system with extensive documentation that continually needs reviewing, updating, and archiving. Knowledge management is the ability to bring individual or team knowledge to an organizational level so that the information can be stored, shared, reviewed, and archived. Livelink and a secure server are the knowledge management systems that UEET utilizes. Through these systems, I was able to obtain the documents needed for archiving. My assignment was to obtain intellectual property including reports, presentations, or any other documents related to the project. My next task was to document the author, date of creation, and all other properties of each document. To archive these documents I worked extensively with Microsoft Excel. In the third area, Financial Management, I learned about different financial systems of accounting such as the SAP business accounting system. I also learned the best ways to present financial data and shadowed my mentor as she presented financial data to both UEET's project management and the Resources Analysis and Management Office (RAMO). I analyzed the June 2004 financial data of UEET and used Microsoft Excel to input the results of the data. This process made it easier to present the full cost of the project in the month of June. In addition I assisted in the End of the Year 2003 Reconciliation of Purchases of UEET.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-02
... would be caught, but because the 2009-10 fishery took less than the TAC, the associated risk of... will be accepted in Microsoft Word or Excel, WordPerfect, or Adobe PDF file formats only. A... other information, and taking into account the associated risk of overfishing. The Deep 7 bottomfish are...
75 FR 26703 - Atlantic Coastal Fisheries Cooperative Management Act Provisions; Weakfish Fishery
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... any one of the following methods: Electronic Submissions: Submit all electronic public comments via... submit attachments to electronic comments in Microsoft Word, Excel, WordPerfect, or Adobe PDF file... the tip of the lower jaw with the mouth closed to the end of the lower tip of the tail) in or from the...
ERIC Educational Resources Information Center
Varank, Ilhan; Erkoç, M. Fatih; Büyükimdat, Meryem Köskeroglu; Aktas, Mehmet; Yeni, Sabiha; Adigüzel, Tufan; Cömert, Zafer; Esgin, Esad
2014-01-01
The purpose of this study was to investigate the effectiveness of an online automated evaluation and feedback system that assessed students' word processing assignments prepared with Microsoft Office Word. The participants of the study were 119 undergraduate teacher education students, 86 of whom were female and 32 were male, enrolled in different…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-02
...-BA81 by any one of the following methods: Electronic Submissions: Federal eRulemaking Portal: http... required fields if you wish to remain anonymous). Attachments to electronic comments will be accepted in Microsoft Word, Excel, WordPerfect, or Adobe PDF file formats only. The petition, 90-day finding, 12-month...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-13
...). Attachments to electronic comments will be accepted in Microsoft Word or Excel, WordPerfect, or Adobe PDF... accuracy against the scanned image of the paper VTRs submitted by the owner/ operator of the vessel. VTR... combination of recent fishing activity and a review of the scanned images of the original VTR were used to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-04
... WordPerfect, Microsoft Word, PDF, or ASCII file format, and avoid the use of special characters or any... furnaces and boilers is found at 10 CFR 430.23(n) and 10 CFR part 430, subpart B, appendix N, Uniform Test... such as fuel calorific value, weight of condensate, water flow and temperature, voltage, and flue gas...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-19
... the message. Comments and suggestions should be provided in WordPerfect, Microsoft Word, PDF, or text file format. The full text of the interpretive rule is available at http://www1.eere.energy.gov.... The full text of the interpretive rule is available at http://www1.eere.energy.gov/buildings/appliance...
Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools
Diaz Acosta, B.
2011-01-01
The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.
32 CFR 806b.32 - Submitting notices for publication in the Federal Register.
Code of Federal Regulations, 2012 CFR
2012-07-01
... managers must send a proposed notice, through the Major Command Privacy Office, to Air Force Chief Information Officer/P. Send notices electronically to [email protected] using Microsoft Word, using the...
32 CFR 806b.32 - Submitting notices for publication in the Federal Register.
Code of Federal Regulations, 2014 CFR
2014-07-01
... managers must send a proposed notice, through the Major Command Privacy Office, to Air Force Chief Information Officer/P. Send notices electronically to [email protected] using Microsoft Word, using the...
How reliable is computerized assessment of readability?
Mailloux, S L; Johnson, M E; Fisher, D G; Pettibone, T J
1995-01-01
To assess the consistency and comparability of readability software programs, four software programs (Corporate Voice, Grammatix IV, Microsoft Word for Windows, and RightWriter) were compared. Standard materials included 28 pieces of printed educational materials on human immunodeficiency virus/acquired immunodeficiency syndrome distributed nationally and the Gettysburg Address. Statistical analyses for the educational materials revealed that each of the three formulas assessed (Flesch-Kincaid, Flesch Reading Ease, and Gunning Fog Index) provided significantly different grade equivalent scores and that the Microsoft Word program provided significantly lower grade levels and was more inconsistent in the scores provided. For the Gettysburg Address, considerable variation was revealed among formulas, with the discrepancy being up to two grade levels. When averaging across formulas, there was a variation of 1.3 grade levels between the four software programs. Given the variation between formulas and programs, implications for decisions based on results of these software programs are provided.
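The three readability formulas named above are fixed and published; differences between programs come mainly from how each one tokenizes sentences and counts syllables rather than from the formulas themselves. A small Python sketch of the formulas (the counts passed in at the end are illustrative, not taken from the study materials):

def flesch_reading_ease(words, sentences, syllables):
    # Standard Flesch Reading Ease formula
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Standard Flesch-Kincaid Grade Level formula
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def gunning_fog(words, sentences, complex_words):
    # Standard Gunning Fog Index; "complex" words have three or more syllables
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))

# Illustrative counts only (not measured from the study materials):
print(flesch_kincaid_grade(words=1500, sentences=90, syllables=2400))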
Crispen's Five Antivirus Rules.
ERIC Educational Resources Information Center
Crispen, Patrick Douglas
2000-01-01
Provides rules for protecting computers from viruses, Trojan horses, or worms. Topics include purchasing commercial antivirus programs and keeping them updated; updating virus definitions weekly; precautions before opening attached files; macro virus protection in Microsoft Word; and precautions with executable files. (LRW)
The Number of Scholarly Documents on the Public Web
Khabsa, Madian; Giles, C. Lee
2014-01-01
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
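The basic capture/recapture idea behind such estimates is the Lincoln-Petersen estimator: if one engine indexes n1 documents, a second indexes n2, and m documents appear in both, the total population is estimated as n1*n2/m. A one-function Python sketch with placeholder counts (not the study's figures):

def lincoln_petersen(n1, n2, overlap):
    """Capture/recapture estimate of total population size.

    n1: documents found by engine A, n2: documents found by engine B,
    overlap: documents found by both.
    """
    return n1 * n2 / overlap

# Placeholder counts, not figures from the study:
print(lincoln_petersen(n1=80_000_000, n2=40_000_000, overlap=30_000_000))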
ERIC Educational Resources Information Center
Diffin, Jennifer; Chirombo, Fanuel; Nangle, Dennis; de Jong, Mark
2010-01-01
This article explains how the document management team (circulation and interlibrary loan) at the University of Maryland University College implemented Microsoft's SharePoint product to create a central hub for online collaboration, communication, and storage. Enhancing the team's efficiency, organization, and cooperation was the primary goal.…
ERIC Educational Resources Information Center
Pyle, Betty; Cangelosi, Sandy
1988-01-01
Argues that middle and junior high schools can produce professional looking student publications by using desktop publishing. Presents three newspaper pages designed with the Apple Macintosh, using "Pagemaker,""Cricket Draw," and "Microsoft Word" software. (MM)
Crispen's Five Antivirus Rules.
ERIC Educational Resources Information Center
Crispen, Patrick Douglas
2000-01-01
Explains five rules to protect computers from viruses. Highlights include commercial antivirus software programs and the need to upgrade them periodically (every year to 18 months); updating virus definitions at least weekly; scanning attached files from email with antivirus software before opening them; Microsoft Word macro protection; and the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-04
... will also be accepted on standard disks in Microsoft Word or ASCII file format. D. How should I handle... hazards of lead-based paint and where to receive more information about health protection. The poster also...
CRISP90 - SOFTWARE DESIGN ANALYZER SYSTEM
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1994-01-01
The CRISP90 Software Design Analyzer System, an update of CRISP-80, is a set of programs forming a software design and documentation tool which supports top-down, hierarchic, modular, structured design and programming methodologies. The quality of a computer program can often be significantly influenced by the design medium in which the program is developed. The medium must foster the expression of the programmer's ideas easily and quickly, and it must permit flexible and facile alterations, additions, and deletions to these ideas as the design evolves. The CRISP90 software design analyzer system was developed to provide the PDL (Programmer Design Language) programmer with such a design medium. A program design using CRISP90 consists of short, English-like textual descriptions of data, interfaces, and procedures that are imbedded in a simple, structured, modular syntax. The display is formatted into two-dimensional, flowchart-like segments for a graphic presentation of the design. Together with a good interactive full-screen editor or word processor, the CRISP90 design analyzer becomes a powerful tool for the programmer. In addition to being a text formatter, the CRISP90 system prepares material that would be tedious and error prone to extract manually, such as a table of contents, module directory, structure (tier) chart, cross-references, and a statistics report on the characteristics of the design. Referenced modules are marked by schematic logic symbols to show conditional, iterative, and/or concurrent invocation in the program. A keyword usage profile can be generated automatically and glossary definitions inserted into the output documentation. Another feature is the capability to detect changes that were made between versions. Thus, "change-bars" can be placed in the output document along with a list of changed pages and a version history report. Also, items may be marked as "to be determined" and each will appear on a special table until the item is supplied. The CRISP90 software design analyzer system is written in Microsoft QuickBasic. The program requires an IBM PC compatible with a hard disk, 128K RAM, and an ASCII printer. The program operates under MS-DOS/PC-DOS 3.10 or later. The program was developed in 1983 and updated in 1990. Microsoft and MS-DOS are registered trademarks of Microsoft Corporation. IBM PC and PC-DOS are registered trademarks of International Business Machines Corporation. CRISP90 is a copyrighted work with all copyright vested in NASA.
Liu, Ren-Hu; Meng, Jin-Ling
2003-05-01
MAPMAKER is one of the most widely used computer software packages for constructing genetic linkage maps. However, the PC version, MAPMAKER 3.0 for PC, cannot draw the genetic linkage maps that its Macintosh version, MAPMAKER 3.0 for Macintosh, was able to draw. Especially in recent years, the Macintosh computer has become much less popular than the PC, and most geneticists use a PC to analyze their genetic linkage data. A new program that can draw on a PC the same genetic linkage maps that MAPMAKER for Macintosh draws on a Macintosh has therefore long been needed. Microsoft Excel, one component of the Microsoft Office package, is one of the most popular software tools for laboratory data processing. Microsoft Visual Basic for Applications (VBA) is one of the most powerful functions of Microsoft Excel. Using this program language, we can take creative control of Excel, including genetic linkage map construction, automatic data processing and more. In this paper, a Microsoft Excel macro called MapDraw is constructed to draw genetic linkage maps on a PC computer based on given genetic linkage data. Using this software, you can freely construct a genetic linkage map in Excel and freely edit and copy it to Word or other applications. This software is just an Excel-format file. You can freely copy it from ftp://211.69.140.177 or ftp://brassica.hzau.edu.cn, and the source code can be found in Excel's Visual Basic Editor.
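MapDraw is a VBA macro that draws inside Excel; the fragment below is only a loose Python/matplotlib analogue of drawing a single linkage group from marker positions, with placeholder marker names and centimorgan positions.

import matplotlib.pyplot as plt

# Placeholder marker names and positions in centimorgans (cM); not MapDraw code.
markers = {"m1": 0.0, "m2": 12.3, "m3": 27.8, "m4": 44.1}

fig, ax = plt.subplots(figsize=(2, 6))
top = max(markers.values())
ax.plot([0, 0], [0, top], linewidth=8, color="lightgray", solid_capstyle="butt")  # chromosome bar
for name, pos in markers.items():
    ax.plot([-0.1, 0.1], [pos, pos], color="black")              # tick at each marker locus
    ax.text(0.15, pos, f"{name}  {pos:.1f} cM", va="center")
ax.invert_yaxis()      # 0 cM at the top, the usual orientation for linkage maps
ax.axis("off")
fig.savefig("linkage_group.png", bbox_inches="tight")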
49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee
Code of Federal Regulations, 2012 CFR
2012-10-01
.... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...
49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee
Code of Federal Regulations, 2014 CFR
2014-10-01
.... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...
49 CFR Appendix A to Part 1511 - Aviation Security Infrastructure Fee
Code of Federal Regulations, 2013 CFR
2013-10-01
.... Please also submit the same information in Microsoft Word either on a computer disk or by e-mail to TSA..., including Checkpoint Screening Supervisors. 7. All associated expensed non-labor costs including computers, communications equipment, time management systems, supplies, parking, identification badging, furniture, fixtures...
Cloud Computing Based E-Learning System
ERIC Educational Resources Information Center
Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.
2010-01-01
Cloud computing technologies, although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft Office applications, such as word processing, Excel spreadsheets, and Access databases…
46 CFR 535.701 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., Washington, DC 20573-0001. A copy of the Monitoring Report form in Microsoft Word and Excel format may be... Monitoring Reports in the Commission's prescribed electronic format, either on diskette or CD-ROM. (e)(1) The... filed by this subpart may be filed by direct electronic transmission in lieu of hard copy. Detailed...
Ideas without Words--Internationalizing Business Presentations.
ERIC Educational Resources Information Center
Sondak, Norman; Sondak, Eileen
This paper presents elements of the computer graphics environment including information on: Lotus 1-2-3; Apple Macintosh; Desktop Publishing; Object-Oriented Programming; and Microsoft's Windows 3. A brief scenario illustrates the use of the minimization principle in presenting a new product to a group of international financiers. A taxonomy of…
Computer Language Settings and Canadian Spellings
ERIC Educational Resources Information Center
Shuttleworth, Roger
2011-01-01
The language settings used on personal computers interact with the spell-checker in Microsoft Word, which directly affects the flagging of spellings that are deemed incorrect. This study examined the language settings of personal computers owned by a group of Canadian university students. Of 21 computers examined, only eight had their Windows…
32 CFR 806b.32 - Submitting notices for publication in the Federal Register.
Code of Federal Regulations, 2010 CFR
2010-07-01
... managers must send a proposed notice, through the Major Command Privacy Office, to Air Force Chief Information Officer/P. Send notices electronically to [email protected] using Microsoft Word, using the... assessment was accomplished and is available should the Office of Management and Budget request it. ...
NASA Astrophysics Data System (ADS)
Rahman, Fuad; Tarnikova, Yuliya; Hartono, Rachmat; Alam, Hassan
2006-01-01
This paper presents a novel automatic web publishing solution, PageView (R). PageView (R) is a complete working solution for document processing and management. The principal aim of this tool is to allow workgroups to share, access and publish documents on-line on a regular basis. For example, assume that a person is working on some documents. The user will, in some fashion, organize his work either in his own local directory or in a shared network drive. Now extend that concept to a workgroup. Within a workgroup, some users are working together on some documents, and they are saving them in a directory structure somewhere on a document repository. The next stage of this reasoning is that a workgroup is working on some documents and wants to publish them routinely on-line. Now it may happen that they are using different editing tools, different software, and different graphics tools. The resultant documents may be in PDF, Microsoft Office (R), HTML, or WordPerfect format, just to name a few. In general, this process needs the documents to be processed so that they are in HTML format, and then a web designer needs to work on that collection to make it available on-line. PageView (R) takes care of this whole process automatically, making the document workflow clean and easy to follow. PageView (R) Server publishes documents, complete with the directory structure, for online use. The documents are automatically converted to HTML and PDF so that users can view the content without downloading the original files, or having to download browser plug-ins. Once published, other users can access the documents as if they were accessing them from their local folders. The paper describes the complete working system and discusses possible applications within document management research.
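PageView (R) is a proprietary system, so the following is only a hedged sketch of the publish step the abstract describes: walk a workgroup's shared directory tree and produce a PDF rendition of each office document while mirroring the folder structure. It assumes LibreOffice is installed and uses its headless convert-to mode; the share path and extensions are placeholders.

import subprocess
from pathlib import Path

SOURCE = Path("//fileserver/workgroup_docs")   # placeholder shared folder
OUTPUT = Path("./published")

# Walk the workgroup's directory tree and produce a PDF rendition of each
# office document, mirroring the folder structure for on-line browsing.
for doc in SOURCE.rglob("*"):
    if doc.suffix.lower() not in {".doc", ".docx", ".ppt", ".pptx", ".wpd"}:
        continue
    outdir = OUTPUT / doc.parent.relative_to(SOURCE)
    outdir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf", "--outdir", str(outdir), str(doc)],
        check=True,
    )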
[A co-word analysis of current research on neonatal jaundice].
Bao, Shan; Yang, Xiao-Yan; Tang, Jun; Wu, Jin-Lin; Mu, De-Zhi
2014-08-01
To investigate the research on neonatal jaundice in recent years by co-word analysis and to summarize the hot spots and trend of research in this field in China. The CNKI was searched with "neonate" and "jaundice" as the key words to identify the papers published from January 2009 to July 2013 that were in accordance with strict inclusion and exclusion criteria. To reveal the relationship between different high-frequency key words, Microsoft Office Excel 2013 was used for statistical analysis of key words, and Ucinet 6.0 and Netdraw were used for co-occurrence analysis. A total of 2 054 papers were included, and 44 high-frequency key words were extracted. The current hotspots of research on neonatal jaundice in China were displayed, and the relationship between different high-frequency key words was presented. There has been in-depth research on clinical manifestations and diagnosis of neonatal jaundice in China, but further research is needed to investigate the etiology, mechanism, and treatment of neonatal jaundice.
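The study performed the keyword tallies in Excel and the co-occurrence analysis in Ucinet/Netdraw; as a minimal Python sketch of how a co-word (co-occurrence) matrix is built from per-paper keyword lists (the keywords below are placeholders, not the study's 44 high-frequency terms):

from collections import Counter
from itertools import combinations

# Placeholder keyword lists, one per paper (the study extracted these from CNKI records).
papers = [
    ["neonate", "jaundice", "phototherapy"],
    ["neonate", "jaundice", "bilirubin"],
    ["jaundice", "bilirubin", "phototherapy"],
]

keyword_freq = Counter(kw for kws in papers for kw in kws)
cooccurrence = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooccurrence[(a, b)] += 1   # the pair appears together in one paper

for (a, b), n in cooccurrence.most_common():
    print(f"{a} -- {b}: {n}")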
Document image retrieval through word shape coding.
Lu, Shijian; Li, Linlin; Tan, Chew Lim
2008-11-01
This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
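The word shape coding idea described above can be illustrated without the image-processing stage: each character position is reduced to a coarse symbolic code and retrieval compares code strings rather than recognized text. The sketch below derives the codes from typographic properties of letters as a stand-in for features extracted from word images; it illustrates the general coding-and-matching idea, not the authors' exact features.

```python
# Illustrative word shape coding: ascender/descender/hole flags per character
# position, compared positionally. A stand-in for image-derived features.
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")
HOLES = set("abdegopq")

def shape_code(word: str) -> str:
    code = []
    for ch in word.lower():
        a = "A" if ch in ASCENDERS else "-"
        d = "D" if ch in DESCENDERS else "-"
        h = "O" if ch in HOLES else "-"
        code.append(a + d + h)
    return "|".join(code)

def similarity(code1: str, code2: str) -> float:
    """Fraction of matching positional codes (simple overlap measure)."""
    c1, c2 = code1.split("|"), code2.split("|")
    matches = sum(1 for a, b in zip(c1, c2) if a == b)
    return matches / max(len(c1), len(c2))

query = shape_code("document")
for candidate in ["document", "documents", "movement"]:
    print(candidate, round(similarity(query, shape_code(candidate)), 2))
```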
ERIC Educational Resources Information Center
Felts, Renee R.
2013-01-01
As increasing numbers of students enroll in introductory computer application courses, instructors have difficulty providing the needed assistance in the traditional laboratory setting. Simulators have been used to facilitate college instruction, but the effectiveness of using a simulator in an introductory computer application course had not yet…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-09
... comments should be formatted as Microsoft Word. Please make reference to CDC-2013-0016 and Docket Number... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention [CDC-2013-0016... Protective Equipment: 2013-2018'', now available for public comment at http://www.regulations.gov . DATES...
Put Power into Your Presentations: Using Presentation Software Effectively
ERIC Educational Resources Information Center
Safransky, Robert J.; Burmeister, Marsha L.
2009-01-01
Microsoft PowerPoint, Apple Keynote, and OpenOffice Impress are relatively common tools in the classroom and in the boardroom these days. What makes presentation software so popular? As the Chinese proverb declares, a picture is worth a thousand words. People like visual presentations. Presentation software can make even a dull subject come to…
Part 2 of a 4-part series Hair Products: Trends and Alternatives
Jacob, Sharon; Katta, Rajani; Nedorost, Susan; Warshaw, Erin; Zirwas, Matt; Bhinder, Manpreet
2011-01-01
Objective: To provide updated data on usage of ingredients that are common potential contact allergens in several categories of hair products. To identify useful alternative products with few or no common contact allergens. Design: In November 2009, the full ingredient lists of 5,416 skin, hair, and cosmetic products marketed by the CVS pharmacy chain were copied from CVS.com into Microsoft Word format for analysis. Computer searches were made in Microsoft Word using search/replace and sorting functions to accurately identify the presence of specific allergens in each website product. Measurements: Percentages of American Contact Dermatitis Society core series allergens (and other common preservatives and sunblocks) were calculated. Results: The usage of American Contact Dermatitis Society core series allergens (and other preservatives and sunblocks) in hair products is reported. Conclusion: Data on allergens and alternatives for hair products are not widely published. This article reviews some of the common potential allergens in hair products, including shampoos, conditioners, and styling products. Suitable available alternative products for patients with contact allergy are listed. PMID:21779419
Adeshina, A M; Hashim, R
2017-03-01
Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment, and therapy management. All recent standard healthcare procedures have benefitted immensely from contemporary information technology, which has revolutionized the approaches to acquiring, storing, and sharing diagnostic data for efficient and timely diagnosis of diseases. The connected health network was introduced as an alternative to the ageing traditional concept of the healthcare system, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, this modern approach to medicine has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnostic privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of successfully encrypting both image and textual data persists. Furthermore, reducing the processing time of encryption and decryption of medical datasets, at a considerably lower computational cost and without jeopardizing the required security strength of the encryption algorithm, remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for the connected health network using a high-performance GPU-accelerated Advanced Encryption Standard. The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework for its strength in encrypting and decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also successfully encrypts and decrypts textual data in Microsoft Word, Microsoft Excel, and Portable Document Formats, which are the conventional formats for documenting medical records. Interestingly, the entire encryption and decryption procedure was achieved at a lower computational cost using regular hardware and software resources, without compromising either the quality of the decrypted data or the security level of the algorithms.
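The GPU acceleration and key management described in the study cannot be reproduced from the abstract, but the basic idea of losslessly encrypting and decrypting arbitrary document bytes (Word, Excel, PDF, or image alike) with AES can be sketched. The example below is a minimal illustration assuming the Python `cryptography` package and AES-GCM; the file name is hypothetical and this is not the authors' implementation.

```python
# Minimal sketch: AES-GCM round trip over a file's raw bytes, independent of
# whether the file is Word, Excel, PDF, or an image. GPU acceleration and
# key-management details from the paper are not reproduced here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(path: str, key: bytes) -> bytes:
    nonce = os.urandom(12)                      # 96-bit nonce for GCM
    with open(path, "rb") as f:
        plaintext = f.read()
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                   # store nonce with ciphertext

def decrypt_blob(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # AES-256
    blob = encrypt_file("report.docx", key)     # illustrative file name
    assert decrypt_blob(blob, key) == open("report.docx", "rb").read()
```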
Roeckner, Jared T; Peebles, Amy B
2018-06-01
Our objective was to analyze systematically the preface and foreword of each edition of Williams Obstetrics and Te Linde's Operative Gynecology to gain insight into historical changes in medicine. The preface and foreword from 24 editions of Williams Obstetrics and 11 editions of Te Linde's Operative Gynecology were obtained. Documents were assessed for the inclusion of predefined key words or topics, including sex-specific pronoun usage, insurance, fertility regulation, government regulation/laws, documentation burden, malpractice, race, medicine as "art" or medicine as "science," and others. Data were extracted and analyzed using Microsoft Excel. Changing pronoun usage was evident across both texts. From 1941 through 1950, physicians were referred to as male 19 times and as female once. The ratio of male-to-female pronoun usage equalized in the 1990s. Medicine increasingly was referred to as a science rather than as an art within the last 2 decades. From the 1970s onward, emerging physician concerns, including malpractice, documentation burden, regulation, and insurance, were mentioned increasingly. The first mention of governmental regulation and evidence-based medicine occurred in the 21st century. Since 1903, race was never mentioned and "change" and "improvement" were cited almost universally. The increase in female pronoun usage reflects the expanding role of women in medicine. Another trend noted relates to increasing external influence on and regulation of our profession. Previously less important concerns such as documentation burden have emerged in the last 2 decades.
Clarke, Martina A; King, Joshua L; Kim, Min Soon
2015-07-01
To evaluate physician utilization of speech recognition technology (SRT) for medical documentation in two hospitals. A quantitative survey was used to collect data in the areas of practice, electronic equipment used for documentation, documentation created after providing care, and overall thoughts about and satisfaction with the SRT. The survey sample was from one rural and one urban facility in central Missouri. In addition, qualitative interviews were conducted with a chief medical officer and a physician champion regarding implementation issues, training, choice of SRT, and outcomes from their perspective. Seventy-one (60%) of the anticipated 125 surveys were returned. A total of 16 (23%) participants were practicing in internal medicine and 9 (13%) were practicing in family medicine. Fifty-six (79%) participants used a desktop computer and 14 (20%) used a laptop. SRT products from Nuance were the dominant SRT used by 59 participants (83%). Windows operating systems (Microsoft, Redmond, WA) were used by more than 58 (82%) of the survey respondents. With regard to user experience, 42 (59%) participants experienced spelling and grammatical errors, 15 (21%) encountered clinical inaccuracy, 9 (13%) experienced word substitution, and 4 (6%) experienced misleading medical information. This study shows critical issues of inconsistency, unreliability, and dissatisfaction in the functionality and usability of SRT. This merits further attention to improve the functionality and usability of SRT for better adoption within varying healthcare settings.
Using the World Wide Web for GIDEP Problem Data Processing at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
McPherson, John W.; Haraway, Sandra W.; Whirley, J. Don
1999-01-01
Since April 1997, Marshall Space Flight Center has been using electronic transfer and the web to support our processing of the Government-Industry Data Exchange Program (GIDEP) and NASA ALERT information. Specific aspects include: (1) Extraction of ASCII text information from GIDEP for loading into Word documents for e-mail to ALERT actionees; (2) Downloading of GIDEP form image formats in Adobe Acrobat (.pdf) for internal storage and display on the MSFC ALERT web page; (3) Linkage of stored GIDEP problem forms with summary information for access from the MSFC ALERT Distribution Summary Chart or from an html table of released MSFC ALERTs; (4) Archival of historic ALERTs for reference by GIDEP ID, MSFC ID, or MSFC release date; (5) On-line tracking of ALERT response status using a Microsoft Access database and the web; and (6) On-line response to ALERTs from MSFC actionees through interactive web forms. The technique, benefits, effort, coordination, and lessons learned for each aspect are covered herein.
Digitization workflows for flat sheets and packets of plants, algae, and fungi
Nelson, Gil; Sweeney, Patrick; Wallace, Lisa E.; Rabeler, Richard K.; Allard, Dorothy; Brown, Herrick; Carter, J. Richard; Denslow, Michael W.; Ellwood, Elizabeth R.; Germain-Aubrey, Charlotte C.; Gilbert, Ed; Gillespie, Emily; Goertzen, Leslie R.; Legler, Ben; Marchant, D. Blaine; Marsico, Travis D.; Morris, Ashley B.; Murrell, Zack; Nazaire, Mare; Neefus, Chris; Oberreiter, Shanna; Paul, Deborah; Ruhfel, Brad R.; Sasek, Thomas; Shaw, Joey; Soltis, Pamela S.; Watson, Kimberly; Weeks, Andrea; Mast, Austin R.
2015-01-01
Effective workflows are essential components in the digitization of biodiversity specimen collections. To date, no comprehensive, community-vetted workflows have been published for digitizing flat sheets and packets of plants, algae, and fungi, even though latest estimates suggest that only 33% of herbarium specimens have been digitally transcribed, 54% of herbaria use a specimen database, and 24% are imaging specimens. In 2012, iDigBio, the U.S. National Science Foundation’s (NSF) coordinating center and national resource for the digitization of public, nonfederal U.S. collections, launched several working groups to address this deficiency. Here, we report the development of 14 workflow modules with 7–36 tasks each. These workflows represent the combined work of approximately 35 curators, directors, and collections managers representing more than 30 herbaria, including 15 NSF-supported plant-related Thematic Collections Networks and collaboratives. The workflows are provided for download as Portable Document Format (PDF) and Microsoft Word files. Customization of these workflows for specific institutional implementation is encouraged. PMID:26421256
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-04
... security screening procedures. If a foreign national wishes to participate in the public meeting, please..., Microsoft Word, PDF, or ASCII file format, and avoid the use of special characters or any form of encryption... that comments sent by mail are often delayed and may be damaged by mail screening processes.) Hand...
ERIC Educational Resources Information Center
Hosek, Angela M.; Titsworth, Scott
2016-01-01
Millennial students are immersed in a digital world governed by codes and scripts. Coders create programs from scratch. We interact with code when we launch most programs like Microsoft Word or a web browser. Alternatively, scripting uses programing environments (or middleware) in which combinations of stock commands are used. Many applications…
Extra! Extra! Read All about It! How to Construct a Newsletter: A Student Project
ERIC Educational Resources Information Center
Renard, Monika; Tracy, Kay
2011-01-01
This article discusses a student project that highlights the value of printed employee newsletters as an internal communication tool for organizations. The project provides specific information and directions on how to develop an employee newsletter on human resource topics. Microsoft Word 2007 is used for newsletter formatting. The article also…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-28
... 0648-XA293, by any one of the following methods: Electronic Submissions: Submit all electronic public..., if you wish to remain anonymous). You may submit attachments to electronic comments in Microsoft Word... special regulations at the mouth of Tillamook Bay. This action was taken to comply with conservation...
Building Composite Characters on a Postscript Printer.
ERIC Educational Resources Information Center
Gothard, James E.
Procedures enabling the placement of diacritical markings over a character for printing in PostScript fonts on an Apple LaserWriter printer are described. The procedures involve some programming in the PostScript Language and manipulation of Adobe PostScript fonts. It is assumed that Microsoft Word will be used to create the text to be printed.…
ERIC Educational Resources Information Center
Cazzell, Samantha; Browarnik, Brooke; Skinner, Amy; Skinner, Christopher; Cihak, David; Ciancio, Dennis; McCurdy, Merilee; Forbes, Bethany
2016-01-01
A multiple-baseline across-students design was used to evaluate the effects of a computer-based flashcard reading (CFR) intervention, developed using Microsoft PowerPoint software, on students' ability to read health-related words within 3 seconds. The students were three adults with intellectual disabilities enrolled in a postsecondary college…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-08
... Grenadines currently receive benefits only under CBERA. The Dominican Republic, El Salvador, Guatemala... country. The CAFTA-DR entered into force for El Salvador on March 1, 2006; for Honduras on April 1, 2006... & Upload File'' field. USTR prefers submissions in Microsoft Word (.doc) or Adobe Acrobat (.pdf). If the...
The Impact of Computer-Based Instruction on the Development of EFL Learners' Writing Skills
ERIC Educational Resources Information Center
Zaini, A.; Mazdayasna, G.
2015-01-01
The current study investigated the application and effectiveness of computer assisted language learning (CALL) in teaching academic writing to Iranian EFL (English as a Foreign Language) learners by means of Microsoft Word Office. To this end, 44 sophomore intermediate university students majoring in English Language and Literature at an Iranian…
The Effects of Collaborative Writing Activity Using Google Docs on Students' Writing Abilities
ERIC Educational Resources Information Center
Suwantarathip, Ornprapat; Wichadee, Saovapa
2014-01-01
Google Docs, a free web-based version of Microsoft Word, offers collaborative features which can be used to facilitate collaborative writing in a foreign language classroom. The current study compared writing abilities of students who collaborated on writing assignments using Google Docs with those working in groups in a face-to-face classroom.…
Microsoft Word - PufferAdvisory_TCH.docx
Center for Food Safety and Applied Nutrition (CFSAN)
... In response to special circumstances, it is imported into the United States two to three times per year, handled by a New York State importer, Wako International, approved under a U.S. Food and Drug Administration/Japanese government agreement. This is ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-03
... comments should be formatted as Microsoft Word. Please make reference to CDC-2013-0007 and Docket Number... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention [CDC-2013-0007... effect of law. Public Comment Period: Comments must be received by August 2, 2013. ADDRESSES: You may...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-17
... comments should be formatted as Microsoft Word. Please make reference to CDC-2013-0009 and Docket Number... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention (CDC) [CDC-2013....regulations.gov and enter CDC-2013-0009 in the search field and click ``Search.'' Public comment period...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-12
... seq. Dated: June 7, 2013. Kara Meckley, Acting Deputy Director, Office of Sustainable Fisheries... June 27, 2013. ADDRESSES: You may submit comments, identified by NOAA-NMFS-2012-0248, by any one of the... anonymous). Attachments to electronic comments will be accepted in Microsoft Word, Excel, or Adobe PDF file...
75 FR 26906 - NVOCC Negotiated Rate Arrangements; Notice of Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-13
... must be filed with the Office of the Secretary no later than 5 p.m. on May 14, 2010, and include the... vessel-operating common carriers. Requests to appear should be addressed to the Office of the Secretary and submitted: By e-mail as an attachment (Microsoft Word) sent to [email protected] ; by facsimile to...
ERIC Educational Resources Information Center
Prayaga, Chandra
2008-01-01
A simple interface between VPython and Microsoft (MS) Office products such as Word and Excel, controlled by Visual Basic for Applications, is described. The interface allows the preparation of content-rich, interactive learning environments by taking advantage of the three-dimensional (3D) visualization capabilities of VPython and the GUI…
Spotting words in handwritten Arabic documents
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Srinivasan, Harish; Babu, Pavithra; Bhole, Chetan
2006-01-01
The design and performance of a system for spotting handwritten Arabic words in scanned document images is presented. The three main components of the system are a word segmenter, a shape-based matcher for words, and a search interface. The user types a query in English within a search window; the system finds the equivalent Arabic word, e.g., by dictionary look-up, and locates word images in an indexed (segmented) set of documents. A two-step approach is employed in performing the search: (1) prototype selection: the query is used to obtain a set of handwritten samples of that word from a known set of writers (these are the prototypes), and (2) word matching: the prototypes are used to spot each occurrence of those words in the indexed document database. A ranking is performed on the entire set of test word images, where the ranking criterion is a similarity score between each prototype word and the candidate words based on global word shape features. A database of 20,000 word images contained in 100 scanned handwritten Arabic documents written by 10 different writers was used to study retrieval performance. Using five writers to provide prototypes and the other five for testing, with manually segmented documents, 55% precision is obtained at 50% recall. Performance increases as more writers are used for training.
ERIC Educational Resources Information Center
Huh, Joo Hee
2012-01-01
I criticize the typewriting model and linear writing structure of Microsoft Word software for writing in the computer. I problematize bodily movement in writing that the error of the software disregards. In this research, writing activity is viewed as bodily, spatial and mediated activity under the premise of the unity of consciousness and…
ERIC Educational Resources Information Center
Teneqexhi, Romeo; Qirko, Margarita; Sharko, Genci; Vrapi, Fatmir; Kuneshka, Loreta
2017-01-01
Exam assessment is one of the most tedious tasks for university teachers all over the world. Multiple-choice tests make exam assessment a little easier, but the teacher cannot prepare more than 3-4 variants; in this case, the possibility of students cheating from one another becomes a risk for the "objective assessment outcome." On…
ERIC Educational Resources Information Center
Prvinchandar, Sunita; Ayub, Ahmad Fauzi Mohd
2014-01-01
This study compared the effectiveness of two types of computer software for improving the English writing skills of pupils in a Malaysian primary school. Sixty students who participated in the seven-week training course were divided into two groups, with the experimental group using the StyleWriter software and the control group using the…
Scheman, Andrew; Jacob, Sharon; Katta, Rajani; Nedorost, Susan; Warshaw, Erin; Zirwas, Matt; Selbo, Nicole
2011-10-01
To provide updated data on the usage of ingredients that are common potential contact allergens in several categories of topical products. To identify useful alternative products with few or no common contact allergens. In November 2009, the full ingredient lists of 5,416 skin, hair, and cosmetic products marketed by the CVS pharmacy chain were copied from CVS.com into Microsoft Word format for analysis. Computer searches were made in Microsoft Word using search/replace and sorting functions to accurately identify the presence of specific allergens in each website product. Percentages of American Contact Dermatitis Society core series allergens (and other common preservatives and sunblocks) were calculated. The usage of American Contact Dermatitis Society core series allergens (and other preservatives and sunblocks) in various miscellaneous categories of topical products is reported. Data on allergens and alternatives for ancillary skin care products are not widely published. This article reviews some of the common potential allergens in antiperspirants, deodorants, shaving products, sunblocks, powders, and wipes. Suitable available alternative products for patients with contact allergy are listed.
Jacob, Sharon; Katta, Rajani; Nedorost, Susan; Warshaw, Erin; Zirwas, Matt; Selbo, Nicole
2011-01-01
Objective: To provide updated data on the usage of ingredients that are common potential contact allergens in several categories of topical products. To identify useful alternative products with few or no common contact allergens. Design: In November 2009, the full ingredient lists of 5,416 skin, hair, and cosmetic products marketed by the CVS pharmacy chain were copied from CVS.com into Microsoft Word format for analysis. Computer searches were made in Microsoft Word using search/replace and sorting functions to accurately identify the presence of specific allergens in each website product. Measurements: Percentages of American Contact Dermatitis Society core series allergens (and other common preservatives and sunblocks) were calculated. Results: The usage of American Contact Dermatitis Society core series allergens (and other preservatives and sunblocks) in various miscellaneous categories of topical products is reported. Conclusion: Data on allergens and alternatives for ancillary skin care products are not widely published. This article reviews some of the common potential allergens in antiperspirants, deodorants, shaving products, sunblocks, powders, and wipes. Suitable available alternative products for patients with contact allergy are listed. PMID:22010054
Transcript mapping for handwritten English documents
NASA Astrophysics Data System (ADS)
Jose, Damien; Bharadwaj, Anurag; Govindaraju, Venu
2008-01-01
Transcript mapping, or text alignment with handwritten documents, is the automatic alignment of words in a text file with word images in a handwritten document. Such a mapping has several applications in fields ranging from machine learning, where large quantities of truth data are required for evaluating handwriting recognition algorithms, to data mining, where word image indexes are used in ranked retrieval of scanned documents in a digital library. The alignment also aids "writer identity" verification algorithms. Interfaces which display scanned handwritten documents may use this alignment to highlight manuscript tokens when a person examines the corresponding transcript word. We propose an adaptation of the True DTW dynamic programming algorithm for English handwritten documents. Our primary contribution is the integration of the dissimilarity scores from a word-model word recognizer with the Levenshtein distance between the recognized word and the lexicon word, used as a cost metric in the DTW algorithm, leading to a fast and accurate alignment. The results provided confirm the effectiveness of our approach.
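The DTW-based alignment described above can be sketched in a few lines. Since the word recognizer itself is not available here, the toy example below uses only a normalized Levenshtein distance between transcript words and stand-in recognition labels as the local cost; the paper additionally mixes in the recognizer's own dissimilarity score.

```python
# Toy sketch of DTW-based transcript mapping with a Levenshtein local cost.
import numpy as np

def levenshtein(a: str, b: str) -> int:
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return int(dp[-1])

def dtw_align_cost(transcript, recognized):
    """Total DTW alignment cost between transcript words and recognized labels."""
    n, m = len(transcript), len(recognized)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = levenshtein(transcript[i - 1], recognized[j - 1])
            d /= max(len(transcript[i - 1]), len(recognized[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

print(dtw_align_cost(["the", "quick", "fox"], ["the", "quik", "fox"]))
```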
NASA Astrophysics Data System (ADS)
This has always been the major objection to its use by those not driven by the need to typeset mathematics, since the “what-you-see-is-what-you-get” (WYSIWYG) packages offered by Microsoft Word and WordPerfect are easy to learn and use. Recently, however, commercial software companies have begun to market almost-WYSIWYG programs that create LaTeX files. Some commercial software packages that create LaTeX files are listed in Table 1. EXP and SWP have some of the “look and feel” of the software that is popular in offices, and PCTeX32 allows quick and convenient previews of the translated LaTeX files.
Combating WMD Journal. Issue 6, Fall/Winter 2010
2010-12-31
Editorial Board prior to publication. Submit articles in Microsoft Word without automatic features, include photographs, graphs, tables, etc. as...presenters as many in attendance were unlikely to be swayed and in some cases the meetings turned into adversarial shouting matches. These...Solar Superstorm, http://science.nasa.gov/science-news/science-at-nasa/2003/23oct_superstorm/ 8. Pfeffer, Robert, The Need to Re-define
Font adaptive word indexing of modern printed documents.
Marinai, Simone; Marino, Emanuele; Soda, Giovanni
2006-08-01
We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase the access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self Organizing Maps (SOM) to perform unsupervised character clustering, the definition of one suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.
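The first of the three ideas above, unsupervised character clustering with a Self-Organizing Map, can be illustrated with a small sketch. The example below assumes the third-party `minisom` package and random feature vectors standing in for character images; it shows the clustering step only, not the authors' word representation or alignment stages.

```python
# Sketch of SOM-based unsupervised character clustering using the third-party
# `minisom` package (an illustrative choice, not the paper's implementation).
# Character images are stood in by random 64-dimensional feature vectors.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
char_features = rng.random((500, 64))        # 500 "character" vectors

som = MiniSom(8, 8, 64, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(char_features, num_iteration=2000)

# Each character is mapped to its best-matching SOM node; characters landing
# on the same node form one unsupervised "character class".
clusters = {}
for i, vec in enumerate(char_features):
    clusters.setdefault(som.winner(vec), []).append(i)
print(f"{len(clusters)} character clusters found")
```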
System for information discovery
Pennock, Kelly A [Richland, WA]; Miller, Nancy E [Kennewick, WA]
2002-11-19
A sequence of word filters is used to eliminate terms in the database that do not discriminate document content, resulting in a filtered word set and a topic word set whose members are highly predictive of content. These two word sets are then formed into a two-dimensional matrix whose entries are calculated as the conditional probability that a document will contain the word in a row given that it contains the word in a column. The matrix representation allows the resultant vectors to be used to interpret document contents.
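The conditional-probability matrix described above follows directly from a binary document-term incidence matrix. The sketch below computes it for a few toy documents; the word filters of the patent are not reproduced.

```python
# Sketch: entry [i, j] = P(document contains word_i | document contains word_j),
# computed from a binary document-term incidence matrix.
import numpy as np

docs = [
    "nuclear reactor safety report",
    "reactor core safety analysis",
    "budget report for the fiscal year",
]
vocab = sorted({w for d in docs for w in d.split()})
incidence = np.array([[w in d.split() for w in vocab] for d in docs], float)

counts = incidence.T @ incidence     # co-occurrence counts; counts[j, j] = doc frequency
col_totals = np.diag(counts)         # number of docs containing each column word
matrix = counts / col_totals         # matrix[i, j] = P(word_i | word_j)

j = vocab.index("reactor")
for i, w in enumerate(vocab):
    print(f"P({w} | reactor) = {matrix[i, j]:.2f}")
```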
2008-11-01
T or more words, where T is a threshold that is empirically set to 300 in the experiment. The second rule aims to remove pornographic documents...Some blog documents are embedded with pornographic words to attract search traffic. We identify a list of pornographic words. Given a blog document, all...document, this document is considered pornographic spam, and is discarded. The third rule removes documents written in foreign languages. We count the
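The excerpt above describes three heuristic filters: a minimum word count, a banned-word list, and a language check. A hedged sketch of that rule chain follows; the threshold, the banned-word list, and the crude English-word heuristic are all stand-ins, not the study's actual resources.

```python
# Illustrative three-rule document filter (minimum length, banned vocabulary,
# foreign-language check). All lists and thresholds here are placeholders.
T = 300
BANNED = {"exampleblockedterm1", "exampleblockedterm2"}
ENGLISH_STOPWORDS = {"the", "a", "and", "of", "to", "in", "is", "it"}

def keep_document(text: str) -> bool:
    words = text.lower().split()
    if len(words) < T:                       # rule 1: document too short
        return False
    if any(w in BANNED for w in words):      # rule 2: banned vocabulary
        return False
    # rule 3: crude language check -- require a share of common English words
    english_hits = sum(1 for w in words if w in ENGLISH_STOPWORDS)
    return english_hits / len(words) >= 0.02
```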
Xyce Parallel Electronic Simulator Reference Guide Version 6.7.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide [1]. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1]. The information herein is subject to change without notice. Copyright © 2002-2017 Sandia Corporation. All rights reserved. Trademarks: Xyce™ Electronic Simulator and Xyce™ are trademarks of Sandia Corporation. Orcad, Orcad Capture, PSpice and Probe are registered trademarks of Cadence Design Systems, Inc. Microsoft, Windows and Windows 7 are registered trademarks of Microsoft Corporation. Medici, DaVinci and Taurus are registered trademarks of Synopsys Corporation. Amtec and TecPlot are trademarks of Amtec Engineering, Inc. All other trademarks are property of their respective owners. Contacts: World Wide Web http://xyce.sandia.gov, https://info.sandia.gov/xyce (Sandia only); Email xyce@sandia.gov (outside Sandia), xyce-sandia@sandia.gov (Sandia only); Bug Reports (Sandia only) http://joseki-vm.sandia.gov/bugzilla, http://morannon.sandia.gov/bugzilla
Exposing Vital Forensic Artifacts of USB Devices in the Windows 10 Registry
2015-06-01
Digital media devices are regularly seized pursuant to criminal investigations and Microsoft Windows is the most commonly encountered... digital footprints available on seized computers that assist in re-creating a crime scene and telling the story of the events that occurred. Part of this
ERIC Educational Resources Information Center
Fredholm, Kent
2014-01-01
The use of online translation (OT) is increasing as more pupils receive laptops from their schools. This study investigates OT use in two groups of Swedish pupils (ages 17-18) studying Spanish as an L3: one group (A) having free Internet access and the spelling and grammar checker of Microsoft Word, the other group (B) using printed dictionaries…
NASA Technical Reports Server (NTRS)
Vairo, Daniel M.
1998-01-01
The removal and installation of sting-mounted wind tunnel models in the National Transonic Facility (NTF) is a multi-task process having a large impact on the annual throughput of the facility. Approximately ten model removal and installation cycles occur annually at the NTF with each cycle requiring slightly over five days to complete. The various tasks of the model changeover process were modeled in Microsoft Project as a template to provide a planning, tracking, and management tool. The template can also be used as a tool to evaluate improvements to this process. This document describes the development of the template and provides step-by-step instructions on its use and as a planning and tracking tool. A secondary role of this document is to provide an overview of the model changeover process and briefly describe the tasks associated with it.
Readability assessment of the American Rhinologic Society patient education materials.
Kasabwala, Khushabu; Misra, Poonam; Hansberry, David R; Agarwal, Nitin; Baredes, Soly; Setzen, Michael; Eloy, Jean Anderson
2013-04-01
The extensive amount of medical literature available on the Internet is frequently accessed by patients. To effectively contribute to healthcare decision-making, these online resources should be worded at a level that is readable by any patient seeking information. The American Medical Association and National Institutes of Health recommend that the readability of patient information material be between a 4th and 6th grade level. In this study, we evaluate the readability of online patient education information available from the American Rhinologic Society (ARS) website using 9 different assessment tools that analyze the materials for reading ease and grade level of the target audience. Online patient education material from the ARS was downloaded in February 2012 and assessed for level of readability using the Flesch Reading Ease, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook (SMOG) Grading, Coleman-Liau Index, Gunning-Fog Index, FORCAST formula, Raygor Readability Estimate, the Fry Graph, and the New Dale-Chall Readability Formula. Each article was pasted as plain text into a Microsoft® Word® document and each subsection was analyzed using the software package Readability Studio Professional Edition Version 2012.1. All healthcare education materials assessed were written between a 9th grade and graduate reading level and were considered "difficult" to read by the assessment scales. Online patient education materials on the ARS website are written above the recommended 6th grade level and may require revision to make them easily understood by a broader audience. © 2013 ARS-AAOA, LLC.
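Two of the measures listed above have simple closed forms and can be computed directly. The sketch below implements the standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas with a crude vowel-group syllable counter, so scores will differ slightly from dedicated packages such as Readability Studio; the sample sentence is invented.

```python
# Flesch Reading Ease and Flesch-Kincaid Grade Level with a heuristic
# syllable counter (approximate; for illustration only).
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences          # words per sentence
    spw = syllables / len(words)          # syllables per word
    flesch_ease = 206.835 - 1.015 * wps - 84.6 * spw
    fk_grade = 0.39 * wps + 11.8 * spw - 15.59
    return flesch_ease, fk_grade

ease, grade = readability("The nasal septum divides the nose. Deviation may obstruct airflow.")
print(f"Flesch Reading Ease: {ease:.1f}, Flesch-Kincaid Grade: {grade:.1f}")
```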
NASA Astrophysics Data System (ADS)
Sleeman, J.; Halem, M.; Finin, T.; Cane, M. A.
2016-12-01
Approximately every five years, dating back to 1989, thousands of climate scientists, research centers, and government labs volunteer to prepare comprehensive Assessment Reports for the Intergovernmental Panel on Climate Change. These are highly curated reports distributed to policy makers in roughly 200 nations. There have been five IPCC Assessment Reports to date, the latest leading to the Paris Agreement of Dec. 2015, signed thus far by 172 nations, which aims to limit global greenhouse gas emissions so that atmospheric warming does not exceed 2 °C. These reports are a living, evolving big-data collection tracing 30 years of climate science research, observations, and model scenario intercomparisons. They contain more than 200,000 citations over a 30-year period that trace the evolution of the physical basis of climate science; the observed and predicted impact, risk, and adaptation to increased greenhouse gases; and mitigation approaches, pathways, and policies for climate change. Document-topic and topic-term probability distributions are built from the vocabularies of the respective assessment report chapters and citations. Using Microsoft Bing, we retrieve 150,000 citations referenced across chapters and convert those citations to text. Using a word n-gram model based on a heterogeneous set of climate change terminology, lemmatization, noise filtering, and stopword elimination, we calculate word frequencies for chapters and citations. Temporal document sets are built based on the assessment period. In addition to topic modeling, we employ cross-domain correlation measures. Using the Jensen-Shannon divergence and Pearson correlation, we build correlation matrices for chapter and citation topics. The shared vocabulary acts as the bridge between domains, resulting in chapter-citation point pairs in space. Pairs are established based on a document-topic probability distribution. Each chapter and citation is associated with a vector of topics, and based on the n most probable topics we establish which chapter-citation pairs are most similar. We will perform posterior inferences based on a Metropolis-Hastings simulated-annealing MCMC algorithm to infer, from the evolution of topics from AR1 to AR4, assertions of topics for AR5 and potentially AR6.
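The two similarity measures named in the abstract, Jensen-Shannon divergence and Pearson correlation between topic distributions, are standard and can be sketched directly. The toy topic vectors below are invented for illustration; note that SciPy's jensenshannon() returns the square root of the divergence.

```python
# Comparing a chapter's and a citation's topic distributions (toy numbers).
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import pearsonr

chapter_topics = np.array([0.40, 0.30, 0.20, 0.10])   # P(topic | chapter)
citation_topics = np.array([0.35, 0.35, 0.20, 0.10])  # P(topic | citation)

js_divergence = jensenshannon(chapter_topics, citation_topics, base=2) ** 2
pearson_r, p_value = pearsonr(chapter_topics, citation_topics)

print(f"Jensen-Shannon divergence: {js_divergence:.4f}")
print(f"Pearson correlation: {pearson_r:.3f} (p = {p_value:.3f})")
```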
Investigating Background Pictures for Picture Gesture Authentication
2017-06-01
computing, stating “Microsoft is committed to making sure that the technology within the agreement has a mobile-first focus, and we expect to begin to... The military relies heavily on computer systems. Without a strong method of authentication
Word spotting for handwritten documents using Chamfer Distance and Dynamic Time Warping
NASA Astrophysics Data System (ADS)
Saabni, Raid M.; El-Sana, Jihad A.
2011-01-01
A large number of historical handwritten documents are held in libraries around the world. The desire to access, search, and explore these documents paves the way for a new age of knowledge sharing and promotes collaboration and understanding between human societies. Currently, the indexes for these documents are generated manually, which is very tedious and time consuming. Results produced by state-of-the-art techniques for converting complete images of handwritten documents into textual representations are not yet sufficient. Therefore, word-spotting methods have been developed to archive and index images of handwritten documents in order to enable efficient searching within documents. In this paper, we present a new matching algorithm to be used in word-spotting tasks for historical Arabic documents. We present a novel algorithm based on the Chamfer Distance to compute the similarity between shapes of word-parts. Matching results are used to cluster images of Arabic word-parts into different classes using the Nearest Neighbor rule. To compute the distance between two word-part images, the algorithm subdivides each image into equal-sized slices (windows). A modified version of the Chamfer Distance, incorporating geometric gradient features and distance transform data, is used as a similarity distance between the different slices. Finally, the Dynamic Time Warping (DTW) algorithm is used to measure the distance between two images of word-parts. By using the DTW we enabled our system to cluster similar word-parts, even though they are transformed non-linearly due to the nature of handwriting. We tested our implementation of the presented methods using various documents in different writing styles, taken from Juma'a Al Majid Center - Dubai, and obtained encouraging results.
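The core similarity measure above, a Chamfer Distance between two binary shapes, can be sketched with a distance transform. The toy images below are simple rectangles, and the paper's gradient features and slice-wise DTW are omitted; this is an illustration of the basic metric only.

```python
# Symmetric Chamfer Distance between two binary word-part images, via the
# Euclidean distance transform. Toy shapes stand in for real word-part images.
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean distance from each foreground pixel of one image to the nearest
    foreground pixel of the other, averaged in both directions."""
    dist_to_b = distance_transform_edt(~b)    # distance to nearest True pixel in b
    dist_to_a = distance_transform_edt(~a)
    ab = dist_to_b[a].mean() if a.any() else 0.0
    ba = dist_to_a[b].mean() if b.any() else 0.0
    return 0.5 * (ab + ba)

shape1 = np.zeros((10, 10), bool); shape1[3:7, 2:8] = True
shape2 = np.zeros((10, 10), bool); shape2[4:8, 3:9] = True
print(round(chamfer_distance(shape1, shape2), 3))
```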
Code of Federal Regulations, 2011 CFR
2011-10-01
... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...-compatible format. All databases must be supported with adequate documentation on data attributes, SQL...
Static Verification for Code Contracts
NASA Astrophysics Data System (ADS)
Fähndrich, Manuel
The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.
TableSim--A program for analysis of small-sample categorical data.
David J. Rugg
2003-01-01
Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft Windows command-line environments.
Zhang, Yong; Huo, Meirong; Zhou, Jianping; Xie, Shaofei
2010-09-01
This study presents PKSolver, a freely available menu-driven add-in program for Microsoft Excel written in Visual Basic for Applications (VBA), for solving basic problems in pharmacokinetic (PK) and pharmacodynamic (PD) data analysis. The program provides a range of modules for PK and PD analysis including noncompartmental analysis (NCA), compartmental analysis (CA), and pharmacodynamic modeling. Two special built-in modules, multiple absorption sites (MAS) and enterohepatic circulation (EHC), were developed for fitting the double-peak concentration-time profile based on the classical one-compartment model. In addition, twenty frequently used pharmacokinetic functions were encoded as a macro and can be directly accessed in an Excel spreadsheet. To evaluate the program, a detailed comparison of modeling PK data using PKSolver and professional PK/PD software package WinNonlin and Scientist was performed. The results showed that the parameters estimated with PKSolver were satisfactory. In conclusion, the PKSolver simplified the PK and PD data analysis process and its output could be generated in Microsoft Word in the form of an integrated report. The program provides pharmacokinetic researchers with a fast and easy-to-use tool for routine and basic PK and PD data analysis with a more user-friendly interface. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
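The noncompartmental part of an analysis like PKSolver's reduces to a few standard quantities. The sketch below computes a linear-trapezoidal AUC, a terminal elimination rate constant from a log-linear fit, and the half-life on an invented concentration-time profile; it illustrates basic NCA, not PKSolver's own code.

```python
# Basic noncompartmental analysis (NCA) on toy plasma-concentration data.
import numpy as np

time = np.array([0.5, 1, 2, 4, 8, 12, 24], float)        # h
conc = np.array([8.0, 10.5, 9.0, 6.0, 3.0, 1.6, 0.4])    # mg/L

auc_0_t = np.trapz(conc, time)                  # linear trapezoidal AUC(0-t)

# Terminal slope from the last three log-transformed points.
k_el = -np.polyfit(time[-3:], np.log(conc[-3:]), 1)[0]    # elimination rate, 1/h
t_half = np.log(2) / k_el                                 # terminal half-life, h
auc_0_inf = auc_0_t + conc[-1] / k_el                     # extrapolated AUC(0-inf)

print(f"AUC(0-t) = {auc_0_t:.1f} mg*h/L, t1/2 = {t_half:.1f} h, "
      f"AUC(0-inf) = {auc_0_inf:.1f} mg*h/L")
```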
BanTeC: a software tool for management of corneal transplantation.
López-Alvarez, P; Caballero, F; Trias, J; Cortés, U; López-Navidad, A
2005-11-01
Until recently, all cornea information at our tissue bank was managed manually; no specific database or computer tool had been implemented to provide electronic versions of documents and medical reports. The main objective of the BanTeC project was therefore to create a computerized system to integrate and classify all the information and documents used in the center in order to facilitate management of retrieved and transplanted corneal tissues. We used the Windows platform to develop the project. Microsoft Access and Microsoft Jet Engine were used at the database level, and Data Access Objects was the chosen data access technology. In short, the BanTeC software seeks to computerize the tissue bank. All the initial stages of the development have now been completed, from specification of needs, program design, and implementation of the software components, to the total integration of the final result in the real production environment. BanTeC will allow the generation of statistical reports for analysis to improve our performance.
Setti, E; Musumeci, R
2001-06-01
The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Expert Group (JPEG) and Graphic Interchange Format (GIF). Currently, neither browser can display radiologic images in native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even the older versions. The software is free and available from the author.
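The windowing function mentioned above is a simple intensity mapping, independent of the applet itself. The sketch below shows the usual window/level transform on an invented array of CT-like values; real use would first read the pixel array from a DICOM file.

```python
# Window/level display transform: clip pixel values to
# [level - width/2, level + width/2] and rescale to 8-bit for display.
import numpy as np

def apply_window(pixels: np.ndarray, level: float, width: float) -> np.ndarray:
    lo, hi = level - width / 2.0, level + width / 2.0
    windowed = np.clip(pixels, lo, hi)
    return ((windowed - lo) / (hi - lo) * 255).astype(np.uint8)

ct_slice = np.random.randint(-1000, 2000, size=(512, 512))    # fake HU values
display = apply_window(ct_slice, level=40, width=400)          # soft-tissue window
```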
Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L. Sco
2008-01-01
This report includes a document and accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,489) and unpublished (n = 176) references compiled through calendar year 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimum knowledge of the use of Microsoft Access is required.
33 CFR 160.210 - Methods for submitting an NOA.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Language (XML) formatted documents via web service; (iii) Electronic submission via Microsoft InfoPath... NVMC, United States Coast Guard, 408 Coast Guard Drive, Kearneysville, WV 25430, by: (1) Electronic submission via the electronic Notice of Arrival and Departure (eNOAD) and consisting of the following three...
33 CFR 160.210 - Methods for submitting an NOA.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Language (XML) formatted documents via web service; (iii) Electronic submission via Microsoft InfoPath... NVMC, United States Coast Guard, 408 Coast Guard Drive, Kearneysville, WV 25430, by: (1) Electronic submission via the electronic Notice of Arrival and Departure (eNOAD) and consisting of the following three...
33 CFR 160.210 - Methods for submitting an NOA.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Language (XML) formatted documents via web service; (iii) Electronic submission via Microsoft InfoPath... NVMC, United States Coast Guard, 408 Coast Guard Drive, Kearneysville, WV 25430, by: (1) Electronic submission via the electronic Notice of Arrival and Departure (eNOAD) and consisting of the following three...
An integrated data-analysis and database system for AMS 14C
NASA Astrophysics Data System (ADS)
Kjeldsen, Henrik; Olsen, Jesper; Heinemeier, Jan
2010-04-01
AMSdata is the name of a combined database and data-analysis system for AMS 14C and stable-isotope work that has been developed at Aarhus University. The system (1) contains routines for data analysis of AMS and MS data, (2) allows a flexible and accurate description of sample extraction and pretreatment, also when samples are split into several fractions, and (3) keeps track of all measured, calculated and attributed data. The structure of the database is flexible and allows an unlimited number of measurement and pretreatment procedures. The AMS 14C data analysis routine is fairly advanced and flexible, and it can be easily optimized for different kinds of measuring processes. Technically, the system is based on a Microsoft SQL server and includes stored SQL procedures for the data analysis. Microsoft Office Access is used for the (graphical) user interface, and in addition Excel, Word and Origin are exploited for input and output of data, e.g. for plotting data during data analysis.
CD-ROM Networking: Navigating through VINES and NetWare and the New Software Technologies.
ERIC Educational Resources Information Center
Lieberman, Paula
1995-01-01
Provides an overview of developments in CD-ROM networking technology and describes products offered by Axis, Banyan (VINES--network operating environment), CD Connection, Celerity, Data/Ware, Document Imaging Systems Corporation (DISC), Imagery, Jodian, Meridian, Micro Design International, Microsoft, Microtest, Novell, OnLine Computer Systems,…
Aviation Environmental Design Tool (AEDT) : Version 2c service Pack 1 : installation guide.
DOT National Transportation Integrated Search
2016-12-01
This document provides detailed instructions on how to install and run AEDT 2c Service Pack 1 (SP1). It is important to follow the installation instructions in the order listed below, as Microsoft SQL Server 2008 R2 is a prerequisite for AEDT. Instal...
John F. Caratti
2006-01-01
The FIREMON database software allows users to enter data, store, analyze, and summarize plot data, photos, and related documents. The FIREMON database software consists of a Java application and a Microsoft® Access database. The Java application provides the user interface with FIREMON data through data entry forms, data summary reports, and other data management tools...
Finamore, Joe; Ray, William; Kadolph, Chris; Rastegar-Mojarad, Majid; Ye, Zhan; Jacqueline, Bohne; Tachinardi, Umberto; Mendonça, Eneida; Finnegan, Brian; Bartkowiak, Barbara; Weichelt, Bryan; Lin, Simon
2014-01-01
Background/Aims New terms are rapidly appearing in the literature and practice of clinical medicine and translational research. To catalog real-world usage of medical terms, we report the first construction of an online dictionary of clinical and translational medicinal terms, which are computationally generated in near real-time using a big data approach. This project is NIH CTSA-funded and developed by the Marshfield Clinic Research Foundation in conjunction with University of Wisconsin - Madison. Currently titled Marshfield Dictionary of Clinical and Translational Science (MD-CTS), this application is a Google-like word search tool. By entering a term into the search bar, MD-CTS will display that term’s definition, usage examples, contextual terms, related images, and ontological information. A prototype is available for public viewing at http://spellchecker.mfldclin.edu/. Methods We programmatically derived the lexicon for MD-CTS from scholarly communications by parsing through 15,156,745 MEDLINE abstracts and extracting all of the unique words found therein. We then ran this list through several filters in order to remove words that were not relevant for searching, such as common English words and numeric expressions. We then loaded the resulting 1,795,769 terms into SQL tables. Each term is cross-referenced with every occurrence in all abstracts in which it was found. Additional information is aggregated from Wiktionary, Bioportal, and Wikipedia in real-time and displayed on-screen. From this lexicon we created a supplemental dictionary resource (updated quarterly) to be used in Microsoft Office® products. Results We evaluated the utility of MD-CTS by creating a list of 100 words derived from recent clinical and translational medicine publications in the week of July 22, 2013. We then performed comparative searches for each term with Taber’s Cyclopedic Medical Dictionary, Stedman’s Medical Dictionary, Dorland’s Illustrated Medical Dictionary, Medical Subject Headings (MeSH), and MD-CTS. We compared our supplemental dictionary resource to OpenMedSpell for effectiveness in accuracy of term recognition. Conclusions In summary, we developed an online mobile and desktop reference, which comprehensively integrates Wiktionary (term information), Bioportal (ontological information), Wikipedia (related images), and Medline abstract information (term usage) for scientists and clinicians to browse in real-time. We also created a supplemental dictionary resource to be used in Microsoft Office® products.
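The lexicon-building step described above (extract unique words from abstracts, then drop common English words and numeric tokens) is straightforward to sketch. The abstracts and stop-word list below are small stand-ins, not the MEDLINE corpus or the project's actual filters.

```python
# Sketch of lexicon extraction and filtering from a set of abstracts.
import re

abstracts = [
    "Pharmacogenomic variation alters warfarin dosing in 212 patients.",
    "We evaluate translational biomarkers for early sepsis detection.",
]
STOPWORDS = {"we", "in", "for", "the", "and", "of", "a", "alters", "early"}

terms = set()
for text in abstracts:
    for token in re.findall(r"[A-Za-z0-9\-]+", text.lower()):
        if token in STOPWORDS or token.isdigit():
            continue                     # drop common words and pure numbers
        terms.add(token)

print(sorted(terms))
```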
Zagoris, Konstantinos; Pratikakis, Ioannis; Gatos, Basilis
2017-05-03
Word spotting strategies employed in historical handwritten documents face many challenges due to variation in writing style and intense degradation. In this paper, a new method that permits effective word spotting in handwritten documents is presented. It relies upon document-oriented local features that take into account information around representative keypoints, as well as a matching process that incorporates spatial context in a local proximity search, without using any training data. Experimental results on four historical handwritten datasets for two different scenarios (segmentation-based and segmentation-free), using standard evaluation measures, show the improved performance achieved by the proposed methodology.
Teaching Basic Reading Skills in Secondary Schools.
ERIC Educational Resources Information Center
Carnine, Linda
1980-01-01
This document presents diagnostic and prescriptive techniques that will enable teachers to enhance secondary school students' learning through reading in content areas. Three terms used in the document are defined in Section I: "vocabulary skills" include word attack skills, sight word skills, and word meanings; "comprehension skills" are literal,…
Sanfilippo, Antonio [Richland, WA]; Calapristi, Augustin J [West Richland, WA]; Crow, Vernon L [Richland, WA]; Hetzler, Elizabeth G [Kennewick, WA]; Turner, Alan E [Kennewick, WA]
2009-12-22
Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.
Car manufacturers and global road safety: a word frequency analysis of road safety documents.
Roberts, I; Wentz, R; Edwards, P
2006-10-01
The World Bank believes that the car manufacturers can make a valuable contribution to road safety in poor countries and has established the Global Road Safety Partnership (GRSP) for this purpose. However, some commentators are sceptical. The authors examined road safety policy documents to assess the extent of any bias. Word frequency analyses of road safety policy documents from the World Health Organization (WHO) and the GRSP. The relative occurrence of key road safety terms was quantified by calculating a word prevalence ratio with 95% confidence intervals. Terms for which there was a fourfold difference in prevalence between the documents were tabulated. Compared to WHO's World report on road traffic injury prevention, the GRSP road safety documents were substantially less likely to use the words speed, speed limits, child restraint, pedestrian, public transport, walking, and cycling, but substantially more likely to use the words school, campaign, driver training, and billboard. There are important differences in emphasis in road safety policy documents prepared by WHO and the GRSP. Vigilance is needed to ensure that the road safety interventions that the car industry supports are based on sound evidence of effectiveness.
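The word prevalence ratio and its 95% confidence interval used above can be computed like any ratio of two proportions with the usual log-normal approximation. The counts in the sketch below are invented for illustration only.

```python
# Word prevalence ratio with a 95% confidence interval
# (log-normal approximation for a ratio of two proportions).
import math

def prevalence_ratio(a, n1, b, n2):
    """Word occurs a times in n1 words (corpus 1) and b times in n2 words (corpus 2)."""
    pr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(pr) - 1.96 * se_log)
    hi = math.exp(math.log(pr) + 1.96 * se_log)
    return pr, lo, hi

pr, lo, hi = prevalence_ratio(a=45, n1=60000, b=8, n2=55000)   # invented counts
print(f"prevalence ratio = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```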
Link-topic model for biomedical abbreviation disambiguation.
Kim, Seonho; Yoon, Juntae
2015-02-01
The ambiguity of biomedical abbreviations is one of the challenges in biomedical text mining systems. In particular, the handling of term variants and abbreviations without nearby definitions is a critical issue. In this study, we adopt the concepts of topic of document and word link to disambiguate biomedical abbreviations. We newly suggest the link topic model inspired by the latent Dirichlet allocation model, in which each document is perceived as a random mixture of topics, where each topic is characterized by a distribution over words. Thus, the most probable expansions with respect to abbreviations of a given abstract are determined by word-topic, document-topic, and word-link distributions estimated from a document collection through the link topic model. The model allows two distinct modes of word generation to incorporate semantic dependencies among words, particularly long form words of abbreviations and their sentential co-occurring words; a word can be generated either dependently on the long form of the abbreviation or independently. The semantic dependency between two words is defined as a link and a new random parameter for the link is assigned to each word as well as a topic parameter. Because the link status indicates whether the word constitutes a link with a given specific long form, it has the effect of determining whether a word forms a unigram or a skipping/consecutive bigram with respect to the long form. Furthermore, we place a constraint on the model so that a word has the same topic as a specific long form if it is generated in reference to the long form. Consequently, documents are generated from the two hidden parameters, i.e. topic and link, and the most probable expansion of a specific abbreviation is estimated from the parameters. Our model relaxes the bag-of-words assumption of the standard topic model in which the word order is neglected, and it captures a richer structure of text than does the standard topic model by considering unigrams and semantically associated bigrams simultaneously. The addition of semantic links improves the disambiguation accuracy without removing irrelevant contextual words and reduces the parameter space of massive skipping or consecutive bigrams. The link topic model achieves 98.42% disambiguation accuracy on 73,505 MEDLINE abstracts with respect to 21 three letter abbreviations and their 139 distinct long forms. Copyright © 2014 Elsevier Inc. All rights reserved.
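The link topic model itself is not reproducible from the abstract, but the standard latent Dirichlet allocation it extends can be illustrated with gensim. The toy "abstracts" and parameters below are invented; the link variables and the abbreviation-expansion scoring of the paper are not reproduced.

```python
# Baseline illustration only: plain LDA fitted with gensim on toy token lists.
from gensim import corpora, models

texts = [
    ["ra", "rheumatoid", "arthritis", "joint", "inflammation"],
    ["ra", "right", "atrium", "cardiac", "chamber"],
    ["joint", "inflammation", "treatment", "methotrexate"],
]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      random_state=0, passes=20)

# Document-topic and topic-word distributions, the quantities a disambiguator
# would consult when scoring candidate expansions of an abbreviation.
print(lda.get_document_topics(corpus[0]))
print(lda.show_topic(0, topn=5))
```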
Cloud Computing E-Communication Services in the University Environment
ERIC Educational Resources Information Center
Babin, Ron; Halilovic, Branka
2017-01-01
The use of cloud computing services has grown dramatically in post-secondary institutions in the last decade. In particular, universities have been attracted to the low-cost and flexibility of acquiring cloud software services from Google, Microsoft and others, to implement e-mail, calendar and document management and other basic office software.…
Creating FGDC and NBII metadata with Metavist 2005.
David J. Rugg
2004-01-01
This report documents a computer program for creating metadata compliant with the Federal Geographic Data Committee (FGDC) 1998 metadata standard or the National Biological Information Infrastructure (NBII) 1999 Biological Data Profile for the FGDC standard. The software runs under the Microsoft Windows 2000 and XP operating systems, and requires the presence of...
Keeping PCs up to Date Can Be Fun
ERIC Educational Resources Information Center
Goldsborough, Reid
2004-01-01
The "joy" of computer maintenance takes many forms. These days, automation is the byword. Operating systems such as Microsoft Windows and utility suites such as Symantec's Norton Internet Security let you automatically keep crucial parts of your computer system up to date. It's fun to watch the technology keep tabs on itself. This document offers…
ERIC Educational Resources Information Center
Borst Pauwels, H. W. J.; And Others
The integration of existing applications in hypermedia environments is a promising approach towards more flexible and user-friendly hypermedia learning materials. A hypermedia courseware editor, called HyDE (Hypermedia Document Editor), was developed using Microsoft Windows™ OLE technology. OLE (Object Linking and Embedding) stands for an…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avery, L.W.; Donohoo, D.T.; Sanchez, J.A.
1996-09-30
PNNL successfully completed the three tasks: Task 1 - This task provided DISA with an updated set of design checklists that can be used to measure compliance with the Style Guide. These checklists are in Microsoft® Word 6.0 format. Task 2 - This task provided a discussion of two basic models for using the Style Guide and the Design Checklist, as a compliance tool and as a design tool.
The distinct emotional flavor of Gnostic writings from the early Christian era.
Whissell, Cynthia
2008-02-01
More than 500,000 scored words in 83 documents were used to conclude that it is possible to identify the source of documents (proto-orthodox Christian versus early Gnostic) on the basis of the emotions underlying the words. Twenty-seven New Testament works and seven Gnostic documents (including the gospels of Thomas, Judas, and Mary [Magdalene]) were scored with the Dictionary of Affect in Language. Patterns of emotional word use focusing on eight types of extreme emotional words were employed in a discriminant function analysis to predict source. Prediction was highly successful (canonical r = .81, 97% correct identification of source). When the discriminant function was tested with more than 30 additional Gnostic and Christian works including a variety of translations and some wisdom books, it correctly classified all of them. The majority of the predictive power of the function (97% of all correct categorizations, 70% of the canonical r2) was associated with the preferential presence of passive and passive/pleasant words in Gnostic documents.
Wu, Chien Hua; Chiu, Ruey Kei; Yeh, Hong Mo; Wang, Da Wei
2017-11-01
In 2011, the Ministry of Health and Welfare of Taiwan established the National Electronic Medical Record Exchange Center (EEC) to permit the sharing of medical resources among hospitals. This system can presently exchange electronic medical records (EMRs) among hospitals, in the form of medical imaging reports, laboratory test reports, discharge summaries, outpatient records, and outpatient medication records. Hospitals can send or retrieve EMRs over the virtual private network by connecting to the EEC through a gateway. International standards should be adopted in the EEC to allow users with those standards to take advantage of this exchange service. In this study, a cloud-based EMR-exchange prototyping system was implemented on the basis of the Integrating the Healthcare Enterprise's Cross-Enterprise Document Sharing integration profile and the existing EMR exchange system. RESTful services were used to implement the proposed prototyping system on the Microsoft Azure cloud-computing platform. Four scenarios were created in Microsoft Azure to determine the feasibility and effectiveness of the proposed system. The experimental results demonstrated that the proposed system successfully completed EMR exchange under the four scenarios created in Microsoft Azure. Additional experiments were conducted to compare the efficiency of the EMR-exchanging mechanisms of the proposed system with those of the existing EEC system. The experimental results suggest that the proposed RESTful service approach is superior to the Simple Object Access Protocol method currently implemented in the EEC system, according to their respective response times under the four experimental scenarios. Copyright © 2017 Elsevier B.V. All rights reserved.
A Language-Independent Approach to Automatic Text Difficulty Assessment for Second-Language Learners
2013-08-01
…best-suited for regression. Our baseline uses z-normalized shallow length features and TF-LOG weighted bag-of-words vectors for Arabic, Dari, English and Pashto. We compare Support Vector Machines and the Margin… Certain words are much more common in documents about some topics than others (for example, common in documents about football, whereas they are much less common in documents about opera). We used TF-LOG weighted word frequencies on bag-of-words for each document.
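TF-LOG weighting is not defined in this fragment; a common reading is a log-scaled term frequency, sketched below purely as an illustration (the function name and the exact scaling are assumptions).

    import math
    from collections import Counter

    def tf_log_vector(tokens):
        """Log-scaled term-frequency vector for one document: weight = 1 + log(tf)."""
        counts = Counter(tokens)
        return {term: 1 + math.log(tf) for term, tf in counts.items()}

    print(tf_log_vector("the keeper saved the penalty and the crowd cheered".split()))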
The purpose of this document is to introduce the use of the ground water geohydrology computer program WhAEM for Microsoft Windows (32-bit), or WhAEM2000. WhAEM2000 is a public domain, ground-water flow model designed to facilitate capture zone delineation and protection area map...
2001-09-01
…of MEIMS was programmed in Microsoft Access 97 using Visual Basic for Applications (VBA). This prototype had very little documentation. The FAA…using Access 2000 as an interface and SQL Server as the database engine.
Integrating Digital Learning Objects in the Classroom: A Need for Educational Leadership
ERIC Educational Resources Information Center
Janson, Annick; Janson, Robin
2009-01-01
In this article, Annick Janson and Robin Janson introduce research from the Microsoft New Zealand's Partners in Learning Programme by documenting the impact of digital learning objects (DLOs) on educational practice. Janson and Janson describe the impact of DLOs on the teaching practice of a primary school in New Zealand, tracing the effects of…
Cuffney, Thomas F.; Brightbill, Robin A.
2011-01-01
The Invertebrate Data Analysis System (IDAS) software was developed to provide an accurate, consistent, and efficient mechanism for analyzing invertebrate data collected as part of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program. The IDAS software is a stand-alone program for personal computers that run Microsoft Windows®. It allows users to read data downloaded from the NAWQA Program Biological Transactional Database (Bio-TDB) or to import data from other sources either as Microsoft Excel® or Microsoft Access® files. The program consists of five modules: Edit Data, Data Preparation, Calculate Community Metrics, Calculate Diversities and Similarities, and Data Export. The Edit Data module allows the user to subset data on the basis of taxonomy or sample type, extract a random subsample of data, combine or delete data, summarize distributions, resolve ambiguous taxa (see glossary) and conditional/provisional taxa, import non-NAWQA data, and maintain and create files of invertebrate attributes that are used in the calculation of invertebrate metrics. The Data Preparation module allows the user to select the type(s) of sample(s) to process, calculate densities, delete taxa on the basis of laboratory processing notes, delete pupae or terrestrial adults, combine lifestages or keep them separate, select a lowest taxonomic level for analysis, delete rare taxa on the basis of the number of sites where a taxon occurs and (or) the abundance of a taxon in a sample, and resolve taxonomic ambiguities by one of four methods. The Calculate Community Metrics module allows the user to calculate 184 community metrics, including metrics based on organism tolerances, functional feeding groups, and behavior. The Calculate Diversities and Similarities module allows the user to calculate nine diversity and eight similarity indices. The Data Export module allows the user to export data to other software packages (CANOCO, Primer, PC-ORD, MVSP) and produce tables of community data that can be imported into spreadsheet, database, graphics, statistics, and word-processing programs. The IDAS program facilitates the documentation of analyses by keeping a log of the data that are processed, the files that are generated, and the program settings used to process the data. Though the IDAS program was developed to process NAWQA Program invertebrate data downloaded from Bio-TDB, the Edit Data module includes tools that can be used to convert non-NAWQA data into Bio-TDB format. Consequently, the data manipulation, analysis, and export procedures provided by the IDAS program can be used to process data generated outside of the NAWQA Program.
Quantitative analysis of the text and graphic content in ophthalmic slide presentations.
Ing, Edsel; Celo, Erdit; Ing, Royce; Weisbrod, Lawrence; Ing, Mercedes
2017-04-01
To determine the characteristics of ophthalmic digital slide presentations. Retrospective quantitative analysis. Slide presentations from a 2015 Canadian primary eye care conference were analyzed for their duration, character and word count, font size, words per minute (wpm), lines per slide, words per slide, slides per minute (spm), text density product (wpm × spm), proportion of graphic content, and Flesch Reading Ease (FRE) score using Microsoft PowerPoint and Word. The median audience evaluation score for the lectures was used to dichotomize the higher scoring lectures (HSL) from the lower scoring lectures (LSL). A priori we hypothesized that there would be a difference in the wpm, spm, text density product, and FRE score between HSL and LSL. Wilcoxon rank-sum tests with Bonferroni correction were utilized. The 17 lectures had medians of 2.5 spm, 20.3 words per slide, 5.0 lines per slide, 28-point sans serif font, 36% graphic content, and text density product of 136.4 words × slides/minute². Although not statistically significant, the HSL had more wpm, fewer words per slide, more graphics per slide, greater text density, and higher FRE score than LSL. There was a statistically significant difference in the spm of the HSL (3.1 ± 1.0) versus the LSL (2.2 ± 1.0) at p = 0.0124. All presenters showed more than 1 slide per minute. The HSL showed more spm than the LSL. The descriptive statistics from this study may aid in the preparation of slides used for teaching and conferences. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Human-System Integration Scorecard Update to VB.Net
NASA Technical Reports Server (NTRS)
Sanders, Blaze D.
2009-01-01
The purpose of this project was to create Human-System Integration (HSI) scorecard software, which could be utilized to validate that human factors have been considered early in hardware/system specifications and design. The HSI scorecard is partially based upon the revised Human Rating Requirements (HRR) intended for NASA's Constellation program. This software scorecard will allow for quick appraisal of HSI factors, by using visual aids to highlight low and rapidly changing scores. This project consisted of creating a user-friendly Visual Basic program that could be easily distributed and updated, to and by fellow colleagues. Updating the Microsoft Word version of the HSI scorecard to a computer application will allow for the addition of useful features, improved ease of use, and decreased completion time for the user. One significant addition is the ability to create Microsoft Excel graphs automatically from scorecard data, to allow for clear presentation of problematic areas. The purpose of this paper is to describe the rationale and benefits of creating the HSI scorecard software, the problems and goals of the project, and future work that could be done.
Suresh, R
2017-08-01
Pertinent marks of fired cartridge cases such as firing pin, breech face, extractor, ejector, etc. are used for firearm identification. A non-standard semiautomatic pistol and four .22 rimfire cartridges (head stamp KF) were used for a known-source comparison study. Two test-fired cartridge cases were examined under a stereomicroscope. The characteristic marks were captured by digital camera, and comparative analysis of striation marks was done by using different tools available in Microsoft Word (Windows 8) on a computer system. The similarities of striation marks thus obtained are highly convincing for identifying the firearm. In this paper, an effort has been made to study and compare the striation marks of two fired cartridge cases using a stereomicroscope, a digital camera and a computer system. A comparison microscope is not used in this study. The method described in this study is simple, cost effective, portable for field work, and can be carried in a crime scene vehicle to facilitate immediate on-spot examination. The findings may be highly helpful to the forensic community, law enforcement agencies and students. Copyright © 2017 Elsevier B.V. All rights reserved.
Text-image alignment for historical handwritten documents
NASA Astrophysics Data System (ADS)
Zinger, S.; Nerbonne, J.; Schomaker, L.
2009-01-01
We describe our work on text-image alignment in context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set - images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines and their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting is a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take into account the relative word length, we define the expressions for the cost function that has to be minimized for aligning text words with their images. We apply right to left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
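The use of relative word lengths can be illustrated with a simple proportional split of a line image: each transcription word gets a horizontal span proportional to its share of the characters in the line. This is only a baseline-style sketch of the idea, not the authors' cost-minimizing alignment.

    def proportional_alignment(line_width_px, words):
        """Assign each transcription word a pixel span proportional to its length."""
        total_chars = sum(len(w) for w in words)
        spans, x = [], 0.0
        for w in words:
            width = line_width_px * len(w) / total_chars
            spans.append((w, round(x), round(x + width)))
            x += width
        return spans

    # Illustrative transcription of one handwritten line, 800 pixels wide.
    print(proportional_alignment(800, ["anno", "domini", "1648"]))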
Centroid-Based Document Classification Algorithms: Analysis & Experimental Results
2000-03-06
…stories such as baseball, football, basketball, and Olympics. In the first category, most of the documents contain the words Clinton and Lewinsky and hence…document. On the other hand, any of the sports-related words like baseball, football, and basketball appearing in a document will put the document in the… [The record ends with a fragment of a table of top centroid terms and their weights: 0.15 diseas 0.14 women 0.13 heart 0.12 drug 4 0.41 newspap 0.22 editor 0.19 advertis 0.14 media 0.13 peruvian 0.13 coverag 0.12 percent 0.12 journalist.]
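The centroid-based scheme this fragment comes from can be illustrated with a minimal sketch (illustrative only, not the paper's implementation): each class is represented by the mean of its document vectors, and a new document is assigned to the class whose centroid is most similar under cosine similarity.

    import numpy as np

    def centroids(X, y):
        """Mean (centroid) vector per class from a document-term matrix X."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def classify(x, cents):
        """Assign x to the class whose centroid has the highest cosine similarity."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return max(cents, key=lambda c: cos(x, cents[c]))

    # Tiny invented example: columns = ['clinton', 'football', 'percent'].
    X = np.array([[3, 0, 1], [2, 0, 0], [0, 4, 1], [0, 3, 2]], dtype=float)
    y = np.array(["politics", "politics", "sports", "sports"])
    print(classify(np.array([0.0, 2.0, 1.0]), centroids(X, y)))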
Efficient automatic OCR word validation using word partial format derivation and language model
NASA Astrophysics Data System (ADS)
Chen, Siyuan; Misra, Dharitri; Thoma, George R.
2010-01-01
In this paper we present an OCR validation module, implemented for the System for Preservation of Electronic Resources (SPER) developed at the U.S. National Library of Medicine.1 The module detects and corrects suspicious words in the OCR output of scanned textual documents through a procedure of deriving partial formats for each suspicious word, retrieving candidate words by partial-match search from lexicons, and comparing the joint probabilities of N-gram and OCR edit transformation corresponding to the candidates. The partial format derivation, based on OCR error analysis, efficiently and accurately generates candidate words from lexicons represented by ternary search trees. In our test case comprising a historic medico-legal document collection, this OCR validation module yielded the correct words with 87% accuracy and reduced the overall OCR word errors by around 60%.
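The scoring step described above combines a language-model probability with an OCR error (edit-transformation) probability for each candidate word. The sketch below is a heavily simplified illustration of that idea; the unigram probabilities, the channel model, and the example words are assumptions, not SPER code.

    def levenshtein(a, b):
        """Standard edit distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def score(candidate, observed, unigram_prob, per_char_error=0.05):
        """Joint score: language-model probability times a crude OCR channel
        probability that decays with edit distance."""
        return unigram_prob.get(candidate, 1e-9) * per_char_error ** levenshtein(observed, candidate)

    unigrams = {"medical": 3e-4, "radical": 1e-4}
    observed = "rnedical"   # a common OCR confusion: 'm' read as 'rn'
    print(max(unigrams, key=lambda c: score(c, observed, unigrams)))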
"What is relevant in a text document?": An interpretable machine learning approach
Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert
2017-01-01
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text’s category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications. PMID:28800619
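For a plain linear bag-of-words classifier, an LRP-style decomposition reduces to attributing w_i * x_i to each word; the sketch below shows only that degenerate case as an intuition pump, not the layer-wise propagation rules used for the CNN in the paper (the vocabulary and weights are invented).

    import numpy as np

    def word_relevances(x, w, vocab):
        """For a linear score f(x) = w.x, attribute relevance w_i * x_i to each word."""
        contributions = w * x
        return sorted(((vocab[i], float(contributions[i])) for i in np.nonzero(x)[0]),
                      key=lambda t: -abs(t[1]))

    vocab = ["game", "parliament", "goal", "election"]
    x = np.array([2.0, 0.0, 1.0, 0.0])    # word counts in one document
    w = np.array([0.8, -0.9, 1.1, -0.7])  # invented weights of a sports-vs-politics classifier
    print(word_relevances(x, w, vocab))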
39 CFR 3001.10 - Form and number of copies of documents.
Code of Federal Regulations, 2010 CFR
2010-07-01
... service must be printed from a text-based pdf version of the document, where possible. Otherwise, they may... generated in either Acrobat (pdf), Word, or WordPerfect, or Rich Text Format (rtf). [67 FR 67559, Nov. 6...
ERIC Educational Resources Information Center
Herrera-Viedma, Enrique; Peis, Eduardo
2003-01-01
Presents a fuzzy evaluation method of SGML documents based on computing with words. Topics include filtering the amount of information available on the Web to assist users in their search processes; document type definitions; linguistic modeling; user-system interaction; and use with XML and other markup languages. (Author/LRW)
Carey, A.E.; Prudic, David E.
1996-01-01
Documentation is provided of model input and sample output used in a previous report for analysis of ground-water flow and simulated pumping scenarios in Paradise Valley, Humboldt County, Nevada. Documentation includes files containing input values and listings of sample output. The files, in American Standard Code for Information Interchange (ASCII) or binary format, are compressed and put on a 3-1/2-inch diskette. The decompressed files require approximately 8.4 megabytes of disk space on an International Business Machines (IBM)-compatible microcomputer using the Microsoft Disk Operating System (MS-DOS) operating system version 5.0 or greater.
The purpose of this document is to introduce through a case study the use of the ground water geohydrology computer program WhAEM for Microsoft Windows (32-bit), or WhAEM2000. WhAEM2000 is a public domain, ground-water flow model designed to facilitate capture zone delineation an...
2014-05-01
…software is available for a wide variety of operating systems, including Unix, FreeBSD, Linux, Solaris, Novell NetWare, OS X, Microsoft Windows, OS/2, TPF…Word for Xenix systems. Subsequent versions were later written for several other platforms, including IBM PCs running DOS (1983) and the Apple Macintosh…
Information extraction and knowledge graph construction from geoscience literature
NASA Astrophysics Data System (ADS)
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen
2018-03-01
Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there are limited works on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
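The bigram-graph step can be illustrated by counting adjacent content-word pairs and treating words as nodes and co-occurrence counts as weighted edges. The sketch below is an illustration of that idea only (the tokens are invented English stand-ins), not the authors' Chinese-segmentation pipeline.

    from collections import Counter

    def bigram_edges(content_words):
        """Weighted edges: adjacent content-word pairs and their co-occurrence counts."""
        return Counter(zip(content_words, content_words[1:]))

    doc = ["granite", "intrusion", "cretaceous", "granite", "intrusion", "fault"]
    for (a, b), weight in bigram_edges(doc).most_common():
        print(a, "->", b, weight)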
Sub-word image clustering in Farsi printed books
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2015-02-01
Most OCR systems are designed for the recognition of a single page. In case of unfamiliar font faces, low quality papers and degraded prints, the performance of these products drops sharply. However, an OCR system can use redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for the applications dealing with large printed documents. We assume that the whole document is printed by a unique unknown font with low quality print. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Due to the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images, based on Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks. Then all centers of created clusters are labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
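A distance of the kind described, a Hamming term between binarized images plus a shape term based on the area-to-perimeter ratio of the ink, might look roughly like the sketch below; the weighting, the perimeter estimate, and the requirement of equal image sizes are simplifying assumptions for illustration only.

    import numpy as np

    def area_perimeter_ratio(img):
        """Crude area / perimeter estimate for a binary image (True = ink)."""
        img = img.astype(bool)
        padded = np.pad(img, 1)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        perimeter = (img & ~interior).sum()
        return img.sum() / max(perimeter, 1)

    def subword_distance(img_a, img_b, shape_weight=0.5):
        """Normalized Hamming distance plus a shape-difference penalty."""
        hamming = np.count_nonzero(img_a != img_b) / img_a.size
        shape = abs(area_perimeter_ratio(img_a) - area_perimeter_ratio(img_b))
        return hamming + shape_weight * shape

    a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
    b = np.zeros((8, 8), dtype=bool); b[2:6, 3:7] = True
    print(subword_distance(a, b))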
Words, concepts, or both: optimal indexing units for automated information retrieval.
Hersh, W. R.; Hickam, D. H.; Leone, T. J.
1992-01-01
What is the best way to represent the content of documents in an information retrieval system? This study compares the retrieval effectiveness of five different methods for automated (machine-assigned) indexing using three test collections. The consistently best methods are those that use indexing based on the words that occur in the available text of each document. Methods used to map text into concepts from a controlled vocabulary showed no advantage over the word-based methods. This study also looked at an approach to relevance feedback which showed benefit for both word-based and concept-based methods. PMID:1482951
Design and realization of the compound text-based test questions library management system
NASA Astrophysics Data System (ADS)
Shi, Lei; Feng, Lin; Zhao, Xin
2011-12-01
The test questions library management system is the essential part of the on-line examination system. The basic demand for which is to deal with compound text including information like images, formulae and create the corresponding Word documents. Having compared with the two current solutions of creating documents, this paper presents a design proposal of Word Automation mechanism based on OLE/COM technology, and discusses the way of Word Automation application in detail and at last provides the operating results of the system which have high reference value in improving the generated efficiency of project documents and report forms.
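On Windows with Microsoft Word installed, Word Automation is typically driven through OLE/COM. The minimal Python sketch below (using the pywin32 package) only illustrates the mechanism the paper refers to; it is not the authors' system, and the inserted text and output path are placeholders.

    import win32com.client  # pip install pywin32; requires Microsoft Word on Windows

    word = win32com.client.Dispatch("Word.Application")    # start (or attach to) Word via COM
    word.Visible = False
    doc = word.Documents.Add()                             # new blank document
    doc.Content.Text = "Question 1: Solve x^2 - 4 = 0.\n"  # placeholder question text
    doc.SaveAs(r"C:\temp\question_paper.docx")             # placeholder output path
    doc.Close()
    word.Quit()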
Replacement Attack: A New Zero Text Watermarking Attack
NASA Astrophysics Data System (ADS)
Bashardoost, Morteza; Mohd Rahim, Mohd Shafry; Saba, Tanzila; Rehman, Amjad
2017-03-01
The main objective of zero watermarking methods that are suggested for the authentication of textual properties is to increase the fragility of produced watermarks against tampering attacks. On the other hand, zero watermarking attacks intend to alter the contents of a document without changing the watermark. In this paper, the Replacement attack is proposed, which focuses on maintaining the location of the words in the document. The proposed text watermarking attack is specifically effective on watermarking approaches that exploit word transitions in the document. The evaluation outcomes show that the tested word-based methods are unable to detect the existence of the Replacement attack in the document. Moreover, the comparison results show that the size of the Replacement attack is estimated less accurately than for other common types of zero text watermarking attacks.
Do, Nhan V; Barnhill, Rick; Heermann-Do, Kimberly A; Salzman, Keith L; Gimbel, Ronald W
2011-01-01
To design, build, implement, and evaluate a personal health record (PHR), tethered to the Military Health System, that leverages Microsoft® HealthVault and Google® Health infrastructure based on user preference. A pilot project was conducted in 2008-2009 at Madigan Army Medical Center in Tacoma, Washington. Our PHR was architected to a flexible platform that incorporated standards-based models of Continuity of Document and Continuity of Care Record to map Department of Defense-sourced health data, via a secure Veterans Administration data broker, to Microsoft® HealthVault and Google® Health based on user preference. The project design and implementation were guided by provider and patient advisory panels with formal user evaluation. The pilot project included 250 beneficiary users. Approximately 73.2% of users were < 65 years of age, and 38.4% were female. Of the users, 169 (67.6%) selected Microsoft® HealthVault, and 81 (32.4%) selected Google® Health as their PHR of preference. Sample evaluation of users reflected 100% (n = 60) satisfied with convenience of record access and 91.7% (n = 55) satisfied with overall functionality of PHR. Key lessons learned related to data-transfer decisions (push vs pull), purposeful delays in reporting sensitive information, understanding and mapping PHR use and clinical workflow, and decisions on information patients may choose to share with their provider. Currently PHRs are being viewed as empowering tools for patient activation. Design and implementation issues (eg, technical, organizational, information security) are substantial and must be thoughtfully approached. Adopting standards into design can enhance the national goal of portability and interoperability.
Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.
Williams, Daniel R; Tang, Yinshan
2013-05-07
Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites were tested and modeled, some cloud services were found to consume more energy than the traditional form. The developed model in this research took into consideration the energy consumption at the three main stages of data transmission; data center, network, and end user device. Comparable products from each suite were selected and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook which consumed less energy and emitted less GHG than the standalone counterpart. The power consumption of the cloud based Outlook (8%) and Excel (17%) was lower than their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third mixed access method was also measured for Word which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package into the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
Xyce Parallel Electronic Simulator Reference Guide Version 6.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Mei, Ting; Russo, Thomas V.
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide [1]. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1].
NASA Lewis Steady-State Heat Pipe Code Architecture
NASA Technical Reports Server (NTRS)
Mi, Ye; Tower, Leonard K.
2013-01-01
NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of operating temperature and operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options as well as user-defined options can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operating in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained and flowcharts of the key subroutines are given.
Emissions & Generation Resource Integrated Database (eGRID), eGRID2010
The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions for nitrogen oxides, sulfur dioxide, carbon dioxide, methane, and nitrous oxide; emissions rates; net generation; resource mix; and many other attributes. eGRID2010 contains the complete release of year 2007 data, as well as years 2005 and 2004 data. Excel spreadsheets, full documentation, summary data, eGRID subregion and NERC region representational maps, and GHG emission factors are included in this data set. The archived data in eGRID2002 contain years 1996 through 2000 data. For year 2007 data, the first Microsoft Excel workbook, Plant, contains boiler, generator, and plant spreadsheets. The second Microsoft Excel workbook, Aggregation, contains aggregated data by state, electric generating company, parent company, power control area, eGRID subregion, NERC region, and U.S. total levels. The third Microsoft Excel workbook, ImportExport, contains state import-export data, as well as U.S. generation and consumption data for years 2007, 2005, and 2004. For eGRID data for years 2005 and 2004, a user-friendly web application, eGRIDweb, is available to select, view, print, and export specified data.
Improving the Plasticity of LIMS Implementation: LIMS Extension through Microsoft Excel
NASA Technical Reports Server (NTRS)
Culver, Mark
2017-01-01
A Laboratory Information Management System (LIMS) is database software with many built-in tools ideal for handling and documenting most laboratory processes in an accurate and consistent manner, making it an indispensable tool for the modern laboratory. However, many LIMS end users will find that for analyses with unique considerations, such as standard curves, multiple-stage incubations, or logical conditions, a base LIMS distribution may not ideally suit their needs. These considerations bring about the need for extension languages, which can extend the functionality of a LIMS. While these languages do provide the implementation team with the functionality required to accommodate these special laboratory analyses, they are usually too complex for the end user to modify to compensate for natural changes in laboratory operations. The LIMS utilized by our laboratory offers a unique and easy-to-use choice for an extension language, one that is already heavily relied upon not only in science but also in most academic and business pursuits: Microsoft Excel. The validity of Microsoft Excel as a pseudo programming language and its usability and versatility as a LIMS extension language will be discussed. The NELAC implications and overall drawbacks of this LIMS configuration will also be discussed.
Automatic generation of stop word lists for information retrieval and analysis
Rose, Stuart J
2013-01-08
Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
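The described test can be sketched in a few lines (illustrative only; the adjacency window, threshold, and truncation rule below are assumptions based on the abstract): a term is kept as a stop-word candidate only if it occurs next to keywords about as often as it occurs at all.

    from collections import Counter

    def stop_word_list(documents, keywords, ratio_threshold=1.0, top_n=50):
        """Keep terms whose keyword-adjacency frequency is high relative to their total frequency."""
        keywords = set(keywords)
        term_freq, adjacency_freq = Counter(), Counter()
        for doc in documents:
            tokens = doc.lower().split()
            for i, tok in enumerate(tokens):
                term_freq[tok] += 1
                neighbours = tokens[max(i - 1, 0):i] + tokens[i + 1:i + 2]
                if any(n in keywords for n in neighbours):
                    adjacency_freq[tok] += 1
        kept = [t for t in term_freq
                if t not in keywords and adjacency_freq[t] / term_freq[t] >= ratio_threshold]
        return sorted(kept, key=lambda t: -term_freq[t])[:top_n]

    docs = ["the model of the reactor", "a model for the reactor core"]
    print(stop_word_list(docs, keywords={"model", "reactor", "core"}, ratio_threshold=0.5))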
75 FR 12001 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-12
... Source Categories. In routine Use 27, we inadvertently omitted the words ``in writing''. This document..., paragraph 27, in the third line after the words ``verbally or'', add the words ``in writing''. Approved...
Aye, Aye, Aye, Aye: Orthography Enhances Rapid Word Reading in an Exploratory Study.
ERIC Educational Resources Information Center
Neuhaus, Graham F.; Post, Yolanda
2003-01-01
Uses a novel word-reading efficiency measure to determine if articulations or processing times associated with reading the word "aye" were enhanced through the phonological or orthographic qualities contained in the preceding word. Documents the importance of separating phonological and orthographic information in English homophones. (SG)
Fast words boundaries localization in text fields for low quality document images
NASA Astrophysics Data System (ADS)
Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry
2018-04-01
The paper examines the problem of word boundaries precise localization in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation and recognition. While capturing an image with a mobile digital camera under uncontrolled capturing conditions, digital noise, perspective distortions or glares may occur. Further document processing gets complicated because of its specifics: layout elements, complex background, static text, document security elements, variety of text fonts. However, the problem of word boundaries localization has to be solved at runtime on mobile CPU with limited computing capabilities under specified restrictions. At the moment, there are several groups of methods optimized for different conditions. Methods for the scanned printed text are quick but limited only for images of high quality. Methods for text in the wild have an excessively high computational complexity, thus, are hardly suitable for running on mobile devices as part of the mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text on natural images. It uses local features, a sliding window and a lightweight neural network in order to achieve an optimal algorithm speed-precision ratio. The duration of the algorithm is 12 ms per field running on an ARM processor of a mobile device. The error rate for boundaries localization on a test sample of 8000 fields is 0.3
Kasabwala, Khushabu; Agarwal, Nitin; Hansberry, David R; Baredes, Soly; Eloy, Jean Anderson
2012-09-01
Americans are increasingly turning to the Internet as a source of health care information. These online resources should be written at a level readily understood by the average American. This study evaluates the readability of online patient education information available from the American Academy of Otolaryngology--Head and Neck Surgery Foundation (AAO-HNSF) professional Web site using 7 different assessment tools that analyze the materials for reading ease and grade level of the target audience. Analysis of Internet-based patient education material from the AAO-HNSF Web site. Online patient education material from the AAO-HNSF was downloaded in January 2012 and assessed for level of readability using the Flesch Reading Ease, Flesch-Kincaid Grade Level, SMOG grading, Coleman-Liau Index, Gunning-Fog Index, Raygor Readability Estimate graph, and Fry Readability graph. The text from each subsection was pasted as plain text into a Microsoft Word document, and each subsection was subjected to readability analysis using the software package Readability Studio Professional Edition Version 2012.1. All health care education material assessed is written between an 11th grade and graduate reading level and is considered "difficult to read" by the assessment scales. Online patient education materials on the AAO-HNSF Web site are written above the recommended 6th grade level and may need to be revised to make them more easily understood by a broader audience.
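For reference, the Flesch Reading Ease and Flesch-Kincaid Grade Level scores used in this kind of analysis follow well-known formulas; the short sketch below assumes word, sentence, and syllable counts are already available rather than estimated from raw text.

    def flesch_reading_ease(words, sentences, syllables):
        """Flesch Reading Ease: higher scores indicate easier text."""
        return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

    def flesch_kincaid_grade(words, sentences, syllables):
        """Flesch-Kincaid Grade Level: approximate U.S. school grade."""
        return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    # Invented counts for a 220-word passage with 9 sentences and 370 syllables.
    print(round(flesch_reading_ease(220, 9, 370), 1), round(flesch_kincaid_grade(220, 9, 370), 1))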
Identification of misspelled words without a comprehensive dictionary using prevalence analysis.
Turchin, Alexander; Chu, Julia T; Shubina, Maria; Einbinder, Jonathan S
2007-10-11
Misspellings are common in medical documents and can be an obstacle to information retrieval. We evaluated an algorithm to identify misspelled words through analysis of their prevalence in a representative body of text. We evaluated the algorithm's accuracy of identifying misspellings of 200 anti-hypertensive medication names on 2,000 potentially misspelled words randomly selected from narrative medical documents. Prevalence ratios (the frequency of the potentially misspelled word divided by the frequency of the non-misspelled word) in physician notes were computed by the software for each of the words. The software results were compared to the manual assessment by an independent reviewer. Area under the ROC curve for identification of misspelled words was 0.96. Sensitivity, specificity, and positive predictive value were 99.25%, 89.72% and 82.9% for the prevalence ratio threshold (0.32768) with the highest F-measure (0.903). Prevalence analysis can be used to identify and correct misspellings with high accuracy.
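The prevalence-ratio idea is simple to sketch: compare how often the suspect spelling occurs in the corpus with how often the reference spelling occurs, and flag it when the ratio falls below a threshold (the 0.32768 threshold reported above is specific to that study; the counts and drug names here are invented for illustration).

    def is_likely_misspelling(suspect, reference, corpus_counts, threshold=0.32768):
        """Flag 'suspect' as a misspelling of 'reference' if it is rare relative to the reference."""
        ratio = corpus_counts.get(suspect, 0) / max(corpus_counts.get(reference, 0), 1)
        return ratio < threshold

    counts = {"lisinopril": 5400, "lisinipril": 37}   # illustrative corpus frequencies
    print(is_likely_misspelling("lisinipril", "lisinopril", counts))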
Barnhill, Rick; Heermann-Do, Kimberly A; Salzman, Keith L; Gimbel, Ronald W
2011-01-01
Objective To design, build, implement, and evaluate a personal health record (PHR), tethered to the Military Health System, that leverages Microsoft® HealthVault and Google® Health infrastructure based on user preference. Materials and methods A pilot project was conducted in 2008–2009 at Madigan Army Medical Center in Tacoma, Washington. Our PHR was architected to a flexible platform that incorporated standards-based models of Continuity of Document and Continuity of Care Record to map Department of Defense-sourced health data, via a secure Veterans Administration data broker, to Microsoft® HealthVault and Google® Health based on user preference. The project design and implementation were guided by provider and patient advisory panels with formal user evaluation. Results The pilot project included 250 beneficiary users. Approximately 73.2% of users were <65 years of age, and 38.4% were female. Of the users, 169 (67.6%) selected Microsoft® HealthVault, and 81 (32.4%) selected Google® Health as their PHR of preference. Sample evaluation of users reflected 100% (n=60) satisfied with convenience of record access and 91.7% (n=55) satisfied with overall functionality of PHR. Discussion Key lessons learned related to data-transfer decisions (push vs pull), purposeful delays in reporting sensitive information, understanding and mapping PHR use and clinical workflow, and decisions on information patients may choose to share with their provider. Conclusion Currently PHRs are being viewed as empowering tools for patient activation. Design and implementation issues (eg, technical, organizational, information security) are substantial and must be thoughtfully approached. Adopting standards into design can enhance the national goal of portability and interoperability. PMID:21292705
Relational Learning via Collective Matrix Factorization
2008-06-01
…well-known example of such a schema is pLSI-pHITS [13], which models document-word counts and document-document citations: E1 = words and E2 = E3…Relational co-clustering approaches include pLSI, pLSI-pHITS, the symmetric block models of Long et al. [23, 24, 25], and Bregman tensor clustering [5] (which can…to pLSI-pHITS. In this section we provide an example where the additional flexibility of collective matrix factorization leads to better results; and
The Implementation of Cosine Similarity to Calculate Text Relevance between Two Documents
NASA Astrophysics Data System (ADS)
Gunawan, D.; Sembiring, C. A.; Budiman, M. A.
2018-03-01
The rapidly increasing number of web pages and documents calls for topic-specific filtering so that relevant documents can be found efficiently. This is preliminary research that uses cosine similarity to implement text relevance in order to find topic-specific documents. The research is divided into three parts. The first part is text preprocessing: punctuation is removed from the document, the text is converted to lower case, stop words are removed, and root words are extracted using the Porter stemming algorithm. The second part is keyword weighting. The keyword weights are used by the third part, the text relevance calculation. The text relevance calculation yields a value between 0 and 1; the closer the value is to 1, the more closely related the two documents are.
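The relevance score described is the standard cosine similarity between the two documents' term vectors; below is a minimal sketch using raw term frequencies (the weighting scheme in the paper may differ).

    import math
    from collections import Counter

    def cosine_similarity(tokens_a, tokens_b):
        """Cosine similarity between the term-frequency vectors of two token lists."""
        a, b = Counter(tokens_a), Counter(tokens_b)
        dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    doc1 = "web page filtering uses cosine similarity".split()
    doc2 = "cosine similarity measures relevance between documents".split()
    print(cosine_similarity(doc1, doc2))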
NASA Technical Reports Server (NTRS)
1994-01-01
This booklet provides a partial list of acronyms, abbreviations, and other short word forms, including their definitions, used in documents at the Goddard Space Flight Center (GSFC). This list does not preclude the use of other short forms of less general usage, as long as these short forms are identified the first time they appear in a document and are defined in a glossary in the document in which they are used. This document supplements information in the GSFC Scientific and Technical Information Handbook (GHB 2200.2/April 1989). It is not intended to contain all short word forms used in GSFC documents; however, it was compiled of actual short forms used in recent GSFC documents. The entries are listed first, alphabetically by the short form, and then again alphabetically by definition.
Visualization and Analysis of Geology Word Vectors for Efficient Information Extraction
NASA Astrophysics Data System (ADS)
Floyd, J. S.
2016-12-01
When a scientist begins studying a new geographic region of the Earth, they frequently begin by gathering relevant scientific literature in order to understand what is known, for example, about the region's geologic setting, structure, stratigraphy, and tectonic and environmental history. Experienced scientists typically know what keywords to seek and understand that if a document contains one important keyword, then other words in the document may be important as well. Word relationships in a document give rise to what is known in linguistics as the context-dependent nature of meaning. For example, the meaning of the word `strike' in geology, as in the strike of a fault, is quite different from its popular meaning in baseball. In addition, word order, such as in the phrase `Cretaceous-Tertiary boundary,' often corresponds to the order of sequences in time or space. The context of words and the relevance of words to each other can be derived quantitatively by machine learning vector representations of words. Here we show the results of training a neural network to create word vectors from scientific research papers from selected rift basins and mid-ocean ridges: the Woodlark Basin of Papua New Guinea, the Hess Deep rift, and the Gulf of Mexico basin. The word vectors are statistically defined by surrounding words within a given window, limited by the length of each sentence. The word vectors are analyzed by their cosine distance to related words (e.g., `axial' and `magma'), classified by high dimensional clustering, and visualized by reducing the vector dimensions and plotting the vectors on a two- or three-dimensional graph. Similarity analysis of `Triassic' and `Cretaceous' returns `Jurassic' as the nearest word vector, suggesting that the model is capable of learning the geologic time scale. Similarity analysis of `basalt' and `minerals' automatically returns mineral names such as `chlorite', `plagioclase,' and `olivine.' Word vector analysis and visualization allow one to extract information from hundreds of papers or more and find relationships in less time than it would take to read all of the papers. As machine learning tools become more commonly available, more and more scientists will be able to use and refine these tools for their individual needs.
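Training such vectors and querying nearest neighbours can be done with off-the-shelf tools; the sketch below uses the gensim library as one possible route (the toy corpus, hyperparameters, and gensim 4.x parameter names are assumptions, not the authors' setup).

    from gensim.models import Word2Vec  # pip install gensim

    # Each training 'sentence' is a tokenized fragment from a geoscience paper (toy examples only).
    sentences = [
        ["axial", "magma", "chamber", "beneath", "the", "ridge"],
        ["basalt", "contains", "plagioclase", "olivine", "and", "chlorite"],
        ["cretaceous", "and", "jurassic", "strata", "overlie", "triassic", "units"],
    ]

    model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=200, seed=1)
    print(model.wv.most_similar("basalt", topn=3))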
ERIC Educational Resources Information Center
Parke, David, Ed.
This document contains 21 presentations from a conference on business and marketing education. The following papers are included: "Microsoft Excel 2000" (Jeff Fuller); "Clueless in the Classroom? Hints To Help!" (Mary W. Evans); "A Strategy To Improve Narrative-Number Linkage in Business Writing" (Ellis A. Hayes);…
Microsoft Research at TREC 2009. Web and Relevance Feedback Tracks
2009-11-01
WorldWide Telescope: A Newly Open Source Astronomy Visualization System
NASA Astrophysics Data System (ADS)
Fay, Jonathan; Roberts, Douglas A.
2016-01-01
After eight years of development by Microsoft Research, WorldWide Telescope (WWT) was made an open source project at the end of June 2015. WWT was motivated by the desire to put new surveys of objects, such as the Sloan Digital Sky Survey, in the context of the night sky. The development of WWT under Microsoft started with the creation of a Windows desktop client that is widely used in various education, outreach and research projects. Using this, users can explore the data built into WWT as well as data that is loaded in. Beyond exploration, WWT can be used to create tours that present various datasets in a narrative format. In the past two years, the team developed a collection of web controls, including an HTML5 web client, which contains much of the functionality of the Windows desktop client. The project under Microsoft has deep connections with several user communities, such as education through the WWT Ambassadors program (http://wwtambassadors.org/), and with planetariums and museums such as the Adler Planetarium. WWT can also support research, including using WWT to visualize the Bones of the Milky Way and rich connections between WWT and the Astrophysics Data System (ADS, http://labs.adsabs.harvard.edu/adsabs/). One important new research connection is the use of WWT to create dynamic and potentially interactive supplements to journal articles, the first of which were created in 2015. Now WWT is an open source, community-led project. The source code is available on GitHub (https://github.com/WorldWideTelescope). There is significant developer documentation on the website (http://worldwidetelescope.org/Developers/), and an extensive developer workshop (http://wwtworkshops.org/?tribe_events=wwt-developer-workshop) took place in the fall of 2015. Now that WWT is open source, anyone with an interest in the project can be a contributor. As important as helping out with coding, the project needs people interested in documentation, testing, training and other roles.
Whittier Tunnel, Transportation & Public Facilities, State of Alaska
Tunnel schedules can be viewed online or downloaded in Adobe PDF or Excel format: Summer (May 1 - Sept 30) and Winter (Oct 1 - Apr 30). Current regulations are available as PDF or Word documents.
ERIC Educational Resources Information Center
Pavelko, Stacey L.; Owens, Robert E., Jr.
2017-01-01
Purpose: The purpose of this study was to document whether mean length of utterance (MLU[subscript S]), total number of words (TNW), clauses per sentence (CPS), and/or words per sentence (WPS) demonstrated age-related changes in children with typical language and to document the average time to collect, transcribe, and analyze conversational…
ERIC Educational Resources Information Center
Hendrix, Peter; Bolger, Patrick; Baayen, Harald
2017-01-01
Recent studies have documented frequency effects for word n-grams, independently of word unigram frequency. Further studies have revealed constructional prototype effects, both at the word level as well as for phrases. The present speech production study investigates the time course of these effects for the production of prepositional phrases in…
Development of First-Graders' Word Reading Skills: For Whom Can Dynamic Assessment Tell Us More?
ERIC Educational Resources Information Center
Cho, Eunsoo; Compton, Donald L.; Gilbert, Jennifer K.; Steacy, Laura M.; Collins, Alyson A.; Lindström, Esther R.
2017-01-01
Dynamic assessment (DA) of word reading measures learning potential for early reading development by documenting the amount of assistance needed to learn how to read words with unfamiliar orthography. We examined the additive value of DA for predicting first-grade decoding and word recognition development while controlling for autoregressive…
SciReader enables reading of medical content with instantaneous definitions.
Gradie, Patrick R; Litster, Megan; Thomas, Rinu; Vyas, Jay; Schiller, Martin R
2011-01-25
A major problem patients encounter when reading about health related issues is document interpretation, which limits reading comprehension and therefore negatively impacts health care. Currently, searching for medical definitions from an external source is time consuming, distracting, and negatively impacts reading comprehension and memory of the material. SciReader was built as a Java application with a Flex-based front-end client. The dictionary used by SciReader was built by consolidating data from several sources and generating new definitions with a standardized syntax. The application was evaluated by measuring the percentage of words defined in different documents. A survey was used to test the perceived effect of SciReader on reading time and comprehension. We present SciReader, a web-application that simplifies document interpretation by allowing users to instantaneously view medical, English, and scientific definitions as they read any document. This tool reveals the definitions of any selected word in a small frame at the top of the application. SciReader relies on a dictionary of ~750,000 unique Biomedical and English word definitions. Evaluation of the application shows that it maps ~98% of words in several different types of documents and that most users tested in a survey indicate that the application decreases reading time and increases comprehension. SciReader is a web application useful for reading medical and scientific documents. The program makes jargon-laden content more accessible to patients, educators, health care professionals, and the general public.
Microsoft Repository Version 2 and the Open Information Model.
ERIC Educational Resources Information Center
Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David
1999-01-01
Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…
Microsoft, libraries and open source
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-04-26
We are finally starting to see the early signs of transformation in scholarly publishing. The innovations we've been expecting for years are slowly being adopted, but we can also expect the pace of change to accelerate in the coming 3 to 5 years. At the same time, many of the rituals and artifacts of the scholarly communication lifecycle are still rooted in a centuries-old model. What are the primary goals of scholarly communication, and what will be the future role of librarians in that cycle? What are the obstacles in information flow (many of our own design) that can be removed? Is the library profession moving fast enough to stay ahead of the curve...or are we ever going to be struggling to keep up? With the advent of the data deluge, all-XML workflows, the semantic Web, cloud services and increasingly intelligent mobile devices - what are the implications for libraries, archivists, publishers, scholarly societies as well as individual researchers and scholars? The opportunities are many - but capitalizing on this ever-evolving landscape will require significant changes to our field, changes that we are not currently well-positioned to enact. This talk will map the current scholarly communication landscape, highlighting recent exciting developments, and will focus on the repercussions and some specific recommendations for the broader field of information management. About the speaker: Alex Wade is the Director for Scholarly Communication within Microsoft's External Research division, where he oversees several projects related to researcher productivity tools, semantic information capture, and the interoperability of information systems. Alex holds a Bachelor's degree in Philosophy from U.C. Berkeley, and a Masters of Librarianship degree from the University of Washington. During his career at Microsoft, Alex has managed the corporate search and taxonomy management services; has shipped a SharePoint-based document and workflow management solution for Sarbanes-Oxley compliance; and served as Senior Program Manager for Windows Search in Windows Vista and Windows 7. Prior to joining Microsoft, Alex held Systems Librarian, Engineering Librarian, Philosophy Librarian, and technical library positions at the University of Washington, the University of Michigan, and U.C. Berkeley. Web: http://research.microsoft.com/en-us/people/awade/
Svendsen, Mathias Tiedemann; Andersen, Flemming; Andersen, Klaus Ejner
2018-03-01
Topical antipsoriatics are recommended first-line treatment of psoriasis, but rates of adherence are low. Patient support by use of electronic health (eHealth) services is suggested to improve medical adherence. The aims were to review randomised controlled trials (RCTs) testing eHealth interventions designed to improve adherence to topical antipsoriatics and to review applications for smartphones (apps) incorporating the word psoriasis. Literature review: Medline, Embase, Cochrane, PsycINFO and Web of Science were searched using search terms for eHealth, psoriasis and topical antipsoriatics. General analysis of apps: the app stores associated with the major smartphone operating systems (iOS, Google Play, Microsoft Store, Symbian OS and Blackberry OS) were searched for apps containing the word psoriasis. Literature review: only one RCT was included, reporting on psoriasis patients' Internet-based reporting of their psoriasis status over a 12-month period. The rate of adherence was measured by the Medication Event Monitoring System (MEMS®). An improvement in medical adherence and a reduction in the severity of psoriasis were reported. General analysis of apps: a total of 184 apps contained the word psoriasis. There is a critical need for high-quality RCTs testing whether ubiquitous eHealth technologies, for example some of the numerous apps, can improve psoriasis patients' rates of adherence to topical antipsoriatics.
Exploiting domain information for Word Sense Disambiguation of medical documents.
Stevenson, Mark; Agirre, Eneko; Soroa, Aitor
2012-01-01
Current techniques for knowledge-based Word Sense Disambiguation (WSD) of ambiguous biomedical terms rely on relations in the Unified Medical Language System Metathesaurus but do not take into account the domain of the target documents. The authors' goal is to improve these methods by using information about the topic of the document in which the ambiguous term appears. The authors proposed and implemented several methods to extract lists of key terms associated with Medical Subject Heading terms. These key terms are used to represent the document topic in a knowledge-based WSD system. They are applied both alone and in combination with local context. A standard measure of accuracy was calculated over the set of target words in the widely used National Library of Medicine WSD dataset. The authors report a significant improvement when combining those key terms with local context, showing that domain information improves the results of a WSD system based on the Unified Medical Language System Metathesaurus alone. The best results were obtained using key terms obtained by relevance feedback and weighted by inverse document frequency.
Exploiting domain information for Word Sense Disambiguation of medical documents
Agirre, Eneko; Soroa, Aitor
2011-01-01
Objective: Current techniques for knowledge-based Word Sense Disambiguation (WSD) of ambiguous biomedical terms rely on relations in the Unified Medical Language System Metathesaurus but do not take into account the domain of the target documents. The authors' goal is to improve these methods by using information about the topic of the document in which the ambiguous term appears. Design: The authors proposed and implemented several methods to extract lists of key terms associated with Medical Subject Heading terms. These key terms are used to represent the document topic in a knowledge-based WSD system. They are applied both alone and in combination with local context. Measurements: A standard measure of accuracy was calculated over the set of target words in the widely used National Library of Medicine WSD dataset. Results and discussion: The authors report a significant improvement when combining those key terms with local context, showing that domain information improves the results of a WSD system based on the Unified Medical Language System Metathesaurus alone. The best results were obtained using key terms obtained by relevance feedback and weighted by inverse document frequency. PMID:21900701
Patient handover in orthopaedics, improving safety using Information Technology.
Pearkes, Tim
2015-01-01
Good inpatient handover ensures patient safety and continuity of care. An adjunct to this is the patient list, which is routinely managed by junior doctors. These lists are routinely created and managed within Microsoft Excel or Word. Following the merger of two orthopaedic departments into a single service in a new hospital, it was felt that a number of safety issues within the handover process needed to be addressed. This quality improvement project addressed these issues through the creation and implementation of a new patient database which spanned the department, allowing trouble-free, safe, and comprehensive handover. Feedback demonstrated an improved user experience, greater reliability, continuity within the lists, and a subsequent improvement in patient safety.
ERIC Educational Resources Information Center
Velasco, Kelly; Zizak, Amanda
This report describes a program for improving word analysis skills in order to increase sight reading, reading accuracy, and fluency. The targeted population consisted of second and third graders in a suburban area close to a large metropolitan city in a Midwestern state. The problems of low word analysis skills were documented through Qualitative…
Clustering of Farsi sub-word images for whole-book recognition
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2015-01-01
Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm for measuring the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on one book. Here we present extended experimental results and evaluate our method on another book with a completely different typeface. We also show that the number of newly created clusters on a page can be used as a criterion for assessing print quality and evaluating the preprocessing phases.
10 CFR 2.1011 - Management of electronic information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...
10 CFR 2.1011 - Management of electronic information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...
ERIC Educational Resources Information Center
Kercood, Suneeta; Zentall, Sydney S.; Vinh, Megan; Tom-Wright, Kinsey
2012-01-01
The purpose of this theoretically-based study was to examine the effects of yellow-highlighting "relevant" words and units within math word problems. Initial differences were documented between 10 girls at-risk for ADHD and 10 comparisons on the performance of group and individual assessments of math computations and word problems, as had…
When "Veps" Cry: Two-Year-Olds Efficiently Learn Novel Words from Linguistic Contexts Alone
ERIC Educational Resources Information Center
Ferguson, Brock; Graf, Eileen; Waxman, Sandra R.
2018-01-01
We assessed 24-month-old infants' lexical processing efficiency for both novel and familiar words. Prior work documented that 19-month-olds successfully identify referents of familiar words (e.g., The dog is so little) as well as novel words whose meanings were informed only by the surrounding sentence (e.g., The vep is crying), but that the speed…
GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain
NASA Astrophysics Data System (ADS)
Huang, Lan; Du, Youfu; Chen, Gongyang
2015-03-01
Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, is thus a fundamental issue in processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to geoscience documents, it lacks domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
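As a rough, illustrative sketch of the sequence-labelling view that CRF-based segmenters such as GeoSegmenter rest on (an assumption about the general technique, not code from the paper), the snippet below converts gold-segmented text into per-character B/M/E/S labels and decodes such labels back into words; a CRF would then be trained to predict these labels from character features.

```python
# Minimal sketch of the character-tagging view of Chinese word segmentation.
# Illustrative only; this is not the GeoSegmenter implementation.

def words_to_bmes(words):
    """Convert a list of segmented words into per-character B/M/E/S labels."""
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("S")                      # single-character word
        else:
            labels.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return labels

def bmes_to_words(chars, labels):
    """Decode a character sequence plus B/M/E/S labels back into words."""
    words, current = [], ""
    for ch, lab in zip(chars, labels):
        current += ch
        if lab in ("E", "S"):                       # word boundary reached
            words.append(current)
            current = ""
    if current:                                     # tolerate a dangling B/M
        words.append(current)
    return words

if __name__ == "__main__":
    segmented = ["地质", "学", "研究"]               # toy gold segmentation
    chars = list("".join(segmented))
    labels = words_to_bmes(segmented)
    assert bmes_to_words(chars, labels) == segmented
    print(list(zip(chars, labels)))
```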
Image Annotation and Topic Extraction Using Super-Word Latent Dirichlet Allocation
2013-09-01
an image can be used to improve automated image annotation performance over existing generalized annotators. Second, image annotations can be used...the other variables. The first ratio in the sampling Equation 2.18 uses word frequency by total words, φ̂_j^(w). The second ratio divides word...topics by total words in that document, θ̂_j^(d). Both leave out the current assignment of z_i and the results are used to randomly choose a new topic
California State Spelling Championship Word Lists [and Spelling Bee Planning Information].
ERIC Educational Resources Information Center
Sonoma County Superintendent of Schools, Santa Rosa, CA.
This two-part document contains a spelling word list compiled by the Sonoma County Superintendent of Schools (California) for use in the California State Elementary Spelling Championship competition, along with information for planning and conducting spelling bees. The spelling word list (also intended for use in the regional competitions) is a…
Evidence on Tips for Supporting Reading Skills at Home
ERIC Educational Resources Information Center
What Works Clearinghouse, 2018
2018-01-01
This document begins by providing four tips parents and caretakers can use to support children's reading skills at home: (1) Have conversations before, during, and after reading together; (2) Help children learn how to break sentences into words and words into syllables; (3) Help children sound out words smoothly; and (4) Model reading…
77 FR 15053 - Manual for Courts-Martial; Proposed Amendments
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
... the M.R.E. and a Word document using color-coded text and comments to explain amendments. Updated... evidence. f. Commenter recommended using the words ``pursuant to statutory authority'' in M.R.E. 807. JSC... the rule to findings. i. Commenter recommended removing the word ``allegedly'' from proposed M.R.E...
Keywords image retrieval in historical handwritten Arabic documents
NASA Astrophysics Data System (ADS)
Saabni, Raid; El-Sana, Jihad
2013-01-01
A system is presented for spotting and searching keywords in handwritten Arabic documents. A slightly modified dynamic time warping algorithm is used to measure similarities between words. Two sets of features are generated from the outer contour of the words/word-parts: the first set is based on the angles between nodes on the contour, and the second set is based on shape context features taken from the outer contour. To recognize a given word, the segmentation-free approach is partially adopted, i.e., continuous word parts are used as the basic alphabet, instead of individual characters or complete words. Additional strokes, such as dots and detached short segments, are classified and used in a postprocessing step to determine the final comparison decision. The search for a keyword is performed by searching for its word parts in the correct order. The performance of the presented system was very encouraging in terms of efficiency and match rates. To evaluate the presented system, its performance is compared to three different systems. Unfortunately, there are no publicly available standard datasets with ground truth for testing Arabic keyword searching systems. Therefore, a private set of images, partially taken from the Juma'a Al-Majid Center in Dubai, is used for evaluation, while a slightly modified version of the IFN/ENIT database is used for training.
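A minimal sketch of the dynamic time warping step described above, assuming Euclidean local distances between per-column contour features; the feature extraction and the paper's specific modifications to DTW are not reproduced, and the toy sequences are invented.

```python
# Minimal dynamic time warping (DTW) sketch for comparing two contour-feature
# sequences, in the spirit of the word-spotting system described above.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW distance between two sequences of feature vectors."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # local distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

# Toy usage: two short sequences of 2-D "contour features".
query = [[0.1, 0.2], [0.4, 0.5], [0.9, 0.8]]
candidate = [[0.1, 0.25], [0.38, 0.5], [0.5, 0.6], [0.9, 0.82]]
print(dtw_distance(query, candidate))
```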
2013-01-01
website). Data mining tools are in-house code developed in Python, C++ and Java. • NGA The National Geospatial-Intelligence Agency (NGA) performs data...as PostgreSQL (with PostGIS), MySQL, Microsoft SQL Server, SQLite, etc. using the appropriate JDBC driver. The documentation and ease to learn are...written in Java that is able to perform various types of regressions, classifications, and other data mining tasks. There is also a commercial version
CHROMA: consensus-based colouring of multiple alignments for publication.
Goodstadt, L; Ponting, C P
2001-09-01
CHROMA annotates multiple protein sequence alignments by consensus to produce formatted and coloured text suitable for incorporation into other documents for publication. The package is designed to be flexible and reliable, and has a simple-to-use graphical user interface running under Microsoft Windows. Both the executables and source code for CHROMA running under Windows and Linux (portable command-line only) are freely available at http://www.lg.ndirect.co.uk/chroma. Software enquiries should be directed to CHROMA@lg.ndirect.co.uk.
Putting Home Data Management into Perspective
2009-12-01
approaches. However, users of home and personal storage live it. Popular interfaces (e.g., iTunes, iPhoto, and even drop-down lists of recently-opened Word documents) allow users to navigate file
An Evaluation of the UMLS in Representing Corpus Derived Clinical Concepts
Friedlin, Jeff; Overhage, Marc
2011-01-01
We performed an evaluation of the Unified Medical Language System (UMLS) in representing concepts derived from medical narrative documents from three domains: chest x-ray reports, discharge summaries and admission notes. We detected concepts in these documents by identifying noun phrases (NPs) and N-grams, including unigrams (single words), bigrams (word pairs) and trigrams (word triples). After removing NPs and N-grams that did not represent discrete clinical concepts, we processed the remaining with the UMLS MetaMap program. We manually reviewed the results of MetaMap processing to determine whether MetaMap found full, partial or no representation of the concept. For full representations, we determined whether post-coordination was required. Our results showed that a large portion of concepts found in clinical narrative documents are either unrepresented or poorly represented in the current version of the UMLS Metathesaurus and that post-coordination was often required in order to fully represent a concept. PMID:22195097
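For illustration, the unigram/bigram/trigram extraction step the authors describe reduces, under a simple whitespace-tokenization assumption, to sliding windows over the token list; the clinical phrase below is invented and the MetaMap lookup itself is not shown.

```python
# Illustrative n-gram extraction (unigrams, bigrams, trigrams) from a clinical
# phrase; the UMLS/MetaMap processing performed in the study is not shown.
def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

text = "no acute cardiopulmonary disease"
tokens = text.lower().split()
for n in (1, 2, 3):
    print(n, ngrams(tokens, n))
```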
75 FR 27199 - Promoting Diversification of Ownership in the Broadcasting Services
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-14
... business. This document corrects the Report and Order by substituting the word ``ethnicity'' for ``gender... the first column, paragraph 11, the Commission inadvertently used the word ``gender'' instead of...
Research on aviation unsafe incidents classification with improved TF-IDF algorithm
NASA Astrophysics Data System (ADS)
Wang, Yanhua; Zhang, Zhiyuan; Huo, Weigang
2016-05-01
The text content of Aviation Safety Confidential Reports contains a large amount of valuable information. The term frequency-inverse document frequency (TF-IDF) algorithm is commonly used in text analysis, but it does not take into account the sequential relationships between the words in a text or their role in semantic expression. Working from the seven category labels for civil aviation unsafe incidents, and aiming to address these shortcomings, this paper improved the TF-IDF algorithm using a co-occurrence network and established feature-word extraction and word-sequence relations for classified incidents. An aviation-domain lexicon was used to improve classification accuracy. A feature-word network model was designed for multi-document classification of unsafe incidents and applied in the experiments. Finally, the classification accuracy of the improved algorithm was verified experimentally.
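For reference, a plain TF-IDF weighting (the baseline the paper improves on) can be sketched as below; the co-occurrence-network weighting and aviation-domain lexicon proposed in the paper are not reproduced, and the toy reports are invented.

```python
# Plain TF-IDF sketch (the baseline the paper improves on); the co-occurrence-
# network weighting proposed in the paper is not reproduced here.
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document."""
    n_docs = len(docs)
    df = Counter(term for doc in docs for term in set(doc))   # document frequency
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n_docs / df[t])
                        for t, c in tf.items()})
    return weights

reports = [["runway", "incursion", "at", "night"],
           ["bird", "strike", "on", "approach"],
           ["runway", "excursion", "after", "landing"]]
print(tf_idf(reports)[0])
```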
Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.
Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu
2009-07-01
The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports even the more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for two purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classification by determining whether a signal contains a given factor. Since it is assumed that every word may contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are in good agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.
Yu, Zhiguo; Nguyen, Thang; Dhombres, Ferdinand; Johnson, Todd; Bodenreider, Olivier
2018-01-01
Extracting and understanding information, themes and relationships from large collections of documents is an important task for biomedical researchers. Latent Dirichlet Allocation is an unsupervised topic modeling technique using the bag-of-words assumption that has been applied extensively to unveil hidden thematic information within large sets of documents. In this paper, we added MeSH descriptors to the bag-of-words assumption to generate ‘hybrid topics’, which are mixed vectors of words and descriptors. We evaluated this approach on the quality and interpretability of topics in both a general corpus and a specialized corpus. Our results demonstrated that the coherence of ‘hybrid topics’ is higher than that of regular bag-of-words topics in the specialized corpus. We also found that the proportion of topics that are not associated with MeSH descriptors is higher in the specialized corpus than in the general corpus. PMID:29295179
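A minimal sketch of the 'hybrid topics' idea, assuming gensim is available: MeSH descriptors are simply appended to each document's bag of words (here as prefixed tokens) before fitting LDA. The corpus, descriptor names, and model parameters below are invented for illustration and are not from the paper.

```python
# Illustrative "hybrid topics": mix MeSH descriptors into the bag of words
# before LDA. Uses gensim; corpus and descriptors below are made up.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

abstracts = [["insulin", "glucose", "therapy"],
             ["tumor", "chemotherapy", "response"],
             ["glucose", "metabolism", "insulin"]]
mesh = [["MeSH:Diabetes_Mellitus"], ["MeSH:Neoplasms"], ["MeSH:Diabetes_Mellitus"]]

hybrid_docs = [a + m for a, m in zip(abstracts, mesh)]   # words + descriptors
dictionary = Dictionary(hybrid_docs)
corpus = [dictionary.doc2bow(doc) for doc in hybrid_docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10,
               random_state=0)
for topic in lda.print_topics():
    print(topic)
```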
McKenna, D; Kadidlo, D; Sumstad, D; McCullough, J
2003-01-01
Errors and accidents, or deviations from standard operating procedures, other policy, or regulations must be documented and reviewed, with corrective actions taken to assure quality performance in a cellular therapy laboratory. Though expectations and guidance for deviation management exist, a description of the framework for the development of such a program is lacking in the literature. Here we describe our deviation management program, which uses a Microsoft Access database and Microsoft Excel to analyze deviations and notable events, facilitating quality assurance (QA) functions and ongoing process improvement. Data is stored in a Microsoft Access database with an assignment to one of six deviation type categories. Deviation events are evaluated for potential impact on patient and product, and impact scores for each are determined using a 0-4 grading scale. An immediate investigation occurs, and corrective actions are taken to prevent future similar events from taking place. Additionally, deviation data is collectively analyzed on a quarterly basis using Microsoft Excel, to identify recurring events or developing trends. Between January 1, 2001 and December 31, 2001 over 2500 products were processed at our laboratory. During this time period, 335 deviations and notable events occurred, affecting 385 products and/or patients. Deviations within the 'technical error' category were most common (37%). Thirteen percent of deviations had a patient and/or a product impact score ≥2, a score indicating, at a minimum, potentially affected patient outcome or moderate effect upon product quality. Real-time analysis and quarterly review of deviations using our deviation management program allows for identification and correction of deviations. Monitoring of deviation trends allows for process improvement and overall successful functioning of the QA program in the cell therapy laboratory. Our deviation management program could serve as a model for other laboratories in need of such a program.
ERIC Educational Resources Information Center
Morocco, Catherine Cobb; And Others
The 2-year study investigated the use of word processing technology with 36 learning disabled (LD) intermediate grade children and 9 remedial teachers in five Massachusetts school districts. During the first year study staff documented how word processing was being used. In the second year, word processing activities hypothesized to be the most…
A metric to search for relevant words
NASA Astrophysics Data System (ADS)
Zhou, Hongding; Slater, Gary W.
2003-11-01
We propose a new metric to evaluate and rank the relevance of words in a text. The method uses the density fluctuations of a word to compute an index that measures its degree of clustering. Highly significant words tend to form clusters, while common words are essentially uniformly spread in a text. If a word is not rare, the metric is stable when we move any individual occurrence of this word in the text. Furthermore, we prove that the metric always increases when words are moved to form larger clusters, or when several independent documents are merged. Using the Holy Bible as an example, we show that our approach reduces the significance of common words when compared to a recently proposed statistical metric.
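As a generic stand-in for position-based clustering measures of this kind (not the exact metric defined in the paper), the sketch below scores a word by the normalized standard deviation of the gaps between its successive occurrences; evenly spaced words score near 0, randomly placed words near 1, and strongly clustered words higher.

```python
# Illustrative word-clustering score: sigma/mean of inter-occurrence gaps.
# A generic stand-in, not the exact metric defined in the paper.
import statistics

def clustering_score(tokens, word):
    positions = [i for i, t in enumerate(tokens) if t == word]
    if len(positions) < 3:
        return None                      # too rare to measure fluctuations
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

text = ("the lord said to moses the lord is great the people went and "
        "the lord spoke again and the people listened").split()
for w in ("the", "lord", "people"):
    print(w, clustering_score(text, w))
```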
What's New with MS Office Suites
ERIC Educational Resources Information Center
Goldsborough, Reid
2012-01-01
If one buys a new PC, laptop, or netbook computer today, it probably comes preloaded with Microsoft Office 2010 Starter Edition. This is a significantly limited, advertising-laden version of Microsoft's suite of productivity programs, Microsoft Office. This continues the trend of PC makers providing ever more crippled versions of Microsoft's…
Utilizing Microsoft Mathematics in Teaching and Learning Calculus
ERIC Educational Resources Information Center
Oktaviyanthi, Rina; Supriani, Yani
2015-01-01
The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…
Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus
ERIC Educational Resources Information Center
Oktaviyanthi, Rina; Supriani, Yani
2015-01-01
The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…
ERIC Educational Resources Information Center
Hamada, Megumi; Koda, Keiko
2011-01-01
Although the role of the phonological loop in word-retention is well documented, research in Chinese character retention suggests the involvement of non-phonological encoding. This study investigated whether the extent to which the phonological loop contributes to learning and remembering visually introduced words varies between college-level…
The Effect of Sonority on Word Segmentation: Evidence for the Use of a Phonological Universal
ERIC Educational Resources Information Center
Ettlinger, Marc; Finn, Amy S.; Hudson Kam, Carla L.
2012-01-01
It has been well documented how language-specific cues may be used for word segmentation. Here, we investigate what role a language-independent phonological universal, the sonority sequencing principle (SSP), may also play. Participants were presented with an unsegmented speech stream with non-English word onsets that juxtaposed adherence to the…
Readability Levels of Dental Patient Education Brochures.
Boles, Catherine D; Liu, Ying; November-Rider, Debra
2016-02-01
The objective of this study was to evaluate dental patient education brochures produced since 2000 to determine whether there has been any change in Flesch-Kincaid grade-level readability. A convenience sample of 36 brochures was obtained for analysis of the readability of the patient education material on multiple dental topics. Readability was measured using the Flesch-Kincaid Grade Level through Microsoft Word. Pearson's correlation was used to describe the relationships among the factors of interest. Backward model selection in a multiple linear regression model was used to investigate the relationship between the Flesch-Kincaid Grade Level and a set of predictors included in this study. The convenience sample (n=36) of dental education brochures produced from 2000 to 2014 showed a mean Flesch-Kincaid reading grade level of 9.15. Weak to moderate correlations existed between word count and grade level (r=0.40) and character count and grade level (r=0.46); strong correlations were found between grade level and average words per sentence (r=0.70), average characters per word (r=0.85) and Flesch Reading Ease (r=-0.98). Only 1 brochure out of the sample met the recommended sixth grade reading level (Flesch-Kincaid Grade Level 5.7). Overall, the Flesch-Kincaid Grade Level of all brochures was significantly higher than the recommended sixth grade reading level (p<0.0001). The findings from this study demonstrate that there has generally been an improvement in the Flesch-Kincaid grade-level readability of the brochures. However, the majority of the brochures analyzed still test above the recommended sixth grade reading level. Copyright © 2016 The American Dental Hygienists’ Association.
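For reference, the standard Flesch-Kincaid Grade Level and Flesch Reading Ease formulas that the study (and Microsoft Word) rely on are easy to compute once words, sentences, and syllables are counted; the syllable counter below is a crude vowel-group heuristic used only to keep the sketch self-contained, so its scores will differ slightly from Word's.

```python
# Flesch-Kincaid Grade Level and Flesch Reading Ease from the standard formulas.
# The syllable count here is a rough vowel-group heuristic, not Word's exact one.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences                 # words per sentence
    spw = syllables / len(words)                 # syllables per word
    grade = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid Grade Level
    ease = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch Reading Ease
    return round(grade, 1), round(ease, 1)

print(readability("Brush your teeth twice a day. Floss once a day."))
```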
ERIC Educational Resources Information Center
Lee, Jesse
2013-01-01
The goal of this study was to find and trace word order patterns in Possessive Noun Phrases ("PNP's") in formulaic language within notarial documents dating from the tenth through the thirteenth centuries, originating from the Monastery of Sahagun, Leon, Spain. The overall results show clear trends, which reveal a diachronic process that…
2015-01-01
class within Microsoft Visual Studio. It has been tested on and is compatible with Microsoft Vista, 7, and 8 and Visual Studio Express 2008...the ScreenRecorder utility assumes a basic understanding of compiling and running C++ code within Microsoft Visual Studio. This report does not...of Microsoft Visual Studio, the ScreenRecorder utility was developed as a C++ class that can be compiled as a library (static or dynamic) to be
2001-09-01
replication) -- all from Visual Basic and VBA . In fact, we found that the SQL Server engine actually had a plethora of options, most formidable of...2002, the new SQL Server 2000 database engine, and Microsoft Visual Basic.NET. This thesis describes our use of the Spiral Development Model to...versions of Microsoft products? Specifically, the pending release of Microsoft Office 2002, the new SQL Server 2000 database engine, and Microsoft
Determining Fuzzy Membership for Sentiment Classification: A Three-Layer Sentiment Propagation Model
Zhao, Chuanjun; Wang, Suge; Li, Deyu
2016-01-01
Enormous quantities of review documents exist in forums, blogs, twitter accounts, and shopping web sites. Analysis of the sentiment information hidden in these review documents is very useful for consumers and manufacturers. The sentiment orientation and sentiment intensity of a review can be described in more detail by using a sentiment score than by using bipolar sentiment polarity. Existing methods for calculating review sentiment scores frequently use a sentiment lexicon or the locations of features in a sentence, a paragraph, and a document. In order to achieve more accurate sentiment scores of review documents, a three-layer sentiment propagation model (TLSPM) is proposed that uses three kinds of interrelations, those among documents, topics, and words. First, we use nine relationship pairwise matrices between documents, topics, and words. In TLSPM, we suppose that sentiment neighbors tend to have the same sentiment polarity and similar sentiment intensity in the sentiment propagation network. Then, we implement the sentiment propagation processes among the documents, topics, and words in turn. Finally, we can obtain the steady sentiment scores of documents by a continuous iteration process. Intuition might suggest that documents with strong sentiment intensity make larger contributions to classification than those with weak sentiment intensity. Therefore, we use the fuzzy membership of documents obtained by TLSPM as the weight of the text to train a fuzzy support vector machine model (FSVM). As compared with a support vector machine (SVM) and four other fuzzy membership determination methods, the results show that FSVM trained with TLSPM can enhance the effectiveness of sentiment classification. In addition, FSVM trained with TLSPM can reduce the mean square error (MSE) on seven sentiment rating prediction data sets. PMID:27846225
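As a rough approximation of the final FSVM step (not the paper's implementation), per-document fuzzy membership values from TLSPM can be passed as sample weights when fitting an ordinary SVM; the sketch below uses scikit-learn's sample_weight for this, with invented features, labels, and membership values.

```python
# Approximating the FSVM step: per-document fuzzy membership used as sample
# weights for an SVM. A stand-in for the paper's model; the data are made up.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7], [0.5, 0.5]])
y = np.array([1, 1, 0, 0, 1])                            # sentiment labels
membership = np.array([0.95, 0.80, 0.90, 0.85, 0.30])    # fuzzy membership (TLSPM output)

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=membership)    # low-membership documents count less
print(clf.predict([[0.7, 0.2], [0.2, 0.8]]))
```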
Microsoft in Southeast Europe: A Conversation with Goran Radman
ERIC Educational Resources Information Center
Pendergast, William; Frayne, Colette; Kelley, Patricia
2009-01-01
Goran Radman (GR) joined Microsoft in 1996 and served until Fall 2008 as Microsoft Chairman, Southeast Europe (SEE) and Chairman, East and Central Europe (ECEE). Based in Croatia, where he enjoys sailing the Adriatic coast and islands, he spoke with the authors during 2008 and 2009 about his experience launching Microsoft's commercial presence in…
Microsoft's Tom Corddry on Multimedia, the Information Superhighway and the Future of Online.
ERIC Educational Resources Information Center
Herther, Nancy K.
1994-01-01
Tom Corddry, Microsoft Corporation's Creative Director for the Consumer Division, is interviewed about the Microsoft Home line of products and the development of related CD-ROM and multimedia products. Reasons for Microsoft's entry into the content market and its challenges, the market's future, and the company's interest in developing online…
Investigative change detection: identifying new topics using lexicon-based search
NASA Astrophysics Data System (ADS)
Hintz, Kenneth J.
2002-08-01
In law enforcement there is much textual data which needs to be searched in order to detect new threats. A new methodology which can be applied to this need is the automatic searching of the contents of documents from known sources to construct a lexicon of words used by each source. When analyzing future documents, the occurrence of words which have not been lexiconized is indicative of the introduction of a new topic into the source's lexicon, and such words should be examined in context by an analyst. An analogous system has been built and used to detect Fads and Categories on web sites. Fad refers to the first appearance of a word not in the lexicon; Category refers to the repeated appearance of a Fad word and the exceeding of some frequency or spatial occurrence metric, indicating a permanence to the Category.
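A toy sketch of the lexicon-based change detection described above, under the assumption that a 'Fad' is the first out-of-lexicon appearance of a word and a 'Category' is a Fad that recurs past a frequency threshold; the threshold, lexicon, and documents are illustrative, not from the paper.

```python
# Toy lexicon-based change detection: flag out-of-lexicon words (Fads) and
# promote repeated Fads to Categories once they pass a frequency threshold.
from collections import Counter

CATEGORY_THRESHOLD = 3     # illustrative choice, not from the paper

lexicon = set("the suspect was seen near the bank on friday".split())
fad_counts = Counter()

def analyze(document):
    fads, categories = [], []
    for word in document.lower().split():
        if word in lexicon:
            continue
        fad_counts[word] += 1
        if fad_counts[word] == 1:
            fads.append(word)                        # first appearance: Fad
        if fad_counts[word] == CATEGORY_THRESHOLD:
            categories.append(word)                  # now a Category
    return fads, categories

for doc in ["the suspect was seen near the drone shop",
            "another drone sighting was reported on friday",
            "a third drone report was filed"]:
    print(analyze(doc))
```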
ERIC Educational Resources Information Center
Long, Sandra; And Others
Part of a curriculum series for academically gifted elementary students in the area of reading, the document presents objectives and activities for language arts instruction. There are three major objectives: (1) recognizing persuasive use of words, vague and imprecise words, multiple meanings conveyed by a single word, and propaganda techniques;…
ERIC Educational Resources Information Center
Ivy, Sarah E.; Guerra, Jennifer A.; Hatton, Deborah D.
2017-01-01
Introduction: Constant time delay is an evidence-based practice to teach sight word recognition to students with a variety of disabilities. To date, two studies have documented its effectiveness for teaching braille. Methods: Using a multiple-baseline design, we evaluated the effectiveness of constant time delay to teach highly motivating words to…
Min-cut segmentation of cursive handwriting in tabular documents
NASA Astrophysics Data System (ADS)
Davis, Brian L.; Barrett, William A.; Swingle, Scott D.
2015-01-01
Handwritten tabular documents, such as census, birth, death and marriage records, contain a wealth of information vital to genealogical and related research. Much work has been done in segmenting freeform handwriting, however, segmentation of cursive handwriting in tabular documents is still an unsolved problem. Tabular documents present unique segmentation challenges caused by handwriting overlapping cell-boundaries and other words, both horizontally and vertically, as "ascenders" and "descenders" overlap into adjacent cells. This paper presents a method for segmenting handwriting in tabular documents using a min-cut/max-flow algorithm on a graph formed from a distance map and connected components of handwriting. Specifically, we focus on line, word and first letter segmentation. Additionally, we include the angles of strokes of the handwriting as a third dimension to our graph to enable the resulting segments to share pixels of overlapping letters. Word segmentation accuracy is 89.5% evaluating lines of the data set used in the ICDAR2013 Handwriting Segmentation Contest. Accuracy is 92.6% for a specific application of segmenting first and last names from noisy census records. Accuracy for segmenting lines of names from noisy census records is 80.7%. The 3D graph cutting shows promise in segmenting overlapping letters, although highly convoluted or overlapping handwriting remains an ongoing challenge.
OLIVER: an online library of images for veterinary education and research.
McGreevy, Paul; Shaw, Tim; Burn, Daniel; Miller, Nick
2007-01-01
As part of a strategic move by the University of Sydney toward increased flexibility in learning, the Faculty of Veterinary Science undertook a number of developments involving Web-based teaching and assessment. OLIVER underpins them by providing a rich, durable repository for learning objects. To integrate Web-based learning, case studies, and didactic presentations for veterinary and animal science students, we established an online library of images and other learning objects for use by academics in the Faculties of Veterinary Science and Agriculture. The objectives of OLIVER were to maximize the use of the faculty's teaching resources by providing a stable archiving facility for graphic images and other multimedia learning objects that allows flexible and precise searching, integrating indexing standards, thesauri, pull-down lists of preferred terms, and linking of objects within cases. OLIVER offers a portable and expandable Web-based shell that facilitates ongoing storage of learning objects in a range of media. Learning objects can be downloaded in common, standardized formats so that they can be easily imported for use in a range of applications, including Microsoft PowerPoint, WebCT, and Microsoft Word. OLIVER now contains more than 9,000 images relating to many facets of veterinary science; these are annotated and supported by search engines that allow rapid access to both images and relevant information. The Web site is easily updated and adapted as required.
Space Station Furnace Facility Management Information System (SSFF-MIS) Development
NASA Technical Reports Server (NTRS)
Meade, Robert M.
1996-01-01
This report summarizes the chronology, results, and lessons learned from the development of the SSFF-MIS. This system has been nearly two years in development and has yielded some valuable insights into specialized MIS development. General: In December of 1994, the Camber Corporation and Science Applications International Corporation (SAIC) were contracted to design, develop, and implement a MIS for Marshall Space Flight Center's Space Station Furnace Facility Project. The system was to be accessible from both IBM-compatible PC and Macintosh platforms. The system was required to contain data manually entered into the MIS as well as data imported from other MSFC sources. Electronic interfaces were established for each data source, and retrieval was to be performed at prescribed time intervals. The SOW requirement that predominantly drove the development software selection was the dual-platform (IBM-PC and Macintosh) requirement. The requirement that the system would be maintained by Government personnel influenced the selection of commercial off-the-shelf software because of its inherent stability and readily available documentation and support. Microsoft FoxPro Professional 2.6 for Windows and Macintosh was selected as the development tool. This is a software development tool that has been in use for many years; it is stable and powerful. Microsoft has since released the replacement for this product, Microsoft Visual FoxPro, but at the time of this development, it was only available on the Windows platform. The initial contract included the requirement for capabilities relating to the Work- and Organizational Breakdown Structures, cost (plan and actuals), workforce (plan and actuals), critical path scheduling, trend analysis, procurements and contracts, interface to manufacturing, Safety and Mission Assurance, risk analysis, and technical performance indicators. It also required full documentation of the system and training of users. During the course of the contract, the requirements for the Safety and Mission Assurance interface, risk analysis, and technical performance indicators were deleted. Additional capabilities were added as reflected in the Contract Chronology below. Modification 4 added the requirement for Support Contractor manpower data, the ability to manually input data not imported from normal sources, a general 'health' indicator screen, and remote usage. Modification 6 included the ability to change the level of planning of Civil Service Manpower at any time and the ability to manually enter Op Codes in the manufacturing data where such codes were not provided by the EMPACS database. Modification 9 included a number of changes to report contents and formats. Modification 11 required the preparation of a detailed System Design Document.
CaveMan Enterprise version 1.0 Software Validation and Verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, David
The U.S. Department of Energy Strategic Petroleum Reserve stores crude oil in caverns solution-mined in salt domes along the Gulf Coast of Louisiana and Texas. The CaveMan software program has been used since the late 1990s as one tool to analyze pressure measurements monitored at each cavern. The purpose of this monitoring is to catch potential cavern integrity issues as soon as possible. The CaveMan software was written in Microsoft Visual Basic, and embedded in a Microsoft Excel workbook; this method of running the CaveMan software is no longer sustainable. As such, a new version called CaveMan Enterprise has been developed. CaveMan Enterprise version 1.0 does not have any changes to the CaveMan numerical models. CaveMan Enterprise represents, instead, a change from desktop-managed workbooks to an enterprise framework, moving data management into coordinated databases and porting the numerical modeling codes into the Python programming language. This document provides a report of the code validation and verification testing.
Peakall, Rod; Smouse, Peter E
2012-10-01
GenAlEx: Genetic Analysis in Excel is a cross-platform package for population genetic analyses that runs within Microsoft Excel. GenAlEx offers analysis of diploid codominant, haploid and binary genetic loci and DNA sequences. Both frequency-based (F-statistics, heterozygosity, HWE, population assignment, relatedness) and distance-based (AMOVA, PCoA, Mantel tests, multivariate spatial autocorrelation) analyses are provided. New features include calculation of new estimators of population structure: G'(ST), G''(ST), Jost's D(est) and F'(ST) through AMOVA, Shannon Information analysis, linkage disequilibrium analysis for biallelic data and novel heterogeneity tests for spatial autocorrelation analysis. Export to more than 30 other data formats is provided. Teaching tutorials and expanded step-by-step output options are included. The comprehensive guide has been fully revised. GenAlEx is written in VBA and provided as a Microsoft Excel Add-in (compatible with Excel 2003, 2007, 2010 on PC; Excel 2004, 2011 on Macintosh). GenAlEx, and supporting documentation and tutorials are freely available at: http://biology.anu.edu.au/GenAlEx. rod.peakall@anu.edu.au.
Rapid automatic keyword extraction for information retrieval and analysis
Rose, Stuart J [Richland, WA; Cowley,; E, Wendy [Richland, WA; Crow, Vernon L [Richland, WA; Cramer, Nicholas O [Richland, WA
2012-03-06
Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
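A compact sketch of the scoring scheme the abstract describes, assuming the commonly published degree/frequency formulation of RAKE; the stop-word list and input text below are illustrative only.

```python
# Minimal RAKE-style keyword extraction: split on stop words/punctuation,
# score words by degree/frequency, score phrases by summing word scores.
import re
from collections import defaultdict

STOP_WORDS = {"a", "an", "the", "of", "for", "and", "in", "is", "to", "on"}

def candidate_phrases(text):
    tokens = re.findall(r"[a-z']+|[.,;:!?]", text.lower())
    phrases, current = [], []
    for tok in tokens:
        if tok in STOP_WORDS or re.match(r"[.,;:!?]", tok):
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        phrases.append(current)
    return phrases

def rake(text):
    phrases = candidate_phrases(text)
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for word in phrase:
            freq[word] += 1
            degree[word] += len(phrase) - 1       # co-occurrence within phrase
    word_score = {w: (degree[w] + freq[w]) / freq[w] for w in freq}
    return sorted(((" ".join(p), sum(word_score[w] for w in p)) for p in phrases),
                  key=lambda kv: -kv[1])

print(rake("Rapid automatic keyword extraction identifies keywords "
           "in individual documents for information retrieval."))
```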
Van Wicklin, Sharon A
2016-05-01
Variations in documenting surgical wound classification Key words: surgical wound classification, clean, clean-contaminated, contaminated, dirty. Wearing long-sleeved jackets while preparing and packaging items for sterilization Key words: long-sleeved jackets, organic material, sterile processing. Endoscopic transmission of prions Key words: prions, high-risk tissue, low-risk tissue, Creutzfeldt-Jakob disease (CJD), variant Creutzfeldt-Jakob disease (vCJD). Wearing gloves when handling flexible endoscopes Key words: gloves, low-protein, powder-free, natural rubber latex gloves, latex-free gloves. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Word add-in for ontology recognition: semantic enrichment of scientific literature.
Fink, J Lynn; Fernicola, Pablo; Chandran, Rahul; Parastatidis, Savas; Wade, Alex; Naim, Oscar; Quinn, Gregory B; Bourne, Philip E
2010-02-24
In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata.
The computerized OMAHA system in microsoft office excel.
Lai, Xiaobin; Wong, Frances K Y; Zhang, Peiqiang; Leung, Carenx W Y; Lee, Lai H; Wong, Jessica S Y; Lo, Yim F; Ching, Shirley S Y
2014-01-01
The OMAHA System was adopted as the documentation system in an interventional study. To systematically record client care and facilitate data analysis, two Office Excel files were developed. The first Excel file (File A) was designed to record problems, care procedure, and outcomes for individual clients according to the OMAHA System. It was used by the intervention nurses in the study. The second Excel file (File B) was the summary of all clients that had been automatically extracted from File A. Data in File B can be analyzed directly in Excel or imported in PASW for further analysis. Both files have four parts to record basic information and the three parts of the OMAHA System. The computerized OMAHA System simplified the documentation procedure and facilitated the management and analysis of data.
A Better Way to Store Energy for Less Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darmon, Jonathan M.; Weiss, Charles J.; Hulley, Elliott B.
Representing the Center for Molecular Electrocatalysis (CME), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE energy. The mission of CME is to understand, design and develop molecular electrocatalysts for solar fuel production and use.
Multimedia proceedings of the 10th Office Information Technology Conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, B.
1993-09-10
The CD contains the handouts for all the speakers, demo software from Apple, Adobe, Microsoft, and Zylabs, and video movies of the keynote speakers. Adobe Acrobat is used to provide full-fidelity retrieval of the speakers' slides, and Apple's Quicktime for Macintosh and Windows is used for video playback. ZyIndex is included for Windows users to provide a full-text search engine for selected documents. There are separately labelled installation and operating instructions for Macintosh and Windows users and some general materials common to both sets of users.
Enhancement of Text Representations Using Related Document Titles.
ERIC Educational Resources Information Center
Salton, G.; Zhang, Y.
1986-01-01
Briefly reviews various methodologies for constructing enhanced document representations, discusses their general lack of usefulness, and describes a method of document indexing which uses title words taken from bibliographically related items. Evaluation of this process indicates that it is not sufficiently reliable to warrant incorporation into…
ERIC Educational Resources Information Center
Hicks, Emily D.
2004-01-01
The cultural activities described, including the performance of music and spoken word, are documented. The cultural activities in the San Diego-Tijuana region emerged from rhizomatic, transnational points of contact.
Wang OIS glossary package for reformatting documents telecommunicated to the OIS system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markow, S.R.
1983-12-09
Documents that are composed on a computer and then transmitted by telecommunications into a Wang Office Information System (OIS) word processing system need to be reformatted and cleaned up before they can be used properly as word processing documents suitable for further revisions or additions. This report describes a group of glossary entries created for the Wang OIS which simplifies the job of cleaning up telecommunicated documents. This glossary is a semi-automated process designed to eliminate most of the tedious work needed to be performed in removing extra spaces and returns, adjusting formats, moving material, repagination, using tabs or indents, and similar problems. The report briefly discusses the problems, describes the glossary approach to solving them, and gives instructions for actually using the glossary entries.
Development and validation of a brief, descriptive Danish pain questionnaire (BDDPQ).
Perkins, F M; Werner, M U; Persson, F; Holte, K; Jensen, T S; Kehlet, H
2004-04-01
A new pain questionnaire should be simple, be documented to have discriminative function, and be related to previously used questionnaires. Word meaning was validated by using bilingual Danish medical students and asking them to translate words taken from the Danish version of the McGill pain questionnaire into English. Evaluative word value was estimated using a visual analog scale (VAS). Discriminative function was assessed by having patients with one of six painful conditions (postherpetic neuralgia, phantom limb pain, rheumatoid arthritis, ankle fracture, appendicitis, or labor pain) complete the questionnaire. We were not able to find Danish words that were reliably back-translated to the English words 'splitting' or 'gnawing'. A simple three-word set of evaluative terms had good separation when rated on a VAS scale ('let' 17.5+/-6.5 mm; 'moderat' 42.7+/-8.6 mm; and 'staerk' 74.9+/-9.7 mm). The questionnaire was able to discriminate among the six painful conditions with 77% accuracy by just using the descriptive words. The accuracy of the questionnaire increased to 96% with the addition of evaluative terms (for pain at rest and with activity), chronicity (acute vs. chronic), and location of the pain. A Danish pain questionnaire that subjects and patients can self-administer has been developed and validated relative to the words used in the English McGill Pain questionnaire. The discriminative ability of the questionnaire among some common painful conditions has been tested and documented. The questionnaire may be of use in patient care and research.
ERIC Educational Resources Information Center
Cor, Ken; Alves, Cecilia; Gierl, Mark J.
2008-01-01
This review describes and evaluates a software add-in created by Frontline Systems, Inc., that can be used with Microsoft Excel 2007 to solve large, complex test assembly problems. The combination of Microsoft Excel 2007 with the Frontline Systems Premium Solver Platform is significant because Microsoft Excel is the most commonly used spreadsheet…
Principal semantic components of language and the measurement of meaning.
Samsonovich, Alexei V; Samsonovic, Alexei V; Ascoli, Giorgio A
2010-06-11
Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that represents simultaneously synonymy and antonymy. Emergent semantics of the map principal components are clearly identifiable: the first three correspond to the meanings of "good/bad" (valence), "calm/excited" (arousal), and "open/closed" (freedom), respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map and to predict connotation from their coordinates. The map geometric characteristics include a limited number ( approximately 4) of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal) components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet), among Western languages (English, French, German, and Spanish), and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step toward a universal metric system for semantics of human experiences, which is necessary for developing a rigorous science of the mind.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fink, J.K.
1972-07-01
The HELP documents provide SPEAKEASY users with concise definitions of most of the words available in the current processors. In this report, the documents are given in a variety of formats to enable one to find specific information quickly. The bulk of this report consists of computer read-out of the HELP library via SPEAKEASY.
Scribe: A Document Specification Language and Its Compiler
1980-10-01
34" prints today’s date as "Samedi, le 13 Decembre, 1980". The template "el 8 de Marzo de 1952" prints today’s date as "el 13 de Diciembre de 1980". The...Letter spacing and kerning 20 3.12 Ligatures 24 3.1.3 Diacritical Marks 24 3.2 Lineation and Word Placement 27 3.2.1 Word Spacing and Justification 27...letterhead. 67 Figure 24 : Document format definition for CMU thesis. 68 Figure 25: Twenty basic rules for indexers, from Collison [11]. 74 Figure 26
2003-04-01
Snippet matches from the report: identified characters of a word are used as a probe to retrieve a word's identity (its spelling and phonology) from memory; the document matrix has been reduced by the SVD; and a section titled "Deconstructing the model's output" asks why semantic relationships between words emerge from the model.
Fracture Testing of Large-Scale Thin-Sheet Aluminum Alloy (MS Word file)
DOT National Transportation Integrated Search
1996-02-01
Word Document; A series of fracture tests on large-scale, precracked, aluminum alloy panels were carried out to examine and characterize the process by which cracks propagate and link up in this material. Extended grips and test fixtures were special...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-02
... number of the interim rule published on July 13, 2012, in the words of issuance. This document corrects... July 13, 2012, as 41427 instead of 41247 in the words of issuance. The page number is correctly listed...
Reynolds, Kellin; Barnhill, Danny; Sias, Jamie; Young, Amy; Polite, Florencia Greer
2014-12-01
A portable electronic method of providing instructional feedback and recording an evaluation of resident competency immediately following surgical procedures has not previously been documented in obstetrics and gynecology. This report presents a unique electronic format that documents resident competency and encourages verbal communication between faculty and residents immediately following operative procedures. The Microsoft Tag system and SurveyMonkey platform were linked by a 2-D QR code using Microsoft QR code generator. Each resident was given a unique code (TAG) embedded onto an ID card. An evaluation form was attached to each resident's file in SurveyMonkey. Postoperatively, supervising faculty scanned the resident's TAG with a smartphone and completed the brief evaluation using the phone's screen. The evaluation was reviewed with the resident and automatically submitted to the resident's educational file. The evaluation system was quickly accepted by residents and faculty. Of 43 residents and faculty in the study, 38 (88%) responded to a survey 8 weeks after institution of the electronic evaluation system. Thirty (79%) of the 38 indicated it was superior to the previously used handwritten format. The electronic system demonstrated improved utilization compared with paper evaluations, with a mean of 23 electronic evaluations submitted per resident during a 6-month period versus 14 paper assessments per resident during an earlier period of 6 months. This streamlined portable electronic evaluation is an effective tool for direct, formative feedback for residents, and it creates a longitudinal record of resident progress. Satisfaction with, and use of, this evaluation system was high.
Reynolds, Kellin; Barnhill, Danny; Sias, Jamie; Young, Amy; Polite, Florencia Greer
2014-01-01
Background A portable electronic method of providing instructional feedback and recording an evaluation of resident competency immediately following surgical procedures has not previously been documented in obstetrics and gynecology. Objective This report presents a unique electronic format that documents resident competency and encourages verbal communication between faculty and residents immediately following operative procedures. Methods The Microsoft Tag system and SurveyMonkey platform were linked by a 2-D QR code using Microsoft QR code generator. Each resident was given a unique code (TAG) embedded onto an ID card. An evaluation form was attached to each resident's file in SurveyMonkey. Postoperatively, supervising faculty scanned the resident's TAG with a smartphone and completed the brief evaluation using the phone's screen. The evaluation was reviewed with the resident and automatically submitted to the resident's educational file. Results The evaluation system was quickly accepted by residents and faculty. Of 43 residents and faculty in the study, 38 (88%) responded to a survey 8 weeks after institution of the electronic evaluation system. Thirty (79%) of the 38 indicated it was superior to the previously used handwritten format. The electronic system demonstrated improved utilization compared with paper evaluations, with a mean of 23 electronic evaluations submitted per resident during a 6-month period versus 14 paper assessments per resident during an earlier period of 6 months. Conclusions This streamlined portable electronic evaluation is an effective tool for direct, formative feedback for residents, and it creates a longitudinal record of resident progress. Satisfaction with, and use of, this evaluation system was high. PMID:26140128
Grid Stiffened Structure Analysis Tool
NASA Technical Reports Server (NTRS)
1999-01-01
The Grid Stiffened Analysis Tool contract is a contract performed by Boeing under NASA purchase order H30249D. The contract calls for a "best effort" study comprising two tasks: (1) Create documentation for a composite grid-stiffened structure analysis tool, in the form of a Microsoft Excel spreadsheet, that was originally developed at Stanford University and later further developed by the Air Force, and (2) Write a program that functions as a NASTRAN pre-processor to generate an FEM code for grid-stiffened structure. In performing this contract, Task 1 was given higher priority because it enables NASA to make efficient use of a unique tool they already have; Task 2 was proposed by Boeing because it also would be beneficial to the analysis of composite grid-stiffened structures, specifically in generating models for preliminary design studies. The contract is now complete; this package includes copies of the user's documentation for Task 1 and a CD ROM & diskette with an electronic copy of the user's documentation and an updated version of the "GRID 99" spreadsheet.
A boy asked his Mom about energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mutolo, Paul F.; Muller, David; O'Dea, James
Representing the Energy Materials Center (EMC), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of EMC is advancing the science of energy conversion and storage by understanding and exploiting fundamental properties of active materials and their interfaces.
The nature of compounds: a psychocentric perspective.
Libben, Gary
2014-01-01
Although compound words often seem to be words that themselves contain words, this paper argues that this is not the case for the vast majority of lexicalized compounds. Rather, it is claimed that as a result of acts of lexical processing, the constituents of compound words develop into new lexical representations. These representations are bound to specific morphological roles and positions (e.g., head, modifier) within a compound word. The development of these positionally bound compound constituents creates a rich network of lexical knowledge that facilitates compound processing and also creates some of the well-documented patterns in the psycholinguistic and neurolinguistic study of compounding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rameau, Jon; Crabtree, George; Greene, Laura
Representing the Center for Emergent Superconductivity (CES), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CES is to discover new high-temperature superconductors and improve the performance of known superconductors by understanding the fundamental physics of superconductivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Chuanqi; Liang, Yan; Sahl, Lars
Representing the Center for Solar Fuels (CSF), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CSF is to provide the basic research to enable a revolution in the collection and conversion of sunlight into storable solar fuels.
Adamson, Lauren B.; Bakeman, Roger; Brandon, Benjamin
2015-01-01
This study documents how parents weave new words into on-going interactions with children who are just beginning to speak. Dyads with typically developing toddlers and with young children with autism spectrum disorder and Down syndrome (n = 56, 23, and 29) were observed using a Communication Play Protocol during which parents could use novel words to refer to novel objects. Parents readily introduced both labels and sound words even when their child did not respond expressively or produce the words. Results highlight both how parents act in ways that may facilitate their child's appreciation of the relation between a new word and its referent and how they subtly adjust their actions to suit their child's level of word learning and specific learning challenges. PMID:25863927
Critical Linguistics: A Starting Point for Oppositional Reading.
ERIC Educational Resources Information Center
Janks, Hilary
This document focuses on specific linguistic features that serve ideological functions in texts written in South Africa from 1985 to 1988. The features examined include: naming; metaphors; old words with new meanings; words becoming tainted; renaming or lexicalization; overlexicalization; strategies for resisting classification; tense and aspect;…
1988-04-01
e.g., definitions, references, pictures) on the selected item in a separate window. For example, in a hypertext document on astronomy, the reader...might arrive at the highlighted word "Copernicus", select the word with the keyboard or mouse, and then be offered a number of related topics from
ERIC Educational Resources Information Center
Bolger, Charlene
A compilation of over 50 elementary school activities focuses on developing students' familiarity with the 50 states. Exercises such as word searches, scrambled word puzzles, shape puzzles, spelling bees, match games, and atlas games introduce students to the capitals, major cities, main characteristics, and location of each state. The document is…
Rashotte, Judy; Coburn, Geraldine; Harrison, Denise; Stevens, Bonnie J; Yamada, Janet; Abbott, Laura K
2013-01-01
Although documentation of children's pain by health care professionals is frequently undertaken, few studies have explored the nature of the language used to describe pain in the medical records of hospitalized children. To describe health care professionals' use of written language related to the quality and quantity of pain experienced by hospitalized children. Free-text pain narratives documented during a 24 h period were collected from the medical records of 3822 children (0 to 18 years of age) hospitalized on 32 inpatient units in eight Canadian pediatric hospitals. A qualitative descriptive exploration using a content analysis approach was used. Pain narratives were documented a total of 5390 times in 1518 of the 3822 children's medical records (40%). Overall, word choices represented objective and subjective descriptors. Two major categories were identified, with their respective subcategories of word indicators and associated cues: indicators of pain, including behavioural (e.g., vocal, motor, facial and activities cues), affective and physiological cues, and children's descriptors; and word qualifiers, including intensity, comparator and temporal qualifiers. The richness and complexity of vocabulary used by clinicians to document children's pain lend support to the concept that the word 'pain' is a label that represents a myriad of different experiences. There is potential to refine pediatric pain assessment measures to be inclusive of other cues used to identify children's pain. The results enhance the discussion concerning the development of standardized nomenclature. Further research is warranted to determine whether there is congruence in interpretation across time, place and individuals.
Fuzzy Document Clustering Approach using WordNet Lexical Categories
NASA Astrophysics Data System (ADS)
Gharib, Tarek F.; Fouad, Mohammed M.; Aref, Mostafa M.
Text mining refers generally to the process of extracting interesting information and knowledge from unstructured text. This area is growing rapidly, mainly because of the strong need to analyse the huge amount of textual data that resides on internal file systems and the Web. Text document clustering provides an effective navigation mechanism to organize this large amount of data by grouping documents into a small number of meaningful classes. In this paper we propose a fuzzy text document clustering approach using WordNet lexical categories and the Fuzzy c-Means algorithm. Experiments are performed to compare the efficiency of the proposed approach with recently reported approaches. Experimental results show that fuzzy clustering achieves strong performance: the Fuzzy c-Means algorithm outperforms classical clustering algorithms such as k-means and bisecting k-means in both clustering quality and running-time efficiency.
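As a rough illustration of the clustering side of this approach, the sketch below implements a plain Fuzzy c-Means loop over toy document vectors. The WordNet lexical-category enrichment described in the abstract is omitted, and the data, parameters, and feature representation are invented for the example; this is not the authors' implementation.

```python
# Minimal Fuzzy c-Means over toy document vectors (assumed TF-IDF-like features).
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, eps=1e-6, seed=0):
    """X: (n_docs, n_features). Returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # each row sums to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))  # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy document-term matrix with two obvious "topics".
X = np.array([[3, 0, 1], [4, 1, 0], [0, 3, 4], [1, 4, 3]], dtype=float)
centers, U = fuzzy_cmeans(X, c=2)
print(np.round(U, 2))   # soft membership of each document in each cluster
```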
Word add-in for ontology recognition: semantic enrichment of scientific literature
2010-01-01
Background In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. Results The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. Conclusions The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata. PMID:20181245
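The add-in's actual XML schema is not described in this record, so the following sketch only illustrates the general idea of wrapping recognized ontology terms in machine-readable markup; the term dictionary, element name, and attribute are invented for the example.

```python
# Toy illustration of embedding semantic annotations as XML around recognized terms.
import re
from xml.sax.saxutils import escape

ONTOLOGY_TERMS = {"apoptosis": "GO:0006915", "kinase": "GO:0016301"}  # illustrative IDs

def annotate(text):
    """Wrap each recognized term in an element carrying its term identifier."""
    text = escape(text)
    pattern = re.compile("|".join(re.escape(t) for t in ONTOLOGY_TERMS),
                         flags=re.IGNORECASE)
    def wrap(match):
        term = match.group(0)
        return f'<term ref="{ONTOLOGY_TERMS[term.lower()]}">{term}</term>'
    return pattern.sub(wrap, text)

print(annotate("The kinase pathway regulates apoptosis."))
# The <term ref="GO:0016301">kinase</term> pathway regulates <term ref="GO:0006915">apoptosis</term>.
```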
CIS3/398: Implementation of a Web-Based Electronic Patient Record for Transplant Recipients
Fritsche, L; Lindemann, G; Schroeter, K; Schlaefer, A; Neumayer, H-H
1999-01-01
Introduction While the "Electronic patient record" (EPR) is a frequently quoted term in many areas of healthcare, only a few working EPR systems are available so far. To justify their use, EPRs must be able to store and display all kinds of medical information in a reliable, secure, time-saving, user-friendly way at an affordable price. Fields with patients who are attended to by a large number of medical specialists over a prolonged period of time are best suited to demonstrate the potential benefits of an EPR. The aim of our project was to investigate the feasibility of an EPR based solely on "off-the-shelf" software and Internet technology in the field of organ transplantation. Methods The EPR system consists of three main elements: data-storage facilities, a Web server and a user interface. Data are stored either in a relational database (Sybase Adaptive 11.5, Sybase Inc., CA) or, in the case of pictures (JPEG) and files in application formats (e.g., Word documents), on a Windows NT 4.0 Server (Microsoft Corp., WA). The entire communication of all data is handled by a Web server (IIS 4.0, Microsoft) with an Active Server Pages extension. The database is accessed by ActiveX Data Objects via the ODBC interface. The only software required on the user's computer is Internet Explorer 4.01 (Microsoft); during the first use of the EPR, the ActiveX HTML Layout Control is automatically added. The user can access the EPR via Local or Wide Area Network or by dial-up connection. If the EPR is accessed from outside the firewall, all communication is encrypted (SSL 3.0, Netscape Comm. Corp., CA). The speed of the EPR system was tested with 50 repeated measurements of the duration of two key functions: 1) display of all lab results for a given day and patient, and 2) automatic composition of a letter containing diagnoses, medication, notes and lab results. For the test a 233 MHz Pentium II processor with a 10 Mbit/s Ethernet connection (ping time below 10 ms) over 2 hubs to the server (400 MHz Pentium II, 256 MB RAM) was used. Results So far the EPR system has been running for eight consecutive months and contains complete records of 673 transplant recipients with an average follow-up of 9.9 (SD: 4.9) years and a total of 1.1 million lab values. Instructing new users to perform basic operations took less than two hours in all cases. The average duration of laboratory access was 0.9 (SD: 0.5) seconds; the automatic composition of a letter took 6.1 (SD: 2.4) seconds. Apart from the database and Windows NT, all other components are available for free. The development of the EPR system required less than two person-years. Conclusion Implementation of an Electronic patient record that meets the requirements of comprehensiveness, reliability, security, speed, user-friendliness and affordability using a combination of "off-the-shelf" software products is feasible, if the current state-of-the-art internet technology is applied.
Authorship Discovery in Blogs Using Bayesian Classification with Corrective Scaling
2008-06-01
W. Fucks' Diagram of n-Syllable Word Frequencies ... Confusion Matrix for All Test Documents of 500...of the books which scholars believed he had. Wilhelm Fucks discriminated between authors using the average number of syllables per word and average...distance between equal-syllabled words [8]. Fucks, too, concluded that a study such as his reveals a "possibility of a quantitative classification
Comparing Medline citations using modified N-grams
Nawab, Rao Muhammad Adeel; Stevenson, Mark; Clough, Paul
2014-01-01
Objective We aim to identify duplicate pairs of Medline citations, particularly when the documents are not identical but contain similar information. Materials and methods Duplicate pairs of citations are identified by comparing word n-grams in pairs of documents. N-grams are modified using two approaches which take account of the fact that the document may have been altered. These are: (1) deletion, an item in the n-gram is removed; and (2) substitution, an item in the n-gram is substituted with a similar term obtained from the Unified Medical Language System Metathesaurus. N-grams are also weighted using a score derived from a language model. Evaluation is carried out using a set of 520 Medline citation pairs, including a set of 260 manually verified duplicate pairs obtained from the Deja Vu database. Results The approach accurately detects duplicate Medline document pairs with an F1 measure score of 0.99. Allowing for word deletions and substitution improves performance. The best results are obtained by combining scores for n-grams of length 1–5 words. Discussion Results show that the detection of duplicate Medline citations can be improved by modifying n-grams and that high performance can also be obtained using only unigrams (F1=0.959), particularly when allowing for substitutions of alternative phrases. PMID:23715801
Comparing Medline citations using modified N-grams.
Nawab, Rao Muhammad Adeel; Stevenson, Mark; Clough, Paul
2014-01-01
We aim to identify duplicate pairs of Medline citations, particularly when the documents are not identical but contain similar information. Duplicate pairs of citations are identified by comparing word n-grams in pairs of documents. N-grams are modified using two approaches which take account of the fact that the document may have been altered. These are: (1) deletion, an item in the n-gram is removed; and (2) substitution, an item in the n-gram is substituted with a similar term obtained from the Unified Medical Language System Metathesaurus. N-grams are also weighted using a score derived from a language model. Evaluation is carried out using a set of 520 Medline citation pairs, including a set of 260 manually verified duplicate pairs obtained from the Deja Vu database. The approach accurately detects duplicate Medline document pairs with an F1 measure score of 0.99. Allowing for word deletions and substitution improves performance. The best results are obtained by combining scores for n-grams of length 1-5 words. Results show that the detection of duplicate Medline citations can be improved by modifying n-grams and that high performance can also be obtained using only unigrams (F1=0.959), particularly when allowing for substitutions of alternative phrases.
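A minimal sketch of the deletion-modified n-gram idea is given below. It computes a Jaccard-style overlap between n-gram sets expanded with single-item deletions; the UMLS-based substitution step and the language-model weighting described above are omitted, and the unweighted similarity function is an assumption for illustration only.

```python
# Illustrative sketch of duplicate detection with deletion-modified n-grams.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def deletion_variants(ngram):
    """All shorter n-grams obtainable by removing exactly one item."""
    return {ngram[:i] + ngram[i + 1:] for i in range(len(ngram))}

def similarity(doc_a, doc_b, n=3):
    a, b = doc_a.lower().split(), doc_b.lower().split()
    grams_a, grams_b = ngrams(a, n), ngrams(b, n)
    # Expand each set with deletion variants to tolerate small edits.
    exp_a = grams_a | {v for g in grams_a for v in deletion_variants(g)}
    exp_b = grams_b | {v for g in grams_b for v in deletion_variants(g)}
    union = exp_a | exp_b
    return len(exp_a & exp_b) / len(union) if union else 0.0

print(similarity("cracks propagate and link up in this material",
                 "cracks propagate and join up in this material"))
```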
Holographic Rovers: Augmented Reality and the Microsoft HoloLens
NASA Technical Reports Server (NTRS)
Toler, Laura
2017-01-01
Augmented Reality is an emerging field in technology, and encompasses Head Mounted Displays, smartphone apps, and even projected images. HMDs include the Meta 2, Magic Leap, Avegant Light Field, and the Microsoft HoloLens, which is evaluated specifically. The Microsoft HoloLens is designed to be used as an AR personal computer, and is being optimized with that goal in mind. Microsoft allied with the Unity3D game engine to create an SDK for interested application developers that can be used in the Unity environment.
Knowledge of medical students of Tehran University of Medical Sciences regarding plagiarism.
Gharedaghi, Mohammad Hadi; Nourijelyani, Keramat; Salehi Sadaghiani, Mohammad; Yousefzadeh-Fard, Yashar; Gharedaghi, Azadeh; Javadian, Pouya; Morteza, Afsaneh; Andrabi, Yasir; Nedjat, Saharnaz
2013-07-13
The core concept of plagiarism is defined as the use of other people's ideas or words without proper acknowledgement. Herein, we used a questionnaire to assess the knowledge of students of Tehran University of Medical Sciences (TUMS) regarding plagiarism and copyright infringement. The questionnaire comprised 8 questions. The first six questions were translations of exercises from a book about academic writing and concerned plagiarism in preparing articles. Questions 7 and 8 (which concerned plagiarism in preparing Microsoft PowerPoint slideshows and copyright infringement, respectively) were developed by the authors of the present study. The validity of the questionnaire was approved by five experts in the field of epidemiology and biostatistics. A pilot study consisting of a test and retest was carried out to assess the reliability of the questionnaire. The sampling method was stratified random sampling, and the questionnaire was handed out to 74 interns of TUMS during July and August 2011. 14.9% of the students correctly answered the first six questions. 44.6% of the students were adequately familiar with proper referencing in Microsoft PowerPoint slideshows. 16.2% of the students understood what constitutes copyright infringement. The number of correctly answered questions by the students was directly proportional to the number of their published articles. Knowledge of students of TUMS regarding plagiarism and copyright infringement is quite poor. Courses with specific focus on plagiarism and copyright infringement might help in this regard.
Ye, Jay J
2015-07-01
Pathologists' daily tasks consist of both the professional interpretation of slides and the secretarial tasks of translating these interpretations into final pathology reports, the latter of which is a time-consuming endeavor for most pathologists. To describe an artificial intelligence that performs secretarial tasks, designated as Secretary-Mimicking Artificial Intelligence (SMILE). The underlying implementation of SMILE is a collection of computer programs that work in concert to "listen to" voice commands and to "watch for" the window changes caused by slide bar code scanning; SMILE responds to these inputs by acting upon PowerPath Client windows (Sunquest Information Systems, Tucson, Arizona) and its Microsoft Word (Microsoft, Redmond, Washington) Add-In window, culminating in the reports being typed and finalized. Secretary-Mimicking Artificial Intelligence also communicates relevant information to the pathologist via the computer speakers and a message box on the screen. Secretary-Mimicking Artificial Intelligence performs many secretarial tasks intelligently and semiautonomously, with rapidity and consistency, thus enabling pathologists to focus on slide interpretation, which results in a marked increase in productivity, decrease in errors, and reduction of stress in daily practice. Secretary-Mimicking Artificial Intelligence undergoes encounter-based learning continually, resulting in a continuous improvement in its knowledge-based intelligence. Artificial intelligence for pathologists is both feasible and powerful. The future widespread use of artificial intelligence in our profession is certainly going to transform how we practice pathology.
Tardif, Twila; Fletcher, Paul; Liang, Weilan; Zhang, Zhixiang; Kaciroti, Niko; Marchman, Virginia A
2008-07-01
Although there has been much debate over the content of children's first words, few large sample studies address this question for children at the very earliest stages of word learning. The authors report data from comparable samples of 265 English-, 336 Putonghua- (Mandarin), and 369 Cantonese-speaking 8- to 16-month-old infants whose caregivers completed MacArthur-Bates Communicative Development Inventories and reported them to produce between 1 and 10 words. Analyses of individual words indicated striking commonalities in the first words that children learn. However, substantive cross-linguistic differences appeared in the relative prevalence of common nouns, people terms, and verbs as well as in the probability that children produced even one of these word types when they had a total of 1-3, 4-6, or 7-10 words in their vocabularies. These data document cross-linguistic differences in the types of words produced even at the earliest stages of vocabulary learning and underscore the importance of parental input and cross-linguistic/cross-cultural variations in children's early word-learning.
Information extraction for enhanced access to disease outbreak reports.
Grishman, Ralph; Huttunen, Silja; Yangarber, Roman
2002-08-01
Document search is generally based on individual terms in the document. However, for collections within limited domains it is possible to provide more powerful access tools. This paper describes a system designed for collections of reports of infectious disease outbreaks. The system, Proteus-BIO, automatically creates a table of outbreaks, with each table entry linked to the document describing that outbreak; this makes it possible to use database operations such as selection and sorting to find relevant documents. Proteus-BIO consists of a Web crawler which gathers relevant documents; an information extraction engine which converts the individual outbreak events to a tabular database; and a database browser which provides access to the events and, through them, to the documents. The information extraction engine uses sets of patterns and word classes to extract the information about each event. Preparing these patterns and word classes has been a time-consuming manual operation in the past, but automated discovery tools now make this task significantly easier. A small study comparing the effectiveness of the tabular index with conventional Web search tools demonstrated that users can find substantially more documents in a given time period with Proteus-BIO.
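The following toy sketch illustrates the general pattern-based extraction step, turning free-text outbreak sentences into tabular records. The regular expression and the disease word class are invented for the example and are not Proteus-BIO's actual patterns or resources.

```python
# Toy pattern-based extraction of outbreak events into table rows.
import re

DISEASES = r"(?P<disease>cholera|influenza|dengue|measles)"   # illustrative word class
PATTERN = re.compile(
    rf"(?P<count>\d+)\s+cases?\s+of\s+{DISEASES}\s+(?:were\s+)?reported\s+in\s+"
    r"(?P<location>[A-Z][a-zA-Z ]+?)\s+(?:on|in)\s+(?P<date>\w+ \d{4})"
)

def extract_events(report_text):
    """Return one dict (table row) per matched outbreak event."""
    return [m.groupdict() for m in PATTERN.finditer(report_text)]

text = ("12 cases of cholera were reported in Dhaka in March 1998. "
        "3 cases of dengue reported in Manila on June 1998.")
for event in extract_events(text):
    print(event)   # each dict is one row of the outbreak table
```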
Semantic Similarity between Web Documents Using Ontology
NASA Astrophysics Data System (ADS)
Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh
2018-06-01
The World Wide Web is a source of information available in the form of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language and is addressed to individual human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts and relationships between concepts, but the results still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts present in documents but also the relationships between those concepts. In our approach, documents are processed by constructing an ontology of each document from a base ontology and a dictionary of concept records. Each record consists of the probable words that represent a given concept. Finally, the document ontologies are compared to find their semantic similarity, taking the relationships among concepts into account. Relevant concepts and relations between the concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared with existing techniques.
Semantic Similarity between Web Documents Using Ontology
NASA Astrophysics Data System (ADS)
Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh
2018-03-01
The World Wide Web is a source of information available in the form of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language and is addressed to individual human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts and relationships between concepts, but the results still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts present in documents but also the relationships between those concepts. In our approach, documents are processed by constructing an ontology of each document from a base ontology and a dictionary of concept records. Each record consists of the probable words that represent a given concept. Finally, the document ontologies are compared to find their semantic similarity, taking the relationships among concepts into account. Relevant concepts and relations between the concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results compared with existing techniques.
Human-Robot Interface Controller Usability for Mission Planning on the Move
2012-11-01
Figure 3. Microsoft Xbox 360 controller for Windows... Figure 5. Microsoft Trackball Explorer... Figure 6... Xbox 360 Controller is a registered trademark of Microsoft Corporation. 3.2.1 HMMWV: The HMMWV was equipped with a diesel engine
Automatic Keyword Extraction from Individual Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.
2010-05-03
This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
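A simplified sketch of stop-word-delimited keyword extraction in the spirit of this method is shown below: candidate phrases are split on stop words, and each phrase is scored by summing word degree divided by word frequency. The stop-word list and scoring details here are assumptions for illustration, not the published configuration.

```python
# Simplified stop-word-delimited keyword extraction (illustrative parameters only).
import re
from collections import defaultdict

STOP = {"a", "an", "the", "of", "and", "for", "from", "on", "in", "to", "we",
        "is", "are", "with", "this", "that", "as", "by", "its", "or",
        "using", "within"}

def extract_keywords(text, top_k=5):
    words = re.findall(r"[a-zA-Z][a-zA-Z\-]*", text.lower())
    # Split the word sequence into candidate phrases at stop words.
    phrases, current = [], []
    for w in words:
        if w in STOP:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    # Score each word by degree/frequency, then each phrase as the sum.
    freq, degree = defaultdict(int), defaultdict(int)
    for p in phrases:
        for w in p:
            freq[w] += 1
            degree[w] += len(p)
    word_score = {w: degree[w] / freq[w] for w in freq}
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(extract_keywords("Automatic keyword extraction from individual documents "
                       "using word co-occurrence within candidate phrases"))
```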
Peakall, Rod; Smouse, Peter E.
2012-01-01
Summary: GenAlEx: Genetic Analysis in Excel is a cross-platform package for population genetic analyses that runs within Microsoft Excel. GenAlEx offers analysis of diploid codominant, haploid and binary genetic loci and DNA sequences. Both frequency-based (F-statistics, heterozygosity, HWE, population assignment, relatedness) and distance-based (AMOVA, PCoA, Mantel tests, multivariate spatial autocorrelation) analyses are provided. New features include calculation of new estimators of population structure: G′ST, G′′ST, Jost’s Dest and F′ST through AMOVA, Shannon Information analysis, linkage disequilibrium analysis for biallelic data and novel heterogeneity tests for spatial autocorrelation analysis. Export to more than 30 other data formats is provided. Teaching tutorials and expanded step-by-step output options are included. The comprehensive guide has been fully revised. Availability and implementation: GenAlEx is written in VBA and provided as a Microsoft Excel Add-in (compatible with Excel 2003, 2007, 2010 on PC; Excel 2004, 2011 on Macintosh). GenAlEx, and supporting documentation and tutorials are freely available at: http://biology.anu.edu.au/GenAlEx. Contact: rod.peakall@anu.edu.au PMID:22820204
Adamson, Lauren B; Bakeman, Roger; Brandon, Benjamin
2015-05-01
This study documents how parents weave new words into on-going interactions with children who are just beginning to speak. Dyads with typically developing toddlers and with young children with autism spectrum disorder and Down syndrome (n=56, 23, and 29) were observed using a Communication Play Protocol during which parents could use novel words to refer to novel objects. Parents readily introduced both labels and sound words even when their child did not respond expressively or produce the words. Results highlight both how parents act in ways that may facilitate their child's appreciation of the relation between a new word and its referent and how they subtly adjust their actions to suit their child's level of word learning and specific learning challenges. Copyright © 2015 Elsevier Inc. All rights reserved.
The evolution of the ISOLDE control system
NASA Astrophysics Data System (ADS)
Jonsson, O. C.; Catherall, R.; Deloose, I.; Drumm, P.; Evensen, A. H. M.; Gase, K.; Focker, G. J.; Fowler, A.; Kugler, E.; Lettry, J.; Olesen, G.; Ravn, H. L.; Isolde Collaboration
The ISOLDE on-line mass separator facility has been operating on a Personal Computer-based control system since spring 1992. Front End Computers accessing the hardware are controlled from consoles running Microsoft Windows™ through a Novell NetWare4™ local area network. The control system is transparently integrated into the CERN-wide office network and makes heavy use of the CERN standard office application programs to control and to document the running of the ISOLDE isotope separators. This paper recalls the architecture of the control system, shows its recent developments and gives some examples of its graphical user interface.
The evolution of the ISOLDE control system
NASA Astrophysics Data System (ADS)
Jonsson, O. C.; Catherall, R.; Deloose, I.; Evensen, A. H. M.; Gase, K.; Focker, G. J.; Fowler, A.; Kugler, E.; Lettry, J.; Olesen, G.; Ravn, H. L.; Drumm, P.
1996-04-01
The ISOLDE on-line mass separator facility has been operating on a Personal Computer-based control system since spring 1992. Front End Computers accessing the hardware are controlled from consoles running Microsoft Windows® through a Novell NetWare4® local area network. The control system is transparently integrated into the CERN-wide office network and makes heavy use of the CERN standard office application programs to control and to document the running of the ISOLDE isotope separators. This paper recalls the architecture of the control system, shows its recent developments and gives some examples of its graphical user interface.
Reaction Time Variability Associated with Reading Skills in Poor Readers with ADHD
Tamm, Leanne; Epstein, Jeffery N.; Denton, Carolyn A.; Vaughn, Aaron J.; Peugh, James; Willcutt, Erik G.
2014-01-01
Objective Linkages between neuropsychological functioning (i.e., response inhibition, processing speed, reaction time variability) and word reading have been documented among children with Attention-Deficit/Hyperactivity Disorder (ADHD) and children with Reading Disorders. However, associations between neuropsychological functioning and other aspects of reading (i.e., fluency, comprehension) have not been well-documented among children with comorbid ADHD and Reading Disorder. Method Children with ADHD and poor word reading (i.e., ≤25th percentile) completed a stop signal task (SST) and tests of word reading, reading fluency, and reading comprehension. Multivariate multiple regression was conducted predicting the reading skills from SST variables [i.e., mean reaction time (MRT), reaction time standard deviation (SDRT), and stop signal reaction time (SSRT)]. Results SDRT predicted word reading, reading fluency, and reading comprehension. MRT and SSRT were not associated with any reading skill. After including word reading in models predicting reading fluency and reading comprehension, the effects of SDRT were minimized. Discussion Reaction time variability (i.e., SDRT) reflects impairments in information processing and failure to maintain executive control. The pattern of results from this study suggests that SDRT exerts its effects on reading fluency and reading comprehension through its effect on word reading (i.e., decoding) and that this relation may be related to observed deficits in higher-level elements of reading. PMID:24528537
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustyn, Veronica; Ko, Jesse; Rauda, Iris
Representing the Molecularly Engineered Energy Materials (MEEM), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of MEEM is to use inexpensive, custom-designed molecular building blocks to create revolutionary new materials with self-assembled multi-scale architectures that will enable high-performing energy generation and storage applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stocks, G. Malcolm; Morris, James; Sproles, Andrew
Representing the Center for Defect Physics (CDP), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CDP is to enhance our fundamental understanding of defects, defect interactions, and defect dynamics that determine the performance of structural materials in extreme environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shastry, Tejas
Representing the Argonne-Northwestern Solar Energy Research (ANSER) Center, this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of ANSER is to revolutionize our understanding of molecules, materials and methods necessary to create dramatically more efficient technologies for solar fuels and electricity production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montoya, Joseph
Representing the Center on Nanostructuring for Efficient Energy Conversion (CNEEC), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of CNEEC is to understand how nanostructuring can enhance efficiency for energy conversion and solve fundamental cross-cutting problems in advanced energy conversion and storage systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crain, Steven P.; Yang, Shuang-Hong; Zha, Hongyuan
Access to health information by consumers is hampered by a fundamental language gap. Current attempts to close the gap leverage consumer oriented health information, which does not, however, have good coverage of slang medical terminology. In this paper, we present a Bayesian model to automatically align documents with different dialects (slang, common and technical) while extracting their semantic topics. The proposed diaTM model enables effective information retrieval, even when the query contains slang words, by explicitly modeling the mixtures of dialects in documents and the joint influence of dialects and topics on word selection. Simulations using consumer questions to retrieve medical information from a corpus of medical documents show that diaTM achieves a 25% improvement in information retrieval relevance by nDCG@5 over an LDA baseline.
Inclusion in the Microsoft Workforce
ERIC Educational Resources Information Center
Exceptional Parent, 2008
2008-01-01
Since 1975, Microsoft has been a worldwide leader in software, services, and solutions that help people and businesses realize their full potential. Loren Mikola, the Disability Inclusion Program Manager at Microsoft, ensures that this technology also reaches and includes the special needs population and, through the hiring of individuals with…
ERIC Educational Resources Information Center
White, Charles E., Jr.
The purpose of this study was to develop and implement a hypertext documentation system in an industrial laboratory and to evaluate its usefulness by participative observation and a questionnaire. Existing word-processing test method documentation was converted directly into a hypertext format or "hyperdocument." The hyperdocument was designed and…
Business Documents Don't Have to Be Boring
ERIC Educational Resources Information Center
Schultz, Benjamin
2006-01-01
With business documents, visuals can serve to enhance the written word in conveying the message. Images can be especially effective when used subtly, on part of the page, on successive pages to provide continuity, or even set as watermarks over the entire page. A main reason given for traditional text-only business documents is that they are…
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2014 CFR
2014-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2012 CFR
2012-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... must be submitted electronically in machine-readable format. PDF images created by scanning a paper document may not be submitted, except in cases in which a word-processing version of the document is not...
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2013 CFR
2013-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2011 CFR
2011-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
Progress Report--Microsoft Office 2003 Lynchburg College Tutorials
ERIC Educational Resources Information Center
Murray, Tom
2004-01-01
For the past several years Lynchburg College has developed Microsoft tutorials for use with academic classes and faculty, student and staff training. The tutorials are now used internationally. Last year Microsoft and Verizon sponsored a tutorial web site at http://www.officetutorials.com. This website recognizes ASCUE members for their wonderful…
Sandler, Leonard A; Blanck, Peter
2005-01-01
This case study examines efforts by Microsoft Corporation to enhance the diversity of its workforce and improve the accessibility and usability of its products and services for persons with disabilities. The research explores the relation among the Americans with Disabilities Act of 1990, corporate leadership, attitudes and behaviors towards individuals with disabilities, and dynamics that shape organizational culture at Microsoft. Implications for Microsoft, other employers, researchers, and the disability community are discussed. 2005 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Mann, Christopher; Narasimhamurthi, Natarajan
1998-08-01
This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer-aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.
Responding to Nonwords in the Lexical Decision Task: Insights from the English Lexicon Project
ERIC Educational Resources Information Center
Yap, Melvin J.; Sibley, Daragh E.; Balota, David A.; Ratcliff, Roger; Rueckl, Jay
2015-01-01
Researchers have extensively documented how various statistical properties of words (e.g., word frequency) influence lexical processing. However, the impact of lexical variables on nonword decision-making performance is less clear. This gap is surprising, because a better specification of the mechanisms driving nonword responses may provide…
Words for Work Evaluation Report 2011
ERIC Educational Resources Information Center
National Literacy Trust, 2011
2011-01-01
This document analyses and evaluates the findings of the second pilot year of the National Literacy Trust's speaking and listening project, Words for Work. This year's project worked with 219 year 9 pupils across England, and engaged 91 volunteers from the business community to facilitate group work that encouraged pupils to investigate their own…
ERIC Educational Resources Information Center
Baayen, R. Harald; Hendrix, Peter; Ramscar, Michael
2013-01-01
Arnon and Snider (2010; More than words: Frequency effects for multi-word phrases. "Journal of Memory and Language," 62, 67-82) documented frequency effects for compositional four-grams independently of the frequencies of lower-order "n"-grams. They argue that comprehenders apparently store frequency information about…
Machine-Aided Indexing of Technical Literature
ERIC Educational Resources Information Center
Klingbiel, Paul H.
1973-01-01
To index at the Defense Documentation Center (DDC), an automated system must choose single words or phrases rapidly and economically. Automation of DDC's indexing has been machine-aided from its inception. A machine-aided indexing system is described that indexes one million words of text per hour of CPU time. (22 references) (Author/SJ)
Morphological Effects in Auditory Word Recognition: Evidence from Danish
ERIC Educational Resources Information Center
Balling, Laura Winther; Baayen, R. Harald
2008-01-01
In this study, we investigate the processing of morphologically complex words in Danish using auditory lexical decision. We document a second critical point in auditory comprehension in addition to the Uniqueness Point (UP), namely the point at which competing morphological continuation forms of the base cease to be compatible with the input,…
A Basic Vocabulary of Federal Social Program Applications and Forms.
ERIC Educational Resources Information Center
Afflerbach, Peter P.; And Others
A study of the application forms for Social Security, Supplemental Security Income, public assistance, food stamps, Medicaid, and Medicare was conducted to examine the frequently occurring unfamiliar, specialized vocabulary words. It was found that 76 such words occurred at least ten times in the documents studied. A large number of other…
A Validation of Parafoveal Semantic Information Extraction in Reading Chinese
ERIC Educational Resources Information Center
Zhou, Wei; Kliegl, Reinhold; Yan, Ming
2013-01-01
Parafoveal semantic processing has recently been well documented in reading Chinese sentences, presumably because of language-specific features. However, because of a large variation of fixation landing positions on pretarget words, some preview words actually were located in foveal vision when readers' eyes landed close to the end of the…
Improving Elementary Students' Spelling Achievement Using High-Frequency Words.
ERIC Educational Resources Information Center
Durnil, Christina; And Others
An action research study detailed a program for improving spelling achievement across the curriculum. The targeted population is composed of second and third grade students from a growing, middle class community located in a suburb of Chicago, Illinois. The problem of misspelled words in the students' writing was documented through students'…
Linguistic, Cognitive, and Social Constraints on Lexical Entrenchment
ERIC Educational Resources Information Center
Chesley, Paula
2011-01-01
How do new words become established in a speech community? This dissertation documents linguistic, cognitive, and social factors that are hypothesized to affect "lexical entrenchment," the extent to which a new word becomes part of the lexicon of a speech community. First, in a longitudinal corpus study, I find that linguistic properties such as…
Hierarchic Agglomerative Clustering Methods for Automatic Document Classification.
ERIC Educational Resources Information Center
Griffiths, Alan; And Others
1984-01-01
Considers classifications produced by application of single linkage, complete linkage, group average, and word clustering methods to Keen and Cranfield document test collections, and studies structure of hierarchies produced, extent to which methods distort input similarity matrices during classification generation, and retrieval effectiveness…
Word Criticality Analysis MOS: 17B. Skill Levels 1 & 2.
1981-09-01
DISCLAIMER NOTICE: THIS DOCUMENT IS BEST QUALITY...Manual (SM). These critical words were selected by subject matter/job experts knowledgeable in their MOS. The vocabulary set used as the basis for critical...following 5-point rating scale was used by a team of up to 3 subject matter experts from Army MOS proponent schools to rate each word selected as having
Microsoft's Vista: Guarantees People with Special Needs Access to Computers
ERIC Educational Resources Information Center
Williams, John M.
2006-01-01
In this article, the author discusses the accessibility features of Microsoft's Windows Vista. One of the most innovative aspects of Windows Vista is a new accessibility and automated testing model called Microsoft UI Automation, which reduces development costs not only for accessible and assistive technology (AT) developers, but also for…
Microsoft Excel Software Usage for Teaching Science and Engineering Curriculum
ERIC Educational Resources Information Center
Singh, Gurmukh; Siddiqui, Khalid
2009-01-01
In this article, our main objective is to present the use of Microsoft Software Excel 2007/2003 for teaching college and university level curriculum in science and engineering. In particular, we discuss two interesting and fascinating examples of interactive applications of Microsoft Excel targeted for undergraduate students in: 1) computational…
Challenging Google, Microsoft Unveils a Search Tool for Scholarly Articles
ERIC Educational Resources Information Center
Carlson, Scott
2006-01-01
Microsoft has introduced a new search tool to help people find scholarly articles online. The service, which includes journal articles from prominent academic societies and publishers, puts Microsoft in direct competition with Google Scholar. The new free search tool, which should work on most Web browsers, is called Windows Live Academic Search…
Microsoft's Book-Search Project Has a Surprise Ending
ERIC Educational Resources Information Center
Foster, Andrea L.
2008-01-01
It is hard to imagine a Microsoft venture falling under the weight of a competitor. That's the post-mortem offered by many academic librarians as they ponder the software giant's recent and sudden announcement that it is shutting down its book-digitization project. The librarians' conclusion: Google did it. Microsoft quietly revealed in May that…
Mouriño García, Marcos Antonio; Pérez Rodríguez, Roberto; Anido Rifón, Luis E
2015-01-01
Automatic classification of text documents into a set of categories has many applications. Among them, the automatic classification of biomedical literature stands out as an important application for automatic document classification strategies. Biomedical staff and researchers have to deal with a great deal of literature in their daily activities, so a system that allows access to documents of interest in a simple and effective way would be useful; this requires that documents be sorted based on some criteria, that is, classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm. Features are words in the text, thus suffering from synonymy and polysemy, and their weights are based only on their frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge, concretely Wikipedia, to create bag-of-concepts (BoC) representations of documents, understanding a concept as a "unit of meaning" and thus tackling synonymy and polysemy. In addition, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label problem and up to 155% in the multi-label problem for the UVigoMED corpus.
Pérez Rodríguez, Roberto; Anido Rifón, Luis E.
2015-01-01
Automatic classification of text documents into a set of categories has many applications. Among them, the automatic classification of biomedical literature stands out as an important application for automatic document classification strategies. Biomedical staff and researchers have to deal with a great deal of literature in their daily activities, so a system that allows access to documents of interest in a simple and effective way would be useful; this requires that documents be sorted based on some criteria, that is, classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm. Features are words in the text, thus suffering from synonymy and polysemy, and their weights are based only on their frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge, concretely Wikipedia, to create bag-of-concepts (BoC) representations of documents, understanding a concept as a "unit of meaning" and thus tackling synonymy and polysemy. In addition, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label problem and up to 155% in the multi-label problem for the UVigoMED corpus. PMID:26468436
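A minimal sketch of the bag-of-concepts idea follows: surface words are mapped to concept identifiers before counting, so synonymous phrasings contribute to the same feature. The tiny concept dictionary below is a toy stand-in for the Wikipedia-derived knowledge base used in the paper.

```python
# Minimal bag-of-concepts (BoC) representation with a toy concept dictionary.
from collections import Counter

CONCEPTS = {
    "myocardial infarction": "C:heart_attack",
    "heart attack": "C:heart_attack",
    "hypertension": "C:high_blood_pressure",
    "high blood pressure": "C:high_blood_pressure",
}

def bag_of_concepts(text):
    """Count concept occurrences instead of raw word occurrences."""
    text = text.lower()
    counts = Counter()
    for phrase, concept in CONCEPTS.items():
        n = text.count(phrase)
        if n:
            counts[concept] += n
    return counts

print(bag_of_concepts("Heart attack risk rises with high blood pressure; "
                      "myocardial infarction and hypertension are linked."))
# Counter({'C:heart_attack': 2, 'C:high_blood_pressure': 2})
```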
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Deqing; Wu, Wenjun; Hu, Hongping
2012-11-01
In today's business environment, enterprises are increasingly under pressure to process the vast amount of data produced every day within enterprises. One approach is to focus on business intelligence (BI) applications and to increase commercial added value through such business analytics activities. The term weighting scheme, which is used to convert documents into vectors in the term space, is a vital task in enterprise Information Retrieval (IR), text categorisation, text analytics, etc. When determining term weight in a document, the traditional TF-IDF scheme sets the weight value for a term considering only its occurrence frequency within the document and in the entire set of documents, which means that some meaningful terms cannot receive an appropriate weight. In this article, we propose a new term weighting scheme called Term Frequency - Function of Document Frequency (TF-FDF) to address this issue. Instead of using a monotonically decreasing function such as Inverse Document Frequency, FDF uses a convex function that dynamically adjusts weights according to the significance of the words in a document set. This function can be manually tuned based on the distribution of the most meaningful words that semantically represent the document set. Our experiments show that TF-FDF achieves a higher Normalised Discounted Cumulative Gain in IR than TF-IDF and its variants, and improves the accuracy of relevance ranking of the IR results.
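The sketch below contrasts plain TF-IDF with the general shape of a TF-FDF-style weight. The quadratic convex function and its parameters a and pivot are placeholders chosen for illustration only, since the paper's exact FDF and its tuning procedure are not reproduced here.

    import math

    def tf_idf(tf, df, n_docs):
        return tf * math.log(n_docs / df)

    def tf_fdf(tf, df, n_docs, a=4.0, pivot=0.3):
        x = df / n_docs                   # normalized document frequency
        fdf = a * (x - pivot) ** 2 + 1.0  # an illustrative convex function of df
        return tf * fdf

    # A term occurring in 300 of 1000 documents: IDF penalizes it heavily,
    # while the convex placeholder keeps its weight tunable via a and pivot.
    print(round(tf_idf(3, 300, 1000), 2), round(tf_fdf(3, 300, 1000), 2))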
A Kind of Optimization Method of Loading Documents in OpenOffice.org
NASA Astrophysics Data System (ADS)
Lan, Yuqing; Li, Li; Zhou, Wenbin
As a giant in the open source community, OpenOffice.org has become the most popular office suite in the Linux community. However, OpenOffice.org is relatively slow when loading documents. Research shows that the most time-consuming part is importing the pages of the whole document; if a document has many pages, the accumulated time can be considerable. Therefore, this paper proposes a solution that improves the speed of loading documents through an asynchronous importing mechanism: a document is not imported as a whole; instead, only part of the document is imported at first for display, and a background mechanism is then started to asynchronously import the remaining parts and insert them into the drawing queue of OpenOffice.org for display. In this way, users do not have to wait for a long time. An application start-up time testing tool was used to measure the time consumed in loading documents of different page counts before and after the optimization of OpenOffice.org, and regression analysis was applied to examine the correlation between the number of pages in a document and the loading time. In addition, visual models of the experimental data were produced with the aid of MATLAB. A comparison of the time consumed to load a document before and after the solution was adopted shows an obvious increase in loading speed, and the loading speed of the optimized OpenOffice.org is almost the same as that of Microsoft Office. The results of the experiments show the effectiveness of this solution.
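A conceptual sketch of the asynchronous-import idea follows, written in Python rather than OpenOffice.org's actual code: the first page is imported synchronously so something can be displayed immediately, and the remaining pages are imported in a background thread and pushed onto a drawing queue. The function and variable names are invented for illustration.

    import queue
    import threading

    def import_page(doc, page_no):
        # Stand-in for the real page-import work done by the office suite.
        return f"rendered page {page_no} of {doc}"

    def open_document(doc, n_pages, draw_queue):
        draw_queue.put(import_page(doc, 1))          # display the first page at once

        def import_rest():
            for p in range(2, n_pages + 1):
                draw_queue.put(import_page(doc, p))  # arrives while the user reads

        threading.Thread(target=import_rest, daemon=True).start()

    draw_queue = queue.Queue()
    open_document("report.odt", n_pages=5, draw_queue=draw_queue)
    print(draw_queue.get())                          # first page is available immediately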
A Comparison of Product Realization Frameworks
1993-10-01
software (integrated FrameMaker). Also included are BOLD for on-line documentation delivery, printer/plotter support, and network licensing support. AMPLE...are built with DSS. Documentation tools include an on-line information system (BOLD), text editing (Notepad), word processing (integrated FrameMaker) ...within an application. FrameMaker is fully integrated with the Falcon Framework to provide consistent documentation capabilities within engineering
DataUp: A tool to help researchers describe and share tabular data
Strasser, Carly; Kunze, John; Abrams, Stephen; Cruse, Patricia
2014-01-01
Scientific datasets have immeasurable value, but they lose their value over time without proper documentation, long-term storage, and easy discovery and access. Across disciplines as diverse as astronomy, demography, archeology, and ecology, large numbers of small heterogeneous datasets (i.e., the long tail of data) are especially at risk unless they are properly documented, saved, and shared. One unifying factor for many of these at-risk datasets is that they reside in spreadsheets. In response to this need, the California Digital Library (CDL) partnered with Microsoft Research Connections and the Gordon and Betty Moore Foundation to create the DataUp data management tool for Microsoft Excel. Many researchers creating these small, heterogeneous datasets use Excel at some point in their data collection and analysis workflow, so we were interested in developing a data management tool that fits easily into those work flows and minimizes the learning curve for researchers. The DataUp project began in August 2011. We first formally assessed the needs of researchers by conducting surveys and interviews of our target research groups: earth, environmental, and ecological scientists. We found that, on average, researchers had very poor data management practices, were not aware of data centers or metadata standards, and did not understand the benefits of data management or sharing. Based on our survey results, we composed a list of desirable components and requirements and solicited feedback from the community to prioritize potential features of the DataUp tool. These requirements were then relayed to the software developers, and DataUp was successfully launched in October 2012. PMID:25653834
ROSA, Wellington Luiz de Oliveira; SILVA, Tiago Machado; LIMA, Giana da Silveira; SILVA, Adriana Fernandes; PIVA, Evandro
2016-01-01
ABSTRACT Objective A systematic review was conducted to analyze Brazilian scientific and technological production related to the dental materials field over the past 50 years. Material and Methods This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Searches were performed until December 2014 in six databases: MedLine (PubMed), Scopus, LILACS, IBECS, BBO, and the Cochrane Library. Additionally, the Brazilian patent database (INPI - Instituto Nacional de Propriedade Industrial) was screened in order to get an overview of Brazilian technological development in the dental materials field. Two reviewers independently analyzed the documents. Only studies and patents related to dental materials were included in this review. Data regarding the material category, dental specialty, number of documents and patents, affiliation countries, and the number of citations were tabulated and analyzed in Microsoft Office Excel (Microsoft Corporation, Redmond, Washington, United States). Results A total of 115,806 studies and 53 patents were related to dental materials and were included in this review. Brazil had 8% affiliation in studies related to dental materials, and the majority of the papers published were related to dental implants (1,137 papers), synthetic resins (681 papers), dental cements (440 papers), dental alloys (392 papers) and dental adhesives (361 papers). The Brazilian technological development with patented dental materials was smaller than the scientific production. The most patented type of material was dental alloys (11 patents), followed by dental implants (8 patents) and composite resins (7 patents). Conclusions Dental materials science has had a substantial number of records, demonstrating an important presence in scientific and technological development of dentistry. In addition, it is important to approximate the relationship between academia and industry to expand the technological development in countries such as Brazil. PMID:27383712
The professional profile of UFBA nursing management graduate students.
Paiva, Mirian Santos; Coelho, Edméia de Almeida Cardoso; Nascimento, Enilda Rosendo do; Melo, Cristina Maria Meira de; Fernandes, Josicelia Dumêt; Santos, Ninalva de Andrade
2011-12-01
The objective of the present study was to analyze the professional profile of the nursing graduate students of the Federal University of Bahia, more specifically in the nursing management area. This descriptive, exploratory study was performed using documental research. The data were collected from the graduates' curricula on the Lattes Platform and from the graduate program documents, using a form. The study population consisted of graduates enrolled under the line of research The Organization and Evaluation of Health Care Systems, who developed dissertations/theses addressing Nursing/Health Management. The data were stored using Microsoft Excel and then transferred to the STATA 9.0 statistical software. Results showed that most graduates are women, originally from the State of Bahia, who had completed the course between 2000 and 2011, and faculty members at public institutions who remained involved in academic work after completing the course. These results point to the program as an academic environment committed to preparing researchers.
Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong
2016-01-01
Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manual construction of labeled data, which are required for state-of-the-art supervised learning systems. Active learning is to choose the most informative documents for the supervised learning in order to reduce the amount of required manual annotations. Previous works of active learning, however, focused on the tasks of entity recognition and protein-protein interactions, but not on event extraction tasks for multiple event types. They also did not consider the evidence of event participants, which might be a clue for the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems as follows: We first employ an event extraction system to filter potential false negatives among unlabeled documents, from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives of unlabeled documents 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method for the task of named entity recognition. We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method can achieve better performance than such previous methods as entropy and Gibbs error based methods and a conventional committee-based method. We also show that the incorporation of named entity recognition into the active learning for event extraction and the unknown word handling further improve the active learning method. In addition, the adaptation of the active learning method into named entity recognition tasks also improves the document selection for manual annotation of named entities.
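A highly simplified sketch of the committee idea described above: unlabeled documents from which the event extractor finds nothing are re-scored by a second system, and the top candidates are sent for manual annotation. The scoring functions below are toy placeholders standing in for the paper's language model and named-entity recognizer, and the trigger keyword is purely illustrative.

    def extract_events(doc):
        return []                       # stand-in for a real event extraction system

    def lm_event_score(doc):
        # Toy language-model surrogate: frequency of an illustrative trigger word.
        return 0.1 * doc.lower().count("phosphorylation")

    def count_entity_mentions(doc):
        # Toy NER surrogate: capitalized tokens as pretend protein mentions.
        return sum(tok[:1].isupper() for tok in doc.split())

    def select_for_annotation(unlabeled_docs, k=1):
        candidates = [d for d in unlabeled_docs if not extract_events(d)]
        ranked = sorted(candidates,
                        key=lambda d: lm_event_score(d) + count_entity_mentions(d),
                        reverse=True)
        return ranked[:k]               # likely false negatives go to the annotators

    docs = ["MEK phosphorylation of ERK was observed.", "The weather was fine today."]
    print(select_for_annotation(docs))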
Making More Light with Less Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuritzky, Leah; Jewell, Jason
Representing the Center for Energy Efficient Materials (CEEM), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CEEM is to discover and develop materials that control the interactions among light, electricity, and heat at the nanoscale for improved solar energy conversion, solid-state lighting, and conversion of heat into electricity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okman, Oya; Baginska, Marta; Jones, Elizabeth MC
Representing the Center for Electrical Energy Storage (CEES), this document is one of the entries in the Ten Hundred and One Word Challenge and was awarded "Best Science Lesson." As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CEES is to acquire a fundamental understanding of interfacial phenomena controlling electrochemical processes that will enable dramatic improvements in the properties and performance of energy storage devices, notably Li ion batteries.
Rocks Filled with Tiny Spaces Can Turn Green Growing Things Into Stuff We Use Every Day
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikbin, Nima; Josephson, Tyler; Courtney, Timothy
Representing the Catalysis Center for Energy Innovation (CCEI), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of CCEI is to design and characterize novel catalysts for the efficient conversion of the complex molecules comprising biomass into chemicals and fuels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, Marianne; Luckyanova, Maria; Manke, Kara
Representing the Solid-State Solar-Thermal Energy Conversion Center (S3TEC), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of S3TEC is advancing fundamental science and developing materials to harness heat from the sun and convert this heat into electricity via solid-state thermoelectric and thermophotovoltaic technologies.
Lighting the World in a Different Way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilber, Nicole; Houmpheng, Krista; Coltrin, Mike
Representing the Solid State Lighting Science (SSLS), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the SSLS is to help build the scientific foundation that enables solid-state lighting to produce the most light for the least energy, both in the U.S. and, as a side-benefit, throughout the world.
Power to the People...Energy for Now and Later
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, Chih-Jen; Law, Chung K; Brady, Kyle
Representing the Combustion Energy Frontier Research Center (CEFRC), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of CEFRC is to develop a validated, predictive, multi-scale combustion modeling capacity which can be used to optimize the design and operation of evolving fuels in advanced engines for transportation applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryant, Steven L; Camacho-Lopez, Tara R; Tenney, Craig M
Representing the Center for Frontiers of Subsurface Energy Security (CFSES), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CFSES is to pursue the scientific understanding of multiscale, multiphysics processes and to ensure safe and economically feasible storage of carbon dioxide and other byproducts of energy production without harming the environment.
Using Left Overs to Make Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steuterman, Sally; Czarnecki, Alicia; Hurley, Paul
Representing the Materials Science of Actinides (MSA), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of MSA is to conduct transformative research in the actinide sciences with full integration of experimental and computational approaches, and an emphasis on research questions that are important to the energy future of the nation.
Using all of the Energy from the Sun to Make Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dapkus, P. Daniel; Povinelli, Michelle
Representing the Center for Energy Nanoscience (CEN), this document is one of the entries in the Ten Hundred and One Word Challenge and was awarded "Overall Winner Runner-up." As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CEN is to explore the light absorption and emission in organic and nanostructure materials and their hybrids for solar energy conversion and solid state lighting.
Sunlight + Water = Tomorrow's Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Anne Katherine
Representing the Center for Bio-Inspired Solar Fuel Production (BISfuel), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of BISfuel is to construct a complete system for solar-powered production of hydrogen fuel via water splitting; design principles are drawn from the fundamental concepts that underlie photosynthetic energy conversion.
Text mining by Tsallis entropy
NASA Astrophysics Data System (ADS)
Jamaati, Maryam; Mehri, Ali
2018-01-01
Long-range correlations between the elements of natural languages enable them to convey very complex information. The complex structure of human language, as a manifestation of natural languages, motivates us to apply nonextensive statistical mechanics to text mining. Tsallis entropy appropriately ranks terms' relevance to the document subject, taking advantage of their spatial correlation length. We apply this statistical concept as a new, powerful word-ranking metric for extracting the keywords of a single document. We carry out an experimental evaluation, which shows the capability of the presented method in keyword extraction. We find that Tsallis entropy has reliable word-ranking performance, at the same level as the best previous ranking methods.
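A toy illustration of Tsallis-entropy word ranking follows, under the assumption that each word is scored by the Tsallis entropy of its occurrence distribution over equal-sized text segments; the segment scheme, the entropic index q, and the scoring details are illustrative and do not reproduce the paper's exact metric.

    from collections import Counter

    def tsallis_entropy(probabilities, q=1.5):
        return (1.0 - sum(p ** q for p in probabilities)) / (q - 1.0)

    def rank_words(tokens, n_segments=4, q=1.5):
        seg_len = max(1, len(tokens) // n_segments)
        scores = {}
        for word in set(tokens):
            counts = Counter(i // seg_len for i, t in enumerate(tokens) if t == word)
            total = sum(counts.values())
            probs = [c / total for c in counts.values()]
            scores[word] = tsallis_entropy(probs, q)
        # Lower entropy = occurrences concentrated in few segments (keyword-like);
        # scattered function words tend to get higher entropy.
        return sorted(scores, key=scores.get)

    text = "entropy entropy entropy model model the the the and the and the or".split()
    print(rank_words(text)[:2])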
A Match Method Based on Latent Semantic Analysis for Earthquake Hazard Emergency Plan
NASA Astrophysics Data System (ADS)
Sun, D.; Zhao, S.; Zhang, Z.; Shi, X.
2017-09-01
The structure of an earthquake emergency plan is complex, and it is difficult for decision makers to make a decision in a short time. To solve this problem, this paper presents a match method based on Latent Semantic Analysis (LSA). After word segmentation preprocessing of the emergency plan, we carry out keyword extraction according to part of speech and word frequency. Then, through LSA, we map the documents and the query information into the semantic space, and calculate the correlation between documents and queries from the relation between their vectors. The experimental results indicate that LSA can improve the accuracy of emergency plan retrieval efficiently.
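A minimal LSA matching sketch, assuming scikit-learn is available: plans and a query are projected into a low-dimensional semantic space via a truncated SVD of the TF-IDF matrix and ranked by cosine similarity. The example plans, the query, and the choice of two latent dimensions are invented for illustration; word segmentation and part-of-speech filtering from the paper are omitted.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    plans = [
        "evacuate residents near collapsed buildings and open emergency shelters",
        "inspect dams and reservoirs for cracks after the main shock",
        "restore power lines and communication networks in the affected area",
    ]
    query = ["shelter and evacuation for people near damaged buildings"]

    X = TfidfVectorizer().fit_transform(plans + query)                 # term space
    Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)  # semantic space

    similarities = cosine_similarity(Z[-1:], Z[:-1])[0]
    best = max(range(len(plans)), key=lambda i: similarities[i])
    print("best matching plan:", plans[best])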
ASM Based Synthesis of Handwritten Arabic Text Pages.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed
2015-01-01
Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods that have individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient natural ground-truthed data are not available.
77 FR 76606 - Community Development Financial Institutions Fund
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
... form, with pre-set text limits and font size restrictions. Applicants must submit their narrative responses by using the FY 2013 CDFI Program Application narrative template document. This Word document...) A-133 Narrative Report; (iv) Institution Level Report; (v) Transaction Level Report (for Awardees...
Lexical frequency and voice assimilation in complex words in Dutch
NASA Astrophysics Data System (ADS)
Ernestus, Mirjam; Lahey, Mybeth; Verhees, Femke; Baayen, Harald
2004-05-01
Words with higher token frequencies tend to have more reduced acoustic realizations than lower frequency words (e.g., Hay, 2000; Bybee, 2001; Jurafsky et al., 2001). This study documents frequency effects for regressive voice assimilation (obstruents are voiced before voiced plosives) in Dutch morphologically complex words in the subcorpus of read-aloud novels in the corpus of spoken Dutch (Oostdijk et al., 2002). As expected, the initial obstruent of the cluster tends to be absent more often as lexical frequency increases. More importantly, as frequency increases, the duration of vocal-fold vibration in the cluster decreases, and the duration of the bursts in the cluster increases, after partialing out cluster duration. This suggests that there is less voicing for higher-frequency words. In fact, phonetic transcriptions show regressive voice assimilation for only half of the words and progressive voice assimilation for one third. Interestingly, the progressive voice assimilation observed for higher-frequency complex words renders these complex words more similar to monomorphemic words: Dutch monomorphemic words typically contain voiceless obstruent clusters (Zonneveld, 1983). Such high-frequency complex words may therefore be less easily parsed into their constituent morphemes (cf. Hay, 2000), favoring whole word lexical access (Bertram et al., 2000).
Schultheiss, Oliver C.
2013-01-01
Traditionally, implicit motives (i.e., non-conscious preferences for specific classes of incentives) are assessed through semantic coding of imaginative stories. The present research tested the marker-word hypothesis, which states that implicit motives are reflected in the frequencies of specific words. Using Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2001), Study 1 identified word categories that converged with a content-coding measure of the implicit motives for power, achievement, and affiliation in picture stories collected in German and US student samples, showed discriminant validity with self-reported motives, and predicted well-validated criteria of implicit motives (gender difference for the affiliation motive; in interaction with personal-goal progress: emotional well-being). Study 2 demonstrated LIWC-based motive scores' causal validity by documenting their sensitivity to motive arousal. PMID:24137149
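As a sketch of the marker-word idea tested above, the snippet below counts occurrences of words from hand-picked category lists in a story, normalized by story length. The category lists are invented placeholders; they are not LIWC's dictionaries nor the word categories validated in the study.

    import re

    CATEGORIES = {
        "power": {"influence", "control", "win", "leader"},
        "achievement": {"succeed", "goal", "improve", "master"},
        "affiliation": {"friend", "together", "love", "share", "shared"},
    }

    def motive_scores(story):
        tokens = re.findall(r"[a-z]+", story.lower())
        return {category: sum(t in words for t in tokens) / max(len(tokens), 1)
                for category, words in CATEGORIES.items()}

    print(motive_scores("They worked together and shared the prize with a friend."))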
Online database for documenting clinical pathology resident education.
Hoofnagle, Andrew N; Chou, David; Astion, Michael L
2007-01-01
Training of clinical pathologists is evolving and must now address the 6 core competencies described by the Accreditation Council for Graduate Medical Education (ACGME), which include patient care. A substantial portion of the patient care performed by the clinical pathology resident takes place while the resident is on call for the laboratory, a practice that provides the resident with clinical experience and assists the laboratory in providing quality service to clinicians in the hospital and surrounding community. Documenting the educational value of these on-call experiences and providing evidence of competence is difficult for residency directors. An online database of these calls, entered by residents and reviewed by faculty, would provide a mechanism for documenting and improving the education of clinical pathology residents. With Microsoft Access we developed an online database that uses active server pages and secure sockets layer encryption to document calls to the clinical pathology resident. Using the data collected, we evaluated the efficacy of 3 interventions aimed at improving resident education. The database facilitated the documentation of more than 4,700 calls in the first 21 months it was online, provided archived resident-generated data to assist in serving clients, and demonstrated that 2 interventions aimed at improving resident education were successful. We have developed a secure online database, accessible from any computer with Internet access, that can be used to easily document clinical pathology resident education and competency.
Enzymatic Decontamination of Environmental Organophosphorus Compounds
2006-12-04
Photograph + Printed Word: A New Language for the Student Journalist.
ERIC Educational Resources Information Center
Magmer, James
This document examines the use of photography and the printed word to make visual statements in student publications. It is written for journalists who are writers and editors as well as for photojournalists and for student journalists interested in increasing the quality of the school newspaper, magazine, or yearbook. The role of the photographer…
JPKWIC - General key word in context and subject index report generator
NASA Technical Reports Server (NTRS)
Jirka, R.; Kabashima, N.; Kelly, D.; Plesset, M.
1968-01-01
JPKWIC computer program is a general key word in context and subject index report generator specifically developed to help nonprogrammers and nontechnical personnel to use the computer to access files, libraries and mass documentation. This program is designed to produce a KWIC index, a subject index, an edit report, a summary report, and an exclusion list.
According to Davis: Connecting Principles and Practices
ERIC Educational Resources Information Center
Schulman, Steven M.
2013-01-01
In this article, the author allows Robert B. Davis to state for himself his own Principles concerning how children learn, and how teachers can best teach them. These principles are put forward in Davis' own words along with detailed documentation. The author goes on to compare Davis' words with his practices. A single Davis video (Towers of Hanoi) is…
A Comparison of Key Concepts in Data Analytics and Data Science
ERIC Educational Resources Information Center
McMaster, Kirby; Rague, Brian; Wolthuis, Stuart L.; Sambasivam, Samuel
2018-01-01
This research study provides an examination of the relatively new fields of Data Analytics and Data Science. We compare word rates in Data Analytics and Data Science documents to determine which concepts are mentioned most often. The most frequent concept in both fields is "data." The word rate for "data" is more than twice the…
ERIC Educational Resources Information Center
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
Part Two of the eighteenth report on Salton's Magical Automatic Retriever of Texts (SMART) project is composed of three papers: The first: "The Effect of Common Words and Synonyms on Retrieval Performance" by D. Bergmark discloses that removal of common words from the query and document vectors significantly increases precision and that…
Finding Relevant Data in a Sea of Languages
2016-04-26
full machine-translated text, unbiased word clouds, query-biased word clouds, and query-biased sentence...and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken...the crime (stock market). The Cross-LAnguage Search Engine (CLASE) has already preprocessed the documents, extracting text to identify the language
ERIC Educational Resources Information Center
Hopp, Holger
2005-01-01
This study documents knowledge of UG-mediated aspects of optionality in word order in the second language (L2) German of advanced English and Japanese speakers (n = 39). A bimodal grammaticality judgement task, which controlled for context and intonation, was administered to probe judgements on a set of scrambling, topicalization and remnant…
Vives, Michael; Young, Lyle; Sabharwal, Sanjeev
2009-12-01
Analysis of spine-related websites available to the general public. To assess the readability of spine-related patient educational materials available on professional society and individual surgeon or practice-based websites. The Internet has become a valuable source of patient education material. A significant percentage of patients, however, find this Internet-based information confusing. Healthcare experts recommend that the readability of patient education material be less than the sixth grade level. The Flesch-Kincaid grade level is the most widely used method to evaluate the readability score of textual material, with lower scores suggesting easier readability. We conducted an Internet search of all patient education documents on the North American Spine Society (NASS), American Association of Neurological Surgeons (AANS), the American Academy of Orthopaedic Surgeons (AAOS), and a sample of 10 individual surgeon or practice-based websites. The Flesch-Kincaid grade level of each article was calculated using widely available Microsoft Office Word software. The mean grade levels of articles on the various professional society and individual/practice-based websites were compared. A total of 121 articles from the various websites were available and analyzed. All 4 categories of websites had mean Flesch-Kincaid grade levels greater than 10. Only 3 articles (2.5%) were found to be at or below the sixth grade level, the recommended readability level for adult patients in the United States. There were no significant differences among the mean Flesch-Kincaid grade levels from the AAOS, NASS, AANS, and practice-based websites (P = 0.065, ANOVA). Our findings suggest that most of the spine-related patient education materials on professional society and practice-based websites have readability scores that may be too high, making comprehension difficult for a substantial portion of the United States adult population.
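For reference, the Flesch-Kincaid grade level mentioned above is computed as 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. The sketch below implements that formula with a crude vowel-group syllable counter, so its output will only approximate the scores that Microsoft Word reports; the sample text is invented.

    import re

    def count_syllables(word):
        # Rough heuristic: count groups of consecutive vowels (minimum one).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

    sample = ("Degenerative disc disease describes progressive deterioration of the "
              "intervertebral discs. Surgical intervention is occasionally indicated.")
    print(round(flesch_kincaid_grade(sample), 1))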
PharmTeX: a LaTeX-Based Open-Source Platform for Automated Reporting Workflow.
Rasmussen, Christian Hove; Smith, Mike K; Ito, Kaori; Sundararajan, Vijayakumar; Magnusson, Mats O; Niclas Jonsson, E; Fostvedt, Luke; Burger, Paula; McFadyen, Lynn; Tensfeldt, Thomas G; Nicholas, Timothy
2018-03-16
Every year, the pharmaceutical industry generates a large number of scientific reports related to drug research, development, and regulatory submissions. Many of these reports are created using text processing tools such as Microsoft Word. Given the large number of figures, tables, references, and other elements, this is often a tedious task involving hours of copying and pasting and substantial efforts in quality control (QC). In the present article, we present the LaTeX-based open-source reporting platform, PharmTeX, a community-based effort to make reporting simple, reproducible, and user-friendly. The PharmTeX creators put substantial effort into simplifying the sometimes complex elements of LaTeX into user-friendly functions that rely on advanced LaTeX and Perl code running in the background. This setup makes LaTeX much more accessible for users with no prior LaTeX experience. A software collection was compiled for users not wanting to manually install the required software components. The PharmTeX templates allow for inclusion of tables directly from mathematical software output as well as figures in several formats. Code listings can be included directly from source. No previous experience and only a few hours of training are required to start writing reports using PharmTeX. PharmTeX significantly reduces the time required for creating a scientific report fully compliant with regulatory and industry expectations. QC is made much simpler, since there is a direct link between analysis output and report input. PharmTeX makes the strengths of LaTeX document processing available to report authors without the need for extensive training.
Majumder, Anirban; Sanyal, Debmalya
2017-01-01
Context: Gender dysphoria (GD) is an increasingly recognized medical condition in India, and little scientific data on treatment outcomes are available. Aims: Our objective is to study the therapeutic options including psychotherapy, hormone, and surgical treatments used for alleviating GD in male-to-female (MTF) transgender subjects in Eastern India. Subjects and Methods: This is a retrospective study of treatment preferences and outcome in 55 MTF transgender subjects who presented to the endocrine clinic. Statistical Analysis Used: Descriptive statistical analysis is carried out in the present study, and Microsoft Word and Excel are used to generate graphs and tables. Results: The mean follow-up was 1.9 years, and 14 subjects (25.5%) were lost to follow-up after a single or 2-3 contact sessions. The remaining 41 subjects (74.5%) desiring treatment had regular counseling and medical monitoring. All 41 subjects were dressing to present themselves as female, and all of them were receiving cross-sex hormone therapy: either estrogen only (68%), drospirenone in combination with estrogen (12%), or gonadotropin-releasing hormone (GnRH) agonist in combination with estrogens (19.5%). Most of the subjects preferred estrogen therapy as it was most affordable, and only a small number of subjects preferred drospirenone or GnRH agonist because of cost and availability. 23.6% of subjects underwent esthetic breast augmentation surgery and 25.5% underwent orchiectomy and/or vaginoplasty. Three subjects presented with prior breast augmentation surgery and nine subjects presented with prior orchiectomy without vaginoplasty, depicting a high prevalence of poorly supervised surgeries. Conclusions: Standards of care documents provide clinical guidance for health professionals about the optimal management of transsexual people. The lack of information among health professionals about proper and protocol-wise management leads to suboptimal physical, social, and sexual results. PMID:28217493
Zabek, Daniel; Taylor, John; Bowen, Chris
2016-09-05
Flexible pyroelectric energy generators provide unique features for harvesting temperature fluctuations, which can be effectively enhanced using meshed electrodes that improve thermal conduction, convection, and radiation into the pyroelectric. In this paper, thermal radiation energy is continuously harvested with pyroelectric free-standing Polyvinylidene Difluoride (PVDF) films over a large number of heat cycles using a novel micro-sized symmetrically patterned meshed electrode. It is shown that, for the meshed electrode geometries considered in this work, the polarisation-field (P-E) and current-field (I-E) characteristics and the device capacitance are unaffected since the fringing fields were generally small; this is verified using numerical simulations and comparison with experimental measurements. The use of meshed electrodes has been shown to significantly improve both the open-circuit voltage (16 V to 59 V) and the closed-circuit current (9 nA to 32 nA). The pyroelectric alternating current (AC) is rectified for direct current (DC) storage, and a 30% reduction in capacitor charging time is achieved by using the optimum meshed electrodes. The use of meshed electrodes on ferroelectric materials provides an innovative route to improve their performance in applications such as wearable devices, novel flexible sensors, and large-scale pyroelectric energy harvesters.
Word Spotting and Recognition with Embedded Attributes.
Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest
2014-12-01
This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
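A minimal sketch of the retrieval step once word images and query strings share a common fixed-length subspace: candidates are ranked by cosine similarity as a nearest-neighbour search. The embeddings below are random placeholders generated with NumPy (assumed available); the label embedding, attribute learning, and subspace regression from the paper are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    word_image_embeddings = rng.normal(size=(1000, 96))  # pretend database of word images
    query_string_embedding = rng.normal(size=96)         # pretend embedded query string

    def cosine_rank(query, database, top_k=5):
        q = query / np.linalg.norm(query)
        db = database / np.linalg.norm(database, axis=1, keepdims=True)
        scores = db @ q
        return np.argsort(-scores)[:top_k]               # indices of the best matches

    print(cosine_rank(query_string_embedding, word_image_embeddings))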
ERIC Educational Resources Information Center
Bhanji, Zahra
2012-01-01
The purpose of this article is to explore Microsoft Corporation as a new international actor shaping educational reforms and practices. This study examines how the implementation of Microsoft's global Partners in Learning (PiL) program varied and was mediated by national politics and national institutional practices in two different contexts,…
Document image cleanup and binarization
NASA Astrophysics Data System (ADS)
Wu, Victor; Manmatha, Raghaven
1998-04-01
Image binarization is a difficult task for documents with text over textured or shaded backgrounds, poor contrast, and/or considerable noise. Current optical character recognition (OCR) and document analysis technology does not handle such documents well. We have developed a simple yet effective algorithm for document image clean-up and binarization. The algorithm consists of two basic steps. In the first step, the input image is smoothed using a low-pass filter. The smoothing operation enhances the text relative to any background texture, because background texture normally has higher frequency than text does. The smoothing operation also removes speckle noise. In the second step, the intensity histogram of the smoothed image is computed and a threshold is automatically selected as follows. For black text, the first peak of the histogram corresponds to text. Thresholding the image at the value of the valley between the first and second peaks of the histogram binarizes the image well. In order to reliably identify the valley, the histogram is smoothed by a low-pass filter before the threshold is computed. The algorithm has been applied to some 50 images from a wide variety of sources: digitized video frames, photos, newspapers, advertisements in magazines or sales flyers, personal checks, etc. There are 21820 characters and 4406 words in these images. 91 percent of the characters and 86 percent of the words are successfully cleaned up and binarized. A commercial OCR was applied to the binarized text when it consisted of fonts which were OCR recognizable. The recognition rate was 84 percent for the characters and 77 percent for the words.
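A sketch of the two-step clean-up and binarization procedure described above, written with NumPy only (assumed available): smooth the image with a small box filter, low-pass the grey-level histogram, take the two strongest peaks as the text and background modes, and threshold at the valley between them. The filter sizes and the peak-picking rule are guesses, not the authors' exact parameters.

    import numpy as np

    def box_smooth(image, k=5):
        padded = np.pad(image.astype(float), k // 2, mode="edge")
        out = np.zeros(image.shape, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
        return out / (k * k)

    def binarize(image):
        smooth = box_smooth(image)
        hist, _ = np.histogram(smooth, bins=256, range=(0, 256))
        hist = np.convolve(hist, np.ones(9) / 9, mode="same")     # low-pass the histogram
        maxima = [i for i in range(1, 255)
                  if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
        dark, bright = sorted(sorted(maxima, key=lambda i: hist[i])[-2:])
        valley = dark + int(np.argmin(hist[dark:bright + 1]))
        return np.where(smooth > valley, 255, 0).astype(np.uint8)  # text -> 0, background -> 255

    noisy = np.clip(np.random.default_rng(1).normal(200, 20, (64, 64)), 0, 255)
    noisy[20:30, 10:50] = 30                                       # a dark "text" stroke
    print(np.unique(binarize(noisy)))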
Cost Estimating Cases: Educational Tools for Cost Analysts
1993-09-01
only appropriate documentation should be provided. In other words, students should not submit all of the documentation possible using ACEIT, only that...case was their lack of understanding of the ACEIT software used to conduct the estimate. Specifically, many students misinterpreted the cost...estimating relationships (CERs) embedded in the software. Additionally, few of the students were able to properly organize the ACEIT documentation output
Xyce Parallel Electronic Simulator Reference Guide Version 6.6.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide [1]. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1].
Human Factors Feedback: Brain Acoustic Monitor
2012-02-01
Microsoft Office Excel ... Panasonic Toughbook system. ...was preloaded with the Microsoft Windows XP Service Pack 2 OS. This OS is widely used on IBM-style personal computers, and the BAM system did not...
Singh, Shikha; Deshmukh, Sonali; Merani, Varsha; Rejintal, Neeta
2016-01-01
The aim of this article is to evaluate the mean cephalometric values for Arnett's soft tissue analysis in the Maratha ethnic (Indian) population. Lateral cephalograms of 60 patients (30 males and 30 females) aged 18-26 years were obtained with the patients in the Natural Head Position (NHP), with teeth in maximum intercuspation and lips in the rest position. Moreover, hand tracings were also done. The statistical analysis was performed with the help of statistical software, the Statistical Package for the Social Sciences version 16, and Microsoft Word and Excel (Microsoft Office 2007) were used to generate the analytical data. Statistical significance was tested at the 1% and 5% levels of significance. Statistical analysis using Student's unpaired t-test was performed. Various cephalometric values for the Maratha ethnic (Indian) population differed from Caucasian cephalometric values, such as nasolabial inclination, incisor proclination, and exposure, which may affect the outcome of orthodontic and orthognathic treatment. Marathas have more proclined maxillary incisors, a less prominent chin, less facial length, an acute nasolabial angle, and greater soft tissue thicknesses except lower lip thickness (in Maratha males and females) and upper lip angle (in Maratha males) compared with the Caucasian population. It is a fact that all different ethnic races have different facial characters. The variability of the soft tissue integument in people of different ethnic origin makes it necessary to study the soft tissue standards of a particular community and consider those norms when planning orthodontic and orthognathic treatment for patients of that racial and ethnic background.
Different Neural Correlates of Emotion-Label Words and Emotion-Laden Words: An ERP Study.
Zhang, Juan; Wu, Chenggang; Meng, Yaxuan; Yuan, Zhen
2017-01-01
It is well-documented that both emotion-label words (e.g., sadness, happiness) and emotion-laden words (e.g., death, wedding) can induce emotion activation. However, the neural correlates of emotion-label word and emotion-laden word recognition have not been examined. The present study aimed to compare the underlying neural responses when processing the two kinds of words by employing event-related potential (ERP) measurements. Fifteen Chinese native speakers were asked to perform a lexical decision task in which they judged whether a two-character compound stimulus was a real word or not. Results showed that (1) emotion-label words and emotion-laden words elicited similar P100 at the posterior sites, (2) a larger N170 was found for emotion-label words than for emotion-laden words at the occipital sites on the right hemisphere, and (3) negative emotion-label words elicited a larger Late Positivity Complex (LPC) on the right hemisphere than on the left hemisphere, while such an effect was not found for emotion-laden words and positive emotion-label words. The results indicate that emotion-label words and emotion-laden words elicit different cortical responses at both early (N170) and late (LPC) stages. In addition, a right hemisphere advantage for emotion-label words over emotion-laden words can be observed in certain time windows (i.e., N170 and LPC) while failing to be detected in another time window (i.e., P100). The implications of the current findings for future emotion research are discussed.
The personal receiving document management and the realization of email function in OAS
NASA Astrophysics Data System (ADS)
Li, Biqing; Li, Zhao
2017-05-01
This software is an independent software system developed with the currently popular B/S (browser/server) structure and ASP.NET technology, using the Windows 7 operating system, Microsoft SQL Server 2005 and Visual Studio 2008 as the database and development platform. It is suitable for small and medium enterprises, contains personal office, scientific research project management and system management functions, runs independently in the relevant environment, and addresses practical needs.
NASA Technical Reports Server (NTRS)
1998-01-01
SYMED, Inc., developed a unique electronic medical records and information management system. The S2000 Medical Interactive Care System (MICS) incorporates both a comprehensive and interactive medical care support capability and an extensive array of digital medical reference materials in either text or high-resolution graphic form. The system was designed, in cooperation with NASA, to improve the effectiveness and efficiency of physician practices. The S2000 is an MS (Microsoft) Windows-based software product which combines electronic forms, medical documents, and records management, and features a comprehensive medical information system for medical diagnostic support and treatment. SYMED, Inc. offers access to its medical systems to all companies seeking competitive advantages.
Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Michael; Elgowainy, Amgad; Han, Jeongwoo
This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.
Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash
A methodology has been developed to determine the impacts of the ISO 50001 Energy Management System (EnMS) at a region or country level. The impacts of ISO 50001 EnMS include energy, CO2 emissions, and cost savings. This internationally recognized and transparent methodology has been embodied in a user-friendly Microsoft Excel®-based tool called the ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical in order to obtain accurate and defensible results. This report is intended to document the data sources used and the assumptions made to calculate the global impact of ISO 50001 EnMS.
Proceedings-1979 third annual practical conference on communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1981-04-01
Topics covered at the meeting include: nonacademic writing, writer and editor training in technical publications, readability of technical documents, guide for beginning technical editors, a visual aids data base, newsletter publishing, style guide for a project management organization, word processing, computer graphics, text management for technical documentation, and typographical terminology.
ERIC Educational Resources Information Center
Australian Dept. of Labour and National Service, Melbourne. Women's Bureau.
This document is an English-language abstract (approximately 1,500 words) of a survey of Australian child care facilities. The survey covers facilities providing full-day care and therefore excludes kindergartens, play centers, nursery schools, and child minding centers that provide care for only part of the day. The document presents a breakdown of…
Sun-to-power cells layer by layer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moseke, Dawn; Richards, Robin; Moseke, Daniel
Representing the Center for Interface Science: Solar Electric Materials (CISSEM), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CISSEM is to advance the understanding of interface science underlying solar energy conversion technologies based on organic and organic-inorganic hybrid materials; and to inspire, recruit and train future scientists and leaders in basic science of solar electric conversion.
Powering your car with sun light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cosgrove, Daniel; Brown, Nicole; Kiemle, Sarah
Representing the Center for Lignocellulose Structure and Formation (CLSF), this document is one of the entries in the Ten Hundred and One Word Challenge and was awarded "Overall Winner." As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CLSF is to dramatically increase our fundamental knowledge of the formation and physical interactions of bio-polymer networks in plant cell walls to provide a basis for improved methods for converting biomass into fuels.
Our On-Its-Head-and-In-Your-Dreams Approach Leads to Clean Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazmerski, Lawrence; Gwinner, Don; Hicks, Al
Representing the Center for Inverse Design (CID), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CID is to revolutionize the discovery of new materials by design with tailored properties through the development and application of a novel inverse design approach powered by theory guiding experiment, with an initial focus on solar energy conversion.
Controlling Light to Make the Most Energy From the Sun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callahan, Dennis; Corcoran, Chris; Eisler, Carissa
Representing the Light-Material Interactions in Energy Conversion (LMI) center, this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of LMI is to tailor the morphology, complex dielectric structure, and electronic properties of matter so as to sculpt the flow of sunlight and heat, enabling light conversion to electrical and chemical energy with unprecedented efficiency.
Stuff Moving Through Other Stuff - For Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
All EFRC effort,
Representing the Understanding Charge Separation and Transfer at Interfaces in Energy Materials (EFRC:CST), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. Understanding Charge Separation and Transfer at Interfaces in Energy Materials (EFRC:CST) is focused on advancing the understanding and design of nanostructured molecular materials for organic photovoltaic (OPV) and electrical energy storage (EES) applications.
The Walk Forward of Sun-Grown Green-Thing Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huetteman, Carl; Burroff-Murr, Pam; Anderson, Sarah
Representing the Center for Direct Catalytic Conversion of Biomass to Biofuels (C3Bio), this document is one of the entries in the Ten Hundred and One Word Challenge and was awarded "Best Tagline." As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of C3Bio at Purdue University is to integrate fundamental knowledge and enable technologies for catalytic conversion of engineered biomass to advanced biofuels and value-added products.
Is The Same bit of Light Exciting Two (or more) Parts of a Thing at the Same Time?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodknight, Joey; Aspuru-Guzik, Alan
Representing the Center for Excitonics (CE), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the CE is to understand the transport of charge carriers in synthetic disordered systems, which hold promise as new materials for conversion of solar energy to electricity and electrical energy storage.
Phonological Priming with Nonwords in Children with and without Specific Language Impairment
ERIC Educational Resources Information Center
Brooks, Patricia J.; Seiger-Gardner, Liat; Obeid, Rita; MacWhinney, Brian
2015-01-01
Purpose: The cross-modal picture-word interference task is used to examine contextual effects on spoken-word production. Previous work has documented lexical-phonological interference in children with specific language impairment (SLI) when a related distractor (e.g., bell) occurs prior to a picture to be named (e.g., a bed). In the current study,…
The Magic of Words: Teaching Vocabulary in the Early Childhood Classroom
ERIC Educational Resources Information Center
Neuman, Susan B.; Wright, Tanya S.
2014-01-01
Developing a large and rich vocabulary is central to learning to read. Children must know the words that make up written texts in order to understand them, especially as the vocabulary demands of content-related materials increase in the upper grades. Studies have documented that the size of a person's vocabulary is strongly related to how…
New Words Digest, Fall 1989-Summer 1990.
ERIC Educational Resources Information Center
New Words Digest, 1990
1990-01-01
This document consists of the four issues of the first annual volume of a quarterly magazine for new adult readers. It is aimed at adults reading at the fourth- to eighth-grade level. The magazine is designed to be self-motivating to the new reader or the learning disabled. Phonetic helps are provided for those words that do not conform to typical…
VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern
2009-08-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating "what if" scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., "reactor types" not individual reactors and "separation types" not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste. VISION comprises several Microsoft Excel input files, a Powersim Studio core, and several Microsoft Excel output files. All must be co-located in the same folder on a PC to function. We use Microsoft Excel 2003 and have not tested VISION with Microsoft Excel 2007. The VISION team uses both Powersim Studio 2005 and 2009, and it should work with either.
Content Abstract Classification Using Naive Bayes
NASA Astrophysics Data System (ADS)
Latif, Syukriyanto; Suwardoyo, Untung; Aldrin Wihelmus Sanadi, Edwin
2018-03-01
This study aims to classify abstract content based on the use of the highest number of words in the abstract content of English-language journals. The research uses text mining technology, which extracts text data to search for information in a set of documents. Abstract content for 120 documents was downloaded from www.computer.org. The data are grouped into three categories: DM (Data Mining), ITS (Intelligent Transport System) and MM (Multimedia). The system was built using the naive Bayes algorithm to classify abstract journals, with a feature selection process using term weighting to give a weight to each word. Dimensionality reduction techniques were used to remove words that rarely appear in each document, based on dimensional reduction test parameters of 10%-90% of the 5,344 words. The performance of the classification system was tested using a confusion matrix based on a comparison of training data and test data. The results showed that the best classification results were obtained with 75% training data and 25% test data from the total data. Accuracy rates for the DM, ITS and MM categories were 100%, 100%, and 86%, respectively, with a dimension reduction parameter of 30% and a learning rate between 0.1 and 0.5.
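As a rough illustration of the pipeline this abstract describes (term weighting, document-frequency-based dimensionality reduction, and a naive Bayes classifier with a 75%/25% split), the following sketch uses scikit-learn; the example abstracts, category labels, and pruning thresholds are invented placeholders, not the authors' data or parameters.

```python
# Minimal sketch of abstract classification with TF-IDF weighting and naive Bayes.
# The documents and labels below are invented placeholders, not the paper's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

docs = [
    "frequent itemset mining over large transactional databases",
    "clustering and classification of high dimensional data",
    "traffic flow prediction for intelligent transport systems",
    "vehicle routing with real time road sensor networks",
    "video streaming quality and multimedia content delivery",
    "image and audio compression for multimedia applications",
] * 20  # repeated so a train/test split is possible
labels = ["DM", "DM", "ITS", "ITS", "MM", "MM"] * 20

# Term weighting with document-frequency pruning (a crude stand-in for the
# paper's dimensionality-reduction step): drop very rare and very common terms.
vectorizer = TfidfVectorizer(min_df=2, max_df=0.9)
X = vectorizer.fit_transform(docs)

# 75% training / 25% test split, as reported in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

clf = MultinomialNB().fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred, labels=["DM", "ITS", "MM"]))
```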
An IR-Based Approach Utilizing Query Expansion for Plagiarism Detection in MEDLINE.
Nawab, Rao Muhammad Adeel; Stevenson, Mark; Clough, Paul
2017-01-01
The identification of duplicated and plagiarized passages of text has become an increasingly active area of research. In this paper, we investigate methods for plagiarism detection that aim to identify potential sources of plagiarism from MEDLINE, particularly when the original text has been modified through the replacement of words or phrases. A scalable approach based on Information Retrieval is used to perform candidate document selection (the identification of a subset of potential source documents given a suspicious text) from MEDLINE. Query expansion is performed using the UMLS Metathesaurus to deal with situations in which original documents are obfuscated. Various approaches to Word Sense Disambiguation are investigated to deal with cases where there are multiple Concept Unique Identifiers (CUIs) for a given term. Results using the proposed IR-based approach outperform a state-of-the-art baseline based on Kullback-Leibler Distance.
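A minimal sketch of the candidate-selection idea in this abstract: index the source collection, expand the suspicious passage's terms with synonyms, and rank candidates by similarity. The tiny corpus and the synonym table below stand in for MEDLINE and the UMLS Metathesaurus, which are of course not bundled here.

```python
# Sketch of candidate document selection with query expansion.
# The corpus and synonym map are toy placeholders for MEDLINE and the UMLS.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources = {
    "doc1": "myocardial infarction is associated with elevated troponin levels",
    "doc2": "influenza vaccination reduces hospital admissions in the elderly",
    "doc3": "heart attack patients show raised cardiac biomarkers",
}
synonyms = {"heart": ["myocardial", "cardiac"], "attack": ["infarction"]}

def expand(query: str) -> str:
    """Append known synonyms for each query term (stand-in for UMLS expansion)."""
    terms = query.lower().split()
    extra = [s for t in terms for s in synonyms.get(t, [])]
    return " ".join(terms + extra)

vectorizer = TfidfVectorizer()
doc_ids = list(sources)
doc_matrix = vectorizer.fit_transform([sources[d] for d in doc_ids])

suspicious = "heart attack and raised biomarkers"
query_vec = vectorizer.transform([expand(suspicious)])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Rank candidate source documents by similarity to the expanded query.
for doc_id, score in sorted(zip(doc_ids, scores), key=lambda x: -x[1]):
    print(f"{doc_id}: {score:.3f}")
```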
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to efficiently use SSA to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations can be used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard Text REtrieval Conference (TREC) collections. Experimental results show that the proposed models consistently outperform the existing Wikipedia-based retrieval methods.
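The combination described here can be pictured as an interpolation of a word-level language model with a concept-level one. The sketch below is only a generic query-likelihood mixture with invented toy counts; the SSA concept mapping is mimicked by a hand-written word-to-concept table and the smoothing and weighting choices are assumptions, not the paper's estimation scheme.

```python
# Sketch: mixing a bag-of-words language model with a concept-based one
# for query-likelihood scoring. All counts and the word->concept map are toy data.
from collections import Counter

word_to_concept = {"car": "vehicle", "automobile": "vehicle", "engine": "machinery"}

def language_model(tokens, vocab_size, mu=0.5):
    """Unigram model with simple additive smoothing."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return lambda t: (counts[t] + mu) / (total + mu * vocab_size)

def score(query, doc, lam=0.6):
    """Sum, over query terms, a mixture of word-level and concept-level likelihoods."""
    q_words = query.split()
    d_words = doc.split()
    lm_w = language_model(d_words, vocab_size=1000)
    lm_c = language_model([word_to_concept.get(w, w) for w in d_words], vocab_size=500)
    s = 0.0
    for w in q_words:
        s += lam * lm_w(w) + (1 - lam) * lm_c(word_to_concept.get(w, w))
    return s

docs = ["the automobile engine was repaired", "the horse ran across the field"]
for d in docs:
    print(f"{score('car engine', d):.4f}  {d}")
```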
Use of Co-occurrences for Temporal Expressions Annotation
NASA Astrophysics Data System (ADS)
Craveiro, Olga; Macedo, Joaquim; Madeira, Henrique
The annotation or extraction of temporal information from text documents is becoming increasingly important in many natural language processing applications such as text summarization, information retrieval, and question answering. This paper presents an original method for easy recognition of temporal expressions in text documents. The method creates semantically classified temporal patterns, using word co-occurrences obtained from training corpora and a pre-defined set of seed keywords derived from the temporal references of the language used. Participation in a Portuguese named-entity evaluation contest showed promising effectiveness and efficiency results. This approach can be adapted to recognize other types of expressions or languages, within other contexts, by defining suitable word sets and training corpora.
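A small sketch of the co-occurrence idea: starting from seed temporal keywords, collect the words that frequently appear next to them in a training corpus and use the result as a lexicon for flagging candidate temporal expressions. The seed list, mini-corpus, and frequency threshold are invented for illustration and are not the authors' resources.

```python
# Sketch: building a temporal lexicon from word co-occurrences with seed keywords,
# then flagging temporal-pattern words in new sentences. Toy data throughout.
from collections import Counter

seeds = {"january", "monday", "yesterday", "year"}
corpus = [
    "the meeting was held last january in lisbon",
    "results improved every year since the merger",
    "he arrived early last monday morning",
    "prices fell sharply yesterday afternoon",
]

# Count words that occur immediately before or after a seed keyword.
cooc = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok in seeds:
            for j in (i - 1, i + 1):
                if 0 <= j < len(tokens):
                    cooc[tokens[j]] += 1

# Keep frequent neighbours as temporal-pattern words (the threshold is arbitrary).
lexicon = seeds | {w for w, c in cooc.items() if c >= 2}

def tag(sentence):
    return [tok for tok in sentence.split() if tok in lexicon]

print(sorted(lexicon))
print(tag("the report is due early next monday"))
```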
Hamada, Megumi; Koda, Keiko
2011-04-01
Although the role of the phonological loop in word-retention is well documented, research in Chinese character retention suggests the involvement of non-phonological encoding. This study investigated whether the extent to which the phonological loop contributes to learning and remembering visually introduced words varies between college-level Chinese ESL learners (N = 20) and native speakers of English (N = 20). The groups performed a paired associative learning task under two conditions (control versus articulatory suppression) with two word types (regularly spelled versus irregularly spelled words) differing in degree of phonological accessibility. The results demonstrated that both groups' recall declined when the phonological loop was made less available (with irregularly spelled words and in the articulatory suppression condition), but the decline was greater for the native group. These results suggest that word learning entails phonological encoding uniformly across learners, but the contribution of phonology varies among learners with diverse linguistic backgrounds.
Orena, E F; Caldiroli, D; Acerbi, F; Barazzetta, I; Papagno, C
2018-06-05
Neuropsychological, neuroimaging and electrophysiological studies demonstrate that abstract and concrete word processing relies not only on the activity of a common bilateral network but also on dedicated networks. The neuropsychological literature has shown that a selective sparing of abstract relative to concrete words can be documented in lesions of the left anterior temporal regions. We investigated concrete and abstract word processing in 10 patients undergoing direct electrical stimulation (DES) for brain mapping during awake surgery in the left hemisphere. A lexical decision and a concreteness judgment task were added to the neuropsychological assessment during intra-operative monitoring. On the concreteness judgment, DES delivered over the inferior frontal gyrus significantly decreased abstract word accuracy while accuracy for concrete words decreased when the anterior temporal cortex was stimulated. These results are consistent with a lexical-semantic model that distinguishes between concrete and abstract words related to different neural substrates in the left hemisphere.
Seeking Feng Shui in US-China Rhetoric - Words Matter
2017-03-31
2017. Distribution A: approved for public release, distribution unlimited. Disclaimer: the views expressed in this academic research paper are those... leaders' rhetoric conflates contingency-planning threat analysis with U.S.-China policy and is inconsistent with the threats China poses. Not only is... national strategy documents can be viewed as political documents that may not represent true U.S. intent, both sets of documents still require adherence to...
Methods and means used in programming intelligent searches of technical documents
NASA Technical Reports Server (NTRS)
Gross, David L.
1993-01-01
In order to meet the data research requirements of the Safety, Reliability & Quality Assurance activities at Kennedy Space Center (KSC), a new computer search method for technical data documents was developed. By their very nature, technical documents are partially encrypted because of the author's use of acronyms, abbreviations, and shortcut notations. This problem of computerized searching is compounded at KSC by the volume of documentation that is produced during normal Space Shuttle operations. The Centralized Document Database (CDD) is designed to solve this problem. It provides a common interface to an unlimited number of files of various sizes, with the capability to perform diversified types and levels of data searches. The heart of the CDD is the nature and capability of its search algorithms. The most complex form of search that the program uses involves a domain-specific database of acronyms, abbreviations, synonyms, and word frequency tables. This database, along with basic sentence parsing, is used to convert a request for information into a relational network. This network is used as a filter on the original document file to determine the most likely locations for the data requested. This type of search will locate information that traditional techniques (i.e., Boolean structured keyword searching) would not find.
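A toy sketch of the kind of domain-aware search this abstract outlines: a query is expanded through an acronym/synonym table before matching against documents, so that shorthand such as "SRB" can still find documents that spell the term out. The dictionary and documents below are made up, not KSC data, and the overlap score is a deliberately simple stand-in for the relational-network filter the abstract describes.

```python
# Sketch: expanding a query with a domain acronym/synonym dictionary before
# scoring documents by term overlap. The dictionary and documents are invented.
acronyms = {
    "srb": ["solid", "rocket", "booster"],
    "qa": ["quality", "assurance"],
    "o-ring": ["seal"],
}

documents = {
    "report_1": "solid rocket booster seal inspection after quality assurance review",
    "report_2": "payload bay door actuator maintenance schedule",
}

def expand_terms(query):
    terms = []
    for tok in query.lower().split():
        terms.append(tok)
        terms.extend(acronyms.get(tok, []))
    return set(terms)

def search(query):
    q = expand_terms(query)
    scored = []
    for name, text in documents.items():
        overlap = len(q & set(text.split()))
        scored.append((overlap, name))
    return sorted(scored, reverse=True)

print(search("SRB QA findings"))  # expansion lets the query hit report_1
```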
Contemporary issues in HIM. The application layer--III.
Wear, L L; Pinkert, J R
1993-07-01
We have seen document preparation systems evolve from basic line editors through powerful, sophisticated desktop publishing programs. This component of the application layer is probably one of the most used, and most readily identifiable. Ask grade school children nowadays, and many will tell you that they have written a paper on a computer. Next month will be a "fun" tour through a number of other application programs we find useful. They will range from a simple notebook reminder to a sophisticated photograph processor. Application layer: Software targeted for the end user, focusing on a specific application area, and typically residing in the computer system as distinct components on top of the OS. Desktop publishing: A document preparation program that begins with the text features of a word processor, then adds the ability for a user to incorporate outputs from a variety of graphic programs, spreadsheets, and other applications. Line editor: A document preparation program that manipulates text in a file on the basis of numbered lines. Word processor: A document preparation program that can, among other things, reformat sections of documents, move and replace blocks of text, use multiple character fonts, automatically create a table of contents and index, create complex tables, and combine text and graphics.
The influence of autonomic arousal and semantic relatedness on memory for emotional words.
Buchanan, Tony W; Etzel, Joset A; Adolphs, Ralph; Tranel, Daniel
2006-07-01
Increased memory for emotional stimuli is a well-documented phenomenon. Emotional arousal during the encoding of a stimulus is one mediator of this memory enhancement. Other variables such as semantic relatedness also play a role in the enhanced memory for emotional stimuli, especially for verbal stimuli. Research has not addressed the contributions of emotional arousal, indexed by self-report and autonomic measures, and semantic relatedness to memory performance. Twenty young adults (10 women) were presented with neutral unrelated words, school-related words, moderately arousing emotional words, and highly arousing taboo words while heart rate and skin conductance were measured. Memory was tested with free recall and recognition tests. Results showed that taboo words, which were both semantically related and highly arousing, were remembered best. School-related words, which were high in semantic relatedness but low in arousal, were remembered better than the moderately arousing emotional words and the semantically unrelated neutral words. Psychophysiological responses showed that within the moderately arousing emotional and neutral word groups, those words eliciting greater autonomic activity were better remembered than words that did not elicit such activity. These results demonstrate additive effects of semantic relatedness and emotional arousal on memory. Relatedness confers an advantage to memory (as in the school-related words), but the combination of relatedness and arousal (as in the taboo words) results in the best memory performance.
Biomedical information retrieval across languages.
Daumke, Philipp; Markü, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger
2007-06-01
This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
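The subword idea in this abstract can be illustrated with a greedy longest-match segmenter over a tiny bilingual subword lexicon; segmenting a term such as "gastroenteritis" into known fragments and mapping each fragment yields a target-language query. The lexicon below is an invented placeholder, far smaller than the six-language resource the paper describes, and greedy matching is only one possible segmentation strategy.

```python
# Sketch: greedy subword segmentation and translation for cross-language retrieval.
# The subword lexicon below is an invented toy, not the authors' multilingual resource.
subword_lexicon = {
    "gastro": "magen",      # German fragments used purely as an example
    "enter": "darm",
    "itis": "entzuendung",
    "cardi": "herz",
    "pathy": "krankheit",
}

def segment(word, lexicon):
    """Greedy longest-match segmentation into known subwords."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in lexicon:
                pieces.append(word[i:j])
                i = j
                break
        else:
            i += 1  # skip characters that match no subword
    return pieces

def translate_query(query):
    out = []
    for word in query.lower().split():
        out.extend(subword_lexicon[s] for s in segment(word, subword_lexicon))
    return " ".join(out)

print(segment("gastroenteritis", subword_lexicon))   # ['gastro', 'enter', 'itis']
print(translate_query("gastroenteritis cardiopathy"))
```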
When and how do GPs record vital signs in children with acute infections? A cross-sectional study
Blacklock, Claire; Haj-Hassan, Tanya Ali; Thompson, Matthew J
2012-01-01
Background: NICE recommendations and evidence from ambulatory settings promote the use of vital signs in identifying serious infections in children. This appears to differ from usual clinical practice, where GPs report measuring vital signs infrequently. Aim: To identify the frequency of vital sign documentation by GPs in the assessment of children with acute infections in primary care. Design and setting: Observational study in 15 general practice surgeries in Oxfordshire and Somerset, UK. Method: A standardised proforma was used to extract consultation details, including documentation of numerical vital signs and words or phrases used by the GP in assessing vital signs, for 850 children aged 1 month to 16 years presenting with acute infection. Results: Of the children presenting with acute infections, 269 (31.6%) had one or more numerical vital signs recorded; however, the GP recording rate improved if free-text proxies were also considered: at least one vital sign was then recorded in over half (54.1%) of children. In those with recorded numerical values for vital signs, the most frequent was temperature (210, 24.7%), followed by heart rate (62, 7.3%), respiratory rate (58, 6.8%), and capillary refill time (36, 4.2%). Words or phrases for vital signs were documented infrequently (temperature 17.6%, respiratory rate 14.6%, capillary refill time 12.5%, and heart rate 0.5%). Text relating to global assessment was documented in 313/850 (36.8%) of consultations. Conclusion: GPs record vital signs using words and phrases as well as numerical methods, although overall documentation of vital signs is infrequent in children presenting with acute infections. PMID:23265227
Quek, June; Brauer, Sandra G; Treleaven, Julia; Clark, Ross A
2017-09-01
This study aims to investigate the concurrent validity and intrarater reliability of the Microsoft Kinect to measure thoracic kyphosis against the Flexicurve. Thirty-three healthy individuals (age: 31±11.0 years, men: 17, height: 170.2±8.2 cm, weight: 64.2±12.0 kg) participated, with 29 re-examined for intrarater reliability 1-7 days later. Thoracic kyphosis was measured using the Flexicurve and the Microsoft Kinect consecutively in both standing and sitting positions. Both the kyphosis index and angle were calculated. The Microsoft Kinect showed excellent concurrent validity (intraclass correlation coefficient=0.76-0.82) and reliability (intraclass correlation coefficient=0.81-0.98) for measuring thoracic kyphosis (angle and index) in both standing and sitting postures. This study is the first to show that the Microsoft Kinect has excellent validity and intrarater reliability to measure thoracic kyphosis, which is promising for its use in the clinical setting.
The present status and problems in document retrieval system : document input type retrieval system
NASA Astrophysics Data System (ADS)
Inagaki, Hirohito
Office automation (OA) has brought many changes. Many documents are now maintained in electronic filing systems. Therefore, it is necessary to establish an efficient document retrieval system to extract useful information. Current document retrieval systems use simple word matching, syntactic matching, or semantic matching to obtain high retrieval efficiency. On the other hand, document retrieval systems using special hardware devices, such as ISSP, were developed aiming at high-speed retrieval. Since these systems can accept only a single sentence or keywords as input, it is difficult to express the searcher's request. We demonstrate a document-input-type retrieval system, which can directly accept a document as input and search for similar documents in a document database.
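A minimal sketch of document-as-query retrieval as described here: instead of a short keyword query, an entire document is vectorised and the most similar documents in the collection are returned. The toy texts below stand in for an electronic filing system; TF-IDF with cosine similarity is an assumed, generic matching scheme rather than the system's actual one.

```python
# Sketch: using a whole document as the query and ranking the collection by
# cosine similarity. The collection below is a toy placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

collection = [
    "quarterly sales report for the northern region",
    "minutes of the network infrastructure planning meeting",
    "annual sales summary and revenue forecast by region",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(collection)

query_document = "draft sales report with regional revenue figures"
query_vec = vectorizer.transform([query_document])

scores = cosine_similarity(query_vec, matrix).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {collection[idx]}")
```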
Microsoft health patient journey demonstrator.
Disse, Kirsten
2008-01-01
As health care becomes more reliant on electronic systems, there is a need to standardise display elements to promote patient safety and clinical efficiency. The Microsoft Health Common User Interface (MSCUI) programme, developed by Microsoft and the National Health Service (NHS), was born out of this need and creates guidance and controls designed to increase patient safety and clinical effectiveness through consistent interface treatments. The Microsoft Health Patient Journey Demonstrator is a prototype tool designed to provide exemplar implementations of MSCUI guidance on a Microsoft platform. It is a targeted glimpse at a visual interface for the integration of health-relevant information, including electronic medical records. We built the demonstrator in Microsoft Silverlight 2, our application technology which brings desktop functionality and enriched levels of user experience to health settings worldwide via the internet. We based the demonstrator on an easily recognisable clinical scenario which offered us the most scope for demonstrating MSCUI guidance and innovation. The demonstrator is structured in three sections (administration, primary care and secondary care), each of which illustrates the activities associated with the setting relevant to our scenario. The demonstrator is published on the MSCUI website, www.mscui.net. The MSCUI Patient Journey Demonstrator has been successful in raising awareness and increasing interest in the CUI programme.
Why American business demands twenty-first century learning: A company perspective.
Knox, Allyson
2006-01-01
Microsoft is an innovative corporation demonstrating the kind and caliber of job skills needed in the twenty-first century. It demonstrates its commitment to twenty-first century skills by holding its employees accountable to a set of core competencies, enabling the company to run effectively. The author explores how Microsoft's core competencies parallel the Partnership for 21st Century Skills learning frameworks. Both require advanced problem-solving skills and a passion for technology, both expect individuals to be able to work in teams, both look for a love of learning, and both call for the self-confidence to honestly self-evaluate. Microsoft also works to cultivate twenty-first century skills among future workers, investing in education to help prepare young people for competitive futures. As the need for digital literacy has become imperative, technology companies have taken the lead in facilitating technology training by partnering with schools and communities. Microsoft is playing a direct role in preparing students for what lies ahead in their careers. To further twenty-first century skills, or core competencies, among the nation's youth, Microsoft has established Partners in Learning, a program that helps education organizations build partnerships that leverage technology to improve teaching and learning. One Partners in Learning grantee is Global Kids, a nonprofit organization that trains students to design online games focused on global social issues resonating with civic and global competencies. As Microsoft believes the challenges of competing in today's economy and teaching today's students are substantial but not insurmountable, such partnerships and investments demonstrate Microsoft's belief in and commitment to twenty-first century skills.
ERIC Educational Resources Information Center
Chenail, Ronald J.
2012-01-01
In the first of a series of "how-to" essays on conducting qualitative data analysis, Ron Chenail points out the challenges of determining units to analyze qualitatively when dealing with text. He acknowledges that although we may read a document word-by-word or line-by-line, we need to adjust our focus when processing the text for purposes of…
Cooperative Educational Abstracting Service (CEAS). (Abstract Series No. 103-122, March 1972).
ERIC Educational Resources Information Center
International Bureau of Education, Geneva (Switzerland).
This document is a compilation of 20 English-language abstracts concerning various aspects of education in Switzerland, New Zealand, Chile, Poland, Argentina, Pakistan, Malaysia, Thailand, and France. The abstracts are informative in nature, each being approximately 1,500 words in length. They are based on documents submitted by each of the…
ERIC Educational Resources Information Center
Consejo Nacional Tecnico de la Educacion (Mexico).
This document is an English-language abstract (approximately 1,500 words) of two booklets on Mexican educational reform. The first booklet cites the parts of the Mexican Constitution dealing with education, the legal foundation of Mexican education, stipulating that it shall be universal, democratic, national, compulsory, free and immune from…
Action Learning. Symposium 21. [Concurrent Symposium Session at AHRD Annual Conference, 2000].
ERIC Educational Resources Information Center
2000
This document contains three papers from a symposium on action learning that was conducted as part of a conference on human resource development (HRD). "Searching for Meaning in Complex Action Learning Data: What Environments, Acts, and Words Reveal" (Verna J. Willis) analyzes complex action learning documents produced as course…
A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine
NASA Astrophysics Data System (ADS)
Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu
In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. In a selective meta-search engine, a method is needed for selecting appropriate search engines for users' queries. Most existing methods use statistical data such as document frequency. These methods may select inappropriate search engines if a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from the documents in a search engine and is used as a source description of that search engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection by considering relationships between terms and overcomes the problems caused by polysemous words. Further, our method does not require a centralized broker maintaining data, such as document frequency, for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method compared to other existing methods.
Building a Road from Light to Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Anton; Bilby, David; Barito, Adam
Representing the Center for Solar and Thermal Energy Conversion (CSTEC), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of the Center for Solar and Thermal Energy Conversion (CSTEC) is to design and to synthesize new materials for high efficiency photovoltaic (PV) and thermoelectric (TE) devices, predicated on new fundamental insights into equilibrium and non-equilibrium processes, including quantum phenomena, that occur in materials over various spatial and temporal scales.
Putting more power in your pocket
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Karena
Representing the Northeastern Center for Chemical Energy Storage (NECCES), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of NECCES is to identify the key atomic-scale processes which govern electrode function in rechargeable batteries, over a wide range of time and length scales, via the development and use of novel characterization and theoretical tools, and to use this information to identify and design new battery systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDaniel, Hunter; Beard, Matthew C; Wheeler, Lance M
Representing the Center for Advanced Solar Photophysics (CASP), this document is one of the entries in the Ten Hundred and One Word Challenge and was awarded “Overall Winner Runner-up and People’s Choice Winner.” As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of CASP is to explore and exploit the unique physics of nanostructured materials to boost the efficiency of solar energy conversion through novel light-matter interactions, controlled excited-state dynamics, and engineered carrier-carrier coupling.
How are the energy waves blocked on the way from hot to cold?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Xianming; He, Lingfeng; Khafizov, Marat
Representing the Center for Materials Science of Nuclear Fuel (CMSNF), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of CMSNF is to develop an experimentally validated multi-scale computational capability for the predictive understanding of the impact of microstructure on thermal transport in nuclear fuel under irradiation, with ultimate application to UO2 as a model system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cropley, Cecelia
Representing the Center for Catalytic Hydrocarbon Functionalization (CCHF), this document is one of the entries in the Ten Hundred and One Word Challenge. As part of the challenge, the 46 Energy Frontier Research Centers were invited to represent their science in images, cartoons, photos, words and original paintings, but any descriptions or words could only use the 1000 most commonly used words in the English language, with the addition of one word important to each of the EFRCs and the mission of DOE: energy. The mission of CCHF is to develop, validate, and optimize new methods to rearrange the bonds of hydrocarbons, implement enzymatic strategies into synthetic systems, and design optimal environments for catalysts that can be used to reversibly functionalize hydrocarbons, especially for more efficient use of natural gas including low temperature conversion to liquid fuels.
NASA Astrophysics Data System (ADS)
Niebuhr, Cole
2018-04-01
Papers published in the astronomical community, particularly in the field of double star research, often contain plots that display the positions of the component stars relative to each other on a Cartesian coordinate plane. Due to the complexities of plotting a three-dimensional orbit into a two-dimensional image, it is often difficult to include an accurate reproduction of the orbit for comparison purposes. Methods to circumvent this obstacle do exist; however, many of these protocols result in low-quality blurred images or require specific and often expensive software. Here, a method is reported using Microsoft Paint and Microsoft Excel to produce high-quality images with an accurate reproduction of a partial orbit.
Engineering Documentation and Data Control
NASA Technical Reports Server (NTRS)
Matteson, Michael J.; Bramley, Craig; Ciaruffoli, Veronica
2001-01-01
Mississippi Space Services (MSS), the facility services contractor for NASA's John C. Stennis Space Center (SSC), is utilizing technology to improve engineering documentation and data control. Two identified improvement areas, labor-intensive documentation research and outdated drafting standards, were targeted as top priority. MSS selected AutoManager(R) WorkFlow from Cyco Software to manage engineering documentation. The software is currently installed on over 150 desktops. The outdated SSC drafting standard was written for pre-CADD drafting methods, in other words, board drafting. Implementation of COTS software solutions to manage engineering documentation and update the drafting standard resulted in significant increases in productivity by reducing the time spent searching for documents.
Level statistics of words: Finding keywords in literary texts and symbolic sequences
NASA Astrophysics Data System (ADS)
Carpena, P.; Bernaola-Galván, P.; Hackenberg, M.; Coronado, A. V.; Oliver, J. L.
2009-03-01
Using a generalization of the level statistics analysis of quantum disordered systems, we present an approach able to automatically extract keywords from literary texts. Our approach takes into account not only the frequencies of the words present in the text but also their spatial distribution along the text, and is based on the fact that relevant words are significantly clustered (i.e., they self-attract each other), while irrelevant words are distributed randomly in the text. Since a reference corpus is not needed, our approach is especially suitable for single documents for which no a priori information is available. In addition, we show that our method also works on generic symbolic sequences (continuous texts without spaces), thus suggesting its general applicability.
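The clustering criterion can be sketched directly: for each word, compute the normalised standard deviation of the gaps between its successive occurrences and compare it with what a random placement would give, so that words whose occurrences cluster stand out. This is a simplified reading of the approach, with a toy text, an assumed minimum of three occurrences, and no significance correction for low counts.

```python
# Sketch: scoring words by how clustered their occurrences are along the text.
# Scores well above 1 suggest self-attraction (clustering); random placement gives ~1.
import statistics

text = ("the ship sailed on and on the captain watched the whale "
        "the whale surfaced again the whale struck the ship hard "
        "days passed quietly and the crew rested and mended sails").split()

def clustering_score(word, tokens):
    positions = [i for i, t in enumerate(tokens) if t == word]
    if len(positions) < 3:
        return None  # too few occurrences for a spacing statistic
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    mean_gap = statistics.mean(gaps)
    sigma = statistics.pstdev(gaps) / mean_gap        # normalised spacing std-dev
    p = len(positions) / len(tokens)
    sigma_random = (1 - p) ** 0.5                     # expectation for random placement
    return sigma / sigma_random

for w in sorted(set(text)):
    s = clustering_score(w, text)
    if s is not None:
        print(f"{w:8s} {s:.2f}")
```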
NASA Technical Reports Server (NTRS)
Khan, Ahmed
2010-01-01
The International Space Station (ISS) Operations Planning Team, Mission Control Centre and Mission Automation Support Network (MAS) have all evolved over the years to use commercial web-based technologies to create a configurable electronic infrastructure to manage the complex network of real-time planning, crew scheduling, resource and activity management as well as onboard document and procedure management required to co-ordinate ISS assembly, daily operations and mission support. While these Web technologies are classified as non-critical in nature, their use is part of an essential backbone of daily operations on the ISS and allows the crew to operate the ISS as a functioning science laboratory. The rapid evolution of the internet from 1998 (when ISS assembly began) to today, along with the nature of continuous manned operations in space, has presented a unique challenge in terms of software engineering and system development. In addition, the use of a wide array of competing internet technologies (including commercial technologies such as .NET and Java) and the special requirements of having to support this network, both nationally among various control centres for International Partners (IPs), as well as onboard the station itself, have created special challenges for the MCC Web Tools Development Team, software engineers and flight controllers who implement and maintain this system. This paper presents an overview of some of these operational challenges, and the evolving nature of the solutions and the future use of COTS-based rich internet technologies in manned space flight operations. In particular, this paper will focus on the use of Microsoft's .NET API to develop web-based operational tools, the use of XML-based service-oriented architectures (SOA) that needed to be customized to support mission operations, the maintenance of a Microsoft IIS web server onboard the ISS, the OpsLan, and functional-oriented Web design with AJAX
Combining approaches to on-line handwriting information retrieval
NASA Astrophysics Data System (ADS)
Peña Saldarriaga, Sebastián; Viard-Gaudin, Christian; Morin, Emmanuel
2010-01-01
In this work, we propose to combine two quite different approaches for retrieving handwritten documents. Our hypothesis is that different retrieval algorithms should retrieve different sets of documents for the same query. Therefore, significant improvements in retrieval performance can be expected. The first approach is based on information retrieval techniques carried out on the noisy texts obtained through handwriting recognition, while the second approach is recognition-free, using a word spotting algorithm. Results show that for texts having a word error rate (WER) lower than 23%, the performance obtained with the combined system is close to the performance obtained on clean digital texts. In addition, for poorly recognized texts (WER > 52%), an improvement of nearly 17% can be observed with respect to the best available baseline method.
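One simple way to realise the combination described here is late fusion of the two systems' per-document scores (a CombSUM-style weighted sum after min-max normalisation). The scores below are invented, and the equal weighting is an assumption rather than the authors' scheme.

```python
# Sketch: late fusion of two retrieval systems' scores for the same query
# (recognition-based IR vs. word spotting). Scores are invented placeholders.
def minmax(scores):
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def fuse(ir_scores, spotting_scores, w_ir=0.5):
    ir_n, sp_n = minmax(ir_scores), minmax(spotting_scores)
    docs = set(ir_n) | set(sp_n)
    return {d: w_ir * ir_n.get(d, 0.0) + (1 - w_ir) * sp_n.get(d, 0.0) for d in docs}

ir = {"letter_03": 12.1, "letter_07": 9.4, "letter_11": 3.2}
spotting = {"letter_07": 0.92, "letter_11": 0.88, "letter_19": 0.40}

for doc, score in sorted(fuse(ir, spotting).items(), key=lambda x: -x[1]):
    print(f"{doc}: {score:.3f}")
```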
Arabic handwritten: pre-processing and segmentation
NASA Astrophysics Data System (ADS)
Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin
2012-06-01
This paper is concerned with the pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words, many sub-words have one or more associated diacritics that are not connected to the sub-word's body, and there can be multiple instances of overlap between sub-words. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlapping between words and sub-words. We also investigate two approaches to the pre-processing tasks of estimating sub-word baselines and determining parameters that yield appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part. We also develop a new incremental rotation procedure, performed on sub-words, that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals by conducting extensive experiments on publicly available databases and in-house created databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that could benefit from analysis of printed text.
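The baseline-estimation step mentioned here (linear regression over a sub-word's foreground pixels) can be sketched with NumPy: fit a line to the ink-pixel coordinates and use the slope to derive a correction angle, plus the centroid used for alignment. The synthetic pixel array below is a stand-in for a real binarised sub-word image, and the details are an illustration rather than the authors' exact procedure.

```python
# Sketch: estimating a sub-word baseline/slope by linear regression over the
# coordinates of its foreground (ink) pixels. The image here is synthetic.
import numpy as np

# Build a small synthetic binary image of a slanted stroke (1 = ink).
img = np.zeros((40, 120), dtype=np.uint8)
for x in range(10, 110):
    y = int(25 - 0.12 * (x - 10))            # a gently rising stroke
    img[y - 1:y + 2, x] = 1

ys, xs = np.nonzero(img)                     # coordinates of ink pixels
slope, intercept = np.polyfit(xs, ys, deg=1) # least-squares line y = slope*x + b

angle_deg = np.degrees(np.arctan(slope))
print(f"estimated baseline slope: {slope:.3f} (rotate by {-angle_deg:.1f} deg to level it)")

# Centroid of the sub-word, usable for aligning sub-words to a common baseline.
cx, cy = xs.mean(), ys.mean()
print(f"sub-word centroid: ({cx:.1f}, {cy:.1f})")
```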
Advances to the development of a basic Mexican sign-to-speech and text language translator
NASA Astrophysics Data System (ADS)
Garcia-Bautista, G.; Trujillo-Romero, F.; Diaz-Gonzalez, G.
2016-09-01
Sign Language (SL) is the basic alternative communication method for deaf people. However, most hearing people have trouble understanding SL, making communication with deaf people almost impossible and excluding them from daily activities. In this work we present an automatic, basic, real-time sign language translator capable of recognizing a basic list of Mexican Sign Language (MSL) signs for 10 meaningful words, letters (A-Z) and numbers (1-10) and translating them into speech and text. The signs were collected from a group of 35 MSL signers and executed in front of a Microsoft Kinect™ sensor. The hand gesture recognition system uses the RGB-D camera to build and store point cloud data, color and skeleton tracking information. In this work we propose a method to obtain the representative hand trajectory pattern information. We use a Euclidean segmentation method to obtain the hand shape and Hierarchical Centroid as the feature extraction method for images of numbers and letters. A pattern recognition method based on a Back Propagation Artificial Neural Network (ANN) is used to interpret the hand gestures. Finally, we use the K-fold cross-validation method for the training and testing stages. Our results achieve an accuracy of 95.71% on words, 98.57% on numbers and 79.71% on letters. In addition, an interactive user interface was designed to present the results in voice and text format.
Morphable Word Clouds for Time-Varying Text Data Visualization.
Chi, Ming-Te; Lin, Shih-Syun; Chen, Shiang-Yi; Lin, Chao-Hung; Lee, Tong-Yee
2015-12-01
A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focuses on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues for human visual systems in capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, thus yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of a time-varying text data from the shape transition, and people can also observe the details from the word clouds in frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.
A Tale of Two Observing Systems: Interoperability in the World of Microsoft Windows
NASA Astrophysics Data System (ADS)
Babin, B. L.; Hu, L.
2008-12-01
Louisiana Universities Marine Consortium's (LUMCON) and Dauphin Island Sea Lab's (DISL) Environmental Monitoring Systems provide a unified coastal ocean observing system. These two systems are mirrored to maintain autonomy while offering an integrated data sharing environment. Both systems collect data via Campbell Scientific data loggers, store the data in Microsoft SQL servers, and disseminate the data in real time on the World Wide Web via Microsoft Internet Information Servers and Active Server Pages (ASP). The use of Microsoft Windows technologies presented many challenges to these observing systems as open-source tools for interoperability grow, since the current open-source tools often require the installation of additional software. In order to make data available through common standards formats, "home grown" software has been developed. One example is software that generates XML files for transmission to the National Data Buoy Center (NDBC). OOSTethys partners develop, test and implement easy-to-use, open-source, OGC-compliant software, and have created a working prototype of networked, semantically interoperable, real-time data systems. Partnering with OOSTethys, we are developing a cookbook to implement OGC web services. The implementation will be written in ASP, will run in a Microsoft operating system environment, and will serve data via Sensor Observation Services (SOS). This cookbook will give observing systems running Microsoft Windows the tools to easily participate in the Open Geospatial Consortium (OGC) Oceans Interoperability Experiment (OCEANS IE).
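As an illustration of the kind of "home grown" export code the entry mentions, the sketch below builds a tiny observation document with Python's standard library; the element names are placeholders and do not follow the actual NDBC or SOS schemas.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def observation_xml(station_id: str, variable: str, value: float, units: str) -> str:
    """Serialize one observation as XML; tag names here are illustrative only."""
    root = ET.Element("observation")
    ET.SubElement(root, "station").text = station_id
    ET.SubElement(root, "time").text = datetime.now(timezone.utc).isoformat()
    meas = ET.SubElement(root, "measurement", name=variable, units=units)
    meas.text = f"{value:.2f}"
    return ET.tostring(root, encoding="unicode")

print(observation_xml("LUMCON-01", "water_temperature", 28.417, "degC"))
```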
Spotting handwritten words and REGEX using a two stage BLSTM-HMM architecture
NASA Astrophysics Data System (ADS)
Bideault, Gautier; Mioulet, Luc; Chatelain, Clément; Paquet, Thierry
2015-01-01
In this article, we propose a hybrid model for spotting words and regular expressions (REGEX) in handwritten documents. The model combines a state-of-the-art BLSTM (Bidirectional Long Short-Term Memory) neural network, which recognizes and segments characters, with an HMM that builds line models able to spot the desired sequences. Experiments on the Rimes database show very promising results.
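A minimal sketch of the first stage only, assuming per-timestep feature vectors extracted from a text line; it shows a bidirectional LSTM emitting per-frame character scores, not the authors' full BLSTM-HMM spotting system.

```python
import torch
import torch.nn as nn

class BLSTMTagger(nn.Module):
    """Bidirectional LSTM producing per-frame character scores."""
    def __init__(self, n_features: int, n_hidden: int, n_chars: int):
        super().__init__()
        self.blstm = nn.LSTM(n_features, n_hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * n_hidden, n_chars)

    def forward(self, frames):            # frames: (batch, time, n_features)
        h, _ = self.blstm(frames)
        return self.out(h)                # (batch, time, n_chars)

scores = BLSTMTagger(n_features=40, n_hidden=64, n_chars=80)(torch.randn(2, 120, 40))
print(scores.shape)                       # torch.Size([2, 120, 80])
```

In the two-stage scheme the abstract describes, an HMM line model would then consume such frame-level character scores to spot the desired word or REGEX sequences.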
Climbing the Tower of Babel: Perfecting Machine Translation
2011-02-16
Center) used MT tools to translate extraordinary numbers of Russian technical documents. For the Air Force, the manpower and time savings were...recognition.htm. Granted, this number is tempered by the rules of a specific language that would disallow specific word orderings, or mandate particular word...sequences (e.g., in English, prepositions can only be followed by articles, etc.), but the overall numbers convey the complexity of the machine
ERIC Educational Resources Information Center
Abu Nasr, Julinda; And Others
This document is divided into two parts: (1) "A Study of Sex Role Stereotype in Arabic Readers" and (2) "A Guide for the Identification and Elimination of Sexism in Arabic Textbooks." In part 1, a sample of 79 Arabic readers was read word for word and the images pertaining to females were recorded. The results of the survey…
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2018-06-01
Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation, in which a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this setting, the traditional BoW technique is ineffective in classifying forensic autopsy reports because it merely extracts frequent but not necessarily discriminative features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome these limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT)-based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier. The first level classifier was responsible for predicting MoD, and the second level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results. The experimental results indicated that the CGDR technique achieved 12% to 15% improvement in accuracy compared with fully automated document representation baseline techniques, and two-level classification obtained better results than one-level classification. The promising results of the proposed conceptual graph-based document representation technique suggest that pathologists can adopt the proposed system as a basis for second opinion, thereby supporting them in effectively determining CoD. Copyright © 2018 Elsevier Inc. All rights reserved.
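A small sketch of the two-level arrangement described above, with plain TF-IDF bags of words standing in for the conceptual-graph features; the report texts, labels, and the assumption that each manner-of-death subset contains several causes of death are all illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_two_level(reports, mod_labels, cod_labels):
    """Level 1 predicts manner of death; level 2 predicts cause of death
    within each manner (assumes every manner subset has >= 2 causes)."""
    level1 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    level1.fit(reports, mod_labels)
    level2 = {}
    for m in set(mod_labels):
        idx = [i for i, lab in enumerate(mod_labels) if lab == m]
        level2[m] = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        level2[m].fit([reports[i] for i in idx], [cod_labels[i] for i in idx])
    return level1, level2

def predict_two_level(level1, level2, report):
    manner = level1.predict([report])[0]               # first level: MoD
    return manner, level2[manner].predict([report])[0] # second level: CoD within that MoD
```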
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-13
... minutes, automatically generate the SPL document (a few formatting edits may have to be made). Based on... render it as intended in SPL. The comment said that most users need to apply applicable formatting to..., including MS Word (both editable and hard-formatted), faxes, texts, in emails, or other scanned documents...
Working Words: A User's Guide to Written Communication at Work.
ERIC Educational Resources Information Center
Hagston, Jan
Writing a document that is clear and easy to understand is difficult. This resource book is a guide to making written material easier to read, understand, and use. The guide is targeted at those who write work-place documents--industry or TAFE (Technical and Further Education) trainers, managers, supervisors, union representatives or writers of…
ERIC Educational Resources Information Center
Newlin, George
Charles Dickens' novel, "A Tale of Two Cities," does not waste a word in telling a touching, suspenseful tale set against the background of one of the bloodiest events in history, the French Revolution. This casebook's collection of historical documents, collateral readings, and commentary will promote interdisciplinary study of the…
Putting Practice into Words: The State of Data and Methods Transparency in Grammatical Descriptions
ERIC Educational Resources Information Center
Gawne, Lauren; Kelly, Barbara F.; Berez-Kroeker, Andrea L.; Heston, Tyler
2017-01-01
Language documentation and description are closely related practices, often performed as part of the same fieldwork project on an un(der)-studied language. Research trends in recent decades have seen a great volume of publishing regarding the methods of language documentation; however, it is not clear that linguists' awareness of the…
ERIC Educational Resources Information Center
Uzunboylu, Huseyin; Genc, Zeynep
2017-01-01
The purpose of this study is to determine recent trends in foreign language learning through mobile learning. The study was conducted employing document analysis and related content analysis from among the qualitative research methodologies. Through the search conducted on the Scopus database with the key words "mobile learning and foreign language…
2011-02-17
document objects, on one or more electronic document pages. These commands have their roots in typography, so, to understand the PDF Language, one...must have at least a rudimentary understanding of typography. Only a few of the typographic commands, called text showing operators, can hold strings
Extracting Related Words from Anchor Text Clusters by Focusing on the Page Designer's Intention
NASA Astrophysics Data System (ADS)
Liu, Jianquan; Chen, Hanxiong; Furuse, Kazutaka; Ohbo, Nobuo
Approaches for extracting related words (terms) by co-occurrence sometimes work poorly. Two words frequently co-occurring in the same documents are considered related, yet they may not be related at all because they share no common meaning or similar semantics. We address this problem by considering the page designer’s intention and propose a new model to extract related words. Our approach is based on the idea that web page designers usually place correlative hyperlinks in a close zone on the browser. We developed a browser-based crawler to collect “geographically” near hyperlinks; by clustering these hyperlinks based on their pixel coordinates, we then extract related words that reflect the designer’s intention well. Experimental results show that our method can represent the intention of the web page designer with extremely high precision. Moreover, the experiments indicate that our extraction method obtains related words with high average precision.
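A toy sketch of the clustering step, assuming the crawler has already recorded each hyperlink's anchor text and rendered pixel position; DBSCAN and the eps/min_samples values are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# (anchor_text, x, y) as rendered by a browser-based crawler (made-up values).
links = [("word clouds", 40, 310), ("tag layout", 40, 332),
         ("text mining", 44, 351), ("contact us", 820, 20)]
coords = np.array([[x, y] for _, x, y in links])

# Hyperlinks rendered close together are grouped; their anchor words are treated as related.
labels = DBSCAN(eps=60, min_samples=2).fit_predict(coords)
for cluster in set(labels) - {-1}:
    related = [links[i][0] for i in range(len(links)) if labels[i] == cluster]
    print("related terms:", related)   # ['word clouds', 'tag layout', 'text mining']
```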
DTD Creation for the Software Technology for Adaptable, Reliable Systems (STARS) Program
1990-06-23
developed to store documents in a format peculiar to the program’s design. Editing the document became easy since word processors adjust all spacing and...descriptive markup may be output to a variety of devices ranging from high quality typography printers through laser printers...provision for non-SGML material, such as graphics, to be inserted in a document. For these reasons the Computer-Aided Acquisition and Logistics Support
Katzman, G L
2001-03-01
The goal of the project was to create a method by which an in-house digital teaching file could be constructed that was simple, inexpensive, independent of hypertext markup language (HTML) restrictions, and appeared identical on multiple platforms. To accomplish this, Microsoft PowerPoint and Adobe Acrobat were used in succession to assemble digital teaching files in the Acrobat portable document file format. They were then verified to appear identical on computers running Windows, Macintosh operating systems (OS), and the Silicon Graphics Unix-based OS, either as a free-standing file using Acrobat Reader software or from within a browser window using the Acrobat browser plug-in. The latter display method yields a file viewed through a browser window that nonetheless remains independent of underlying HTML restrictions, which may confer an advantage over simple HTML teaching file construction. Thus, a hybrid of HTML-distributed, Adobe Acrobat-generated WWW documents may be a viable alternative for digital teaching file construction and distribution.
Chemical-text hybrid search engines.
Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J
2010-01-01
As the amount of chemical literature increases, it is critical that researchers be enabled to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently available for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search; the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but was also applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions that were not readily attainable previously and to obtain answers at much improved speed and accuracy.
Qu, Zhenhong; Ghorbani, Rhonda P; Li, Hongyan; Hunter, Robert L; Hannah, Christina D
2007-03-01
Gross examination, encompassing description, dissection, and sampling, is a complex task and an essential component of surgical pathology. Because of the complexity of the task, standardized protocols to guide the gross examination often become a bulky manual that is difficult to use. This problem is further compounded by the high specimen volume and the biohazardous nature of the task. As a result, such a manual is often underused, leading to errors that are potentially harmful and time-consuming to correct, a common chronic problem affecting many pathology laboratories. To combat this problem, we have developed a simple method that incorporates the complex text and graphic information of a typical procedure manual yet allows easy access to any intended instructive information in the manual. The method uses the Object-Linking-and-Embedding function of Microsoft Word (Microsoft, Redmond, WA) to establish hyperlinks among different contents, and then uses touch screen technology to facilitate navigation through the manual on a computer screen installed at the cutting bench, with no need for a physical keyboard or a mouse. It takes less than 4 seconds to reach any intended information in the manual by 3 to 4 touches on the screen. A 3-year follow-up study shows that this method has increased use of the manual and has improved the quality of gross examination. The method is simple and can be easily tailored to different formats of instructive information, allowing flexible organization, easy access, and quick navigation. Increased compliance with instructive information reduces errors at the grossing bench and improves work efficiency.
Comparison of neurological healthcare oriented educational resources for patients on the internet.
Punia, Vineet; Dagar, Anjali; Agarwal, Nitin; He, Wenzhuan; Hillen, Machteld
2014-12-01
The internet has become a major contributor to health literacy promotion. The average American reads at 7th-8th grade level and it is recommended to write patient education materials at or below 6th grade reading level. We tried to assess the level of literacy required to read and understand online patient education materials (OPEM) for neurological diseases from various internet resources. We then compared those to an assumed reference OPEM source, namely the patient education brochures from the American Academy of Neurology (AAN), the world's largest professional association of neurologists. Disease specific patient education brochures were downloaded from the AAN website. OPEM for these diseases were also accessed from other common online sources determined using a predefined criterion. All OPEM were converted to Microsoft Word (Microsoft Corp., Redmond, WA, USA) and their reading level was analyzed using Readability Studio Professional Edition version 2012.1 (Oleander Software, Vandalia, OH, USA). Descriptive analysis and analysis of variance were used to compare reading levels of OPEM from different resources. Medline Plus, Mayo clinic and Wikipedia qualified for OPEM analysis. All OPEM from these resources, including the AAN, were written above the recommended 6th grade reading level. They were also found to be "fairly difficult", "difficult" or "confusing" on the Flesch Reading Ease scale. AAN OPEM on average needed lower reading level, with Wikipedia OPEM being significantly (p<0.01) more difficult to read compared to the other three resources. OPEM on neurological diseases are being written at a level of reading complexity higher than the average American and the recommended reading levels. This may be undermining the utility of these resources. Copyright © 2014 Elsevier Ltd. All rights reserved.
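For reference, the readability measures cited above reduce to simple formulas over word, sentence, and syllable counts; the counts in the example below are hypothetical.

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Standard Flesch Reading Ease score (higher = easier)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Standard Flesch-Kincaid Grade Level (approximate U.S. school grade)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A hypothetical 250-word brochure with 15 sentences and 400 syllables:
print(round(flesch_reading_ease(250, 15, 400), 1))   # 54.6 -> "fairly difficult"
print(round(flesch_kincaid_grade(250, 15, 400), 1))  # 9.8  -> roughly 10th grade
```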
Journal of the College of Physicians and Surgeons of Pakistan: Five Years Bibliometric Analysis.
Saeed Ullah, Saeed; Jan, Saeed Ullah; Jan, Tahir; Ahmad, Hafiz Nafees; Jan, Muhammad Yahya; Rauf, Muhammad Abdur
2016-11-01
To conduct a bibliometric analysis of the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) from 2012 to 2014. The prime objectives of this report were to determine the number and percentage of articles by year, authorship pattern, gender and geographical affiliation, ranking by subject, and citation analysis. A data collection instrument was developed as a bibliometric form, and the data were analysed using a Microsoft Excel spreadsheet. Editorials and letters to editors were excluded. There were 1106 research documents in total, including 721 original articles and 385 case reports. A rapid increase in the number of articles per year was noticed, with more original papers than case reports. The majority of the authors were male. The contribution of Balochistan and Khyber Pakhtunkhwa was less than that of the other provinces. JCPSP was the most cited document in the reference lists of the research documents. The scholars of Khyber Pakhtunkhwa and Balochistan and female researchers should give more attention to writing quality articles eligible for consideration at this Journal. It is also suggested that writers should be compelled to address such fields of medical sciences as neurology, nephrology, anatomy and pharmacology while writing original articles and case reports.
A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories
NASA Astrophysics Data System (ADS)
Brown, Christa L.
National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.
ERIC Educational Resources Information Center
Benghalem, Boualem
2015-01-01
This study aims to investigate the effects of using ICT tools such as Microsoft PowerPoint on EFL students' attitude and anxiety. The participants in this study were 40 Master 2 students of Didactics of English as a Foreign Language, Djillali Liabes University, Sidi Bel Abbes Algeria. In order to find out the effects of Microsoft PowerPoint on EFL…
2017-06-01
implement human following on a mobile robot in an indoor environment. B. FUTURE WORK: Future work that could be conducted in the realm of this thesis... FEASIBILITY OF CONDUCTING HUMAN TRACKING AND FOLLOWING IN AN INDOOR ENVIRONMENT USING A MICROSOFT KINECT AND THE ROBOT OPERATING SYSTEM
An overview of selected information storage and retrieval issues in computerized document processing
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Ihebuzor, Valentine U.
1984-01-01
The rapid development of computerized information storage and retrieval techniques has introduced the possibility of extending the word processing concept to document processing. A major advantage of computerized document processing is the relief of the tedious task of manual editing and composition usually encountered by traditional publishers through the immense speed and storage capacity of computers. Furthermore, computerized document processing provides an author with centralized control, the lack of which is a handicap of the traditional publishing operation. A survey of some computerized document processing techniques is presented with emphasis on related information storage and retrieval issues. String matching algorithms are considered central to document information storage and retrieval and are also discussed.
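Since the survey singles out string matching as central to document storage and retrieval, here is a minimal naive matcher for illustration; classical algorithms such as Knuth-Morris-Pratt or Boyer-Moore achieve the same result faster by avoiding re-examination of characters.

```python
def find_all(text: str, pattern: str) -> list[int]:
    """Return every index at which pattern occurs in text (naive scan)."""
    hits, n, m = [], len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            hits.append(i)
    return hits

print(find_all("document processing extends word processing", "processing"))  # [9, 33]
```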
Davidson, Meghan M; Kaushanskaya, Margarita; Ellis Weismer, Susan
2018-05-25
Word reading and oral language predict reading comprehension, which is generally poor, in individuals with autism spectrum disorder (ASD). However, working memory (WM), despite documented weaknesses, has not been thoroughly investigated as a predictor of reading comprehension in ASD. This study examined the role of three parallel WM N-back tasks using abstract shapes, familiar objects, and written words in children (8-14 years) with ASD (n = 19) and their typically developing peers (n = 24). All three types of WM were significant predictors of reading comprehension when considered alone. However, these relationships were rendered non-significant when age, word reading, vocabulary, and group were entered into the models. Oral vocabulary emerged as the strongest predictor of reading comprehension.
Marketing-Stimulated Word-of-Mouth: A Channel for Growing Demand.
Gombeski, William R; Martin, Becky; Britt, Jason
2015-01-01
Marketing-stimulated word-of-mouth (WOM) marketing has been poorly understood in health care, leading to it being underappreciated and underutilized by marketers. A study of new patients to a new runner's clinic was conducted to understand how they chose the program. The importance of marketing-stimulated WOM, both individual and organizational, is documented. Marketing-stimulated WOM is an often overlooked and rarely measured channel for increasing the impact of marketing programs.
Névéol, Aurélie; Pereira, Suzanne; Kerdelhué, Gaetan; Dahamna, Badisse; Joubert, Michel; Darmoni, Stéfan J
2007-01-01
The growing number of resources to be indexed in the catalogue of online health resources in French (CISMeF) calls for curating strategies involving automatic indexing tools while maintaining the catalogue's high indexing-quality standards. The objective was to develop a simple automatic tool that retrieves MeSH descriptors from document titles. In parallel with research on advanced indexing methods, a bag-of-words tool was developed for timely inclusion in CISMeF's maintenance system. An evaluation was carried out on a corpus of 99 documents: the indexing sets retrieved by the automatic tool were compared to manual indexing based on the title and on the full text of resources. 58% of the major main headings were retrieved by the bag-of-words algorithm, and the precision of main-heading retrieval was 69%. Bag-of-words indexing has effectively been used on selected resources to be included in CISMeF since August 2006. Meanwhile, ongoing work aims at improving the current version of the tool.
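A toy sketch of the bag-of-words idea described above: tokenize the resource title and look each token up in a descriptor dictionary. The four-entry MESH mapping is a made-up stand-in for the real thesaurus and its French entry terms.

```python
import re

MESH = {"asthme": "Asthma", "asthma": "Asthma",
        "diabete": "Diabetes Mellitus", "enfant": "Child"}   # toy excerpt only

def index_title(title: str) -> list[str]:
    """Every title token found in the (toy) MeSH dictionary contributes
    its descriptor to the automatic indexing set."""
    tokens = re.findall(r"[a-zà-ÿ]+", title.lower())
    return sorted({MESH[t] for t in tokens if t in MESH})

print(index_title("Prise en charge de l'asthme chez l'enfant"))  # ['Asthma', 'Child']
```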
Named Entity Recognition in Chinese Clinical Text Using Deep Neural Network.
Wu, Yonghui; Jiang, Min; Lei, Jianbo; Xu, Hua
2015-01-01
Rapid growth in electronic health record (EHR) use has led to an unprecedented expansion of available clinical data in electronic formats. However, much of the important healthcare information is locked in narrative documents. Therefore, Natural Language Processing (NLP) technologies, e.g., Named Entity Recognition, which identifies the boundaries and types of entities, have been extensively studied to unlock important clinical information in free text. In this study, we investigated a novel deep learning method to recognize clinical entities in Chinese clinical documents using a minimal feature engineering approach. We developed a deep neural network (DNN) to generate word embeddings from a large unlabeled corpus through unsupervised learning and another DNN for the NER task. The experimental results showed that the DNN with word embeddings trained from the large unlabeled corpus outperformed the state-of-the-art CRF model in the minimal feature engineering setting, achieving the highest F1-score of 0.9280. Further analysis showed that word embeddings derived through unsupervised learning from a large unlabeled corpus remarkably improved the DNN with randomized embeddings, demonstrating the usefulness of unsupervised feature learning.
Development of First-Graders' Word Reading Skills: For Whom Can Dynamic Assessment Tell Us More?
Cho, Eunsoo; Compton, Donald L; Gilbert, Jennifer K; Steacy, Laura M; Collins, Alyson A; Lindström, Esther R
2017-01-01
Dynamic assessment (DA) of word reading measures learning potential for early reading development by documenting the amount of assistance needed to learn how to read words with unfamiliar orthography. We examined the additive value of DA for predicting first-grade decoding and word recognition development while controlling for autoregressive effects. Additionally, we examined whether predictive validity of DA would be higher for students who have poor phonological awareness skills. First-grade students (n = 105) were assessed on measures of word reading, phonological awareness, rapid automatized naming, and DA in the fall and again assessed on word reading measures in the spring. A series of planned, moderated multiple regression analyses indicated that DA made a significant and unique contribution in predicting word recognition development above and beyond the autoregressor, particularly for students with poor phonological awareness skills. For these students, DA explained 3.5% of the unique variance in end-of-first-grade word recognition that was not attributable to autoregressive effect. Results suggest that DA provides an important source of individual differences in the development of word recognition skills that cannot be fully captured by merely assessing the present level of reading skills through traditional static assessment, particularly for students at risk for developing reading disabilities. © Hammill Institute on Disabilities 2015.
Judo strategy. The competitive dynamics of Internet time.
Yoffie, D B; Cusumano, M A
1999-01-01
Competition on the Internet is creating fierce battles between industry giants and small-scale start-ups. Smart start-ups can avoid those conflicts by moving quickly to uncontested ground and, when that's no longer possible, turning dominant players' strengths against them. The authors call this competitive approach judo strategy. They use the Netscape-Microsoft battles to illustrate the three main principles of judo strategy: rapid movement, flexibility, and leverage. In the early part of the browser wars, for instance, Netscape applied the principle of rapid movement by being the first company to offer a free stand-alone browser. This allowed Netscape to build market share fast and to set the market standard. Flexibility became a critical factor later in the browser wars. In December 1995, when Microsoft announced that it would "embrace and extend" competitors' Internet successes, Netscape failed to give way in the face of superior strength. Instead it squared off against Microsoft and even turned down numerous opportunities to craft deep partnerships with other companies. The result was that Netscape lost deal after deal when competing with Microsoft for common distribution channels. Netscape applied the principle of leverage by using Microsoft's strengths against it. Taking advantage of Microsoft's determination to convert the world to Windows or Windows NT, Netscape made its software compatible with existing UNIX systems. While it is true that these principles can't replace basic execution, say the authors, without speed, flexibility, and leverage, very few companies can compete successfully on Internet time.
Proponents of Creationism but Not Proponents of Evolution Frame the Origins Debate in Terms of Proof
ERIC Educational Resources Information Center
Barnes, Ralph M.; Church, Rebecca A.
2013-01-01
In Study 1, 72 internet documents containing creationism, ID (intelligent design), or evolution content were selected for analysis. All instances of proof cognates (the word "proof" and related terms such as "proven", "disproof", etc.) contained within these documents were identified and labeled in terms of the manner in which the terms were used.…
"Records of Rights": A New Exhibit at the National Archives in Washington, D.C.
ERIC Educational Resources Information Center
Hussey, Michael
2014-01-01
America's founding documents--the Declaration of Independence, the Constitution, and the Bill of Rights--are icons of human liberty. But the ideals enshrined in those documents did not initially apply to all Americans. They were, in the words of Martin Luther King, Jr., "a promissory note to which every American was to fall heir."…
Software Process Automation: Experiences from the Trenches.
1996-07-01
[Table fragment: organization J, process support with WordPerfect, All-in-One, Oracle, CM, and integration of a problem database via Weaver; organization K, process support with FrameMaker, CM, and tool integration via the Weaver system] ...handle change requests and problem reports. * Autoplan, a project management tool * FrameMaker, a document processing system * Worldview, a document...Cadre Teamwork, FrameMaker, something for requirements traceability, their own homegrown scheduling tool, and their own homegrown tool integrator
ERIC Educational Resources Information Center
Mexico.
This document is an English-language abstract (approximately 1,500 words) of the draft of a law for the preservation of Mexican national heritage, particularly for the protection, conservation, and recuperation of cultural objects. The document consists of twelve chapters and six articles. Chapter 1 declares the protection, conservation,…
Slant correction for handwritten English documents
NASA Astrophysics Data System (ADS)
Shridhar, Malayappan; Kimura, Fumitaka; Ding, Yimei; Miller, John W. V.
2004-12-01
Optical character recognition of machine-printed documents is an effective means of extracting textual material. While the level of effectiveness for handwritten documents is much poorer, progress is being made in more constrained applications such as personal checks and postal addresses. In these applications a series of steps is performed for recognition, beginning with removal of skew and slant. Slant, the amount by which characters are tilted from vertical, is a characteristic unique to the writer and varies from writer to writer. The second attribute is skew, which arises from the inability of the writer to write on a horizontal line. Several methods for average slant estimation and correction have been proposed and discussed in earlier papers. However, analysis of many handwritten documents reveals that slant is a local property and varies even within a word; using an average slant for the entire word often results in overestimation or underestimation of the local slant. This paper describes three methods for local slant estimation, namely the simple iterative method, the high-speed iterative method, and the 8-directional chain code method. The experimental results show that the proposed methods can estimate and correct local slant more effectively than average slant correction.
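A compact sketch of one common way to estimate a (local) slant angle, not necessarily any of the three methods named above: shear the word or window image by candidate angles and keep the angle whose correction gives the sharpest vertical projection. The angle range and criterion are illustrative.

```python
import numpy as np

def shear(img: np.ndarray, angle_deg: float) -> np.ndarray:
    """Horizontally shear a binarized image (ink == 1) by angle_deg."""
    h, w = img.shape
    shifts = np.round((h - 1 - np.arange(h)) * np.tan(np.radians(angle_deg))).astype(int)
    shifts -= shifts.min()                       # keep all row offsets non-negative
    out = np.zeros((h, w + shifts.max() + 1), dtype=img.dtype)
    for row in range(h):
        out[row, shifts[row]:shifts[row] + w] = img[row]
    return out

def estimate_slant(img: np.ndarray, angles=np.arange(-45, 46)) -> float:
    """Pick the slant whose removal maximizes the sharpness of the vertical
    projection; applied per word or per window this gives a local estimate."""
    def sharpness(a):
        cols = shear(img, -a).sum(axis=0).astype(float)
        return float(np.sum(cols ** 2))
    return float(max(angles, key=sharpness))
```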
Genes2WordCloud: a quick way to identify biological themes from gene lists and free text.
Baroukh, Caroline; Jenkins, Sherry L; Dannenfelser, Ruth; Ma'ayan, Avi
2011-10-13
Word-clouds recently emerged on the web as a solution for quickly summarizing text by maximizing the display of most relevant terms about a specific topic in the minimum amount of space. As biologists are faced with the daunting amount of new research data commonly presented in textual formats, word-clouds can be used to summarize and represent biological and/or biomedical content for various applications. Genes2WordCloud is a web application that enables users to quickly identify biological themes from gene lists and research relevant text by constructing and displaying word-clouds. It provides users with several different options and ideas for the sources that can be used to generate a word-cloud. Different options for rendering and coloring the word-clouds give users the flexibility to quickly generate customized word-clouds of their choice. Genes2WordCloud is a word-cloud generator and a word-cloud viewer that is based on WordCram implemented using Java, Processing, AJAX, mySQL, and PHP. Text is fetched from several sources and then processed to extract the most relevant terms with their computed weights based on word frequencies. Genes2WordCloud is freely available for use online; it is open source software and is available for installation on any web-site along with supporting documentation at http://www.maayanlab.net/G2W. Genes2WordCloud provides a useful way to summarize and visualize large amounts of textual biological data or to find biological themes from several different sources. The open source availability of the software enables users to implement customized word-clouds on their own web-sites and desktop applications.
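The core quantity behind such a word cloud is simply a frequency-derived weight per term. Below is a minimal sketch; the stop-word list and the font-size scaling are arbitrary illustrative choices, not WordCram's behavior.

```python
import math
import re
from collections import Counter

STOP = {"the", "of", "and", "in", "to", "a", "is", "for"}   # tiny illustrative list

def wordcloud_weights(text: str, max_words: int = 50) -> dict[str, float]:
    """Map the most frequent non-stop words to font sizes in a 12-40 pt range."""
    tokens = [t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOP]
    counts = Counter(tokens).most_common(max_words)
    top = counts[0][1]
    return {word: 12 + 28 * math.sqrt(count / top) for word, count in counts}

print(wordcloud_weights("apoptosis signaling and apoptosis regulation in tumor cells"))
```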
Compilation of Disruptions to Airports by Volcanic Activity (Version 1.0, 1944-2006)
Guffanti, Marianne; Mayberry, Gari C.; Casadevall, Thomas J.; Wunderman, Richard
2008-01-01
Volcanic activity has caused significant hazards to numerous airports worldwide, with local to far-ranging effects on travelers and commerce. To more fully characterize the nature and scope of volcanic hazards to airports, we collected data on incidents of airports throughout the world that have been affected by volcanic activity, beginning in 1944 with the first documented instance of damage to modern aircraft and facilities in Naples, Italy, and extending through 2006. Information was gleaned from various sources, including news outlets, volcanological reports (particularly the Smithsonian Institution's Bulletin of the Global Volcanism Network), and previous publications on the topic. This report presents the full compilation of the data collected. For each incident, information about the affected airport and the volcanic source has been compiled as a record in a Microsoft Access database. The database is incomplete insofar as incidents may not have been reported or documented, but it does present a good sample from diverse parts of the world. Not included are en-route diversions to avoid airborne ash clouds at cruise altitudes. The database has been converted to a Microsoft Excel spreadsheet. To make the PDF version of table 1 in this open-file report resemble the spreadsheet, order the PDF pages as 12, 17, 22; 13, 18, 23; 14, 19, 24; 15, 20, 25; and 16, 21, 26. Analysis of the database reveals that, at a minimum, 101 airports in 28 countries were impacted on 171 occasions from 1944 through 2006 by eruptions at 46 volcanoes. The number of affected airports (101) probably is better constrained than the number of incidents (171) because recurring disruptions at a given airport may have been lumped together or not reported by news agencies, whereas the initial disruption likely is noticed and reported and thus the airport correctly counted.
NASA Technical Reports Server (NTRS)
Liew, K. H.; Urip, E.; Yang, S. L.; Marek, C. J.
2004-01-01
Droplet interaction with a high-temperature gaseous crossflow is important because of its wide application in systems involving two-phase mixing, such as combustion requiring quick mixing of fuel and air with the reduction of pollutants, and jet mixing in the dilution zone of combustors. Therefore, the focus of this work is to investigate the dispersion of a two-dimensional atomized and evaporating spray jet into a two-dimensional crossflow. An interactive Microsoft Excel program, previously developed for tracking a single droplet in crossflow, will be modified to include droplet evaporation computation. In addition to the high-velocity airflow, the injected droplets are also subjected to combustor temperature and pressure, which affect their motion in the flow field. Six ordinary differential equations are then solved by the 4th-order Runge-Kutta method using Microsoft Excel. Microsoft Visual Basic programming and Microsoft Excel macro code are used to produce the data and plot graphs describing the droplet's motion in the flow field. This program computes and plots the data sequentially without forcing the user to open other types of plotting programs. A user's manual on how to use the program is included.
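A minimal sketch of the numerical core described above: a classical 4th-order Runge-Kutta step applied to a toy droplet state. The drag constant, gas velocity, and two-dimensional state vector are placeholders, not the report's evaporation model.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def droplet_rhs(t, s, gas_u=100.0, gas_v=0.0, k=50.0):
    """Toy 2-D droplet state [x, y, u, v] relaxing toward the gas velocity."""
    x, y, u, v = s
    return np.array([u, v, k * (gas_u - u), k * (gas_v - v)])

state, t, dt = np.array([0.0, 0.0, 0.0, 30.0]), 0.0, 1e-4
for _ in range(1000):
    state = rk4_step(droplet_rhs, t, state, dt)
    t += dt
print(state)    # position and velocity after 0.1 s of simulated time
```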
Interface for the documentation and compilation of a library of computer models in physiology.
Summers, R. L.; Montani, J. P.
1994-01-01
A software interface for the documentation and compilation of a library of computer models in physiology was developed. The interface is an interactive program built within a word processing template in order to provide ease and flexibility of documentation. A model editor within the interface directs the model builder as to standardized requirements for incorporating models into the library and provides the user with an index to the levels of documentation. The interface and accompanying library are intended to facilitate model development, preservation and distribution and will be available for public use. PMID:7950046
Spatial Paradigm for Information Retrieval and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.
SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, K.J.; Bohn, S.; Crow, V.
The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.
Pina, Jamie; Massoudi, Barbara L; Chester, Kelley; Koyanagi, Mark
2018-06-07
Researchers and analysts have not completely examined word frequency analysis as an approach to creating a public health quality improvement taxonomy. To develop a taxonomy of public health quality improvement concepts for an online exchange of quality improvement work. We analyzed documents, conducted an expert review, and employed a user-centered design along with a faceted search approach to make online entries searchable for users. To provide the most targeted facets to users, we used word frequency to analyze 334 published public health quality improvement documents to find the most common clusters of word meanings. We then reviewed the highest-weighted concepts and categorized their relationships to quality improvement details in our taxonomy. Next, we mapped meanings to items in our taxonomy and presented them in order of their weighted percentages in the data. Using these methods, we developed and sorted concepts in the faceted search presentation so that online exchange users could access relevant search criteria. We reviewed 50 of the top synonym clusters and identified 12 categories for our taxonomy data. The final categories were as follows: Summary; Planning and Execution Details; Health Impact; Training and Preparation; Information About the Community; Information About the Health Department; Results; Quality Improvement (QI) Staff; Information; Accreditation Details; Collaborations; and Contact Information of the Submitter. Feedback about the elements in the taxonomy and presentation of elements in our search environment from users has been positive. When relevant data are available, the word frequency analysis method may be useful in other taxonomy development efforts for public health.
Assessment & Commitment Tracking System (ACTS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryant, Robert A.; Childs, Teresa A.; Miller, Michael A.
2004-12-20
The ACTS computer code provides a centralized tool for planning and scheduling assessments, tracking and managing actions associated with assessments or that result from an event or condition, and "mining" data for reporting and analyzing information for improving performance. The ACTS application is designed to work with the MS SQL database management system, and all database interfaces are written in SQL. The following software is used to develop and support the ACTS application: ColdFusion, HTML, JavaScript, Quest TOAD, Microsoft Visual SourceSafe (VSS), HTML Mailer (for sending email), Microsoft SQL Server, and Microsoft Internet Information Server.
Novel word retention in bilingual and monolingual speakers
Kan, Pui Fong; Sadagopan, Neeraja
2014-01-01
The goal of this research was to examine word retention in bilinguals and monolinguals. Long-term word retention is an essential part of vocabulary learning. Previous studies have documented that bilinguals outperform monolinguals in terms of retrieving newly-exposed words. Yet, little is known about whether or to what extent bilinguals are different from monolinguals in word retention. Participants were 30 English-speaking monolingual adults and 30 bilingual adults who speak Spanish as a home language and learned English as a second language during childhood. In a previous study (Kan et al., 2014), the participants were exposed to the target novel words in English, Spanish, and Cantonese. In this current study, word retention was measured a week after the fast mapping task. No exposures were given during the one-week interval. Results showed that bilinguals and monolinguals retain a similar number of words. However, participants produced more words in English than in either Spanish or Cantonese. Correlation analyses revealed that language knowledge plays a role in the relationships between fast mapping and word retention. Specifically, within- and across-language relationships between bilinguals' fast mapping and word retention were found in Spanish and English, by contrast, within-language relationships between monolinguals' fast mapping and word retention were found in English and across-language relationships between their fast mapping and word retention performance in English and Cantonese. Similarly, bilinguals differed from monolinguals in the relationships among the word retention scores in three languages. Significant correlations were found among bilinguals' retention scores. However, no such correlations were found among monolinguals' retention scores. The overall findings suggest that bilinguals' language experience and language knowledge most likely contribute to how they learn and retain new words. PMID:25324789
Mining User Dwell Time for Personalized Web Search Re-Ranking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Jiang, Hao; Lau, Francis
We propose a personalized re-ranking algorithm that mines user dwell times derived from a user's previous online reading or browsing activities. We acquire document-level user dwell times via a customized web browser, from which we then infer concept-word-level user dwell times in order to understand a user's personal interest. According to the estimated concept-word-level user dwell times, our algorithm can estimate a user's potential dwell time over a new document, based on which personalized webpage re-ranking can be carried out. We compare the rankings produced by our algorithm with rankings generated by popular commercial search engines and a recently proposed personalized ranking algorithm; the results clearly show the superiority of our method. In detail, we introduce a quantitative model to derive concept-word-level user dwell times from the observed document-level user dwell times. Once we have inferred a user's interest over the set of concept words the user has encountered in previous readings, we can then predict the user's potential dwell time over a new document, and this predicted dwell time allows us to carry out personalized webpage re-ranking. To explore the effectiveness of our algorithm, we measured its performance under two conditions, one with a relatively limited amount of user dwell time data and the other with a doubled amount. Both evaluation cases put our algorithm for generating personalized webpage rankings that satisfy a user's personal preference ahead of those by Google, Yahoo!, and Bing, as well as a recent personalized webpage ranking algorithm.
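A deliberately crude sketch of the inference chain described above: distribute each observed document-level dwell time over the document's words, then score a new document by summing the per-word estimates. The even split and the toy history are assumptions, not the paper's quantitative model.

```python
import re
from collections import defaultdict

def tokenize(doc: str) -> list[str]:
    return re.findall(r"[a-z]+", doc.lower())

def learn_word_dwell(history):
    """history: (document_text, observed_dwell_seconds) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for doc, seconds in history:
        words = tokenize(doc)
        for w in words:
            totals[w] += seconds / len(words)   # split dwell evenly over words
            counts[w] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predict_dwell(word_dwell, new_doc):
    """Predicted dwell time = sum of per-word estimates over the new document."""
    return sum(word_dwell.get(w, 0.0) for w in tokenize(new_doc))

history = [("ocean observing systems share sensor data", 45.0),
           ("word clouds summarize text collections", 8.0)]
model = learn_word_dwell(history)
print(predict_dwell(model, "sensor data from observing systems"))   # 30.0
```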
Efficient processing of two-dimensional arrays with C or C++
Donato, David I.
2017-07-20
Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended.Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
E-library Implementation in Library University of Riau
NASA Astrophysics Data System (ADS)
Yuhelmi; Rismayeti
2017-12-01
This research aims to examine the implementation of the e-library in the Library of the University of Riau and the obstacles to that implementation. In the globalization era, digital libraries should be developed, or else readers' interest will decrease; with today's advanced technology, digital libraries are one of the learning tools that can be used to find information through internet access, so digital libraries, commonly known as e-libraries, greatly help students and the academic community in finding information. The methods used in this research are observation, interview, and literature study. The respondents are the staff involved in the digitization process at the Library of the University of Riau. The results show that the e-library implementation at the Library of the University of Riau currently meets user needs, although obstacles remain, such as technical problems with internet connection speed and with converting documents from Microsoft Word (.doc) format to Adobe PDF.
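For the conversion obstacle mentioned above, one commonly used route is LibreOffice's command-line converter; the sketch below assumes LibreOffice is installed and on the PATH, and the file name is hypothetical.

```python
import subprocess
from pathlib import Path

def doc_to_pdf(path: str, outdir: str = "pdf") -> Path:
    """Convert a .doc/.docx file to PDF via LibreOffice in headless mode."""
    Path(outdir).mkdir(exist_ok=True)
    subprocess.run(["soffice", "--headless", "--convert-to", "pdf",
                    "--outdir", outdir, path], check=True)
    return Path(outdir) / (Path(path).stem + ".pdf")

print(doc_to_pdf("thesis_chapter1.doc"))   # hypothetical file name
```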
Quality of patient education materials for rehabilitation after neurological surgery.
Agarwal, Nitin; Sarris, Christina; Hansberry, David R; Lin, Matthew J; Barrese, James C; Prestigiacomo, Charles J
2013-01-01
To evaluate the quality of online patient education materials for rehabilitation following neurological surgery. Materials were obtained from the National Institute of Neurological Disorders and Stroke (NINDS), U.S. National Library of Medicine (NLM), American Occupational Therapy Association (AOTA), and the American Academy of Orthopaedic Surgeons (AAOS). After removing unnecessary formatting, the readability of each site was assessed using the Flesch Reading Ease and Flesch-Kincaid Grade Level evaluations with Microsoft Office Word software. The average values of the Flesch Reading Ease and Flesch-Kincaid Grade Level were 41.5 and 11.8, respectively, which are well outside the recommended reading levels for the average American. Moreover, no online section was written below a ninth grade reading level. Evaluations of several websites from the NINDS, NLM, AOTA, and AAOS demonstrated that their reading levels were higher than that of the average American. Improved readability might be beneficial for patient education. Ultimately, increased patient comprehension may correlate to positive clinical outcomes.
Owens, John
2009-01-01
Technological advances in the acquisition of DNA and protein sequence information and the resulting onrush of data can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, interactive HTML pages, or exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... WORDS AND TERMS Internet Links 1202.7000 General. Most documents cited throughout (TAR) 48 CFR chapter 12, can be found on the internet. (TAR) 48 CFR chapter 12 will cite the corresponding internet...
Code of Federal Regulations, 2013 CFR
2013-10-01
... WORDS AND TERMS Internet Links 1202.7000 General. Most documents cited throughout (TAR) 48 CFR chapter 12, can be found on the internet. (TAR) 48 CFR chapter 12 will cite the corresponding internet...
Code of Federal Regulations, 2012 CFR
2012-10-01
... WORDS AND TERMS Internet Links 1202.7000 General. Most documents cited throughout (TAR) 48 CFR chapter 12, can be found on the internet. (TAR) 48 CFR chapter 12 will cite the corresponding internet...
Reaching for the Stars, Goals for the Library Profession
ERIC Educational Resources Information Center
Bloomfield, Masse
1971-01-01
Space colonization will require the microforming of all of man's worded knowledge which will take leadership and dedication for the library profession information service or documentation. (2 references) (AB)
Code of Federal Regulations, 2011 CFR
2011-10-01
... WORDS AND TERMS Internet Links 1202.7000 General. Most documents cited throughout (TAR) 48 CFR chapter 12, can be found on the internet. (TAR) 48 CFR chapter 12 will cite the corresponding internet...
Code of Federal Regulations, 2010 CFR
2010-10-01
... WORDS AND TERMS Internet Links 1202.7000 General. Most documents cited throughout (TAR) 48 CFR chapter 12, can be found on the internet. (TAR) 48 CFR chapter 12 will cite the corresponding internet...
ERIC Educational Resources Information Center
Smith, Irene; Yoder, Sharon
1996-01-01
Discusses word processing and desktop publishing and offers suggestions for creating documents that look more professional, including proportional type size, spacing, the use of punctuation marks, italics, tabs and margins, and paragraph styles. (LRW)
Word Criticality Analysis MOS: 76W. Skill Levels 1 & 2
1981-09-01
some degree of criticality in the training/performance of tasks contained in the respective MOS Soldier's Manual (SM). These critical words were...printout. The prime users of this document were fully cognizant of its contents and required no special instruction for interpretation. However, for the...Skill Level II. However, due to the way some Soldier Manuals are constructed, the WCA for some MOS has both Skill Levels merged into one report. Each