Signature Verification Using N-tuple Learning Machine.
Maneechot, Thanin; Kitjaidure, Yuttana
2005-01-01
This research presents a new algorithm for signature verification using an N-tuple learning machine. The features are taken from handwritten signatures captured on a digital tablet (on-line). The recognition algorithm uses four extracted features, namely the horizontal and vertical pen-tip position (x-y position), the pen-tip pressure, and the pen altitude angles. Verification uses the N-tuple technique with Gaussian thresholding.
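The abstract does not spell out the classifier's internals, but an N-tuple learning machine is classically a RAM-based (WiSARD-style) lookup classifier. The sketch below is a minimal illustration under that assumption; the feature binarization, tuple count, and the Gaussian acceptance rule are hypothetical choices, not the authors' published configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class NTupleClassifier:
    """Minimal RAM-based n-tuple classifier over binarized feature vectors."""
    def __init__(self, n_bits, n_tuples=50, tuple_size=8):
        # Each tuple is a fixed random subset of bit positions in the input.
        self.tuples = [rng.choice(n_bits, size=tuple_size, replace=False)
                       for _ in range(n_tuples)]
        self.memory = [set() for _ in self.tuples]  # one "RAM" per tuple

    def train(self, x):
        # x: binary vector derived from a genuine signature (x-y, pressure, angles)
        for mem, idx in zip(self.memory, self.tuples):
            mem.add(tuple(x[idx]))  # remember the address this sample activates

    def score(self, x):
        # Fraction of tuples whose address was already seen during enrollment
        hits = sum(tuple(x[idx]) in mem
                   for mem, idx in zip(self.memory, self.tuples))
        return hits / len(self.tuples)

# Gaussian thresholding (assumed form): fit the mean/std of scores on genuine
# enrollment samples and accept a questioned signature if score > mu - 2*sigma.
```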
Age and gender-invariant features of handwritten signatures for verification systems
NASA Astrophysics Data System (ADS)
AbdAli, Sura; Putz-Leszczynska, Joanna
2014-11-01
The handwritten signature is one of the most natural biometrics, i.e., measurable human physiological and behavioral patterns. Behavioral biometrics include signatures, which may differ depending on the owner's gender or age because of intrinsic or extrinsic factors. This paper presents the results of the authors' research on the influence of age and gender on verification factors. The experiments in this research were conducted using a database that contains signatures and their associated metadata. The algorithm used is based on the universal forgery feature idea, where a global classifier is able to classify a signature as genuine or as a forgery without actual knowledge of the signature template and its owner. Additionally, dimensionality reduction with the MRMR method is discussed.
Rosso, Osvaldo A; Ospina, Raydonal; Frery, Alejandro C
2016-01-01
We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups. PMID:27907014
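For illustration, here is a minimal sketch of the Bandt and Pompe symbolization and the normalized Shannon entropy over the resulting ordinal-pattern distribution; the statistical complexity and Fisher information are computed from the same distribution but omitted here, and the embedding parameters are assumptions.

```python
import numpy as np
from itertools import permutations

def bandt_pompe_distribution(x, d=4, tau=1):
    """Ordinal-pattern histogram of series x (embedding dimension d, delay tau)."""
    patterns = {p: i for i, p in enumerate(permutations(range(d)))}
    counts = np.zeros(len(patterns))
    for t in range(len(x) - (d - 1) * tau):
        window = x[t:t + d * tau:tau]
        counts[patterns[tuple(np.argsort(window))]] += 1
    return counts / counts.sum()

def normalized_shannon_entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz)) / np.log(len(p))  # value in [0, 1]

# The entropies of the x- and y-coordinate series give two of the six features;
# the resulting feature vectors can be fed to sklearn.svm.OneClassSVM.
```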
Signature Verification Based on Handwritten Text Recognition
NASA Astrophysics Data System (ADS)
Viriri, Serestina; Tapamo, Jules-R.
Signatures continue to be an important biometric trait because they remain widely used for authenticating the identity of human beings. This paper presents an efficient text-based directional signature recognition algorithm which verifies signatures even when they are composed of special unconstrained cursive characters which are superimposed and embellished. This algorithm extends the character-based signature verification technique. The experiments carried out on the GPDS signature database and an additional database created from signatures captured using the ePadInk tablet show that the approach is effective and efficient, with a positive verification rate of 94.95%.
Modeling the Lexical Morphology of Western Handwritten Signatures
Diaz-Cabrera, Moises; Ferrer, Miguel A.; Morales, Aythami
2015-01-01
A handwritten signature is the final response to a complex cognitive and neuromuscular process which is the result of the learning process. Because of the many factors involved in signing, it is possible to study the signature from many points of view: graphologists, forensic experts, neurologists and computer vision experts have all examined them. Researchers study written signatures for psychiatric, penal, health and automatic verification purposes. As a potentially useful, multi-purpose study, this paper is focused on the lexical morphology of handwritten signatures. This we understand to mean the identification, analysis, and description of the signature structures of a given signer. In this work we analyze different public datasets involving 1533 signers from different Western geographical areas. Some relevant characteristics of signature lexical morphology have been selected, examined in terms of their probability distribution functions and modeled through a General Extreme Value distribution. This study suggests some useful models for multi-disciplinary sciences which depend on handwriting signatures. PMID:25860942
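As a sketch of the modeling step, the snippet below fits a Generalized Extreme Value distribution to a morphological measurement with scipy; the variable (signature width per signer) and the parameter values used to simulate data are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical morphological feature: one signature-width value per signer
widths = stats.genextreme.rvs(c=-0.1, loc=45.0, scale=8.0, size=1533,
                              random_state=np.random.default_rng(1))

# Fit the GEV model and check goodness of fit
shape, loc, scale = stats.genextreme.fit(widths)
ks = stats.kstest(widths, "genextreme", args=(shape, loc, scale))
print(f"shape={shape:.3f} loc={loc:.2f} scale={scale:.2f} KS p={ks.pvalue:.3f}")
```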
Discriminative Features Mining for Offline Handwritten Signature Verification
NASA Astrophysics Data System (ADS)
Neamah, Karrar; Mohamad, Dzulkifli; Saba, Tanzila; Rehman, Amjad
2014-03-01
Signature verification is an active research area in the field of pattern recognition. It is employed to identify a particular person by means of the characteristics of his/her signature, such as pen pressure, the shape of loops, writing speed, and the up-and-down motion of the pen. In the entire process, the feature extraction and selection stage is of prime importance, since several signatures have similar strokes, characteristics and sizes. Accordingly, this paper presents a combination of skeleton orientation and the gravity-centre point to extract accurate pattern features of signature data in an offline signature verification system. Promising results have proved the success of the integration of the two methods.
Research on Signature Verification Method Based on Discrete Fréchet Distance
NASA Astrophysics Data System (ADS)
Fang, J. L.; Wu, W.
2018-05-01
This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication based on a single signature feature. It addresses the heavy computational workload of extracting global feature templates in online handwritten signature authentication, as well as the problem of unreasonable signature feature selection. In the experiments, the false acceptance rate (FAR) and false rejection rate (FRR) are measured and the average equal error rate (AEER) is computed. The feasibility of the combined template scheme is verified by comparing the average equal error rates of the combined template and the original template.
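The discrete Fréchet distance itself has a standard dynamic-programming formulation (Eiter and Mannila); a sketch is below. How the paper combines per-feature distances into the multi-feature template is not detailed here, so the pairing of trajectories is left to the caller.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q, arrays of shape (n, 2)."""
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    ca = np.empty((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):                      # first column
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):                      # first row
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

# A questioned signature is accepted when its distance to the enrolled template
# falls below a per-writer threshold (the thresholding scheme is an assumption).
```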
Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model
NASA Astrophysics Data System (ADS)
Coetzer, J.; Herbst, B. M.; du Preez, J. A.
2004-12-01
We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.
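A sketch of the DRT feature extraction step, assuming scikit-image's radon transform; turning each normalized projection into one observation of an HMM observation sequence follows the paper's global-feature idea, while the HMM training itself (e.g., with hmmlearn) is omitted.

```python
import numpy as np
from skimage.transform import radon

def drt_observation_sequence(image, n_angles=128):
    """One observation vector per projection angle of a static signature image."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta, circle=False)  # columns = projections
    obs = sinogram.T
    norms = np.linalg.norm(obs, axis=1, keepdims=True)
    return obs / np.where(norms == 0.0, 1.0, norms)  # unit-norm observations
```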
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Croce Ferri, Lucilla
2003-06-01
Our paper addresses two issues of a previously presented biometric authentication algorithm for ID cardholders, namely the security of the embedded reference data and the aging process of the biometric data. We describe a protocol that allows two levels of verification, combining a biometric hash technique based on handwritten signatures and hologram watermarks with cryptographic signatures in a verification infrastructure. This infrastructure consists of a Trusted Central Public Authority (TCPA), which serves numerous Enrollment Stations (ES) in a secure environment. Each individual performs an enrollment at an ES, which provides the TCPA with the full biometric reference data and a document hash. The TCPA then calculates the authentication record (AR) with the biometric hash, a validity timestamp, and the document hash provided by the ES. The AR is then signed with a cryptographic signature function, initialized with the TCPA's private key, and embedded in the ID card as a watermark. Authentication is performed at Verification Stations (VS), where the ID card is scanned and the signed AR is retrieved from the watermark. Due to the timestamp mechanism and a two-level biometric verification technique based on offline and online features, the AR can deal with the aging process of the biometric feature by forcing a re-enrollment of the user after expiry, making use of the ES infrastructure. We describe some attack scenarios and we illustrate the watermarking embedding, retrieval and dispute protocols, analyzing their requisites, advantages and disadvantages in relation to security requirements.
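The paper does not name a concrete signature scheme, so the sketch below stands in with Ed25519 from the `cryptography` package to show the AR life cycle: the TCPA signs the biometric hash, document hash and validity timestamp, and a Verification Station checks the signature and expiry after extracting the AR from the watermark (the watermark embedding itself is omitted).

```python
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# TCPA key pair; the private key never leaves the TCPA's secure environment.
tcpa_private = Ed25519PrivateKey.generate()
tcpa_public = tcpa_private.public_key()

def make_authentication_record(biometric_hash: bytes, document_hash: bytes,
                               validity_days: int = 365) -> bytes:
    """Build and sign the AR; the result is what gets embedded as a watermark."""
    payload = json.dumps({
        "biometric_hash": biometric_hash.hex(),
        "document_hash": document_hash.hex(),
        "expires": int(time.time()) + validity_days * 86400,
    }, sort_keys=True).encode()
    return payload + tcpa_private.sign(payload)  # Ed25519 signatures are 64 bytes

def verify_authentication_record(blob: bytes) -> dict:
    """Verification Station check after retrieving the AR from the ID card."""
    payload, signature = blob[:-64], blob[-64:]
    tcpa_public.verify(signature, payload)  # raises InvalidSignature if tampered
    record = json.loads(payload)
    if record["expires"] < time.time():
        raise ValueError("AR expired: re-enrollment at an Enrollment Station")
    return record
```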
Code of Federal Regulations, 2010 CFR
2010-04-01
... SIGNATURES General Provisions § 11.1 Scope. (a) The regulations in this part set forth the criteria under which the agency considers electronic records, electronic signatures, and handwritten signatures... handwritten signatures executed on paper. (b) This part applies to records in electronic form that are created...
Multimodal person authentication on a smartphone under realistic conditions
NASA Astrophysics Data System (ADS)
Morris, Andrew C.; Jassim, Sabah; Sellahewa, Harin; Allano, Lorene; Ehlers, Johan; Wu, Dalei; Koreman, Jacques; Garcia-Salicetti, Sonia; Ly-Van, Bao; Dorizzi, Bernadette
2006-05-01
Verification of a person's identity by combining more than one biometric trait strongly increases the robustness of person authentication in real applications. This is particularly the case in applications involving signals of degraded quality, as for person authentication on mobile platforms. The context of mobility generates degradations of the input signals due to the variety of environments encountered (ambient noise, lighting variations, etc.), while the sensors' lower quality further contributes to a decrease in system performance. Our aim in this work is to combine traits from the three biometric modalities of speech, face and handwritten signature in a concrete application, performing non-intrusive biometric verification on a personal mobile device (smartphone/PDA). Most available biometric databases have been acquired in more or less controlled environments, which makes it difficult to predict performance in a real application. Our experiments are performed on a database acquired on a PDA as part of the SecurePhone project (IST-2002-506883 project "Secure Contracts Signed by Mobile Phone"). This database contains 60 virtual subjects balanced in gender and age. Virtual subjects are obtained by coupling audio-visual signals from real English-speaking subjects with signatures from other subjects captured on the touch screen of the PDA. Video data for the PDA database was recorded in two recording sessions separated by at least one week. Each session comprises four acquisition conditions: two indoor and two outdoor recordings (with, in each case, a good-quality and a degraded-quality recording). Handwritten signatures were captured in one session in realistic conditions. Different scenarios of matching between training and test conditions are tested to measure the resistance of various fusion systems to different types of variability and different amounts of enrolment data.
37 CFR 1.4 - Nature of correspondence and signature requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., that is, have an original handwritten signature personally signed, in permanent dark ink or its... EFS-Web customization. (e) The following correspondence must be submitted with an original handwritten signature personally signed in permanent dark ink or its equivalent: (1) Correspondence requiring a person's...
NASA Astrophysics Data System (ADS)
Kuseler, Torben; Lami, Ihsan; Jassim, Sabah; Sellahewa, Harin
2010-04-01
The use of mobile communication devices with advanced sensors is growing rapidly. These sensors enable functions such as image capture, location applications, and biometric authentication such as fingerprint verification and face and handwritten signature recognition. Such ubiquitous devices are essential tools in today's global economic activities, enabling anywhere-anytime financial and business transactions. Cryptographic functions and biometric-based authentication can enhance the security and confidentiality of mobile transactions. Biometric template security techniques in real-time biometric-based authentication are key factors for successful identity verification solutions, but they are vulnerable to determined attacks by both fraudulent software and hardware. The EU-funded SecurePhone project has designed and implemented a multimodal biometric user authentication system on a prototype mobile communication device. However, various implementations of this project have resulted in long verification times or reduced accuracy and/or security. This paper proposes the use of built-in self-test techniques to ensure no tampering has taken place in the verification process prior to performing the actual biometric authentication. These techniques utilise the user's personal identification number as a seed to generate a unique signature, which is then used to test the integrity of the verification process. This study also proposes the use of a combination of biometric modalities to provide application-specific authentication in a secure environment, thus achieving an optimum security level with effective processing time, i.e., ensuring that the necessary authentication steps and algorithms running on the mobile device's application processor cannot be undermined or modified by an impostor to gain unauthorized access to the secure system.
Code of Federal Regulations, 2014 CFR
2014-07-01
... contain an image of the requester's handwritten signature, such as an attachment that shows the requester... confidentiality statute, the email transmission must contain an image of the requester's handwritten signature... processing, e-mail FOIA appeals must be sent to official VA FOIA mailboxes established for the purpose of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... contain an image of the requester's handwritten signature, such as an attachment that shows the requester... confidentiality statute, the email transmission must contain an image of the requester's handwritten signature... processing, e-mail FOIA appeals must be sent to official VA FOIA mailboxes established for the purpose of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... contain an image of the requester's handwritten signature, such as an attachment that shows the requester... confidentiality statute, the email transmission must contain an image of the requester's handwritten signature... processing, e-mail FOIA appeals must be sent to official VA FOIA mailboxes established for the purpose of...
21 CFR 11.70 - Signature/record linking.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Signature/record linking. 11.70 Section 11.70 Food... RECORDS; ELECTRONIC SIGNATURES Electronic Records § 11.70 Signature/record linking. Electronic signatures and handwritten signatures executed to electronic records shall be linked to their respective...
A bimodal biometric identification system
NASA Astrophysics Data System (ADS)
Laghari, Mohammad S.; Khuwaja, Gulzar A.
2013-03-01
Biometrics consists of methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits. Physical traits are related to the shape of the body; behavioral traits are related to the behavior of a person. However, biometric authentication systems suffer from imprecision and difficulty in person recognition for a number of reasons, and no single biometric is expected to effectively satisfy the requirements of all verification and/or identification applications. Bimodal biometric systems are expected to be more reliable, due to the presence of two pieces of evidence, and also to be able to meet the severe performance requirements imposed by various applications. This paper presents a neural-network-based bimodal biometric identification system using human face and handwritten signature features.
USign--a security enhanced electronic consent model.
Li, Yanyan; Xie, Mengjun; Bian, Jiang
2014-01-01
Electronic consent is becoming increasingly popular in the healthcare sector given the many benefits it provides. However, security concerns, e.g., how to verify the identity of a person who is remotely accessing the electronic consent system in a secure and user-friendly manner, also arise along with the popularity of electronic consent. Unfortunately, existing electronic consent systems do not pay sufficient attention to those issues. They mainly rely on conventional password-based authentication to verify the identity of an electronic consent user, which is far from sufficient given that the threat of identity theft is real and significant. In this paper, we present a security-enhanced electronic consent model called USign. USign enhances identity protection and authentication for electronic consent systems by leveraging handwritten signatures, which everyone is familiar with, and mobile computing technologies that are becoming ubiquitous. We developed a prototype of USign and conducted a preliminary evaluation of the accuracy and usability of signature verification. Our experimental results show the feasibility of the proposed model.
Signature detection and matching for document image retrieval.
Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2009-11-01
As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from a cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
NASA Astrophysics Data System (ADS)
Ricci, R.; Chollet, G.; Crispino, M. V.; Jassim, S.; Koreman, J.; Olivar-Dimas, M.; Garcia-Salicetti, S.; Soria-Rodriguez, P.
2006-05-01
This article presents an overview of the SecurePhone project, with an account of the first results obtained. SecurePhone's primary aim is to realise a mobile phone prototype - the 'SecurePhone' - in which biometric authentication enables users to conduct secure, dependable transactions over a mobile network. The SecurePhone is based on a commercial PDA-phone, supplemented with specific software modules and a customised SIM card. It integrates in a single environment a number of advanced features: access to cryptographic keys through strong multimodal biometric authentication; appending and verification of digital signatures; and real-time exchange and interactive modification of e-signed documents and voice recordings. SecurePhone's 'biometric recogniser' is based on original research. A fused combination of three different biometric methods - speaker, face and handwritten signature verification - is exploited, with no need for dedicated hardware components. The adoption of non-intrusive, psychologically neutral biometric techniques is expected to mitigate the rejection problems that often inhibit the social use of biometrics, and to speed up the spread of e-signature technology. Successful biometric authentication grants access to SecurePhone's built-in e-signature services through a user-friendly interface. Special emphasis is accorded to the definition of a trustworthy security chain model covering all aspects of system operation. The SecurePhone is expected to boost m-commerce and open new scenarios for m-business and m-work, by changing the way people interact and by improving trust and confidence in information technologies, often considered intimidating and difficult to use. Exploitation plans will also explore other application domains (physical and logical access control, secured mobile communications).
Heckeroth, J; Boywitt, C D
2017-06-01
Considering the increasing relevance of handwritten electronically captured signatures, we evaluated the ability of forensic handwriting examiners (FHEs) to distinguish between authentic and simulated electronic signatures. Sixty-six professional FHEs examined the authenticity of electronic signatures captured with software by signotec on a Samsung Galaxy Note 4 smartphone, and of signatures made with a ballpoint pen on paper (conventional signatures). In addition, we experimentally varied the name ("J. König" vs. "A. Zaiser") and the status (authentic vs. simulated) of the signatures in question. FHEs' conclusions about authenticity did not show a statistically significant general difference between electronic and conventional signatures. Furthermore, no significant discrepancies between electronic and conventional signatures were found with regard to other important aspects of the authenticity examination, such as the questioned signatures' graphic information content, the suitability of the provided sample signatures, the necessity of further examinations, and the levels of difficulty of the cases under examination. Thus, leaving aside potential technical problems concerning the integrity of electronic signatures, this study did not reveal any indications that electronic signatures captured with software by signotec on a Galaxy Note 4 are less well suited than conventional signatures for the examination of authenticity.
Transcript mapping for handwritten English documents
NASA Astrophysics Data System (ADS)
Jose, Damien; Bharadwaj, Anurag; Govindaraju, Venu
2008-01-01
Transcript mapping, or text alignment with handwritten documents, is the automatic alignment of words in a text file with word images in a handwritten document. Such a mapping has several applications in fields ranging from machine learning, where large quantities of ground-truth data are required for evaluating handwriting recognition algorithms, to data mining, where word image indexes are used in ranked retrieval of scanned documents in a digital library. The alignment also aids "writer identity" verification algorithms. Interfaces which display scanned handwritten documents may use this alignment to highlight manuscript tokens when a person examines the corresponding transcript word. We propose an adaptation of the true DTW dynamic programming algorithm for English handwritten documents. Our primary contribution is the integration of the dissimilarity scores from a word-model word recognizer and the Levenshtein distance between the recognized word and the lexicon word, as a cost metric in the DTW algorithm, leading to a fast and accurate alignment. The results provided confirm the effectiveness of our approach.
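A sketch of the alignment core: dynamic programming over a precomputed cost matrix whose entries mix the word recognizer's dissimilarity score with the Levenshtein distance between the recognized word and the transcript word. The mixing weight and the [0, 1] rescaling of both terms are assumptions.

```python
import numpy as np

def dtw_transcript_alignment(cost):
    """cost[i, j]: combined dissimilarity between word image i and transcript word j,
    e.g. alpha * recognizer_score[i, j] + (1 - alpha) * levenshtein[i, j],
    with both terms rescaled to [0, 1] beforehand (assumed weighting)."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j - 1],  # match
                                               D[i - 1, j],      # skip image word
                                               D[i, j - 1])      # skip text word
    # Backtrack to recover the image-word / transcript-word pairing
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[n, m], path[::-1]
```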
Code of Federal Regulations, 2012 CFR
2012-01-01
... graphical image of a handwritten signature, usually created using a special computer input device, such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means... folder across the Government. Electronic retirement and insurance processing system means the new...
Code of Federal Regulations, 2011 CFR
2011-01-01
... graphical image of a handwritten signature, usually created using a special computer input device, such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means... folder across the Government. Electronic retirement and insurance processing system means the new...
Code of Federal Regulations, 2013 CFR
2013-01-01
... graphical image of a handwritten signature, usually created using a special computer input device, such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means... folder across the Government. Electronic retirement and insurance processing system means the new...
Code of Federal Regulations, 2014 CFR
2014-01-01
...) ELECTRONIC RETIREMENT PROCESSING General Provisions § 850.103 Definitions. In this part— Agency means an... graphical image of a handwritten signature usually created using a special computer input device (such as a... comparison with the characteristics and biometric data of a known or exemplar signature image. Director means...
Artificial neural networks for document analysis and recognition.
Marinai, Simone; Gori, Marco; Soda, Giovanni
2005-01-01
Artificial neural networks have been extensively applied to document analysis and recognition. Most efforts have been devoted to the recognition of isolated handwritten and printed characters, with widely recognized successful results. However, many other document processing tasks, like preprocessing, layout analysis, character segmentation, word recognition, and signature verification, have been effectively addressed with very promising results. This paper surveys the most significant problems in the area of offline document image processing where connectionist-based approaches have been applied. Similarities and differences between approaches belonging to different categories are discussed. Particular emphasis is placed on the crucial role of prior knowledge in the conception of both appropriate architectures and learning algorithms. Finally, the paper provides a critical analysis of the reviewed approaches and depicts the most promising research guidelines in the field. In particular, a second generation of connectionist-based models is foreseen, based on appropriate graphical representations of the learning environment.
Automatic extraction of numeric strings in unconstrained handwritten document images
NASA Astrophysics Data System (ADS)
Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.
2011-01-01
Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.
Group discriminatory power of handwritten characters
NASA Astrophysics Data System (ADS)
Tomai, Catalin I.; Kshirsagar, Devika M.; Srihari, Sargur N.
2003-12-01
Using handwritten characters, we address two questions: (i) what is the group identification performance of different alphabets (upper and lower case), and (ii) what are the best characters for the verification task (same-writer/different-writer discrimination), given demographic information about the writer such as ethnicity, age or sex. The Bhattacharyya distance is used to rank characters by their group discriminatory power, and the k-NN classifier to measure the individual performance of characters for group identification. Given the tasks of identifying the correct gender/age/ethnicity or handedness, the accumulated performance of characters varies between 65% and 85%.
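For univariate Gaussian feature models, the Bhattacharyya distance has a closed form, which makes the ranking step easy to reproduce; the per-character feature layout below is hypothetical.

```python
import numpy as np

def bhattacharyya_gaussian(m1, v1, m2, v2):
    """Bhattacharyya distance between two univariate Gaussians (mean, variance)."""
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def rank_characters(samples_by_char):
    """samples_by_char: {char: (group_a_values, group_b_values)}; a larger distance
    means the character separates the two demographic groups better."""
    scores = {ch: bhattacharyya_gaussian(np.mean(a), np.var(a), np.mean(b), np.var(b))
              for ch, (a, b) in samples_by_char.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```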
49 CFR 592.6 - Duties of a registered importer.
Code of Federal Regulations, 2010 CFR
2010-10-01
... pursuant to § 592.5(a)(5)(iv), with an original hand-written signature and not with a signature that is... paragraph (d) of this section (the 30-day period will be extended if the Administrator has made written... on which Code J is checked, and the EPA has granted the ICI written permission to operate the vehicle...
Identification of forgeries in handwritten petitions for ballot propositions
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Ramakrishnan, Veshnu; Malgireddy, Manavender; Ball, Gregory R.
2009-01-01
Many governments have some form of "direct democracy" legislation procedure whereby individual citizens can propose various measures creating or altering laws. Generally, such a process is started with the gathering of a large number of signatures. There is interest in whether or not there are fraudulent signatures present in such a petition, and if so what percentage of the signatures are indeed fraudulent. However, due to the large number of signatures (tens of thousands), it is not feasible to have a document examiner verify the signatures directly. Instead, there is interest in creating a subset of signatures where there is a high probability of fraud that can be verified. We present a method by which a pairwise comparison of signatures can be performed and subsequent sorting can generate such subsets.
Glove-based approach to online signature verification.
Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A
2008-06-01
Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel on-line signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique is based on the SVD finding the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. The data glove is demonstrated to be an effective high-bandwidth data entry device for signature verification. This SVD-based signature verification technique is tested and shown to be able to recognize forged signatures with a false acceptance rate of less than 1.2%.
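A compact sketch of the two steps the abstract describes: extract the r-principal subspace of the glove data matrix via the SVD, then compare two signatures by the principal angles between their subspaces (scipy provides `subspace_angles`). The matrix orientation and the max-angle decision statistic are assumptions.

```python
import numpy as np
from scipy.linalg import subspace_angles

def r_principal_subspace(A, r=3):
    """A: glove data matrix, shape (n_samples, n_channels).
    Returns an orthonormal basis (n_channels, r) spanning the top-r energy."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T

def subspace_dissimilarity(A_ref, A_test, r=3):
    """Largest principal angle between reference and test subspaces (radians);
    accept the signature when it falls below a learned threshold."""
    return np.max(subspace_angles(r_principal_subspace(A_ref, r),
                                  r_principal_subspace(A_test, r)))
```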
Fuzzy Clustering of Multiple Instance Data
2015-11-30
depth is not. To illustrate this data, in figure 1 we display the GPR signatures of the same mine buried at 3 in. deep in two geographically different... target signature depends on the soil properties of the site. The same mine type is buried at 3 in. deep in both sites. Since its formal introduction... drug design [15], and the problem of handwritten digit recognition [16]. To the best of our knowledge, Dietterich et al. [1] were the first to
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsumi Marukawa; Kazuki Nakashima; Masashi Koga
1994-12-31
This paper presents a paper form processing system with an error-correcting function for reading handwritten kanji strings. In paper form processing systems, names and addresses are important key data, and this paper takes up an error-correcting method for name and address recognition in particular. The method automatically corrects errors of the kanji OCR (Optical Character Reader) with the help of word dictionaries and other knowledge. Moreover, it allows names and addresses to be written in any style. The method consists of word matching and "furigana" verification for name strings, and address approval for address strings. For word matching, kanji name candidates are extracted by automaton-type word matching. In "furigana" verification, kana candidate characters recognized by the kana OCR are compared with kana searched from the name dictionary based on the kanji name candidates given by the word matching. The correct name is selected from the results of word matching and furigana verification. The address approval efficiently searches for the right address with a bottom-up procedure which follows hierarchical relations from a lower place name to an upper one, using the positional conditions among the place names. We ascertained that the error-correcting method substantially improves the recognition rate and processing speed in experiments on 5,032 forms.
Bidding Agents That Perpetrate Auction Fraud
NASA Astrophysics Data System (ADS)
Trevathan, Jarrod; McCabe, Alan; Read, Wayne
This paper presents a software bidding agent that inserts fake bids on the seller's behalf to inflate an auction's price. This behaviour is referred to as shill bidding. Shill bidding is strictly prohibited by online auctioneers, as it defrauds unsuspecting buyers by forcing them to pay more for the item. The malicious bidding agent was constructed to aid in developing shill detection techniques. We have previously documented a simple shill bidding agent that incrementally increases the auction price until it reaches the desired profit target or it becomes too risky to continue bidding. This paper presents an adaptive shill bidding agent which, when used over a series of auctions with substitutable items, can revise its strategy based on bidding behaviour in past auctions. The adaptive agent applies a novel prediction technique referred to as the Extremum Consistency (EC) algorithm to determine the optimal price to aspire to. The EC algorithm has successfully been used in handwritten signature verification for determining the maximum and minimum values in an input stream. The agent's ability to inflate the price has been tested in a simulated marketplace, and experimental results are presented.
40 CFR 761.217 - Exception reporting.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROHIBITIONS PCB Waste Disposal Records and Reports § 761.217 Exception reporting. (a)(1) A generator of PCB waste, who does not receive a copy of the manifest with the handwritten signature of the owner or... 761.217 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL...
40 CFR 761.217 - Exception reporting.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROHIBITIONS PCB Waste Disposal Records and Reports § 761.217 Exception reporting. (a)(1) A generator of PCB waste, who does not receive a copy of the manifest with the handwritten signature of the owner or... 761.217 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-02
...'' and provided four support documents (``Separation Agreement and Release'' related to Louis Reynolds... Reynolds. The ``Separation Agreement and Release'' document established that Louis Reynolds was separated... handwritten note that Louis Reynolds is one of the individuals. The ``Signatures'' document shows that Louis...
On the usability and security of pseudo-signatures
NASA Astrophysics Data System (ADS)
Chen, Jin; Lopresti, Daniel
2010-01-01
Handwriting has been proposed as a possible biometric for a number of years. However, recent work has shown that handwritten passphrases are vulnerable to both human-based and machine-based forgeries. Pseudo-signatures, as an alternative, are designed to thwart such attacks while still being easy for users to create, remember, and reproduce. In this paper, we briefly review the concept of pseudo-signatures, then describe an evaluation framework that considers aspects of both usability and security. We present results from preliminary experiments that examine user choice in creating pseudo-signatures and discuss the implications when sketching is used for generating cryptographic keys.
38 CFR 1.554 - Requirements for making requests.
Code of Federal Regulations, 2012 CFR
2012-07-01
... must contain an image of the requester's handwritten signature. To make a request for VA records, write... by another confidentiality statute, the e-mail transmission must contain an image of the requester's... assure prompt processing, e-mail FOIA requests must be sent to official VA FOIA mailboxes established for...
38 CFR 1.554 - Requirements for making requests.
Code of Federal Regulations, 2014 CFR
2014-07-01
... must contain an image of the requester's handwritten signature. To make a request for VA records, write... by another confidentiality statute, the e-mail transmission must contain an image of the requester's... assure prompt processing, e-mail FOIA requests must be sent to official VA FOIA mailboxes established for...
38 CFR 1.554 - Requirements for making requests.
Code of Federal Regulations, 2013 CFR
2013-07-01
... must contain an image of the requester's handwritten signature. To make a request for VA records, write... by another confidentiality statute, the e-mail transmission must contain an image of the requester's... assure prompt processing, e-mail FOIA requests must be sent to official VA FOIA mailboxes established for...
Offline signature verification using convolution Siamese network
NASA Astrophysics Data System (ADS)
Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin
2018-04-01
This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which treat feature extraction and metric learning as two independent stages, we adopt a deep-learning-based framework that combines the two stages and can be trained end-to-end. The experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.
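A minimal PyTorch sketch of a convolutional Siamese verifier trained end-to-end with a contrastive loss; the layer sizes, input resolution and margin are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseSignatureNet(nn.Module):
    """Shared CNN embedding applied to both signature images of a pair."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def embed(self, x):                      # x: (B, 1, H, W) grayscale images
        return self.fc(self.features(x).flatten(1))

    def forward(self, x1, x2):
        return self.embed(x1), self.embed(x2)

def contrastive_loss(e1, e2, y, margin=1.0):
    """y = 1 for genuine-genuine pairs, 0 for genuine-forgery pairs."""
    d = F.pairwise_distance(e1, e2)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()
```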
Input apparatus for dynamic signature verification systems
EerNisse, Errol P.; Land, Cecil E.; Snelling, Jay B.
1978-01-01
The disclosure relates to signature verification input apparatus comprising a writing instrument and platen containing piezoelectric transducers which generate signals in response to writing pressures.
75 FR 42575 - Electronic Signature and Storage of Form I-9, Employment Eligibility Verification
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
... Electronic Signature and Storage of Form I-9, Employment Eligibility Verification AGENCY: U.S. Immigration... published an interim final rule to permit electronic signature and storage of the Form I-9. 71 FR 34510..., or a combination of paper and electronic systems; Employers may change electronic storage systems as...
Fusion of Dependent and Independent Biometric Information Sources
2005-03-01
palmprint, DNA, ECG, signature, etc. The comparison of various biometric techniques is given in [13] and is presented in Table 1. Since each... theory. Experimental studies on the M2VTS database [32] showed that a reduction in error rates is up to about 40%. Four combination strategies are... taken from the CEDAR benchmark database. The word recognition results were the highest (91%) among published results for handwritten words (before 2001
Towards a Bayesian evaluation of features in questioned handwritten signatures.
Gaborini, Lorenzo; Biedermann, Alex; Taroni, Franco
2017-05-01
In this work, we propose the construction of an evaluative framework for supporting experts in questioned signature examinations. Through the use of Bayesian networks, we envision quantifying the probative value of well-defined measurements performed on questioned signatures, in a way that is both formalised and part of a coherent approach to evaluation. At the current stage, our project is explorative, focusing on the broad range of aspects that relate to comparative signature examinations. The goal is to identify writing features which are both highly discriminant and easy for forensic examiners to detect. We also seek a balance between case-specific features and characteristics which can be measured in the vast majority of signatures. Care is also taken to preserve interpretability at every step of the reasoning process. This paves the way for future work, which will aim at merging the different contributions into a single probabilistic measure of strength of evidence using Bayesian networks.
Authentication Based on Pole-zero Models of Signature Velocity
Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad
2013-01-01
With the increase of communication and financial transactions through the internet, on-line signature verification is an accepted biometric technology for access control and plays a significant role in authenticity and authorization in modernized society. Therefore, fast and precise algorithms for signature verification are very attractive. The goal of this paper is the modeling of the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed, and features are then extracted from strokes. Using linear, Parzen-window and support vector machine classifiers, the signature verification technique was tested with a large number of authentic and forged signatures and has demonstrated good potential. The signatures are collected from three different databases: a proprietary database, and the SVC2004 and Sabanci University (SUSIG) benchmark databases. Experimental results based on the Persian, SVC2004 and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62% and 3.91% on skilled forgeries, respectively. PMID:24696797
FIR signature verification system characterizing dynamics of handwriting features
NASA Astrophysics Data System (ADS)
Thumwarin, Pitak; Pernwong, Jitawat; Matsuura, Takenobu
2013-12-01
This paper proposes an online signature verification method based on a finite impulse response (FIR) system characterizing the time-frequency characteristics of dynamic handwriting features. First, the barycenter, determined from both the center point of the signature and two adjacent pen-point positions in the signing process (instead of a single pen-point position), is used to reduce the fluctuation of handwriting motion. In this paper, among the available dynamic handwriting features, motion pressure and area pressure are employed to investigate handwriting behavior. The stable dynamic handwriting features can thus be described by the relation between the time-frequency characteristics of the dynamic handwriting features. In this study, this relation is represented by an FIR system with the wavelet coefficients of the dynamic handwriting features as both input and output of the system. The impulse response of the FIR system is used as the individual feature for a particular signature. In short, a signature can be verified by evaluating the difference between the impulse responses of the FIR systems for a reference signature and the signature to be verified. The signature verification experiments in this paper were conducted using the SUBCORPUS MCYT-100 signature database, consisting of 5,000 signatures from 100 signers. The proposed method yielded an equal error rate (EER) of 3.21% on skilled forgeries.
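The FIR identification step can be sketched as a least-squares fit: stack the input wavelet-coefficient sequence into a convolution (Toeplitz) matrix and solve for the taps. The tap count and the distance used to compare impulse responses are assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def estimate_fir(u, y, n_taps=16):
    """Least-squares h such that y ~ u * h (causal FIR convolution).
    u, y: equal-length wavelet-coefficient sequences of two handwriting features."""
    first_row = np.zeros(n_taps)
    first_row[0] = u[0]
    U = toeplitz(u, first_row)              # convolution matrix, (len(u), n_taps)
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h

# Verification sketch: estimate h for the reference and the questioned signature
# and compare them, e.g. with np.linalg.norm(h_ref - h_test).
```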
Contingency-Focused Financial Management and Logistics for the U.S. Coast Guard
2008-12-01
being processed by the local contracting office. Hard-copy PRs with handwritten signatures are not to be accepted unless a waiver has been granted... forms used for authorizing procurement are nearly the same but would merely require drafting the documents in a different hard-copy format to provide... be created, promulgated and distributed in hard copy for managers in the field and at support commands to enact when it is evident that service
Dual function seal: visualized digital signature for electronic medical record systems.
Yu, Yao-Chang; Hou, Ting-Wei; Chiang, Tzu-Chiang
2012-10-01
A digital signature is an important cryptographic technology used to provide integrity and non-repudiation in electronic medical record systems (EMRS), and it is required by law. However, digital signatures normally appear in forms unrecognizable to medical staff, which may reduce trust from medical staff who are used to handwritten signatures or seals. Therefore, in this paper we propose a dual-function seal to extend user trust from a traditional seal to a digital signature. The proposed dual-function seal is a prototype that combines the traditional seal and the digital seal. With this prototype, medical personnel can not only put a seal on paper but also generate a visualized digital signature for electronic medical records. Medical personnel can then look at the visualized digital signature and directly know which medical personnel generated it, just as with a traditional seal. The discrete wavelet transform (DWT) is used as the image processing method to generate the visualized digital signature, and the peak signal-to-noise ratio (PSNR) is calculated to verify that the distortions of all converted images are beyond human recognition; the results for our converted images range from 70 dB to 80 dB. Signature recoverability is also tested in this paper to ensure that the visualized digital signature is verifiable. A simulated EMRS is implemented to show how the visualized digital signature can be integrated into an EMRS.
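The PSNR check is straightforward to reproduce; a small sketch for 8-bit images is below (the 70-80 dB figures reported above indicate distortion far below visibility).

```python
import numpy as np

def psnr(original, converted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape 8-bit images."""
    mse = np.mean((original.astype(np.float64) - converted.astype(np.float64)) ** 2)
    return np.inf if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```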
Benchmarking desktop and mobile handwriting across COTS devices: The e-BioSign biometric database
Tolosana, Ruben; Vera-Rodriguez, Ruben; Fierrez, Julian; Morales, Aythami; Ortega-Garcia, Javier
2017-01-01
This paper describes the design, acquisition process and baseline evaluation of the new e-BioSign database, which includes dynamic signature and handwriting information. Data is acquired from 5 different COTS devices: three Wacom devices (STU-500, STU-530 and DTU-1031) specifically designed to capture dynamic signatures and handwriting, and two general-purpose tablets (Samsung Galaxy Note 10.1 and Samsung ATIV 7). For the two Samsung tablets, data is collected using both a pen stylus and the finger in order to study the performance of signature verification in a mobile scenario. Data was collected in two sessions for 65 subjects, and includes dynamic information of the signature, the full name and alphanumeric sequences. Skilled forgeries were also performed for signatures and full names. We also report a benchmark evaluation based on e-BioSign for person verification under three different real scenarios: 1) intra-device, 2) inter-device, and 3) mixed writing-tool. We have experimented with the proposed benchmark using the main existing approaches for signature verification: feature-based and time-functions-based. As a result, new insights into the problem of signature biometrics in sensor-interoperable scenarios have been obtained, namely: the importance of specific methods for dealing with device interoperability, and the necessity of a deeper analysis of signatures acquired using the finger as the writing tool. This public e-BioSign database allows the research community to: 1) further analyse and develop signature verification systems in realistic scenarios, and 2) investigate towards a better understanding of the nature of human handwriting when captured using electronic COTS devices in realistic conditions. PMID:28475590
Improving semi-text-independent method of writer verification using difference vector
NASA Astrophysics Data System (ADS)
Li, Xin; Ding, Xiaoqing
2009-01-01
The semi-text-independent method of writer verification based on a linear framework can use all the characters of two handwritings to discriminate writers when the text contents are known. The handwritings are allowed to share only a small number of characters, or even none at all. This fills the vacancy between the classical text-dependent and text-independent methods of writer verification. Moreover, the identity of each character is exploited by the semi-text-independent method in this paper. Two types of standard templates, generated from many writer-unknown handwritten samples and printed samples of each character, are introduced to represent the content information of each character. The difference vectors of the character samples are obtained by subtracting the standard templates from the original feature vectors, and are used to replace the original vectors in the process of writer verification. By removing a large amount of content information and retaining the style information, the verification accuracy of the semi-text-independent method is improved. On a handwriting database involving 30 writers, when the query handwriting and the reference handwriting each consist of 30 distinct characters, the average equal error rate (EER) of writer verification reaches 9.96%. When the handwritings contain 50 characters, the average EER falls to 6.34%, which is 23.9% lower than the EER obtained without the difference vectors.
AdaBoost-based on-line signature verifier
NASA Astrophysics Data System (ADS)
Hongo, Yasunori; Muramatsu, Daigo; Matsumoto, Takashi
2005-03-01
Authentication of individuals is rapidly becoming an important issue. The authors previously proposed a pen-input online signature verification algorithm. The algorithm considers a writer's signature as a trajectory of pen position, pen pressure, pen azimuth, and pen altitude that evolves over time, so that it is dynamic and biometric. Many algorithms have been proposed and reported to achieve accuracy for on-line signature verification, but setting the threshold value for these algorithms is a problem. In this paper, we introduce a user-generic model generated by AdaBoost, which resolves this problem. When user-specific models (one model for each user) are used for signature verification, the models must be generated using only genuine signatures: forged signatures are not available, because impostors do not provide forgeries for training in advance. By introducing a user-generic model, however, we can make use of others' forged signatures in addition to genuine signatures for learning. Moreover, AdaBoost is a well-known classification algorithm that makes its final decision depending on the sign of the output value, so it is not necessary to set a threshold value. A preliminary experiment was performed on a database consisting of data from 50 individuals. This set consists of Western-alphabet-based signatures provided by a European research group. In this experiment, our algorithm gives an FRR of 1.88% and an FAR of 1.60%. Since no fine-tuning was done, this preliminary result looks very promising.
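A sketch of the user-generic idea with scikit-learn's AdaBoost: pool genuine and forged comparison vectors across many writers, train once, and let the sign of the boosted decision function accept or reject, so no per-user threshold is tuned. The five-dimensional distance features and their distributions are simulated placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Each row compares a questioned signature with the claimed user's references
# (e.g. distances over position/pressure/azimuth/altitude channels); samples
# are pooled over many writers, yielding one user-generic model.
X = np.vstack([rng.normal(0.3, 0.10, (200, 5)),    # genuine comparisons
               rng.normal(0.7, 0.15, (200, 5))])   # forgery comparisons
y = np.r_[np.ones(200), np.zeros(200)]

clf = AdaBoostClassifier(n_estimators=100).fit(X, y)

# Decision by sign of the boosted output -- no manually tuned threshold:
accept = clf.decision_function(X[:1])[0] > 0.0
```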
Quantum blind dual-signature scheme without arbitrator
NASA Astrophysics Data System (ADS)
Li, Wei; Shi, Ronghua; Huang, Dazu; Shi, Jinjing; Guo, Ying
2016-03-01
Motivated by the elegant features of a blind signature, we suggest the design of a quantum blind dual-signature scheme with three phases, i.e., an initial phase, a signing phase and a verification phase. Different from conventional schemes, legal messages are signed not only by the blind signatory but also by the sender in the signing phase. It does not rely much on an arbitrator in the verification phase, as previous quantum signature schemes usually do. The security is guaranteed by entanglement in quantum information processing. Security analysis demonstrates that the signature can be neither forged nor disavowed by illegal participants or attackers. It provides a potential application for e-commerce or e-payment systems with current technology.
Automated analysis in generic groups
NASA Astrophysics Data System (ADS)
Fagerholm, Edvard
This thesis studies automated methods for analyzing hardness assumptions in generic group models, following ideas of symbolic cryptography. We define a broad class of generic and symbolic group models for different settings - symmetric or asymmetric (leveled) k-linear groups - and prove "computational soundness" theorems for the symbolic models. Based on this result, we formulate a master theorem that relates the hardness of an assumption to solving problems in polynomial algebra. We systematically analyze these problems, identifying different classes of assumptions, and obtain decidability and undecidability results. Then, we develop automated procedures for verifying the conditions of our master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome is an automated tool, the Generic Group Analyzer, which takes as input the statement of an assumption and outputs either a proof of its generic hardness or an algebraic attack against the assumption. Structure-preserving signatures are signature schemes defined over bilinear groups in which messages, public keys and signatures are group elements, and the verification algorithm consists of evaluating "pairing-product equations". Recent work on structure-preserving signatures studies the optimality of these schemes in terms of the number of group elements needed in the verification key and the signature, and the number of pairing-product equations in the verification algorithm. While the size of keys and signatures is crucial for many applications, another aspect of performance is the time it takes to verify a signature. The most expensive operation during verification is the computation of pairings. However, the concrete number of pairings is not captured by the number of pairing-product equations considered in earlier work. We consider the question of the minimal number of pairing computations needed to verify structure-preserving signatures. We build an automated tool to search for structure-preserving signatures matching a template. Through exhaustive search, we conjecture lower bounds for the number of pairings required in the Type II setting and prove our conjecture to be true. Finally, our tool exhibits examples of structure-preserving signatures matching the lower bounds, which proves the tightness of our bounds, as well as improving on previously known structure-preserving signature schemes.
Character Recognition Method by Time-Frequency Analyses Using Writing Pressure
NASA Astrophysics Data System (ADS)
Watanabe, Tatsuhito; Katsura, Seiichiro
With the development of information and communication technology, personal verification is becoming more and more important. In the future ubiquitous society, the development of terminals handling personal information will require personal verification technology. The signature is one personal verification method; however, a signature contains only a limited number of characters, so a false signature is easy to produce, and personal identification from handwriting alone is difficult. This paper proposes a "haptic pen" that extracts the writing pressure, and presents a character recognition method based on time-frequency analyses. Although the shapes of characters written by different writers are similar, the differences appear in the time-frequency domain. As a result, it is possible to use the proposed character recognition for personal identification more precisely. The experimental results showed the viability of the proposed method.
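A minimal sketch of the time-frequency step this abstract describes, assuming a synthetic pressure trace in place of a real haptic-pen stream and an illustrative 1 kHz sampling rate:

# Spectrogram of a writing-pressure signal, which is where the abstract
# says writer differences show up. The trace here is synthetic.
import numpy as np
from scipy.signal import spectrogram

fs = 1000  # assumed pen sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
pressure = 0.5 + 0.2 * np.sin(2 * np.pi * 4 * t) + 0.05 * np.random.randn(t.size)

f, seg_t, Sxx = spectrogram(pressure, fs=fs, nperseg=256)
# Sxx[f, t] holds the power in each frequency/time bin; features for
# writer identification could be pooled from these bins.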
Do handwritten words magnify lexical effects in visual word recognition?
Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel
2016-01-01
An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.
Eye movements when reading sentences with handwritten words.
Perea, Manuel; Marcet, Ana; Uixera, Beatriz; Vergara-Martínez, Marta
2016-10-17
The examination of how we read handwritten words (i.e., the original form of writing) has typically been disregarded in the literature on reading. Previous research using word recognition tasks has shown that lexical effects (e.g., the word-frequency effect) are magnified when reading difficult handwritten words. To examine this issue in a more ecological scenario, we registered the participants' eye movements when reading handwritten sentences that varied in the degree of legibility (i.e., sentences composed of words in easy vs. difficult handwritten style). For comparison purposes, we included a condition with printed sentences. Results showed a larger reading cost for sentences with difficult handwritten words than for sentences with easy handwritten words, which in turn showed a reading cost relative to the sentences with printed words. Critically, the effect of word frequency was greater for difficult handwritten words than for easy handwritten words or printed words in the total times on a target word, but not on first-fixation durations or gaze durations. We examine the implications of these findings for models of eye movement control in reading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-03-01
The certification and verification of the Northrup Model NSC-01-0732 Fresnel lens tracking solar collector are presented. A certification statement is included with signatures, along with a separate report on the structural analysis of the collector system. System verification against the Interim Performance Criteria is indicated by matrices with verification discussion, analysis, and enclosed test results.
Static Signature Synthesis: A Neuromotor Inspired Approach for Biometrics.
Ferrer, Miguel A; Diaz-Cabrera, Moises; Morales, Aythami
2015-03-01
In this paper we propose a new method for generating synthetic handwritten signature images for biometric applications. The procedures we introduce imitate the mechanism of motor equivalence, which divides human handwriting into two steps: the working out of an effector-independent action plan and its execution via the corresponding neuromuscular path. The action plan is represented as a trajectory on a spatial grid. This contains both the signature text and its flourish, if there is one. The neuromuscular path is simulated by applying a kinematic Kaiser filter to the trajectory plan. The length of the filter depends on the pen speed, which is generated using a scalar version of the sigma-lognormal model. An ink deposition model, applied pixel by pixel to the pen trajectory, provides realistic static signature images. The lexical and morphological properties of the synthesized signatures, as well as the range of the synthesis parameters, have been estimated from real signature databases such as the MCYT Off-line and GPDS960GraySignature corpora. The performance experiments show that by tuning only four parameters it is possible to generate synthetic identities with different stability and forgers with different skills. Therefore it is possible to create datasets of synthetic signatures with a performance similar to databases of real signatures. Moreover, we can customize the created dataset to produce skilled forgeries or simple forgeries, which are easier to detect, depending on what the researcher needs. Perceptual evaluation gives an average confusion of 44.06 percent between real and synthetic signatures, which shows the realism of the synthetic ones. The utility of the synthesized signatures is demonstrated by studying the influence of the pen type and number of users on an automatic signature verifier.
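The neuromuscular-path step lends itself to a short sketch: smoothing a discrete action-plan trajectory with a Kaiser filter whose length grows with pen speed. The filter-length rule and beta value below are illustrative assumptions, not the paper's estimated parameters:

# Smoothing an action-plan trajectory (x coordinate only) with a
# kinematic Kaiser filter tied to pen speed.
import numpy as np

plan_x = np.array([0, 1, 2, 4, 5, 7, 8, 9], dtype=float)  # grid trajectory

def neuromuscular_smooth(plan, pen_speed, beta=6.0):
    # Faster strokes -> longer filter -> smoother, more ballistic path.
    length = max(3, int(pen_speed * 10) | 1)  # odd filter length
    w = np.kaiser(length, beta)
    w /= w.sum()
    return np.convolve(plan, w, mode="same")

smooth_x = neuromuscular_smooth(plan_x, pen_speed=0.9)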
Assessment of legibility and completeness of handwritten and electronic prescriptions.
Albarrak, Ahmed I; Al Rashidi, Eman Abdulrahman; Fatani, Rwaa Kamil; Al Ageel, Shoog Ibrahim; Mohammed, Rafiuddin
2014-12-01
To assess the legibility and completeness of handwritten prescriptions and compare them with an electronic prescription system for medication errors. Prospective study. King Khalid University Hospital (KKUH), Riyadh, Saudi Arabia. Handwritten prescriptions were received from the clinical units of the Medicine Outpatient Department (MOPD), Primary Care Clinic (PCC) and Surgery Outpatient Department (SOPD), whereas electronic prescriptions were collected from the pediatric ward. The handwritten prescriptions were assessed for completeness using a checklist designed according to the hospital prescription and evaluated for legibility by two pharmacists. The comparison between handwritten and electronic prescription errors was evaluated based on a validated checklist adopted from previous studies. Legibility and completeness of prescriptions. 398 prescriptions (199 handwritten and 199 e-prescriptions) were assessed. Errors were identified in 71 (35.7%) of the handwritten and 5 (2.5%) of the electronic prescriptions. A statistically significant difference (P < 0.001) was observed between handwritten and e-prescriptions in the omitted dose and omitted route of administration categories of error distribution. The rate of completeness in patient identification in handwritten prescriptions was 80.97% in MOPD, 76.36% in PCC and 85.93% in SOPD clinic units. Assessment of medication prescription completeness was 91.48% in MOPD, 88.48% in PCC, and 89.28% in SOPD. This study revealed a high incidence of prescribing errors in handwritten prescriptions. The use of an e-prescription system showed a significant decline in the incidence of errors. The legibility of handwritten prescriptions was relatively good, whereas the level of completeness was very low.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunch, Kyle J.; Williams, Laura S.; Jones, Anthony M.
The 2010 ratification of the New START Treaty has been widely regarded as a noteworthy national security achievement for both the Obama administration and the Medvedev-Putin regime, but deeper cuts are envisioned under future arms control regimes. Future verification needs will include monitoring the storage of warhead components and fissile materials and verifying dismantlement of warheads, pits, secondaries, and other materials. From both the diplomatic and technical perspectives, verification under future arms control regimes will pose new challenges. Since acceptable verification technology must protect sensitive design information and attributes, non-nuclear non-sensitive signatures may provide a significant verification tool without the use of additional information barriers. The use of electromagnetic signatures to monitor nuclear material storage containers is a promising technology with the potential to fulfill these challenging requirements. Research performed at Pacific Northwest National Laboratory (PNNL) has demonstrated that low frequency electromagnetic signatures of sealed metallic containers can be used to confirm the presence of specific components on a “yes/no” basis without revealing classified information. Arms control inspectors might use this technique to verify the presence or absence of monitored items, including both nuclear and non-nuclear materials. Although additional research is needed to study signature aspects such as uniqueness and investigate container-specific scenarios, the technique potentially offers a rapid and cost-effective tool to verify reduction and dismantlement of U.S. and Russian nuclear weapons.
Assessment of legibility and completeness of handwritten and electronic prescriptions
Albarrak, Ahmed I; Al Rashidi, Eman Abdulrahman; Fatani, Rwaa Kamil; Al Ageel, Shoog Ibrahim; Mohammed, Rafiuddin
2014-01-01
Objectives To assess the legibility and completeness of handwritten prescriptions and compare them with an electronic prescription system for medication errors. Design Prospective study. Setting King Khalid University Hospital (KKUH), Riyadh, Saudi Arabia. Subjects and methods Handwritten prescriptions were received from the clinical units of the Medicine Outpatient Department (MOPD), Primary Care Clinic (PCC) and Surgery Outpatient Department (SOPD), whereas electronic prescriptions were collected from the pediatric ward. The handwritten prescriptions were assessed for completeness using a checklist designed according to the hospital prescription and evaluated for legibility by two pharmacists. The comparison between handwritten and electronic prescription errors was evaluated based on a validated checklist adopted from previous studies. Main outcome measures Legibility and completeness of prescriptions. Results 398 prescriptions (199 handwritten and 199 e-prescriptions) were assessed. Errors were identified in 71 (35.7%) of the handwritten and 5 (2.5%) of the electronic prescriptions. A statistically significant difference (P < 0.001) was observed between handwritten and e-prescriptions in the omitted dose and omitted route of administration categories of error distribution. The rate of completeness in patient identification in handwritten prescriptions was 80.97% in MOPD, 76.36% in PCC and 85.93% in SOPD clinic units. Assessment of medication prescription completeness was 91.48% in MOPD, 88.48% in PCC, and 89.28% in SOPD. Conclusions This study revealed a high incidence of prescribing errors in handwritten prescriptions. The use of an e-prescription system showed a significant decline in the incidence of errors. The legibility of handwritten prescriptions was relatively good, whereas the level of completeness was very low. PMID:25561864
36 CFR 218.8 - Filing an objection.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Signature or other verification of authorship upon request (a scanned signature for electronic mail may be... related to the proposed project; if applicable, how the objector believes the environmental analysis or...
36 CFR 218.8 - Filing an objection.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Signature or other verification of authorship upon request (a scanned signature for electronic mail may be... related to the proposed project; if applicable, how the objector believes the environmental analysis or...
What differs in visual recognition of handwritten vs. printed letters? An fMRI study.
Longcamp, Marieke; Hlushchuk, Yevhen; Hari, Riitta
2011-08-01
In models of letter recognition, handwritten letters are considered as a particular font exemplar, not qualitatively different in their processing from printed letters. Yet, some data suggest that recognizing handwritten letters might rely on distinct processes, possibly related to motor knowledge. We applied functional magnetic resonance imaging to compare the neural correlates of perceiving handwritten letters vs. standard printed letters. Statistical analysis circumscribed to frontal brain regions involved in hand-movement triggering and execution showed that processing of handwritten letters is supported by a stronger activation of the left primary motor cortex and the supplementary motor area. At the whole-brain level, additional differences between handwritten and printed letters were observed in the right superior frontal, middle occipital, and parahippocampal gyri, and in the left inferior precentral and the fusiform gyri. The results are suggested to indicate embodiment of the visual perception of handwritten letters. Copyright © 2010 Wiley-Liss, Inc.
Analysis of an Indirect Neutron Signature for Enhanced UF6 Cylinder Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulisek, Jonathan A.; McDonald, Benjamin S.; Smith, Leon E.
2017-02-21
The International Atomic Energy Agency (IAEA) currently uses handheld gamma-ray spectrometers combined with ultrasonic wall-thickness gauges to verify the declared enrichment of uranium hexafluoride (UF6) cylinders. The current method provides relatively low accuracy for the assay of 235U enrichment, especially for natural and depleted UF6. Furthermore, the current method provides no capability to assay the absolute mass of 235U in the cylinder due to the localized instrument geometry and limited penetration of the 186-keV gamma-ray signature from 235U. Also, the current verification process is a time-consuming component of on-site inspections at uranium enrichment plants. Toward the goal of a more-capable cylinder assay method, the Pacific Northwest National Laboratory has developed the hybrid enrichment verification array (HEVA). HEVA measures both the traditional 186-keV direct signature and a non-traditional, high-energy neutron-induced signature (HEVANT). HEVANT enables full-volume assay of UF6 cylinders by exploiting the relatively larger mean free paths of the neutrons emitted from the UF6. In this work, Monte Carlo modeling is used as the basis for characterizing HEVA in terms of the individual contributions to HEVANT from nuclides and hardware components. Monte Carlo modeling is also used to quantify the intrinsic efficiency of HEVA for neutron detection in a cylinder-assay geometry. Modeling predictions are validated against neutron-induced gamma-ray spectra from laboratory measurements and a relatively large population of Type 30B cylinders spanning a range of enrichments. Implications of the analysis and findings on the viability of HEVA for cylinder verification are discussed, such as the resistance of the HEVANT signature to manipulation by the nearby placement of neutron-conversion materials.
Scott, Pippa; Edwards, Phil
2006-01-01
Background Postal questionnaires are commonly used to collect data for health studies, but non-response reduces study sample sizes and can introduce bias. Finding ways to increase the proportion of questionnaires returned would improve research quality. We sought to quantify the effect on response when researchers address participants personally by name on letters that accompany questionnaires. Methods All randomised controlled trials in a published systematic review that evaluated the effect on response of including participants' names on letters that accompany questionnaires were included. Odds ratios for response were pooled in a random effects meta-analysis and evidence for changes in effects over time was assessed using random effects meta-regression. Results Fourteen randomised controlled trials were included covering a wide range of topics. Most topics were unrelated to health or social care. The odds of response when including participants' names on letters were increased by one-fifth (pooled OR 1.18, 95% CI 1.03 to 1.34; p = 0.015). When participants' names and hand-written signatures were used in combination, the effect was a more substantial increase in response (OR 1.45, 95% CI 1.27 to 1.66; p < 0.001), corresponding to an absolute increase in the proportion of questionnaires returned of between 4% and 10%, depending on the baseline response rate. There was no evidence that the magnitude of these effects had declined over time. Conclusion This meta-analysis of the best available evidence indicates that researchers using postal questionnaires can increase response by addressing participants by name on cover letters. The effect appears to be enhanced by including hand-written signatures. PMID:16953871
A Quantum Multi-proxy Blind Signature Scheme Based on Genuine Four-Qubit Entangled State
NASA Astrophysics Data System (ADS)
Tian, Juan-Hong; Zhang, Jian-Zhong; Li, Yan-Ping
2016-02-01
In this paper, we propose a multi-proxy blind signature scheme based on controlled teleportation. Genuine four-qubit entangled state functions as quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. The security analysis shows the scheme satisfies the security features of multi-proxy signature, unforgeability, undeniability, blindness and unconditional security.
Handwritten text line segmentation by spectral clustering
NASA Astrophysics Data System (ADS)
Han, Xuecheng; Yao, Hui; Zhong, Guoqiang
2017-02-01
Since handwritten text lines are generally skewed and not clearly separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering on this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
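A toy sketch of this pipeline, assuming two synthetic "text lines" and scikit-learn's SpectralClustering in place of the paper's exact affinity construction and orthogonal k-means step:

# Binarize, treat ink pixels as graph nodes, then spectral clustering
# groups pixels into text lines. The number of lines is assumed known.
import numpy as np
from sklearn.cluster import SpectralClustering

binary = np.zeros((60, 120), dtype=bool)
binary[10:14, 5:100] = True   # toy "line" 1
binary[40:44, 5:100] = True   # toy "line" 2
ys, xs = np.nonzero(binary)
points = np.column_stack([ys, xs]).astype(float)

labels = SpectralClustering(
    n_clusters=2, affinity="rbf", gamma=0.05, assign_labels="kmeans"
).fit_predict(points)
# Each ink pixel now carries a text-line label.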
Sowan, Azizeh K.; Vaidya, Vinay U.; Soeken, Karen L.; Hilmas, Elora
2010-01-01
OBJECTIVES The use of continuous infusion medications with individualized concentrations may increase the risk for errors in pediatric patients. The objective of this study was to evaluate the effect of computerized prescriber order entry (CPOE) for continuous infusions with standardized concentrations on the frequency of pharmacy processing errors. In addition, the time to process handwritten versus computerized infusion orders was evaluated, and user satisfaction with CPOE as compared to handwritten orders was measured. METHODS Using a crossover design, 10 pharmacists in the pediatric satellite within a university teaching hospital were given test scenarios of handwritten and CPOE order sheets and asked to process infusion orders using the pharmacy system in order to generate infusion labels. Participants were given three groups of orders: five correct handwritten orders, four handwritten orders written with deliberate errors, and five correct CPOE orders. Label errors were analyzed and the time to complete the task was recorded. RESULTS Using CPOE orders, participants required less processing time per infusion order (2 min, 5 sec ± 58 sec) compared with time per infusion order in the first handwritten order sheet group (3 min, 7 sec ± 1 min, 20 sec) and the second handwritten order sheet group (3 min, 26 sec ± 1 min, 8 sec) (p<0.01). CPOE eliminated all error types except wrong concentration. With CPOE, 4% of infusions processed contained errors, compared with 26% of the first group of handwritten orders and 45% of the second group of handwritten orders (p<0.03). Pharmacists were more satisfied with CPOE orders when compared with the handwritten method (p=0.0001). CONCLUSIONS CPOE orders saved pharmacists' time and greatly improved the safety of processing continuous infusions, although not all errors were eliminated. Pharmacists were overwhelmingly satisfied with the CPOE orders. PMID:22477811
Handwritten digits recognition based on immune network
NASA Astrophysics Data System (ADS)
Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe
2011-11-01
With the development of society, handwritten digit recognition techniques have been widely applied in production and daily life, yet handwritten digit recognition remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune network classification algorithm is designed and implemented for handwritten digit recognition. The proposed algorithm builds on Jerne's immune network model for feature selection and the KNN method for classification. Its characteristic is a novel network with parallel computing and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN and SVM. The results show that the novel classification algorithm based on an immune network gives promising performance and stable behavior for handwritten digit recognition.
Signature-based store checking buffer
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-06-02
A system and method for optimizing redundant output verification are provided. A hardware-based store fingerprint buffer receives multiple instances of output from multiple instances of computation. The store fingerprint buffer generates a signature from the content included in the multiple instances of output. When a barrier is reached, the store fingerprint buffer uses the signature to verify that the content is error-free.
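A software analogue of the mechanism this patent abstract describes, with the hash choice and API as illustrative assumptions:

# Redundant computations push their outputs into a fingerprint buffer,
# which folds them into compact signatures; at a barrier, equal
# signatures mean the redundant outputs matched.
import hashlib

class StoreFingerprintBuffer:
    def __init__(self):
        self._digests = {}

    def record(self, instance_id, output_bytes):
        h = self._digests.setdefault(instance_id, hashlib.sha256())
        h.update(output_bytes)

    def barrier_check(self):
        sigs = {d.hexdigest() for d in self._digests.values()}
        return len(sigs) == 1  # True: all redundant instances agree

buf = StoreFingerprintBuffer()
for inst in ("a", "b"):
    buf.record(inst, b"store:0x1000=42")
assert buf.barrier_check()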
A Quantum Proxy Blind Signature Scheme Based on Genuine Five-Qubit Entangled State
NASA Astrophysics Data System (ADS)
Zeng, Chuan; Zhang, Jian-Zhong; Xie, Shu-Cui
2017-06-01
In this paper, a quantum proxy blind signature scheme based on controlled quantum teleportation is proposed. This scheme uses a genuine five-qubit entangled state as the quantum channel and adopts the classical Vernam algorithm to blind the message. We use the physical characteristics of quantum mechanics to implement delegation, signature and verification. Security analysis shows that our scheme is valid and satisfies the properties of a proxy blind signature, such as blindness, verifiability, unforgeability and undeniability.
Dust devil signatures in infrasound records of the International Monitoring System
NASA Astrophysics Data System (ADS)
Lorenz, Ralph D.; Christie, Douglas
2015-03-01
We explore whether dust devils have a recognizable signature in infrasound array records, since several Comprehensive Nuclear-Test-Ban Treaty verification stations conducting continuous measurements with microbarometers are in desert areas which see dust devils. The passage of dust devils (and other boundary layer vortices, whether dust laden or not) causes a local temporary drop in pressure: the high-pass time domain filtering in microbarometers results in a "heartbeat" signature, which we observe at the Warramunga station in Australia. We also observe a ~50 min pseudoperiodicity in the occurrence of these signatures and some higher-frequency infrasound. Dust devils do not significantly degrade the treaty verification capability. The pipe arrays for spatial averaging used in infrasound monitoring degrade the detection efficiency of small devils, but the long observation time may allow a useful census of large vortices, and thus, the high-sensitivity infrasonic array data from the monitoring network can be useful in studying columnar vortices in the lower atmosphere.
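The "heartbeat" the authors describe can be reproduced in a toy model: a smooth temporary pressure drop passed through a high-pass response (approximated below with a Butterworth filter; the dip amplitude, time scale, and sample rate are assumptions) comes out as a biphasic pulse:

# A smooth vortex-passage pressure dip, high-pass filtered as a
# microbarometer front end would, yields a biphasic "heartbeat".
import numpy as np
from scipy.signal import butter, lfilter

fs = 20.0  # Hz, assumed sample rate
t = np.arange(0, 120, 1 / fs)
drop = -30 * np.exp(-((t - 60) / 5.0) ** 2)  # assumed dip, ~5 s scale

b, a = butter(2, 0.02 / (fs / 2), btype="highpass")
heartbeat = lfilter(b, a, drop)  # biphasic pulse like the one observed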
Renier, M; Gnoato, F; Tessari, A; Formilan, M; Busonera, F; Albanese, P; Sartori, G; Cester, A
2016-06-01
Some clinical conditions, including dementia, compromise cognitive functions involved in decision-making processes, with repercussions on the ability to sign a valid will. Because of the increasing number of aged people with cognitive impairment, there is an acute and growing need for evidence-based assessment of decision-making capacity. Our study investigates the relationship between writing abilities and cognitive integrity to see whether it is possible to make inferences about decision-making capacity through handwriting analysis. We also investigated the relationship between signing ability and cognitive integrity. Thirty-six participants with a diagnosis of MCI and 38 participants with a diagnosis of initial dementia were recruited. For each subject we collected two signature samples, a current one and an earlier one, and an extract of spontaneous writing. Furthermore, we administered a neuropsychological battery to investigate the cognitive functions involved in decision-making. We found significant correlations between spontaneous writing indexes and neuropsychological test results. Nonetheless, the index of signature deterioration does not correlate with the level of cognitive decline. Our results suggest that a careful analysis of spontaneous writing can be useful for making inferences about decision-making capacity, whereas great caution should be taken in attributing validity to the handwritten signatures of subjects with MCI or dementia. The analysis of spontaneous writing can be a reliable aid in cases of retrospective evaluation of cognitive integrity. On the other hand, the ability to sign is not an index of cognitive integrity.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-12
... (original and update), and verification audit; names of the person(s) who completed the self-assessment... of the self assessment, date of the verification audit report, name of the auditor, signature and... self assessment, (2) conducting a baseline survey of the regulated industry, and (3) obtaining an...
Hemispheric Differences in Processing Handwritten Cursive
ERIC Educational Resources Information Center
Hellige, Joseph B.; Adamson, Maheen M.
2007-01-01
Hemispheric asymmetry was examined for native English speakers identifying consonant-vowel-consonant (CVC) non-words presented in standard printed form, in standard handwritten cursive form or in handwritten cursive with the letters separated by small gaps. For all three conditions, fewer errors occurred when stimuli were presented to the right…
Improvement of a Quantum Proxy Blind Signature Scheme
NASA Astrophysics Data System (ADS)
Zhang, Jia-Lei; Zhang, Jian-Zhong; Xie, Shu-Cui
2018-02-01
An improvement of a quantum proxy blind signature scheme is proposed in this paper. A six-qubit entangled state functions as the quantum channel. In our scheme, a trusted party, Trent, is introduced so as to avoid David's dishonest behavior. The receiver David verifies the signature with the help of Trent. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, delegation, signature and verification. Security analysis proves that our scheme has the properties of undeniability, unforgeability and anonymity, and can resist some common attacks.
Quantum Proxy Multi-Signature Scheme Using Genuinely Entangled Six Qubits State
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Wang, Huai-Sheng; Li, Peng-Fei
2013-04-01
A quantum proxy multi-signature scheme is presented based on controlled teleportation. A genuinely entangled six-qubit quantum state functions as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. Quantum key distribution and the one-time pad are adopted in our scheme, which guarantees not only the unconditional security of the scheme but also the anonymity of the message owner.
Multiparty Quantum Blind Signature Scheme Based on Graph States
NASA Astrophysics Data System (ADS)
Jian-Wu, Liang; Xiao-Shu, Liu; Jin-Jing, Shi; Ying, Guo
2018-05-01
A multiparty quantum blind signature scheme is proposed based on the principle of graph states, in which the unitary operations on graph state particles can be applied to generate the quantum blind signature and achieve verification. Different from classical blind signatures, which are based on mathematical difficulty, the scheme guarantees not only anonymity but also unconditional security. The analysis shows that the length of the signature generated in our scheme does not become longer as the number of signers increases, and it is easy to increase or decrease the number of signers.
An Improved Quantum Proxy Blind Signature Scheme Based on Genuine Seven-Qubit Entangled State
NASA Astrophysics Data System (ADS)
Yang, Yuan-Yuan; Xie, Shu-Cui; Zhang, Jian-Zhong
2017-07-01
An improved quantum proxy blind signature scheme based on controlled teleportation is proposed in this paper. A genuine seven-qubit entangled state functions as the quantum channel. We use the physical characteristics of quantum mechanics to implement delegation, signature and verification. Security analysis shows that our scheme is unforgeable, undeniable, blind and unconditionally secure. Meanwhile, we introduce a trusted party to provide higher security; the trusted party is costless.
A quantum proxy group signature scheme based on an entangled five-qubit state
NASA Astrophysics Data System (ADS)
Wang, Meiling; Ma, Wenping; Wang, Lili; Yin, Xunru
2015-09-01
A quantum proxy group signature (QPGS) scheme based on controlled teleportation is presented, using an entangled five-qubit quantum state as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. The security of the scheme is guaranteed by the entanglement correlations of the entangled five-qubit state, by secret keys based on quantum key distribution (QKD) and by the one-time pad algorithm, all of which have been proven to be unconditionally secure, as well as by the anonymity of the signature.
21 CFR 1311.25 - Requirements for obtaining a CSOS digital certificate.
Code of Federal Regulations, 2013 CFR
2013-04-01
... public keys, the corresponding private key must be used to sign the certificate request. Verification of the signature using the public key in the request will serve as proof of possession of the private key. ... certification of the public digital signature key. After the request is approved, the Certification Authority...
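The proof-of-possession step quoted in this provision can be illustrated with any asymmetric-signature library; a minimal sketch (not the CSOS system itself), assuming an EC key, SHA-256, and a placeholder request payload:

# The requester signs the certificate request with the private key; the
# Certification Authority verifies it with the enclosed public key, which
# proves possession of the private key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

request_body = b"CSOS-like certificate request payload"  # placeholder
private_key = ec.generate_private_key(ec.SECP256R1())
signature = private_key.sign(request_body, ec.ECDSA(hashes.SHA256()))

# CA side: a successful verification is the proof of possession.
try:
    private_key.public_key().verify(signature, request_body,
                                    ec.ECDSA(hashes.SHA256()))
    print("proof of possession verified")
except InvalidSignature:
    print("request rejected")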
21 CFR 1311.25 - Requirements for obtaining a CSOS digital certificate.
Code of Federal Regulations, 2012 CFR
2012-04-01
... public keys, the corresponding private key must be used to sign the certificate request. Verification of the signature using the public key in the request will serve as proof of possession of the private key. ... certification of the public digital signature key. After the request is approved, the Certification Authority...
21 CFR 1311.25 - Requirements for obtaining a CSOS digital certificate.
Code of Federal Regulations, 2014 CFR
2014-04-01
... public keys, the corresponding private key must be used to sign the certificate request. Verification of the signature using the public key in the request will serve as proof of possession of the private key. ... certification of the public digital signature key. After the request is approved, the Certification Authority...
Zagoris, Konstantinos; Pratikakis, Ioannis; Gatos, Basilis
2017-05-03
Word spotting strategies employed in historical handwritten documents face many challenges due to variation in writing style and intense degradation. In this paper, a new method that permits effective word spotting in handwritten documents is presented; it relies upon document-oriented local features that take into account information around representative keypoints, as well as a matching process that incorporates spatial context in a local proximity search, without using any training data. Experimental results on four historical handwritten datasets for two different scenarios (segmentation-based and segmentation-free) using standard evaluation measures show the improved performance achieved by the proposed methodology.
A Quantum Multi-Proxy Weak Blind Signature Scheme Based on Entanglement Swapping
NASA Astrophysics Data System (ADS)
Yan, LiLi; Chang, Yan; Zhang, ShiBin; Han, GuiHua; Sheng, ZhiWei
2017-02-01
In this paper, we present a multi-proxy weak blind signature scheme based on quantum entanglement swapping of Bell states. In the scheme, proxy signers can complete the signature on behalf of the original signer with his/her authorization. It can be applied to electronic voting systems, electronic payment systems, etc. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. It guarantees not only unconditional security but also the anonymity of the message owner. The security analysis shows that the scheme satisfies the security features of a multi-proxy weak signature: signers cannot disavow their signatures, the signature cannot be forged by others, and the message owner can be traced.
Continuous-variable quantum homomorphic signature
NASA Astrophysics Data System (ADS)
Li, Ke; Shang, Tao; Liu, Jian-wei
2017-10-01
Quantum cryptography is believed to be unconditionally secure because its security is ensured by physical laws rather than computational complexity. According to spectral characteristics, quantum information can be classified into two categories, namely discrete variables and continuous variables. Continuous-variable quantum protocols have gained much attention for their ability to transmit more information at lower cost. To verify the identities of different data sources in a quantum network, we propose a continuous-variable quantum homomorphic signature scheme. It is based on continuous-variable entanglement swapping and provides additive and subtractive homomorphism. Security analysis shows that the proposed scheme is secure against replay, forgery and repudiation. Even under nonideal conditions, it supports effective verification within a certain verification threshold.
A Quantum Proxy Signature Scheme Based on Genuine Five-qubit Entangled State
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Huang, Jun; Yu, Yao-Feng; Jiang, Xiu-Li
2014-09-01
In this paper a very efficient and secure proxy signature scheme is proposed. It is based on controlled quantum teleportation. A genuine five-qubit entangled state functions as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. Quantum key distribution and the one-time pad are adopted in our scheme, which guarantees not only the unconditional security of the scheme but also the anonymity of the message owner.
Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Jane, Archana P.; Pund, Mukesh A.
2012-03-01
The growing need for handwritten Marathi character recognition in Indian offices, such as passport and railway offices, has made it a vital area of research. Similarly shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods discard unnecessary patterns in an image, whereas the adaptive smoothing technique produces smooth character shapes. Combining both approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in handwritten character recognition. The approach of combining adaptive smoothing and feature extraction gives better results (approximately 75-100%) and the expected outcomes.
Revocable identity-based proxy re-signature against signing key exposure.
Yang, Xiaodong; Chen, Chunlin; Ma, Tingchun; Wang, Jinli; Wang, Caifen
2018-01-01
Identity-based proxy re-signature (IDPRS) is a novel cryptographic primitive that allows a semi-trusted proxy to convert a signature under one identity into another signature under another identity on the same message by using a re-signature key. Due to this transformation function, IDPRS is very useful in constructing privacy-preserving schemes for various information systems. Key revocation functionality is important in practical IDPRS for managing users dynamically; however, the existing IDPRS schemes do not provide revocation mechanisms that allow the removal of misbehaving or compromised users from the system. In this paper, we first introduce a notion called revocable identity-based proxy re-signature (RIDPRS) to achieve the revocation functionality. We provide a formal definition of RIDPRS as well as its security model. Then, we present a concrete RIDPRS scheme that can resist signing key exposure and prove that the proposed scheme is existentially unforgeable against adaptive chosen identity and message attacks in the standard model. To further improve the performance of signature verification in RIDPRS, we introduce a notion called server-aided revocable identity-based proxy re-signature (SA-RIDPRS). Moreover, we extend the proposed RIDPRS scheme to the SA-RIDPRS scheme and prove that this extended scheme is secure against adaptive chosen message and collusion attacks. The analysis results show that our two schemes remain efficient in terms of computational complexity when implementing user revocation procedures. In particular, in the SA-RIDPRS scheme, the verifier needs to perform only a bilinear pairing and four exponentiation operations to verify the validity of the signature. Compared with other IDPRS schemes in the standard model, our SA-RIDPRS scheme greatly reduces the computation overhead of verification.
2011-09-01
#!/bin/sh
#
# Demo program to show cryptographic signature
# generation on a UNIX system
#
SHA=/bin/sha256
CSDB=/tmp/csdb
CODEBASE=.
touch "$CSDB"
find "$CODEBASE" -type f ...

... artifacts generated earlier.

#!/bin/sh
#
# Demo program to show cryptographic signature
# verification on a UNIX system
#
SHA=/bin/sha256
CSDB=/tmp...
Dawdy, M R; Munter, D W; Gilmore, R A
1997-03-01
This study was designed to examine the relationship between patient entry rates (a measure of physician work load) and documentation errors/omissions in both handwritten and dictated emergency treatment records. The study was carried out in two phases. Phase I examined handwritten records and Phase II examined dictated and transcribed records. A total of 838 charts for three common chief complaints (chest pain, abdominal pain, asthma/chronic obstructive pulmonary disease) were retrospectively reviewed and scored for the presence or absence of 11 predetermined criteria. Patient entry rates were determined by reviewing the emergency department patient registration logs. The data were analyzed using simple correlation and linear regression analysis. A positive correlation was found between patient entry rates and documentation errors in handwritten charts. No such correlation was found in the dictated charts. We conclude that work load may negatively affect documentation accuracy when charts are handwritten. However, the use of dictation services may minimize or eliminate this effect.
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in automatic recognition systems from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides a performance improvement over MLE in this application.
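For reference, the two criteria contrasted in this abstract are conventionally written as follows (standard textbook forms, not taken from the paper): MLE maximizes each observation's likelihood under its own class model, while MMIE also penalizes the likelihood under competing class models:

\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \sum_{i} \log p(x_i \mid c_i; \theta),
\qquad
\hat{\theta}_{\mathrm{MMIE}} = \arg\max_{\theta} \sum_{i} \log \frac{p(x_i \mid c_i; \theta)\, P(c_i)}{\sum_{c'} p(x_i \mid c'; \theta)\, P(c')}.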
Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.
Bhattacharya, Ujjwal; Chaudhuri, B B
2009-03-01
This article primarily concerns the problem of isolated handwritten numeral recognition for major Indian scripts. The principal contributions presented here are (a) the pioneering development of two databases for handwritten numerals of the two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet-based multiresolution representations and multilayer perceptron classifiers and (c) the application of (b) to the recognition of mixed handwritten numerals of three Indian scripts: Devanagari, Bangla and English. The present databases include respectively 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla collected from real-life situations, and these can be made available free of cost to researchers of other academic institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurs even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of the three classifiers of the previous stages. This scheme has been extended to the situation where the script of a document is not known a priori or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mail and table-form documents.
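The coarse-to-fine cascade with rejection can be sketched as follows; the classifier interface (scikit-learn-style predict_proba), the 0.9 confidence cutoff, and the final combiner are assumptions for illustration:

# Each stage is a classifier at a finer resolution; a sample is accepted
# when a stage is confident, and a final combiner handles what every
# stage rejected.
import numpy as np

def cascade_predict(stages, combiner, x_multires, threshold=0.9):
    outputs = []
    for clf, x in zip(stages, x_multires):      # coarse -> fine
        probs = clf.predict_proba(x.reshape(1, -1))[0]
        outputs.append(probs)
        if probs.max() >= threshold:            # confident: stop early
            return int(np.argmax(probs))
    # All stages rejected: combine their outputs for a final attempt.
    stacked = np.concatenate(outputs).reshape(1, -1)
    return int(combiner.predict(stacked)[0])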
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, J. N.; Chin, M. R.; Sjoden, G. E.
2013-07-01
A mobile 'drive by' passive radiation detection system to be applied in special nuclear materials (SNM) storage facilities for validation and compliance purposes has been designed through the use of computational modeling and new radiation detection methods. This project was the result of work over a 1 year period to create optimal design specifications, including the creation of 3D models using both Monte Carlo and deterministic codes to characterize the gamma and neutron leakage out of each surface of SNM-bearing canisters. Results were compared and agreement was demonstrated between both models. Container leakages were then used to determine the expected reaction rates, using transport theory, in the detectors when placed at varying distances from the can. A 'typical' background signature was incorporated to determine the minimum signatures versus the probability of detection to evaluate moving source protocols with collimation. This established the criteria for verification of source presence and time gating at a given vehicle speed. New methods for the passive detection of SNM were employed and shown to give reliable identification of age and material for highly enriched uranium (HEU) and weapons grade plutonium (WGPu). The finalized 'Mobile Pit Verification System' (MPVS) design demonstrated that a 'drive-by' detection system, collimated and operating at nominally 2 mph, is capable of rapidly verifying each and every weapon pit stored in regularly spaced, shelved storage containers, using completely passive gamma and neutron signatures for HEU and WGPu. This system is ready for real evaluation to demonstrate passive total material accountability in storage facilities. (authors)
St John, E R; Scott, A J; Irvine, T E; Pakzad, F; Leff, D R; Layer, G T
2017-08-01
Completion of hand-written consent forms for surgical procedures may suffer from missing or inaccurate information, poor legibility and high variability. We audited the completion of hand-written consent forms and trialled a web-based application to generate modifiable, procedure-specific consent forms. The investigation comprised two phases at separate UK hospitals. In phase one, the completion of individual responses in hand-written consent forms for a variety of procedures were prospectively audited. Responses were categorised into three domains (patient details, procedure details and patient sign-off) that were considered "failed" if a contained element was not correct and legible. Phase two was confined to a breast surgical unit where hand-written consent forms were assessed as for phase one and interrogated for missing complications by two independent experts. An electronic consent platform was introduced and electronically-produced consent forms assessed. In phase one, 99 hand-written consent forms were assessed and the domain failure rates were: patient details 10%; procedure details 30%; and patient sign-off 27%. Laparoscopic cholecystectomy was the most common procedure (7/99) but there was significant variability in the documentation of complications: 12 in total, a median of 6 and a range of 2-9. In phase two, 44% (27/61) of hand-written forms were missing essential complications. There were no domain failures amongst 29 electronically-produced consent forms and no variability in the documentation of potential complications. Completion of hand-written consent forms suffers from wide variation and is frequently suboptimal. Electronically-produced, procedure-specific consent forms can improve the quality and consistency of consent documentation. Copyright © 2015 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
Arbitrated Quantum Signature with Hamiltonian Algorithm Based on Blind Quantum Computation
NASA Astrophysics Data System (ADS)
Shi, Ronghua; Ding, Wanting; Shi, Jinjing
2018-03-01
A novel arbitrated quantum signature (AQS) scheme is proposed motivated by the Hamiltonian algorithm (HA) and blind quantum computation (BQC). The generation and verification of signature algorithm is designed based on HA, which enables the scheme to rely less on computational complexity. It is unnecessary to recover original messages when verifying signatures since the blind quantum computation is applied, which can improve the simplicity and operability of our scheme. It is proved that the scheme can be deployed securely, and the extended AQS has some extensive applications in E-payment system, E-government, E-business, etc.
Filtering methods for broadcast authentication against PKC-based denial of service in WSN: a survey
NASA Astrophysics Data System (ADS)
Afianti, Farah; Wirawan, Iwan; Suryani, Titiek
2017-11-01
Broadcast authentication is used to distinguish legitimate packets from an authorized user; a received packet can then be forwarded or used for further processing. The use of digital signatures is one promising method, but it carries high complexity, especially in the verification process. Adversaries exploit this to force nodes to verify large amounts of false packet data. This kind of denial of service (DoS) attack on the main signature can be mitigated by using pre-authentication methods as a first layer that filters false packet data. The objective of the filter is not to replace the main signature but to complement the actual verification in the sensor node. This paper compares the computation, storage, and communication costs of several filters. The results show that the Pre-Authenticator and DoS-Attack-Resistant schemes have lower overhead than the others, though they require a powerful sender. Moreover, the key chain is a promising method because of its efficiency and effectiveness.
Piezoelectric sensor pen for dynamic signature verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
EerNisse, E.P.; Land, C.E.; Snelling, J.B.
The concept of using handwriting dynamics for electronic identification is discussed. A piezoelectric sensor pen for obtaining the pen point dynamics during writing is described. Design equations are derived and details of an operating device are presented. Typical output waveforms are shown to demonstrate the operation of the pen and to show the dissimilarities between dynamics of a genuine signature and an attempted forgery.
Reduction in chemotherapy order errors with computerized physician order entry.
Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J
2014-01-01
To measure the number and type of errors in chemotherapy order composition associated with three sequential ordering methods: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders was reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets (those requiring significant rework for clarification) was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially, first by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.
An adaptive deep Q-learning strategy for handwritten digit recognition.
Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min
2018-02-22
Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, the recognition accuracy and running time still need to be improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning and the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of the original images using an adaptive deep auto-encoder (ADAE), and the extracted features are considered as the current states of the Q-learning algorithm. Second, Q-ADBN receives a Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results on the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.
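A toy sketch of the decision-making half of Q-ADBN; the random features standing in for auto-encoder states, the linear Q-function, and the learning rate are assumptions, and only a one-step Q-update is illustrated, not the paper's full model:

# States are assumed to come from a pretrained auto-encoder; actions are
# digit labels; a +1/-1 reward drives a bandit-style Q-learning update.
import numpy as np

n_features, n_actions = 64, 10
W = np.zeros((n_actions, n_features))   # linear Q(s, a) = W[a] @ s
alpha, epsilon = 0.01, 0.1
rng = np.random.default_rng(0)

def q_update(state, label):
    # epsilon-greedy action = tentative digit prediction
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(W @ state))
    reward = 1.0 if action == label else -1.0
    # One-step update: move Q(s, a) toward the observed reward.
    td_error = reward - W[action] @ state
    W[action] += alpha * td_error * state
    return action

state = rng.normal(size=n_features)
q_update(state, label=3)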
Sunspot drawings handwritten character recognition method based on deep learning
NASA Astrophysics Data System (ADS)
Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li
2016-05-01
High-accuracy recognition of the handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the results in a database. This paper presents a robust deep learning method for recognizing the handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has proved truly successful at training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences and obtain the daily full-disc sunspot numbers and sunspot areas from the sunspot drawings. The experimental results show that the proposed method achieves a high recognition accuracy.
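A minimal CNN of the kind the abstract describes, for single-character crops; the 28x28 input size, layer widths, and 10-class output are illustrative assumptions (sketched with the Keras API):

# Small convolutional classifier for character images cropped from
# scanned drawings.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # e.g. digits 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])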
Austin, Peter David; Hand, Kieran Sean; Elia, Marinos
2014-02-01
Handwritten recycled-paper prescriptions for parenteral nutrition (PN) may become a concentrated source of viable contaminants, including pathogens. This study examined the effect of using fresh printouts of electronic prescriptions on these contaminants. Cellulose sponge stick swabs with neutralizing buffer were used to sample the surfaces of PN prescriptions (n = 32 handwritten recycled; n = 32 printed electronic) on arrival to the pharmacy or following printing, and PN prescriptions and bags packaged together during delivery (n = 38 handwritten recycled; n = 34 printed electronic) on arrival to hospital wards. Different media plates and standard microbiological procedures identified the type and number of contaminants. Staphylococcus aureus, fungi, and mold were infrequent contaminants. Nonspecific aerobes more frequently contaminated handwritten recycled than printed electronic prescriptions (into pharmacy, 94% vs 44%, Fisher exact test, P < .001; onto wards, 76% vs 50%, P = .028), with greater numbers of colony-forming units (CFU) (into pharmacy, median 130 [interquartile range (IQR), 65-260] vs 0 [0-75], Mann-Whitney U test, P < .001; onto wards, median 120 [15-320] vs 10 [0-40], P = .001). Packaging with handwritten recycled prescriptions led to more frequent nonspecific aerobic bag-surface contamination (63% vs 41%, Fisher exact test, P = .097), with greater numbers of CFU (median 40 [IQR, 0-80] vs 0 [0-40], Mann-Whitney U test, P = .036). The use of printed electronic PN prescriptions can reduce the microbial load contaminating surfaces, which otherwise compromises aseptic technique.
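The two statistical tests named above can be reproduced with SciPy as follows; the 2x2 counts and CFU lists are illustrative stand-ins, not the study's raw data.

    from scipy.stats import fisher_exact, mannwhitneyu

    # Fisher exact test on contamination frequencies
    # (rows: handwritten recycled vs printed electronic; cols: contaminated vs clean).
    odds, p = fisher_exact([[30, 2], [14, 18]])
    print(f"Fisher exact: P = {p:.4f}")

    # Mann-Whitney U test on colony-forming unit counts per prescription.
    cfu_handwritten = [130, 65, 260, 120, 15, 320]
    cfu_printed = [0, 75, 10, 0, 40, 0]
    stat, p = mannwhitneyu(cfu_handwritten, cfu_printed, alternative="two-sided")
    print(f"Mann-Whitney U: P = {p:.4f}")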
Public-key quantum digital signature scheme with one-time pad private-key
NASA Astrophysics Data System (ADS)
Chen, Feng-Lin; Liu, Wan-Fang; Chen, Su-Gen; Wang, Zhi-Hua
2018-01-01
A quantum digital signature scheme is proposed based on a public-key quantum cryptosystem. In the scheme, the verification public-key is derived from the signer's identity information (such as an e-mail address) following the idea of identity-based encryption, and the signature private-key is generated by a one-time pad (OTP) protocol. The public-key and private-key pair consists of classical bits, but the signature cipher consists of quantum qubits. After the signer announces the public-key and generates the final quantum signature, each verifier can publicly verify whether the signature is valid using the public-key and the quantum digital digest. Analysis shows that the proposed scheme satisfies non-repudiation and unforgeability. Information-theoretic security of the scheme is ensured by quantum indistinguishability and the OTP protocol. Being based on a public-key cryptosystem, the proposed scheme is easier to realize under current technical conditions than other quantum signature schemes.
Rotation Reveals the Importance of Configural Cues in Handwritten Word Perception
Barnhart, Anthony S.; Goldinger, Stephen D.
2013-01-01
A dramatic perceptual asymmetry occurs when handwritten words are rotated 90° in either direction. Those rotated in a direction consistent with their natural tilt (typically clockwise) become much more difficult to recognize, relative to those rotated in the opposite direction. In Experiment 1, we compared computer-printed and handwritten words, all equated for degrees of leftward and rightward tilt, and verified the phenomenon: The effect of rotation was far larger for cursive words, especially when rotated in a tilt-consistent direction. In Experiment 2, we replicated this pattern with all items presented in visual noise. In both experiments, word frequency effects were larger for computer-printed words and did not interact with rotation. The results suggest that handwritten word perception requires greater configural processing, relative to computer print, because handwritten letters are variable and ambiguous. When words are rotated, configural processing suffers, particularly when rotation exaggerates natural tilt. Our account is similar to theories of the “Thatcher Illusion,” wherein face inversion disrupts holistic processing. Together, the findings suggest that configural, word-level processing automatically increases when people read handwriting, as letter-level processing becomes less reliable. PMID:23589201
Text-image alignment for historical handwritten documents
NASA Astrophysics Data System (ADS)
Zinger, S.; Nerbonne, J.; Schomaker, L.
2009-01-01
We describe our work on text-image alignment in the context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set: images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines with their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting serves as a baseline. We then show that relative lengths, i.e. the proportions of words in their lines, can be used to improve the alignment results considerably. To take relative word length into account, we define expressions for the cost function that must be minimized to align text words with their images. We apply right-to-left alignment as well as alignment based on exhaustive search. Quality assessment shows fully correct alignment for 69% of words from 100 lines, or 90% when correct and partially correct alignments are combined.
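A simplified sketch of the exhaustive-search variant: candidate inter-word gaps are detected in the line image, and the subset whose segment proportions best match the word-length proportions is kept. The quadratic cost below is an assumption standing in for the paper's exact expressions.

    import itertools
    import numpy as np

    def alignment_cost(bounds, words, line_width):
        """Squared mismatch between image-segment proportions and word-length proportions."""
        seg = np.diff(bounds) / line_width
        txt = np.array([len(w) for w in words], float)
        txt /= txt.sum()
        return float(((seg - txt) ** 2).sum())

    def align(candidate_gaps, words, line_width):
        """Exhaustive search: choose len(words)-1 gap positions minimizing the cost."""
        best = None
        for gaps in itertools.combinations(sorted(candidate_gaps), len(words) - 1):
            bounds = np.array([0.0, *gaps, line_width])
            cost = alignment_cost(bounds, words, line_width)
            if best is None or cost < best[0]:
                best = (cost, bounds)
        return best

    cost, bounds = align([40, 95, 160, 210], ["tres", "vacas", "negras"], 260)
    print(cost, bounds)   # chosen word boundaries in pixels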
Zhi, Naiqian; Jaeger, Beverly Kris; Gouldstone, Andrew; Sipahi, Rifat; Frank, Samuel
2017-03-01
Detection of changes in micrographia as a manifestation of symptomatic progression or therapeutic response in Parkinson's disease (PD) is challenging, as such changes can be subtle. A computerized toolkit based on quantitative analysis of handwriting samples would be valuable, as it could supplement and support clinical assessments, help monitor micrographia, and link it to PD. Such a toolkit would be especially useful if it could detect subtle yet relevant changes in handwriting morphology, thus enhancing the resolution of the detection procedure. This requires a set of metrics sensitive enough to detect and discern micrographia with specificity. Several metrics that are sensitive to the characteristics of micrographia were developed, with minimal sensitivity to confounding handwriting artifacts. These metrics capture character size reduction, ink utilization, and pixel density within a writing sample from left to right. They are used here to "score" handwritten signatures of 12 different individuals corresponding to healthy and symptomatic PD conditions, along with control signatures that had been artificially reduced in size for comparison purposes. Moreover, metric analyses of samples from ten of the 12 individuals for whom the clinical diagnosis time is known show considerable informative variation when applied to static signature samples obtained before and after diagnosis. In particular, a measure called pixel density variation showed statistically significant differences between two comparison groups of remote signature recordings, earlier versus recent, based on independent and paired t-test analyses of a total of 40 signature samples. The quantitative framework developed here has the potential to be used in future controlled experiments to study micrographia and its links to PD from various aspects, including monitoring and assessment of applied interventions and treatments. The inherent value of this methodology is further enhanced by its reliance solely on static signatures, not requiring dynamic sampling with specialized equipment.
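A hypothetical re-creation of a metric of this kind: ink density is measured in left-to-right strips of a binarized signature, and its coefficient of variation summarizes how the density profile changes across the sample. The bin count and the summary statistic are assumptions, not the authors' exact definitions.

    import numpy as np

    def pixel_density_profile(binary_img, n_bins=10):
        """Mean ink density (1 = ink) in n_bins vertical strips, left to right."""
        h, w = binary_img.shape
        edges = np.linspace(0, w, n_bins + 1).astype(int)
        return np.array([binary_img[:, a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

    def pixel_density_variation(binary_img, n_bins=10):
        """Coefficient of variation of the strip densities."""
        p = pixel_density_profile(binary_img, n_bins)
        return p.std() / p.mean()

    img = (np.random.rand(80, 300) < 0.05).astype(float)   # toy binarized "signature"
    print(pixel_density_variation(img))

Scores computed this way for earlier versus recent signatures could then be compared with the independent and paired t-tests mentioned above.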
New efficient algorithm for recognizing handwritten Hindi digits
NASA Astrophysics Data System (ADS)
El-Sonbaty, Yasser; Ismail, Mohammed A.; Karoui, Kamal
2001-12-01
In this paper a new algorithm for recognizing handwritten Hindi digits is proposed. The proposed algorithm uses the topological characteristics of the given digits, combined with their statistical properties, to extract a set of features for digit classification. The experiments use 10,000 handwritten digits, of which 1,100 are used for training and another 5,500 unseen digits for testing. The recognition rate reached 97.56%, with a substitution rate of 1.822% and a rejection rate of 0.618%.
Comparison of crisp and fuzzy character networks in handwritten word recognition
NASA Technical Reports Server (NTRS)
Gader, Paul; Mohamed, Magdi; Chiang, Jung-Hsien
1992-01-01
Experiments involving handwritten word recognition on words taken from images of handwritten address blocks from the United States Postal Service mailstream are described. The word recognition algorithm relies on the use of neural networks at the character level. The neural networks are trained using crisp and fuzzy desired outputs. The fuzzy outputs were defined using a fuzzy k-nearest neighbor algorithm. The crisp networks slightly outperformed the fuzzy networks at the character level but the fuzzy networks outperformed the crisp networks at the word level.
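A sketch of how fuzzy desired outputs can be derived with a fuzzy k-nearest-neighbor rule; this follows the classic Keller-style membership assignment, which may differ in detail from the scheme used in the paper.

    import numpy as np

    def fuzzy_memberships(X, y, n_classes, k=5):
        """Soften one-hot targets by the class mix of each sample's k nearest neighbours."""
        U = np.zeros((len(X), n_classes))
        for i in range(len(X)):
            d = np.linalg.norm(X - X[i], axis=1)
            nn = np.argsort(d)[1:k + 1]                    # skip the sample itself
            counts = np.bincount(y[nn], minlength=n_classes) / k
            U[i] = 0.49 * counts
            U[i, y[i]] += 0.51                             # own class keeps the majority share
        return U

    X = np.random.rand(100, 2)
    y = (X[:, 0] > 0.5).astype(int)
    print(fuzzy_memberships(X, y, n_classes=2)[:3])   # fuzzy targets for network training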
An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks.
Zhu, Hongfei; Tan, Yu-An; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang
2018-05-22
With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people's lives through systems such as e-payment and e-voting. However, in these two systems, the state-of-the-art authentication protocols based on traditional number theory cannot withstand a quantum computer attack. In order to protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theory research unit (NTRU) lattice; the scheme mainly uses rejection sampling instead of constructing a trapdoor. Meanwhile, the scheme does not depend on a complex public key infrastructure and can resist quantum computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove our scheme is secure in the random oracle model and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms existing identity-based blind signature schemes in signing and verification speed, and outperforms other lattice-based blind signatures in signing speed, verification speed, and signing secret key size.
17 CFR 1.4 - Electronic signatures, acknowledgments and verifications.
Code of Federal Regulations, 2014 CFR
2014-04-01
... commission merchant or introducing broker, a retail forex customer of a retail foreign exchange dealer or..., retail forex customer, participant, client, counterparty, swap dealer, or major swap participant will be...
17 CFR 1.4 - Electronic signatures, acknowledgments and verifications.
Code of Federal Regulations, 2013 CFR
2013-04-01
... commission merchant or introducing broker, a retail forex customer of a retail foreign exchange dealer or..., retail forex customer, participant, client, counterparty, swap dealer, or major swap participant will be...
Handwritten recognition of Tamil vowels using deep learning
NASA Astrophysics Data System (ADS)
Ram Prashanth, N.; Siddarth, B.; Ganesh, Anirudh; Naveen Kumar, Vaegae
2017-11-01
We come across a large volume of handwritten text in our daily lives, and handwritten character recognition has long been an important area of research in pattern recognition. The complexity of the task varies among languages, largely because of language-specific properties such as the similarity between characters, the variety of distinct shapes, and the number of characters. There have been numerous works on recognizing English alphabets, with laudable success, but regional languages have not been dealt with as frequently or with similar accuracy. In this paper, we explore the performance of Deep Belief Networks in the classification of handwritten Tamil vowels and compare the results obtained. The proposed method shows satisfactory recognition accuracy in light of the difficulties regional languages pose, such as the similarity between characters and the minute nuances that differentiate them. The approach can be further extended to all Tamil characters.
49 CFR 1104.4 - Attestation and verification.
Code of Federal Regulations, 2012 CFR
2012-10-01
... in ink by the practitioner or attorney, whose address should be stated. The signature of a... or attorney must be: (1) Signed in ink; (2) Accompanied by the signer's address; and (3) Verified, if...
49 CFR 1104.4 - Attestation and verification.
Code of Federal Regulations, 2014 CFR
2014-10-01
... in ink by the practitioner or attorney, whose address should be stated. The signature of a... or attorney must be: (1) Signed in ink; (2) Accompanied by the signer's address; and (3) Verified, if...
49 CFR 1104.4 - Attestation and verification.
Code of Federal Regulations, 2011 CFR
2011-10-01
... in ink by the practitioner or attorney, whose address should be stated. The signature of a... or attorney must be: (1) Signed in ink; (2) Accompanied by the signer's address; and (3) Verified, if...
49 CFR 1104.4 - Attestation and verification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... in ink by the practitioner or attorney, whose address should be stated. The signature of a... or attorney must be: (1) Signed in ink; (2) Accompanied by the signer's address; and (3) Verified, if...
49 CFR 1104.4 - Attestation and verification.
Code of Federal Regulations, 2013 CFR
2013-10-01
... in ink by the practitioner or attorney, whose address should be stated. The signature of a... or attorney must be: (1) Signed in ink; (2) Accompanied by the signer's address; and (3) Verified, if...
Velocity-image model for online signature verification.
Khan, Mohammad A U; Niazi, Muhammad Khalid Khan; Khan, Muhammad Aurangzeb
2006-11-01
In general, online signature capturing devices provide outputs in the form of shape and velocity signals. In the past, strokes have been extracted by tracking the minima of the velocity signal. However, the resulting strokes are larger and more complicated in shape, which makes the subsequent job of generating a discriminative template difficult. We propose a new stroke-based algorithm that splits the velocity signal into various bands. Based on these bands, strokes are extracted which are smaller and simpler in nature. Training of our proposed system revealed that the low- and high-velocity bands of the signal are unstable, whereas the medium-velocity band can be used for discrimination purposes. Euclidean distances of strokes extracted on the basis of the medium-velocity band are used for verification. The experiments conducted show an improvement in the discriminative capability of the proposed stroke-based system.
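A minimal sketch of the band-splitting idea: compute pen speed from the captured samples, keep the medium-velocity band, and return its index runs as candidate strokes. The quantile thresholds are illustrative, not the paper's band definitions.

    import numpy as np

    def medium_velocity_strokes(x, y, t, low_q=0.33, high_q=0.66):
        """Index runs of samples whose speed falls in the medium band."""
        v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
        lo, hi = np.quantile(v, [low_q, high_q])
        in_band = (v >= lo) & (v <= hi)
        strokes, run = [], []
        for i, ok in enumerate(in_band):
            if ok:
                run.append(i)
            elif run:
                strokes.append(run)
                run = []
        if run:
            strokes.append(run)
        return strokes

    t = np.linspace(0, 2, 200)
    x, y = np.cos(4 * np.pi * t), np.sin(6 * np.pi * t)   # toy pen trajectory
    print(len(medium_velocity_strokes(x, y, t)), "medium-velocity strokes")

Verification would then compare Euclidean distances between corresponding medium-band strokes of a questioned signature and the enrolled templates.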
NASA Astrophysics Data System (ADS)
Pérez-Cabré, Elisabet; Millán, María S.; Javidi, Bahram
2006-09-01
Verification of a piece of information and/or authentication of a given object or person are common operations carried out by automatic security systems that can be applied, for instance, to control entrance to restricted areas, access to public buildings, and identification of cardholders. The vulnerability of such security systems may depend on how easily the information used as a piece of identification for verification and authentication can be counterfeited. To protect data against tampering, the signature that identifies an object is usually encrypted to prevent easy recognition at human sight and easy reproduction using conventional imaging or scanning devices. To make counterfeiting even more difficult, we propose to combine data from the visible and near infrared (NIR) spectral bands. By doing this, neither the visible content nor the NIR data by themselves are sufficient to allow signature recognition and, thus, identification of a given object. Only the appropriate combination of both signals permits satisfactory authentication. In addition, the resulting signature is encrypted following a fully-phase encryption technique and the obtained complex-amplitude distribution is encoded on an ID tag. Spatial multiplexing of the encrypted signature allows us to build a distortion-invariant ID tag, so that remote authentication can be achieved even if the tag is captured under rotation or at different distances. We also explore the possibility of using partial information of the encrypted signature to simplify the ID tag design.
Cryptographic framework for document-objects resulting from multiparty collaborative transactions.
Goh, A
2000-01-01
Multiparty transactional frameworks, e.g. Electronic Data Interchange (EDI) or Health Level 7 (HL7), often result in composite documents that can be accurately modelled using hyperlinked document-objects. The structural complexity arising from multiauthor involvement and transaction-specific sequencing would be poorly handled by conventional digital signature schemes based on a single evaluation of a one-way hash function and asymmetric cryptography. In this paper we outline the generation of structure-specific authentication hash-trees for the authentication of transactional document-objects, followed by asymmetric signature generation on the hash-tree value. Server-side multi-client signature verification would probably constitute the single most compute-intensive task, hence the motivation for our use of the Rabin signature protocol, which results in significantly reduced verification workloads compared to the more commonly applied Rivest-Shamir-Adleman (RSA) protocol. Data privacy is handled via symmetric encryption of message traffic using session-specific keys obtained through key-negotiation mechanisms based on discrete-logarithm cryptography. Individual client-to-server channels can be secured using a double key-pair variation of Diffie-Hellman (DH) key negotiation, which also enables bidirectional node authentication. The reciprocal server-to-client multicast channel is secured through Burmester-Desmedt (BD) key negotiation, which enjoys significant advantages over the usual multiparty extensions of the DH protocol. The implementation of hash-tree signatures and bi/multidirectional key negotiation results in a comprehensive cryptographic framework for multiparty document-objects satisfying both authentication and data privacy requirements.
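A short sketch of the hash-tree construction at the core of such a framework: leaf hashes of the document-objects are combined pairwise up to a single root, and the asymmetric signature (Rabin in the paper) is then computed once on that root. SHA-256 is an illustrative hash choice; the signing step itself is omitted.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def hash_tree_root(document_objects):
        """Combine leaf hashes pairwise until one root value remains."""
        level = [h(doc) for doc in document_objects]
        while len(level) > 1:
            if len(level) % 2:                  # duplicate the last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    objects = [b"EDI segment 1", b"HL7 message", b"attachment.pdf"]
    print(hash_tree_root(objects).hex())        # value to be signed asymmetrically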
A Prototype of Mathematical Treatment of Pen Pressure Data for Signature Verification.
Li, Chi-Keung; Wong, Siu-Kay; Chim, Lai-Chu Joyce
2018-01-01
A prototype using simple mathematical treatment of the pen pressure data recorded by a digital pen movement recording device was derived. In this study, a total of 48 sets of signature and initial specimens were collected. Pearson's correlation coefficient was used to compare the data of the pen pressure patterns. From the 820 pair comparisons of the 48 sets of genuine signatures, a high degree of matching was found in which 95.4% (782 pairs) and 80% (656 pairs) had rPA > 0.7 and rPA > 0.8, respectively. In the comparison of the 23 forged signatures with their corresponding control signatures, 20 of them (89.2% of pairs) had rPA values < 0.6, showing a lower degree of matching when compared with the results of the genuine signatures. The prototype could be used as a complementary technique to improve the objectivity of signature examination and also has a good potential to be developed as a tool for automated signature identification. © 2017 American Academy of Forensic Sciences.
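The core comparison reduces to a Pearson correlation between two aligned pen-pressure traces, as in this sketch; the toy data stand in for recorded traces, which would be resampled to a common length before comparison.

    import numpy as np
    from scipy.stats import pearsonr

    p_known = np.sin(np.linspace(0, 3 * np.pi, 200)) + 1.5       # reference pressure trace
    p_questioned = p_known + np.random.normal(0, 0.1, 200)       # genuine-like repetition

    r_pa, _ = pearsonr(p_known, p_questioned)
    print(f"rPA = {r_pa:.3f}", "(high rPA is consistent with a genuine signature)")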
Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA
2011-01-25
A recursive verification protocol reduces the time variance due to network delays by keeping the subject node at most one hop from the verifier node, providing an efficient way to test wireless sensor nodes. Since the software signatures are time based, recursive testing gives a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, which in turn checks its neighbor, continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, testing a node twice, or not at all, can be avoided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.; Cramer, Nicholas O.; Benz, Jacob M.
While international nonproliferation and arms control verification capabilities have their foundations in physical and chemical sensors, state declarations, and on-site inspections, verification experts are beginning to consider the importance of open source data to complement and support traditional means of verification. One of those new, and increasingly expanding, sources of open source information is social media, which can be ingested and understood through social media analytics (SMA). Pacific Northwest National Laboratory (PNNL) is conducting research to further our ability to identify, visualize, and fuse social media data to support nonproliferation and arms control treaty verification efforts. This paper will describe our preliminary research to examine social media signatures of nonproliferation or arms control proxy events. We will describe the development of our preliminary nonproliferation and arms control proxy events, outline our initial findings, and propose ideas for future work.
Optical detection of random features for high security applications
NASA Astrophysics Data System (ADS)
Haist, T.; Tiziani, H. J.
1998-02-01
Optical detection of random features, in combination with digital signatures based on public-key codes, for recognizing counterfeit objects is discussed. Objects are protected against counterfeiting without applying expensive production techniques. Verification is done off-line by optical means without a central authority. The method is applied to protecting banknotes, and experimental results for this application are presented. The method is also applicable to identity verification of a credit- or chip-card holder.
Handwritten digit recognition using HMM and PSO based on strokes
NASA Astrophysics Data System (ADS)
Yan, Liao; Jia, Zhenhong; Yang, Jie; Pang, Shaoning
2010-07-01
A new method for handwritten digit recognition based on hidden Markov models (HMM) and particle swarm optimization (PSO) is proposed. The method defines 24 directional strokes, which compensates for the sensitivity of traditional methods to the choice of starting point and also reduces the ambiguity caused by hand shake. The excellent global convergence of PSO improves the probability of finding the global optimum and avoids local minima. Experimental results demonstrate that, compared with traditional methods, the proposed method improves the recognition rate of handwritten digits.
Interactive-predictive detection of handwritten text blocks
NASA Astrophysics Data System (ADS)
Ramos Terrades, O.; Serrano, N.; Gordó, A.; Valveny, E.; Juan, A.
2010-01-01
A method for text block detection is introduced for old handwritten documents. The proposed method takes advantage of sequential book structure, using layout information from pages previously transcribed. This glance at the past is used to predict the position of text blocks in the current page with the help of conventional layout analysis methods. The method is integrated into the GIDOC prototype: a first attempt to provide integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. Results are given for a transcription task on a 764-page Spanish manuscript from 1891.
Handwritten document age classification based on handwriting styles
NASA Astrophysics Data System (ADS)
Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu
2012-01-01
Handwriting styles are constantly changing over time. We approach the novel problem of estimating the approximate age of historical handwritten documents from their handwriting styles. Such a system would have many applications in handwritten document processing engines, where specialized processing techniques could be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using topic models and to apply a classifier to the learned weights in order to estimate the approximate age of the documents. We present a comparison of different distance metrics, such as Euclidean distance and Hellinger distance, within this application.
Handwritten Word Recognition Using Multi-view Analysis
NASA Astrophysics Data System (ADS)
de Oliveira, J. J.; de A. Freitas, C. O.; de Carvalho, J. M.; Sabourin, R.
This paper contributes to the problem of efficiently recognizing handwritten words from a limited-size lexicon. A multiple classifier system has been developed that analyzes the words at three different approximation levels, in order to obtain a computational approach inspired by the human reading process. For each approximation level, a three-module architecture composed of a zoning mechanism (pseudo-segmenter), a feature extractor and a classifier is defined. The proposed application is the recognition of the Portuguese handwritten names of the months, for which a best recognition rate of 97.7% was obtained using classifier combination.
House officer procedure documentation using a personal digital assistant: a longitudinal study
Bird, Steven B; Lane, David R
2006-01-01
Background Personal Digital Assistants (PDAs) have been integrated into daily practice for many emergency physicians and house officers. Few objective data exist that quantify the effect of PDAs on documentation. The objective of this study was to determine whether use of a PDA would improve emergency medicine house officer documentation of procedures and patient resuscitations. Methods Twelve first-year Emergency Medicine (EM) residents were provided a Palm V (Palm, Inc., Santa Clara, California, USA) PDA. A customizable patient procedure and encounter program was constructed and loaded into each PDA. Residents were instructed to enter information on patients who had any of 20 procedures performed, were deemed clinically unstable, or on whom follow-up was obtained. These data were downloaded to the residency coordinator's desktop computer on a weekly basis for 36 months. The mean number of procedures and encounters performed per resident over a three-year period was then compared with that of 12 historical controls from a previous residency class that had recorded the same information using a handwritten card system for 36 months. Means of both groups were compared using a two-tailed Student's t test with a Bonferroni correction for multiple comparisons. One hundred randomly selected entries from both the PDA and handwritten groups were reviewed for completeness. Another group of 11 residents who had used both handwritten and PDA procedure logs for one year each were asked to complete a questionnaire regarding their satisfaction with the PDA system. Results Mean documentation of three procedures significantly increased in the PDA vs handwritten groups: conscious sedation 24.0 vs 0.03 (p = 0.001); thoracentesis 3.0 vs 0.0 (p = 0.001); and ED ultrasound 24.5 vs 0.0 (p = 0.001). In the handwritten cohort, only the number of cardioversions/defibrillations (26.5 vs 11.5) was statistically increased (p = 0.001). Of the PDA entries, 100% were entered completely, compared to only 91% of the handwritten group, including 4% that were illegible. 10 of 11 questioned residents preferred the PDA procedure log to a handwritten log (mean ± SD Likert-scale score of 1.6 ± 0.9). Conclusion Overall use of a PDA did not significantly change EM resident procedure or patient resuscitation documentation when used over a three-year period. Statistically significant differences between the handwritten and PDA groups likely represent alterations in the standard of ED care over time. Residents overwhelmingly preferred the PDA procedure log to a handwritten log, and more entries are complete using the PDA. These favorable comparisons and the numerous other uses of PDAs may make them an attractive alternative for resident documentation. PMID:16438709
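The group comparison described can be reproduced along these lines with SciPy; the per-resident counts and the number of comparisons entering the Bonferroni correction are illustrative.

    from scipy.stats import ttest_ind

    # Illustrative per-resident documentation counts for one procedure type.
    pda = [24, 20, 28, 25, 22, 27, 23, 26, 21, 24, 25, 23]
    handwritten = [1, 0, 2, 0, 1, 0, 0, 3, 1, 0, 2, 1]

    n_comparisons = 20                      # one test per logged procedure type
    t, p = ttest_ind(pda, handwritten)
    print(f"t = {t:.2f}, raw p = {p:.2e}, "
          f"significant after Bonferroni: {p < 0.05 / n_comparisons}")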
Argon Collection And Purification For Proliferation Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achey, R.; Hunter, D.
2015-10-09
In order to determine whether a seismic event was a declared/undeclared underground nuclear weapon test, environmental samples must be taken and analyzed for signatures that are unique to a nuclear explosion. These signatures are either particles or gases. Particle samples are routinely taken and analyzed under the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) verification regime as well as by individual countries. Gas samples are analyzed for signature gases, especially radioactive xenon. Underground nuclear tests also produce radioactive argon, but that signature is not well monitored. A radioactive argon signature, along with other signatures, can more conclusively determine whether an event was a nuclear test. This project has developed capabilities for collecting and purifying argon samples for ultra-low-background proportional counting. SRNL has developed a continuous gas enrichment system that produces an output stream containing 97% argon from whole air using adsorbent separation technology (the flow diagram for the system is shown in the figure). The vacuum swing adsorption (VSA) enrichment system is easily scalable to produce ten liters or more of 97% argon within twelve hours. A gas chromatographic separation using a column of modified hydrogen mordenite molecular sieve has been developed that can further purify the sample to better than 99% purity after separation from the helium carrier gas. The combination of these concentration and purification systems has the capability of being used for a field-deployable system for collecting argon samples suitable for ultra-low-background proportional counting for detecting nuclear detonations under the On-Site Inspection program of the CTBTO verification regime. The technology also has applications for bulk argon separation from air for industrial purposes such as the semiconductor industry.
Simulation Detection in Handwritten Documents by Forensic Document Examiners.
Kam, Moshe; Abichandani, Pramod; Hewett, Tom
2015-07-01
This study documents the results of a controlled experiment designed to quantify the abilities of forensic document examiners (FDEs) and laypersons to detect simulations in handwritten documents. Nineteen professional FDEs and 26 laypersons (typical of a jury pool) were asked to inspect test packages that contained six (6) known handwritten documents written by the same person and two (2) questioned handwritten documents. Each questioned document was either written by the person who wrote the known documents, or written by a different person who tried to simulate the writing of the person who wrote the known documents. The error rates of the FDEs were smaller than those of the laypersons when detecting simulations in the questioned documents. Among other findings, the FDEs never labeled a questioned document that was written by the same person who wrote the known documents as "simulation." There was a statistically significant difference between the responses of the FDEs and laypersons for documents without simulations. © 2015 American Academy of Forensic Sciences.
Construction of language models for an handwritten mail reading system
NASA Astrophysics Data System (ADS)
Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle
2012-01-01
This paper presents a system for the recognition of unconstrained handwritten mail. The main part of this system is an HMM recognizer that uses trigraphs to model contextual information. The recognition system does not require any segmentation into words or characters and works directly at the line level. To take linguistic information into account and enhance performance, a language model is introduced. This language model is based on bigrams and is built from training document transcriptions only. Different experiments with various vocabulary sizes and language models have been conducted. Word error rate and perplexity values are compared to show the interest of specific language models fitted to the handwritten mail recognition task.
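A minimal sketch of such a bigram language model built from training transcriptions only; add-one smoothing is an illustrative choice.

    from collections import Counter

    def train_bigram_lm(lines):
        """Bigram probabilities with add-one smoothing from training transcriptions."""
        unigrams, bigrams = Counter(), Counter()
        for line in lines:
            words = ["<s>"] + line.split() + ["</s>"]
            unigrams.update(words[:-1])          # count contexts (everything but </s>)
            bigrams.update(zip(words, words[1:]))
        vocab = len(unigrams)
        def prob(w1, w2):
            return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
        return prob

    prob = train_bigram_lm(["monsieur je vous remercie", "je vous prie monsieur"])
    print(prob("je", "vous"), prob("vous", "remercie"))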
Robust recognition of handwritten numerals based on dual cooperative network
NASA Technical Reports Server (NTRS)
Lee, Sukhan; Choi, Yeongwoo
1992-01-01
An approach to robust recognition of handwritten numerals using two parallel operating networks is presented. The first network uses inputs in Cartesian coordinates, and the second network uses the same inputs transformed into polar coordinates. We describe how the proposed approach achieves robustness to local and global variations of input numerals by handling inputs both in Cartesian coordinates and in their polar transform. The required network structures and their learning scheme are discussed. Experimental results show that by tracking only a small number of distinctive features for each teaching numeral in each coordinate system, the proposed system can provide robust recognition of handwritten numerals.
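A sketch of the coordinate transformation feeding the second network: the input image is resampled onto an (r, theta) grid about the image centre. Grid sizes and nearest-neighbour lookup are illustrative choices.

    import numpy as np

    def to_polar(img, n_r=32, n_theta=64):
        """Nearest-neighbour resampling of a grayscale image onto a polar grid."""
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r = np.linspace(0, min(cy, cx), n_r)
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        rr, tt = np.meshgrid(r, theta, indexing="ij")
        ys = np.clip(np.round(cy + rr * np.sin(tt)), 0, h - 1).astype(int)
        xs = np.clip(np.round(cx + rr * np.cos(tt)), 0, w - 1).astype(int)
        return img[ys, xs]

    digit = np.zeros((28, 28))
    digit[4:24, 13:15] = 1.0                 # toy vertical stroke
    print(to_polar(digit).shape)             # (32, 64) polar view for the second network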
Biometric verification in dynamic writing
NASA Astrophysics Data System (ADS)
George, Susan E.
2002-03-01
Pen-tablet devices capable of capturing the dynamics of writing record temporal and pressure information as well as the spatial pattern. This paper explores biometric verification based upon the dynamics of writing, where writers are distinguished not on the basis of what they write (i.e. the signature), but of how they write. We have collected samples of dynamic writing from 38 Chinese writers. Each writer was asked to provide 10 copies of a paragraph of text and the same number of signature samples. From these data we extracted stroke-based primitives from the sentence data, utilizing pen-up/down information and heuristic rules about the shape of the character. The x, y and pressure values of each primitive were interpolated onto an even temporal range based upon a 20 msec sampling rate. We applied the Daubechies 1 wavelet transform to the x signal, y signal and pressure signal, using the coefficients as inputs to a multi-layer perceptron trained with back-propagation on the sentence data. We found a sensitivity of 0.977 and a specificity of 0.990 recognizing writers from test primitives extracted from sentence data, and measures of 0.916 and 0.961, respectively, from test primitives extracted from signature data.
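The feature extraction step can be sketched with the PyWavelets package: a Daubechies-1 ('db1') decomposition of each evenly resampled signal, with the coefficients concatenated into one input vector for the perceptron. The decomposition level is an assumption.

    import numpy as np
    import pywt

    # A stroke primitive resampled onto an even 20 ms grid: x(t), y(t), pressure(t).
    t = np.linspace(0, 1, 64)
    x, y, p = np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.5 + 0.5 * t

    features = np.concatenate([
        np.concatenate(pywt.wavedec(sig, "db1", level=3)) for sig in (x, y, p)
    ])
    print(features.shape)    # one fixed-length MLP input vector per primitive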
Neural Networks for Handwritten English Alphabet Recognition
NASA Astrophysics Data System (ADS)
Perwej, Yusuf; Chaturvedi, Ashish
2011-04-01
This paper demonstrates the use of neural networks for developing a system that can recognize handwritten English alphabets. In this system, each English alphabet is represented by binary values that are used as input to a simple feature extraction system, whose output is fed to our neural network system.
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
ERIC Educational Resources Information Center
Barnhart, Anthony S.; Goldinger, Stephen D.
2010-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…
Li, Ming; Jia, Bin; Ding, Liying; Hong, Feng; Ouyang, Yongzhong; Chen, Rui; Zhou, Shumin; Chen, Huanwen; Fang, Xiang
2013-09-01
Molecular images of documents were obtained by sequentially scanning the surface of the document using desorption atmospheric pressure chemical ionization mass spectrometry (DAPCI-MS), operated in either a gasless, solvent-free or methanol vapor-assisted mode. The decay process of the ink used for handwriting was monitored by following the signal intensities recorded by DAPCI-MS. Handwriting made using four types of ink on four kinds of paper surfaces was tested. By studying the dynamic decay of the inks, DAPCI-MS imaging differentiated a 10-minute-old sample from two 4-hour-old samples. Non-destructive forensic analysis of forged signatures, either handwritten or computer-assisted, was achieved according to differences in the contours of the DAPCI images, attributed to the writing strength characteristic of different writers. Distinguishing the order of writing/stamping on documents and detecting illegal printings were accomplished with a spatial resolution of about 140 µm. A program written in Matlab® was developed to facilitate the visualization of the similarity between signature images obtained by DAPCI-MS. The experimental results show that DAPCI-MS imaging provides rich information at the molecular level and can thus be used for reliable document analysis in forensic applications. © 2013 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons, Ltd.
Determining the Value of Handwritten Comments within Work Orders
ERIC Educational Resources Information Center
Thombs, Daniel
2010-01-01
In the workplace many work orders are handwritten on paper rather than recorded in a digital format. Despite being archived, these documents are neither referenced nor analyzed after their creation. Tacit knowledge gathered though employee documentation is generally considered beneficial, but only if it can be easily gathered and processed. …
Lee, S; Pan, J J
1996-01-01
This paper presents a new approach to the representation and recognition of handwritten numerals. The approach first transforms a two-dimensional (2-D) spatial representation of a numeral into a three-dimensional (3-D) spatio-temporal representation by identifying the tracing sequence based on a set of heuristic rules acting as transformation operators. A multiresolution critical-point segmentation method is then proposed to extract local feature points at varying degrees of scale and coarseness. A new neural network architecture, referred to as the radial-basis competitive and cooperative network (RCCN), is presented especially for handwritten numeral recognition. RCCN is a globally competitive and locally cooperative network capable of self-organizing its hidden units to progressively achieve the desired network performance, and it functions as a universal approximator of arbitrary input-output mappings. Three types of RCCNs are explored: input-space RCCN (IRCCN), output-space RCCN (ORCCN), and bidirectional RCCN (BRCCN). Experiments on handwritten ZIP code numerals acquired by the U.S. Postal Service indicated that the proposed method is robust to variations, deformations, transformations, and corruption, achieving a recognition rate of about 97%.
Font generation of personal handwritten Chinese characters
NASA Astrophysics Data System (ADS)
Lin, Jeng-Wei; Wang, Chih-Yin; Ting, Chao-Lung; Chang, Ray-I.
2014-01-01
Today, digital multimedia messages draw more and more attention thanks to the achievements of computer and network technology. Nevertheless, text is still the most popular medium for people to communicate with others. Many fonts have been developed so that product designers can choose unique fonts to demonstrate their ideas gracefully. It is commonly believed that handwriting can reflect one's personality, emotion, feeling, education level, and so on. This is especially true in Chinese calligraphy. However, it is not easy for ordinary users to create a custom font from their personal handwriting. In this study, we performed a process reengineering of font generation. We present a new method to create fonts in batch mode: rather than creating glyphs of characters one by one according to their codepoints, users create glyphs incrementally in an on-demand manner. A Java implementation was developed that reads a document image of user-handwritten Chinese characters and produces a vector font of these handwritten characters. Preliminary experimental results show that the proposed method can help ordinary users create their personal handwritten fonts easily and quickly.
Arabic writer identification based on diacritic's features
NASA Astrophysics Data System (ADS)
Maliki, Makki; Al-Jawad, Naseer; Jassim, Sabah A.
2012-06-01
Natural languages like Arabic, Kurdish, Farsi (Persian), and Urdu have many features that distinguish them from languages written in the Latin script. One of these important features is diacritics. These diacritics are classified as compulsory, like the dots used to identify/differentiate letters, and optional, like the short vowels used to emphasize consonants. Most indigenous and well-trained writers often do not use some or all of the second class of diacritics, and expert readers can infer their presence from the context of the written text. In this paper, we investigate the use of diacritic shapes and other characteristics as parameters of feature vectors for Arabic writer identification/verification. Segmentation techniques are used to extract the diacritics-based feature vectors from examples of Arabic handwritten text. The results of an evaluation test carried out on an in-house database of 50 writers are presented, and the viability of using diacritics for writer recognition is demonstrated.
Toward Automatic Verification of Goal-Oriented Flow Simulations
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2014-01-01
We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
Comparing Postsecondary Marketing Student Performance on Computer-Based and Handwritten Essay Tests
ERIC Educational Resources Information Center
Truell, Allen D.; Alexander, Melody W.; Davis, Rodney E.
2004-01-01
The purpose of this study was to determine if there were differences in postsecondary marketing student performance on essay tests based on test format (i.e., computer-based or handwritten). Specifically, the variables of performance, test completion time, and gender were explored for differences based on essay test format. Results of the study…
Handwritten Newspapers on the Iowa Frontier, 1844-54.
ERIC Educational Resources Information Center
Atwood, Roy Alden
Journalism on the agricultural frontier of the Old Northwest territory of the United States was shaped by a variety of cultural forces and environmental factors and took on diverse forms. Bridging the gap between the two cultural forms of written correspondence and printed news was a third form: the handwritten newspaper. Between 1844 and 1854…
Judging the Emergent Reading Abilities of Kindergarten Children.
ERIC Educational Resources Information Center
Otto, Beverly; Sulzby, Elizabeth
In 1981, a scale, the Emergent Reading Ability Judgments for Dictated and Handwritten Stories, was developed for use in assessing how close a child was to reading independently based upon the nature of the child's attempts to read from dictated and handwritten stories. A study was conducted to apply the scale to stories from a new sample of…
Spatial Analysis of Handwritten Texts as a Marker of Cognitive Control.
Crespo, Y; Soriano, M F; Iglesias-Parro, S; Aznarte, J I; Ibáñez-Molina, A J
2017-12-01
We explore the idea that cognitive demands of the handwriting would influence the degree of automaticity of the handwriting process, which in turn would affect the geometric parameters of texts. We compared the heterogeneity of handwritten texts in tasks with different cognitive demands; the heterogeneity of texts was analyzed with lacunarity, a measure of geometrical invariance. In Experiment 1, we asked participants to perform two tasks that varied in cognitive demands: transcription and exposition about an autobiographical episode. Lacunarity was significantly lower in transcription. In Experiment 2, we compared a veridical and a fictitious version of a personal event. Lacunarity was lower in veridical texts. We contend that differences in lacunarity of handwritten texts reveal the degree of automaticity in handwriting.
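For reference, a gliding-box lacunarity of a binarized text image can be computed as in this sketch; the box size and toy input are illustrative.

    import numpy as np

    def lacunarity(binary_img, box):
        """Gliding-box lacunarity: E[M^2] / E[M]^2 of the box-mass distribution."""
        h, w = binary_img.shape
        masses = np.array([
            binary_img[i:i + box, j:j + box].sum()
            for i in range(h - box + 1)
            for j in range(w - box + 1)
        ], dtype=float)
        return masses.var() / masses.mean() ** 2 + 1.0

    img = (np.random.rand(60, 120) < 0.1).astype(float)   # toy binarized text patch
    print(lacunarity(img, box=8))

Higher lacunarity indicates a more heterogeneous, gappy ink distribution, the property compared across writing tasks above.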
Text-line extraction in handwritten Chinese documents based on an energy minimization framework.
Koo, Hyung Il; Cho, Nam Ik
2012-03-01
Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.
Recognition of degraded handwritten digits using dynamic Bayesian networks
NASA Astrophysics Data System (ADS)
Likforman-Sulem, Laurence; Sigelle, Marc
2007-01-01
We investigate in this paper the application of dynamic Bayesian networks (DBNs) to the recognition of handwritten digits. The main idea is to couple two separate HMMs into various architectures. First, a vertical HMM and a horizontal HMM are built, observing the evolving streams of image columns and image rows respectively. Then, two coupled architectures are proposed to model interactions between these two streams and to capture the 2D nature of character images. Experiments performed on the MNIST handwritten digit database show that coupled architectures yield better recognition performance than non-coupled ones. Additional experiments conducted on artificially degraded (broken) characters demonstrate that coupled architectures also cope better with such degradation than non-coupled architectures and discriminative methods such as SVMs.
A distinguishing method of printed and handwritten legal amount on Chinese bank check
NASA Astrophysics Data System (ADS)
Zhu, Ningbo; Lou, Zhen; Yang, Jingyu
2003-09-01
When carrying out optical Chinese character recognition, it is necessary to distinguish between printed and handwritten characters at an early phase, because the methods for recognizing these two types of characters differ greatly. In this paper, we propose a method for removing seals, together with criteria for judging whether they should be removed. Meanwhile, an approach to cleaning up scattered noise fragments after image segmentation is presented. Four sets of classification features that discriminate between printed and handwritten characters are adopted. The proposed approach was applied to an automatic check processing system and tested on about 9031 checks. The recognition rate is more than 99.5%.
Network-based Arbitrated Quantum Signature Scheme with Graph State
NASA Astrophysics Data System (ADS)
Ma, Hongling; Li, Fei; Mao, Ningyi; Wang, Yijun; Guo, Ying
2017-08-01
Implementing an arbitrated quantum signature (QAS) through complex networks is an interesting cryptographic technology in the literature. In this paper, we propose an arbitrated quantum signature for multi-user networks whose topological structures are established by an encoded graph state. The deterministic transmission of the shared keys is enabled by appropriate stabilizers performed on the graph state. The implementation of this scheme depends on the deterministic distribution of the multi-user shared graph state, on which the encoded message can be processed in the signing and verifying phases. Four parties are involved: the signatory Alice, the verifier Bob, the arbitrator Trent, and a dealer who assists the legal participants in signature generation and verification. Security is guaranteed by the entanglement of the encoded graph state, which is cooperatively prepared by the legal participants in complex quantum networks.
Robotic Spent Fuel Monitoring – It is time to improve old approaches and old techniques!
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobin, Stephen Joseph; Dasari, Venkateswara Rao; Trellue, Holly Renee
This report describes various approaches and techniques associated with robotic spent fuel monitoring. The purpose of this description is to improve the quality of measured signatures, reduce the inspection burden on the IAEA, and provide frequent verification.
Detection of hail signatures from single-polarization C-band radar reflectivity
NASA Astrophysics Data System (ADS)
Kunz, Michael; Kugel, Petra I. S.
2015-02-01
Five different criteria that estimate hail signatures from single-polarization radar data are statistically evaluated over a 15-year period by categorical verification against loss data provided by a building insurance company. The criteria consider different levels or thresholds of radar reflectivity, some of them complemented by estimates of the 0 °C level or cloud top temperature. Applied to reflectivity data from a single C-band radar in southwest Germany, it is found that all criteria are able to reproduce most of the past damage-causing hail events. However, the criteria substantially overestimate hail occurrence by up to 80%, mainly due to the verification process using damage data. Best results in terms of highest Heidke Skill Score HSS or Critical Success Index CSI are obtained for the Hail Detection Algorithm (HDA) and the Probability of Severe Hail (POSH). Radar-derived hail probability shows a high spatial variability with a maximum on the lee side of the Black Forest mountains and a minimum in the broad Rhine valley.
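Both skill scores are computed from a 2x2 contingency table of radar hail warnings against observed (insured) damage; the counts below are illustrative.

    def csi(hits, false_alarms, misses):
        """Critical Success Index."""
        return hits / (hits + false_alarms + misses)

    def hss(hits, false_alarms, misses, correct_negatives):
        """Heidke Skill Score from a 2x2 contingency table."""
        a, b, c, d = hits, false_alarms, misses, correct_negatives
        return 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

    print(csi(42, 170, 8), hss(42, 170, 8, 3500))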
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henry, Michael J.; Cramer, Nicholas O.; Benz, Jacob M.
Traditional arms control treaty verification activities typically involve a combination of technical measurements via physical and chemical sensors, state declarations, political agreements, and on-site inspections involving international subject matter experts. However, the ubiquity of the internet, and the electronic sharing of data that it enables, has made available a wealth of open source information with the potential to benefit verification efforts. Open source information is already being used by organizations such as the International Atomic Energy Agency to support the verification of state-declared information, prepare inspectors for in-field activities, and to maintain situational awareness. The recent explosion in social media use has opened new doors to exploring the attitudes, moods, and activities around a given topic. Social media platforms, such as Twitter, Facebook, and YouTube, offer an opportunity for individuals, as well as institutions, to participate in a global conversation at minimal cost. Social media data can also provide a more data-rich environment, with text data being augmented with images, videos, and location data. The research described in this paper investigates the utility of applying social media signatures as potential arms control and nonproliferation treaty verification tools and technologies, as determined through a series of case studies. The treaty relevant events that these case studies touch upon include detection of undeclared facilities or activities, determination of unknown events recorded by the International Monitoring System (IMS), and the global media response to the occurrence of an Indian missile launch. The case studies examine how social media can be used to fill an information gap and provide additional confidence to a verification activity. The case studies represent, either directly or through a proxy, instances where social media information may be available that could potentially augment the evaluation of an event. The goal of this paper is to instigate a discussion within the verification community as to where and how social media can be effectively utilized to complement and enhance traditional treaty verification efforts. In addition, this paper seeks to identify areas of future research and development necessary to adapt social media analytic tools and techniques, and to form the seed for social media analytics to aid and inform arms control and nonproliferation policymakers and analysts. While social media analysis (as well as open source analysis as a whole) will not ever be able to replace traditional arms control verification measures, they do supply unique signatures that can augment existing analysis.
ERIC Educational Resources Information Center
Petko, Dominik; Egger, Nives; Graber, Marc
2014-01-01
The goal of this study was to compare how weblogs and traditional handwritten reflective learning protocols compare regarding the use of cognitive and metacognitive strategies for knowledge acquisition as well as learning gains in secondary school students. The study used a quasi-experimental control group design with repeated measurements…
A Dynamic Bayesian Network Based Structural Learning towards Automated Handwritten Digit Recognition
NASA Astrophysics Data System (ADS)
Pauplin, Olivier; Jiang, Jianmin
Pattern recognition using Dynamic Bayesian Networks (DBNs) is currently a growing area of study. In this paper, we present DBN models trained for classification of handwritten digit characters. The structure of these models is partly inferred from the training data of each class of digit before performing parameter learning. Classification results are presented for the four described models.
Rasmussen, Luke V; Peissig, Peggy L; McCarty, Catherine A; Starren, Justin
2012-06-01
Although the penetration of electronic health records is increasing rapidly, much of the historical medical record is only available in handwritten notes and forms, which require labor-intensive, human chart abstraction for some clinical research. The few previous studies on automated extraction of data from these handwritten notes have focused on monolithic, custom-developed recognition systems or third-party systems that require proprietary forms. We present an optical character recognition processing pipeline, which leverages the capabilities of existing third-party optical character recognition engines, and provides the flexibility offered by a modular custom-developed system. The system was configured and run on a selected set of form fields extracted from a corpus of handwritten ophthalmology forms. The processing pipeline allowed multiple configurations to be run, with the optimal configuration consisting of the Nuance and LEADTOOLS engines running in parallel with a positive predictive value of 94.6% and a sensitivity of 13.5%. While limitations exist, preliminary experience from this project yielded insights on the generalizability and applicability of integrating multiple, inexpensive general-purpose third-party optical character recognition engines in a modular pipeline.
Peissig, Peggy L; McCarty, Catherine A; Starren, Justin
2011-01-01
Background Although the penetration of electronic health records is increasing rapidly, much of the historical medical record is only available in handwritten notes and forms, which require labor-intensive, human chart abstraction for some clinical research. The few previous studies on automated extraction of data from these handwritten notes have focused on monolithic, custom-developed recognition systems or third-party systems that require proprietary forms. Methods We present an optical character recognition processing pipeline, which leverages the capabilities of existing third-party optical character recognition engines, and provides the flexibility offered by a modular custom-developed system. The system was configured and run on a selected set of form fields extracted from a corpus of handwritten ophthalmology forms. Observations The processing pipeline allowed multiple configurations to be run, with the optimal configuration consisting of the Nuance and LEADTOOLS engines running in parallel with a positive predictive value of 94.6% and a sensitivity of 13.5%. Discussion While limitations exist, preliminary experience from this project yielded insights on the generalizability and applicability of integrating multiple, inexpensive general-purpose third-party optical character recognition engines in a modular pipeline. PMID:21890871
A survey of user acceptance of electronic patient anesthesia records
Jin, Hyun Seung; Lee, Suk Young; Jeong, Hui Yeon; Choi, Soo Joo; Lee, Hye Won
2012-01-01
Background An anesthesia information management system (AIMS), although not widely used in Korea, will eventually replace handwritten records. This hospital began using AIMS in April 2010. The purpose of this study was to evaluate users' attitudes concerning AIMS and to compare them with manual documentation in the operating room (OR). Methods A structured questionnaire focused on satisfaction with electronic anesthetic records and comparison with handwritten anesthesia records was administered to anesthesiologists, trainees, and nurses during February 2011 and the responses were collected anonymously during March 2011. Results A total of 28 anesthesiologists, 27 trainees, and 47 nurses responded to this survey. Most participants involved in this survey were satisfied with AIMS (96.3%, 82.2%, and 89.3% of trainees, anesthesiologists, and nurses, respectively) and preferred AIMS over handwritten anesthesia records in 96.3%, 71.4%, and 97.9% of trainees, anesthesiologists, and nurses, respectively. However, there were also criticisms of AIMS related to user-discomfort during short, simple or emergency surgeries, doubtful legal status, and inconvenient placement of the system. Conclusions Overall, most of the anesthetic practitioners in this hospital quickly accepted and prefer AIMS over the handwritten anesthetic records in the OR. PMID:22558502
Biswas, Mithun; Islam, Rafiqul; Shom, Gautam Kumar; Shopon, Md; Mohammed, Nabeel; Momen, Sifat; Abedin, Anowarul
2017-06-01
BanglaLekha-Isolated, a Bangla handwritten isolated character dataset, is presented in this article. This dataset contains 84 different characters comprising 50 Bangla basic characters, 10 Bangla numerals and 24 selected compound characters. 2000 handwriting samples for each of the 84 characters were collected, digitized and pre-processed. After discarding mistakes and scribbles, 166,105 handwritten character images were included in the final dataset. The dataset also includes labels indicating the age and the gender of the subjects from whom the samples were collected. This dataset could be used not only for optical handwriting recognition research but also to explore the influence of gender and age on handwriting. The dataset is publicly available at https://data.mendeley.com/datasets/hf6sf8zrkc/2.
Container weld identification using portable laser scanners
NASA Astrophysics Data System (ADS)
Taddei, Pierluigi; Boström, Gunnar; Puig, David; Kravtchenko, Victor; Sequeira, Vítor
2015-03-01
Identification and integrity verification of sealed containers for security applications can be obtained by employing noninvasive portable optical systems. We present a portable laser range imaging system capable of identifying welds, a byproduct of a container's physical sealing, with micrometer accuracy. It is based on the assumption that each weld has a unique three-dimensional (3-D) structure which cannot be copied or forged. We process the 3-D surface to generate a normalized depth map which is invariant to mechanical alignment errors and that is used to build compact signatures representing the weld. A weld is identified by performing cross correlations of its signature against a set of known signatures. The system has been tested on realistic datasets, containing hundreds of welds, yielding no false positives or false negatives and thus showing the robustness of the system and the validity of the chosen signature.
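The matching step above reduces to correlating compact signatures derived from a normalized depth map. A minimal Python sketch of that idea, not the authors' implementation: the row-profile signature and all function names are illustrative assumptions.

import numpy as np

def normalize(depth_map):
    # Zero-mean, unit-norm so the signature is invariant to the offset and
    # gain that mechanical alignment errors introduce.
    d = depth_map - depth_map.mean()
    return d / (np.linalg.norm(d) + 1e-12)

def weld_signature(depth_map):
    # Compact 1-D signature: the row-wise mean profile of the normalized map.
    return normalize(depth_map).mean(axis=1)

def match_score(sig_a, sig_b):
    # Peak of the cross-correlation between two signatures.
    return np.correlate(sig_a, sig_b, mode="full").max()

def identify(query_sig, known_sigs):
    # Return the best-matching known weld and its score.
    return max(((name, match_score(query_sig, s)) for name, s in known_sigs.items()),
               key=lambda kv: kv[1])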
Joshi, Anuradha; Buch, Jatin; Kothari, Nitin; Shah, Nishal
2016-06-01
Prescription order is an important therapeutic transaction between physician and patient. A good quality prescription is an extremely important factor for minimizing errors in dispensing medication and it should adhere to guidelines for prescription writing for the benefit of the patient. To evaluate the frequency and type of prescription errors in outpatient prescriptions and find whether prescription writing abides by WHO standards of prescription writing. A cross-sectional observational study was conducted at Anand city. Allopathic private practitioners of different specialities practising at Anand city were included in the study. Collection of prescriptions was started a month after the consent to minimize bias in prescription writing. The prescriptions were collected from local pharmacy stores of Anand city over a period of six months. Prescriptions were analysed for errors in standard information, according to the WHO guide to good prescribing. Descriptive analysis was performed to estimate the frequency of errors; data were expressed as numbers and percentages. A total of 749 (549 handwritten and 200 computerised) prescriptions were collected. Abundant omission errors were identified in handwritten prescriptions, e.g., the OPD number was mentioned in 6.19%, the patient's age in 25.50%, gender in 17.30%, address in 9.29%, and the patient's weight in 11.29%, while among drug items only 2.97% of drugs were prescribed by generic name. Route and dosage form were mentioned in 77.35%-78.15%, dose in 47.25%, unit in 13.91%, regimens in 72.93%, and signa (directions for drug use) in 62.35%. In total, 4384 errors in 549 handwritten prescriptions and 501 errors in 200 computerized prescriptions were found in clinician and patient details, while in drug item details the total numbers of errors identified were 5015 and 621 in handwritten and computerized prescriptions, respectively. Compared to handwritten prescriptions, computerized prescriptions appeared to be associated with relatively lower rates of error. Since outpatient prescription errors are abundant and often occur in handwritten prescriptions, prescribers need to adapt themselves to computerized prescription order entry in their daily practice.
Buch, Jatin; Kothari, Nitin; Shah, Nishal
2016-01-01
Introduction Prescription order is an important therapeutic transaction between physician and patient. A good quality prescription is an extremely important factor for minimizing errors in dispensing medication and it should adhere to guidelines for prescription writing for the benefit of the patient. Aim To evaluate the frequency and type of prescription errors in outpatient prescriptions and find whether prescription writing abides by WHO standards of prescription writing. Materials and Methods A cross-sectional observational study was conducted at Anand city. Allopathic private practitioners of different specialities practising at Anand city were included in the study. Collection of prescriptions was started a month after the consent to minimize bias in prescription writing. The prescriptions were collected from local pharmacy stores of Anand city over a period of six months. Prescriptions were analysed for errors in standard information, according to the WHO guide to good prescribing. Statistical Analysis Descriptive analysis was performed to estimate the frequency of errors; data were expressed as numbers and percentages. Results A total of 749 (549 handwritten and 200 computerised) prescriptions were collected. Abundant omission errors were identified in handwritten prescriptions, e.g., the OPD number was mentioned in 6.19%, the patient's age in 25.50%, gender in 17.30%, address in 9.29%, and the patient's weight in 11.29%, while among drug items only 2.97% of drugs were prescribed by generic name. Route and dosage form were mentioned in 77.35%-78.15%, dose in 47.25%, unit in 13.91%, regimens in 72.93%, and signa (directions for drug use) in 62.35%. In total, 4384 errors in 549 handwritten prescriptions and 501 errors in 200 computerized prescriptions were found in clinician and patient details, while in drug item details the total numbers of errors identified were 5015 and 621 in handwritten and computerized prescriptions, respectively. Conclusion Compared to handwritten prescriptions, computerized prescriptions appeared to be associated with relatively lower rates of error. Since outpatient prescription errors are abundant and often occur in handwritten prescriptions, prescribers need to adapt themselves to computerized prescription order entry in their daily practice. PMID:27504305
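Reading the reported totals as rough per-prescription rates makes the comparison concrete. This is my arithmetic from the figures quoted above, not a calculation given in the paper:

# Error totals reported above, combined across both categories
errors_handwritten = 4384 + 5015   # clinician/patient details + drug items
errors_computerized = 501 + 621
rate_hw = errors_handwritten / 549   # about 17.1 errors per handwritten prescription
rate_cp = errors_computerized / 200  # about 5.6 errors per computerized prescription
print(f"{rate_hw:.1f} vs {rate_cp:.1f} errors per prescription")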
[About da tai - abortion in old Chinese folk medicine handwritten manuscripts].
Zheng, Jinsheng
2013-01-01
Of 881 Chinese handwritten volumes with medical texts of the 17th through mid-20th century held by the Staatsbibliothek zu Berlin and the Ethnologisches Museum Berlin-Dahlem, 48 volumes include prescriptions for induced abortion. A comparison shows that these records are significantly different from references to abortion in Chinese printed medical texts of pre-modern times. For example, the percentage of recipes recommended for artificial abortions in handwritten texts is significantly higher than in printed medical books. Authors of handwritten texts used 25 terms to designate artificial abortion, with the term da tai [see text], lit. "to strike the fetus", occurring most frequently. Its meaning is well defined, in contrast to other terms used, such as duo tai [see text], lit. "to make a fetus fall", xia tai [see text], lit. "to bring a fetus down", and duan chan [see text], lit. "to interrupt birthing", which is mostly used to indicate a temporary or permanent sterilization. Pre-modern Chinese medicine did not generally abstain from inducing abortions; physicians took a differentiated attitude. While abortions were described as "things a [physician with an attitude of] humaneness will not do", in cases where a pregnancy was seen as too risky for a woman she was offered medication to terminate the pregnancy. The commercial application of abortifacients has been recorded in China since ancient times. Demand for such services has continued over time for various reasons, including so-called illegitimate pregnancies, and those of nuns, widows and prostitutes. In general, recipes to induce abortions documented in printed medical literature have mild effects and are to be ingested orally. In comparison, those recommended in handwritten texts are rather toxic. Possibly to minimize the negative side effects of such medication, practitioners of folk medicine developed mechanical devices to perform "external", i.e., vaginal, approaches.
Edwards, Kylie-Ellen; Hagen, Sander M; Hannam, Jacqueline; Kruger, Cornelis; Yu, Richard; Merry, Alan F
2013-10-01
Anesthesia information management system (AIMS) technology is designed to facilitate high-quality anesthetic recordkeeping. We examined the hypothesis that no difference exists between AIMS and handwritten anesthetic records in regard to the completeness of important information contained as text data. We also investigated the effect of observational research on the completeness of anesthesiologists' recordkeeping. As part of a larger randomized controlled trial, participants were randomized to produce 400 anesthetic records, either handwritten (n = 200) or using an AIMS (n = 200). Records were assessed against a 32-item checklist modified from a clinical guideline. Intravenous agent and bolus recordings were quantified, and data were compared between handwritten and AIMS records. Records produced with intensive research observation during the initial phase of the study (n = 200) were compared with records produced with reduced intensity observation during the final phase of the study (n = 200). The AIMS records were more complete than the handwritten records (mean difference 7.1%; 95% confidence interval [CI] 5.6 to 8.6%; P < 0.0001), with higher completion rates for six individual items on the checklist (P < 0.0001). Drug annotation data were equal between arms. The records completed early in the study, during a period of more intense observation, were more thorough than subsequent records (87.3% vs 81.6%, respectively; mean difference 5.7%; 95% CI 4.2 to 7.3%; P < 0.0001). The AIMS records were more complete than the handwritten records for 32 predefined items. The potential of observational research to influence professional behaviour in an anesthetic context was confirmed. This trial was registered at the Australian New Zealand Clinical Trials Registry No 12608000068369.
Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications
NASA Astrophysics Data System (ADS)
Thanaborvornwiwat, N.; Patanukhom, K.
2018-04-01
Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that renders corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variation of camera poses, and variation of handwriting styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. The experiments show that we need to extract only five feature points per image, which provides the best registration results. An exhaustive search is used to find the best matching pattern of the feature points in two images. We also compared the performance of the proposed method to some existing registration methods and found that the proposed method provides better accuracy and time efficiency.
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
Barnhart, Anthony S.; Goldinger, Stephen D.
2014-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, we can handle transform-invariance easily because we are able to use tangent vectors to approximate transformations. However, tangent vectors are not available for other types of images, such as color images. Hence, kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified with experiments on handwritten digit and color image classification.
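The core of LSC is the projection distance to a class-local affine subspace. Below is a minimal NumPy sketch of that classification rule, without the tangent-distance or kernel extensions described in the abstract; the parameter k and all names are illustrative.

import numpy as np

def lsc_classify(x, X_train, y_train, k=5):
    # For each class, span a local affine subspace with the k nearest
    # training samples and classify x by the smallest projection residual.
    best_class, best_err = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        nearest = Xc[np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]]
        mu = nearest.mean(axis=0)
        B = (nearest - mu).T                       # local basis vectors
        coef, *_ = np.linalg.lstsq(B, x - mu, rcond=None)
        err = np.linalg.norm(x - (mu + B @ coef))  # distance to the subspace
        if err < best_err:
            best_class, best_err = c, err
    return best_class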
Spotting handwritten words and REGEX using a two stage BLSTM-HMM architecture
NASA Astrophysics Data System (ADS)
Bideault, Gautier; Mioulet, Luc; Chatelain, Clément; Paquet, Thierry
2015-01-01
In this article, we propose a hybrid model for spotting words and regular expressions (REGEX) in handwritten documents. The model is made of the state-of-the-art BLSTM (Bidirectional Long Short-Term Memory) neural network for recognizing and segmenting characters, coupled with an HMM to build line models able to spot the desired sequences. Experiments on the Rimes database show very promising results.
HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.
Bharath, A; Madhvanath, Sriganesh
2012-04-01
Research on recognizing online handwritten words in Indic scripts is at its early stages when compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts: Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon-free one as well as their combination. The best recognition accuracies obtained for 20,000-word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.
Spotting words in handwritten Arabic documents
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Srinivasan, Harish; Babu, Pavithra; Bhole, Chetan
2006-01-01
The design and performance of a system for spotting handwritten Arabic words in scanned document images is presented. The three main components of the system are a word segmenter, a shape-based matcher for words, and a search interface. The user types a query in English within a search window; the system finds the equivalent Arabic word, e.g., by dictionary look-up, and locates word images in an indexed (segmented) set of documents. A two-step approach is employed in performing the search: (1) prototype selection: the query is used to obtain a set of handwritten samples of that word from a known set of writers (these are the prototypes), and (2) word matching: the prototypes are used to spot each occurrence of those words in the indexed document database. A ranking is performed on the entire set of test word images, where the ranking criterion is a similarity score between each prototype word and the candidate words based on global word shape features. A database of 20,000 word images contained in 100 scanned handwritten Arabic documents written by 10 different writers was used to study retrieval performance. Using five writers for providing prototypes and the other five for testing, with manually segmented documents, 55% precision is obtained at 50% recall. Performance increases as more writers are used for training.
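The word-matching step amounts to ranking candidate word images by similarity to writer-provided prototypes. A compact sketch under the assumption that global shape feature vectors have already been extracted; the feature extraction itself and the cosine similarity used here are illustrative stand-ins, not the paper's exact scoring.

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def spot_word(prototype_feats, candidate_feats):
    # Score each candidate word image by its best similarity to any prototype,
    # then return candidates ranked from most to least similar.
    scores = np.array([max(cosine(p, c) for p in prototype_feats)
                       for c in candidate_feats])
    order = np.argsort(scores)[::-1]
    return order, scores[order]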
Kabir, Muhammad N.; Alginahi, Yasser M.
2014-01-01
This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues were largely addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover media to achieve its goal. While many such complex schemes with resource redundancies are sufficient in offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints. PMID:25254247
Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks
2018-01-01
Much of the information the brain processes and stores is temporal in nature—a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds—we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. PMID:29537963
Egeland, Merete T; Tarangen, Magnus; Shiryaeva, Olga; Gay, Caryl; Døsen, Liv K; Haye, Rolf
2017-06-02
Postal questionnaires are often used to assess the results of nasal septoplasty, but response rates vary widely. This study assesses strategies designed to increase the response rate. Postoperative questionnaires using visual analogue scales (VAS) for nasal obstruction were mailed to 160 consecutive patients alternately allocated to one of two groups. Group A received the questionnaire in the usual manner and group B received a modified cover letter with hand-written name and signature and a hand-stamped return envelope. Of the 80 patients in each group, 47 (58.8%) in group A and 54 (67.5%) in group B returned the questionnaire (p = 0.25). There were no age or gender differences between the groups, nor did the pre- and postoperative VAS scores differ between the groups. The strategies used in this study increased the response rate to postal questionnaires by 8.7% points, but this was not a statistically significant or clinically meaningful improvement.
Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.
Goudar, Vishwa; Buonomano, Dean V
2018-03-14
Much of the information the brain processes and stores is temporal in nature: a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds. We show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.
Generation of signature databases with fast codes
NASA Astrophysics Data System (ADS)
Bradford, Robert A.; Woodling, Arthur E.; Brazzell, James S.
1990-09-01
Using the FASTSIG signature code to generate optical signature databases for the Ground-based Surveillance and Tracking System (GSTS) Program has improved the efficiency of the database generation process. The goal of the current GSTS database is to provide standardized, threat-representative target signatures that can easily be used for acquisition and track studies, discrimination algorithm development, and system simulations. Large databases, with as many as eight interpolation parameters, are required to maintain the fidelity demands of discrimination and to generalize their application to other strategic systems. As the need increases for quick availability of long wave infrared (LWIR) target signatures for an evolving design-to-threat, FASTSIG has become a database generation alternative to using the industry standard Optical Signatures Code (OSC). FASTSIG, developed in 1985 to meet the unique strategic systems demands imposed by the discrimination function, has the significant advantage of being a faster running signature code than the OSC, typically requiring two percent of the CPU time. It uses analytical approximations to model axisymmetric targets, with the fidelity required for discrimination analysis. Access to the signature database is accomplished through use of the waveband integration and interpolation software, INTEG and SIGNAT. This paper gives details of this procedure as well as sample interpolated signatures, and also covers sample verification by comparison to the OSC, in order to establish the fidelity of the FASTSIG-generated database.
WOSMIP II- Workshop on Signatures of Medical and Industrial Isotope Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Murray; Achim, Pascal; Auer, M.
2011-11-01
Medical and industrial radioisotopes are fundamental tools used in science, medicine and industry, with an ever-expanding usage in medical practice where their availability is vital. Very sensitive environmental radionuclide monitoring networks have been developed for nuclear-security-related monitoring [particularly Comprehensive Test-Ban-Treaty (CTBT) compliance verification] and are now operational.
Code of Federal Regulations, 2010 CFR
2010-07-01
... accountholder or must verify the individual's identity. Verification may be either through a signature card or... the purchaser. If the deposit accountholder's identity has not been verified previously, the financial institution shall verify the deposit accountholder's identity by examination of a document which is normally...
Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers
NASA Astrophysics Data System (ADS)
Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille
This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers, and of the combination thereof, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which amount to a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
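For readers unfamiliar with the primitive being extended, here is a short Python sketch of single-size Montgomery multiplication (the textbook REDC algorithm, not the paper's double-size construction; requires Python 3.8+ for the modular inverse via pow):

def montgomery_setup(n, k):
    # Odd modulus n, radix R = 2^k with R > n; n' = -n^{-1} mod R.
    R = 1 << k
    n_prime = (-pow(n, -1, R)) % R
    return R, n_prime

def redc(T, n, R, n_prime):
    # Montgomery reduction: returns T * R^{-1} mod n, replacing division
    # by n with cheap reductions modulo the power-of-two radix R.
    m = (T % R) * n_prime % R
    t = (T + m * n) // R
    return t - n if t >= n else t

# Multiply x*y mod n in the Montgomery domain (toy parameters).
n, k = 101, 8
R, n_prime = montgomery_setup(n, k)
x, y = 42, 77
xm, ym = (x * R) % n, (y * R) % n              # to Montgomery form
zm = redc(xm * ym, n, R, n_prime)              # Montgomery product
assert redc(zm, n, R, n_prime) == (x * y) % n  # back to ordinary form

The modulus-dependent precomputation the abstract refers to is exactly n_prime (and the Montgomery-form conversion constants), which is what the paper's technique avoids recomputing at double size.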
Tuerxunyiming, Muhadasi; Xian, Feng; Zi, Jin; Yimamu, Yilihamujiang; Abuduwayite, Reshalaiti; Ren, Yan; Li, Qidan; Abudula, Abulizi; Liu, SiQi; Mohemaiti, Patamu
2018-01-05
Maturity-onset diabetes of the young (MODY) is an inherited monogenic type of diabetes. Genetic mutations in MODY often cause nonsynonymous changes that directly lead to functional distortion of proteins and pathological consequences. Herein, we proposed that the inherited mutations found in a MODY family could cause a disturbance of protein abundance, specifically in serum. Serum samples were collected from a Uyghur MODY family through three generations, and the serum proteins after depletion treatment were examined by quantitative proteomics to characterize the MODY-related serum proteins, followed by verification using targeted quantification proteomics. A total of 32 serum proteins were preliminarily identified as MODY-related. A further verification test on the individual samples demonstrated 12 candidates with significantly different abundance in the MODY patients. A comparison of the 12 proteins among the sera of type 1 diabetes, type 2 diabetes, MODY, and healthy subjects revealed a protein signature related to MODY composed of serum proteins such as SERPINA7, APOC4, LPA, C6, and F5.
Handwritten mathematical symbols dataset.
Chajri, Yassine; Bouikhalene, Belaid
2016-06-01
Due to the technological advances in recent years, paper scientific documents are used less and less. Thus, the trend in the scientific community to use digital documents has increased considerably. Among these documents, there are scientific documents and more specifically mathematics documents. In this context, we present our own dataset of handwritten mathematical symbols composed of 10,379 images. This dataset gathers Arabic characters, Latin characters, Arabic numerals, Latin numerals, arithmetic operators, set-symbols, comparison symbols, delimiters, etc.
Optical character recognition with feature extraction and associative memory matrix
NASA Astrophysics Data System (ADS)
Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa
1998-06-01
A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly show the effectiveness of the method.
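The memory matrix described can be sketched as a pseudo-inverse built from a modified SVD. A minimal NumPy version follows; flooring small singular values is one plausible reading of "modifying small singular values", and all names and the floor value are illustrative assumptions.

import numpy as np

def memory_matrix(X, Y, floor=1e-3):
    # X: feature vectors as columns; Y: target (class) vectors as columns.
    # Build M with M @ X ~ Y via an SVD pseudo-inverse in which small
    # singular values are floored rather than inverted directly.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s, floor * s.max())
    return Y @ (Vt.T @ np.diag(1.0 / s) @ U.T)

def recall(M, x):
    # Associative recall: the strongest output component names the class.
    return int(np.argmax(M @ x))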
1985-04-01
all invitations should be handwritten in black ink and addressed in the full name of the husband and wife unless the guest is single. Requesting an... " is handwritten in black ink. If the reply is by telephone, the number is written directly beneath the R.S.V.P. (or a separate response card may be... styles. The card should be engraved with black ink on excellent quality card stock (usually white or cream in color). Script lettering is the most
Pereira, Clayton R; Pereira, Danilo R; Rosa, Gustavo H; Albuquerque, Victor H C; Weber, Silke A T; Hook, Christian; Papa, João P
2018-05-01
Parkinson's disease (PD) is considered a degenerative disorder that affects the motor system, and it may cause tremors, micrographia, and freezing of gait. Although PD is related to the lack of dopamine, the process that triggers its development is not fully understood yet. In this work, we introduce convolutional neural networks to learn features from images produced by handwritten dynamics, which capture different information during the individual's assessment. Additionally, we make available a dataset composed of images and signal-based data to foster research related to computer-aided PD diagnosis. The proposed approach was compared against raw data and texture-based descriptors, showing suitable results, mainly in the context of early-stage detection, with results close to 95%. The analysis of handwritten dynamics using deep learning techniques proved useful for automatic Parkinson's disease identification and can outperform handcrafted features. Copyright © 2018 Elsevier B.V. All rights reserved.
Recognition of Handwritten Arabic words using a neuro-fuzzy network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boukharouba, Abdelhak; Bennia, Abdelhak
We present a new method for the recognition of handwritten Arabic words based on a neuro-fuzzy hybrid network. As a first step, connected components (CCs) of black pixels are detected. Then the system determines which CCs are sub-words and which are stress marks. The stress marks are then isolated and identified separately and the sub-words are segmented into graphemes. Each grapheme is described by topological and statistical features. Fuzzy rules are extracted from training examples by a hybrid learning scheme comprised of two phases: a rule generation phase from data using fuzzy c-means, and a rule parameter tuning phase using gradient descent learning. After learning, the network encodes in its topology the essential design parameters of a fuzzy inference system. The contribution of this technique is shown through the significant tests performed on a handwritten Arabic words database.
Analysis of line structure in handwritten documents using the Hough transform
NASA Astrophysics Data System (ADS)
Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin
2010-01-01
In the analysis of handwriting in documents a central task is that of determining line structure of the text, e.g., number of text lines, location of their starting and end-points, line-width, etc. While simple methods can handle ideal images, real world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, noisy or degraded images etc. This paper explores the application of the Hough transform method to handwritten documents with the goal of automatically determining global document line structure in a top-down manner which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
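A toy version of the voting scheme for near-horizontal text lines shows how the Hough accumulator is built. This is an illustrative sketch, not the paper's implementation; real documents need the skew range, quantization, and peak selection tuned.

import numpy as np

def hough_text_lines(binary_img, max_skew_deg=5.0, n_angles=21):
    # Every ink pixel votes for the (rho, theta) lines passing through it;
    # accumulator peaks near theta = 90 degrees correspond to text lines.
    ys, xs = np.nonzero(binary_img)
    skew = np.deg2rad(max_skew_deg)
    thetas = np.linspace(np.pi / 2 - skew, np.pi / 2 + skew, n_angles)
    diag = int(np.hypot(*binary_img.shape))
    acc = np.zeros((2 * diag + 1, n_angles), dtype=np.int32)
    for i, th in enumerate(thetas):
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int)
        np.add.at(acc[:, i], rho + diag, 1)   # shift so indices are >= 0
    return acc, thetas                        # peak rows give line positions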
Experiments on Urdu Text Recognition
NASA Astrophysics Data System (ADS)
Mukhtar, Omar; Setlur, Srirangaraj; Govindaraju, Venu
Urdu is a language spoken in the Indian subcontinent by an estimated 130-270 million speakers. At the spoken level, Urdu and Hindi are considered dialects of a single language because of shared vocabulary and the similarity in grammar. At the written level, however, Urdu is much closer to Arabic because it is written in Nastaliq, the calligraphic style of the Persian-Arabic script. Therefore, a speaker of Hindi can understand spoken Urdu but may not be able to read written Urdu because Hindi is written in Devanagari script, whereas an Arabic writer can read the written words but may not understand the spoken Urdu. In this chapter we present an overview of written Urdu. Prior research in handwritten Urdu OCR is very limited. We present (perhaps) the first system for recognizing handwritten Urdu words. On a data set of about 1300 handwritten words, we achieved an accuracy of 70% for the top choice, and 82% for the top three choices.
Data requirements for verification of ram glow chemistry
NASA Technical Reports Server (NTRS)
Swenson, G. R.; Mende, S. B.
1985-01-01
A set of questions is posed regarding the surface chemistry producing the ram glow on the space shuttle. The questions surround verification of the chemical cycle involved in the physical processes leading to the glow. The questions, and a matrix of measurements required for most answers, are presented. The measurements include knowledge of the flux composition to and from a ram surface as well as spectroscopic signatures from the UV to visible to IR. A pallet set of experiments proposed to accomplish the measurements is discussed. An interim experiment involving an available infrared instrument to be operated from the shuttle Orbiter cabin is also discussed.
Characterization of palmprints by wavelet signatures via directional context modeling.
Zhang, Lei; Zhang, David
2004-06-01
The palmprint is one of the most reliable physiological characteristics that can be used to distinguish between individuals. Current palmprint-based systems are more user friendly, more cost effective, and require fewer data signatures than traditional fingerprint-based identification systems. The principal lines and wrinkles captured in a low-resolution palmprint image provide more than enough information to uniquely identify an individual. This paper presents a palmprint identification scheme that characterizes a palmprint using a set of statistical signatures. The palmprint is first transformed into the wavelet domain, and the directional context of each wavelet subband is defined and computed in order to collect the predominant coefficients of its principal lines and wrinkles. A set of statistical signatures, which includes gravity center, density, spatial dispersivity and energy, is then defined to characterize the palmprint with the selected directional context values. A classification and identification scheme based on these signatures is subsequently developed. This scheme exploits the features of principal lines and prominent wrinkles sufficiently and achieves satisfactory results. Compared with the line-segments-matching or interesting-points-matching based palmprint verification schemes, the proposed scheme uses a much smaller amount of data signatures. It also provides a convenient classification strategy and more accurate identification.
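A rough sketch of the feature pipeline with PyWavelets follows; the paper's directional-context selection is replaced here by a simple magnitude threshold, so this illustrates the flavor of the signatures (energy, density, gravity center) rather than the exact scheme, and the wavelet, level, and threshold are assumptions.

import numpy as np
import pywt

def palm_signatures(img, wavelet="db4", levels=3, thr=0.2):
    # Decompose the palmprint and summarize the dominant coefficients of each
    # detail subband (principal lines and wrinkles) by statistical signatures.
    feats = []
    for detail in pywt.wavedec2(img, wavelet, level=levels)[1:]:
        for band in detail:                   # horizontal, vertical, diagonal
            mag = np.abs(band)
            keep = mag > thr * (mag.max() + 1e-12)
            ys, xs = np.nonzero(keep)
            energy = float((mag[keep] ** 2).sum())
            density = float(keep.mean())
            gy, gx = (ys.mean(), xs.mean()) if ys.size else (0.0, 0.0)
            feats.extend([energy, density, gy, gx])
    return np.asarray(feats)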
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunch, Kyle J.; Jones, Anthony M.; Ramuhalli, Pradeep
The ratification and ongoing implementation of the New START Treaty have been widely regarded as noteworthy global security achievements for both the Obama Administration and the Putin (formerly Medvedev) regime. But deeper cuts that move beyond the United States and Russia to engage the P-5 and other nuclear weapons possessor states are envisioned under future arms control regimes, and are indeed required for the P-5 in accordance with their Article VI disarmament obligations in the Nuclear Non-Proliferation Treaty. Future verification needs will include monitoring the cessation of production of new fissile material for weapons, monitoring storage of warhead components and fissile materials, and verifying dismantlement of warheads, pits, secondary stages, and other materials. A fundamental challenge to implementing a nuclear disarmament regime is the ability to thwart unauthorized material diversion throughout the dismantlement and disposition process through strong chain of custody implementation. Verifying the declared presence, or absence, of nuclear materials and weapons components throughout the dismantlement and disposition lifecycle is a critical aspect of the disarmament process. From both the diplomatic and technical perspectives, verification under these future arms control regimes will require new solutions. Since any acceptable verification technology must protect sensitive design information and attributes to prevent the release of classified or other proliferation-sensitive information, non-nuclear non-sensitive modalities may provide significant new verification tools which do not require the use of additional information barriers. Alternative verification technologies based upon electromagnetics and acoustics could potentially play an important role in fulfilling the challenging requirements of future verification regimes. For example, researchers at the Pacific Northwest National Laboratory (PNNL) have demonstrated that low frequency electromagnetic signatures of sealed metallic containers can be used to rapidly confirm the presence of specific components on a yes/no basis without revealing classified information. PNNL researchers have also used ultrasonic measurements to obtain images of material microstructures which may be used as templates or unique identifiers of treaty-limited items. Such alternative technologies are suitable for application in various stages of weapons dismantlement and often include the advantage of an inherent information barrier due to the inability to extract classified weapon design information from the collected data. As a result, these types of technologies complement radiation-based verification methods for arms control. This article presents an overview of several alternative verification technologies that are suitable for supporting a future, broader and more intrusive arms control regime that spans the nuclear weapons disarmament lifecycle. The general capabilities and limitations of each verification modality are discussed and example technologies are presented. Potential applications are defined in the context of the nuclear material and weapons lifecycle. Example applications range from authentication (e.g., tracking and signatures within the chain of custody from downloading through weapons storage, unclassified templates and unique identification) to verification of absence and final material disposition.
Handwritten mathematical symbols dataset
Chajri, Yassine; Bouikhalene, Belaid
2016-01-01
Due to the technological advances in recent years, paper scientific documents are used less and less. Thus, the trend in the scientific community to use digital documents has increased considerably. Among these documents, there are scientific documents and more specifically mathematics documents. In this context, we present our own dataset of handwritten mathematical symbols composed of 10,379 images. This dataset gathers Arabic characters, Latin characters, Arabic numerals, Latin numerals, arithmetic operators, set-symbols, comparison symbols, delimiters, etc. PMID:27006975
Optical ID Tags for Secure Verification of Multispectral Visible and NIR Signatures
NASA Astrophysics Data System (ADS)
Pérez-Cabré, Elisabet; Millán, María S.; Javidi, Bahram
2008-04-01
We propose to combine information from visible (VIS) and near infrared (NIR) spectral bands to increase the robustness of security systems and to deter unauthorized use of optical tags that permit the identification of a given person or object. The signature that identifies the element under surveillance can only be obtained by the appropriate combination of the visible content and the NIR data. A fully-phase encryption technique is applied to prevent easy recognition of the resultant signature by the naked eye and easy reproduction using conventional devices for imaging or scanning. The obtained complex-amplitude encrypted distribution is encoded on an identity (ID) tag. Spatial multiplexing of the encrypted signature allows us to build a distortion-invariant ID tag, so that remote authentication can be achieved even if the tag is captured under rotation or at different distances. We explore the possibility of using partial information of the encrypted distribution. Simulation results are provided and discussed.
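The classic double random phase encoding scheme conveys the idea of phase-based optical encryption in a few lines of NumPy. It is a well-known stand-in for illustration, not the authors' exact fully-phase technique, and all names are mine.

import numpy as np

def drpe_encrypt(img, rng):
    # Two random phase masks, one at the input plane and one in the Fourier
    # plane; the encrypted field looks like complex white noise.
    p1 = np.exp(2j * np.pi * rng.random(img.shape))
    p2 = np.exp(2j * np.pi * rng.random(img.shape))
    return np.fft.ifft2(np.fft.fft2(img * p1) * p2), (p1, p2)

def drpe_decrypt(field, masks):
    # Undo the Fourier-plane mask, then the input-plane mask.
    p1, p2 = masks
    return np.abs(np.fft.ifft2(np.fft.fft2(field) / p2) / p1)

img = np.random.default_rng(1).random((64, 64))
field, masks = drpe_encrypt(img, np.random.default_rng(0))
assert np.allclose(drpe_decrypt(field, masks), img)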
NASA Astrophysics Data System (ADS)
Lowrey, J. D.; Haas, D.
2013-12-01
Underground nuclear explosions (UNEs) produce anthropogenic isotopes that can potentially be used in the verification component of the Comprehensive Nuclear-Test-Ban Treaty. Several isotopes of radioactive xenon gas have been identified as radionuclides of interest within the International Monitoring System (IMS) and in an On-Site Inspection (OSI). Substantial research has been previously undertaken to characterize the geologic and atmospheric mechanisms that can drive the movement of radionuclide gas from a well-contained UNE, considering both sensitivities on gas arrival time and signature variability of xenon due to the nature of subsurface transport. This work further considers sensitivities of radioxenon gas arrival time and signatures to large variability in geologic stratification and generalized explosion cavity characteristics, as well as compares this influence to variability in the shallow surface.
Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition
NASA Astrophysics Data System (ADS)
Popko, E. A.; Weinstein, I. A.
2016-08-01
Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of a convolutional neural network. An integrated fuzzy logic module based on a structural approach was developed. The system architecture adjusts the output of the neural network to improve the quality of symbol identification. It was shown that the proposed algorithm is flexible, and a high recognition rate of 99.23% was achieved.
Fast Multiclass Segmentation using Diffuse Interface Methods on Graphs
2013-02-01
The MNIST database (http://yann.lecun.com/exdb/mnist/) contains 70,000 28 × 28 images of handwritten digits 0 through 9. Examples of entries can be found in Figure 6. The task is to classify each of the images into the class of the corresponding digit. The images include digits from 0 to 9; thus, this is a 10-class segmentation problem. To construct the weight matrix, we used N
Klein, Hans-Ulrich; Ruckert, Christian; Kohlmann, Alexander; Bullinger, Lars; Thiede, Christian; Haferlach, Torsten; Dugas, Martin
2009-12-15
Multiple gene expression signatures derived from microarray experiments have been published in the field of leukemia research. A comparison of these signatures with results from new experiments is useful for verification as well as for interpretation of the results obtained. Currently, the percentage of overlapping genes is frequently used to compare published gene signatures against a signature derived from a new experiment. However, it has been shown that the percentage of overlapping genes is of limited use for comparing two experiments due to the variability of gene signatures caused by different array platforms or assay-specific influencing parameters. Here, we present a robust approach for a systematic and quantitative comparison of published gene expression signatures with an exemplary query dataset. A database storing 138 leukemia-related published gene signatures was designed. Each gene signature was manually annotated with terms according to a leukemia-specific taxonomy. Two analysis steps are implemented to compare a new microarray dataset with the results from previous experiments stored and curated in the database. First, the global test method is applied to assess gene signatures and to constitute a ranking among them. In a subsequent analysis step, the focus is shifted from single gene signatures to chromosomal aberrations or molecular mutations as modeled in the taxonomy. Potentially interesting disease characteristics are detected based on the ranking of gene signatures associated with these aberrations stored in the database. Two example analyses are presented. An implementation of the approach is freely available as a web-based application. The presented approach helps researchers to systematically integrate the knowledge derived from numerous microarray experiments into the analysis of a new dataset. By means of example leukemia datasets, we demonstrate that this approach detects related experiments as well as related molecular mutations and may help to interpret new microarray data.
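The ranking step can be approximated crudely without the global test: score each stored signature on the query dataset and sort by significance. A sketch of that idea using a plain t-test on per-sample signature means, which is simpler than the global test the paper actually uses; data layout and names are assumptions.

import numpy as np
from scipy import stats

def signature_pvalue(expr, gene_rows, labels):
    # expr: genes x samples matrix; labels: 0/1 phenotype per sample (array).
    score = expr[gene_rows].mean(axis=0)        # per-sample signature score
    return stats.ttest_ind(score[labels == 0], score[labels == 1]).pvalue

def rank_signatures(expr, labels, database):
    # database maps signature name -> row indices of its genes in expr;
    # smaller p-values rank first, mimicking the global-test ordering.
    return sorted(database.items(),
                  key=lambda kv: signature_pvalue(expr, kv[1], labels))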
Evaluating structural pattern recognition for handwritten math via primitive label graphs
NASA Astrophysics Data System (ADS)
Zanibbi, Richard; Mouchère, Harold; Viard-Gaudin, Christian
2013-01-01
Currently, structural pattern recognizer evaluations compare graphs of detected structure to target structures (i.e. ground truth) using recognition rates, recall and precision for object segmentation, classification and relationships. In document recognition, these target objects (e.g. symbols) are frequently comprised of multiple primitives (e.g. connected components, or strokes for online handwritten data), but current metrics do not characterize errors at the primitive level, from which object-level structure is obtained. Primitive label graphs are directed graphs defined over primitives and primitive pairs. We define new metrics obtained by Hamming distances over label graphs, which allow classification, segmentation and parsing errors to be characterized separately, or using a single measure. Recall and precision for detected objects may also be computed directly from label graphs. We illustrate the new metrics by comparing a new primitive-level evaluation to the symbol-level evaluation performed for the CROHME 2012 handwritten math recognition competition. A Python-based set of utilities for evaluating, visualizing and translating label graphs is publicly available.
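Representing a label graph as node and edge label dictionaries makes the Hamming-distance metrics a few lines of Python. The encoding below is an illustrative assumption, not the CROHME tooling's actual format.

def label_graph_distance(truth, detected):
    # Count label disagreements over primitives (classification errors) and
    # over primitive pairs (segmentation/relationship errors) separately.
    nodes = set(truth["nodes"]) | set(detected["nodes"])
    edges = set(truth["edges"]) | set(detected["edges"])
    d_nodes = sum(truth["nodes"].get(p) != detected["nodes"].get(p) for p in nodes)
    d_edges = sum(truth["edges"].get(e) != detected["edges"].get(e) for e in edges)
    return d_nodes, d_edges, d_nodes + d_edges

truth = {"nodes": {1: "x", 2: "x"}, "edges": {(1, 2): "same_symbol"}}
detected = {"nodes": {1: "x", 2: "y"}, "edges": {(1, 2): "none"}}
print(label_graph_distance(truth, detected))   # (1, 1, 2)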
Kannada character recognition system using neural network
NASA Astrophysics Data System (ADS)
Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.
2013-03-01
Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, which include reading aids for the blind, bank cheque processing, and conversion of any handwritten document into structured text form. Comparatively little work exists on Indian language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
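The probability-map stage can be approximated with anisotropic smoothing of ink density. The sketch below stops at connected-region labeling where the paper instead evolves a level set; the sigma values and threshold are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, label

def text_line_probability(binary_img, sigma=(3, 15)):
    # Smooth ink density more along the writing direction than across it,
    # so pixels inside a text line receive high values.
    density = gaussian_filter(binary_img.astype(float), sigma=sigma)
    return density / (density.max() + 1e-12)

def initial_line_estimate(binary_img, thresh=0.35):
    # Threshold the map and label regions; this is only the initialization
    # that a level-set evolution would then refine.
    regions, n_lines = label(text_line_probability(binary_img) > thresh)
    return regions, n_lines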
de Souza, João W M; Alves, Shara S A; Rebouças, Elizângela de S; Almeida, Jefferson S; Rebouças Filho, Pedro P
2018-01-01
Parkinson's disease affects millions of people around the world, and consequently various approaches have emerged to help diagnose this disease, among which we can highlight handwriting exams. Extracting features from handwriting exams is an important contribution of the computational field to the diagnosis of this disease. In this paper, we propose an approach that measures the similarity between the exam template and the handwritten trace of the patient following the exam template. This similarity was measured using the Structural Cooccurrence Matrix to calculate how close the handwritten trace of the patient is to the exam template. The proposed approach was evaluated using various exam templates and the handwritten traces of the patient. Each of these variations was used together with the Naïve Bayes, OPF, and SVM classifiers. In conclusion, the proposed approach proved to be better than the existing methods found in the literature and is therefore a promising tool for the diagnosis of Parkinson's disease.
Nonintrusive multibiometrics on a mobile device: a comparison of fusion techniques
NASA Astrophysics Data System (ADS)
Allano, Lorene; Morris, Andrew C.; Sellahewa, Harin; Garcia-Salicetti, Sonia; Koreman, Jacques; Jassim, Sabah; Ly-Van, Bao; Wu, Dalei; Dorizzi, Bernadette
2006-04-01
In this article we test a number of score fusion methods for the purpose of multimodal biometric authentication. These tests were made for the SecurePhone project, whose aim is to develop a prototype mobile communication system enabling biometrically authenticated users to deal legally binding m-contracts during a mobile phone call on a PDA. The three biometrics of voice, face and signature were selected because they are all traditional non-intrusive and easy to use means of authentication which can readily be captured on a PDA. By combining multiple biometrics of relatively low security it may be possible to obtain a combined level of security which is at least as high as that provided by a PIN or handwritten signature, traditionally used for user authentication. As the relative success of different fusion methods depends on the database used and tests made, the database we used was recorded on a suitable PDA (the Qtek2020) and the test protocol was designed to reflect the intended application scenario, which is expected to use short text prompts. Not all of the fusion methods tested are original. They were selected for their suitability for implementation within the constraints imposed by the application. All of the methods tested are based on fusion of the match scores output by each modality. Though computationally simple, the methods tested have shown very promising results. All of the 4 fusion methods tested obtain a significant performance increase.
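The simplest member of the score-fusion family tested in such studies is a weighted sum of normalized match scores. A hedged sketch follows; the weights, bounds, and threshold are illustrative values, not the SecurePhone configuration.

import numpy as np

def minmax_norm(score, lo, hi):
    # Map a raw matcher score into [0, 1] using bounds from training data.
    return float(np.clip((score - lo) / (hi - lo + 1e-12), 0.0, 1.0))

def weighted_sum_fusion(scores, weights, bounds):
    # Fuse per-modality match scores into one authentication score.
    return sum(w * minmax_norm(scores[m], *bounds[m]) for m, w in weights.items())

scores = {"voice": 0.62, "face": 0.48, "signature": 0.71}
weights = {"voice": 0.4, "face": 0.3, "signature": 0.3}
bounds = {m: (0.0, 1.0) for m in scores}
accept = weighted_sum_fusion(scores, weights, bounds) > 0.5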
Post processing for offline Chinese handwritten character string recognition
NASA Astrophysics Data System (ADS)
Wang, YanWei; Ding, XiaoQing; Liu, ChangSong
2012-01-01
Offline Chinese handwritten character string recognition is one of the most important research fields in pattern recognition. Due to the free writing style, large variability in character shapes and differing geometric characteristics, Chinese handwritten character string recognition is a challenging problem. Among current methods, the over-segmentation and merging method, which integrates geometric information, character recognition information and contextual information, shows promising results. It is found experimentally that a large part of the errors are segmentation errors, occurring mainly around non-Chinese characters. A Chinese character string contains not only wide characters, namely Chinese characters, but also narrow characters such as digits and letters of the alphabet. The segmentation errors are mainly caused by a uniform geometric model being imposed on all segmented candidate characters. To solve this problem, post processing is employed to improve the recognition accuracy of narrow characters. On one hand, multi-geometric models are established for wide characters and narrow characters respectively; under these models, narrow characters are less prone to be merged. On the other hand, top-ranked recognition results of candidate paths are integrated to boost the final recognition of narrow characters. The post processing method is investigated on two datasets, in total 1405 handwritten address strings. Wide character recognition accuracy improved slightly, while narrow character recognition accuracy increased by 10.41% and 10.03% on the two datasets, respectively. This indicates that the post processing method is effective in improving the recognition accuracy of narrow characters.
Code of Federal Regulations, 2011 CFR
2011-04-01
... EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.4 How does a tribal government, tribal gaming... affects the play of the Class II game be submitted, together with the signature verification required by... to correct a problem affecting the fairness, security, or integrity of a game or accounting system or...
Code of Federal Regulations, 2010 CFR
2010-04-01
... EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.4 How does a tribal government, tribal gaming... affects the play of the Class II game be submitted, together with the signature verification required by... to correct a problem affecting the fairness, security, or integrity of a game or accounting system or...
Code of Federal Regulations, 2012 CFR
2012-04-01
... EQUIPMENT USED WITH THE PLAY OF CLASS II GAMES § 547.4 How does a tribal government, tribal gaming... affects the play of the Class II game be submitted, together with the signature verification required by... to correct a problem affecting the fairness, security, or integrity of a game or accounting system or...
Lee, J W; Cha, D K; Kim, I; Son, A; Ahn, K H
2008-02-01
Fatty acid methyl ester (FAME) technology was evaluated as a monitoring tool for quantification of Gordonia amarae in activated sludge systems. The fatty acid 19:1 alcohol, identified as a unique fatty acid in G. amarae, was not only confirmed to be present in foaming plant samples, but the quantity of the signature peak also correlated closely with the degree of foaming. A foaming potential experiment provided a range of critical foaming levels that corresponded to the G. amarae population. This range of critical Gordonia levels was correlated to the threshold signature FAME amount. Six full-scale wastewater treatment plants were selected based on a survey to participate in our full-scale study to evaluate the potential application of the FAME technique as a Gordonia monitoring tool. Greater amounts of signature FAME were extracted from the mixed liquor samples obtained from treatment plants experiencing Gordonia foaming problems. The amounts of signature FAME correlated well with the conventional filamentous counting technique. These results demonstrate that the relative abundance of the signature FAMEs can be used to quantitatively monitor the abundance of foam-causing microorganisms in activated sludge.
A Record Book of Open Heart Surgical Cases between 1959 and 1982, Hand-Written by a Cardiac Surgeon.
Kim, Won-Gon
2016-08-01
A book of brief records of open heart surgeries performed between 1959 and 1982 at Seoul National University Hospital was recently found. The book was hand-written by the late professor and cardiac surgeon Yung Kyoon Lee (1921-1994). It contains valuable information about cardiac patients and surgery at the early stages of the establishment of open heart surgery in Korea, and at Seoul National University Hospital. This report analyzes the content of the book.
On the Privacy Protection of Biometric Traits: Palmprint, Face, and Signature
NASA Astrophysics Data System (ADS)
Panigrahy, Saroj Kumar; Jena, Debasish; Korra, Sathya Babu; Jena, Sanjay Kumar
Biometrics are expected to add a new level of security to applications, as a person attempting access must prove who he or she really is by presenting a biometric to the system. Recent developments in the biometrics area have led to smaller, faster and cheaper systems, which in turn has increased the number of possible application areas for biometric identity verification. Biometric data, being derived from human bodies (and especially when used to identify or verify those bodies), are considered personally identifiable information (PII). The collection, use and disclosure of biometric data, whether image or template, invokes rights on the part of an individual and obligations on the part of an organization. As biometric uses and databases grow, so do concerns that the personal data collected will not be used in reasonable and accountable ways. Privacy concerns arise when biometric data are used for secondary purposes, invoking function creep, data matching, aggregation, surveillance and profiling. Biometric data transmitted across networks and stored in various databases by others can also be stolen, copied, or otherwise misused in ways that can materially affect the individual involved. As biometric systems are vulnerable to replay, database and brute-force attacks, such potential attacks must be analysed before the systems are massively deployed in security applications. Along with security, the privacy of users is an important factor, since the constructions of lines in palmprints contain personal characteristics, a person can be recognised from face images, and fake signatures can be practised by carefully watching the signature images available in the database. We propose a cryptographic approach that encrypts the images of palmprints, faces, and signatures with an advanced Hill cipher technique to hide the information in the images; it also protects these images from the above-mentioned attacks. During feature extraction, the encrypted images are first decrypted, then the features are extracted and used for identification or verification.
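The abstract does not spell out the "advanced Hill cipher technique", so the sketch below shows only the classical Hill construction adapted to 8-bit pixels: multiply pixel pairs by a key matrix modulo 256 and decrypt with its modular inverse. The key matrix and the 2-pixel block size are illustrative assumptions.

```python
import numpy as np

K = np.array([[3, 3],
              [2, 5]])                       # det = 9, odd => invertible mod 256

def hill_inverse(K, m=256):
    """Modular inverse of a 2x2 key matrix, via determinant and adjugate."""
    det = int(round(np.linalg.det(K))) % m
    det_inv = pow(det, -1, m)                # modular inverse of the determinant
    adj = np.array([[ K[1, 1], -K[0, 1]],
                    [-K[1, 0],  K[0, 0]]])   # adjugate of a 2x2 matrix
    return (det_inv * adj) % m

def hill_apply(img, M, m=256):
    """Encrypt/decrypt a grayscale image with an even number of pixels."""
    pairs = img.astype(np.int64).reshape(-1, 2).T    # 2 x N pixel pairs
    return ((M @ pairs) % m).T.reshape(img.shape).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in palmprint
enc = hill_apply(img, K)
dec = hill_apply(enc, hill_inverse(K))
assert np.array_equal(dec, img)              # decryption recovers the image
```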
Limitations and requirements of content-based multimedia authentication systems
NASA Astrophysics Data System (ADS)
Wu, Chai W.
2001-08-01
Recently, a number of authentication schemes have been proposed for multimedia data such as images and sound data. They include both label-based systems and semifragile watermarks. The main requirement for such authentication systems is that minor modifications, such as lossy compression, which do not alter the content of the data should preserve its authenticity, whereas modifications which do modify the content should render the data not authentic. These schemes can be classified into two main classes depending on the model of image authentication they are based on. One purpose of this paper is to look at some of the advantages and disadvantages of these image authentication schemes and their relationship with fundamental limitations of the underlying model of image authentication. In particular, we study feature-based algorithms, which generate an authentication tag based on some inherent features of the image, such as the location of edges. The main disadvantage of most proposed feature-based algorithms is that similar images generate similar features, and therefore it is possible for a forger to generate dissimilar images that have the same features. On the other hand, the class of hash-based algorithms utilizes a cryptographic hash function or a digital signature scheme to reduce the data and generate an authentication tag; it inherits the security of digital signatures to thwart forgery attacks. The main disadvantage of hash-based algorithms is that the image needs to be modified in order to be made authenticatable, and the amount of modification is on the order of the noise the image can tolerate before it is rendered inauthentic. The other purpose of this paper is to propose a multimedia authentication scheme which combines some of the best features of both classes of algorithms. The proposed scheme utilizes cryptographic hash functions and digital signature schemes, and the data does not need to be modified in order to be made authenticatable. Several applications, including the authentication of images on CD-ROM and handwritten documents, will be discussed.
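The hash-based class of algorithms described here reduces the media to a digest and signs it. Below is a minimal hash-then-sign sketch using the third-party `cryptography` package (Ed25519 chosen arbitrarily; key handling is simplified for illustration). Note how flipping a single byte invalidates the tag, which is exactly why plain hashing cannot tolerate lossy compression, the limitation the paper discusses.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def make_tag(media_bytes, private_key):
    """Hash the media, then sign the digest: the authentication tag."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)

def verify_tag(media_bytes, tag, public_key):
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(tag, digest)
        return True
    except InvalidSignature:
        return False                      # any bit flip invalidates the tag

key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
tag = make_tag(image, key)
assert verify_tag(image, tag, key.public_key())
assert not verify_tag(image + b"\x00", tag, key.public_key())
```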
Iterative cross section sequence graph for handwritten character segmentation.
Dawoud, Amer
2007-08-01
The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.
Efficient cost-sensitive human-machine collaboration for offline signature verification
NASA Astrophysics Data System (ADS)
Coetzer, Johannes; Swanepoel, Jacques; Sabourin, Robert
2012-01-01
We propose a novel strategy for the optimal combination of human and machine decisions in a cost-sensitive environment. The proposed algorithm should be especially beneficial to financial institutions where off-line signatures, each associated with a specific transaction value, require authentication. When presented with a collection of genuine and fraudulent training signatures, produced by so-called guinea pig writers, the proficiency of a workforce of human employees and a score-generating machine can be estimated and represented in receiver operating characteristic (ROC) space. Using a set of Boolean fusion functions, the majority vote decision of the human workforce is combined with each threshold-specific machine-generated decision. The performance of the candidate ensembles is estimated and represented in ROC space, after which only the optimal ensembles and associated decision trees are retained. When presented with a questioned signature linked to an arbitrary writer, the system first uses the ROC-based cost gradient associated with the transaction value to select the ensemble that minimises the expected cost, and then uses the corresponding decision tree to authenticate the signature in question. We show that, when utilising the entire human workforce, the incorporation of a machine streamlines the authentication process and decreases the expected cost for all operating conditions.
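The ensemble-selection step can be illustrated as picking the ROC operating point that minimizes expected cost for a given transaction value. The cost model below, a linear combination of false-accept and false-reject rates weighted by a prior and per-error costs, is a standard formulation assumed for illustration, not necessarily the paper's exact ROC-based cost gradient.

```python
import numpy as np

def pick_operating_point(fpr, fnr, c_fp, c_fn, p_forgery):
    """Choose the ROC point (candidate ensemble) minimizing expected cost.

    fpr, fnr  : false-accept / false-reject rates, one entry per ensemble.
    c_fp      : cost of accepting a forgery (scales with transaction value).
    c_fn      : cost of rejecting a genuine signature.
    p_forgery : prior probability that a questioned signature is forged.
    """
    cost = (c_fp * p_forgery * np.asarray(fpr, dtype=float)
            + c_fn * (1.0 - p_forgery) * np.asarray(fnr, dtype=float))
    return int(np.argmin(cost)), float(cost.min())

# toy usage: a costly transaction shifts the choice toward low-FPR ensembles
idx, _ = pick_operating_point([0.20, 0.05, 0.01], [0.02, 0.08, 0.25],
                              c_fp=1000.0, c_fn=5.0, p_forgery=0.01)
```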
Authentication of digital video evidence
NASA Astrophysics Data System (ADS)
Beser, Nicholas D.; Duerr, Thomas E.; Staisiunas, Gregory P.
2003-11-01
In response to a requirement from the United States Postal Inspection Service, the Technical Support Working Group tasked The Johns Hopkins University Applied Physics Laboratory (JHU/APL) to develop a technique that will ensure the authenticity, or integrity, of digital video (DV). Verifiable integrity is needed if DV evidence is to withstand a challenge to its admissibility in court on the grounds that it can be easily edited. Specifically, the verification technique must detect additions, deletions, or modifications to DV and satisfy the two-part criteria pertaining to scientific evidence as articulated in Daubert et al. v. Merrell Dow Pharmaceuticals Inc., 43 F3d (9th Circuit, 1995). JHU/APL has developed a prototype digital video authenticator (DVA) that generates digital signatures based on public key cryptography at the frame level of the DV. Signature generation and recording are accomplished at the same time as the DV is recorded by the camcorder. Throughput supports the consumer-grade camcorder data rate of 25 Mbps. The DVA software is implemented on a commercial laptop computer, which is connected to a commercial digital camcorder via the IEEE-1394 serial interface. A security token provides agent identification and the interface to the public key infrastructure (PKI) that is needed for management of the public keys central to DV integrity verification.
Word spotting for handwritten documents using Chamfer Distance and Dynamic Time Warping
NASA Astrophysics Data System (ADS)
Saabni, Raid M.; El-Sana, Jihad A.
2011-01-01
A large number of handwritten historical documents are located in libraries around the world. The desire to access, search, and explore these documents paves the way for a new age of knowledge sharing and promotes collaboration and understanding between human societies. Currently, the indexes for these documents are generated manually, which is very tedious and time-consuming. Results produced by state-of-the-art techniques for converting complete images of handwritten documents into textual representations are not yet sufficient. Therefore, word-spotting methods have been developed to archive and index images of handwritten documents in order to enable efficient searching within documents. In this paper, we present a new matching algorithm to be used in word-spotting tasks for historical Arabic documents. We present a novel algorithm based on the Chamfer Distance to compute the similarity between shapes of word-parts. Matching results are used to cluster images of Arabic word-parts into different classes using the Nearest Neighbor rule. To compute the distance between two word-part images, the algorithm subdivides each image into equal-sized slices (windows). A modified version of the Chamfer Distance, incorporating geometric gradient features and distance transform data, is used as a similarity distance between the different slices. Finally, the Dynamic Time Warping (DTW) algorithm is used to measure the distance between two images of word-parts. By using the DTW we enabled our system to cluster similar word-parts, even though they are transformed non-linearly due to the nature of handwriting. We tested our implementation of the presented methods using various documents in different writing styles, taken from the Juma'a Al Majid Center - Dubai, and obtained encouraging results.
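The final matching stage admits a bare-bones sketch: classic Dynamic Time Warping over two sequences of per-slice feature vectors. In the paper the slice distance is a modified Chamfer Distance with gradient and distance-transform features; here an arbitrary pairwise distance is plugged in instead, so this is a generic illustration rather than the authors' implementation.

```python
import numpy as np

def dtw_distance(seq_a, seq_b, slice_dist):
    """Dynamic Time Warping between two sequences of per-slice feature
    vectors; slice_dist is any pairwise distance (the paper's would be a
    modified Chamfer Distance)."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = slice_dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # skip a slice in seq_a
                                 D[i, j - 1],      # skip a slice in seq_b
                                 D[i - 1, j - 1])  # match both slices
    return D[n, m]

# toy usage with a Euclidean slice distance
a = [np.random.rand(8) for _ in range(20)]   # word-part A: 20 slices
b = [np.random.rand(8) for _ in range(24)]   # word-part B: 24 slices
d = dtw_distance(a, b, lambda u, v: float(np.linalg.norm(u - v)))
```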
Kruse, B J
1994-01-01
The author of the famous midwifery textbook Der schwangeren Frauen und Hebammen Rosengarten has until now been thought to be Eucharius Rösslin the Elder, in whose name the first printed edition of the work appeared in 1513. According to him, he compiled the text from various sources in the years 1508-1512 at the suggestion of the Duchess Catherine of Brunswick-Luneburg. In the SB und UB Hamburg there is a handwritten preliminary draft of Rosengarten (Cod. med. 801, p. 9-130), dated by the scribe in the year 1494 (a date borne out by watermark analysis). It reproduces the text of Rosengarten, without the privilegium, the dedication and the rhyming 'admonition' of the pregnant women and the midwives, as well as the glossary and the illustrative woodcuts, almost identically. The printed version of Rosengarten was also expanded by Eucharius Rösslin the Elder with passages from, among other sources, Ps.-Ortolf's Frauenbüchlein. The author of this paper was also able to trace a previously unknown handwritten preliminary draft of Frauenbüchlein in manuscript 2967 of the Austrian National Library in Vienna. The remark Hic liber pertinet ad Constantinum Roeslin written in the manuscript by a previous owner, and a treatise on syphilis in the hand of Eucharius Rösslin the Younger, indicate that Cod. med. 801 was once in the possession of the Rösslin family. Since Eucharius Rösslin the Elder was born around 1470, and since errors and omissions in Cod. med. 801 indicate that it is a copy of an older text, we are confronted with the question of whether the handwritten version of Rosengarten originates from him or from some other author.
Fingerprint verification on medical image reporting system.
Chen, Yen-Cheng; Chen, Liang-Kuang; Tsai, Ming-Dar; Chiu, Hou-Chang; Chiu, Jainn-Shiun; Chong, Chee-Fah
2008-03-01
The healthcare industry is currently going through extensive changes through the adoption of robust, interoperable healthcare information technology in the form of electronic medical records (EMR). However, a major concern with EMR is the adequate confidentiality of the individual records being managed electronically. Multiple access points over an open network like the Internet increase the possibility of patient data interception. The obligation is on healthcare providers to procure information security solutions that do not hamper patient care while still providing the confidentiality of patient information. Medical images are also part of the EMR and need to be protected from unauthorized users. This study integrates the techniques of fingerprint verification, DICOM objects, digital signatures and digital envelopes in order to ensure that access to the hospital Picture Archiving and Communication System (PACS) or radiology information system (RIS) is only by certified parties.
NASA Technical Reports Server (NTRS)
Sung, Q. C.; Miller, L. D.
1977-01-01
Because of the difficulties of retrospective collection of representative ground control data, three methods were tested for collecting the training sets needed to establish the spectral signatures of the land uses/land covers sought. Computer preprocessing techniques applied to the digital images to improve the final classification results were geometric corrections, spectral band or image ratioing, and statistical cleaning of the representative training sets. A minimal level of statistical verification was made based upon comparisons between the airphoto estimates and the classification results. The verification provided further support for the selection of MSS bands 5 and 7. It also indicated that the maximum likelihood ratioing technique can achieve classification results more consistent with the airphoto estimates than the stepwise discriminant analysis.
Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch
2011-01-01
In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering, etc.). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked in regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
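The signature is derived from geometric moments of pixel blocks. As a reference point, here is a sketch of raw moments m_pq = Σ_x Σ_y x^p y^q I(x, y) up to order two for a single block; the block size and moment order are illustrative choices, and the paper's signature construction from these moments is not reproduced here.

```python
import numpy as np

def block_moments(block, max_order=2):
    """Raw geometric moments m_pq = sum_x sum_y x^p * y^q * I(x, y) for one
    pixel block; concatenating the moments of all blocks would form the kind
    of signature that gets watermarked into regions of non-interest."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)   # row (y) and column (x) grids
    return np.array([(x**p * y**q * block).sum()
                     for p in range(max_order + 1)
                     for q in range(max_order + 1)
                     if p + q <= max_order])

block = np.random.randint(0, 256, (16, 16)).astype(float)  # stand-in block
sig = block_moments(block)   # [m00, m01, m02, m10, m11, m20]
```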
Towards Trustable Digital Evidence with PKIDEV: PKI Based Digital Evidence Verification Model
NASA Astrophysics Data System (ADS)
Uzunay, Yusuf; Incebacak, Davut; Bicakci, Kemal
How to Capture and Preserve Digital Evidence Securely? For the investigation and prosecution of criminal activities that involve computers, digital evidence collected at the crime scene has vital importance. On one side, it is a very challenging task for forensics professionals to collect it without any loss or damage. On the other, there is the second problem of providing integrity and authenticity in order to achieve legal acceptance in a court of law. By conceiving digital evidence simply as one instance of digital data, it is evident that modern cryptography offers elegant solutions for this second problem. However, to our knowledge, there is no previous work proposing a systematic model with a holistic view that addresses all the related security problems in this particular case of digital evidence verification. In this paper, we present PKIDEV (Public Key Infrastructure based Digital Evidence Verification model) as an integrated solution to provide security for the process of capturing and preserving digital evidence. PKIDEV employs, inter alia, cryptographic techniques like digital signatures and secure time-stamping, as well as recent technologies such as GPS and EDGE. In our study, we also identify the problems public-key cryptography brings when it is applied to the verification of digital evidence.
A feature based comparison of pen and swipe based signature characteristics.
Robertson, Joshua; Guest, Richard
2015-10-01
Dynamic Signature Verification (DSV) is a biometric modality that identifies anatomical and behavioral characteristics when an individual signs their name. Conventionally, signature data have been captured using pen/tablet apparatus. However, the use of other devices such as touch-screen tablets has expanded in recent years, affording the possibility of assessing biometric interaction on this new technology. To explore the potential of employing DSV techniques when a user signs or swipes with their finger, we report a study to correlate pen- and finger-generated features. Investigating the stability and correlation between a set of characteristic features recorded in participants' signatures and touch-based swipe gestures, a statistical analysis was conducted to assess consistency between capture scenarios. The results indicate that a range of static and dynamic features, such as the rate of jerk, size, duration and the distance the pen traveled, can lead to interoperability between these two input methods within a potential biometric context. These data suggest, as a general principle, that the same underlying constructional mechanisms are evident in both capture scenarios.
Ancient administrative handwritten documents: X-ray analysis and imaging
Albertin, F.; Astolfo, A.; Stampanoni, M.; Peccenini, Eva; Hwu, Y.; Kaplan, F.; Margaritondo, G.
2015-01-01
Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page 'reading'. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project. PMID:25723946
Ancient administrative handwritten documents: X-ray analysis and imaging.
Albertin, F; Astolfo, A; Stampanoni, M; Peccenini, Eva; Hwu, Y; Kaplan, F; Margaritondo, G
2015-03-01
Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page 'reading'. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project.
Structural analysis of online handwritten mathematical symbols based on support vector machines
NASA Astrophysics Data System (ADS)
Simistira, Foteini; Papavassiliou, Vassilis; Katsouros, Vassilis; Carayannis, George
2013-01-01
Mathematical expression recognition is still a very challenging task for the research community, mainly because of the two-dimensional (2-D) structure of mathematical expressions (MEs). In this paper, we present a novel approach for the structural analysis between two online handwritten mathematical symbols of an ME, based on spatial features of the symbols. We introduce six features to represent the spatial affinity of the symbols and compare two multi-class classification methods that employ support vector machines (SVMs), one based on the "one-against-one" technique and one based on the "one-against-all" technique, in identifying the relation between a pair of symbols (i.e., subscript, numerator, etc.). A dataset containing 1906 spatial relations derived from the Competition on Recognition of Online Handwritten Mathematical Expressions (CROHME) 2012 training dataset is constructed to evaluate the classifiers and compare them with the rule-based classifier of the ILSP-1 system that participated in the contest. The experimental results give an overall mean error rate of 2.61% for the "one-against-one" SVM approach, 6.57% for the "one-against-all" SVM technique and 12.31% for the ILSP-1 classifier.
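The two multi-class strategies compared here map directly onto standard tooling. Below is a sketch with scikit-learn, where `SVC` trains one-against-one classifiers internally and `OneVsRestClassifier` provides one-against-all; the six features and five relation labels are random stand-ins for the spatial-affinity data, not the CROHME-derived dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))        # six spatial-affinity features per symbol pair
y = rng.integers(0, 5, size=300)     # relation label (subscript, numerator, ...)

ovo = SVC(kernel="rbf")                       # one-against-one internally
ova = OneVsRestClassifier(SVC(kernel="rbf"))  # one-against-all wrapper

ovo.fit(X, y)
ova.fit(X, y)
print(ovo.predict(X[:3]), ova.predict(X[:3]))
```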
NASA Astrophysics Data System (ADS)
Rahim, Kartini Abdul; Kahar, Rosmila Abdul; Khalid, Halimi Mohd.; Salleh, Rohayu Mohd; Hashim, Rathiah
2015-05-01
Recognition of Arabic handwriting and its variants such as Farsi (Persian) and Urdu has been receiving considerable attention in recent years. In contrast to Arabic handwriting, Jawi, the second script used for writing Malay, has hardly been studied, with only a few references available. In the recent transformation of Malaysian education, Special Education is one of the priorities in the Malaysia Blueprint. One of the special needs addressed in Malaysian education is dyslexia; a dyslexic student is considered a student with a learning disability. Concluding that a student is truly dyslexic might be incorrect if the student has only been assessed through the Roman alphabet, without considering assessment via Jawi handwriting. A study was conducted on dyslexic students attending a special class for dyslexia in the Malay language to determine whether they are also dyslexic in Jawi handwriting. The focus of the study is to test copying skills in relation to word reading and writing in Malay, with and without dyslexia, through both scripts. A total of 10 dyslexic children and 10 normal children were recruited. Statistical analysis indicated that the dyslexic students have less difficulty in performing Jawi handwriting in Malay, a finding to be pursued in future study.
Slant correction for handwritten English documents
NASA Astrophysics Data System (ADS)
Shridhar, Malayappan; Kimura, Fumitaka; Ding, Yimei; Miller, John W. V.
2004-12-01
Optical character recognition of machine-printed documents is an effective means of extracting textual material. While the level of effectiveness for handwritten documents is much poorer, progress is being made in more constrained applications such as personal checks and postal addresses. In these applications a series of steps is performed for recognition, beginning with removal of skew and slant. Slant, the amount by which characters are tilted from vertical, is a characteristic of the individual writer and varies from writer to writer. The second attribute, skew, arises from the inability of the writer to write on a horizontal line. Several methods for average slant estimation and correction have been proposed and discussed in earlier papers. However, analysis of many handwritten documents reveals that slant is a local property and varies even within a word. The use of an average slant for the entire word often results in overestimation or underestimation of the local slant. This paper describes three methods for local slant estimation, namely the simple iterative method, the high-speed iterative method, and the 8-directional chain code method. The experimental results show that the proposed methods can estimate and correct local slant more effectively than average slant correction.
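A generic way to estimate slant in a local window, in the spirit of (but not identical to) the three methods the paper proposes, is to search over shear angles and keep the one that best sharpens the vertical projection. The scoring function, angle grid, and window handling below are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import affine_transform

def slant_score(window):
    """Sharper vertical strokes concentrate ink in fewer columns, so a higher
    variance of the column projection suggests less residual slant."""
    return window.sum(axis=0).var()

def estimate_local_slant(window, angles_deg=np.arange(-45, 46, 3)):
    """Try a range of shear angles on one word-sized window and keep the one
    whose sheared image maximizes the projection score."""
    best_angle, best_score = 0.0, -np.inf
    for a in angles_deg:
        shear = np.tan(np.deg2rad(a))
        # map output (row, col) to input (row, col + shear*row)
        m = np.array([[1.0, 0.0], [shear, 1.0]])
        sheared = affine_transform(window.astype(float), m, order=1)
        s = slant_score(sheared)
        if s > best_score:
            best_angle, best_score = a, s
    return best_angle

word = (np.random.rand(40, 120) > 0.8).astype(float)   # toy ink window
angle = estimate_local_slant(word)
```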
A Database of Computer Attacks for the Evaluation of Intrusion Detection Systems
1999-06-01
administrator whenever a system binary file (such as the ps, login, or ls program) is modified. Normal users have no legitimate reason to alter these files...development of EMERALD [46], which combines statistical anomaly detection from NIDES with signature verification. Specification-based intrusion detection...the creation of a single host that can act as many hosts. Daemons that provide network services—including telnetd, ftpd, and login—display banners
Resource Public Key Infrastructure Extension
2012-01-01
tests for checking compliance with the RFC 3779 extensions that are used in the RPKI. These tests also were used to identify an error in the OpenSSL...rsync, OpenSSL, Cryptlib, and MySQL/ODBC. We assume that the adversaries can exploit any publicly known vulnerability in this software. • Server...NULL, set FLAG_NOCHAIN in Ctemp, defer verification. T = P Use OpenSSL to verify certificate chain S using trust anchor T, checking signature and
NASA Astrophysics Data System (ADS)
Prijono, Agus; Darmawan Hangkawidjaja, Aan; Ratnadewi; Saleh Ahmar, Ansari
2018-01-01
Verification methods used today, such as fingerprints, signatures, and personal identification numbers (PINs) in banking systems, identity cards, and attendance systems, are easily copied and forged. This leaves such systems insecure and vulnerable to access by unauthorized persons. In this research, a verification system is implemented using images of the blood vessels on the back of the palm; this form of recognition is more difficult to imitate because the vessels lie inside the human body, making it safer to use. The blood vessel pattern on the back of the human hand is unique; even twins have different blood vessel images. Moreover, the blood vessel image does not depend on a person's age, so it can be used in the long term, except in cases of accident or disease. Because the vein pattern is unique, it can be used to recognize a person. In this paper, we use a modified method, the Modified Local Line Binary Pattern (MLLBP), to recognize a person based on the blood vessel image. Extracted blood vessel features are matched using the Hamming distance. Verification was first tested by calculating the percentage of acceptances of the same person; a rejection error occurs when the system fails to match a person against their own data. For 10 persons, comparing 15 probe images against 5 enrolled vein images per person yielded a success rate of 80.67%. Verification was also tested with forgeries, by presenting two images from different persons; verification is correct if the system rejects the forged image. For ten different persons the forgeries were rejected in 94% of cases.
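The matching step reduces to a Hamming distance between binary MLLBP feature maps. A small sketch follows; the 0.25 acceptance threshold is illustrative, not the paper's value, and the MLLBP extraction itself is not reproduced here.

```python
import numpy as np

def hamming_distance(bits_a, bits_b):
    """Fraction of positions where two binary feature maps differ."""
    bits_a = np.asarray(bits_a, dtype=bool).ravel()
    bits_b = np.asarray(bits_b, dtype=bool).ravel()
    return np.count_nonzero(bits_a ^ bits_b) / bits_a.size

def verify(probe_bits, enrolled_bits, threshold=0.25):
    """Accept when the probe is close enough to an enrolled template."""
    return hamming_distance(probe_bits, enrolled_bits) <= threshold

# toy usage with random stand-ins for MLLBP maps
enrolled = np.random.rand(64, 64) > 0.5
probe = enrolled.copy()
probe[:4, :4] ^= True                 # small perturbation, still accepted
print(verify(probe, enrolled))
```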
Towards fraud-proof ID documents using multiple data hiding technologies and biometrics
NASA Astrophysics Data System (ADS)
Picard, Justin; Vielhauer, Claus; Thorwirth, Niels
2004-06-01
Identity documents, such as ID cards, passports, and driver's licenses, contain textual information, a portrait of the legitimate holder, and sometimes other biometric characteristics such as a fingerprint or handwritten signature. As prices for digital imaging technologies fall, making them more widely available, we have seen an exponential increase in the ease with which documents can be forged and in the number of counterfeiters who can forge them effectively. Today, with only limited knowledge of technology and a small amount of money, a counterfeiter can effortlessly replace a photo or modify identity information on a legitimate document to the extent that it is very difficult to differentiate from the original. This paper proposes a virtually fraud-proof ID document based on a combination of three different data hiding technologies: digital watermarking, 2-D bar codes, and Copy Detection Patterns, plus additional biometric protection. As will be shown, this combination of data hiding technologies protects the document against any forgery, in principle without any requirement for other security features. To prevent a genuine document from being used by an illegitimate user, biometric information is also covertly stored in the ID document, to be used for identification at the detector.
Boosting bonsai trees for handwritten/printed text discrimination
NASA Astrophysics Data System (ADS)
Ricquebourg, Yann; Raymond, Christian; Poirriez, Baptiste; Lemaitre, Aurélie; Coüasnon, Bertrand
2013-12-01
Boosting over decision stumps has proved its efficiency in natural language processing, essentially with symbolic features, and its good properties (fast; few, non-critical parameters; insensitivity to over-fitting) could be of great interest in the numeric world of pixel images. In this article we investigate the use of boosting over small decision trees, in image classification processing, for the discrimination of handwritten/printed text. We then conducted experiments comparing it to the usual SVM-based classification, revealing convincing results: very close performance, but with faster predictions and far less black-box behavior. These promising results encourage the use of this classifier in more complex recognition tasks such as multiclass problems.
Ghany, Ahmad; Vassanji, Karim; Kuziemsky, Craig; Keshavjee, Karim
2013-01-01
Electronic prescribing (e-prescribing) is expected to bring many benefits to Canadian healthcare, such as a reduction in errors and adverse drug reactions. As there currently is no functioning e-prescribing system in Canada that is completely electronic, we are unable to evaluate the performance of a live system. An alternative approach is to use simulation modeling for evaluation. We developed two discrete-event simulation models, one of the current handwritten prescribing system and one of a proposed e-prescribing system, to compare the performance of these two systems. We were able to compare the number of processes in each model, workflow efficiency, and the distribution of patients or prescriptions. Although we were able to compare these models to each other, using discrete-event simulation software was challenging. We were limited in the number of variables we could measure. We discovered non-linear processes and feedback loops in both models that could not be adequately represented using discrete-event simulation software. Finally, interactions between entities in both models could not be modeled using this type of software. We have come to the conclusion that a more appropriate approach to modeling both the handwritten and electronic prescribing systems would be to use a complex adaptive systems approach using agent-based modeling or systems-based modeling.
Signature extension for spectral variation in soils, volume 4
NASA Technical Reports Server (NTRS)
Berry, J. K.; Smith, J. A.; Jonranson, K.
1976-01-01
The reduced 1975-1976 field data at Garden City, Kansas are presented. These data are being used to evaluate the SRVC model predictions, to compare the ERIM-SUITS model with both the SRVC results and field data, and finally, to provide a data base for reviewing multitemporal trajectories. In particular, the applicability of the tasselled cap transformation is reviewed. The first detailed verification of this approach utilizing actual field measured data from the LACIE field measurement program, rather than LANDSAT data, is given.
A feasibility study of the destruction of chemical weapons by photocatalytic oxidation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hitchman, M.L.; Spackman, A.R.; Yusta, F.J.
1997-01-01
The destruction of existing arsenals or deposits of chemical weapons is an important obstacle on the way to the successful implementation of the Chemical Weapons Convention, which was opened for signature in 1993. Many approaches have been proposed and none can be seen as a panacea; each has its merits and shortcomings. In this paper we review the different technologies and propose a new one, photocatalytic oxidation, which has the potential to fill an important gap: a cheap, small, mobile facility for chemical warfare agents which are difficult to transport or are deposited in a remote area. We report some relevant experimental results with this technology for the destruction of chemical weapons. After many years of negotiation, a convention banning the production, possession and use of chemical weapons was opened for signature in Paris on January 13, 1993. The convention, once ratified, will provide a framework and a program for the destruction of chemical weapons by the nations party to it. The framework will cover such topics as definitions of terminology, general rules of verification and verification measures, the level of destruction of chemical weapons, activities not prohibited under the convention, and investigations in cases of alleged use of chemical weapons. The program will require that countries with chemical weapons start their destruction not later than one year after they have ratified the convention, and that they complete it within a ten-year period. For this period the countries involved are required to declare their plans for destruction. These plans have to include a time schedule for the destruction process, an inventory of equipment and buildings to be destroyed, proposed measures for verification, safety measures to be observed during destruction, specification of the types of chemical weapons and the type and quantity of chemical fill to be destroyed, and specification of the destruction method. 38 refs.
Background feature descriptor for offline handwritten numeral recognition
NASA Astrophysics Data System (ADS)
Ming, Delie; Wang, Hao; Tian, Tian; Jie, Feiran; Lei, Bo
2011-11-01
This paper puts forward an offline handwritten numeral recognition method based on a background structural descriptor (a sixteen-value numerical background expression). By encoding the background pixels in the image according to a certain rule, 16 different feature values are generated; these reflect the background condition of every digit and hence the structural features of the digits. Through a pattern-language description of images by these features, automatic segmentation of overlapping digits and numeral recognition can be realized. This method is characterized by strong resistance to deformation, high recognition speed and easy realization. Finally, experimental results and conclusions are presented. The results of recognizing datasets from various practical application fields show that this method achieves a good recognition effect.
A Novel Handwritten Letter Recognizer Using Enhanced Evolutionary Neural Network
NASA Astrophysics Data System (ADS)
Mahmoudi, Fariborz; Mirzashaeri, Mohsen; Shahamatnia, Ehsan; Faridnia, Saed
This paper introduces a novel design for handwritten letter recognition employing a hybrid back-propagation neural network with an enhanced evolutionary algorithm. The input to the neural network is produced by a new approach which is invariant to translation, rotation, and scaling of input letters. The evolutionary algorithm is used for the global search of the search space and the back-propagation algorithm is used for the local search. The results have been computed by implementing this approach for recognizing 26 English capital letters in the handwriting of different people. The computational results show that the neural network reaches very satisfying results with relatively scarce input data, and a promising improvement in the convergence of the hybrid evolutionary back-propagation algorithm is exhibited.
Character context: a shape descriptor for Arabic handwriting recognition
NASA Astrophysics Data System (ADS)
Mudhsh, Mohammed; Almodfer, Rolla; Duan, Pengfei; Xiong, Shengwu
2017-11-01
In the handwriting recognition field, designing good descriptors is essential for obtaining rich information from the data. However, the design of a good descriptor is still an open issue due to the unlimited variation in human handwriting. We introduce a "character context descriptor" that efficiently deals with the structural characteristics of Arabic handwritten characters. First, the character image is smoothed and normalized; then the character context descriptor of 32 feature bins is built based on the proposed "distance function." Finally, a multilayer perceptron with regularization is used as a classifier. In experiments on a handwritten Arabic character database, the proposed method achieved state-of-the-art performance, with recognition rates of 98.93% and 99.06% for the 66-class and 24-class settings, respectively.
NASA Astrophysics Data System (ADS)
Kaur, Jaswinder; Jagdev, Gagandeep, Dr.
2018-01-01
Optical character recognition is concerned with the recognition of optically processed characters. The recognition is done offline after the writing or printing has been completed, unlike online recognition where the computer has to recognize the characters instantly as they are drawn. The performance of character recognition depends upon the quality of the scanned documents. Preprocessing steps are used to remove low-frequency background noise and to normalize the intensity of individual scanned documents. Several filters are used for reducing certain image details and enabling an easier or faster evaluation. The primary aim of the research work is to recognize handwritten and machine-written characters and differentiate between them. The language chosen for the research work is Punjabi (Gurmukhi script) and the tool utilized is Matlab.
Long-term verifiability of the electronic healthcare records' authenticity.
Lekkas, Dimitrios; Gritzalis, Dimitris
2007-01-01
To investigate whether the long-term preservation of the authenticity of electronic healthcare records (EHR) is possible. To propose a mechanism that enables the secure validation of an EHR for long periods, far beyond the lifespan of a digital signature and at least as long as the lifetime of a patient. The study is based on the fact that although the attributes of data authenticity, i.e. integrity and origin verifiability, can be preserved by digital signatures, the necessary period for the retention of EHRs is far beyond the lifespan of a simple digital signature. It is identified that the lifespan of signed data is restricted by the validity period of the relevant keys and the digital certificates, by the future unavailability of signature-verification data, and by suppression of trust relationships. In this paper, the notarization paradigm is exploited, and a mechanism for cumulative notarization of signed EHR is proposed. The proposed mechanism implements a successive trust transition towards new entities, modern technologies, and refreshed data, eliminating any dependency of the relying party on ceased entities, obsolete data, or weak old technologies. The mechanism also exhibits strength against various threat scenarios. A future relying party will have to trust only the fresh technology and information provided by the last notary, in order to verify the authenticity of an old signed EHR. A Cumulatively Notarized Signature is strong even in the case of the compromise of a notary in the chain.
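The cumulative notarization idea can be sketched as a chain in which each new record covers the EHR digest plus all earlier notarizations, so a future relying party only needs to trust the freshest notary. The sketch below substitutes a plain SHA-256 "seal" for a real notary signature and timestamping authority; those substitutions, and all the field names, are assumptions for illustration.

```python
import hashlib
import json
import time

def notarize(previous_record, payload_hash):
    """One notarization step: the new record covers the signed-EHR digest
    plus every earlier notarization, transferring trust to fresh data.
    A real notary would sign `material` with its current key; the SHA-256
    digest here merely stands in for that signature."""
    material = json.dumps({
        "payload": payload_hash,
        "previous": previous_record,      # None for the first notary
        "timestamp": time.time(),
    }, sort_keys=True).encode()
    return {"material": material.decode(),
            "notary_seal": hashlib.sha256(material).hexdigest()}

ehr_digest = hashlib.sha256(b"...signed EHR bytes...").hexdigest()
rec1 = notarize(None, ehr_digest)         # initial notarization
rec2 = notarize(rec1, ehr_digest)         # renewal years later, new "keys"
```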
NASA Astrophysics Data System (ADS)
Versteeg, R.; Leger, E.; Dafflon, B.
2016-12-01
Geologic sequestration of CO2 is one of the primary proposed approaches for reducing total atmospheric CO2 concentrations. MVAA (Monitoring, Verification, Accounting and Assessment) of CO2 sequestration is an essential part of the geologic CO2 sequestration cycle. MVAA activities need to meet multiple operational, regulatory and environmental objectives, including ensuring the protection of underground sources of drinking water. Anticipated negative consequences of CO2 leakage into groundwater, besides possible brine contamination and release of gaseous CO2, include a significant increase of dissolved CO2 in shallow groundwater systems, which will decrease groundwater pH and can potentially mobilize naturally occurring trace metals and ions that are commonly adsorbed to or contained in sediments. Autonomous electrical geophysical monitoring in aquifers has the potential to allow rapid and automated detection of CO2 leakage. However, while the feasibility of such monitoring has been demonstrated by a number of field experiments, automated interpretation of complex electrical resistivity data requires the development of quantitative relationships between complex electrical resistivity signatures and the dissolved CO2 in the aquifer resulting from leakage. Under a DOE SBIR funded effort we performed multiple tank-scale experiments in which we investigated complex electrical resistivity signatures associated with dissolved CO2 plumes in saturated sediments. We also investigated the feasibility of distinguishing CO2 leakage signatures from signatures associated with other processes such as salt water movement, temperature variations and other variations in chemical or physical conditions. In addition to these experiments we also numerically modeled the tank experiments. These experiments showed that (a) we can distinguish CO2 leakage signatures from other signatures, (b) CO2 leakage signatures have consistent characteristics, (c) laboratory experiments are in agreement with field results, and (d) we can numerically simulate the main characteristics of CO2 leakage and the associated electrical geophysical signatures.
Matthews, K M; Bowyer, T W; Saey, P R J; Payne, R F
2012-08-01
Radiopharmaceuticals make contributions of inestimable value to medical practice. With growing demand, new technologies are being developed and applied worldwide. Most diagnostic procedures rely on (99m)Tc, and the use of uranium targets in reactors is currently the favored method of production, with 95% of the necessary (99)Mo parent currently being produced by four major global suppliers. Coincidentally, there are growing concerns for nuclear security and proliferation. New disarmament treaties such as the Comprehensive Nuclear-Test-Ban Treaty (CTBT) are coming into effect, and treaty compliance-verification monitoring is gaining momentum. Radioxenon emissions (the isotopes Xe-131m, Xe-133, Xe-133m and Xe-135) from radiopharmaceutical production facilities are of concern in this context because radioxenon is a highly sensitive tracer for detecting nuclear explosions. There exists, therefore, a potential for confusing source attribution, with emissions from radiopharmaceutical-production facilities regularly being detected in treaty compliance-verification networks. The CTBT radioxenon network currently under installation is highly sensitive, with detection limits approaching 0.1 mBq/m³, and, depending on transport conditions and background, is able to detect industrial release signatures from sites thousands of kilometers away. The method currently employed to distinguish between industrial and military radioxenon sources involves plots of the isotope ratios (133m)Xe/(131m)Xe versus (135)Xe/(133)Xe, but source attribution can be ambiguous. Through the WOSMIP Workshop the environmental monitoring community is gaining a better understanding of the complexities of the processes at production facilities, and the production community is recognizing the impact their operations have on monitoring systems and their goal of nuclear non-proliferation. Further collaboration and discussion are needed, together with advances in Xe trapping technology and monitoring systems. Such initiatives will help in addressing the dichotomy which exists between expanding production and improving monitoring sensitivity, with the ultimate aim of enabling unambiguous distinction between different nuclide signatures.
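The screening arithmetic behind the ratio plots is simple to state: form the (133m)Xe/(131m)Xe and (135)Xe/(133)Xe activity ratios and compare the sample against a line that separates industrial from explosion-like signatures in log-log space. The slope and intercept in this sketch are placeholders, not values from the monitoring literature, and real attribution involves far more context.

```python
import math

def xenon_ratios(a131m, a133, a133m, a135):
    """Activity ratios used on the discrimination plots."""
    return a133m / a131m, a135 / a133

def looks_like_explosion(r_meta, r_135, slope=1.0, intercept=1.0):
    """Flag samples well above a (hypothetical) industrial-emission line in
    log-log ratio space; as the abstract notes, attribution can still be
    ambiguous and flagged samples only merit analyst review."""
    return math.log10(r_135) > slope * math.log10(r_meta) + intercept

# toy usage with made-up activities (same units for all four isotopes)
r_meta, r_135 = xenon_ratios(a131m=0.2, a133=5.0, a133m=0.3, a135=2.0)
flag = looks_like_explosion(r_meta, r_135)
```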
Arabic handwritten: pre-processing and segmentation
NASA Astrophysics Data System (ADS)
Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin
2012-06-01
This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words, that many sub-words have one or more associated diacritics not connected to the sub-word's body, and that there can be multiple instances of overlap between sub-words. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlap between words and sub-words. We also investigate two approaches for the pre-processing tasks of estimating sub-word baselines and determining parameters that yield appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part. We also develop a new incremental rotation procedure, performed on sub-words, that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals by conducting extensive experiments on publicly available databases and in-house created databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that can benefit from analysis methods for printed text.
NASA Astrophysics Data System (ADS)
Lee, Jasper C.; Ma, Kevin C.; Liu, Brent J.
2008-03-01
A Data Grid for medical images has been developed at the Image Processing and Informatics Laboratory, USC to provide distribution and fault-tolerant storage of medical imaging studies across Internet2 and the public domain. Although back-up policies and grid certificates guarantee privacy and authenticity at grid access points, there is still no method to guarantee that sensitive DICOM images have not been altered or corrupted during transmission across a public domain. This paper takes steps toward achieving full image transfer security within the Data Grid by utilizing DICOM image authentication and a HIPAA-compliant auditing system. The 3-D lossless digital signature embedding procedure involves a private 64-byte signature that is embedded into each original DICOM image volume; on the receiving end the signature can be extracted and verified following the DICOM transmission. This digital signature method has also been developed at the IPILab. The HIPAA-Compliant Auditing System (H-CAS) is required to monitor embedding and verification events, and allows monitoring of other grid activity as well. The H-CAS system federates the logs of transmission and authentication events at each grid access point and stores them in a HIPAA-compliant database. The auditing toolkit is installed at the local grid access point and utilizes Syslog [1], a client-server standard for log messaging over an IP network, to send messages to the H-CAS centralized database. By integrating digital image signatures and centralized logging capabilities, DICOM image integrity within the Medical Imaging and Informatics Data Grid can be monitored and guaranteed without any loss of image quality.
Detection and recognition of targets by using signal polarization properties
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Peralta-Fabi, Ricardo; Popov, Anatoly V.; Babakov, Mikhail F.
1999-08-01
The quality of radar target recognition can be enhanced by exploiting polarization signatures. A specialized X-band polarimetric radar was used for target recognition in experimental investigations. The following polarization characteristics, connected to the geometrical properties of the object, were investigated: the amplitudes of the polarization matrix elements, an anisotropy coefficient, a depolarization coefficient, an asymmetry coefficient, the energy of the backscattered signal, and an object shape factor. A large quantity of polarimetric radar data was measured and processed to form a database covering different objects and different weather conditions. The histograms of polarization signatures were approximated by a Nakagami distribution, then used for real-time target recognition. The Neyman-Pearson criterion was used for target detection, and the criterion of maximum a posteriori probability was used for the recognition problem. Some results of experimental verification of pattern recognition and detection of objects with different electrophysical and geometrical characteristics in urban clutter are presented in this paper.
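Given Nakagami-distributed polarization signatures, the maximum a posteriori recognition rule can be sketched with SciPy: fit a Nakagami law to each class's training samples, then pick the class maximizing log prior plus log likelihood. The shape and scale values below are synthetic stand-ins, not measured radar statistics.

```python
import numpy as np
from scipy import stats

def fit_class(samples):
    """Fit a Nakagami distribution (shape nu, location fixed at 0, scale)."""
    nu, loc, scale = stats.nakagami.fit(samples, floc=0)
    return nu, loc, scale

def classify(x, class_params, priors):
    """MAP decision: argmax over classes of log prior + log likelihood."""
    scores = [np.log(pri) + stats.nakagami.logpdf(x, nu, loc, scale)
              for (nu, loc, scale), pri in zip(class_params, priors)]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
cls_a = fit_class(stats.nakagami.rvs(1.5, scale=1.0, size=500, random_state=rng))
cls_b = fit_class(stats.nakagami.rvs(4.0, scale=2.0, size=500, random_state=rng))
label = classify(1.2, [cls_a, cls_b], priors=[0.5, 0.5])
```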
NASA Astrophysics Data System (ADS)
Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta
2010-02-01
In this paper we propose a system for the classification problem of handwritten text. At a broad level, the system is composed of a preprocessing module, a supervised learning module and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. A radial basis function (RBF) network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) using RBF transfer functions rather than the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and consistently with RBF transfer functions. With the change in the weight update mechanism and the feature-based preprocessing module, the proposed system achieves good recognition performance.
Structural model constructing for optical handwritten character recognition
NASA Astrophysics Data System (ADS)
Khaustov, P. A.; Spitsyn, V. G.; Maksimova, E. I.
2017-02-01
The article is devoted to the development of algorithms for optical handwritten character recognition based on the construction of structural models. The main advantage of these algorithms is their low requirement regarding the number of reference images. A one-pass approach to thinning the binary character representation is proposed, based on the joint use of the Zhang-Suen and Wu-Tsai algorithms. The effectiveness of the proposed approach is confirmed by the results of the experiments. The article includes a detailed description of the steps of the structural model construction algorithm. The proposed algorithm has been implemented in a character processing application and evaluated on the MNIST handwritten character database. Algorithms suited to a limited number of reference images were used for the comparison.
Retrieving handwriting by combining word spotting and manifold ranking
NASA Astrophysics Data System (ADS)
Peña Saldarriaga, Sebastián; Morin, Emmanuel; Viard-Gaudin, Christian
2012-01-01
Online handwritten data, produced with Tablet PCs or digital pens, consists of a sequence of points (x, y). As the amount of data available in this form increases, algorithms for retrieval of online data are needed. Word spotting is a common approach used for the retrieval of handwriting. However, from an information retrieval (IR) perspective, word spotting is a primitive keyword-based matching and retrieval strategy. We propose a framework for handwriting retrieval in which an arbitrary word spotting method is used, and a manifold ranking algorithm is then applied to the initial retrieval scores. Experimental results on a database of more than 2,000 handwritten newswires show that our method can improve the performance of a state-of-the-art word spotting system by more than 10%.
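The manifold ranking stage admits a compact sketch: build a symmetrically normalized affinity matrix S over the documents and iterate f <- alpha*S*f + (1 - alpha)*y, where y holds the initial word-spotting scores. The affinity construction and the alpha value below are the standard choices for this family of algorithms, assumed here rather than taken from the paper.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99, iters=100):
    """Propagate initial word-spotting scores y over the affinity graph W.

    W : symmetric nonnegative affinity matrix between documents.
    y : initial retrieval scores from an arbitrary word spotter.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt        # symmetric normalization
    f = np.asarray(y, dtype=float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1.0 - alpha) * np.asarray(y, dtype=float)
    return f

# toy usage: four documents, the first one matched by the word spotter
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(manifold_ranking(W, [1.0, 0.0, 0.0, 0.0]))
```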
Gout in Duke Federico of Montefeltro (1422-1482): a new pearl of the Italian Renaissance.
Fornaciari, Antonio; Giuffra, Valentina; Armocida, Emanuele; Caramella, Davide; Rühli, Frank J; Galassi, Francesco Maria
2018-01-01
The article examines the truthfulness of historical accounts claiming that Renaissance Duke Federico of Montefeltro (1422-1482) suffered from gout. By direct paleopathological assessment of the skeletal remains and by the philological investigation of historical and documental sources, primarily a 1461 handwritten letter by the Duke himself to his personal physician, a description of the symptoms and Renaissance therapy is offered and a final diagnosis of gout is formulated. The Duke's handwritten letter offers a rare testimony of ancient clinical self-diagnostics and Renaissance living-experience of gout. Moreover, the article also shows how an alliance between historical, documental and paleopathological methods can greatly increase the precision of retrospective diagnoses, thus helping to shed clearer light onto the antiquity and evolution of diseases.
Development of a technique for inflight jet noise simulation. I, II
NASA Technical Reports Server (NTRS)
Clapper, W. S.; Stringas, E. J.; Mani, R.; Banerian, G.
1976-01-01
Several possible noise simulation techniques were evaluated, including closed circuit wind tunnels, free jets, rocket sleds and high speed trains. The free jet technique was selected for demonstration and verification. The first paper describes the selection and development of the technique and presents results for simulation and in-flight tests of the Learjet, F106, and Bertin Aerotrain. The second presents a theoretical study relating the two sets of noise signatures. It is concluded that the free jet simulation technique provides a satisfactory assessment of in-flight noise.
Strict integrity control of biomedical images
NASA Astrophysics Data System (ADS)
Coatrieux, Gouenou; Maitre, Henri; Sankur, Bulent
2001-08-01
The control of the integrity and authentication of medical images is becoming ever more important within Medical Information Systems (MIS). The intra- and interhospital exchange of images, such as in PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulating, and distributing images have brought security issues to the fore. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.
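The paper's signature-extraction schemes are not reproduced here, but the following toy sketch shows the general shape of one family of integrity watermarks: hash the image with its least-significant-bit plane zeroed, then hide the digest in those bits. Every detail below (hash choice, embedding layout, uint8 grayscale input with at least 256 pixels) is an illustrative assumption:

```python
import hashlib
import numpy as np

def embed_integrity_signature(img):
    """Hash the LSB-zeroed image, then store the 256-bit digest in the
    least significant bits of the first 256 pixels."""
    flat = img.flatten().astype(np.uint8)      # flatten() copies
    carrier = flat & 0xFE                      # zero out every LSB
    digest = hashlib.sha256(carrier.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)

def verify_integrity_signature(img):
    """Recompute the digest over the LSB-zeroed image and compare it
    with the digest read back from the embedded bits."""
    flat = img.flatten().astype(np.uint8)
    carrier = flat & 0xFE
    digest = hashlib.sha256(carrier.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bool(np.all((flat[: bits.size] & 1) == bits))
```

Any change to a pixel's upper bits alters the recomputed hash and fails verification, which is the basic tamper-detection property such schemes trade off against robustness.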
NASA Technical Reports Server (NTRS)
Wheeler, Mark
1996-01-01
This report details the research, development, utility, verification, and transition work on wet microburst forecasting and detection that the Applied Meteorology Unit (AMU) performed in support of ground and launch operations at Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The unforecast wind event of 33.5 m/s (65 knots) at the Shuttle Landing Facility on 16 August 1994 raised the issue of wet microburst detection and forecasting. The AMU researched and analyzed the downburst wind event and determined it was a wet microburst. A program was developed for operational use on the Meteorological Interactive Data Display System (MIDDS) weather system to analyze, compute, and display Theta(epsilon) profiles, the microburst day potential index (MDPI), and the wind index (WINDEX) maximum wind gust value. Key microburst nowcasting signatures in the WSR-88D data were highlighted. Verification of the data sets indicated that the MDPI has good potential for alerting the duty forecaster to the possibility of wet microbursts, and that the WINDEX values computed from the hourly surface data show a useful trend for the maximum gust potential. WINDEX should help fill the temporal gap between the MDPI on the last Cape Canaveral rawinsonde and the nowcasting radar data tools.
Efficient and Scalable Graph Similarity Joins in MapReduce
Chen, Yifan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135
Efficient and scalable graph similarity joins in MapReduce.
Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.
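A counting ("spectral") Bloom filter of the kind invoked in the filtering phase can be sketched as follows; the sizes, hash construction, and pruning rule are illustrative assumptions, not MGSJoin's actual parameters:

```python
from hashlib import blake2b

class SpectralBloomFilter:
    """Counting Bloom filter: each graph keeps a small counter vector
    summarizing its multiset of signatures, instead of emitting one
    key-value pair per signature."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _hashes(self, item):
        for i in range(self.k):
            h = blake2b(f"{i}:{item}".encode(), digest_size=8)
            yield int.from_bytes(h.digest(), "big") % self.m

    def add(self, item):
        for idx in self._hashes(item):
            self.counts[idx] += 1

    def overlap_upper_bound(self, other):
        # Counting filters give an upper bound on the multiset
        # intersection size of the two signature sets.
        return sum(min(a, b) for a, b in zip(self.counts, other.counts))
```

In a filtering-verification join, a candidate pair whose signature-overlap upper bound already falls below the count threshold implied by the edit-distance constraint can be pruned without running the expensive GED verification.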
Use of Social Media to Target Information-Driven Arms Control and Nonproliferation Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreyling, Sean J.; Williams, Laura S.; Gastelum, Zoe N.
There has been considerable discussion within the national security community, including a recent workshop sponsored by the U.S. State Department, about the use of social media for extracting patterns of collective behavior and influencing public perception in areas relevant to arms control and nonproliferation. This paper seeks to explore if, and how, social media can be used to supplement nonproliferation and arms control inspection and monitoring activities on states and sites of greatest proliferation relevance. In this paper, we set the stage for how social media can be applied in this problem space and describe some of the foreseen challenges, including data validation, sources and attributes, verification, and security. Using information analytics and data visualization capabilities available at Pacific Northwest National Laboratory (PNNL), we provide graphical examples of some social media "signatures" of potential relevance for nonproliferation and arms control purposes. We conclude by describing a proposed case study and offering recommendations both for further research and next steps by the policy community.
ERIC Educational Resources Information Center
Sevier, Robert
1988-01-01
Most successful yield strategies use a series of messages specifically designed to meet the informational and emotional needs of students in the final decision-making stages. Techniques to try include: brochures, videotapes, handwritten postscripts, posters, and phone campaigns. (MLW)
Ellison, GTH; Richter, LM; de Wet, T; Harris, HE; Griesel, RD; McIntyre, JA
2007-01-01
This study examined the reliability of hand-written and computerised records of birth data collected during the Birth to Ten study at Baragwanath Hospital in Soweto. The reliability of record-keeping in hand-written obstetric and neonatal files was assessed by comparing duplicate records of six different variables abstracted from six different sections in these files. The reliability of computerised record-keeping was assessed by comparing the original hand-written record of each variable with the records contained in the hospital's computerised database. These data sets displayed similar levels of reliability, which suggests that similar errors occurred when data were transcribed from one section of the files to the next, and from these files to the computerised database. In both sets of records reliability was highest for the categorical variable infant sex, and for those continuous variables (such as maternal age and gravidity) recorded with unambiguous units. Reliability was lower for continuous variables that could be recorded with different levels of precision (such as birth weight), those that were occasionally measured more than once, and those that could be measured using more than one measurement technique (such as gestational age). Reducing the number of times records are transcribed, categorising continuous variables, and standardising the techniques used for measuring and recording variables would improve the reliability of both hand-written and computerised data sets. PMID:9287552
Radar signatures of road vehicles: airborne SAR experiments
NASA Astrophysics Data System (ADS)
Palubinskas, G.; Runge, H.; Reinartz, P.
2005-10-01
The German radar satellite TerraSAR-X is a high-resolution, dual receive antenna SAR satellite, which will be launched in spring 2006. Since it will have the capability to measure the velocity of moving targets, the acquired interferometric data can be useful for traffic monitoring applications on a global scale. DLR has already started the development of an automatic and operational processing system which will detect cars, measure their speed, and assign them to a road. Statistical approaches are used to derive the vehicle detection algorithm, which requires knowledge of the radar signatures of vehicles, especially with respect to the geometry of the radar look direction and the vehicle orientation. Simulation of radar signatures is a very difficult task due to the lack of realistic models of vehicles. In this paper the radar signatures of parked cars are presented. They are estimated experimentally from airborne E-SAR X-band data collected during flight campaigns in 2003-2005. Several test cars of the same type placed at carefully selected orientation angles, together with several over-flights at different heading angles, made it possible to cover the whole range of aspect angles from 0° to 180°. The large synthetic aperture length, or beam width angle of 7°, can be divided into several looks; processing each look separately increases the aspect-angle resolution. Such a radar signature profile of one vehicle type over the whole range of aspect angles at fine resolution can be used further for the verification of simulation studies and for performance prediction for traffic monitoring with TerraSAR-X.
77 FR 57089 - Meeting of the Chronic Fatigue Syndrome Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-17
..., 20201. Mailed testimony must be received no later than Monday, September 24, 2012. Note: PDF files, hand-written notes and photographs will not be accepted. Requests for public comment and written testimony will...
77 FR 31856 - Meeting of the Chronic Fatigue Syndrome Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-30
..., 12 point font. Note: PDF files, hand-written notes and photographs will not be accepted. Requests for public comment and written testimony will not be accepted through the CFSAC mailbox. Also, the CFSAC...
[Patient safety: a comparison between handwritten and computerized voluntary incident reporting].
Capucho, Helaine Carneiro; Arnas, Emilly Rasquini; Cassiani, Silvia Helena De Bortoli
2013-03-01
This study's objective was to compare two types of voluntary incident reporting methods that affect patient safety, handwritten (HR) and computerized (CR), in relation to the number of reports, the type of incident reported, the individual submitting the report, and the quality of the reports. This was a descriptive, retrospective, cross-sectional study. CR were more frequent than HR (61.2% vs. 38.6%) among the 1,089 reports analyzed and were submitted every day of the month, while HR were submitted only on weekdays. The highest number of reports referred to medication, followed by problems related to medical-hospital material, and the professionals who most frequently submitted reports were nurses in both cases. Overall, CR presented higher quality than HR (86.1% vs. 61.7%); 36.8% of HR were illegible, a problem that was eliminated in CR. Therefore, the use of computerized incident reporting in hospitals favors qualified voluntary reports, increasing patient safety.
Historical Analyses of Disordered Handwriting
Schiegg, Markus; Thorpe, Deborah
2016-01-01
Handwritten texts carry significant information, extending beyond the meaning of their words. Modern neurology, for example, benefits from the interpretation of the graphic features of writing and drawing for the diagnosis and monitoring of diseases and disorders. This article examines how handwriting analysis can be used, and has been used historically, as a methodological tool for the assessment of medical conditions, and how this enhances our understanding of historical contexts of writing. We analyze handwritten material, writing tests and letters, from patients in an early 20th-century psychiatric hospital in southern Germany (Irsee/Kaufbeuren). In this institution, early psychiatrists assessed handwriting features, providing novel insights into the earliest practices of psychiatric handwriting analysis, which can be connected to Berkenkotter's research on medical admission records. We finally consider the degree to which historical handwriting bears semiotic potential to explain the psychological state and personality of a writer, and how future research in written communication should approach these sources. PMID:28408774
Quantify spatial relations to discover handwritten graphical symbols
NASA Astrophysics Data System (ADS)
Li, Jinpeng; Mouchère, Harold; Viard-Gaudin, Christian
2012-01-01
To model a handwritten graphical language, spatial relations describe how the strokes are positioned in the 2-dimensional space. Most existing handwriting recognition systems make use of some predefined spatial relations. However, for a complex graphical language, it is hard to manually specify all the spatial relations. Another possibility is to use a clustering technique to discover the spatial relations. In this paper, we discuss how to create a relational graph between strokes (nodes) labeled with graphemes in a graphical language. We then vectorize the spatial relations (edges) for clustering and quantization, as sketched below. As the targeted application, we extract the repetitive sub-graphs (graphical symbols) composed of graphemes and learned spatial relations. On two handwriting databases, a simple mathematical expression database and a complex flowchart database, the unsupervised spatial relations outperform the predefined spatial relations. In addition, we visualize the frequent patterns on two text lines containing Chinese characters.
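One way to make the "vectorize spatial relations" step concrete (a simplified stand-in for the paper's features, with names of our own choosing) is to encode each edge as the normalized offset and scale of the destination stroke relative to the source stroke, then cluster those vectors:

```python
import numpy as np
from sklearn.cluster import KMeans

def relation_vector(src, dst):
    """Encode the spatial relation between two strokes, each an (n, 2)
    array of (x, y) points, via bounding-box offsets normalized by the
    source stroke's diagonal."""
    (sx0, sy0), (sx1, sy1) = src.min(axis=0), src.max(axis=0)
    (dx0, dy0), (dx1, dy1) = dst.min(axis=0), dst.max(axis=0)
    diag = np.hypot(sx1 - sx0, sy1 - sy0) + 1e-9
    return np.array([
        ((dx0 + dx1) - (sx0 + sx1)) / (2 * diag),   # horizontal offset
        ((dy0 + dy1) - (sy0 + sy1)) / (2 * diag),   # vertical offset
        (dx1 - dx0) / diag,                         # relative width
        (dy1 - dy0) / diag,                         # relative height
    ])

# Unsupervised discovery of relation categories (edge labels):
#   vectors = np.stack([relation_vector(a, b) for a, b in stroke_pairs])
#   labels = KMeans(n_clusters=8, n_init=10).fit_predict(vectors)
```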
Enhancement Of Reading Accuracy By Multiple Data Integration
NASA Astrophysics Data System (ADS)
Lee, Kangsuk
1989-07-01
In this paper, a multiple-sensor integration technique with neural network learning algorithms is presented which can enhance the reading accuracy of hand-written numerals. Many document reading applications involve hand-written numerals in a predetermined location on a form, and in many cases critical data is redundantly described. The amount of a personal check is one such case: it is written redundantly in numerals and in alphabetical form. Information from two optical character recognition modules, one specialized for digits and one for words, is combined to yield an enhanced recognition of the amount. The combination can be accomplished by a decision tree with "if-then" rules, but by simply fusing two or more sets of sensor data in a single expanded neural net, the same functionality can be expected at a much reduced system cost. Experimental results of fusing two neural nets to enhance overall recognition performance using a controlled data set are presented.
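The paper fuses the two recognizers inside a single expanded network; as a much simpler illustration of the same redundancy idea, a late-fusion sketch over a shared candidate list might look like this (entirely our own construction, not the paper's method):

```python
import numpy as np

def fuse_amount_estimates(p_digits, p_words):
    """Combine posteriors from a digit recognizer and a word recognizer
    scored over the same candidate amounts, by normalized product: a
    candidate must be plausible to both readers to score highly."""
    fused = np.asarray(p_digits, float) * np.asarray(p_words, float)
    return fused / fused.sum()

# Example: both recognizers score the candidate amounts [100, 700, 900].
p = fuse_amount_estimates([0.5, 0.3, 0.2], [0.4, 0.5, 0.1])
print(p, p.argmax())  # the most mutually consistent reading wins
```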
Online handwritten mathematical expression recognition
NASA Astrophysics Data System (ADS)
Büyükbayrak, Hakan; Yanikoglu, Berrin; Erçil, Aytül
2007-01-01
We describe a system for recognizing online, handwritten mathematical expressions. The system is designed with a user interface for writing scientific articles, supporting the recognition of basic mathematical expressions as well as integrals, summations, matrices, etc. A feed-forward neural network recognizes symbols, which are assumed to be single-stroke, and a recursive algorithm parses the expression by combining the neural network output with the structure of the expression. Preliminary results show that writer-dependent recognition rates are very high (99.8%) while writer-independent symbol recognition rates are lower (75%). The interface associated with the proposed system integrates the built-in recognition capabilities of Microsoft's Tablet PC API for recognizing textual input and supports conversion of hand-drawn figures into PNG format. This enables the user to enter text, mathematics, and figures in a single interface. After recognition, all output is combined into one LaTeX source file and compiled into a PDF file.
Dynamic and Contextual Information in HMM Modeling for Handwritten Word Recognition.
Bianne-Bernard, Anne-Laure; Menasri, Farès; Al-Hajj Mohamad, Rami; Mokbel, Chafic; Kermorvant, Christopher; Likforman-Sulem, Laurence
2011-10-01
This study aims at building an efficient word recognition system resulting from the combination of three handwriting recognizers. The main component of this combined system is an HMM-based recognizer which considers dynamic and contextual information for a better modeling of writing units. For modeling the contextual units, a state-tying process based on decision tree clustering is introduced. Decision trees are built according to a set of expert-based questions on how characters are written. Questions are divided into global questions, yielding larger clusters, and precise questions, yielding smaller ones. Such clustering enables us to reduce the total number of models and Gaussian densities by a factor of 10. We then apply this modeling to the recognition of handwritten words. Experiments are conducted on three publicly available databases based on Latin or Arabic languages: Rimes, IAM, and OpenHart. The results obtained show that contextual information embedded with dynamic modeling significantly improves recognition.
NASA Technical Reports Server (NTRS)
Hill, C. L.
1984-01-01
A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach. An analysis of the classification produced from this technique identified forested areas. Additional information regarding only the forested areas was extracted by employing a pixel-by-pixel signature development program which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
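A minimal sketch of the 3x3 sliding-window signature statistics (our own simplification; the actual signature development program also derived richer per-class statistics):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sliding_window_signatures(band_stack, size=3):
    """Per-pixel local mean for each spectral band over a size x size
    sliding window. band_stack: array of shape (bands, rows, cols);
    the result has the same shape and feeds a per-pixel classifier."""
    return np.stack([uniform_filter(b.astype(float), size=size)
                     for b in band_stack])
```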
Fire Detection Organizing Questions
NASA Technical Reports Server (NTRS)
2004-01-01
Verified models of fire precursor transport in low and partial gravity: a. Development of models for large-scale transport in reduced gravity. b. Validated CFD simulations of transport of fire precursors. c. Evaluation of the effect of scale on transport in reduced-gravity fires. Advanced fire detection system for gaseous and particulate pre-fire and fire signatures: a. Quantification of pre-fire pyrolysis products in microgravity. b. Suite of gas and particulate sensors. c. Reduced-gravity evaluation of candidate detector technologies. d. Reduced-gravity verification of an advanced fire detection system. e. Validated database of fire and pre-fire signatures in low and partial gravity.
Spectral signature verification using statistical analysis and text mining
NASA Astrophysics Data System (ADS)
DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.
2016-05-01
In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database (SigDB) are presented. The numerical data comprising a sample material's spectrum is validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum is qualitatively analyzed using lexical-analysis text mining. This technique analyzes the syntax of the meta-data to uncover local learning patterns and trends within the spectral data that are indicative of the test spectrum's quality. Text mining applications have successfully been implemented for security (text encryption/decryption), biomedical, and marketing applications. The text-mining lexical-analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need for an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is present for comparison. The spectral validation method proposed is described from both a practical application and an analytical perspective.
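The SAM comparison at the heart of the numerical validation can be sketched as follows (a standard formulation of the spectral angle, not code from the paper):

```python
import numpy as np

def spectral_angle(test, reference):
    """Spectral Angle Mapper: angle in radians between a test spectrum
    and a reference spectrum (e.g., the mean of an ideal population
    set); smaller angles indicate closer spectral agreement."""
    t = np.asarray(test, float)
    r = np.asarray(reference, float)
    cos = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```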
DNA Methylation Signature of Childhood Chronic Physical Aggression in T Cells of Both Men and Women
Guillemin, Claire; Provençal, Nadine; Suderman, Matthew; Côté, Sylvana M.; Vitaro, Frank; Hallett, Michael; Tremblay, Richard E.; Szyf, Moshe
2014-01-01
Background: High frequency of physical aggression is the central feature of severe conduct disorder and is associated with a wide range of social, mental and physical health problems. We have previously tested the hypothesis that differential DNA methylation signatures in peripheral T cells are associated with a chronic aggression trajectory in males. Despite the fact that sex differences appear to play a pivotal role in determining the development, magnitude and frequency of aggression, most previous studies focused on males, so little is known about female chronic physical aggression. We therefore tested here whether or not there is a signature of physical aggression in female DNA methylation and, if there is, how it relates to the signature observed in males. Methodology/Principal Findings: Methylation profiles were created using methylated DNA immunoprecipitation (MeDIP) followed by microarray hybridization and statistical and bioinformatic analyses of T cell DNA obtained from adult women who were found to be on a chronic physical aggression trajectory (CPA) between 6 and 12 years of age, compared to women who followed a normal physical aggression trajectory. We confirmed the existence of a well-defined, genome-wide signature of DNA methylation associated with chronic physical aggression in the peripheral T cells of adult females that includes many of the genes similarly associated with physical aggression in the same cell types of adult males. Conclusions: This study of a small number of women presents preliminary evidence for genome-wide variation in promoter DNA methylation that associates with CPA in women and warrants larger studies for further verification. A significant proportion of these associations were previously observed in men with CPA, supporting the hypothesis that the epigenetic signature of early-life aggression in females is composed of a component specific to females and another common to both males and females. PMID:24475181
Plume mapping and isotopic characterisation of anthropogenic methane sources
NASA Astrophysics Data System (ADS)
Zazzeri, G.; Lowry, D.; Fisher, R. E.; France, J. L.; Lanoisellé, M.; Nisbet, E. G.
2015-06-01
Methane stable isotope analysis, coupled with mole fraction measurement, has been used to link isotopic signature to methane emissions from landfill sites, coal mines and gas leaks in the United Kingdom. A mobile Picarro G2301 CRDS (Cavity Ring-Down Spectroscopy) analyser was installed on a vehicle, together with an anemometer and GPS receiver, to measure atmospheric methane mole fractions and their relative location while driving at speeds up to 80 kph. In targeted areas, when the methane plume was intercepted, air samples were collected in Tedlar bags, for δ13C-CH4 isotopic analysis by CF-GC-IRMS (Continuous Flow Gas Chromatography-Isotope Ratio Mass Spectrometry). This method provides high precision isotopic values, determining δ13C-CH4 to ±0.05 per mil. The bulk signature of the methane plume into the atmosphere from the whole source area was obtained by Keeling plot analysis, and a δ13C-CH4 signature, with the relative uncertainty, allocated to each methane source investigated. Both landfill and natural gas emissions in SE England have tightly constrained isotopic signatures. The averaged δ13C-CH4 for landfill sites is -58 ± 3‰. The δ13C-CH4 signature for gas leaks is also fairly constant around -36 ± 2‰, a value characteristic of homogenised North Sea supply. In contrast, signatures for coal mines in N. England and Wales fall in a range of -51.2 ± 0.3‰ to -30.9 ± 1.4‰, but can be tightly constrained by region. The study demonstrates that CRDS-based mobile methane measurement coupled with off-line high precision isotopic analysis of plume samples is an efficient way of characterising methane sources. It shows that isotopic measurements allow type identification, and possible location of previously unknown methane sources. In modelling studies this measurement provides an independent constraint to determine the contributions of different sources to the regional methane budget and in the verification of inventory source distribution.
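The Keeling-plot step can be sketched as an ordinary least-squares fit of δ13C against inverse mole fraction, whose intercept estimates the bulk source signature (a standard formulation, not the authors' code):

```python
import numpy as np

def keeling_intercept(ch4_ppm, delta13c_permil):
    """Keeling plot analysis: regress measured delta13C-CH4 against the
    inverse methane mole fraction; the y-intercept (1/CH4 -> 0)
    estimates the delta13C signature of the emitting source."""
    x = 1.0 / np.asarray(ch4_ppm, float)
    slope, intercept = np.polyfit(x, np.asarray(delta13c_permil, float), 1)
    return intercept  # per-mil source delta13C
```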
Irma 5.2 multi-sensor signature prediction model
NASA Astrophysics Data System (ADS)
Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Chow, Anthony; Yamaoka, Neil; Kim, Charles
2007-04-01
The Irma synthetic signature prediction code is being developed by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN) to facilitate the research and development of multi-sensor systems. There are over 130 users within the Department of Defense, NASA, Department of Transportation, academia, and industry. Irma began as a high-resolution, physics-based Infrared (IR) target and background signature model for tactical weapon applications and has grown to include: a laser (or active) channel (1990), improved scene generator to support correlated frame-to-frame imagery (1992), and passive IR/millimeter wave (MMW) channel for a co-registered active/passive IR/MMW model (1994). Irma version 5.0 was released in 2000 and encompassed several upgrades to both the physical models and software; host support was expanded to Windows, Linux, Solaris, and SGI Irix platforms. In 2005, version 5.1 was released after an extensive verification and validation of an upgraded and reengineered active channel. Since 2005, the reengineering effort has focused on the Irma passive channel. Field measurements for the validation effort include the unpolarized data collection. Irma 5.2 is scheduled for release in the summer of 2007. This paper will report the validation test results of the Irma passive models and discuss the new features in Irma 5.2.
Venkataraman, Aishwarya; Siu, Emily; Sadasivam, Kalaimaran
2016-11-01
Medication errors, including infusion prescription errors, are a major public health concern, especially in paediatric patients. There is some evidence that electronic or web-based calculators can minimise these errors. Our aim was to evaluate the impact of an electronic infusion calculator on the frequency of infusion errors in the Paediatric Critical Care Unit of The Royal London Hospital, London, United Kingdom. We devised an electronic infusion calculator that calculates the appropriate concentration, rate, and dose for the selected medication based on the recorded weight and age of the child and then prints a valid prescription chart. The electronic infusion calculator was implemented in the Paediatric Critical Care Unit from April 2015. A prospective study, covering five months before and five months after implementation, was conducted. Data on the following variables were collected onto a proforma: medication dose, infusion rate, volume, concentration, diluent, legibility, and missing or incorrect patient details. A total of 132 handwritten prescriptions were reviewed prior to implementation and 119 electronic infusion calculator prescriptions were reviewed afterwards. Handwritten prescriptions had a higher error rate (32.6%) than electronic infusion calculator prescriptions (<1%), with p < 0.001. Electronic infusion calculator prescriptions had no errors in dose, volume, or rate calculation, in contrast to handwritten prescriptions, and hence warranted very few pharmacy interventions. Use of the electronic infusion calculator for infusion prescription significantly reduced the total number of infusion prescribing errors in the Paediatric Critical Care Unit and has enabled more efficient use of medical and pharmacy time.
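For context, the kind of arithmetic such a calculator automates looks like the following standard continuous-infusion formula (an illustration of ours, not the hospital's implementation):

```python
def infusion_rate_ml_per_h(dose_mcg_kg_min, weight_kg, conc_mg_ml):
    """Standard continuous-infusion conversion:
    rate [mL/h] = dose [mcg/kg/min] * weight [kg] * 60 [min/h]
                  / (concentration [mg/mL] * 1000 [mcg/mg])."""
    return dose_mcg_kg_min * weight_kg * 60.0 / (conc_mg_ml * 1000.0)

# Example (hypothetical): 5 mcg/kg/min for a 12 kg child from a
# 1.6 mg/mL syringe gives 2.25 mL/h.
print(infusion_rate_ml_per_h(5, 12, 1.6))
```

Automating this chain of unit conversions is precisely where handwritten prescriptions accumulate the dose, volume, and rate errors reported above.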
Recognition of handprinted characters for automated cartography A progress report
NASA Technical Reports Server (NTRS)
Lybanon, M.; Brown, R. M.; Gronmeyer, L. K.
1980-01-01
A research program for developing handwritten character recognition techniques is reported. An overview of the generation of cartographic/hydrographic manuscripts is given. The performance of hardware/software systems is discussed, along with future research problem areas and planned approaches.
NASA Astrophysics Data System (ADS)
Harkness, Ira; Zhu, Ting; Liang, Yinong; Rauch, Eric; Enqvist, Andreas; Jordan, Kelly A.
2018-01-01
Demand for spent nuclear fuel dry casks as an interim storage solution has increased globally, and the IAEA has expressed a need for robust safeguards and verification technologies for ensuring the continuity of knowledge and the integrity of radioactive materials inside spent fuel casks. Existing research has focused on "fingerprinting" casks based on count rate statistics to represent radiation emission signatures. The current research aims to expand this to include neutron energy spectral information as part of the fuel characteristics. First, spent fuel composition data are taken from the Next Generation Safeguards Initiative Spent Fuel Libraries, representative of Westinghouse 17×17 PWR assemblies. The ORIGEN-S code then calculates the spontaneous fission and (α,n) emissions for individual fuel rods, followed by detailed MCNP simulations of neutrons transported through the fuel assemblies. A comprehensive database of neutron energy spectral profiles is to be constructed, with different enrichment, burn-up, and cooling time conditions. The end goal is to utilize the computational spent fuel library, a predictive algorithm, and a pressurized 4He scintillator to verify the spent fuel assemblies inside a cask. This work identifies neutron spectral signatures that correlate with the cooling time of spent fuel. Both the total and relative contributions from spontaneous fission and (α,n) change noticeably with respect to cooling time, due to the relatively short half-life (18 years) of the major neutron source, 244Cm. Identification of this and other neutron spectral signatures allows the characterization of spent nuclear fuels in dry cask storage.
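The cooling-time sensitivity follows directly from 244Cm decay; a one-line sketch (the 18.1-year half-life used below is the standard nuclear-data value, slightly more precise than the abstract's "18 years"):

```python
import numpy as np

def cm244_fraction_remaining(cooling_years, half_life_years=18.1):
    """Fraction of the dominant neutron source (244Cm) remaining after a
    given cooling time: the basic driver of the cooling-time-dependent
    neutron spectral signatures described above."""
    return np.exp(-np.log(2.0) * cooling_years / half_life_years)

print(cm244_fraction_remaining(36.2))  # ~0.25 after two half-lives
```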
Sea Level Data Archaeology for the Global Sea Level Observing System (GLOSS)
NASA Astrophysics Data System (ADS)
Bradshaw, Elizabeth; Matthews, Andy; Rickards, Lesley; Jevrejeva, Svetlana
2015-04-01
The Global Sea Level Observing System (GLOSS) was set up in 1985 to collect long-term tide gauge observations and has carried out a number of data archaeology activities over the past decade, including sending member organisations questionnaires to report on their repositories. The GLOSS Group of Experts (GLOSS GE) is looking to future developments in sea level data archaeology and will provide its user community with guidance on finding, digitising, quality controlling, and distributing historic records. Many records may not be held in organisational archives and may instead be in national libraries, archives, and other collections. GLOSS will promote a citizen science approach to discovering long-term records by providing tools for volunteers to report data. Tide gauge data come in two different formats: charts and handwritten ledgers. Charts are paper analogue records generated by the mechanical instrument driving a pen trace. Several GLOSS members have developed software to automatically digitise these charts, and the various methods were reported in a paper on automated techniques for the digitization of archived mareograms, delivered to the 13th meeting of the GLOSS GE. GLOSS is creating a repository of software for scanning analogue charts. NUNIEAU is the only publicly available software for digitising tide gauge charts, but other organisations have developed their own digitising software that is available internally. There are several other freely available software packages that convert image data to numerical values. GLOSS could coordinate a comparison study of the different digitising software programs by: (1) sending the same charts to each organisation and asking everyone to digitise them using their own procedures; (2) comparing the digitised data; and (3) providing recommendations to the GLOSS community. The other major form of analogue sea level data is handwritten ledgers, which usually record high and low waters but sometimes contain higher-frequency data. The current standard method for digitising these data is to enter the values manually, which has been done by GLOSS countries including France and Spain. As this process is time consuming, the GLOSS GE is exploring other methods for future use. Current projects to improve Handwritten Text Recognition (HTR) tend to work with the written word and so require knowledge of sentence structures and word occurrence probabilities to reconstruct sentences, e.g. tranScriptorium (a project funded under the European Union's Seventh Framework Programme). This approach would not be directly applicable to sea level data; however, tidal data by their very nature contain periodicity and predictability. HTR technology could be adapted to take this into account and improve the automatic digitisation of handwritten tide gauge ledgers. There are many challenges facing the sea level data archaeology community, but it is hoped that improvements in technology can overcome some of the obstacles: faster automated digitisation of tide gauge charts, minimal user input, and automatic transcribing of handwritten ledgers. The GLOSS GE will provide a central location to share software and guidelines for quality controlling data, and the GLOSS data archive centres will be the repository of the newly created datasets.
Spent Fuel Assay with an Ultra-High Rate HPGe Spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fast, James; Fulsom, Bryan; Pitts, Karl
2015-07-01
Traditional verification of spent nuclear fuel (SNF) includes determination of initial enrichment, burnup, and cool-down time (IE, BU, CT). Along with neutron measurements, passive gamma assay provides important information for determining BU and CT. Other gamma-ray-based assay methods, such as passive tomography and active delayed gamma, offer the potential to measure the spatial distribution of fission products and the fissile isotopic concentration of the fuel, respectively. All fuel verification methods involving gamma-ray spectroscopy require that the spectrometers manage very high count rates while extracting the signatures of interest. PNNL has developed new digital filtering and analysis techniques to produce an ultra-high rate gamma-ray spectrometer from a standard coaxial high-purity germanium (HPGe) crystal. This 37% relative efficiency detector has been operated for SNF measurements at input count rates of 500-1300 kcps and throughput in excess of 150 kcps. Optimized filtering algorithms preserve the spectroscopic capability of the system even at these high rates. This paper will present the results of both passive and active SNF measurements performed with this system at PNNL. (authors)
32 CFR 637.13 - Retention of property.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 4 2011-07-01 2011-07-01 false Retention of property. 637.13 Section 637.13 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND.... Reports of investigation, photographs, exhibits, handwritten notes, sketches, and other materials...
32 CFR 637.13 - Retention of property.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 4 2010-07-01 2010-07-01 true Retention of property. 637.13 Section 637.13 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND.... Reports of investigation, photographs, exhibits, handwritten notes, sketches, and other materials...
Hawking radiation in sonic black holes.
Giovanazzi, S
2005-02-18
I present a microscopic description of Hawking radiation in sonic black holes. A one-dimensional Fermi-degenerate liquid squeezed by a smooth barrier forms a transonic flow, a sonic analog of a black hole. The quantum treatment of the noninteracting case establishes a close relationship between sonic Hawking radiation and quantum tunneling through the barrier. Quasiparticle excitations appear at the barrier and are then radiated with a thermal distribution in exact agreement with Hawking's formula. The signature of the radiation can be found in the dynamic structure factor, which can be measured in a scattering experiment. The possibility for experimental verification of this new transport phenomenon for ultracold atoms is discussed.
Experimental verification of low sonic boom configuration
NASA Technical Reports Server (NTRS)
Ferri, A.; Wang, H. H.; Sorensen, H.
1972-01-01
A configuration designed to produce a near-field signature has been tested at M = 2.71 and the results are analyzed, taking into account three-dimensional and second-order effects. The configuration has an equivalent total area distribution corresponding to an airplane flying at 60,000 ft with a weight of 460,000 lb and a length of 300 ft. A maximum overpressure of 0.95 lb/sq ft has been obtained experimentally. The experimental results agree well with the analysis. The investigation indicates that three-dimensional effects are very important when measurements in wind tunnels are taken at small distances from the airplane.
Orthographic and phonological neighborhood effects in handwritten word perception
Goldinger, Stephen D.
2017-01-01
In printed-word perception, the orthographic neighborhood effect (i.e., faster recognition of words with more neighbors) has considerable theoretical importance, because it implicates great interactivity in lexical access. Mulatti, Reynolds, and Besner (Journal of Experimental Psychology: Human Perception and Performance, 32, 799–810, 2006) questioned the validity of orthographic neighborhood effects, suggesting that they reflect a confound with phonological neighborhood density. They reported that, when phonological density is controlled, orthographic neighborhood effects vanish. Conversely, phonological neighborhood effects were still evident even when controlling for orthographic neighborhood density. The present study was a replication and extension of Mulatti et al. (2006), with words presented in four different formats (computer-generated print and cursive, and handwritten print and cursive). The results from Mulatti et al. (2006) were replicated with computer-generated stimuli, but were reversed with natural stimuli. These results suggest that, when ambiguity is introduced at the level of individual letters, top-down influences from lexical neighbors are increased. PMID:26306881
A comparison of 1D and 2D LSTM architectures for the recognition of handwritten Arabic
NASA Astrophysics Data System (ADS)
Yousefi, Mohammad Reza; Soheili, Mohammad Reza; Breuel, Thomas M.; Stricker, Didier
2015-01-01
In this paper, we present an Arabic handwriting recognition method based on recurrent neural networks. We use the Long Short-Term Memory (LSTM) architecture, which has proven successful in various printed and handwritten OCR tasks. Applications of LSTM to handwriting recognition usually employ the two-dimensional architecture to deal with variations along both the vertical and horizontal axes. However, we show that using a simple pre-processing step that normalizes the position and baseline of letters, we can make use of 1D LSTM, which is faster in learning and convergence, and yet achieve superior performance. In a series of experiments on the IFN/ENIT database for Arabic handwriting recognition, we demonstrate that our proposed pipeline can outperform 2D LSTM networks. Furthermore, we provide comparisons with 1D LSTM networks trained with manually crafted features to show that the automatically learned features in a globally trained 1D LSTM network with our normalization step can even outperform such systems.
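A sketch of the kind of position/baseline normalization such a pipeline relies on, shifting each column's center of ink mass to mid-height so a 1D sequence model sees a straightened text line (our simplification; the paper's exact pre-processing may differ):

```python
import numpy as np

def normalize_text_line(img):
    """Per-column vertical recentering of a grayscale text-line image
    (higher values = more ink): estimate each column's center of ink
    mass and shift the column so that center sits at mid-height."""
    img = np.asarray(img, float)
    h, w = img.shape
    out = np.zeros_like(img)
    rows = np.arange(h)
    for x in range(w):
        col = img[:, x]
        mass = col.sum()
        center = (rows * col).sum() / mass if mass > 0 else h / 2.0
        shift = int(round(h / 2.0 - center))
        src = rows - shift
        valid = (src >= 0) & (src < h)
        out[valid, x] = col[src[valid]]
    return out
```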
Schiegg, Markus; Thorpe, Deborah
2017-01-01
Handwritten texts carry significant information, extending beyond the meaning of their words. Modern neurology, for example, benefits from the interpretation of the graphic features of writing and drawing for the diagnosis and monitoring of diseases and disorders. This article examines how handwriting analysis can be used, and has been used historically, as a methodological tool for the assessment of medical conditions, and how this enhances our understanding of historical contexts of writing. We analyze handwritten material, writing tests and letters, from patients in an early 20th-century psychiatric hospital in southern Germany (Irsee/Kaufbeuren). In this institution, early psychiatrists assessed handwriting features, providing novel insights into the earliest practices of psychiatric handwriting analysis, which can be connected to Berkenkotter's research on medical admission records. We finally consider the degree to which historical handwriting bears semiotic potential to explain the psychological state and personality of a writer, and how future research in written communication should approach these sources.
Grasso, Giuseppe; Calcagno, Marzia; Rapisarda, Alessandro; D'Agata, Roberta; Spoto, Giuseppe
2017-06-01
The analytical methods usually applied to determine the composition of inks from ancient manuscripts focus on inorganic components, as in the case of iron gall ink. In this work, we describe for the first time the use of atmospheric pressure/matrix-assisted laser desorption ionization-mass spectrometry (AP/MALDI-MS) as a spatially resolved analytical technique for the study of the organic carbonaceous components of inks used in handwritten parts of ancient books. Large polycyclic aromatic hydrocarbons (L-PAH) were identified in situ in the ink of XVII century handwritten documents. We prove that it is possible to apply MALDI-MS as a suitable microdestructive diagnostic tool for analyzing samples in air at atmospheric pressure, thus simplifying investigations of the organic components of artistic and archaeological objects. The interpretation of the experimental MS results was supported by independent Raman spectroscopic investigations. Graphical abstract: Atmospheric pressure/MALDI mass spectrometry detects in situ polycyclic aromatic hydrocarbons in the carbonaceous ink of XVII century manuscripts.
Detection of variations in aspen forest habitat from LANDSAT digital data: Bear River Range, Utah
NASA Technical Reports Server (NTRS)
Merola, J. A.; Jaynes, R. A. (Principal Investigator)
1982-01-01
The aspen forests of the Bear River Range were analyzed and mapped using data recorded on July 2, 1979 by the LANDSAT III satellite; study efforts yielded sixty-seven light signatures for the study area, of which three groups were identified as aspen and mapped at a scale of 1:24,000. Analysis and verification of the three groups were accomplished by randomly locating twenty-six field study plots within the LANDSAT-defined aspen areas. All study plots are included within the Cache portion of the Wasatch-Cache National Forest. The following selected site characteristics were recorded for each study plot: a list of understory species present; average percent cover density for understory species; aspen canopy cover estimates and stem measurements; and general site topographic characteristics. The study plot data were then analyzed with respect to the corresponding Landsat spectral signatures. Field studies show that all twenty-six study plots are associated with one of the three aspen groups. Further study efforts concentrated on characterizing the differences between the site characteristics of plots falling into each of the three aspen groups.
Three plasma metabolite signatures for diagnosing high altitude pulmonary edema
NASA Astrophysics Data System (ADS)
Guo, Li; Tan, Guangguo; Liu, Ping; Li, Huijie; Tang, Lulu; Huang, Lan; Ren, Qian
2015-10-01
High-altitude pulmonary edema (HAPE) is a potentially fatal condition, occurring at altitudes greater than 3,000 m and affecting rapidly ascending, non-acclimatized healthy individuals. However, the lack of biomarkers for this disease still constitutes a bottleneck in clinical diagnosis. Here, ultra-high performance liquid chromatography coupled with Q-TOF mass spectrometry was applied to study plasma metabolite profiles from 57 HAPE and 57 control subjects. Fourteen differential plasma metabolites responsible for the discrimination between the two groups were identified in the discovery set (35 HAPE subjects and 35 healthy controls). Furthermore, 3 of the 14 metabolites (C8-ceramide, sphingosine and glutamine) were selected as candidate diagnostic biomarkers for HAPE using metabolic pathway impact analysis. The feasibility of using the combination of these three biomarkers for HAPE diagnosis was evaluated, with an area under the receiver operating characteristic curve (AUC) of 0.981 in the discovery set and 0.942 in the validation set (22 HAPE subjects and 22 healthy controls). Taken together, these results suggest that this composite plasma metabolite signature may be used in HAPE diagnosis, especially after further investigation and verification with larger samples.
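Evaluating such a three-biomarker panel by AUC can be sketched as follows; the data below are random placeholders standing in for the metabolite measurements, and the simple logistic combiner is our assumption, not the paper's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder "measurements" of 3 metabolites per subject.
X_train, y_train = rng.normal(size=(70, 3)), rng.integers(0, 2, 70)
X_val, y_val = rng.normal(size=(44, 3)), rng.integers(0, 2, 44)

# Combine the three biomarkers into one diagnostic score, then measure
# discrimination on the held-out validation set via ROC AUC.
clf = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.3f}")
```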
Laboratory Noble Gas Migration Experiments through Rock
NASA Astrophysics Data System (ADS)
Broome, S.; Cashion, A.; Feldman, J.; Sussman, A. J.; Swanson, E.; Wilson, J.
2016-12-01
The Underground Nuclear Explosion Signatures Experiment (UNESE) was created to address science and research-and-development aspects of nuclear explosion verification and nuclear nonproliferation, with a focus on non-prompt signals. A critical component of the UNESE program is a realistic understanding of the post-detonation processes and changes in the environment that produce observable physical and radiochemical signatures. As such, an understanding of noble gas migration properties through various lithologies is essential. Here we present an empirical methodology to measure tortuosity on well-characterized rhyolitic tuffs and lavas. Tortuosity is then compared with microfracture networks characterized by microscopy. To quantify tortuosity, a pressurized (1500 mbar) fixed volume of argon is expanded into a sample under high vacuum (0.200 mbar). A quadrupole mass spectrometer (QMS) is used to measure argon downstream of the sample in real time, allowing the time-series gas arrival curve to be characterized for each sample. To evaluate the method, blank samples have been machined to correspond to tortuosities of 1, 2, and 4 and used in conjunction with a restricted-flow valve to mimic rock sample permeability. Data from the blanks are analyzed with this system to correct for system effects on gas arrival. High vacuum is maintained in the QMS system during sampling by precise metering of the gas through a leak valve with active feedback control, which allows the arrival time and concentration of argon to be established in real time. Along with a comprehensive characterization of the rock and fracture properties, the parameters derived from these experiments will provide invaluable insight into the three-dimensional structure of damage zones, the production of temporally variable signatures, and the methods best suited to detecting underground nuclear explosion signatures. SAND2016-7309 A
Does Mechanism Matter? Student Recall of Electronic versus Handwritten Feedback
ERIC Educational Resources Information Center
Osterbur, Megan E.; Hammer, Elizabeth Yost; Hammer, Elliott
2015-01-01
Student consumption and recall of feedback are necessary preconditions of successful formative assessment. Drawing on Sadler's (1998) definition of formative assessment as that which is intended to accelerate learning and improve performance through the providing of feedback, we examine how the mechanism of transmission may impact student…
Diffuse Interface Methods for Multiclass Segmentation of High-Dimensional Data
2014-03-04
Adaptive Learning and Pruning Using Periodic Packet for Fast Invariance Extraction and Recognition
NASA Astrophysics Data System (ADS)
Chang, Sheng-Jiang; Zhang, Bian-Li; Lin, Lie; Xiong, Tao; Shen, Jin-Yuan
2005-02-01
A new learning scheme using a periodic packet as the neuronal activation function is proposed for invariance extraction and recognition of handwritten digits. Simulation results show that the proposed network can extract the invariant feature effectively and improve both the convergence and the recognition rate.
Abstract Graphemic Representations Support Preparation of Handwritten Responses
ERIC Educational Resources Information Center
Shen, Xingjia Rachel; Damian, Marcus F.; Stadthagen-Gonzalez, Hans
2013-01-01
Some evidence suggests that the written production of single words involves not only the ordered retrieval of individual letters, but that abstract, higher-level linguistic properties of the words also influence responses. We report five experiments using the "implicit priming" task adopted from the spoken domain to investigate response…
Researching Australian Children's Literature
ERIC Educational Resources Information Center
Saxby, Maurice
2004-01-01
When in 1962 the author began to research the history of Australian children's literature, access to the primary sources was limited and difficult. From a catalogue drawer in the Mitchell Library of hand-written cards marked "Children's books" he could call up from the stacks, in alphabetical order, piles of early publications. His notes…
NASA Astrophysics Data System (ADS)
Erickson, Diane K.
Today's students have grown up surrounded by technology. They use cell phones, word processors, and the Internet with ease, talking with peers in their community and around the world through e-mails, chatrooms, instant messaging, online discussions, and weblogs ("blogs"). In the midst of this technological explosion, adolescents face a growing need for strong literacy skills in all subject areas for achievement in school and on mandated state and national high stakes tests. The purpose of this study was to examine the use of blogs as a tool for improving open-response writing in the secondary science classroom in comparison to the use of handwritten dialogue journals. The study used a mixed-method approach, gathering both quantitative and qualitative data from 94 students in four eighth-grade science classes. Two classes participated in online class blogs where they posted ideas about science and responded to the ideas of other classmates. Two classes participated in handwritten dialogue journals, writing ideas about science and exchanging journals to respond to the ideas of classmates. The study explored these research questions: Does the use of blogs, as compared to the use of handwritten dialogue journals, improve the open-response writing scores of eighth grade science students? How do students describe their experience using blogs to study science as compared to students using handwritten dialogue journals? and How do motivation, self-efficacy, and community manifest themselves in students who use blogs as compared to students who use handwritten dialogue journals? The quantitative aspect of the study used data from pre- and post-tests and from a Likert-scale post-survey. The pre- and post-writing on open-response science questions were scored using the Massachusetts Comprehensive Assessment System (MCAS) open-response scoring rubric. The study found no statistically significant difference in the writing scores between the blog group and the dialogue journal groups. The study found significant difference between the scores on the post-survey of the two groups with the blogging group registering a more positive attitude about the experience than the dialogue journal group. The qualitative aspect of the study used group and individual interviews with 26 randomly-chosen students to explore the nature of the students' experiences using blogs and dialogue journals. Overall, the blog group communicated more positive responses to the experience than did students from the dialogue journal group, often indicating that blogging was "fun" and "helpful" and made them look forward to science class. This study addressed research needs in the fields of writing, technology, and content literacy. It is significant because there is little research on the use of blogs in the middle school content classroom, particularly on the use of blogs as a tool for improving open-response writing. It adds information as to the experience of students who use blogs in the science classroom and explored it as a way to explore ideas, build understanding, and connect with others. This is significant to know as school districts look to include more technology instruction and practices in the curriculum. Blogs could give students a critical tool for writing and thinking in the content classroom, helping to prepare students for an increasingly technological and global society.
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. The architecture is thus called the PCA network (PCANet) and can be designed and learned extremely easily and efficiently. For comparison, and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, and Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for handwritten digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with state-of-the-art features, whether prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
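As an illustration of the first PCANet stage described above, the following minimal sketch (Python with NumPy) learns convolution filters as the top principal components of mean-removed image patches; the patch size, filter count, and function name are our own illustrative choices, not the authors'.

    import numpy as np

    def learn_pca_filters(images, k=7, n_filters=8):
        # Pool mean-removed k x k patches from all training images.
        patches = []
        for img in images:
            H, W = img.shape
            for i in range(H - k + 1):
                for j in range(W - k + 1):
                    p = img[i:i+k, j:j+k].ravel()
                    patches.append(p - p.mean())
        X = np.asarray(patches)
        # The leading right singular vectors of the patch matrix are the
        # PCA filters; later stages repeat this on the filter responses.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[:n_filters].reshape(n_filters, k, k)

Binary hashing of the stage outputs and blockwise histograms would then produce the final feature vector, per the abstract.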
NASA Astrophysics Data System (ADS)
Czirjak, Daniel
2017-04-01
Remote sensing platforms have consistently demonstrated the ability to detect, and in some cases identify, specific targets of interest, and photovoltaic solar panels are shown to have a unique spectral signature that is consistent across multiple manufacturers and construction methods. Solar panels are proven to be detectable in hyperspectral imagery using common statistical target detection methods such as the adaptive cosine estimator, and false alarms can be mitigated through the use of a spectral verification process that eliminates pixels lacking the key spectral features of the photovoltaic solar panel reflectance spectrum. The normalized solar panel index is described and is a key component of the false-alarm mitigation process. After spectral verification, these solar panel arrays are confirmed on openly available literal imagery and can be measured using numerous open-source algorithms and tools. The measurements allow for the assessment of overall solar power generation capacity using an equation that accounts for solar insolation, the area of the solar panels, and the efficiency of the panels' conversion of solar energy to power. Using a known location with readily available information, the methods outlined in this paper estimate the power generation capability to within 6% of the rated power.
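The capacity estimate described above reduces to a single product. A minimal sketch, assuming the relation power = insolation x panel area x efficiency; the variable names and default values are illustrative, not taken from the paper.

    def estimated_power_kw(panel_area_m2, insolation_kw_per_m2=1.0, efficiency=0.15):
        # Rated-power estimate for a measured solar array; 1.0 kW/m^2 is the
        # conventional peak-sun insolation, 15% a typical panel efficiency.
        return panel_area_m2 * insolation_kw_per_m2 * efficiency

    print(estimated_power_kw(400.0))  # a 400 m^2 array -> 60.0 kW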
Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset
NASA Astrophysics Data System (ADS)
Hack, Dan E.; Saville, Michael A.
2010-04-01
This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.
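For readers unfamiliar with the motion effects being verified, the commonly cited first-order result is that a target's line-of-sight (radial) velocity displaces it in cross-range by roughly the slant range times the ratio of radial velocity to platform speed; defocus and smearing arise from higher-order terms. A minimal sketch with illustrative numbers, not taken from the Gotcha dataset:

    def cross_range_offset_m(slant_range_m, radial_velocity_ms, platform_speed_ms):
        # First-order azimuth displacement of a moving target in a SAR image.
        return slant_range_m * radial_velocity_ms / platform_speed_ms

    print(cross_range_offset_m(10_000.0, 5.0, 100.0))  # -> 500.0 m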
Information Transfer Problems of the Partially Sighted: Recent Results and Project Summary.
ERIC Educational Resources Information Center
Genensky, S. M.; And Others
The fourth in a series of Rand reports on information transfer problems of the partially sighted reviews earlier reports and describes an experimental secretarial closed circuit TV (CCTV) system which enables the partially sighted to type from a printed or handwritten manuscript. Discussed are experiments using a pseudocolor system to determine…
17 CFR 270.0-2 - General requirements of papers and applications.
Code of Federal Regulations, 2014 CFR
2014-04-01
... necessary to authorize the undersigned to execute and file such instrument has been taken. The undersigned... may be present) by handwritten, typed, printed, or other legible form of notation from the facing page of the document through the last page of that document and any exhibits or attachments thereto...
17 CFR 270.0-2 - General requirements of papers and applications.
Code of Federal Regulations, 2013 CFR
2013-04-01
... necessary to authorize the undersigned to execute and file such instrument has been taken. The undersigned... may be present) by handwritten, typed, printed, or other legible form of notation from the facing page of the document through the last page of that document and any exhibits or attachments thereto...
Older Japanese Adults and Mobile Phones: An Applied Ethnographic Study
ERIC Educational Resources Information Center
Hachiya, Kumiko
2010-01-01
This qualitative research investigates the meaning of "keitai" (mobile phones) for older Japanese adults between the ages of 59 and 79. Participants' emails from keitai, handwritten daily logs, and audio and video recordings from meetings and interviews were collected during my stay of nearly seven months in one of the largest cities in…
Statistical Techniques for Efficient Indexing and Retrieval of Document Images
ERIC Educational Resources Information Center
Bhardwaj, Anurag
2010-01-01
We have developed statistical techniques to improve the performance of document image search systems where the intermediate step of OCR based transcription is not used. Previous research in this area has largely focused on challenges pertaining to generation of small lexicons for processing handwritten documents and enhancement of poor quality…
Digital Management of a Hysteroscopy Surgery Using Parts of the SNOMED Medical Model
Kollias, Anastasios; Paschopoulos, Minas; Evangelou, Angelos; Poulos, Marios
2012-01-01
This work describes a hysteroscopy surgery management application that was designed based on the medical information standard SNOMED. We describe how the application fulfils the needs of this procedure and the way in which existing handwritten medical information is effectively transmitted to the application’s database. PMID:22848338
The Gin Builder: Examining the Skills Needed for the New Industrial Age.
ERIC Educational Resources Information Center
Kosty, Carlita; Lubar, Steven; Rhar, Bill
2000-01-01
Presents a lesson plan in which students explore the impact of industrialization on agriculture, the experience of William Ellison, a free black cotton gin mechanic, and the skills that Ellison needed. Students discuss handwritten documents, diagrams, and census information related to the cotton gin. Includes a bibliography and four handouts. (CMK)
49 CFR 381.310 - How do I apply for an exemption?
Code of Federal Regulations, 2013 CFR
2013-10-01
... exemption? (a) You must send a written request (for example, a typed or handwritten (printed) letter), which... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... application must include a copy of all research reports, technical papers, and other publications and...
49 CFR 381.310 - How do I apply for an exemption?
Code of Federal Regulations, 2012 CFR
2012-10-01
... exemption? (a) You must send a written request (for example, a typed or handwritten (printed) letter), which... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... application must include a copy of all research reports, technical papers, and other publications and...
49 CFR 381.310 - How do I apply for an exemption?
Code of Federal Regulations, 2011 CFR
2011-10-01
... exemption? (a) You must send a written request (for example, a typed or handwritten (printed) letter), which... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... application must include a copy of all research reports, technical papers, and other publications and...
49 CFR 381.310 - How do I apply for an exemption?
Code of Federal Regulations, 2010 CFR
2010-10-01
... exemption? (a) You must send a written request (for example, a typed or handwritten (printed) letter), which... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... application must include a copy of all research reports, technical papers, and other publications and...
Positive Health and Financial Practices: Does Budgeting Make a Difference?
ERIC Educational Resources Information Center
O'Neill, Barbara; Xiao, Jing Jian; Ensle, Karen
2017-01-01
This study explored relationships between the practice of following a hand-written or computer-generated budget and the frequency of performance of positive personal health and financial practices. Data were collected from an online quiz completed by 942 adults, providing a simultaneous assessment of individuals' health and financial practices.…
NASA Astrophysics Data System (ADS)
Discussion times were lively and highly fruitful. The Editors have organised questions and answers for each paper alphabetically by the speaker's surname. Although the discussion was recorded, only those questions and answers for which written versions were submitted have been included here. We are deeply indebted to Bev Lynds for transcribing the hand-written questions and answers.
Technology and the Oops! Effect: Finding a Bias against Word Processing.
ERIC Educational Resources Information Center
Roblyer, M. D.
1997-01-01
Introduced to aid writing, word processing can cause unexpected problems for those who use it. Describes four studies in which raters gave word-processed essays consistently lower scores than handwritten essays. Reasons for the discrepancies were higher expectations for typed essays, ease of spotting text errors in typed text, and more difficulty…
ERIC Educational Resources Information Center
Greifeneder, Rainer; Zelt, Sarah; Seele, Tim; Bottenberg, Konstantin; Alt, Alexander
2012-01-01
Background: Handwriting legibility systematically biases evaluations in that highly legible handwriting results in more positive evaluations than less legible handwriting. Because performance assessments in educational contexts are not only based on computerized or multiple choice tests but often include the evaluation of handwritten work samples,…
2008-06-01
…In Acme, a software architect can choose to associate a handwritten error message to each specification. If the specification fails, for any…
Mud, Blood, and Bullet Holes: Teaching History with War Letters
ERIC Educational Resources Information Center
Carroll, Andrew
2013-01-01
From handwritten letters of the American Revolution to typed emails from Iraq and Afghanistan, correspondence from U.S. troops offers students deep insight into the specific conflicts and experiences of soldiers. Over 100,000 correspondences have been donated to the Legacy Project, a national initiative launched in 1998 to preserve war letters by…
Betty Kirby: Travels and Translations in the Kindergarten
ERIC Educational Resources Information Center
Sherwood, Elizabeth A.; Freshwater, Amy
2009-01-01
This article examines the pervasive influence of progressive education and travel on a public school kindergarten teacher's professional life. In a statement included in her handwritten list of goals for the children in her classroom, she echoed John Dewey, noting that a kindergarten child should "....live life fully and well because this is a…
ERIC Educational Resources Information Center
Winer, Laura R.; Cooperstock, Jeremy
2002-01-01
Describes the development and use of the Intelligent Classroom collaborative project at McGill University that explored technology use to improve teaching and learning. Explains the hardware and software installation that allows for the automated capture of audio, video, slides, and handwritten annotations during a live lecture, with subsequent…
Investigating the Implemented Mathematics Curriculum of New England Navigation Cyphering Books
ERIC Educational Resources Information Center
Hertel, Joshua
2016-01-01
In this article I discuss an investigation of handwritten mathematics manuscripts known as navigation cyphering books. These manuscripts, which were prepared during the seventeenth and eighteenth centuries, are evidence of an educational tradition that was the primary means by which students in North America learned mathematics between 1607 and…
41 CFR 102-192.140 - What are your general responsibilities as a Federal mail center manager?
Code of Federal Regulations, 2013 CFR
2013-07-01
... transmission of data in lieu of mail, reducing the number of handwritten addresses on outgoing mail, and other... processing activities at the facility, including all regularly scheduled, small package, and expedited... security office, the Postal Inspection Service, or other appropriate authority; (k) Track incoming packages...
49 CFR 381.210 - How do I request a waiver?
Code of Federal Regulations, 2010 CFR
2010-10-01
... a written request (for example, a typed or handwritten (printed) letter), which includes all of the...) Principal place of business for the motor carrier or other entity (street address, city, State, and zip code... written statement that: (1) Describes the unique, non-emergency event for which the waiver would be used...
Handwritten-word spotting using biologically inspired features.
van der Zant, Tijn; Schomaker, Lambert; Haak, Koen
2008-11-01
For quick access to new handwritten collections, current handwriting recognition methods are too cumbersome. They cannot deal with the lack of labeled data and would require extensive laboratory training for each individual script, style, language, and collection. We propose a biologically inspired whole-word recognition method which is used to incrementally elicit word labels in a live, web-based annotation system named Monk. Since human labor should be minimized given the massive amount of image data, it becomes important to rely on robust perceptual mechanisms in the machine. Recent computational models of the neurophysiology of vision are applied to isolated word classification. A primate cortex-like mechanism allows the classification of text images that have a low frequency of occurrence. Typically these images are the most difficult to retrieve, often contain named entities, and are regarded as the most important to people. Standard pattern-recognition technology usually cannot deal with these text images when there are not enough labeled instances. The results of this retrieval system are compared to normalized word-image matching and appear to be very promising.
Basic test framework for the evaluation of text line segmentation and text parameter extraction.
Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran
2010-01-01
Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems: inaccurately segmented text lines lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting, and hence remains a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, a basic set of measurement methods is required; currently there is no commonly accepted one, and algorithm evaluation remains custom-oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate, and reference text line evaluation. Although the experiments are mutually independent, the results obtained are strongly cross-linked. Its suitability for different types of letters and languages, as well as its adaptability, are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.
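The abstract does not spell out its metrics, but a tolerance-based hit rate is one simple way such a framework can score detected text lines against a reference. The matching rule below is our own assumption, not the paper's:

    def line_detection_rate(reference_y, detected_y, tol=5):
        # Fraction of reference baselines matched by a detected baseline
        # within tol pixels; each detection may match at most one reference.
        unused = list(detected_y)
        hits = 0
        for r in reference_y:
            match = next((d for d in unused if abs(d - r) <= tol), None)
            if match is not None:
                unused.remove(match)
                hits += 1
        return hits / len(reference_y) if reference_y else 1.0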
An online handwriting recognition system for Turkish
NASA Astrophysics Data System (ADS)
Vural, Esra; Erdogan, Hakan; Oflazer, Kemal; Yanikoglu, Berrin A.
2004-12-01
Despite recent developments in Tablet PC technology, there have been no applications for recognizing handwriting in Turkish. In this paper, we present an online handwritten text recognition system for Turkish, developed using the Tablet PC interface. Although the system is developed for Turkish, the addressed issues are common to online handwriting recognition systems in general. Several dynamic features are extracted from the handwriting data for each recorded point, and Hidden Markov Models (HMMs) are used to train letter and word models. We experimented with various features and HMM model topologies and report on the effects of these experiments. We started with the first and second derivatives of the x and y coordinates and the relative change in pen pressure as initial features, and found that two additional features, the number of neighboring points and the relative height of each point with respect to the baseline, improve the recognition rate. In addition, extracting features within strokes and using a skipping state topology improve system performance as well. The improved system achieves 94% accuracy in recognizing handwritten words from a 1000-word lexicon.
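A minimal sketch of the per-point dynamic features the abstract lists (first and second derivatives of the coordinates and the change in pen pressure), assuming equally spaced samples; the remaining features (neighbor counts, height relative to the baseline) would be appended analogously.

    import numpy as np

    def dynamic_features(x, y, pressure):
        dx, dy = np.gradient(x), np.gradient(y)      # first derivatives
        ddx, ddy = np.gradient(dx), np.gradient(dy)  # second derivatives
        dp = np.gradient(pressure)                   # relative pressure change
        return np.stack([dx, dy, ddx, ddy, dp], axis=1)  # (num_points, 5)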
New approach for segmentation and recognition of handwritten numeral strings
NASA Astrophysics Data System (ADS)
Sadri, Javad; Suen, Ching Y.; Bui, Tien D.
2004-12-01
In this paper, we propose a new system for the segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for the segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom foreground skeletons of the touching digits, finding feature points on these skeletons, and matching them to build all the segmentation paths. For the first time, a genetic representation is used to encode all the segmentation hypotheses. Our genetic algorithm searches and evolves the population of candidate segmentations and finds the one with the highest segmentation and recognition confidence. We also use a new feature extraction method which lowers the variations in the shapes of the digits; an MLP neural network then produces the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system achieves a correct segmentation-recognition rate of 96.07% with a rejection rate of 2.61%, which compares favorably with results in the literature.
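A toy sketch of the genetic search described above: each chromosome is a binary string over hypothesized cut points, and fitness is the recognizer's confidence in the resulting segmentation. The recognizer_confidence callable is a placeholder for the MLP scoring stage; operators and rates are illustrative, not the paper's.

    import random

    def evolve_segmentations(n_cuts, recognizer_confidence,
                             pop_size=30, generations=50, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in range(n_cuts)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=recognizer_confidence, reverse=True)
            survivors = pop[:pop_size // 2]           # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_cuts) if n_cuts > 1 else 0
                child = a[:cut] + b[cut:]             # one-point crossover
                child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=recognizer_confidence)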
A unified approach for development of Urdu Corpus for OCR and demographic purpose
NASA Astrophysics Data System (ADS)
Choudhary, Prakash; Nain, Neeta; Ahmed, Mushtaq
2015-02-01
This paper presents a methodology for the development of an Urdu handwritten text image corpus and the application of corpus linguistics to OCR and information retrieval from handwritten documents. Compared to other language scripts, Urdu script is somewhat complicated for data entry: entering a single character can require a combination of multiple keystrokes. Here, a mixed approach is proposed and demonstrated for building an Urdu corpus for OCR and demographic data collection. The demographic part of the database could be used to train a system to extract data automatically, which would help simplify the manual data processing involved in data collection from input forms such as passports, ration cards, voting cards, AADHAR, driving licences, Indian Railway reservations, census data, etc. This would increase the participation of the Urdu language community in understanding and benefiting from government schemes. To make the database available and applicable across a wide area of corpus linguistics, we propose a methodology for data collection, mark-up, digital transcription, and XML metadata for benchmarking.
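As an illustration of the XML metadata component, here is a minimal sketch of one corpus record; the element and attribute names are invented for illustration and are unlikely to match the paper's actual schema.

    import xml.etree.ElementTree as ET

    form = ET.Element("form", id="UHW-0001")
    ET.SubElement(form, "writer", age="34", gender="F", handedness="R")
    ET.SubElement(form, "image", file="UHW-0001.png", dpi="300")
    ET.SubElement(form, "transcription").text = "..."  # ground-truth Urdu text
    ET.ElementTree(form).write("UHW-0001.xml", encoding="utf-8")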
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Pang, Shan; Yang, Xinyi
2016-01-01
In recent years, deep learning methods such as the convolutional neural network (CNN) and the deep belief network (DBN) have been developed and applied to image classification. However, they suffer from problems such as local minima, slow convergence, and intensive human intervention. In this paper, we propose a rapid learning method, the deep convolutional extreme learning machine (DC-ELM), which combines the representational power of CNNs with the fast training of ELMs. It uses multiple alternating convolution and pooling layers to effectively abstract high-level features from input images. The abstracted features are then fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of features, saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieves better testing accuracy with significantly shorter training time in comparison with other deep learning methods and ELM methods. PMID:27610128
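The ELM classifier at the end of the DC-ELM pipeline trains in closed form: the hidden layer is random and fixed, and only the output weights are fit by least squares. A minimal sketch; the layer size and nonlinearity are illustrative.

    import numpy as np

    def train_elm(X, Y, n_hidden=512, seed=0):
        # X: (n, d) feature matrix; Y: (n, c) one-hot labels.
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                           # hidden activations
        beta = np.linalg.pinv(H) @ Y                     # least-squares output weights
        return W, b, beta

    def predict_elm(X, W, b, beta):
        return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)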
Variational dynamic background model for keyword spotting in handwritten documents
NASA Astrophysics Data System (ADS)
Kumar, Gaurav; Wshah, Safwan; Govindaraju, Venu
2013-12-01
We propose a Bayesian framework for keyword spotting in handwritten documents. This work extends our previous work, in which we proposed the dynamic background model (DBM) for keyword spotting, which takes into account local character-level scores and global word-level scores to learn a logistic regression classifier that separates keywords from non-keywords. In this work, we add a Bayesian layer on top of the DBM, called the variational dynamic background model (VDBM). The logistic regression classifier uses the sigmoid function to separate keywords from non-keywords; because the sigmoid function is neither convex nor concave, exact inference in the VDBM becomes intractable, so an expectation-maximization step is proposed for approximate inference. The advantages of the VDBM over the DBM are twofold. First, being Bayesian, it prevents overfitting of the data. Second, it provides better modeling of the data and improved prediction on unseen data. The VDBM is evaluated on the IAM dataset, and the results show that it outperforms our prior work and other state-of-the-art line-based word spotting systems.
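The DBM that the VDBM builds on is, at its core, a logistic regression over local and global evidence. A minimal sketch of that scoring step, assuming a two-feature design (mean character score plus word score) of our own choosing; the variational layer and EM inference are beyond this sketch.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def keyword_probability(char_scores, word_score, w, bias):
        # char_scores: per-character match scores for one word hypothesis;
        # w: learned weights for [mean character score, word score].
        x = np.array([np.mean(char_scores), word_score])
        return sigmoid(np.dot(w, x) + bias)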
Offline handwritten word recognition using MQDF-HMMs
NASA Astrophysics Data System (ADS)
Ramachandrula, Sitaram; Hambarde, Mangesh; Patial, Ajay; Sahoo, Dushyant; Kochar, Shaivi
2015-01-01
We propose an improved HMM formulation for offline handwriting recognition (HWR). The main contribution of this work is using the modified quadratic discriminant function (MQDF) [1] within an HMM framework. In an MQDF-HMM, the state observation likelihood is calculated by a weighted combination of the MQDF likelihoods of the individual Gaussians of a GMM (Gaussian Mixture Model). The quadratic discriminant function (QDF) of a multivariate Gaussian can be rewritten to avoid the inverse of the covariance matrix by using its eigenvalues and eigenvectors. The MQDF is derived from the QDF by substituting an appropriate constant for the poorly estimated smallest eigenvalues. This approach controls the estimation errors of the non-dominant eigenvectors and eigenvalues of the covariance matrix, for which the training data are insufficient. MQDF has been shown to improve character recognition performance [1]. Using MQDF in an HMM improves the computation, storage, and modeling power of the HMM when training data are limited. We obtained encouraging results on offline handwritten character recognition (NIST database) and word recognition in English using MQDF-HMMs.
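A minimal sketch of the MQDF itself, following the construction the abstract describes: eigendecompose the class covariance, keep the k dominant eigenpairs, and replace the remaining eigenvalues with a constant delta, so no explicit matrix inverse is needed. The constants are illustrative.

    import numpy as np

    def mqdf(x, mean, cov, k=10, delta=0.1):
        # Returns a negative-log-likelihood-style distance (lower = closer).
        d = x.size
        vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
        vals, vecs = vals[::-1], vecs[:, ::-1]   # dominant eigenpairs first
        diff = x - mean
        proj = vecs[:, :k].T @ diff              # projections onto top-k eigenvectors
        maha = np.sum(proj**2 / vals[:k])
        resid = diff @ diff - np.sum(proj**2)    # energy outside the top-k subspace
        return (maha + resid / delta
                + np.sum(np.log(vals[:k])) + (d - k) * np.log(delta))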
Chen, Ming-Huang; Yang, Wu-Lung R; Lin, Kuan-Ting; Liu, Chia-Hung; Liu, Yu-Wen; Huang, Kai-Wen; Chang, Peter Mu-Hsin; Lai, Jin-Mei; Hsu, Chun-Nan; Chao, Kun-Mao; Kao, Cheng-Yan; Huang, Chi-Ying F
2011-01-01
Hepatocellular carcinoma (HCC) is an aggressive tumor with a poor prognosis. Currently, only sorafenib is approved by the FDA for advanced HCC treatment; therefore, there is an urgent need to discover candidate therapeutic drugs for HCC. We hypothesized that if a drug signature could reverse, at least in part, the gene expression signature of HCC, it might have the potential to inhibit HCC-related pathways and thereby treat HCC. To test this hypothesis, we first built an integrative platform, the "Encyclopedia of Hepatocellular Carcinoma genes Online 2", dubbed EHCO2, to systematically collect, organize and compare the publicly available data from HCC studies. The resulting collection includes a total of 4,020 genes. To systematically query the Connectivity Map (CMap), which includes 6,100 drug-mediated expression profiles, we further designed various gene signature selection and enrichment methods, including a randomization technique, majority vote, and clique analysis. Subsequently, 28 out of 50 prioritized drugs, including tanespimycin, trichostatin A, thioguanosine, and several anti-psychotic drugs with anti-tumor activities, were validated via MTT cell viability assays and clonogenic assays in HCC cell lines. To accelerate their future clinical use, possibly through drug-repurposing, we selected two well-established drugs to test in mice, chlorpromazine and trifluoperazine. Both drugs inhibited orthotopic liver tumor growth. In conclusion, we successfully discovered and validated existing drugs for potential HCC therapeutic use with the pipeline of Connectivity Map analysis and lab verification, thereby suggesting the usefulness of this procedure to accelerate drug repurposing for HCC treatment.
Design of Distortion-Invariant Optical ID Tags for Remote Identification and Verification of Objects
NASA Astrophysics Data System (ADS)
Pérez-Cabré, Elisabet; Millán, María Sagrario; Javidi, Bahram
Optical identification (ID) tags [1] have a promising future in a number of applications, such as the surveillance of vehicles in transportation, control of restricted areas for homeland security, and item tracking on conveyor belts or in other industrial environments. More specifically, the passive optical ID tag [1] was introduced as an optical code containing a signature (that is, a characteristic image or other relevant information about the object), which permits its real-time remote detection and identification. Since their introduction in the literature [1], several contributions have been proposed to increase their usefulness and robustness. To increase security and avoid counterfeiting, the signature was introduced into the optical code as an encrypted function [2-5] following the double-phase encryption technique [6]. Moreover, the optical ID tag was designed in such a way that tolerance to variations in scale and rotation was achieved [2-5]: the encrypted information was multiplexed and distributed in the optical code following an appropriate topology. Further studies were carried out to analyze the influence of different sources of noise. In some proposals [5, 7], the designed ID tag consists of two optical codes, where the complex-valued encrypted signature is separately introduced into two real-valued functions according to its magnitude and phase distributions. This solution was introduced to overcome difficulties in the readout of complex values in outdoor environments. Recently, the fully phase encryption technique [8] has been proposed to increase the noise robustness of the authentication system.
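The double-phase (double random phase) encryption cited above multiplies the signature by one random phase mask in the input plane and a second in the Fourier plane; the seeds of the two masks act as keys. A minimal NumPy sketch of the encryption direction:

    import numpy as np

    def drpe_encrypt(img, seed1=1, seed2=2):
        r1 = np.random.default_rng(seed1).random(img.shape)
        r2 = np.random.default_rng(seed2).random(img.shape)
        m1 = np.exp(2j * np.pi * r1)   # input-plane phase mask
        m2 = np.exp(2j * np.pi * r2)   # Fourier-plane phase mask
        return np.fft.ifft2(np.fft.fft2(img * m1) * m2)  # complex ciphertext

Decryption reverses the steps with the conjugate masks, which is why robust readout of complex values matters in the two-code designs mentioned above.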
Global Monitoring of the CTBT: Progress, Capabilities and Plans (Invited)
NASA Astrophysics Data System (ADS)
Zerbo, L.
2013-12-01
The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), established in 1996, is tasked with building up the verification regime of the CTBT. The regime includes a global system for monitoring the earth, the oceans, and the atmosphere for nuclear tests, and an on-site inspection (OSI) capability. More than 80% of the 337 facilities of the International Monitoring System (IMS) have been installed and are sending data to the International Data Centre (IDC) in Vienna, Austria for processing. These IMS data, along with IDC processed and reviewed products, are available to all States that have signed the Treaty. Concurrent with the build-up of the global monitoring networks, near-field geophysical methods are being developed and tested for OSIs. The monitoring system is currently operating in a provisional mode, as the Treaty has not yet entered into force. Progress in installing and operating the IMS and the IDC and in building up an OSI capability will be described. The capabilities of the monitoring networks have progressively improved as stations are added to the IMS and IDC processing techniques are refined. Detection thresholds for seismic, hydroacoustic, infrasound, and radionuclide events have been measured and in general are equal to or lower than the predictions used during the Treaty negotiations. The measurements have led to improved models and tools that allow more accurate predictions of future capabilities and network performance under any configuration. Unplanned tests of the monitoring network occurred when the DPRK announced nuclear tests in 2006, 2009, and 2013. All three tests were well above the detection threshold and were easily detected and located by the seismic monitoring network. In addition, noble gas consistent with the nuclear tests in 2006 and 2013 (according to atmospheric transport models) was detected by stations in the network. On-site inspections of these tests were not conducted, as the Treaty has not entered into force. In order to achieve a credible and trustworthy verification system, increased focus is being put on developing OSI operational capabilities while operating and sustaining the existing monitoring system, increasing data availability and quality, and completing the remaining facilities of the IMS. Furthermore, as mandated by the Treaty, the CTBTO also seeks to continuously improve its technologies and methods through interaction with the scientific community. Workshops and scientific conferences such as the CTBT Science and Technology Conference series provide venues for exchanging ideas, and mechanisms have been developed for sharing IMS data with researchers who are developing and testing new and innovative methods pertinent to the verification regime. While progress is steady on building up the verification regime, there is also progress toward entry into force of the Treaty, which requires the signatures and ratifications of the DPRK, India, and Pakistan, as well as the ratifications of China, Egypt, Iran, Israel, and the United States. Thirty-six other States whose signatures and ratifications are needed for entry into force have already signed and ratified.
The proximate unit in Chinese handwritten character production
Chen, Jenn-Yeu; Cherng, Rong-Ju
2013-01-01
In spoken word production, a proximate unit is the first phonological unit at the sublexical level that is selectable for production (O'Seaghdha et al., 2010). The present study investigated whether the proximate unit in Chinese handwritten character production is the stroke, the radical, or something in between. A written version of the form preparation task was adopted. Chinese participants learned sets of two-character words, later were cued with the first character of each word, and had to write down the second character (the target). Response times were measured from the onset of a cue character to the onset of a written response. In Experiment 1, the target characters within a block shared (homogeneous) or did not share (heterogeneous) the first stroke. In Experiment 2, the first two strokes were shared in the homogeneous blocks. Response times in the homogeneous and heterogeneous blocks were comparable in both experiments (Experiment 1: 687 vs. 684 ms; Experiment 2: 717 vs. 716 ms). In Experiments 3 and 4, the target characters within a block shared or did not share the first radical. Response times in the homogeneous blocks were significantly faster than those in the heterogeneous blocks (Experiment 3: 685 vs. 704 ms; Experiment 4: 594 vs. 650 ms). In Experiments 5 and 6, the shared component was a Gestalt-like form that is more than a stroke, constitutes a portion of the target character, can be a stand-alone character itself, and can be a radical of another character but is not a radical of the target character (e.g., ± in , , , ; called a logographeme). Response times in the homogeneous blocks were significantly faster than those in the heterogeneous blocks (Experiment 5: 576 vs. 625 ms; Experiment 6: 586 vs. 620 ms). These results suggest a model of Chinese handwritten character production in which the stroke is not a functional unit, the radical plays the role of a morpheme, and the logographeme is the proximate unit. PMID:23950752
Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif
2017-01-01
Introduction: Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication error rates vary from 15%-38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) are comparatively minimal. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. Methods: We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months before and four months after the introduction of the E-prescription. The internally developed E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency, and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated the rates of 14 different errors and compared them between the pre- and post-intervention periods. Results: Overall, E-prescriptions included fewer prescription errors than HW prescriptions. Specifically, E-prescriptions reduced missing dose (11.3% to 4.3%, p<0.0001), missing frequency (3.5% to 2.2%, p=0.04), missing strength (32.4% to 10.2%, p<0.0001), and legibility (0.7% to 0.2%, p=0.005) errors. E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medications (1.7% to 3%, p=0.02). Conclusion: A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive. PMID:28874948
Hsu, Chia-Chen; Chou, Chia-Lin; Chen, Tzeng-Ji; Ho, Chin-Chin; Lee, Chung-Yuan; Chou, Yueh-Ching
2015-05-01
Clinical care has become increasingly dependent on computerized physician order entry (CPOE) systems. No study has reported the adverse effect of CPOE on physicians' ability to handwrite prescriptions. This study took advantage of an extensive crash of the CPOE system at a large hospital to assess the completeness, legibility, and accuracy of physicians' handwritten prescriptions. The CPOE system had operated at the outpatient department of an academic medical center in Taiwan since 1993. During an unintentional shutdown that lasted 3.5 hours in 2010, physicians were forced to write prescriptions manually. These handwritten prescriptions, together with clinical medical records, were later audited by clinical pharmacists with respect to 16 fields of the patient's, prescriber's, and drug data. A total of 1418 prescriptions with 3805 drug items were handwritten by 114 physicians for 1369 patients. Not a single prescription had all necessary fields filled in. Among the patient data, the field of age was most frequently omitted (1282 [90.4%] of 1418 prescriptions); among the drug data, the field of dosage form was most frequently omitted (3480 [91.5%] of 3805 items). In contrast, the scale of illegibility was rather small; the highest percentage reached only 1.5% (n = 57), in the field of drug frequency. Inaccuracies of strength, dose, and drug name were observed in 745 (19.6%), 517 (13.6%), and 435 (11.4%) prescribed drug items, respectively. The unintentional shutdown of a long-running CPOE system revealed that physicians fail to handwrite flawless prescriptions in the digital era. Contingency plans for computer disasters at health care facilities might include preparation of stand-alone e-prescribing software so that service delays can be kept to a minimum. However, guidance on prescribing should remain an essential part of medical education. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
Interpretation, compilation and field verification procedures in the CARETS project
Alexander, Robert H.; De Forth, Peter W.; Fitzpatrick, Katherine A.; Lins, Harry F.; McGinty, Herbert K.
1975-01-01
The production of the CARETS map data base involved the development of a series of procedures for interpreting, compiling, and verifying data obtained from remote sensor sources. Level II land use mapping from high-altitude aircraft photography at a scale of 1:100,000 required production of a photomosaic mapping base for each of the forty-eight 50 x 50 km sheets, and the interpretation and coding of land use polygons on drafting film overlays. CARETS researchers also produced a series of 1970-to-1972 land use change overlays, using the 1970 land use maps and 1972 high-altitude aircraft photography. To enhance the value of the land use sheets, researchers compiled a series of overlays showing cultural features, county boundaries and census tracts, surface geology, and drainage basins. In producing Level I land use maps from Landsat imagery, at a scale of 1:250,000, interpreters overlaid drafting film directly on Landsat color composite transparencies and interpreted on the film. They found that such interpretation involves pattern and spectral signature recognition. In studies using Landsat imagery, interpreters identified numerous areas of change but also identified extensive areas of "false change," where Landsat spectral signatures, but not land use, had changed.
Development and Evaluation of a Feedback Support System with Audio and Playback Strokes
ERIC Educational Resources Information Center
Li, Kai; Akahori, Kanji
2008-01-01
This paper describes the development and evaluation of a handwritten correction support system with audio and playback strokes used to teach Japanese writing. The study examined whether audio and playback strokes have a positive effect on students using honorific expressions in Japanese writing. The results showed that error feedback with audio…
The Use of the Overhead Projector in Teaching Composition.
ERIC Educational Resources Information Center
Bissex, Henry
The overhead projector, used as a controllable blackboard or bulletin board in the teaching of writing, extends the range of teaching techniques so that an instructor may (1) prepare, in advance, handwritten sheets of film--test questions, pupils' sentences, quotations, short poems--to be shown in any order or form; (2) use pictures, graphics, or…
41 CFR 102-192.155 - What should our agency-wide mail management policy statement cover?
Code of Federal Regulations, 2010 CFR
2010-07-01
... correct street addresses, and minimizing use of hand-written addresses; (j) Ensuring that a USPS mail... should our agency-wide mail management policy statement cover? You should have a written, agency-wide...), or to return it to the sender if the addressee cannot be identified. On the other hand, agencies may...
STS-42 IPMP experiment stowed in locker MF71O on OV-103's middeck
NASA Technical Reports Server (NTRS)
1992-01-01
STS-42 Investigations into Polymer Membrane Processing (IPMP) experiment stainless steel cylinders are stowed in locker MF71O on the middeck of Discovery, Orbiter Vehicle (OV) 103. A checklist with numerous handwritten notations floats above the open forward locker, and a roll of duct tape is secured on a nearby locker.
Write to read: the brain's universal reading and writing network.
Perfetti, Charles A; Tan, Li-Hai
2013-02-01
Do differences in writing systems translate into differences in the brain's reading network? Or is this network universal, relatively impervious to variation in writing systems? A new study adds intriguing evidence to these questions by showing that reading handwritten words activates a pre-motor area across writing systems. Copyright © 2012 Elsevier Ltd. All rights reserved.
Analog design of a new neural network for optical character recognition.
Morns, I P; Dlay, S S
1999-01-01
An electronic circuit is presented for a new type of neural network, which achieves a classification rate of over 100 kHz. The network is used to classify handwritten numerals, presented as Fourier and wavelet descriptors, and has been shown to train far more quickly than the popular backpropagation network while maintaining classification accuracy.
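Fourier descriptors of the kind used as network input above can be computed from a digit's boundary. A minimal sketch with the usual normalizations for translation, scale, rotation, and starting point; these particular choices are ours, not necessarily the paper's.

    import numpy as np

    def fourier_descriptors(contour_xy, n_coeffs=16):
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]  # boundary as a complex signal
        F = np.fft.fft(z)
        F[0] = 0.0                 # drop DC term: translation invariance
        F = F / np.abs(F[1])       # divide by first harmonic: scale invariance
        return np.abs(F[1:n_coeffs + 1])  # magnitudes: rotation/start-point invariance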
ERIC Educational Resources Information Center
Schiegg, Markus; Thorpe, Deborah
2017-01-01
Handwritten texts carry significant information, extending beyond the meaning of their words. Modern neurology, for example, benefits from the interpretation of the graphic features of writing and drawing for the diagnosis and monitoring of diseases and disorders. This article examines how handwriting analysis can be used, and has been used…
ERIC Educational Resources Information Center
Steif, Paul S.; Fu, Luoting; Kara, Levent Burak
2016-01-01
Problems faced by engineering students involve multiple pathways to solution. Students rarely receive effective formative feedback on handwritten homework. This paper examines the potential for computer-based formative assessment of student solutions to multipath engineering problems. In particular, an intelligent tutor approach is adopted and…
Use of Screen Capture to Produce Media for Organic Chemistry
ERIC Educational Resources Information Center
D'Angelo, John G.
2014-01-01
Although many students learn best in different ways, the widest range of students can be reached when multiple modes of input are employed, especially if the student is simultaneously completing a set of handwritten notes. Computers, meanwhile, have led to countless changes in society, and education has not been exempt from these changes. Students…
Application of the ANNA neural network chip to high-speed character recognition.
Sackinger, E; Boser, B E; Bromley, J; Lecun, Y; Jackel, L D
1992-01-01
A neural network with 136,000 connections for the recognition of handwritten digits has been implemented using a mixed analog/digital neural network chip. The neural network chip is capable of processing 1000 characters/s. The recognition system has essentially the same error rate (5%) as a simulation of the network with 32-b floating-point precision.
Code of Federal Regulations, 2013 CFR
2013-07-01
... communicated effectively using handwritten notes. One major advocacy organization, for example, noted that the... example, blood work for routine lab tests or regular allergy shots. Video Interpreting Services... or combustion engines. One commenter suggested using exhaust level as the determinant. Although there...
Code of Federal Regulations, 2011 CFR
2011-07-01
... communicated effectively using handwritten notes. One major advocacy organization, for example, noted that the... example, blood work for routine lab tests or regular allergy shots. Video Interpreting Services... or combustion engines. One commenter suggested using exhaust level as the determinant. Although there...
Code of Federal Regulations, 2012 CFR
2012-07-01
... communicated effectively using handwritten notes. One major advocacy organization, for example, noted that the... example, blood work for routine lab tests or regular allergy shots. Video Interpreting Services... or combustion engines. One commenter suggested using exhaust level as the determinant. Although there...
Code of Federal Regulations, 2014 CFR
2014-07-01
... communicated effectively using handwritten notes. One major advocacy organization, for example, noted that the... example, blood work for routine lab tests or regular allergy shots. Video Interpreting Services... or combustion engines. One commenter suggested using exhaust level as the determinant. Although there...
49 CFR 381.410 - What may I do if I have an idea or suggestion for a pilot program?
Code of Federal Regulations, 2010 CFR
2010-10-01
... (for example, a typed or handwritten (printed) letter) to the Administrator, Federal Motor Carrier... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... recommendation should include a copy of all research reports, technical papers, publications and other documents...
49 CFR 381.410 - What may I do if I have an idea or suggestion for a pilot program?
Code of Federal Regulations, 2011 CFR
2011-10-01
... (for example, a typed or handwritten (printed) letter) to the Administrator, Federal Motor Carrier... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... recommendation should include a copy of all research reports, technical papers, publications and other documents...
49 CFR 381.410 - What may I do if I have an idea or suggestion for a pilot program?
Code of Federal Regulations, 2012 CFR
2012-10-01
... (for example, a typed or handwritten (printed) letter) to the Administrator, Federal Motor Carrier... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... recommendation should include a copy of all research reports, technical papers, publications and other documents...
49 CFR 381.410 - What may I do if I have an idea or suggestion for a pilot program?
Code of Federal Regulations, 2013 CFR
2013-10-01
... (for example, a typed or handwritten (printed) letter) to the Administrator, Federal Motor Carrier... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... recommendation should include a copy of all research reports, technical papers, publications and other documents...
ERIC Educational Resources Information Center
Douglas, Jacqueline Ann; Douglas, Alexander; McClelland, Robert James; Davies, John
2015-01-01
This article represents a cross-sectional study of undergraduate students across two north-west university business schools in the UK. A purposefully designed questionnaire was collected from 350 students. The student experience was described in the form of hand-written narratives by first and final year students and had been identified by the…
Arabic Optical Character Recognition (OCR) Evaluation in Order to Develop a Post-OCR Module
2011-09-01
handwritten, and many more have some handwriting in the margins. Some images are blurred or faded to the point of illegibility. Others are mostly or...it is to English, because Arabic has more features such as agreement. We say that Arabic is more “morphologically rich” than English. We intend to
The Child Writer: Graphic Literacy and the Scottish Educational System, 1700-1820
ERIC Educational Resources Information Center
Eddy, Matthew Daniel
2016-01-01
The story of Enlightenment literacy is often reconstructed from textbooks and manuals, with the implicit focus being what children were reading. But far less attention has been devoted to how they mastered the scribal techniques that allowed them to manage knowledge on paper. Focusing on Scotland, handwritten manuscripts are used to reveal that…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aalseth, Craig E.; Day, Anthony R.; Haas, Derek A.
On-Site Inspection (OSI) is a key component of the verification regime for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Measurements of radionuclide isotopes created by an underground nuclear explosion are a valuable signature of a Treaty violation. Argon-37 is produced from neutron interaction with calcium in soil, 40Ca(n,α)37Ar. For OSI, the 35-day half-life of 37Ar provides both high specific activity and sufficient time for completion of an inspection before decay limits sensitivity. This paper presents a low-background internal-source gas proportional counter with an 37Ar measurement sensitivity level equivalent to 45.1 mBq/SCM in whole air.
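As a back-of-the-envelope illustration of the timing argument (a sketch, not from the paper; the 150 mBq/SCM starting level is an invented example), simple exponential decay with the quoted 35-day half-life shows how long a source stays above the 45.1 mBq/SCM sensitivity:

```python
import math

HALF_LIFE_DAYS = 35.0  # 37Ar half-life quoted in the abstract

def remaining_fraction(days_elapsed: float) -> float:
    """Fraction of the initial 37Ar activity left after a given delay."""
    return math.exp(-math.log(2) * days_elapsed / HALF_LIFE_DAYS)

# An invented source at 150 mBq/SCM decays to ~46 mBq/SCM after 60 days,
# still just above the quoted 45.1 mBq/SCM sensitivity of the counter.
print(150 * remaining_fraction(60))
```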
Deitte, Lori A; Moser, Patricia P; Geller, Brian S; Sistrom, Chris L
2011-06-01
Attending radiologist signature time (AST) is a variable and modifiable component of overall report turnaround time. Delays in finalized reports have potential to undermine radiologists' value as consultants and adversely affect patient care. This study was performed to evaluate the impact of notebook computer distribution and daily automated e-mail notification on reducing AST. Two simultaneous interventions were initiated in the authors' radiology department in February 2010. These included the distribution of a notebook computer with preloaded software for each attending radiologist to sign radiology reports and daily automated e-mail notifications for unsigned reports. The digital dictation system archive and the radiology information system were queried for all radiology reports produced from January 2009 through August 2010. The time between resident approval and attending radiologist signature before and after the intervention was analyzed. Potential unintended "side effects" of the intervention were also studied. Resident-authored reports were signed, on average, 2.53 hours sooner after the intervention. This represented a highly significant (P = .003) decrease in AST with all else held equal. Postintervention reports were authored by residents at the same rate (about 70%). An unintended "side effect" was that attending radiologists were less likely to make changes to resident-authored reports after the intervention. E-mail notification combined with offsite signing can reduce AST substantially. Notebook computers with preloaded software streamline the process of accessing, editing, and signing reports. The observed decrease in AST reflects a positive change in the timeliness of report signature. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
76 FR 62496 - Motor Carrier Safety Advisory Committee Series of Public Subcommittee Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
... EOBRs used in lieu of handwritten records of duty status (RODS). Time and Dates: The meetings will be held Monday-Thursday, October 24-27, 2011, from 8:30 am to 5 pm, E.T. at the Sheraton Crystal City, 1800 Jefferson Davis Highway, Arlington, VA, 22202, in meeting rooms Crystal V and VI. Matters To Be...
Strategies to Help Legal Studies Students Avoid Plagiarism
ERIC Educational Resources Information Center
Samuels, Linda B.; Bast, Carol M.
2006-01-01
Plagiarism is certainly not new to academics, but it may be on the rise with easy access to the vast quantities of information available on the Internet. Students researching on the Internet do not have to take handwritten or typewritten notes. They can simply print out or copy and save whatever they find. They are even spared the tedium of having…
49 CFR 381.410 - What may I do if I have an idea or suggestion for a pilot program?
Code of Federal Regulations, 2014 CFR
2014-10-01
... (for example, a typed or handwritten (printed) letter) to the Administrator, Federal Motor Carrier... include: (1) Your name, job title, mailing address, and daytime telephone number; (2) The name of the... measures in the pilot project would be designed to achieve a level of safety that is equivalent to, or...
ASM Based Synthesis of Handwritten Arabic Text Pages
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed
2015-01-01
Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient natural ground-truthed data is unavailable. PMID:26295059
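The B-spline smoothing step is easy to illustrate. Below is a minimal sketch using SciPy to fit a smoothing parametric spline through an invented jagged pen trajectory; the coordinates and smoothing factor are placeholders, not values from the paper:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Invented jagged pen trajectory; in the paper the points come from
# composing ASM-based character representations into words.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.2, 1.1, 0.3, 0.9])

tck, u = splprep([x, y], s=0.5)      # fit a smoothing parametric B-spline
u_fine = np.linspace(0, 1, 100)      # dense sampling along the curve
x_s, y_s = splev(u_fine, tck)        # smoothed stroke, ready for rendering
```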
Muharam, Yuswan; Warnatz, Jürgen
2007-08-21
A mechanism generator code to automatically generate mechanisms for the oxidation of large hydrocarbons has been successfully modified and considerably expanded in this work. The modification consisted of (1) improvement of the existing rules, such as cyclic-ether reactions and aldehyde reactions, (2) inclusion of additional rules in the code, such as ketone reactions, hydroperoxy cyclic-ether formations and additional reactions of alkenes, and (3) inclusion of small oxygenates, produced by the code but not yet included in the handwritten C(1)-C(4) sub-mechanism, into that sub-mechanism. In order to evaluate mechanisms generated by the code, simulations of observed results in different experimental environments have been carried out. Experimentally derived and numerically predicted ignition delays of n-heptane-air and n-decane-air mixtures in high-pressure shock tubes agree very well over a wide range of temperatures, pressures and equivalence ratios. Concentration profiles of the main products and intermediates of n-heptane and n-decane oxidation in jet-stirred reactors are generally well reproduced over a wide range of temperatures and equivalence ratios. In addition, the ignition delay times of different normal alkanes were numerically studied.
NASA Astrophysics Data System (ADS)
Nasertdinova, A. D.; Bochkarev, V. V.
2017-11-01
Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method for solving the problem of overfitting is proposed in this article. This method is based on reducing the number of independent parameters of a neural network model using the principal component analysis, and can be implemented using existing libraries of neural computing. The algorithm was tested on the problem of recognition of handwritten symbols from the MNIST database, as well as on the task of predicting time series (series of the average monthly number of sunspots and series of the Lorenz system were used). It is shown that applying the principal component analysis makes it possible to reduce the number of parameters of the neural network model while maintaining good results. The average error rate for the recognition of handwritten figures from the MNIST database was 1.12% (which is comparable to the results obtained using deep learning methods), while the number of parameters of the neural network could be reduced by a factor of up to 130.
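One concrete way to realize such a reduction (a minimal sketch under the assumption that a trained layer is compressed post hoc; the authors' exact construction may differ) is to replace a weight matrix with a truncated principal-component factorization:

```python
import numpy as np

def compress_weights(W: np.ndarray, k: int):
    """Approximate W (n_out x n_in) by a rank-k factorization, replacing
    n_out*n_in parameters with k*(n_out + n_in)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :k] * s[:k]          # n_out x k
    B = Vt[:k, :]                 # k x n_in
    return A, B                   # the layer's forward pass becomes A @ (B @ x)

W = np.random.randn(784, 256)     # e.g. an MNIST input layer
A, B = compress_weights(W, k=32)
print(W.size, A.size + B.size)    # 200704 -> 33280 parameters
```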
ASM Based Synthesis of Handwritten Arabic Text Pages.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed
2015-01-01
Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient natural ground-truthed data is unavailable.
Wang, Cheng; Zhang, Qingfang
2015-01-01
To what extent do phonological codes constrain orthographic output in handwritten production? We investigated how phonological codes constrain the selection of orthographic codes via sublexical and lexical routes in Chinese written production. Participants wrote down picture names in a picture-naming task in Experiment 1 or response words in a symbol-word associative writing task in Experiment 2. A sublexical phonological property of picture names (phonetic regularity: regular vs. irregular) in Experiment 1 and a lexical phonological property of response words (homophone density: dense vs. sparse) in Experiment 2, as well as word frequency of the targets in both experiments, were manipulated. A facilitatory effect of word frequency was found in both experiments, in which words with high frequency were produced faster than those with low frequency. More importantly, we observed an inhibitory phonetic regularity effect, in which low-frequency picture names with regular first characters were slower to write than those with irregular ones, and an inhibitory homophone density effect, in which characters with dense homophone density were produced more slowly than those with sparse homophone density. Results suggested that phonological codes constrained handwritten production via lexical and sublexical routes. PMID:25879662
A Multiple-Label Guided Clustering Algorithm for Historical Document Dating and Localization.
He, Sheng; Samara, Petros; Burgers, Jan; Schomaker, Lambert
2016-11-01
It is of essential importance for historians to know the date and place of origin of the documents they study. It would be a huge advancement for historical scholars if it were possible to automatically estimate the geographical and temporal provenance of a handwritten document by inferring them from the handwriting style of such a document. We propose a multiple-label guided clustering algorithm to discover the correlations between the concrete low-level visual elements in historical documents and abstract labels, such as date and location. First, a novel descriptor, called the histogram of orientations of handwritten strokes, is proposed to extract and describe the visual elements, which is built on a scale-invariant polar-feature space. In addition, the multi-label self-organizing map (MLSOM) is proposed to discover the correlations between the low-level visual elements and their labels in a single framework. Our proposed MLSOM can be used to predict the labels directly. Moreover, the MLSOM can also be considered as a pre-structured clustering method to build a codebook, which contains more discriminative information on date and geography. The experimental results on the medieval paleographic scale data set demonstrate that our method achieves state-of-the-art results.
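As a toy illustration of the histogram-of-orientations idea (the paper's descriptor lives in a scale-invariant polar-feature space; the simplification below merely bins gradient orientations weighted by magnitude, so stroke contours dominate):

```python
import numpy as np

def orientation_histogram(img: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Histogram of gradient orientations over a grayscale document patch."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                    # edge strength
    ang = np.arctan2(gy, gx) % np.pi          # stroke direction, modulo 180 deg
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)         # normalized, comparable descriptor
```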
Recognition of Telugu characters using neural networks.
Sukhaswami, M B; Seetharamulu, P; Pujari, A K
1995-09-01
The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
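A minimal sketch of the Hopfield recall step this approach builds on (standard Hebbian outer-product storage of binary character templates; the MNNAM extension runs several such networks in parallel to overcome the capacity limit):

```python
import numpy as np

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Store +/-1 patterns (one per row) via Hebbian outer products."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)                 # no self-connections
    return W

def recall(W: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iterate sign updates, pulling a noisy character toward a stored one."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s
```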
Cell/tissue processing information system for regenerative medicine.
Iwayama, Daisuke; Yamato, Masayuki; Tsubokura, Tetsuya; Takahashi, Minoru; Okano, Teruo
2016-11-01
When conducting clinical studies of regenerative medicine, compliance with good manufacturing practice (GMP) is mandatory, and thus much time is needed for manufacturing and quality management. It is therefore desirable to introduce the manufacturing execution system (MES), which is being adopted by factories manufacturing pharmaceutical products. Meanwhile, in manufacturing human cell/tissue processing autologous products, it is necessary to protect patients' personal information, prevent patients from being identified and obtain information for cell/tissue identification. We therefore considered it difficult to adopt a conventional MES for regenerative medicine-related clinical trials, and so developed novel software for production/quality management to be used in cell-processing centres (CPCs), conforming to GMP. Since this system satisfies the requirements of regulations in Japan and the USA for electronic records and electronic signatures (ER/ES), the use of ER/ES has been allowed, and the risk of contamination resulting from the use of recording paper has been eliminated, thanks to paperless operations within the CPC. Moreover, to reduce the risk of mix-up and cross-contamination due to contact during production, we developed a touchless input device with built-in radio frequency identification (RFID) reader-writer devices and optical sensors. The use of this system reduced the time to prepare and issue manufacturing instructions by 50% or more, compared to the conventional handwritten system. The system contributes to larger-scale production and to reducing production costs for cell and tissue products in regenerative medicine. Copyright © 2014 John Wiley & Sons, Ltd.
Face recognition in the thermal infrared domain
NASA Astrophysics Data System (ADS)
Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.
2017-10-01
Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. The most common research methods on face recognition are based on visible light. State-of-the-art face recognition systems operating in the visible light spectrum achieve very high levels of recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images retain advantages over visible light images and can be used to improve algorithms of human face recognition in several aspects. Mid-wavelength and far-wavelength infrared, also referred to as thermal infrared, are thus promising alternatives. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.
Identification of host response signatures of infection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branda, Steven S.; Sinha, Anupama; Bent, Zachary
2013-02-01
Biological weapons of mass destruction and emerging infectious diseases represent a serious and growing threat to our national security. Effective response to a bioattack or disease outbreak critically depends upon efficient and reliable distinguishing between infected vs healthy individuals, to enable rational use of scarce, invasive, and/or costly countermeasures (diagnostics, therapies, quarantine). Screening based on direct detection of the causative pathogen can be problematic, because culture- and probe-based assays are confounded by unanticipated pathogens (e.g., deeply diverged, engineered), and readily-accessible specimens (e.g., blood) often contain little or no pathogen, particularly at pre-symptomatic stages of disease. Thus, in addition to the pathogen itself, one would like to detect infection-specific host response signatures in the specimen, preferably ones comprised of nucleic acids (NA), which can be recovered and amplified from tiny specimens (e.g., fingerstick draws). Proof-of-concept studies have not been definitive, however, largely due to use of sub-optimal sample preparation and detection technologies. For purposes of pathogen detection, Sandia has developed novel molecular biology methods that enable selective isolation of NA unique to, or shared between, complex samples, followed by identification and quantitation via Second Generation Sequencing (SGS). The central hypothesis of the current study is that variations on this approach will support efficient identification and verification of NA-based host response signatures of infectious disease. To test this hypothesis, we re-engineered Sandia's sophisticated sample preparation pipelines, and developed new SGS data analysis tools and strategies, in order to pioneer use of SGS for identification of host NA correlating with infection. Proof-of-concept studies were carried out using specimens drawn from pathogen-infected non-human primates (NHP). This work provides a strong foundation for large-scale, highly-efficient efforts to identify and verify infection-specific host NA signatures in human populations.
Modelling urban δ13C variations in the Greater Toronto Area
NASA Astrophysics Data System (ADS)
Pugliese, S.; Vogel, F. R.; Murphy, J. G.; Worthy, D. E. J.; Zhang, J.; Zheng, Q.; Moran, M. D.
2015-12-01
Even in urbanized regions, carbon dioxide (CO2) emissions are derived from a variety of biogenic and anthropogenic sources and are influenced by atmospheric transport across borders. As policies are introduced to reduce the emission of CO2, there is a need for independent verification of emissions reporting. In this work, we aim to use carbon isotope (13CO2 and 12CO2) simulations in combination with atmospheric measurements to distinguish between CO2 sources in the Greater Toronto Area (GTA), Canada. This is being done by developing an urban δ13C framework based on existing CO2 emission data and forward modelling using a chemistry transport model, CHIMERE. The framework is designed to use region specific δ13C signatures of the dominant CO2 sources together with a CO2 inventory at a fine spatial and temporal resolution; the product is compared against highly accurate 13CO2 and 12CO2 ambient data. The strength of this framework is its potential to estimate both locally produced and regionally transported CO2. Locally, anthropogenic CO2 in urban areas is often derived from natural gas combustion (for heating) and gasoline/diesel combustion (for transportation); the isotopic signatures of these processes are significantly different (approximately δ13CVPDB = -40 ‰ and -26 ‰, respectively) and can be used to infer their relative contributions. Furthermore, the contribution of transported CO2 can also be estimated, as nearby regions often rely on other sources of heating (e.g. coal combustion), which has a very different signature (approximately δ13CVPDB = -23 ‰). We present an analysis of the GTA in contrast to Paris, France, where atmospheric observations are also available and 13CO2 has been studied. Utilizing our δ13C framework and differences in sectoral isotopic signatures, we quantify the relative contribution of CO2 sources to the overall measured concentration and assess the ability of this framework as a tool for tracing the evolution of sector-specific emissions.
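To make the mixing arithmetic concrete, here is a hedged two-source mass-balance sketch using the sectoral signatures quoted above; it deliberately ignores the background-atmosphere term that the full framework carries:

```python
# End-member signatures quoted in the abstract (per mil, VPDB).
D13C_GAS, D13C_FUEL = -40.0, -26.0

def natural_gas_share(d13c_mix: float) -> float:
    """Fraction f of natural-gas CO2 solving
    f*D13C_GAS + (1-f)*D13C_FUEL = d13c_mix."""
    return (d13c_mix - D13C_FUEL) / (D13C_GAS - D13C_FUEL)

print(natural_gas_share(-33.0))   # 0.5: equal gas and traffic contributions
```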
Use of behavioral biometrics in intrusion detection and online gaming
NASA Astrophysics Data System (ADS)
Yampolskiy, Roman V.; Govindaraju, Venu
2006-04-01
Behavior based intrusion detection is a frequently used approach for ensuring network security. We extend the behavior based intrusion detection approach to a new domain of game networks. Specifically, our research shows that a unique behavioral biometric can be generated based on the strategy used by an individual to play a game. We wrote software capable of automatically extracting behavioral profiles for each player in a game of Poker. Once a behavioral signature is generated for a player, it is continuously compared against the player's current actions. Any significant deviations in behavior are reported to the game server administrator as potential security breaches. Our algorithm addresses a well-known problem of user verification and can be re-applied to fields beyond game networks, such as operating systems and non-game network security.
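A toy version of the deviation check (illustrative names and thresholds only; the paper does not publish this implementation) compares a stored profile of action frequencies against the current session:

```python
import numpy as np

def deviation_score(profile_mean, profile_std, current) -> float:
    """Largest per-action z-score of the current session against the profile."""
    z = np.abs((np.asarray(current) - np.asarray(profile_mean))
               / (np.asarray(profile_std) + 1e-9))
    return float(z.max())

# e.g. frequencies of [fold, call, raise] from past play vs the current session
if deviation_score([0.5, 0.3, 0.2], [0.05, 0.04, 0.03], [0.3, 0.4, 0.3]) > 3:
    print("significant behavioral deviation: flag for the administrator")
```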
Theory of Genuine Tripartite Nonlocality of Gaussian States
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Piano, Samanta
2014-01-01
We investigate the genuine multipartite nonlocality of three-mode Gaussian states of continuous variable systems. For pure states, we present a simplified procedure to obtain the maximum violation of the Svetlichny inequality based on displaced parity measurements, and we analyze its interplay with genuine tripartite entanglement measured via Rényi-2 entropy. The maximum Svetlichny violation admits tight upper and lower bounds at fixed tripartite entanglement. For mixed states, no violation is possible when the purity falls below 0.86. We also explore a set of recently derived weaker inequalities for three-way nonlocality, finding violations for all tested pure states. Our results provide a strong signature for the nonclassical and nonlocal nature of Gaussian states despite their positive Wigner function, and lead to precise recipes for its experimental verification.
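For reference, one standard form of the Svetlichny inequality at issue (E denotes the expectation of the product of dichotomic outcomes for settings a/a', b/b', c/c'; hybrid local-nonlocal models obey the bound 4, while quantum mechanics allows violations up to 4√2):

```latex
\[
S = E(a,b,c) + E(a,b,c') + E(a,b',c) + E(a',b,c)
  - E(a,b',c') - E(a',b,c') - E(a',b',c) - E(a',b',c') \le 4
\]
```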
Radio frequency interference mitigation using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.
2017-01-01
We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation achieves accuracy competitive with classical RFI mitigation algorithms such as SEEK's SumThreshold implementation. We publish our U-Net software package on GitHub under the GPLv3 license.
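A minimal one-level U-Net sketch in Keras, showing the encode/skip/decode pattern the paper relies on (layer widths and the input shape are placeholders, not the published architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(shape=(64, 64, 1)) -> tf.keras.Model:
    """One-level U-Net emitting a per-pixel RFI probability mask."""
    inp = tf.keras.Input(shape)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)                      # encoder: downsample
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)                      # decoder: upsample
    m1 = layers.Concatenate()([u1, c1])                 # skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```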
ERIC Educational Resources Information Center
Association for Education in Journalism and Mass Communication.
The 16 papers in the first section of the Addenda to this proceedings are: (1) "Shipboard News: Nineteenth Century Handwritten Periodicals at Sea" (Roy Alden Atwood); (2) "The International Institutional Press Association, 1966-1968" (Constance Ledoux Book); (3) "44 Liquormart--A Prescription for Commercial Speech: Return…
Typing Compared with Handwriting for Essay Examinations at University: Letting the Students Choose
ERIC Educational Resources Information Center
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael
2010-01-01
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that perhaps this might not be "fair"--that the choice a student makes,…
ERIC Educational Resources Information Center
Demir, Yusuf
2017-01-01
On a multifaceted basis, this paper explores the challenges experienced by native and non-native English language teachers (NESTs and NNESTs) in a tertiary-level EFL setting in Turkey. Adopting a qualitative case study design, the data were gathered from five NESTs through interviews and from five NNESTs through hand-written accounts based on the…
Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets
2015-04-24
Learning sparse feature representations is a useful instrument for solving an...novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets...
Non-Roman Font Generation Via Interactive Computer Graphics,
1986-07-01
sets of kana representing the same set of sounds: hiragana, a cursive script for transcribing native Japanese words (including those borrowed from...used for transcribing spoken Japanese into written language. Hiragana have a cursive (handwritten) appearance. homophone: A syllable or word which is...language into written form. These symbol sets are syllabaries. (see also hiragana, katakana) kanji: "Chinese characters" (Japanese). (see also hanzi
Keystroke dynamics in the pre-touchscreen era
Ahmad, Nasir; Szymkowiak, Andrea; Campbell, Paul A.
2013-01-01
Biometric authentication seeks to measure an individual's unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprint, signature, or iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches for example, their substantial infrastructure costs, and intrusive nature, make them undesirable and indeed impractical for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals' typing signatures. Such variables may include measurement of the period between key presses, and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis on the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts. PMID:24391568
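To make the variables concrete, a tiny sketch extracting the two classic timing features, dwell and flight times, from invented (key, press, release) events:

```python
# Invented events as (key, press_time, release_time) tuples, in seconds.
events = [("p", 0.00, 0.09), ("a", 0.17, 0.25), ("s", 0.33, 0.40)]

# Dwell time: how long each key is held down.
dwell = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

print(dwell)    # approx. [0.09, 0.08, 0.07]
print(flight)   # approx. [0.08, 0.08]
```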
Detection and interpretation of seismoacoustic events at German infrasound stations
NASA Astrophysics Data System (ADS)
Pilger, Christoph; Koch, Karl; Ceranna, Lars
2016-04-01
Three infrasound arrays with collocated or nearby installed seismometers are operated by the Federal Institute for Geosciences and Natural Resources (BGR) as the German National Data Center (NDC) for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Infrasound generated by seismoacoustic events is routinely detected at these infrasound arrays, but air-to-ground coupled acoustic waves occasionally show up in seismometer recordings as well. Different natural and artificial sources like meteoroids as well as industrial and mining activity generate infrasonic signatures that are simultaneously detected at microbarometers and seismometers. Furthermore, many near-surface sources like earthquakes and explosions generate both seismic and infrasonic waves that can be detected successively with both technologies. The combined interpretation of seismic and acoustic signatures provides additional information about the origin time and location of remote infrasound events or about the characterization of seismic events distinguishing man-made and natural origins. Furthermore, seismoacoustic studies help to improve the modelling of infrasound propagation and ducting in the atmosphere and allow quantifying the portion of energy coupled into ground and into air by seismoacoustic sources. An overview of different seismoacoustic sources and their detection by German infrasound stations as well as some conclusions on the benefit of a combined seismoacoustic analysis are presented within this study.
Signature of chaos in the 4 f -core-excited states for highly-charged tungsten ions
NASA Astrophysics Data System (ADS)
Safronova, Ulyana; Safronova, Alla
2014-05-01
We evaluate radiative and autoionizing transition rates in highly charged W ions in search of a signature of chaos. In particular, previously published results for Ag-like W27+, Tm-like W5+, and Yb-like W4+ ions, as well as newly obtained results for I-like W21+, Xe-like W20+, Cs-like W19+, and La-like W17+ ions (with ground configuration [Kr] 4d10 4fk with k = 7, 8, 9, and 11, respectively), are considered; they were calculated using the multiconfiguration relativistic Hebrew University Lawrence Livermore Atomic Code (HULLAC code) and the Hartree-Fock-Relativistic method (COWAN code). The main emphasis was on verification of Gaussian statistics of rates as a function of transition energy. There was no evidence of such statistics for the previously published results mentioned above, nor for the transitions between the excited and autoionizing states in the newly calculated results. However, we did find a Gaussian profile for the transitions between excited states, such as the [Kr] 4d10 4fk - [Kr] 4d10 4f(k-1) 5d transitions, for the newly calculated W ions. This work is supported in part by DOE under NNSA Cooperative Agreement DE-NA0001984.
Keystroke dynamics in the pre-touchscreen era.
Ahmad, Nasir; Szymkowiak, Andrea; Campbell, Paul A
2013-12-19
Biometric authentication seeks to measure an individual's unique physiological attributes for the purpose of identity verification. Conventionally, this task has been realized via analyses of fingerprint, signature, or iris patterns. However, whilst such methods effectively offer a superior security protocol compared with password-based approaches for example, their substantial infrastructure costs, and intrusive nature, make them undesirable and indeed impractical for many scenarios. An alternative approach seeks to develop similarly robust screening protocols through analysis of typing patterns, formally known as keystroke dynamics. Here, keystroke analysis methodologies can utilize multiple variables, and a range of mathematical techniques, in order to extract individuals' typing signatures. Such variables may include measurement of the period between key presses, and/or releases, or even key-strike pressures. Statistical methods, neural networks, and fuzzy logic have often formed the basis for quantitative analysis on the data gathered, typically from conventional computer keyboards. Extension to more recent technologies such as numerical keypads and touch-screen devices is in its infancy, but obviously important as such devices grow in popularity. Here, we review the state of knowledge pertaining to authentication via conventional keyboards with a view toward indicating how this platform of knowledge can be exploited and extended into the newly emergent type-based technological contexts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geddes, Cameron; Ludewigt, Bernhard; Valentine, John
Near-monoenergetic photon sources (MPSs) have the potential to improve sensitivity at greatly reduced dose in existing applications and enable new capabilities in other applications. MPS advantages include the ability to select energy, energy spread, flux, and pulse structures to deliver only the photons needed for the application, while suppressing extraneous dose and background. Some MPSs also offer narrow divergence photon beams which can target dose and/or mitigate scattering contributions to image contrast degradation. Current broad-band, bremsstrahlung photon sources (e.g., linacs and betatrons) deliver unnecessary dose that in some cases also interferes with the signature to be detected and/or restricts operations, and must be collimated (reducing flux) to generate narrow divergence beams. While MPSs can in principle resolve these issues, they are technically challenging to produce. Candidate MPS technologies for nonproliferation applications are now being developed, each of which have different properties (e.g. broad divergence vs. narrow). Within each technology, source parameters trade off against one another (e.g. flux vs. energy spread), representing a large operation space. To guide development, requirements for each application of interest must be defined and simulations conducted to define MPS parameters that deliver benefit relative to current systems. The present project conducted a broad assessment of potential nonproliferation applications where MPSs may provide new capabilities or significant performance enhancement (reported separately), which led to prioritization of several applications for detailed analysis. The applications prioritized were: cargo screening and interdiction of Special Nuclear Materials (SNM), detection of hidden SNM, treaty/dismantlement verification, and spent fuel dry storage cask content verification. High resolution imaging for stockpile stewardship was considered as a sub-area of the treaty topic, as it is also of interest for future treaty use. This report presents higher-fidelity calculations and modeling results to quantitatively evaluate the prioritized applications, and to derive the key MPS properties that drive application benefit. Simulations focused on the conventional signatures of radiography, photofission, and NRF to enable comparison to present methods and evaluation of benefit.
Measuring Fission Chain Dynamics Through Inter-event Timing of Correlated Particles
NASA Astrophysics Data System (ADS)
Monterial, Mateusz
Neutrons born from fission may go on to induce subsequent fissions in self-propagating series of reactions resulting in a fission chain. Fissile materials comprise all isotopes capable of sustaining nuclear fission chain reactions, and are therefore a necessary prerequisite for the construction of a nuclear weapon. As a result, the accountancy and characterization of fissile material is of great importance for national security and the international community. The rate at which neutrons "multiply" in a fissile material is a function of the composition, total mass, density, and shape of the object. These are key characteristics sought out in areas of nuclear non-proliferation, safeguards, treaty verification and emergency response. This thesis demonstrates a novel technique of measuring the underlying fission chain dynamics in fissile material through temporal correlation of neutrons and gamma rays emitted from fission. Fissile material exhibits key detectable signatures through the emission of correlated neutrons and gamma rays from fission. The Non-Destructive Assay (NDA) community has developed mature techniques of assaying fissile material that detect these signatures, such as neutron counting by thermal capture based detectors, and gamma-ray spectroscopy. An alternative use of fast organic scintillators provides three additional capabilities: (1) discrimination between neutron and gamma-ray pulses, (2) sub-nanosecond scale timing between correlated events, and (3) measurement of deposited neutron energy in the detector. This thesis leverages these capabilities to measure a new signature, which is demonstrated to be sensitive both to fissile neutron multiplication and to the presence of neutronically coupled reflectors. In addition, a new 3D imaging method of sources of correlated gamma rays and neutrons is presented, which can improve estimation of total source volume and localization.
Quicksilver IV: The Real Operation Fortitude
2010-06-01
Fortitude, they have also focused on the personalities that made those operations so fascinating; they have devoted entire books to Juan Garcia...was unclear, I have included explanatory notes, based on my own insights, in an effort to provide clarity. The original text is in normal font. Text...that was handwritten in is in italics. Text that was manually crossed out is in a strikethrough font. Notes on Coordinates and Conversion The
ERIC Educational Resources Information Center
Hussey, Michael; Greenhut, Stephanie
2011-01-01
This article features two documents which can serve as a starting point for a lesson on public service while students debate the amount of pay that public servants should receive. These are: (1) the printed draft of the Constitution showing George Washington's handwritten corrections that eliminated state payments and included the phrase "to be…
U.S. Army Research Laboratory (ARL) Corporate Dari Document Transcription and Translation Guidelines
2012-10-01
text file format. 15. SUBJECT TERMS Transcription, Translation, guidelines, ground truth, Optical character recognition, OCR, Machine Translation, MT...foreign language into a target language in order to train, test, and evaluate optical character recognition (OCR) and machine translation (MT) embedded...graphic element and should not be transcribed. Elements that are not part of the primary text such as handwritten annotations or stamps should not be
ERIC Educational Resources Information Center
Arnold, Voiza; And Others
In 1990, a study was conducted at Rio Hondo College (Whittier, California) to determine if readers exhibited any bias in scoring test papers that were composed on a word processor as opposed to being written by hand. The study began with the formulation of tentative pilot study questions and the development of procedures to address them. Three…
Learning and Inductive Inference
1982-07-01
a set of graph grammars to describe visual scenes. Other researchers have applied graph grammars to the pattern recognition of handwritten characters...Issues / Mostow's operationalizer / Learning from examples / Issues / Learning in control and pattern recognition...articles on rote learning and advice-taking. Kenneth Clarkson contributed the article on grammatical inference, and Geoff ... wrote
Understanding the Use of Graphic Novels to Support the Writing Skills of a Struggling Writer
ERIC Educational Resources Information Center
Voss, Christina L.
2013-01-01
This mixed methods study combining a single-subject experimental design with an embedded case study focuses on the impact of a visual treatment on the handwritten and typed output of a struggling male writer during his 5th through 7th grades who has undergone a longitudinal remedial phase of two and a half years creating text-only material as well…
Dealing with contaminated datasets: An approach to classifier training
NASA Astrophysics Data System (ADS)
Homenda, Wladyslaw; Jastrzebska, Agnieszka; Rybnik, Mariusz
2016-06-01
The paper presents a novel approach to classification reinforced with a rejection mechanism. The method is based on a two-tier set of classifiers. The first layer classifies elements; the second layer separates native elements from foreign ones in each distinguished class. The key novelty presented here is a rejection mechanism training scheme that follows the philosophy "one-against-all-other-classes". The proposed method was tested in an empirical study of handwritten digit recognition.
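A hedged sketch of the two-tier structure on stand-in data (note the second tier here substitutes an off-the-shelf one-class novelty detector per class, whereas the paper trains its rejection mechanism one-against-all-other-classes):

```python
import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 16))          # stand-in features
y = rng.integers(0, 3, 300)                 # three native classes

tier1 = SVC().fit(X, y)                     # first layer: classify elements
tier2 = {c: OneClassSVM(nu=0.05).fit(X[y == c])   # second layer: per-class
         for c in np.unique(y)}                   # native-vs-foreign separation

def classify_with_rejection(x):
    c = int(tier1.predict([x])[0])          # tentative class
    return c if tier2[c].predict([x])[0] == 1 else None   # None = rejected
```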
Lewis, Keir Edward; Edwards, Victoria Middleton; Hall, Sian; Temblett, Paul; Hutchings, Hayley
2009-01-01
To quantify any effect of Standardised Order Forms (SOFs), versus hand-written note entries, for 'Do Not Attempt Resuscitation' (DNAR) orders on the selection and survival of remaining cardiopulmonary resuscitation (CPR) attempts. A prospective, observational study in two UK hospitals, comparing numbers, demographics and survival rates from CPR attempts for 2 years prior to and 2 years after the introduction of SOFs (the only change in DNAR policy). There were 133 CPR attempts, representing 0.30% of the 44,792 admissions, before the SOFs and 147 CPR attempts, representing 0.32% of the 45,340 admissions, after the SOFs (p=0.46). The median duration of a CPR attempt was 11 min prior to and 15 min following the SOFs (p=0.02). Of the CPR attempts, there was no change in mean age (p=0.34), proportion occurring outside working hours (p=0.70) or proportion presenting with an initial shockable rhythm (p=0.30). Survival to discharge following CPR was unchanged (p=0.23). The introduction of SOFs for DNAR orders was associated with a significantly longer duration of CPR (on average by 3-4 min) but no difference in the overall number, demographics or type of arrest, or survival in the remaining CPR attempts.
Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif
2016-03-11
Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the word list used is important when training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in the sense of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif
2016-01-01
Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the word list used is important when training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in the sense of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction. PMID:26978368
A perceptive method for handwritten text segmentation
NASA Astrophysics Data System (ADS)
Lemaitre, Aurélie; Camillerapp, Jean; Coüasnon, Bertrand
2011-01-01
This paper presents a new method to address the problem of handwritten text segmentation into text lines and words. We propose a method based on the cooperation among points of view that enables the localization of the text lines in a low resolution image, and then the association of the pixels at a higher level of resolution. Thanks to the combination of levels of vision, we can detect overlapping characters and re-segment the connected components during the analysis. Then, we propose a segmentation of lines into words based on the cooperation between digital data and symbolic knowledge. The digital data are obtained from distances inside a Delaunay graph, which gives a precise distance between connected components at the pixel level. We introduce structural rules in order to take into account some generic knowledge about the organization of a text page. This cooperation among information sources gives greater expressive power and ensures the global coherence of the recognition. We validate this work using the metrics and the database proposed for the segmentation contest of ICDAR 2009. We show that our method obtains very interesting results compared to the other methods in the literature. More precisely, we are able to deal with slope and curvature, overlapping text lines and varied kinds of writing, which are the main difficulties met by the other methods.
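The Delaunay-based distance computation is easy to sketch; below, invented component centroids stand in for the connected components of a real page:

```python
import numpy as np
from scipy.spatial import Delaunay

pts = np.array([[0, 0], [2, 1], [1, 3], [4, 2], [3, 4]], dtype=float)
tri = Delaunay(pts)                      # triangulate component centroids

edges = set()                            # unique edges of the Delaunay graph
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((a, b))

for a, b in sorted(edges):               # inter-component distances that
    print(a, b, np.linalg.norm(pts[a] - pts[b]))   # feed word segmentation
```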
Vajda, Szilárd; Rangoni, Yves; Cecotti, Hubert
2015-01-01
For training supervised classifiers to recognize different patterns, large data collections with accurate labels are necessary. In this paper, we propose a generic, semi-automatic labeling technique for large handwritten character collections. In order to speed up the creation of a large scale ground truth, the method combines unsupervised clustering and minimal expert knowledge. To exploit the potential discriminant complementarities across features, each character is projected into five different feature spaces. After clustering the images in each feature space, the human expert labels the cluster centers. Each data point inherits the label of its cluster's center. A majority (or unanimity) vote decides the label of each character image. The amount of human involvement (labeling) is strictly controlled by the number of clusters produced by the chosen clustering approach. To test the efficiency of the proposed approach, we have compared and evaluated three state-of-the-art clustering methods (k-means, self-organizing maps, and growing neural gas) on the MNIST digit data set and a Lampung Indonesian character data set, respectively. Considering a k-nn classifier, we show that manually labeling only 1.3% (MNIST) and 3.2% (Lampung) of the training data provides the same range of performance as a completely labeled data set would. PMID:25870463
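A compact sketch of the label-propagation-and-vote scheme using k-means (one of the three clustering methods compared; label_center is a stand-in for the human expert):

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def propagate_labels(feature_spaces, n_clusters, label_center):
    """feature_spaces: list of (n_samples, d_i) arrays, one per feature space.
    label_center: callable standing in for the expert labeling a center."""
    votes = []
    for X in feature_spaces:
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
        center_labels = [label_center(c) for c in km.cluster_centers_]
        votes.append([center_labels[a] for a in km.labels_])  # inherit labels
    # Majority vote across feature spaces decides each sample's final label.
    return [Counter(col).most_common(1)[0][0] for col in zip(*votes)]
```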
Keywords image retrieval in historical handwritten Arabic documents
NASA Astrophysics Data System (ADS)
Saabni, Raid; El-Sana, Jihad
2013-01-01
A system is presented for spotting and searching keywords in handwritten Arabic documents. A slightly modified dynamic time warping algorithm is used to measure similarities between words. Two sets of features are generated from the outer contour of the words/word-parts. The first set is based on the angles between nodes on the contour, and the second set is based on the shape context features taken from the outer contour. To recognize a given word, the segmentation-free approach is partially adopted, i.e., continuous word parts are used as the basic alphabet instead of individual characters or complete words. Additional strokes, such as dots and detached short segments, are classified and used in a postprocessing step to determine the final comparison decision. The search for a keyword is performed by searching for its word parts in the correct order. The performance of the presented system was very encouraging in terms of efficiency and match rates. To evaluate the presented system, its performance is compared with three different systems. Unfortunately, there are no publicly available standard datasets with ground truth for testing Arabic keyword searching systems. Therefore, a private set of images, partially taken from the Juma'a Al-Majid Center in Dubai, is used for evaluation, while a slightly modified version of the IFN/ENIT database is used for training.
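For reference, the unmodified textbook dynamic time warping that the paper's measure builds on (the paper applies a slightly modified variant to its contour features):

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic DTW distance between two 1D feature sequences,
    e.g. contour-angle profiles of two word parts."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

print(dtw(np.array([0., 1., 2.]), np.array([0., 1., 1., 2.])))  # 0.0
```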
Ultrafast learning in a hard-limited neural network pattern recognizer
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
1996-03-01
As we published in the last five years, the supervised learning in a hard-limited perceptron system can be accomplished in a noniterative manner if the input-output mapping to be learned satisfies a certain positive-linear-independency (or PLI) condition. When this condition is satisfied (for most practical pattern recognition applications, this condition should be satisfied), the connection matrix required to meet this mapping can be obtained noniteratively in one step. Generally, there exist infinitely many solutions for the connection matrix when the PLI condition is satisfied. We can then select an optimum solution such that the recognition of any untrained patterns will become optimally robust in the recognition mode. The learning speed is very fast and close to real-time because the learning process is noniterative and one-step. This paper reports the theoretical analysis and the design of a practical character recognition system for recognizing hand-written alphabets. The experimental result is recorded in real-time on an unedited video tape for demonstration purposes. It is seen from this real-time movie that the recognition of the untrained hand-written alphabets is invariant to size, location, orientation, and writing sequence, even though the training is done with standard size, standard orientation, central location and standard writing sequence.
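A noniterative flavor of this one-step learning is easy to demonstrate (a least-squares sketch only; the paper's PLI-based construction of the connection matrix differs in detail):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))            # 50 training patterns, 20 features
T = np.sign(rng.standard_normal((5, 50)))    # +/-1 targets for 5 output units

W = T @ np.linalg.pinv(X)                    # one step, no iteration
train_out = np.sign(W @ X)                   # hard-limited perceptron outputs
print((train_out == T).mean())               # fraction of reproduced targets
```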
Kulkarni, Shruti R; Rajendran, Bipin
2018-07-01
We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike triggered Normalized Approximate Descent (NormAD) algorithm. Our network, which employs neurons operating at sparse biological spike rates below 300 Hz, achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters compared to the state-of-the-art. We present several insights from extensive numerical experiments regarding optimization of learning parameters and network configuration to improve its accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory and energy constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN does not degrade by more than 1% compared to the floating-point baseline. Further, the proposed SNN, which is trained based on precise spike timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using backpropagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike-based information encoding and learning for real-world applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
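The reduced-precision storage experiment can be mimicked with a uniform symmetric quantizer (a generic sketch, not the authors' exact scheme):

```python
import numpy as np

def quantize_weights(w: np.ndarray, bits: int = 3) -> np.ndarray:
    """Map weights onto 2**bits uniformly spaced levels (3 bits -> 8 values)."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    codes = np.clip(np.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return codes * scale

w = np.random.randn(100)
print(np.unique(quantize_weights(w)).size)   # at most 8 distinct weight values
```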
Using unstructured diaries for primary data collection.
Thomas, Juliet Anne
2015-05-01
To give a reflective account of using unstructured handwritten diaries as a method of collecting qualitative data. Diaries are primarily used in research as a method of collecting qualitative data. There are some challenges associated with their use, including compliance rates. However, they can provide a rich source of meaningful data and can avoid the difficulties of participants trying to precisely recall events after some time has elapsed. The author used unstructured handwritten diaries as her primary method of collecting data during her grounded theory doctoral study, when she examined the professional socialisation of nursing students. Over two years, 26 participants selected from four consecutive recruited groups of nursing students volunteered to take part in the study and were asked to keep a daily diary throughout their first five weeks of clinical experience. When using open-ended research questions, grounded theory's pragmatic approach permits the examination of processes thereby creating conceptual interpretive understanding of data. A wealth of rich, detailed data was obtained from the diaries that permitted the development of new theories regarding the effects early clinical experiences have on nursing students' professional socialisation. Diaries were found to provide insightful in-depth qualitative data in a resource-friendly manner. Nurse researchers should consider using diaries as an alternative to more commonly used approaches to collecting qualitative data.
NASA Astrophysics Data System (ADS)
Minh Ha, Thien; Niggeler, Dieter; Bunke, Horst; Clarinval, Jose
1995-08-01
Although giro forms are used by many people in daily life for money remittance in Switzerland, the processing of these forms at banks and post offices is only partly automated. We describe an ongoing project for building an automatic system that is able to recognize various items printed or written on a giro form. The system comprises three main components, namely, an automatic form feeder, a camera system, and a computer. These components are connected in such a way that the system is able to process a batch of forms without any human interaction. We present two real applications of our system in the field of payment services, which require the reading of both machine-printed and handwritten information that may appear on a giro form. One particular feature of giro forms is their flexible layout, i.e., information items are located differently from one form to another, thus requiring an additional analysis step to localize them before recognition. A commercial optical character recognition software package is used for recognition of machine-printed information, whereas handwritten information is read by our own algorithms, the details of which are presented. The system is implemented using a client/server architecture, providing a high degree of flexibility for change. Preliminary results are reported, supporting our claim that the system is usable in practice.
Line Segmentation in Handwritten Assamese and Meetei Mayek Script Using Seam Carving Based Algorithm
NASA Astrophysics Data System (ADS)
Kumar, Chandan Jyoti; Kalita, Sanjib Kr.
Line segmentation is a key stage in an Optical Character Recognition system. This paper primarily concerns the problem of text line extraction on color and grayscale manuscript pages of two major North-east Indian regional scripts, Assamese and Meetei Mayek. Line segmentation of handwritten text in Assamese and Meetei Mayek scripts is an uphill task, primarily because of the structural features of both scripts and varied writing styles. In this paper, line segmentation of a document image is achieved using the seam carving technique. Researchers originally used this approach for content-aware resizing of an image, but many are now applying seam carving to the line segmentation phase of OCR. Although it is a language-independent technique, most experiments have been done on Arabic, Greek, German, and Chinese scripts. Two types of seams are generated: medial seams approximate the orientation of each text line, and separating seams separate one line of text from another. Experiments are performed extensively over various types of documents, and detailed analysis of the evaluations shows that the algorithm performs well even for documents with multiple scripts. In this paper, we present a comparative study of the accuracy of this method over different types of data.
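A minimal sketch of the underlying seam computation follows: dynamic programming traces a horizontal seam of minimal cumulative energy. With ink density as the (assumed) energy, a minimal seam runs through the background gap between text lines, which is the role of a separating seam; the energy definition and test image are illustrative assumptions.

```python
import numpy as np

# Minimal seam-carving sketch: DP over an energy map, one row index per
# column, with each step allowed to move at most one row up or down.
def horizontal_seam(energy):
    h, w = energy.shape
    cost = energy.copy()
    for col in range(1, w):                          # DP sweep, left to right
        for row in range(h):
            lo, hi = max(row - 1, 0), min(row + 2, h)
            cost[row, col] += cost[lo:hi, col - 1].min()
    seam = [int(np.argmin(cost[:, -1]))]             # best endpoint
    for col in range(w - 2, -1, -1):                 # backtrack
        row = seam[-1]
        lo, hi = max(row - 1, 0), min(row + 2, h)
        seam.append(lo + int(np.argmin(cost[lo:hi, col])))
    return seam[::-1]

ink = np.ones((60, 80))
ink[25:40, :] = 0.0                                  # gap between two "lines"
print(horizontal_seam(ink)[:5])                      # seam stays in the gap
```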
Zhang, Qingfang; Feng, Chen
2017-01-01
The interaction between central and peripheral processing in written word production remains controversial. This study aims to investigate whether the effects of radical complexity and lexicality in central processing cascade into peripheral processing in Chinese written word production. The participants were asked to write characters and non-characters (lexicality) with different radical complexity (few- and many-strokes). The findings indicated that, regardless of lexicality, writing latencies were longer for characters with higher complexity (the many-strokes condition) than for characters with lower complexity (the few-strokes condition). The participants slowed down their writing execution at the radicals' boundary strokes, which indicated a radical boundary effect in peripheral processing. Interestingly, lexicality and radical complexity affected the pattern of shift velocity and writing velocity during the execution of writing. Lexical processing cascades into peripheral processing, but only at the beginning of Chinese characters. In contrast, radical complexity influenced the execution of the handwriting movement throughout the entire character, and the pattern of the effect interacted with character frequency. These results suggest that lexicality and radical complexity operate during the execution of handwritten word production, which indicates that central processing cascades into peripheral processing during the handwriting of Chinese characters. PMID:28348536
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
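The two stages can be sketched under simplifying assumptions: plain Euclidean distance between sampled curves stands in for dynamic time warping, and the number of coordinates and the ridge penalty are fixed rather than selected automatically, with synthetic data throughout.

```python
import numpy as np

# Sketch of principal coordinate ridge regression under the stated assumptions.
def principal_coordinates(D, k):
    """Classical MDS: leading k principal coordinates from distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]             # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

rng = np.random.default_rng(2)
curves = rng.normal(size=(40, 100))              # 40 sampled functional predictors
y = curves[:, :10].mean(axis=1) + 0.1 * rng.normal(size=40)

D = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=2)
Z = principal_coordinates(D, k=5)
lam = 1.0                                        # fixed ridge penalty
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(5), Z.T @ (y - y.mean()))
y_hat = y.mean() + Z @ beta                      # ridge fit on the coordinates
print("train RMSE:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```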
14. 'ANNISQUAM POINT JAN. 4, 1898.' Photocopy of photograph ...
14. 'ANNISQUAM POINT -- JAN. 4, 1898.' Photocopy of photograph (original glass plate negative #T86 in the collection of the Annisquam Historical Society, Annisquam, Massachusetts). Photographer: Martha Harvey (1862-1949). (The handwritten legend along the top edge of the photograph is scratched in the emulsion of the original glass plate negative. Consequently it reads in reverse when printed.) - Annisquam Bridge, Spanning Lobster Cove between Washington & River Streets, Gloucester, Essex County, MA
ERIC Educational Resources Information Center
Dixon, Michael
2012-01-01
This study compares second-year Japanese university students' strategies to write kanji by hand with their strategies to produce the kanji characters on a computer, taking into account factors such as accuracy in writing, the amount of kanji used, the complexity of the kanji used, as well as how the characters used compare with the sequence…
Numerical linear algebra in data mining
NASA Astrophysics Data System (ADS)
Eldén, Lars
Ideas and algorithms from numerical linear algebra are important in several areas of data mining. We give an overview of linear algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.
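As one concrete instance of rank reduction for classification, the sketch below builds a truncated-SVD basis per digit class and assigns a test image to the class with the smallest reconstruction residual; random matrices stand in for digit images, and the rank and sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of SVD-basis classification: each class keeps the leading left
# singular vectors of its training matrix (columns = vectorized images).
rng = np.random.default_rng(3)
train = {d: rng.normal(size=(256, 50)) for d in range(10)}

rank = 10
bases = {}
for d, A in train.items():
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    bases[d] = U[:, :rank]                    # rank-10 class subspace

def classify(x):
    """Assign x to the class minimizing the residual ||x - U U^T x||."""
    residuals = {d: np.linalg.norm(x - U @ (U.T @ x)) for d, U in bases.items()}
    return min(residuals, key=residuals.get)

print(classify(train[7][:, 0]))               # prints 7: own basis fits best
```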
A Novel Multi-Receiver Signcryption Scheme with Complete Anonymity.
Pang, Liaojun; Yan, Xuxia; Zhao, Huiyang; Hu, Yufei; Li, Huixian
2016-01-01
Anonymity, which is more and more important to multi-receiver schemes, has been taken into consideration by many researchers recently. To protect receiver anonymity, the first multi-receiver scheme based on the Lagrange interpolating polynomial was proposed in 2010. To ensure the sender's anonymity, the concept of the ring signature was proposed in 2005, but this scheme was later shown to have some weaknesses; at the same time, a completely anonymous multi-receiver signcryption scheme was proposed. In that completely anonymous scheme, sender anonymity is achieved by improving the ring signature, and receiver anonymity is achieved by again using the Lagrange interpolating polynomial. Unfortunately, the Lagrange interpolation method was shown to fail to protect the anonymity of receivers, because each authorized receiver can judge whether anyone else is authorized or not. Therefore, the completely anonymous multi-receiver signcryption mentioned above protects only sender anonymity. In this paper, we propose a new completely anonymous multi-receiver signcryption scheme with a new polynomial technique that replaces the Lagrange interpolating polynomial; it mixes the identity information of receivers into a ciphertext element in such a way that authorized receivers cannot verify one another. Along with receiver anonymity, the proposed scheme also provides sender anonymity, as well as decryption fairness and public verification.
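The anonymity failure of the interpolation approach can be seen in a toy model. The construction below is an illustration of my own over the integers, not the paper's scheme; real systems work with hashed identities in a large prime field. It mixes identities into a polynomial f(x) = prod(x - h) + key, so every authorized identity evaluates f to the session key, but any holder of the coefficients can probe whether an arbitrary identity is authorized.

```python
# Toy sketch of Lagrange-style identity mixing and its anonymity flaw.
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def mix_identities(ids, key):
    """Build f(x) = prod(x - h) + key, so f(h) == key for every authorized h."""
    f = [1]
    for h in ids:
        f = poly_mul(f, [1, -h])
    f[-1] += key                      # add the key to the constant term
    return f

def evaluate(f, x):
    acc = 0
    for c in f:                       # Horner's rule
        acc = acc * x + c
    return acc

f = mix_identities([11, 23, 35], key=1234)
print(evaluate(f, 23) == 1234)        # True: an authorized receiver gets the key
print(evaluate(f, 99) == 1234)        # False: anyone can probe identities this
                                      # way, which is exactly the anonymity flaw
```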
Georgiou, Andrew; Prgomet, Mirela; Toouli, George; Callen, Joanne; Westbrook, Johanna
2011-09-01
The provision of relevant clinical information on pathology requests is an important part of facilitating appropriate laboratory utilization and accurate results interpretation and reporting. (1) To determine the quantity and importance of handwritten clinical information provided by physicians to the Microbiology Department of a hospital pathology service; and (2) to examine the impact of a Computerized Provider Order Entry (CPOE) system on the nature of clinical information communication to the laboratory. A multi-method and multi-stage investigation which included: (a) a retrospective audit of all handwritten Microbiology requests received over a 1-month period in the Microbiology Department of a large metropolitan teaching hospital; (b) the administration of a survey to laboratory professionals to investigate the impact of different clinical information on the processing and/or interpretation of tests; (c) an expert panel consisting of medical staff and senior scientists to assess the survey findings and their impact on pathology practice and patient care; and (d) a comparison of the provision and value of clinical information before CPOE, and across 3 years after its implementation. The audit of handwritten requests found that 43% (n=4215) contained patient-related clinical information. The laboratory survey showed that 97% (84/86) of the different types of clinical information provided for wound specimens and 86% (43/50) for stool specimens were shown to have an effect on the processing or interpretation of the specimens by one or more laboratory professionals. The evaluation of the impact of CPOE revealed a significant improvement in the provision of useful clinical information from 2005 to 2008, rising from 90.1% (n=749) to 99.8% (n=915) (p<.0001) for wound specimens and 34% (n=129) to 86% (n=422) (p<.0001) for stool specimens. This study showed that the CPOE system provided an integrated platform to access and exchange valuable patient-related information between physicians and the laboratory. These findings have important implications for helping to inform decisions about the design and structure of CPOE screens and what data entry fields should be designated or made voluntary. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
2013-01-01
A Parallel Neuromorphic Text Recognition System and Its… (post print article; dates covered: Sep 2011 – Sep 2013). Fragment of the abstract: "…research in computational intelligence has entered a new era. In this paper, we present an HPC-based context-aware intelligent text recognition…" Cited: M. Ahmadi and M. Shridhar, "Handwritten Numeral Recognition with Multiple Features and Multistage Classifiers," Proc. IEEE Int'l Symp. Circuits…
NASA Astrophysics Data System (ADS)
Bezmaternykh, P. V.; Nikolaev, D. P.; Arlazarov, V. L.
2018-04-01
Rectification of textual blocks, or slant correction, is an important stage of document image processing in OCR systems. This paper reviews existing methods and introduces an approach to constructing such algorithms based on Fast Hough Transform analysis. A quality measurement technique is proposed, and results are shown for both printed and handwritten textual blocks processed as part of an industrial system for identity document recognition on mobile devices.
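A simplified version of the idea can be sketched as follows: where the paper uses Fast Hough Transform analysis, a brute-force scan over candidate angles shears the ink coordinates and keeps the angle whose horizontal projection profile is sharpest (maximal variance). The angle grid, scoring, and test image are assumptions for illustration.

```python
import numpy as np

# Simplified skew-estimation sketch: sharp peaks in the projection profile
# indicate well-aligned text lines.
def estimate_skew(binary_img, angles=np.linspace(-10, 10, 41)):
    h, _ = binary_img.shape
    ys, xs = np.nonzero(binary_img)
    best_angle, best_score = 0.0, -np.inf
    for a in angles:
        shear = np.tan(np.deg2rad(a))
        rows = np.clip((ys - xs * shear).astype(int), 0, h - 1)
        profile = np.bincount(rows, minlength=h)   # horizontal projection
        score = profile.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle

img = np.zeros((100, 200))
for r in (20, 50, 80):                             # three lines skewed by 3 deg
    for x in range(200):
        img[min(99, r + int(x * np.tan(np.deg2rad(3.0)))), x] = 1.0
print(estimate_skew(img))                          # approximately 3.0
```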
Handwriting generates variable visual output to facilitate symbol learning.
Li, Julia X; James, Karin H
2016-03-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Handwriting generates variable visual input to facilitate symbol learning
Li, Julia X.; James, Karin H.
2015-01-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the six conditions (N=72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913
Optical character recognition of handwritten Arabic using hidden Markov models
NASA Astrophysics Data System (ADS)
Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.
2011-04-01
The problem of optical character recognition (OCR) of handwritten Arabic has not yet received a satisfactory solution. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step in the overall OCR trends currently being researched in the literature. The proposed approach exploits the structure of characters in the Arabic language, in addition to their extracted features, to achieve improved recognition rates. Useful statistical information about the Arabic language is first extracted and then used to estimate the probabilistic parameters of the HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built from a large collected corpus and the emission matrix is built from the extracted character features. The recognition process is driven by the Viterbi algorithm, which finds the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text, so that the recognition rate is no longer tied to the worst recognition rate of any single character but instead reflects the overall structure of the Arabic language. Numerical results show a potentially large recognition improvement from using the proposed algorithms.
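The recursion at the heart of this approach is the standard Viterbi dynamic program; a generic sketch follows. The toy two-state model is an assumption purely for illustration, whereas the actual system's transition and emission matrices are far larger and estimated from a corpus.

```python
import numpy as np

# Generic Viterbi sketch: most probable state sequence under (A, B, pi).
def viterbi(obs, A, B, pi):
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))              # best log-probability so far
    psi = np.zeros((T, n_states), dtype=int)     # best predecessor state
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

A = np.array([[0.7, 0.3], [0.4, 0.6]])           # state transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])           # emission probabilities
pi = np.array([0.5, 0.5])
print(viterbi([0, 0, 1, 1, 1], A, B, pi))        # [0, 0, 1, 1, 1]
```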
Optical character recognition of handwritten Arabic using hidden Markov models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.
2011-01-01
The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.
Handwritten character recognition using background analysis
NASA Astrophysics Data System (ADS)
Tascini, Guido; Puliti, Paolo; Zingaretti, Primo
1993-04-01
The paper describes a low-cost handwritten character recognizer. It consists of three modules: the 'acquisition' module, the 'binarization' module, and the 'core' module. The core module can be logically partitioned into six steps: character dilation, character circumscription, region and 'profile' analysis, 'cut' analysis, decision tree descent, and result validation. First, it reduces the resolution of the binarized regions and detects the minimum rectangle (MR) enclosing the character; the MR partitions the background into regions that surround the character or are enclosed by it, and allows features such as 'profiles' and 'cuts' to be defined. A 'profile' is the set of vertical or horizontal minimum distances between a side of the MR and the character itself; a 'cut' is a vertical or horizontal image segment delimited by the MR. The core module then classifies the character by descending the decision tree on the basis of the analysis of the regions around the character, in particular the 'profiles' and 'cuts,' without using context information. Finally, it recognizes the character or reactivates the core module after analyzing the validation test results. The recognizer is largely insensitive to character discontinuity and is able to detect Arabic numerals and English capital letters. The recognition rate for a 32 x 32 pixel character is about 97% after the first iteration and over 98% after the second iteration.
Two medieval doctors: Gilbertus Anglicus (c1180-c1250) and John of Gaddesden (1280-1361).
Pearn, John
2013-02-01
Biographies of medieval English doctors are uncommon and fragmentary. The two best-known English medieval physicians were Gilbertus Anglicus and John of Gaddesden. This paper brings together the known details of their lives, compiled from extant biographies and from internal references in their texts. The primary records of their writings exist in handwritten texts and thereafter in incunabula from the time of the invention of printing in 1476. The record of the lives of these two medieval physicians can be expanded, as here, by the general perspective of the life and times in which they lived. Gilbertus Anglicus, an often-quoted physician-teacher at Montpellier, wrote a seven-folio Compendium medicinae in 1271. He described pioneering procedures used later in the emergent disciplines of anaesthetics, cosmetic medicine and travel medicine. Gilbertus' texts, used extensively in European medical schools, passed in handwritten copies from student to student and eventually were printed in 1510. John of Gaddesden, an Oxford graduate in Arts, Medicine and Theology, wrote Rosa Anglica, published circa 1314. Its detailed text is an exemplar of the mixture of received Hippocratic and Galenic lore compounded by medieval astronomy and religious injunction, which mixture was the essence of medieval medicine. The writings of both these medieval English physicians formed part of the core curriculum that underpinned the practice of medicine for the next 400 years.
NASA Astrophysics Data System (ADS)
Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.
2005-12-01
Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes from large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier finding that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
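A minimal sketch of this verification machinery follows, with random maps standing in for the PI and RI forecasts: thresholding a forecast score map yields a 2x2 contingency table over spatial cells, and sweeping the threshold traces an ROC curve of hit rate versus false alarm rate.

```python
import numpy as np

# Sketch of binary forecast verification via contingency tables and ROC points.
def roc_points(scores, events, thresholds):
    points = []
    for thr in thresholds:
        forecast = scores >= thr
        hits = np.sum(forecast & events)
        misses = np.sum(~forecast & events)
        false_alarms = np.sum(forecast & ~events)
        correct_negatives = np.sum(~forecast & ~events)
        hit_rate = hits / (hits + misses)
        false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
        points.append((false_alarm_rate, hit_rate))
    return points

rng = np.random.default_rng(4)
events = rng.random((50, 50)) < 0.05                  # cells with large events
scores = 0.7 * events + 0.3 * rng.random((50, 50))    # an imperfect forecast
for fr, hr in roc_points(scores, events, [0.2, 0.5, 0.8]):
    print(f"false alarm rate {fr:.2f}, hit rate {hr:.2f}")
```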
The Taranaki daylight fireball, 1999 July 7
NASA Astrophysics Data System (ADS)
McCormick, Jennie
2006-10-01
The New Zealand Taranaki daylight fireball was observed on 1999 July 7 from various areas across the North and South Islands of New Zealand and had an apparent magnitude brighter than -20. The event produced more than one hundred handwritten reports, drawings, and paintings from eyewitnesses, as well as video and audio recordings, seismic trace data, and confirmation of detection by United States Department of Defense satellites. A detailed case study based on these data shows that observations by the public are invaluable when compiling a formal history of such events.
13. 'WAITING AT THE DRAWBRIDGE.' THE COAL SCHOONER LUCY MAY ...
13. 'WAITING AT THE DRAWBRIDGE.' THE COAL SCHOONER LUCY MAY WAITING AT THE DRAW, JUNE 19, 1896. Photocopy of photograph (original glass plate negative #T89 in the collection of the Annisquam Historical Society, Annisquam, Massachusetts). Photographer: Martha Harvey (1862-1949). (The handwritten legend along the top edge of the photograph is scratched in the emulsion of the original glass plate negative. Consequently it reads in reverse when printed.) - Annisquam Bridge, Spanning Lobster Cove between Washington & River Streets, Gloucester, Essex County, MA
Tensor Train Neighborhood Preserving Embedding
NASA Astrophysics Data System (ADS)
Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin
2018-05-01
In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace. Novel approaches to solving the optimization problem in TTNPE are proposed. For this embedding, we evaluate the trade-off among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that, compared to state-of-the-art tensor embedding methods, TTNPE achieves a superior trade-off in classification, computation, and dimensionality reduction on the MNIST handwritten digits and Weizmann face datasets.
Machine learning phases of matter
NASA Astrophysics Data System (ADS)
Carrasquilla, Juan; Stoudenmire, Miles; Melko, Roger
We show how the technology that allows automatic teller machines to read handwritten digits on cheques can be used to encode and recognize phases of matter and phase transitions in many-body systems. In particular, we analyze the (quasi-)order-disorder transitions in the classical Ising and XY models. Furthermore, we successfully use machine learning to study classical Z2 gauge theories, which have important technological applications in the coming wave of quantum information technologies and whose phase transitions have no conventional order parameter.
Protocol Handbook - A Guide for the Base Protocol Officer
1986-04-01
printed or otherwise impressed on paper napkins, boxes, or anything designed for temporary use and discard. Advertising signs are not to be... colored cloth and napkins are fine and add to the setting of the luncheon. There should be a centerpiece, but no candles are used on luncheon tables. The... time, place, dress, and occasion. They should be in the third person, in black ink, with the date and time written completely. The dress is handwritten
Poussin, Carine; Belcastro, Vincenzo; Martin, Florian; Boué, Stéphanie; Peitsch, Manuel C; Hoeng, Julia
2017-04-17
Systems toxicology intends to quantify the effect of toxic molecules in biological systems and unravel their mechanisms of toxicity. The development of advanced computational methods is required for analyzing and integrating high-throughput data generated for this purpose, as well as for extrapolating predictive toxicological outcomes and risk estimates. To ensure the performance and reliability of the methods and verify conclusions from systems toxicology data analysis, it is important to conduct unbiased evaluations by independent third parties. As a case study, we report here the results of an independent verification of methods and data in systems toxicology by crowdsourcing. The sbv IMPROVER systems toxicology computational challenge aimed to evaluate computational methods for the development of blood-based gene expression signature classification models with the ability to predict smoking exposure status. Participants created/trained models on blood gene expression data sets including smokers/mice exposed to 3R4F (a reference cigarette) or noncurrent smokers/Sham (mice exposed to air). Participants applied their models on unseen data to predict whether subjects classify closer to smoke-exposed or nonsmoke-exposed groups. The data sets also included data from subjects that had been exposed to potential modified risk tobacco products (MRTPs) or that had switched to a MRTP after exposure to conventional cigarette smoke. The scoring of anonymized participants' predictions was done using predefined metrics. The top 3 performers' methods predicted class labels with area under the precision-recall curve above 0.9. Furthermore, although various computational approaches were used, the crowd's results confirmed our own data analysis outcomes with regard to the classification of MRTP-related samples. Mice exposed directly to a MRTP were classified closer to the Sham group. After switching to a MRTP, the confidence that subjects belonged to the smoke-exposed group decreased significantly. Smoking exposure gene signatures that contributed to the group separation included a core set of genes highly consistent across teams, such as AHRR, LRRN3, SASH1, and P2RY6. In conclusion, crowdsourcing constitutes a pertinent approach, complementing the classical peer-review process, for independently and unbiasedly verifying computational methods and data for risk assessment using systems toxicology.
Merging Infrasound and Electromagnetic Signals as a Means for Nuclear Explosion Detection
NASA Astrophysics Data System (ADS)
Ashkenazy, Joseph; Lipshtat, Azi; Kesar, Amit S.; Pistinner, Shlomo; Ben Horin, Yochai
2016-04-01
The infrasound monitoring network of the CTBT consists of 60 stations. These stations are capable of detecting atmospheric events and may provide an approximate location within a time scale of a few hours. However, the nature of these events cannot be deduced from the infrasound signal. More than two decades ago it was proposed to use the electromagnetic pulse (EMP) as a means of discriminating nuclear explosions from other atmospheric events. An EMP is a unique signature of a nuclear explosion and is not detected from chemical ones. Nevertheless, it was decided to exclude EMP technology from the official CTBT verification regime, mainly because of the risk of a high false alarm rate due to lightning electromagnetic pulses [1]. Here we present a method of integrating the information retrieved from the infrasound system with the EMP signal, which enables us to discriminate between lightning discharges and nuclear explosions. Furthermore, we show how spectral and other characteristics of the electromagnetic signal emitted from a nuclear explosion are distinguished from those of a lightning discharge. We estimate the false alarm probability of detecting a lightning discharge from a given area of the infrasound event and identifying it as the signature of a nuclear explosion. We show that this probability is very low and conclude that the combination of infrasound monitoring and EMP spectral analysis may produce a reliable method for identifying nuclear explosions. [1] R. Johnson, Unfinished Business: The Negotiation of the CTBT and the End of Nuclear Testing, United Nations Institute for Disarmament Research, 2009.
Asymmetric micro-Doppler frequency comb generation via magnetoelectric coupling
NASA Astrophysics Data System (ADS)
Filonov, Dmitry; Steinberg, Ben Z.; Ginzburg, Pavel
2017-06-01
Electromagnetic scattering from moving bodies, being an inherently time-dependent phenomenon, gives rise to a generation of new frequencies, which can be used to characterize the motion. Whereas an ordinary motion along a linear path produces a constant Doppler shift, an accelerated scatterer can generate a micro-Doppler frequency comb. The spectra produced by rotating objects were studied and observed in a bistatic lock-in detection scheme. The internal geometry of a scatterer was shown to determine the spectrum, and the degree of structural asymmetry was suggested to be identified via signatures in the micro-Doppler comb. In particular, hybrid magnetoelectric particles, showing an ultimate degree of asymmetry in forward and backward scattering directions, were investigated. It was shown that the comb in the backward direction has signatures at the fundamental rotation frequency and its odd harmonics, whereas the comb of the forward scattered field has a prevailing peak at the doubled frequency and its multiples. Additional features of the comb were shown to be affected by the dimensions of the particle and by the strength of the magnetoelectric coupling. Experimental verification was performed with a printed circuit board antenna based on a wire and a split ring, while the structure was illuminated at a 2 GHz carrier frequency. Detailed analysis of micro-Doppler combs enables remote detection of asymmetric features of distant objects and could find use in a span of applications, including stellar radiometry and radio identification.
NASA Astrophysics Data System (ADS)
Slater, Lee; Niemi, Tina M.
2003-06-01
Ground-penetrating radar (GPR) was used in an effort to locate a major active fault that traverses Aqaba City, Jordan. Measurements over an exposed (trenched) cross fault outside of the city identify a radar signature consisting of linear events and horizontal offset/flexured reflectors both showing a geometric correlation with two known faults at a control site. The asymmetric linear events are consistent with dipping planar reflectors matching the known direction of dip of the faults. However, other observations regarding this radar signature render the mechanism generating these events more complex and uncertain. GPR measurements in Aqaba City were limited to vacant lots. Seven GPR profiles were conducted approximately perpendicular to the assumed strike of the fault zone, based on regional geological evidence. A radar response very similar to that obtained over the cross fault was observed on five of the profiles in Aqaba City, although the response is weaker than that obtained at the control site. The positions of the identified responses form a near straight line with a strike of 45°. Although subsurface verification of the fault by trenching within the city is needed, the geophysical evidence for fault zone location is strong. The location of the interpreted fault zone relative to emergency services, military bases, commercial properties, and residential areas is defined to within a few meters. This study has significant implications for seismic hazard analysis in this tectonically active and heavily populated region.
Kepler Planet Detection Metrics: Window and One-Sigma Depth Functions for Data Release 25
NASA Technical Reports Server (NTRS)
Burke, Christopher J.; Catanzarite, Joseph
2017-01-01
This document describes the window and one-sigma depth functions relevant to the Transiting Planet Search (TPS) algorithm in the Kepler pipeline (Jenkins 2002; Jenkins et al. 2017). The window function specifies the fraction of unique orbital ephemeris epochs over which three transits are observable as a function of orbital period. In this context, the epoch and orbital period, together, comprise the ephemeris of an orbiting companion, and ephemerides with the same period are considered equivalent if their epochs differ by an integer multiple of the period. The one-sigma depth function specifies the depth of a signal (in ppm) for a given light curve that results in a one-sigma detection of a transit signature as a function of orbital period when averaged over all unique orbital ephemerides. These planet detection metrics quantify the ability of TPS to detect a transiting planet signature on a star-by-star basis. They are uniquely applicable to a specific Kepler data release, since they are dependent on the details of the light curves searched and the functionality of the TPS algorithm used to perform the search. This document describes the window and one-sigma depth functions relevant to Kepler Data Release 25 (DR25), where the data were processed (Thompson et al. 2016) and searched (Twicken et al. 2016) with the SOC 9.3 pipeline. In Section 4, we describe significant differences from those reported in Kepler Data Release 24 (Burke & Seader 2016) and document our verification method.
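A toy version of the window function computation is sketched below: given a validity mask over cadences, it estimates, for each trial period, the fraction of epoch phases for which at least three transits land on valid cadences. The daily cadence, the 10% gap model, and the epoch sampling are assumptions for illustration, not the pipeline's actual procedure.

```python
import numpy as np

# Toy window function: fraction of epochs yielding >= 3 observable transits.
def window_function(valid, period, n_epochs=200):
    n, ok = len(valid), 0
    for epoch in np.linspace(0.0, period, n_epochs, endpoint=False):
        transit_cadences = np.arange(epoch, n, period).astype(int)
        if valid[transit_cadences].sum() >= 3:       # three transits observed
            ok += 1
    return ok / n_epochs

rng = np.random.default_rng(5)
valid = rng.random(1460) > 0.1        # ~4 years of daily cadences, 10% gaps
for period in (50, 200, 600):
    print(period, window_function(valid, period))    # drops at long periods
```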
Sausset, Solen; Lambert, Eric; Olive, Thierry
2013-01-01
The coordination of the various processes involved in language production is a subject of keen debate in writing research. Some authors hold that writing processes can be flexibly coordinated according to task demands, whereas others claim that process coordination is entirely inflexible. For instance, orthographic planning has been shown to be resource-dependent during handwriting, but inflexible in typing, even under time pressure. The present study therefore went one step further in studying flexibility in the coordination of orthographic processing and graphomotor execution, by measuring the impact of time pressure during a handwritten copy task. Orthographic and graphomotor processes were observed via syllable processing. Writers copied out two- and three-syllable words three times in a row, with and without time pressure. Latencies and letter measures at syllable boundaries were analyzed. We hypothesized that if coordination is flexible and varies according to task demands, it should be modified by time pressure, affecting both latency before execution and duration of execution. We therefore predicted that the extent of syllable processing before execution would be reduced under time pressure and, as a consequence, syllable effects during execution would be more salient. Results showed, however, that time pressure interacted neither with syllable number nor with syllable structure. Accordingly, syllable processing appears to remain the same regardless of time pressure. The flexibility of process coordination during handwriting is discussed, as is the operationalization of time pressure constraints. PMID:24319435
Bicket, Mark C.; Kattail, Deepa; Yaster, Myron; Wu, Christopher L.; Pronovost, Peter
2017-01-01
Objective: To determine opioid prescribing patterns and the rate of three types of errors, discrepancies, and variation from ideal practice. Design: Retrospective review of opioid prescriptions processed at an outpatient pharmacy. Setting: Tertiary institutional medical center. Patients: We examined 510 consecutive opioid medication prescriptions for adult patients processed at an institutional outpatient pharmacy in June 2016 for patient, provider, and prescription characteristics. Main Outcome Measures: We analyzed prescriptions for deviation from best practice guidelines, lack of two patient identifiers, and noncompliance with Drug Enforcement Agency (DEA) rules. Results: Mean patient age (SD) was 47.5 years (17.4). The most commonly prescribed opioid was oxycodone (71%), usually not combined with acetaminophen. Practitioners prescribed tablet formulation to 92% of the sample, averaging 57 (47) pills. We identified at least one error on 42% of prescriptions. Among all prescriptions, 9% deviated from best practice guidelines, 21% failed to include two patient identifiers, and 41% were noncompliant with DEA rules. Errors occurred in 89% of handwritten prescriptions, 0% of electronic health record (EHR) computer-generated prescriptions, and 12% of non-EHR computer-generated prescriptions. Inter-rater reliability by kappa was 0.993. Conclusions: Inconsistencies in opioid prescribing remain common. Handwritten prescriptions continue to demonstrate higher associations of errors, discrepancies, and variation from ideal practice and government regulations. All computer-generated prescriptions adhered to best practice guidelines and contained two patient identifiers, and all EHR prescriptions were fully compliant with DEA rules. PMID:28345746
Bonnemain, Bruno
2016-03-01
Penicher's pharmacopeia (1695) was part of the library of the College de Pharmacie. The inventory of this library was done in 1780 and is kept by the Library of the BIU Santé, Paris-Descartes University, in Paris, which recently digitized it. This copy contains handwritten texts that complete the original edition. The first main addition, at the beginning of the document, is three drug recipes in Latin, one of them well known in the early 18th century: the vulnerary balm of Leonardo Fioraventi (1517-1588), also known as Fioraventi's alcoholate. This product remained in the French Codex until 1949. Penicher's book also includes, at the end, three handwritten pages in French depicting the equipment of apothecaries. These drawings are very close to the ones in Charas' pharmacopeia. The additions are probably from the second half of the 18th century, but before the gift of the pharmacopeia to the College de Pharmacie by Fourcy in 1765. The author is unknown, but he is probably one of Fourcy's predecessors at the Pharmacie de l'Ours (Bear's pharmacy). This gift, made by Fourcy when he joined the Community of Parisian pharmacists, did not prevent his fellow pharmacists from sentencing him a few years later for selling "Chinese specialties" that a certain Jean-Daniel Smith, a physician established in Paris, had asked him to prepare.
NASA Astrophysics Data System (ADS)
Ringbom, A.
2010-12-01
A detailed knowledge of both the spatial and isotopic distribution of anthropogenic radioxenon is essential in investigations of the performance of the radioxenon part of the IMS, as well as in the development of techniques to discriminate the radioxenon signatures of a nuclear explosion from other sources. Further, the production processes in the facilities causing the radioxenon background must be understood and shown to be compatible with simulations. In this work, several aspects of the observed atmospheric radioxenon background are investigated, including the global distribution as well as the current understanding of the observed isotopic ratios. Analyzed radioxenon data from the IMS, as well as from other measurement stations, are used to create an up-to-date description of the global radioxenon background, including all four CTBT-relevant xenon isotopes (133Xe, 131mXe, 133mXe, and 135Xe). In addition, measured isotopic ratios will be compared to simulations of neutron-induced fission of 235U, and the uncertainties will be discussed. Finally, the impact of the radioxenon background on the detection capability of the IMS will be investigated. This work is a continuation of studies [1,2] that were presented at the International Scientific Studies conference held in Vienna in 2009. [1] A. Ringbom, et al., "Characterization of the global distribution of atmospheric radioxenons", International Scientific Studies Conference on CTBT Verification, 10-12 June 2009. [2] R. D'Amours and A. Ringbom, "A study on the global detection capability of IMS for all CTBT relevant xenon isotopes", International Scientific Studies Conference on CTBT Verification, 10-12 June 2009.
NASA Astrophysics Data System (ADS)
Zhou, Zheng; Liu, Chen; Shen, Wensheng; Dong, Zhen; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2017-04-01
A binary spike-time-dependent plasticity (STDP) protocol based on one resistive-switching random access memory (RRAM) device was proposed and experimentally demonstrated in the fabricated RRAM array. Based on the STDP protocol, a novel unsupervised online pattern recognition system including RRAM synapses and CMOS neurons is developed. Our simulations show that the system can efficiently complete the handwritten digit recognition task, which indicates the feasibility of using the RRAM-based binary STDP protocol in neuromorphic computing systems to obtain good performance.
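A minimal sketch of a binary STDP rule of this general kind is given below; the timing window and the SET/RESET mapping are assumptions for illustration, not the fabricated array's measured protocol. On a postsynaptic spike, each single-device binary synapse is SET if its presynaptic neuron fired within the preceding window and RESET otherwise.

```python
import numpy as np

# Illustrative binary STDP update on single-RRAM synapses (assumed protocol).
def binary_stdp(weights, last_pre_spike, t_post, window=10.0):
    dt = t_post - last_pre_spike
    potentiate = (dt >= 0.0) & (dt <= window)   # causal, recent pre-activity
    weights[potentiate] = 1                     # SET: potentiate
    weights[~potentiate] = 0                    # RESET: depress
    return weights

w = np.zeros(8, dtype=int)
last_pre = np.array([1.0, 5.0, 9.0, 14.0, 18.0, 2.0, 25.0, 7.0])
print(binary_stdp(w, last_pre, t_post=20.0))    # [0 0 0 1 1 0 0 0]
```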
Standard classification of software documentation
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
General conceptual requirements are presented for standard levels of documentation and for the application of these requirements to intended usages. These standards encourage a policy of producing only those forms of documentation that are needed and adequate for the purpose. Documentation standards are defined with respect to detail and format quality. Classes A through D range, in order, from the most definitive down to the least definitive, and categories 1 through 4 range, in order, from high-quality typeset down to handwritten material. Criteria for each of the classes and categories, as well as suggested selection guidelines for each, are given.
TASI Lectures on Flavor Physics
NASA Astrophysics Data System (ADS)
Ligeti, Zoltan
These notes overlap with lectures given at the TASI summer schools in 2014 and 2011, as well as at the European School of High Energy Physics in 2013. This is primarily an attempt at transcribing my handwritten notes, with emphasis on topics and ideas discussed in the lectures. It is not a comprehensive introduction or review of the field, nor does it include a complete list of references. I hope, however, that some may find it useful to better understand the reasons for excitement about recent progress and future opportunities in flavor physics.
Optical Verification Laboratory Demonstration System for High Security Identification Cards
NASA Technical Reports Server (NTRS)
Javidi, Bahram
1997-01-01
Document fraud, including unauthorized duplication of identification cards and credit cards, is a serious problem facing the government, banks, businesses, and consumers. In addition, counterfeit products such as computer chips and compact discs are arriving on our shores in great numbers. With the rapid advances in computers, CCD technology, image processing hardware and software, printers, scanners, and copiers, it is becoming increasingly easy to reproduce pictures, logos, symbols, paper currency, or patterns. These problems have stimulated an interest in research, development, and publications in security technology. Some ID cards, credit cards, and passports currently use holograms as a security measure to thwart copying. The holograms are inspected by the human eye. In theory, the hologram cannot be reproduced by an unauthorized person using commercially available optical components; in practice, however, technology has advanced to the point where the holographic image can be acquired from a credit card (photographed or captured with a CCD camera) and a new hologram synthesized using commercially available optical components or hologram-producing equipment. Therefore, a pattern that can be read by a conventional light source and a CCD camera can be reproduced. An optical security and anti-copying device that provides significant security improvements over existing security technology was demonstrated. The system can be applied to security verification of credit cards, passports, and other IDs so that they cannot easily be reproduced. We have used a new scheme of complex phase/amplitude patterns that cannot be seen and cannot be copied by an intensity-sensitive detector such as a CCD camera. A random phase mask is bonded to a primary identification pattern, which could also be phase encoded. The pattern could be a fingerprint, a picture of a face, or a signature. The proposed optical processing device is designed to identify both the random phase mask and the primary pattern [1-3]. We have demonstrated experimentally an optical processor for security verification of objects, products, and persons. This demonstration is very important to encourage industries to consider the proposed system for research and development.
30 CFR 250.913 - When must I resubmit Platform Verification Program plans?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Structures Platform Verification Program § 250.913 When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication verification, or installation verification...
Generic Verification Protocol for Verification of Online Turbidimeters
This protocol provides generic procedures for implementing a verification test for the performance of online turbidimeters. The verification tests described in this document will be conducted under the Environmental Technology Verification (ETV) Program. Verification tests will...
Experiments with the Dragon Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.E. Malenfant
2005-08-12
The basic characteristics of a self-sustaining chain reaction were demonstrated with the Chicago Pile in 1943, but it was not until early 1945 that sufficient enriched material became available to experimentally verify fast-neutron cross-sections and the kinetic characteristics of a nuclear chain reaction sustained with prompt neutrons alone. However, the demands of wartime and the rapid decline in effort following the cessation of hostilities often resulted in the failure to fully document the experiments or in the loss of documentation as personnel returned to civilian pursuits. When documented, the results were often highly classified. Even when eventually declassified, the data were often not approved for public release until years later. Even after declassification and approval for public release, the records are sometimes difficult to find. Through a fortuitous discovery, a set of handwritten notes by "ORF July 1945" entitled "Dragon - Research with a Pulsed Fission Reactor" was found by William L. Myers in an old storage safe at Pajarito Site of the Los Alamos National Laboratory. Of course, ORF was identified as Otto R. Frisch. The document was attached to a page in a nondescript spiral-bound notebook labeled "494 Book" that bore the signatures of Louis Slotin and P. Morrison. The notes also reference an "Idea LS" that can only be Louis Slotin. The discovery of the notes led to a search of Laboratory Archives, the negative files of the photo lab, and the Report Library for additional details of the experiments with the Dragon machine that were conducted between January and July 1945. The assembly machine and the experiments were carefully conceived and skillfully executed. The analyses, carried out without the crutch of computers, display real insight into the characteristics of the nuclear chain reaction. The information presented here provides what is believed to be a complete collection of the original documentation of the observations made with the Dragon Machine in early 1945.
Verifying Safety Messages Using Relative-Time and Zone Priority in Vehicular Ad Hoc Networks.
Banani, Sam; Gordon, Steven; Thiemjarus, Surapa; Kittipiyakul, Somsak
2018-04-13
In high-density road networks, with each vehicle broadcasting multiple messages per second, the arrival rate of safety messages can easily exceed the rate at which digital signatures can be verified. Since not all messages can be verified, algorithms for selecting which messages to verify are required to ensure that each vehicle receives appropriate awareness about neighbouring vehicles. This paper presents a novel scheme to select important safety messages for verification in vehicular ad hoc networks (VANETs). The proposed scheme uses location and direction of the sender, as well as proximity and relative-time between vehicles, to reduce the number of irrelevant messages verified (i.e., messages from vehicles that are unlikely to cause an accident). Compared with other existing schemes, the analysis results show that the proposed scheme can verify messages from nearby vehicles with lower inter-message delay and reduced packet loss and thus provides high level of awareness of the nearby vehicles.
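A sketch of the selection step under assumed scoring follows; the priority formula, weights, and message fields here are illustrative assumptions, not the paper's exact scheme. Incoming messages are ranked by sender proximity and freshness, and only as many top-ranked messages as the signature budget allows are passed to verification.

```python
import heapq
import math

# Hedged sketch of priority-based verification selection.
def priority(msg, me):
    dist = math.hypot(msg["x"] - me["x"], msg["y"] - me["y"])
    age = me["t"] - msg["t"]
    return 1.0 / (1.0 + dist) + 1.0 / (1.0 + age)   # nearer and newer first

def select_for_verification(messages, me, budget):
    return heapq.nlargest(budget, messages, key=lambda m: priority(m, me))

me = {"x": 0.0, "y": 0.0, "t": 10.0}
msgs = [
    {"id": 1, "x": 5.0, "y": 0.0, "t": 9.8},     # close and fresh
    {"id": 2, "x": 120.0, "y": 40.0, "t": 9.9},  # far away
    {"id": 3, "x": 2.0, "y": 1.0, "t": 9.5},     # very close, slightly older
]
print([m["id"] for m in select_for_verification(msgs, me, budget=2)])  # [1, 3]
```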
ERTS-1 data applied to strip mining
NASA Technical Reports Server (NTRS)
Anderson, A. T.; Schubert, J.
1976-01-01
Two coal basins within the western region of the Potomac River Basin contain the largest strip-mining operations in western Maryland and West Virginia. The disturbed strip-mine areas were delineated along with the surrounding geological and vegetation features by using ERTS-1 data in both analog and digital form. The two digital systems employed were (1) the ERTS analysis system, a point-by-point digital analysis of spectral signatures based on known spectral values and (2) the LARS automatic data processing system. These two systems aided in efforts to determine the extent and state of strip mining in this region. Aircraft data, ground-verification information, and geological field studies also aided in the application of ERTS-1 imagery to perform an integrated analysis that assessed the adverse effects of strip mining. The results indicated that ERTS can both monitor and map the extent of strip mining to determine immediately the acreage affected and to indicate where future reclamation and revegetation may be necessary.
Detection of aspen-conifer forest mixes from LANDSAT digital data. [Utah-Idaho Bear River Range
NASA Technical Reports Server (NTRS)
Jaynes, R. A.; Merola, J. A.
1982-01-01
Aspen, conifer and mixed aspen/conifer forests were mapped for a 15-quadrangle study area in the Utah-Idaho Bear River Range using LANDSAT multispectral scanner data. Digital classification and statistical analysis of LANDSAT data allowed the identification of six groups of signatures which reflect different types of aspen/conifer forest mixing. Photo interpretations of the print symbols suggest that such classes are indicative of mid to late seral aspen forests. Digital print map overlays and acreage calculations were prepared for the study area quadrangles. Further field verification is needed to acquire additional information about the nature of the forests. Single date LANDSAT analysis should be a cost effective means to index aspen forests which are at least in the mid seral phase of conifer invasion. Since aspen canopies tend to obscure understory conifers for early seral forests, a second date analysis, using data taken when aspens are leafless, could provide information about early seral aspen forests.
Observation of a Discrete Time Crystal
NASA Astrophysics Data System (ADS)
Kyprianidis, A.; Zhang, J.; Hess, P.; Becker, P.; Lee, A.; Smith, J.; Pagano, G.; Potter, A.; Vishwanath, A.; Potirniche, I.-D.; Yao, N.; Monroe, C.
2017-04-01
Spontaneous symmetry breaking is a key concept in the understanding of many physical phenomena, such as the formation of spatial crystals and the phase transition from paramagnetism to magnetic order. While the breaking of time translation symmetry is forbidden in equilibrium systems, it is possible for non-equilibrium Floquet driven systems to break a discrete time translation symmetry, and we present clear signatures of the formation of such a discrete time crystal. We apply a time periodic Hamiltonian to a chain of interacting spins under many-body localization conditions and observe the system's sub-harmonic response at twice that period. This spontaneous doubling of the periodicity is robust to external perturbations. We represent the spins with a linear chain of trapped 171Yb+ ions in an rf Paul trap, generate spin-spin interactions through spin-dependent optical dipole forces, and measure each spin using state-dependent fluorescence. This work is supported by the ARO Atomic Physics Program, the AFOSR MURI on Quantum Measurement and Verification, and the NSF Physics Frontier Center at JQI.
Wess, Mark L.; Embi, Peter J.; Besier, James L.; Lowry, Chad H.; Anderson, Paul F.; Besier, James C.; Thelen, Geriann; Hegner, Catherine
2007-01-01
Computerized Provider Order Entry (CPOE) has been demonstrated to improve the medication ordering process, but most published studies have been performed at academic hospitals. Little is known about the effects of CPOE at community hospitals. With a pre-post study design, we assessed the effects of a CPOE system on the medication ordering process at both a community and a university hospital. The time from provider ordering to pharmacist verification decreased by two hours with CPOE at the community hospital (p<0.0001) and by one hour at the university hospital (p<0.0001). At the community hospital, the rate of medication clarifications requiring a signature was 2.80 percent pre-CPOE and 0.40 percent with CPOE (p<0.0001); at the university hospital, it was 2.76 percent pre-CPOE and 0.46 percent with CPOE (p<0.0001). CPOE improved medication order processing at both community and university hospitals. These findings add to the limited literature on CPOE in community hospitals. PMID:18693946
Experimental verification of propeller noise prediction
NASA Technical Reports Server (NTRS)
Succi, G. P.; Munro, D. H.; Zimmer, J. A.
1980-01-01
Results of experimental measurements of the sound fields of 1/4-scale general aviation propellers are presented, and the experimental wake surveys and pressure signatures obtained are compared with theoretical predictions. Experiments were performed primarily on a 1C160 propeller model mounted in front of a symmetric body in an anechoic wind tunnel, and measured the thrust and torque produced by the propeller at different rotation speeds and tunnel velocities, the wakes at three axial distances, and the sound pressure at various azimuths and tip speeds with advance ratio or tunnel velocity held constant. Aerodynamic calculations of blade loading were performed using airfoil section characteristics and a modified strip-analysis procedure. The propeller was then modeled as an array of point sound sources, with each point characterized by the force and volume of the corresponding propeller section, in order to obtain the acoustic characteristics. Measurements are found to agree with predictions over a wide range of operating conditions, tip speeds, and propeller-nacelle combinations, without the use of adjustable constants.
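The point-source model described above lends itself to a compact numerical sketch. The following illustration is not the authors' code; the observer position, source strengths, and blade-passing frequency are invented, and each source simply contributes a retarded, 1/r-weighted copy of its time history.

```python
import numpy as np

C = 340.0                                # speed of sound, m/s

def pressure_at_observer(t, sources, signal):
    """Sum retarded, 1/r-weighted contributions of point sources.

    sources: list of (position (3,), strength) tuples
    signal:  callable s(t) giving the source time history
    """
    p = np.zeros_like(t)
    obs = np.array([0.0, 0.0, 10.0])     # observer 10 m away on the axis
    for pos, strength in sources:
        r = np.linalg.norm(obs - np.asarray(pos))
        p += strength / r * signal(t - r / C)   # retarded time t - r/c
    return p

# Example: three sections of one blade, driven at a made-up blade-passing frequency.
bpf = 80.0                               # Hz, illustrative only
sources = [((0.2, 0.0, 0.0), 1.0), ((0.4, 0.0, 0.0), 1.5), ((0.6, 0.0, 0.0), 0.8)]
t = np.linspace(0.0, 0.05, 2000)
p = pressure_at_observer(t, sources, lambda tau: np.sin(2 * np.pi * bpf * tau))
```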
Verifying Safety Messages Using Relative-Time and Zone Priority in Vehicular Ad Hoc Networks
Banani, Sam; Thiemjarus, Surapa; Kittipiyakul, Somsak
2018-01-01
In high-density road networks, with each vehicle broadcasting multiple messages per second, the arrival rate of safety messages can easily exceed the rate at which digital signatures can be verified. Since not all messages can be verified, algorithms for selecting which messages to verify are required to ensure that each vehicle receives appropriate awareness about neighbouring vehicles. This paper presents a novel scheme to select important safety messages for verification in vehicular ad hoc networks (VANETs). The proposed scheme uses the location and direction of the sender, as well as the proximity and relative time between vehicles, to reduce the number of irrelevant messages verified (i.e., messages from vehicles that are unlikely to cause an accident). Compared with other existing schemes, the analysis results show that the proposed scheme can verify messages from nearby vehicles with lower inter-message delay and reduced packet loss, and thus provides a high level of awareness of nearby vehicles. PMID:29652840
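The selection idea can be sketched as a priority queue: score each incoming message by sender proximity, approach direction, and age, then spend the limited signature-verification budget on the top-ranked messages. The field names and weights below are assumptions for illustration, not the paper's actual scheme.

```python
import math, heapq

def priority(msg, me):
    """Higher score = more safety-relevant sender (illustrative weighting)."""
    dx, dy = msg["x"] - me["x"], msg["y"] - me["y"]
    dist = math.hypot(dx, dy)
    # Vehicles heading toward us matter more than ones moving away.
    closing = -(dx * msg["vx"] + dy * msg["vy"]) / (dist + 1e-9)
    age = me["now"] - msg["timestamp"]          # stale messages matter less
    return closing / (1.0 + dist) - 0.5 * age   # weights are arbitrary here

def select_for_verification(messages, me, budget):
    """Return the `budget` highest-priority messages to verify this cycle."""
    return heapq.nlargest(budget, messages, key=lambda m: priority(m, me))
```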
Tongue prints: A novel biometric and potential forensic tool.
Radhika, T; Jeddy, Nadeem; Nithya, S
2016-01-01
The tongue is a vital internal organ, well encased within the oral cavity and protected from the environment. It has unique features which differ from individual to individual, and even between identical twins. The color, shape, and surface features are characteristic of every individual, and this serves as a tool for identification. Many modes of biometric systems have come into existence, such as fingerprint, iris scan, skin color, signature verification, voice recognition, and face recognition. The search for a new, secure method of personal identification has led to the use of the lingual impression, or tongue print, as a method of biometric authentication. Tongue characteristics exhibit sexual dimorphism, thus aiding in the identification of the person. Emerging as a novel biometric tool, tongue prints also hold the promise of a potential forensic tool. This review highlights the uniqueness of tongue prints and their superiority over other biometric identification systems. The various methods of tongue print collection and the classification of tongue features are also elucidated.
Verification Games: Crowd-Sourced Formal Verification
2016-03-01
Verification Games: Crowd-Sourced Formal Verification. University of Washington, March 2016, final technical report (dates covered: June 2012 – September 2015; contract number FA8750...). Abstract: Over the more than three years of the project Verification Games: Crowd-sourced...
Greifeneder, Rainer; Zelt, Sarah; Seele, Tim; Bottenberg, Konstantin; Alt, Alexander
2012-09-01
Handwriting legibility systematically biases evaluations, in that highly legible handwriting results in more positive evaluations than less legible handwriting. Because performance assessments in educational contexts are not only based on computerized or multiple-choice tests but often include the evaluation of handwritten work samples, understanding the causes of this bias is critical. This research was designed to replicate and extend the legibility bias in two tightly controlled experiments and to explore whether gender-based inferences contribute to its occurrence. A total of 132 students from a German university participated in one pre-test and two independent experiments. Participants were asked to read and evaluate several handwritten essays varying in content quality. Each essay was presented to some participants in highly legible handwriting and to other participants in less legible handwriting. In addition, the assignment of legibility to participant group was reversed from essay to essay, resulting in a mixed-factor design. The legibility bias was replicated in both experiments. Results suggest that gender-based inferences do not account for its occurrence; rather, it appears that fluency from legibility exerts a biasing impact on evaluations of content and author abilities. The legibility bias was shown to be genuine and strong. By refuting a series of alternative explanations, this research contributes to a better understanding of what underlies the legibility bias. The present research may inform graders about what to focus on and thus help them better allocate cognitive resources when trying to reduce this important source of error. ©2011 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Sussman, A. J.; Anderson, D.; Burt, C.; Craven, J.; Kimblin, C.; McKenna, I.; Schultz-Fellenz, E. S.; Miller, E.; Yocky, D. A.; Haas, D.
2016-12-01
Underground nuclear explosions (UNEs) result in numerous signatures that manifest on a wide range of temporal and spatial scales. Currently, prompt signals, such as the detection of seismic waves, provide only generalized locations, and the timing and amplitude of non-prompt signals are difficult to predict. As such, research into improving the detection, location, and identification of suspect events has been conducted, resulting in advances in nuclear test detection science. In this presentation, we demonstrate the scale variability of surface and subsurface observables, briefly discuss current capabilities to detect, locate, and characterize potential nuclear explosion sites, and explain how emergent technologies and the amalgamation of disparate data sets will facilitate improved monitoring and verification. At the smaller scales, material and fracture characterization efforts on rock collected from legacy UNE sites and from underground experiments using chemical explosions can be incorporated into predictive modeling efforts. Spatial analyses of digital elevation models and orthoimagery of both modern conventional and legacy nuclear sites show subtle surface topographic changes and damage at nearby outcrops. Additionally, at sites where such technology cannot penetrate vegetative cover, it is possible to use the vegetation itself as a companion signature, both reflecting geologic conditions and showing subsurface impacts to water, nutrients, and chemicals. Aerial systems based on RGB imagery, light detection and ranging, and hyperspectral imaging can allow for combined remote sensing modalities to perform pattern recognition and classification tasks. Finally, more remote systems such as satellite-based synthetic aperture radar and satellite imagery are other techniques in development for UNE site detection, location, and characterization.
The SeaHorn Verification Framework
NASA Technical Reports Server (NTRS)
Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.
2015-01-01
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
Maximum entropy PDF projection: A review
NASA Astrophysics Data System (ADS)
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
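The projection step itself can be stated compactly. The following is a sketch of the PDF projection formula as it is commonly written in this literature, with p_0 denoting an assumed reference density; it is included for orientation, not as a reproduction of the paper's derivation.

```latex
% Given a feature map z = T(x), a reference density p_0(x) with induced
% feature density p_{0,z}(z), and the known feature density p(z), the
% projected density on x is
\[
  p(x) \;=\; \frac{p\bigl(T(x)\bigr)}{p_{0,z}\bigl(T(x)\bigr)}\, p_0(x),
\]
% which integrates to one and reproduces p(z) under T. MaxEnt PDF
% projection chooses the reference p_0 so that p(x) has maximum entropy
% among all densities consistent with p(z).
```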
Two-stage approach to keyword spotting in handwritten documents
NASA Astrophysics Data System (ADS)
Haji, Mehdi; Ameri, Mohammad R.; Bui, Tien D.; Suen, Ching Y.; Ponson, Dominique
2013-12-01
Separation of keywords from non-keywords is the main problem in keyword spotting systems, and it has traditionally been approached with simplistic methods such as thresholding of recognition scores. In this paper, we analyze this problem from a machine learning perspective, and we study several standard machine learning algorithms specifically in the context of non-keyword rejection. We propose a two-stage approach to keyword spotting and provide a theoretical analysis of the performance of the system, which gives insights on how to design the classifier in order to maximize the overall performance in terms of F-measure.
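One common way to design the rejection stage around F-measure is to sweep the decision threshold on held-out data and keep the value that maximizes F1. A minimal sketch under that assumption (this is not the paper's specific classifier):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_threshold(scores, is_keyword):
    """Pick the score threshold that maximizes F1 on held-out data."""
    precision, recall, thresholds = precision_recall_curve(is_keyword, scores)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return thresholds[np.argmax(f1[:-1])]   # last P/R point has no threshold

# Usage with made-up scores:
scores = np.array([0.9, 0.2, 0.75, 0.4, 0.85, 0.1])
labels = np.array([1, 0, 1, 0, 1, 0])
tau = best_threshold(scores, labels)        # accept as keyword if score >= tau
```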
Online Farsi digit recognition using their upper half structure
NASA Astrophysics Data System (ADS)
Ghods, Vahid; Sohrabi, Mohammad Karim
2015-03-01
In this paper, we investigated the efficiency of the upper-half structure of Farsi numerical digits. In other words, half of the data (the upper half of each digit shape) was exploited for the recognition of Farsi numerical digits. This method can be used for both offline and online recognition. Using half of the data improves processing speed and data transfer and, in this application, accuracy. A hidden Markov model (HMM) was used to classify online Farsi digits. Evaluation was performed on the TMU dataset, which contains more than 1200 samples of online handwritten Farsi digits. The proposed method yielded a higher recognition rate.
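A per-class HMM classifier of the kind the abstract describes can be sketched with hmmlearn. The feature choice (raw pen coordinates), the state count, and the upper-half cropping rule below are illustrative assumptions; the paper's exact preprocessing of the TMU data is not reproduced here.

```python
import numpy as np
from hmmlearn import hmm

def upper_half(traj):
    """Keep only trajectory points above the sample's vertical midline
    (assumes y grows upward; traj is an (n, 2) array of pen points)."""
    mid = (traj[:, 1].min() + traj[:, 1].max()) / 2.0
    return traj[traj[:, 1] >= mid]

def train_models(samples_by_digit, n_states=5):
    """Fit one Gaussian HMM per digit class on upper-half trajectories."""
    models = {}
    for digit, trajs in samples_by_digit.items():
        halves = [upper_half(t) for t in trajs]
        X = np.vstack(halves)
        lengths = [len(h) for h in halves]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=20, random_state=0)
        m.fit(X, lengths)
        models[digit] = m
    return models

def classify(models, traj):
    """Label = class whose HMM gives the highest log-likelihood."""
    h = upper_half(traj)
    return max(models, key=lambda d: models[d].score(h))
```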
NASA Technical Reports Server (NTRS)
Kiang, Richard K.
1992-01-01
Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.
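A three-layer (one hidden layer) feed-forward classifier of the kind described is easy to sketch with scikit-learn. The band count, class count, and data below are synthetic placeholders, not Landsat-4 TM values, and the OCR-derived training refinements the abstract alludes to are omitted.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 7))            # 7 TM bands per pixel (synthetic)
y = rng.integers(0, 5, size=1000)    # 5 made-up land-cover classes

clf = MLPClassifier(hidden_layer_sizes=(32,),   # single hidden layer
                    activation="logistic",      # classic sigmoid units
                    max_iter=500, random_state=0)
clf.fit(X, y)
pred = clf.predict(X[:10])           # per-pixel class labels
```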
Invariant approach to the character classification
NASA Astrophysics Data System (ADS)
Šariri, Kristina; Demoli, Nazif
2008-04-01
Image moment analysis is a very useful tool which allows image description invariant to translation, rotation, scale change, and some types of image distortion. The aim of this work was the development of a simple method for fast and reliable classification of characters using Hu's and affine moment invariants. Euclidean distance was used as the discrimination measure, with statistical parameters estimated. The method was tested on classification of Times New Roman font letters as well as sets of handwritten characters. It is shown that using all of Hu's invariants and three affine invariants as the discrimination set improves the recognition rate by 30%.
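The pipeline the abstract describes, moment invariants plus nearest-template Euclidean distance, can be sketched with OpenCV. The log scaling of Hu's invariants is common practice rather than something taken from this paper:

```python
import cv2
import numpy as np

def hu_features(img):
    """img: binary (0/255) character image -> 7 Hu invariants."""
    h = cv2.HuMoments(cv2.moments(img)).flatten()
    # Log-scale the invariants; they span many orders of magnitude.
    return -np.sign(h) * np.log10(np.abs(h) + 1e-30)

def classify(img, templates):
    """templates: dict label -> template feature vector.
    Returns the label of the nearest template in Euclidean distance."""
    f = hu_features(img)
    return min(templates, key=lambda k: np.linalg.norm(f - templates[k]))
```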
Detail view in engine bay three in the aft ...
Detail view in engine bay three in the aft fuselage of the Orbiter Discovery. This view shows the engine interface fittings and the hydraulic-actuator support structure. The propellant feed lines are the large plugged and capped orifices. Note the handwritten references on the thrust plate in proximity to the actuators that read E3 Pitch and E3 Yaw. This view was taken from a service platform in the Orbiter Processing Facility at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
The data life cycle applied to our own data.
Goben, Abigail; Raszewski, Rebecca
2015-01-01
Increased demand for data-driven decision making is driving the need for librarians to be facile with the data life cycle. This case study follows the migration of reference desk statistics from handwritten to digital format. This shift presented two opportunities: first, the availability of a nonsensitive data set to improve the librarians' understanding of data-management and statistical analysis skills, and second, the use of analytics to directly inform staffing decisions and departmental strategic goals. By working through each step of the data life cycle, library faculty explored data gathering, storage, sharing, and analysis questions.
Mobile measurement of methane: plumes, isotopes and inventory verification
NASA Astrophysics Data System (ADS)
Lowry, D.; Zazzeri, G.; Fisher, R. E.; France, J.; Al-Shalaan, A.; Lanoisellé, M.; Nisbet, E. G.
2015-12-01
Since 2013 the RHUL group has been identifying methane plumes from major UK sources using a Picarro 2301 coupled to the A0941 mobile module. Once identified, the plumes have been sampled by filling Tedlar or Flexfoil bags for later carbon isotopic analysis by high-precision IRMS. This method has been successfully deployed to isotopically characterize the main anthropogenic methane emitters in the UK (natural gas, coal, landfill, wastewater treatment, cattle; Zazzeri et al., 2015) and during overseas campaigns in eastern Australia (coal, cattle, legacy gas wells) and Kuwait (landfill, wastewater treatment, oil refineries, cattle, camels). This has identified strong similarities of isotopic signature for some sources (landfill, cattle), but large variations for others (natural gas, coal), which must be isotopically resolved at regional scale. Both landfill and natural gas emissions in SE England have tightly constrained δ13C signatures, averaging -58 ± 3‰ and -36 ± 2‰, respectively, the latter being characteristic of the homogenised North Sea gas supply. In contrast, signatures for coal mines in England and Wales fall in a range of -51.2 ± 0.3‰ to -30.9 ± 1.4‰, but can be tightly constrained by region. On a local scale in west London, repeat surveys in the boroughs of Hounslow and Runnymede have been made for comparison with the latest 1×1 km grid UK inventories for 2009 and 2012, which are subdivided by UNECE categories. An excess methane map can be derived for comparison with inventory emission maps by identifying the daily background and binning the excess values from mobile measurements by grid square. This shows that the spatial distribution of emissions in the UK 2012 inventory is a big improvement on that of 2009. It also suggests that there is an overestimation of emissions from old landfills (closed before 2000 and reliant on a topsoil cap for oxidation), and an underestimation of emissions from currently active landfill cells. Zazzeri, G. et al. (2015) Plume mapping and isotopic characterization of anthropogenic methane sources, Atmospheric Environment, 110, 151-162.
GHG MITIGATION TECHNOLOGY PERFORMANCE EVALUATIONS UNDERWAY AT THE GHG TECHNOLOGY VERIFICATION CENTER
The paper outlines the verification approach and activities of the Greenhouse Gas (GHG) Technology Verification Center, one of 12 independent verification entities operating under the U.S. EPA-sponsored Environmental Technology Verification (ETV) program. (NOTE: The ETV program...
Collinear cluster tri-partition: Kinematics constraints and stability of collinearity
NASA Astrophysics Data System (ADS)
Holmvall, P.; Köster, U.; Heinz, A.; Nilsson, T.
2017-01-01
Background: A new mode of nuclear fission has been proposed by the FOBOS Collaboration, called collinear cluster tri-partition (CCT), which suggests that three heavy fission fragments can be emitted perfectly collinearly in low-energy fission. This claim is based on indirect observations via missing-energy events using the 2v2E method. This proposed CCT seems to be an extraordinary new aspect of nuclear fission. It is surprising that CCT escaped observation for so long given the relatively high reported yield of roughly 0.5% relative to binary fission. These claims call for an independent verification with a different experimental technique. Purpose: Verification experiments based on direct observation of CCT fragments with fission-fragment spectrometers require guidance with respect to the allowed kinetic-energy range, which we present in this paper. Furthermore, we discuss corresponding model calculations which, if CCT is found in such verification experiments, could indicate how the breakups proceed. Since CCT refers to collinear emission, we also study the intrinsic stability of collinearity. Methods: Three different decay models are used that together span the timescales of three-body fission. These models are used to calculate the possible kinetic-energy ranges of CCT fragments by varying fragment mass splits, excitation energies, neutron multiplicities, and scission-point configurations. Calculations are presented for the systems 235U(nth,f) and 252Cf(sf), and the fission fragments previously reported for CCT, namely, isotopes of the elements Ni, Si, Ca, and Sn. In addition, we use semiclassical trajectory calculations with a Monte Carlo method to study the intrinsic stability of collinearity. Results: CCT has a high net Q value but, in a sequential decay, the intermediate steps are energetically and geometrically unfavorable or even forbidden. Moreover, perfect collinearity is extremely unstable, and broken by the slightest perturbation. Conclusions: According to our results, the central fragment would be very difficult to detect due to its low kinetic energy, raising the question of why other 2v2E experiments could not detect a missing-mass signature corresponding to CCT. Considering the high kinetic energies of the outer fragments reported in our study, direct-observation experiments should be able to observe CCT. Furthermore, we find that a realization of CCT would require an unphysical fine-tuning of the initial conditions. Finally, our stability calculations indicate that, due to the pronounced instability of the collinear configuration, a prolate scission configuration does not necessarily lead to collinear emission, nor does equatorial emission necessarily imply an oblate scission configuration. In conclusion, our results enable independent experimental verification and encourage further critical theoretical studies of CCT.
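The energetics argument for a sequential collinear split can be illustrated with elementary non-relativistic kinematics: each binary stage shares its energy release by momentum conservation, and collinear velocity addition then gives the lab energies. The masses and Q-values below are placeholders, not the paper's inputs; units are internally consistent (masses in u, energies in MeV).

```python
import numpy as np

def two_body(m_a, m_b, Q):
    """Return speeds (v_a, v_b) of a back-to-back split releasing Q.
    Momentum conservation gives E_a = Q * m_b / (m_a + m_b)."""
    E_a = Q * m_b / (m_a + m_b)
    E_b = Q - E_a
    return np.sqrt(2 * E_a / m_a), np.sqrt(2 * E_b / m_b)

m1, m2, m3 = 132.0, 48.0, 72.0       # fragment masses in u (illustrative)
Q1, Q2 = 150.0, 30.0                 # MeV released at each step (illustrative)

v1, v23 = two_body(m1, m2 + m3, Q1)  # first split: fragment 1 vs (2+3)
u2, u3 = two_body(m2, m3, Q2)        # second split in the (2+3) rest frame
# Collinear emission: outer fragment gains speed, central fragment's lab
# speed v23 - u3 can be close to zero, hence its low kinetic energy.
v2_lab, v3_lab = v23 + u2, v23 - u3
E = lambda m, v: 0.5 * m * v**2
print(E(m1, v1), E(m2, v2_lab), E(m3, v3_lab))   # lab kinetic energies, MeV
```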
18 CFR 281.213 - Data Verification Committee.
Code of Federal Regulations, 2011 CFR
2011-04-01
18 Conservation of Power and Water Resources 1 (2011-04-01). § 281.213 Data Verification Committee. (a) Each interstate pipeline shall establish a Data Verification... (e) The Data Verification Committee shall prepare a report concerning the proposed index of... preparation.
18 CFR 281.213 - Data Verification Committee.
Code of Federal Regulations, 2010 CFR
2010-04-01
18 Conservation of Power and Water Resources 1 (2010-04-01). § 281.213 Data Verification Committee. (a) Each interstate pipeline shall establish a Data Verification... (e) The Data Verification Committee shall prepare a report concerning the proposed index of... preparation.
The politics of verification and the control of nuclear tests, 1945-1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, N.W.
1990-01-01
This dissertation addresses two questions: (1) why has agreement been reached on verification regimes to support some arms control accords but not others; and (2) what determines the extent to which verification arrangements promote stable cooperation. This study develops an alternative framework for analysis by examining the politics of verification at two levels. The logical politics of verification are shaped by the structure of the problem of evaluating cooperation under semi-anarchical conditions. The practical politics of verification are driven by players' attempts to use verification arguments to promote their desired security outcome. The historical material shows that agreements on verification regimes are reached when key domestic and international players desire an arms control accord and believe that workable verification will not have intolerable costs. A clearer understanding of how verification is itself a political problem, and how players manipulate it to promote other goals, is necessary if the politics of verification are to support rather than undermine the development of stable cooperation.
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems, and computer-based tools that assist users in verifying their systems, were developed. These tools were applied to verify, in part, the SIFT ultrareliable aircraft computer. Topics covered included: the STP theorem prover; design verification of SIFT; high-level language code verification; assembly-language-level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
40 CFR 1065.920 - PEMS calibrations and verifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 1065.920 PEMS calibrations and verifications. (a) Subsystem calibrations and verifications. Use all the... verifications and analysis. It may also be necessary to limit the range of conditions under which the PEMS can... additional information or analysis to support your conclusions. (b) Overall verification. This paragraph (b)...
This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...
30 CFR 250.913 - When must I resubmit Platform Verification Program plans?
Code of Federal Regulations, 2011 CFR
2011-07-01
30 Mineral Resources 2 (2011-07-01). ... CONTINENTAL SHELF, Platforms and Structures, Platform Verification Program. § 250.913 When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication...
30 CFR 250.909 - What is the Platform Verification Program?
Code of Federal Regulations, 2011 CFR
2011-07-01
30 Mineral Resources 2 (2011-07-01). Platforms and Structures, Platform Verification Program. § 250.909 What is the Platform Verification Program? The Platform Verification Program is the MMS approval process for ensuring that floating platforms...
Study of techniques for redundancy verification without disrupting systems, phases 1-3
NASA Technical Reports Server (NTRS)
1970-01-01
The problem of verifying the operational integrity of redundant equipment and the impact of a requirement for verification on such equipment are considered. Redundant circuits are examined and the characteristics which determine adaptability to verification are identified. Mutually exclusive and exhaustive categories for verification approaches are established. The range of applicability of these techniques is defined in terms of signal characteristics and redundancy features. Verification approaches are discussed and a methodology for the design of redundancy verification is developed. A case study is presented which involves the design of a verification system for a hypothetical communications system. Design criteria for redundant equipment are presented. Recommendations for the development of technological areas pertinent to the goal of increased verification capabilities are given.
Requirement Assurance: A Verification Process
NASA Technical Reports Server (NTRS)
Alexander, Michael G.
2011-01-01
Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "did the product meet the stated specification, performance, or design documentation?". In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts" which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the System Engineer in NPR 7123.1A NASA Systems Engineering Processes and Requirements with regards to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.
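As a purely illustrative companion to this description (NPR 7123.1A defines the process, not this data structure), a requirement-verification matrix can be modeled as records pairing each requirement with its verification method and its closure artifacts; a requirement counts as verified once objective evidence is attached.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    method: str                       # inspection | analysis | demonstration | test
    artifacts: list = field(default_factory=list)   # closure artifacts

    @property
    def verified(self):
        # Verified only when objective evidence exists.
        return len(self.artifacts) > 0

# Hypothetical requirements and artifact names, for illustration only:
reqs = [
    Requirement("SYS-001", "Battery shall operate from -20 C to 60 C", "test"),
    Requirement("SYS-002", "Mass shall not exceed 4.5 kg", "inspection"),
]
reqs[0].artifacts.append("thermal-vac test report TR-117")
open_items = [r.req_id for r in reqs if not r.verified]   # -> ["SYS-002"]
```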
Impact of advance directives and a health care proxy on doctors' decisions: a randomized trial.
Escher, Monica; Perneger, Thomas V; Rudaz, Sandrine; Dayer, Pierre; Perrier, Arnaud
2014-01-01
Advance directives or proxy designations are widely recommended, but how they affect doctors' decision making is not well known. The aim of this study was to quantify the influence of advance directives and proxy opinions on doctors' decisions. We mailed three vignettes describing difficult decisions involving incapacitated patients to all the generalists and internists in French-speaking Switzerland (N = 1962). In each case, the advance directive requested that further care be withheld. One vignette tested the impact of a written advance directive vs. a proxy. Another compared the impact of a handwritten directive vs. a formalized document. The third vignette compared the impact of a family member vs. a doctor as a proxy. Each vignette was prepared in three or four versions, including a control version in which no directive or proxy was present. Vignettes were randomly allocated to respondents. We used logistic regression to predict the decision to forgo a medical intervention. Compared with the control condition, the odds of forgoing a medical intervention were increased by the written advance directive (odds ratio [OR] 7.3; P < 0.001), the proxy (OR 7.9; P < 0.001), and the combination of the two (OR 35.7; P < 0.001). The handwritten directive had the same impact (OR 13.3) as the formalized directive (OR 13.8). The effect of proxy opinion was slightly stronger when provided by a doctor (OR 11.3) rather than by family (OR 7.8). Advance directives and proxy opinions are equally effective in influencing doctors' decisions, but having both has the strongest effect. The format of the advance directive and the identity of the proxy have little influence on decisions. Copyright © 2014 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan
2015-01-01
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practise for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
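The core of the described approach, sparse random receptive-field input weights plus a least-squares readout, fits in a short numpy sketch. The sizes, ridge parameter, and tanh nonlinearity are illustrative choices, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, side = 2000, 28                      # 28x28 images, MNIST-like

def random_receptive_fields(n_hidden, side, min_sz=7, max_sz=20):
    """Input weights that are nonzero only inside a random square patch,
    so the weight matrix is sparse, as in the paper's receptive fields."""
    W = np.zeros((n_hidden, side * side))
    for i in range(n_hidden):
        sz = rng.integers(min_sz, max_sz + 1)
        r0, c0 = rng.integers(0, side - sz + 1, size=2)
        mask = np.zeros((side, side), dtype=bool)
        mask[r0:r0 + sz, c0:c0 + sz] = True
        W[i, mask.ravel()] = rng.standard_normal(sz * sz)
    return W

def fit_elm(X, Y, W, ridge=1e-2):
    """X: (n, side*side) images; Y: (n, k) one-hot labels. Returns readout."""
    H = np.tanh(X @ W.T)                       # random hidden layer
    A = H.T @ H + ridge * np.eye(H.shape[1])   # regularized normal equations
    return np.linalg.solve(A, H.T @ Y)

def predict(X, W, beta):
    return np.argmax(np.tanh(X @ W.T) @ beta, axis=1)

W = random_receptive_fields(n_hidden, side)    # X, Y would come from MNIST
```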
Shulman, Rob; Singer, Mervyn; Goldstone, John; Bellingan, Geoff
2005-10-05
The study aimed to compare the impact of computerised physician order entry (CPOE) without decision support with hand-written prescribing (HWP) on the frequency, type and outcome of medication errors (MEs) in the intensive care unit. Details of MEs were collected before, and at several time points after, the change from HWP to CPOE. The study was conducted in a London teaching hospital's 22-bedded general ICU. The sampling periods were 28 weeks before and 2, 10, 25 and 37 weeks after introduction of CPOE. The unit pharmacist prospectively recorded details of MEs and the total number of drugs prescribed daily during the data collection periods, during the course of his normal chart review. The total proportion of MEs was significantly lower with CPOE (117 errors from 2429 prescriptions, 4.8%) than with HWP (69 errors from 1036 prescriptions, 6.7%) (p < 0.04). The proportion of errors reduced with time following the introduction of CPOE (p < 0.001). Two errors with CPOE led to patient harm requiring an increase in length of stay and, if administered, three prescriptions with CPOE could potentially have led to permanent harm or death. Differences in the types of error between systems were noted. There was a reduction in major/moderate patient outcomes with CPOE when non-intercepted and intercepted errors were combined (p = 0.01). The mean baseline APACHE II score did not differ significantly between the HWP and the CPOE periods (19.4 versus 20.0, respectively, p = 0.71). Introduction of CPOE was associated with a reduction in the proportion of MEs and an improvement in the overall patient outcome score (if intercepted errors were included). Moderate and major errors, however, remain a significant concern with CPOE.
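As a reader's sanity check on the headline comparison (not the authors' analysis, which may have used a different test), the raw counts in the abstract can be run through a two-proportion z-test:

```python
from statsmodels.stats.proportion import proportions_ztest

count = [69, 117]     # medication errors: HWP, CPOE
nobs = [1036, 2429]   # prescriptions reviewed: HWP, CPOE
stat, pvalue = proportions_ztest(count, nobs)
# 6.7% vs 4.8%; z is about 2.2 and p about 0.03, consistent with p < 0.04.
print(f"z = {stat:.2f}, p = {pvalue:.3f}")
```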
Starting the Conversation - A Childhood Obesity Knowledge Project Using an App.
Appel, Hoa B; Huang, Bu; Cole, Allison; James, Rosalina; Ai, Amy L
2014-04-01
Starting the Conversation was a pilot project to test an intervention for childhood obesity, a major public health epidemic, using a free smartphone application (app). The primary aim was to assess students' knowledge of nutritional indicators, physical exercise, and use of screen time before and after the intervention. The study was conducted in 2011-2012. The sample, recruited from seven high schools in Snohomish County, Washington, was 65.3% minority participants. Of the 118 participants in the sample (n=118), 79 handwrote their responses (n=78) and 36 responded via the app (n=39). We compared the frequency and types of physical exercise, frequency of screen time, and nutritional variables of high school students. Participants used the cell phone app or a handwritten log to record their daily entries for 20 days. Both males (n=43) and females (n=75) in grades 9-12 used the app or handwritten entries. Participants who used the app ate less fast food and exercised more, as compared with those who recorded their entries by hand. Screen time usage decreased over the course of the study, based on a comparison of the post-survey and pre-survey levels. Knowledge of the recommended daily consumption of vegetables increased post-test in the app group, and knowledge of water consumption increased significantly in both groups. There was no significant difference in BMI pre- and post-test. Patterns of nutritional intake, physical exercise, and knowledge of these issues varied pre- and post-test. It is critical to further examine factors associated with lack of physical activity and the food intake patterns of youth using social media to further address the childhood obesity epidemic. Future research should focus on specific ethnic subgroups and an intervention at the school level aimed at students with BMI ≥ 95th percentile.
Yang, Xi Jessie; Park, Taezoon; Siah, Tien Ho Kewin; Ang, Bee Leng Sophia; Donchin, Yoel
2015-01-01
INTRODUCTION The aim of the present study was to investigate the challenges faced by physicians during shift handovers in a university hospital that has a high handover sender/recipient ratio. METHODS We adopted a multifaceted approach, comprising recording and analysis of handover information, rating of handover quality, and shadowing of handover recipients. Data was collected at the general medical ward of a university hospital in Singapore for a period of three months. Handover information transfer (i.e. senders’ and recipients’ verbal communication, and recipients’ handwritten notes) and handover environmental factors were analysed. The relationship between ‘to-do’ tasks and information transfer, handover quality and handover duration was examined using analysis of variance. RESULTS Verbal handovers for 152 patients were observed. Handwritten notes on 102 (67.1%) patients and handover quality ratings for 98 (64.5%) patients were collected. Although there was good task prioritisation (information transfer: p < 0.005, handover duration: p < 0.01), incomplete information transfer and poor implementation of non-modifiable identifiers were observed. The high sender/recipient ratio of the hospital made face-to-face and/or bedside handover difficult to implement. Although the current handover method (i.e. use of telephone communication) allowed for interactive communication, it resulted in systemic information loss due to the lack of written information. The handover environment was chaotic in the high sender/recipient ratio setting, and the physicians had no designated handover time or location. CONCLUSION Handovers in high sender/recipient ratio settings are challenging. Efforts should be made to improve the handover processes in such situations, so that patient care is not compromised. PMID:25532519
Fu, H C; Xu, Y Y; Chang, H Y
1999-12-01
Recognition of similar (confusion) characters is a difficult problem in optical character recognition (OCR). In this paper, we introduce a neural network solution that is capable of modeling minor differences among similar characters and is robust to various personal handwriting styles. The Self-growing Probabilistic Decision-based Neural Network (SPDNN) is a probabilistic neural network which adopts a hierarchical network structure with nonlinear basis functions and a competitive credit-assignment scheme. Based on the SPDNN model, we have constructed a three-stage recognition system. First, a coarse classifier assigns an input character to one of the pre-defined subclasses partitioned from a large character set, such as Chinese mixed with alphanumerics. Then a character recognizer determines which reference character in the subclass best matches the input image. Lastly, the third module is a similar-character recognizer, which can further enhance the recognition accuracy among similar or confusing characters. The prototype system has demonstrated a successful application of SPDNN to similar handwritten Chinese character recognition on the public database CCL/HCCR1 (5401 characters × 200 samples). Regarding performance, experiments on the CCL/HCCR1 database produced 90.12% recognition accuracy with no rejection, and 94.11% accuracy with 6.7% rejection, respectively. This recognition accuracy represents about a 4% improvement on the previously announced performance. As to processing speed, processing before recognition (including image preprocessing, segmentation, and feature extraction) requires about one second for an A4-size character image, and recognition consumes approximately 0.27 second per character on a Pentium-100-based personal computer, without the use of any hardware accelerator or co-processor.
NASA Astrophysics Data System (ADS)
Štolc, Svorad; Bajla, Ivan
2010-01-01
In the paper we describe basic functions of the Hierarchical Temporal Memory (HTM) network based on a novel biologically inspired model of the large-scale structure of the mammalian neocortex. The focus of this paper is in a systematic exploration of possibilities how to optimize important controlling parameters of the HTM model applied to the classification of hand-written digits from the USPS database. The statistical properties of this database are analyzed using the permutation test which employs a randomization distribution of the training and testing data. Based on a notion of the homogeneous usage of input image pixels, a methodology of the HTM parameter optimization is proposed. In order to study effects of two substantial parameters of the architecture: the
Hammer, J S; Strain, J J; Friedberg, A; Fulop, G
1995-05-01
No current system of computerized data entry of clinical information in consultation-liaison (C-L) psychiatry has been well received or has demonstrated that it saves the consultant's time. The inability to achieve accurate, complete, systematic collection of discrete variables and data entry in the harried C-L setting is a major impediment to the advancement of the subspecialty and of health services research. The hand-held Notebook computer with Windows PEN ENTRY MICRO-CARES capabilities has permitted one-time direct entry of data at the time of collection at the patient's bedside. Variable choice and selection enhance the completeness and accuracy of data collection. For example, ICD-9 Axis III diagnoses may be selected from a "look-up", which at the same time automatically assigns the appropriate code and diagnosis-related group (DRG) number. A patient narrative can be typed at the nurses' station, a chart note printed for the medical record, and the MICRO-CARES literature database perused, with printing of selected citations, abstracts, and in some cases experts' commentaries for the consultee. The consultant's documentation time is halved using the NOTEBOOK WINDOWS PEN ENTRY MICRO-CARES software, with the advantage of more accurate and complete data description than with traditional handwritten consultation records. Consultees preferred typewritten to handwritten notes. The cost of the hardware (about $2000) is less than that of an optical scanner, and it permits report generation and archival searches at the nurses' station without returning to the C-L office for scanning. Radio-frequency or Ethernet download from the Notebook permits direct data transfer to the C-L office archive computer.
30 CFR 250.909 - What is the Platform Verification Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
30 Mineral Resources 2 (2010-07-01). Platform Verification Program. § 250.909 What is the Platform Verification Program? The Platform Verification Program is the MMS approval process for ensuring that floating platforms; platforms of a new or unique design...
Property-driven functional verification technique for high-speed vision system-on-chip processor
NASA Astrophysics Data System (ADS)
Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian
2017-04-01
The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. This vision chip verification complexity is also related to the fact that in most vision chip design cycles, extensive efforts are focused on how to optimize chip metrics such as performance, power, and area; design functional verification is not explicitly considered at the earlier stages, at which the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can improve the verification effort by up to 20% for a complex vision chip design while reducing the simulation and debugging overheads.
Hydrologic data-verification management program plan
Alexander, C.W.
1982-01-01
Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept, a master data-verification program using multiple special-purpose subroutines and a screen file containing verification criteria, can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that would take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
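A screen file of verification criteria can be illustrated with a minimal range-check sketch. The parameter names and bounds below are invented for illustration; the report predates any particular implementation:

```python
# Each parameter has allowed bounds; values outside them are flagged
# for manual review rather than being silently corrected.
SCREEN_FILE = {
    "discharge_cfs": (0.0, 50000.0),
    "gage_height_ft": (-1.0, 30.0),
    "water_temp_c": (-0.5, 35.0),
}

def screen(records):
    """records: iterable of (parameter, value). Yields flagged entries."""
    for param, value in records:
        lo, hi = SCREEN_FILE.get(param, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            yield param, value

flags = list(screen([("discharge_cfs", 61234.0), ("water_temp_c", 12.3)]))
# -> [("discharge_cfs", 61234.0)]
```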
Trading Speed and Accuracy by Coding Time: A Coupled-circuit Cortical Model
Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.
2013-01-01
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. PMID:23592967
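The gain-modulation hypothesis can be caricatured in a few lines: scale the momentary evidence (signal and noise together) by a gain g, so that higher gain reaches the decision bound sooner but integrates less evidence, trading accuracy for speed. This toy drift-diffusion sketch is mine, not the authors' coupled-circuit model:

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(gain, drift=0.1, noise=1.0, bound=10.0, dt=1.0, max_t=5000):
    """Integrate gain-scaled noisy evidence to a bound; return (correct, RT)."""
    x, t = 0.0, 0
    while abs(x) < bound and t < max_t:
        # Gain multiplies the whole momentary evidence sample, so it
        # speeds integration without improving its signal-to-noise ratio.
        x += gain * (drift * dt + noise * np.sqrt(dt) * rng.standard_normal())
        t += 1
    return x >= bound, t

for g in (0.5, 1.0, 2.0):       # higher gain: faster but less accurate
    trials = [decide(g) for _ in range(2000)]
    acc = np.mean([c for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"gain={g}: accuracy={acc:.2f}, mean RT={rt:.0f} steps")
```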