Parallel digital forensics infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, Lorie M.; Duggan, David Patrick
2009-10-01
This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.
High Performance Proactive Digital Forensics
NASA Astrophysics Data System (ADS)
Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa
2012-10-01
With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know there is almost no research on HPC-DF except for a few papers. As such, in this work we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the 2001 Honeynet Forensic Challenge is used to evaluate the system from DF and HPC perspectives.
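The HPC pattern described above, partitioning large event data and scanning the partitions for outliers in parallel, can be illustrated with a short, generic sketch. This is not the authors' iterative-z extension or their information-based detectors; the z-score scan and the partition layout below are purely hypothetical stand-ins.

    # Hypothetical illustration of a parallel outlier scan over event partitions;
    # not the system described in the paper.
    import numpy as np
    from multiprocessing import Pool

    def zscore_outliers(partition, threshold=3.0):
        # Return indices of events whose value deviates strongly from the partition mean.
        values = np.asarray(partition, dtype=np.float64)
        z = (values - values.mean()) / (values.std() + 1e-12)
        return np.flatnonzero(np.abs(z) > threshold)

    def parallel_outliers(partitions, workers=4):
        # Each partition is scanned by a separate worker process.
        with Pool(workers) as pool:
            return pool.map(zscore_outliers, partitions)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        parts = [rng.normal(size=10_000) for _ in range(8)]
        parts[3][42] = 25.0  # inject one obvious outlier
        print(parallel_outliers(parts))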
Applications of Fourier transform Raman and infrared spectroscopy in forensic sciences
NASA Astrophysics Data System (ADS)
Kuptsov, Albert N.
2000-02-01
The first comprehensive digital collection of complementary vibrational spectra of polymer materials in the world literature, together with a search system, was developed. Non-destructive combined analysis using complementary FT-Raman and FTIR spectra, followed by cross-parallel searching of digital spectral libraries, was applied in different fields of the forensic sciences. Some unique possibilities of Raman spectroscopy have been shown in the examination of questioned documents, paper, paints, polymer materials, gemstones and other physical evidence.
Toward a general ontology for digital forensic disciplines.
Karie, Nickson M; Venter, Hein S
2014-09-01
Ontologies are widely used in different disciplines as a technique for representing and reasoning about domain knowledge. However, despite the widespread ontology-related research activities and applications in different disciplines, the development of ontologies and ontology research activities are still lacking in digital forensics. This paper therefore presents the case for establishing an ontology for digital forensic disciplines. Such an ontology would enable better categorization of the digital forensic disciplines, as well as assist in the development of methodologies and specifications that can offer direction in different areas of digital forensics. This includes such areas as professional specialization, certifications, development of digital forensic tools, curricula, and educational materials. In addition, the ontology presented in this paper can be used, for example, to better organize the digital forensic domain knowledge and explicitly describe the discipline's semantics in a common way. Finally, this paper is meant to spark discussions and further research on an internationally agreed ontological distinction of the digital forensic disciplines. The digital forensic disciplines ontology is a novel approach toward organizing the digital forensic domain knowledge and constitutes the main contribution of this paper. © 2014 American Academy of Forensic Sciences.
First Digit Law and Its Application to Digital Forensics
NASA Astrophysics Data System (ADS)
Shi, Yun Q.
Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this research field is still in its infancy, it has started to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, the application of the first digit law to detecting the JPEG compression history of a given BMP image and to detecting double JPEG compression is presented. Finally, applying the first digit law to the detection of double MPEG video compression is discussed. It is expected that the first digit law may play an active role in other tasks of digital forensics. The lesson learned is that statistical models play an important role in digital forensics, and that for a specific forensic task different models may provide different performance.
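To make the first-digit idea concrete, the sketch below compares the empirical leading-digit distribution of a set of numeric values (for example, magnitudes of block-DCT coefficients obtained elsewhere) with the Benford expectation log10(1 + 1/d). The chi-square-style divergence and any decision threshold are assumptions for illustration, not the detector described in the lecture.

    # Minimal first-digit (Benford) check; inputs and thresholds are illustrative only.
    import math
    from collections import Counter

    def leading_digit(x):
        # First significant digit of a nonzero number.
        x = abs(x)
        while x >= 10:
            x /= 10.0
        while x < 1:
            x *= 10.0
        return int(x)

    def benford_divergence(values):
        # Chi-square-style distance between observed first digits and Benford's law.
        digits = [leading_digit(v) for v in values if v != 0]
        if not digits:
            raise ValueError("no nonzero values to test")
        counts = Counter(digits)
        n = len(digits)
        divergence = 0.0
        for d in range(1, 10):
            expected = math.log10(1 + 1.0 / d)
            observed = counts.get(d, 0) / n
            divergence += (observed - expected) ** 2 / expected
        return divergence

A markedly larger divergence for the DCT-coefficient statistics of a decompressed bitmap than for a never-compressed image is the kind of signal such first-digit detectors build on.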
Taxonomy of Challenges for Digital Forensics.
Karie, Nickson M; Venter, Hein S
2015-07-01
Since its inception over a decade ago, the field of digital forensics has faced numerous challenges. Despite different researchers and digital forensic practitioners having studied and analysed various known digital forensic challenges, as of 2013 there still exists a need for a formal classification of these challenges. This article therefore reviews existing research literature and highlights the various challenges that digital forensics has faced over the last 10 years. In conducting this research study, however, it was difficult for the authors to review all the existing research literature in the digital forensic domain; hence, sampling and randomization techniques were employed to facilitate the review of the gathered literature. A taxonomy of the various challenges is subsequently proposed in this paper based on our review of the literature. The taxonomy classifies the large number of digital forensic challenges into four well-defined and easily understood categories. The proposed taxonomy can be useful, for example, in future developments of automated digital forensic tools by explicitly describing processes and procedures that focus on addressing specific challenges identified in this paper. However, it should also be noted that the purpose of this paper was not to propose solutions to the individual challenges that digital forensics faces, but to serve as a survey of the state of the art of the research area. © 2015 American Academy of Forensic Sciences.
[Possibilities of use of digital imaging in forensic medicine].
Gaval'a, P; Ivicsics, I; Mlynár, J; Novomeský, F
2005-07-01
Based on daily practice with digital photography and documentation, the authors point out the achievements of implementing computer technologies in the practice of forensic medicine. Modern methods of imaging, especially digital photography, offer a wide spectrum of uses in forensic medicine: digital documentation and archiving of autopsy findings, the possibility of immediate consultation on findings with other experts via the Internet, and many others. Another possibility is the creation of a digital photographic atlas of forensic medicine as a useful aid in pre- and postgraduate study. Thus the application of state-of-the-art computer technologies to forensic medicine discloses previously unknown possibilities for further development of this discipline of the human medical sciences.
Increasing the reach of forensic genetics with massively parallel sequencing.
Budowle, Bruce; Schmedes, Sarah E; Wendt, Frank R
2017-09-01
The field of forensic genetics has made great strides in the analysis of biological evidence related to criminal and civil matters. Moreover, the discipline has set a standard of performance and quality in the forensic sciences. The advent of massively parallel sequencing will allow the field to expand its capabilities substantially. This review describes the salient features of massively parallel sequencing and how it can impact forensic genetics. The features of this technology offer an increased number and variety of genetic markers that can be analyzed, higher sample throughput, and the capability of targeting different organisms, all within one unifying methodology. While there are many applications, three are described where massively parallel sequencing will have immediate impact: molecular autopsy, microbial forensics, and differentiation of monozygotic twins. The intent of this review is to expose the forensic science community to the potential enhancements that have arrived or will soon arrive, and to demonstrate the continued expansion of the field of forensic genetics and its service in the investigation of legal matters.
Digital forensics: an analytical crime scene procedure model (ACSPM).
Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut
2013-12-10
In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough Digital Forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure in turn depends on tasks and subtasks. Numerous DF process models that define DF phases exist in the literature, but no DF model has been identified that defines phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) suggested in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with the main focus on crime scene digital forensic procedures rather than the whole digital investigation process and phases that end up in court. When reviewing the relevant literature and consulting with law enforcement agencies, only device-based charts specific to a particular device and/or more general approaches to digital evidence management models from crime scene to court were found. After analyzing the needs of law enforcement organizations and recognizing the absence of a crime scene digital investigation procedure model for crime scene activities, we decided to inspect the relevant literature in an analytical way. The outcome of this inspection is the model explained here, which is intended to provide guidance for thorough and secure implementation of digital forensic procedures at a crime scene. Because each case in a digital forensic investigation is unique and needs special examination, it is not possible to cover every aspect of crime scene digital forensics; the proposed procedure model is instead intended as a general guideline for practitioners. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Applications of a digital darkroom in the forensic laboratory
NASA Astrophysics Data System (ADS)
Bullard, Barry D.; Birge, Brian
1997-02-01
Through a joint agreement with the Indiana-Marion County Forensic Laboratory Services Agency, the Institute for Forensic Imaging conducted a pilot program to investigate crime lab applications of a digital darkroom. IFI installed and staffed a state-of-the-art digital darkroom in the photography laboratory of the Indianapolis-Marion County crime lab in Indianapolis, Indiana. The darkroom consisted of several high-resolution color digital cameras, an image-processing computer, dye-sublimation continuous-tone digital printers, and a CD-ROM writer. This paper describes the use of the digital darkroom in several crime lab investigations conducted during the program.
Computer Forensics: Is It the Next Hot IT Subject?
ERIC Educational Resources Information Center
Williams, Victor G.; Revels, Ken
2006-01-01
Digital Forensics is not just the recovery of data or information from computer systems and their networks. It is not a procedure that can be accomplished by software alone, and most important, it is not something that can be accomplished by other than a trained IT forensic professional. Digital Forensics is an emerging science and was developed…
Image manipulation: Fraudulence in digital dental records: Study and review
Chowdhry, Aman; Sircar, Keya; Popli, Deepika Bablani; Tandon, Ankita
2014-01-01
Introduction: Freely available software now allows dentists to tweak their digital records as never before, but there is a fine line between acceptable enhancements and scientific delinquency. Aims and Objective: To manipulate digital images (used in forensic dentistry) of casts, lip prints, and bite marks in order to highlight tampering techniques and methods of detecting and preventing manipulation of digital images. Materials and Methods: Digital image records of forensic data (casts, lip prints, and bite marks photographed using a Samsung Techwin L77 digital camera) were manipulated using freely available software. Results: Fake digital images can be created either by merging two or more digital images or by altering an existing image. Discussion and Conclusion: Retouched digital images can be used for fraudulent purposes in forensic investigations. However, tools are available to detect such digital frauds, which are extremely difficult to assess visually. Thus, all digital content should mandatorily have attached metadata, and preferably watermarking, in order to avert malicious re-use. Also, computer awareness, especially of imaging software, should be promoted among forensic odontologists and dental professionals. PMID:24696587
System Support for Forensic Inference
NASA Astrophysics Data System (ADS)
Gehani, Ashish; Kirchner, Florent; Shankar, Natarajan
Digital evidence is playing an increasingly important role in prosecuting crimes. The reasons are manifold: financially lucrative targets are now connected online, systems are so complex that vulnerabilities abound and strong digital identities are being adopted, making audit trails more useful. If the discoveries of forensic analysts are to hold up to scrutiny in court, they must meet the standard for scientific evidence. Software systems are currently developed without consideration of this fact. This paper argues for the development of a formal framework for constructing “digital artifacts” that can serve as proxies for physical evidence; a system so imbued would facilitate sound digital forensic inference. A case study involving a filesystem augmentation that provides transparent support for forensic inference is described.
Digital Stratigraphy: Contextual Analysis of File System Traces in Forensic Science.
Casey, Eoghan
2017-12-28
This work introduces novel methods for conducting forensic analysis of file allocation traces, collectively called digital stratigraphy. These in-depth forensic analysis methods can provide insight into the origin, composition, distribution, and time frame of strata within storage media. Using case examples and empirical studies, this paper illuminates the successes, challenges, and limitations of digital stratigraphy. This study also shows how understanding file allocation methods can provide insight into concealment activities and how real-world computer usage can complicate digital stratigraphy. Furthermore, this work explains how forensic analysts have misinterpreted traces of normal file system behavior as indications of concealment activities. This work raises awareness of the value of taking the overall context into account when analyzing file system traces. This work calls for further research in this area and for forensic tools to provide necessary information for such contextual analysis, such as highlighting mass deletion, mass copying, and potential backdating. © 2017 American Academy of Forensic Sciences.
IoT-Forensics Meets Privacy: Towards Cooperative Digital Investigations.
Nieto, Ana; Rios, Ruben; Lopez, Javier
2018-02-07
IoT-Forensics is a novel paradigm for the acquisition of electronic evidence whose operation is conditioned by the peculiarities of the Internet of Things (IoT) context. As a branch of computer forensics, this discipline respects the most basic forensic principles of preservation, traceability, documentation, and authorization. The digital witness approach also promotes such principles in the context of the IoT while allowing personal devices to cooperate in digital investigations by voluntarily providing electronic evidence to the authorities. However, this solution is highly dependent on the willingness of citizens to collaborate and they may be reluctant to do so if the sensitive information within their personal devices is not sufficiently protected when shared with the investigators. In this paper, we provide the digital witness approach with a methodology that enables citizens to share their data with some privacy guarantees. We apply the PRoFIT methodology, originally defined for IoT-Forensics environments, to the digital witness approach in order to unleash its full potential. Finally, we show the feasibility of a PRoFIT-compliant digital witness with two use cases.
ERIC Educational Resources Information Center
Harron, Jason; Langdon, John; Gonzalez, Jennifer; Cater, Scott
2017-01-01
The term forensic science may evoke thoughts of blood-spatter analysis, DNA testing, and identifying molds, spores, and larvae. A growing part of this field, however, is that of digital forensics, involving techniques with clear connections to math and physics. This article describes a five-part project involving smartphones and the investigation…
The use of self-organising maps for anomalous behaviour detection in a digital investigation.
Fei, B K L; Eloff, J H P; Olivier, M S; Venter, H S
2006-10-16
The dramatic increase in crime relating to the Internet and computers has caused a growing need for digital forensics. Digital forensic tools have been developed to assist investigators in conducting a proper investigation into digital crimes. In general, the bulk of the digital forensic tools available on the market permit investigators to analyse data that has been gathered from a computer system. However, current state-of-the-art digital forensic tools simply cannot handle large volumes of data in an efficient manner. With the advent of the Internet, many employees have been given access to new and more interesting possibilities via their desktop. Consequently, excessive Internet usage for non-job purposes and even blatant misuse of the Internet have become a problem in many organisations. Since storage media are steadily growing in size, the process of analysing multiple computer systems during a digital investigation can easily consume an enormous amount of time. Identifying a single suspicious computer from a set of candidates can therefore reduce human processing time and monetary costs involved in gathering evidence. The focus of this paper is to demonstrate how, in a digital investigation, digital forensic tools and the self-organising map (SOM)--an unsupervised neural network model--can aid investigators to determine anomalous behaviours (or activities) among employees (or computer systems) in a far more efficient manner. By analysing the different SOMs (one for each computer system), anomalous behaviours are identified and investigators are assisted to conduct the analysis more efficiently. The paper will demonstrate how the easy visualisation of the SOM enhances the ability of the investigators to interpret and explore the data generated by digital forensic tools so as to determine anomalous behaviours.
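As an illustration of the general approach rather than the authors' tool chain, the sketch below trains a small self-organising map on per-system feature vectors (assumed to be extracted and normalised beforehand) and flags systems whose vectors lie far from every map node.

    # Generic SOM-based anomaly scoring; feature extraction and thresholds are assumed.
    import numpy as np

    def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
        # Train a small self-organising map on the rows of `data`.
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Learning rate and neighbourhood radius decay over time.
            lr = lr0 * np.exp(-t / iters)
            sigma = sigma0 * np.exp(-t / iters)
            d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
            influence = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
        return weights

    def quantization_errors(data, weights):
        # Distance from each sample to its best-matching unit; large values suggest anomalies.
        flat = weights.reshape(-1, weights.shape[-1])
        return np.min(np.linalg.norm(flat[None, :, :] - data[:, None, :], axis=2), axis=1)

Ranking systems by quantisation error, or visually inspecting the trained map, is one way an investigator could prioritise which machines to examine first.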
The Application of Peer Teaching in Digital Forensics Education
ERIC Educational Resources Information Center
Govan, Michelle
2016-01-01
The field of digital forensics requires a multidisciplinary understanding of a range of diverse subjects, but it is also interdisciplinary (using principles, techniques and theories from other disciplines), encompassing both computer and forensic science. This requires that practitioners have a deep technical knowledge and understanding, but that they…
77 FR 14955 - DoD Information Assurance Scholarship Program (IASP)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
... IA and information technology (IT) management, technical, digital and multimedia forensics, cyber..., digital and multimedia forensics, electrical engineering, electronics engineering, information security...
Imaging techniques in digital forensic investigation: a study using neural networks
NASA Astrophysics Data System (ADS)
Williams, Godfried
2006-09-01
Imaging techniques have been applied to a number of applications, such as translation and classification problems in medicine and defence. This paper examines the application of imaging techniques in digital forensics investigation using neural networks. A review of applications of digital image processing is presented, while a pedagogical analysis of computer forensics is also highlighted. A data set describing selected images in different forms is used in the simulation and experimentation.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
...-on, more detailed, digital forensics analysis or damage assessments of individual incidents... information. In addition, during any follow-on forensics or damage assessment activities, the Government and...), (c) and (d) of this section are maintained by the digital and multimedia forensics laboratory at DC3...
NASA Astrophysics Data System (ADS)
Kröger, Knut; Creutzburg, Reiner
2013-05-01
The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways Forensics 16.9 and Guidance EnCase Forensic 7 regarding performance, functionality, usability and capability. We show how these software tools work with large forensic images and how capable they are of examining complex and big-data scenarios.
ERIC Educational Resources Information Center
Kirschenbaum, Matthew G.; Ovenden, Richard; Redwine, Gabriela
2010-01-01
The purpose of this report is twofold: first, to introduce the field of digital forensics to professionals in the cultural heritage sector; and second, to explore some particular points of convergence between the interests of those charged with collecting and maintaining born-digital cultural heritage materials and those charged with collecting…
Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana
2013-03-01
The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and brings with it the challenge of creating a link between the two or more representations of the same trace. To be forensically sound, the two security aspects of integrity and authenticity, in particular, need to be maintained at all times. Ensuring authenticity by technical means proves especially challenging at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional metadata accompanying the acquired data, for integration into the conventional documentation of the collection of items of evidence (the bagging and tagging process). Using the QR-code as an exemplary bar code implementation and a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. [1]. We use the example of digital dactyloscopy as a forensic discipline in which progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace, extending the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator focuses on the readability and the verification of its contents. Using various devices, we can read the bar code despite its limited size of 42 x 42 mm and the rather large amount of embedded data. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows for detecting malicious manipulations of the embedded data.
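A hypothetical tagging step in this spirit could hash the acquired trace data, bundle it with collection metadata, authenticate the bundle, and print the result as a QR label. The sketch below uses the third-party qrcode package and substitutes an HMAC for the digital signature discussed in the article; the field names and key handling are illustrative assumptions, not the authors' implementation.

    # Illustrative QR tagging of a digitized trace; an HMAC stands in for a real
    # digital signature, and the metadata fields are hypothetical.
    import hashlib, hmac, json, time
    import qrcode  # third-party package: pip install qrcode[pil]

    SECRET_KEY = b"laboratory-signing-key"  # placeholder key material

    def tag_physical_trace(trace_id, image_path, examiner):
        with open(image_path, "rb") as fh:
            digest = hashlib.sha256(fh.read()).hexdigest()
        meta = {
            "trace_id": trace_id,
            "acquired_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "examiner": examiner,
            "sha256": digest,
        }
        payload = json.dumps(meta, sort_keys=True).encode()
        meta["mac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        qrcode.make(json.dumps(meta)).save(f"{trace_id}_tag.png")
        return meta

A verifier holding the same key recomputes the MAC over the metadata (without the "mac" field) and the hash of the digital copy to confirm that label, metadata, and data still belong together.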
Evaluation of massively parallel sequencing for forensic DNA methylation profiling.
Richards, Rebecca; Patel, Jayshree; Stevenson, Kate; Harbison, SallyAnn
2018-05-11
Epigenetics is an emerging area of interest in forensic science. DNA methylation, a type of epigenetic modification, can be applied to chronological age estimation, identical twin differentiation and body fluid identification. However, there is not yet an agreed, established methodology for targeted detection and analysis of DNA methylation markers in forensic research. Recently a massively parallel sequencing-based approach has been suggested. The use of massively parallel sequencing is well established in clinical epigenetics and is emerging as a new technology in the forensic field. This review investigates the potential benefits, limitations and considerations of this technique for the analysis of DNA methylation in a forensic context. The importance of a robust protocol, regardless of the methodology used, that minimises potential sources of bias is highlighted. This article is protected by copyright. All rights reserved.
The prevalence of encoded digital trace evidence in the nonfile space of computer media(,) (.).
Garfinkel, Simson L
2014-09-01
Forensically significant digital trace evidence is frequently present in sectors of digital media not associated with allocated or deleted files. Modern digital forensic tools generally do not decompress such data unless a specific file with a recognized file type is first identified, potentially resulting in missed evidence. Email addresses are encoded differently for different file formats. As a result, trace evidence can be categorized as Plain in File (PF), Encoded in File (EF), Plain Not in File (PNF), or Encoded Not in File (ENF). The tool bulk_extractor finds all of these formats, but other forensic tools do not. A study of 961 storage devices purchased on the secondary market shows that 474 contained encoded email addresses that were not in files (ENF). Different encoding formats are the result of different application programs that processed different kinds of digital trace evidence. Specific encoding formats explored include BASE64, GZIP, PDF, HIBER, and ZIP. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. Journal of Forensic Sciences published by Wiley Periodicals, Inc. on behalf of American Academy of Forensic Sciences.
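The PF/EF/PNF/ENF distinction can be illustrated with a toy scanner that searches a raw media block for e-mail addresses, then searches again after Base64 and gzip decoding of candidate regions. This is only a sketch of the idea, not how bulk_extractor is implemented; the regular expressions and run lengths are arbitrary choices.

    # Toy scanner for plain and encoded e-mail addresses in raw media blocks.
    import base64, gzip, re

    EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
    B64RUN = re.compile(rb"[A-Za-z0-9+/=]{40,}")  # long Base64-looking runs

    def scan_block(block):
        hits = {"plain": set(EMAIL.findall(block)), "encoded": set()}
        for run in B64RUN.findall(block):
            run = run[: len(run) - len(run) % 4]  # Base64 length must be a multiple of 4
            if not run:
                continue
            try:
                decoded = base64.b64decode(run, validate=True)
            except Exception:
                continue
            hits["encoded"].update(EMAIL.findall(decoded))
        if block[:2] == b"\x1f\x8b":  # gzip magic number at the block start
            try:
                hits["encoded"].update(EMAIL.findall(gzip.decompress(block)))
            except Exception:
                pass
        return hits

Addresses found only after decoding, in blocks that belong to no file, correspond to the ENF category that the study found on roughly half of the devices examined.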
Advanced framework for digital forensic technologies and procedures.
Trček, Denis; Abie, Habtamu; Skomedal, Asmund; Starc, Iztok
2010-11-01
Recent trends in global networks are leading toward service-oriented architectures and sensor networks. At one end of the spectrum, this means deployment of services from numerous providers to form new service composites; at the other end, it means the emergence of the Internet of things. Both belong to a plethora of realms and can be deployed in many ways, which will pose serious problems in cases of abuse. Consequently, both trends increase the need for new approaches to digital forensics that would furnish admissible evidence for litigation. Because technology alone is clearly not sufficient, it has to be adequately supported by appropriate investigative procedures, which have yet to become a subject of international consensus. This paper therefore provides a holistic framework to foster an internationally agreed-upon approach in digital forensics, along with the necessary improvements. It is based on a top-down approach, starting with legal, continuing with organizational, and ending with technical issues. More precisely, the paper presents a new architectural technological solution that addresses the core forensic principles at their roots. It deploys so-called leveled message authentication codes and digital signatures to provide data integrity in a way that significantly eases forensic investigations into attacked systems in their operational state. Further, using a top-down approach, a conceptual framework for forensic readiness is given, which provides levels of abstraction and procedural guides, embellished with a process model, that allow investigators to perform routine investigations without becoming overwhelmed by low-level details. As low-level details should not be left out, the framework is further evaluated to include these details, allowing organizations to configure their systems for proactive collection and preservation of potential digital evidence in a structured manner. The main reason behind this approach is to stimulate efforts toward an internationally agreed "template legislation," similar to the model law in the area of electronic commerce, which would enable harmonized national implementations in the area of digital forensics. © 2010 American Academy of Forensic Sciences.
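The data-integrity idea can be sketched generically as a chain of message authentication codes over audit records, so that deleting, altering, or reordering any record breaks verification of everything that follows. This illustrates the principle only; it is not the authors' leveled-MAC construction.

    # Generic MAC chaining over audit records (illustrative, not the paper's scheme).
    import hmac, hashlib

    def chain_records(records, key):
        # Return (record, mac) pairs; each MAC also covers the previous MAC.
        prev = b"\x00" * 32
        chained = []
        for rec in records:
            mac = hmac.new(key, prev + rec.encode(), hashlib.sha256).digest()
            chained.append((rec, mac.hex()))
            prev = mac
        return chained

    def verify_chain(chained, key):
        # Recompute the chain and compare MACs record by record.
        prev = b"\x00" * 32
        for rec, mac_hex in chained:
            mac = hmac.new(key, prev + rec.encode(), hashlib.sha256).digest()
            if mac.hex() != mac_hex:
                return False
            prev = mac
        return True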
Automatic forensic face recognition from digital images.
Peacock, C; Goode, A; Brett, A
2004-01-01
Digital image evidence is now widely available from criminal investigations and surveillance operations, often captured by security and surveillance CCTV. This has resulted in a growing demand from law enforcement agencies for automatic person-recognition based on image data. In forensic science, a fundamental requirement for such automatic face recognition is to evaluate the weight that can justifiably be attached to this recognition evidence in a scientific framework. This paper describes a pilot study carried out by the Forensic Science Service (UK) which explores the use of digital facial images in forensic investigation. For the purpose of the experiment a specific software package was chosen (Image Metrics Optasia). The paper does not describe the techniques used by the software to reach its decision of probabilistic matches to facial images, but accepts the output of the software as though it were a 'black box'. In this way, the paper lays a foundation for how face recognition systems can be compared in a forensic framework. The aim of the paper is to explore how reliably and under what conditions digital facial images can be presented in evidence.
On detection of median filtering in digital images
NASA Astrophysics Data System (ADS)
Kirchner, Matthias; Fridrich, Jessica
2010-01-01
In digital image forensics, it is generally accepted that intentional manipulations of the image content are most critical, and hence numerous forensic methods focus on the detection of such 'malicious' post-processing. However, it is also beneficial to know as much as possible about the general processing history of an image, including content-preserving operations, since they can affect the reliability of forensic methods in various ways. In this paper, we present a simple yet effective technique to detect median filtering in digital images, a widely used denoising and smoothing operator. As a great variety of forensic methods relies on some kind of linearity assumption, detection of non-linear median filtering is of particular interest. The effectiveness of our method is backed by experimental evidence on a large image database.
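A crude indicator in the same spirit exploits the "streaking" artifact of median filtering, namely an unusually large share of zero first-order pixel differences. The threshold below is purely hypothetical; the detector in the paper is considerably more refined.

    # Simplified streaking-based indicator of median filtering (illustrative only).
    import numpy as np

    def zero_difference_ratio(img):
        # img: 2-D array of grayscale pixel values.
        img = img.astype(np.int32)
        dh = np.diff(img, axis=1)
        dv = np.diff(img, axis=0)
        return 0.5 * ((dh == 0).mean() + (dv == 0).mean())

    def looks_median_filtered(img, threshold=0.25):
        # The threshold is a placeholder; in practice it would be calibrated
        # on a database of filtered and unfiltered images.
        return zero_difference_ratio(img) > threshold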
FIA: An Open Forensic Integration Architecture for Composing Digital Evidence
NASA Astrophysics Data System (ADS)
Raghavan, Sriram; Clark, Andrew; Mohay, George
The analysis and value of digital evidence in an investigation has been the domain of discourse in the digital forensic community for several years. While many works have considered different approaches to model digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, pro-active measures are integral to keeping abreast of all forms of cyber crimes and attacks. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. In this paper, we present the forensic integration architecture (FIA) which provides a framework for abstracting the evidence source and storage format information from digital evidence and explores the concept of integrating evidence information from multiple sources. The FIA architecture identifies evidence information from multiple sources that enables an investigator to build theories to reconstruct the past. FIA is hierarchically composed of multiple layers and adopts a technology independent approach. FIA is also open and extensible making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value it brings into the field.
Scudder, Nathan; McNevin, Dennis; Kelty, Sally F; Walsh, Simon J; Robertson, James
2018-03-01
Use of DNA in forensic science will be significantly influenced by new technology in coming years. Massively parallel sequencing and forensic genomics will hasten the broadening of forensic DNA analysis beyond short tandem repeats for identity towards a wider array of genetic markers, in applications as diverse as predictive phenotyping, ancestry assignment, and full mitochondrial genome analysis. With these new applications come a range of legal and policy implications, as forensic science touches on areas as diverse as 'big data', privacy and protected health information. Although these applications have the potential to make a more immediate and decisive forensic intelligence contribution to criminal investigations, they raise policy issues that will require detailed consideration if this potential is to be realised. The purpose of this paper is to identify the scope of the issues that will confront forensic and user communities. Copyright © 2017 The Chartered Society of Forensic Sciences. All rights reserved.
Speech watermarking: an approach for the forensic analysis of digital telephonic recordings.
Faundez-Zanuy, Marcos; Lucena-Molina, Jose J; Hagmüller, Martin
2010-07-01
In this article, the authors discuss the problem of forensic authentication of digital audio recordings. Although forensic audio has been addressed in several articles, the existing approaches are focused on analog magnetic recordings, which are less prevalent because of the large number of digital recorders available on the market (optical, solid state, hard disks, etc.). An approach based on digital signal processing, consisting of spread spectrum techniques for speech watermarking, is presented. This approach has the advantage that the authentication is based on the signal itself rather than the recording format. Thus, it is valid for the recording devices commonly used in police-controlled telephone intercepts. In addition, our proposal allows for the introduction of relevant information such as the recording date and time and all the relevant data (this is not always possible with classical systems). Our experimental results reveal that the speech watermarking procedure does not interfere in a significant way with subsequent forensic speaker identification.
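The core spread-spectrum step can be sketched as follows: each payload bit (a timestamp bit, for instance) modulates a key-dependent pseudo-random carrier that is added to the speech at low amplitude and later recovered by correlation. This generic sketch is not the authors' system; the chip length, amplitude, and key handling are illustrative assumptions.

    # Generic spread-spectrum embedding/extraction sketch (not the paper's system).
    import numpy as np

    def embed(signal, bits, key=1234, chip_len=4096, alpha=0.005):
        # signal must contain at least len(bits) * chip_len samples.
        rng = np.random.default_rng(key)
        marked = signal.astype(np.float64).copy()
        for i, bit in enumerate(bits):
            carrier = rng.choice([-1.0, 1.0], size=chip_len)
            start = i * chip_len
            marked[start:start + chip_len] += alpha * (1.0 if bit else -1.0) * carrier
        return marked

    def extract(signal, n_bits, key=1234, chip_len=4096):
        rng = np.random.default_rng(key)  # the same key regenerates the same carriers
        bits = []
        for i in range(n_bits):
            carrier = rng.choice([-1.0, 1.0], size=chip_len)
            segment = signal[i * chip_len:(i + 1) * chip_len]
            bits.append(int(np.dot(segment, carrier) > 0))
        return bits

Correlation detection works because speech is essentially uncorrelated with the pseudo-random carrier, so the embedded polarity dominates the inner product.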
Improved JPEG anti-forensics with better image visual quality and forensic undetectability.
Singh, Gurinder; Singh, Kulbir
2017-08-01
There is an immediate need to validate the authenticity of digital images because powerful image processing tools can easily manipulate digital image content without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of de-noising algorithms are proposed: one based on a constrained minimization of the total-variation energy and the other on a normalized weighting function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but at high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.
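The total-variation ingredient can be illustrated with a plain gradient-descent smoother that balances fidelity to the decompressed image against a smoothed TV penalty. The paper's constrained formulation, deblocking terms, and decalibration step are more elaborate than this sketch, and the parameter values here are arbitrary.

    # Plain gradient descent on a fidelity + smoothed total-variation energy.
    import numpy as np

    def tv_smooth(img, lam=0.1, step=0.2, iters=100, eps=1e-3):
        f = img.astype(np.float64)
        u = f.copy()
        for _ in range(iters):
            # Forward differences of u and the smoothed TV gradient (negative divergence).
            ux = np.roll(u, -1, axis=1) - u
            uy = np.roll(u, -1, axis=0) - u
            norm = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
            px, py = ux / norm, uy / norm
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= step * ((u - f) - lam * div)
        return u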
Open Source Live Distributions for Computer Forensics
NASA Astrophysics Data System (ADS)
Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele
Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.
Anti-forensics of chromatic aberration
NASA Astrophysics Data System (ADS)
Mayer, Owen; Stamm, Matthew C.
2015-03-01
Over the past decade, a number of information forensic techniques have been developed to identify digital image manipulation and falsification. Recent research has shown, however, that an intelligent forger can use anti-forensic countermeasures to disguise their forgeries. In this paper, an anti-forensic technique is proposed to falsify the lateral chromatic aberration present in a digital image. Lateral chromatic aberration corresponds to the relative contraction or expansion between an image's color channels that occurs due to a lens's inability to focus all wavelengths of light on the same point. Previous work has used localized inconsistencies in an image's chromatic aberration to expose cut-and-paste image forgeries. The anti-forensic technique presented in this paper operates by estimating the expected lateral chromatic aberration at an image location, then removing deviations from this estimate caused by tampering or falsification. Experimental results are presented that demonstrate that our anti-forensic technique can be used to effectively disguise evidence of an image forgery.
Novoselov, V P; Fedorov, S A
1999-01-01
A UNISCAN scanner with a PC was used at the department of medical criminology and at the histological department of the Novosibirsk Regional Bureau of Forensic Medical Expert Evaluations. The quality of images obtained by computers and digital photography is not inferior to that of traditional photographs.
Image processing in forensic pathology.
Oliver, W R
1998-03-01
Image processing applications in forensic pathology are becoming increasingly important. This article introduces basic concepts in image processing as applied to problems in forensic pathology in a non-mathematical context. Discussions of contrast enhancement, digital encoding, compression, deblurring, and other topics are presented.
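As a small example of the kind of contrast enhancement the article introduces, global histogram equalisation remaps pixel intensities through the cumulative histogram; this is a textbook operation, not a procedure taken from the article.

    # Global histogram equalisation for an 8-bit grayscale image.
    import numpy as np

    def equalize_histogram(img):
        # img: 2-D array of uint8 pixel values.
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[cdf > 0][0]
        denom = max(cdf[-1] - cdf_min, 1)
        lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
        return lut[img]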
Free will and psychiatric assessments of criminal responsibility: a parallel with informed consent.
Meynen, Gerben
2010-11-01
In some criminal cases a forensic psychiatrist is asked to make an assessment of the state of mind of the defendant at the time of the legally relevant act. A considerable number of people seem to hold that the basis for this assessment is that free will is required for legal responsibility, and that mental disorders can compromise free will. In fact, because of the alleged relationship between the forensic assessment and free will, researchers in forensic psychiatry also consider the complicated metaphysical discussions on free will relevant to the assessment. At the same time, there is concern about the lack of advancement with respect to clarifying the nature of the forensic assessment. In this paper I argue that, even if free will is considered relevant, there may be no need for forensic researchers to engage into metaphysical discussions on free will in order to make significant progress. I will do so, drawing a parallel between the assessment of criminal responsibility on the one hand, and the medical practice of obtaining informed consent on the other. I argue that also with respect to informed consent, free will is considered relevant, or even crucial. This is the parallel. Yet, researchers on informed consent have not entered into metaphysical debates on free will. Meanwhile, research on informed consent has made significant progress. Based on the parallel with respect to free will, and the differences with respect to research, I conclude that researchers on forensic assessment may not have to engage into metaphysical discussions on free will in order to advance our understanding of this psychiatric practice.
Digital and multimedia forensics justified: An appraisal on professional policy and legislation
NASA Astrophysics Data System (ADS)
Popejoy, Amy Lynnette
Recent progress in professional policy and legislation at the federal level in the field of forensic science constructs a transformation of new outcomes for future experts. An exploratory and descriptive qualitative methodology was used to critique and examine Digital and Multimedia Science (DMS) as a justified forensic discipline. Chapter I summarizes Recommendations 1, 2, and 10 of the National Academy of Sciences (NAS) Report 2009 regarding disparities and challenges facing the forensic science community. Chapter I also delivers the overall foundation and framework of this thesis, specifically how it relates to DMS. Chapter II expands on Recommendation 1: "The Promotion and Development of Forensic Science," and focuses chronologically on professional policy and legislative advances through 2014. Chapter III addresses Recommendation 2: "The Standardization of Terminology in Reporting and Testimony," and the issues of legal language and terminology, model laboratory reports, and expert testimony concerning DMS case law. Chapter IV analyzes Recommendation 10: "Insufficient Education and Training," identifying legal awareness for the digital and multimedia examiner to understand the role of the expert witness, the attorney, the judge and the admission of forensic science evidence in litigation in our criminal justice system. Finally, Chapter V studies three DME specific laboratories at the Texas state, county, and city level, concentrating on current practice and procedure.
FastID: Extremely Fast Forensic DNA Comparisons
2017-05-19
Ricke, Darrell O. (Bioengineering Systems & Technologies, Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, USA; Darrell.Ricke@ll.mit.edu)
Rapid analysis of DNA forensic samples can have a critical impact on … time sensitive investigations. Analysis of forensic DNA samples by massively parallel sequencing is creating the next gold standard for DNA
[Digital x-ray image processing as an aid in forensic medicine].
Buitrago-Tellez, C; Wenz, W; Friedrich, G
1992-02-01
Radiology plays an important role in the identification of unknown corpses. Positive radiographic identification by comparison with antemortem films is an established technique in this setting. Technical defects, together with poorly preserved films, sometimes make it difficult or even impossible to establish a confident comparison. Digital image processing after secondary digitization of ante- and postmortem films represents an important development and aid in forensic medicine. The application of this method is demonstrated in a single case.
2012-04-10
… Human Intelligence (HUMINT) and Signals Intelligence (SIGINT) could then also be prioritized and employed accordingly for optimal … responsibility for digital and multimedia forensics, and DIA responsibility for forensic intelligence activities and programs. The Army is also currently … aligning functional oversight of Forensics, Biometrics, Law Enforcement, Detainee Operations, and Physical Security under one overarching …
Detecting Copy Move Forgery In Digital Images
NASA Astrophysics Data System (ADS)
Gupta, Ashima; Saxena, Nisheeth; Vasistha, S. K.
2012-03-01
Several image manipulation software packages are available today, and manipulation of digital images has become a serious problem. There are many areas, such as medical imaging, digital forensics, journalism, and scientific publications, where image forgery can be carried out very easily. Determining whether a digital image is original or doctored, and finding the marks of tampering in it, is a big challenge. Detection methods can be very useful in image forensics as proof of the authenticity of a digital image. In this paper we propose a method to detect region duplication forgery by dividing the image into overlapping blocks and then searching for duplicated regions within the image.
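The block-matching idea can be sketched with exact matching of overlapping blocks through a hash table keyed on block content. Practical detectors replace raw pixel bytes with robust features (for instance DCT or PCA coefficients) so that matching survives compression and noise; the version below is only an illustration and is memory-hungry on large images.

    # Exact block-matching sketch for copy-move detection (illustrative only).
    import numpy as np

    def copy_move_candidates(img, block=8, min_shift=16):
        # Return pairs of top-left coordinates of identical blocks in a 2-D grayscale array.
        h, w = img.shape
        seen = {}
        matches = []
        for y in range(h - block + 1):
            for x in range(w - block + 1):
                key = img[y:y + block, x:x + block].tobytes()
                for (py, px) in seen.get(key, []):
                    # Ignore trivial matches between nearly co-located blocks.
                    if abs(py - y) + abs(px - x) >= min_shift:
                        matches.append(((py, px), (y, x)))
                seen.setdefault(key, []).append((y, x))
        return matches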
Multimedia Forensics Is Not Computer Forensics
NASA Astrophysics Data System (ADS)
Böhme, Rainer; Freiling, Felix C.; Gloe, Thomas; Kirchner, Matthias
The recent popularity of research on topics of multimedia forensics justifies reflections on the definition of the field. This paper devises an ontology that structures forensic disciplines by their primary domain of evidence. In this sense, both multimedia forensics and computer forensics belong to the class of digital forensics, but they differ notably in the underlying observer model that defines the forensic investigator’s view on (parts of) reality, which itself is not fully cognizable. Important consequences on the reliability of probative facts emerge with regard to available counter-forensic techniques: while perfect concealment of traces is possible for computer forensics, this level of certainty cannot be expected for manipulations of sensor data. We cite concrete examples and refer to established techniques to support our arguments.
SLR digital camera for forensic photography
NASA Astrophysics Data System (ADS)
Har, Donghwan; Son, Youngho; Lee, Sungwon
2004-06-01
Forensic photography, systematically established in the late 19th century by Alphonse Bertillon of France, has developed considerably over roughly 100 years. This development will accelerate further with advances in high technologies, in particular digital technology. This paper reviews three studies to answer the question: can the SLR digital camera replace traditional silver halide ultraviolet and infrared photography? 1. Comparison of the relative ultraviolet and infrared sensitivity of the SLR digital camera to silver halide photography. 2. How much is ultraviolet or infrared sensitivity improved when the UV/IR cutoff filter built into the SLR digital camera is removed? 3. Comparison of the relative sensitivity of CCD and CMOS sensors for ultraviolet and infrared. The test results showed that the SLR digital camera has a very low sensitivity for ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the UV/IR cutoff filter significantly improved the sensitivity for ultraviolet and infrared. Particularly for infrared, the sensitivity of the SLR digital camera was better than that of silver halide film. This shows the possibility of replacing silver halide ultraviolet and infrared photography with the SLR digital camera. Thus, the SLR digital camera seems to be useful for forensic photography, which deals with a large number of ultraviolet and infrared photographs.
[The procedure for documentation of digital images in forensic medical histology].
Putintsev, V A; Bogomolov, D V; Fedulova, M V; Gribunov, Iu P; Kul'bitskiĭ, B N
2012-01-01
This paper is devoted to the novel computer technologies employed in the study of histological preparations. These technologies make it possible to visualize digital images, structure the data obtained, and store the results in computer memory. The authors emphasize the necessity of properly documenting digital images obtained during forensic histological studies and propose a procedure for preparing electronic documents in conformity with the relevant technical and legal requirements. It is concluded that the use of digital images as a new study object obviates the drawbacks inherent in working with traditional preparations and permits a move from descriptive microscopy to quantitative analysis.
Advancing the science of forensic data management
NASA Astrophysics Data System (ADS)
Naughton, Timothy S.
2002-07-01
Many individual elements comprise a typical forensics process. Collecting evidence, analyzing it, and using the results to draw conclusions are all mutually distinct endeavors. Different physical locations and personnel are involved, juxtaposed against an acute need for security and data integrity. Using digital technologies and the Internet's ubiquity, these diverse elements can be conjoined using digital data as the common element. The result is a new data management process that can be applied to serve all elements of the community. The first step is recognition of a forensics lifecycle. Evidence gathering, analysis, storage, and use in legal proceedings are actually distinct parts of a single end-to-end process, and it is thus hypothesized that a single data system can accommodate each constituent phase using common network and security protocols. This paper introduces the idea of a web-based Central Data Repository. Its cornerstone is anywhere, anytime Internet upload, viewing, and report distribution. Archives exist indefinitely after being created, and high-strength security and encryption protect data and ensure that subsequent case file additions do not violate chain-of-custody or other handling provisions. Several legal precedents have been established for using digital information in courts of law, and in fact effective prosecution of cyber crimes absolutely relies on its use. An example is a US Department of Agriculture division's use of digital images to back up its inspection process, with pictures and information retained on secure servers to enforce the Perishable Agricultural Commodities Act. Forensics is a cumulative process. Secure, web-based data management solutions, such as the Central Data Repository postulated here, can support each process step. Logically marrying digital technologies with Internet accessibility should help nurture a thought process to explore alternatives that make forensics data accessible to authorized individuals, whenever and wherever they need it.
Problem Based Learning in Digital Forensics
ERIC Educational Resources Information Center
Irons, Alastair; Thomas, Paula
2016-01-01
The purpose of this paper is to compare and contrast the efforts of two universities to address the issue of providing computer forensics students with the opportunity to get involved in the practical aspects of forensic search and seizure procedures. The paper discusses the approaches undertaken by the University of Sunderland and the University…
32 CFR 236.5 - Cyber security information sharing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... forensics laboratory at DC3, which implements specialized handling procedures to maintain its accreditation as a digital and multimedia forensics laboratory. DC3 will maintain, control, and dispose of all...
32 CFR 236.5 - Cyber security information sharing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... multimedia forensics laboratory at DC3, which implements specialized handling procedures to maintain its accreditation as a digital and multimedia forensics laboratory. DC3 will maintain, control, and dispose of all...
32 CFR 236.5 - Cyber security information sharing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... multimedia forensics laboratory at DC3, which implements specialized handling procedures to maintain its accreditation as a digital and multimedia forensics laboratory. DC3 will maintain, control, and dispose of all...
The forensic validity of visual analytics
NASA Astrophysics Data System (ADS)
Erbacher, Robert F.
2008-01-01
The wider use of visualization and visual analytics in wide-ranging fields has led to the need for visual analytics capabilities to be legally admissible, especially when applied to digital forensics. This brings the need to consider legal implications when performing visual analytics, an issue not traditionally examined in visualization and visual analytics techniques and research. While digital data is generally admissible under the Federal Rules of Evidence [10][21], a comprehensive validation of the digital evidence is considered prudent. A comprehensive validation requires validation of the digital data under rules for authentication, hearsay, the best evidence rule, and privilege. Additional issues with digital data arise when exploring admissibility and the validity of what information was examined, to what extent, and whether the analysis process was sufficiently covered by a search warrant. For instance, a search warrant generally covers very narrow requirements as to what law enforcement is allowed to examine and acquire during an investigation. When searching a hard drive for child pornography, how admissible is evidence of an unrelated crime, e.g., drug dealing? This is further complicated by the concept of "in plain view": when performing an analysis of a hard drive, what would be considered "in plain view"? The purpose of this paper is to discuss the issues of digital forensics as they apply to visual analytics and to identify how visual analytics techniques fit into the digital forensics analysis process, how visual analytics techniques can improve the legal admissibility of digital data, and what research is needed to further improve this process. The goal of this paper is to open up consideration of legal ramifications among the visualization community; the author is not a lawyer and the discussion is not meant to be inclusive of all differences in laws between states and countries.
Investigators’ Guide to Sources of Information.
1997-04-01
identification purposes. As part of the 1995 Crime Bill, Congress mandated the Secret Service to provide forensic/technical assistance to federal, state, and ... missing and sexually exploited children. Much of the forensic assistance is used in the United States by the Secret Service's Forensic Services Division ... The forensic technology allows the document examiner to scan and digitize text and writings, and later search that material against
Challenge Paper: Validation of Forensic Techniques for Criminal Prosecution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erbacher, Robert F.; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.
2007-04-10
Abstract: As in many domains, there is increasing agreement in the user and research community that digital forensics analysts would benefit from the extension, development, and application of advanced techniques in performing large-scale and heterogeneous data analysis. Modern digital forensics analysis of cyber-crimes and cyber-enabled crimes often requires scrutiny of massive amounts of data. For example, a case involving network compromise across multiple enterprises might require forensic analysis of numerous sets of network logs and computer hard drives, potentially involving hundreds of gigabytes of heterogeneous data, or even terabytes or petabytes of data. Also, the goal of forensic analysis is not only to determine whether the illicit activity being considered is taking place, but also to identify the source of the activity and the full extent of the compromise or impact on the local network. Even after this analysis, there remains the challenge of using the results in subsequent criminal and civil processes.
A comprehensive review on adaptability of network forensics frameworks for mobile cloud computing.
Khan, Suleman; Shiraz, Muhammad; Wahab, Ainuddin Wahid Abdul; Gani, Abdullah; Han, Qi; Rahman, Zulkanain Bin Abdul
2014-01-01
Network forensics enables investigation and identification of network attacks through the retrieved digital content. The proliferation of smartphones and cost-effective universal data access through the cloud has made Mobile Cloud Computing (MCC) a congenital target for network attacks. However, limitations in carrying out forensics in MCC are tied to the autonomous cloud hosting companies and their policies restricting access to the digital content on the back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to MCC. Specifically, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.
Hayes, S; Taylor, R; Paterson, A
2005-12-01
Forensic facial approximation involves building a likeness of the head and face on the skull of an unidentified individual, with the aim that public broadcast of the likeness will trigger recognition in those who knew the person in life. This paper presents an overview of the collaborative practice between Ronn Taylor (Forensic Sculptor to the Victorian Institute of Forensic Medicine) and Detective Sergeant Adrian Paterson (Victoria Police Criminal Identification Squad). This collaboration involves clay modelling to determine an approximation of the person's head shape and feature location, with surface texture and more speculative elements being rendered digitally onto an image of the model. The advantages of this approach are that through clay modelling anatomical contouring is present, digital enhancement resolves some of the problems of visual perception of a representation, such as edge and shape determination, and the approximation can be easily modified as and when new information is received.
Van Neste, Christophe; Van Criekinge, Wim; Deforce, Dieter; Van Nieuwerburgh, Filip
2016-01-01
It is difficult to predict if and when massively parallel sequencing of forensic STR loci will replace capillary electrophoresis as the new standard technology in forensic genetics. The main benefits of sequencing are increased multiplexing scales and SNP detection. There is not yet a consensus on how sequenced profiles should be reported. We present the Forensic Loci Allele Database (FLAD) service, made freely available at http://forensic.ugent.be/FLAD/. It offers permanent identifiers for sequenced forensic alleles (STR or SNP) and their microvariants for use in forensic allele nomenclature. Analogous to GenBank, its aim is to provide permanent identifiers for forensically relevant allele sequences. Researchers who are developing forensic sequencing kits or performing population studies can register on http://forensic.ugent.be/FLAD/ and add loci and allele sequences through a short and simple application interface (API). Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Ribéreau-Gayon, Agathe; Rando, Carolyn; Morgan, Ruth M; Carter, David O
2018-05-01
In the context of increased scrutiny of the methods in forensic sciences, it is essential to ensure that the approaches used in forensic taphonomy to measure decomposition and estimate the postmortem interval are underpinned by robust evidence-based data. Digital photographs are an important source of documentation in forensic taphonomic investigations, but the suitability of current approaches for photographs, rather than real-time remains, is poorly studied, which can undermine accurate forensic conclusions. The present study aimed to investigate the suitability of 2D colour digital photographs for evaluating decomposition of exposed human analogues (Sus scrofa domesticus) in a tropical savanna environment (Hawaii), using two published scoring methods: Megyesi et al., 2005 and Keough et al., 2017. It was found that there were significant differences between the real-time and photograph decomposition scores when the Megyesi et al. method was used. However, the Keough et al. method applied to photographs reflected real-time decomposition more closely and thus appears more suitable for evaluating pig decomposition from 2D photographs. The findings indicate that the type of scoring method used has a significant impact on the ability to accurately evaluate the decomposition of exposed pig carcasses from photographs. It was further identified that photographic taphonomic analysis can reach high inter-observer reproducibility. These novel findings are of significant importance for the forensic sciences, as they highlight the potential for high-quality photograph coverage to provide useful complementary information for forensic taphonomic investigation. New recommendations to develop robust, transparent approaches adapted to photographs in forensic taphonomy are suggested based on these findings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
A review of bioinformatic methods for forensic DNA analyses.
Liu, Yao-Yuan; Harbison, SallyAnn
2018-03-01
Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.
Thali, Michael J; Schweitzer, Wolf; Yen, Kathrin; Vock, Peter; Ozdoba, Christoph; Spielvogel, Elke; Dirnhofer, Richard
2003-03-01
The goal of this study was the full-body documentation of a gunshot wound victim with multislice helical computed tomography for subsequent comparison with the findings of the standard forensic autopsy. Complete volume data of the head, neck, and trunk were acquired by use of two acquisitions of less than 1 minute of total scanning time. Subsequent two-dimensional multiplanar reformations and three-dimensional shaded surface display reconstructions helped document the gunshot-created skull fractures and brain injuries, including the wound track, and the intracerebral bone fragments. Computed tomography also demonstrated intracardiac air embolism and pulmonary aspiration of blood resulting from bullet wound-related trauma. The "digital autopsy," even when postprocessing time was added, was more rapid than the classic forensic autopsy and, based on the nondestructive approach, offered certain advantages in comparison with the forensic autopsy.
Bornik, Alexander; Urschler, Martin; Schmalstieg, Dieter; Bischof, Horst; Krauskopf, Astrid; Schwark, Thorsten; Scheurer, Eva; Yen, Kathrin
2018-06-01
Three-dimensional (3D) crime scene documentation using 3D scanners and medical imaging modalities like computed tomography (CT) and magnetic resonance imaging (MRI) are increasingly applied in forensic casework. Together with digital photography, these modalities enable comprehensive and non-invasive recording of forensically relevant information regarding injuries/pathologies inside the body and on its surface. Furthermore, it is possible to capture traces and items at crime scenes. Such digitally secured evidence has the potential to similarly increase case understanding by forensic experts and non-experts in court. Unlike photographs and 3D surface models, images from CT and MRI are not self-explanatory. Their interpretation and understanding requires radiological knowledge. Findings in tomography data must not only be revealed, but should also be jointly studied with all the 2D and 3D data available in order to clarify spatial interrelations and to optimally exploit the data at hand. This is technically challenging due to the heterogeneous data representations including volumetric data, polygonal 3D models, and images. This paper presents a novel computer-aided forensic toolbox providing tools to support the analysis, documentation, annotation, and illustration of forensic cases using heterogeneous digital data. Conjoint visualization of data from different modalities in their native form and efficient tools to visually extract and emphasize findings help experts to reveal unrecognized correlations and thereby enhance their case understanding. Moreover, the 3D case illustrations created for case analysis represent an efficient means to convey the insights gained from case analysis to forensic non-experts involved in court proceedings like jurists and laymen. The capability of the presented approach in the context of case analysis, its potential to speed up legal procedures and to ultimately enhance legal certainty is demonstrated by introducing a number of representative forensic cases. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
Sexual dimorphism in finger ridge breadth measurements: a tool for sex estimation from fingerprints.
Mundorff, Amy Z; Bartelink, Eric J; Murad, Turhon A
2014-07-01
Previous research has demonstrated significant sexual dimorphism in friction ridge skin characteristics. This study uses a novel method for measuring sexual dimorphism in finger ridge breadths to evaluate its utility as a sex estimation method from an unknown fingerprint. Beginning and ending in a valley, the width of ten parallel ridges with no obstructions or minutiae was measured in a sample of 250 males and 250 females (N = 500). The results demonstrate statistically significant differences in ridge breadth between males and females (p < 0.001), with classification accuracy for each digit varying from 83.2% to 89.3%. Classification accuracy for the pooled finger samples was 83.9% for the right hand and 86.2% for the left hand, which is applicable in cases where the digit number cannot be determined. Weight, stature, and to a lesser degree body mass index also correlate significantly with ridge breadth and account for the degree of overlap between males and females. © 2014 American Academy of Forensic Sciences.
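As a toy illustration of the kind of univariate sex classification implied by these results, the sketch below assigns sex from mean ridge breadth using a midpoint cutoff between group means. All numerical values are hypothetical placeholders, not the study's measurements.

# Minimal sketch: classify sex from mean ridge breadth with a midpoint cutoff.
# The breadth values below are hypothetical, not taken from the paper.
import numpy as np

male_breadths = np.array([0.52, 0.55, 0.50, 0.57])      # mm, hypothetical
female_breadths = np.array([0.43, 0.46, 0.44, 0.41])    # mm, hypothetical

cutoff = (male_breadths.mean() + female_breadths.mean()) / 2.0

def estimate_sex(mean_ridge_breadth_mm):
    """Wider ridges are treated as male under this toy cutoff rule."""
    return "male" if mean_ridge_breadth_mm > cutoff else "female"

print(round(cutoff, 3), estimate_sex(0.49))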
Garamendi, Pedro M; Landa, Maria I; Botella, Miguel C; Alemán, Inmaculada
2011-01-01
In recent years, there has been renewed interest in the forensic sciences in forensic age estimation of living subjects by means of radiological methods. This research was conducted on digital thorax X-rays to test the usefulness of certain radiological changes in the clavicle and first rib. The sample consisted of a total of 123 subjects of Spanish origin (61 men and 62 women; age range: 5-75 years). From all subjects, a thorax posterior-anterior radiograph was obtained in digital format. Fusion of the medial epiphyses of the clavicle was scored using Schmeling's system, and ossification of the costal cartilage of the first rib using Michelson's system. Degree of ossification and epiphyseal fusion were analyzed in relation to the known age and sex of the subjects. The results give a minimum age of over 20 years for full fusion of the medial epiphysis of the clavicle (Stages 4 and 5). Concerning the first rib, all subjects with the final Stage 3 of ossification were above 25 years of age. These results suggest that first rib ossification might become an additional method to those so far recommended for forensic age estimation in subjects around 21 years of age. New research would be desirable to confirm this suggestion. © 2010 American Academy of Forensic Sciences.
Chain of evidence generation for contrast enhancement in digital image forensics
NASA Astrophysics Data System (ADS)
Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela
2010-01-01
The quality of images obtained by digital cameras has improved greatly since the early days of digital photography. Unfortunately, it is not unusual in image forensics to encounter wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also to backlight conditions. To bring out otherwise invisible details, stretching of the image contrast is required. The forensic rules for producing evidence require complete documentation of the processing steps, enabling replication of the entire process. The automation of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents an automatic procedure to find contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step that extracts the features of the image and selects correction parameters. The parameters are then saved through JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in forensic image analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
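The following Python sketch illustrates the general idea of automatically deriving contrast-stretch parameters and logging them so the correction can be replayed for documentation; it is not the authors' Photoshop/JavaScript pipeline, and the percentile choice and file names are assumptions.

# Illustrative sketch: derive contrast-stretch parameters automatically and
# record them so the exact same correction can be replayed later.
import json
import numpy as np

def stretch_parameters(img, low_pct=1.0, high_pct=99.0):
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return {"low": float(lo), "high": float(hi)}

def apply_stretch(img, params):
    lo, hi = params["low"], params["high"]
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

img = np.random.randint(40, 120, (480, 640), dtype=np.uint8)  # stand-in image
params = stretch_parameters(img)
corrected = apply_stretch(img, params)
with open("enhancement_log.json", "w") as f:   # replayable record of settings
    json.dump(params, f, indent=2)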
Van Neste, Christophe; Gansemans, Yannick; De Coninck, Dieter; Van Hoofstat, David; Van Criekinge, Wim; Deforce, Dieter; Van Nieuwerburgh, Filip
2015-03-01
Routine use of massively parallel sequencing (MPS) for forensic genomics is on the horizon. In the last few years, several algorithms and workflows have been developed to analyze forensic MPS data. However, none have yet been tailored to the needs of the forensic analyst who does not possess an extensive bioinformatics background. We developed our previously published forensic MPS data analysis framework MyFLq (My-Forensic-Loci-queries) into an open-source, user-friendly, web-based application. It can be installed as a standalone web application, or run directly from the Illumina BaseSpace environment. In the former, laboratories can keep their data on-site, while in the latter, data from forensic samples sequenced on an Illumina sequencer can be uploaded to BaseSpace during acquisition and subsequently analyzed using the published MyFLq BaseSpace application. Additional features were implemented, such as an interactive graphical report of the results, an interactive threshold selection bar, and an allele length-based analysis in addition to the sequence-based analysis. Practical use of the application is demonstrated through the analysis of four 16-plex short tandem repeat (STR) samples, showing the complementarity between the sequence- and length-based analyses of the same MPS data. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Verhoff, Marcel A; Kettner, Mattias; Lászik, András; Ramsthaler, Frank
2012-09-01
A problem encountered by medical examiners is that they have to assess injuries that have already been medically treated. Thus, they have to base their reports on clinical forensic examinations performed hours or days after an injury was sustained, or even base their assessment solely on information gleaned from medical files. In both scenarios, the forensic examiner has to rely heavily on the first responder's documentation of the original injury pattern. Medical priority will be to immediately treat a patient's injuries, and the first responder may, in addition, initially be unaware of a possibly criminal origin of an injury. As a result, the documentation of injuries is frequently of limited value for forensic purposes. This situation could be improved if photographic records were briefly made of injuries before they were treated. German-language medicolegal, criminal, and photography journals and books were selectively searched with the help of PubMed and other databases. In addition, the authors' experiences in creating and evaluating photographic records for clinical forensic use were assessed. This paper is an aid to creating photographic records of sufficient quality for forensic purposes. The options provided by digital photography in particular make this endeavor feasible even in a clinical setting. In addition, our paper illuminates some technical aspects of creating and archiving photographic records for forensic use, and addresses possible error sources. With the requisite technical background knowledge, injuries can be photographically recorded to forensic standards during patient care.
Digital Forensics Using Local Signal Statistics
ERIC Educational Resources Information Center
Pan, Xunyu
2011-01-01
With the rapid growth of the Internet and the popularity of digital imaging devices, digital imagery has become our major information source. Meanwhile, the development of digital manipulation techniques employed by most image editing software brings new challenges to the credibility of photographic images as the definite records of events. We…
Acharya, Ashith B
2014-05-01
Dentin translucency measurement is an easy yet relatively accurate approach to postmortem age estimation. Translucency area represents a two-dimensional change and may reflect age variations better than length. Manually measuring area is challenging and this paper proposes a new digital method using commercially available computer hardware and software. Area and length were measured on 100 tooth sections (age range, 19-82 years) of 250 μm thickness. Regression analysis revealed lower standard error of estimate and higher correlation with age for length than for area (R = 0.62 vs. 0.60). However, test of regression formulae on a control sample (n = 33, 21-85 years) showed smaller mean absolute difference (8.3 vs. 8.8 years) and greater frequency of smaller errors (73% vs. 67% age estimates ≤ ± 10 years) for area than for length. These suggest that digital area measurements of root translucency may be used as an alternative to length in forensic age estimation. © 2014 American Academy of Forensic Sciences.
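A minimal sketch of the regression-and-validation workflow described above is given below; the training and control arrays are invented stand-ins, not the paper's measurements.

# Toy sketch: fit a linear age model on translucency area, then evaluate the
# mean absolute difference on a separate control sample. Values are invented.
import numpy as np

area_mm2 = np.array([8.0, 12.5, 15.0, 20.3, 24.8, 30.1])   # training sample
age_years = np.array([25, 34, 41, 52, 60, 71])

slope, intercept = np.polyfit(area_mm2, age_years, 1)        # linear model

control_area = np.array([10.0, 18.2, 27.5])
control_age = np.array([30, 47, 66])
predicted = slope * control_area + intercept
mad = np.mean(np.abs(predicted - control_age))               # mean absolute difference
print(f"estimated ages: {predicted.round(1)}, MAD = {mad:.1f} years")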
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described.
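One common way to derive a noise-based analytical threshold is to place it a fixed number of standard deviations above the mean background read count, as sketched below; the counts and the factor k are hypothetical, and the paper's own noise definition may differ.

# Sketch of a noise-based analytical threshold for sequencing read counts.
# The noise counts and k are hypothetical placeholders.
import numpy as np

noise_reads = np.array([3, 5, 2, 7, 4, 6, 3, 5, 8, 4])  # background noise counts
k = 3.0
analytical_threshold = noise_reads.mean() + k * noise_reads.std(ddof=1)

def call_allele(read_count):
    """Treat a read count as a true signal only if it clears the threshold."""
    return read_count >= analytical_threshold

print(round(analytical_threshold, 1), call_allele(40), call_allele(9))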
Van Neste, Christophe; Vandewoestyne, Mado; Van Criekinge, Wim; Deforce, Dieter; Van Nieuwerburgh, Filip
2014-03-01
Forensic scientists are currently investigating how to transition from capillary electrophoresis (CE) to massively parallel sequencing (MPS) for the analysis of forensic DNA profiles. MPS offers several advantages over CE, such as virtually unlimited multiplexing of loci, combining both short tandem repeat (STR) and single nucleotide polymorphism (SNP) loci, small amplicons without the constraints of size separation, more discrimination power, deep mixture resolution, and sample multiplexing. We present our bioinformatic framework My-Forensic-Loci-queries (MyFLq) for analysis of MPS forensic data. For allele calling, the framework uses a MySQL reference allele database with automatically determined regions of interest (ROIs) obtained by a generic maximal flanking algorithm, which makes it possible to use any STR or SNP forensic locus. Python scripts were designed to automatically make allele calls starting from raw MPS data. We also present a method to assess the usefulness and overall performance of a forensic locus with respect to MPS, as well as methods to estimate whether an unknown allele, whose sequence is not present in the MySQL database, is in fact a new allele or a sequencing error. The MyFLq framework was applied to an Illumina MiSeq dataset of a forensic Illumina amplicon library, generated from multilocus STR polymerase chain reaction (PCR) on both single-contributor samples and multiple-person DNA mixtures. Although the multilocus PCR was not yet optimized for MPS in terms of amplicon length or locus selection, the results were excellent for most loci, showing a high signal-to-noise ratio, correct allele calls, and a low limit of detection for minor DNA contributors in mixed DNA samples. Technically, forensic MPS affords great promise for routine implementation in forensic genomics. The method is also applicable to adjacent disciplines such as molecular autopsy in legal medicine and mitochondrial DNA research. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
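As a simplified illustration of sequence-based allele calling with a noise cut-off (not the MyFLq implementation itself), the sketch below counts reads matching known allele sequences at a locus and keeps those above a relative threshold; sequences and the threshold are made up.

# Simplified allele-calling sketch: count reads per known allele sequence and
# keep those above a relative abundance threshold. Unknown low-count sequences
# are treated as noise here. All sequences below are hypothetical.
from collections import Counter

def call_alleles(reads, known_alleles, min_fraction=0.05):
    counts = Counter(reads)
    total = sum(counts.values()) or 1
    return {seq: n for seq, n in counts.items()
            if seq in known_alleles and n / total >= min_fraction}

reads = ["ACGTACGT"] * 120 + ["ACGTACGTACGT"] * 95 + ["ACGTACGA"] * 3  # last: noise
print(call_alleles(reads, {"ACGTACGT", "ACGTACGTACGT"}))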
A parallel form of the Gudjonsson Suggestibility Scale.
Gudjonsson, G H
1987-09-01
The purpose of this study is twofold: (1) to present a parallel form of the Gudjonsson Suggestibility Scale (GSS, Form 1); (2) to study test-retest reliabilities of interrogative suggestibility. Three groups of subjects were administered the two suggestibility scales in a counterbalanced order. Group 1 (28 normal subjects) and Group 2 (32 'forensic' patients) completed both scales within the same testing session, whereas Group 3 (30 'forensic' patients) completed the two scales between one week and eight months apart. All the correlations were highly significant, giving support for high 'temporal consistency' of interrogative suggestibility.
An Evidence-Based Forensic Taxonomy of Windows Phone Communication Apps.
Cahyani, Niken Dwi Wahyu; Martini, Ben; Choo, Kim-Kwang Raymond; Ab Rahman, Nurul Hidayah; Ashman, Helen
2018-05-01
Communication apps can be an important source of evidence in a forensic investigation (e.g., in the investigation of a drug trafficking or terrorism case where the communications apps were used by the accused persons during the transactions or planning activities). This study presents the first evidence-based forensic taxonomy of Windows Phone communication apps, using an existing two-dimensional Android forensic taxonomy as a baseline. Specifically, 30 Windows Phone communication apps, including Instant Messaging (IM) and Voice over IP (VoIP) apps, are examined. Artifacts extracted using physical acquisition are analyzed, and seven digital evidence objects of forensic interest are identified, namely: Call Log, Chats, Contacts, Locations, Installed Applications, SMSs and User Accounts. Findings from this study would help to facilitate timely and effective forensic investigations involving Windows Phone communication apps. © 2017 American Academy of Forensic Sciences.
On the added value of forensic science and grand innovation challenges for the forensic community.
van Asten, Arian C
2014-03-01
In this paper, the insights and results are presented of a long-term and ongoing improvement effort within the Netherlands Forensic Institute (NFI) to establish a valuable innovation programme. From the overall perspective of the role and use of forensic science in the criminal justice system, the concepts of Forensic Information Value Added (FIVA) and Forensic Information Value Efficiency (FIVE) are introduced. From these concepts, the key factors determining the added value of forensic investigations are discussed: evidential value, relevance, quality, speed, and cost. By unravelling the added value of forensic science and combining this with future needs and scientific and technological developments, six forensic grand challenges are introduced: i) Molecular Photo-fitting; ii) Chemical Imaging, Profiling and Age Estimation of Finger Marks; iii) Advancing Forensic Medicine; iv) Objective Forensic Evaluation; v) the Digital Forensic Service Centre; and vi) Real-time In-Situ Chemical Identification. Finally, models for forensic innovation are presented that could lead to major international breakthroughs on all six themes within a five-year time span. This could cause a step change in the added value of forensic science and would make forensic investigative methods even more valuable than they already are today. © 2013. Published by Elsevier Ireland Ltd on behalf of Forensic Science Society. All rights reserved.
Review of passive-blind detection in digital video forgery based on sensing and imaging techniques
NASA Astrophysics Data System (ADS)
Tao, Junjie; Jia, Lili; You, Ying
2016-01-01
Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is therefore important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail is intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of various existing methods in the literature; much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.
Kloosterman, Ate; Mapes, Anna; Geradts, Zeno; van Eijk, Erwin; Koper, Carola; van den Berg, Jorrit; Verheij, Saskia; van der Steen, Marcel; van Asten, Arian
2015-01-01
In this paper, the importance of modern technology in forensic investigations is discussed. Recent technological developments are creating new possibilities to perform robust scientific measurements and studies outside the controlled laboratory environment. The benefits of real-time, on-site forensic investigations are manifold and such technology has the potential to strongly increase the speed and efficacy of the criminal justice system. However, such benefits are only realized when quality can be guaranteed at all times and findings can be used as forensic evidence in court. At the Netherlands Forensic Institute, innovation efforts are currently undertaken to develop integrated forensic platform solutions that allow for the forensic investigation of human biological traces, the chemical identification of illicit drugs and the study of large amounts of digital evidence. These platforms enable field investigations, yield robust and validated evidence and allow for forensic intelligence and targeted use of expert capacity at the forensic institutes. This technological revolution in forensic science could ultimately lead to a paradigm shift in which a new role of the forensic expert emerges as developer and custodian of integrated forensic platforms. PMID:26101289
Introduction to Forensics and the Use of the Helix Free Forensic Tool
2012-01-01
computer system belongs to and his personal activities, interests, and hobbies. An example presented in the paper was that pedophiles might keep...digital records like pictures or video of their delinquent activities. As we mentioned before, we must keep an accurate record of our investigation
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression of 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which shows the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the first-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected differences among them could be used for further analysis to estimate the unknown JPEG2000 compression rate.
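The core first-digit analysis can be sketched as follows: estimate the leading-digit distribution of a set of (e.g. DWT) coefficients and measure its deviation from Benford's law. The divergence used here is a plain sum of squared differences for illustration; the paper's exact divergence factor may be defined differently, and the coefficient array is a synthetic stand-in.

# Sketch: leading-digit distribution of coefficients vs. Benford's law.
import numpy as np

def first_digits(values):
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10 ** np.floor(np.log10(v))).astype(int)   # leading digit 1-9

def benford_divergence(values):
    digits = first_digits(values)
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    benford = np.log10(1 + 1 / np.arange(1, 10))
    return float(np.sum((observed - benford) ** 2))         # illustrative measure

coeffs = np.random.laplace(scale=12.0, size=100_000)  # stand-in for DWT coefficients
print(benford_divergence(coeffs))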
Decomposed Photo Response Non-Uniformity for Digital Forensic Analysis
NASA Astrophysics Data System (ADS)
Li, Yue; Li, Chang-Tsun
The last few years have seen the application of Photo Response Non-Uniformity noise (PRNU) - a unique stochastic fingerprint of image sensors - to various types of digital forensic investigations, such as source device identification and integrity verification. In this work we propose a new way of extracting the PRNU noise pattern, called Decomposed PRNU (DPRNU), by exploiting the difference between the physical and artificial color components of photos taken by digital cameras that use a Color Filter Array to interpolate artificial components from physical ones. Experimental results presented in this work show the superiority of the proposed DPRNU over the commonly used version. We also propose a new performance metric, the Corrected Positive Rate (CPR), to evaluate the performance of the common PRNU and the proposed DPRNU.
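A simplified PRNU-style sketch (not the decomposed DPRNU method itself) is shown below: the sensor fingerprint is estimated as the average noise residual of several images from one camera, and a query residual is matched by normalized correlation. A Gaussian filter stands in for the wavelet denoiser usually employed.

# Simplified PRNU-style sketch: fingerprint = mean noise residual, matching by
# normalized correlation. Images below are random stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=2)   # denoiser stand-in

def fingerprint(images):
    return np.mean([residual(im) for im in images], axis=0)

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

cam_imgs = [np.random.randint(0, 256, (128, 128)) for _ in range(8)]  # stand-ins
fp = fingerprint(cam_imgs)
print(correlation(residual(cam_imgs[0]), fp))  # higher for the same sensor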
Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.
Grigoras, Catalin
2007-04-11
This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence and its use in forensic IT and telecommunication analysis. A brief description is given of the different ENF types and the phenomena that determine ENF variations. In most situations, the visual inspection of spectrograms and comparison with an ENF database are enough to reach a non-authenticity opinion. A more detailed investigation, in the time domain, requires short-time-window measurements and analyses. The stability of the ENF over geographical distances has been established by comparison of synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with secret surveillance systems, a digitized audio/video recording, and a TV broadcast reportage. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.
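A rough sketch of one way to extract an ENF track for comparison against a reference database is given below: compute a spectrogram, restrict it to a narrow band around the nominal mains frequency (50 Hz here, 60 Hz in other networks), and take the peak frequency per window. All parameters are illustrative, not taken from the article.

# Sketch: per-window mains-frequency estimate from an audio recording.
import numpy as np
from scipy.signal import spectrogram

def enf_track(audio, fs, nominal=50.0, band=1.0):
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=fs * 2)   # 2-second windows
    sel = (f >= nominal - band) & (f <= nominal + band)
    return t, f[sel][np.argmax(sxx[sel, :], axis=0)]

fs = 8000
t = np.arange(0, 60, 1 / fs)
audio = 0.01 * np.sin(2 * np.pi * 50.02 * t) + 0.1 * np.random.randn(t.size)
times, track = enf_track(audio, fs)
print(track[:5])   # ENF estimates to be compared against a reference database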
Baldasso, Rosane Pérez; Tinoco, Rachel Lima Ribeiro; Vieira, Cristina Saft Matos; Fernandes, Mário Marques; Oliveira, Rogério Nogueira
2016-10-01
The process of forensic facial analysis may be founded on several scientific techniques and imaging modalities, such as digital signal processing, photogrammetry, and craniofacial anthropometry. However, one of the main limitations in this analysis is the comparison of images acquired with different angles of incidence. The present study aimed to explore a potential approach for the correction of the planar perspective projection (PPP) in geometric structures traced from the human face. A technique for the correction of the PPP was calibrated on photographs of two geometric structures obtained with angles of incidence distorted to 80°, 60° and 45°. The technique was performed using ImageJ® 1.46r (National Institutes of Health, Bethesda, Maryland). The corrected images were compared with photographs of the same object obtained at 90° (reference). In a second step, the technique was validated on a digital human face created using the MakeHuman® 1.0.2 (Free Software Foundation, Massachusetts, USA) and Blender® 2.75 (Blender® Foundation, Amsterdam, the Netherlands) software packages. The images registered with angular distortion presented a gradual decrease in height when compared to the reference. The digital technique for the correction of the PPP is a valuable tool for forensic applications using photographic imaging modalities, such as forensic facial analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
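A hedged sketch of planar perspective correction via a homography is shown below, in the spirit of re-projecting an obliquely photographed planar structure to a frontal view; the point correspondences and image are synthetic placeholders, and this is not the ImageJ procedure used in the study.

# Sketch: map four corners of an obliquely imaged planar structure to a
# frontal rectangle using a perspective transform. All points are placeholders.
import cv2
import numpy as np

# synthetic stand-in for an obliquely photographed planar object
img = np.zeros((480, 640, 3), dtype=np.uint8)
corners = np.float32([[120, 80], [540, 95], [560, 400], [100, 380]])
cv2.polylines(img, [corners.astype(np.int32).reshape(-1, 1, 2)], True, (255, 255, 255), 2)

src = corners                                              # distorted corners
dst = np.float32([[0, 0], [450, 0], [450, 320], [0, 320]]) # frontal target
H = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, H, (450, 320))
cv2.imwrite("corrected_view.png", corrected)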
NASA Astrophysics Data System (ADS)
Clausing, Eric; Vielhauer, Claus
2014-02-01
Locksmith forensics is an important area in crime scene forensics. Due to new optical, contactless, nanometer-range sensing technology, such traces can be captured, digitized, and analyzed more easily, allowing a complete digital forensic investigation. In this paper we present a significantly improved approach for the detection and segmentation of toolmarks on the surfaces of locking cylinder components (using the example of the locking cylinder component 'key pin') acquired by a 3D Confocal Laser Scanning Microscope. The improved approach is based on our prior work [1], which used a block-based classification approach with textural features and achieved a solid detection rate of 75-85% for toolmarks originating from illegal opening methods. Here, we improve, expand, and fuse this prior approach with additional features from acquired surface topography data and color data, and with an image processing approach using adapted Gabor filters. In particular, we are able to raise the detection and segmentation rates above 90% on our test set of 20 key pins with approximately 700 individual toolmark traces from four different opening methods. We can provide a precise pixel-based segmentation, as opposed to the rather imprecise segmentation of our prior block-based approach, and since the two additional data types (color and especially topography) require specific pre-processing, we furthermore propose an adequate approach for this purpose.
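As an illustrative fragment only, the sketch below applies Gabor filtering to a surface scan and thresholds the response magnitude, standing in for the texture-based toolmark segmentation discussed above; frequencies, orientations, and the threshold are arbitrary, and the paper additionally fuses topography and colour data.

# Sketch: oriented Gabor responses on a surface scan, thresholded to a mask.
import numpy as np
from skimage.filters import gabor

surface = np.random.rand(256, 256)            # stand-in for a CLSM intensity map

responses = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = gabor(surface, frequency=0.15, theta=theta)
    responses.append(np.hypot(real, imag))    # response magnitude per orientation

energy = np.max(responses, axis=0)            # strongest orientation response
toolmark_mask = energy > np.percentile(energy, 95)
print(toolmark_mask.mean())                   # fraction of pixels flagged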
Forensic characterization of camcorded movies: digital cinema vs. celluloid film prints
NASA Astrophysics Data System (ADS)
Rolland-Nevière, Xavier; Chupeau, Bertrand; Doërr, Gwenaël; Blondé, Laurent
2012-03-01
Digital camcording on the premises of cinema theaters is the main source of pirate copies of newly released movies. To trace such recordings, watermarking systems are exploited so that each projection is unique and thus identifiable. The forensic analysis to recover these marks differs between digital and legacy cinemas. To avoid running both detectors, a reliable oracle discriminating between cams originating from analog or digital projections is required. This article details a classification framework relying on three complementary features: the spatial uniformity of the screen illumination, the vertical (in)stability of the projected image, and the luminance artifacts due to the interplay between the display and acquisition devices. The system has been tuned with cams captured in a controlled environment and benchmarked against a medium-sized dataset (61 samples) composed of real-life pirate cams. Reported experimental results demonstrate that such a framework yields over 80% classification accuracy.
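A toy sketch of the classification stage follows: the three per-sample features (illumination uniformity, vertical instability, luminance artifact strength) feed a binary classifier separating digital-cinema cams from film-print cams. Feature values and labels are synthetic placeholders, and logistic regression is used merely as an example classifier.

# Toy classifier on three cam features; all values below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.82, 0.10, 0.30], [0.85, 0.12, 0.25],   # digital projections
              [0.55, 0.45, 0.70], [0.60, 0.40, 0.65]])  # film projections
y = np.array([0, 0, 1, 1])                              # 0 = digital, 1 = film

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.80, 0.15, 0.28]]))                # expected: digital (0)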
Digital image forensics for photographic copying
NASA Astrophysics Data System (ADS)
Yin, Jing; Fang, Yanmei
2012-03-01
Image display technology has developed greatly over the past few decades, which makes it possible to recapture high-quality images from a display medium such as a liquid crystal display (LCD) screen or a printed paper. Recaptured images are not regarded as a separate image class in current digital image forensics research, yet the content of a recaptured image may have been tampered with. In this paper, two sets of features, based on noise and on the traces of double JPEG compression, are proposed to identify such recaptured images. Experimental results show that the proposed features perform well for detecting photographic copying.
Aghayev, Emin; Staub, Lukas; Dirnhofer, Richard; Ambrose, Tony; Jackowski, Christian; Yen, Kathrin; Bolliger, Stephan; Christe, Andreas; Roeder, Christoph; Aebi, Max; Thali, Michael J
2008-04-01
Recent developments in clinical radiology have resulted in additional developments in the field of forensic radiology. After the implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties in the validation and analysis of the acquired data were experienced. To address this problem, and to allow comparison of autopsy and radiological data, a centralized database using internet technology was created for forensic cases. The main goals of the database are (1) creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) establishing a basis for validation of forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) providing a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI to forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging concerning certain forensic features by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.
Who's Blogging Now?: Linguistic Features and Authorship Analysis in Sports Blogs
ERIC Educational Resources Information Center
Cox, Taylor
2017-01-01
The field of authorship determination, previously largely falling under the umbrella of literary analysis but recently becoming a large subfield of forensic linguistics, has grown substantially over the last two decades. As its body of research and its record of successful forensic application continue to grow, this growth is paralleled by the…
Forensic detection of noise addition in digital images
NASA Astrophysics Data System (ADS)
Cao, Gang; Zhao, Yao; Ni, Rongrong; Ou, Bo; Wang, Yongbin
2014-03-01
We propose a technique to detect the global addition of noise to a digital image. As an anti-forensics tool, noise addition is typically used to disguise the visual traces of image tampering or to remove the statistical artifacts left behind by other operations. As such, the blind detection of noise addition has become imperative as well as beneficial for authenticating image content and recovering the image processing history, which is the goal of general forensics techniques. Specifically, special image blocks, including constant and strip ones, are used to construct the features for identifying noise addition manipulation. The influence of noising on the blockwise pixel value distribution is formulated and analyzed formally. A methodology of detectability recognition followed by a binary decision is proposed to ensure the applicability and reliability of noising detection. Extensive experimental results demonstrate the efficacy of the proposed noising detector.
An introduction to computer forensics.
Furneaux, Nick
2006-07-01
This paper provides an introduction to the discipline of computer forensics. With computers being involved in an increasing number, and type, of crimes, the trace data left on electronic media can play a vital part in the legal process. To ensure acceptance by the courts, accepted processes and procedures have to be adopted and demonstrated, which are not dissimilar to the issues surrounding traditional forensic investigations. This paper provides a straightforward overview of the three steps involved in the examination of digital media: acquisition of data, investigation of evidence, and reporting and presentation of evidence. Although many of the traditional readers of Medicine, Science and the Law are those involved in the biological aspects of forensics, I believe that both disciplines can learn from each other, with electronic evidence being more readily sought and considered by the legal community, and the long, tried and tested scientific methods of the forensic community being shared and adopted by the computer forensic world.
Uses of software in digital image analysis: a forensic report
NASA Astrophysics Data System (ADS)
Sharma, Mukesh; Jha, Shailendra
2010-02-01
Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis, and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper, the authors explain these tasks, which are described in three categories: image compression, image enhancement and restoration, and measurement extraction, with the help of examples such as signature comparison, counterfeit currency comparison, and footwear sole impressions, using the software Canvas and Corel Draw.
Expansion of Microbial Forensics.
Schmedes, Sarah E; Sajantila, Antti; Budowle, Bruce
2016-08-01
Microbial forensics has been defined as the discipline of applying scientific methods to the analysis of evidence related to bioterrorism, biocrimes, hoaxes, or the accidental release of a biological agent or toxin for attribution purposes. Over the past 15 years, technology, particularly massively parallel sequencing, and bioinformatics advances now allow the characterization of microorganisms for a variety of human forensic applications, such as human identification, body fluid characterization, postmortem interval estimation, and biocrimes involving tracking of infectious agents. Thus, microbial forensics should be more broadly described as the discipline of applying scientific methods to the analysis of microbial evidence in criminal and civil cases for investigative purposes. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Forensic hash for multimedia information
NASA Astrophysics Data System (ADS)
Lu, Wenjun; Varna, Avinash L.; Wu, Min
2010-01-01
Digital multimedia such as images and videos are prevalent on today's internet and have significant social impact, as evidenced by the proliferation of social networking sites with user-generated content. Due to the ease of generating and modifying images and videos, it is critical to establish the trustworthiness of online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. The forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and it also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
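A very small sketch of a projection-based hash is given below: an image is summarized by a handful of quantized line projections (a poor man's Radon transform via rotation and column sums), producing compact side information that could later be compared with a received copy. This is purely illustrative and not the FASHION construction.

# Sketch: compact projection-based digest of an image as forensic side information.
import numpy as np
from scipy.ndimage import rotate

def projection_hash(img, angles=(0, 45, 90, 135), bins=16):
    feats = []
    for a in angles:
        proj = rotate(img.astype(float), a, reshape=False).sum(axis=0)
        proj = proj / (proj.sum() + 1e-12)
        # coarse, quantized summary of each projection
        feats.append(np.round(np.interp(np.linspace(0, 1, bins),
                                        np.linspace(0, 1, proj.size), proj), 4))
    return np.concatenate(feats)

img = np.random.rand(128, 128)                # stand-in image
print(projection_hash(img).shape)             # (64,) compact side information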
NASA Astrophysics Data System (ADS)
Merkel, Ronny; Breuhan, Andy; Hildebrandt, Mario; Vielhauer, Claus; Bräutigam, Anja
2012-06-01
In the field of crime scene forensics, current methods of evidence collection, such as the acquisition of shoe marks, tire impressions, palm prints, or fingerprints, are in most cases still performed in an analogue way. For example, fingerprints are captured by powdering and sticky tape lifting, ninhydrin bathing, or cyanoacrylate fuming and subsequent photographing. Images of the evidence are then further processed by forensic experts. With the upcoming use of new multimedia systems for the digital capturing and processing of crime scene traces in forensics, higher resolutions can be achieved, leading to much better quality of forensic images. Furthermore, the fast and mostly automated preprocessing of such data using digital signal processing techniques is an emerging field. Also, with optical and non-destructive lifting of forensic evidence, traces are not destroyed and can therefore be re-captured, e.g. by creating time series of a trace, to extract its aging behavior and perhaps determine the time the trace was left. However, such new methods and tools face different challenges, which need to be addressed before practical application in the field. Based on the example of fingerprint age determination, which has been an unresolved research challenge for forensic experts for decades, we evaluate the influences of different environmental conditions as well as different types of sweat and their implications for the capturing sensors, preprocessing methods, and feature extraction. We use a Chromatic White Light (CWL) sensor as an example of such a new optical and contactless measurement device and investigate the influence of 16 different environmental conditions, 8 different sweat types, and 11 different preprocessing methods on the aging behavior of 48 fingerprint time series (2592 fingerprint scans in total). We show the challenges that arise for such new multimedia systems capturing and processing forensic evidence.
Benford's Law for Quality Assurance of Manner of Death Counts in Small and Large Databases.
Daniels, Jeremy; Caetano, Samantha-Jo; Huyer, Dirk; Stephen, Andrew; Fernandes, John; Lytwyn, Alice; Hoppe, Fred M
2017-09-01
To assess whether Benford's law, a mathematical law used for quality assurance in accounting, can be applied as a quality assurance measure for the manner of death determination. We examined a regional forensic pathology service's monthly manner of death counts (N = 2352) from 2011 to 2013, and provincial monthly and weekly death counts from 2009 to 2013 (N = 81,831). We tested whether each dataset's leading digit followed Benford's law via the chi-square test. For each database, we assessed whether the digit 1 was the most common leading digit. The first digit of the manner of death counts followed Benford's law in all three datasets. Two of the three datasets had 1 as the most frequent leading digit. The manner of death data in this study showed qualities consistent with Benford's law. The law has potential as a quality assurance metric in the manner of death determination for both small and large databases. © 2017 American Academy of Forensic Sciences.
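A minimal sketch of the first-digit test described above, assuming the counts are available as a plain list of integers, might look as follows; the expected Benford proportions are log10(1 + 1/d) for d = 1..9.

```python
# Sketch: chi-square goodness-of-fit of observed leading digits against Benford's law.
import numpy as np
from scipy.stats import chisquare

def benford_test(counts):
    first_digits = np.array([int(str(abs(c))[0]) for c in counts if c > 0])
    observed = np.array([(first_digits == d).sum() for d in range(1, 10)])
    expected_p = np.log10(1.0 + 1.0 / np.arange(1, 10))   # Benford proportions
    expected = expected_p * observed.sum()
    chi2, p = chisquare(observed, f_exp=expected)
    one_is_modal = observed.argmax() == 0                 # is 1 the most common leading digit?
    return chi2, p, one_is_modal
```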
Positive dental identification using tooth anatomy and digital superimposition.
Johansen, Raymond J; Michael Bowers, C
2013-03-01
Dental identification of unknown human remains continues to be a relevant and reliable adjunct to forensic investigations. The advent of genomic and mitochondrial DNA procedures has not displaced the practical use of dental and related osseous structures remaining after destructive incidents that can render human remains unrecognizable, severely burned, and fragmented. The ability to conclusively identify victims of accident and homicide is based on the availability of antemortem records containing substantial and unambiguous proof of dental and related osseous characteristics. This case report documents the use of digital comparative analysis of antemortem dental models and postmortem dentition, to determine a dental identification. Images of dental models were digitally analyzed using Adobe Photoshop(TM) software. Individual tooth anatomy was compared between the antemortem and postmortem images. Digital superimposition techniques were also used for the comparison. With the absence of antemortem radiographs, this method proved useful to reach a positive identification in this case. © 2012 American Academy of Forensic Sciences.
A note on digital dental radiography in forensic odontology.
Chiam, Sher-Lin
2014-09-01
Digital dental radiography, intraoral and extraoral, is becoming more popular in dental practice. It offers conveniences such as lower exposure to radiation, ease of storing images, and elimination of chemical processing. However, it also has disadvantages and drawbacks. One of these is the potential for confusion of the orientation of the image. This paper outlines one example of this, namely, the lateral inversion of the image. This source of confusion is partly inherent in the older model of phosphor storage plates (PSPs), as they allow both sides to be exposed without any indication that the image has been acquired on the wrong side. The native software allows digital manipulation of the X-ray image, permitting both rotation and inversion. Attempts to orient the X-ray according to the indicator incorporated on the plate can then sometimes lead to inadvertent lateral inversion of the image. This article discusses the implications of such mistakes in dental digital radiography for forensic odontology and general dental practice.
Advanced Digital Forensic and Steganalysis Methods
2009-02-01
...investigation is simultaneously cropped, scaled, and processed, extending the technology when the digital image is printed, developing technology capable ... or other common processing operations). Technology applications: 1. Determining the origin of digital images; 2. Matching an image to a camera...
Application of forensic image analysis in accident investigations.
Verolme, Ellen; Mieremet, Arjan
2017-09-01
Forensic investigations are primarily meant to obtain objective answers that can be used for criminal prosecution. Accident analyses are usually performed to learn from incidents and to prevent similar events from occurring in the future. Although the primary goal may be different, the steps in which information is gathered, interpreted and weighed are similar in both types of investigations, implying that forensic techniques can be of use in accident investigations as well. Their use in accident investigations usually means that more can be obtained from the available information than in criminal investigations, since the latter require a higher evidence level. In this paper, we demonstrate the applicability of forensic techniques for accident investigations by presenting a number of cases from one specific field of expertise: image analysis. With the rapid spread of digital devices and new media, a wealth of image material and other digital information has become available for accident investigators. We show that much information can be distilled from footage by using forensic image analysis techniques. These applications show that image analysis provides information that is crucial for obtaining the sequence of events and the two- and three-dimensional geometry of an accident. Since accident investigation focuses primarily on learning from accidents and prevention of future accidents, and less on the blame that is crucial for criminal investigations, the field of application of these forensic tools may be broader than would be the case in a purely legal sense. This is an important notion for future accident investigations. Copyright © 2017 Elsevier B.V. All rights reserved.
Detection and localization of copy-paste forgeries in digital videos.
Singh, Raahat Devender; Aggarwal, Naveen
2017-12-01
Amidst the continual march of technology, we find ourselves relying on digital videos to proffer visual evidence in several highly sensitive areas such as journalism, politics, civil and criminal litigation, and military and intelligence operations. However, despite being an indispensable source of information with high evidentiary value, digital videos are also extremely vulnerable to conscious manipulations. Therefore, in a situation where dependence on video evidence is unavoidable, it becomes crucial to authenticate the contents of this evidence before accepting them as an accurate depiction of reality. Digital videos can suffer from several kinds of manipulations, but perhaps one of the most consequential forgeries is copy-paste forgery, which involves insertion/removal of objects into/from video frames. Copy-paste forgeries alter the information presented by the video scene, which has a direct effect on our basic understanding of what that scene represents, and so, from a forensic standpoint, the challenge of detecting such forgeries is especially significant. In this paper, we propose a sensor pattern noise based copy-paste detection scheme, which is an improved and forensically stronger version of an existing noise-residue based technique. We also study a demosaicing artifact based image forensic scheme to estimate the extent of its viability in the domain of video forensics. Furthermore, we suggest a simplistic clustering technique for the detection of copy-paste forgeries, and determine whether it possesses the capabilities desired of a viable and efficacious video forensic scheme. Finally, we validate these schemes on a set of realistically tampered MJPEG, MPEG-2, MPEG-4, and H.264/AVC encoded videos in a diverse experimental set-up by varying the strength of post-production re-compressions and transcodings, bitrates, and sizes of the tampered regions. Such an experimental set-up is representative of a neutral testing platform and simulates a real-world forgery scenario where the forensic investigator has no control over any of the variable parameters of the tampering process. When tested in such an experimental set-up, the four forensic schemes achieved varying levels of detection accuracy and exhibited different scopes of applicability. For videos compressed using QFs in the range 70-100, the existing noise residue based technique generated average detection accuracy in the range 64.5%-82.0%, while the proposed sensor pattern noise based scheme generated average accuracy in the range 89.9%-98.7%. For the aforementioned range of QFs, average accuracy rates achieved by the suggested clustering technique and the demosaicing artifact based approach were in the range 79.1%-90.1% and 83.2%-93.3%, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.
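As a rough illustration of the clustering idea mentioned above (not the authors' implementation), per-block correlation scores between a frame's noise residue and the camera's reference sensor pattern noise, computed elsewhere, can be split into two clusters and the low-correlation cluster flagged as potentially pasted:

```python
# Sketch: flag the low-correlation cluster of block scores as suspicious regions.
import numpy as np
from sklearn.cluster import KMeans

def flag_tampered_blocks(block_scores):
    """block_scores: 2D array of per-block residue/PRNU correlations."""
    scores = block_scores.reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
    low_cluster = np.argmin(km.cluster_centers_.ravel())
    return (km.labels_ == low_cluster).reshape(block_scores.shape)   # True = suspicious
```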
Fleischer, Luise; Sehner, Susanne; Gehl, Axel; Riemer, Martin; Raupach, Tobias; Anders, Sven
2017-05-01
Measurement of postmortem pupil width is a potential component of death time estimation. However, no standardized measurement method has been described. We analyzed a total of 71 digital images for pupil-iris ratio using the software ImageJ. Images were analyzed three times by four different examiners. In addition, serial images from 10 cases were taken between 2 and 50 h postmortem to detect spontaneous pupil changes. Intra- and inter-rater reliability of the method was excellent (ICC > 0.95). The method is observer independent and yields consistent results, and images can be digitally stored and re-evaluated. The method therefore seems highly suitable for forensic and scientific purposes. While statistical analysis of spontaneous pupil changes revealed a significant quartic polynomial relationship with postmortem time (p = 0.001), an obvious pattern was not detected. These results do not indicate suitability of spontaneous pupil changes for forensic death time estimation, as formerly suggested. © 2016 American Academy of Forensic Sciences.
USB Storage Device Forensics for Windows 10.
Arshad, Ayesha; Iqbal, Waseem; Abbas, Haider
2018-05-01
The significantly increased use of USB devices, due to their user-friendliness and large storage capacities, poses various threats for many users/companies in terms of data theft, which is made easier by their efficient mobility. Investigating such data theft activities requires gathering critical digital information from which digital forensics artifacts such as date, time, and device information can be recovered. This research gathers three sets of registry and log data: first, before insertion; second, during insertion; and third, after removal of a USB device. These sets are analyzed to gather evidentiary information from the Registry and the Windows Event log that helps in tracking a USB device. This research extends prior work on earlier versions of Microsoft Windows and compares it with the latest Windows 10 system. A comparison of Windows 8 and Windows 10 does not show much difference except for a new subkey under the USB key in the registry. However, a comparison of Windows 7 with the latest version indicates significant variances. © 2017 American Academy of Forensic Sciences.
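To make the kind of artifact concrete, the minimal sketch below (Windows-only, requires appropriate privileges) enumerates mass-storage entries from the live USBSTOR registry key; it covers only one of the several registry and event-log locations such a comparison examines.

```python
# Sketch: list USB mass-storage device classes, instance IDs and friendly names
# from HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR on a live Windows system.
import winreg

USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

def list_usb_storage_devices():
    devices = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        i = 0
        while True:
            try:
                device_class = winreg.EnumKey(root, i)      # e.g. Disk&Ven_...&Prod_...
            except OSError:
                break
            with winreg.OpenKey(root, device_class) as cls:
                j = 0
                while True:
                    try:
                        serial = winreg.EnumKey(cls, j)      # device instance ID / serial
                    except OSError:
                        break
                    with winreg.OpenKey(cls, serial) as inst:
                        try:
                            name, _ = winreg.QueryValueEx(inst, "FriendlyName")
                        except OSError:
                            name = None
                    devices.append((device_class, serial, name))
                    j += 1
            i += 1
    return devices
```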
Research in Computer Forensics
2002-06-01
systems and how they can aid in the recovery of digital evidence in a forensic analysis. Exposures to hacking techniques and tools in CS3675—Internet...cryptography, access control, authentication, biometrics, actions to be taken during an attack and case studies of hacking and information warfare. 11...chat, surfing, instant messaging and hacking with powerful access control and filter capabilities. The monitor can operate in a Prevention mode to
Semantic Modelling of Digital Forensic Evidence
NASA Astrophysics Data System (ADS)
Kahvedžić, Damir; Kechadi, Tahar
The reporting of digital investigation results is traditionally carried out in prose, and in a large investigation may require successive communication of findings between different parties. Popular forensic suites aid in the reporting process by storing provenance and positional data but do not automatically encode why the evidence is considered important. In this paper we introduce an evidence management methodology to encode the semantic information of evidence. A structured vocabulary of terms, an ontology, is used to model the results in a logical and predefined manner. The descriptions are application independent and automatically organised. The encoded descriptions aim to help the investigation in the tasks of report writing and evidence communication and can be used in addition to existing evidence management techniques.
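A toy sketch of the idea, using rdflib with a purely hypothetical vocabulary and case namespace (not the ontology described in the paper), shows how the reason an item matters can be encoded as machine-readable triples rather than prose:

```python
# Sketch with hypothetical namespaces: encode evidence semantics as RDF triples.
from rdflib import Graph, Namespace, Literal, RDF

CASE = Namespace("http://example.org/case/")      # hypothetical case namespace
VOCAB = Namespace("http://example.org/vocab#")    # hypothetical vocabulary

g = Graph()
item = CASE["evidence/042"]
g.add((item, RDF.type, VOCAB.BrowserHistoryRecord))
g.add((item, VOCAB.recoveredFrom, CASE["device/laptop-01"]))
g.add((item, VOCAB.relevantBecause, Literal("Shows access to the disputed document")))
g.add((item, VOCAB.supports, CASE["hypothesis/intentional-access"]))

print(g.serialize(format="turtle"))               # human-readable, tool-independent export
```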
Proposal for internet-based Digital Dental Chart for personal dental identification in forensics.
Hanaoka, Yoichi; Ueno, Asao; Tsuzuki, Tamiyuki; Kajiwara, Masahiro; Minaguchi, Kiyoshi; Sato, Yoshinobu
2007-05-03
A dental chart is very useful as a standard source of evidence in the personal identification of bodies. However, the kind of dental chart available often varies, as a number of types of odontogram have been developed in which the visual representation of dental conditions relies on hand-drawn records. We propose the Digital Dental Chart (DDC) as a new style of dental chart, especially for open investigations aimed at establishing the identity of unknown bodies. Each DDC is constructed using actual oral digital images and dental data, and is easy to upload onto an Internet website. The DDC is a more useful forensic resource than the standard types of dental chart in current use, as it has several advantages, among which are its ability to carry a large volume of information and to reproduce dental conditions clearly and in detail on a cost-effective basis.
Trautz, Florian; Dreßler, Jan; Stassart, Ruth; Müller, Wolf; Ondruschka, Benjamin
2018-01-03
Immunohistochemistry (IHC) has become an integral part of forensic histopathology over the last decades. However, the underlying methods for IHC vary greatly depending on the institution, creating a lack of comparability. The aim of this study was to assess the optimal approach for different technical aspects of IHC, in order to improve and standardize this procedure. Therefore, qualitative results from manual and automatic IHC staining of brain samples were compared, as well as potential differences in the suitability of common IHC glass slides. Further, possibilities of image digitalization and connected issues were investigated. In our study, automatic staining showed more consistent staining results compared to manual staining procedures. Digitalization and digital post-processing considerably facilitated direct analysis and assessment of reproducibility. No differences were found between commercially available microscopic glass slides regarding their suitability for IHC investigations of brain tissue, but a certain rate of tissue loss should be expected during the staining process.
Virtual reality and 3D animation in forensic visualization.
Ma, Minhua; Zheng, Huiru; Lallie, Harjinder
2010-09-01
Computer-generated three-dimensional (3D) animation is an ideal medium to accurately visualize crime or accident scenes for viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.
Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries.
Thali, Michael J; Braun, Marcel; Dirnhofer, Richard
2003-11-26
The photographic process reduces a three-dimensional (3D) wound to a two-dimensional level. If there is a need for a high-resolution 3D dataset of an object, it needs to be three-dimensionally scanned. No-contact optical 3D digitizing surface scanners can be used as a powerful tool for analysis of wounds and injury-causing instruments in trauma cases. Documentation of a 3D skin wound and a bone injury using the optical scanner Advanced TOpometric Sensor (ATOS II, GOM International, Switzerland) is demonstrated using two illustrative cases. Using this 3D optical digitizing method, the wounds (the virtual 3D computer models of the skin and the bone injuries) and the virtual 3D model of the injury-causing tool are graphically documented in 3D in real-life size and shape and can be rotated in a CAD program on the computer screen. In addition, the virtual 3D models of the bone injuries and the tool can be compared against one another in a 3D CAD program in virtual space, to see if there are matching areas. Further steps in forensic medicine will be a full 3D surface documentation of the human body and all forensically relevant injuries using optical 3D scanners.
Buck, Ursula; Naether, Silvio; Braun, Marcel; Thali, Michael
2008-09-18
Non-invasive documentation methods such as surface scanning and radiological imaging are gaining in importance in the forensic field. These three-dimensional technologies provide digital 3D data, which are processed and handled in the computer. However, the sense of touch is lost in the virtual approach. A haptic device enables the use of the sense of touch to handle and feel digital 3D data. The multifunctional application of a haptic device for forensic approaches is evaluated and illustrated in three different cases: the non-invasive representation of bone fractures of the lower extremities caused by traffic accidents; the comparison of bone injuries with the presumed injury-inflicting instrument; and, in a gunshot case, the identification of the gun by the muzzle imprint and the reconstruction of the holding position of the gun. The 3D models of the bones are generated from Computed Tomography (CT) images. The 3D models of the exterior injuries, the injury-inflicting tools and the bone injuries, where a higher resolution is necessary, are created by optical surface scanning. The haptic device is used in combination with the software FreeForm Modelling Plus to touch the surface of the 3D models in order to feel minute injuries and the surface of tools, to reposition displaced bone parts and to compare an injury-causing instrument with an injury. The repositioning of 3D models in a reconstruction is executed more easily, quickly and precisely by means of the sense of touch and the user-friendly movement in 3D space. For representation purposes, the fracture lines of bones are coloured. This work demonstrates that the haptic device is a suitable and efficient application in forensic science. The haptic device offers a new way of handling digital data in virtual 3D space.
Understanding the Enemy: The Enduring Value of Technical and Forensic Exploitation
2014-01-01
designers, builders, emplacers, triggermen, financiers, component suppliers, trainers, planners, and operational leaders who made up the web of actors...help to isolate insurgents from the populace and undermine their propaganda. In terms of joint functions, TECHINT and WTI support command and...measurable biological and behavioral characteristics to uniquely identify people. The Air Force is the EA for Digital and Multimedia Forensics
[The application of X-ray imaging in forensic medicine].
Kučerová, Stěpánka; Safr, Miroslav; Ublová, Michaela; Urbanová, Petra; Hejna, Petr
2014-07-01
X-ray is the most common, basic and essential imaging method used in forensic medicine. It serves to display and localize foreign objects in the body and helps to detect various traumatic and pathological changes. X-ray imaging is valuable in the anthropological assessment of an individual. X-ray allows non-invasive evaluation of important findings before the autopsy and thus selection of the optimal strategy for dissection. Basic indications for postmortem X-ray imaging in forensic medicine include gunshot and explosive fatalities (identification and localization of projectiles or other components of ammunition, visualization of secondary missiles), sharp force injuries (air embolism, identification of the weapon) and motor vehicle related deaths. The method is also helpful for complex injury evaluation in abused victims or in persons where abuse is suspected. Finally, X-ray imaging still remains the gold standard method for identification of unknown deceased persons. Over time, modern imaging methods, especially computed tomography and magnetic resonance imaging, have been increasingly applied in forensic medicine. Their application extends the possibilities of visualization from bony structures toward more detailed imaging of soft tissues and internal organs. The application of modern imaging methods to postmortem body investigation is known as digital or virtual autopsy. At present, digital postmortem imaging is considered a bloodless alternative to the conventional autopsy.
Exposing Vital Forensic Artifacts of USB Devices in the Windows 10 Registry
2015-06-01
Digital media devices are regularly seized pursuant to criminal investigations and Microsoft Windows is the most commonly encountered... digital footprints available on seized computers that assist in re-creating a crime scene and telling the story of the events that occurred. Part of this
Forensic nursing and the palliative approach to care: an empirical nursing ethics analysis.
Wright, David Kenneth; Vanderspank-Wright, Brandi; Holmes, Dave; Skinner, Elise
2017-08-02
A movement is underway to promote a palliative approach to care in all contexts where people age and live with life-limiting conditions, including psychiatric settings. Forensic psychiatry nursing, a subfield of mental health nursing, focuses on individuals who are in conflict with the criminal justice system. We know little about the values of nurses working in forensic psychiatry, and how these values might influence a palliative approach to care for frail and aging patients. Interviews were conducted with four nurses working on one of two forensic units of a university-affiliated mental health hospital in an urban area of eastern Canada. Three specific values were found to guide forensic nurses in their care of aging patients that are commensurate with a palliative approach: hope, inclusivity, and quality of life. When we started this project, we wondered whether the culture of forensic nursing practice was antithetical to the values of a palliative approach. Instead, we found several parallels between forensic nurses' moral identities and palliative philosophy. These findings have implications for how we think about the palliative approach in contexts not typically associated with palliative care, but in which patients will increasingly age and die.
A genomic audit of newly-adopted autosomal STRs for forensic identification.
Phillips, C
2017-07-01
In preparation for the growing use of massively parallel sequencing (MPS) technology to genotype forensic STRs, a comprehensive genomic audit of 73 STRs was made in 2016 [Parson et al., Forensic Sci. Int. Genet. 22, 54-63]. The loci examined included miniSTRs that were not in widespread use, but had been incorporated into MPS kits or were under consideration for this purpose. The current study expands the genomic analysis of autosomal STRs that are not commonly used, to include the full set of developed miniSTRs and an additional 24 STRs, most of which have been recently included in several supplementary forensic multiplex kits for capillary electrophoresis. The genomic audit of these 47 newly-adopted STRs examined the linkage status of new loci on the same chromosome as established forensic STRs; analyzed world-wide population variation of the newly-adopted STRs using published data; assessed their forensic informativeness; and compiled the sequence characteristics, repeat structures and flanking regions of each STR. A further 44 autosomal STRs developed for forensic analyses but not incorporated into commercial kits, are also briefly described. Copyright © 2017 Elsevier B.V. All rights reserved.
Forensic use of photo response non-uniformity of imaging sensors and a counter method.
Dirik, Ahmet Emir; Karaküçük, Ahmet
2014-01-13
Analogous to the use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of an imaging sensor. In particular, photo-response non-uniformity (PRNU) noise has been used in source camera identification (SCI). However, this technique can also be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter-forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving image quality.
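For context, a minimal sketch of the PRNU-based source camera identification that such counter-forensic work targets is given below; a Gaussian filter stands in for the wavelet-based denoiser used in practice, and the correlation threshold is illustrative only.

```python
# Sketch: estimate a camera's PRNU fingerprint from several of its images, then
# attribute a query image by correlating its noise residue with that fingerprint.
import numpy as np
from scipy.ndimage import gaussian_filter

def residue(img, sigma=1.5):
    img = img.astype(float)
    return img - gaussian_filter(img, sigma)          # crude noise residue

def estimate_fingerprint(images):
    """Weighted PRNU estimate from several same-camera grayscale images."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img in images:
        i = img.astype(float)
        num += residue(i) * i
        den += i * i
    return num / (den + 1e-12)

def matches_camera(query, fingerprint, threshold=0.01):
    i = query.astype(float)
    expected = fingerprint * i                         # PRNU is multiplicative in intensity
    rho = np.corrcoef(residue(i).ravel(), expected.ravel())[0, 1]
    return rho, rho > threshold
```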
NASA Astrophysics Data System (ADS)
Dutton, Gregory
Forensic science is a collection of applied disciplines that draws from all branches of science. A key question in forensic analysis is: to what degree do a piece of evidence and a known reference sample share characteristics? Quantification of similarity, estimation of uncertainty, and determination of relevant population statistics are of current concern. A 2016 PCAST report questioned the foundational validity and the validity in practice of several forensic disciplines, including latent fingerprints, firearms comparisons and DNA mixture interpretation. One recommendation was the advancement of objective, automated comparison methods based on image analysis and machine learning. These concerns parallel the National Institute of Justice's ongoing R&D investments in applied chemistry, biology and physics. NIJ maintains a funding program spanning fundamental research with potential for forensic application to the validation of novel instruments and methods. Since 2009, NIJ has funded over 179M in external research to support the advancement of accuracy, validity and efficiency in the forensic sciences. An overview of NIJ's programs will be presented, with examples of relevant projects from fluid dynamics, 3D imaging, acoustics, and materials science.
Considerations on the ASTM standards 1789-04 and 1422-05 on the forensic examination of ink.
Neumann, Cedric; Margot, Pierre
2010-09-01
The ASTM standards on Writing Ink Identification (ASTM 1789-04) and on Writing Ink Comparison (ASTM 1422-05) are the most up-to-date guidelines that have been published on the forensic analysis of ink. The aim of these documents is to cover most aspects of the forensic analysis of ink evidence, from the analysis of ink samples, through the comparison of the analytical profiles of these samples (with the aim of differentiating them or not), to the interpretation of the results of the examination of these samples in a forensic context. Significant evolutions in the technology available to forensic scientists, in the quality assurance requirements placed upon them, and in the understanding of frameworks to interpret forensic evidence have been made in recent years. This article reviews the two standards in the light of these evolutions and proposes some practical improvements in terms of the standardization of the analyses, the comparison of ink samples, and the interpretation of ink examination. Some of these suggestions have already been included in a DHS-funded project aimed at creating a digital ink library for the United States Secret Service. © 2010 American Academy of Forensic Sciences.
Parallel Digital Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Shah, Biren N.; Hinedi, Sami M.
1995-01-01
Wide-band microwave receivers of the proposed type include digital phase-locked loops in which band-pass filtering and down-conversion of input signals are implemented by banks of multirate digital filters operating in parallel. They are called "parallel digital phase-locked loops" to distinguish them from other digital phase-locked loops. The systems were conceived as a cost-effective solution to the problem of filtering signals at the high sampling rates needed to accommodate wide input frequency bands. Each of the M filters processes 1/M of the spectrum of the signal.
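A minimal sketch of the channelization idea, using explicit per-channel mixing and filtering rather than an optimized polyphase structure, is given below; a real parallel DPLL would additionally run the phase-tracking loop in each decimated channel.

```python
# Sketch: split a wide-band signal into M sub-bands, each down-converted,
# low-pass filtered and decimated by M, so each channel covers 1/M of the spectrum.
import numpy as np
from scipy.signal import firwin, lfilter

def channelize(x, fs, M, num_taps=128):
    """Return M complex baseband channels, each at sample rate fs/M."""
    lp = firwin(num_taps, cutoff=fs / (2.0 * M), fs=fs)   # prototype low-pass filter
    n = np.arange(len(x))
    channels = []
    for k in range(M):
        f_k = k * fs / M                                  # center frequency of sub-band k
        mixed = x * np.exp(-2j * np.pi * f_k * n / fs)    # down-convert sub-band k to baseband
        baseband = lfilter(lp, 1.0, mixed)                # isolate 1/M of the spectrum
        channels.append(baseband[::M])                    # decimate by M
    return channels
```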
Biwasaka, Hitoshi; Saigusa, Kiyoshi; Aoki, Yasuhiro
2005-03-01
In this study, the applicability of holography in the 3-dimensional recording of forensic objects such as skulls and mandibulae, and the accuracy of the reconstructed 3-D images, were examined. The virtual holographic image, which records the 3-dimensional data of the original object, is visually observed on the other side of the holographic plate, and reproduces the 3-dimensional shape of the object well. Another type of holographic image, the real image, is focused on a frosted glass screen, and cross-sectional images of the object can be observed. When measuring the distances between anatomical reference points using image-processing software, the average deviations in the holographic images as compared to the actual objects were less than 0.1 mm. Therefore, holography could be useful as a 3-dimensional recording method for forensic objects. Two superimposition systems using holographic images were examined. In the 2D-3D system, the transparent virtual holographic image of an object is directly superimposed onto the digitized photograph of the same object on the LCD monitor. On the other hand, in the video system, the holographic image captured by the CCD camera is superimposed onto the digitized photographic image using a personal computer. We found that the discrepancy between the outlines of the superimposed holographic and photographic dental images using the video system was smaller than that using the 2D-3D system. Holography seemed to perform comparably to the computer graphic system; however, a fusion with the digital technique would expand the utility of holography in superimposition.
[Study on the indexes of forensic identification by the occlusal-facial digital radiology].
Gao, Dong; Wang, Hu; Hu, Jin-liang; Xu, Zhe; Deng, Zhen-hua
2006-02-01
To discuss the coding of the full dentition with 32 locations and to measure the characteristics of some bony indexes in occlusal-facial digital radiology (DR). Three hundred DR orthopantomograms were randomly selected and the full dentition coded, and the diversity of dental patterns was analyzed. One hundred DR lateral cephalograms were randomly selected and six indexes (N-S, N-Me, Cd-Gn, Cd-Go, NP-SN, MP-SN) were measured separately by one odontologist and one trained forensic graduate student; the coefficient of variation (CV) of every index was then calculated and a correlation analysis performed for the consistency between the two measurements. (1) The total diversity of the 300 dental patterns was 75%, a very high value. (2) All six quantitative variables had comparatively high CV values. (3) In the linear correlation analysis between the two measurements, all six correlation coefficients were close to 1, indicating that the measurements were stable and consistent. The method of coding the full dentition in DR orthopantomograms and measuring six bony indexes in DR lateral cephalograms can be used for forensic identification.
A Forensic Examination of Online Search Facility URL Record Structures.
Horsman, Graeme
2018-05-29
The use of search engines and associated search functions to locate content online is now common practice. As a result, a forensic examination of a suspect's online search activity can be a critical aspect in establishing whether an offense has been committed in many investigations. This article offers an analysis of online search URL structures to help law enforcement and associated digital forensics practitioners interpret acts of online searching during an investigation. Google, Bing, Yahoo!, and DuckDuckGo searching functions are examined, and key URL attribute structures and metadata have been documented. In addition, an overview of social media searching covering Twitter, Facebook, Instagram, and YouTube is offered. Results show the ability to extract embedded metadata from search engine URLs, which can establish online searching behaviors and the timing of searches. © 2018 American Academy of Forensic Sciences.
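A minimal sketch of extracting search terms from recovered URLs for the engines named above is shown below; it relies only on the widely known query parameters (q for Google, Bing and DuckDuckGo; p for Yahoo!) and ignores the further attributes documented in the article.

```python
# Sketch: pull the query term out of a search-engine URL recovered from history data.
from urllib.parse import urlparse, parse_qs

SEARCH_PARAMS = {
    "google.": "q",
    "bing.": "q",
    "duckduckgo.": "q",
    "yahoo.": "p",
}

def extract_search_term(url):
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    for domain_fragment, key in SEARCH_PARAMS.items():
        if domain_fragment in parsed.netloc and key in params:
            return params[key][0]
    return None

# Example:
# extract_search_term("https://www.google.com/search?q=forensic+timeline")  -> "forensic timeline"
```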
Chen, Chenglong; Ni, Jiangqun; Shen, Zhaoyi; Shi, Yun Qing
2017-06-01
Geometric transformations, such as resizing and rotation, are almost always needed when two or more images are spliced together to create convincing image forgeries. In recent years, researchers have developed many digital forensic techniques to identify these operations. Most previous works in this area focus on the analysis of images that have undergone single geometric transformations, e.g., resizing or rotation. In several recent works, researchers have addressed yet another practical and realistic situation: successive geometric transformations, e.g., repeated resizing, resizing-rotation, rotation-resizing, and repeated rotation. We will also concentrate on this topic in this paper. Specifically, we present an in-depth analysis in the frequency domain of the second-order statistics of the geometrically transformed images. We give an exact formulation of how the parameters of the first and second geometric transformations influence the appearance of periodic artifacts. The expected positions of characteristic resampling peaks are analytically derived. The theory developed here helps to address the gap left by previous works on this topic and is useful for image security and authentication, in particular, the forensics of geometric transformations in digital images. As an application of the developed theory, we present an effective method that allows one to distinguish between the aforementioned four different processing chains. The proposed method can further estimate all the geometric transformation parameters. This may provide useful clues for image forgery detection.
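The following minimal sketch, written in the spirit of the second-difference detectors this line of work builds on rather than the authors' exact formulation, shows where the periodic resampling artifacts appear: the spectrum of the averaged second differences of a resampled image exhibits characteristic peaks whose positions depend on the applied transform(s).

```python
# Sketch: reveal periodic interpolation artifacts via the spectrum of row-wise
# second differences; peaks away from zero frequency suggest resampling.
import numpy as np

def resampling_spectrum(image):
    img = image.astype(float)
    d2 = np.abs(np.diff(img, n=2, axis=1))     # second differences along rows
    signal = d2.mean(axis=0)                    # average over rows
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal))        # cycles per pixel
    return freqs, spectrum
```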
Using Digital Logs to Reduce Academic Misdemeanour by Students in Digital Forensic Assessments
ERIC Educational Resources Information Center
Lallie, Harjinder Singh; Lawson, Phillip; Day, David J.
2011-01-01
Identifying academic misdemeanours and actual applied effort in student assessments involving practical work can be problematic. For instance, it can be difficult to assess the actual effort that a student applied, the sequence and method applied, and whether there was any form of collusion or collaboration. In this paper we propose a system of…
THE ROLE OF FORENSIC DENTIST FOLLOWING MASS DISASTER
Kolude, B.; Adeyemi, B.F.; Taiwo, J.O.; Sigbeku, O.F.; Eze, U.O.
2010-01-01
This review article focuses on mass disaster situations that may arise from natural or manmade circumstances and the significant role of forensic dental personnel in human identification following such occurrences. The various forensic dental modalities of identification, which include matching techniques, postmortem profiling, genetic fingerprinting, dental fossil assessment and dental biometrics with digital subtraction, were considered. The varying extent of use of forensic dental techniques and the resulting positive impact on human identification were considered. The importance of preparation by way of special training for forensic dental personnel, mock disaster rehearsals, and use of modern-day technology was stressed. The need for international standardization of identification through the use of Interpol Disaster Victim Identification (DVI) forms was further emphasized. Recommendations for improved human identification in the Nigerian situation include reform of the National Emergency Management Association (NEMA), incorporation of dental care into primary health care to facilitate a proper antemortem database of the populace, and commencement of identification at the site of disaster. PMID:25161478
Urbanová, Petra; Hejna, Petr; Jurda, Mikoláš
2015-05-01
Three-dimensional surface technologies, particularly close range photogrammetry and optical surface scanning, have recently advanced into affordable, flexible and accurate techniques. Forensic postmortem investigation as performed on a daily basis, however, has not yet fully benefited from their potential. In the present paper, we tested two approaches to 3D external body documentation: digital camera-based photogrammetry combined with commercial Agisoft PhotoScan(®) software and stereophotogrammetry-based Vectra H1(®), a portable handheld surface scanner. In order to conduct the study, three human subjects were selected: a living person, a 25-year-old female, and two forensic cases admitted for postmortem examination at the Department of Forensic Medicine, Hradec Králové, Czech Republic (both 63-year-old males), one dead of traumatic, self-inflicted injuries (suicide by hanging), the other diagnosed with heart failure. All three cases were photographed in a 360° manner with a Nikon 7000 digital camera and simultaneously documented with the handheld scanner. In addition to recording the pre-autopsy phase of the forensic cases, both techniques were employed in various stages of autopsy. The sets of collected digital images (approximately 100 per case) were further processed to generate point clouds and 3D meshes. Final 3D models (a pair per individual) were counted for numbers of points and polygons, then assessed visually and compared quantitatively using an ICP alignment algorithm and a point cloud comparison technique based on closest point-to-point distances. Both techniques proved to be easy to handle and equally laborious. While collecting the images at autopsy took around 20 min, the post-processing was much more time-demanding and required up to 10 h of computation time. Moreover, for full-body scanning the post-processing of the handheld scanner data required rather time-consuming manual image alignment. In all instances the applied approaches produced high-resolution, photorealistic, real-sized or easy-to-calibrate 3D surface models. Both methods equally failed when the scanned body surface was covered with body hair or reflective moist areas. Still, it can be concluded that single-camera close range photogrammetry and optical surface scanning using the Vectra H1 scanner represent relatively low-cost solutions which were shown to be beneficial for postmortem body documentation in forensic pathology. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
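A minimal sketch of the closest point-to-point comparison mentioned above, assuming the two point clouds have already been aligned (e.g. by ICP, not shown), is given below.

```python
# Sketch: summarize nearest-neighbour distances between two aligned 3D point clouds.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(cloud_a, cloud_b):
    """cloud_a, cloud_b: (N, 3) arrays of aligned 3D points."""
    tree = cKDTree(cloud_b)
    dists, _ = tree.query(cloud_a)              # nearest-neighbour distance per point of cloud_a
    return {
        "mean": float(dists.mean()),
        "rms": float(np.sqrt((dists ** 2).mean())),
        "max": float(dists.max()),
    }
```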
A novel method for detecting light source for digital images forensic
NASA Astrophysics Data System (ADS)
Roy, A. K.; Mitra, S. K.; Agrawal, R.
2011-06-01
Manipulation of images has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with limitations. Also, very few methods try to capitalize on the way the image was captured by the camera. We propose a new method based on light and its shade, as light and shade are the fundamental input resources that may carry all the information of the image. The proposed method measures the direction of the light source and uses this light-based technique to identify any intentional partial manipulation in the digital image. The method is tested on known manipulated images to correctly identify the light sources. The light source of an image is measured in terms of angle. The experimental results show the robustness of the methodology.
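A minimal sketch of one common way to estimate light direction, assuming a Lambertian surface and known 2D normals along an occluding contour (not necessarily the authors' exact formulation), is shown below; inconsistent estimated angles across objects in one image would suggest splicing.

```python
# Sketch: solve I ≈ n·L + ambient in a least-squares sense for the 2D light direction.
import numpy as np

def estimate_light_direction(normals, intensities):
    """normals: (N, 2) unit normals on the contour; intensities: (N,) samples."""
    A = np.hstack([normals, np.ones((len(normals), 1))])     # unknowns: Lx, Ly, ambient term
    sol, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float), rcond=None)
    light = sol[:2]
    angle = np.degrees(np.arctan2(light[1], light[0]))       # light direction in degrees
    return light / (np.linalg.norm(light) + 1e-12), angle
```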
Dabbs, Gretchen R; Bytheway, Joan A; Connor, Melissa
2017-09-01
When in-person assessment of human decomposition is not possible in forensic casework or empirical research, the sensible substitute is color photographic images. To date, no research has confirmed the utility of color photographic images as a proxy for in situ observation of the level of decomposition. Sixteen observers scored photographs of 13 human cadavers in varying decomposition stages (PMI 2-186 days) using the Total Body Score system (total n = 929 observations). The on-site TBS was compared with recorded observations from digital color images using a paired samples t-test. The average difference between on-site and photographic observations was -0.20 (t = -1.679, df = 928, p = 0.094). Individually, only two observers, both students with <1 year of experience, produced TBS values statistically significantly different from the on-site values, suggesting that, with experience, observations of human decomposition based on digital images can be substituted for assessments based on observation of the corpse in situ, when necessary. © 2017 American Academy of Forensic Sciences.
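A minimal sketch of the comparison described above, assuming paired on-site and photograph-based scores for the same observations, is:

```python
# Sketch: paired-samples t-test between on-site and photograph-based Total Body Scores.
import numpy as np
from scipy.stats import ttest_rel

def compare_tbs(onsite_scores, photo_scores):
    onsite = np.asarray(onsite_scores, dtype=float)
    photo = np.asarray(photo_scores, dtype=float)
    t, p = ttest_rel(onsite, photo)
    return {"mean_difference": float((photo - onsite).mean()), "t": float(t), "p": float(p)}
```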
Davy-Jow, Stephanie Lynn; Lees, Duncan M B; Russell, Sean
2013-01-10
Full-body 3D virtual reconstructions were generated using 3D technology and anthropometry following the death of a young girl, allegedly from severe malnutrition as a result of abuse and neglect. Close range laser scanning, in conjunction with full colour digital texture photography, was used to document the child's condition shortly after death in order to record the number and pattern of injuries and to be able to demonstrate her condition forensically. Full-body digital reconstructions were undertaken to illustrate the extent of the malnutrition by comparing the processed post mortem scans with reconstructed images at normal weight for height and age. This is the first known instance of such an investigative tool. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Ernst, E J; Speck, Patricia M; Fitzpatrick, Joyce J
2011-12-01
With the patient's consent, physical injuries sustained in a sexual assault are evaluated and treated by the sexual assault nurse examiner (SANE) and documented on preprinted traumagrams and with photographs. Digital imaging is now available to the SANE for documentation of sexual assault injuries, but studies of the image quality of forensic digital imaging of female genital injuries after sexual assault were not found in the literature. The Photo Documentation Image Quality Scoring System (PDIQSS) was developed to rate the image quality of digital photo documentation of female genital injuries after sexual assault. Three expert observers performed evaluations on 30 separate images at two points in time. An image quality score, the sum of eight integral technical and anatomical attributes on the PDIQSS, was obtained for each image. Individual image quality ratings, defined by rating image quality for each of the data, were also determined. The results demonstrated a high level of image quality and agreement when measured in all dimensions. For the SANE in clinical practice, the results of this study indicate that a high degree of agreement exists between expert observers when using the PDIQSS to rate image quality of individual digital photographs of female genital injuries after sexual assault. © 2011 International Association of Forensic Nurses.
Joint forensics and watermarking approach for video authentication
NASA Astrophysics Data System (ADS)
Thiemert, Stefan; Liu, Huajian; Steinebach, Martin; Croce-Ferri, Lucilla
2007-02-01
In our paper we discuss and compare the possibilities and shortcomings of both content-fragile watermarking and digital forensics and analyze if the combination of both techniques allows the identification of more than the sum of all manipulations identified by both techniques on their own due to synergetic effects. The first part of the paper discusses the theoretical possibilities offered by a combined approach, in which forensics and watermarking are considered as complementary tools for data authentication or deeply combined together, in order to reduce their error rate and to enhance the detection efficiency. After this conceptual discussion the paper proposes some concrete examples in which the joint approach is applied to video authentication. Some specific forensics techniques are analyzed and expanded to handle efficiently video data. The examples show possible extensions of passive-blind image forgery detection to video data, where the motion and time related characteristics of video are efficiently exploited.
Eduardoff, M; Gross, T E; Santos, C; de la Puente, M; Ballard, D; Strobl, C; Børsting, C; Morling, N; Fusco, L; Hussing, C; Egyed, B; Souto, L; Uacyisrael, J; Syndercombe Court, D; Carracedo, Á; Lareu, M V; Schneider, P M; Parson, W; Phillips, C; Parson, W; Phillips, C
2016-07-01
The EUROFORGEN Global ancestry-informative SNP (AIM-SNPs) panel is a forensic multiplex of 128 markers designed to differentiate an individual's ancestry from amongst the five continental population groups of Africa, Europe, East Asia, Native America, and Oceania. A custom multiplex of AmpliSeq™ PCR primers was designed for the Global AIM-SNPs to perform massively parallel sequencing using the Ion PGM™ system. This study assessed individual SNP genotyping precision using the Ion PGM™, the forensic sensitivity of the multiplex using dilution series, degraded DNA plus simple mixtures, and the ancestry differentiation power of the final panel design, which required substitution of three original ancestry-informative SNPs with alternatives. Fourteen populations that had not been previously analyzed were genotyped using the custom multiplex and these studies allowed assessment of genotyping performance by comparison of data across five laboratories. Results indicate a low level of genotyping error can still occur from sequence misalignment caused by homopolymeric tracts close to the target SNP, despite careful scrutiny of candidate SNPs at the design stage. Such sequence misalignment required the exclusion of component SNP rs2080161 from the Global AIM-SNPs panel. However, the overall genotyping precision and sensitivity of this custom multiplex indicates the Ion PGM™ assay for the Global AIM-SNPs is highly suitable for forensic ancestry analysis with massively parallel sequencing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Applications of image processing and visualization in the evaluation of murder and assault
NASA Astrophysics Data System (ADS)
Oliver, William R.; Rosenman, Julian G.; Boxwala, Aziz; Stotts, David; Smith, John; Soltys, Mitchell; Symon, James; Cullip, Tim; Wagner, Glenn
1994-09-01
Recent advances in image processing and visualization are of increasing use in the investigation of violent crime. The Digital Image Processing Laboratory at the Armed Forces Institute of Pathology in collaboration with groups at the University of North Carolina at Chapel Hill are actively exploring visualization applications including image processing of trauma images, 3D visualization, forensic database management and telemedicine. Examples of recent applications are presented. Future directions of effort include interactive consultation and image manipulation tools for forensic data exploration.
Bond, John W; Brady, Thomas F
2013-05-01
Pipe bombs made from 1 mm thick copper pipe were detonated with a low explosive power powder. Analysis of the physical characteristics of fragments revealed that the copper had undergone work hardening with an increased Vickers Hardness of 107HV1 compared with 80HV1 for unexploded copper pipe. Mean plastic strain prior to fracture was calculated at 0.28 showing evidence of both plastic deformation and wall thinning. An examination of the external surface showed microfractures running parallel with the length of the pipe at approximately 100 μm intervals and 1-2 μm in width. Many larger fragments had folded "inside out" making the original outside surface inaccessible and difficult to fold back through work hardening. A visual examination for fingerprint corrosion revealed ridge details on several fragments that were enhanced by selective digital mapping of colors reflected from the surface of the copper. One of these fingerprints was identified partially to the original donor. © 2013 American Academy of Forensic Sciences.
Massively Parallel Sequencing of Forensic STRs Using the Ion Chef™ and the Ion S5™ XL Systems.
Wang, Le; Chen, Man; Wu, Bo; Liu, Yi-Cheng; Zhang, Guang-Feng; Jiang, Li; Xu, Xiu-Lan; Zhao, Xing-Chun; Ji, An-Quan; Ye, Jian
2018-03-01
Next-generation sequencing (NGS) has been used to genotype forensic short tandem repeat (STR) markers for individual identification and kinship analysis. STR data from several NGS platforms have been published, but forensic application trials using the Ion S5™ XL system have not been reported. In this work, we report sensitivity, reproducibility, mixture, simulated degradation, and casework sample data on the Ion Chef™ and S5™ XL systems using an early access 25-plex panel. Sensitivity experiments showed that over 97% of the alleles were detectable with down to 62 pg input of genomic DNA. In mixture studies, alleles from minor contributors were correctly assigned at 1:9 and 9:1 ratios. NGS successfully gave 12 full genotype results from 13 challenging casework samples, compared with five full results using the CE platform. In conclusion, the Ion Chef™ and the Ion S5™ XL systems provided an alternative and promising approach for forensic STR genotyping. © 2018 American Academy of Forensic Sciences.
Instrument for Real-Time Digital Nucleic Acid Amplification on Custom Microfluidic Devices
Selck, David A.
2016-01-01
Nucleic acid amplification tests that are coupled with a digital readout enable the absolute quantification of single molecules, even at ultralow concentrations. Digital methods are robust, versatile and compatible with many amplification chemistries including isothermal amplification, making them particularly invaluable to assays that require sensitive detection, such as the quantification of viral load in occult infections or detection of sparse amounts of DNA from forensic samples. A number of microfluidic platforms are being developed for carrying out digital amplification. However, the mechanistic investigation and optimization of digital assays has been limited by the lack of real-time kinetic information about which factors affect the digital efficiency and analytical sensitivity of a reaction. Commercially available instruments that are capable of tracking digital reactions in real-time are restricted to only a small number of device types and sample-preparation strategies. Thus, most researchers who wish to develop, study, or optimize digital assays rely on the rate of the amplification reaction when performed in a bulk experiment, which is now recognized as an unreliable predictor of digital efficiency. To expand our ability to study how digital reactions proceed in real-time and enable us to optimize both the digital efficiency and analytical sensitivity of digital assays, we built a custom large-format digital real-time amplification instrument that can accommodate a wide variety of devices, amplification chemistries and sample-handling conditions. Herein, we validate this instrument, we provide detailed schematics that will enable others to build their own custom instruments, and we include a complete custom software suite to collect and analyze the data retrieved from the instrument. We believe assay optimizations enabled by this instrument will improve the current limits of nucleic acid detection and quantification, improving our fundamental understanding of single-molecule reactions and providing advancements in practical applications such as medical diagnostics, forensics and environmental sampling. PMID:27760148
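Although not spelled out in the abstract, the absolute quantification that digital amplification enables rests on Poisson statistics over the partitions; a minimal sketch, with an illustrative partition volume, is:

```python
# Sketch: recover concentration from the fraction of positive partitions,
# assuming molecules are Poisson-distributed across partitions.
import math

def digital_quantification(positive, total, partition_volume_nl):
    """Return estimated copies per microlitre from a digital amplification run."""
    p = positive / total
    if p >= 1.0:
        raise ValueError("All partitions positive: sample too concentrated to quantify.")
    lam = -math.log(1.0 - p)                      # mean molecules per partition
    copies_per_nl = lam / partition_volume_nl
    return copies_per_nl * 1000.0                 # copies per microlitre

# Example (illustrative numbers): 600 of 20,000 partitions positive, 0.85 nl partitions
# digital_quantification(600, 20000, 0.85)  -> about 36 copies per microlitre
```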
Thompson, T J U; Norris, P
2018-05-01
Footwear impressions are one of the most common forms of evidence to be found at a crime scene, and can potentially offer the investigator a wealth of intelligence. Our aim is to highlight a new and improved technique for the recovery of footwear impressions, using three-dimensional structured light scanning. Results from this preliminary study demonstrate that this new approach is non-destructive, safe to use and is fast, reliable and accurate. Further, since this is a digital method, there is also the option of digital comparison between items of footwear and footwear impressions, and an increased ability to share recovered footwear impressions between forensic staff thus speeding up the investigation. Copyright © 2018 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
[Video recording system of endoscopic procedures for digital forensics].
Endo, Chiaki; Sakurada, A; Kondo, T
2009-07-01
Recently, endoscopic procedures including surgery, intervention, and examination have been widely performed. Medical practitioners are required to record the procedures precisely in order to check them retrospectively and to obtain a legally reliable record. The Medical Forensic System made by KS Olympus Japan simultaneously records two kinds of video together with the patient's data, such as heart rate, blood pressure, and SpO2. We installed this system in the bronchoscopy room and have experienced its benefits. Under this system, we can obtain the bronchoscopic image, a view of the bronchoscopy room, and the patient's data simultaneously. We can check the quality of the bronchoscopic procedures retrospectively, which is useful for bronchoscopy staff training. The Medical Forensic System should be installed for any kind of endoscopic procedure.
Into the decomposed body-forensic digital autopsy using multislice-computed tomography.
Thali, M J; Yen, K; Schweitzer, W; Vock, P; Ozdoba, C; Dirnhofer, R
2003-07-08
It is impossible to obtain representative anatomical documentation of an entire body using classical X-ray methods, as they reduce three-dimensional bodies to a two-dimensional level. We used the novel multislice computed tomography (MSCT) technique to evaluate a case of homicide with putrefaction of the corpse before performing a classical forensic autopsy. This non-invasive method showed gaseous distension of the decomposing organs and tissues in detail as well as a complex fracture of the calvarium. MSCT also proved useful in screening for foreign matter in decomposing bodies, and full-body scanning took only a few minutes. In conclusion, we believe postmortem MSCT imaging is an excellent visualization tool with great potential for forensic documentation and evaluation of decomposed bodies.
Urschler, Martin; Höller, Johannes; Bornik, Alexander; Paul, Tobias; Giretzlehner, Michael; Bischof, Horst; Yen, Kathrin; Scheurer, Eva
2014-08-01
The increasing use of CT/MR devices in forensic analysis motivates the need to present forensic findings from different sources in an intuitive reference visualization, with the aim of combining 3D volumetric images with digital photographs of external findings into a 3D computer graphics model. This model allows a comprehensive presentation of forensic findings in court and enables comparative evaluation studies correlating data sources. The goal of this work was to investigate different methods to generate anonymous and patient-specific 3D models which may be used as reference visualizations. The issue of registering 3D volumetric as well as 2D photographic data to such 3D models is addressed to provide an intuitive context for injury documentation from arbitrary modalities. We present an image processing and visualization work-flow, discuss the major parts of this work-flow, compare the different investigated reference models, and show a number of case studies that underline the suitability of the proposed work-flow for presenting forensically relevant information in 3D visualizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Schmidt, Sven; Nitz, Inna; Schulz, Ronald; Tsokos, Michael; Schmeling, Andreas
2009-11-01
As a collection of radiographic standards of normal hand development with a homogeneous degree of maturity of all skeletal elements, the digital atlas of skeletal maturity by Gilsanz and Ratib combines the possibilities of digital imaging with the principle of a conventional atlas method. The present paper analyses the applicability of skeletal age assessment according to Gilsanz and Ratib to forensic age estimation in criminal proceedings. For this, the hand X-rays of 180 children and adolescents aged 10-18 years were examined retrospectively. For the entire age range, the minima and maxima, the mean values and standard deviations, and the medians with upper and lower quartiles are reported by sex. For the legally relevant age groups from 14 to 18 years, there is a risk of overestimating the chronological age by up to 7.2 months in females. The method of Gilsanz and Ratib is therefore suitable for forensic age estimation in criminal proceedings only to a limited extent.
Exploration of operator method digital optical computers for application to NASA
NASA Technical Reports Server (NTRS)
1990-01-01
Digital optical computer design has focused primarily on parallel (single point-to-point interconnection) implementations. This architecture is compared to currently developing VHSIC systems. Using demonstrated multichannel acousto-optic devices, a figure of merit can be formulated; the focus is on a figure of merit termed the Gate Interconnect Bandwidth Product (GIBP). Conventional parallel optical digital computer architecture demonstrates only marginal competitiveness at best when compared to projected semiconductor implementations. Global, analog global, quasi-digital, and full digital interconnects are briefly examined as alternatives to parallel digital computer architecture. Digital optical computing is becoming a very tough competitor to semiconductor technology, since it can support a very high degree of three-dimensional interconnect density and high degrees of fan-in without capacitive loading effects at very low power consumption levels.
Next generation sequencing (NGS): a golden tool in forensic toolkit.
Aly, S M; Sabri, D M
DNA analysis is a cornerstone of contemporary forensic sciences. DNA sequencing technologies are powerful tools that enriched the molecular sciences in the past, based on Sanger sequencing, and continue to advance these sciences through next generation sequencing (NGS). NGS has excellent potential to flourish and to increase molecular applications in forensic sciences by overcoming the pitfalls of the conventional sequencing method. The main advantage of NGS compared to the conventional method is that it simultaneously utilizes a large number of genetic markers and yields high-resolution genetic data. These advantages will help in solving several challenges, such as mixture analysis and dealing with minute, degraded samples. Based on these new technologies, many markers could be examined to obtain important biological data such as age, geographical origin, tissue type, externally visible traits and monozygotic twin identification. Data related to microbes, insects, plants and soil, which are of great medico-legal importance, could also be obtained. Despite the dozens of forensic research studies involving NGS, there are requirements to be met before this technology can be used routinely in forensic cases. Thus, there is a great need for more studies that address the robustness of these techniques. Therefore, this work highlights the applications of forensic sciences in the era of massively parallel sequencing.
Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.
Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus
2009-02-01
While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while they selected the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed yet parallel fashion. Moreover, the present fixation data provide the first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.
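For illustration only (not taken from the study), the following Python sketch classifies two-digit comparison pairs into within-decade, compatible and incompatible trials in the sense of Nuerk, Weger, & Willmes (2001); the function name and example pairs are hypothetical.

```python
# Illustrative sketch: classifying two-digit comparison trials. Not part of the study.

def classify_trial(a: int, b: int) -> str:
    """Classify a two-digit comparison pair such as (42, 57)."""
    da, ua = divmod(a, 10)   # decade and unit digits of a
    db, ub = divmod(b, 10)   # decade and unit digits of b
    if da == db:
        return "within-decade"            # e.g. 53 vs 57
    if ua == ub:
        return "neutral"                  # equal unit digits
    decade_larger = a if da > db else b   # number with the larger decade digit
    unit_larger = a if ua > ub else b     # number with the larger unit digit
    return "compatible" if decade_larger == unit_larger else "incompatible"

if __name__ == "__main__":
    print(classify_trial(42, 57))  # compatible: 5 > 4 and 7 > 2
    print(classify_trial(47, 62))  # incompatible: 6 > 4 but 2 < 7
    print(classify_trial(53, 57))  # within-decade
```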
Separation/extraction, detection, and interpretation of DNA mixtures in forensic science (review).
Tao, Ruiyang; Wang, Shouyu; Zhang, Jiashuo; Zhang, Jingyi; Yang, Zihao; Sheng, Xiang; Hou, Yiping; Zhang, Suhua; Li, Chengtao
2018-05-25
Interpreting mixed DNA samples containing material from multiple contributors has long been considered a major challenge in forensic casework, especially when encountering low-template DNA (LT-DNA) or high-order mixtures that may involve missing alleles (dropout) and unrelated alleles (drop-in), among others. In the last decades, extraordinary progress has been made in the analysis of mixed DNA samples, which has led to increasing attention to this research field. The advent of new methods for the separation and extraction of DNA from mixtures, novel or jointly applied genetic markers for detection and reliable interpretation approaches for estimating the weight of evidence, as well as the powerful massively parallel sequencing (MPS) technology, has greatly extended the range of mixed samples that can be correctly analyzed. Here, we summarized the investigative approaches and progress in the field of forensic DNA mixture analysis, hoping to provide some assistance to forensic practitioners and to promote further development involving this issue.
Protecting Digital Evidence Integrity by Using Smart Cards
NASA Astrophysics Data System (ADS)
Saleem, Shahzad; Popov, Oliver
RFC 3227 provides general guidelines for digital evidence collection and archiving, while the International Organization on Computer Evidence offers guidelines for best practice in digital forensic examination. In the light of these guidelines, we analyze the integrity protection mechanisms provided by EnCase and FTK, which are mainly based on Message Digest Codes (MDCs). MDCs alone are not tamper-proof and can therefore be forged. With the proposed model for protecting digital evidence integrity by using smart cards (PIDESC), which establishes a secure platform for digitally signing the MDC (and, more generally, for a whole range of cryptographic services) in combination with Public Key Cryptography (PKC), we show that this weakness can be overcome.
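As a rough, software-only illustration of the underlying idea of signing the message digest (not the PIDESC implementation itself, which relies on a smart card), the following Python sketch hashes an evidence file and signs the digest with an in-memory RSA key from the cryptography library; the file name and its content are placeholders.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Sketch only: an in-memory RSA key stands in for the smart card used by PIDESC.
with open("evidence.dd", "wb") as f:          # stand-in evidence file for the sketch
    f.write(b"disk image bytes ...")

def digest_file(path: str) -> bytes:
    """Compute the MDC (SHA-256 digest) of an evidence file in chunks."""
    h = hashes.Hash(hashes.SHA256())
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.finalize()

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
mdc = digest_file("evidence.dd")
signature = key.sign(mdc, padding.PKCS1v15(), hashes.SHA256())   # sign the MDC

try:
    key.public_key().verify(signature, mdc, padding.PKCS1v15(), hashes.SHA256())
    print("MDC and signature verify: integrity record intact")
except InvalidSignature:
    print("verification failed: evidence or digest was altered")
```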
Information Assurance and Forensic Readiness
NASA Astrophysics Data System (ADS)
Pangalos, Georgios; Katos, Vasilios
Egalitarianism and justice are amongst the core attributes of a democratic regime and should also be secured in an e-democratic setting. As such, the rise of computer-related offenses poses a threat to the fundamental aspects of e-democracy and e-governance. Digital forensics is a key component for protecting and enabling the underlying (e-)democratic values, and therefore forensic readiness should be considered in an e-democratic setting. This position paper commences from the observation that the density of compliance and potential litigation activities is monotonically increasing in modern organizations, as rules, legislative regulations and policies are constantly being added to the corporate environment. Forensic practices seem to be departing from the niche of law enforcement and are becoming a business function and infrastructural component, posing new challenges to security professionals. Having no a priori knowledge of whether a security-related event or corporate policy violation will lead to litigation, we advocate that computer forensics needs to be applied to all investigatory, monitoring and auditing activities. This would result in an inflation of the responsibilities of the Information Security Officer. After exploring some commonalities and differences between IS audit and computer forensics, we present a list of strategic challenges the organization and, in effect, the IS security and audit practitioner will face.
Zhou, Xian; Chen, Xue
2011-05-09
Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments and are therefore a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware. In order to cope with this difficulty, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the dynamic dispersion tolerance range of receivers, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to complete synchronization, equalization and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal at clock rates in the hundreds-of-MHz range. © 2011 Optical Society of America
Forensic identification of resampling operators: A semi non-intrusive approach.
Cao, Gang; Zhao, Yao; Ni, Rongrong
2012-03-10
Recently, several new resampling operators have been proposed and have successfully invalidated the existing resampling detectors. However, the reliability of such anti-forensic techniques is unknown and needs to be investigated. In this paper, we focus on the forensic identification of digital image resampling operators, including both the traditional type and the anti-forensic type that hides the traces of traditional resampling. Various resampling algorithms, including geometric distortion (GD)-based, dual-path-based and postprocessing-based ones, are investigated. The identification is achieved in a semi non-intrusive manner, assuming the resampling software can be accessed. Given an input pattern of monotone signal, the polarity aberration of the GD-based resampled signal's first derivative is analyzed theoretically and measured by an effective feature metric. Dual-path-based and postprocessing-based resampling can also be identified by feeding proper test patterns. Experimental results on various parameter settings demonstrate the effectiveness of the proposed approach. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
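The following toy Python sketch (an assumption-laden stand-in, not the authors' method) illustrates the general idea of feeding a monotone test pattern to a resampler and inspecting the polarity of its first derivative; the jittered interpolation merely mimics a geometric-distortion-style resampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def polarity(signal):
    """Sign of the first derivative of a 1-D signal."""
    return np.sign(np.diff(signal))

n = 64
ramp = np.arange(n, dtype=float)              # monotone input test pattern
grid = np.linspace(0, n - 1, 97)              # target sampling positions

plain = np.interp(grid, np.arange(n), ramp)   # ordinary linear resampling
jittered = np.interp(grid + rng.uniform(-0.6, 0.6, grid.size),
                     np.arange(n), ramp)      # toy stand-in for a GD-style resampler

print("negative slopes, plain resampling   :", int(np.sum(polarity(plain) < 0)))
print("negative slopes, jittered resampling:", int(np.sum(polarity(jittered) < 0)))
```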
Trochesset, Denise A; Serchuk, Richard B; Colosi, Dan C
2014-03-01
Identification of unknown individuals using dental comparison is well established in the forensic setting. The identification technique can be time and resource consuming if many individuals need to be identified at once. Medical CT (MDCT) for dental profiling has had limited success, mostly due to artifacts from metal-containing dental restorations and implants. The authors describe a CBCT reformatting technique that creates images which closely approximate conventional dental images. Using an i-CAT Platinum CBCT unit and standard-issue i-CAT Vision software, a protocol was developed to reproducibly and reliably reformat CBCT volumes. The reformatted images are presented alongside conventional digital images from the same anatomic area for comparison. The authors conclude that images derived from CBCT volumes following this protocol are similar enough to conventional dental radiographs to allow for dental forensic comparison/identification, and that CBCT offers a superior option over MDCT for this purpose. © 2013 American Academy of Forensic Sciences.
Zhang, Qing-Xia; Yang, Meng; Pan, Ya-Jiao; Zhao, Jing; Qu, Bao-Wang; Cheng, Feng; Yang, Ya-Ran; Jiao, Zhang-Ping; Liu, Li; Yan, Jiang-Wei
2018-05-17
Massively parallel sequencing (MPS) has been used in forensic genetics in recent years owing to several advantages, e.g. MPS can provide precise descriptions of the repeat allele structure and of variation in the repeat-flanking regions, increasing the discriminating power among loci and individuals. However, it cannot be fully utilized unless sufficient population data are available for all loci. Thus, there is a pressing need to perform population studies providing a basis for the introduction of MPS into forensic practice. Here, we constructed a multiplex PCR system with fusion primers for one-directional PCR for MPS of 15 commonly used forensic autosomal STRs and amelogenin. Samples from 554 unrelated Chinese Northern Han individuals were typed using this MPS assay. In total, 313 alleles were observed by MPS across all 15 STRs, and the corresponding allele frequencies ranged between 0.0009 and 0.5162. Of all 15 loci, the number of alleles identified increased for 12 loci compared to capillary electrophoresis approaches, and for the following six loci more than double the number of alleles was found: D2S1338, D5S818, D21S11, D13S317, vWA, and D3S1358. Forensic parameters were calculated based on length- and sequence-based alleles. D21S11 showed the highest heterozygosity (0.8791), discrimination power (0.9865), and paternity exclusion probability in trios (0.7529). The cumulative match probability for MPS was approximately 2.3157 × 10^-20. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
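For readers unfamiliar with the quoted parameters, a minimal Python sketch of how expected heterozygosity and a combined match probability can be computed from allele frequencies under Hardy-Weinberg assumptions follows; the allele frequencies shown are invented and the formulas are the standard textbook ones, not taken from this study.

```python
import numpy as np

def expected_heterozygosity(p):
    """He = 1 - sum(p_i^2) for one locus."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def random_match_probability(p):
    """Per-locus match probability under Hardy-Weinberg assumptions."""
    p = np.asarray(p, dtype=float)
    homozygotes = np.sum(p ** 4)
    heterozygotes = sum((2 * p[i] * p[j]) ** 2
                        for i in range(len(p)) for j in range(i + 1, len(p)))
    return homozygotes + heterozygotes

loci = {
    "locus_A": [0.52, 0.30, 0.18],        # made-up allele frequencies
    "locus_B": [0.40, 0.25, 0.20, 0.15],
}
combined_mp = 1.0
for name, freqs in loci.items():
    print(name, "He =", round(expected_heterozygosity(freqs), 4))
    combined_mp *= random_match_probability(freqs)   # product over loci
print("combined match probability:", combined_mp)
```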
Diegoli, Toni Marie; Rohde, Heinrich; Borowski, Stefan; Krawczak, Michael; Coble, Michael D; Nothnagel, Michael
2016-11-01
Typing of X chromosomal short tandem repeat (X STR) markers has become a standard element of human forensic genetic analysis. Joint consideration of many X STR markers at a time increases their discriminatory power but, owing to physical linkage, requires inter-marker recombination rates to be accurately known. We estimated the recombination rates between 15 well established X STR markers using genotype data from 158 families (1041 individuals) and following a previously proposed likelihood-based approach that allows for single-step mutations. To meet the computational requirements of this family-based type of analysis, we modified a previous implementation so as to allow multi-core parallelization on a high-performance computing system. While we obtained recombination rate estimates larger than zero for all but one pair of adjacent markers within the four previously proposed linkage groups, none of the three X STR pairs defining the junctions of these groups yielded a recombination rate estimate of 0.50. Corroborating previous studies, our results therefore argue against a simple model of independent X chromosomal linkage groups. Moreover, the refined recombination fraction estimates obtained in our study will facilitate the appropriate joint consideration of all 15 investigated markers in forensic analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
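As a loose illustration of family-level parallelization on a multi-core system (not the published implementation, which additionally models single-step mutations), the following Python sketch distributes an invented per-family binomial log-likelihood across worker processes and maximizes the total over a grid of recombination fractions.

```python
from functools import partial
from multiprocessing import Pool
import math

# Toy stand-in: per-family log-likelihood of a recombination fraction theta, given
# r observed recombinants out of n informative meioses (all values invented).
def family_loglik(theta, family):
    r, n = family
    return r * math.log(theta) + (n - r) * math.log(1.0 - theta)

def total_loglik(theta, families, pool):
    # each worker evaluates one family; results are summed over all families
    return sum(pool.map(partial(family_loglik, theta), families))

if __name__ == "__main__":
    families = [(1, 9), (0, 7), (2, 11), (1, 8)]          # invented (recombinants, meioses)
    grid = [t / 100 for t in range(1, 50)]                # candidate recombination fractions
    with Pool() as pool:
        best = max(grid, key=lambda t: total_loglik(t, families, pool))
    print("maximum-likelihood estimate of theta on the grid:", best)
```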
Evidence of tampering in watermark identification
NASA Astrophysics Data System (ADS)
McLauchlan, Lifford; Mehrübeoglu, Mehrübe
2009-08-01
In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
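A rough Python sketch of this kind of feature pipeline (DWT coefficients, PCA, higher-order statistics and eigenvalues) is given below; it is not the authors' code, and the random array merely stands in for a watermarked image.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA

# Illustrative feature extraction for one image; all sizes and choices are arbitrary.
rng = np.random.default_rng(1)
image = rng.normal(size=(128, 128))                 # stand-in for a watermarked image

_, (cH, cV, cD) = pywt.dwt2(image, "haar")          # one-level 2-D DWT detail subbands
coeffs = np.column_stack([c.ravel() for c in (cH, cV, cD)])

pca = PCA(n_components=3)
scores = pca.fit_transform(coeffs)                  # principal components of the coefficients

# higher-order statistics of each principal component, plus the eigenvalues
features = np.array([[skew(s), kurtosis(s)] for s in scores.T]).ravel()
features = np.concatenate([features, pca.explained_variance_])
print("feature vector length for one image:", features.shape[0])
```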
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing evidence about, and distinguishing characteristics of, the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely optical aberration-based, sensor camera fingerprint-based, processing statistics-based and processing regularity-based. Furthermore, this paper aims to investigate the challenging problems and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Phillips, C; Gettings, K Butler; King, J L; Ballard, D; Bodner, M; Borsuk, L; Parson, W
2018-05-01
The STR sequence template file published in 2016 as part of the considerations from the DNA Commission of the International Society for Forensic Genetics on minimal STR sequence nomenclature requirements, has been comprehensively revised and audited using the latest GRCh38 genome assembly. The list of forensic STRs characterized was expanded by including supplementary autosomal, X- and Y-chromosome microsatellites in less common use for routine DNA profiling, but some likely to be adopted in future massively parallel sequencing (MPS) STR panels. We outline several aspects of sequence alignment and annotation that required care and attention to detail when comparing sequences to GRCh37 and GRCh38 assemblies, as well as the necessary matching of MPS-based allele descriptions to previously established repeat region structures described in initial sequencing studies of the less well known forensic STRs. The revised sequence guide is now available in a dynamically updated FTP format from the STRidER website with a date-stamped change log to allow users to explore their own MPS data with the most up-to-date forensic STR sequence information compiled in a simple guide. Copyright © 2018 Elsevier B.V. All rights reserved.
Siegel, David M; Kinscherff, Robert
2018-04-25
The standard of practice for forensic interviews in criminal and delinquency cases, other than those conducted as part of brief preliminary screening evaluations or in emergency situations, should include a digital recording requirement. This standard should be adopted because of the greater availability of, and familiarity with, recording technology on the part of mental health professionals, the greater use and proven effectiveness of recording in other contexts of the criminal justice system, and the improvement in court presentation and accuracy of judicial determinations involving forensic assessments that recording will provide. The experience of practitioners with recording since professional associations last studied the issue should be taken into account, as informal data suggest it has been positive. Unfortunately, the legal system is unlikely to prompt this advance without its reconsideration by the forensic mental health professions, because current constitutional jurisprudence does not require recording and effectively makes it contingent upon request by examiners. Forensic evaluators thus have a valuable opportunity to educate the legal system on the utility and importance of this key reform, and so should adopt it as a best practice. Copyright © 2018 John Wiley & Sons, Ltd.
Ambers, Angie D; Churchill, Jennifer D; King, Jonathan L; Stoljarova, Monika; Gill-King, Harrell; Assidi, Mourad; Abu-Elmagd, Muhammad; Buhmeida, Abdelbaset; Al-Qahtani, Mohammed; Budowle, Bruce
2016-10-17
Although the primary objective of forensic DNA analyses of unidentified human remains is positive identification, cases involving historical or archaeological skeletal remains often lack reference samples for comparison. Massively parallel sequencing (MPS) offers an opportunity to provide biometric data in such cases, and these cases provide valuable data on the feasibility of applying MPS for characterization of modern forensic casework samples. In this study, MPS was used to characterize 140-year-old human skeletal remains discovered at a historical site in Deadwood, South Dakota, United States. The remains were in an unmarked grave and there were no records or other metadata available regarding the identity of the individual. Due to the high throughput of MPS, a variety of biometric markers could be typed using a single sample. Using MPS and suitable forensic genetic markers, more relevant information could be obtained from a sample of limited quantity and quality. Results were obtained for 25/26 Y-STRs, 34/34 Y-SNPs, 166/166 ancestry-informative SNPs, 24/24 phenotype-informative SNPs, 102/102 human identity SNPs, 27/29 autosomal STRs (plus amelogenin), and 4/8 X-STRs (as well as ten regions of mtDNA). The Y-chromosome (Y-STR, Y-SNP) and mtDNA profiles of the unidentified skeletal remains are consistent with the R1b and H1 haplogroups, respectively. Both of these haplogroups are the most common haplogroups in Western Europe. Ancestry-informative SNP analysis also supported European ancestry. The genetic results are consistent with anthropological findings that the remains belong to a male of European ancestry (Caucasian). Phenotype-informative SNP data provided strong support that the individual had light red hair and brown eyes. This study is among the first to genetically characterize historical human remains with forensic genetic marker kits specifically designed for MPS. The outcome demonstrates that substantially more genetic information can be obtained from the same initial quantities of DNA as are used in current CE-based analyses.
[Current macro-diagnostic trends of forensic medicine in the Czech Republic].
Frišhons, Jan; Kučerová, Štěpánka; Jurda, Mikoláš; Sokol, Miloš; Vojtíšek, Tomáš; Hejna, Petr
2017-01-01
Over the last few years, advanced diagnostic methods have penetrated the realm of forensic medicine, in addition to standard autopsy techniques supported by traditional X-ray examination and macro-diagnostic laboratory tests. Despite the progress of imaging methods, the conventional autopsy has remained the basic and essential diagnostic tool in forensic medicine. Postmortem computed tomography and magnetic resonance imaging are by far the most progressive modern radiodiagnostic methods, setting the current trend of virtual autopsies all over the world. Up to now, only two institutes of forensic medicine in the Czech Republic have postmortem computed tomography available for routine diagnostic purposes. Postmortem magnetic resonance is currently unattainable for routine diagnostic use and has been employed only for experimental purposes. Photogrammetry is a digital method focused primarily on body surface imaging. Recently, the most fruitful results have been yielded by the interdisciplinary cooperation between forensic medicine and forensic anthropology, with the implementation of body scanning techniques and 3D printing. Non-invasive and mini-invasive investigative methods such as postmortem sonography and postmortem endoscopy have been unsystematically tested for diagnostic performance, with good outcomes despite the limitations of these methods in postmortem application. Other futuristic methods, such as the use of a drone to inspect the crime scene, are still experimental tools. The authors of the article present a basic overview of both routinely and experimentally used investigative methods and of current macro-diagnostic trends in forensic medicine in the Czech Republic.
[Application of computed tomography (CT) examination for forensic medicine].
Urbanik, Andrzej; Chrzan, Robert
2013-01-01
The aim of the study is to present our own experience in the use of post-mortem CT examination for forensic medicine. With the help of a 16-slice CT scanner, 181 corpses were examined. The imaging data obtained during acquisition were later processed with dedicated programs. The analyzed images were extracted from axial sections, multiplanar reconstructions and 3D reconstructions. The information gained greatly helped when the classical autopsy was performed, by making it more accurate. CT images recorded digitally make it possible to evaluate corpses at any time, despite processes of putrefaction or cremation. If possible, CT examination should precede the classical autopsy.
NASA Astrophysics Data System (ADS)
Sun, Degui; Wang, Na-Xin; He, Li-Ming; Weng, Zhao-Heng; Wang, Daheng; Chen, Ray T.
1996-06-01
A space-position-logic-encoding scheme is proposed and demonstrated. This encoding scheme not only makes the best use of the convenience of binary logic operation, but is also suitable for the trinary property of modified signed-digit (MSD) numbers. Based on the space-position-logic-encoding scheme, a fully parallel modified signed-digit adder and subtractor is built using optoelectronic switch technologies in conjunction with fiber-multistage 3D optoelectronic interconnects. Thus an effective combination of a parallel algorithm and a parallel architecture is implemented. In addition, the performance of the optoelectronic switches used in this system is experimentally studied and verified. Both the 3-bit experimental model and the experimental results of a parallel addition and a parallel subtraction are provided and discussed. Finally, the speed ratio between the MSD adder and binary adders is discussed and the advantage of the MSD in operating speed is demonstrated.
Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers
ERIC Educational Resources Information Center
Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph
2015-01-01
In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…
Liu, Jing; Wang, Zheng; He, Guanglin; Zhao, Xueying; Wang, Mengge; Luo, Tao; Li, Chengtao; Hou, Yiping
2018-07-01
Massively parallel sequencing (MPS) technologies can sequence many targeted regions of multiple samples simultaneously and are gaining great interest in the forensic community. The Precision ID Identity Panel contains 90 autosomal SNPs and 34 upper Y-Clade SNPs; it was designed with small amplicons and optimized for forensically degraded or challenging samples. Here, 184 unrelated individuals from three East Asian minority ethnicities (Tibetan, Uygur and Hui) were analyzed using the Precision ID Identity Panel and the Ion PGM System. The sequencing performance and corresponding forensic statistical parameters of this MPS-SNP panel were investigated. The inter-population relationships and substructures among the three investigated populations and 30 worldwide populations were further investigated using PCA, MDS, cladograms and STRUCTURE. No significant deviation from Hardy-Weinberg equilibrium (HWE) and no significant linkage disequilibrium (LD) were observed across all 90 autosomal SNPs. The combined matching probabilities (CMP) for Tibetan, Uygur and Hui were 2.5880 × 10^-33, 1.7480 × 10^-35 and 4.6326 × 10^-34, respectively, and the combined powers of exclusion (CPE) were 0.999999386152271, 0.999999607712827 and 0.999999696360182, respectively. For the 34 Y-SNPs, only 16 haplogroups were obtained, but the haplogroup distributions differ among the three populations. Tibetans, a Sino-Tibetan-speaking population, and Hui, an admixed population with multiple ethnic origins, show genetic affinity with East Asian populations, while Uygurs, a Eurasian admixed population, share genetic components with South Asian populations and fall between East Asian and European populations. The aforementioned results suggest that the Precision ID Identity Panel is informative and polymorphic in the three investigated populations and could be used as an effective tool for human forensics. Copyright © 2018 Elsevier B.V. All rights reserved.
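For illustration, a minimal Python sketch of a chi-square Hardy-Weinberg equilibrium test for a single biallelic SNP follows; the genotype counts are invented and the test is the standard one-degree-of-freedom approximation, not the exact procedure used in the study.

```python
from scipy.stats import chi2

def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square HWE test for one biallelic SNP (1 degree of freedom)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)           # frequency of allele A
    q = 1.0 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)          # statistic and p-value

# Invented genotype counts (AA, Aa, aa) for one SNP in one population
stat, p_value = hwe_chi_square(52, 90, 42)
print("chi-square =", round(stat, 3), " p =", round(p_value, 3))
```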
Parallel pulse processing and data acquisition for high speed, low error flow cytometry
van den Engh, Gerrit J.; Stokdijk, Willem
1992-01-01
A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
Olakanye, Ayodeji O; Thompson, Tim; Ralebitso-Senior, T Komang
2015-12-01
In a forensic context, microbially mediated cadaver decomposition and nutrient recycling cannot be overlooked. As a result, forensic ecogenomics research has intensified to gain a better understanding of cadaver/soil ecology interactions as a powerful potential tool for forensic practitioners. For this study, domestic pig (Sus scrofa domesticus) (4 g) and grass (Agrostis/Festuca spp.) cuttings (4 g) were buried (July 2013 to July 2014) in sandy clay loam (80 g) triplicates in sealed microcosms (127 ml; 50 × 70 cm) with parallel soil-only controls. The effects of the two carbon sources were determined by monitoring key environmental factors and changes in soil bacterial (16S rRNA gene) and fungal (18S rRNA gene) biodiversity. Soil pH changes showed statistically significant differences (p < 0.05) between the treatments. The measured ecological diversity indices (Shannon-Wiener, H′; Simpson, D; and richness, S) of the 16S rRNA and 18S rRNA gene profiles also revealed differences between the treatments, with bacterial and fungal community dominance recorded in the presence of S. scrofa domesticus and grass trimming decomposition, respectively. In contrast, no statistically significant difference in evenness (p > 0.05) was observed between the treatments. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
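A short Python sketch of how the quoted diversity indices can be computed from a community profile follows; the counts are invented, and Simpson's index is given here as the sum of squared proportions (one of several common conventions), so this is illustrative rather than the authors' exact calculation.

```python
import math

def diversity_indices(counts):
    """Shannon-Wiener H', Simpson's D (sum of squared proportions) and richness S
    from per-OTU counts. Illustrative only; convention for D varies between studies."""
    counts = [c for c in counts if c > 0]
    total = sum(counts)
    props = [c / total for c in counts]
    shannon = -sum(p * math.log(p) for p in props)
    simpson = sum(p * p for p in props)
    richness = len(counts)
    return shannon, simpson, richness

# Made-up 16S rRNA gene profile (e.g. relative band intensities)
print(diversity_indices([12, 8, 5, 5, 3, 1]))
```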
Mitochondrial DNA heteroplasmy in the emerging field of massively parallel sequencing
Just, Rebecca S.; Irwin, Jodi A.; Parson, Walther
2015-01-01
Long an important and useful tool in forensic genetic investigations, mitochondrial DNA (mtDNA) typing continues to mature. Research in the last few years has demonstrated both that data from the entire molecule will have practical benefits in forensic DNA casework, and that massively parallel sequencing (MPS) methods will make full mitochondrial genome (mtGenome) sequencing of forensic specimens feasible and cost-effective. A spate of recent studies has employed these new technologies to assess intraindividual mtDNA variation. However, in several instances, contamination and other sources of mixed mtDNA data have been erroneously identified as heteroplasmy. Well vetted mtGenome datasets based on both Sanger and MPS sequences have found authentic point heteroplasmy in approximately 25% of individuals when minor component detection thresholds are in the range of 10–20%, along with positional distribution patterns in the coding region that differ from patterns of point heteroplasmy in the well-studied control region. A few recent studies that examined very low-level heteroplasmy are concordant with these observations when the data are examined at a common level of resolution. In this review we provide an overview of considerations related to the use of MPS technologies to detect mtDNA heteroplasmy. In addition, we examine published reports on point heteroplasmy to characterize features of the data that will assist in the evaluation of future mtGenome data developed by any typing method. PMID:26009256
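As a simple illustration of threshold-based detection of point heteroplasmy from read counts (not any specific published pipeline), consider the following Python sketch; positions, counts and the 10% threshold are made up for the example.

```python
def point_heteroplasmy_calls(read_counts, threshold=0.10):
    """Flag positions whose minor nucleotide fraction exceeds a detection threshold.
    read_counts maps position -> {base: read count}. Toy illustration only."""
    calls = {}
    for pos, bases in read_counts.items():
        total = sum(bases.values())
        major = max(bases, key=bases.get)
        minor_fraction = 1.0 - bases[major] / total
        if minor_fraction >= threshold:
            calls[pos] = (major, round(minor_fraction, 3))
    return calls

coverage = {
    16093: {"T": 820, "C": 180},   # 18% minor component -> reported at a 10% threshold
    16189: {"T": 990, "C": 10},    # 1% minor component -> below threshold
}
print(point_heteroplasmy_calls(coverage, threshold=0.10))
```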
Analytical Characterization of Erythritol Tetranitrate, an Improvised Explosive.
Matyáš, Robert; Lyčka, Antonín; Jirásko, Robert; Jakový, Zdeněk; Maixner, Jaroslav; Mišková, Linda; Künzel, Martin
2016-05-01
Erythritol tetranitrate (ETN), an ester of nitric acid and erythritol, is a solid crystalline explosive with high explosive performance. Although it has never been used in any industrial or military application, it has become one of the most frequently prepared and misused improvised explosives. In this study, several analytical techniques were explored to facilitate analysis in forensic laboratories. FTIR and Raman spectrometry measurements expand existing data and provide a more detailed assignment of bands through the parallel study of erythritol [¹⁵N₄]tetranitrate. In the case of powder diffraction, recently published data were verified, and the ¹H, ¹³C, and ¹⁵N NMR spectra are discussed in detail. The technique of electrospray ionization tandem mass spectrometry was successfully used for the analysis of ETN. The described methods allow fast, versatile, and reliable detection or analysis of samples containing erythritol tetranitrate in forensic laboratories. © 2016 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo
This paper deals with a digital control scheme for a multiple-paralleled high-frequency switching current amplifier with four-quadrant choppers for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track highly precise current patterns in the gradient coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other and uses optimally designed switching gate pulse patterns that are not influenced by the large filter current ripple amplitude. The optimal control implementation and linear control theory are well matched in GC current amplifiers and yield excellent characteristics. The digital control system can be realized easily with DSPs or microprocessors. Multiple microprocessors operating in parallel realize a two-fold or higher paralleled GC current-pattern-tracking amplifier with optimal control design, and excellent results are presented for improving the image quality of MRI systems.
Microfluidic Devices for Forensic DNA Analysis: A Review.
Bruijns, Brigitte; van Asten, Arian; Tiggelaar, Roald; Gardeniers, Han
2016-08-05
Microfluidic devices may offer various advantages for forensic DNA analysis, such as reduced risk of contamination, shorter analysis time and direct application at the crime scene. Microfluidic chip technology has already proven to be functional and effective within medical applications, such as for point-of-care use. In the forensic field, one may expect microfluidic technology to become particularly relevant for the analysis of biological traces containing human DNA. This would require a number of consecutive steps, including sample work up, DNA amplification and detection, as well as secure storage of the sample. This article provides an extensive overview of microfluidic devices for cell lysis, DNA extraction and purification, DNA amplification and detection and analysis techniques for DNA. Topics to be discussed are polymerase chain reaction (PCR) on-chip, digital PCR (dPCR), isothermal amplification on-chip, chip materials, integrated devices and commercially available techniques. A critical overview of the opportunities and challenges of the use of chips is discussed, and developments made in forensic DNA analysis over the past 10-20 years with microfluidic systems are described. Areas in which further research is needed are indicated in a future outlook.
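As an aside on the digital PCR (dPCR) mentioned above, quantification typically relies on a Poisson correction of the fraction of positive partitions; the following Python sketch applies that standard formula with an assumed partition volume and invented counts, and is not taken from the reviewed devices.

```python
import math

def dpcr_concentration(positive, total, partition_volume_nl=0.85):
    """Estimate target concentration (copies/uL) from dPCR partition counts using
    the standard Poisson correction. The 0.85 nL partition volume is an assumption."""
    neg_fraction = 1.0 - positive / total
    lam = -math.log(neg_fraction)                 # mean copies per partition
    copies_per_nl = lam / partition_volume_nl
    return copies_per_nl * 1000.0                 # copies per microlitre

# Invented example: 4300 positive partitions out of 20000
print(round(dpcr_concentration(positive=4300, total=20000), 1), "copies/uL")
```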
Post-mortem chemical excitability of the iris should not be used for forensic death time diagnosis.
Koehler, Katja; Sehner, Susanne; Riemer, Martin; Gehl, Axel; Raupach, Tobias; Anders, Sven
2018-04-18
Post-mortem chemical excitability of the iris is one of the non-temperature-based methods in forensic diagnosis of the time since death. Although several authors have reported on their findings, using different measurement methods, the currently used time limits are based on a single dissertation which has recently been doubted to be applicable for forensic purposes. We investigated changes in the pupil-iris ratio after application of acetylcholine (n = 79) or tropicamide (n = 58) and in controls, at the upper and lower time limits suggested in the current literature, using a digital photography-based measurement method with excellent reliability. We observed "positive," "negative," and "paradox" reactions in both intervention and control conditions at all investigated post-mortem time points, suggesting spontaneous changes in pupil size to be causative for the finding. According to our observations, post-mortem chemical excitability of the iris should not be used in forensic death time estimation, as results may lead to false conclusions regarding the correct time point of death and might therefore be strongly misleading.
Cost-effective forensic image enhancement
NASA Astrophysics Data System (ADS)
Dalrymple, Brian E.
1998-12-01
In 1977, a paper was presented at the SPIE conference in Reston, Virginia, detailing the computer enhancement of the Zapruder film. The forensic value of this examination in a major homicide investigation was apparent to the viewer. Equally clear was the potential for extracting evidence which is beyond the reach of conventional detection techniques. The cost of this technology in 1976, however, was prohibitive, and well beyond the means of most police agencies. Twenty-two years later, a highly efficient means of image enhancement is easily within the grasp of most police agencies, not only for homicides but for any case application. A PC workstation combined with an enhancement software package allows a forensic investigator to fully exploit digital technology. The goal of this approach is the optimization of the signal-to-noise ratio in images. Obstructive backgrounds may be diminished or eliminated while weak signals are optimized by the use of algorithms including the Fast Fourier Transform, histogram equalization and image subtraction. An added benefit is the speed with which these processes are completed and the results known. The efficacy of forensic image enhancement is illustrated through case applications.
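A rough Python/OpenCV sketch of the named operations (histogram equalization, Fourier-domain filtering and background subtraction) applied to a synthetic stand-in image is shown below; all parameters are illustrative and this is not the workflow described in the paper.

```python
import cv2
import numpy as np

# Synthetic stand-in: a weak circular "mark" hidden under a periodic background.
rng = np.random.default_rng(0)
ys, xs = np.indices((256, 256))
background = 40 * np.sin(2 * np.pi * xs / 16)                         # obstructive pattern
mark = np.where((xs - 128) ** 2 + (ys - 128) ** 2 < 40 ** 2, 15, 0)   # weak signal
img = cv2.normalize(background + mark + rng.normal(0, 5, (256, 256)),
                    None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

equalized = cv2.equalizeHist(img)                                     # histogram equalization

# Crude notch filter in the Fourier domain to suppress the periodic background.
spectrum = np.fft.fftshift(np.fft.fft2(equalized.astype(float)))
r, c = 128, 128
notch = np.ones_like(spectrum)
notch[r - 1:r + 2, :c - 5] = 0
notch[r - 1:r + 2, c + 6:] = 0
filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * notch)))

# Image subtraction: remove a smooth background estimate to emphasise the weak signal.
blurred = cv2.GaussianBlur(filtered.astype(np.float32), (51, 51), 0)
enhanced = cv2.normalize(filtered - blurred, None, 0, 255, cv2.NORM_MINMAX)
print(enhanced.shape, float(enhanced.max()))
```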
Shamata, Awatif; Thompson, Tim
2018-05-10
Non-contact three-dimensional (3D) surface scanning has been applied in forensic medicine and has been shown to mitigate shortcomings of traditional documentation methods. The aim of this paper is to assess the efficiency of structured light 3D surface scanning in recording traumatic injuries of live cases in clinical forensic medicine. The work was conducted at the Medico-Legal Centre in Benghazi, Libya. A structured light 3D surface scanner and an ordinary digital camera with a close-up lens were used to record the injuries and to obtain 3D and two-dimensional (2D) documentation of the same traumas. Two different types of comparison were performed. Firstly, the 3D wound documents were compared to the 2D documents based on subjective visual assessment. Additionally, 3D wound measurements were compared to conventional measurements to determine whether there was a statistically significant difference between them. For this, the Friedman test was used. The study established that the 3D wound documents had extra features over the 2D documents. Moreover, the 3D scanning method was able to overcome the main deficiencies of digital photography. No statistically significant difference was found between the 3D and conventional wound measurements. Spearman's correlation established a strong, positive correlation between the 3D and conventional measurement methods. Although the 3D surface scanning of the injuries of the live subjects faced some difficulties, the 3D results were appreciated and the validity of 3D measurements based on structured light 3D scanning was established. Further work will be carried out in forensic pathology to scan open injuries with depth information. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
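For illustration of the statistics mentioned (Friedman test and Spearman's correlation), a small Python sketch with invented wound measurements follows; note that the Friedman test as implemented in scipy requires at least three related samples, so a third measurement condition is assumed here.

```python
from scipy.stats import friedmanchisquare, spearmanr

# Invented wound-length measurements (mm) for the same five wounds under three
# documentation conditions; values and condition names are purely illustrative.
ruler    = [12.0, 25.5, 8.0, 40.2, 15.5]
photo_2d = [12.4, 25.0, 8.3, 39.8, 15.9]
scan_3d  = [12.1, 25.6, 7.9, 40.5, 15.4]

stat, p = friedmanchisquare(ruler, photo_2d, scan_3d)   # related-samples test
rho, p_rho = spearmanr(ruler, scan_3d)                  # rank correlation, 3D vs conventional
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```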
...room) or while being on the mobile (agents in action). While desktop-based applications can be used to monitor, but also to process and analyse, surveillance data coming from a variety of sources, mobile-based techniques ... Keywords: digital forensics analysis; visualization techniques for surveillance; mobile-based surveillance.
Two-stage Keypoint Detection Scheme for Region Duplication Forgery Detection in Digital Images.
Emam, Mahmoud; Han, Qi; Zhang, Hongli
2018-01-01
In digital image forensics, copy-move or region duplication forgery detection has recently become a vital research topic. Most existing keypoint-based forgery detection methods, although robust to geometric changes, fail to detect forgeries in smooth regions. To solve these problems and detect keypoints that cover all regions of an image, we propose a two-stage keypoint detection scheme. First, we employ the scale-invariant feature operator to detect spatially distributed keypoints in the textured regions. Second, keypoints in the remaining regions are detected using the Harris corner detector with non-maximal suppression to distribute the detected keypoints evenly. To improve the matching performance, local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor. A comprehensive performance evaluation is carried out based on precision-recall rates and a commonly used test dataset. The results demonstrate that the proposed scheme has better detection performance and robustness against some geometric transformation attacks compared with state-of-the-art methods. © 2017 American Academy of Forensic Sciences.
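A loose Python/OpenCV sketch of the two-stage idea (scale-invariant keypoints plus Harris corners with a crude non-maximal suppression) is given below; it is not the authors' implementation, and the synthetic test image, thresholds and window sizes are arbitrary.

```python
import cv2
import numpy as np

# Synthetic test image: textured left half, nearly flat right half with a small feature.
rng = np.random.default_rng(0)
img = np.full((256, 256), 128, np.uint8)
img[:, :128] = rng.integers(0, 255, (256, 128), dtype=np.uint8)
img[60:68, 180:188] = 200

sift = cv2.SIFT_create()
sift_kps = sift.detect(img, None)                                  # stage 1: textured regions

harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
local_max = harris == cv2.dilate(harris, np.ones((9, 9), np.uint8))  # crude non-max suppression
ys, xs = np.nonzero(local_max & (harris > 0.01 * harris.max()))      # stage 2: remaining regions

print(len(sift_kps), "SIFT keypoints;", len(xs), "Harris corners after suppression")
```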
Is the thumb a fifth finger? A study of digit interaction during force production tasks
Olafsdottir, Halla; Zatsiorsky, Vladimir M.; Latash, Mark L.
2010-01-01
We studied indices of digit interaction in single- and multi-digit maximal voluntary contraction (MVC) tests when the thumb acted either in parallel or in opposition to the fingers. The peak force produced by the thumb was much higher when the thumb acted in opposition to the fingers and its share of the total force in the five-digit MVC test increased dramatically. The fingers showed relatively similar peak forces and unchanged sharing patterns in the four-finger MVC task when the thumb acted in parallel and in opposition to the fingers. Enslaving during one-digit tasks showed relatively mild differences between the two conditions, while the differences became large when enslaving was quantified for multi-digit tasks. Force deficit was pronounced when the thumb acted in parallel to the fingers; it showed a monotonic increase with the number of explicitly involved digits up to four digits and then a drop when all five digits were involved. Force deficit all but disappeared when the thumb acted in opposition to the fingers. However, for both thumb positions, indices of digit interaction were similar for groups of digits that did or did not include the thumb. These results suggest that, given a certain hand configuration, the central nervous system treats the thumb as a fifth finger. They provide strong support for the hypothesis that indices of digit interaction reflect neural factors, not the peripheral design of the hand. An earlier formal model was able to account for the data when the thumb acted in parallel to the fingers. However, it failed for the data with the thumb acting in opposition to the fingers. PMID:15322785
USDA-ARS?s Scientific Manuscript database
Flesh flies in the genus Sarcophaga are important models for investigating endocrinology, diapause, cold hardiness, reproduction, and immunity. Despite the prominence of Sarcophaga flesh flies as models for insect physiology and biochemistry, and in forensic studies, little genomic or transcriptom...
Eduardoff, Mayra; Xavier, Catarina; Strobl, Christina; Casas-Vargas, Andrea; Parson, Walther
2017-01-01
The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS method for final implementation in forensic genetic laboratories. PMID:28934125
A first proposal for a general description model of forensic traces
NASA Astrophysics Data System (ADS)
Lindauer, Ina; Schäler, Martin; Vielhauer, Claus; Saake, Gunter; Hildebrandt, Mario
2012-06-01
In recent years, the amount of digitally captured traces at crime scenes has increased rapidly. There are various kinds of such traces, like pick marks on locks, latent fingerprints on various surfaces, as well as different micro traces. These traces differ from each other not only in kind but also in the information they provide. Every kind of trace has its own properties (e.g., minutiae for fingerprints, or raking traces for locks), but there are also large amounts of metadata which all traces have in common, like location, time and other additional information in relation to crime scenes. For selected types of crime scene traces, type-specific databases already exist, such as ViCLAS for sexual offences, IBIS for ballistic forensics or AFIS for fingerprints. These existing forensic databases strongly differ in their trace description models. For forensic experts it would be beneficial to work with only one database capable of handling all possible forensic traces acquired at a crime scene. This is especially the case when different kinds of traces are interrelated (e.g., fingerprints and ballistic marks on a bullet casing). Unfortunately, current research on interrelated traces as well as on general forensic data models and structures is not mature enough to build such an encompassing forensic database. Nevertheless, recent advances in the field of contact-less scanning make it possible to acquire different kinds of traces with the same device. Therefore, the data for these traces are structured similarly, which simplifies the design of a general forensic data model for different kinds of traces. In this paper we introduce a first common description model for different forensic trace types. Furthermore, for selected trace types we apply the phases of the well-established database schema development process to transfer expert knowledge from the corresponding forensic fields into an extendible, database-driven, generalised forensic description model. The trace types considered here are fingerprint traces, traces at locks, micro traces and ballistic traces. Based on these basic trace types, combined traces (multiple or overlapped fingerprints, fingerprints on bullet casings, etc.) and partial traces are also considered.
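To make the idea of a common description model more concrete, a hypothetical Python sketch follows, pairing shared crime-scene metadata with a free-form dictionary of type-specific properties and links between interrelated traces; all field names and values are invented and do not reproduce the model proposed in the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict

@dataclass
class TraceRecord:
    """Hypothetical generalised trace record: common metadata for every trace type,
    plus type-specific properties and links to interrelated traces."""
    trace_id: str
    trace_type: str                      # "fingerprint", "lock", "micro", "ballistic", ...
    location: str                        # where at the crime scene it was acquired
    acquired_at: datetime
    sensor: str                          # contact-less acquisition device used
    properties: Dict[str, object] = field(default_factory=dict)
    related_traces: list = field(default_factory=list)   # e.g. fingerprint on a casing

fp = TraceRecord("T-001", "fingerprint", "door handle", datetime(2012, 6, 1, 9, 30),
                 "chromatic white light sensor", {"minutiae_count": 23, "overlapped": False})
casing = TraceRecord("T-002", "ballistic", "floor, room 2", datetime(2012, 6, 1, 9, 45),
                     "3D surface scanner", {"caliber_mm": 9.0}, related_traces=["T-001"])
print(fp, casing, sep="\n")
```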
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data were acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image content, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results, evaluated on over 1300 natural images, show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
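A simplified Python sketch of the detection idea follows; it replaces the paper's maximum-likelihood step with a plain periodicity score and uses a synthetic blocky image, so it should be read as an illustration of the cross-difference plus morphology approach rather than the proposed method itself.

```python
import numpy as np
from scipy.ndimage import grey_opening

def estimate_block_size(img, candidates=range(2, 33)):
    img = img.astype(float)
    # 2x2 cross-difference responds at block corners rather than along smooth edges
    d = np.abs(img[:-1, :-1] + img[1:, 1:] - img[:-1, 1:] - img[1:, :-1])
    # morphological opening attenuates wide responses typical of image content
    d = d - grey_opening(d, size=(1, 5))
    profile = d.sum(axis=0)                           # column-wise blockiness profile
    def score(b):
        grid = np.arange(b - 1, profile.size, b)      # columns just left of block borders
        return profile[grid].mean() / (np.delete(profile, grid).mean() + 1e-9)
    return max(candidates, key=score)                 # simple periodicity score, not MLE

rng = np.random.default_rng(0)
blocks = rng.normal(scale=8.0, size=(16, 16))
blocky = np.kron(blocks, np.ones((8, 8))) + rng.normal(size=(128, 128))  # 8x8 block offsets
print(estimate_block_size(blocky))                    # expected to report 8
```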
Windows 7 Antiforensics: A Review and a Novel Approach.
Eterovic-Soric, Brett; Choo, Kim-Kwang Raymond; Mubarak, Sameera; Ashman, Helen
2017-07-01
In this paper, we review the literature on antiforensics published between 2010 and 2016 and reveal the surprising lack of up-to-date research on this topic. This research aims to address this knowledge gap by investigating different antiforensic techniques for devices running Windows 7, one of the most popular operating systems. An approach which allows for the removal or obfuscation of most forensic evidence is then presented. Using the Trojan software DarkComet RAT as a case study, we demonstrate the utility of our approach and show that a Trojan horse infection may be a legitimate possibility, even if there is no evidence of an infection on a seized computer's hard drive. Up-to-date information regarding how forensic artifacts can be compromised will allow relevant stakeholders to make informed decisions when deciding the outcome of legal cases involving digital evidence. © 2017 American Academy of Forensic Sciences.
Blurriness in Live Forensics: An Introduction
NASA Astrophysics Data System (ADS)
Savoldi, Antonio; Gubian, Paolo
The Live Forensics discipline aims at answering basic questions related to a digital crime, which usually involves a computer-based system. The investigation should be carried out with the goal of establishing which processes were running, when they were started and by whom, what specific activities those processes were performing, and the state of active network connections. In addition, a set of tools needs to be launched on the running system, thereby altering, as a consequence of Locard's exchange principle [2], the system's memory. All the methodologies proposed so far for the live forensics field have a basic, albeit important, weakness, which is the inability to quantify the perturbation, or blurriness, of the system's memory of the investigated computer. This is the ultimate goal of this paper: to provide a set of guidelines which can be effectively used for measuring the uncertainty of the volatile memory collected on a live system under investigation.
[Student tragedy. Forensic-psychiatric and legal medicine aspects of an unusual crime].
Cabanis, D; Bratzke, H
1985-01-01
The unusual circumstances of the violent killing of an 18-year-old girl by her 18.8-year-old schoolfriend led us to undertake a forensic-psychiatric analysis of the offence as well as a presentation of the legal-medicine points of view. The crime, which can be classified as a collective lover crime for which there is no parallel in the literature, was only solved 9 months later, when one of the two delinquents confessed to a further offence. The killing was planned and prepared, the victim being buried hurriedly in a previously dug hole in a wood after she had been strangled.
Two schemes for rapid generation of digital video holograms using PC cluster
NASA Astrophysics Data System (ADS)
Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il
2017-12-01
Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
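The frame-level parallelization idea can be illustrated with the following toy Python sketch, in which each worker process generates one complete (stand-in) hologram frame; the fringe-pattern function is a placeholder and bears no relation to the actual CGH computation.

```python
from multiprocessing import Pool
import numpy as np

RES = (256, 256)

def generate_frame(frame_index: int) -> np.ndarray:
    """Stand-in for CGH: produce one 'hologram' frame as a fringe-like pattern."""
    ys, xs = np.indices(RES)
    phase = np.sin(0.1 * xs + 0.05 * frame_index) + np.cos(0.1 * ys)
    scaled = (phase - phase.min()) / (np.ptp(phase) + 1e-9) * 255
    return scaled.astype(np.uint8)

if __name__ == "__main__":
    # Parallelize across frames: several frames are computed simultaneously,
    # rather than parallelizing the computation within a single frame.
    with Pool(processes=4) as pool:
        frames = pool.map(generate_frame, range(16))
    print(len(frames), "frames generated,", frames[0].shape, "each")
```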
2010-09-01
of Mannheim seeks to produce realistic digital images for student analysis (Moch & Freiling, 2009). Using instructor-generated scripts and the ... laboratory. ACM Transactions on Information and System Security, (pp. 262-294). Moch, C., & Freiling, F. (2009). The forensic image generator
Dental photography today. Part 1: basic concepts.
Casaglia, A; DE Dominicis, P; Arcuri, L; Gargari, M; Ottria, L
2015-01-01
This paper is the first article in a new series on digital dental photography. Part 1 defines the aims and objectives of dental photography for examination, diagnosis and treatment planning, legal and forensic documentation, publishing, education, marketing and communication with patients, dental team members, colleagues and dental laboratory.
Parallel pulse processing and data acquisition for high speed, low error flow cytometry
Engh, G.J. van den; Stokdijk, W.
1992-09-22
A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.
Overview of Digital Forensics Algorithms in Dslr Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, a pressing task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of the DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the imaging process inside the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
Watermarking requirements for Boeing digital cinema
NASA Astrophysics Data System (ADS)
Lixvar, John P.
2003-06-01
The enormous economic incentives for safeguarding intellectual property in the digital domain have made forensic watermarking a research topic of considerable interest. However, a recent examination of some of the leading product development efforts reveals that at present there is no effective watermarking implementation that addresses both the fidelity and security requirements of high definition digital cinema. If Boeing Digital Cinema (BDC, a business unit of Boeing Integrated Defense Systems) is to succeed in using watermarking as a deterrent to the unauthorized capture and distribution of high value cinematic material, the technology must be robust, transparent, asymmetric in its insertion/detection costs, and compatible with all the other elements of Boeing's multi-layered security system, including its compression, encryption, and key management services.
Performance comparison of denoising filters for source camera identification
NASA Astrophysics Data System (ADS)
Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.
2011-02-01
Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
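A minimal sketch of the PRNU workflow the comparison rests on: the camera fingerprint is the average of denoising residuals from several images of the same device, and a query image is matched by correlating its residual with that fingerprint. A Gaussian filter is used here as a stand-in for the denoising filters evaluated in the paper, and the images are synthetic.

```python
# Sketch of PRNU-based source matching: fingerprint = mean of denoising
# residuals; matching = normalized correlation of a query residual with it.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img, sigma=1.5):
    """Noise residual = image minus its denoised version."""
    return img - gaussian_filter(img, sigma)

def estimate_fingerprint(images):
    """Average the residuals of flat-field images from one camera."""
    return np.mean([residual(im) for im in images], axis=0)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic demo: a fixed multiplicative sensor pattern shared by one camera.
rng = np.random.default_rng(1)
pattern = 0.02 * rng.standard_normal((128, 128))
cam_images = [np.clip(0.5 * (1 + pattern) + 0.01 * rng.standard_normal((128, 128)), 0, 1)
              for _ in range(10)]
fingerprint = estimate_fingerprint(cam_images)

same_cam = np.clip(0.6 * (1 + pattern) + 0.01 * rng.standard_normal((128, 128)), 0, 1)
other_cam = np.clip(0.6 + 0.01 * rng.standard_normal((128, 128)), 0, 1)
print("same camera :", ncc(residual(same_cam), fingerprint))
print("other camera:", ncc(residual(other_cam), fingerprint))
```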
Digital imaging and image analysis applied to numerical applications in forensic hair examination.
Brooks, Elizabeth; Comber, Bruce; McNaught, Ian; Robertson, James
2011-03-01
A method that provides objective data to complement the hair analysts' microscopic observations, which is non-destructive, would be of obvious benefit in the forensic examination of hairs. This paper reports on the use of objective colour measurement and image analysis techniques on auto-montaged images. Brown Caucasian telogen scalp hairs were chosen as a stern test of the utility of these approaches. The results show the value of using auto-montaged images and the potential for the use of objective numerical measures of colour and pigmentation to complement microscopic observations. Copyright © 2010. Published by Elsevier Ireland Ltd. All rights reserved.
A generalized Benford's law for JPEG coefficients and its applications in image forensics
NASA Astrophysics Data System (ADS)
Fu, Dongdong; Shi, Yun Q.; Su, Wei
2007-02-01
In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
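A small sketch of the kind of first-digit statistic the model describes, assuming the commonly cited parametric form p(x) = N log10(1 + 1/(s + x^q)), which reduces to the classical Benford law for N = 1, s = 0, q = 1; the image, block size and parameter values are illustrative, not the fitted values from the paper.

```python
# Sketch: first-digit statistics of block-DCT coefficients compared against a
# generalized Benford form  p(x) = N * log10(1 + 1/(s + x**q)),  x = 1..9.
import numpy as np
from scipy.fft import dctn

def first_digits(values):
    v = np.abs(values)
    v = v[v >= 1]                       # ignore zeros / sub-unity magnitudes
    return (v / 10 ** np.floor(np.log10(v))).astype(int)

def generalized_benford(x, N=1.0, s=0.0, q=1.0):
    return N * np.log10(1.0 + 1.0 / (s + x.astype(float) ** q))

rng = np.random.default_rng(0)
image = rng.uniform(0, 255, (256, 256))

# 8x8 block DCT, as in JPEG
blocks = image.reshape(32, 8, 32, 8).transpose(0, 2, 1, 3)
coeffs = np.concatenate([dctn(b, norm="ortho").ravel() for b in blocks.reshape(-1, 8, 8)])

digits = first_digits(coeffs)
hist = np.bincount(digits, minlength=10)[1:10] / len(digits)
x = np.arange(1, 10)
print("empirical :", np.round(hist, 3))
print("benford   :", np.round(generalized_benford(x), 3))
```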
Exline, David L; Wallace, Christie; Roux, Claude; Lennard, Chris; Nelson, Matthew P; Treado, Patrick J
2003-09-01
Chemical imaging technology is a rapid examination technique that combines molecular spectroscopy and digital imaging, providing information on morphology, composition, structure, and concentration of a material. Among many other applications, chemical imaging offers an array of novel analytical testing methods, which limits sample preparation and provides high-quality imaging data essential in the detection of latent fingerprints. Luminescence chemical imaging and visible absorbance chemical imaging have been successfully applied to ninhydrin, DFO, cyanoacrylate, and luminescent dye-treated latent fingerprints, demonstrating the potential of this technology to aid forensic investigations. In addition, visible absorption chemical imaging has been applied successfully to visualize untreated latent fingerprints.
Thali, Michael J; Braun, Marcel; Wirth, Joachim; Vock, Peter; Dirnhofer, Richard
2003-11-01
A main goal of forensic medicine is to document and to translate medical findings to a language and/or visualization that is readable and understandable for judicial persons and for medical laymen. Therefore, in addition to classical methods, scientific cutting-edge technologies can and should be used. Through the use of the Forensic, 3-D/CAD-supported Photogrammetric method the documentation of so-called "morphologic fingerprints" has been realized. Forensic, 3-D/CAD-supported Photogrammetry creates morphologic data models of the injury and of the suspected injury-causing instrument allowing the evaluation of a match between the injury and the instrument. In addition to the photogrammetric body surface registration, the radiological documentation provided by a volume scan (i.e., spiral, multi-detector CT, or MRI) registers the sub-surface injury, which is not visible to Photogrammetry. The new, combined method of merging Photogrammetry and Radiology data sets creates the potential to perform many kinds of reconstructions and postprocessing of (patterned) injuries in the realm of forensic medical case work. Using this merging method of colored photogrammetric surface and gray-scale radiological internal documentation, a great step towards a new kind of reality-based, high-tech wound documentation and visualization in forensic medicine is made. The combination of the methods of 3D/CAD Photogrammetry and Radiology has the advantage of being observer-independent, non-subjective, non-invasive, digitally storable over years or decades and even transferable over the web for second opinion.
A user-friendly technical set-up for infrared photography of forensic findings.
Rost, Thomas; Kalberer, Nicole; Scheurer, Eva
2017-09-01
Infrared photography is interesting for a use in forensic science and forensic medicine since it reveals findings that normally are almost invisible to the human eye. Originally, infrared photography has been made possible by the placement of an infrared light transmission filter screwed in front of the camera objective lens. However, this set-up is associated with many drawbacks such as the loss of the autofocus function, the need of an external infrared source, and long exposure times which make the use of a tripod necessary. These limitations prevented up to now the routine application of infrared photography in forensics. In this study the use of a professional modification inside the digital camera body was evaluated regarding camera handling and image quality. This permanent modification consisted of the replacement of the in-built infrared blocking filter by an infrared transmission filter of 700nm and 830nm, respectively. The application of this camera set-up for the photo-documentation of forensically relevant post-mortem findings was investigated in examples of trace evidence such as gunshot residues on the skin, in external findings, e.g. hematomas, as well as in an exemplary internal finding, i.e., Wischnewski spots in a putrefied stomach. The application of scattered light created by indirect flashlight yielded a more uniform illumination of the object, and the use of the 700nm filter resulted in better pictures than the 830nm filter. Compared to pictures taken under visible light, infrared photographs generally yielded better contrast. This allowed for discerning more details and revealed findings which were not visible otherwise, such as imprints on a fabric and tattoos in mummified skin. The permanent modification of a digital camera by building in a 700nm infrared transmission filter resulted in a user-friendly and efficient set-up which qualified for the use in daily forensic routine. Main advantages were a clear picture in the viewfinder, an auto-focus usable over the whole range of infrared light, and the possibility of using short shutter speeds which allows taking infrared pictures free-hand. The proposed set-up with a modification of the camera allows a user-friendly application of infrared photography in post-mortem settings. Copyright © 2017 Elsevier B.V. All rights reserved.
Modified signed-digit arithmetic based on redundant bit representation.
Huang, H; Itoh, M; Yatagai, T
1994-09-10
Fully parallel modified signed-digit arithmetic operations are realized based on the proposed redundant bit representation of the digits. A new truth-table minimizing technique is presented based on redundant-bit-representation coding. It is shown that only 34 minterms are enough for implementing one-step modified signed-digit addition and subtraction with this new representation. Two optical implementation schemes, correlation and matrix multiplication, are described. Experimental demonstrations of the correlation architecture are presented. Both architectures use fixed minterm masks for arbitrary-length operands, taking full advantage of the parallelism of the modified signed-digit number system and of optics.
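For readers unfamiliar with the number system itself, the short demo below shows radix-2 modified signed-digit strings over {-1, 0, 1} and their redundancy, which is what permits carry-free, fully parallel addition; it does not reproduce the paper's one-step rules, minterm masks or optical encoding.

```python
# Demo of the modified signed-digit (MSD) number system: radix-2 digits drawn
# from {-1, 0, 1}. The representation is redundant (one value, many strings).
def msd_value(digits):
    """digits[0] is the most significant digit; each digit is -1, 0 or 1."""
    value = 0
    for d in digits:
        assert d in (-1, 0, 1)
        value = 2 * value + d
    return value

# Two different MSD strings for the same integer, 3:
print(msd_value([1, 1]))        # 1*2 + 1        -> 3
print(msd_value([1, 0, -1]))    # 1*4 + 0*2 - 1  -> 3

# Negation is digit-wise, with no borrow propagation:
print(msd_value([-d for d in [1, 0, -1]]))   # -> -3
```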
Passive detection of copy-move forgery in digital images: state-of-the-art.
Al-Qershi, Osamah M; Khoo, Bee Ee
2013-09-10
Currently, digital images and videos have high importance because they have become the main carriers of information. However, the relative ease of tampering with images and videos makes their authenticity untrustful. Digital image forensics addresses the problem of the authentication of images or their origins. One main branch of image forensics is passive image forgery detection. Images could be forged using different techniques, and the most common forgery is the copy-move, in which a region of an image is duplicated and placed elsewhere in the same image. Active techniques, such as watermarking, have been proposed to solve the image authenticity problem, but those techniques have limitations because they require human intervention or specially equipped cameras. To overcome these limitations, several passive authentication methods have been proposed. In contrast to active methods, passive methods do not require any previous information about the image, and they take advantage of specific detectable changes that forgeries can bring into the image. In this paper, we describe the current state-of-the-art of passive copy-move forgery detection methods. The key current issues in developing a robust copy-move forgery detector are then identified, and the trends of tackling those issues are addressed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
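As a concrete instance of the simplest family of methods surveyed here, the sketch below detects exact copy-move duplication by hashing fixed-size blocks and reporting distant blocks with identical content; practical detectors replace the exact hash with robust block features (DCT coefficients, Zernike moments, keypoints) to survive post-processing. The image and block parameters are synthetic.

```python
# Sketch of exact-match copy-move detection via block hashing.
import numpy as np

def copy_move_pairs(img, block=8, min_shift=16):
    h, w = img.shape
    seen = {}
    pairs = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                y0, x0 = seen[key]
                if abs(y - y0) + abs(x - x0) >= min_shift:   # ignore near-overlaps
                    pairs.append(((y0, x0), (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (96, 96), dtype=np.uint8)
img[60:76, 60:76] = img[10:26, 10:26]          # simulate a copy-move forgery
print(copy_move_pairs(img)[:3])
```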
REMOTE SENSING TOOLS ASSIST IN ENVIRONMENTAL FORENSICS PART II - DIGITAL TOOLS
This is part two of a two-part discussion, in which we will provide an overview of the use of GIS and GPS in environmental analysis and enforcement.
GIS describes a system that manages, analyzes and displays geographic information. Environmental applications include anal...
AMULET: A MUlti-cLuE Approach to Image Forensics
2014-12-31
celebrities have been substituted in the other two pictures. 3.2.5 Choice of reliability properties: Let us now apply the BBA mapping approach proposed in...Jiang, and L. Ma, “DS evidence theory based digital image trustworthiness evaluation model,” in MINES 2009, International Conference on Multimedia
Application of growing hierarchical SOM for visualisation of network forensics traffic data.
Palomo, E J; North, J; Elizondo, D; Luque, R M; Watson, T
2012-08-01
Digital investigation methods are becoming more and more important due to the proliferation of digital crimes and crimes involving digital evidence. Network forensics is a research area that gathers evidence by collecting and analysing network traffic data logs. This analysis can be a difficult process, especially because of the high variability of these attacks and large amount of data. Therefore, software tools that can help with these digital investigations are in great demand. In this paper, a novel approach to analysing and visualising network traffic data based on growing hierarchical self-organising maps (GHSOM) is presented. The self-organising map (SOM) has been shown to be successful for the analysis of highly-dimensional input data in data mining applications as well as for data visualisation in a more intuitive and understandable manner. However, the SOM has some problems related to its static topology and its inability to represent hierarchical relationships in the input data. The GHSOM tries to overcome these limitations by generating a hierarchical architecture that is automatically determined according to the input data and reflects the inherent hierarchical relationships among them. Moreover, the proposed GHSOM has been modified to correctly treat the qualitative features that are present in the traffic data in addition to the quantitative features. Experimental results show that this approach can be very useful for a better understanding of network traffic data, making it easier to search for evidence of attacks or anomalous behaviour in a network environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
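A minimal flat self-organising map training loop, shown only to make the base model concrete; the GHSOM adds unit growth, hierarchy and the paper's handling of qualitative features, none of which are reproduced here, and the input vectors are synthetic stand-ins for encoded traffic features.

```python
# Minimal self-organising map (SOM) training loop (flat map, no growth).
import numpy as np

def train_som(data, rows=6, cols=6, epochs=30, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for epoch in range(epochs):
        for x in rng.permutation(data):
            # best-matching unit
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # exponentially decaying learning rate and neighbourhood radius
            frac = step / n_steps
            lr = lr0 * np.exp(-3 * frac)
            sigma = sigma0 * np.exp(-3 * frac)
            h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

data = np.random.default_rng(1).standard_normal((200, 5))
som = train_som(data)
print(som.shape)          # (6, 6, 5) codebook, one prototype vector per map unit
```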
Jakupciak, John P; Wells, Jeffrey M; Karalus, Richard J; Pawlowski, David R; Lin, Jeffrey S; Feldman, Andrew B
2013-01-01
Large-scale genomics projects are identifying biomarkers to detect human disease. B. pseudomallei and B. mallei are two closely related select agents that cause melioidosis and glanders. Accurate characterization of metagenomic samples is dependent on accurate measurements of genetic variation between isolates with resolution down to strain level. Often single biomarker sensitivity is augmented by use of multiple or panels of biomarkers. In parallel with single biomarker validation, advances in DNA sequencing enable analysis of entire genomes in a single run: population-sequencing. Potentially, direct sequencing could be used to analyze an entire genome to serve as the biomarker for genome identification. However, genome variation and population diversity complicate use of direct sequencing, as well as differences caused by sample preparation protocols including sequencing artifacts and mistakes. As part of a Department of Homeland Security program in bacterial forensics, we examined how to implement whole genome sequencing (WGS) analysis as a judicially defensible forensic method for attributing microbial sample relatedness; and also to determine the strengths and limitations of whole genome sequence analysis in a forensics context. Herein, we demonstrate use of sequencing to provide genetic characterization of populations: direct sequencing of populations.
[Double diagnosis and forensic psychiatric opinion].
Kocur, Józef; Trendak, Wiesława
2009-01-01
Addiction to alcohol or any other psychoactive substance can run parallel with other diseases or mental disorders. One can then observe the co-occurrence and mutual interaction of dysfunctions typical of addiction and of the other mental disorders that accompany it. That is why clinical pictures of such states (double diagnosis) are usually less characteristic, have an unusual course and cause diagnostic and therapeutic difficulty. The problem of forensic psychiatric opinion and treatment of people with a double diagnosis is another aspect of these difficulties. It arises because forensic psychiatric assessment of the mental state of such people requires taking into consideration a very complex clinical and legal situation triggered by the interference of various etiopathogenetic and clinical disorders. This leads to the need for a complex evaluation, and conclusions about sanity or other aspects of functioning under the current law should follow, first of all, from analyses directly pertaining to the influence of the diagnosed disorders on the state of patients with a double diagnosis. The forensic psychiatric aspect of disorders connected with double diagnosis is particularly significant, as there is a relatively high risk of behaviours posing a threat to public order in this group of patients.
REWRITING ECOLOGICAL SUCCESSION HISTORY: DID CARRION ECOLOGISTS GET THERE FIRST?
Michaud, Jean-Philippe; Schoenly, Kenneth G; Moreau, Gaétan
2015-03-01
Ecological succession is arguably the most enduring contribution of plant ecologists and its origins have never been contested. However, we show that French entomologist Pierre Mégnin, while collaborating with medical examiners in the late 1800s, advanced the first formal definition and testable mechanism of ecological succession. This discovery gave birth to the twin disciplines of carrion ecology and forensic entomology. As a novel case of multiple independent discovery, we chronicle how the disciplines of plant and carrion ecology (including forensic entomology) accumulated strikingly similar parallel histories and contributions. In the 1900s, the two groups diverged in methodology and purpose, with carrion ecologists and forensic entomologists focusing mostly on case reports and observational studies instead of hypothesis testing. Momentum is currently growing, however, to develop the ecological framework of forensic entomology and advance carrion ecology theory. Researchers are recognizing the potential of carcasses as subjects for testing not only succession mechanisms (without assuming space-for-time substitution), but also aggregation and coexistence models, diversity-ecosystem function relationships, and the dynamics of pulsed resources. By comparing the contributions of plant and carrion ecologists, we hope to stimulate future crossover research that leads to a general theory of ecological succession.
Tillmar, Andreas O; Phillips, Chris
2017-01-01
Advances in massively parallel sequencing technology have enabled the combination of a much-expanded number of DNA markers (notably STRs and SNPs in one or combined multiplexes), with the aim of increasing the weight of evidence in forensic casework. However, when data from multiple loci on the same chromosome are used, genetic linkage can affect the final likelihood calculation. In order to study the effect of linkage for different sets of markers we developed the biostatistical tool ILIR, (Impact of Linkage on forensic markers for Identity and Relationship tests). The ILIR tool can be used to study the overall impact of genetic linkage for an arbitrary set of markers used in forensic testing. Application of ILIR can be useful during marker selection and design of new marker panels, as well as being highly relevant for existing marker sets as a way to properly evaluate the effects of linkage on a case-by-case basis. ILIR, implemented via the open source platform R, includes variation and genomic position reference data for over 40 STRs and 140 SNPs, combined with the ability to include additional forensic markers of interest. The use of the software is demonstrated with examples from several different established marker sets (such as the expanded CODIS core loci) including a review of the interpretation of linked genetic data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
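The textbook reason linkage changes the likelihood, sketched below under the standard recombination model (not the ILIR code): for a doubly heterozygous parent with known phase, gamete probabilities depend on the recombination fraction theta, and assuming independent loci amounts to forcing theta = 0.5.

```python
# Gamete probabilities of a doubly heterozygous parent with known phase
# A1-B1 / A2-B2, as a function of the recombination fraction theta.
def gamete_probs(theta):
    return {
        ("A1", "B1"): (1 - theta) / 2,   # parental haplotype
        ("A2", "B2"): (1 - theta) / 2,   # parental haplotype
        ("A1", "B2"): theta / 2,         # recombinant
        ("A2", "B1"): theta / 2,         # recombinant
    }

for theta in (0.5, 0.1, 0.01):           # unlinked, loosely linked, tightly linked
    p = gamete_probs(theta)
    print(f"theta={theta}: P(A1,B1)={p[('A1','B1')]:.3f}  P(A1,B2)={p[('A1','B2')]:.3f}")
```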
Hanlon, Katharine L
2018-01-01
Cross-polarisation, with regard to visible light, is a process wherein two polarisers with perpendicular orientation to one another are used on the incident and reflected lights. Under cross-polarised light birefringent structures which are otherwise invisible become apparent. Cross-polarised light eliminates glare and specular highlights, allowing for an unobstructed view of subsurface pathology. Parallel-polarisation occurs when the polarisers are rotated to the same orientation. When cross- or parallel-polarisation is applied to photography, images can be generated which aid in visualisation of surface and subsurface elements. Improved access to equipment and education has the potential to benefit practitioners, researchers, investigators and patients.
Fontana, F; Rapone, C; Bregola, G; Aversa, R; de Meo, A; Signorini, G; Sergio, M; Ferrarini, A; Lanzellotto, R; Medoro, G; Giorgini, G; Manaresi, N; Berti, A
2017-07-01
The latest genotyping technologies make it possible to obtain a reliable genetic profile for offender identification even from extremely minute biological evidence. The ultimate challenge occurs when genetic profiles need to be retrieved from a mixture, which is composed of biological material from two or more individuals. In this case, DNA profiling will often result in a complex genetic profile, which then becomes the subject of statistical analysis. In principle, when several individuals contribute to a mixture with different biological fluids, their single genetic profiles can be obtained by separating the distinct cell types (e.g. epithelial cells, blood cells, sperm) prior to genotyping. Different approaches have been investigated for this purpose, such as fluorescence-activated cell sorting (FACS) or laser capture microdissection (LCM), but currently none of these methods can guarantee the complete separation of the different types of cells present in a mixture. In other fields of application, such as oncology, DEPArray™ technology, an image-based, microfluidic digital sorter, has been widely proven to enable the separation of pure cells with single-cell precision. This study investigates the applicability of DEPArray™ technology to forensic sample analysis, focusing on the resolution of the forensic mixture problem. For the first time, we report here the development of an application-specific DEPArray™ workflow enabling the detection and recovery of pure homogeneous cell pools from simulated blood/saliva and semen/saliva mixtures, providing a full genetic match with the genetic profiles of the corresponding donors. In addition, we assess the performance of standard forensic methods for DNA quantitation and genotyping on low-count, DEPArray™-isolated cells, showing that pure, almost complete profiles can be obtained from as few as ten haploid cells. Finally, we explore the applicability to real casework samples, demonstrating that the described approach provides complete separation of cells with outstanding precision. In all examined cases, DEPArray™ technology proves to be a groundbreaking technology for the resolution of forensic biological mixtures, through the precise isolation of pure cells for an incontrovertible attribution of the obtained genetic profiles. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Wang, Zheng; Zhou, Di; Wang, Hui; Jia, Zhenjun; Liu, Jing; Qian, Xiaoqin; Li, Chengtao; Hou, Yiping
2017-11-01
Massively parallel sequencing (MPS) technologies have proved capable of sequencing the majority of the key forensic STR markers. With MPS, not only the repeat length but also sequence variations can be detected. Recently, Thermo Fisher Scientific has designed an advanced MPS 32-plex panel, named the Precision ID GlobalFiler™ NGS STR Panel, in which the primer set has been designed specifically for MPS technologies and data analysis is supported by a new version of the HID STR Genotyper Plugin (V4.0). In this study, a series of experiments was conducted to evaluate concordance, reliability, sensitivity of detection, mixture analysis, and the ability to analyze case-type and challenged samples. In addition, 106 unrelated Han individuals were sequenced to perform genetic analyses of allelic diversity. As expected, MPS detected broader allele variation and achieved a higher power of discrimination and exclusion rate. MPS results were found to be concordant with current capillary electrophoresis methods, and single-source complete profiles could be obtained stably using as little as 100 pg of input DNA. Moreover, this MPS panel could be adapted to case-type samples, and partial STR genotypes of the minor contributor could be detected in mixtures of up to 19:1. These results indicate that the Precision ID GlobalFiler™ NGS STR Panel is reliable, robust and reproducible and has the potential to be used as a tool for human forensics. Copyright © 2017 Elsevier B.V. All rights reserved.
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
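A toy illustration of the Virtual Time / rollback mechanism the simulator relies on, not the RSIM switch-level models or the iPSC/2 implementation: a logical process saves its state at each event and, when a straggler message arrives with a timestamp in its past, restores an earlier state and re-executes the rolled-back events. Event values and timestamps are made up.

```python
# Toy Virtual Time / rollback (optimistic simulation) sketch for one process.
import bisect

class LogicalProcess:
    def __init__(self):
        self.lvt = 0                 # local virtual time
        self.state = 0               # toy state: running sum of event values
        self.processed = []          # [(timestamp, value, state_before)]

    def handle(self, timestamp, value):
        if timestamp < self.lvt:     # straggler -> roll back past events
            idx = bisect.bisect_left([t for t, _, _ in self.processed], timestamp)
            replay = self.processed[idx:]
            self.state = replay[0][2]          # restore saved state
            del self.processed[idx:]
            self.lvt = self.processed[-1][0] if self.processed else 0
            self._apply(timestamp, value)
            for t, v, _ in replay:             # re-execute rolled-back events
                self._apply(t, v)
        else:
            self._apply(timestamp, value)

    def _apply(self, timestamp, value):
        self.processed.append((timestamp, value, self.state))
        self.state += value
        self.lvt = timestamp

lp = LogicalProcess()
for t, v in [(1, 10), (3, 5), (7, 2), (4, 100)]:   # (4, 100) arrives late
    lp.handle(t, v)
print(lp.lvt, lp.state)    # final state matches processing events in timestamp order
```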
Burrow, J Gordon
2016-05-01
This small-scale study examined the role that bare footprint collection and measurement processes have on the Reel method of measurement in forensic podiatry and its use in the Criminal Justice System. Previous research indicated that the Reel method was a valid and reliable measurement system for bare footprint analysis, but various collection systems have been used to collect footprint data, and both manual and digital measurement processes have been utilized in forensic podiatry and other disciplines. This study contributes to the debate about collecting bare footprints and about the techniques employed to quantify the various Reel measurements, and it considered whether there was asymmetry between the feet and footprints of the same person. In an inductive, quantitative paradigm, the Podotrack gathering procedure was used for footprint collection, and the resulting dynamic footprints were subjected to Adobe Photoshop techniques for calculating the Reel linear variables. Statistical analyses using paired-sample t tests were conducted to test hypotheses and compare data sets. The standard error of the mean (SEM) showed variation between feet, and the findings provide support for the Reel study and measurement method. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
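A minimal sketch of the paired comparison described above, using synthetic left/right measurements in place of the study's Reel variables; scipy's paired-sample t test and standard error of the mean are used, but the effect size and sample size are assumptions.

```python
# Paired-sample t test between corresponding Reel measurements of left and
# right footprints of the same individuals. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
left = rng.normal(24.0, 1.2, 30)              # e.g. a Reel length variable, in cm
right = left + rng.normal(0.05, 0.15, 30)     # small systematic + random asymmetry

t, p = stats.ttest_rel(left, right)
sem_diff = stats.sem(left - right)            # standard error of the mean difference
print(f"t = {t:.2f}, p = {p:.3f}, SEM of difference = {sem_diff:.3f} cm")
```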
Microfluidic Devices for Forensic DNA Analysis: A Review
Bruijns, Brigitte; van Asten, Arian; Tiggelaar, Roald; Gardeniers, Han
2016-01-01
Microfluidic devices may offer various advantages for forensic DNA analysis, such as reduced risk of contamination, shorter analysis time and direct application at the crime scene. Microfluidic chip technology has already proven to be functional and effective within medical applications, such as for point-of-care use. In the forensic field, one may expect microfluidic technology to become particularly relevant for the analysis of biological traces containing human DNA. This would require a number of consecutive steps, including sample work up, DNA amplification and detection, as well as secure storage of the sample. This article provides an extensive overview of microfluidic devices for cell lysis, DNA extraction and purification, DNA amplification and detection and analysis techniques for DNA. Topics to be discussed are polymerase chain reaction (PCR) on-chip, digital PCR (dPCR), isothermal amplification on-chip, chip materials, integrated devices and commercially available techniques. A critical overview of the opportunities and challenges of the use of chips is discussed, and developments made in forensic DNA analysis over the past 10–20 years with microfluidic systems are described. Areas in which further research is needed are indicated in a future outlook. PMID:27527231
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun
2016-10-01
Maladjustment of the binocular optical axis parallelism of a photoelectric instrument directly degrades the observation effect. A digital calibration system for binocular optical axis parallelism is therefore designed. Based on the principle of binocular optical axis calibration for photoelectric instruments, the system scheme is designed and the digital calibration system is realized; it comprises four modules: a multiband parallel light tube, optical axis translation, an image acquisition system and a software system. According to the different characteristics of the thermal infrared imager and the low-light-level night viewer, different algorithms are used to localize the center of the cross reticle, and binocular optical axis parallelism calibration is thus realized for both low-light-level night viewers and thermal infrared imagers.
GrigoraSNPs: Optimized Analysis of SNPs for DNA Forensics.
Ricke, Darrell O; Shcherbina, Anna; Michaleas, Adam; Fremont-Smith, Philip
2018-04-16
High-throughput sequencing (HTS) of single nucleotide polymorphisms (SNPs) enables additional DNA forensic capabilities not attainable using traditional STR panels. However, the inclusion of sets of loci selected for mixture analysis, extended kinship, phenotype, biogeographic ancestry prediction, etc., can result in large panel sizes that are difficult to analyze in a rapid fashion. GrigoraSNPs was developed to address the allele-calling bottleneck that was encountered when analyzing SNP panels with more than 5000 loci using HTS. GrigoraSNPs uses MapReduce-style parallel data processing on multiple computational threads plus a novel locus-identification hashing strategy leveraging target sequence tags. This tool optimizes the SNP calling module of the DNA analysis pipeline with runtimes that scale linearly with the number of HTS reads. Results are compared with SNP analysis pipelines implemented with SAMtools and GATK. GrigoraSNPs removes a computational bottleneck for processing forensic samples with large HTS SNP panels. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
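A sketch of the tag-hashing plus parallel-counting idea (a map/reduce pattern), not the GrigoraSNPs implementation: each locus is keyed by a short target sequence tag assumed to precede the variant base, reads are assigned to loci by exact tag lookup, and per-chunk allele counts are merged. The panel, tag length and reads are hypothetical.

```python
# Locus assignment by exact tag hashing, with per-chunk counting on threads and
# a final merge step. Panel, tags and reads are invented for illustration.
from collections import Counter, defaultdict
from concurrent.futures import ThreadPoolExecutor

PANEL = {                      # tag sequence -> locus name (hypothetical)
    "ACGTACGTAC": "rs0001",
    "TTGACCTGGA": "rs0002",
}
TAG_LEN = 10

def count_chunk(reads):
    counts = defaultdict(Counter)
    for read in reads:
        for i in range(len(read) - TAG_LEN):
            locus = PANEL.get(read[i:i + TAG_LEN])
            if locus is not None:
                counts[locus][read[i + TAG_LEN]] += 1   # base right after the tag
                break
    return counts

def merge(results):
    total = defaultdict(Counter)
    for counts in results:
        for locus, c in counts.items():
            total[locus] += c
    return total

reads = ["ACGTACGTACA" * 3, "ACGTACGTACG", "TTGACCTGGAT", "GGGGGGGGGGGG"] * 1000
chunks = [reads[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    allele_counts = merge(pool.map(count_chunk, chunks))
print(dict(allele_counts))
```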
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient in the well-equipped toolbox of an image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach for estimating lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated at large scale. The reported results point to general difficulties that have to be considered in real-world investigations.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... Administrative Law Judge; Termination of the Investigation AGENCY: U.S. International Trade Commission. ACTION... law judge in the above-identified investigation. FOR FURTHER INFORMATION CONTACT: James A. Worth..., Washington d/b/a CRU-DataPort LLC of Vancouver, Washington (``CRU''); Digital Intelligence, Inc. of New...
Determining approximate age of digital images using sensor defects
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2011-02-01
The goal of temporal forensics is to establish temporal relationship among two or more pieces of evidence. In this paper, we focus on digital images and describe a method using which an analyst can estimate the acquisition time of an image given a set of other images from the same camera whose time ordering is known. This is achieved by first estimating the parameters of pixel defects, including their onsets, and then detecting their presence in the image under investigation. Both estimators are constructed using the maximum-likelihood principle. The accuracy and limitations of this approach are illustrated on experiments with three cameras. Forensic and law-enforcement analysts are expected to benefit from this technique in situations when the temporal data stored in the EXIF header is lost due to processing or editing images off-line or when the header cannot be trusted. Reliable methods for establishing temporal order between individual pieces of evidence can help reveal deception attempts of an adversary or a criminal. The causal relationship may also provide information about the whereabouts of the photographer.
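A simplified version of the dating idea, with plain thresholding in place of the paper's maximum-likelihood estimators: each defective pixel has an onset date estimated from dated reference images, and the defects present or absent in the questioned image bracket its acquisition date. The defect map, onset dates and dark frame are invented.

```python
# Bracketing an acquisition date from which sensor defects are detectable.
import numpy as np

# (row, col) -> onset year, assumed known from a dated reference image set
DEFECT_ONSETS = {(10, 17): 2008.2, (40, 3): 2009.5, (25, 60): 2011.1}

def defect_present(img, pos, dark_level=10, margin=40):
    """A hot pixel in a dark frame: value far above the dark level."""
    r, c = pos
    return img[r, c] > dark_level + margin

def bracket_acquisition_date(img):
    present = [t for p, t in DEFECT_ONSETS.items() if defect_present(img, p)]
    absent = [t for p, t in DEFECT_ONSETS.items() if not defect_present(img, p)]
    lower = max(present) if present else None     # newest defect that is present
    upper = min(absent) if absent else None       # oldest defect still missing
    return lower, upper

rng = np.random.default_rng(0)
dark_frame = rng.normal(10, 3, (64, 64))
dark_frame[10, 17] = 200        # defects with onsets 2008.2 and 2009.5 present
dark_frame[40, 3] = 200
print(bracket_acquisition_date(dark_frame))   # -> (2009.5, 2011.1)
```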
Parallel digital modem using multirate digital filter banks
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Vaidyanathan, P. P.; Raphaeli, Dan; Hinedi, Sami
1994-01-01
A new class of architectures for an all-digital modem is presented in this report. This architecture, referred to as the parallel receiver (PRX), is based on employing multirate digital filter banks (DFB's) to demodulate, track, and detect the received symbol stream. The resulting architecture is derived, and specifications are outlined for designing the DFB for the PRX. The key feature of this approach is a lower processing rate than either the Nyquist rate or the symbol rate, without any degradation in the symbol error rate. Due to the freedom in choosing the processing rate, the designer is able to arbitrarily select and use digital components, independent of the speed of the integrated circuit technology. The PRX architecture is particularly suited for high data rate applications, and due to the modular structure of the parallel signal path, expansion to even higher data rates is readily accommodated. Applications of the PRX would include gigabit satellite channels, multiple spacecraft, optical links, interactive cable-TV, telemedicine, code division multiple access (CDMA) communications, and others.
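A generic polyphase decimator, included to make the key property of such filter banks concrete: after splitting the prototype filter into M sub-filters, every branch runs at 1/M of the input rate yet the output matches full-rate filtering followed by downsampling. This is a textbook sketch, not the PRX architecture; the filter length and decimation factor are arbitrary.

```python
# Polyphase decimation by M: identical output to filter-then-downsample, but
# every sub-filter operates at the reduced (1/M) rate.
import numpy as np
from scipy.signal import firwin, lfilter

M = 4                                   # decimation factor
h = firwin(64, 1.0 / M)                 # prototype low-pass FIR
x = np.random.default_rng(0).standard_normal(1024)

# Reference: filter at the full input rate, then keep every M-th sample.
y_ref = lfilter(h, 1.0, x)[::M]

# Polyphase: sub-filter p gets the taps h[p::M] and the input samples x[n*M - p].
n_out = len(x) // M
hp = np.pad(h, (0, (-len(h)) % M)).reshape(-1, M).T     # hp[p] == h[p::M]
xpad = np.concatenate([np.zeros(M - 1), x])
y_poly = np.zeros(n_out)
for p in range(M):
    u = xpad[M - 1 - p::M][:n_out]      # u[n] = x[n*M - p] (zero before t = 0)
    y_poly += lfilter(hp[p], 1.0, u)[:n_out]

print(np.allclose(y_ref, y_poly))       # True: same output, 1/M branch rate
```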
Massively parallel information processing systems for space applications
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1979-01-01
NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.
Katherine Spradley, M; Jantz, Richard L
2016-07-01
Standard cranial measurements are commonly used for ancestry estimation; however, 3D digitizers have made cranial landmark data collection and geometric morphometric (GM) analyses more popular within forensic anthropology. Yet there has been little focus on which data type works best. The goal of the present research is to test the discrimination ability of standard and nonstandard craniometric measurements and data derived from GM analysis. A total of 31 cranial landmarks were used to generate 465 interlandmark distances, including a subset of 20 commonly used measurements, and to generate principal component scores from procrustes coordinates. All were subjected to discriminant function analysis to ascertain which type of data performed best for ancestry estimation of American Black and White and Hispanic males and females. The nonstandard interlandmark distances generated the highest classification rates for females (90.5%) and males (88.2%). Using nonstandard interlandmark distances over more commonly used measurements leads to better ancestry estimates for our current population structure. © 2016 American Academy of Forensic Sciences.
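A sketch of the analysis pipeline the abstract describes: all pairwise interlandmark distances are derived from 3-D landmark coordinates and fed to a linear discriminant classifier. The landmark configurations and group labels below are synthetic stand-ins for digitizer data, and the cross-validated rate printed applies only to the toy data.

```python
# Interlandmark distances from 3-D landmarks, classified with LDA.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

N_LANDMARKS = 31            # 31 landmarks -> 31*30/2 = 465 interlandmark distances

def interlandmark_distances(landmarks):
    """landmarks: (n_specimens, n_landmarks, 3) -> (n_specimens, 465)."""
    pairs = list(combinations(range(landmarks.shape[1]), 2))
    return np.stack([np.linalg.norm(landmarks[:, i] - landmarks[:, j], axis=1)
                     for i, j in pairs], axis=1)

rng = np.random.default_rng(0)
base = rng.normal(0, 10, (N_LANDMARKS, 3))
groups, X, y = 3, [], []
for g in range(groups):
    offset = rng.normal(0, 0.8, (N_LANDMARKS, 3))       # group-specific shape shift
    for _ in range(40):
        X.append(base + offset + rng.normal(0, 0.5, (N_LANDMARKS, 3)))
        y.append(g)
X = interlandmark_distances(np.array(X))
scores = cross_val_score(LinearDiscriminantAnalysis(), X, np.array(y), cv=5)
print(f"cross-validated classification rate: {scores.mean():.2f}")
```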
Cyber Forensics Ontology for Cyber Criminal Investigation
NASA Astrophysics Data System (ADS)
Park, Heum; Cho, Sunho; Kwon, Hyuk-Chul
We developed Cyber Forensics Ontology for the criminal investigation in cyber space. Cyber crime is classified into cyber terror and general cyber crime, and those two classes are connected with each other. The investigation of cyber terror requires high technology, system environment and experts, and general cyber crime is connected with general crime by evidence from digital data and cyber space. Accordingly, it is difficult to determine relational crime types and collect evidence. Therefore, we considered the classifications of cyber crime, the collection of evidence in cyber space and the application of laws to cyber crime. In order to efficiently investigate cyber crime, it is necessary to integrate those concepts for each cyber crime-case. Thus, we constructed a cyber forensics domain ontology for criminal investigation in cyber space, according to the categories of cyber crime, laws, evidence and information of criminals. This ontology can be used in the process of investigating of cyber crime-cases, and for data mining of cyber crime; classification, clustering, association and detection of crime types, crime cases, evidences and criminals.
Heninger, Michael
2016-03-01
The images of 66 gunshot entrance wounds with a defect on the back, a bullet in the body, hemorrhage along the wound track, and logical certainty that it was an entrance wound were collected from the files of a moderately busy medical examiner's office. Twenty-two board-certified forensic pathologists viewed a single digital archival image of each of the 66 entrance wounds, randomly mixed with 74 presumptive exit wounds, to determine whether they were entrance or exit wounds. The concordance rate for correctly identifying the 66 logically known entrance wounds was 82.8%, with a range from 58% to 97%. This pilot study was conducted to provide an evidence-based approach to the interpretation of the direction of gunshot wounds by reviewing pathologists with access only to archival photographs; it is not a measure of the accuracy with which entrance wounds can be distinguished from exit wounds when all of the circumstances are available. © 2016 American Academy of Forensic Sciences.
A forensic identification case and DPid - can it be a useful tool?
Queiroz, Cristhiane Leão de; Bostock, Ellen Marie; Santos, Carlos Ferreira; Guimarães, Marco Aurélio; Silva, Ricardo Henrique Alves da
2017-01-01
The aim of this study was to present DPid as a potentially important tool for solving cases involving dental prostheses, such as the forensic case reported here, in which a skull, a denture and dental records were received for analysis. Human identification is still challenging in various circumstances, and Dental Prosthetics Identification (DPid) stores the patient's name and prosthesis information and provides access through a code embedded in the dental prosthesis or an identification card. All of this information is digitally stored on servers accessible only by dentists, laboratory technicians and patients, each with their own level of secure access. DPid provides a complete single-source list of all dental prosthesis features (materials and components) under complete and secure documentation used for clinical follow-up and for human identification. Had the DPid tool been available in this forensic case, the case could have been solved without the need for a DNA examination, which ultimately confirmed the dental comparison of antemortem and postmortem records and concluded the case as a positive identification.
The use of clinical CCT images in the forensic examination of closed head injuries.
Bauer, M; Polzin, S; Patzelt, D
2004-04-01
The forensic evaluation of clinical cranial computed tomographies (CCT) is frequently the only reliable source of morphological evidence in head injuries when the injured individual survives or when death is delayed and autopsy findings are characterized by secondary changes. We have reviewed 21 cases where clinical CCT examinations were used to establish a medico-legal diagnosis. In 18 cases falls (n = 13) could be distinguished from blows (n = 5) due to the presence and/or absence of coup and contrecoup lesions and linear or depressed skull fractures. In two cases the striking object could be identified by digital superimposition. The minimum number of blows could be determined in 1 case. Only in the 3 remaining cases were the results inconclusive. In our experience, CCT scans provide an important source of information for the forensic expert. To have unbiased access to this information, it is useful to evaluate the CT scans personally, which requires a basic knowledge of traumatic changes found on radiographs.
Ampanozi, Garyfalia; Zimmermann, David; Hatch, Gary M; Ruder, Thomas D; Ross, Steffen; Flach, Patricia M; Thali, Michael J; Ebert, Lars C
2012-05-01
The objective of this study was to explore the perception of the legal authorities regarding different report types and visualization techniques for post-mortem radiological findings. A standardized digital questionnaire was developed, and the district attorneys in the catchment area of the affiliated Forensic Institute were requested to evaluate four different types of forensic imaging reports based on four case examples. Each case was described in four different report types (short written report only, gray-scale CT image with figure caption, color-coded CT image with figure caption, 3D reconstruction with figure caption). The survey participants were asked to evaluate those types of reports regarding understandability, cost effectiveness and overall appropriateness for the courtroom. 3D reconstructions and color-coded CT images accompanied by a written report were preferred with regard to understandability and cost effectiveness. 3D reconstructions of the forensic findings were viewed as most adequate for the courtroom. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Classified one-step high-radix signed-digit arithmetic units
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.
1998-08-01
High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to represent both the operands and the computation rules. This technique increases the spatial bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.
Rapid Parallel Semantic Processing of Numbers without Awareness
ERIC Educational Resources Information Center
Van Opstal, Filip; de Lange, Floris P.; Dehaene, Stanislas
2011-01-01
In this study, we investigate whether multiple digits can be processed at a semantic level without awareness, either serially or in parallel. In two experiments, we presented participants with two successive sets of four simultaneous Arabic digits. The first set was masked and served as a subliminal prime for the second, visible target set.…
Application of modern autoradiography to nuclear forensic analysis.
Parsons-Davis, Tashi; Knight, Kim; Fitzgerald, Marc; Stone, Gary; Caldeira, Lee; Ramon, Christina; Kristo, Michael
2018-05-01
Modern autoradiography techniques based on phosphorimaging technology using image plates (IPs) and digital scanning can identify heterogeneities in activity distributions and reveal material properties, serving to inform subsequent analyses. Here, we have adopted these advantages for applications in nuclear forensics, the technical analysis of radioactive or nuclear materials found outside of legal control to provide data related to provenance, production history, and trafficking route for the materials. IP autoradiography is a relatively simple, non-destructive method for sample characterization that records an image reflecting the relative intensity of alpha and beta emissions from a two-dimensional surface. Such data are complementary to information gathered from radiochemical characterization via bulk counting techniques, and can guide the application of other spatially resolved techniques such as scanning electron microscopy (SEM) and secondary ion mass spectrometry (SIMS). IP autoradiography can image large two-dimensional areas (up to 20 × 40 cm), with relatively low detection limits for actinides and other radioactive nuclides, and sensitivity to a wide dynamic range (10^5) of activity density in a single image. Distributions of radioactivity in nuclear materials can be generated with a spatial resolution of approximately 50 μm using IP autoradiography and digital scanning. While the finest grain silver halide films still provide the best possible resolution (down to ~10 μm), IP autoradiography has distinct practical advantages such as shorter exposure times, no chemical post-processing, reusability, rapid plate scanning, and automated image digitization. Sample preparation requirements are minimal, and the analytical method does not consume or alter the sample. These advantages make IP autoradiography ideal for routine screening of nuclear materials, and for the identification of areas of interest for subsequent micro-characterization methods. In this paper we present a summary of our setup, as modified for nuclear forensic sample analysis and related research, and provide examples of data from select samples from the nuclear fuel cycle and historical nuclear test debris. Copyright © 2018 Elsevier B.V. All rights reserved.
The evidential value of distorted and rectified digital images in footwear imprint examination.
Shor, Yaron; Chaikovsky, Alan; Tsach, Tsadok
2006-06-27
The procedure for forensic photography requires that the film plane be parallel to the object being photographed. Another procedure must be used when the print is located on reflecting surfaces such as vehicles, or for faint marks on porous surfaces. We examined the evidential value of footprint images received from the scene or taken deliberately at an angle, out of proper perspective (i.e., the lens axis is not perpendicular to the target plane). An artificial target was prepared and photographed at several lens axis angles ranging from 10 degrees to 85 degrees to the perpendicular, and then rectified using Adobe Photoshop Version 7.0. It was found that at angles of less than 40 degrees, the shape and location of all the individual characteristics were sufficiently similar to those in the original image. In images taken at higher angles, the original image could not be adequately restored, and the full potential of the image therefore could not be achieved after rectification. The results of this study show that images of a footprint taken at an angle of less than 40 degrees preserve the evidential value of the unique characteristics.
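A sketch of the rectification step the study evaluates, using a planar homography rather than Photoshop: four reference points of known geometry (for example the corners of a scale placed beside the mark) map the oblique photograph back to a fronto-parallel view. The file name, pixel coordinates and scale size are hypothetical.

```python
# Perspective rectification of an obliquely photographed imprint from four
# reference points, using OpenCV's homography utilities.
import numpy as np
import cv2

# Pixel positions of the four scale corners in the oblique photograph (assumed).
src = np.float32([[412, 310], [1490, 355], [1395, 1240], [350, 1160]])
# Where those corners belong in the rectified image: a 100 x 100 mm square
# rendered at 10 pixels/mm.
dst = np.float32([[0, 0], [1000, 0], [1000, 1000], [0, 1000]])

image = cv2.imread("oblique_footprint.jpg")          # hypothetical input file
if image is None:                                    # blank canvas so the sketch runs
    image = np.zeros((1500, 1700, 3), np.uint8)

H = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(image, H, (1000, 1000))
cv2.imwrite("rectified_footprint.png", rectified)
```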
Kumar, Nerella Narendra; Panchaksharappa, Mamatha Gowda; Annigeri, Rajeshwari G
2016-04-01
The aim of the present study is to estimate the age of the Davangere population by evaluating the pulp-to-tooth area ratio (PTR) using digitized intraoral periapical radiographs of the permanent mandibular second molar. 400 intraoral periapical radiographs (IOPA) of permanent mandibular 2nd molars of both sexes, aged 14-60 years, were used. A digital camera was used to image the radiographs. Images were computed and PTR was calculated with AUTOCAD software. Intra- and inter-observer variability was also assessed. Regression analysis was used to estimate the age of an individual by taking PTR as the dependent variable. The mean PTR of males and females was 0.10 ± 0.02 and 0.09 ± 0.02, respectively. A negative correlation was observed when age was compared with PTR (r = -0.441, -0.406 and -0.419 among males, females and all subjects, respectively; p < 0.001). Regression analysis showed a Standard Error of Estimate (SEE) of 12 years. The Kappa coefficient for intra- and inter-examiner variability was 0.85 and 0.83, respectively. Our results showed that the permanent mandibular 2nd molar can be taken as an index tooth for estimating the age of adults using digitized periapical radiographs and AUTOCAD software. However, large differences, of the order of 12 years, were observed between estimated and chronological age, which is not within an acceptable range. Nevertheless, it provides a new window for research in the forensic sciences on estimating adult age. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Validating the Factor Structure of the Self-Report Psychopathy Scale in a Community Sample
ERIC Educational Resources Information Center
Mahmut, Mehmet K.; Menictas, Con; Stevenson, Richard J.; Homewood, Judi
2011-01-01
Currently, there is no standard self-report measure of psychopathy in community-dwelling samples that parallels the most commonly used measure of psychopathy in forensic and clinical samples, the Psychopathy Checklist. A promising instrument is the Self-Report Psychopathy scale (SRP), which was derived from the original version of the Psychopathy…
Digitally programmable signal generator and method
Priatko, G.J.; Kaskey, J.A.
1989-11-14
Disclosed is a digitally programmable waveform generator for generating completely arbitrary digital or analog waveforms from very low frequencies to frequencies in the gigasample per second range. A memory array with multiple parallel outputs is addressed; then the parallel output data is latched into buffer storage from which it is serially multiplexed out at a data rate many times faster than the access time of the memory array itself. While data is being multiplexed out serially, the memory array is accessed with the next required address and presents its data to the buffer storage before the serial multiplexing of the last group of data is completed, allowing this new data to then be latched into the buffer storage for smooth continuous serial data output. In a preferred implementation, a plurality of these serial data outputs are paralleled to form the input to a digital to analog converter, providing a programmable analog output. 6 figs.
Digitally programmable signal generator and method
Priatko, Gordon J.; Kaskey, Jeffrey A.
1989-01-01
A digitally programmable waveform generator for generating completely arbitrary digital or analog waveforms from very low frequencies to frequencies in the gigasample per second range. A memory array with multiple parallel outputs is addressed; then the parallel output data is latched into buffer storage from which it is serially multiplexed out at a data rate many times faster than the access time of the memory array itself. While data is being multiplexed out serially, the memory array is accessed with the next required address and presents its data to the buffer storage before the serial multiplexing of the last group of data is completed, allowing this new data to then be latched into the buffer storage for smooth continuous serial data output. In a preferred implementation, a plurality of these serial data outputs are paralleled to form the input to a digital to analog converter, providing a programmable analog output.
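The two patent records above describe the same scheme: a wide but slow memory read is latched into buffer storage and then shifted out serially at many times the memory access rate. A behavioural sketch of that data flow, assuming an 8-way-wide memory and a purely illustrative stored waveform (no attempt is made to model the actual circuit timing):

```python
# Behavioural sketch of the patented scheme: each memory access delivers W
# samples in parallel, which are latched and multiplexed out serially at
# W times the access rate. Widths and the waveform are illustrative only.
import numpy as np

W = 8                                   # parallel outputs per memory access
N = 1024                                # samples stored in the memory array
waveform = np.sin(2 * np.pi * 5 * np.arange(N) / N)   # arbitrary stored waveform
memory = waveform.reshape(N // W, W)    # each row = one wide memory word

def serial_output(memory):
    """Yield samples in output order: fetch a word, latch it into the buffer,
    then shift it out serially while the next word is being fetched (the
    overlap is implicit in this purely sequential model)."""
    for word in memory:                 # one slow memory access per row
        buffer = word.copy()            # latch into buffer storage
        for sample in buffer:           # serial multiplexing at W x access rate
            yield sample

out = np.fromiter(serial_output(memory), dtype=float, count=N)
assert np.allclose(out, waveform)       # smooth, continuous serial data stream
```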
NASA Astrophysics Data System (ADS)
Mandai, Shingo; Jain, Vishwas; Charbon, Edoardo
2014-02-01
This paper presents a digital silicon photomultiplier (SiPM) partitioned into columns, where each column is connected to a column-parallel time-to-digital converter (TDC) in order to improve the timing resolution of single-photon detection. By reducing the number of pixels per TDC using a sharing scheme with three TDCs per column, the pixel-to-pixel skew is reduced. We report the basic characterization of the SiPM, comprising 416 single-photon avalanche diodes (SPADs); the characterization includes photon detection probability, dark count rate, afterpulsing, and crosstalk. We achieved 264-ps full-width at half-maximum timing resolution for single-photon detection using a 48-fold column-parallel TDC with a temporal resolution of 51.8 ps (least significant bit), fully integrated in standard complementary metal-oxide semiconductor technology.
Digital intermediate frequency QAM modulator using parallel processing
Pao, Hsueh-Yuan [Livermore, CA; Tran, Binh-Nien [San Ramon, CA
2008-05-27
The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple and low cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up-tables (LUTs). The high-speed input data stream is parallel processed using the corresponding LUTs, which reduces the main processing speed, allowing the use of low cost FPGAs.
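The multiplier-free idea in that abstract can be illustrated with a small sketch: the modulated carrier segment for every possible symbol is pre-computed into a look-up table, so per-sample processing reduces to a table read. This is not the FPGA architecture itself; the sample rate, carrier frequency, and QPSK mapping below are assumptions chosen so that an integer number of carrier cycles fits in each symbol.

```python
# Sketch of a LUT-based digital IF modulator: pre-compute one IF segment per
# symbol value so that modulation requires no run-time multiplies.
# Parameters (16 samples/symbol, 2 carrier cycles/symbol, QPSK) are illustrative.
import numpy as np

SPS = 16                                  # samples per symbol
CYCLES = 2                                # IF carrier cycles per symbol
n = np.arange(SPS)
cos_lut = np.cos(2 * np.pi * CYCLES * n / SPS)
sin_lut = np.sin(2 * np.pi * CYCLES * n / SPS)

iq = {0: (+1, +1), 1: (-1, +1), 2: (-1, -1), 3: (+1, -1)}   # QPSK constellation
# One pre-computed IF segment per symbol value (the "ROM LUT"):
rom = {s: (i * cos_lut - q * sin_lut) / np.sqrt(2) for s, (i, q) in iq.items()}

def modulate(symbols):
    """Concatenate pre-computed IF segments; carrier phase is continuous
    because CYCLES is an integer."""
    return np.concatenate([rom[s] for s in symbols])

if_signal = modulate([0, 1, 3, 2, 0, 2])
```

In an FPGA the symbol stream would additionally be de-interleaved so that several table reads proceed in parallel at a fraction of the output sample rate, which is the point of the parallel-processing architecture described above.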
Geoethics and Forensic Geology
NASA Astrophysics Data System (ADS)
Donnelly, Laurance
2017-04-01
The International Union of Geological Sciences (IUGS) Initiative on Forensic Geology (IFG) was set up in 2011 to promote and develop the applications of geology to policing and law enforcement throughout the world. This includes the provision of crime scene examinations, searches to locate graves or items of interest that have been buried beneath the ground surface as part of a criminal act, and geological trace analysis and evidence. Forensic geologists may assist the police and law enforcement in a range of ways, including, for example, homicide, sexual assaults, counter-terrorism, kidnapping, humanitarian incidents, environmental crimes, precious minerals theft, and fakes and fraudulent crimes. The objective of this paper is to consider the geoethical aspects of forensic geology. This includes both the delivery of research and teaching and the contribution to the practical applications of forensic geology in case work. The case examples cited are based on the personal experiences of the authors. Often, the technical and scientific aspect of a forensic geology investigation may be the most straightforward; after all, this is what the forensic geologist has been trained to do. The associated geoethical issues can be the most challenging and complex to manage. Generally, forensic geologists are driven to carry out their research or case work with integrity, honesty and in a manner that is law abiding, professional, socially acceptable and highly responsible. This is necessary in advising law enforcement organisations, society and the scientific community that they represent. As the science of forensic geology begins to advance around the world, it is desirable to establish a standard set of principles and values and to provide an agreed ethical framework. But what are these core values? Who is responsible for producing them? How may they be enforced? What happens when geoethical standards are breached? This paper does not attempt to provide all of the answers, as further work is required. However, it draws attention to some of the relevant geoethical issues within forensic geology and forensic geoscience. This paper also highlights the need for the development of a set of resources: references and guidelines, standards and protocols, a code of conduct (including, for example, integrity, accountability, honesty, professional fairness, courtesy and trustworthiness), data sharing and information transparency, education and training, multi-disciplinary collaboration, development of research, fair debate, evaluating uncertainty and risk, regulation and accreditation, effective communication and diplomacy, attendance at crime scenes, presenting evidence in courts of law, dealing with the media, and elimination of potential bias. The uptake of forensic geoscience brings with it considerable challenges arising from the direct and often very sensitive human interactions. Developing this ethical component of the work that the IUGS-IFG does combines technical approaches with sensitive solutions and, in parallel, helps define an ethical framework for forensic geoscientists' research and practice in addressing these challenges.
Satoh, Tetsuya; Kouroki, Seiya; Ogawa, Keita; Tanaka, Yorika; Matsumura, Kazutoshi; Iwase, Susumu
2018-04-25
Identifying body fluids from forensic samples can provide valuable evidence for criminal investigations. Messenger RNA (mRNA)-based body fluid identification was recently developed, and highly sensitive parallel identification using reverse transcription polymerase chain reaction (RT-PCR) has been described. In this study, we developed reverse transcription loop-mediated isothermal amplification (RT-LAMP) as a simple, rapid assay for identifying three common forensic body fluids, namely blood, semen, and saliva, and evaluated its specificity and sensitivity. Hemoglobin beta (HBB), transglutaminase 4 (TGM4), and statherin (STATH) were selected as marker genes for blood, semen, and saliva, respectively. RT-LAMP could be performed in a single step including both reverse transcription and DNA amplification under an isothermal condition within 60 min, and detection could be conveniently performed via visual fluorescence. Marker-specific amplification was performed in each assay, and no cross-reaction was observed among five representative forensically relevant body fluids. The detection limits of the assays were 0.3 nL, 30 nL, and 0.3 μL for blood, semen, and saliva, respectively, and their sensitivities were comparable with those of RT-PCR. Furthermore, RT-LAMP assays were applicable to forensic casework samples. It is considered that RT-LAMP is useful for body fluid identification.
Effects of the Ion PGM™ Hi-Q™ sequencing chemistry on sequence data quality.
Churchill, Jennifer D; King, Jonathan L; Chakraborty, Ranajit; Budowle, Bruce
2016-09-01
Massively parallel sequencing (MPS) offers substantial improvements over current forensic DNA typing methodologies such as increased resolution, scalability, and throughput. The Ion PGM™ is a promising MPS platform for analysis of forensic biological evidence. The system employs a sequencing-by-synthesis chemistry on a semiconductor chip that measures a pH change due to the release of hydrogen ions as nucleotides are incorporated into the growing DNA strands. However, implementation of MPS into forensic laboratories requires a robust chemistry. Ion Torrent's Hi-Q™ Sequencing Chemistry was evaluated to determine if it could improve on the quality of the generated sequence data in association with selected genetic marker targets. The whole mitochondrial genome and the HID-Ion STR 10-plex panel were sequenced on the Ion PGM™ system with the Ion PGM™ Sequencing 400 Kit and the Ion PGM™ Hi-Q™ Sequencing Kit. Concordance, coverage, strand balance, noise, and deletion ratios were assessed in evaluating the performance of the Ion PGM™ Hi-Q™ Sequencing Kit. The results indicate that reliable, accurate data are generated and that sequencing through homopolymeric regions can be improved with the use of Ion Torrent's Hi-Q™ Sequencing Chemistry. Overall, the quality of the generated sequencing data supports the potential for use of the Ion PGM™ in forensic genetic laboratories.
Development of a forensic skin colour predictive test.
Maroñas, Olalla; Phillips, Chris; Söchtig, Jens; Gomez-Tato, Antonio; Cruz, Raquel; Alvarez-Dios, José; de Cal, María Casares; Ruiz, Yarimar; Fondevila, Manuel; Carracedo, Ángel; Lareu, María V
2014-11-01
There is growing interest in skin colour prediction in the forensic field. However, a lack of consensus approaches for recording skin colour phenotype plus the complicating factors of epistatic effects, environmental influences such as exposure to the sun and unidentified genetic variants, present difficulties for the development of a forensic skin colour predictive test centred on the most strongly associated SNPs. Previous studies have analysed skin colour variation in single unadmixed population groups, including South Asians (Stokowski et al., 2007, Am. J. Hum. Genet, 81: 1119-32) and Europeans (Jacobs et al., 2013, Hum Genet. 132: 147-58). Nevertheless, a major challenge lies in the analysis of skin colour in admixed individuals, where co-ancestry proportions do not necessarily dictate any one person's skin colour. Our study sought to analyse genetic differences between African, European and admixed African-European subjects where direct spectrometric measurements and photographs of skin colour were made in parallel. We identified strong associations to skin colour variation in the subjects studied from a pigmentation SNP discovery panel of 59 markers and developed a forensic online classifier based on naïve Bayes analysis of the SNP profiles made. A skin colour predictive test is described using the ten most strongly associated SNPs in 8 genes linked to skin pigmentation variation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
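The classifier described in that abstract is based on naive Bayes analysis of SNP profiles. The sketch below is a generic, hand-rolled naive Bayes over genotypes coded as 0/1/2 alternate-allele counts; the training genotypes, class labels, and SNP set are synthetic placeholders, not the published ten-SNP model or its frequencies.

```python
# Minimal naive Bayes sketch over SNP genotypes (0/1/2 coding), in the spirit
# of the online classifier described above. All data here are synthetic.
import numpy as np

def fit_naive_bayes(genotypes, labels, n_classes, alpha=1.0):
    """Return P(genotype g at SNP j | class c) with Laplace smoothing alpha."""
    n_snps = genotypes.shape[1]
    probs = np.zeros((n_classes, n_snps, 3))
    for c in range(n_classes):
        G = genotypes[labels == c]
        for j in range(n_snps):
            counts = np.bincount(G[:, j], minlength=3) + alpha
            probs[c, j] = counts / counts.sum()
    return probs

def predict(probs, sample, priors):
    """Posterior over classes for one genotype vector (log-space for stability)."""
    logpost = np.log(np.asarray(priors, dtype=float))
    for c in range(probs.shape[0]):
        logpost[c] += np.log(probs[c, np.arange(len(sample)), sample]).sum()
    post = np.exp(logpost - logpost.max())
    return post / post.sum()

# Synthetic example: 2 phenotype classes, 4 SNPs, 6 training individuals.
G = np.array([[0, 1, 2, 0], [0, 0, 2, 1], [1, 1, 2, 0],
              [2, 2, 0, 2], [2, 1, 0, 2], [1, 2, 1, 2]])
y = np.array([0, 0, 0, 1, 1, 1])
model = fit_naive_bayes(G, y, n_classes=2)
print(predict(model, np.array([0, 1, 2, 0]), priors=[0.5, 0.5]))
```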
Hirata, Satoshi; Kojima, Kaname; Misawa, Kazuharu; Gervais, Olivier; Kawai, Yosuke; Nagasaki, Masao
2018-05-01
Forensic DNA typing is widely used to identify missing persons and plays a central role in forensic profiling. DNA typing usually uses capillary electrophoresis fragment analysis of PCR amplification products to detect the length of short tandem repeat (STR) markers. Here, we analyzed whole genome data from 1,070 Japanese individuals generated using massively parallel short-read sequencing with 162-base paired-end reads. We analyzed 843,473 STR loci with two- to six-basepair repeat units and cataloged highly polymorphic STR loci in the Japanese population. To evaluate the performance of the cataloged STR loci, we compared 23 STR loci widely used in forensic DNA typing with capillary electrophoresis based STR genotyping results in the Japanese population. Seventeen loci had high correlations and high call rates. The other six loci had low call rates or low correlations due to either the limitations of short-read sequencing technology, the bioinformatics tool used, or the complexity of repeat patterns. With these analyses, we also identified 218 suitable STR loci with four-basepair repeat units and 53 loci with five-basepair repeat units for both short-read sequencing and PCR-based technologies, which are candidates for actual forensic DNA typing in the Japanese population.
Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hui; Shi, Yanjun
A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multilevel inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multilevel inverter and a finite state machine (FSM) module coupled to the parallel multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multilevel inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.
FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing
NASA Technical Reports Server (NTRS)
Berner, Stephan; DeLeon, Phillip
1999-01-01
One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using (Field Programmable Gate Arrays) FPGAs.
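The analysis/synthesis idea in that abstract can be illustrated with an idealized split: decompose a fullband signal into M frequency subbands, process each subband independently, and sum the subband signals to recover the fullband output. The FFT "brick-wall" split below is only a conceptual stand-in for the polyphase filter banks implemented on the FPGAs; it is not that implementation.

```python
# Idealized analysis/synthesis filterbank sketch: FFT-based subband split and
# exact recombination by linearity. A polyphase FIR bank would be used in the
# FPGA implementation described above.
import numpy as np

def analysis(x, M):
    """Return M subband signals whose spectra tile the spectrum of x."""
    X = np.fft.fft(x)
    N = len(x)
    edges = np.linspace(0, N, M + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xi = np.zeros_like(X)
        Xi[lo:hi] = X[lo:hi]            # keep only this band's FFT bins
        bands.append(np.fft.ifft(Xi))   # complex subband signal
    return bands

def synthesis(bands):
    """Recombine subbands into the fullband signal (exact here, by linearity)."""
    return np.real_if_close(np.sum(bands, axis=0))

x = np.random.randn(1024)
assert np.allclose(synthesis(analysis(x, M=8)), x)
```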
Parson, Walther; Ballard, David; Budowle, Bruce; Butler, John M; Gettings, Katherine B; Gill, Peter; Gusmão, Leonor; Hares, Douglas R; Irwin, Jodi A; King, Jonathan L; Knijff, Peter de; Morling, Niels; Prinz, Mechthild; Schneider, Peter M; Neste, Christophe Van; Willuweit, Sascha; Phillips, Christopher
2016-05-01
The DNA Commission of the International Society for Forensic Genetics (ISFG) is reviewing factors that need to be considered ahead of the adoption by the forensic community of short tandem repeat (STR) genotyping by massively parallel sequencing (MPS) technologies. MPS produces sequence data that provide a precise description of the repeat allele structure of a STR marker and variants that may reside in the flanking areas of the repeat region. When a STR contains a complex arrangement of repeat motifs, the level of genetic polymorphism revealed by the sequence data can increase substantially. As repeat structures can be complex and include substitutions, insertions, deletions, variable tandem repeat arrangements of multiple nucleotide motifs, and flanking region SNPs, established capillary electrophoresis (CE) allele descriptions must be supplemented by a new system of STR allele nomenclature, which retains backward compatibility with the CE data that currently populate national DNA databases and that will continue to be produced for the coming years. Thus, there is a pressing need to produce a standardized framework for describing complex sequences that enables comparison with currently used repeat allele nomenclature derived from conventional CE systems. It is important to discern three levels of information in hierarchical order: (i) the sequence, (ii) the alignment, and (iii) the nomenclature of STR sequence data. We propose a sequence (text) string format as the minimal requirement of data storage that laboratories should follow when adopting MPS of STRs. We further discuss the variant annotation and sequence comparison framework necessary to maintain compatibility among established and future data. This system must be easy to use and interpret by the DNA specialist, based on a universally accessible genome assembly, and in place before the uptake of MPS by the general forensic community starts to generate sequence data on a large scale. While the established nomenclature for CE-based STR analysis will remain unchanged in the future, the nomenclature of sequence-based STR genotypes will need to follow updated rules and be generated by expert systems that translate MPS sequences to match CE conventions in order to guarantee compatibility between the different generations of STR data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
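The expert-system translation mentioned above (MPS sequence string to a CE-compatible allele designation) can be sketched in a few lines. The marker, motif, and sequence below are hypothetical and the counting rule is deliberately simplified; it is not the ISFG nomenclature itself.

```python
# Toy translation of an MPS sequence string into a CE-style repeat-count
# allele name (e.g. "9.3" for nine full repeats plus a three-base partial).
# Motif and flanking sequence are hypothetical.
import re

def ce_allele_from_sequence(seq, motif="TAGA"):
    """Count tandem copies of `motif`; report a trailing partial repeat CE-style."""
    best = max(re.finditer(rf"(?:{motif})+", seq),
               key=lambda m: len(m.group(0)), default=None)
    if best is None:
        return "0"
    full = len(best.group(0)) // len(motif)
    # Look for a partial copy of the motif immediately after the full run.
    tail = seq[best.end():best.end() + len(motif) - 1]
    partial = 0
    for k in range(len(motif) - 1, 0, -1):
        if tail.startswith(motif[:k]):
            partial = k
            break
    return f"{full}.{partial}" if partial else str(full)

print(ce_allele_from_sequence("CCT" + "TAGA" * 9 + "TAG" + "CCTT"))  # -> "9.3"
```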
Sequence investigation of 34 forensic autosomal STRs with massively parallel sequencing.
Zhang, Suhua; Niu, Yong; Bian, Yingnan; Dong, Rixia; Liu, Xiling; Bao, Yun; Jin, Chao; Zheng, Hancheng; Li, Chengtao
2018-05-01
STRs vary not only in the length of the repeat units and the number of repeats but also in the region in which they conform to an incremental repeat pattern. Massively parallel sequencing (MPS) offers new possibilities in the analysis of STRs, since it can simultaneously sequence multiple targets in a single reaction and capture potential internal sequence variations. Here, we sequenced 34 STRs used in the forensic community of China with a custom-designed panel. MPS performance was evaluated through sequencing read analysis, a concordance study, and sensitivity testing. High-coverage sequencing data were obtained to determine the constituent ratios and heterozygote balance. No actual inconsistent genotypes were observed between capillary electrophoresis (CE) and MPS, demonstrating the reliability of the panel and the MPS technology. With the sequencing data from the 200 investigated individuals, 346 and 418 alleles were obtained at the 34 STRs via CE and MPS, respectively, indicating that MPS provides higher discrimination than CE detection. The whole study demonstrated that STR genotyping with the custom panel and MPS technology has the potential not only to reveal length and sequence variations but also to satisfy the demands of high throughput and high multiplexing with acceptable sensitivity.
Parallel optoelectronic trinary signed-digit division
NASA Astrophysics Data System (ADS)
Alam, Mohammad S.
1999-03-01
The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.
Towards Trustable Digital Evidence with PKIDEV: PKI Based Digital Evidence Verification Model
NASA Astrophysics Data System (ADS)
Uzunay, Yusuf; Incebacak, Davut; Bicakci, Kemal
How to Capture and Preserve Digital Evidence Securely? For the investigation and prosecution of criminal activities that involve computers, digital evidence collected in the crime scene has a vital importance. On one side, it is a very challenging task for forensics professionals to collect them without any loss or damage. On the other, there is the second problem of providing the integrity and authenticity in order to achieve legal acceptance in a court of law. By conceiving digital evidence simply as one instance of digital data, it is evident that modern cryptography offers elegant solutions for this second problem. However, to our knowledge, there is not any previous work proposing a systematic model having a holistic view to address all the related security problems in this particular case of digital evidence verification. In this paper, we present PKIDEV (Public Key Infrastructure based Digital Evidence Verification model) as an integrated solution to provide security for the process of capturing and preserving digital evidence. PKIDEV employs, inter alia, cryptographic techniques like digital signatures and secure time-stamping as well as latest technologies such as GPS and EDGE. In our study, we also identify the problems public-key cryptography brings when it is applied to the verification of digital evidence.
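The cryptographic building blocks such a model rests on (hashing the evidence, then signing with a private key so anyone holding the public key can verify integrity and origin) can be illustrated generically. This is not the PKIDEV protocol: certificates, secure time-stamping, GPS metadata, and chain-of-custody handling are omitted, the evidence bytes are a placeholder, and the third-party Python `cryptography` package is assumed.

```python
# Generic hash-and-sign illustration of digital evidence integrity protection.
# Requires the third-party "cryptography" package; not the PKIDEV model itself.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

evidence = b"placeholder bytes standing in for an acquired disk image"
digest = hashlib.sha256(evidence).hexdigest()          # evidence fingerprint

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(evidence, pss, hashes.SHA256())

try:                                                   # later, at verification time
    private_key.public_key().verify(signature, evidence, pss, hashes.SHA256())
    print("evidence integrity verified; SHA-256 =", digest)
except InvalidSignature:
    print("evidence has been altered")
```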
Porter, Glenn; Ebeyan, Robert; Crumlish, Charles; Renshaw, Adrian
2015-03-01
The photographic preservation of fingermark impression evidence found on ammunition cases remains problematic due to the cylindrical shape of the deposition substrate preventing complete capture of the impression in a single image. A novel method was developed for the photographic recovery of fingermarks from curved surfaces using digital imaging. The process involves the digital construction of a complete impression image made from several different images captured from multiple camera perspectives. Fingermark impressions deposited onto 9-mm and 0.22-caliber brass cartridge cases and a plastic 12-gauge shotgun shell were tested using various image parameters, including digital stitching method, number of images per 360° rotation of shell, image cropping, and overlap. The results suggest that this method may be successfully used to recover fingermark impression evidence from the surfaces of ammunition cases or other similar cylindrical surfaces. © 2014 American Academy of Forensic Sciences.
Application of multirate digital filter banks to wideband all-digital phase-locked loops design
NASA Technical Reports Server (NTRS)
Sadr, Ramin; Shah, Biren; Hinedi, Sami
1993-01-01
A new class of architecture for all-digital phase-locked loops (DPLL's) is presented in this article. These architectures, referred to as parallel DPLL (PDPLL), employ multirate digital filter banks (DFB's) to track signals with a lower processing rate than the Nyquist rate, without reducing the input (Nyquist) bandwidth. The PDPLL basically trades complexity for hardware-processing speed by introducing parallel processing in the receiver. It is demonstrated here that the DPLL performance is identical to that of a PDPLL for both steady-state and transient behavior. A test signal with a time-varying Doppler characteristic is used to compare the performance of both the DPLL and the PDPLL.
Application of multirate digital filter banks to wideband all-digital phase-locked loops design
NASA Astrophysics Data System (ADS)
Sadr, Ramin; Shah, Biren; Hinedi, Sami
1993-06-01
A new class of architecture for all-digital phase-locked loops (DPLL's) is presented in this article. These architectures, referred to as parallel DPLL (PDPLL), employ multirate digital filter banks (DFB's) to track signals with a lower processing rate than the Nyquist rate, without reducing the input (Nyquist) bandwidth. The PDPLL basically trades complexity for hardware-processing speed by introducing parallel processing in the receiver. It is demonstrated here that the DPLL performance is identical to that of a PDPLL for both steady-state and transient behavior. A test signal with a time-varying Doppler characteristic is used to compare the performance of both the DPLL and the PDPLL.
Application of multirate digital filter banks to wideband all-digital phase-locked loops design
NASA Astrophysics Data System (ADS)
Sadr, R.; Shah, B.; Hinedi, S.
1992-11-01
A new class of architecture for all-digital phase-locked loops (DPLL's) is presented in this article. These architectures, referred to as parallel DPLL (PDPLL), employ multirate digital filter banks (DFB's) to track signals with a lower processing rate than the Nyquist rate, without reducing the input (Nyquist) bandwidth. The PDPLL basically trades complexity for hardware-processing speed by introducing parallel processing in the receiver. It is demonstrated here that the DPLL performance is identical to that of a PDPLL for both steady-state and transient behavior. A test signal with a time-varying Doppler characteristic is used to compare the performance of both the DPLL and the PDPLL.
Application of multirate digital filter banks to wideband all-digital phase-locked loops design
NASA Technical Reports Server (NTRS)
Sadr, R.; Shah, B.; Hinedi, S.
1992-01-01
A new class of architecture for all-digital phase-locked loops (DPLL's) is presented in this article. These architectures, referred to as parallel DPLL (PDPLL), employ multirate digital filter banks (DFB's) to track signals with a lower processing rate than the Nyquist rate, without reducing the input (Nyquist) bandwidth. The PDPLL basically trades complexity for hardware-processing speed by introducing parallel processing in the receiver. It is demonstrated here that the DPLL performance is identical to that of a PDPLL for both steady-state and transient behavior. A test signal with a time-varying Doppler characteristic is used to compare the performance of both the DPLL and the PDPLL.
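The four records above concern a parallel, multirate re-implementation of a conventional all-digital PLL. As background, a minimal sketch of that conventional serial DPLL (phase detector, proportional-integral loop filter, numerically controlled oscillator) is given below; the multirate filter-bank decomposition itself is not shown, and the loop gains, sample rate, and Doppler-like test signal are illustrative only.

```python
# Conventional second-order all-digital PLL tracking a sinusoid with a small
# frequency (Doppler-like) offset. Serial baseline for the PDPLL idea above.
import numpy as np

fs, f0, doppler = 1.0e4, 1.0e3, 2.0                    # Hz
n = np.arange(20000)
true_phase = 2 * np.pi * (f0 + doppler) * n / fs + 0.7
x = np.cos(true_phase)                                 # input samples

kp, ki = 0.05, 0.002                                   # proportional / integral gains
phase, integ = 0.0, 0.0
phase_err, phase_log = [], []
for i, sample in enumerate(x):
    phase_err.append(np.angle(np.exp(1j * (true_phase[i] - phase))))
    err = sample * -np.sin(phase)                      # phase detector (2f ripple left to the loop)
    integ += ki * err                                  # integral branch
    phase += 2 * np.pi * f0 / fs + kp * err + integ    # NCO phase update
    phase_log.append(phase)

f_est = (phase_log[-1] - phase_log[-1001]) * fs / (2 * np.pi * 1000) - f0
print(f"residual phase error ~ {np.mean(np.abs(phase_err[-1000:])):.3f} rad")
print(f"estimated Doppler offset ~ {f_est:.2f} Hz (true value {doppler} Hz)")
```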
Towards Statistically Undetectable Steganography
2011-06-30
Fragmentary record text: figure captions comparing detectability of LSB replacement steganography in never-compressed cover images for payloads of fixed size, payloads proportional to √N, and payloads proportional to N; plus a publication list including J. Fridrich, Steganography in Digital Media: Principles, Algorithms, and Applications, Cambridge University Press, November 2009; "…Images for Applications in Steganography," IEEE Trans. on Info. Forensics and Security, vol. 3(2), pp. 247-258, 2008; and conference papers by T. Filler.
Forensic Analysis of Digital Image Tampering
2004-12-01
Fragmentary record text: the thesis analyzes when each tampering-detection method fails (discussed in Chapter 4) and includes a test image containing an invisible watermark embedded with LSB steganography; the extracted figure list includes "Example of invisible watermark using Steganography Software F5," "Example of copy-move image forgery," "Algorithm for JPEG Block Technique," and "'Forged' Image with Result."
2010-05-01
classrooms and to build a digital workforce for the 21st century. Strengthening Partnerships: Neither government nor the private sector nor individual...don't lie beyond our reach. They exist in our laboratories and universities; in our fields and our factories; in the imaginations of our entrepreneurs ...the Right to Access Information: The emergence of technologies such as the Internet, wireless networks, mobile smart-phones, investigative forensics
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
Forensic Investigation of Cooperative Storage Cloud Service: Symform as a Case Study.
Teing, Yee-Yang; Dehghantanha, Ali; Choo, Kim-Kwang Raymond; Dargahi, Tooska; Conti, Mauro
2017-05-01
Researchers envisioned Storage as a Service (StaaS) as an effective solution to the distributed management of digital data. Cooperative storage cloud forensic is relatively new and is an under-explored area of research. Using Symform as a case study, we seek to determine the data remnants from the use of cooperative cloud storage services. In particular, we consider both mobile devices and personal computers running various popular operating systems, namely Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, iOS 7.1.2, and Android KitKat 4.4.4. Potential artefacts recovered during the research include data relating to the installation and uninstallation of the cloud applications, log-in to and log-out from Symform account using the client application, file synchronization as well as their time stamp information. This research contributes to an in-depth understanding of the types of terrestrial artifacts that are likely to remain after the use of cooperative storage cloud on client devices. © 2016 American Academy of Forensic Sciences.
Using spectral information in forensic imaging.
Miskelly, Gordon M; Wagner, John H
2005-12-20
Improved detection of forensic evidence by combining narrow band photographic images taken at a range of wavelengths is dependent on the substance of interest having a significantly different spectrum from the underlying substrate. While some natural substances such as blood have distinctive spectral features which are readily distinguished from common colorants, this is not true for visualization agents commonly used in forensic science. We now show that it is possible to select reagents with narrow spectral features that lead to increased visibility using digital cameras and computer image enhancement programs even if their coloration is much less intense to the unaided eye than traditional reagents. The concept is illustrated by visualising latent fingermarks on paper with the zinc complex of Ruhemann's Purple, cyanoacrylate-fumed fingerprints with Eu(tta)(3)(phen), and soil prints with 2,6-bis(benzimidazol-2-yl)-4-[4'-(dimethylamino)phenyl]pyridine [BBIDMAPP]. In each case background correction is performed at one or two wavelengths bracketing the narrow absorption or emission band of these compounds. However, compounds with sharp spectral features would also lead to improved detection using more advanced algorithms such as principal component analysis.
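The bracketing correction described in that abstract, estimating the background at the narrow band by interpolating between the two off-band images and subtracting it, can be sketched as follows. The wavelengths and variable names are illustrative, not the study's actual acquisition settings.

```python
# Two-wavelength bracketing background correction for narrowband forensic imaging.
import numpy as np

def background_corrected(on_band, off_lo, off_hi, lam_on, lam_lo, lam_hi):
    """Subtract a linearly interpolated background from the on-band image."""
    w = (lam_on - lam_lo) / (lam_hi - lam_lo)          # interpolation weight
    background = (1.0 - w) * off_lo.astype(float) + w * off_hi.astype(float)
    return on_band.astype(float) - background

# Hypothetical usage with images captured at 570, 590 and 610 nm:
# enhanced = background_corrected(img_590nm, img_570nm, img_610nm, 590, 570, 610)
```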
Franklin, Daniel; O'Higgins, Paul; Oxnard, Charles E; Dadour, Ian
2007-03-01
This article forms part of an ongoing series of investigations designed to apply three-dimensional (3D) technology to problems in forensic anthropology. We report here on new morphometric data examining sexual dimorphism and population variation in the adult human mandible. The material is sourced from dissection hall subjects of South African and American origin; consequently, the sex and a statement of age are known for each individual. Thirty-eight bilateral 3D landmarks were designed and acquired using a Microscribe G2X portable digitizer. The shape analysis software morphologika (www.york.ac.uk/res/fme) is used to analyze the 3D coordinates of the landmarks. A selection of multivariate statistics is applied to visualize the pattern, and assess the significance of, shape variation between the sexes and populations. The determination of sex and identification of population affinity are two important aspects of forensic investigation. Our results indicate that the adult mandible can be used to identify both sex and population affinity with increased sensitivity and objectivity compared to standard analytical techniques.
A forensic identification case and DPid - can it be a useful tool?
de QUEIROZ, Cristhiane Leão; BOSTOCK, Ellen Marie; SANTOS, Carlos Ferreira; GUIMARÃES, Marco Aurélio; da SILVA, Ricardo Henrique Alves
2017-01-01
Objective: The aim of this study was to show DPid as an important tool of potential application for solving cases involving dental prostheses, such as the forensic case reported here, in which a skull, a denture and dental records were received for analysis. Material and Methods: Human identification is still challenging in various circumstances. Dental Prosthetics Identification (DPid) stores the patient's name and prosthesis information and provides access through an embedded code in the dental prosthesis or an identification card. All of this information is digitally stored on servers accessible only by dentists, laboratory technicians and patients with their own level of secure access. DPid provides a complete single-source list of all dental prosthesis features (materials and components) under complete and secure documentation used for clinical follow-up and for human identification. Results and Conclusion: If the DPid tool had been available in this forensic case, it could have been solved without requiring the DNA exam, which confirmed the dental comparison of antemortem and postmortem records and concluded the case as a positive identification. PMID:28678955
A cloud-based forensics tracking scheme for online social network clients.
Lin, Feng-Yu; Huang, Chien-Cheng; Chang, Pei-Ying
2015-10-01
In recent years, with significant changes in communication modes, most users have shifted to cloud-based applications, especially online social networks (OSNs). These applications are mostly hosted externally and are readily available to criminals, enabling them to impede criminal investigations and intelligence gathering. In the virtual world, how the Law Enforcement Agency (LEA) identifies the "actual" identity of criminal suspects, and their geolocation in social networks, is a major challenge for current digital investigation. In view of this, this paper proposes a scheme, based on the concepts of IP location and network forensics, which aims to develop forensics tracking on OSNs. According to our empirical analysis, the proposed mechanism can instantly trace the "physical location" of a targeted service resource identifier (SRI) when the target client is using online social network applications (Facebook, Twitter, etc.), and can associatively analyze the probable target client "identity". To the best of our knowledge, this is the first individualized location method and architecture developed and evaluated in OSNs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The 2nd Symposium on the Frontiers of Massively Parallel Computations
NASA Technical Reports Server (NTRS)
Mills, Ronnie (Editor)
1988-01-01
Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.
Neumann, Cedric; Ramotowski, Robert; Genessay, Thibault
2011-05-13
Forensic examinations of ink have been performed since the beginning of the 20th century. Since the 1960s, the International Ink Library, maintained by the United States Secret Service, has supported those analyses. Until 2009, the search and identification of inks were essentially performed manually. This paper describes the results of a project designed to improve ink samples' analytical and search processes. The project focused on the development of improved standardization procedures to ensure the best possible reproducibility between analyses run on different HPTLC plates. The successful implementation of this new calibration method enabled the development of mathematical algorithms and of a software package to complement the existing ink library. Copyright © 2010 Elsevier B.V. All rights reserved.
Pereira, Vania; Mogensen, Helle S; Børsting, Claus; Morling, Niels
2017-05-01
The application of massive parallel sequencing (MPS) methodologies in forensic genetics is promising and it is gradually being implemented in forensic genetic case work. One of the major advantages of these technologies is that several traditional electrophoresis assays can be combined into one single MPS assay. This reduces both the amount of sample used and the time of the investigations. This study assessed the utility of the Precision ID Ancestry Panel (Thermo Fisher Scientific, Waltham, USA) in forensic genetics. This assay was developed for the Ion Torrent PGM™ System and genotypes 165 ancestry informative SNPs. The performance of the assay and the accompanying software solution for ancestry inference was assessed by typing 142 Danes and 98 Somalis. Locus balance, heterozygote balance, and noise levels were calculated and future analysis criteria for crime case work were estimated. Overall, the Precision ID Ancestry Panel performed well, and only minor changes to the recommended protocol were implemented. Three out of the 165 loci (rs459920, rs7251928, and rs7722456) had consistently poor performance, mainly due to misalignment of homopolymeric stretches. We suggest that these loci should be excluded from the analyses. The different statistical methods for reporting ancestry in forensic genetic case work are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wan, Yuhong; Man, Tianlong; Wu, Fan; Kim, Myung K.; Wang, Dayong
2016-11-01
We present a new self-interference digital holographic approach that allows single-shot capture of the three-dimensional intensity distribution of spatially incoherent objects. Fresnel incoherent correlation holographic microscopy is combined with a parallel phase-shifting technique to instantaneously obtain spatially multiplexed phase-shifting holograms. A compressive-sensing-based reconstruction algorithm is implemented to reconstruct the original object from the undersampled demultiplexed holograms. The scheme is verified with simulations. The validity of the proposed method is demonstrated experimentally in an indirect way by simulating the use of a specific parallel phase-shifting recording device.
Jacobsohn, D.H.; Merrill, L.C.
1959-01-20
An improved parallel addition unit is described which is especially adapted for use in electronic digital computers and characterized by propagation of the carry signal through each of a plurality of denominationally ordered stages within a minimum time interval. In its broadest aspects, the invention incorporates a fast multistage parallel digital adder including a plurality of adder circuits, carry-propagation circuit means in all but the most significant digit stage, means for conditioning each carry-propagation circuit during the time period in which information is placed into the adder circuits, and means coupling carry-generation portions of the adder circuit to the carry-propagating means.
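The conditioned carry-propagation circuits in that patent belong to the same family of ideas as generate/propagate (carry-lookahead) logic. A word-level sketch of that logic follows; it illustrates the generate and propagate terms, not the patented circuit itself.

```python
# Word-level generate/propagate adder sketch: each stage i generates a carry
# (g_i = a_i AND b_i) or propagates an incoming one (p_i = a_i XOR b_i).
def add_with_carry_logic(a_bits, b_bits):
    """Add two little-endian bit lists using explicit generate/propagate terms."""
    g = [a & b for a, b in zip(a_bits, b_bits)]        # carry generate
    p = [a ^ b for a, b in zip(a_bits, b_bits)]        # carry propagate
    carry, carries = 0, []
    for gi, pi in zip(g, p):
        carries.append(carry)
        carry = gi | (pi & carry)                      # c_{i+1} = g_i OR (p_i AND c_i)
    sum_bits = [pi ^ ci for pi, ci in zip(p, carries)]
    return sum_bits + [carry]                          # include final carry-out

a = [1, 0, 1, 1]   # 13 (LSB first)
b = [1, 1, 0, 1]   # 11 (LSB first)
bits = add_with_carry_logic(a, b)
assert sum(bit << i for i, bit in enumerate(bits)) == 24
```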
Research in Optical Symbolic Tasks
1989-11-29
November 1989. Specifically, we have concentrated on the following topics: complexity studies for optical neural and digital systems, architecture and models for ... Digital Systems. 1.1 Digital Optical Parallel System Complexity: Our study of digital optical system complexity has included a comparison of optical and ...
Ebert, Lars Christian; Ptacek, Wolfgang; Naether, Silvio; Fürst, Martin; Ross, Steffen; Buck, Ursula; Weber, Stefan; Thali, Michael
2010-03-01
The Virtopsy project, a multi-disciplinary project that involves forensic science, diagnostic imaging, computer science, automation technology, telematics and biomechanics, aims to develop new techniques to improve the outcome of forensic investigations. This paper presents a new approach in the field of minimally invasive virtual autopsy for a versatile robotic system that is able to perform three-dimensional (3D) surface scans as well as post mortem image-guided soft tissue biopsies. The system consists of an industrial six-axis robot with additional extensions (i.e. a linear axis to increase working space, a tool-changing system and a dedicated safety system), a multi-slice CT scanner with equipment for angiography, a digital photogrammetry and 3D optical surface-scanning system, a 3D tracking system, and a biopsy end effector for automatic needle placement. A wax phantom was developed for biopsy accuracy tests. Surface scanning times were significantly reduced (scanning times cut in half, calibration three times faster). The biopsy module worked with an accuracy of 3.2 mm. Using the Virtobot, the surface-scanning procedure could be standardized and accelerated. The biopsy module is accurate enough for use in biopsies in a forensic setting. The Virtobot can be utilized for several independent tasks in the field of forensic medicine, and is sufficiently versatile to be adapted to different tasks in the future. (c) 2009 John Wiley & Sons, Ltd.
Thakare, Shweta; Mhapuskar, Amit; Hiremutt, Darshan; Giroh, Versha R; Kalyanpur, Kedarnath; Alpana, K R
2016-09-01
Evaluation of the position of mental foramen aids in forensic, surgical, endodontic, as well as diagnostic procedures. Thus, in view of this, the present study was conducted among the population of Pune, a central part of India, to determine the most regular location of the mental foramen and to estimate difference in position of mental foramen based on gender. The present retrospective study was commenced on 200 digital panoramic radiographs of dentate patients. The location of the representation of the mental foramen was traced. Measurements for evaluating distance of superior and inferior borders of the foramen in relation to the lower border of the mandible were made using the reference lines drawn from anatomical landmarks. The data so obtained were statistically analyzed using chi-square test. The most common position of mental foramen among Pune population in horizontal plane in both male and female patients was in line with second premolar followed by position in between first and second premolar, whereas in the vertical plane, most common position was at or in line with apex of second premolar followed by in between apex of first and second premolar. The variation in length of superior and inferior border of the foramen in relation to lower border of the mandible with respect to gender was found to be significant, with p-value <0.05. There was no difference in position of mental foramen in horizontal and vertical planes based on gender. The stability of location of mental foramen and significant difference in length of superior and inferior border of the foramen in relation to lower border of the mandible with respect to gender offer its application in forensic identification of gender.
2011-06-01
Fragmentary record text: a repeated extract of a Linux process listing recovered during memory analysis (e.g., gnome-keyring-d, gnome-settings-, xrdb, metacity, gnome-panel, nautilus, bonobo-activati, gnome-vfs-daemo, eggcups, gnome-volume-ma) with associated process IDs.
High Rate Digital Demodulator ASIC
NASA Technical Reports Server (NTRS)
Ghuman, Parminder; Sheikh, Salman; Koubek, Steve; Hoy, Scott; Gray, Andrew
1998-01-01
The architecture of High Rate (600 Mega-bits per second) Digital Demodulator (HRDD) ASIC capable of demodulating BPSK and QPSK modulated data is presented in this paper. The advantages of all-digital processing include increased flexibility and reliability with reduced reproduction costs. Conventional serial digital processing would require high processing rates necessitating a hardware implementation in other than CMOS technology such as Gallium Arsenide (GaAs) which has high cost and power requirements. It is more desirable to use CMOS technology with its lower power requirements and higher gate density. However, digital demodulation of high data rates in CMOS requires parallel algorithms to process the sampled data at a rate lower than the data rate. The parallel processing algorithms described here were developed jointly by NASA's Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL). The resulting all-digital receiver has the capability to demodulate BPSK, QPSK, OQPSK, and DQPSK at data rates in excess of 300 Mega-bits per second (Mbps) per channel. This paper will provide an overview of the parallel architecture and features of the HRDR ASIC. In addition, this paper will provide an over-view of the implementation of the hardware architectures used to create flexibility over conventional high rate analog or hybrid receivers. This flexibility includes a wide range of data rates, modulation schemes, and operating environments. In conclusion it will be shown how this high rate digital demodulator can be used with an off-the-shelf A/D and a flexible analog front end, both of which are numerically computer controlled, to produce a very flexible, low cost high rate digital receiver.
3D motion picture of transparent gas flow by parallel phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Awatsuji, Yasuhiro; Fukuda, Takahito; Wang, Yexin; Xia, Peng; Kakue, Takashi; Nishio, Kenzo; Matoba, Osamu
2018-03-01
Parallel phase-shifting digital holography is a technique capable of quantitatively recording a three-dimensional (3D) motion picture of a dynamic object. The technique records a single hologram of an object with an image sensor equipped with a phase-shift array device and reconstructs the instantaneous 3D image of the object with a computer. In this single hologram, the multiple holograms required for phase-shifting digital holography are multiplexed pixel by pixel using a space-division multiplexing technique. We demonstrate a 3D motion picture of a dynamic, transparent gas flow recorded and reconstructed by the technique. A compressed-air duster was used to generate the gas flow. A motion picture of the hologram of the gas flow was recorded at 180,000 frames/s by parallel phase-shifting digital holography, and the phase motion picture of the gas flow was reconstructed from it. The Abel inversion was then applied to the phase motion picture to obtain the 3D motion picture of the gas flow.
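The space-division demultiplexing step can be sketched as follows, assuming each 2×2 superpixel of the recorded frame carries the four phase shifts 0, π/2, π, 3π/2 (the exact pixel layout of the phase-shift array device may differ). Numerical propagation to the object plane and the Abel inversion used for the gas flow are omitted.

```python
# Demultiplex a spatially multiplexed phase-shifting hologram and recover the
# complex object wave at the sensor with the standard four-step formula.
import numpy as np

def demux_and_reconstruct(frame):
    I0   = frame[0::2, 0::2].astype(float)   # phase shift 0
    I90  = frame[0::2, 1::2].astype(float)   # pi/2
    I180 = frame[1::2, 0::2].astype(float)   # pi
    I270 = frame[1::2, 1::2].astype(float)   # 3pi/2
    # I(d) = a + b*cos(phi + d)  =>  b*cos(phi) = (I0 - I180)/2,
    #                                b*sin(phi) = (I270 - I90)/2
    return 0.5 * ((I0 - I180) + 1j * (I270 - I90))   # = b*exp(i*phi)

# phase_map = np.angle(demux_and_reconstruct(raw_frame))
```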
Near real-time digital holographic microscope based on GPU parallel computing
NASA Astrophysics Data System (ADS)
Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan
2018-01-01
A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) is combined with digital holographic microscopy. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with CUDA-based parallel computing, so it is especially suitable for measurements of particle fields at the micrometer and nanometer scale. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at the micrometer scale, with an average velocity error lower than 10%. With graphics processing units (GPUs), the computing time for 100 reconstruction planes (512×512 grids) is lower than 120 ms, whereas it is 4.9 s using the traditional CPU-based reconstruction method; the reconstruction speed is thus raised by a factor of 40. In other words, the system can handle holograms at 8.3 frames per second, realizing near real-time measurement and display of the particle velocity field. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved through further optimization of software and hardware.
Multiplexed Oversampling Digitizer in 65 nm CMOS for Column-Parallel CCD Readout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grace, Carl; Walder, Jean-Pierre; von der Lippe, Henrik
2012-04-10
A digitizer designed to read out column-parallel charge-coupled devices (CCDs) used for high-speed X-ray imaging is presented. The digitizer is included as part of the High-Speed Image Preprocessor with Oversampling (HIPPO) integrated circuit. The digitizer module comprises a multiplexed, oversampling, 12-bit, 80 MS/s pipelined Analog-to-Digital Converter (ADC) and a bank of four fast-settling sample-and-hold amplifiers to instrument four analog channels. The ADC multiplexes and oversamples to reduce its area to allow integration that is pitch-matched to the columns of the CCD. Novel design techniques are used to enable oversampling and multiplexing with a reduced power penalty. The ADC exhibits 188 μV-rms noise, which is less than 1 LSB at the 12-bit level. The prototype is implemented in a commercially available 65 nm CMOS process. The digitizer will lead to a proof-of-principle 2D 10 Gigapixel/s X-ray detector.
Binocular optical axis parallelism detection precision analysis based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ying, Jiaju; Liu, Bingqi
2018-02-01
According to the working principle of the binocular photoelectric instrument optical axis parallelism digital calibration instrument, and considering all components of the instrument, the various factors that affect system precision are analyzed and a precision analysis model is established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circle-target image. The method can further guide the error distribution, optimize control of the factors that have a greater influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
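A generic Monte Carlo error-propagation sketch in the spirit of that analysis follows: sample each component error from an assumed distribution, push the samples through a placeholder measurement model, and summarise the spread of the resulting circle-target centre coordinate. The model, error budgets, and instrument parameters below are purely illustrative, not those of the calibration instrument described.

```python
# Generic Monte Carlo tolerance analysis: propagate assumed component errors
# to the centre coordinate of the circle-target image.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

tilt_err  = rng.normal(0.0, 5e-5, N)    # rad, optical axis tilt error
focal_err = rng.normal(0.0, 0.05, N)    # mm, effective focal length error
pixel_err = rng.uniform(-0.5, 0.5, N)   # px, centroiding quantisation error

f_nominal = 300.0                       # mm (assumed)
pixel_pitch = 0.0055                    # mm/px (assumed)

# Placeholder model: centre shift (px) caused by the combined errors.
centre_shift = ((f_nominal + focal_err) * np.tan(tilt_err)) / pixel_pitch + pixel_err

print(f"std of centre coordinate: {centre_shift.std():.3f} px")
print(f"95% interval: {np.percentile(centre_shift, [2.5, 97.5])}")
```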
Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.
Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan
2017-12-01
Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners are today presented with easy low-cost access to aerial photographs at remote locations. The present paper aims to explore boundaries in which the low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examinations. Three mock outdoor forensic scenes featuring a dummy, skeletal remains and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan ® 1.2.6) in order to generate 3D textured models. The third experiment tested the employment of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images, which are appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording for seeking out scattered body parts was efficient. In contrast, the less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment. Copyright © 2017 Elsevier B.V. All rights reserved.
Intelligent image capture of cartridge cases for firearms examiners
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Guerci, Joseph R.
1997-02-01
The FBI's DRUGFIRE™ system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the many human examiners' visual perception versus the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant,' which assesses the quality of the image under test utilizing a custom-designed neural network.
Minimizing inhibition of PCR-STR typing using digital agarose droplet microfluidics.
Geng, Tao; Mathies, Richard A
2015-01-01
The presence of PCR inhibitors in forensic and other biological samples reduces the amplification efficiency, sometimes resulting in complete PCR failure. Here we demonstrate a high-performance digital agarose droplet microfluidics technique for single-cell and single-molecule forensic short tandem repeat (STR) typing of samples contaminated with high concentrations of PCR inhibitors. In our multifaceted strategy, the mitigation of inhibitory effects is achieved by the efficient removal of inhibitors from the porous agarose microgel droplets carrying the DNA template through washing and by the significant dilution of targets and remaining inhibitors to the stochastic limit within the ultralow nL volume droplet reactors. Compared to conventional tube-based bulk PCR, our technique shows enhanced (20 ×, 10 ×, and 16 ×) tolerance of urea, tannic acid, and humic acid, respectively, in STR typing of GM09948 human lymphoid cells. STR profiling of single cells is not affected by small soluble molecules like urea and tannic acid because of their effective elimination from the agarose droplets; however, higher molecular weight humic acid still partially inhibits single-cell PCR when the concentration is higher than 200 ng/μL. Nevertheless, the full STR profile of 9948 male genomic DNA contaminated with 500 ng/μL humic acid was generated by pooling and amplifying beads carrying single-molecule 9948 DNA PCR products in a single secondary reaction. This superior performance suggests that our digital agarose droplet microfluidics technology is a promising approach for analyzing low-abundance DNA targets in the presence of inhibitors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Forensic Analysis of the Sony Playstation Portable
NASA Astrophysics Data System (ADS)
Conrad, Scott; Rodriguez, Carlos; Marberry, Chris; Craiger, Philip
The Sony PlayStation Portable (PSP) is a popular portable gaming device with features such as wireless Internet access and image, music and movie playback. As with most systems built around a processor and storage, the PSP can be used for purposes other than those for which it was originally intended, both legal and illegal. This paper discusses the features of the PSP browser and suggests best practices for extracting digital evidence.
Network Monitoring Traffic Compression Using Singular Value Decomposition
2014-03-27
Shootouts." Workshop on Intrusion Detection and Network Monitoring. 1999. [12] Goodall , John R. "Visualization is better! a comparative evaluation...34 Visualization for Cyber Security, 2009. VizSec 2009. 6th International Workshop on IEEE, 2009. [13] Goodall , John R., and Mark Sowul. "VIAssist...Viruses and Log Visualization.” In Australian Digital Forensics Conference. Paper 54, 2008. [30] Tesone, Daniel R., and John R. Goodall . "Balancing
Prediction of age and gender using digital radiographic method: A retrospective study.
Poongodi, V; Kanmani, R; Anandi, M S; Krithika, C L; Kannan, A; Raghuram, P H
2015-08-01
To investigate age and sex based on the gonial angle and the height and width of the ramus of the mandible using digital orthopantomographs. A total of 200 panoramic radiographic images were selected. The age of the individuals ranged between 4 and 75 years, covering both genders: males (113) and females (87). The selected radiographic images were measured using the KLONK image measurement software tool with linear and angular measurements. The investigated radiographs were collected from the records of the Department of Oral Medicine and Radiology, SRM Dental College. Radiographs with any pathology, facial deformities, congenital deformities, magnification, distortion, or in which the mental foramen was not observable were excluded. Mean, median, and standard deviation were derived, along with the first and third quartiles; linear regression was used to assess the correlation of age and gender with the angle of the mandible and the height and width of the ramus. The radiographic method is a simpler and more cost-effective method of age identification compared with histological and biochemical methods. The mandible is the strongest facial bone and, after the skull and pelvic bone, among the strongest in the body; its use for predicting age and gender has been validated by many previous studies. Radiographic and tomographic images have become an essential aid for human identification in forensic dentistry; forensic dentists can choose the most appropriate method, since the validity of age and gender estimation crucially depends on the method used and its proper application.
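As an illustration of the statistical step described above, the following Python sketch fits a linear regression of age on mandibular measurements with ordinary least squares; it is not the study's analysis, and the measurement values and function names are hypothetical.

```python
import numpy as np

# Hypothetical example rows: gonial angle (deg), ramus height (mm), ramus width (mm).
X = np.array([
    [125.0, 52.1, 30.4],
    [118.5, 58.9, 33.0],
    [130.2, 49.7, 29.1],
    [121.4, 55.3, 31.8],
    [127.9, 51.0, 30.0],
    [119.8, 57.2, 32.4],
])
y = np.array([23.0, 41.0, 17.0, 35.0, 21.0, 44.0])   # chronological age (years), made up

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_age(gonial_angle, ramus_height, ramus_width):
    """Predict age (years) from the fitted linear model."""
    return float(coef @ np.array([1.0, gonial_angle, ramus_height, ramus_width]))

print(predict_age(123.0, 54.0, 31.0))
```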
Forensics for flatbed scanners
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Franz, Elke; Winkler, Antje
2007-02-01
Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, utilizing image sensor noise to identify the origin of scanned images seems possible. As characterizations of flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and downscaling, as examples of such particularities of flatbed scanners, on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
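A minimal sketch of the sensor-noise approach discussed above, assuming grayscale scans of equal size and a Gaussian denoiser as a stand-in for whatever filter the authors used; the array and line reference patterns here are simplified illustrations, not the paper's exact estimators.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Approximate the sensor noise as the image minus a denoised version."""
    return img - gaussian_filter(img, sigma=2.0)

def array_reference_pattern(images):
    """Average the residuals of several scans from the same device."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def line_reference_pattern(pattern):
    """Collapse to a sensor-line pattern by averaging along the transport
    direction (assumed to be axis 0)."""
    return pattern.mean(axis=0)

def correlation(a, b):
    """Normalized correlation between two zero-mean patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Usage (grayscale float arrays of equal size):
# ref = array_reference_pattern(known_scans)
# score = correlation(line_reference_pattern(ref),
#                     line_reference_pattern(noise_residual(query_scan)))
```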
Measuring Distances Using Digital Cameras
ERIC Educational Resources Information Center
Kendal, Dave
2007-01-01
This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
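The underlying relation for the parallel-plane case is the pinhole similar-triangles formula, distance = focal length × real object height / imaged height on the sensor. A hedged Python sketch, with made-up example numbers, is:

```python
def horizontal_distance(focal_length_mm, object_height_m,
                        object_height_px, sensor_height_mm, image_height_px):
    """Distance to an object of known real height, assuming the object plane
    is parallel to the image plane (simple pinhole model)."""
    # Height of the object's image on the sensor, in millimetres.
    image_height_mm = object_height_px * sensor_height_mm / image_height_px
    # Similar triangles: distance / object_height = focal_length / image_height.
    return focal_length_mm * object_height_m / image_height_mm  # metres

# Example: a 50 mm lens, a 1.8 m object spanning 300 px on a 24 mm-tall,
# 4000 px-tall sensor -> roughly 50 m away.
print(horizontal_distance(50.0, 1.8, 300, 24.0, 4000))
```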
Accuracy and eligibility of CBCT to digitize dental plaster casts.
Becker, Kathrin; Schmücker, Ulf; Schwarz, Frank; Drescher, Dieter
2018-05-01
Software-based dental planning requires digital casts and oftentimes cone-beam computed tomography (CBCT) radiography. However, buying a dedicated model digitizing device can be expensive and might not be required. The present study aimed to assess whether digital models derived from CBCT and models digitized using a dedicated optical device are of comparable accuracy. A total of 20 plaster casts were digitized with eight CBCT and five optical model digitizers. Corresponding models were superimposed using six control points and subsequent iterative closest point matching. Median distances were calculated among all registered models. Data were pooled per scanner and model. Boxplots were generated, and the paired t test, a Friedman test, and a post-hoc Nemenyi test were employed for statistical comparison. Results were considered significant at p < 0.05. All CBCT devices allowed the digitization of plaster casts, but failed to reach the accuracy of the dedicated model digitizers (p < 0.001). Median distances between CBCT and optically digitized casts were 0.064 ± 0.005 mm. Qualitative differences among the CBCT systems were detected (χ² = 78.07, p < 0.001), and one CBCT device providing a special plaster cast digitization mode was found to be superior to the competitors (p < 0.05). CBCT systems failed to reach the accuracy of optical digitizers, but within the limits of the study, accuracy appeared to be sufficient for digital planning and forensic purposes. Most CBCT systems enabled digitization of plaster casts, and accuracy was found sufficient for digital planning and storage purposes.
The factorization of large composite numbers on the MPP
NASA Technical Reports Server (NTRS)
Mckurdy, Kathy J.; Wunderlich, Marvin C.
1987-01-01
The continued fraction method for factoring large integers (CFRAC) was an ideal algorithm to be implemented on a massively parallel computer such as the Massively Parallel Processor (MPP). After much effort, the first 60-digit number was factored on the MPP using about 6 1/2 hours of array time. Although this result added about 10 digits to the size of number that could be factored using CFRAC on a serial machine, it was already badly beaten by the implementation of Davis and Holdridge on the CRAY-1 using the quadratic sieve, an algorithm which is clearly superior to CFRAC for large numbers. An algorithm is illustrated which is ideally suited to the single instruction multiple data (SIMD) massively parallel architecture, and some of the modifications which were needed to make the parallel implementation effective and efficient are described.
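The heart of CFRAC is the continued-fraction expansion of √N, whose convergents yield congruences A² ≡ (−1)^k Q_k (mod N) with small residues Q_k. The Python sketch below reconstructs only that textbook relation-generation step, assuming N is not a perfect square; it is not the MPP implementation, and the demo modulus is arbitrary.

```python
from math import isqrt

def cfrac_relations(N, count=10):
    """Generate congruences A_{k-1}^2 ≡ (-1)^k * Q_k (mod N) from the
    continued-fraction expansion of sqrt(N), the raw material of CFRAC."""
    a0 = isqrt(N)
    m, d, a = 0, 1, a0
    A_prev, A = 1, a0 % N            # convergent numerators p_{k-2}, p_{k-1} mod N
    rels = []
    for k in range(1, count + 1):
        m = d * a - m
        d = (N - m * m) // d         # Q_k
        a = (a0 + m) // d
        rels.append((A, (-1) ** k * d % N))       # A^2 ≡ (-1)^k Q_k (mod N)
        A_prev, A = A, (a * A + A_prev) % N
    return rels

# Each pair (A, r) satisfies A*A % N == r; combining relations whose Q_k are
# smooth into a perfect square yields x^2 ≡ y^2 (mod N) and a factor gcd(x - y, N).
N = 13 * 29
for A, r in cfrac_relations(N, 8):
    assert A * A % N == r
```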
NASA Technical Reports Server (NTRS)
Athale, R. A.; Lee, S. H.
1978-01-01
The paper describes the fabrication and operation of an optical parallel logic (OPAL) device which performs Boolean algebraic operations on binary images. Several logic operations on two input binary images were demonstrated using an 8 x 8 device with a CdS photoconductor and a twisted nematic liquid crystal. Two such OPAL devices can be interconnected to form a half-adder circuit which is one of the essential components of a CPU in a digital signal processor.
Detecting double compression of audio signal
NASA Astrophysics Data System (ADS)
Yang, Rui; Shi, Yun Q.; Huang, Jiwu
2010-01-01
MP3 is the most popular audio format in daily life; for example, music downloaded from the Internet and files saved by digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files have higher commercial value. Audio recordings made on digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for identifying fake-quality MP3 files and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this piece of work is the first to detect double compression of audio signals.
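A hedged sketch of the feature-extraction step: given an array of quantized MDCT coefficients (obtaining them from an MP3 bitstream, e.g. via an instrumented decoder, is assumed and outside this sketch), the first-digit distribution that would feed the SVM can be computed as follows; the random input is only there to exercise the function.

```python
import numpy as np

def first_digit_histogram(coefficients):
    """Distribution of the leading decimal digits (1-9) of the quantized
    MDCT coefficients, used as an SVM feature vector."""
    c = np.abs(np.asarray(coefficients, dtype=float))
    c = c[c >= 1]                                   # ignore zeros / sub-unit values
    # Leading digit of x is floor(x / 10**floor(log10 x)).
    first = (c // 10 ** np.floor(np.log10(c))).astype(int)
    hist = np.bincount(first, minlength=10)[1:10].astype(float)
    return hist / max(hist.sum(), 1.0)

# Singly compressed audio tends to follow a Benford-like first-digit law;
# a second compression perturbs the distribution, which the classifier picks up.
print(first_digit_histogram(np.random.randint(-500, 500, size=10_000)))
```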
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
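As a simplified illustration (one direction only, whereas the paper uses four, and the array here is random rather than a real difference JPEG 2-D array), a thresholded transition probability matrix can be formed like this:

```python
import numpy as np

def transition_matrix(block, T=4, axis=1):
    """Markov transition probability matrix of a thresholded difference array
    along one direction; in practice several directions are concatenated."""
    diff = np.diff(block.astype(int), axis=axis)
    diff = np.clip(diff, -T, T)                       # threshold to [-T, T]
    cur = diff[:, :-1] if axis == 1 else diff[:-1, :]
    nxt = diff[:, 1:] if axis == 1 else diff[1:, :]
    tpm = np.zeros((2 * T + 1, 2 * T + 1))
    for i, j in zip((cur + T).ravel(), (nxt + T).ravel()):
        tpm[i, j] += 1
    row_sums = tpm.sum(axis=1, keepdims=True)
    return tpm / np.where(row_sums == 0, 1, row_sums)

# Flattened matrices from several directions form the feature vector for an SVM.
features = transition_matrix(np.random.randint(0, 256, (64, 64))).ravel()
```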
Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A
2014-01-01
In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single-root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas the requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance on the regression formula. For the use of this primary multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to a linear regression analysis employing the best contributor, the "arithmetic mean" of luminance, yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitate measurement and age classification due to its low dependence on observer experience.
The computer-aided parallel external fixator for complex lower limb deformity correction.
Wei, Mengting; Chen, Jianwen; Guo, Yue; Sun, Hao
2017-12-01
Since parameters of the parallel external fixator are difficult to measure and calculate in real applications, this study developed computer software that can help the doctor measure parameters using digital technology and generate an electronic prescription for deformity correction. According to Paley's deformity measurement method, we provided digital measurement techniques. In addition, we proposed a deformity correction algorithm to calculate the elongations of the six struts and developed electronic prescription software. At the same time, a three-dimensional simulation of the parallel external fixator and deformed fragment was made using virtual reality modeling language technology. From 2013 to 2015, fifteen patients with complex lower limb deformity were treated with parallel external fixators and the self-developed computer software. All of the cases had unilateral limb deformity. The deformities were caused by old osteomyelitis in nine cases and traumatic sequelae in six cases. A doctor measured the related angulation, displacement and rotation on postoperative radiographs using the digital measurement techniques. The measurement data were input into the electronic prescription software to calculate the daily adjustment elongations of the struts. Daily strut adjustments were conducted according to the calculated data. The frame was removed when the expected results were achieved. Patients lived independently during the adjustment. The mean follow-up was 15 months (range 10-22 months). The duration of frame fixation from the time of application to the time of removal averaged 8.4 months (range 2.5-13.1 months). All patients were satisfied with the corrected limb alignment. No cases of wound infection or complications occurred. Using the computer-aided parallel external fixator for the correction of lower limb deformities can achieve satisfactory outcomes. The correction process can be simplified and is precise and digitized, which will greatly improve treatment in clinical application.
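The strut-elongation calculation is, in essence, the inverse kinematics of a six-strut hexapod: each strut length is the distance between its fixed-ring joint and the transformed position of its moving-ring joint. The sketch below is a generic reconstruction under that assumption, not the authors' software; the joint coordinates and correction pose are hypothetical.

```python
import numpy as np

def rotation(rx, ry, rz):
    """Rotation matrix from Euler rotations (radians), applied in Z-Y-X order."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def strut_lengths(base_pts, ring_pts, translation, angles):
    """Lengths of the six struts for a given pose of the moving ring relative
    to the fixed ring (generic hexapod inverse kinematics)."""
    R, t = rotation(*angles), np.asarray(translation, float)
    return [float(np.linalg.norm(t + R @ p - b)) for b, p in zip(base_pts, ring_pts)]

# Hypothetical joint positions (mm) on two rings of radius 80 mm.
ang = np.deg2rad([15, 75, 135, 195, 255, 315])
base = [np.array([80 * np.cos(a), 80 * np.sin(a), 0.0]) for a in ang]
ring = [np.array([80 * np.cos(a + np.pi / 6), 80 * np.sin(a + np.pi / 6), 0.0]) for a in ang]
print(strut_lengths(base, ring, translation=[2.0, -1.0, 150.0],
                    angles=np.deg2rad([1.0, 0.5, 2.0])))
```

Stepping the pose from the measured deformity toward zero over the planned number of days and recomputing the lengths at each step gives the daily adjustment schedule.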
ERIC Educational Resources Information Center
Serapiglia, Anthony
2014-01-01
The following Teaching Case is designed to expose students to three scenarios related to data stored on hard drives, techniques that could be used to retrieve deleted or corrupted data, and a method for a more thorough deletion of data from a hard drive. These issues are often overlooked in current IT curriculum and in our age of digital clutter…
Digital Forensics Research: The Next 10 Years
2010-01-01
techniques were developed primarily for data recovery. For example, Wood et al. relate a story about two local data recovery experts working for 70 h to...recover the only copy of a highly fragmented database file inadvertently erased by a careless researcher (pp. 123-124, Wood et al., 1987). By the late 1980s...Apple, Blackberry, Windows Mobile, Symbian), more than a dozen "proprietary" systems, and more than 100,000 downloadable applications. There are...
An FPGA-Based System for Tracking Digital Information Transmitted Via Peer-to-Peer Protocols
2009-03-01
Sullivan. ISPs are Pressed to Become Child Porn Cops, October 2008. http://www.msnbc.msn.com/id/27198621. DVR07. Hamza Dahmouni, Sandrine Vaton, and David...of child pornography. The Federal Bureau of Investigation's (FBI) Regional Computer Forensics Laboratory states in its 2007 annual report that...cybercrime, which includes crimes against children and child pornography, is the offense for which law enforcement requested assistance most often.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki
2007-02-01
We have developed a parallel algorithm for micro digital-holographic particle-tracking velocimetry. The algorithm is used in (1) numerical reconstruction of a particle image from a digital hologram, and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maxima in a reconstruction field represented by a 3D matrix. To achieve high-performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability is obtained in proportion to the number of processor elements; the benchmarks were carried out for parallel computation on an SGI Altix machine.
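A single-plane version of the FFT-based Fresnel reconstruction can be sketched as follows; it uses the Fresnel transfer function (with the constant exp(ikz) phase dropped, which does not affect intensity) and is an illustrative reconstruction rather than the parallel code described above, with made-up optical parameters.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, z):
    """Reconstruct the complex field at distance z from a digital hologram
    using the Fresnel transfer function evaluated with FFTs (one z-plane;
    the particle search scans a stack of such planes)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Example: 532 nm laser, 6.45 um pixels, plane reconstructed 5 mm behind the sensor.
holo = np.random.rand(512, 512)
field = fresnel_reconstruct(holo, wavelength=532e-9, dx=6.45e-6, z=5e-3)
intensity = np.abs(field) ** 2   # local maxima across a stack of planes mark particles
```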
NASA Astrophysics Data System (ADS)
Bower, P.; Liddicoat, J.
2009-04-01
Brownfield Action (BA - http://www.brownfieldaction.org) is a web-based, interactive, three-dimensional digital space and learning simulation in which students form geotechnical consulting companies and work collaboratively to explore and solve problems in environmental forensics. BA is being used in the United States at 10 colleges and universities in earth, environmental, or engineering sciences undergraduate and graduate courses. As a semester-long activity or done in modular form for specific topics, BA encourages active learning that requires attention to detail, intuition, and positive interaction between peers that results in Phase 1 and Phase 2 Environmental Site Assessments. Besides use in higher education courses, BA also can be adapted for instruction to local, state, and federal governmental employees, and employees in industry where brownfields need to be investigated or require remediation.
2009-01-01
objects, and in particular the attribute of SHA256 hash is expressed (but other attributes may also be expressed). Digital signatures have been used in... (the excerpt continues with example AFF4 statements assigning sha256 values to urn:aff4:34a62f06 image segments).
Ahmed, Abdulghani Ali; Xue Li, Chua
2018-01-01
Cloud storage service allows users to store their data online, so that they can remotely access, maintain, manage, and back up data from anywhere via the Internet. Although helpful, this storage creates a challenge to digital forensic investigators and practitioners in collecting, identifying, acquiring, and preserving evidential data. This study proposes an investigation scheme for analyzing data remnants and determining probative artifacts in a cloud environment. Using pCloud as a case study, this research collected the data remnants available on end-user device storage following the storing, uploading, and accessing of data in the cloud storage. Data remnants are collected from several sources, including client software files, directory listing, prefetch, registry, network PCAP, browser, and memory and link files. Results demonstrate that the collected remnants data are beneficial in determining a sufficient number of artifacts about the investigated cybercrime. © 2017 American Academy of Forensic Sciences.
Guo, Fei; Yu, Jiao; Zhang, Lu; Li, Jun
2017-11-01
The ForenSeq™ DNA Signature Prep Kit (ForenSeq Kit) is designed to detect more than 200 forensically relevant markers in a single reaction on the MiSeq FGx™ Forensic Genomics System (MiSeq FGx System), including Amelogenin, 27 autosomal short tandem repeats (A-STRs), 7 X-chromosomal STRs (X-STRs), 24 Y-chromosomal STRs (Y-STRs) and 94 identity-informative single nucleotide polymorphisms (iSNPs), with the option to include 22 phenotypic-informative SNPs (pSNPs) and 56 ancestry-informative SNPs (aSNPs). In this study, we evaluated the MiSeq FGx System on three major parts: methodological optimization (DNA extraction, sample quantification, library normalization, diluted library concentration, and sample-to-cell arrangement), massively parallel sequencing (MPS) performance (depth of coverage, sequence coverage ratio, and allele coverage ratio), and ForenSeq Kit characteristics (repeatability and concordance, sensitivity, mixtures, stability and case-type samples). Results showed that quantitative polymerase chain reaction (qPCR)-based sample quantification and library normalization, together with an appropriate number of pooled libraries and concentration of diluted libraries, provided a greater level of MPS performance and repeatability. Repeatable and concordant genotypes were obtained with the ForenSeq Kit. Full profiles were obtained from ≥100 pg input DNA for STRs and ≥200 pg for SNPs. A sample with ≥5% minor contributor was recognized as a mixture by its imbalanced allele coverage ratio distribution, and full profiles from minor contributors were easily detected between 9:1 and 1:9 mixtures with known reference profiles. The ForenSeq Kit tolerated considerable concentrations of inhibitors, such as ≤200 μM hematin and ≤50 μg/ml humic acid, and >56% of STR profiles and >88% of SNP profiles were obtained from ≥200-bp degraded samples. It was also adapted to case-type samples. As a whole, the ForenSeq Kit is a well-performing, robust, reliable, reproducible and highly informative assay, and it can fully meet the requirements for human identification. Further, the sensitive QC indicator and automated sample comparison function in the ForenSeq™ Universal Analysis Software are quite helpful, allowing analysts to concentrate on questionable genotypes and avoid tedious, time-consuming labor, maximizing the time spent on data analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Memory-based frame synchronizer. [for digital communication systems
NASA Technical Reports Server (NTRS)
Stattel, R. J.; Niswander, J. K. (Inventor)
1981-01-01
A frame synchronizer for use in digital communications systems wherein data formats can be easily and dynamically changed is described. The use of memory array elements provides increased flexibility in format selection and sync word selection, in addition to real-time reconfiguration ability. The frame synchronizer comprises a serial-to-parallel converter which converts a serial input data stream to a constantly changing parallel data output. This parallel data output is supplied to programmable sync word recognizers, each consisting of a multiplexer and a random access memory (RAM). The multiplexer is connected to both the parallel data output and an address bus which may be connected to a microprocessor or computer for purposes of programming the sync word recognizer. The RAM is used as an associative memory or decoder and is programmed to identify a specific sync word. Additional programmable RAMs are used as counter decoders to define word bit length, frame word length, and paragraph frame length.
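A software analogue of the RAM-based recognizer is a shift register whose contents index a programmable table of acceptable patterns. The sketch below assumes a 16-bit sync word (0xEB90 is shown, a pattern often used in PCM telemetry) and an allowance for bit errors; it is illustrative, not the patented hardware design.

```python
def make_recognizer(sync_word, length, max_errors=0):
    """Return a detector for the programmed sync word: the table of acceptable
    shift-register contents plays the role of the programmable RAM."""
    accept = {w for w in range(1 << length)
              if bin(w ^ sync_word).count("1") <= max_errors}

    def feed(bits):
        """Yield bit positions (of the last sync bit) where sync is detected."""
        reg = 0
        for i, b in enumerate(bits):
            reg = ((reg << 1) | (b & 1)) & ((1 << length) - 1)
            if i >= length - 1 and reg in accept:
                yield i
    return feed

# 16-bit sync word 0xEB90, one bit error allowed.
detect = make_recognizer(0xEB90, 16, max_errors=1)
stream = [0] * 10 + [int(c) for c in format(0xEB90, "016b")] + [1, 0, 1]
print(list(detect(stream)))   # -> [25]
```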
Blaettler, M; Bruegger, A; Forster, I C; Lehareinger, Y
1988-03-01
The design of an analog interface to a digital audio signal processor (DASP)-video cassette recorder (VCR) system is described. The complete system represents a low-cost alternative to both FM instrumentation tape recorders and multi-channel chart recorders. The interface or DASP input-output unit described in this paper enables the recording and playback of up to 12 analog channels with a maximum of 12 bit resolution and a bandwidth of 2 kHz per channel. Internal control and timing in the recording component of the interface is performed using ROMs which can be reprogrammed to suit different analog-to-digital converter hardware. Improvement in the bandwidth specifications is possible by connecting channels in parallel. A parallel 16 bit data output port is provided for direct transfer of the digitized data to a computer.
Towards a Standard Mixed-Signal Parallel Processing Architecture for Miniature and Microrobotics.
Sadler, Brian M; Hoyos, Sebastian
2014-01-01
The conventional analog-to-digital conversion (ADC) and digital signal processing (DSP) architecture has led to major advances in miniature and micro-systems technology over the past several decades. The outlook for these systems is significantly enhanced by advances in sensing, signal processing, communications and control, and the combination of these technologies enables autonomous robotics on the miniature to micro scales. In this article we look at trends in the combination of analog and digital (mixed-signal) processing, and consider a generalized sampling architecture. Employing a parallel analog basis expansion of the input signal, this scalable approach is adaptable and reconfigurable, and is suitable for a large variety of current and future applications in networking, perception, cognition, and control.
On Textual Analysis and Machine Learning for Cyberstalking Detection.
Frommholz, Ingo; Al-Khateeb, Haider M; Potthast, Martin; Ghasem, Zinnar; Shukla, Mitul; Short, Emma
2016-01-01
Cyber security has become a major concern for users and businesses alike. Cyberstalking and harassment have been identified as a growing anti-social problem. Besides detecting cyberstalking and harassment, there is a need to gather digital evidence, often by the victim. To this end, we provide an overview of and discuss relevant technological means, in particular from text analytics and machine learning, that are capable of addressing the above challenges. We present a framework for the detection of text-based cyberstalking and discuss the role and challenges of some core techniques such as author identification, text classification and personalisation. We then discuss PAN, a network and evaluation initiative that focusses on digital text forensics, in particular author identification.
100 Gbps Wireless System and Circuit Design Using Parallel Spread-Spectrum Sequencing
NASA Astrophysics Data System (ADS)
Scheytt, J. Christoph; Javed, Abdul Rehman; Bammidi, Eswara Rao; KrishneGowda, Karthik; Kallfass, Ingmar; Kraemer, Rolf
2017-09-01
In this article mixed analog/digital signal processing techniques based on parallel spread-spectrum sequencing (PSSS) and radio frequency (RF) carrier synchronization for ultra-broadband wireless communication are investigated on system and circuit level.
Trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.
2000-09-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
One-step trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultra fast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table reported recently for recoded TSD arithmetic technique.
Parallel Processing with Digital Signal Processing Hardware and Software
NASA Technical Reports Server (NTRS)
Swenson, Cory V.
1995-01-01
The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.
Ravindra, S V; Mamatha, G P; Sunita, J D; Balappanavar, Aswini Y; Sardana, Varun
2015-01-01
Teeth are the hardest part of the body and are least affected by the taphonomic process. They are considered one of the reliable means of identifying a person in the forensic sciences. The aim of the following study is to establish morphometric measurements, made with AutoCAD 2009 (Autodesk, Inc.), of permanent maxillary central incisors in different age groups of the Udaipur population. A hospital-based descriptive cross-sectional study was carried out in Udaipur on 308 subjects of both genders with an age range of 9-68 years. Standardized intra-oral radiographs were made by the paralleling technique and processed. The radiographs were scanned and the obtained images were standardized to the actual size of the radiographic film, and then measured using AutoCAD 2009. Statistical analysis employed the F-test, post-hoc test, and Pearson's correlation test. For the left maxillary central incisor, the total pulp area was found to be 38.41 ± 12.88 mm² and 14.32 ± 7.04 mm², respectively. For the right maxillary central incisor, the total pulp size was 38.39 ± 14.95 mm² and 12.35 ± 5 mm², respectively. Males (32.50, 32.87 mm²) had more pulp area when compared with females (28.82, 30.05 mm²). There was a decrease in total pulp area with increasing age, which may be attributed to secondary dentin formation.
Connecting the Dots: From an Easy Method to Computerized Species Determination
Niederegger, Senta; Döge, Klaus-Peter; Peter, Marcus; Eickhölter, Tobias; Mall, Gita
2017-01-01
Differences in growth rate of forensically important dipteran larvae make species determination an essential requisite for an accurate estimation of time since colonization of the body. Interspecific morphological similarities, however, complicate species determination. Muscle attachment site (MAS) patterns on the inside of the cuticula of fly larvae are species specific and grow proportionally with the animal. The patterns can therefore be used for species identification, as well as age estimation in forensically important dipteran larvae. Additionally, in species where determination has proven to be difficult—even when employing genetic methods—this easy and cheap method can be successfully applied. The method was validated for a number of Calliphoridae, as well as Sarcophagidae; for Piophilidae species, however, the method proved to be inapt. The aim of this article is to assess the utility of the MAS method for applications in forensic entomology. Furthermore, the authors are currently engineering automation for pattern acquisition in order to expand the scope of the method. Automation is also required for the fast and reasonable application of MAS for species determination. Using filters on digital microscope pictures and cross-correlating them within their frequency range allows for a calculation of the correlation coefficients. Such pattern recognition permits an automatic comparison of one larva with a database of MAS reference patterns in order to find the correct, or at least the most likely, species. This facilitates species determination in immature stages of forensically important flies and economizes time investment, as rearing to adult flies will no longer be required. PMID:28524106
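A hedged sketch of the frequency-domain matching step: cross-correlate a query MAS pattern against reference patterns via the FFT and pick the species with the highest correlation peak. The normalization and the database layout here are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def xcorr_peak(pattern, reference):
    """Peak of the FFT-based cross-correlation between two equally sized,
    zero-mean pattern images; higher peaks indicate a better match."""
    a = pattern - pattern.mean()
    b = reference - reference.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(corr.max() / denom)

def best_species(query, database):
    """Compare one larva's MAS pattern against a dict of reference patterns."""
    scores = {name: xcorr_peak(query, ref) for name, ref in database.items()}
    return max(scores, key=scores.get), scores

# Usage: patterns as 2-D grayscale arrays of identical shape.
# species, scores = best_species(query_pattern,
#                                {"C. vicina": ref1, "L. sericata": ref2})
```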
Single cells for forensic DNA analysis--from evidence material to test tube.
Brück, Simon; Evers, Heidrun; Heidorn, Frank; Müller, Ute; Kilper, Roland; Verhoff, Marcel A
2011-01-01
The purpose of this project was to develop a method that, while providing morphological quality control, allows single cells to be obtained from the surfaces of various evidence materials and be made available for DNA analysis in cases where only small amounts of cell material are present or where only mixed traces are found. With the SteREO Lumar.V12 stereomicroscope and UV unit from Zeiss, it was possible to detect and assess single epithelial cells on the surfaces of various objects (e.g., glass, plastic, metal). A digitally operated micromanipulator developed by aura optik was used to lift a single cell from the surface of evidence material and to transfer it to a conventional PCR tube or to an AmpliGrid(®) from Advalytix. The actual lifting of the cells was performed with microglobes that acted as carriers. The microglobes were held with microtweezers and were transferred to the DNA analysis receptacles along with the adhering cells. In a next step, the PCR can be carried out in this receptacle without removing the microglobe. Our method allows a single cell to be isolated directly from evidence material and be made available for forensic DNA analysis. © 2010 American Academy of Forensic Sciences.
Fredericks, Jamie D; Ringrose, Trevor J; Dicken, Anthony; Williams, Anna; Bennett, Phil
2015-03-01
Extracting viable DNA from many forensic sample types can be very challenging, as environmental conditions may be far from optimal with regard to DNA preservation. Consequently, skeletal tissue can often be an invaluable source of DNA. The bone matrix provides a hardened material that encapsulates DNA, acting as a barrier to environmental insults that would otherwise be detrimental to its integrity. However, like all forensic samples, DNA in bone can still become degraded in extreme conditions, such as intense heat. Extracting DNA from bone can be laborious and time-consuming. Thus, a lot of time and money can be wasted processing samples that do not ultimately yield viable DNA. We describe the use of colorimetry as a novel diagnostic tool that can assist DNA analysis from heat-treated bone. This study focuses on characterizing changes in the material and physical properties of heated bone, and their correlation with digitally measured color variation. The results demonstrate that the color of bone, which serves as an indicator of the chemical processes that have occurred, can be correlated with the success or failure of subsequent DNA amplification. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Video gaming and sexual violence: rethinking forensic nursing in a digital age.
Mercer, Dave; Parkinson, Denis
2014-01-01
This article reports findings from a qualitative study into how forensic nurses, and male personality disordered sexual offenders, talked about "pornography" in one U.K. high-security hospital. Research rationale was rooted in current professional and political debates, adopting a discourse analytic design to situate the project in a clinical context. Semistructured interviews, as co-constructed accounts, explored talk about sexual media, offending, treatment, and risk. Data were analyzed using a version of discourse analysis popular in healthcare research, identifying discursive repertoires, or collective language use, characteristic of the institutional culture. Findings revealed that masculine discourse marginalized female nurses and contradicted therapeutic goals, where men's talk about pornography, sex, and sexual crime represented discriminatory and gendered language. Nursing definitions of pornography were constructed in the context of the client group and an organizational need to manage risk. In a highly controlled environment, with a long-stay population, priority in respondent talk was given to mainstream commercial sexual media and everyday items/images perceived to have embedded sexual meaning. However, little mention was made of contemporary modes of producing/distributing pornography, where sex and sexual violence are enacted in virtual realities of cyberspace. Failure to engage with information technology, and globally mediated sex, is discussed as a growing concern for forensic health workers.
Wideband aperture array using RF channelizers and massively parallel digital 2D IIR filterbank
NASA Astrophysics Data System (ADS)
Sengupta, Arindam; Madanayake, Arjuna; Gómez-García, Roberto; Engeberg, Erik D.
2014-05-01
Wideband receive-mode beamforming applications in wireless location, electronically-scanned antennas for radar, RF sensing, microwave imaging and wireless communications require digital aperture arrays that offer a relatively constant far-field beam over several octaves of bandwidth. Several beamforming schemes, including the well-known true time-delay and phased array beamformers, have been realized using either finite impulse response (FIR) or fast Fourier transform (FFT) digital filter-sum based techniques. These beamforming algorithms offer the desired selectivity at the cost of high computational complexity and frequency-dependent far-field array patterns. A novel approach to receiver beamforming is the use of massively parallel 2-D infinite impulse response (IIR) fan filterbanks for the synthesis of relatively frequency-independent RF beams at an order of magnitude lower multiplier complexity compared to FFT or FIR filter based conventional algorithms. The 2-D IIR filterbanks demand fast digital processing that can support several octaves of RF bandwidth and fast analog-to-digital converters (ADCs) for RF-to-bits type direct conversion of wideband antenna element signals. Fast digital implementation platforms that can realize the high-precision recursive filter structures necessary for real-time beamforming at RF radio bandwidths are also desired. We propose a novel technique that combines a passive RF channelizer, multichannel ADC technology, and single-phase massively parallel 2-D IIR digital fan filterbanks, realized at low complexity using FPGA and/or ASIC technology. The approach natively supports a larger bandwidth than the maximum clock frequency of the digital implementation technology. We also strive to achieve More-than-Moore throughput by processing a wideband RF signal having content with N-fold (B = N Fclk/2) bandwidth compared to the maximum clock frequency Fclk Hz of the digital VLSI platform under consideration. Such an increase in bandwidth is achieved without the use of polyphase signal processing or time-interleaved ADC methods. That is, all digital processors operate at the same Fclk clock frequency without phasing, while wideband operation is achieved by sub-sampling of narrower sub-bands at the RF channelizer outputs.
The chronology of third molar mineralization by digital orthopantomography.
Maled, Venkatesh; Vishwanath, S B
2016-10-01
The present study was designed to determine the chronology of third molar mineralization to establish Indian reference data and to observe the advantages of digital orthopantomography. Therefore, a cross-sectional study was undertaken by evaluating 167 digital orthopantomographs in order to assess the mineralization status of the mandibular third molar of Caucasian individuals (85 males and 82 females) between the age of 14 and 24. The evaluation was carried out using the 8-stage developmental scheme of Demirjian et al (1973). The range, mean age, standard deviation and Student t-test are presented for each stage of mineralization in all four quadrants. Statistically significant differences between males and females were not found for all four third molars. All the individuals in this study with mature third molar were at least 18 years of age. For medicolegal purposes, the likelihood of whether an Indian is older than 18 years or not was determined. The advantage of digital orthopantomography in the interpretation of the tooth mineralization over the traditional method was acknowledged. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Research on moving object detection based on frog's eyes
NASA Astrophysics Data System (ADS)
Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan
2008-12-01
Based on the information-processing mechanism of the frog's eye, this paper discusses a bionic detection technology suitable for processing object information in the manner of frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism comprising pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a particular color and shape; experiments indicate that such objects can be detected even against a cluttered background. A moving-object detection electronic model imitating biological vision based on the frog's eye is established. In this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. During parallel processing, video information can be captured, processed, and displayed at the same time; information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can cover a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm indicate that this system can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology that imitates biological vision.
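A desktop analogue of those simulation experiments can be put together with OpenCV: frame differencing isolates moving regions and the Canny operator extracts their edges. This is a hedged illustration of the idea, not the FPGA/DSP pipeline described above.

```python
import cv2

def moving_edges(prev_gray, cur_gray, motion_thresh=25):
    """Edges restricted to regions that changed between two frames:
    frame differencing isolates motion, Canny extracts the edges."""
    motion = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(motion, motion_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)        # close small gaps
    edges = cv2.Canny(cur_gray, 100, 200)
    return cv2.bitwise_and(edges, mask)

cap = cv2.VideoCapture(0)                              # any video source
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("moving edges", moving_edges(prev, cur))
    prev = cur
    if cv2.waitKey(1) == 27:                           # Esc to quit
        break
cap.release()
```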
NASA Astrophysics Data System (ADS)
Huffmann, Master; Siegel, Edward Carl-Ludwig
2013-03-01
Newcomb-Benford (NeWBe)-Siegel log-law BEC Digit-Physics Network/Graph-Physics Barabasi et al. evolving-"complex"-networks/graphs BEC JAMMING DOA attacks: Amazon (weekends: Microsoft I.E.-7/8 (vs. Firefox): Memorial-day, Labor-day, ...), MANY U.S. banks: WF, BoA, UB, UBS, ... instantiations AGAIN militate for MANDATORY CONVERSION to PARALLEL ANALOG FAULT-TOLERANT but slow(er) SECURITY-ASSURANCE networks/graphs in parallel with faster "sexy" DIGITAL networks/graphs: "Cloud", telecomm: n-G, ..., because of common ACHILLES-HEEL VULNERABILITY: DIGITS!!! "In fast-hare versus slow-tortoise race, Slow-But-Steady ALWAYS WINS!!!" (Zeno). {Euler [#s (1732)] ∑-∏()-Riemann [Monats. Akad. Berlin (1859)] ∑-∏()-Kummer-Bernoulli (#s)}-Newcomb [Am. J. Math. 4(1), 39 (81), discovery of the QUANTUM!!!]-{Planck (01)}-{Einstein (05)}-Poincaré [Calcul Probabilités, 313 (12)]-Weyl [Goett. Nach. (14); Math. Ann. 77, 313 (16)]-(Bose (24)-Einstein (25)) vs. (Fermi (27)-Dirac (27))-Menger [Dimensiontheorie (29)]-Benford [J. Am. Phil. Soc. 78, 115 (38)]-Kac [Maths Stats.-Reason. (55)]-Raimi [Sci. Am. 221, 109 (69)]-Jech-Hill [Proc. AMS, 123, 3, 887 (95)] log-function
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Pain, Bedabrata (Inventor)
1999-01-01
An analog-to-digital converter for on-chip focal-plane image sensor applications. The analog-to-digital converter utilizes a single charge integrating amplifier in a charge balancing architecture to implement successive approximation analog-to-digital conversion. This design requires minimal chip area and has high speed and low power dissipation for operation in the 2-10 bit range. The invention is particularly well suited to CMOS on-chip applications requiring many analog-to-digital converters, such as column-parallel focal-plane architectures.
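The successive-approximation principle itself is easy to model in software: starting from the most significant bit, each trial bit is kept only if the running DAC estimate stays at or below the input (the charge-balancing comparison). A minimal, idealized sketch, not the patented circuit:

```python
def sar_adc(v_in, v_ref=1.0, bits=10):
    """Idealized successive-approximation conversion: each step compares the
    input against the running DAC estimate and keeps the trial bit if the
    estimate does not exceed the input."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)
        if trial * v_ref / (1 << bits) <= v_in:
            code = trial
    return code

# 0.6314 V with a 1 V reference and 10 bits -> code 646 (about 0.6309 V).
print(sar_adc(0.6314, 1.0, 10))
```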
Method and apparatus for data sampling
Odell, Daniel M. C.
1994-01-01
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium.
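A hedged software sketch of the discrimination step: given a stream of A/D samples, keep threshold-crossing local maxima as events and merge samples belonging to the same pulse. The threshold, gap and synthetic pulses are illustrative assumptions, not the patented circuit.

```python
import numpy as np

def detect_events(samples, threshold, min_gap=5):
    """Digitally discriminate detector events in a stream of A/D samples:
    keep samples above threshold as candidate events and merge candidates
    closer together than min_gap samples, retaining the peak amplitude."""
    s = np.asarray(samples, dtype=float)
    above = np.where(s > threshold)[0]
    events = []
    for idx in above:
        if events and idx - events[-1][0] < min_gap:
            # Same pulse: keep the larger amplitude.
            if s[idx] > events[-1][1]:
                events[-1] = (idx, s[idx])
        else:
            events.append((idx, s[idx]))
    return events   # (sample index, pulse height) per detected event

# Synthetic stream: baseline noise with two pulses.
rng = np.random.default_rng(0)
stream = rng.normal(0, 0.05, 1000)
stream[200:205] += [0.2, 0.8, 1.0, 0.6, 0.3]
stream[700:704] += [0.3, 0.9, 0.7, 0.2]
print(detect_events(stream, threshold=0.5))
```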
2012-09-01
relative performance of several conventional SQL and NoSQL databases with a set of one billion file block hashes. (Keywords: digital forensics, sector hashing, full...)
A Framework for Automated Digital Forensic Reporting
2009-03-01
provide a simple way to extract local accounts from a full system image. Unix, Linux and the BSD variants store user accounts in the /etc/passwd file...with hashes of the user passwords in the /etc/shadow file for Linux or /etc/master.passwd for BSD. /etc/passwd also contains mappings from usernames to...passwd file may not map directly to real-world names, it can be a crucial link in this eventual mapping. Following are two examples where it could prove...
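As a hedged illustration of that account-extraction step, the sketch below parses a passwd file recovered from a mounted image and keeps the username-to-GECOS (real name) mapping discussed above; the evidence path is hypothetical.

```python
def parse_passwd(path="/etc/passwd"):
    """Extract local account records from a passwd file recovered from a
    system image: username, UID, GID, GECOS (often a real name), home, shell."""
    accounts = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split(":")
            if len(fields) != 7 or not fields[2].isdigit():
                continue                       # malformed or truncated record
            user, _pw, uid, gid, gecos, home, shell = fields
            accounts.append({"user": user, "uid": int(uid), "gid": int(gid),
                             "name": gecos.split(",")[0], "home": home, "shell": shell})
    return accounts

# Point this at etc/passwd inside a mounted or carved file-system image, e.g.:
# for acct in parse_passwd("/mnt/evidence/etc/passwd"):
#     print(acct["user"], "->", acct["name"] or "(no GECOS name)")
```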
Data Hiding and the Statistics of Images
NASA Astrophysics Data System (ADS)
Cox, Ingemar J.
The fields of digital watermarking, steganography and steganalysis, and content forensics are closely related. In all cases, there is a class of images that is considered “natural”, i.e. images that do not contain watermarks, images that do not contain covert messages, or images that have not been tampered with. And, conversely, there is a class of images that is considered to be “unnatural”, i.e. images that contain watermarks, images that contain covert messages, or images that have been tampered with.
ORNL's war on crime, technically speaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiques, P.
This paper describes research being carried out by the Center for Applied Science and Technology for Law Enforcement (CASTLE) at Oak Ridge National Laboratory. This program works on projects which are solvable, affordable, and outside the scope of the private sector. Examples are presented of work related to: the lifetime of children's fingerprints compared to adults'; the development of ways of providing cooler body armor; digital enhancement technology applied to security-camera images from crime scenes; and victim identification by skeletal reconstruction for use by forensic anthropologists.
Flexible All-Digital Receiver for Bandwidth Efficient Modulations
NASA Technical Reports Server (NTRS)
Gray, Andrew; Srinivasan, Meera; Simon, Marvin; Yan, Tsun-Yee
2000-01-01
An all-digital high data rate parallel receiver architecture developed jointly by Goddard Space Flight Center and the Jet Propulsion Laboratory is presented. This receiver utilizes only a small number of high speed components along with a majority of lower speed components operating in a parallel frequency domain structure implementable in CMOS, and can currently process up to 600 Mbps with standard QPSK modulation. Performance results for this receiver for bandwidth efficient QPSK modulation schemes such as square-root raised cosine pulse shaped QPSK and Feher's patented QPSK are presented, demonstrating the flexibility of the receiver architecture.
Digital image processing using parallel computing based on CUDA technology
NASA Astrophysics Data System (ADS)
Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.
2017-01-01
This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that a noise removal algorithm should meet when applied to computed tomography projections. It provides a comparison of the performance with and without using the GPU, as well as with different percentages of CPU and GPU usage.
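A minimal sketch of the idea: express a noise-removal filter entirely in array primitives so the same code can run on the GPU. Here a mean filter is written with NumPy FFTs; swapping the import for CuPy (which mirrors the NumPy API) moves the computation onto a CUDA device. This is an illustration, not the article's benchmark code.

```python
import numpy as np
# import cupy as np   # drop-in replacement: the same code then runs on a CUDA GPU

def box_denoise(image, k=5):
    """Simple k x k mean filter implemented with FFT convolution, so the whole
    computation maps onto array primitives a GPU executes in parallel."""
    h, w = image.shape
    kernel = np.zeros((h, w))
    kernel[:k, :k] = 1.0 / (k * k)
    # Circular convolution via the convolution theorem; roll re-centres the kernel.
    out = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real
    return np.roll(out, (-(k // 2), -(k // 2)), axis=(0, 1))

noisy = np.random.rand(1024, 1024)
clean = box_denoise(noisy, k=7)
```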
NASA Astrophysics Data System (ADS)
Shinya, A.; Ishihara, T.; Inoue, K.; Nozaki, K.; Kita, S.; Notomi, M.
2018-02-01
We propose an optical parallel adder based on a binary decision diagram that can calculate simply by propagating light through electrically controlled optical pass gates. The CARRY and its complement operations are multiplexed in one circuit by a wavelength-division multiplexing scheme to reduce the number of optical elements, and only a single gate constitutes the critical path for one-digit calculation. The processing time reaches picoseconds per digit when 100-μm-long optical pass gates are used, which is ten times faster than a CMOS circuit.
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
NASA Astrophysics Data System (ADS)
Wang, Wei; Dong, Jing; Tan, Tieniu
With the availability of various digital image edit tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a lossless compressed tampered image when its unchanged region is output of JPEG decompressor. We find the tampered region and the unchanged region have different responses for JPEG compression. The tampered region has stronger high frequency quantization noise than the unchanged region. We employ PCA to separate different spatial frequencies quantization noises, i.e. low, medium and high frequency quantization noise, and extract high frequency quantization noise for tampered region localization. Post-processing is involved to get final localization result. The experimental results prove the effectiveness of our proposed method.
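A heavily simplified, hedged variant of the idea (without the PCA separation of frequency bands used in the paper): recompress the suspect image as JPEG, treat the difference as compression noise, and flag blocks whose noise energy stands out.

```python
import numpy as np
from PIL import Image
from io import BytesIO

def compression_noise_map(img, quality=90, block=16):
    """Blockwise energy of the difference between an image and its JPEG
    recompression; previously JPEG-decompressed (unchanged) regions tend to
    show less new quantization noise than tampered regions."""
    arr = np.asarray(img.convert("L"), dtype=float)
    buf = BytesIO()
    img.convert("L").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    rec = np.asarray(Image.open(buf), dtype=float)
    noise = (arr - rec) ** 2
    h = (noise.shape[0] // block) * block
    w = (noise.shape[1] // block) * block
    blocks = noise[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))

# energy = compression_noise_map(Image.open("suspect.png"))   # hypothetical file
# suspect_blocks = energy > energy.mean() + 2 * energy.std()
```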
Development of a combined portable x-ray fluorescence and Raman spectrometer for in situ analysis.
Guerra, M; Longelin, S; Pessanha, S; Manso, M; Carvalho, M L
2014-06-01
In this work, we have built a portable X-ray fluorescence (XRF) spectrometer in a planar configuration coupled to a Raman head and a digital optical microscope, for in situ analysis. Several geometries for the XRF apparatus and digital microscope are possible in order to overcome spatial constraints and provide better measurement conditions. With this combined spectrometer, we are now able to perform XRF and Raman measurements in the same point without the need for sample collection, which can be crucial when dealing with cultural heritage objects, as well as forensic analysis. We show the capabilities of the spectrometer by measuring several standard reference materials, as well as other samples usually encountered in cultural heritage, geological, as well as biomedical studies.
Multi-gigabit optical interconnects for next-generation on-board digital equipment
NASA Astrophysics Data System (ADS)
Venet, Norbert; Favaro, Henri; Sotom, Michel; Maignan, Michel; Berthon, Jacques
2017-11-01
Parallel optical interconnects are experimentally assessed as a technology that may offer the high-throughput data communication capabilities required to the next-generation on-board digital processing units. An optical backplane interconnect was breadboarded, on the basis of a digital transparent processor that provides flexible connectivity and variable bandwidth in telecom missions with multi-beam antenna coverage. The unit selected for the demonstration required that more than tens of Gbit/s be supported by the backplane. The demonstration made use of commercial parallel optical link modules at 850 nm wavelength, with 12 channels running at up to 2.5 Gbit/s. A flexible optical fibre circuit was developed so as to route board-to-board connections. It was plugged to the optical transmitter and receiver modules through 12-fibre MPO connectors. BER below 10-14 and optical link budgets in excess of 12 dB were measured, which would enable to integrate broadcasting. Integration of the optical backplane interconnect was successfully demonstrated by validating the overall digital processor functionality.
Multi-gigabit optical interconnects for next-generation on-board digital equipment
NASA Astrophysics Data System (ADS)
Venet, Norbert; Favaro, Henri; Sotom, Michel; Maignan, Michel; Berthon, Jacques
2004-06-01
Parallel optical interconnects are experimentally assessed as a technology that may offer the high-throughput data communication capabilities required by next-generation on-board digital processing units. An optical backplane interconnect was breadboarded on the basis of a digital transparent processor that provides flexible connectivity and variable bandwidth in telecom missions with multi-beam antenna coverage. The unit selected for the demonstration required that more than tens of Gbit/s be supported by the backplane. The demonstration made use of commercial parallel optical link modules at 850 nm wavelength, with 12 channels running at up to 2.5 Gbit/s. A flexible optical fibre circuit was developed so as to route board-to-board connections. It was plugged into the optical transmitter and receiver modules through 12-fibre MPO connectors. A BER below 10^-14 and optical link budgets in excess of 12 dB were measured, which would enable broadcasting to be integrated. Integration of the optical backplane interconnect was successfully demonstrated by validating the overall digital processor functionality.
Toledo, Víctor A; Fonseca, Gabriel M; González, Paula A; Ibarra, Luis; Torres, Francisco J; Sáez, Pedro L
2017-03-01
In animal bites, dental attributes can be fundamental in identifying the marks made by various species on different matrices. Although rodent bite marks have been studied in the context of postmortem interference, little research has used different baits to analyze these marks, linking them not only to specific behavior patterns but also to the possibility of structural damage. Twenty mice (Mus musculus) were exposed to different baits to study their bite marks in a controlled model. The known pattern of parallel and multiple grooves was seen in all baits, but polyvinyl chloride and fiber-optic cable differed significantly from each other and from the other baits. Some baits showed patterns of anchorage of the upper incisors and spacing between the lower incisors during gnawing. This technical note presents a novel model of analysis for situations where veterinarians and/or dentists may be asked to give an opinion on alleged animal bite marks. © 2016 American Academy of Forensic Sciences.
Koenig, Bruce E; Lacey, Douglas S
2014-07-01
In this research project, nine small digital audio recorders were tested using five sets of 30-min recordings in all available recording modes, with consistent audio material, identical source and microphone locations, and identical acoustic environments. The averaged direct current (DC) offset values and standard deviations were measured for 30-sec and 1-, 2-, 3-, 6-, 10-, 15-, and 30-min segments. The research found an inverse association between segment lengths and the standard deviation values and that lengths beyond 30 min may not meaningfully reduce the standard deviation values. This research supports previous studies indicating that measured averaged DC offsets should only be used for exclusionary purposes in authenticity analyses and that they exhibit consistent values when the general acoustic environment and microphone/recorder configurations are held constant. Measured average DC offset values from exemplar recorders may not be directly comparable to those of submitted digital audio recordings without exactly duplicating the acoustic environment and microphone/recorder configurations. © 2014 American Academy of Forensic Sciences.
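A minimal sketch of the kind of measurement described above (an assumed workflow, not the authors' tooling): compute the averaged DC offset and its standard deviation over fixed-length segments of a mono PCM recording, as would be done for exclusionary comparisons.

```python
# Hedged sketch; segment length and sample rate are parameters, not study values.
import numpy as np

def dc_offset_stats(samples, sample_rate, segment_seconds=30.0):
    seg_len = int(segment_seconds * sample_rate)
    n_segs = len(samples) // seg_len
    if n_segs == 0:
        raise ValueError("recording shorter than one segment")
    segs = np.asarray(samples[:n_segs * seg_len], dtype=float).reshape(n_segs, seg_len)
    offsets = segs.mean(axis=1)                       # per-segment DC offset
    sd = offsets.std(ddof=1) if n_segs > 1 else 0.0
    return offsets.mean(), sd

# Example with synthetic data: 30 min of noise at 44.1 kHz with a small DC bias.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(0.002, 0.1, size=44100 * 60 * 30)
    mean_offset, sd = dc_offset_stats(x, 44100, segment_seconds=60.0)
    print(f"averaged DC offset = {mean_offset:.5f}, sd = {sd:.5f}")
```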
Exsanguinated blood volume estimation using fractal analysis of digital images.
Sant, Sonia P; Fairgrieve, Scott I
2012-05-01
The estimation of bloodstain volume using fractal analysis of digital images of passive blood stains is presented. Binary digital photos of bloodstains of known volumes (ranging from 1 to 7 mL), dispersed in a defined area, were subjected to image analysis using FracLac V. 2.0 for ImageJ. The box-counting method was used to generate a fractal dimension for each trial. A positive correlation between the generated fractal number and the volume of blood was found (R² = 0.99). Regression equations were produced to estimate the volume of blood in blind trials. An error rate ranging from 78% for 1 mL to 7% for 6 mL demonstrated that as the volume increases, so does the accuracy of the volume estimation. This preliminary study showed that bloodstain patterns can be deconstructed into mathematical parameters, thus removing the subjective element inherent in other methods of volume estimation. © 2012 American Academy of Forensic Sciences.
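For readers unfamiliar with box counting, the following hedged sketch shows the general approach (the study itself used FracLac for ImageJ; the box sizes and the simple linear regression step here are illustrative assumptions): estimate a box-counting dimension from a binary stain image and regress known volumes against it.

```python
# Illustrative sketch of box-counting plus a volume regression; not the study code.
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in box_sizes:
        h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()      # boxes containing stain pixels
        counts.append(max(occupied, 1))
    # slope of log(count) vs log(1/box size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

def fit_volume_model(dimensions, volumes_ml):
    # simple least-squares line: volume ~ a * D + b
    a, b = np.polyfit(dimensions, volumes_ml, 1)
    return lambda d: a * d + b
```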
Lupariello, Francesco; Curti, Serena; Barber Duval, Janet; Abbattista, Giovanni; Di Vella, Giancarlo
2018-05-16
Geberth in 2006 stated that "staging is a conscious criminal action on the part of an offender to thwart an investigation." In the present paper, two crime scenes staged through the handling of digital evidence are reported. The first case involves a 50-year-old woman who had been living with the offender for three years before he murdered her at the end of their relationship. He staged the scene as a sex-related crime committed by an unknown perpetrator. The second case concerns a young woman who was found dead in Southern Italy in January 2004 with a gunshot wound to the forehead. The boyfriend, responsible for the murder, had staged the crime scene as a suicide. Three years earlier in Germany, he had also murdered the victim's mother. In both cases, the correlation of physical and digital forensic evidence was crucial in determining the manner of death. Copyright © 2018 Elsevier B.V. All rights reserved.
Kim, Eun Hye; Lee, Hwan Young; Yang, In Seok; Jung, Sang-Eun; Yang, Woo Ick; Shin, Kyoung-Jin
2016-05-01
The next-generation sequencing (NGS) method has been utilized to analyze short tandem repeat (STR) markers, which are routinely used for human identification purposes in the forensic field. Some researchers have demonstrated the successful application of the NGS system to STR typing, suggesting that NGS technology may be an alternative or additional method to overcome limitations of capillary electrophoresis (CE)-based STR profiling. However, there has been no available multiplex PCR system that is optimized for NGS analysis of forensic STR markers. Thus, we constructed a multiplex PCR system for the NGS analysis of 18 markers (13 CODIS STRs, D2S1338, D19S433, Penta D, Penta E and amelogenin) by designing amplicons in the size range of 77-210 base pairs. Then, PCR products were generated from two single-source samples, mixed samples and artificially degraded DNA samples using the multiplex PCR system, and were prepared for sequencing on the MiSeq system through subsequent construction of a barcoded library. By performing NGS and analyzing the data, we confirmed that the resultant STR genotypes were consistent with those of CE-based typing. Moreover, sequence variations were detected in targeted STR regions. Through the use of small-sized amplicons, the developed multiplex PCR system enables researchers to obtain successful STR profiles even from artificially degraded DNA as well as from STR loci which are analyzed with large-sized amplicons in CE-based commercial kits. In addition, successful profiles can be obtained from mixtures up to a 1:19 ratio. Consequently, the developed multiplex PCR system, which produces small-sized amplicons, can be successfully applied to STR NGS analysis of forensic casework samples such as mixtures and degraded DNA samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Walsh, Susan; Chaitanya, Lakshmi; Clarisse, Lindy; Wirken, Laura; Draus-Barini, Jolanta; Kovatsi, Leda; Maeda, Hitoshi; Ishikawa, Takaki; Sijen, Titia; de Knijff, Peter; Branicki, Wojciech; Liu, Fan; Kayser, Manfred
2014-03-01
Forensic DNA Phenotyping or 'DNA intelligence' tools are expected to aid police investigations and find unknown individuals by providing information on externally visible characteristics of unknown suspects, perpetrators and missing persons from biological samples. This is especially useful in cases where conventional DNA profiling or other means remain non-informative. Recently, we introduced the HIrisPlex system, capable of predicting both eye and hair colour from DNA. In the present developmental validation study, we demonstrate that the HIrisPlex assay performs in full agreement with the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines providing an essential prerequisite for future HIrisPlex applications to forensic casework. The HIrisPlex assay produces complete profiles down to only 63 pg of DNA. Species testing revealed human specificity for a complete HIrisPlex profile, while only non-human primates showed the closest full profile at 20 out of the 24 DNA markers, in all animals tested. Rigorous testing of simulated forensic casework samples such as blood, semen, saliva stains, hairs with roots as well as extremely low quantity touch (trace) DNA samples, produced complete profiles in 88% of cases. Concordance testing performed between five independent forensic laboratories displayed consistent reproducible results on varying types of DNA samples. Due to its design, the assay caters for degraded samples, underlined here by results from artificially degraded DNA and from simulated casework samples of degraded DNA. This aspect was also demonstrated previously on DNA samples from human remains up to several hundreds of years old. With this paper, we also introduce enhanced eye and hair colour prediction models based on enlarged underlying databases of HIrisPlex genotypes and eye/hair colour phenotypes (eye colour: N = 9188 and hair colour: N = 1601). Furthermore, we present an online web-based system for individual eye and hair colour prediction from full and partial HIrisPlex DNA profiles. By demonstrating that the HIrisPlex assay is fully compatible with the SWGDAM guidelines, we provide the first forensically validated DNA test system for parallel eye and hair colour prediction now available to forensic laboratories for immediate casework application, including missing person cases. Given the robustness and sensitivity described here and in previous work, the HIrisPlex system is also suitable for analysing old and ancient DNA in anthropological and evolutionary studies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Locus and persistence of capacity limitations in visual information processing.
Kleiss, J A; Lane, D M
1986-05-01
Although there is considerable evidence that stimuli such as digits and letters are extensively processed in parallel and without capacity limitations, recent data suggest that only the features of stimuli are processed in parallel. In an attempt to reconcile this discrepancy, we used the simultaneous/successive detection paradigm with stimuli from experiments indicating parallel processing and with stimuli from experiments indicating that only features can be processed in parallel. In Experiment 1, large differences between simultaneous and successive presentations were obtained with an R target among P and Q distractors and among P and B distractors, but not with digit targets among letter distractors. As predicted by the feature integration theory of attention, false-alarm rates in the simultaneous condition were much higher than in the successive condition with the R/PQ stimuli. In Experiment 2, the possibility that attention is required for any difficult discrimination was ruled out as an explanation of the discrepancy between the digit/letter results and the R/PQ and R/PB results. Experiment 3A replicated the R/PQ and R/PB results of Experiment 1, and Experiment 3B extended these findings to a new set of stimuli. In Experiment 4, we found that large amounts of consistent practice did not generally eliminate capacity limitations. From this series of experiments we strongly conclude that the notion of capacity-free letter perception has limited generality.
Parallel-Processing Equalizers for Multi-Gbps Communications
NASA Technical Reports Server (NTRS)
Gray, Andrew; Ghuman, Parminder; Hoy, Scott; Satorius, Edgar H.
2004-01-01
Architectures have been proposed for the design of frequency-domain least-mean-square complex equalizers that would be integral parts of parallel-processing digital receivers of multi-gigahertz radio signals and other quadrature-phase-shift-keying (QPSK) or 16-quadrature-amplitude-modulation (16-QAM) data signals at rates of multiple gigabits per second. The term "equalizers" as used here denotes receiver subsystems that compensate for distortions in the phase and frequency responses of the broad-band radio-frequency channels typically used to convey such signals. The proposed architectures are suitable for realization in very-large-scale integrated (VLSI) circuitry and, in particular, complementary metal oxide semiconductor (CMOS) application-specific integrated circuits (ASICs) operating at frequencies lower than modulation symbol rates. A digital receiver of the type to which the proposed architecture applies (see Figure 1) would include an analog-to-digital converter (A/D) operating at a rate, fs, of 4 samples per symbol period. To obtain the high speed necessary for sampling, the A/D and a 1:16 demultiplexer immediately following it would be constructed as GaAs integrated circuits. The parallel-processing circuitry downstream of the demultiplexer, including a demodulator followed by an equalizer, would operate at a rate of only fs/16 (in other words, at 1/4 of the symbol rate). The output from the equalizer would be four parallel streams of in-phase (I) and quadrature (Q) samples.
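To make the frequency-domain LMS idea concrete, here is a simplified, hedged sketch of a block frequency-domain LMS equalizer operating on QPSK samples. It uses an unconstrained (circular-convolution) update with per-bin normalization for brevity; the block length, step size, and channel model are assumptions, and the scalar Python model makes no attempt to reproduce the parallel VLSI architecture described above.

```python
# Simplified frequency-domain (N)LMS equalizer sketch; not the proposed architecture.
import numpy as np

def fd_lms_equalize(rx, training, block=64, mu=0.1):
    """rx, training: equal-length complex arrays; returns equalized symbols."""
    W = np.ones(block, dtype=complex)              # frequency-domain tap weights
    out = []
    for k in range(0, len(rx) - block + 1, block):
        X = np.fft.fft(rx[k:k + block])
        Y = np.fft.ifft(W * X)                     # equalized time-domain block
        E = training[k:k + block] - Y              # error against known training data
        # per-bin normalized LMS update
        W += mu * np.conj(X) * np.fft.fft(E) / (np.abs(X) ** 2 + 1e-9)
        out.append(Y)
    return np.concatenate(out)

# Example: QPSK through a mild two-tap channel, then equalized.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    syms = (rng.integers(0, 2, 4096) * 2 - 1) + 1j * (rng.integers(0, 2, 4096) * 2 - 1)
    rx = syms + 0.3 * np.roll(syms, 1)             # crude channel model
    eq = fd_lms_equalize(rx, syms)
    print("residual error power:", np.mean(np.abs(eq[512:] - syms[512:len(eq)]) ** 2))
```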
Photography/Digital Imaging: Parallel & Paradoxical Histories.
ERIC Educational Resources Information Center
Witte, Mary Stieglitz
With the introduction of photography and photomechanical printing processes in the 19th century, the first age of machine pictures and reproductions emerged. The 20th century introduced computer image processing systems, creating a digital imaging revolution. Rather than concentrating on the adversarial aspects of the computer's influence on…
Optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1979-01-01
High-capacity optical memories with relatively high data-transfer rates and multiport simultaneous-access capability may serve as the basis for new computer architectures. Several computer structures that might profitably use such memories are: (a) a simultaneous record-access system, (b) a simultaneously shared memory computer system, and (c) a parallel digital processing structure.
NASA Astrophysics Data System (ADS)
Yin, Guoyan; Zhang, Limin; Zhang, Yanqi; Liu, Han; Du, Wenwen; Ma, Wenjuan; Zhao, Huijuan; Gao, Feng
2018-02-01
Pharmacokinetic diffuse fluorescence tomography (DFT) can describe the metabolic processes of fluorescent agents in biomedical tissue and provide helpful information for tumor differentiation. In this paper, a dynamic DFT system was developed by employing digital lock-in photon counting with square-wave modulation, which offers ultra-high sensitivity and measurement parallelism. In this system, 16 frequency-encoded laser diodes (LDs), driven by a self-designed light-source system, were distributed evenly in the imaging plane and irradiated simultaneously. Meanwhile, 16 detection fibers collected the emission light in parallel through the digital lock-in photon-counting module. The fundamental performance of the proposed system was assessed with phantom experiments in terms of stability, linearity, crosstalk rejection, and image reconstruction. The results validated the feasibility of the proposed dynamic DFT system.
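As a hedged illustration of digital lock-in detection applied to photon counting (the bin width, modulation frequency, and demodulation details below are assumptions, not taken from the paper), the sketch demodulates a photon-count time series that is square-wave modulated at a known source frequency.

```python
# Illustrative digital lock-in demodulation of square-wave-modulated photon counts.
import numpy as np

def square_wave_lockin(counts, bin_time, f_mod):
    """counts: photon counts per time bin; returns the demodulated amplitude."""
    t = np.arange(len(counts)) * bin_time
    ref_i = np.sign(np.sin(2 * np.pi * f_mod * t))     # in-phase square reference
    ref_q = np.sign(np.cos(2 * np.pi * f_mod * t))     # quadrature square reference
    counts = counts - np.mean(counts)                  # remove DC before mixing
    i_comp = np.mean(counts * ref_i)
    q_comp = np.mean(counts * ref_q)
    return np.hypot(i_comp, q_comp)

# Example: a 1 kHz modulated source buried in a Poisson background.
if __name__ == "__main__":
    rng = np.random.default_rng(2)
    bin_time, f_mod, n_bins = 1e-5, 1000.0, 200000
    t = np.arange(n_bins) * bin_time
    rate = 50 + 5 * (np.sign(np.sin(2 * np.pi * f_mod * t)) + 1) / 2   # counts/bin
    counts = rng.poisson(rate)
    print("demodulated amplitude:", square_wave_lockin(counts, bin_time, f_mod))
```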
Fingerprint Ridge Density as a Potential Forensic Anthropological Tool for Sex Identification.
Dhall, Jasmine Kaur; Kapoor, Anup Kumar
2016-03-01
In cases of partial or poor print recovery and lack of database/suspect print, fingerprint evidence is generally neglected. In light of such constraints, this study was designed to examine whether ridge density can aid in narrowing down the investigation for sex identification. The study was conducted on the right-hand index digit of 245 males and 246 females belonging to the Punjabis of Delhi region. Five ridge density count areas, namely upper radial, radial, ulnar, upper ulnar, and proximal, were selected and designated. Probability of sex origin was calculated, and stepwise discriminant function analysis was performed to determine the discriminating ability of the selected areas. Females were observed with a significantly higher ridge density than males in all the five areas. Discriminant function analysis and logistic regression exhibited 96.8% and 97.4% accuracy, respectively, in sex identification. Hence, fingerprint ridge density is a potential tool for sex identification, even from partial prints. © 2015 American Academy of Forensic Sciences.
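The classification stage reported above (discriminant analysis and logistic regression on five ridge-density counts) can be sketched as follows. The data here are synthetic placeholders generated only to make the snippet runnable; the feature layout mirrors the paper's five count areas, and scikit-learn is assumed to be available.

```python
# Hedged sketch with hypothetical data; not the study's dataset or exact model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Synthetic ridge-density counts for the five areas; females are simulated with
# slightly higher densities, in line with the direction reported in the study.
males = rng.normal(loc=[12, 12, 11, 12, 11], scale=1.5, size=(245, 5))
females = rng.normal(loc=[14, 14, 13, 14, 13], scale=1.5, size=(246, 5))
X = np.vstack([males, females])
y = np.array([0] * len(males) + [1] * len(females))   # 0 = male, 1 = female

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```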
New Technologies, New Problems, New Laws.
Recupero, Patricia R
2016-09-01
Forensic psychiatrists in the 21st century can expect to encounter technology-related social problems for which existing legal remedies are limited. In addition to the inadequate protection of adolescents using social media as outlined by Costello et al., current laws are often poorly suited to remedy problems such as cyberharassment, sexting among minors, and the publication of threatening or harmful communications online. Throughout history, technological developments have often preceded the introduction of new laws or the careful revision of existing laws. This pattern is evident in many of the newer social problems that involve technology, including cyberbullying, online impersonation, and revenge porn. As specialists working at the intersection of human behavior and the law, forensic psychiatrists are uniquely situated to help legal professionals and others understand the impact of internet-related problematic behaviors on victims and, perhaps, to assist in the development of new legal remedies that are better tailored to our increasingly digital society. © 2016 American Academy of Psychiatry and the Law.
[Comparision of Different Methods of Area Measurement in Irregular Scar].
Ran, D; Li, W J; Sun, Q G; Li, J Q; Xia, Q
2016-10-01
To determine a measurement standard for irregular scar area by comparing the advantages and disadvantages of different methods of measuring the same irregular scar area. The irregular scar area was captured by digital scanning and measured by the coordinate reading method, the AutoCAD pixel method, the Photoshop lasso pixel method, the Photoshop magic wand filled-pixel method and Foxit PDF reading software, and aspects of these methods such as measurement time, repeatability, whether the results could be recorded and whether they could be traced were compared and analyzed. There was no significant difference in the scar areas obtained by the measurement methods above. However, there were statistical differences in measurement time and repeatability between one and multiple operators, and only the Foxit PDF reading software allowed results to be traced back. The methods above can all be used for measuring scar area, but each one has its advantages and disadvantages. It is necessary to develop new measurement software for forensic identification. Copyright© by the Editorial Department of Journal of Forensic Medicine
Stature estimation and formulation based on ulna length in a Kurdish racial subgroup.
Ghanbaril, Kimia; Nazari, Ali Reza; Ghanbari, Ali; Chehrei, Shima
2016-01-01
Measuring stature is useful for the forensic and anthropometrical sciences. The present study was conducted to calculate stature from ulna length in a Kurdish racial subgroup living in Iran. In this study, 50 females aged 19-24 were recruited. The ulna length of the subjects was measured independently on the left and right sides using a digital sliding caliper. The height was measured between vertex and floor. The height (Y) was also estimated by linear regression formulas from the length of the right (X1) or left ulna (X2). For the right side, Y1 = 59.48 + 4.005 X1 ± 4.09295 (R = 0.753); for the left side, Y2 = 63.44 + 3.887 X2 ± 4.24106 (R = 0.731). The derived formulae are population specific and are designed for use in forensic and anthropometric skeletal analysis of the Kurdish racial subgroup. These data provide a scientific basis for further investigations on racial subgroups living in Iran.
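A worked application of the reported regression formulae, using the coefficients quoted in the abstract (units are assumed to be centimetres, which the abstract does not state; the ± terms are reproduced as comments):

```python
# Coefficients taken from the abstract; cm units are an assumption.
def stature_from_right_ulna(x1_cm):
    return 59.48 + 4.005 * x1_cm          # +/- 4.09295, R = 0.753

def stature_from_left_ulna(x2_cm):
    return 63.44 + 3.887 * x2_cm          # +/- 4.24106, R = 0.731

# Example: a 24 cm right ulna gives an estimated stature of about 155.6 cm.
print(stature_from_right_ulna(24.0))
```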
Real-time color image processing for forensic fiber investigations
NASA Astrophysics Data System (ADS)
Paulsson, Nils
1995-09-01
This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples. An ordinary investigation separates the material into well above 100,000 video images to analyze. The system is based on standard components such as a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping-motor control as the main parts. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps: the first step is fast, direct color identification of objects in the analyzed video images, and the second step applies a more complex and time-consuming analysis to the detected objects to identify single fiber fragments for subsequent analysis with more selective techniques.
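A minimal sketch of the two-pass idea (fast color gating followed by a slower object-level check), using OpenCV's HSV space as a stand-in for the HSI system mentioned above; the hue range and component-size limits are assumptions, and this is not the original PC/AT implementation.

```python
# Hedged sketch: color pass plus connected-component filtering, not the original system.
import cv2
import numpy as np

def detect_fiber_pixels(frame_bgr, hue_range=(100, 130), min_sat=60, min_val=40):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_range[0], min_sat, min_val], dtype=np.uint8)
    upper = np.array([hue_range[1], 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)            # first, fast color pass
    # second, slower pass: keep only connected components of plausible size
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n):
        if 20 <= stats[i, cv2.CC_STAT_AREA] <= 5000:
            keep[labels == i] = 255
    return keep
```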
Fetisov, V A; Makarov, I Yu; Gusarov, A A; Lorents, A S; Smirenin, S A; Stragis, V B
The study of blood stains retained at the scene of the crime is of crucial importance for the preliminary inquiry. The present article is focused on the analysis of the possibilities and prospects for the use of photogrammetry (PM), as exemplified by foreign expert practice in blood stain examination at the site of the event. It is shown that the results of the application of digital photogrammetry, in addition to the traditional methods of morphological investigation, enable forensic medical experts to reconstruct a number of unique features and circumstances that accompanied the commission of a crime at the site of the event. Such PM techniques, supplemented by ballistic analysis of the blood splatter and droplet trajectories, provide additional evidence that allows forensic medical experts to reconstruct the scene of the crime, including the pose and position of the victim at the moment the injury was inflicted. Moreover, these data make it possible to determine the maximum number and the sequence of injurious impacts (blows). The authors discuss the advantages and relative disadvantages of the application of the photogrammetric technique in routine practical expert work. It is emphasized that the published decision-making algorithms provide specialists in various disciplines and professional experts with ready-made technological tools for obtaining additional criteria for the objective improvement of the quality of the studies they carry out and for the enhancement of the value of expert conclusions. It is concluded that the application of modern photogrammetric technologies can be recommended for the solution of applied forensic medical problems and for conducting the relevant expert research.
Issues in the digital implementation of control compensators. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moroney, P.
1979-01-01
Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady-state linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.
Method and apparatus for data sampling
Odell, D.M.C.
1994-04-19
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples is described. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium. 6 figures.
Bose, Nikhil; Carlberg, Katie; Sensabaugh, George; Erlich, Henry; Calloway, Cassandra
2018-05-01
DNA from biological forensic samples can be highly fragmented and present in limited quantity. When DNA is highly fragmented, conventional PCR-based Short Tandem Repeat (STR) analysis may fail as primer binding sites may not be present on a single template molecule. Single Nucleotide Polymorphisms (SNPs) can serve as an alternative type of genetic marker for analysis of degraded samples because the targeted variation is a single base. However, conventional PCR-based SNP analysis methods still require intact primer binding sites for target amplification. Recently, probe capture methods for targeted enrichment have shown success in recovering degraded DNA as well as DNA from ancient bone samples using next-generation sequencing (NGS) technologies. The goal of this study was to design and test a probe capture assay targeting forensically relevant nuclear SNP markers for clonal and massively parallel sequencing (MPS) of degraded and limited DNA samples as well as mixtures. A set of 411 polymorphic markers totaling 451 nuclear SNPs (375 SNPs and 36 microhaplotype markers) was selected for the custom probe capture panel. The SNP markers were selected for a broad range of forensic applications including human individual identification, kinship, and lineage analysis as well as for mixture analysis. Performance of the custom SNP probe capture NGS assay was characterized by analyzing read depth and heterozygote allele balance across 15 samples at 25 ng input DNA. Performance thresholds were established based on read depth ≥500X and heterozygote allele balance within ±10% deviation from 50:50, which was observed for 426 out of 451 SNPs. These 426 SNPs were analyzed in size-selected samples (at ≤75 bp, ≤100 bp, ≤150 bp, ≤200 bp, and ≤250 bp) as well as mock degraded samples fragmented to an average of 150 bp. Samples selected for ≤75 bp exhibited 99-100% reportable SNPs across varied DNA amounts and as low as 0.5 ng. Mock degraded samples at 1 ng and 10 ng exhibited >90% reportable SNPs. Finally, two-person male-male mixtures were tested at 10 ng in varying contributor ratios. Overall, 85-100% of alleles unique to the minor contributor were observed at all mixture ratios. Results from these studies using the SNP probe capture NGS system demonstrate proof of concept for application to forensically relevant degraded and mixed DNA samples. Copyright © 2018 Elsevier B.V. All rights reserved.
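The quoted performance thresholds (read depth ≥500X and heterozygote allele balance within ±10% of 50:50) can be expressed as a simple QC filter. The sketch below is hedged: the input format, a dictionary of per-SNP reference and alternate read counts, and the crude heterozygote call are assumptions for illustration, not the authors' pipeline.

```python
# Hedged QC-filter sketch; input format and het-call rule are assumptions.
def qc_filter(snp_counts, min_depth=500, balance_tolerance=0.10):
    passed, failed = {}, {}
    for snp_id, (ref, alt) in snp_counts.items():
        depth = ref + alt
        if depth < min_depth:
            failed[snp_id] = "low depth"
            continue
        balance = ref / depth
        is_het = 0.1 < balance < 0.9               # crude heterozygote call
        if is_het and abs(balance - 0.5) > balance_tolerance:
            failed[snp_id] = f"imbalanced heterozygote ({balance:.2f})"
            continue
        passed[snp_id] = (ref, alt)
    return passed, failed

# Example: one balanced het, one imbalanced het, one low-coverage site.
print(qc_filter({"rs0001": (520, 480), "rs0002": (800, 260), "rs0003": (150, 140)}))
```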
Accessible biometrics: A frustrated total internal reflection approach to imaging fingerprints.
Smith, Nathan D; Sharp, James S
2017-05-01
Fingerprints are widely used as a means of identifying persons of interest because of the highly individual nature of the spatial distribution and types of features (or minutiae) found on the surface of a finger. This individuality has led to their wide application in the comparison of fingerprints found at crime scenes with those taken from known offenders and suspects in custody. However, despite recent advances in machine vision technology and image processing techniques, fingerprint evidence is still widely being collected using outdated practices involving ink and paper - a process that can be both time-consuming and expensive. Reduction of forensic service budgets increasingly requires that evidence be gathered and processed more rapidly and efficiently. However, many of the existing digital fingerprint acquisition devices have proven too expensive to roll out on a large scale. As a result, new low-cost imaging technologies are required to increase the quality and throughput of the processing of fingerprint evidence. Here we describe an inexpensive approach to digital fingerprint acquisition that is based upon frustrated total internal reflection imaging. The quality and resolution of the images produced are shown to be as good as those currently acquired using ink-and-paper-based methods. The same imaging technique is also shown to be capable of imaging powdered fingerprints that have been lifted from a crime scene using adhesive tape or gel lifters. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high-level-language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Childhood in a Digital Age: Creative Challenges for Educational Futures
ERIC Educational Resources Information Center
Craft, Anna
2012-01-01
The early twenty-first century is characterised by rapid change. Commentators note how permeating digital technologies engage increasing numbers of children, young people and adults as consumers and also producers. In the shifting technological landscape, childhood and youth are changing. Connectivity around the clock, with a parallel existence in…
The mental status examination in the age of the internet.
Recupero, Patricia R
2010-01-01
The Internet has grown increasingly relevant in the practice of forensic psychiatry. To a psychiatrist conducting a forensic evaluation, the evaluee's Internet use can be relevant in nearly all aspects of the analysis. An evaluee's Internet presence may help to confirm, corroborate, refute, or elaborate on the psychiatrist's general impression of the person. Questions about the individual's choice of screen names, activities, images, and phrases can be valuable conversational tools to increase candor and self-disclosure, even among less cooperative evaluees. Difficulties in mood or affect regulation, problems with thought process or content, and impaired impulse control may be apparent in the evaluee's behavior in various Internet forums-for example, hostile or provocative behavior in social forums or excessive use of gaming or shopping websites. Discussions about the evaluee's behavior on the Internet can help the psychiatrist to assess for impaired insight and judgment. Perceptual disturbances, such as derealization and depersonalization, may be related to an evaluee's overidentification with the virtual world to the neglect of real-life needs and responsibilities. Furthermore, digital evidence can be especially useful in assessments of impairment, credibility, and dangerousness or risk, particularly when the evaluee is uncooperative or unreliable in the face-to-face psychiatric examination. This discussion will provide illustrative examples and suggestions for questions and topics the forensic psychiatrist may find helpful in conducting a thorough evaluation in this new age of the Internet.
Park, Jong Kang; Rowlands, Christopher J; So, Peter T C
2017-01-01
Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice.
NASA Astrophysics Data System (ADS)
Kakue, T.; Endo, Y.; Shimobaba, T.; Ito, T.
2014-11-01
We report frequency estimation of a loudspeaker diaphragm vibrating at high speed by parallel phase-shifting digital holography, a single-shot phase-shifting interferometry technique. This technique records the multiple phase-shifted holograms required for phase-shifting interferometry by using space-division multiplexing. We constructed a parallel phase-shifting digital holography system based on a high-speed polarization-imaging camera. This camera has a micro-polarizer array which assigns four linear polarization axes to each 2 × 2 pixel block. Using a loudspeaker as the object, we recorded the vibration of its diaphragm with the constructed system and demonstrated observation of the vibration displacement. In this paper, we aim to estimate the vibration frequency of the loudspeaker diaphragm by applying frequency analysis to the experimental results. Holograms consisting of 128 × 128 pixels were recorded by the camera at a frame rate of 262,500 frames per second. A sinusoidal wave was input to the loudspeaker via a phone connector. We observed the displacement of the vibrating loudspeaker diaphragm and succeeded in estimating its vibration frequency by applying frequency analysis to the experimental results.
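A hedged sketch of the processing chain implied above: extract four phase-shifted sub-images from the 2 × 2 micro-polarizer mosaic, recover the phase by four-step phase shifting, then estimate the vibration frequency from the FFT of the frame-averaged displacement. The mosaic layout, phase-shift assignment, laser wavelength, and the λ/(4π) displacement scaling are assumptions, not values taken from the paper.

```python
# Illustrative sketch; mosaic layout and scaling are assumptions.
import numpy as np

def four_step_phase(frame):
    """frame: 2D array from the polarization camera; 2x2 mosaic assumed."""
    i0   = frame[0::2, 0::2].astype(float)   # assumed 0 phase shift
    i90  = frame[0::2, 1::2].astype(float)   # assumed pi/2
    i180 = frame[1::2, 1::2].astype(float)   # assumed pi
    i270 = frame[1::2, 0::2].astype(float)   # assumed 3*pi/2
    return np.arctan2(i270 - i90, i0 - i180)

def vibration_frequency(frames, frame_rate, wavelength=532e-9):
    phases = np.array([four_step_phase(f).mean() for f in frames])
    phases = np.unwrap(phases)
    displacement = phases * wavelength / (4 * np.pi)   # double-pass assumption
    spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / frame_rate)
    return freqs[np.argmax(spectrum)]                  # dominant vibration frequency
```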
Mishori, Ranit; Anastario, Michael; Naimer, Karen; Varanasi, Sucharita; Ferdowsian, Hope; Abel, Dori; Chugh, Kevin
2017-01-01
Digital health development and use have been expansive and operationalized in a variety of settings and modalities around the world, including in low- and middle-income countries. Mobile applications have been developed for a variety of health professionals and frontline health workers including physicians, midwives, nurses, and community health workers. However, there are no published studies on the development and use of digital health related to human rights fieldwork and, to our knowledge, no mobile health platforms exist specifically for use by frontline health workers to forensically and clinically document sexual violence. We describe a participatory development and user design process with Congolese end-users of a novel human rights app for clinicians intended to standardize the documentation of sexual violence evidence for forensic and legal purposes, called MediCapt. The app, yet to be launched and still in the future-proofing phase, has included several development phases: (1) initial needs assessment conducted in 2011, (2) prototype development and field-testing in 2014 with 8 Congolese physicians, and (3) prototype refinement and field-testing in 2015 with 9 clinicians. Feedback from the first field-testing phase was incorporated into the design of the second prototype; key features that were added to MediCapt include the ability for users to take photographs and draw on a pictogram to include as part of the evidence package, as well as the ability to print a form with the completed data. Questionnaires and key-informant interviews during the second and third field-testing phases revealed overall positive attitudes about MediCapt, but multiple perceived and actual barriers to implementation were identified, from personal behaviors, such as individual clinicians' comfort with new technology, to more systemic and infrastructure factors, such as strong cultural preferences for print documentation of evidence and limited Internet connectivity. Next phases of development include consideration of patients' acceptance of this technology, how it actually fits in the clinical workflow, and testing of how to transfer the collected evidence to law enforcement and legal authorities. Ultimately, we plan on conducting a robust evaluation to assess effectiveness of the app on medical, legal, and human rights outcomes. We believe our experience of collecting data that will potentially serve as legal evidence broadens the traditional scope of digital health and crosses a wide range of fields including medical, technological, legal, and ethical, and thus propose refining and defining this unique field of exploration as mobile justice, or mJustice. PMID:28351881
Redundant binary number representation for an inherently parallel arithmetic on optical computers.
De Biase, G A; Massini, A
1993-02-10
A simple redundant binary number representation suitable for digital-optical computers is presented. By means of this representation it is possible to build an arithmetic with carry-free parallel algebraic sums carried out in constant time and parallel multiplication in log N time. This redundant number representation naturally fits the 2's complement binary number system and permits the construction of inherently parallel arithmetic units that are used in various optical technologies. Some properties of this number representation and several examples of computation are presented.
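To illustrate why a redundant (signed-digit) representation permits carry-free, constant-time parallel addition, the sketch below implements a generic textbook rule for signed-digit binary addition with digits in {-1, 0, 1}: each output digit depends only on the input digits at its own position and at the position immediately below, so every position could be computed in parallel. This is a general illustration, not the specific encoding or optical realization proposed in the paper.

```python
# Generic signed-digit carry-free addition sketch; not the paper's scheme.
def sd_add(x, y):
    """x, y: equal-length lists of digits in {-1, 0, 1}, least significant first."""
    n = len(x)
    w = [0] * n          # interim sums
    t = [0] * (n + 1)    # transfers (t[i+1] is produced at position i)
    for i in range(n):   # each iteration uses only positions i and i-1
        p = x[i] + y[i]
        below_nonneg = (i == 0) or (x[i - 1] + y[i - 1] >= 0)
        if p == 2:
            w[i], t[i + 1] = 0, 1
        elif p == 1:
            w[i], t[i + 1] = (-1, 1) if below_nonneg else (1, 0)
        elif p == 0:
            w[i], t[i + 1] = 0, 0
        elif p == -1:
            w[i], t[i + 1] = (-1, 0) if below_nonneg else (1, -1)
        else:  # p == -2
            w[i], t[i + 1] = 0, -1
    # final digits: a single local add per position, with no ripple carry
    return [w[i] + t[i] for i in range(n)] + [t[n]]

def to_int(digits):
    return sum(d * (2 ** i) for i, d in enumerate(digits))

# Example: 5 + (-3) in signed-digit form.
a = [1, 0, 1]        # 5
b = [-1, 1, -1]      # -3
print(to_int(sd_add(a, b)))   # -> 2
```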
Parallel Event Analysis Under Unix
NASA Astrophysics Data System (ADS)
Looney, S.; Nilsson, B. S.; Oest, T.; Pettersson, T.; Ranjard, F.; Thibonnier, J.-P.
The ALEPH experiment at LEP, the CERN CN division and Digital Equipment Corp. have, in a joint project, developed a parallel event analysis system. The parallel physics code is identical to ALEPH's standard analysis code, ALPHA; only the organisation of input/output is changed. The user may switch between sequential and parallel processing by simply changing one input "card". The initial implementation runs on an 8-node DEC 3000/400 farm, using the PVM software, and exhibits near-perfect speed-up linearity, reducing the turn-around time by a factor of 8.
NASA Astrophysics Data System (ADS)
Qin, Cheng-Zhi; Zhan, Lijun
2012-06-01
As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
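A hedged sketch of the classification stage only (the aberration measurement and feature extraction are outside this snippet): train a support vector machine on per-image radial-distortion parameters, e.g. the k1, k2 coefficients of the common polynomial model r_d = r(1 + k1·r² + k2·r⁴). The data below are synthetic placeholders so the snippet runs on its own; they are not from the paper's experiments.

```python
# Illustrative SVM on synthetic distortion features; not the paper's dataset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_per_camera, cameras = 60, 5
features, labels = [], []
for cam in range(cameras):
    k1, k2 = rng.normal(-0.2 + 0.05 * cam, 0.01), rng.normal(0.05, 0.005)
    # per-image estimates scatter around each camera's intrinsic distortion
    features.append(np.column_stack([
        rng.normal(k1, 0.02, n_per_camera),
        rng.normal(k2, 0.01, n_per_camera),
    ]))
    labels.append(np.full(n_per_camera, cam))
X, y = np.vstack(features), np.concatenate(labels)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```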
Parallel-quadrature phase-shifting digital holographic microscopy using polarization beam splitter
Das, Bhargab; Yelleswarapu, Chandra S; Rao, DVGLN
2012-01-01
We present a digital holographic microscopy technique based on a parallel-quadrature phase-shifting method. Two π/2 phase-shifted holograms are recorded simultaneously using the polarization phase-shifting principle, a slightly off-axis recording geometry, and two identical CCD sensors. The parallel phase-shifting is realized by combining a circularly polarized object beam with a 45° polarized reference beam through a polarizing beam splitter. The DC term is eliminated by subtracting the two holograms from each other, and the object information is reconstructed after selecting the frequency spectrum of the real image. Both amplitude and phase object reconstruction results are presented. Simultaneous recording eliminates phase errors caused by mechanical vibrations and air turbulence. The slightly off-axis recording geometry with phase-shifting allows a much larger spatial filter for reconstruction of the object information. This leads to better reconstruction capability than traditional off-axis holography. PMID:23109732
Recursive Algorithms for Real-Time Digital CR-RCn Pulse Shaping
NASA Astrophysics Data System (ADS)
Nakhostin, M.
2011-10-01
This paper reports on recursive algorithms for real-time implementation of CR-(RC)n filters in digital nuclear spectroscopy systems. The algorithms are derived by calculating the Z-transfer function of the filters for filter orders up to n=4. The performance of the filters is compared with that of the conventional digital trapezoidal filter using a noise generator which separately generates pure series noise, 1/f noise, and parallel noise. The results of our study enable one to select the optimum digital filter for different noise and rate conditions.
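For orientation, here is a minimal sketch of a recursive CR-(RC)^n shaper applied sample by sample to a digitized detector pulse. It uses a simple backward-Euler discretization for brevity; the paper derives exact Z-transfer functions, which are not reproduced here, and the time constant and sampling interval below are arbitrary example values.

```python
# Hedged sketch: one CR high-pass stage followed by n RC low-pass stages.
import numpy as np

def cr_rc_n(x, tau, dt, n=4):
    a = tau / (tau + dt)                 # CR (differentiator) coefficient
    c = dt / (tau + dt)                  # RC (integrator) coefficient
    y = np.empty_like(x, dtype=float)
    # CR stage
    prev_x, prev_y = 0.0, 0.0
    for k, xk in enumerate(x):
        prev_y = a * (prev_y + xk - prev_x)
        prev_x = xk
        y[k] = prev_y
    # n cascaded RC stages, each a first-order recursive low-pass
    for _ in range(n):
        acc = 0.0
        for k in range(len(y)):
            acc = (1 - c) * acc + c * y[k]
            y[k] = acc
    return y

# Example: shape a step-like preamplifier pulse sampled at 10 ns with tau = 1 us.
pulse = np.concatenate([np.zeros(100), np.ones(2000)])
shaped = cr_rc_n(pulse, tau=1e-6, dt=10e-9, n=4)
print("peak amplitude:", shaped.max())
```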
Enhanced optical security by using information carrier digital screening
NASA Astrophysics Data System (ADS)
Koltai, Ferenc; Adam, Bence
2004-06-01
Jura has developed different security features based on Information Carrier Digital Screening. The substance of such features is that a non-visible secondary image is encoded in a visible primary image. The encoded image becomes visible only by using a decoding device. One such development, JURA's Invisible Personal Information (IPI), is widely used in high-security documents, where personal data of the document holder are encoded in the screen of the document holder's photograph and can be decoded by using an optical decoding device. In order to make document verification fully automated, enhance security and eliminate human factors, a digital version of IPI, the D-IPI, was developed. A special 2D-barcode structure was designed, which contains a sufficient quantity of encoded digital information and can be embedded into the photo. The other part of Digital IPI is the reading software, which is able to retrieve the encoded information with high reliability. The reading software, developed for this specific 2D structure, provides the possibility of a forensic analysis. Such analysis will discover all kinds of manipulation -- globally, if the photograph was simply changed, and selectively, if only part of the photograph was manipulated. Digital IPI is a good example of how the benefits of digital technology can be exploited by using optical security and how technology for optical security can be converted into digital technology. The D-IPI process is compatible with all current personalization printers and materials (polycarbonate, PVC, security papers, Teslin foils, etc.) and can provide any document with enhanced security and tamper resistance.
Full range line-field parallel swept source imaging utilizing digital refocusing
NASA Astrophysics Data System (ADS)
Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.
2015-12-01
We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
Phase reconstruction using compressive two-step parallel phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith
2018-04-01
The linear relationship between the sample complex object wave and its approximated complex Fresnel field, obtained using single-shot parallel phase-shifting digital holography (PPSDH), is used in a compressive sensing framework, and an accurate phase reconstruction is demonstrated. It is shown that the phase-reconstruction accuracy of this method is better than that of the compressive sensing adapted single-exposure on-line holography (SEOL) method. It is derived that the measurement model of the PPSDH method retains both the real and imaginary parts of the Fresnel field, although with approximation noise, whereas the measurement model of SEOL retains only the real part, exactly equal to the real part of the complex Fresnel field, while its imaginary part is not available at all. Numerical simulations are performed for CS-adapted PPSDH and CS-adapted SEOL, and it is demonstrated that the phase reconstruction is accurate for CS-adapted PPSDH and can be used for single-shot digital holographic reconstruction.
NASA Astrophysics Data System (ADS)
Lasher, Mark E.; Henderson, Thomas B.; Drake, Barry L.; Bocker, Richard P.
1986-09-01
The modified signed-digit (MSD) number representation offers full parallel, carry-free addition. A MSD adder has been described by the authors. This paper describes how the adder can be used in a tree structure to implement an optical multiply algorithm. Three different optical schemes, involving position, polarization, and intensity encoding, are proposed for realizing the trinary logic system. When configured in the generic multiplier architecture, these schemes yield the combinatorial logic necessary to carry out the multiplication algorithm. The optical systems are essentially three dimensional arrangements composed of modular units. Of course, this modularity is important for design considerations, while the parallelism and noninterfering communication channels of optical systems are important from the standpoint of reduced complexity. The authors have also designed electronic hardware to demonstrate and model the combinatorial logic required to carry out the algorithm. The electronic and proposed optical systems will be compared in terms of complexity and speed.
NASA Technical Reports Server (NTRS)
2002-01-01
Retinex Image Processing, winner of NASA's 1999 Space Act Award, is commercially available through TruView Imaging Company. With this technology, amateur photographers use their personal computers to improve the brightness, scene contrast, detail, and overall sharpness of images with increased ease. The process was originally developed for remote sensing of the Earth by researchers at Langley Research Center and Science and Technology Corporation (STC). It automatically enhances a digital image in terms of dynamic range compression, color independence from the spectral distribution of the scene illuminant, and color/lightness rendition. As a result, the enhanced digital image is much closer to the scene perceived by the human visual system, under all kinds and levels of lighting variations. TruView believes there are other applications for the software in medical imaging, forensics, security, reconnaissance, mining, assembly, and other industrial areas.
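For readers unfamiliar with the Retinex family of algorithms, the following hedged sketch shows single-scale Retinex on one channel (the commercial multi-scale Retinex with color restoration is more elaborate and is not reproduced here); scipy's Gaussian filter stands in for the surround function, and the sigma value and output stretching are arbitrary choices.

```python
# Illustrative single-scale Retinex sketch; not the NASA/TruView implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=80.0):
    channel = channel.astype(float) + 1.0          # avoid log(0)
    surround = gaussian_filter(channel, sigma)
    r = np.log(channel) - np.log(surround)         # ratio of pixel to its surround
    # stretch the result back to a displayable 0-255 range
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)
    return (255 * r).astype(np.uint8)

def retinex_rgb(img):
    return np.dstack([single_scale_retinex(img[..., c]) for c in range(3)])
```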
Advances in digital polymerase chain reaction (dPCR) and its emerging biomedical applications.
Cao, Lei; Cui, Xingye; Hu, Jie; Li, Zedong; Choi, Jane Ru; Yang, Qingzhen; Lin, Min; Ying Hui, Li; Xu, Feng
2017-04-15
Since the invention of polymerase chain reaction (PCR) in 1985, PCR has played a significant role in molecular diagnostics for genetic diseases, pathogens, oncogenes and forensic identification. In the past three decades, PCR has evolved from end-point PCR, through real-time PCR, to its current version, which is absolute quantitative digital PCR (dPCR). In this review, we first discuss the principles of all key steps of dPCR, i.e., sample dispersion, amplification, and quantification, covering commercialized apparatuses and other devices still under lab development. We highlight the advantages and disadvantages of different technologies based on these steps, and discuss the emerging biomedical applications of dPCR. Finally, we provide a glimpse of the existing challenges and future perspectives for dPCR. Copyright © 2016 Elsevier B.V. All rights reserved.
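The quantification step mentioned above is commonly based on Poisson statistics: from the fraction of positive partitions p, the mean number of target copies per partition is λ = −ln(1 − p), and the concentration follows from the partition volume. The worked example below uses this generic formula; the partition volume and partition counts are illustrative parameters, not values tied to any specific commercial platform.

```python
# Generic dPCR Poisson-correction example; partition volume is an assumption.
import math

def dpcr_concentration(positive, total, partition_volume_nl=0.85):
    p = positive / total
    lam = -math.log(1.0 - p)                 # mean target copies per partition
    copies_per_nl = lam / partition_volume_nl
    return copies_per_nl * 1000.0            # copies per microliter

# Example: 7,500 positive partitions out of 20,000 with 0.85 nL partitions.
print(f"{dpcr_concentration(7500, 20000):.1f} copies/uL")
```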
Scalable Parallel Algorithms for Multidimensional Digital Signal Processing
1991-12-31
Ravindra, S. V.; Mamatha, G. P.; Sunita, J. D.; Balappanavar, Aswini Y.; Sardana, Varun
2015-01-01
Context: Teeth are the hardest part of the body and are least affected by the taphonomic process. They are considered one of the reliable means of identification of a person in the forensic sciences. Aim: The aim of the following study is to establish morphometric measurements, using AutoCad 2009 (Autodesk, Inc), of permanent maxillary central incisors in different age groups of the Udaipur population. Setting and Design: Hospital-based descriptive cross-sectional study carried out in Udaipur. Materials and Methods: A study was carried out on 308 subjects of both genders with an age range of 9-68 years. Standardized intra-oral radiographs were made by the paralleling technique and processed. The radiographs were scanned and the obtained images were standardized to the actual size of the radiographic film. This was followed by measuring them using the AutoCad 2009 software. Statistical Analysis Used: F-test, post-hoc test, Pearson's correlation test. Results: For the left maxillary central incisor, the total area and pulp area were found to be 38.41 ± 12.88 mm² and 14.32 ± 7.04 mm², respectively. For the right maxillary central incisor, the total area and pulp area were 38.39 ± 14.95 mm² and 12.35 ± 5 mm², respectively. Males (32.50, 32.87 mm²) had a larger pulp area compared with females (28.82, 30.05 mm²). Conclusion: There was a decrease in total pulp area with increasing age, which may be attributed to secondary dentin formation. PMID:26816461
Design of a real-time wind turbine simulator using a custom parallel architecture
NASA Technical Reports Server (NTRS)
Hoffman, John A.; Gluck, R.; Sridhar, S.
1995-01-01
The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is very much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CU's) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CU's are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU causes several tasks to be done in each cycle, including an I/O operation and the combination of a multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CU's are interfaced to each other and to other portions of the simulation using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CU's can be added in any number to share a given computational load. This flexible bus feature is very different from that of many other parallel processors, which usually have a throughput limit because of a rigid bus architecture.
Zhang, Suhua; Bian, Yingnan; Chen, Anqi; Zheng, Hancheng; Gao, Yuzhen; Hou, Yiping; Li, Chengtao
2017-03-01
Utilizing massively parallel sequencing (MPS) technology for SNP testing in forensic genetics is becoming attractive because of the shortcomings of STR markers, such as their high mutation rates, and the disadvantages associated with the current PCR-CE method, including its limited multiplex capability. MPS offers the potential to genotype hundreds to thousands of SNPs from multiple samples in a single experimental run. In this study, we designed a customized SNP panel that includes 273 forensically relevant identity SNPs chosen from SNPforID, IISNP, and the HapMap database as well as previously related studies, and evaluated the levels of genotyping precision, sequence coverage, sensitivity and SNP performance using the Ion Torrent PGM. In a concordance study of the custom MPS-SNP panel, only four MPS calls were missing due to read coverage that was too low (<20), whereas the others were fully concordant with Sanger sequencing results across the two control samples, that is, 9947A and 9948. The analyses indicated a balanced coverage among the included loci, with the exception of the 16 SNPs for which an inconsistent allele balance and/or lower read coverage was detected among 50 tested individuals from the Chinese HAN population and the above controls. With the exception of the 16 poorly performing SNPs, the sequence coverage obtained was extensive for the bulk of the SNPs, and only three Y-SNPs (rs16980601, rs11096432, rs3900) showed a mean coverage below 1000. Analyses of the dilution series of control DNA 9948 yielded reproducible results down to 1 ng of DNA input. In addition, we provide an analysis tool for automated data quality control and genotyping checks, and we conclude that the SNP targets are polymorphic and independent in the Chinese HAN population. In summary, the evaluation of the sensitivity, accuracy and genotyping performance provides strong support for the application of MPS technology in forensic SNP analysis, and the assay offers a straightforward sample-to-genotype workflow that could be beneficial in forensic casework with respect to both individual identification and complex kinship issues. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
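The genotype-calling and quality-control logic can be sketched as follows; the 20-read coverage cutoff comes from the abstract above, whereas the allele-balance thresholds (0.2/0.8) are illustrative assumptions, not values reported by the authors.

```python
# Minimal sketch of coverage/allele-balance genotype calling for an MPS SNP panel.
# The 20-read coverage cutoff follows the abstract; the allele-balance cutoffs
# (0.2/0.8) are illustrative assumptions, not values reported by the authors.

def call_genotype(ref_reads: int, alt_reads: int,
                  min_coverage: int = 20,
                  het_low: float = 0.2, het_high: float = 0.8):
    """Return a genotype call ('0/0', '0/1', '1/1') or None if coverage is too low."""
    total = ref_reads + alt_reads
    if total < min_coverage:          # low coverage -> no call (missing genotype)
        return None
    alt_fraction = alt_reads / total  # allele balance of the alternate allele
    if alt_fraction < het_low:
        return "0/0"                  # homozygous reference
    if alt_fraction > het_high:
        return "1/1"                  # homozygous alternate
    return "0/1"                      # heterozygous

if __name__ == "__main__":
    print(call_genotype(500, 480))    # balanced coverage -> '0/1'
    print(call_genotype(12, 3))       # below 20 reads -> None (no call)
```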
Multimodal digital color imaging system for facial skin lesion analysis
NASA Astrophysics Data System (ADS)
Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo
2008-02-01
In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A-induced fluorescence imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate these imaging modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, we believe the multimodal color imaging system can be utilized as an important assistive tool in dermatology.
Uncertainty in the use of MAMA software to measure particle morphological parameters from SEM images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Daniel S.; Tandon, Lav
The MAMA software package developed at LANL is designed to make morphological measurements on a wide variety of digital images of objects. At LANL, we have focused on using MAMA to measure scanning electron microscope (SEM) images of particles, as this is a critical part of our forensic analysis of interdicted radiologic materials. In order to successfully use MAMA to make such measurements, we must understand the level of uncertainty involved in the process, so that we can rigorously support our quantitative conclusions.
Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method
NASA Astrophysics Data System (ADS)
Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David
2004-05-01
A parallel reconstruction method, based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while the parallel implementation takes only 3.5 minutes. Reconstruction for a larger breast takes 187 minutes with the serial implementation and 6.5 minutes with the parallel implementation. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
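A purely schematic sketch of the data decomposition described above may help: projections are split into overlapping segments along the chest-to-nipple axis, partial slabs are reconstructed in parallel, and the overlaps are blended when merging. The reconstruct_segment body is a stand-in (a simple average), not the iterative ML update, and the array sizes are illustrative rather than clinical.

```python
# Schematic sketch of the data-parallel decomposition described above: each of the
# 11 projections is split into overlapping segments along the chest-to-nipple axis,
# segments at the same distance from the chest wall are reconstructed independently,
# and the partial slabs are merged. reconstruct_segment() is a stand-in for the
# iterative ML update, which is not reproduced here.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N_PROJ, ROWS, COLS = 11, 512, 256   # illustrative sizes, not clinical dimensions
SEG, OVERLAP = 128, 16              # segment height and overlap (assumed values)

def reconstruct_segment(args):
    start, stop, proj_stack = args
    # Stand-in for the iterative ML reconstruction of one slab:
    # here we simply average the 11 projection segments.
    return start, stop, proj_stack.mean(axis=0)

def parallel_reconstruct(projections):
    # Build overlapping segment jobs along the chest-to-nipple (row) axis.
    jobs = []
    for start in range(0, ROWS, SEG):
        lo, hi = max(0, start - OVERLAP), min(ROWS, start + SEG + OVERLAP)
        jobs.append((lo, hi, projections[:, lo:hi, :]))
    volume = np.zeros((ROWS, COLS))
    weight = np.zeros((ROWS, COLS))
    with ProcessPoolExecutor() as pool:
        for lo, hi, slab in pool.map(reconstruct_segment, jobs):
            volume[lo:hi] += slab     # accumulate partial reconstructions
            weight[lo:hi] += 1.0      # count contributions in overlap regions
    return volume / np.maximum(weight, 1.0)  # blend the overlaps by averaging

if __name__ == "__main__":
    projs = np.random.rand(N_PROJ, ROWS, COLS)
    print(parallel_reconstruct(projs).shape)
```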
Noncoherent parallel optical processor for discrete two-dimensional linear transformations.
Glaser, I
1980-10-01
We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
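For reference, a compact digital version of the transform computed optically above is the two-dimensional Walsh-Hadamard transform, obtained by applying a one-dimensional fast WHT along the rows and then along the columns of a 2^n x 2^n image. This is a conventional CPU sketch, not a model of the lenslet-array processor.

```python
# Two-dimensional Walsh-Hadamard transform as a digital reference for the
# optically computed transform described above.
import numpy as np

def fwht(vec):
    """Fast Walsh-Hadamard transform of a length-2^n vector."""
    a = np.array(vec, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def wht2d(image):
    """2-D Walsh-Hadamard transform: 1-D transform over rows, then over columns."""
    rows = np.array([fwht(r) for r in image])
    return np.array([fwht(c) for c in rows.T]).T

if __name__ == "__main__":
    img = np.arange(16.0).reshape(4, 4)
    print(wht2d(img))
```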
Video coding for next-generation surveillance systems
NASA Astrophysics Data System (ADS)
Klasen, Lena M.; Fahlander, Olov
1997-02-01
Video is used as a recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of such forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of obtaining reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers for selecting one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to simultaneously record all camera outputs digitally. It is also very important to build such a system bearing in mind that image processing analysis methods become more important as a complement to the human eye. Using one or more cameras gives a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system, being the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., to secure the efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of this next generation of digital surveillance systems are discussed in this paper.
Shen, Feng; Davydova, Elena K; Du, Wenbin; Kreutz, Jason E; Piepenburg, Olaf; Ismagilov, Rustem F
2011-05-01
In this paper, digital quantitative detection of nucleic acids was achieved at the single-molecule level by chemical initiation of over one thousand sequence-specific, nanoliter isothermal amplification reactions in parallel. Digital polymerase chain reaction (digital PCR), a method used for quantification of nucleic acids, counts the presence or absence of amplification of individual molecules. However, it still requires temperature cycling, which is undesirable under resource-limited conditions. This makes isothermal methods for nucleic acid amplification, such as recombinase polymerase amplification (RPA), more attractive. A microfluidic digital RPA SlipChip is described here for simultaneous initiation of over one thousand nL-scale RPA reactions by adding a chemical initiator to each reaction compartment with a simple slipping step after instrument-free pipet loading. Two designs of the SlipChip, two-step slipping and one-step slipping, were validated using digital RPA. By using the digital RPA SlipChip, false-positive results from preinitiation of the RPA amplification reaction before incubation were eliminated. End point fluorescence readout was used for "yes or no" digital quantification. The performance of digital RPA in a SlipChip was validated by amplifying and counting single molecules of the target nucleic acid, methicillin-resistant Staphylococcus aureus (MRSA) genomic DNA. The digital RPA on SlipChip was also tolerant to fluctuations of the incubation temperature (37-42 °C), and its performance was comparable to digital PCR on the same SlipChip design. The digital RPA SlipChip provides a simple method to quantify nucleic acids without requiring thermal cycling or kinetic measurements, with potential applications in diagnostics and environmental monitoring under resource-limited settings. The ability to initiate thousands of chemical reactions in parallel on the nanoliter scale using solvent-resistant glass devices is likely to be useful for a broader range of applications.
Shen, Feng; Davydova, Elena K.; Du, Wenbin; Kreutz, Jason E.; Piepenburg, Olaf; Ismagilov, Rustem F.
2011-01-01
In this paper, digital quantitative detection of nucleic acids was achieved at the single-molecule level by chemical initiation of over one thousand sequence-specific, nanoliter, isothermal amplification reactions in parallel. Digital polymerase chain reaction (digital PCR), a method used for quantification of nucleic acids, counts the presence or absence of amplification of individual molecules. However it still requires temperature cycling, which is undesirable under resource-limited conditions. This makes isothermal methods for nucleic acid amplification, such as recombinase polymerase amplification (RPA), more attractive. A microfluidic digital RPA SlipChip is described here for simultaneous initiation of over one thousand nL-scale RPA reactions by adding a chemical initiator to each reaction compartment with a simple slipping step after instrument-free pipette loading. Two designs of the SlipChip, two-step slipping and one-step slipping, were validated using digital RPA. By using the digital RPA SlipChip, false positive results from pre-initiation of the RPA amplification reaction before incubation were eliminated. End-point fluorescence readout was used for “yes or no” digital quantification. The performance of digital RPA in a SlipChip was validated by amplifying and counting single molecules of the target nucleic acid, Methicillin-resistant Staphylococcus aureus (MRSA) genomic DNA. The digital RPA on SlipChip was also tolerant to fluctuations of the incubation temperature (37–42 °C), and its performance was comparable to digital PCR on the same SlipChip design. The digital RPA SlipChip provides a simple method to quantify nucleic acids without requiring thermal cycling or kinetic measurements, with potential applications in diagnostics and environmental monitoring under resource-limited settings. The ability to initiate thousands of chemical reactions in parallel on the nanoliter scale using solvent-resistant glass devices is likely to be useful for a broader range of applications. PMID:21476587
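Although the abstract describes only an end-point yes/no readout, digital assays are conventionally converted to a concentration with Poisson statistics, lambda = -ln(1 - p), where p is the fraction of positive compartments. The sketch below illustrates that standard correction; the well volume is a hypothetical value, not a SlipChip specification.

```python
# Standard Poisson estimate used in digital assays (digital PCR/RPA): from the
# fraction p of positive compartments, the mean number of template molecules per
# compartment is lambda = -ln(1 - p). This statistical step is general background,
# not a procedure detailed in the abstract; the well volume below is hypothetical.
import math

def digital_concentration(positive_wells: int, total_wells: int,
                          well_volume_nl: float = 1.0) -> float:
    """Return estimated copies per microliter from an end-point yes/no readout."""
    p = positive_wells / total_wells
    if p >= 1.0:
        raise ValueError("All wells positive: concentration exceeds dynamic range")
    lam = -math.log(1.0 - p)              # mean copies per compartment
    return lam / (well_volume_nl * 1e-3)  # copies per microliter

if __name__ == "__main__":
    # e.g. 300 positive out of 1280 nanoliter wells
    print(f"{digital_concentration(300, 1280):.1f} copies/uL")
```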
Research on Parallel Three Phase PWM Converters base on RTDS
NASA Astrophysics Data System (ADS)
Xia, Yan; Zou, Jianxiao; Li, Kai; Liu, Jingbo; Tian, Jun
2018-01-01
Parallel operation of converters can increase system capacity, but it may give rise to zero-sequence circulating current, so controlling this current is an important goal in the design of parallel inverters. In this paper, the Real Time Digital Simulator (RTDS) is used to model the parallel converter system in real time and to study suppression of the circulating current. An equivalent model of two parallel converters and the zero-sequence circulating current (ZSCC) was established and analyzed, and a strategy using variable zero-vector control was then proposed to suppress the circulating current. For two parallel modular converters, a hardware-in-the-loop (HIL) study based on RTDS and a practical experiment were carried out; the results show that the proposed control strategy is feasible and effective.
Evolving binary classifiers through parallel computation of multiple fitness cases.
Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni
2005-06-01
This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. The approach achieves high computational efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly, using parallel computation, in the case of cellular programming, or implicitly, by taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures, in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
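The "intrinsic parallelism of bitwise operators" mentioned above can be illustrated by packing one fitness case per bit, so that a single Boolean expression is evaluated over many cases with a handful of machine-word operations. The expression and the random training cases below are made up for illustration and are not taken from the paper.

```python
# Sketch of evaluating a Boolean classifier on many fitness cases at once by
# packing one case per bit, so each bitwise operation processes all cases in
# parallel on a standard sequential CPU. The example expression and the random
# training cases are made up for illustration.
import random

def pack(bits):
    """Pack a list of 0/1 values into a single integer, one case per bit."""
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

def fitness(expr, inputs, target, n_cases):
    """Count matching cases between the classifier output and the target column."""
    mask = (1 << n_cases) - 1
    errors = (expr(*inputs) ^ target) & mask   # 1-bits mark misclassified cases
    return n_cases - bin(errors).count("1")

if __name__ == "__main__":
    random.seed(0)
    n_cases = 64
    x0, x1, x2 = (pack([random.randint(0, 1) for _ in range(n_cases)]) for _ in range(3))
    target = (x0 & ~x2) | x1                   # ground truth for the toy problem
    candidate = lambda a, b, c: (a | b) & ~c   # an evolved candidate classifier
    print(fitness(candidate, (x0, x1, x2), target, n_cases), "of", n_cases, "correct")
```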
Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F
2016-09-01
The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.
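A minimal sketch of the general idea, reducing two digitized bone surfaces to a single similarity value, is given below; the metric used (symmetric mean nearest-neighbour distance between vertex clouds, with alignment assumed already done) is an illustrative stand-in, not the authors' exact MVC algorithm.

```python
# Minimal sketch: reduce two digitized bone surfaces to one similarity value,
# here the symmetric mean nearest-neighbour distance between the vertex clouds.
# Illustrative metric only; alignment is assumed to have been done beforehand.
import numpy as np
from scipy.spatial import cKDTree

def mesh_similarity_value(vertices_a: np.ndarray, vertices_b: np.ndarray) -> float:
    """Symmetric mean closest-point distance between two (N,3) vertex arrays."""
    d_ab, _ = cKDTree(vertices_b).query(vertices_a)  # A -> nearest point in B
    d_ba, _ = cKDTree(vertices_a).query(vertices_b)  # B -> nearest point in A
    return 0.5 * (d_ab.mean() + d_ba.mean())         # lower value = more similar

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    left_humerus = rng.normal(size=(500, 3))
    right_humerus = left_humerus + rng.normal(scale=0.01, size=(500, 3))
    print(mesh_similarity_value(left_humerus, right_humerus))
```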
Optical analog-to-digital converter
Vawter, G Allen [Corrales, NM; Raring, James [Goleta, CA; Skogen, Erik J [Albuquerque, NM
2009-07-21
An optical analog-to-digital converter (ADC) is disclosed which converts an input optical analog signal to an output optical digital signal at a sampling rate defined by a sampling optical signal. Each bit of the digital representation is separately determined using an optical waveguide interferometer and an optical thresholding element. The interferometer uses the optical analog signal and the sampling optical signal to generate a sinusoidally-varying output signal using cross-phase-modulation (XPM) or a photocurrent generated from the optical analog signal. The sinusoidally-varying output signal is then digitized by the thresholding element, which includes a saturable absorber or at least one semiconductor optical amplifier, to form the optical digital signal which can be output either in parallel or serially.
NASA Astrophysics Data System (ADS)
Qian, Feng; Li, Guoqiang
2001-12-01
In this paper a generalized look-ahead logic algorithm for number conversion from signed-digit to its complement representation is developed. By properly encoding the signed digits, all the operations are performed by binary logic, and unified logical expressions can be obtained for conversion from modified-signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using electron-trapping device is employed, which is suitable for realizing complex logic functions in the form of sum-of-product. The proposed algorithm and architecture are compatible with a general-purpose optoelectronic computing system.
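The arithmetic relationship underlying such conversions can be sketched as follows: a radix-2 signed-digit number with digits in {-1, 0, 1} is evaluated to an integer and then expressed in n-bit 2's complement. This shows only the numerical mapping, not the parallel look-ahead logic or its optical implementation.

```python
# Numerical mapping behind the conversion discussed above: evaluate a modified
# signed-digit (MSD) number (digits in {-1, 0, 1}, radix 2) and express the
# result in n-bit 2's complement. Not the paper's parallel look-ahead logic.
def msd_to_twos_complement(digits, n_bits=8):
    """digits: most-significant first, each in {-1, 0, 1}; returns an n-bit string."""
    value = 0
    for d in digits:
        if d not in (-1, 0, 1):
            raise ValueError("MSD digits must be -1, 0 or 1")
        value = 2 * value + d            # radix-2 evaluation of the signed digits
    if not (-(1 << (n_bits - 1)) <= value < (1 << (n_bits - 1))):
        raise OverflowError("value does not fit in the requested width")
    return format(value & ((1 << n_bits) - 1), f"0{n_bits}b")

if __name__ == "__main__":
    # 1 0 -1 1  ->  8 - 2 + 1 = 7   ->  00000111
    print(msd_to_twos_complement([1, 0, -1, 1]))
    # -1 0 1    ->  -4 + 1 = -3     ->  11111101
    print(msd_to_twos_complement([-1, 0, 1]))
```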
Hypercluster Parallel Processor
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela
1992-01-01
Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.
The Effects of Blogging on Reading Engagement in the Upper Elementary Grades
ERIC Educational Resources Information Center
Ray, Holly Wilson
2013-01-01
With children growing up as digital natives, they have become accustomed to a fast paced, multitasking environment. In this digital landscape, children's brains learn in a parallel format, with instant information, as opposed to the traditional linear format with one skill being introduced at a time. Today's education methods used to…
An approach to the optical MSD adder
NASA Astrophysics Data System (ADS)
Takahashi, Hideya; Matsushita, Kenji; Shimizu, Eiji
1990-07-01
The intrinsic parallelism of optical computing elements is exploited more fully than previously possible through an optical implementation of the modified signed digit (MSD) number system, which yields carry-free addition and subtraction. In the present optical implementation of the MSD system, optical phase data are used to preclude negative value representation. Attention is given to an MSD adder array for addition operations on two n-digit trinary numbers; the output is composed of n + 1 trinary digits.
Optimising crime scene temperature collection for forensic entomology casework.
Hofer, Ines M J; Hart, Andrew J; Martín-Vega, Daniel; Hall, Martin J R
2017-01-01
The value of minimum post-mortem interval (minPMI) estimations from insect evidence, using temperature modelling, in suspicious death investigations is indisputable. In order to investigate the reliability of the collected temperature data used for modelling minPMI, it is necessary to study the effects of data logger location on the accuracy and precision of measurements. Digital data logging devices are the most commonly used temperature measuring devices in forensic entomology; however, the relationship between ambient temperatures (measured by loggers) and body temperatures has been little studied. In this study, the placement of loggers in three locations (two outdoors, one indoors) had measurable effects when their readings were compared with actual body temperature measurements (simulated with pig heads), some more significant than others depending on season, exposure to the environment and logger location. Overall, the study demonstrated the complexity of the question of optimal logger placement at a crime scene and the potential impact of inaccurate temperature data on minPMI estimations, showing the importance of further research in this area and development of a standard protocol. Initial recommendations are provided for data logger placement (within a Stevenson Screen where practical), situations to avoid (e.g. placement of a logger in front of windows when measuring indoor temperatures), and a baseline for further research into producing standard guidelines for logger placement, to increase the accuracy of minPMI estimations and, thereby, the reliability of forensic entomology evidence in court. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Digital UV/IR photography for tattoo evaluation in mummified remains.
Oliver, William R; Leone, Lisa
2012-07-01
The presence and location of tattoos can be an important component in the identification of remains in the extended postmortem period if remnants of skin persist. However, when there is significant mummification, visualization of tattoos can be problematic. Multiple methods have been proposed to make tattoos more visible, but all have limitations. In this case report, a mummified body was discovered. The presumptive victim was reported to have a small tattoo on her hand, but it was not visible to the naked eye. The hand was photographed using ultraviolet (UV) and infrared (IR) light. A tattoo matching the description was noted in the photographs. In contrast to film-based IR and UV photography, digital UV and IR photography allows rapid visual evaluation of results and optimization of image utility. The ability to quickly modify photographic parameters greatly increases the utility of IR and UV photography in the autopsy suite. © 2012 American Academy of Forensic Sciences.
Using mid-range laser scanners to digitize cultural-heritage sites.
Spring, Adam P; Peters, Caradoc; Minns, Tom
2010-01-01
Here, we explore new, more accessible ways of modeling 3D data sets that both professionals and amateurs can employ in areas such as architecture, forensics, geotechnics, cultural heritage, and even hobbyist modeling. To support our arguments, we present images from a recent case study in digital preservation of cultural heritage using a mid-range laser scanner. Our appreciation of the increasing variety of methods for capturing 3D spatial data inspired our research. Available methods include photogrammetry, airborne lidar, sonar, total stations (a combined electronic and optical survey instrument), and mid- and close-range scanning. They all can produce point clouds of varying density. In our case study, the point cloud produced by a mid-range scanner demonstrates how open source software can make modeling and disseminating data easier. Normally, researchers would model this data using expensive specialized software, and the data wouldn't extend beyond the laser-scanning community.
Dorofeeva, A A; Khrustalev, A V; Krylov, Iu V; Bocharov, D A; Negasheva, M A
2010-01-01
Digital images of the iris were obtained to study peculiarities of iris color during the anthropological examination of 578 students aged 16-24 years. Simultaneously with the registration of the digital images, visual assessment of eye color was carried out using the traditional Bunak scale, based on 12 ocular prostheses. Original software for automatic determination of iris color based on the 12-class Bunak scale was designed, and a computer version of that scale was developed. The proposed software allows iris color to be determined with high validity on the basis of numerical evaluation; its application may reduce the bias due to subjective assessment and methodological divergences between researchers. The software designed for automatic determination of iris color may help develop both theoretical and applied anthropology; it may be used in forensic and emergency medicine, sports medicine, medico-genetic counseling and professional selection.
Study of noninvasive detection of latent fingerprints using UV laser
NASA Astrophysics Data System (ADS)
Li, Hong-xia; Cao, Jing; Niu, Jie-qing; Huang, Yun-gang; Mao, Lin-jie; Chen, Jing-rong
2011-06-01
Latent fingerprints present a considerable challenge in forensics, and a noninvasive procedure that captures a digital image of latent fingerprints is significant in the field of criminal investigation. The capability of photographic techniques using a 266 nm UV Nd:YAG solid-state laser as the excitation light source to provide detailed images of unprocessed latent fingerprints is demonstrated. Unprocessed latent fingerprints were developed on various non-absorbent and absorbing substrates. Exploiting the distinctive absorption, reflection, scattering and fluorescence characteristics of the various residues in fingerprints (fatty acid esters, proteins, carboxylic acid salts, etc.) under UV light to weaken or eliminate the background disturbance and increase the brightness contrast between the fingerprint and the background, and using the 266 nm UV laser as the excitation light source, fresh and old latent fingerprints on the surfaces of four types of non-absorbent objects (magazine cover, glass, back of a cellphone, wood desktop paintwork) and two types of absorbing objects (manila envelope, notebook paper) were noninvasively detected and visualized through reflection photography and fluorescence photography, and the results meet the fingerprint identification requirements of forensic science.
Lin, Hancheng; Wang, Zhenyuan; Dong, Hongmei
2017-01-01
In forensic practice, determination of electrocution as a cause of death usually depends on conventional histological examination of the electrical mark in the body skin, but the limitations of this method include subjective bias by different forensic pathologists, especially in identifying suspicious electrical marks. The aim of our work is to introduce Fourier transform infrared (FTIR) spectroscopy in combination with chemometrics as a complementary tool for providing a relatively objective diagnosis. The results of principal component analysis (PCA) showed that there were significant differences in protein structural profile between electrical marks and normal skin in terms of α-helix, antiparallel β-sheet and β-sheet content. A partial least squares (PLS) model was then established on this spectral dataset and used to discriminate electrical mark from normal skin areas in independent tissue sections, as revealed by color-coded digital maps, making the visualization of electrical injury more intuitive. Our pilot study demonstrates the potential of FTIR spectroscopy as a complementary tool for the diagnosis of electrical marks. PMID:28118398
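A hedged sketch of the chemometric pipeline outlined above, PCA for exploration followed by a PLS model used as a discriminant, is given below. The spectra are synthetic stand-ins, and the preprocessing and the 0.5 decision threshold are assumptions rather than the authors' settings.

```python
# Hedged sketch of the PCA + PLS workflow described above: PCA to inspect spectral
# variance, then PLS used as a discriminant (electrical mark vs. normal skin).
# Spectra here are synthetic stand-ins; the 0.5 cut-off is an assumption.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 30, 400
normal_skin = rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers))
electrical_mark = rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers))
electrical_mark[:, 100:140] += 1.5        # simulated amide-band difference

X = np.vstack([normal_skin, electrical_mark])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = normal, 1 = mark

scores = PCA(n_components=2).fit_transform(X)          # exploratory view of classes
pls = PLSRegression(n_components=2).fit(X, y)          # PLS used as a discriminant
predicted = (pls.predict(X).ravel() > 0.5).astype(int) # assumed 0.5 cut-off
print("PCA score matrix:", scores.shape)
print("training accuracy:", (predicted == y).mean())
```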
Thali, Michael J; Taubenreuther, Ulrike; Karolczak, Marek; Braun, Marcel; Brueschweiler, Walter; Kalender, Willi A; Dirnhofer, Richard
2003-11-01
When a knife is stabbed into bone, it leaves an impression in the bone. The characteristics of this impression (shape, size, etc.) may indicate the type of tool used to produce the patterned injury. Until now it has been impossible in the forensic sciences to document such damage precisely and non-destructively. Micro-computed tomography (Micro-CT) offers an opportunity to analyze patterned injuries of tool marks made in bone. Using high-resolution Micro-CT and computer software, detailed analysis of three-dimensional (3D) architecture has recently become feasible and allows microstructural 3D bone information to be collected. With adequate viewing software, 2D slices in arbitrary planes can be extracted from the 3D dataset. Using such software as a "digital virtual knife," the examiner can interactively section and analyze the 3D sample. Analysis of the bone injury revealed that Micro-CT provides an opportunity to correlate a bone injury to the injury-causing instrument. Even broken knife tips can be graphically and non-destructively assigned to a suspect weapon.
Multiple Representations-Based Face Sketch-Photo Synthesis.
Peng, Chunlei; Gao, Xinbo; Wang, Nannan; Tao, Dacheng; Li, Xuelong; Li, Jie
2016-11-01
Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods only use pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between the neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, CUHK Face Sketch FERET Database, IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.
Insights into bird wing evolution and digit specification from polarizing region fate maps.
Towers, Matthew; Signolet, Jason; Sherman, Adrian; Sang, Helen; Tickle, Cheryll
2011-08-09
The proposal that birds descended from theropod dinosaurs with digits 2, 3 and 4 was recently given support by short-term fate maps, suggesting that the chick wing polarizing region (a group of Sonic hedgehog-expressing cells) gives rise to digit 4. Here we show using long-term fate maps that Green fluorescent protein-expressing chick wing polarizing region grafts contribute only to soft tissues along the posterior margin of digit 4, supporting fossil data that birds descended from theropods that had digits 1, 2 and 3. In contrast, digit IV of the chick leg with four digits (I-IV) arises from the polarizing region. To determine how digit identity is specified over time, we inhibited Sonic hedgehog signalling. Fate maps show that the polarizing region and adjacent cells are specified in parallel through a series of anterior to posterior digit fates, a process of digit specification that we suggest is involved in patterning all vertebrate limbs with more than three digits.
Keating, Brendan; Bansal, Aruna T; Walsh, Susan; Millman, Jonathan; Newman, Jonathan; Kidd, Kenneth; Budowle, Bruce; Eisenberg, Arthur; Donfack, Joseph; Gasparini, Paolo; Budimlija, Zoran; Henders, Anjali K; Chandrupatla, Hareesh; Duffy, David L; Gordon, Scott D; Hysi, Pirro; Liu, Fan; Medland, Sarah E; Rubin, Laurence; Martin, Nicholas G; Spector, Timothy D; Kayser, Manfred
2013-05-01
When a forensic DNA sample cannot be associated directly with a previously genotyped reference sample by standard short tandem repeat profiling, the investigation required for identifying perpetrators, victims, or missing persons can be both costly and time consuming. Here, we describe the outcome of a collaborative study using the Identitas Version 1 (v1) Forensic Chip, the first commercially available all-in-one tool dedicated to the concept of developing intelligence leads based on DNA. The chip allows parallel interrogation of 201,173 genome-wide autosomal, X-chromosomal, Y-chromosomal, and mitochondrial single nucleotide polymorphisms for inference of biogeographic ancestry, appearance, relatedness, and sex. The first assessment of the chip's performance was carried out on 3,196 blinded DNA samples of varying quantities and qualities, covering a wide range of biogeographic origin and eye/hair coloration as well as variation in relatedness and sex. Overall, 95 % of the samples (N = 3,034) passed quality checks with an overall genotype call rate >90 % on variable numbers of available recorded trait information. Predictions of sex, direct match, and first to third degree relatedness were highly accurate. Chip-based predictions of biparental continental ancestry were on average ~94 % correct (further support provided by separately inferred patrilineal and matrilineal ancestry). Predictions of eye color were 85 % correct for brown and 70 % correct for blue eyes, and predictions of hair color were 72 % for brown, 63 % for blond, 58 % for black, and 48 % for red hair. From the 5 % of samples (N = 162) with <90 % call rate, 56 % yielded correct continental ancestry predictions while 7 % yielded sufficient genotypes to allow hair and eye color prediction. Our results demonstrate that the Identitas v1 Forensic Chip holds great promise for a wide range of applications including criminal investigations, missing person investigations, and for national security purposes.
Parallel database search and prime factorization with magnonic holographic memory devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khitun, Alexander
In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device, which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.
Parallel database search and prime factorization with magnonic holographic memory devices
NASA Astrophysics Data System (ADS)
Khitun, Alexander
2015-12-01
In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device, which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.
NASA Astrophysics Data System (ADS)
Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan
2017-11-01
Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Maladjustment of the parallelism of a binocular instrument's ocular axes causes symptoms such as dizziness and nausea in the observer when the instrument is used for a long time. A digital calibration instrument for binocular photoelectric equipment was developed to detect ocular axis parallelism, so that the optical axis deviation can be measured quantitatively. As a testing instrument, its precision must be much higher than that of the instrument under test. The factors that influence detection accuracy are analyzed. Such factors exist in every link of the testing process and affect the precision of the detecting instrument; they can be divided into two categories: factors that directly affect the position of the reticle image, and factors that affect the calculation of the center of the reticle image. The synthesized error is then calculated, and the individual errors are distributed reasonably to ensure the accuracy of the calibration instrument.
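The abstract states that a synthesized error is calculated from the individual error sources; a common way to combine independent contributions, assumed here purely for illustration, is the root-sum-square. The error sources and magnitudes below are hypothetical.

```python
# Root-sum-square (RSS) combination of independent error contributions, assumed
# here only for illustration; the listed sources and magnitudes are hypothetical.
import math

def synthesized_error(contributions_arcsec):
    """Root-sum-square combination of independent 1-sigma error contributions."""
    return math.sqrt(sum(e * e for e in contributions_arcsec))

if __name__ == "__main__":
    sources = {
        "reticle image position (collimator)": 2.0,   # hypothetical values, arcsec
        "detector pixel quantization": 1.0,
        "centroid calculation of reticle image": 1.5,
    }
    print(f"synthesized error = {synthesized_error(sources.values()):.2f} arcsec")
```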
Raikar, Neha Ajit; Meundi, Manasa A; David, Chaya M; Rao, Mahesh Dathu; Jogigowda, Sanjay Chikkarasinakere
2016-01-01
Personal identification is a vital arena of forensic investigation, facilitating the search for missing persons. This process of identification is eased by the determination of age, sex, and ethnicity. In situations where there are fragmented and mutilated skeletal remains, sex determination is relatively difficult, and it becomes important to establish the accuracy of individual bones. This study aims to evaluate sexual dimorphism in foramen magnum (FM) dimensions in the South Indian population using digital submentovertex (SMV) radiographs. 150 individuals (75 males and 75 females) were subjected to digital SMV radiography. The FM in each resultant image was assessed for longitudinal and transverse diameters, circumference, and area. In addition, one particular shape was assigned to each image based on the classification of FM shapes by Chethan et al. Three qualified oral radiologists performed all the measurements twice within an interval of 10 days. The values obtained for all four parameters were statistically significant and higher in males than in females. The most common morphology of the FM was an egg shape, while hexagonal was the least common. Circumference was the best indicator of sex, followed by area, transverse diameter, and longitudinal diameter. The accuracy of 67.3% achieved with digital SMV radiographs makes them a reliable and reproducible alternative to dry skulls for sex determination.
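The abstract does not name the statistical classifier behind the 67.3% figure; as an illustration only, the sketch below applies linear discriminant analysis to the four FM measurements, using synthetic placeholder data rather than the study's measurements.

```python
# Illustrative sketch of sex classification from the four foramen magnum
# measurements (longitudinal diameter, transverse diameter, circumference, area)
# using linear discriminant analysis. The classifier choice and the training data
# are assumptions, not taken from the study.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
n = 75   # 75 males and 75 females, as in the study design
males   = rng.normal([36.0, 30.5, 105.0, 860.0], [2.5, 2.5, 6.0, 90.0], (n, 4))
females = rng.normal([34.5, 29.0, 100.0, 800.0], [2.5, 2.5, 6.0, 90.0], (n, 4))

X = np.vstack([males, females])
y = np.array([1] * n + [0] * n)          # 1 = male, 0 = female

lda = LinearDiscriminantAnalysis().fit(X, y)
print("apparent accuracy:", lda.score(X, y))
print("predicted sex for one new case:", lda.predict([[35.2, 29.8, 103.0, 830.0]]))
```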
Study of a hybrid multispectral processor
NASA Technical Reports Server (NTRS)
Marshall, R. E.; Kriegler, F. J.
1973-01-01
A hybrid processor is described offering enough handling capacity and speed to process efficiently the large quantities of multispectral data that can be gathered by scanner systems such as MSDS, SKYLAB, ERTS, and ERIM M-7. Combinations of general-purpose and special-purpose hybrid computers were examined to include both analog and digital types as well as all-digital configurations. The current trend toward lower costs for medium-scale digital circuitry suggests that the all-digital approach may offer the better solution within the time frame of the next few years. The study recommends and defines such a hybrid digital computing system in which both special-purpose and general-purpose digital computers would be employed. The tasks of recognizing surface objects would be performed in a parallel, pipeline digital system while the tasks of control and monitoring would be handled by a medium-scale minicomputer system. A program to design and construct a small, prototype, all-digital system has been started.
Digital technology and human development: a charter for nature conservation.
Maffey, Georgina; Homans, Hilary; Banks, Ken; Arts, Koen
2015-11-01
The application of digital technology in conservation holds much potential for advancing the understanding of, and facilitating interaction with, the natural world. In other sectors, digital technology has long been used to engage communities and share information. Human development, which holds parallels with the nature conservation sector, has seen a proliferation of innovation in technological development. Throughout this Perspective, we consider what nature conservation can learn from the introduction of digital technology in human development. From this, we derive a charter to be used before and throughout project development, in order to help reduce replication and failure of digital innovation in nature conservation projects. We argue that the proposed charter will promote collaboration with the development of digital tools and ensure that nature conservation projects progress appropriately with the development of new digital technologies.
Direct drive digital servo press with high parallel control
NASA Astrophysics Data System (ADS)
Murata, Chikara; Yabe, Jun; Endou, Junichi; Hasegawa, Kiyoshi
2013-12-01
The direct drive digital servo press has been developed through university-industry joint research and development since 1998. On the basis of these results, a 4-axis direct drive digital servo press was developed and brought to market in April 2002. This servo press is composed of one slide supported by 4 ball screws, and each axis has a linear scale measuring its position with high accuracy, at less than the micrometer level. Each axis is controlled independently by a servo motor and feedback system. This system can maintain a high level of parallelism and high accuracy even under a large eccentric load. Furthermore, 'full stroke full power' is obtained by using ball screws. Using these features, various new types of press forming and stamping have been developed and put into production. The new stamping and forming methods are introduced, along with a strategy for press forming with high added value that meets manufacturing needs, and the future direction of press forming is also discussed.
NASA Technical Reports Server (NTRS)
Schumann, H. H. (Principal Investigator)
1972-01-01
The author has identified the following significant results. Preliminary analysis of DCS data from the USGS Verde River stream flow measuring site indicates the DCS system is furnishing high quality data more frequently than had been expected. During the 43-day period between Nov. 3 and Dec. 15, 1972, 552 DCS transmissions were received during 193 data passes. The amount of data received far exceeded the single high quality transmission per 12-hour period expected from the DCS system. The digital-parallel ERTS-1 data have furnished sufficient information to accurately compute mean daily gage heights. These, in turn, are used to compute average daily streamflow rates during periods of stable or slowly changing flow conditions. The digital-parallel data have also furnished useful information during peak flow periods. However, the serial-digital DCS capability, currently under development for transmitting streamflow data, should provide data of greater utility for determining times of flood peaks.
Going Digital - The Transition from Mark IV to DBBC at Onsala
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Haas, Rüdiger; La Porta, Laura; Bertarini, Alessandra
2014-12-01
The Onsala Space Observatory is currently equipped with both a VLBI Mark IV rack and a digital BBC (DBBC). The Mark IV rack at Onsala has been used operationally for both astronomical and geodetic VLBI for more than 40 years. In 2011, Onsala purchased a DBBC and we started to test it and to gain experience with the new device, both for astronomical and geodetic VLBI. The DBBC was upgraded several times and the Field System (FS) interface was implemented. We did parallel recordings, with both the old Mark IV/Mark 5A system and the new DBBC/Mark 5B+ system, during numerous geodetic VLBI sessions. Several R1, T2, and Euro sessions were correlated during the last two years by the Bonn correlator with Onsala being included both as an analog station (two-letter code On) and as a digital station (two-letter code Od). We present results from these parallel sessions, both results from the original correlation and results from the analysis of the corresponding databases.
Morphologic analysis of third-molar maturity by digital orthopantomographic assessment.
Introna, Francesco; Santoro, Valeria; De Donno, Antonio; Belviso, Maura
2008-03-01
Accurate timing of the eruption of first and permanent teeth is an important parameter in forensic odontology to establish the age of dead or live individuals. Determination of adulthood may decide, for example, whether an individual convicted of a crime is sentenced as an adult and incarcerated in a state penal institution or as a juvenile and sent to a juvenile camp. At present, there is a large immigrant population in Italy, and young foreign criminals sometimes have false passports bearing a later birth date, with the aim of evading punishment. In such circumstances, age determination is becoming a significant forensic issue. Late in adolescence, after formation of the premolars and canines, only the third molars continue to develop. According to several studies, although the third molars are the most variable teeth in the dentition, they remain the most reliable biologic indicator available for estimation of age during the middle teens and early twenties. In this study, the authors test the possibilities offered by orthopantomography executed by means of digital technology, with the aim of exploiting the advantages of the computerized digital technique compared with the conventional technique, to determine adult age on the basis of root development of the third molar. Digital radiography is simple to use, quick, and effective, allowing superimposition and enlargement; the images can be electronically stored and transported. In comparison with traditional orthopantomography, the digital technique features greater diagnostic accuracy for some anatomic structures: upper and lower front teeth, root apexes, floor of the nasal fossa and maxillary sinus, nasal septum, mandibular condylus. Moreover, digital orthopantomography suffers less from artifacts. The digital orthopantomographies of 83 Caucasian subjects (43 females and 40 males) aged between 16 and 22 years were analyzed in standard conditions, assessing the degree of maturation of the upper and lower third molars. A standardized computer procedure was used to acquire the x-ray images, recording 3 images per plate: the overall orthopantomogram and 2 optical enlargements of the left and right sides, to reveal the third molars while keeping the image resolution unaltered. For the analysis, the authors adopted Demirjian's staging system, which classifies development of the third molar into 8 stages (A, B, C, D, E, F, G, H) on the basis of morphologic criteria. This has been statistically proved to feature notable precision and high predictive ability. To assess any sex-related variations in mineralization speed, the series was subdivided by gender. The study demonstrated that such differences are more evident under the age of 18 years. Overall, the observation of 245 third molars showed faster development of the upper than the lower third molars, a prevalence of stages D to G in the age range between 16 and 18 years, and a clear predominance of stage H in individuals over 18 years of age. Finally, an intermediate stage between G and H was demonstrated in subjects aged between 17 and 21 years.
ERIC Educational Resources Information Center
Loftus, Maria; Tiernan, Peter; Cherian, Sebastian
2014-01-01
Evidence has shown that students have greatly increased their consumption of digital video, principally through video sharing sites. In parallel, students' participation in video sharing and creation has also risen. As educators, we need to question how this can be effectively translated into a positive learning experience for students, whilst…
Serial and Parallel Processes in Working Memory after Practice
ERIC Educational Resources Information Center
Oberauer, Klaus; Bialkova, Svetlana
2011-01-01
Six young adults practiced for 36 sessions on a working-memory updating task in which 2 digits and 2 spatial positions were continuously updated. Participants either did 1 updating operation at a time, or attempted 1 numerical and 1 spatial operation at the same time. In contrast to previous research using the same paradigm with a single digit and…
Attitudes and Opinions of Special Education Candidate Teachers Regarding Digital Technology
ERIC Educational Resources Information Center
Ozdamli, Fezile
2017-01-01
Parallel to the rapid development of information and communication technology, the demand for its use in schools and classrooms is increasing. The purpose of this study is therefore to determine the attitudes and views of students who will be special education teachers in the future regarding the use of digital technology in education. A mixed method,…
ERIC Educational Resources Information Center
Yavuz Konokman, Gamze; Yanpar Yelken, Tugba
2016-01-01
The purpose of the study was to determine the effect of preparing digital stories through an inquiry based learning approach on prospective teachers' resistive behaviors toward technology based instruction and conducting research. The research model was convergent parallel design. The sample consisted of 50 prospective teachers who had completed…
NASA Astrophysics Data System (ADS)
Li, Guoqiang; Qian, Feng
2001-11-01
We present, for the first time to our knowledge, a generalized lookahead logic algorithm for number conversion from signed-digit to complement representation. By properly encoding the signed digits, all the operations are performed by binary logic, and unified logical expressions can be obtained for conversion from modified-signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using an electron-trapping device is employed and experimental results are shown. This optical module is suitable for implementing complex logic functions in sum-of-products form. The algorithm and architecture are compatible with a general-purpose optoelectronic computing system.
A new approach for the analysis of facial growth and age estimation: Iris ratio
Machado, Carlos Eduardo Palhares; Flores, Marta Regina Pinheiro; Lima, Laíse Nascimento Correia; Tinoco, Rachel Lima Ribeiro; Bezerra, Ana Cristina Barreto; Evison, Martin Paul; Guimarães, Marco Aurélio
2017-01-01
The study of facial growth is explored in many fields of science, including anatomy, genetics, and forensics. In the field of forensics, it acts as a valuable tool for combating child pornography. The present research proposes a new method, based on relative measurements and fixed references of the human face—specifically considering measurements of the diameter of the iris (iris ratio)—for the analysis of facial growth in association with age in children and sub-adults. The experimental sample consisted of digital photographs of 1000 Brazilian subjects, aged between 6 and 22 years, distributed equally by sex and divided into five specific age groups (6, 10, 14, 18, and 22 year olds ± one month). The software package SAFF-2D® (Forensic Facial Analysis System, Brazilian Federal Police, Brazil) was used for positioning 11 landmarks on the images. Ten measurements were calculated and used as fixed references to evaluate the growth of the other measurements for each age group, as well as the accumulated growth (6–22 years old). The Intraclass Correlation Coefficient (ICC) was applied for the evaluation of intra-examiner and inter-examiner reliability within a specific set of images. Pearson’s Correlation Coefficient was used to assess the association between each measurement taken and the respective age groups. ANOVA and Post-hoc Tukey tests were used to search for statistical differences between the age groups. The outcomes indicated that facial structures grow with different timing in children and adolescents. Moreover, the growth allometry expressed in this study may be used to understand what structures have more or less proportional variation in function for the age ranges studied. The diameter of the iris was found to be the most stable measurement compared to the others and represented the best cephalometric measurement as a fixed reference for facial growth ratios (or indices). The method described shows promising potential for forensic applications, especially as part of the armamentarium against crimes involving child pornography and child abuse. PMID:28686631
A new approach for the analysis of facial growth and age estimation: Iris ratio.
Machado, Carlos Eduardo Palhares; Flores, Marta Regina Pinheiro; Lima, Laíse Nascimento Correia; Tinoco, Rachel Lima Ribeiro; Franco, Ademir; Bezerra, Ana Cristina Barreto; Evison, Martin Paul; Guimarães, Marco Aurélio
2017-01-01
The study of facial growth is explored in many fields of science, including anatomy, genetics, and forensics. In the field of forensics, it acts as a valuable tool for combating child pornography. The present research proposes a new method, based on relative measurements and fixed references of the human face, specifically considering measurements of the diameter of the iris (iris ratio), for the analysis of facial growth in association with age in children and sub-adults. The experimental sample consisted of digital photographs of 1000 Brazilian subjects, aged between 6 and 22 years, distributed equally by sex and divided into five specific age groups (6, 10, 14, 18, and 22 year olds ± one month). The software package SAFF-2D® (Forensic Facial Analysis System, Brazilian Federal Police, Brazil) was used for positioning 11 landmarks on the images. Ten measurements were calculated and used as fixed references to evaluate the growth of the other measurements for each age group, as well as the accumulated growth (6-22 years old). The Intraclass Correlation Coefficient (ICC) was applied for the evaluation of intra-examiner and inter-examiner reliability within a specific set of images. Pearson's Correlation Coefficient was used to assess the association between each measurement taken and the respective age groups. ANOVA and Post-hoc Tukey tests were used to search for statistical differences between the age groups. The outcomes indicated that facial structures grow with different timing in children and adolescents. Moreover, the growth allometry expressed in this study may be used to understand what structures have more or less proportional variation in function for the age ranges studied. The diameter of the iris was found to be the most stable measurement compared to the others and represented the best cephalometric measurement as a fixed reference for facial growth ratios (or indices). The method described shows promising potential for forensic applications, especially as part of the armamentarium against crimes involving child pornography and child abuse.
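A minimal sketch of the iris-ratio idea, expressing an inter-landmark distance in units of the iris diameter so that growth can be compared without an absolute scale, follows; the landmark names and pixel coordinates are hypothetical and not output of the SAFF-2D software.

```python
# Minimal sketch of the "iris ratio" idea: normalize inter-landmark distances by
# the iris diameter so growth can be compared across photographs without an
# absolute scale. Landmark names and coordinates are hypothetical placeholders.
import math

def distance(p, q):
    return math.dist(p, q)

def iris_ratio(measurement_px: float, iris_diameter_px: float) -> float:
    """Express a facial measurement in units of iris diameters."""
    return measurement_px / iris_diameter_px

if __name__ == "__main__":
    # Hypothetical pixel coordinates of a few landmarks in one photograph.
    iris_left, iris_right = (412.0, 388.0), (447.0, 388.0)   # limbus to limbus
    nasion, gnathion = (540.0, 350.0), (540.0, 610.0)
    iris_d = distance(iris_left, iris_right)
    face_height = distance(nasion, gnathion)
    print(f"lower face height = {iris_ratio(face_height, iris_d):.2f} iris diameters")
```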
2013-06-01
Defense Forensics: Additional Planning and Oversight Needed to Establish an Enduring Expeditionary Forensic ... (report snippet: "... forensic pathology, forensic anthropology, and forensic toxicology. DOD's forensic directive defines DOD components as the Office of the ...")
Crosetto, D.B.
1996-12-31
The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.
Crosetto, Dario B.
1996-01-01
The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.
Digital Recording and Documentation of Endoscopic Procedures: Do Patients and Doctors Think Alike?
Willner, Nadav; Peled-Raz, Maya; Shteinberg, Dan; Shteinberg, Michal; Keren, Dean; Rainis, Tova
2016-01-01
Aims and Methods. We conducted a survey study of a large number of patients and gastroenterologists aimed at identifying relevant predictors of interest in digital recording and documentation (DRD) of endoscopic procedures. Outpatients presenting to the endoscopy unit at our institution for an endoscopy examination were anonymously surveyed regarding their views and opinions on a possible recording of the procedure. A parallel survey of gastroenterologists was conducted. Results. 417 patients and 62 gastroenterologists participated in the two parallel surveys regarding DRD of endoscopic procedures. 66.4% of the patients expressed interest in digital documentation of their endoscopic procedure, with 90.5% of them requesting a copy. 43.6% of the physicians supported digital recording while 27.4% opposed it, and 48.4% opposed making a copy of the recording available to the patient. No sociodemographic or background factors predicted patients' interest in DRD. 66% of the physicians reported having recording facilities in their institutions, but only 43.6% of them stated that they perform recording. Having institutional guidelines for DRD was found to be the only significant predictor of routine recording. Conclusions. Our study exposes patients' positive views of digital recording and documentation of endoscopic procedures. In contrast, physicians appear to be much more reluctant towards DRD and are centrally motivated by legal concerns when opposing DRD, as well as when supporting it.
Studies in optical parallel processing. [All optical and electro-optic approaches
NASA Technical Reports Server (NTRS)
Lee, S. H.
1978-01-01
Threshold and A/D devices for converting a gray scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as an approach to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out, on each image element of these rows, in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.
López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-01-01
Objective Newcomb-Benford’s Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare public available waiting lists (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Design Analysis of the frequency of Finnish and Spanish WLs first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson’s χ2, mean absolute deviation and Kuiper tests. Setting/participants Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ2 test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ2 test). Conclusions Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing. PMID:29743333
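As a hedged illustration of the screening approach described above, the sketch below tallies first digits of a synthetic data set and compares them with the Newcomb-Benford expectation using a chi-square test; the data and sample size are invented.

# Sketch: test a list of counts against first-digit Benford/NBL expectations
# with a chi-square goodness-of-fit test (data below are made up).
import numpy as np
from scipy import stats

counts = np.random.default_rng(1).lognormal(mean=5, sigma=1.2, size=500)
first_digits = np.array([int(("%e" % x)[0]) for x in counts])   # leading digit

observed = np.array([(first_digits == d).sum() for d in range(1, 10)])
expected = np.log10(1 + 1 / np.arange(1, 10)) * len(first_digits)

chi2, p = stats.chisquare(observed, expected)
print(f"chi2={chi2:.1f}, p={p:.3f}  (low p flags deviation from NBL)")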
Mendelsohn, Alana I.; Dasen, Jeremy S.; Jessell, Thomas M.
2017-01-01
Summary The establishment of spinal motor neuron subclass diversity is achieved through developmental programs that are aligned with the organization of muscle targets in the limb. The evolutionary emergence of digits represents a specialized adaptation of limb morphology, yet it remains unclear how the specification of digit-innervating motor neuron subtypes parallels the elaboration of digits. We show that digit-innervating motor neurons can be defined by selective gene markers and distinguished from other LMC neurons by the expression of a variant Hox gene repertoire and by the failure to express a key enzyme involved in retinoic acid synthesis. This divergent developmental program is sufficient to induce the specification of digit-innervating motor neurons, emphasizing the specialized status of digit control in the evolution of skilled motor behaviors. Our findings suggest that the emergence of digits in the limb is matched by distinct mechanisms for specifying motor neurons that innervate digit muscles. PMID:28190640
Image Halftoning Using Optimized Dot Diffusion
1998-01-01
The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. Established approaches to digital halftoning include ordered dither [1], error diffusion [2], neural-net based methods [8], and more recently direct binary search (DBS) [7]. Ordered dither suffers from periodic patterns; error-diffused halftones, on the other hand, do not suffer from periodicity and offer a blue-noise characteristic [3].
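For readers unfamiliar with the error diffusion baseline mentioned above, here is a minimal sketch of classic Floyd-Steinberg error diffusion (not the paper's optimized dot diffusion); it illustrates the serial error-propagation that dot diffusion's parallelism avoids.

# Sketch: Floyd-Steinberg error diffusion on a grayscale array in [0, 1].
import numpy as np

def error_diffuse(img):
    """Binarize a float image by diffusing the quantization error forward."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

halftone = error_diffuse(np.linspace(0, 1, 64 * 64).reshape(64, 64))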
Flexible and unique representations of two-digit decimals.
Zhang, Li; Chen, Min; Lin, Chongde; Szűcs, Denes
2014-09-01
We examined the representation of two-digit decimals by studying distance and compatibility effects in magnitude comparison tasks in four experiments. Using number pairs with different leftmost digits, we found both the second digit distance effect and compatibility effect with two-digit integers but only the second digit distance effect with two-digit pure decimals. This suggests that both integers and pure decimals are processed in a compositional manner. In contrast, neither the second digit distance effect nor the compatibility effect was observed in two-digit mixed decimals, thereby showing no evidence for compositional processing of two-digit mixed decimals. However, when the relevance of rightmost-digit processing was increased by adding some decimal pairs with the same leftmost digits, both pure and mixed decimals produced the compatibility effect. Overall, results suggest that the processing of decimals is flexible and depends on the relevance of unique digit positions. This processing mode is different from integer analysis in that two-digit mixed decimals demonstrate parallel compositional processing only when the rightmost digit is relevant. Findings suggest that people probably do not represent decimals by simply ignoring the decimal point and converting them to natural numbers. Copyright © 2014 Elsevier B.V. All rights reserved.
Trace DNA analysis: do you know what your neighbour is doing? A multi-jurisdictional survey.
Raymond, Jennifer J; van Oorschot, Roland A H; Walsh, Simon J; Roux, Claude
2008-01-01
Since 1997 the analysis of DNA recovered from handled objects, or 'trace' DNA, has become routine and is frequently demanded from crime scene examinations. However, this analysis often produces unpredictable results. The factors affecting the recovery of full profiles are numerous, and include varying methods of collection and analysis. Communication between forensic laboratories in Australia and New Zealand has been limited in the past, due in some part to sheer distance. Because of its relatively small population and low number of forensic jurisdictions, this region is in an excellent position to provide a collective approach. However, the protocols, training methods and research of each jurisdiction had not been widely exchanged. A survey was developed to benchmark the current practices involved in trace DNA analysis, aiming to provide information for training programs and research directions, and to identify factors contributing to the success or failure of the analysis. The survey was divided into three target groups: crime scene officers, DNA laboratory scientists, and managers of these staff. In late 2004 surveys were sent to forensic organisations in every Australian jurisdiction and New Zealand. A total of 169 completed surveys were received, with a return rate of 54%. Information was collated regarding sampling, extraction, amplification and analysis methods, contamination prevention, samples collected, success rates, personnel training and education, and concurrent fingerprinting. The data from the survey responses provided an insight into aspects of trace DNA analysis, from crime scene to interpretation and management. Several concerning factors arose from the survey. Results collation was identified as a significant issue, being poor and differing widely, which prevents inter-jurisdictional comparison and intra-jurisdictional assessment of both processes and outputs. A second point of note is the widespread lack of refresher training and proficiency testing, with no set standard for initial training courses. A common theme to these and other issues was the need for a collective approach to training and methodology in trace DNA analysis. Trace DNA is a small fraction of the evidence available in current investigations, and parallels to these results and problems will no doubt be found in other forensic disciplines internationally. The significant point to be realised from this study is the need for effective communication lines between forensic organisations to ensure that best practice is followed, ideally with a cohesive pan-jurisdictional approach.
Computer Sciences and Data Systems, volume 2
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.
One GHz digitizer for space based laser altimeter
NASA Technical Reports Server (NTRS)
Staples, Edward J.
1991-01-01
This is the final report for the research and development of the one GHz digitizer for space based laser altimeter. A feasibility model was designed, built, and tested. Only partial testing of essential functions of the digitizer was completed. Hybrid technology was incorporated which allows analog storage (memory) of the digitally sampled data. The actual sampling rate is 62.5 MHz, but executed in 16 parallel channels, to provide an effective sampling rate of one GHz. The average power consumption of the one GHz digitizer is not more than 1.5 Watts. A one GHz oscillator is incorporated for timing purposes. This signal is also made available externally for system timing. A software package was also developed for internal use (controls, commands, etc.) and for data communication with the host computer. The digitizer is equipped with an onboard microprocessor for this purpose.
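A small sketch of the time-interleaving arithmetic described above (16 channels at 62.5 MHz giving an effective 1 GHz rate); the test tone and buffer length are arbitrary choices, not parameters of the actual digitizer.

# Sketch: 16 staggered channels at 62.5 MS/s interleave to a 1 GS/s stream.
import numpy as np

f_ch, n_ch = 62.5e6, 16
t_eff = 1.0 / (f_ch * n_ch)                      # 1 ns effective sample period
t = np.arange(1024) * t_eff
signal = np.sin(2 * np.pi * 40e6 * t)            # 40 MHz test tone

channels = [signal[k::n_ch] for k in range(n_ch)]   # each channel at 62.5 MS/s
interleaved = np.empty_like(signal)
for k, ch in enumerate(channels):
    interleaved[k::n_ch] = ch
assert np.allclose(interleaved, signal)          # interleaving rebuilds the 1 GS/s record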
Teaching forensic pathology to undergraduates at Zhongshan School of Medicine.
Zhou, Nan; Wu, Qiu-Ping; Su, Terry; Zhao, Qian-Hao; Yin, Kun; Zheng, Da; Zheng, Jing-Jing; Huang, Lei; Cheng, Jian-Ding
2018-04-01
Producing qualified forensic pathological practitioners is a common difficulty around the world. In China, forensic pathology is one of the required major subspecialties for undergraduates majoring in forensic medicine, in contrast to forensic education in Western countries where forensic pathology is often optional. The enduring predicament is that the professional qualities and abilities of forensic students from different institutions vary due to the lack of an efficient forensic pedagogical model. The purpose of this article is to describe the new pedagogical model of forensic pathology at Zhongshan School of Medicine, Sun Yat-sen University, which is characterised by: (a) imparting a broad view of forensic pathology and basic knowledge of duties and tasks in future careers to students; (b) educating students in primary skills on legal and medical issues, as well as advanced forensic pathological techniques; (c) providing students with resources to broaden their professional minds, and opportunities to improve their professional qualities and abilities; and (d) mentoring students on occupational preparation and further forensic education. In the past few years, this model has resulted in numerous notable forensic students accomplishing achievements in forensic practice and forensic scientific research. We therefore expect this pedagogical model to establish the foundation for forensic pathological education and other subspecialties of forensic medicine in China and abroad.
Duewer, David L; Kline, Margaret C; Romsos, Erica L; Toman, Blaza
2018-05-01
The highly multiplexed polymerase chain reaction (PCR) assays used for forensic human identification perform best when used with an accurately determined quantity of input DNA. To help ensure the reliable performance of these assays, we are developing a certified reference material (CRM) for calibrating human genomic DNA working standards. To enable sharing information over time and place, CRMs must provide accurate and stable values that are metrologically traceable to a common reference. We have shown that droplet digital PCR (ddPCR) limiting dilution end-point measurements of the concentration of DNA copies per volume of sample can be traceably linked to the International System of Units (SI). Unlike values assigned using conventional relationships between ultraviolet absorbance and DNA mass concentration, entity-based ddPCR measurements are expected to be stable over time. However, the forensic community expects DNA quantity to be stated in terms of mass concentration rather than entity concentration. The transformation can be accomplished given SI-traceable values and uncertainties for the number of nucleotide bases per human haploid genome equivalent (HHGE) and the average molar mass of a nucleotide monomer in the DNA polymer. This report presents the considerations required to establish the metrological traceability of ddPCR-based mass concentration estimates of human nuclear DNA. Graphical abstract The roots of metrological traceability for human nuclear DNA mass concentration results. Values for the factors in blue must be established experimentally. Values for the factors in red have been established from authoritative source materials. HHGE stands for "haploid human genome equivalent"; there are two HHGE per diploid human genome.
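A minimal sketch of the copies-to-mass transformation outlined above; the genome size and mean molar mass per base pair used here are rough, illustrative constants, not the SI-traceable values the report establishes.

# Sketch: convert a ddPCR entity concentration (HHGE copies per microlitre)
# into a mass concentration. Constants below are approximate and assumed.
AVOGADRO = 6.02214076e23          # 1/mol
BP_PER_HHGE = 3.1e9               # assumed base pairs per haploid genome equivalent
MOLAR_MASS_PER_BP = 618.0         # assumed mean g/mol per base pair

def copies_to_ng_per_uL(copies_per_uL: float) -> float:
    grams_per_copy = BP_PER_HHGE * MOLAR_MASS_PER_BP / AVOGADRO
    return copies_per_uL * grams_per_copy * 1e9   # g -> ng

print(copies_to_ng_per_uL(10_000))   # roughly 32 ng/uL for 1e4 copies/uL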
Design of neurophysiologically motivated structures of time-pulse coded neurons
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.
2009-04-01
A common methodology for the biologically motivated concept of building sensor processing systems with parallel input, picture-operand processing, and time-pulse coding is described in this paper. The advantages of such coding for the creation of parallel, programmable 2D-array structures for next-generation digital computers, which require untraditional numerical systems for processing analog, digital, hybrid, and neuro-fuzzy operands, are shown. Simulation results for the optoelectronic time-pulse coded intelligent neural elements (OETPCINE) and implementation results for a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intelligence, and circuit flexibility of OETPCINE for the creation of advanced 2D structures. The developed equivalentor-nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10-100 µs.
Nose, Atsushi; Yamazaki, Tomohiro; Katayama, Hironobu; Uehara, Shuji; Kobayashi, Masatsugu; Shida, Sayaka; Odahara, Masaki; Takamiya, Kenichi; Matsumoto, Shizunori; Miyashita, Leo; Watanabe, Yoshihiro; Izawa, Takashi; Muramatsu, Yoshinori; Nitta, Yoshikazu; Ishikawa, Masatoshi
2018-04-24
We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip is a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps with 2 × 2 binning) vision sensor with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operations per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column-parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.
Aircraft mishap investigation with radiology-assisted autopsy: helicopter crash with control injury.
Folio, R Les; Harcke, H Theodore; Luzi, Scott A
2009-04-01
Radiology-assisted autopsy traditionally has been plain film-based, but now is being augmented by computed tomography (CT). The authors present a two-fatality rotary wing crash scenario illustrating application of advanced radiographic techniques that can guide and supplement the forensic pathologist's physical autopsy. The radiographic findings also have the potential for use by the aircraft mishap investigation board. Prior to forensic autopsy, the two crash fatalities were imaged with conventional two-dimensional radiographs (digital technique) and with multidetector CT. The CT data were used for multiplanar two-dimensional and three-dimensional (3D) image reconstruction. The forensic pathologist was provided with information about skeletal fractures, metal fragment location, and other pathologic findings of potential use in the physical autopsy. The radiologic autopsy served as a supplement to the physical autopsy and did not replace the traditional autopsy in these cases. Both individuals sustained severe blunt force trauma with multiple fractures of the skull, face, chest, pelvis, and extremities. Individual fractures differed; however, one individual showed hand and lower extremity injuries similar to those associated with control of the aircraft at the time of impact. The concept of "control injury" has been challenged by Campman et al., who found that control surface injuries have a low sensitivity and specificity for establishing who the pilot was in an accident. The application of new post mortem imaging techniques may help to resolve control injury questions. In addition, the combination of injuries in our cases may contribute to further understanding of control surface injury patterns in helicopter mishaps.
Forensic analysis of Venezuelan elections during the Chávez presidency.
Jiménez, Raúl; Hidalgo, Manuel
2014-01-01
Hugo Chávez dominated the Venezuelan electoral landscape since his first presidential victory in 1998 until his death in 2013. Nobody doubts that he always received considerable voter support in the numerous elections held during his mandate. However, the integrity of the electoral system has come into question since the 2004 Presidential Recall Referendum. From then on, different sectors of society have systematically alleged electoral irregularities or biases in favor of the incumbent party. We have carried out a thorough forensic analysis of the national-level Venezuelan electoral processes held during the 1998-2012 period to assess these complaints. The second-digit Benford's law and two statistical models of vote distributions, recently introduced in the literature, are reviewed and used in our case study. In addition, we discuss a new method to detect irregular variations in the electoral roll. The outputs obtained from these election forensic tools are examined taking into account the substantive context of the elections and referenda under study. Thus, we reach two main conclusions. Firstly, all the tools uncover anomalous statistical patterns, which are consistent with election fraud from 2004 onwards. Although our results are not a concluding proof of fraud, they signal the Recall Referendum as a turning point in the integrity of the Venezuelan elections. Secondly, our analysis calls into question the reliability of the electoral register since 2004. In particular, we found irregular variations in the electoral roll that were decisive in winning the 50% majority in the 2004 Referendum and in the 2012 Presidential Elections.
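For reference, the second-digit Benford distribution used in this style of election forensics can be computed directly; a short sketch follows (standard formula, not the authors' code).

# Sketch: expected second-digit Benford frequencies,
# P(d2 = d) = sum_{k=1..9} log10(1 + 1/(10k + d)), for d = 0..9.
import math

def second_digit_benford(d: int) -> float:
    return sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))

probs = [second_digit_benford(d) for d in range(10)]
print([round(p, 4) for p in probs])   # ~0.1197 for d=0 down to ~0.0850 for d=9
print(round(sum(probs), 6))           # sanity check: probabilities sum to 1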
Khanal, Laxman; Shah, Sandip; Koirala, Sarun
2017-03-01
The length of long bones is an important contributor to estimating one of the four elements of forensic anthropology, i.e., the stature of the individual. Since the physical characteristics of individuals differ among population groups, population-specific studies are needed for estimating the total length of the femur from measurements of its segments. Since the femur is not always recovered intact in forensic cases, the aim of this study was to derive regression equations from measurements of proximal and distal fragments in a Nepalese population. A cross-sectional study was done on 60 dry femora (30 from each side), without sex determination, in an anthropometry laboratory. Along with maximum femoral length, four proximal and four distal segmental measurements were taken following the standard method with the help of an osteometric board, measuring tape, and digital Vernier caliper. Bones with gross defects were excluded from the study. Measured values were recorded separately for the right and left sides. The Statistical Package for Social Science (SPSS version 11.5) was used for statistical analysis. The values of the segmental measurements differed between the right and left sides, but the differences were not statistically significant except for the depth of the medial condyle (p=0.02). All the measurements were positively correlated and found to have a linear relationship with femoral length. With the help of the regression equations, femoral length can be calculated from the segmental measurements, and the femoral length can then be used to calculate the stature of the individual. The data collected may contribute to the analysis of forensic bone remains in the study population.
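A minimal sketch of the kind of regression derivation described above, using invented segment and femoral-length values purely for illustration.

# Sketch: simple linear regression of maximum femoral length on one
# segmental measurement (all values are made-up examples, in mm).
import numpy as np

segment = np.array([43.1, 45.0, 46.2, 44.8, 47.5, 42.0, 48.3, 46.9])
femur_len = np.array([420., 432., 441., 429., 455., 411., 462., 447.])

slope, intercept = np.polyfit(segment, femur_len, 1)
r = np.corrcoef(segment, femur_len)[0, 1]
print(f"femoral length ~= {slope:.1f} * segment + {intercept:.1f}  (r={r:.2f})")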
Intelligent dental identification system (IDIS) in forensic medicine.
Chomdej, T; Pankaow, W; Choychumroon, S
2006-04-20
This study reports the design and development of the intelligent dental identification system (IDIS), including its efficiency and reliability. Five hundred patients were randomly selected from the Dental Department at Police General Hospital in Thailand to create a population of 3000 known subjects. From the original 500 patients, 100 were randomly selected to create a sample of 1000 unidentifiable subjects (400 subjects with completeness and possible alterations of dental information corresponding to natural occurrences and general dental treatments after the last clinical examination, such as missing teeth, dental caries, dental restorations, and dental prosthetics; 100 subjects with completeness and no alteration of dental information; 500 subjects with incompleteness and no alteration of dental information). Attempts were made to identify the unknown subjects utilizing IDIS. The use of the IDIS advanced method resulted in consistently outstanding identification in the range of 82.61-100% with minimal error (0-1.19%). The results of this study indicate that IDIS can be used to support dental identification. It supports not only all types of dentition (primary, mixed, and permanent) but also incomplete and altered dental information. IDIS is particularly useful in handling the huge quantity and redundancy of related documentation associated with forensic odontology. As a computerized system, IDIS can reduce the time required for identification and can store dental digital images with many processing features. Furthermore, IDIS establishes enhancements of the documental dental record with odontogram and identification codes, the electronic dental record with a dental database system, and identification methods and algorithms. IDIS was conceptualized based on the guidelines and standards of the American Board of Forensic Odontology (ABFO) and the International Criminal Police Organization (INTERPOL).
Anti-collusion forensics of multimedia fingerprinting using orthogonal modulation.
Wang, Z Jane; Wu, Min; Zhao, Hong Vicky; Trappe, Wade; Liu, K J Ray
2005-06-01
Digital fingerprinting is a method for protecting digital data in which fingerprints that are embedded in multimedia are capable of identifying unauthorized use of digital content. A powerful attack that can be employed to reduce this tracing capability is collusion, where several users combine their copies of the same content to attenuate/remove the original fingerprints. In this paper, we study the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints and orthogonal modulation. We introduce the maximum detector and the thresholding detector for colluder identification. We then analyze the collusion resistance of a system to the averaging collusion attack for the performance criteria represented by the probability of a false negative and the probability of a false positive. Lower and upper bounds for the maximum number of colluders K(max) are derived. We then show that the detectors are robust to different collusion attacks. We further study different sets of performance criteria, and our results indicate that attacks based on a few dozen independent copies can confound such a fingerprinting system. We also propose a likelihood-based approach to estimate the number of colluders. Finally, we demonstrate the performance for detecting colluders through experiments using real images.
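A hedged simulation sketch of the setting described above: orthogonal Gaussian fingerprints, an averaging collusion attack, and a simple correlation detector with an ad hoc threshold. All parameters are arbitrary and the detector is a simplification of the detectors analyzed in the paper.

# Sketch: averaging collusion against Gaussian fingerprints.
import numpy as np

rng = np.random.default_rng(7)
n_users, n_samples = 50, 10_000
fingerprints = rng.normal(0, 1, (n_users, n_samples))   # one fingerprint per user

colluders = [3, 11, 27]                                  # averaging collusion attack
colluded_copy = fingerprints[colluders].mean(axis=0)
colluded_copy += rng.normal(0, 0.5, n_samples)           # extra distortion noise

scores = fingerprints @ colluded_copy / n_samples        # correlation statistic
threshold = 3 * scores.std()                             # ad hoc threshold
accused = np.where(scores > threshold)[0]
print("accused users:", accused)                         # ideally the colluders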
Real-time multiplicity counter
Rowland, Mark S [Alamo, CA; Alvarez, Raymond A [Berkeley, CA
2010-07-13
A neutron multi-detector array feeds pulses in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel. The word is read at regular intervals, all bits simultaneously, to minimize latency. The electronics then pass the word to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup.
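A software analogue of the bit-per-detector word described above, purely to illustrate the packing and simultaneous read-out idea; the channel numbers are invented.

# Sketch: pack simultaneous detector pulses into one word, one bit per
# detector, then read all bits of the word at once.
detector_hits = [0, 3, 5]                 # detectors that fired this interval

word = 0
for ch in detector_hits:                  # load the word at the bit level
    word |= 1 << ch

multiplicity = bin(word).count("1")       # all bits inspected in one read
fired = [ch for ch in range(32) if word & (1 << ch)]
print(f"word=0x{word:08x}  multiplicity={multiplicity}  fired={fired}")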
Two-dimensional radiant energy array computers and computing devices
NASA Technical Reports Server (NTRS)
Schaefer, D. H.; Strong, J. P., III (Inventor)
1976-01-01
Two dimensional digital computers and computer devices operate in parallel on rectangular arrays of digital radiant energy optical signal elements which are arranged in ordered rows and columns. Logic gate devices receive two input arrays and provide an output array having digital states dependent only on the digital states of the signal elements of the two input arrays at corresponding row and column positions. The logic devices include an array of photoconductors responsive to at least one of the input arrays for either selectively accelerating electrons to a phosphor output surface, applying potentials to an electroluminescent output layer, exciting an array of discrete radiant energy sources, or exciting a liquid crystal to influence crystal transparency or reflectivity.
Subpicosecond Optical Digital Computation Using Conjugate Parametric Generators
1989-03-31
Personal authors: Alfano, Robert R.; Eichmann, George; Dorsinville, Roger; Li, Yao. Surviving reference fragments: [1] "...conjugation-based optical residue arithmetic processor," Y. Li, G. Eichmann, R. Dorsinville, and R. R. Alfano, Opt. Lett. 13 (1988); [2] "Parallel ultrafast ... optical digital and symbolic computation via optical phase conjugation," Y. Li, G. Eichmann, R. Dorsinville, Appl. Opt. 27, 2025 (1988).
Arithmetic operations in optical computations using a modified trinary number system.
Datta, A K; Basuray, A; Mukhopadhyay, S
1989-05-01
A modified trinary number (MTN) system is proposed in which any binary number can be expressed with the help of trinary digits (1, 0, -1). Arithmetic operations can be performed in parallel, without the need for carry and borrow steps, when binary digits are converted to the MTN system. An optical implementation of the proposed scheme that uses spatial light modulators and color-coded light signals is described.
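As an illustration of the carry-free signed-digit arithmetic the MTN scheme relies on, the sketch below adds two radix-2 signed-digit numbers (digits -1, 0, 1) using only local information at each position; this is a generic textbook-style routine, not the paper's optical implementation.

# Sketch: carry-free signed-digit addition; digits[0] is least significant.
def to_int(digits):
    return sum(d * (2 ** i) for i, d in enumerate(digits))

def msd_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    p = [a[i] + b[i] for i in range(n)]
    t, w = [0] * (n + 1), [0] * (n + 1)
    for i in range(n):                   # step 1: local transfer/interim digits
        lower_nonneg = (i == 0) or (p[i - 1] >= 0)
        if p[i] == 2:    t[i + 1], w[i] = 1, 0
        elif p[i] == 1:  t[i + 1], w[i] = (1, -1) if lower_nonneg else (0, 1)
        elif p[i] == 0:  t[i + 1], w[i] = 0, 0
        elif p[i] == -1: t[i + 1], w[i] = (0, -1) if lower_nonneg else (-1, 1)
        else:            t[i + 1], w[i] = -1, 0
    return [w[i] + t[i] for i in range(n + 1)]   # step 2: no further carries arise

a, b = [1, 0, -1, 1], [1, 1, 0, 1]       # 5 and 11 in signed-digit form
print(to_int(msd_add(a, b)), to_int(a) + to_int(b))   # both print 16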
ERIC Educational Resources Information Center
Schlicht, Patricia
2013-01-01
In today's world where tuition fees continue to rise rapidly and the demand for higher education increases in both the developing and developed world, it is important to find additional and alternative learning pathways that learners can afford. Traditional education as we have known it has begun to change, allowing for new parallel learning…
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
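A small numerical sketch of the one-over-square-root-of-N noise scaling cited above; the number of samplings used here is chosen only to reproduce the reported order of magnitude and is an assumption.

# Sketch: averaging N samples reduces random noise by roughly 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samplings = 20_000, 10
single = rng.normal(0.0, 848.3e-6, (n_trials, n_samplings))  # volts

print(f"single-sample noise : {single[:, 0].std() * 1e6:7.1f} uV")
print(f"{n_samplings}-sample average : {single.mean(axis=1).std() * 1e6:7.1f} uV")
print(f"1/sqrt(N) prediction: {848.3 / np.sqrt(n_samplings):7.1f} uV")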
Rapid parallel semantic processing of numbers without awareness.
Van Opstal, Filip; de Lange, Floris P; Dehaene, Stanislas
2011-07-01
In this study, we investigate whether multiple digits can be processed at a semantic level without awareness, either serially or in parallel. In two experiments, we presented participants with two successive sets of four simultaneous Arabic digits. The first set was masked and served as a subliminal prime for the second, visible target set. According to the instructions, participants had to extract from the target set either the mean or the sum of the digits, and to compare it with a reference value. Results showed that participants applied the requested instruction to the entire set of digits that was presented below the threshold of conscious perception, because their magnitudes jointly affected the participant's decision. Indeed, response decision could be accurately modeled as a sigmoid logistic function that pooled together the evidence provided by the four targets and, with lower weights, the four primes. In less than 800ms, participants successfully approximated the addition and mean tasks, although they tended to overweight the large numbers, particularly in the sum task. These findings extend previous observations on ensemble coding by showing that set statistics can be extracted from abstract symbolic stimuli rather than low-level perceptual stimuli, and that an ensemble code can be represented without awareness. Copyright © 2011 Elsevier B.V. All rights reserved.
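A minimal sketch of the kind of logistic pooling model described above; the weights, reference value, and inputs are invented for illustration and do not reproduce the fitted model.

# Sketch: sigmoid decision model pooling evidence from four visible targets
# and four masked primes, with smaller weights for the primes.
import numpy as np

def p_larger_than_reference(targets, primes, reference=50,
                            w_target=0.10, w_prime=0.03):
    evidence = (w_target * (np.asarray(targets) - reference).sum()
                + w_prime * (np.asarray(primes) - reference).sum())
    return 1.0 / (1.0 + np.exp(-evidence))

print(p_larger_than_reference(targets=[61, 58, 47, 55], primes=[52, 49, 60, 44]))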
High-throughput sequencing of forensic genetic samples using punches of FTA cards with buccal swabs.
Kampmann, Marie-Louise; Buchard, Anders; Børsting, Claus; Morling, Niels
2016-01-01
Here, we demonstrate that punches from buccal swab samples preserved on FTA cards can be used for high-throughput DNA sequencing, also known as massively parallel sequencing (MPS). We typed 44 reference samples with the HID-Ion AmpliSeq Identity Panel using washed 1.2 mm punches from FTA cards with buccal swabs and compared the results with those obtained with DNA extracted using the EZ1 DNA Investigator Kit. Concordant profiles were obtained for all samples. Our protocol includes simple punch, wash, and PCR steps, reducing cost and hands-on time in the laboratory. Furthermore, it facilitates automation of DNA sequencing.
NASA Astrophysics Data System (ADS)
Shi, Sheng-bing; Chen, Zhen-xing; Qin, Shao-gang; Song, Chun-yan; Jiang, Yun-hong
2014-09-01
With the development of science and technology, photoelectric equipment now comprises visible, infrared, and laser systems, among others, and its integration, information content, and complexity are higher than in the past. The parallelism and jumpiness (jitter) of the optical axes are important performance characteristics of photoelectric equipment and directly affect aiming, ranging, orientation, and so on. Jumpiness of the optical axis directly affects the hit precision of precision point-damage weapons, yet facilities for testing this performance have been lacking. In this paper, a test system for measuring the parallelism and jumpiness of optical axes is devised. Accurate aiming is not necessary, and data processing is fully digital during parallelism testing. The system can directly test the parallelism of multiple axes (the aiming axis and the laser emission axis, and the laser emission axis and the laser receiving axis) and, for the first time, measures the jumpiness of the optical axis of an optical sighting device. It is a universal test system.
Parallel programming with Easy Java Simulations
NASA Astrophysics Data System (ADS)
Esquembre, F.; Christian, W.; Belloni, M.
2018-01-01
Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
High Performance Radiation Transport Simulations on TITAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M
2012-01-01
In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10-20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
Cognitive neuroscience in forensic science: understanding and utilizing the human element
Dror, Itiel E.
2015-01-01
The human element plays a critical role in forensic science. It is not limited only to issues relating to forensic decision-making, such as bias, but also relates to most aspects of forensic work (some of which even take place before a crime is ever committed or long after the verification of the forensic conclusion). In this paper, I explicate many aspects of forensic work that involve the human element and therefore show the relevance (and potential contribution) of cognitive neuroscience to forensic science. The 10 aspects covered in this paper are proactive forensic science, selection during recruitment, training, crime scene investigation, forensic decision-making, verification and conflict resolution, reporting, the role of the forensic examiner, presentation in court and judicial decisions. As the forensic community is taking on the challenges introduced by the realization that the human element is critical for forensic work, new opportunities emerge that allow for considerable improvement and enhancement of the forensic science endeavour. PMID:26101281
Digital all-sky polarization imaging of partly cloudy skies.
Pust, Nathan J; Shaw, Joseph A
2008-12-01
Clouds reduce the degree of linear polarization (DOLP) of skylight relative to that of a clear sky. Even thin subvisual clouds in the "twilight zone" between clouds and aerosols produce a drop in skylight DOLP long before clouds become visible in the sky. In contrast, the angle of polarization (AOP) of light scattered by a cloud in a partly cloudy sky remains the same as in the clear sky for most cases. In unique instances, though, select clouds display AOP signatures that are oriented 90 degrees from the clear-sky AOP. For these clouds, scattered light oriented parallel to the scattering plane dominates the perpendicularly polarized Rayleigh-scattered light between the instrument and the cloud. For liquid clouds, this effect may assist cloud particle size identification because it occurs only over a relatively limited range of particle radii that will scatter parallel polarized light. Images are shown from a digital all-sky-polarization imager to illustrate these effects. Images are also shown that provide validation of previously published theories for weak (approximately 2%) polarization parallel to the scattering plane for a 22 degrees halo.
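For context, DOLP and AOP are conventionally derived from the Stokes parameters an imaging polarimeter measures; a minimal sketch of the standard definitions follows (this is not the instrument's processing chain).

# Sketch: degree and angle of linear polarization from Stokes parameters.
import numpy as np

def dolp_aop(I, Q, U):
    dolp = np.sqrt(Q**2 + U**2) / I          # degree of linear polarization
    aop = 0.5 * np.arctan2(U, Q)             # angle of polarization (radians)
    return dolp, np.degrees(aop)

print(dolp_aop(I=1.0, Q=0.15, U=0.05))       # e.g. DOLP ~ 0.16, AOP ~ 9.2 deg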
Real-time multi-mode neutron multiplicity counter
Rowland, Mark S; Alvarez, Raymond A
2013-02-26
Embodiments are directed to a digital data acquisition method that collects data regarding nuclear fission at high rates and performs real-time preprocessing of large volumes of data into directly useable forms for use in a system that performs non-destructive assaying of nuclear material and assemblies for mass and multiplication of special nuclear material (SNM). Pulses from a multi-detector array are fed in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel, to reduce the latency associated with current shift-register systems. The word is read at regular intervals, all bits simultaneously, with no manipulation. The word is passed to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup. The word is used simultaneously in several internal processing schemes that assemble the data in a number of more directly useable forms. The detector includes a multi-mode counter that executes a number of different count algorithms in parallel to determine different attributes of the count data.
Forensic archaeology and anthropology: An Australian perspective.
Oakley, Kate
2005-09-01
Forensic archaeology is an extremely powerful investigative discipline and, in combination with forensic anthropology, can provide a wealth of evidentiary information to police investigators and the forensic community. The re-emergence of forensic archaeology and anthropology within Australia relies on its diversification and cooperation with established forensic medical organizations, law enforcement forensic service divisions, and national forensic boards. This presents a unique opportunity to develop a new multidisciplinary approach to forensic archaeology/anthropology within Australia as we hold a unique set of environmental, social, and cultural conditions that diverge from overseas models and require different methodological approaches. In the current world political climate, more forensic techniques are being applied at scenes of mass disasters, genocide, and terrorism. This provides Australian forensic archaeology/anthropology with a unique opportunity to develop multidisciplinary models with contributions from psychological profiling, ballistics, sociopolitics, cultural anthropology, mortuary technicians, post-blast analysis, fire analysis, and other disciplines from the world of forensic science.
NASA Technical Reports Server (NTRS)
Mclyman, W. T.
1981-01-01
Transformer transmits power and digital data across rotating interface. Array has many parallel data channels, each with a potential 1-megabaud data rate. Ferrite-cored transformers are spaced along rotor; airgap between them reduces crosstalk.
NASA Astrophysics Data System (ADS)
Higashino, Satoru; Kobayashi, Shoei; Yamagami, Tamotsu
2007-06-01
High data transfer rates have been demanded of data storage devices along with increasing storage capacity. In order to increase the transfer rate, high-speed data processing techniques are required in read-channel devices. Generally, parallel architecture is utilized for high-speed digital processing. We have developed a new architecture for Interpolated Timing Recovery (ITR) to achieve a high data transfer rate and wide capture range in read-channel devices for information storage channels. It facilitates parallel implementation on large-scale-integration (LSI) devices.
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
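A minimal sketch of one predictor-corrector (Heun) integration step of the general kind investigated above; the two-state linear system stands in for the turbofan engine model and is an assumption.

# Sketch: predictor-corrector (Heun) stepping of a toy linear system.
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])     # stand-in state matrix

def f(x):
    return A @ x

def heun_step(x, dt):
    x_pred = x + dt * f(x)                   # predictor (explicit Euler)
    return x + 0.5 * dt * (f(x) + f(x_pred)) # corrector (trapezoidal)

x, dt = np.array([1.0, 1.0]), 0.01
for _ in range(100):                         # integrate one second
    x = heun_step(x, dt)
print(x)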
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves processing large volumes of data and is often time consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of the window's two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation by up to 28.8 times compared to ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, such as a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
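A simplified sketch of the projection idea described above: zero-normalized correlation applied to row and column projections of an interrogation window rather than its full 2D intensity field. This is an illustration of the concept, not the authors' implementation.

# Sketch: ZNCC of 1D window projections instead of full 2D windows.
import numpy as np

def zncc_1d(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def projection_zncc(win_a, win_b):
    score_cols = zncc_1d(win_a.sum(axis=0), win_b.sum(axis=0))
    score_rows = zncc_1d(win_a.sum(axis=1), win_b.sum(axis=1))
    return 0.5 * (score_cols + score_rows)

rng = np.random.default_rng(5)
w = rng.random((32, 32))
shifted = np.roll(w, shift=(0, 2), axis=(0, 1))      # pattern moved by 2 px
print(projection_zncc(w, shifted), projection_zncc(w, rng.random((32, 32))))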
Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.
Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre
2017-06-01
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
Xu, Qun; Wang, Xianchao; Xu, Chao
2017-06-01
Multiplication on traditional electronic computers suffers from limited calculating accuracy and long computation delays. To overcome these problems, a modified signed digit (MSD) multiplication routine is established based on the MSD system and a carry-free adder, and its parallel algorithm and optimization techniques are studied in detail. With the help of a ternary optical computer's characteristics, a structured data processor is designed especially for the multiplication routine. Several ternary optical operators are constructed to perform M transformations and summations in parallel, which accelerates the iterative process of multiplication. In particular, the routine allocates data bits of the ternary optical processor based on the digits of the multiplication inputs, so the accuracy of the calculation results can always satisfy the users. Finally, the routine is verified by simulation experiments, and the results are in full compliance with expectations. Compared with an electronic computer, the MSD multiplication routine is not only good at dealing with large-value data and high-precision arithmetic, but also maintains lower power consumption and shorter calculating delays.
Digital 3D facial reconstruction of George Washington
NASA Astrophysics Data System (ADS)
Razdan, Anshuman; Schwartz, Jeff; Tocheri, Mathew; Hansford, Dianne
2006-02-01
PRISM is a focal point of interdisciplinary research in geometric modeling, computer graphics and visualization at Arizona State University. Many projects in the last ten years have involved laser scanning, geometric modeling and feature extraction from data such as archaeological vessels, bones, human faces, etc. This paper gives a brief overview of a recently completed project on the 3D reconstruction of George Washington (GW). The project brought together forensic anthropologists, digital artists and computer scientists in the 3D digital reconstruction of GW at ages 57, 45 and 19, including detailed heads and bodies. Although many other scanning projects, such as the Michelangelo project, have successfully captured fine details via laser scanning, our project took it a step further, i.e., to predict what the individual portrayed in a sculpture might have looked like in both later and earlier years, specifically the process to account for reverse aging. Our base data were GW's face mask at the Morgan Library and Houdon's bust of GW at Mount Vernon, both done when GW was 53. Additionally, we scanned the statue at the Capitol in Richmond, VA; various dentures; and other items. Other measurements came from clothing and even portraits of GW. The digital GWs were then milled in high-density foam for a studio to complete the work. These will be unveiled at the opening of the new education center at Mount Vernon in fall 2006.
Handwritten digits recognition based on immune network
NASA Astrophysics Data System (ADS)
Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe
2011-11-01
With the development of society, handwritten digit recognition techniques have been widely applied in production and daily life, yet recognition remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm combines Jerne's immune network model for feature selection with the KNN method for classification; its characteristic is a novel network with parallel computing and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN and SVM. The results show that the novel classification algorithm based on an immune network gives promising performance and stable behavior for handwritten digit recognition.
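A minimal sketch of the KNN classification stage mentioned above, using scikit-learn's bundled 8x8 digits data as a stand-in for MNIST; the immune-network feature selection step is omitted.

# Sketch: KNN classification of handwritten digit features.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"KNN accuracy on held-out digits: {knn.score(X_te, y_te):.3f}")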
Liu, Ying; Geng, Kun; Chu, Yanhao; Xu, Mindi; Zha, Lagabaiyila
2018-03-03
The purpose of this study is to provide forensic reference data for estimating chronological age by evaluating third molar mineralization in the Han population of central southern China. The mineralization degree of the third molars was assessed using Demirjian's classification, with modification, on 2519 digital orthopantomograms (1190 males, 1329 females; age 8-23 years). The mean ages of initial mineralization and crown completion of the third molars were around 9.66 and 13.88 years in males and 9.52 and 14.09 years in females. The minimum ages of apical closure were around 16 years in both sexes. Tooth 28 at stages C and G, and teeth 38 and 48 at stage F, occurred earlier in males than in females. There was no significant difference between maxillary and mandibular teeth in males and females, except for stage C in males. Two formulas were devised to estimate age based on mineralization stage and sex. In Hunan Province, a person whose third molar has reached stage G will probably be over age 14. The results of the study could provide a reference for age estimation in forensic cases and clinical dentistry.
Information surfing with the JHU/APL coherent imager
NASA Astrophysics Data System (ADS)
Ratto, Christopher R.; Shipley, Kara R.; Beagley, Nathaniel; Wolfe, Kevin C.
2015-05-01
The ability to perform remote forensics in situ is an important application of autonomous undersea vehicles (AUVs). Forensics objectives may include remediation of mines and/or unexploded ordnance, as well as monitoring of seafloor infrastructure. At JHU/APL, digital holography is being explored for the potential application to underwater imaging and integration with an AUV. In previous work, a feature-based approach was developed for processing the holographic imagery and performing object recognition. In this work, the results of the image processing method were incorporated into a Bayesian framework for autonomous path planning referred to as information surfing. The framework was derived assuming that the location of the object of interest is known a priori, but the type of object and its pose are unknown. The path-planning algorithm adaptively modifies the trajectory of the sensing platform based on historical performance of object and pose classification. The algorithm is called information surfing because the direction of motion is governed by the local information gradient. Simulation experiments were carried out using holographic imagery collected from submerged objects. The autonomous sensing algorithm was compared to a deterministic sensing CONOPS, and demonstrated improved accuracy and faster convergence in several cases.
Lindsay, Kaitlin E; Rühli, Frank J; Deleon, Valerie Burke
2015-06-01
The technique of forensic facial approximation, or reconstruction, is one of many facets of the field of mummy studies. Although far from a rigorous scientific technique, evidence-based visualization of antemortem appearance may supplement radiological, chemical, histological, and epidemiological studies of ancient remains. Published guidelines exist for creating facial approximations, but few approximations are published with documentation of the specific process and references used. Additionally, significant new research has taken place in recent years which helps define best practices in the field. This case study records the facial approximation of a 3,000-year-old ancient Egyptian woman using medical imaging data and the digital sculpting program, ZBrush. It represents a synthesis of current published techniques based on the most solid anatomical and/or statistical evidence. Through this study, it was found that although certain improvements have been made in developing repeatable, evidence-based guidelines for facial approximation, there are many proposed methods still awaiting confirmation from comprehensive studies. This study attempts to assist artists, anthropologists, and forensic investigators working in facial approximation by presenting the recommended methods in a chronological and usable format. © 2015 Wiley Periodicals, Inc.
Modified signed-digit trinary addition using synthetic wavelet filter
NASA Astrophysics Data System (ADS)
Iftekharuddin, K. M.; Razzaque, M. A.
2000-09-01
The modified signed-digit (MSD) number system has been a topic of interest because it allows parallel, carry-free addition of two numbers in digital optical computing. In this paper, a harmonic wavelet joint transform (HWJT)-based correlation technique is introduced for the optical implementation of an MSD trinary adder. Carry-propagation-free addition of MSD trinary numerals is demonstrated using a synthetic HWJT correlator model. It is also shown that the proposed synthetic wavelet filter-based correlator achieves high performance in logic processing. Simulation results are presented to validate the performance of the proposed technique.
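The HWJT correlator and the carry-free truth tables themselves are not given in the abstract. As a minimal illustration of the number system it operates on, the sketch below shows the MSD (radix-2, digit set {-1, 0, 1}) representation, its redundancy, and the digitwise negation that makes subtraction cheap; these are the properties such carry-free adders exploit.

```python
# Sketch of the modified signed-digit (MSD) representation: radix-2 digits drawn
# from {-1, 0, 1}.  The redundancy of this code (several digit strings for one
# value) is what enables carry-free addition schemes; the optical truth tables
# themselves are not reproduced here.

def msd_value(digits):
    """Value of an MSD word, most-significant digit first."""
    v = 0
    for d in digits:
        assert d in (-1, 0, 1)
        v = 2 * v + d
    return v

# Two different MSD words encoding the same integer (redundancy):
print(msd_value([1, 0, 1]))       # 4 + 0 + 1 = 5
print(msd_value([1, -1, 0, 1]))   # 8 - 4 + 0 + 1 = 5

# Subtraction comes cheaply: negation is a digitwise sign flip.
a = [1, 0, -1, 1]                 # 8 - 2 + 1 = 7
neg_a = [-d for d in a]
print(msd_value(a), msd_value(neg_a))   # 7 -7
```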
Practice of clinical forensic medicine in Sri Lanka: does it need a new era?
Kodikara, Sarathchandra
2012-07-01
Clinical forensic medicine is a sub-specialty of forensic medicine and is intimately associated with the justice system of a country. The practice of clinical forensic medicine is evolving, but it varies from one jurisdiction to another. Most English-speaking countries practice clinical forensic medicine and forensic pathology separately, while most non-English-speaking countries practice forensic medicine, which includes both clinical forensic medicine and forensic pathology. Unlike the practice of forensic pathology, several countries have only informal arrangements for dealing with forensic patients, and there are no international standards of practice or training in this discipline. Besides, this is rarely a topic of discussion. In the adversarial justice system in Sri Lanka, designated Government Medical Officers practice both clinical forensic medicine and forensic pathology. The practice of clinical forensic medicine, and its teaching and training, in Sri Lanka has unique features. However, this system has not undergone a significant revision for many decades. In this communication, the existing legal framework, current procedures of practice, examination for drunkenness, investigations, structure of referrals, reports, subsequent legal procedures, and undergraduate, in-service, and postgraduate training are discussed, with suggestions for reforms. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Leintz, Rachel; Bond, John W
2013-05-01
Comparisons are made between the visualization of fingerprint corrosion ridge detail on fired brass cartridge casings, where fingerprint sweat was deposited prefiring, using both ultraviolet (UV) and visible (natural daylight) light sources. A reflected ultraviolet imaging system (RUVIS), normally used for visualizing latent fingerprint sweat deposits, is compared with optical interference and digital color mapping of visible light, the latter using apparatus constructed to easily enable selection of the optimum viewing angle. Results show that reflected UV, with a monochromatic UV source of 254 nm, was unable to visualize fingerprint ridge detail on any of 12 casings analyzed, whereas optical interference and digital color mapping using natural daylight yielded ridge detail on three casings. Reasons for the lack of success with RUVIS are discussed in terms of the variation in thickness of the thin film of metal oxide corrosion and absorption wavelengths for the corrosion products of brass. © 2013 American Academy of Forensic Sciences.
Herrera, Lara Maria; Fernandes, Clemente Maia da Silva; Serra, Mônica da Costa
2018-01-01
This study aimed to develop and assess an algorithm to facilitate lip print visualization, and to digitally analyze lip prints on different supports by superimposition. It also aimed to classify lip prints according to sex. A batch image processing algorithm was developed, which facilitated the identification and extraction of information about lip grooves; however, it performed better for lip print images with a uniform background. Paper and a glass slab allowed more correct identifications than glass and both sides of compact discs. There was no significant difference between the type of support and the number of matching structures located in the middle area of the lower lip. There was no evidence of an association between types of lip grooves and sex. Lip groove patterns of type III and type I were the most common for both sexes. The development of systems for lip print analysis is necessary, mainly concerning digital methods. © 2017 American Academy of Forensic Sciences.
Towards Automatic Image Segmentation Using Optimised Region Growing Technique
NASA Astrophysics Data System (ADS)
Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi
Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment and industrial inspection, primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is complex and highly dependent on the domain of application. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
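The paper's optimised elimination of false boundaries is not reproduced here; the sketch below shows only a minimal seeded region-growing step of the kind the algorithm builds on, run on a synthetic "X-ray" image. Function names and parameters are illustrative.

```python
# Minimal seeded region-growing sketch (4-connectivity): a pixel joins the region
# if its intensity is within `tol` of the running region mean.  The paper's
# optimised elimination of false boundaries is not reproduced here.
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10.0):
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_n = float(img[seed]), 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(img[rr, cc]) - region_sum / region_n) <= tol:
                    mask[rr, cc] = True
                    region_sum += float(img[rr, cc])
                    region_n += 1
                    queue.append((rr, cc))
    return mask

# Tiny synthetic "X-ray": a bright rectangle (tooth-like object) on a dark background.
img = np.full((64, 64), 20.0)
img[20:40, 25:45] = 200.0
seg = region_grow(img, seed=(30, 35), tol=15)
print("segmented pixels:", int(seg.sum()))   # expect 20*20 = 400
```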
Amadasi, Alberto; Borgonovo, Simone; Brandone, Alberto; Di Giancamillo, Mauro; Cattaneo, Cristina
2014-05-01
The radiological search for gunshot residue (GSR) is crucial in burnt material, although it has rarely been tested. In this study, thirty-one bovine ribs were shot at near-contact range and burnt to calcination in an oven simulating a real combustion. Computed tomography (CT) and magnetic resonance (MR) imaging were performed before and after carbonization and compared with earlier analyses with digital radiography (DR), thereby comparing the assistance these radiological methods can provide in the search for GSR in fresh and burnt bone. DR demonstrated the greatest ability to detect metallic residues, CT showed lower ability, while MR showed high sensitivity only in soft tissues. Thus, DR can be considered the most sensitive method for detecting GSR in charred bones, whereas CT and MR proved much less reliable. Nonetheless, MR improves the analysis of gunshot wounds in other types of remains with large quantities of soft tissue. © 2013 American Academy of Forensic Sciences.
Menéndez, Lumila Paula
2017-05-01
Intraobserver error (INTRA-OE) is the difference between repeated measurements of the same variable made by the same observer. The objective of this work was to evaluate INTRA-OE for 3D landmarks registered with a Microscribe digitizer, in three datasets: (A) the 3D coordinates, (B) linear measurements calculated from A, and (C) the first six principal component axes. INTRA-OE was analyzed by digitizing 42 landmarks from 23 skulls in three sessions, each two weeks apart. Systematic error was tested through repeated measures ANOVA (ANOVA-RM), while random error was tested through the intraclass correlation coefficient. Results showed that the largest differences between the three observations were found in the first dataset. Some anatomical points, such as nasion, ectoconchion, temporosphenoparietal, asterion, and temporomandibular, presented the highest INTRA-OE. In the second dataset, local distances had higher INTRA-OE than global distances, while the third dataset showed the lowest INTRA-OE. © 2016 American Academy of Forensic Sciences.
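The abstract does not state which ICC form was used; as one plausible choice, the sketch below computes a consistency-type ICC(3,1) from the two-way ANOVA mean squares of a skulls-by-sessions matrix of hypothetical measurements.

```python
# Sketch of a consistency-type intraclass correlation, ICC(3,1), from the two-way
# ANOVA mean squares of a subjects x sessions measurement matrix.
# The data are hypothetical; the abstract does not state which ICC form was used.
import numpy as np

rng = np.random.default_rng(1)
true_len = rng.normal(100, 10, size=23)                     # 23 "skulls"
x = true_len[:, None] + rng.normal(0, 1.0, size=(23, 3))    # 3 digitising sessions

n, k = x.shape
grand = x.mean()
ss_total = ((x - grand) ** 2).sum()
ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()         # between-subjects
ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()         # between-sessions
ss_err = ss_total - ss_rows - ss_cols                       # residual

ms_rows = ss_rows / (n - 1)
ms_err = ss_err / ((n - 1) * (k - 1))
icc_3_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
print(f"ICC(3,1) = {icc_3_1:.3f}")   # close to 1 when session noise is small
```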
Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to-Digital Conversion
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.
1993-01-01
The design and projected performance of a low-light-level active-pixel-sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion are presented. The individual elements have been fabricated and tested using MOSIS 2 micrometer CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 x 128 array of active pixels at a 50 micrometer pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (Sigma-Delta) modulation. The 10-bit outputs of each converter are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e^- rms noise performance.
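A first-order oversampled sigma-delta converter of the kind shared by each pixel column can be illustrated behaviourally: an integrator accumulates the error between the input and a 1-bit feedback, and averaging (decimating) the bit stream recovers the input. The sketch below is a generic model of the principle, not the chip's circuit.

```python
# Behavioural sketch of a first-order oversampled sigma-delta modulator followed
# by simple decimation (averaging of the 1-bit stream).
import numpy as np

def sigma_delta_bits(x, osr=256):
    """1-bit output stream of a first-order sigma-delta for a constant input x in [-1, 1]."""
    integ = 0.0
    bits = []
    for _ in range(osr):
        y = 1.0 if integ >= 0.0 else -1.0   # 1-bit quantiser (feedback levels +/-1)
        bits.append(y)
        integ += x - y                      # integrate the quantisation error
    return np.array(bits)

for x in (-0.75, 0.1, 0.6):
    bits = sigma_delta_bits(x)
    print(f"input {x:+.2f} -> decimated estimate {bits.mean():+.3f}")
```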
NASA Astrophysics Data System (ADS)
Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.
1986-06-01
In a digital diagnostic imaging department, the majority of operations for handling and processing images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the department's diagnostic imaging communications network. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality and distributed and parallel processing, and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high-speed image data bus consisting of 8 independent parallel 16-bit busses, capable of handling a combined throughput of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video-rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic workstation. Several hardware modules are described in detail. To illustrate the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.
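The quoted figure can be sanity-checked with simple arithmetic (assuming MBytes here means 10^6 bytes): 144 MB/s shared across 8 parallel 16-bit buses is 18 MB/s, i.e. 9 million 16-bit transfers per second, per bus.

```python
# Back-of-the-envelope check of the quoted image-data-bus figure (assuming
# decimal megabytes): 144 MB/s over 8 independent 16-bit buses.
combined_bytes_per_s = 144e6
buses = 8
bytes_per_word = 2                              # 16-bit wide bus
per_bus = combined_bytes_per_s / buses          # 18 MB/s per bus
words_per_s = per_bus / bytes_per_word          # 9 million 16-bit transfers/s per bus
print(per_bus / 1e6, "MB/s per bus;", words_per_s / 1e6, "Mwords/s per bus")
```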
Korycki, Rafal
2014-05-01
Since the appearance of digital audio recordings, audio authentication has become increasingly difficult. Currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists of analyzing fluctuations of the mains frequency induced in the electronic circuits of recording devices; its effectiveness is therefore strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions have been proposed for the detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches evaluate statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. The calculated feature vectors are used to train selected machine learning algorithms. The detection of multiple compression covers tampering activities as well as the identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm based on analysis of inherent compression parameters was developed and applied. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of the compression algorithms' parameters on classification performance is discussed based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
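The paper's MDCT statistics and encoder-identification stage are not reproduced here. As a generic, hypothetical stand-in for the pipeline it describes (frame the audio, compute transform-coefficient statistics, train a classifier), the sketch below uses framewise DCT statistics and synthetic "genuine" versus "spliced" signals.

```python
# Generic sketch of a feature-extraction-plus-classifier pipeline for tampering
# detection.  Framewise DCT statistics stand in for the MDCT-coefficient features;
# signals and labels are synthetic placeholders.
import numpy as np
from scipy.fft import dct
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(signal, frame=512):
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    mag = np.abs(dct(frames, type=2, norm="ortho", axis=1))
    # Simple per-recording statistics of the transform coefficients.
    return np.hstack([mag.mean(axis=0)[:32], mag.std(axis=0)[:32]])

def make_signal(tampered):
    x = rng.normal(0, 1, 16384)
    if tampered:                      # crude stand-in for a splice artefact
        cut = int(rng.integers(4000, 12000))
        x[cut:] = rng.normal(0, 1.3, len(x) - cut)
    return x

labels = [0] * 100 + [1] * 100
X = np.array([features(make_signal(t)) for t in labels])
y = np.array(labels)
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0), X, y, cv=5)
print("cross-validated accuracy: %.2f" % scores.mean())
```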
Pinilla, Jaime; López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-05-09
Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study compares publicly available waiting list (WL) data from Finland and Spain to test NBL as an instrument for flagging potential manipulation in WLs. The frequency of first digits in the Finnish and Spanish WLs was analysed to determine whether their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests. The data are publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. The outcome measure was the adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. WL data reported by the Finnish health system fit first-digit NBL according to all statistical tests used (p=0.6519 in the χ² test). For the Spanish data, this hypothesis was rejected in all tests (p<0.0001 in the χ² test). Testing deviations from the NBL distribution can be a useful tool for identifying problems with WL data trustworthiness and signalling the need for further testing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
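A minimal version of the screening procedure can be sketched as follows: extract first digits, then compare their frequencies with the Benford expectation P(d) = log10(1 + 1/d) using a chi-square test and the mean absolute deviation (the Kuiper test is omitted here). The waiting-list figures are random placeholders, not the Finnish or Spanish data.

```python
# Sketch of a first-digit Newcomb-Benford screening test: chi-square goodness of
# fit and mean absolute deviation (MAD) against the Benford expectation.
import numpy as np
from scipy.stats import chisquare

def first_digits(values):
    return np.array([int(str(abs(v))[0]) for v in values if v != 0])

benford = np.log10(1 + 1 / np.arange(1, 10))          # P(d) = log10(1 + 1/d)

rng = np.random.default_rng(0)
waiting_list = np.round(np.exp(rng.uniform(2, 9, size=2000))).astype(int)  # placeholder data

d = first_digits(waiting_list)
observed = np.array([(d == k).sum() for k in range(1, 10)])
expected = benford * len(d)

chi2, p = chisquare(observed, f_exp=expected)
mad = np.abs(observed / len(d) - benford).mean()
print(f"chi-square p = {p:.4f}, MAD = {mad:.4f}")
```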
Multiplexed chirp waveform synthesizer
Dudley, Peter A.; Tise, Bert L.
2003-09-02
A synthesizer for generating a desired chirp signal has M parallel channels, where M is an integer greater than 1, each channel including a chirp waveform synthesizer generating at an output a portion of a digital representation of the desired chirp signal; and a multiplexer for multiplexing the M outputs to create a digital representation of the desired chirp signal. Preferably, each channel receives input information that is a function of information representing the desired chirp signal.
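The multiplexing idea can be illustrated with a small numerical sketch: each of the M channels synthesises every M-th sample of the digital chirp at one M-th of the output rate (a polyphase component), and the multiplexer interleaves the channel outputs into the full-rate waveform. The parameters below are placeholders, not values from the patent.

```python
# Sketch of M-channel chirp synthesis: each channel generates every M-th sample
# of the digital chirp, and the multiplexer interleaves the channel outputs.
import numpy as np

fs = 1.0e6          # output sample rate (placeholder)
T = 1.0e-3          # chirp duration
k = 4.0e8           # chirp rate in Hz/s (placeholder)
M = 4               # number of parallel channels
N = int(fs * T)

def chirp_sample(n):
    t = n / fs
    return np.cos(np.pi * k * t ** 2)          # linear-FM chirp, zero start frequency

# Channel m produces samples m, m+M, m+2M, ... at rate fs/M.
channels = [chirp_sample(np.arange(m, N, M)) for m in range(M)]

# Multiplexer: interleave the channel outputs back into the full-rate sequence.
muxed = np.empty(N)
for m in range(M):
    muxed[m::M] = channels[m]

direct = chirp_sample(np.arange(N))
print("max interleaving error:", np.max(np.abs(muxed - direct)))   # ~0
```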
The potential of multi-port optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1975-01-01
A high-capacity memory with a relatively high data transfer rate and multi-port simultaneous access capability may serve as the basis for new computer architectures. The implementation of a multi-port optical memory is discussed. Several computer structures are presented that might profitably use such a memory. These structures include (1) a simultaneous record access system, (2) a simultaneously shared memory computer system, and (3) a parallel digital processing structure.
Qian, F; Li, G; Ruan, H; Jing, H; Liu, L
1999-09-10
A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming the illumination of the data arrays, any complex logic operation of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.
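The reference-digit truth tables are not reproduced in the abstract; the sketch below only checks the property the scheme relies on: once the intermediate sum digits are restricted to {0, 1} and the intermediate carry digits to {-1, 0}, their digitwise combination is always a valid MSD digit, so the final stage needs no carries.

```python
# Payoff of the two-step restriction: with sum digits in {0, 1} and carry digits
# in {-1, 0}, the final result R_i = S_i + C_i is always a valid MSD digit, so
# the last stage is carry-free.  (The reference-digit truth tables that produce
# S and C are not reproduced here.)
from itertools import product

for s, c in product((0, 1), (-1, 0)):
    assert s + c in (-1, 0, 1), "would need a carry"
print("all restricted digit pairs combine without carry")

def msd_value(digits):            # most-significant digit first, radix 2
    v = 0
    for d in digits:
        v = 2 * v + d
    return v

# Example words drawn from the restricted digit sets; combining them is purely digitwise.
S = [1, 0, 1, 1]                  # digits in {0, 1}
C = [0, -1, 0, -1]                # digits in {-1, 0}
R = [s + c for s, c in zip(S, C)]
print(msd_value(S) + msd_value(C) == msd_value(R))   # True
```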
[The application of radiological image in forensic medicine].
Zhang, Ji-Zong; Che, Hong-Min; Xu, Li-Xiang
2006-04-01
Personal identification is an important task in forensic investigation, including sex determination and age and stature estimation. Human identification based on the analysis of radiological imaging techniques is a practical and appropriate method in the forensic sciences. This paper reviews the use of forensic radiology across the forensic science field in order to assess its advantages and shortcomings, and to provide a reference for improving the application of forensic radiology in forensic science.
Transitioning from Forensic Genetics to Forensic Genomics
Kayser, Manfred
2017-01-01
Due to its support of law enforcement, forensics is a conservative field; nevertheless, driven by scientific and technological progress, forensic genetics is slowly transitioning into forensic genomics. With this Special Issue of Genes we acknowledge and appreciate this rather recent development not only by introducing the field of forensics to the wider community of geneticists, but by emphasizing different topics of forensic relevance where genomic, transcriptomic, and epigenomic principles, methods, and datasets of humans and beyond are beginning to be used to answer forensic questions. PMID:29271907
Google Glass for Documentation of Medical Findings: Evaluation in Forensic Medicine
2014-01-01
Background Google Glass is a promising premarket device that includes an optical head-mounted display. Several proof of concept reports exist, but there is little scientific evidence regarding its use in a medical setting. Objective The objective of this study was to empirically determine the feasibility of deploying Glass in a forensics setting. Methods Glass was used in combination with a self-developed app that allowed for hands-free operation during autopsy and postmortem examinations of 4 decedents performed by 2 physicians. A digital single-lens reflex (DSLR) camera was used for image comparison. In addition, 6 forensic examiners (3 male, 3 female; age range 23-48 years, age mean 32.8 years, SD 9.6; mean work experience 6.2 years, SD 8.5) were asked to evaluate 159 images for image quality on a 5-point Likert scale, specifically color discrimination, brightness, sharpness, and their satisfaction with the acquired region of interest. Statistical evaluations were performed to determine how Glass compares with conventionally acquired digital images. Results All images received good (median 4) and very good ratings (median 5) for all 4 categories. Autopsy images taken by Glass (n=32) received significantly lower ratings than those acquired by DSLR camera (n=17) (region of interest: z=–5.154, P<.001; sharpness: z=–7.898, P<.001; color: z=–4.407, P<.001, brightness: z=–3.187, P=.001). For 110 images of postmortem examinations (Glass: n=54, DSLR camera: n=56), ratings for region of interest (z=–8.390, P<.001) and brightness (z=–540, P=.007) were significantly lower. For interrater reliability, intraclass correlation (ICC) values were good for autopsy (ICC=.723, 95% CI .667-.771, P<.001) and postmortem examination (ICC=.758, 95% CI .727-.787, P<.001). Postmortem examinations performed using Glass took 42.6 seconds longer than those done with the DSLR camera (z=–2.100, P=.04 using Wilcoxon signed rank test). The battery charge of Glass quickly decreased; an average 5.5% (SD 1.85) of its battery capacity was spent per postmortem examination (0.81% per minute or 0.79% per picture). Conclusions Glass was efficient for acquiring images for documentation in forensic medicine, but the image quality was inferior compared to a DSLR camera. Images taken with Glass received significantly lower ratings for all 4 categories in an autopsy setting and for region of interest and brightness in postmortem examination. The effort necessary for achieving the objectives was higher when using the device compared to the DSLR camera thus extending the postmortem examination duration. Its relative high power consumption and low battery capacity is also a disadvantage. At the current stage of development, Glass may be an adequate tool for education. For deployment in clinical care, issues such as hygiene, data protection, and privacy need to be addressed and are currently limiting chances for professional use. PMID:24521935
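For the duration comparison, the abstract names a Wilcoxon signed-rank test; a sketch of that paired test on hypothetical per-examination times (assuming roughly the reported 42.6 s mean difference) is shown below.

```python
# Sketch of the paired comparison reported for examination durations: a Wilcoxon
# signed-rank test on per-examination times with Glass versus a DSLR camera.
# The duration values are hypothetical placeholders.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
dslr_s = rng.normal(300, 40, size=20)               # hypothetical durations in seconds
glass_s = dslr_s + rng.normal(42.6, 25, size=20)    # Glass assumed ~42.6 s slower on average

stat, p = wilcoxon(glass_s, dslr_s)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}, "
      f"mean difference = {np.mean(glass_s - dslr_s):.1f} s")
```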
Google Glass for documentation of medical findings: evaluation in forensic medicine.
Albrecht, Urs-Vito; von Jan, Ute; Kuebler, Joachim; Zoeller, Christoph; Lacher, Martin; Muensterer, Oliver J; Ettinger, Max; Klintschar, Michael; Hagemeier, Lars
2014-02-12
Google Glass is a promising premarket device that includes an optical head-mounted display. Several proof of concept reports exist, but there is little scientific evidence regarding its use in a medical setting. The objective of this study was to empirically determine the feasibility of deploying Glass in a forensics setting. Glass was used in combination with a self-developed app that allowed for hands-free operation during autopsy and postmortem examinations of 4 decedents performed by 2 physicians. A digital single-lens reflex (DSLR) camera was used for image comparison. In addition, 6 forensic examiners (3 male, 3 female; age range 23-48 years, age mean 32.8 years, SD 9.6; mean work experience 6.2 years, SD 8.5) were asked to evaluate 159 images for image quality on a 5-point Likert scale, specifically color discrimination, brightness, sharpness, and their satisfaction with the acquired region of interest. Statistical evaluations were performed to determine how Glass compares with conventionally acquired digital images. All images received good (median 4) and very good ratings (median 5) for all 4 categories. Autopsy images taken by Glass (n=32) received significantly lower ratings than those acquired by DSLR camera (n=17) (region of interest: z=-5.154, P<.001; sharpness: z=-7.898, P<.001; color: z=-4.407, P<.001, brightness: z=-3.187, P=.001). For 110 images of postmortem examinations (Glass: n=54, DSLR camera: n=56), ratings for region of interest (z=-8.390, P<.001) and brightness (z=-540, P=.007) were significantly lower. For interrater reliability, intraclass correlation (ICC) values were good for autopsy (ICC=.723, 95% CI .667-.771, P<.001) and postmortem examination (ICC=.758, 95% CI .727-.787, P<.001). Postmortem examinations performed using Glass took 42.6 seconds longer than those done with the DSLR camera (z=-2.100, P=.04 using Wilcoxon signed rank test). The battery charge of Glass quickly decreased; an average 5.5% (SD 1.85) of its battery capacity was spent per postmortem examination (0.81% per minute or 0.79% per picture). Glass was efficient for acquiring images for documentation in forensic medicine, but the image quality was inferior compared to a DSLR camera. Images taken with Glass received significantly lower ratings for all 4 categories in an autopsy setting and for region of interest and brightness in postmortem examination. The effort necessary for achieving the objectives was higher when using the device compared to the DSLR camera thus extending the postmortem examination duration. Its relative high power consumption and low battery capacity is also a disadvantage. At the current stage of development, Glass may be an adequate tool for education. For deployment in clinical care, issues such as hygiene, data protection, and privacy need to be addressed and are currently limiting chances for professional use.
Advanced digital SAR processing study
NASA Technical Reports Server (NTRS)
Martinson, L. W.; Gaffney, B. P.; Liu, B.; Perry, R. P.; Ruvin, A.
1982-01-01
A highly programmable, land-based, real-time synthetic aperture radar (SAR) processor requiring a processed pixel rate of 2.75 MHz or more in a four-look system was designed. Variations in range and azimuth compression, number of looks, range swath, range migration and SAR mode were specified. Alternative range and azimuth processing algorithms were examined in conjunction with projected integrated circuit, digital architecture, and software technologies. The advanced digital SAR processor (ADSP) employs an FFT convolver algorithm for both range and azimuth processing in a parallel architecture configuration. Algorithm performance comparisons, system design, implementation tradeoffs and the results of a supporting survey of integrated circuit and digital architecture technologies are reported. Cost tradeoffs and projections with alternate implementation plans are presented.
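The FFT-convolver approach to range compression can be sketched generically: the received line is matched-filtered by multiplying its spectrum with the conjugate spectrum of the transmitted linear-FM chirp and inverse transforming. The chirp parameters and target positions below are illustrative, not the ADSP design values.

```python
# Sketch of FFT-based range compression (the "FFT convolver" idea): the received
# echo is matched-filtered with the conjugate spectrum of the transmitted chirp.
import numpy as np

fs = 100e6                     # sample rate (illustrative)
T = 10e-6                      # pulse length
B = 30e6                       # chirp bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)      # baseband linear-FM pulse

# Received range line: two point targets at different delays, plus noise.
n_range = 4096
rng = np.random.default_rng(0)
rx = np.zeros(n_range, dtype=complex)
for delay, amp in ((500, 1.0), (1700, 0.6)):
    rx[delay:delay + len(chirp)] += amp * chirp
rx += 0.05 * (rng.standard_normal(n_range) + 1j * rng.standard_normal(n_range))

# Fast convolution: multiply spectra with the reference conjugated (matched filter).
NFFT = 8192
compressed = np.fft.ifft(np.fft.fft(rx, NFFT) * np.conj(np.fft.fft(chirp, NFFT)))[:n_range]

mag = np.abs(compressed)
p1 = int(np.argmax(mag))
mag2 = mag.copy()
mag2[max(0, p1 - 20): p1 + 20] = 0          # blank the first target's mainlobe
p2 = int(np.argmax(mag2))
print("detected target positions:", sorted((p1, p2)))   # ~500 and ~1700
```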
Single-shot digital holography by use of the fractional Talbot effect.
Martínez-León, Lluís; Araiza-E, María; Javidi, Bahram; Andrés, Pedro; Climent, Vicent; Lancis, Jesús; Tajahuerce, Enrique
2009-07-20
We present a method for recording in-line single-shot digital holograms based on the fractional Talbot effect. In our system, an image sensor records the interference between the light field scattered by the object and a properly codified parallel reference beam. A simple binary two-dimensional periodic grating is used to codify the reference beam generating a periodic three-step phase distribution over the sensor plane by fractional Talbot effect. This provides a method to perform single-shot phase-shifting interferometry at frame rates only limited by the sensor capabilities. Our technique is well adapted for dynamic wavefront sensing applications. Images of the object are digitally reconstructed from the digital hologram. Both computer simulations and experimental results are presented.
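Ignoring the spatial multiplexing of the three phase steps by the Talbot grating, the underlying three-step phase-shifting recovery can be sketched directly: with shifts 0, 2π/3 and 4π/3, the sum of I_k·exp(i·δ_k) over the three interferograms is proportional to the object field times the conjugate reference. The synthetic fields below are placeholders.

```python
# Sketch of generic three-step phase-shifting recovery (the Talbot-grating
# spatial multiplexing of the three phase steps is not modelled).  With shifts
# d_k = 0, 2pi/3, 4pi/3:  sum_k I_k * exp(i*d_k) = 3 * O * conj(R).
import numpy as np

rng = np.random.default_rng(0)
shape = (128, 128)
phase = 2 * np.pi * rng.random(shape)            # synthetic object phase
amp = 0.5 + 0.5 * rng.random(shape)              # synthetic object amplitude
obj = amp * np.exp(1j * phase)
ref = 1.0                                        # unit-amplitude plane reference

deltas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
holograms = [np.abs(obj + ref * np.exp(1j * d)) ** 2 for d in deltas]

recovered = sum(I * np.exp(1j * d) for I, d in zip(holograms, deltas)) / 3.0
print("max reconstruction error:", np.max(np.abs(recovered - obj)))   # numerically ~0
```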
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAVs). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and time-stamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external time-stamp processing, respectively.
Kanto-Nishimaki, Yuko; Saito, Haruka; Watanabe-Aoyagi, Miwako; Toda, Ritsuko; Iwadate, Kimiharu
2014-11-01
Few large-scale investigations have examined the oxyhemoglobin ratio (%O2-Hb) or the carboxyhemoglobin ratio (%CO-Hb) in fatal hypothermia and death by fire as applicable to forensic medicine. We therefore retrospectively examined right and left cardiac blood samples for both %O2-Hb and %CO-Hb in 690 forensic autopsy cases. We sought to establish reference values for the above forensic diagnoses, to compare %O2-Hb in fatal hypothermia with or without cardiopulmonary resuscitation (CPR), and to examine the relationship between %CO-Hb and smoking history. All %O2-Hb and %CO-Hb data were obtained during or immediately after autopsy using a portable CO-oximeter. Deaths by carbon monoxide (CO) intoxication and deaths by fire were excluded from the analysis involving smoking history. In fatal hypothermia, %O2-Hb in the left cardiac blood was significantly higher than that in the right cardiac blood, providing important evidence for fatal hypothermia. Furthermore, %O2-Hb in the left cardiac blood increases with CPR, and that in the right cardiac blood increases in parallel. No correlation was observed between rectal temperature and %O2-Hb in the right and left cardiac blood, indicating that it is unlikely that postmortem cooling increases %O2-Hb in cardiac blood. %CO-Hb in smokers was significantly higher than that in non-smokers, although the number of cigarettes smoked did not appear to be significant. When assessing death by fire, we identified %CO-Hb of >10% as a reliable marker of antemortem CO inhalation, regardless of smoking history. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Filipino DNA variation at 12 X-chromosome short tandem repeat markers.
Salvador, Jazelyn M; Apaga, Dame Loveliness T; Delfin, Frederick C; Calacal, Gayvelline C; Dennis, Sheila Estacio; De Ungria, Maria Corazon A
2018-06-08
Demands for solving complex kinship scenarios where only distant relatives are available for testing have risen in the past years. In these instances, other genetic markers such as X-chromosome short tandem repeat (X-STR) markers are employed to supplement autosomal and Y-chromosomal STR DNA typing. However, prior to use, the degree of STR polymorphism in the population requires evaluation through generation of an allele or haplotype frequency population database. This population database is also used for statistical evaluation of DNA typing results. Here, we report X-STR data from 143 unrelated Filipino male individuals who were genotyped via conventional polymerase chain reaction-capillary electrophoresis (PCR-CE) using the 12 X-STR loci included in the Investigator ® Argus X-12 kit (Qiagen) and via massively parallel sequencing (MPS) of seven X-STR loci included in the ForenSeq ™ DNA Signature Prep kit of the MiSeq ® FGx ™ Forensic Genomics System (Illumina). Allele calls between PCR-CE and MPS systems were consistent (100% concordance) across seven overlapping X-STRs. Allele and haplotype frequencies and other parameters of forensic interest were calculated based on length (PCR-CE, 12 X-STRs) and sequence (MPS, seven X-STRs) variations observed in the population. Results of our study indicate that the 12 X-STRs in the PCR-CE system are highly informative for the Filipino population. MPS of seven X-STR loci identified 73 X-STR alleles compared with 55 X-STR alleles that were identified solely by length via PCR-CE. Of the 73 sequence-based alleles observed, six alleles have not been reported in the literature. The population data presented here may serve as a reference Philippine frequency database of X-STRs for forensic casework applications. Copyright © 2018 Elsevier B.V. All rights reserved.
Nuzzolese, Emilio; Borrini, Matteo
2010-11-01
During 2006-2007, the Archaeological Superintendency of Veneto (Italy) promoted a research project on mass graves located at the Nuovo Lazzaretto in Venice, where the corpses of plague victims were buried during the 16th and 17th centuries. The burials were of different stages and are believed to be the remains of plague victims from the numerous outbreaks of pestilence that occurred between the 15th and 17th centuries. Among the fragmented and commingled human bones, an unusual burial was found. The body was laid supine, with the top half of the thorax intact, the arms parallel to the rachis axis, and the articulations anatomically unaltered. Both the skull morphology and the dimensions of the caput humeri suggest the body was that of a woman. A brick of moderate size was found inside the oral cavity, keeping the mandible wide open. The data collected by the anthropologist were used to generate a taphonomic profile, which precluded the positioning of the brick being accidental. Likewise, the probability of the brick having come from the surrounding burial sediment was rejected, as the only other inclusions found were bone fragments from previous burials in the same area. The data collected by the odontologist were employed for age estimation and radiological dental assessment. The forensic profile was based conceptually on the "circumstances of death" and concluded that the positioning of the brick was intentional and attributable to a symbolic burial ritual. This ritual reflects the belief, intimately held at the time, in a link between the plague and the mythological figure of the vampire. © 2010 American Academy of Forensic Sciences.
Analysis of 12 X-STR loci in the population of south Croatia.
Mršić, Gordan; Ozretić, Petar; Crnjac, Josip; Merkaš, Siniša; Račić, Ivana; Rožić, Sara; Sukser, Viktorija; Popović, Maja; Korolija, Marina
2017-02-01
The aim of the study was to assess the forensic pertinence of 12 short tandem repeats (STRs) on the X chromosome in the population of south Croatia. The Investigator® Argus X-12 kit was used to co-amplify 12 STR loci belonging to four linkage groups (LGs) on the X chromosome in 99 male and 98 female DNA samples from unrelated donors. PCR products were analyzed by capillary electrophoresis. Population genetic and forensic parameters were calculated with the Arlequin and POPTREE2 software and an on-line tool available at ChrX-STR.org. Hardy-Weinberg equilibrium was confirmed for all X-STR markers in the female samples. Biallelic patterns at the DXS10079 locus were detected in four male samples. The polymorphism information content for the most (DXS10135) and the least (DXS8378) informative markers was 0.9212 and 0.6347, respectively. In both male and female samples, the combined power of discrimination exceeded 0.999999999. As confirmed by a linkage disequilibrium test, significant association of the marker pair DXS10074-DXS10079 (P = 0.0004) within LG2 and the marker pair DXS10101-DXS10103 (P = 0.0003) within LG3 was found only in male samples. The numbers of observed haplotypes in our sample pool amounted to 3.01%, 7.53%, 5% and 3.25% of the number of possible haplotypes for LG1, LG2, LG3 and LG4, respectively. According to a haplotype diversity value of 0.9981, LG1 was the most informative. In a comparison of south Croatia with 26 world populations, pairwise genetic distance values increase in parallel with geographical distance. Overall, the statistical assessment confirmed the suitability of the Investigator® Argus X-12 kit for forensic casework in both identification and familial testing in the population of south Croatia.
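Two of the reported quantities follow from standard formulas: polymorphism information content, PIC = 1 - sum(p_i^2) - sum over i<j of 2*p_i^2*p_j^2 (Botstein et al.), and Nei's haplotype diversity, HD = n/(n-1) * (1 - sum(p_i^2)). The sketch below computes both from hypothetical counts, not the Croatian data.

```python
# Sketch of two commonly reported forensic parameters, computed from observed
# allele/haplotype counts (the counts below are hypothetical placeholders).
import numpy as np

def pic(counts):
    """Polymorphism information content from allele counts or frequencies."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    cross = 0.0
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            cross += 2 * p[i] ** 2 * p[j] ** 2
    return 1 - (p ** 2).sum() - cross

def haplotype_diversity(counts):
    """Nei's haplotype diversity from haplotype counts."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts / n
    return n / (n - 1) * (1 - (p ** 2).sum())

allele_counts = [12, 30, 25, 18, 10, 4]            # hypothetical X-STR allele counts
haplo_counts = [5, 3, 2, 2] + [1] * 60             # mostly singleton haplotypes
print(f"PIC = {pic(allele_counts):.4f}")
print(f"HD  = {haplotype_diversity(haplo_counts):.4f}")
```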
Preserving the Pedagogy: The Director of Forensics as an At-Risk Professional.
ERIC Educational Resources Information Center
Jensen, Scott
Today's collegiate forensic activities have changed in ways that pose profound challenges to directors of forensics. Six primary factors that contribute to the "at-riskness" of directors of forensics are: the changing face of today's forensic program forces difficult choices; the forensics community is seeing signs of a crisis in…
Digital forensic osteology--possibilities in cooperation with the Virtopsy project.
Verhoff, Marcel A; Ramsthaler, Frank; Krähahn, Jonathan; Deml, Ulf; Gille, Ralf J; Grabherr, Silke; Thali, Michael J; Kreutz, Kerstin
2008-01-30
The present study was carried out to check whether classic osteometric parameters can be determined from the 3D reconstructions of MSCT (multislice computed tomography) scans acquired in the context of the Virtopsy project. To this end, four isolated and macerated skulls were examined by six examiners. First the skulls were conventionally (manually) measured using 32 internationally accepted linear measurements. Then the skulls were scanned by the use of MSCT with slice thicknesses of 1.25 mm and 0.63 mm, and the 33 measurements were virtually determined on the digital 3D reconstructions of the skulls. The results of the traditional and the digital measurements were compared for each examiner to figure out variations. Furthermore, several parameters were measured on the cranium and postcranium during an autopsy and compared to the values that had been measured on a 3D reconstruction from a previously acquired postmortem MSCT scan. The results indicate that equivalent osteometric values can be obtained from digital 3D reconstructions from MSCT scans using a slice thickness of 1.25 mm, and from conventional manual examinations. The measurements taken from a corpse during an autopsy could also be validated with the methods used for the digital 3D reconstructions in the context of the Virtopsy project. Future aims are the assessment and biostatistical evaluation in respect to sex, age and stature of all data sets stored in the Virtopsy project so far, as well as of future data sets. Furthermore, a definition of new parameters, only measurable with the aid of MSCT data would be conceivable.
NASA Astrophysics Data System (ADS)
Lhamon, Michael Earl
A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This increases the computational burden but also introduces a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation, and they map readily onto parallel digital architectures, which imply new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible, and it has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.
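The dissertation's specific operators and "Super Images" are not reproduced here; the sketch below only illustrates the underlying observation that when a classifier needs each filter's correlation value at zero shift only, that value is a direct inner product, making the full FFT-based correlation surface unnecessary. The filter bank is random and purely illustrative.

```python
# Zero-shift correlation values from a filter bank: direct inner products versus
# the same values extracted from full FFT-based circular correlation surfaces.
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
bank = rng.standard_normal((10, 64, 64))        # hypothetical filter bank

# Direct inner products: one multiply-accumulate pass per filter.
scores_direct = np.array([np.vdot(f, img).real for f in bank])

# Same values read from the zero-shift bin of full FFT-based correlations.
scores_fft = np.array([
    np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(f))).real[0, 0]
    for f in bank
])

print("max difference:", np.max(np.abs(scores_direct - scores_fft)))   # ~1e-10
```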
Forensic Analysis of Human DNA from Samples Contaminated with Bioweapons Agents
2011-10-01
Timbers, Jason; Wright, Kathryn. Prepared by: Royal Canadian Mounted Police (RCMP), Forensic Science and Identification Services.
Can dead man tooth do tell tales? Tooth prints in forensic identification.
Christopher, Vineetha; Murthy, Sarvani; Ashwinirani, S R; Prasad, Kulkarni; Girish, Suragimath; Vinit, Shashikanth Patil
2017-01-01
We know that teeth trouble us a lot while we are alive, but they last for thousands of years after we are dead. Teeth, being among the strongest and most resistant structures of the body, are a highly significant tool in forensic investigations. Patterns of enamel rod ends on the tooth surface are known as tooth prints. This study aimed to determine whether tooth prints can serve as a forensic tool for personal identification in the same way as fingerprints. In the present in vivo study, the acetate peel technique was used to obtain replicas of enamel rod end patterns. Tooth prints of the upper first premolars were recorded from 80 individuals after acid etching, using cellulose acetate strips. Digital images of the tooth prints obtained at two different intervals were then subjected to biometric conversion using Verifinger Standard software development kit (SDK) version 6.5, followed by comparison of the tooth prints with Automated Fingerprint Identification System (AFIS) software. Similarly, each individual's fingerprints were also recorded and subjected to the same software. The AFIS scores obtained from the images were statistically analyzed using Cronbach's test. We observed that two tooth prints taken from the same individual at two intervals exhibited similarity in many cases, with the wavy pattern being the predominant tooth print type, whereas the same prints showed dissimilarity when compared with those of other individuals. We also found that most individuals with a whorl-pattern fingerprint showed a wavy-pattern tooth print, and a few loop-type fingerprints were associated with a linear pattern of tooth prints. Further experiments on both tooth prints and fingerprints are required before they can be relied upon for establishing an individual's identity.