ERIC Educational Resources Information Center
Dunbar, Laura
2014-01-01
This article is an introduction to video screen capture. Basic information about two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, is also discussed. Practical applications for video screen capture are given.
Quick and Easy: Use Screen Capture Software to Train and Communicate
ERIC Educational Resources Information Center
Schuster, Ellen
2011-01-01
Screen capture (screen cast) software can be used to develop short videos for training purposes. Developing videos is quick and easy. This article describes how these videos are used as tools to reinforce face-to-face and interactive TV curriculum training in a nutrition education program. Advantages of developing these videos are shared.…
ERIC Educational Resources Information Center
Wales, Tim; Robertson, Penny
2008-01-01
Purpose: The aim of this paper is to share the experiences and challenges faced by the Open University Library (OUL) in using screen capture software to develop online literature search tutorials. Design/methodology/approach: A summary of information literacy support at the OUL is provided as background information to explain the decision to…
Capture Their Attention: Capturing Lessons Using Screen Capture Software
ERIC Educational Resources Information Center
Drumheller, Kristina; Lawler, Gregg
2011-01-01
When students miss classes for university activities such as athletic and academic events, they inevitably miss important class material. Students can get notes from their peers or visit professors to find out what they missed, but when students miss new and challenging material these steps are sometimes not enough. Screen capture and recording…
Privacy-preserving screen capture: towards closing the loop for health IT usability.
Cooley, Joseph; Smith, Sean
2013-08-01
As information technology permeates healthcare (particularly provider-facing systems), maximizing system effectiveness requires the ability to document and analyze tricky or troublesome usage scenarios. However, real-world health IT systems are typically replete with privacy-sensitive data regarding patients, diagnoses, clinicians, and EMR user interface details; instrumentation for screen capture (capturing and recording the scenario depicted on the screen) needs to respect these privacy constraints. Furthermore, real-world health IT systems are typically composed of modules from many sources, mission-critical and often closed-source; any instrumentation for screen capture can rely neither on access to structured output nor access to software internals. In this paper, we present a tool to help solve this problem: a system that combines keyboard video mouse (KVM) capture with automatic text redaction (and interactively selectable unredaction) to produce precise technical content that can enrich stakeholder communications and improve end-user influence on system evolution. KVM-based capture makes our system both application-independent and OS-independent because it eliminates software-interface dependencies on capture targets. Using a corpus of EMR screenshots, we present empirical measurements of redaction effectiveness and processing latency to demonstrate system performance. We discuss how these techniques can translate into instrumentation systems that improve real-world health IT deployments. Copyright © 2013 Elsevier Inc. All rights reserved.
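The redaction step described above can be approximated in a few lines. The sketch below is only an illustration of automatic text redaction on a captured screenshot, assuming Pillow and the Tesseract OCR engine via pytesseract; it is not the authors' KVM-based pipeline, which operates on video frames and supports selective unredaction.

```python
# Minimal text-redaction sketch: OCR a captured screen image and mask all
# detected text boxes. This only approximates the idea in the paper; the
# actual system works on KVM video frames and allows selective unredaction.
from PIL import Image, ImageDraw
import pytesseract

def redact_text(in_path: str, out_path: str, min_conf: float = 40.0) -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    for i, conf in enumerate(data["conf"]):
        if float(conf) >= min_conf and data["text"][i].strip():
            x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
            draw.rectangle([x, y, x + w, y + h], fill="black")  # black out the word
    img.save(out_path)

# redact_text("emr_screenshot.png", "emr_screenshot_redacted.png")
```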
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology, controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides the natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.
Empirical Data Collection and Analysis Using Camtasia and Transana
ERIC Educational Resources Information Center
Thorsteinsson, Gisli; Page, Tom
2009-01-01
One of the possible techniques for collecting empirical data is video recordings of a computer screen with specific screen capture software. This method for collecting empirical data shows how students use the BSCWII (Be Smart Cooperate Worldwide--a web based collaboration/groupware environment) to coordinate their work and collaborate in…
P1198: software for tracing decision behavior in lending to small businesses.
Andersson, P
2001-05-01
This paper describes a process-tracing software program specially designed to capture decision behavior in lending to small businesses. The source code was written in Lotus Notes. The software runs in a Web browser and consists of two interacting systems: a database and a user interface. The database includes three realistic loan applications. The user interface consists of different but interacting screens that enable the participant to operate the software. Log files register the decision behavior of the participant. An empirical example is presented in order to show the software's potential in providing insights into judgment and decision making. The implications of the software are discussed.
Fluorescent screens and image processing for the APS linac test stand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, W.; Ko, K.
A fluorescent screen was used to monitor the relative beam position and spot size of a 56-MeV electron beam in the linac test stand. A chromium-doped alumina ceramic screen inserted into the beam was monitored by a video camera. The resulting image was captured using a frame grabber and stored in memory. Reconstruction and analysis of the stored image were performed using PV-WAVE. This paper will discuss the hardware and software implementation of the fluorescent screen and imaging system. Proposed improvements for the APS linac fluorescent screens and image processing will also be discussed.
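As an illustration of the kind of analysis such frame-grabber images support, the sketch below is a hypothetical NumPy equivalent of the basic spot analysis (the original processing was done in PV-WAVE): it estimates the beam centroid and RMS spot size from a background-subtracted grey-level image.

```python
# Hypothetical NumPy re-implementation of the basic spot analysis: estimate
# beam centroid and RMS spot size from a background-subtracted screen image.
import numpy as np

def beam_centroid_and_size(image: np.ndarray, background: float = 0.0):
    img = np.clip(image.astype(float) - background, 0, None)
    total = img.sum()
    y, x = np.indices(img.shape)
    cx = (x * img).sum() / total                         # centroid (pixels)
    cy = (y * img).sum() / total
    sx = np.sqrt(((x - cx) ** 2 * img).sum() / total)    # RMS widths (pixels)
    sy = np.sqrt(((y - cy) ** 2 * img).sum() / total)
    return cx, cy, sx, sy
```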
Making Your Blackboard Courses Talk!
ERIC Educational Resources Information Center
Burcham, Tim M.
This presentation shows how to deliver audio/video (AV) lectures to online students using relatively inexpensive AV software (i.e., Camtasia Studio) and the standard Blackboard interface. The first section describes two types of production programs: presentation media converters and screen capture utilities. The second section covers making an AV…
ERIC Educational Resources Information Center
Yee, Kevin; Hargis, Jace
2010-01-01
This article discusses the benefits of screencasts and their instructional uses. Well-known for some years to advanced technology users, Screen Capture Software (SCS) offers the promise of recording action on the computer desktop together with voiceover narration, all combined into a single movie file that can be shared, emailed, or uploaded.…
Development of x-ray imaging technique for liquid screening at airport
NASA Astrophysics Data System (ADS)
Sulaiman, Nurhani binti; Srisatit, Somyot
2016-01-01
X-ray imaging technology is a viable option for recognizing flammable liquids for the purposes of aviation security. In this study, an X-ray imaging system was developed in which the image viewing system was built using a digital camera coupled with a gadolinium oxysulfide (GOS) fluorescent screen. The camera was equipped with software for remote control of the camera via a USB cable, which allowed the images to be captured. The images were analysed to determine the average grey level using software designed in Microsoft Visual Basic 6.0. Data were obtained for liquids of various densities at thicknesses of 4.5 cm, 6.0 cm and 7.5 cm, for X-ray energies ranging from 70 to 200 kVp. In order to verify the reliability of the constructed calibration data, the system was tested with a few types of unknown liquids. The developed system could be conveniently employed for security screening in order to discriminate between a threat and an innocuous liquid.
Temesi, David G; Martin, Scott; Smith, Robin; Jones, Christopher; Middleton, Brian
2010-06-30
Screening assays capable of performing quantitative analysis on hundreds of compounds per week are used to measure metabolic stability during early drug discovery. Modern orthogonal acceleration time-of-flight (OATOF) mass spectrometers equipped with analogue-to-digital signal capture (ADC) now offer performance levels suitable for many applications normally supported by triple quadrupole instruments operated in multiple reaction monitoring (MRM) mode. Herein the merits of MRM and OATOF with ADC detection are compared for more than 1000 compounds screened in rat and/or cryopreserved human hepatocytes over a period of 3 months. Statistical comparison of a structurally diverse subset indicated good agreement for the two detection methods. The overall success rate was higher using OATOF detection and data acquisition time was reduced by around 20%. Targeted metabolites of diazepam were detected in samples from a CLint determination performed at 1 microM. Data acquisition by positive and negative ion mode switching can be achieved on high-performance liquid chromatography (HPLC) peak widths as narrow as 0.2 min (at base), thus enabling a more comprehensive first pass analysis with fast HPLC gradients. Unfortunately, most existing OATOF instruments lack the software tools necessary to rapidly convert the huge amounts of raw data into quantified results. Software with functionality similar to open access triple quadrupole systems is needed for OATOF to truly compete in a high-throughput screening environment. Copyright 2010 John Wiley & Sons, Ltd.
Development of x-ray imaging technique for liquid screening at airport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulaiman, Nurhani binti, E-mail: nhani.sulaiman@gmail.com; Srisatit, Somyot, E-mail: somyot.s@chula.ac.th
2016-01-22
X-ray imaging technology is a viable option for recognizing flammable liquids for the purposes of aviation security. In this study, an X-ray imaging system was developed in which the image viewing system was built using a digital camera coupled with a gadolinium oxysulfide (GOS) fluorescent screen. The camera was equipped with software for remote control of the camera via a USB cable, which allowed the images to be captured. The images were analysed to determine the average grey level using software designed in Microsoft Visual Basic 6.0. Data were obtained for liquids of various densities at thicknesses of 4.5 cm, 6.0 cm and 7.5 cm, for X-ray energies ranging from 70 to 200 kVp. In order to verify the reliability of the constructed calibration data, the system was tested with a few types of unknown liquids. The developed system could be conveniently employed for security screening in order to discriminate between a threat and an innocuous liquid.
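The grey-level measurement described in these two records can be sketched as follows. This is a hedged Python equivalent for orientation only (the original analysis software was written in Visual Basic 6.0); the file name and region of interest are placeholders.

```python
# Hedged Python equivalent of the average-grey-level measurement: load a
# captured image, convert to greyscale and average the pixel values inside a
# user-chosen region of interest.
from PIL import Image
import numpy as np

def average_grey_level(path: str, roi: tuple[int, int, int, int]) -> float:
    """roi = (left, upper, right, lower) in pixels."""
    grey = np.asarray(Image.open(path).convert("L").crop(roi), dtype=float)
    return float(grey.mean())

# level = average_grey_level("liquid_70kVp.png", (100, 100, 300, 300))
```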
Shachak, Aviv; Dow, Rustam; Barnsley, Jan; Tu, Karen; Domb, Sharon; Jadad, Alejandro R; Lemieux-Charles, Louise
2013-06-04
Tutorials and user manuals are important forms of impersonal support for using software applications including electronic medical records (EMRs). Differences between user- and vendor documentation may indicate support needs, which are not sufficiently addressed by the official documentation, and reveal new elements that may inform the design of tutorials and user manuals. What are the differences between user-generated tutorials and manuals for an EMR and the official user manual from the software vendor? Effective design of tutorials and user manuals requires careful packaging of information, balance between declarative and procedural texts, an action and task-oriented approach, support for error recognition and recovery, and effective use of visual elements. No previous research compared these elements between formal and informal documents. We conducted a mixed methods study. Seven tutorials and two manuals for an EMR were collected from three family health teams and compared with the official user manual from the software vendor. Documents were qualitatively analyzed using a framework analysis approach in relation to the principles of technical documentation described above. Subsets of the data were quantitatively analyzed using cross-tabulation to compare the types of error information and visual cues in screen captures between user- and vendor-generated manuals. The user-developed tutorials and manuals differed from the vendor-developed manual in that they contained mostly procedural and not declarative information; were customized to the specific workflow, user roles, and patient characteristics; contained more error information related to work processes than to software usage; and used explicit visual cues on screen captures to help users identify window elements. These findings imply that to support EMR implementation, tutorials and manuals need to be customized and adapted to specific organizational contexts and workflows. The main limitation of the study is its generalizability. Future research should address this limitation and may explore alternative approaches to software documentation, such as modular manuals or participatory design.
VAP/VAT: video analytics platform and test bed for testing and deploying video analytics
NASA Astrophysics Data System (ADS)
Gorodnichy, Dmitry O.; Dubrofsky, Elan
2010-04-01
Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with open-architecture as well as closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
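A ScreenCapture-style video feed can be mocked up in a few lines. The sketch below is an illustration only, not the CBSA component: it grabs the desktop at a fixed rate (on platforms where Pillow's ImageGrab is available) and hands each frame to a user-supplied detector callback that decides whether a "Visual Event" occurred.

```python
# Illustrative screen-feed capture loop (not the VAP/VAT implementation):
# grab the desktop periodically and pass each frame to a detection callback.
import time
from typing import Callable
from PIL import Image, ImageGrab

def capture_loop(on_frame: Callable[[Image.Image], bool], fps: float = 2.0,
                 max_frames: int = 100) -> None:
    for i in range(max_frames):
        frame = ImageGrab.grab()           # full-desktop screenshot
        if on_frame(frame):                # callback returns True on an event
            frame.save(f"event_{i:04d}.png")
        time.sleep(1.0 / fps)
```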
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-03-01
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit, DICOM IQSC, has been developed to implement the SC-centered information integration of quantitative analysis for routine practice of nuclear medicine. Preliminary experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
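The general idea of embedding quantification results into a secondary-capture header can be sketched with pydicom. This is a minimal illustration only: the private creator string and tag layout below are hypothetical and do not reproduce the IQSC specification described in the paper.

```python
# Minimal pydicom sketch of embedding image-quantification results into a
# Secondary Capture header via a private block. The creator string and tag
# layout here are hypothetical, not the DICOM IQSC specification.
import pydicom

def embed_iq_results(sc_path: str, out_path: str,
                     roi_mean: float, roi_area_mm2: float) -> None:
    ds = pydicom.dcmread(sc_path)
    block = ds.private_block(0x0041, "IQSC_DEMO", create=True)  # hypothetical group/creator
    block.add_new(0x01, "DS", f"{roi_mean:.4f}")       # ROI mean value (assumed units)
    block.add_new(0x02, "DS", f"{roi_area_mm2:.2f}")   # ROI area in mm^2
    ds.save_as(out_path)
```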
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals into readable information. It is capable of encrypting data using the 256-bit advanced encryption standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and transmission of information to a central database server is carried out over the secured internet. The information stored in the central database server is shown on the web page. The users can view the web page on the internet. A dedicated and secured web and database server (https) is used to provide information security.
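For orientation, 256-bit AES encryption of a reading before transmission can look like the sketch below. The abstract does not specify a cipher mode or message format, so AES-GCM from the cryptography library is used purely as an example; tag IDs and payload contents are placeholders.

```python
# Illustrative 256-bit AES encryption of an RFID reading before transmission.
# The cipher mode (AES-GCM) and message layout are assumptions for the sketch.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_reading(key: bytes, tag_id: str, payload: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(12)                                       # 96-bit nonce per message
    ct = AESGCM(key).encrypt(nonce, payload, tag_id.encode())    # tag id as associated data
    return nonce, ct

# key = AESGCM.generate_key(bit_length=256)
# nonce, ct = encrypt_reading(key, "E200-3412-0123", b"portal=3;time=2024-01-01T12:00Z")
```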
An experience of qualified preventive screening: shiraz smart screening software.
Islami Parkoohi, Parisa; Zare, Hashem; Abdollahifard, Gholamreza
2015-01-01
Computerized preventive screening software is a cost-effective intervention tool to address non-communicable chronic diseases. Shiraz Smart Screening Software (SSSS) was developed as an innovative tool for qualified screening. It allows simultaneous smart screening of several high-burden chronic diseases and supports reminder notification functionality. The extent to which SSSS affects screening quality is also described. Following software development, preventive screening and annual health examinations of 261 school staff (Medical School of Shiraz, Iran) were carried out in a software-assisted manner. To evaluate the quality of the software-assisted screening, we used a quasi-experimental study design and determined coverage, irregular attendance and inappropriateness proportions in relation to the manual and software-assisted screening, as well as the corresponding number of requested tests. With the manual screening method, 27% of employees were covered (with 94% irregular attendance), while with software-assisted screening the coverage proportion was 79% (attendance status will become clear after the specified time). The frequency of inappropriate screening test requests before the software implementation was 41.37% for fasting plasma glucose, 41.37% for lipid profile, 0.84% for occult blood, 0.19% for flexible sigmoidoscopy/colonoscopy, 35.29% for Pap smear, 19.20% for mammography and 11.2% for prostate-specific antigen. All of the above were corrected by the software application. In total, 366 manual screening tests and 334 software-assisted screening tests were requested. SSSS is an innovative tool to improve the quality of preventive screening plans in terms of increased screening coverage, reduction in inappropriateness and the total number of requested tests.
Dow, Rustam; Barnsley, Jan; Tu, Karen; Domb, Sharon; Jadad, Alejandro R.; Lemieux-Charles, Louise
2015-01-01
Research problem: Tutorials and user manuals are important forms of impersonal support for using software applications including electronic medical records (EMRs). Differences between user- and vendor documentation may indicate support needs, which are not sufficiently addressed by the official documentation, and reveal new elements that may inform the design of tutorials and user manuals. Research question: What are the differences between user-generated tutorials and manuals for an EMR and the official user manual from the software vendor? Literature review: Effective design of tutorials and user manuals requires careful packaging of information, balance between declarative and procedural texts, an action and task-oriented approach, support for error recognition and recovery, and effective use of visual elements. No previous research compared these elements between formal and informal documents. Methodology: We conducted a mixed methods study. Seven tutorials and two manuals for an EMR were collected from three family health teams and compared with the official user manual from the software vendor. Documents were qualitatively analyzed using a framework analysis approach in relation to the principles of technical documentation described above. Subsets of the data were quantitatively analyzed using cross-tabulation to compare the types of error information and visual cues in screen captures between user- and vendor-generated manuals. Results and discussion: The user-developed tutorials and manuals differed from the vendor-developed manual in that they contained mostly procedural and not declarative information; were customized to the specific workflow, user roles, and patient characteristics; contained more error information related to work processes than to software usage; and used explicit visual cues on screen captures to help users identify window elements. These findings imply that to support EMR implementation, tutorials and manuals need to be customized and adapted to specific organizational contexts and workflows. The main limitation of the study is its generalizability. Future research should address this limitation and may explore alternative approaches to software documentation, such as modular manuals or participatory design. PMID:26190888
Rapid Development of Custom Software Architecture Design Environments
1999-08-01
the tools themselves. This dissertation describes a new approach to capturing and using architectural design expertise in software architecture design environments...A language and tools are presented for capturing and encapsulating software architecture design expertise within a conceptual framework...of architectural styles and design rules. The design expertise thus captured is supported with an incrementally configurable software architecture
Burrell, Thomas; Fozard, Susan; Holroyd, Geoff H; French, Andrew P; Pound, Michael P; Bigley, Christopher J; James Taylor, C; Forde, Brian G
2017-01-01
Chemical genetics provides a powerful alternative to conventional genetics for understanding gene function. However, its application to plants has been limited by the lack of a technology that allows detailed phenotyping of whole-seedling development in the context of a high-throughput chemical screen. We have therefore sought to develop an automated micro-phenotyping platform that would allow both root and shoot development to be monitored under conditions where the phenotypic effects of large numbers of small molecules can be assessed. The 'Microphenotron' platform uses 96-well microtitre plates to deliver chemical treatments to seedlings of Arabidopsis thaliana L. and is based around four components: (a) the 'Phytostrip', a novel seedling growth device that enables chemical treatments to be combined with the automated capture of images of developing roots and shoots; (b) an illuminated robotic platform that uses a commercially available robotic manipulator to capture images of developing shoots and roots; (c) software to control the sequence of robotic movements and integrate these with the image capture process; (d) purpose-made image analysis software for automated extraction of quantitative phenotypic data. Imaging of each plate (representing 80 separate assays) takes 4 min and can easily be performed daily for time-course studies. As currently configured, the Microphenotron has a capacity of 54 microtitre plates in a growth room footprint of 2.1 m², giving a potential throughput of up to 4320 chemical treatments in a typical 10-day experiment. The Microphenotron has been validated by using it to screen a collection of 800 natural compounds for qualitative effects on root development and to perform a quantitative analysis of the effects of a range of concentrations of nitrate and ammonium on seedling development. The Microphenotron is an automated screening platform that for the first time is able to combine large numbers of individual chemical treatments with a detailed analysis of whole-seedling development, and particularly root system development. The Microphenotron should provide a powerful new tool for chemical genetics and for wider chemical biology applications, including the development of natural and synthetic chemical products for improved agricultural sustainability.
High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
2006-10-01
Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need to improve quality in the e-learning education system. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-image capturing software at relatively long intervals during a practical class. Then, the lecturer and the lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content with the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.
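The compositing idea can be illustrated generically as below. The paper's own extraction uses dedicated pattern-recognition techniques tuned to the lecture scene; this sketch merely uses OpenCV background subtraction to segment the moving lecturer and paste the segmented pixels onto the captured slide image.

```python
# Rough illustration of the compositing idea: segment the moving lecturer from
# the camera frame with background subtraction and overlay the segmented
# pixels on the captured slide image. Not the paper's extraction method.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def superimpose_lecturer(camera_frame: np.ndarray, slide_image: np.ndarray) -> np.ndarray:
    mask = subtractor.apply(camera_frame)          # foreground (lecturer) mask
    mask = cv2.medianBlur(mask, 5)                 # clean up speckle
    slide = cv2.resize(slide_image, (camera_frame.shape[1], camera_frame.shape[0]))
    out = slide.copy()
    out[mask > 0] = camera_frame[mask > 0]         # overlay lecturer pixels on the slide
    return out
```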
A Computerized Data-Capture System for Animal Biosafety Level 4 Laboratories
Bente, Dennis A; Friesen, Jeremy; White, Kyle; Koll, Jordan; Kobinger, Gary P
2011-01-01
The restrictive nature of an Animal Biosafety Level 4 (ABSL4) laboratory complicates even simple clinical evaluation including data capture. Typically, clinical data are recorded on paper during procedures, faxed out of the ABSL4, and subsequently manually entered into a computer. This system has many disadvantages including transcription errors. Here, we describe the development of a highly customizable, tablet-PC-based computerized data-capture system, allowing reliable collection of observational and clinical data from experimental animals in a restrictive biocontainment setting. A multidisciplinary team with skills in containment laboratory animal science, database design, and software engineering collaborated on the development of this system. The goals were to design an easy-to-use and flexible user interface on a touch-screen tablet PC with user-supportable processes for recovery, full auditing capabilities, and cost effectiveness. The system simplifies data capture, reduces the necessary time in an ABSL4 environment, offers timely reporting and review of data, facilitates statistical analysis, reduces potential of erroneous data entry, improves quality assurance of animal care, and advances the use and refinement of humane endpoints. PMID:22330712
A mobile trauma database with charge capture.
Moulton, Steve; Myung, Dan; Chary, Aron; Chen, Joshua; Agarwal, Suresh; Emhoff, Tim; Burke, Peter; Hirsch, Erwin
2005-11-01
Charge capture plays an important role in every surgical practice. We have developed and merged a custom mobile database (DB) system with our trauma registry (TRACS), to better understand our billing methods, revenue generators, and areas for improved revenue capture. The mobile database runs on handheld devices using the Windows Compact Edition platform. The front end was written in C# and the back end in SQL. The mobile database operates as a thick client; it includes active and inactive patient lists, billing screens, hot pick lists, and Current Procedural Terminology and International Classification of Diseases, Ninth Revision code sets. Microsoft Internet Information Server provides secure data transaction services between the back ends stored on each device. Traditional, handwritten billing information for three of five adult trauma surgeons was averaged over a 5-month period. Electronic billing information was then collected over a 3-month period using handheld devices and the subject software application. One surgeon used the software for all 3 months, and two surgeons used it for the latter 2 months of the electronic data collection period. This electronic billing information was combined with TRACS data to determine the clinical characteristics of the trauma patients who were and were not captured using the mobile database. Total charges increased by 135%, 148%, and 228% for each of the three trauma surgeons who used the mobile DB application. The majority of additional charges were for evaluation and management services. Patients who were captured and billed at the point of care using the mobile DB had higher Injury Severity Scores, were more likely to undergo an operative procedure, and had longer lengths of stay compared with those who were not captured. Total charges more than doubled using a mobile database to bill at the point of care. A subsequent comparison of TRACS data with billing information revealed a large amount of uncaptured patient revenue. Greater familiarity and broader use of mobile database technology holds the potential for even greater revenue capture.
Improvement of the user interface of multimedia applications by automatic display layout
NASA Astrophysics Data System (ADS)
Lueders, Peter; Ernst, Rolf
1995-03-01
Multimedia research has mainly focussed on real-time data capture and display, combined with compression, storage and transmission of these data. However, there is another problem: selecting and arranging, in real time, a possibly large amount of data from multiple media on the computer screen together with the textual and graphical data of regular software. This problem is already known from complex software systems, such as CASE and hypertext, and will be even more acute in multimedia systems. The aim of our work is to relieve the user of the burden of continuously selecting, placing and sizing windows and their contents, but without introducing solutions limited to only a few applications. We present an experimental system which controls the computer screen contents and layouts, directed by a user- and/or tool-provided information filter and prioritization. To be application independent, the screen layout is based on general layout optimization algorithms adapted from VLSI layout, which are controlled by application-specific objective functions. In this paper, we discuss the problems of a comprehensible screen layout, including the stability of optical information in time, the information filtering, the layout algorithms and the adaptation of the objective function to a specific application. We give some examples of different standard applications with layout problems ranging from hierarchical graph layout to window layout. The results show that automatic, tool-independent display layout is possible in a real-time interactive environment.
SU-F-T-94: Plan2pdf - a Software Tool for Automatic Plan Report for Philips Pinnacle TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, C
Purpose: To implement an automatic electronic PDF plan reporting tool for the Philips Pinnacle treatment planning system (TPS). Methods: We developed an electronic treatment plan reporting tool that enables fully automatic PDF reporting from the Pinnacle TPS to external EMR programs such as MOSAIQ. The tool is named "plan2pdf". plan2pdf is implemented using Pinnacle scripts, Java and UNIX shell scripts, without any external program needed. plan2pdf supports a full auto mode and a manual reporting mode. In full auto mode, with a single mouse click, plan2pdf will generate a detailed Pinnacle plan report in PDF format, which includes a customizable cover page, the Pinnacle plan summary, orthogonal views through each plan POI and the maximum dose point, a DRR for each beam, serial transverse views captured throughout the dose grid at a user-specified interval, and the DVH and scorecard windows. The final PDF report is also automatically bookmarked for each section above for convenient plan review. The final PDF report can either be saved in a user-specified folder on Pinnacle, or it can be automatically exported to an EMR import folder via a user-configured FTP service. In manual capture mode, plan2pdf allows users to capture any Pinnacle plan by full screen, individual window or a rectangular ROI drawn on screen. Furthermore, to avoid possible mix-up of patients' plans during auto-mode reporting, a user conflict check feature is included in plan2pdf: it prompts the user to wait if another patient is being exported by plan2pdf by another user. Results: plan2pdf was tested extensively and successfully at our institution, which consists of 5 centers, 15 dosimetrists and 10 physicists, running Pinnacle version 9.10 on Enterprise servers. Conclusion: plan2pdf provides a highly efficient, user-friendly and clinically proven platform for all Philips Pinnacle users to generate a detailed plan report in PDF format for external EMR systems.
Dynamic interrogative data capture (DIDC) : concept of operations.
DOT National Transportation Integrated Search
2016-04-01
This Concept of Operations (ConOps) describes the characteristics of the Dynamic Interrogative Data Capture (DIDC) algorithms and associated software. The objective of the DIDC algorithms and software is to optimize the capture and transmission of ve...
Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas
2012-01-01
1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
Durbin, Kenneth R.; Tran, John C.; Zamdborg, Leonid; Sweet, Steve M. M.; Catherman, Adam D.; Lee, Ji Eun; Li, Mingxi; Kellie, John F.; Kelleher, Neil L.
2011-01-01
Applying high-throughput Top-Down MS to an entire proteome requires a yet-to-be-established model for data processing. Since Top-Down is becoming possible on a large scale, we report our latest software pipeline dedicated to capturing the full value of intact protein data in automated fashion. For intact mass detection, we combine algorithms for processing MS1 data from both isotopically resolved (FT) and charge-state resolved (ion trap) LC-MS data, which are then linked to their fragment ions for database searching using ProSight. Automated determination of human keratin and tubulin isoforms is one result. Optimized for the intricacies of whole proteins, new software modules visualize proteome-scale data based on the LC retention time and intensity of intact masses and enable selective detection of PTMs to automatically screen for acetylation, phosphorylation, and methylation. Software functionality was demonstrated using comparative LC-MS data from yeast strains in addition to human cells undergoing chemical stress. We further these advances as a key aspect of realizing Top-Down MS on a proteomic scale. PMID:20848673
Roguev, Assen; Ryan, Colm J; Xu, Jiewei; Colson, Isabelle; Hartsuiker, Edgar; Krogan, Nevan
2018-02-01
This protocol describes computational analysis of genetic interaction screens, ranging from data capture (plate imaging) to downstream analyses. Plate imaging approaches using both digital camera and office flatbed scanners are included, along with a protocol for the extraction of colony size measurements from the resulting images. A commonly used genetic interaction scoring method, calculation of the S-score, is discussed. These methods require minimal computer skills, but some familiarity with MATLAB and Linux/Unix is a plus. Finally, an outline for using clustering and visualization software for analysis of resulting data sets is provided. © 2018 Cold Spring Harbor Laboratory Press.
Electron capture and excitation processes in H+-H collisions in dense quantum plasmas
NASA Astrophysics Data System (ADS)
Jakimovski, D.; Markovska, N.; Janev, R. K.
2016-10-01
Electron capture and excitation processes in proton-hydrogen atom collisions taking place in dense quantum plasmas are studied by employing the two-centre atomic orbital close-coupling (TC-AOCC) method. The Debye-Hückel cosine (DHC) potential is used to describe the plasma screening effects on the Coulomb interaction between charged particles. The properties of a hydrogen atom with DHC potential are investigated as a function of the screening strength of the potential. It is found that the decrease in binding energy of nl levels with increasing screening strength is considerably faster than in the case of the Debye-Hückel (DH) screening potential, appropriate for description of charged particle interactions in weakly coupled classical plasmas. This results in a reduction in the number of bound states in the DHC potential with respect to that in the DH potential for the same plasma screening strength, and is reflected in the dynamics of excitation and electron capture processes for the two screened potentials. The TC-AOCC total and state-selective electron capture and excitation cross sections with the DHC potential are calculated for a number of representative screening strengths in the 1-300 keV energy range and compared with those for the DH and pure Coulomb potential. The total capture cross sections for a selected number of screening strengths are compared with the available results from classical trajectory Monte Carlo calculations.
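For reference, the exponential-cosine (Debye-Hückel cosine) screened interaction referred to above is commonly written as below, with λ the screening length; this is the generic textbook form shown for orientation, not necessarily the paper's exact normalization.

```latex
% Generic forms of the screened potentials discussed above (orientation only):
V_{\mathrm{DHC}}(r) = -\frac{Z e^{2}}{r}\, e^{-r/\lambda} \cos\!\left(\frac{r}{\lambda}\right),
\qquad
V_{\mathrm{DH}}(r) = -\frac{Z e^{2}}{r}\, e^{-r/\lambda}.
```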
Screen Design Principles of Computer-Aided Instructional Software for Elementary School Students
ERIC Educational Resources Information Center
Berrin, Atiker; Turan, Bülent Onur
2017-01-01
This study aims to present primary school students' views about current educational software interfaces, and to propose principles for educational software screens. The study was carried out with a general screening model. Sample group of the study consisted of sixth grade students in Sehit Ögretmen Hasan Akan Elementary School. In this context,…
Component-Based Visualization System
NASA Technical Reports Server (NTRS)
Delgado, Francisco
2005-01-01
A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.
Morin, Jean-François; Botton, Eléonore; Jacquemard, François; Richard-Gireme, Anouk
2013-01-01
The Fetal Medicine Foundation (FMF) has developed a new algorithm called Prenatal Risk Calculation (PRC) to evaluate Down syndrome screening based on free hCGβ, PAPP-A and nuchal translucency. The peculiarity of this algorithm is that it uses the degree of extremeness (DoE) instead of the multiple of the median (MoM). Biologists measuring maternal serum markers on Kryptor™ machines (Thermo Fisher Scientific) use the Fast Screen pre I plus software for the prenatal risk calculation. This software integrates the PRC algorithm. Our study evaluates the data of 2,092 patient files, of which 19 show a fœtal abnormality. These files were first evaluated with the ViewPoint software based on MoM. The link between DoE and MoM has been analyzed and the different calculated risks compared. The study shows that Fast Screen pre I plus software gives the same risk results as ViewPoint software, but yields significantly fewer false positive results.
Building a virtual ligand screening pipeline using free software: a survey.
Glaab, Enrico
2016-03-01
Virtual screening, the search for bioactive compounds via computational methods, provides a wide range of opportunities to speed up drug development and reduce the associated risks and costs. While virtual screening is already a standard practice in pharmaceutical companies, its applications in preclinical academic research still remain under-exploited, in spite of an increasing availability of dedicated free databases and software tools. In this survey, an overview of recent developments in this field is presented, focusing on free software and data repositories for screening as alternatives to their commercial counterparts, and outlining how available resources can be interlinked into a comprehensive virtual screening pipeline using typical academic computing facilities. Finally, to facilitate the set-up of corresponding pipelines, a downloadable software system is provided, using platform virtualization to integrate pre-installed screening tools and scripts for reproducible application across different operating systems. © The Author 2015. Published by Oxford University Press.
Building a virtual ligand screening pipeline using free software: a survey
2016-01-01
Virtual screening, the search for bioactive compounds via computational methods, provides a wide range of opportunities to speed up drug development and reduce the associated risks and costs. While virtual screening is already a standard practice in pharmaceutical companies, its applications in preclinical academic research still remain under-exploited, in spite of an increasing availability of dedicated free databases and software tools. In this survey, an overview of recent developments in this field is presented, focusing on free software and data repositories for screening as alternatives to their commercial counterparts, and outlining how available resources can be interlinked into a comprehensive virtual screening pipeline using typical academic computing facilities. Finally, to facilitate the set-up of corresponding pipelines, a downloadable software system is provided, using platform virtualization to integrate pre-installed screening tools and scripts for reproducible application across different operating systems. PMID:26094053
Bowen, Michael E; Bhat, Deepa; Fish, Jason; Moran, Brett; Howell-Stampley, Temple; Kirk, Lynne; Persell, Stephen D; Halm, Ethan A
Preventive services required for performance measurement often are completed in outside health systems and not captured in electronic medical records (EMRs). A before-after study was conducted to examine the ability of clinical decision support (CDS) to improve performance on preventive quality measures, capture clinician-reported services completed elsewhere, and patient/medical exceptions and to describe their impact on quality measurement. CDS improved performance on colorectal cancer screening, osteoporosis screening, and pneumococcal vaccination measures (P < .05) but not breast or cervical cancer screening. CDS captured clinician-reported services completed elsewhere (2% to 10%) and patient/medical exceptions (<3%). Compared to measures using only within-system data, including services completed elsewhere in the numerator improved performance: pneumococcal vaccine (73% vs 82%); breast (69% vs 75%), colorectal (58% vs 70%), and cervical cancer (53% vs 62%); and osteoporosis (72% vs 75%) screening (P < .05). Visit-based CDS can capture clinician-reported preventive services, and accounting for services completed elsewhere improves performance on quality measures.
Software support environment design knowledge capture
NASA Technical Reports Server (NTRS)
Dollman, Tom
1990-01-01
The objective of this task is to assess the potential for using the software support environment (SSE) workstations and associated software for design knowledge capture (DKC) tasks. This assessment will include the identification of required capabilities for DKC and hardware/software modifications needed to support DKC. Several approaches to achieving this objective are discussed and interim results are provided: (1) research into the problem of knowledge engineering in a traditional computer-aided software engineering (CASE) environment, like the SSE; (2) research into the problem of applying SSE CASE tools to develop knowledge based systems; and (3) direct utilization of SSE workstations to support a DKC activity.
High-throughput screening of metal-porphyrin-like graphenes for selective capture of carbon dioxide
Bae, Hyeonhu; Park, Minwoo; Jang, Byungryul; Kang, Yura; Park, Jinwoo; Lee, Hosik; Chung, Haegeun; Chung, ChiHye; Hong, Suklyun; Kwon, Yongkyung; Yakobson, Boris I.; Lee, Hoonkyung
2016-01-01
Nanostructured materials, such as zeolites and metal-organic frameworks, have been considered to capture CO2. However, their application has been limited largely because they exhibit poor selectivity for flue gases and low capture capacity under low pressures. We perform a high-throughput screening for selective CO2 capture from flue gases by using first principles thermodynamics. We find that elements with empty d orbitals selectively attract CO2 from gaseous mixtures under low CO2 pressures (~10⁻³ bar) at 300 K and release it at ~450 K. CO2 binding to elements involves hybridization of the metal d orbitals with the CO2 π orbitals and CO2-transition metal complexes were observed in experiments. This result allows us to perform high-throughput screening to discover novel promising CO2 capture materials with empty d orbitals (e.g., Sc– or V–porphyrin-like graphene) and predict their capture performance under various conditions. Moreover, these findings provide physical insights into selective CO2 capture and open a new path to explore CO2 capture materials. PMID:26902156
High-throughput screening of metal-porphyrin-like graphenes for selective capture of carbon dioxide.
Bae, Hyeonhu; Park, Minwoo; Jang, Byungryul; Kang, Yura; Park, Jinwoo; Lee, Hosik; Chung, Haegeun; Chung, ChiHye; Hong, Suklyun; Kwon, Yongkyung; Yakobson, Boris I; Lee, Hoonkyung
2016-02-23
Nanostructured materials, such as zeolites and metal-organic frameworks, have been considered to capture CO2. However, their application has been limited largely because they exhibit poor selectivity for flue gases and low capture capacity under low pressures. We perform a high-throughput screening for selective CO2 capture from flue gases by using first principles thermodynamics. We find that elements with empty d orbitals selectively attract CO2 from gaseous mixtures under low CO2 pressures (~10⁻³ bar) at 300 K and release it at ~450 K. CO2 binding to elements involves hybridization of the metal d orbitals with the CO2 π orbitals and CO2-transition metal complexes were observed in experiments. This result allows us to perform high-throughput screening to discover novel promising CO2 capture materials with empty d orbitals (e.g., Sc- or V-porphyrin-like graphene) and predict their capture performance under various conditions. Moreover, these findings provide physical insights into selective CO2 capture and open a new path to explore CO2 capture materials.
High-throughput screening of metal-porphyrin-like graphenes for selective capture of carbon dioxide
NASA Astrophysics Data System (ADS)
Bae, Hyeonhu; Park, Minwoo; Jang, Byungryul; Kang, Yura; Park, Jinwoo; Lee, Hosik; Chung, Haegeun; Chung, Chihye; Hong, Suklyun; Kwon, Yongkyung; Yakobson, Boris I.; Lee, Hoonkyung
2016-02-01
Nanostructured materials, such as zeolites and metal-organic frameworks, have been considered to capture CO2. However, their application has been limited largely because they exhibit poor selectivity for flue gases and low capture capacity under low pressures. We perform a high-throughput screening for selective CO2 capture from flue gases by using first principles thermodynamics. We find that elements with empty d orbitals selectively attract CO2 from gaseous mixtures under low CO2 pressures (~10⁻³ bar) at 300 K and release it at ~450 K. CO2 binding to elements involves hybridization of the metal d orbitals with the CO2 π orbitals and CO2-transition metal complexes were observed in experiments. This result allows us to perform high-throughput screening to discover novel promising CO2 capture materials with empty d orbitals (e.g., Sc- or V-porphyrin-like graphene) and predict their capture performance under various conditions. Moreover, these findings provide physical insights into selective CO2 capture and open a new path to explore CO2 capture materials.
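The screening criterion in such first-principles thermodynamics studies can be stated generically as below; this is a standard, hedged form for orientation rather than the paper's exact expression, with E_ads the computed binding energy relative to the isolated CO2 molecule and Δμ its temperature- and pressure-dependent chemical-potential shift.

```latex
% Generic adsorption-thermodynamics criterion (orientation only):
\Delta G_{\mathrm{ads}}(T,p) \approx E_{\mathrm{ads}} - \Delta\mu_{\mathrm{CO_2}}(T,p) < 0,
\qquad
\Delta\mu_{\mathrm{CO_2}}(T,p) = \Delta\mu_{\mathrm{CO_2}}^{0}(T) + k_{\mathrm{B}} T \ln\frac{p}{p^{0}}.
```

Lowering the CO2 partial pressure (for example to ~10⁻³ bar) makes Δμ more negative, so only sufficiently strong binders retain CO2, while raising the temperature to ~450 K reverses the inequality and releases it, consistent with the capture/release behaviour reported above.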
ERIC Educational Resources Information Center
Jones, Lawrence; Graham, Ian
1986-01-01
Reviews the main principles of interfacing and discusses the software developed to perform kinetic data capture and analysis with a BBC microcomputer linked to a recording spectrophotometer. Focuses on the steps in software development. Includes results of a lactate dehydrogenase assay. (ML)
Design and preliminary analysis of a vaginal inserter for speculum-free cervical cancer screening
Agudogo, Júlia; Krieger, Marlee S.; Miros, Robert; Proeschold-Bell, Rae Jean; Schmitt, John W.; Ramanujam, Nimmi
2017-01-01
Objective: Cervical cancer screening usually requires use of a speculum to provide a clear view of the cervix. The speculum is one potential barrier to screening due to fear of pain, discomfort and embarrassment. The aim of this paper is to present and demonstrate the feasibility of a tampon-sized inserter and the POCkeT Colposcope, a miniature pen-sized colposcope, for comfortable, speculum-free and potentially self-colposcopy. Study design: We explored different designs using 3D computer-aided design (CAD) software and performed mechanical testing simulations on each. Designs were rapid-prototyped and tested using a custom vaginal phantom across a range of vaginal pressures and uterine tilts to select an optimal design. Two final designs were tested with fifteen volunteers to assess cervix visualization, comfort and usability compared to the speculum, and the optimal design, the curved-tip inserter, was selected for testing in volunteers. Results: We present a vaginal inserter as an alternative to the standard speculum for use with the POCkeT Colposcope. The device has a slim tubular body with a funnel-like curved tip measuring approximately 2.5 cm in diameter. The inserter has a channel through which a 2 megapixel (MP) mini camera with LED illumination fits to enable image capture. Mechanical finite element testing simulations with an applied pressure of 15 cm H2O indicated a high factor of safety (90.9) for the inserter. Testing of the device with a custom vaginal phantom, across a range of supine vaginal pressures and uterine tilts (retroverted, anteverted and sideverted), demonstrated image capture with a visual area comparable to the speculum for normal/axially positioned uteri and significantly better than the speculum for anteverted and sideverted uteri (p<0.00001). Volunteer studies with self-insertion and physician-assisted cervix image capture showed adequate cervix visualization for 83% of patients. In addition, questionnaire responses from volunteers indicated a 92.3% overall preference for the inserter over the speculum, and all indicated that the inserter was more comfortable than the speculum. The inserter provides a platform for self-screening for cervical cancer and also enables acetic acid/Lugol's iodine application and insertion of swabs for Pap smear sample collection. Conclusion: This study demonstrates the feasibility of an inserter and miniature imaging device for comfortable cervical image capture in women, with potential for synergistic HPV and Pap smear sample collection. PMID:28562669
Developing a Cyberinfrastructure for integrated assessments of environmental contaminants.
Kaur, Taranjit; Singh, Jatinder; Goodale, Wing M; Kramar, David; Nelson, Peter
2005-03-01
The objective of this study was to design and implement prototype software for capturing field data and automating the process for reporting and analyzing the distribution of mercury. The four phase process used to design, develop, deploy and evaluate the prototype software is described. Two different development strategies were used: (1) design of a mobile data collection application intended to capture field data in a meaningful format and automate transfer into user databases, followed by (2) a re-engineering of the original software to develop an integrated database environment with improved methods for aggregating and sharing data. Results demonstrated that innovative use of commercially available hardware and software components can lead to the development of an end-to-end digital cyberinfrastructure that captures, records, stores, transmits, compiles and integrates multi-source data as it relates to mercury.
Sanyal, Parikshit; Ganguli, Prosenjit; Barui, Sanghita; Deb, Prabal
2018-01-01
The Pap-stained cervical smear is a screening tool for cervical cancer. Commercial systems are used for automated screening of liquid-based cervical smears. However, there is no image analysis software used for conventional cervical smears. The aim of this study was to develop and test the diagnostic accuracy of software for analysis of conventional smears. The software was developed using the Python programming language and open source libraries. It was standardized with images from the Bethesda Interobserver Reproducibility Project. One hundred and thirty images from smears which had been reported Negative for Intraepithelial Lesion or Malignancy (NILM), and 45 images where some abnormality had been reported, were collected from the archives of the hospital. The software was then tested on the images. The software was able to segregate images based on overall nuclear:cytoplasmic ratio, coefficient of variation (CV) in nuclear size, nuclear membrane irregularity, and clustering. 68.88% of abnormal images were flagged by the software, as well as 19.23% of NILM images. The major difficulties faced were segmentation of overlapping cell clusters and separation of neutrophils. The software shows potential as a screening tool for conventional cervical smears; however, further refinement of the technique is required.
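Two of the features mentioned above can be sketched with scikit-image. This is not the authors' code: the thresholding is deliberately crude and the segmentation of nuclei versus cytoplasm is an assumption made only to illustrate computing the nuclear:cytoplasmic area ratio and the CV of nuclear size for one field.

```python
# Minimal scikit-image sketch (not the authors' code) of two features from the
# abstract: overall nuclear:cytoplasmic area ratio and the coefficient of
# variation (CV) of nuclear size, from a single Pap-smear field.
import numpy as np
from skimage import io, filters, measure

def nc_ratio_and_cv(path: str):
    grey = io.imread(path, as_gray=True)
    otsu = filters.threshold_otsu(grey)
    nuclei = grey < otsu                  # dark nuclei (crude assumption)
    cells = grey < min(otsu * 1.6, 0.95)  # nuclei + cytoplasm (crude assumption)
    labels = measure.label(nuclei)
    areas = np.array([r.area for r in measure.regionprops(labels) if r.area > 50])
    nc_ratio = nuclei.sum() / max(cells.sum() - nuclei.sum(), 1)
    cv = areas.std(ddof=1) / areas.mean() if len(areas) > 1 else 0.0
    return nc_ratio, cv
```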
Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2016-01-01
Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator, and assumes a perfect sensor and estimator. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has characteristics (mean, variance, and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
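The actual signal generation model is not specified in the abstract, but a common way to synthesize an error signal that matches a reference signal's mean, variance, and approximate power spectral density is to shape white noise with a filter and rescale it. The sketch below is an assumed illustration of that general idea in Python (the filter length and the choice to match only first- and second-order statistics are assumptions).

```python
# Hedged sketch: shape white Gaussian noise so its spectrum roughly follows a
# reference error signal, then rescale to the reference mean and variance.
import numpy as np


def synth_estimation_error(reference, n_samples, n_taps=64, seed=0):
    rng = np.random.default_rng(seed)
    ref = reference - reference.mean()
    # Crude FIR fit to the reference spectrum (illustrative, not the paper's model).
    spectrum = np.abs(np.fft.rfft(ref, n=2 * n_taps))
    taps = np.fft.irfft(spectrum)[:n_taps]
    noise = rng.standard_normal(n_samples + n_taps)
    shaped = np.convolve(noise, taps, mode="full")[n_taps:n_taps + n_samples]
    shaped = (shaped - shaped.mean()) / shaped.std()        # zero mean, unit variance
    return shaped * reference.std() + reference.mean()      # match reference statistics
```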
Human portable preconcentrator system
Linker, Kevin L.; Brusseau, Charles A.; Hannum, David W.; Puissant, James G.; Varley, Nathan R.
2003-08-12
A preconcentrator system and apparatus suited to human portable use wherein sample potentially containing a target chemical substance is drawn into a chamber and through a pervious screen. The screen is adapted to capture target chemicals and then, upon heating, to release those chemicals into the chamber. Chemicals captured and then released in this fashion are then carried to a portable chemical detection device such as a portable ion mobility spectrometer. In the preferred embodiment, the means for drawing sample into the chamber comprises a reversible fan which, when operated in reverse direction, creates a backpressure that facilitates evolution of captured target chemicals into the chamber when the screen is heated. The screen can be positioned directly in front of the detector prior to heating to improve detection capability.
Reddy, Pulakuntla Swetha; Lokhande, Kiran Bharat; Nagar, Shuchi; Reddy, Vaddi Damodara; Murthy, P Sushma; Swamy, K Venkateswara
2018-02-27
Gefitinib (Iressa), a kinase inhibitor used as a targeted therapy, is a widely prescribed and highly effective drug for the treatment of non-small cell lung cancer. Non-small cell lung cancer can be driven by mutations in the Epidermal Growth Factor Receptor (EGFR) gene, and Iressa works by blocking the EGFR protein that supports cancer cell growth. EGFR has therefore become the target of anticancer therapeutics, including Gefitinib, for non-small cell lung cancer. The aim of this work was to explore the interactions of Gefitinib and its derivatives with the EGFR crystal structure in order to gain better molecular insight into their binding. Molecular modeling of the ligands (Gefitinib and its derivatives) was carried out with the Avogadro software until stable conformations were obtained, and partial charges were assigned following the standard protocol for molecular docking. All docking simulations were performed with AutoDock Vina, and virtual screening was carried out on the basis of binding energy and hydrogen bonding affinity. Molecular dynamics (MD) simulations of EGFR were performed using the GROMACS 5.1.1 software to explore the stability of the interactions, and stable conformations of the EGFR protein were captured from trajectories at various time intervals over 0-20 ns. A few compounds screened on the basis of high affinity as EGFR inhibitors may inhibit cell cycle signalling in non-small cell lung cancer. These results suggest that a computer-aided screening approach to Gefitinib derivative compounds, with regard to their binding to EGFR, can identify novel drugs for the treatment of non-small cell lung cancer. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
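The ranking step described above (selecting derivatives by binding energy and hydrogen bonding before MD follow-up) can be illustrated with a short Python sketch. This is not the authors' pipeline; the compound names and numbers are placeholders, not values reported in the study.

```python
# Hedged sketch: rank docked ligands by Vina binding energy, breaking ties with
# hydrogen-bond counts, and keep the top hits for MD simulation.
results = {
    "gefitinib":    {"energy_kcal_mol": -7.1, "h_bonds": 2},   # placeholder values
    "derivative_A": {"energy_kcal_mol": -8.4, "h_bonds": 3},
    "derivative_B": {"energy_kcal_mol": -6.9, "h_bonds": 1},
}

ranked = sorted(results.items(),
                key=lambda kv: (kv[1]["energy_kcal_mol"], -kv[1]["h_bonds"]))
top_hits = [name for name, _ in ranked[:2]]   # lowest (most negative) energy first
print(top_hits)
```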
The Role and Design of Screen Images in Software Documentation.
ERIC Educational Resources Information Center
van der Meij, Hans
2000-01-01
Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems and discusses theory versus practice and cognitive load theory.…
SIENA Customer Problem Statement and Requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
L. Sauer; R. Clay; C. Adams
2000-08-01
This document describes the problem domain and functional requirements of the SIENA framework. The software requirements and system architecture of SIENA are specified in separate documents (the SIENA Software Requirement Specification and the SIENA Software Architecture, respectively). While the current version of this document describes the problems and captures the requirements within the Analysis domain (concentrating on finite element models), it is our intention to subsequently expand it to describe problems and capture requirements from the Design and Manufacturing domains. In addition, SIENA is designed to be extendible, to support and integrate elements from the other domains (see the SIENA Software Architecture document).
Screen-Capture Instructional Technology: A Cognitive Tool for Blended Learning
ERIC Educational Resources Information Center
Smith, Jeffrey George
2012-01-01
Little empirical investigation has been conducted on high school students and teachers using online instructional multimedia developed entirely from the classroom teacher's traditional live-lecture format. This study investigated academic achievement, engagement, preference, and curriculum development using screen-capture instructional…
Harte, Philip T.
2017-01-01
A common assumption in groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling capture primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low-hydraulic-conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water, or screen storage) will be captured during this initial period, until inflow from the high-hydraulic-conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (the optimal purge duration) is controlled by the in-well vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low- and high-hydraulic-conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
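A back-of-the-envelope version of the in-well travel-time idea (not the paper's analytical model) is that the time to flush preexisting screen water is roughly the well-bore volume between the pump intake and the high-conductivity inflow zone divided by the pumping rate. The sketch below assumes a casing radius, intake-to-inflow distance, and pumping rate for illustration only.

```python
# Hedged sketch: order-of-magnitude purge time from in-well storage and pump rate.
import math


def purge_time_minutes(casing_radius_m, intake_to_inflow_m, pump_rate_L_min):
    volume_L = math.pi * casing_radius_m**2 * intake_to_inflow_m * 1000.0
    return volume_L / pump_rate_L_min


# Example: 5 cm diameter well, intake 2 m below the inflow zone, 0.3 L/min pump.
print(round(purge_time_minutes(0.025, 2.0, 0.3), 1), "minutes")  # ~13 minutes
```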
Modification of infant hypothyroidism and phenylketonuria screening program using electronic tools.
Taheri, Behjat; Haddadpoor, Asefeh; Mirkhalafzadeh, Mahmood; Mazroei, Fariba; Aghdak, Pezhman; Nasri, Mehran; Bahrami, Gholamreza
2017-01-01
Congenital hypothyroidism and phenylketonuria (PKU) are the most common causes of preventable mental retardation in infants worldwide. Timely diagnosis and treatment of these disorders can have lasting effects on the mental development of newborns. However, there are several problems at different stages of screening programs that, along with imposing heavy costs, can reduce the precision of the screening and increase the chance of undiagnosed cases, which in turn can have damaging consequences for society. Given these problems, and the importance of information systems in facilitating management and improving the quality of health care, the aim of this study was to improve the screening process for hypothyroidism and PKU in infants with the help of electronic tools. The current study is qualitative action research designed to improve the quality of screening, services, performance, implementation effectiveness, and management of the hypothyroidism and PKU screening program in Isfahan province. To this end, web-based software was designed; programming was carried out using Delphi.net, with SQL Server 2008 for database management. Given the weaknesses, problems, and limitations of the hypothyroidism and PKU screening program, and the importance of these diseases on a national scale, this study resulted in the design of hypothyroidism and PKU screening software for infants in Isfahan province. The inputs and outputs of the software were designed at three levels: the Health Care Centers in charge of the screening program, the provincial reference laboratory, and the health and treatment network of Isfahan province. Features of the software include immediate registration of sample data at the time and location of sampling; the ability for the provincial reference laboratory and the health centers of different districts to instantly observe, monitor, and follow up on samples at any moment; online verification of samples by the reference laboratory; creation of a daily schedule for the reference laboratory; and receipt of results from the analysis equipment and their entry into the database without the need for user input. The implementation of the hypothyroidism screening software increased the quality and efficiency of the screening program, minimized the risk of human error in the process, and solved many of the previous limitations of the screening program, which were the main goals of implementing this software. It also improved the precision and quality of services provided for these two diseases and the accuracy and precision of data inputs, by making it possible to enter sample data at the place and time of sampling. This in turn enabled management based on precise data, helped develop a comprehensive database, and improved the satisfaction of service recipients.
Human portable preconcentrator system
Linker, Kevin L.; Bouchier, Francis A.; Hannum, David W.; Rhykerd, Jr., Charles L.
2003-01-01
A preconcentrator system and apparatus suited to human portable use wherein sample potentially containing a target chemical substance is drawn into a chamber and through a pervious screen. The screen is adapted to capture target chemicals and then, upon heating, to release those chemicals into the chamber. Chemicals captured and then released in this fashion are then carried to a portable chemical detection device such as a portable ion mobility spectrometer. In the preferred embodiment, the means for drawing sample into the chamber comprises a reversible fan which, when operated in reverse direction, creates a backpressure that facilitates evolution of captured target chemicals into the chamber when the screen is heated.
INEL BNCT Research Program Annual Report 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venhuizen, J.R.
1994-08-01
This report is a summary of the progress and research produced for the Idaho National Engineering Laboratory Boron Neutron Capture Therapy Research Program for calendar year 1993. Contributions from all the principal investigators are included, covering chemistry (pituitary tumor studies, boron drug development including liposomes, lipoproteins, and carboranylalanine derivatives), pharmacology (murine screenings, toxicity testing, boron drug analysis), physics (radiation dosimetry software, neutron beam and filter design, neutron beam measurement dosimetry), and radiation biology (tissue and efficacy studies of small and large animal models). Information on the potential toxicity of borocaptate sodium and boronophenylalanine is presented. Results of 21 spontaneous-tumor-bearing dogs that have been treated with boron neutron capture therapy at the Brookhaven National Laboratory are updated. Boron-containing drug purity verification is discussed in some detail. Advances in magnetic resonance imaging of boron in vivo are discussed. Several boron-carrying drugs exhibiting good tumor uptake are described. Significant progress in the potential of treating pituitary tumors is presented. Measurement of the epithermal-neutron flux of the Petten (The Netherlands) High Flux Reactor beam (HFB11B), and comparison to predictions, are shown.
Health IT-assisted population-based preventive cancer screening: a cost analysis.
Levy, Douglas E; Munshi, Vidit N; Ashburner, Jeffrey M; Zai, Adrian H; Grant, Richard W; Atlas, Steven J
2015-12-01
Novel health information technology (IT)-based strategies harnessing patient registry data seek to improve care at a population level. We analyzed costs from a randomized trial of 2 health IT strategies to improve cancer screening compared with usual care from the perspective of a primary care network. Monte Carlo simulations were used to compare costs across management strategies. We assessed the cost of the software, materials, and personnel for baseline usual care (BUC) compared with augmented usual care (AUC [ie, automated patient outreach]) and augmented usual care with physician input (AUCPI [ie, outreach mediated by physicians' knowledge of their patient panels]) over 1 year. AUC and AUCPI each reduced the time physicians spent on cancer screening by 6.5 minutes per half-day clinical session compared with BUC without changing cancer screening rates. Assuming the value of this time accrues to the network, total costs of cancer screening efforts over the study year were $3.83 million for AUC, $3.88 million for AUCPI, and $4.10 million for BUC. AUC was cost-saving relative to BUC in 87.1% of simulations. AUCPI was cost-saving relative to BUC in 82.5% of simulations. Ongoing per patient costs were lower for both AUC ($35.63) and AUCPI ($35.58) relative to BUC ($39.51). Over the course of the study year, the value of reduced physician time devoted to preventive cancer screening outweighed the costs of the interventions. Primary care networks considering similar interventions will need to capture adequate physician time savings to offset the costs of expanding IT infrastructure.
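The Monte Carlo comparison described above can be illustrated with a short Python sketch. The distributions and the fixed-cost figure below are invented placeholders (only the per-patient means loosely echo the abstract), so this is an assumed illustration of the method, not a reproduction of the trial's analysis.

```python
# Hedged sketch: probability that an IT-assisted strategy is cost-saving vs. baseline.
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_patients = 10_000, 100_000

buc_per_patient = rng.normal(39.5, 2.0, n_sims)      # baseline usual care, $/patient
auc_per_patient = rng.normal(35.6, 2.0, n_sims)      # automated outreach, $/patient
it_fixed_cost = rng.normal(250_000, 50_000, n_sims)  # assumed software/setup cost, $/year

buc_total = buc_per_patient * n_patients
auc_total = auc_per_patient * n_patients + it_fixed_cost
print("AUC cost-saving in", round((auc_total < buc_total).mean() * 100, 1), "% of simulations")
```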
A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer
NASA Astrophysics Data System (ADS)
Luckman, Adrian J.; Allinson, Nigel M.
1989-03-01
A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.
Yazaydin, A Ozgür; Snurr, Randall Q; Park, Tae-Hong; Koh, Kyoungmoo; Liu, Jian; Levan, M Douglas; Benin, Annabelle I; Jakubczak, Paulina; Lanuza, Mary; Galloway, Douglas B; Low, John J; Willis, Richard R
2009-12-30
A diverse collection of 14 metal-organic frameworks (MOFs) was screened for CO(2) capture from flue gas using a combined experimental and modeling approach. Adsorption measurements are reported for the screened MOFs at room temperature up to 1 bar. These data are used to validate a generalized strategy for molecular modeling of CO(2) and other small molecules in MOFs. MOFs possessing a high density of open metal sites are found to adsorb significant amounts of CO(2) even at low pressure. An excellent correlation is found between the heat of adsorption and the amount of CO(2) adsorbed below 1 bar. Molecular modeling can aid in selection of adsorbents for CO(2) capture from flue gas by screening a large number of MOFs.
Quantitative screening of yeast surface-displayed polypeptide libraries by magnetic bead capture.
Yeung, Yik A; Wittrup, K Dane
2002-01-01
Magnetic bead capture is demonstrated here to be a feasible alternative for quantitative screening of favorable mutants from a cell-displayed polypeptide library. Flow cytometric sorting with fluorescent probes has been employed previously for high-throughput screening for either novel binders or improved mutants. However, many laboratories do not have ready access to this technology as a result of the limited availability and high cost of cytometers, restricting the use of cell-displayed libraries. Using streptavidin-coated magnetic beads and biotinylated ligands, an alternative approach to cell-based library screening for improved mutants was developed. Magnetic bead capture probability of labeled cells is shown to be closely correlated with the surface ligand density. A single-pass enrichment ratio of 9400 +/- 1800-fold, at the expense of 85 +/- 6% binder losses, is achieved when screening a library that contains one antibody-displaying cell (binder) per 1.1 x 10(5) nondisplaying cells. Additionally, in a kinetic screen starting from an initial high-affinity to low-affinity (7.7-fold lower) mutant ratio of 1:95,000, the magnetic bead capture method attains a single-pass enrichment ratio of 600 +/- 200-fold with a 75 +/- 24% probability of loss for the higher-affinity mutant. The observed high loss probabilities can be straightforwardly compensated for by library oversampling, given the inherently parallel nature of the screen. Overall, these results demonstrate that magnetic beads are capable of quantitatively screening for novel binders and improved mutants. The described methods are directly analogous to procedures in common use for phage display and should lower the barriers to entry for use of cell surface display libraries.
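The bookkeeping behind "single-pass enrichment" and "loss" can be sketched as follows. Here enrichment is taken as the ratio of the binder fraction after capture to the binder fraction before, and loss as the fraction of input binders not recovered; the paper's exact definitions may differ slightly, and the example counts are illustrative values chosen only to be roughly consistent with the reported 9400-fold / 85% figures.

```python
# Hedged sketch of enrichment-ratio and loss arithmetic for a bead-capture pass.
def enrichment_and_loss(binders_in, nonbinders_in, binders_out, nonbinders_out):
    frac_in = binders_in / (binders_in + nonbinders_in)
    frac_out = binders_out / (binders_out + nonbinders_out)
    enrichment = frac_out / frac_in
    loss = 1.0 - binders_out / binders_in
    return enrichment, loss


# Example: ~1 binder per 1.1e5 cells going in; 15% of binders recovered, nearly pure.
print(enrichment_and_loss(1_000, 110_000_000, 150, 1_600))
```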
ERIC Educational Resources Information Center
Smith, Rachel Naomi
2017-01-01
The purpose of this mixed methods research study was two-fold. First, I compared the findings of the success rates of online mathematics students with the perceived effects of classroom capture software in hopes to find convergence. Second, I used multiple methods in different phases of the study to expand the breadth and range of the effects of…
Many-body formulation of carriers capture time in quantum dots applicable in device simulation codes
NASA Astrophysics Data System (ADS)
Vallone, Marco
2010-03-01
We present an application of the Green's function formalism to calculate, in a simplified but rigorous way, electron and hole capture times in quantum dots in closed form as a function of carrier density, level confinement potential, and temperature. Carrier-carrier (Auger) scattering and single LO-phonon emission are both addressed, accounting for dynamic effects of the potential screening in the single plasmon pole approximation of the dielectric function. Regarding the LO-phonon interaction, the formulation highlights the role of the dynamic screening from wetting-layer carriers in comparison with its static limit, describes the interplay between screening and Fermi band filling, and offers simple expressions for the capture time, suitable for implementation in device simulation codes.
NASA Astrophysics Data System (ADS)
Bae, Euiwon; Patsekin, Valery; Rajwa, Bartek; Bhunia, Arun K.; Holdman, Cheryl; Davisson, V. Jo; Hirleman, E. Daniel; Robinson, J. Paul
2012-04-01
A microbial high-throughput screening (HTS) system was developed that enabled high-speed combinatorial studies directly on bacterial colonies. The system consists of a forward scatterometer for elastic light scatter (ELS) detection, a plate transporter for sample handling, and a robotic incubator for automatic incubation. To minimize the ELS pattern-capturing time, a new calibration plate and correction algorithms were both designed, which dramatically reduced correction steps during acquisition of the circularly symmetric ELS patterns. Integration of three different control software programs was implemented, and the performance of the system was demonstrated with single-species detection for library generation and with time-resolved measurement for understanding ELS colony growth correlation, using Escherichia coli and Listeria. An in-house colony-tracking module enabled researchers to easily understand the time-dependent variation of the ELS from identical colony, which enabled further analysis in other biochemical experiments. The microbial HTS system provided an average scan time of 4.9 s per colony and the capability of automatically collecting more than 4000 ELS patterns within a 7-h time span.
Parallax Player: a stereoscopic format converter
NASA Astrophysics Data System (ADS)
Feldman, Mark H.; Lipton, Lenny
2003-05-01
The Parallax Player is a software application that is, in essence, a stereoscopic format converter: various formats may be input and output. In addition to being able to take any one of a wide variety of formats and play them back on many different kinds of PCs and display screens, the Parallax Player has built into it the capability to produce ersatz stereo from a planar still or movie image. The player handles two basic forms of digital content - still images and movies. It is assumed that all data is digital, either created by means of a photographic film process and later digitized, or directly captured or authored in digital form. In its current implementation, running on a number of Windows operating systems, the Parallax Player reads in a broad selection of contemporary file formats.
CAD/CAM/AM applications in the manufacture of dental appliances.
Al Mortadi, Noor; Eggbeer, Dominic; Lewis, Jeffrey; Williams, Robert J
2012-11-01
The purposes of this study were to apply the latest developments in additive manufacturing (AM) construction and to evaluate the effectiveness of these computer-aided design and computer-aided manufacturing (CAD/CAM) techniques in the production of dental appliances. In addition, a new method of incorporating wire into a single build was developed. A scanner was used to capture 3-dimensional images of Class II Division 1 dental models that were translated onto a 2-dimensional computer screen. Andresen and sleep-apnea devices were designed in 3 dimensions by using FreeForm software (version 11; Geo Magics SensAble Group, Wilmington, Mass) and a phantom arm. The design was then exported and transferred to an AM machine for building. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Cervinka, Miroslav; Cervinková, Zuzana; Novák, Jan; Spicák, Jan; Rudolf, Emil; Peychl, Jan
2004-06-01
Alternatives and their teaching are an essential part of the curricula at the Faculty of Medicine. Dynamic screen-based video recordings are the most important type of alternative models employed for teaching purposes. Currently, the majority of teaching materials for this purpose are based on PowerPoint presentations, which are very popular because of their high versatility and visual impact. Furthermore, current developments in the field of image capturing devices and software enable the use of digitised video streams, tailored precisely to the specific situation. Here, we demonstrate that with reasonable financial resources, it is possible to prepare video sequences and to introduce them into the PowerPoint presentation, thereby shaping the teaching process according to individual students' needs and specificities.
Industrial Inspection with Open Eyes: Advance with Machine Vision Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Niel, Kurt
Machine vision systems have evolved significantly with technology advances to tackle the challenges of modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection, and combining multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.
Total body photography for skin cancer screening.
Dengel, Lynn T; Petroni, Gina R; Judge, Joshua; Chen, David; Acton, Scott T; Schroen, Anneke T; Slingluff, Craig L
2015-11-01
Total body photography may aid in melanoma screening but is not widely applied due to time and cost. We hypothesized that a near-simultaneous automated skin photo-acquisition system would be acceptable to patients and could rapidly obtain total body images that enable visualization of pigmented skin lesions. From February to May 2009, a study of 20 volunteers was performed at the University of Virginia to test a prototype 16-camera imaging booth built by the research team and to guide development of special purpose software. For each participant, images were obtained before and after marking 10 lesions (five "easy" and five "difficult"), and images were evaluated to estimate visualization rates. Imaging logistical challenges were scored by the operator, and participant opinion was assessed by questionnaire. Average time for image capture was three minutes (range 2-5). All 55 "easy" lesions were visualized (sensitivity 100%, 90% CI 95-100%), and 54/55 "difficult" lesions were visualized (sensitivity 98%, 90% CI 92-100%). Operators and patients graded the imaging process favorably, with challenges identified regarding lighting and positioning. Rapid-acquisition automated skin photography is feasible with a low-cost system, with excellent lesion visualization and participant acceptance. These data provide a basis for employing this method in clinical melanoma screening. © 2014 The International Society of Dermatology.
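The sensitivity estimates and exact confidence intervals quoted above can be computed with a few lines of Python. Whether the study used the Clopper-Pearson method is an assumption on our part; the sketch below simply shows how such an exact binomial interval is typically obtained.

```python
# Hedged sketch: exact (Clopper-Pearson) confidence interval for a proportion.
from scipy.stats import beta


def clopper_pearson(successes, n, conf=0.90):
    alpha = 1.0 - conf
    lo = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    hi = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lo, hi


print(clopper_pearson(55, 55))   # all "easy" lesions visualized
print(clopper_pearson(54, 55))   # one "difficult" lesion missed
```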
Astronaut Health Participant Summary Application
NASA Technical Reports Server (NTRS)
Johnson, Kathy; Krog, Ralph; Rodriguez, Seth; Wear, Mary; Volpe, Robert; Trevino, Gina; Eudy, Deborah; Parisian, Diane
2011-01-01
The Longitudinal Study of Astronaut Health (LSAH) Participant Summary software captures data based on a custom information model designed to gather all relevant, discrete medical events for its study participants. This software provides a summarized view of the study participant s entire medical record. The manual collapsing of all the data in a participant s medical record into a summarized form eliminates redundancy, and allows for the capture of entire medical events. The coding tool could be incorporated into commercial electronic medical record software for use in areas like public health surveillance, hospital systems, clinics, and medical research programs.
Image enhancement software for underwater recovery operations: User's manual
NASA Astrophysics Data System (ADS)
Partridge, William J.; Therrien, Charles W.
1989-06-01
This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM PC/AT-compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and other similar functions in real time through hardware lookup tables, automatic histogram equalization, and the ability to capture one or more frames and either average them or apply one of several processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
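Two of the operations named above, histogram equalization via a lookup table and multi-frame averaging, can be expressed in a few lines of numpy. The original software ran on dedicated PC/AT video hardware, so the sketch below is purely illustrative of the algorithms, not of that implementation.

```python
# Hedged sketch: lookup-table histogram equalization and frame averaging.
import numpy as np


def equalize(frame_u8):
    """Histogram-equalize an 8-bit grayscale frame using a 256-entry lookup table."""
    hist = np.bincount(frame_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    denom = max(int(cdf.max() - cdf.min()), 1)
    lut = np.round(255.0 * (cdf - cdf.min()) / denom).astype(np.uint8)
    return lut[frame_u8]


def average_frames(frames_u8):
    """Average a list of same-sized 8-bit frames to suppress video noise."""
    return np.mean(np.stack(frames_u8).astype(np.float32), axis=0).astype(np.uint8)
```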
DOT National Transportation Integrated Search
2016-04-01
The objective of the Dynamic Interrogative Data Capture (DIDC) algorithms and software is to optimize the capture and transmission of vehicle-based data under a range of dynamically configurable messaging strategies. The key hypothesis of DIDC is tha...
ENCoRE: an efficient software for CRISPR screens identifies new players in extrinsic apoptosis.
Trümbach, Dietrich; Pfeiffer, Susanne; Poppe, Manuel; Scherb, Hagen; Doll, Sebastian; Wurst, Wolfgang; Schick, Joel A
2017-11-25
As CRISPR/Cas9 mediated screens with pooled guide libraries in somatic cells become increasingly established, an unmet need for rapid and accurate companion informatics tools has emerged. We have developed a lightweight and efficient software to easily manipulate large raw next generation sequencing datasets derived from such screens into informative relational context with graphical support. The advantages of the software entitled ENCoRE (Easy NGS-to-Gene CRISPR REsults) include a simple graphical workflow, platform independence, local and fast multithreaded processing, data pre-processing and gene mapping with custom library import. We demonstrate the capabilities of ENCoRE to interrogate results from a pooled CRISPR cellular viability screen following Tumor Necrosis Factor-alpha challenge. The results not only identified stereotypical players in extrinsic apoptotic signaling but two as yet uncharacterized members of the extrinsic apoptotic cascade, Smg7 and Ces2a. We further validated and characterized cell lines containing mutations in these genes against a panel of cell death stimuli and involvement in p53 signaling. In summary, this software enables bench scientists with sensitive data or without access to informatic cores to rapidly interpret results from large scale experiments resulting from pooled CRISPR/Cas9 library screens.
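The core "NGS-to-gene" step such a tool performs (counting guide sequences in raw reads and rolling them up to genes via a library table) can be sketched in Python. The file names, the 20 nt spacer position, and the library table layout below are assumptions for illustration; ENCoRE's actual parsing logic is not reproduced here.

```python
# Hedged sketch: count exact guide spacers in a gzipped FASTQ and aggregate by gene.
import csv
import gzip
from collections import Counter


def count_guides(fastq_gz, library_csv, spacer_start=0, spacer_len=20):
    guide_to_gene, counts = {}, Counter()
    with open(library_csv) as fh:                      # assumed columns: guide,gene
        for row in csv.DictReader(fh):
            guide_to_gene[row["guide"].upper()] = row["gene"]
    with gzip.open(fastq_gz, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:                             # sequence lines only
                spacer = line.strip().upper()[spacer_start:spacer_start + spacer_len]
                if spacer in guide_to_gene:
                    counts[guide_to_gene[spacer]] += 1
    return counts
```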
Method and apparatus for calibrating a display using an array of cameras
NASA Technical Reports Server (NTRS)
Johnson, Michael J. (Inventor); Chen, Chung-Jen (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Eric Y.; Flory, Adam E.; Lamarche, Brian L.
2014-06-01
The Juvenile Salmon Acoustic Telemetry System (JSATS) Detector is a software and hardware system that captures JSATS Acoustic Micro Transmitter (AMT) signals. The system uses hydrophones to capture acoustic signals in the water. This analog signal is then amplified and processed by the Analog to Digital Converter (ADC) and Digital Signal Processor (DSP) board in the computer. This board digitizes and processes the acoustic signal to determine if a possible JSATS tag is present. When a detection occurs, the data are saved to the computer for further analysis. This document details the features and functionality of the JSATS Detector software. The document covers how to install the software and how to set up and run the detector software. It also describes the raw binary waveform file format and the CSV files containing RMS values.
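The detector's binary waveform layout is not reproduced here, but the generic step of computing an RMS value per captured waveform block and appending it to a CSV summary is easy to sketch in Python; the file name and timestamp format are assumptions.

```python
# Hedged sketch: RMS of a waveform block, appended to a CSV log.
import csv
import numpy as np


def append_rms(samples, csv_path, timestamp):
    rms = float(np.sqrt(np.mean(np.square(samples, dtype=np.float64))))
    with open(csv_path, "a", newline="") as fh:
        csv.writer(fh).writerow([timestamp, rms])
    return rms


print(append_rms(np.sin(np.linspace(0, 2 * np.pi, 1000)),
                 "rms_log.csv", "2014-06-01T12:00:00"))  # ~0.707 for a unit sine
```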
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were then transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of the corresponding corneal endothelium; the calculation was based on well-known diffraction theory. Estimates of the cell density of the corneal endothelium were thus obtained using fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a relatively strong correlation was found.
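The original work was done in Matlab; the rough numpy sketch below only illustrates the Fourier idea described above: the quasi-regular endothelial mosaic produces a ring in the 2D spectrum whose radius gives the dominant cell spacing. The hexagonal-packing conversion to cells/mm^2 and the micrometers-per-pixel calibration are assumptions, not the paper's method.

```python
# Hedged sketch: estimate endothelial cell density from the dominant FFT ring.
import numpy as np


def cell_density(img, um_per_px):
    f = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    cy, cx = np.array(f.shape) // 2
    y, x = np.indices(f.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), weights=f.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    peak_r = np.argmax(radial[2:]) + 2               # skip the DC neighbourhood
    spacing_um = um_per_px * min(f.shape) / peak_r   # approximate mosaic period
    spacing_mm = spacing_um / 1000.0
    return 2.0 / (np.sqrt(3.0) * spacing_mm**2)      # hexagonal packing, cells/mm^2
```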
Screen Miniatures as Icons for Backward Navigation in Content-Based Software.
ERIC Educational Resources Information Center
Boling, Elizabeth; Ma, Guoping; Tao, Chia-Wen; Askun, Cengiz; Green, Tim; Frick, Theodore; Schaumburg, Heike
Users of content-based software programs, including hypertexts and instructional multimedia, rely on the navigation functions provided by the designers of those program. Typical navigation schemes use abstract symbols (arrows) to label basic navigational functions like moving forward or backward through screen displays. In a previous study, the…
NASA Astrophysics Data System (ADS)
Liu, Carol Y. B.; Luk, David C. K.; Zhou, Kany S. Y.; So, Bryan M. K.; Louie, Derek C. H.
2015-03-01
Due to the increasing incidence of malignant melanoma, there is a rising demand for assistive technologies for its early diagnosis and for improving the survival rate. The commonly used visual screening method has limited accuracy, since the early phase of melanoma shares many clinical features with an atypical nevus, while conventional dermoscopes are not user-friendly in terms of setup time and operation. Therefore, the development of an intelligent and handy system to assist the accurate screening and long-term monitoring of melanocytic skin lesions is crucial for early diagnosis and prevention of melanoma. In this paper, an advanced design of a non-invasive and non-radioactive dermoscopy system is reported. Computer-aided simulations were conducted to optimize the optical design and achieve uniform illumination distribution. A functional prototype and the software system were further developed, which enable image capture in 10x-amplified and general modes, convenient data transmission, analysis of dermoscopic features (e.g., asymmetry, border irregularity, color, diameter and dermoscopic structure) to assist the early detection of melanoma, and extraction of patient information (e.g., code, lesion location) and its integration with dermoscopic images, thus further supporting long-term monitoring of diagnostic analysis results. A clinical study was further conducted on 185 Chinese children (0-18 years old). The results showed that, for all subjects, skin conditions diagnosed with the developed system agreed with the diagnoses obtained by conventional clinical procedures. In addition, clinical analysis of dermoscopic features and a potential standard approach, supported by the developed system, for identifying specific melanocytic patterns during dermoscopic examination in Chinese children are also reported.
NASA Technical Reports Server (NTRS)
2014-01-01
Topics covered include: Innovative Software Tools Measure Behavioral Alertness; Miniaturized, Portable Sensors Monitor Metabolic Health; Patient Simulators Train Emergency Caregivers; Solar Refrigerators Store Life-Saving Vaccines; Monitors Enable Medication Management in Patients' Homes; Handheld Diagnostic Device Delivers Quick Medical Readings; Experiments Result in Safer, Spin-Resistant Aircraft; Interfaces Visualize Data for Airline Safety, Efficiency; Data Mining Tools Make Flights Safer, More Efficient; NASA Standards Inform Comfortable Car Seats; Heat Shield Paves the Way for Commercial Space; Air Systems Provide Life Support to Miners; Coatings Preserve Metal, Stone, Tile, and Concrete; Robots Spur Software That Lends a Hand; Cloud-Based Data Sharing Connects Emergency Managers; Catalytic Converters Maintain Air Quality in Mines; NASA-Enhanced Water Bottles Filter Water on the Go; Brainwave Monitoring Software Improves Distracted Minds; Thermal Materials Protect Priceless, Personal Keepsakes; Home Air Purifiers Eradicate Harmful Pathogens; Thermal Materials Drive Professional Apparel Line; Radiant Barriers Save Energy in Buildings; Open Source Initiative Powers Real-Time Data Streams; Shuttle Engine Designs Revolutionize Solar Power; Procedure-Authoring Tool Improves Safety on Oil Rigs; Satellite Data Aid Monitoring of Nation's Forests; Mars Technologies Spawn Durable Wind Turbines; Programs Visualize Earth and Space for Interactive Education; Processor Units Reduce Satellite Construction Costs; Software Accelerates Computing Time for Complex Math; Simulation Tools Prevent Signal Interference on Spacecraft; Software Simplifies the Sharing of Numerical Models; Virtual Machine Language Controls Remote Devices; Micro-Accelerometers Monitor Equipment Health; Reactors Save Energy, Costs for Hydrogen Production; Cameras Monitor Spacecraft Integrity to Prevent Failures; Testing Devices Garner Data on Insulation Performance; Smart Sensors Gather Information for Machine Diagnostics; Oxygen Sensors Monitor Bioreactors and Ensure Health and Safety; Vision Algorithms Catch Defects in Screen Displays; and Deformable Mirrors Capture Exoplanet Data, Reflect Lasers.
State analysis requirements database for engineering complex embedded systems
NASA Technical Reports Server (NTRS)
Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.
2004-01-01
It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there has been a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.
Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2012)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David C.; Syamlal, Madhava; Cottrell, Roger
2012-09-30
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is organized into 8 technical elements that fall under two focus areas. The first focus area (Physicochemical Models and Data) addresses the steps necessary to model and simulate the various technologies and processes needed to bring a new Carbon Capture and Storage (CCS) technology into production. The second focus area (Analysis & Software) is developing the software infrastructure to integrate the various components and implement the tools that are needed to make quantifiable decisions regarding the viability of new CCS technologies. CCSI also has an Industry Advisory Board (IAB). By working closely with industry from the inception of the project to identify industrial challenge problems, CCSI ensures that the simulation tools are developed for the carbon capture technologies of most relevance to industry. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories' core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI's industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI's academic participants (Carnegie Mellon University, Princeton University, West Virginia University, and Boston University) bring unparalleled expertise in multiphase flow reactors, combustion, process synthesis and optimization, planning and scheduling, and process control techniques for energy processes. During Fiscal Year (FY) 12, CCSI released its first set of computational tools and models. This pre-release, a year ahead of the originally planned first release, is the result of intense industry interest in getting early access to the tools and the phenomenal progress of the CCSI technical team.
These initial components of the CCSI Toolset provide new models and computational capabilities that will accelerate the commercial development of carbon capture technologies as well as related technologies, such as those found in the power, refining, chemicals, and gas production industries. The release consists of new tools for process synthesis and optimization to help identify promising concepts more quickly, new physics-based models of potential capture equipment and processes that will reduce the time to design and troubleshoot new systems, a framework to quantify the uncertainty of model predictions, and various enabling tools that provide new capabilities such as creating reduced order models (ROMs) from reacting multiphase flow simulations and running thousands of process simulations concurrently for optimization and UQ.
NASA Technical Reports Server (NTRS)
Easley, Wesley C.
1991-01-01
Experiment-critical use of RS-232 data buses in the Transport Systems Research Vehicle (TSRV), operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center, has recently increased. Each application utilizes a number of nonidentical computer and peripheral configurations and requires task-specific software development. To aid these development tasks, an IBM PC-based RS-232 bus monitoring system was produced. It can simultaneously monitor two communication ports of a PC or clone, including the nonstandard bus expansion of the TSRV Grid laptop computers. Each port's input is displayed in a separate window, with a binary display being selectable. A number of other features are provided, including binary log files, screen capture to files, and a full range of communication parameters.
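A present-day sketch of the same idea, using pyserial rather than the original PC/AT hardware, would poll two serial ports, log the raw bytes per port, and echo a hex view to the console. The port names and baud rate below are assumptions for the example.

```python
# Hedged sketch: poll two serial ports, append raw bytes to per-port logs,
# and print a hex ("binary") view of incoming traffic.
import time
import serial  # pip install pyserial

PORTS = {"COM1": "port1.log", "COM2": "port2.log"}   # assumed port names

handles = {name: serial.Serial(name, baudrate=9600, timeout=0) for name in PORTS}
logs = {name: open(path, "ab") for name, path in PORTS.items()}

try:
    while True:
        for name, ser in handles.items():
            data = ser.read(ser.in_waiting or 1)
            if data:
                logs[name].write(data)
                print(f"{name}: {data.hex(' ')}")    # selectable binary/hex view
        time.sleep(0.01)
except KeyboardInterrupt:
    pass
finally:
    for ser in handles.values():
        ser.close()
    for fh in logs.values():
        fh.close()
```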
ProbeDesigner: for the design of probesets for branched DNA (bDNA) signal amplification assays.
Bushnell, S; Budde, J; Catino, T; Cole, J; Derti, A; Kelso, R; Collins, M L; Molino, G; Sheridan, P; Monahan, J; Urdea, M
1999-05-01
The sensitivity and specificity of branched DNA (bDNA) assays are derived in part through the judicious design of the capture and label extender probes. To minimize non-specific hybridization (NSH) events, which elevate assay background, candidate probes must be computer screened for complementarity with generic sequences present in the assay. We present a software application which allows for rapid and flexible design of bDNA probesets for novel targets. It includes an algorithm for estimating the magnitude of NSH contribution to background, a mechanism for removing probes with elevated contributions, a methodology for the simultaneous design of probesets for multiple targets, and a graphical user interface which guides the user through the design steps. The program is available as a commercial package through the Pharmaceutical Drug Discovery program at Chiron Diagnostics.
Mining collections of compounds with Screening Assistant 2
2012-01-01
Background High-throughput screening assays have become the starting point of many drug discovery programs for large pharmaceutical companies as well as academic organisations. Despite the increasing throughput of screening technologies, the almost infinite chemical space remains out of reach, calling for tools dedicated to the analysis and selection of the compound collections intended to be screened. Results We present Screening Assistant 2 (SA2), an open-source JAVA software dedicated to the storage and analysis of small to very large chemical libraries. SA2 stores unique molecules in a MySQL database, and encapsulates several chemoinformatics methods, among which: providers management, interactive visualisation, scaffold analysis, diverse subset creation, descriptors calculation, sub-structure / SMART search, similarity search and filtering. We illustrate the use of SA2 by analysing the composition of a database of 15 million compounds collected from 73 providers, in terms of scaffolds, frameworks, and undesired properties as defined by recently proposed HTS SMARTS filters. We also show how the software can be used to create diverse libraries based on existing ones. Conclusions Screening Assistant 2 is a user-friendly, open-source software that can be used to manage collections of compounds and perform simple to advanced chemoinformatics analyses. Its modular design and growing documentation facilitate the addition of new functionalities, calling for contributions from the community. The software can be downloaded at http://sa2.sourceforge.net/. PMID:23327565
Mining collections of compounds with Screening Assistant 2.
Guilloux, Vincent Le; Arrault, Alban; Colliandre, Lionel; Bourg, Stéphane; Vayer, Philippe; Morin-Allory, Luc
2012-08-31
High-throughput screening assays have become the starting point of many drug discovery programs for large pharmaceutical companies as well as academic organisations. Despite the increasing throughput of screening technologies, the almost infinite chemical space remains out of reach, calling for tools dedicated to the analysis and selection of the compound collections intended to be screened. We present Screening Assistant 2 (SA2), an open-source JAVA software dedicated to the storage and analysis of small to very large chemical libraries. SA2 stores unique molecules in a MySQL database, and encapsulates several chemoinformatics methods, among which: providers management, interactive visualisation, scaffold analysis, diverse subset creation, descriptors calculation, sub-structure / SMART search, similarity search and filtering. We illustrate the use of SA2 by analysing the composition of a database of 15 million compounds collected from 73 providers, in terms of scaffolds, frameworks, and undesired properties as defined by recently proposed HTS SMARTS filters. We also show how the software can be used to create diverse libraries based on existing ones. Screening Assistant 2 is a user-friendly, open-source software that can be used to manage collections of compounds and perform simple to advanced chemoinformatics analyses. Its modular design and growing documentation facilitate the addition of new functionalities, calling for contributions from the community. The software can be downloaded at http://sa2.sourceforge.net/.
Engineering Complex Embedded Systems with State Analysis and the Mission Data System
NASA Technical Reports Server (NTRS)
Ingham, Michel D.; Rasmussen, Robert D.; Bennett, Matthew B.; Moncada, Alex C.
2004-01-01
It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering methodology called State Analysis, which provides a process for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using State Analysis and how these requirements inform the design of the system software, using representative spacecraft examples.
Records Inventory Data Collection Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Brian A.
1995-03-01
DATALINK was created to provide an easy-to-use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma-delimited ASCII text file for data export into most records management software products.
A preliminary architecture for building communication software from traffic captures
NASA Astrophysics Data System (ADS)
Acosta, Jaime C.; Estrada, Pedro
2017-05-01
Security analysts are tasked with identifying and mitigating network service vulnerabilities. A common problem associated with in-depth testing of network protocols is the availability of software that communicates across disparate protocols. Many times, the software required to communicate with these services is not publicly available. Developing this software is a time-consuming undertaking that requires expertise and understanding of the protocol specification. The work described in this paper aims at developing a software package that is capable of automatically creating communication clients by using packet capture (pcap) and TShark dissectors. Currently, our focus is on simple protocols with fixed fields. The methodologies developed as part of this work will extend to other complex protocols such as the Gateway Load Balancing Protocol (GLBP), Port Aggregation Protocol (PAgP), and Open Shortest Path First (OSPF). Thus far, we have architected a modular pipeline for an automatic traffic-based software generator. We start the transformation of captured network traffic by employing TShark to convert packets into a Packet Details Markup Language (PDML) file. The PDML file contains a parsed, textual, representation of the packet data. Then, we extract field data, types, along with inter and intra-packet dependencies. This information is then utilized to construct an XML file that encompasses the protocol state machine and field vocabulary. Finally, this XML is converted into executable code. Using our methodology, and as a starting point, we have succeeded in automatically generating software that communicates with other hosts using an automatically generated Internet Control Message Protocol (ICMP) client program.
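The first stage of the pipeline described above (TShark emitting PDML from a capture file, followed by field extraction) can be sketched in Python. The example assumes tshark is on the PATH and only shows field extraction; the downstream state-machine inference and code generation are not reproduced here.

```python
# Hedged sketch: convert a pcap to PDML with TShark and extract per-packet fields.
import subprocess
import xml.etree.ElementTree as ET


def pdml_fields(pcap_path):
    pdml = subprocess.run(["tshark", "-r", pcap_path, "-T", "pdml"],
                          capture_output=True, text=True, check=True).stdout
    packets = []
    for pkt in ET.fromstring(pdml).iter("packet"):
        # Each PDML <field> carries a name, size, and displayed value ("show").
        fields = [(f.get("name"), f.get("size"), f.get("show"))
                  for f in pkt.iter("field") if f.get("name")]
        packets.append(fields)
    return packets
```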
Zhang, Ao; Yan, Xing-Ke; Liu, An-Guo
2016-12-25
In the present paper, the authors introduce a newly developed "Acupuncture Needle Manipulation Training-evaluation System" based on an optical motion capture technique. It is composed of two parts, a sensor and software, and overcomes some shortcomings of mechanical motion capture techniques. The device analyzes data on the operations of the pressing hand and the needle-insertion hand during acupuncture performance, and its software is available in personal computer (PC), Android, and Apple iOS versions. It can record and analyze any operator's needling manipulations, and is helpful for teachers in teaching, training and examining students in clinical practice.
Application of Plagiarism Screening Software in the Chemical Engineering Curriculum
ERIC Educational Resources Information Center
Cooper, Matthew E.; Bullard, Lisa G.
2014-01-01
Plagiarism is an area of increasing concern for written ChE assignments, such as laboratory and design reports, due to ease of access to text and other materials via the internet. This study examines the application of plagiarism screening software to four courses in a university chemical engineering curriculum. The effectiveness of plagiarism…
Modeling defect trends for iterative development
NASA Technical Reports Server (NTRS)
Powell, J. D.; Spanguolo, J. N.
2003-01-01
The Employment of Defects (EoD) approach to measuring and analyzing defects seeks to identify and capture trends and phenomena that are critical to managing software quality in the iterative software development lifecycle at JPL.
NASA Astrophysics Data System (ADS)
de Carvalho, Luis Alberto V.; Carvalho, Valeria
2014-02-01
One of the main problems with glaucoma throughout the world is that there are typically no symptoms in the early stages. Many people who have the disease do not know they have it, and by the time one finds out, the disease is usually in an advanced stage. Most retinal cameras available in the market today use sophisticated optics and have several other features/capabilities (wide-angle optics, red-free and angiography filters, etc) that make them expensive for general practice or for screening purposes. Therefore, it is important to develop instrumentation that is fast, effective and economical, in order to reach the mass public in general eye-care centers. In this work, we have constructed the hardware and software of a cost-effective and non-mydriatic prototype device that allows fast capturing and plotting of high-resolution quantitative 3D images and videos of the optic disc head and neighboring region (30° of field of view). The main application of this device is for glaucoma screening, although it may also be useful for the diagnosis of other pathologies related to the optic nerve.
IoT for Real-Time Measurement of High-Throughput Liquid Dispensing in Laboratory Environments.
Shumate, Justin; Baillargeon, Pierre; Spicer, Timothy P; Scampavia, Louis
2018-04-01
Critical to maintaining quality control in high-throughput screening is the need for constant monitoring of liquid-dispensing fidelity. Traditional methods involve operator intervention with gravimetric analysis to monitor the gross accuracy of full plate dispenses, visual verification of contents, or dedicated weigh stations on screening platforms that introduce potential bottlenecks and increase the plate-processing cycle time. We present a unique solution using open-source hardware, software, and 3D printing to automate dispenser accuracy determination by providing real-time dispense weight measurements via a network-connected precision balance. This system uses an Arduino microcontroller to connect a precision balance to a local network. By integrating the precision balance as an Internet of Things (IoT) device, it gains the ability to provide real-time gravimetric summaries of dispensing, generate timely alerts when problems are detected, and capture historical dispensing data for future analysis. All collected data can then be accessed via a web interface for reviewing alerts and dispensing information in real time or remotely for timely intervention of dispense errors. The development of this system also leveraged 3D printing to rapidly prototype sensor brackets, mounting solutions, and component enclosures.
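The following is a hedged sketch of the data path described above, in Python rather than Arduino firmware: read weights from a serial-attached balance and post them to a network endpoint for real-time monitoring. The serial port, baud rate, line format, and URL are all assumptions; real balances and the authors' implementation will differ.

```python
# Hedged sketch: stream readings from a serial-attached precision balance to a
# web endpoint. Port, baud rate, line format, and URL are placeholders.
import serial          # pyserial
import requests

def stream_weights(port="/dev/ttyUSB0", url="http://lab-server.local/api/weights"):
    with serial.Serial(port, baudrate=9600, timeout=2) as balance:
        while True:
            line = balance.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                grams = float(line.split()[0])   # assumes "<value> g" style output
            except ValueError:
                continue
            requests.post(url, json={"weight_g": grams}, timeout=5)

# stream_weights()  # run against real hardware and a real endpoint
```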
HCS road: an enterprise system for integrated HCS data management and analysis.
Jackson, Donald; Lenard, Michael; Zelensky, Alexander; Shaikh, Mohammad; Scharpf, James V; Shaginaw, Richard; Nawade, Mahesh; Agler, Michele; Cloutier, Normand J; Fennell, Myles; Guo, Qi; Wardwell-Swanson, Judith; Zhao, Dandan; Zhu, Yingjie; Miller, Christopher; Gill, James
2010-08-01
The effective analysis and interpretation of high-content screening (HCS) data requires joining results to information on experimental treatments and controls, normalizing data, and selecting hits or fitting concentration-response curves. HCS data have unique requirements that are not supported by traditional high-throughput screening databases, including the ability to designate separate positive and negative controls for different measurements in multiplexed assays; the ability to capture information on the cell lines, fluorescent reagents, and treatments in each assay; the ability to store and use individual-cell and image data; and the ability to support HCS readers and software from multiple vendors along with third-party image analysis tools. To address these requirements, the authors developed an enterprise system for the storage and processing of HCS images and results. This system, HCS Road, supports target identification, lead discovery, lead evaluation, and lead profiling activities. A dedicated client supports experimental design, data review, and core analyses and displays images together with results for assay development, hit assessment, and troubleshooting. Data can be exported to third-party applications for further analysis and exploration. HCS Road provides a single source for high-content results across the organization, regardless of the group or instrument that produced them.
Approach and case-study of green infrastructure screening analysis for urban stormwater control.
Eaton, Timothy T
2018-03-01
Urban stormwater control is an urgent concern in megacities where increased impervious surface has disrupted natural hydrology. Water managers are increasingly turning to more environmentally friendly ways of capturing stormwater, called Green Infrastructure (GI), to mitigate combined sewer overflow (CSO) that degrades local water quality. A rapid screening approach is described to evaluate how GI strategies can reduce the amount of stormwater runoff in a low-density residential watershed in New York City. Among multiple possible tools, the L-THIA LID online software package, using the SCS-CN method, was selected to estimate relative runoff reductions expected with different strategies in areas of different land uses in the watershed. Results are sensitive to the relative areas of different land uses, and show that bioretention and raingardens provide the maximum reduction (∼12%) in this largely residential watershed. Although commercial, industrial and high-density residential areas in the watershed are minor, larger runoff reductions from disconnection strategies and porous pavement in parking lots are also possible. Total stormwater reductions from various combinations of these strategies can reach 35-55% for individual land uses, and between 23% and 42% for the entire watershed. Copyright © 2017. Published by Elsevier Ltd.
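For readers unfamiliar with the SCS-CN method that underlies L-THIA-style screening estimates, the sketch below shows the core runoff equation. The curve numbers used are illustrative placeholders, not values from the study.

```python
# Sketch of the SCS curve number (SCS-CN) runoff equation used in L-THIA-style
# screening. Curve numbers below are illustrative, not values from the study.
def scs_runoff_inches(rainfall_in, curve_number):
    """Direct runoff (inches) for a storm depth, via the SCS-CN method."""
    s = 1000.0 / curve_number - 10.0      # potential maximum retention
    ia = 0.2 * s                          # initial abstraction (standard assumption)
    if rainfall_in <= ia:
        return 0.0
    return (rainfall_in - ia) ** 2 / (rainfall_in - ia + s)

# Example: compare a baseline land use with a hypothetical green-infrastructure
# scenario represented by a lower curve number.
baseline = scs_runoff_inches(2.0, curve_number=85)
with_gi = scs_runoff_inches(2.0, curve_number=75)
print(f"runoff reduction: {100 * (baseline - with_gi) / baseline:.1f}%")
```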
That's Infotainment!: How to Create Your Own Screencasts
ERIC Educational Resources Information Center
Kroski, Ellyssa
2009-01-01
Screencasts are videos that record the actions that take place on the computer screen, most often including a narrative audio track, in order to demonstrate various computer-related tasks, such as how to use a software program or navigate a certain Web site. All that is needed is a standard microphone and screen recording software, which can be…
Lessons from 30 Years of Flight Software
NASA Technical Reports Server (NTRS)
McComas, David C.
2015-01-01
This presentation takes a brief historical look at flight software over the past 30 years, extracts lessons learned and shows how many of the lessons learned are embodied in the Flight Software product line called the core Flight System (cFS). It also captures the lessons learned from developing and applying the cFS.
DATALINK. Records Inventory Data Collection Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, B.
1995-03-01
DATALINK was created to provide an easy to use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma delimited, ASCII text file for data export into most records management software products.
Pilot study on the use of data mining to identify cochlear implant candidates.
Grisel, Jedidiah J; Schafer, Erin; Lam, Anne; Griffin, Terry
2018-05-01
The goal of this pilot study was to determine the clinical utility of data-mining software that screens for cochlear implant (CI) candidacy. The Auditory Implant Initiative developed a software module that screens for CI candidates via integration with a software system (Noah 4) that serves as a depository for hearing test data. To identify candidates, patient audiograms from one practice were exported into the screening module. Candidates were tracked to determine if any eventually underwent implantation. After loading 4836 audiograms from the Noah 4 system, the screening module identified 558 potential CI candidates. After reviewing the data for the potential candidates, 117 were targeted and invited to an educational event. Following the event, a total of six candidates were evaluated, and two were implanted. This objective approach to identifying candidates has the potential to address the gross underutilization of CIs by removing any bias or lack of knowledge regarding the management of severe to profound sensorineural hearing loss with CIs. The screening module was an effective tool for identifying potential CI candidates at one ENT practice. On a larger scale, the screening module has the potential to impact thousands of CI candidates worldwide.
Meng, Q Y; Huang, L Z; Wang, B; Li, X X; Liang, J H
2017-06-11
Objectives: To analyze RB1 gene mutations in retinoblastoma (RB) patients using gene capture technology. Methods: Experimental research. The clinical data of 17 RB patients were collected at the Department of Ophthalmology, Peking University People's Hospital, from June 2010 to June 2014. Peripheral blood samples of the seventeen RB patients and their parents were collected and genomic DNA was extracted. A DNA library from the RB patients was combined with designed gene capture probes covering the RB1 exons and their flanking sequences. The data were analyzed using bioinformatics software. To avoid false positives, abnormal sites were verified by Sanger sequencing. Results: In total, there were 17 RB patients, including 12 males and 5 females, aged 0.5 to 23 years (mean age 3.2±5.2 years). Both eyes were involved in 6 patients; in the other 11 cases only one eye was affected. Four RB patients were found to have germline mutations, among whom 2 had bilateral tumors and 2 had unilateral tumors. Two novel missense mutations were identified: c.1408A>T (p.Ile470Phe) in the 15th exon and c.1960G>C (p.Val654Leu) in the 19th exon. No RB1 mutation was identified in any of their parents. We also identified 2 previously reported mutations. One is the c.1030C>T termination mutation in the 10th exon, found in a bilateral RB patient and his father, who was diagnosed with unilateral RB. The other is the c.371-372delTA frameshift mutation in the 3rd exon; no mutation was found in the parents. Conclusions: Two novel germline RB1 mutations were found using gene capture technology, enriching the RB1 mutation library. (Chin J Ophthalmol, 2017, 53: 455-459)
Real-Time Monitoring of Scada Based Control System for Filling Process
NASA Astrophysics Data System (ADS)
Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi
2008-10-01
This paper presents a design for real-time monitoring of a filling system based on Supervisory Control and Data Acquisition (SCADA). The production process is monitored in real time using Visual Basic.NET programming under Visual Studio 2005, without dedicated SCADA software. The software integrators are programmed to obtain the information required for the configuration screens. Component simulation is displayed on the computer screen, with a parallel port linking the computer and the filling devices. Programs for real-time simulation of the filling process in the pure drinking water industry are provided.
Software for Real-Time Analysis of Subsonic Test Shot Accuracy
2014-03-01
...used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming... video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to... DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains...
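A hedged sketch of the general frame-differencing technique the report describes, written in Python/OpenCV rather than the report's C++; the video file name and threshold are placeholders, and this is not the DWB implementation.

```python
# Hedged sketch: detect new bullet holes by differencing frames against a clean
# reference image of the target, then report their pixel coordinates.
import cv2

cap = cv2.VideoCapture("range_video.mp4")          # placeholder file name
ok, reference = cap.read()                         # frame of the clean target
ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, ref_gray)             # new impacts show as bright blobs
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print("candidate shot at pixel", (cx, cy))
cap.release()
```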
Graph Visualization for RDF Graphs with SPARQL-EndPoints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Bond, Nathaniel
2014-07-11
RDF graphs are hard to visualize as triples. This software module is a web interface that connects to a SPARQL endpoint and retrieves graph data that the user can explore interactively and seamlessly. The software, written in Python and JavaScript, has been tested on displays ranging from smartphone screens to large-format screens such as EVEREST.
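A minimal sketch of the endpoint-retrieval step such a module performs, assuming the SPARQLWrapper client and a placeholder endpoint URL; it is not the module's actual code.

```python
# Minimal sketch of retrieving triples from a SPARQL endpoint for visualization.
# SPARQLWrapper is used here for illustration; the endpoint URL and LIMIT are
# placeholders, not details of the module described above.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 100")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Convert result bindings into (source, edge, target) tuples for a graph view.
edges = [(b["s"]["value"], b["p"]["value"], b["o"]["value"])
         for b in results["results"]["bindings"]]
for s, p, o in edges:
    print(s, "--", p, "-->", o)
```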
NASA Astrophysics Data System (ADS)
Matsuda, Y.; Kakutani, K.; Nonomura, T.; Kimbara, J.; Osamura, K.; Kusakar, S.; Toyoda, H.
2015-10-01
An electric field screen can be used to keep mosquitoes out of houses with open windows. In this study, doubly charged dipolar electric field screens (DD-screens) were used to capture mosquitoes entering through a window. The screen had two components: three layers of insulated conductor iron wires (ICWs) in parallel arrays and two electrostatic direct current (DC) voltage generators that supplied negative or positive voltages to the ICWs. Within each layer, the ICWs were parallel at 5-mm intervals, and connected to each other and to a negative or positive voltage generator. The negatively and positively charged ICWs are represented as ICW(-) and ICW(+), respectively. The screen consisted of one ICW(+) layer with an ICW(-) layer on either side. The Asian tiger mosquito (Aedes albopictus) and house mosquito (Culex pipiens) were used as models of vectors carrying viral pathogens. Adult mosquitoes were blown into the space between the ICWs by sending compressed air through the tip of an insect aspirator to determine the voltage range that captured all of the test insects. Wind speed was measured at the surface of the ICW using a sensitive anemometer. The result showed that at ≥ 1.2 kV, the force was strong enough that the ICWs captured all of the mosquitoes, despite a wind speed of 7 m/s. Therefore, the DD-screen could serve as a physical barrier to prevent noxious mosquitoes from entering houses with good air penetration.
Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware
NASA Astrophysics Data System (ADS)
Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe
We present a practical system prototype that convincingly restores eye contact between two video chat participants with a minimal number of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame and is used to interpolate an image as if a virtual camera had captured it through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the computational resources of graphics hardware and maximizes arithmetic intensity to achieve better-than-real-time performance of up to 42 frames per second for 800×600 images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing further algorithmic advancement without losing real-time capability.
Computational Modeling of Mixed Solids for CO2 Capture Sorbents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Yuhua
2015-01-01
Since current technologies for capturing CO2 to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed to be used for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO2 sorbent candidates were further considered for experimental validations. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such a methodology can not only be used to search for good candidates in existing databases of solid materials, but can also provide guidelines for synthesizing new materials. In this presentation, we apply our screening methodology to mixed solid systems to adjust the turnover temperature and thereby help develop CO2 capture technologies.
ERIC Educational Resources Information Center
Lindroth, Linda
2005-01-01
This article describes new presentation tools and game shows that can make the classroom into a learning stage. RM Easiteach Studio, a presentation software from RM Educational Software, provides teaching tools for use on any interactive whiteboard. Classroom Jeopardy[R] from Educational Insights includes a scoreboard/base control unit, three…
Roth, Christopher J; Boll, Daniel T; Wall, Lisa K; Merkle, Elmar M
2010-08-01
The purpose of this investigation was to assess workflow for medical imaging studies, specifically comparing liver and knee MRI examinations by use of the Lean Six Sigma methodologic framework. The hypothesis tested was that the Lean Six Sigma framework can be used to quantify MRI workflow and to identify sources of inefficiency to target for sequence and protocol improvement. Audio-video interleave streams representing individual acquisitions were obtained with graphic user interface screen capture software in the examinations of 10 outpatients undergoing MRI of the liver and 10 outpatients undergoing MRI of the knee. With Lean Six Sigma methods, the audio-video streams were dissected into value-added time (true image data acquisition periods), business value-added time (time spent that provides no direct patient benefit but is requisite in the current system), and non-value-added time (scanner inactivity while awaiting manual input). For overall MRI table time, value-added time was 43.5% (range, 39.7-48.3%) of the time for liver examinations and 89.9% (range, 87.4-93.6%) for knee examinations. Business value-added time was 16.3% of the table time for the liver and 4.3% of the table time for the knee examinations. Non-value-added time was 40.2% of the overall table time for the liver and 5.8% for the knee examinations. Liver MRI examinations consume statistically significantly more non-value-added and business value-added times than do knee examinations, primarily because of respiratory command management and contrast administration. Workflow analyses and accepted inefficiency reduction frameworks can be applied with use of a graphic user interface screen capture program.
Edelman, Emily A; Lin, Bruce K; Doksum, Teresa; Drohan, Brian; Edelson, Vaughn; Dolan, Siobhan M; Hughes, Kevin; O'Leary, James; Vasquez, Lisa; Copeland, Sara; Galvin, Shelley L; DeGroat, Nicole; Pardanani, Setul; Gregory Feero, W; Adams, Claire; Jones, Renee; Scott, Joan
2014-07-01
"The Pregnancy and Health Profile" (PHP) is a free prenatal genetic screening and clinical decision support (CDS) software tool for prenatal providers. PHP collects family health history (FHH) during intake and provides point-of-care risk assessment for providers and education for patients. This pilot study evaluated patient and provider responses to PHP and effects of using PHP in practice. PHP was implemented in four clinics. Surveys assessed provider confidence and knowledge and patient and provider satisfaction with PHP. Data on the implementation process were obtained through semi-structured interviews with administrators. Quantitative survey data were analyzed using Chi square test, Fisher's exact test, paired t tests, and multivariate logistic regression. Open-ended survey questions and interviews were analyzed using qualitative thematic analysis. Of the 83% (513/618) of patients that provided feedback, 97% felt PHP was easy to use and 98% easy to understand. Thirty percent (21/71) of participating physicians completed both pre- and post-implementation feedback surveys [13 obstetricians (OBs) and 8 family medicine physicians (FPs)]. Confidence in managing genetic risks significantly improved for OBs on 2/6 measures (p values ≤0.001) but not for FPs. Physician knowledge did not significantly change. Providers reported value in added patient engagement and reported mixed feedback about the CDS report. We identified key steps, resources, and staff support required to implement PHP in a clinical setting. To our knowledge, this study is the first to report on the integration of patient-completed, electronically captured and CDS-enabled FHH software into primary prenatal practice. PHP is acceptable to patients and providers. Key to successful implementation in the future will be customization options and interoperability with electronic health records.
Computational designing and screening of solid materials for CO2 capture
NASA Astrophysics Data System (ADS)
Duan, Yuhua
In this presentation, we will update our progress on computational designing and screening of solid materials for CO2 capture. By combining thermodynamic database mining with first principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated at NETL. The advantage of this method is that it identifies the thermodynamic properties of the CO2 capture reaction as a function of temperature and pressure without any experimental input beyond crystallographic structural information of the solid phases involved. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to identify only those solid materials for which lower capture energy costs are expected at the desired working conditions. In addition, we present a simulation scheme to increase or decrease the turnover temperature (Tt) of a solid's CO2 capture reaction by mixing in other solids. Our results also show that some solid sorbents can serve as bi-functional materials: CO2 sorbent and CO oxidation catalyst. Such dual functionality could be used for removing both CO and CO2 after water-gas-shift to obtain pure H2.
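To make the turnover-temperature idea concrete, the sketch below locates the temperature at which an approximate capture free energy changes sign at a given CO2 partial pressure. The enthalpy and entropy values are illustrative placeholders, not NETL results, and the expression is a textbook approximation rather than the ab initio treatment described above.

```python
# Illustrative sketch of locating a turnover temperature Tt: the temperature at
# which the free energy of a CO2 capture reaction crosses zero for a given CO2
# partial pressure. The dH/dS values below are placeholders, not computed data.
import math

R = 8.314  # J/(mol K)

def delta_g(T, dH, dS, p_co2, p_ref=1.0):
    """Approximate reaction free energy (J per mol CO2) at temperature T (K) and pressure p_co2 (bar)."""
    return dH - T * dS + R * T * math.log(p_ref / p_co2)

def turnover_temperature(dH, dS, p_co2, lo=200.0, hi=1500.0):
    """Bisection for the T where delta_g changes sign (capture <-> release boundary)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if delta_g(lo, dH, dS, p_co2) * delta_g(mid, dH, dS, p_co2) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical sorbent: exothermic capture (dH < 0) with negative reaction entropy.
print(turnover_temperature(dH=-175e3, dS=-160.0, p_co2=0.1), "K")
```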
Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K
2015-01-01
Objective: To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Design: Descriptive study of motion measured via 2 methods. Setting: Academic cancer center oncology clinic. Participants: 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Methods: Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Main Outcome Measures: Correlation of motion capture with goniometry and detection of motion limitation. Results: Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Conclusions: Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.
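As a generic illustration of turning skeleton coordinates into a joint angle, in the spirit of the "body angle" computation mentioned above but not the authors' exact transformation, consider:

```python
# Generic sketch: compute a shoulder elevation angle from 3D joint coordinates
# (e.g., a Kinect skeleton). Joint positions below are hypothetical, in meters.
import numpy as np

def angle_between(v1, v2):
    """Angle in degrees between two 3D vectors."""
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def shoulder_elevation(hip, shoulder, elbow):
    """Angle between the trunk axis (shoulder->hip) and the upper arm (shoulder->elbow)."""
    trunk = np.asarray(hip) - np.asarray(shoulder)
    upper_arm = np.asarray(elbow) - np.asarray(shoulder)
    return angle_between(trunk, upper_arm)

print(shoulder_elevation(hip=[0.0, -0.5, 2.0],
                         shoulder=[0.2, 0.3, 2.0],
                         elbow=[0.5, 0.3, 2.0]))
```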
Biondich, Paul G; Overhage, J Marc; Dexter, Paul R; Downs, Stephen M; Lemmon, Larry; McDonald, Clement J
2002-01-01
Advances in optical character recognition (OCR) software and computer hardware have stimulated a reevaluation of the technology and its ability to capture structured clinical data from preexisting paper forms. In our pilot evaluation, we measured the accuracy and feasibility of capturing vitals data from a pediatric encounter form that has been in use for over twenty years. We found that the software had a digit recognition rate of 92.4% (95% confidence interval: 91.6 to 93.2) overall. More importantly, this system was approximately three times as fast as our existing method of data entry. These preliminary results suggest that with further refinements in the approach and additional development, we may be able to incorporate OCR as another method for capturing structured clinical data.
Ipsiroglu, Osman S; Hung, Yi-Hsuan Amy; Chan, Forson; Ross, Michelle L; Veer, Dorothee; Soo, Sonja; Ho, Gloria; Berger, Mai; McAllister, Graham; Garn, Heinrich; Kloesch, Gerhard; Barbosa, Adriano Vilela; Stockler, Sylvia; McKellin, William; Vatikiotis-Bateson, Eric
2015-01-01
Advanced video technology is available for sleep-laboratories. However, low-cost equipment for screening in the home setting has not been identified and tested, nor has a methodology for analysis of video recordings been suggested. We investigated different combinations of hardware/software for home-videosomnography (HVS) and established a process for qualitative and quantitative analysis of HVS-recordings. A case vignette (HVS analysis for a 5.5-year-old girl with major insomnia and several co-morbidities) demonstrates how methodological considerations were addressed and how HVS added value to clinical assessment. We suggest an "ideal set of hardware/software" that is reliable, affordable (∼$500) and portable (=2.8 kg) to conduct non-invasive HVS, which allows time-lapse analyses. The equipment consists of a net-book, a camera with infrared optics, and a video capture device. (1) We present an HVS-analysis protocol consisting of three steps of analysis at varying replay speeds: (a) basic overview and classification at 16× normal speed; (b) second viewing and detailed descriptions at 4-8× normal speed, and (c) viewing, listening, and in-depth descriptions at real-time speed. (2) We also present a custom software program that facilitates video analysis and note-taking (Annotator(©)), and Optical Flow software that automatically quantifies movement for internal quality control of the HVS-recording. The case vignette demonstrates how the HVS-recordings revealed the dimension of insomnia caused by restless legs syndrome, and illustrated the cascade of symptoms, challenging behaviors, and resulting medications. The strategy of using HVS, although requiring validation and reliability testing, opens the floor for a new "observational sleep medicine," which has been useful in describing discomfort-related behavioral movement patterns in patients with communication difficulties presenting with challenging/disruptive sleep/wake behaviors.
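A hedged sketch of quantifying gross movement in a recording with dense optical flow, using plain OpenCV rather than the authors' Optical Flow software; the file name is a placeholder.

```python
# Hedged sketch: summarize per-frame movement in a video with dense optical flow,
# analogous in spirit to the quality-control step described above.
import cv2
import numpy as np

cap = cv2.VideoCapture("hvs_recording.mp4")        # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
movement = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    movement.append(float(np.mean(np.linalg.norm(flow, axis=2))))  # mean pixel displacement
    prev_gray = gray

cap.release()
print("frames analysed:", len(movement), "mean motion index:", np.mean(movement))
```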
The Missing Link: The Use of Link Words and Phrases as a Link to Manuscript Quality
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.
2016-01-01
In this article, I provide a typology of transition words/phrases. This typology comprises 12 dimensions of link words/phrases that capture 277 link words/phrases. Using QDA Miner, WordStat, and SPSS--a computer-assisted mixed methods data analysis software, content analysis software, and statistical software, respectively--I analyzed 74…
Cognitive task analysis-based design and authoring software for simulation training.
Munro, Allen; Clark, Richard E
2013-10-01
The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
The Environmental Control and Life Support System (ECLSS) advanced automation project
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, Ray
1990-01-01
The objective of the environmental control and life support system (ECLSS) Advanced Automation Project is to influence the design of the initial and evolutionary Space Station Freedom Program (SSFP) ECLSS toward a man-made closed environment in which minimal flight and ground manpower is needed. Another objective is capturing ECLSS design and development knowledge for future missions. Our approach has been to (1) analyze the SSFP ECLSS, (2) envision as our goal a fully automated evolutionary environmental control system - an augmentation of the baseline, and (3) document the advanced software systems, hooks, and scars which will be necessary to achieve this goal. From this analysis, prototype software is being developed, and will be tested using air and water recovery simulations and hardware subsystems. In addition, the advanced software is being designed, developed, and tested using an automation software management plan and lifecycle tools. Automated knowledge acquisition, engineering, verification and testing tools are being used to develop the software. In this way, we can capture ECLSS development knowledge for future use, develop more robust and complex software, provide feedback to the knowledge based system tool community, and ensure proper visibility of our efforts.
Gopal, Hemavathi; Hassan, Hassan K.; Rodríguez-Pérez, Mario A.; Toé, Laurent D.; Lustigman, Sara; Unnasch, Thomas R.
2012-01-01
Background Entomological surveys of Simulium vectors are an important component in the criteria used to determine if Onchocerca volvulus transmission has been interrupted and if focal elimination of the parasite has been achieved. However, because infection in the vector population is quite rare in areas where control has succeeded, large numbers of flies need to be examined to certify transmission interruption. Currently, this is accomplished through PCR pool screening of large numbers of flies. The efficiency of this process is limited by the size of the pools that may be screened, which is in turn determined by the constraints imposed by the biochemistry of the assay. The current method of DNA purification from pools of vector black flies relies upon silica adsorption. This method can be applied to screen pools containing a maximum of 50 individuals (from the Latin American vectors) or 100 individuals (from the African vectors). Methodology/Principal Findings We have evaluated an alternative method of DNA purification for pool screening of black flies which relies upon oligonucleotide capture of Onchocerca volvulus genomic DNA from homogenates prepared from pools of Latin American and African vectors. The oligonucleotide capture assay was shown to reliably detect one O. volvulus infective larva in pools containing 200 African or Latin American flies, representing a two- to four-fold improvement over the conventional assay. The capture assay requires an equivalent amount of technical time to conduct as the conventional assay, resulting in a two- to four-fold reduction in labor costs per insect assayed, and reduces reagent costs to $3.81 per pool of 200 flies, or less than $0.02 per insect assayed. Conclusions/Significance The oligonucleotide capture assay represents a substantial improvement in the procedure used to detect parasite prevalence in the vector population, a major metric employed in the process of certifying the elimination of onchocerciasis. PMID:22724041
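For context, the standard point estimate of per-insect prevalence from pool screening with equal pool sizes is p = 1 - (1 - k/m)^(1/n), where k of m pools of n flies test positive; the sketch below uses illustrative numbers, not the study's data.

```python
# Standard point estimate of per-insect infection prevalence from pool screening
# with equal pool sizes. Counts below are illustrative only.
def pooled_prevalence(positive_pools, total_pools, pool_size):
    if positive_pools == total_pools:
        raise ValueError("all pools positive: prevalence not estimable this way")
    return 1.0 - (1.0 - positive_pools / total_pools) ** (1.0 / pool_size)

# e.g., 3 positive pools out of 500 pools of 200 flies each (hypothetical survey)
p = pooled_prevalence(3, 500, 200)
print(f"estimated prevalence: {p:.6f} ({p * 1e4:.2f} per 10,000 flies)")
```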
Sodickson, Aaron; Warden, Graham I; Farkas, Cameron E; Ikuta, Ichiro; Prevedello, Luciano M; Andriole, Katherine P; Khorasani, Ramin
2012-08-01
To develop and validate an informatics toolkit that extracts anatomy-specific computed tomography (CT) radiation exposure metrics (volume CT dose index and dose-length product) from existing digital image archives through optical character recognition of CT dose report screen captures (dose screens) combined with Digital Imaging and Communications in Medicine attributes. This institutional review board-approved HIPAA-compliant study was performed in a large urban health care delivery network. Data were drawn from a random sample of CT encounters that occurred between 2000 and 2010; images from these encounters were contained within the enterprise image archive, which encompassed images obtained at an adult academic tertiary referral hospital and its affiliated sites, including a cancer center, a community hospital, and outpatient imaging centers, as well as images imported from other facilities. Software was validated by using 150 randomly selected encounters for each major CT scanner manufacturer, with outcome measures of dose screen retrieval rate (proportion of correctly located dose screens) and anatomic assignment precision (proportion of extracted exposure data with correctly assigned anatomic region, such as head, chest, or abdomen and pelvis). The 95% binomial confidence intervals (CIs) were calculated for discrete proportions, and CIs were derived from the standard error of the mean for continuous variables. After validation, the informatics toolkit was used to populate an exposure repository from a cohort of 54 549 CT encounters, of which 29 948 had available dose screens. Validation yielded a dose screen retrieval rate of 99% (597 of 605 CT encounters; 95% CI: 98%, 100%) and an anatomic assignment precision of 94% (summed DLP fraction correct, 563 of 600 CT encounters; 95% CI: 92%, 96%). Patient safety applications of the resulting data repository include benchmarking between institutions, CT protocol quality control and optimization, and cumulative patient- and anatomy-specific radiation exposure monitoring. Large-scale anatomy-specific radiation exposure data repositories can be created with high fidelity from existing digital image archives by using open-source informatics tools.
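A hedged sketch of the general approach (OCR a dose screen and pull DLP values with a regular expression); it is not the validated toolkit, the file name and pattern are assumptions, and real dose-screen layouts vary by scanner vendor.

```python
# Hedged sketch: OCR a dose-screen capture and extract dose-length product (DLP)
# values with a regular expression. File name and pattern are assumptions.
import re
from PIL import Image
import pytesseract

def extract_dlp_values(dose_screen_png):
    text = pytesseract.image_to_string(Image.open(dose_screen_png))
    # Match lines such as "Total DLP 734.5" or "DLP (mGy-cm): 612"
    return [float(m) for m in re.findall(r"DLP[^\d]*(\d+(?:\.\d+)?)", text)]

if __name__ == "__main__":
    print(extract_dlp_values("dose_screen.png"))   # hypothetical screen capture
```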
Castañón, Jesús; Román, José Pablo; Jessop, Theodore C; de Blas, Jesús; Haro, Rubén
2018-06-01
DNA-encoded libraries (DELs) have emerged as an efficient and cost-effective drug discovery tool for the exploration and screening of very large chemical space using small-molecule collections of unprecedented size. Herein, we report an integrated automation and informatics system designed to enhance the quality, efficiency, and throughput of the production and affinity selection of these libraries. The platform is governed by software developed according to a database-centric architecture to ensure data consistency, integrity, and availability. Through its versatile protocol management functionalities, this application captures the wide diversity of experimental processes involved with DEL technology, keeps track of working protocols in the database, and uses them to command robotic liquid handlers for the synthesis of libraries. This approach provides full traceability of building-blocks and DNA tags in each split-and-pool cycle. Affinity selection experiments and high-throughput sequencing reads are also captured in the database, and the results are automatically deconvoluted and visualized in customizable representations. Researchers can compare results of different experiments and use machine learning methods to discover patterns in data. As of this writing, the platform has been validated through the generation and affinity selection of various libraries, and it has become the cornerstone of the DEL production effort at Lilly.
An integrated tool for the diagnosis of voice disorders.
Godino-Llorente, Juan I; Sáenz-Lechón, Nicolás; Osma-Ruiz, Víctor; Aguilera-Navarro, Santiago; Gómez-Vilda, Pedro
2006-04-01
A PC-based integrated aid tool has been developed for the analysis and screening of pathological voices. With it the user can simultaneously record speech, electroglottographic (EGG), and videoendoscopic signals, and synchronously edit them to select the most significant segments. These multimedia data are stored on a relational database, together with a patient's personal information, anamnesis, diagnosis, visits, explorations and any other comment the specialist may wish to include. The speech and EGG waveforms are analysed by means of temporal representations and the quantitative measurements of parameters such as spectrograms, frequency and amplitude perturbation measurements, harmonic energy, noise, etc. are calculated using digital signal processing techniques, giving an idea of the degree of hoarseness and quality of the voice register. Within this framework, the system uses a standard protocol to evaluate and build complete databases of voice disorders. The target users of this system are speech and language therapists and ear nose and throat (ENT) clinicians. The application can be easily configured to cover the needs of both groups of professionals. The software has a user-friendly Windows style interface. The PC should be equipped with standard sound and video capture cards. Signals are captured using common transducers: a microphone, an electroglottograph and a fiberscope or telelaryngoscope. The clinical usefulness of the system is addressed in a comprehensive evaluation section.
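As an example of one of the perturbation measures mentioned above, local jitter can be computed from consecutive glottal periods; the values below are illustrative, not clinical data, and the tool's own algorithms may differ.

```python
# Sketch of local jitter: mean absolute difference of consecutive glottal periods
# relative to the mean period. Period values below are illustrative only.
import numpy as np

def local_jitter_percent(periods_ms):
    periods = np.asarray(periods_ms, dtype=float)
    diffs = np.abs(np.diff(periods))
    return 100.0 * diffs.mean() / periods.mean()

# e.g., periods extracted from a sustained vowel (hypothetical values, in ms)
print(f"{local_jitter_percent([7.9, 8.1, 8.0, 8.3, 7.8, 8.2]):.2f} %")
```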
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present research methods for processing "digital holograms" for Internet transmission, together with results.
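The standard four-step phase-shifting reconstruction behind such "digital holograms" recovers the wrapped phase as atan2(I4 - I2, I1 - I3); the sketch below demonstrates it on synthetic frames and is a generic illustration, not the authors' processing chain.

```python
# Four-step phase-shifting reconstruction: with reference shifts 0, pi/2, pi,
# 3*pi/2, the object phase is atan2(I4 - I2, I1 - I3). Frames are synthetic.
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase map from four phase-shifted interferograms."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic test: build four interferograms of a known phase ramp and recover it.
y, x = np.mgrid[0:128, 0:128]
true_phase = 0.05 * x
frames = [1.0 + 0.8 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
# Agreement up to 2*pi wrapping:
print(np.allclose(np.angle(np.exp(1j * (recovered - true_phase))), 0, atol=1e-6))
```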
Screening_mgmt: a Python module for managing screening data.
Helfenstein, Andreas; Tammela, Päivi
2015-02-01
High-throughput screening is an established technique in drug discovery and, as such, has also found its way into academia. High-throughput screening generates a considerable amount of data, which is why specific software is used for its analysis and management. The commercially available software packages are often beyond the financial limits of small-scale academic laboratories and, furthermore, lack the flexibility to fulfill certain user-specific requirements. We have developed a Python module, screening_mgmt, which is a lightweight tool for flexible data retrieval, analysis, and storage for different screening assays in one central database. The module reads custom-made analysis scripts and plotting instructions, and it offers a graphical user interface to import, modify, and display the data in a uniform manner. During the test phase, we used this module for the management of 10,000 data points of various origins. It has provided a practical, user-friendly tool for sharing and exchanging information between researchers. © 2014 Society for Laboratory Automation and Screening.
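As an example of the kind of user-supplied analysis script such a module can run, the snippet below computes the Z'-factor, a common plate-level HTS quality metric; it is a generic calculation with hypothetical control values, not part of the screening_mgmt API.

```python
# Generic HTS plate-quality metric: Z' = 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|.
# Control values below are hypothetical; this is not screening_mgmt's own API.
import numpy as np

def z_prime(pos_controls, neg_controls):
    pos = np.asarray(pos_controls, dtype=float)
    neg = np.asarray(neg_controls, dtype=float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Hypothetical raw signals from one plate's control wells.
print(round(z_prime([95, 98, 102, 97, 101], [8, 12, 10, 9, 11]), 3))
```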
Peterfreund, Robert A; Driscoll, William D; Walsh, John L; Subramanian, Aparna; Anupama, Shaji; Weaver, Melissa; Morris, Theresa; Arnholz, Sarah; Zheng, Hui; Pierce, Eric T; Spring, Stephen F
2011-05-01
Efforts to assure high-quality, safe, clinical care depend upon capturing information about near-miss and adverse outcome events. Inconsistent or unreliable information capture, especially for infrequent events, compromises attempts to analyze events in quantitative terms, understand their implications, and assess corrective efforts. To enhance reporting, we developed a secure, electronic, mandatory system for reporting quality assurance data linked to our electronic anesthesia record. We used the capabilities of our anesthesia information management system (AIMS) in conjunction with internally developed, secure, intranet-based, Web application software. The application is implemented with a backend allowing robust data storage, retrieval, data analysis, and reporting capabilities. We customized a feature within the AIMS software to create a hard stop in the documentation workflow before the end of anesthesia care time stamp for every case. The software forces the anesthesia provider to access the separate quality assurance data collection program, which provides a checklist for targeted clinical events and a free text option. After completing the event collection program, the software automatically returns the clinician to the AIMS to finalize the anesthesia record. The number of events captured by the departmental quality assurance office increased by 92% (95% confidence interval [CI] 60.4%-130%) after system implementation. The major contributor to this increase was the new electronic system. This increase has been sustained over the initial 12 full months after implementation. Under our reporting criteria, the overall rate of clinical events reported by any method was 471 events out of 55,382 cases or 0.85% (95% CI 0.78% to 0.93%). The new system collected 67% of these events (95% confidence interval 63%-71%). We demonstrate the implementation in an academic anesthesia department of a secure clinical event reporting system linked to an AIMS. The system enforces entry of quality assurance information (either no clinical event or notification of a clinical event). System implementation resulted in capturing nearly twice the number of events at a relatively steady case load. © 2011 International Anesthesia Research Society
High-throughput screening (HTS) for potential thyroid–disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limi...
Rapid, Optimized Interactomic Screening
Hakhverdyan, Zhanna; Domanski, Michal; Hough, Loren; Oroskar, Asha A.; Oroskar, Anil R.; Keegan, Sarah; Dilworth, David J.; Molloy, Kelly R.; Sherman, Vadim; Aitchison, John D.; Fenyö, David; Chait, Brian T.; Jensen, Torben Heick; Rout, Michael P.; LaCava, John
2015-01-01
We must reliably map the interactomes of cellular macromolecular complexes in order to fully explore and understand biological systems. However, there are no methods to accurately predict how to capture a given macromolecular complex with its physiological binding partners. Here, we present a screen that comprehensively explores the parameters affecting the stability of interactions in affinity-captured complexes, enabling the discovery of physiological binding partners and the elucidation of their functional interactions in unparalleled detail. We have implemented this screen on several macromolecular complexes from a variety of organisms, revealing novel profiles even for well-studied proteins. Our approach is robust, economical and automatable, providing an inroad to the rigorous, systematic dissection of cellular interactomes. PMID:25938370
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2010-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may threaten the endothelial cell density to such an extent that the optical property of the cornea and thus clear eyesight is threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need of operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated analysis of a very large range of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing lights and contrasts. The digitally enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT) and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image was used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well known diffraction theory. Results in form of estimated cell density of the corneal endothelium were obtained, using fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis and a relatively large correlation was found.
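A simplified sketch of the Fourier-domain idea: the regular endothelial mosaic produces a ring in the power spectrum whose radius gives the dominant spatial frequency and hence the mean cell spacing and density. This is not the authors' software; the image path, pixel pitch, and hexagonal-packing assumption are illustrative.

```python
# Simplified sketch: estimate endothelial cell density from the ring radius in the
# Fourier power spectrum of a specular microscopy image. Inputs are assumptions.
import numpy as np
from PIL import Image

def cell_density_per_mm2(image_path, mm_per_pixel):
    img = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    img -= img.mean()                                   # suppress the DC peak
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Radial average of the spectrum, then take the strongest non-trivial radius.
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    ring_radius = np.argmax(radial[2:]) + 2             # skip residual low frequencies

    cycles_per_mm = ring_radius / (power.shape[0] * mm_per_pixel)  # assumes square image
    spacing_mm = 1.0 / cycles_per_mm                    # mean center-to-center distance
    # Hexagonal packing: area per cell = sqrt(3)/2 * spacing**2
    return 1.0 / (np.sqrt(3) / 2 * spacing_mm ** 2)

# print(cell_density_per_mm2("endothelium.png", mm_per_pixel=0.001))  # hypothetical inputs
```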
The East London glaucoma prediction score: web-based validation of glaucoma risk screening tool
Stephen, Cook; Benjamin, Longo-Mbenza
2013-01-01
AIM It is difficult for optometrists and general practitioners to know which patients are at risk. The East London glaucoma prediction score (ELGPS) is a web-based risk calculator that has been developed to determine glaucoma risk at the time of screening. Multiple risk factors that are available in a low-tech environment are assessed to provide a risk assessment. This is extremely useful in settings where access to specialist care is difficult. Use of the calculator is educational. It is a free web-based service. Data capture is user specific. METHOD The scoring system is a web-based questionnaire that captures and subsequently calculates the relative risk for the presence of glaucoma at the time of screening. Three categories of patient are described: Unlikely to have Glaucoma; Glaucoma Suspect and Glaucoma. A case review methodology of patients with known diagnosis is employed to validate the calculator risk assessment. RESULTS Data from the patient records of 400 patients with an established diagnosis have been captured and used to validate the screening tool. The website reports that the calculated diagnosis correlates with the actual diagnosis 82% of the time. Biostatistics analysis showed: Sensitivity = 88%; Positive predictive value = 97%; Specificity = 75%. CONCLUSION Analysis of the first 400 patients validates the web-based screening tool as a good method of screening for the at-risk population. The validation is ongoing. The web-based format will allow more widespread recruitment across different geographic, population and personnel variables. PMID:23550097
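The reported validation statistics follow directly from a 2x2 table of calculated versus actual diagnoses; the counts below are hypothetical, chosen only to roughly reproduce the reported percentages, not taken from the study.

```python
# How screening validation statistics follow from a 2x2 table of screening result
# vs. actual diagnosis. Counts are hypothetical, not the 400-case data set.
def screening_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # detected / all with glaucoma
        "specificity": tn / (tn + fp),   # cleared / all without glaucoma
        "ppv": tp / (tp + fp),           # true glaucoma among screen positives
    }

print(screening_stats(tp=318, fp=10, fn=43, tn=29))
```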
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen
2016-12-01
Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such an improvement cannot be achieved by using a hardware- or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.
DATALINK: Records inventory data collection software. User's guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, B.A.
1995-03-01
DATALINK was created to provide an easy to use data collection program for records management software products. It provides several useful tools for capturing and validating record index data in the field. It also allows users to easily create a comma delimited, ASCII text file for data export into most records management software products. It runs on virtually any computer using MS-DOS.
Duffy, Fergal J; Verniere, Mélanie; Devocelle, Marc; Bernard, Elise; Shields, Denis C; Chubb, Anthony J
2011-04-25
We introduce CycloPs, software for the generation of virtual libraries of constrained peptides including natural and nonnatural, commercially available amino acids. The software is written in the cross-platform Python programming language, and features include generating virtual libraries in one-dimensional SMILES and three-dimensional SDF formats, suitable for virtual screening. The stand-alone software is capable of filtering the virtual libraries using empirical measurements, including peptide synthesizability by standard peptide synthesis techniques, stability, and the druglike properties of the peptide. The software and accompanying Web interface are designed to enable the generation of large, structurally diverse, synthesizable virtual libraries of constrained peptides quickly and conveniently, for use in virtual screening experiments. The stand-alone software, and the Web interface for evaluating these empirical properties of a single peptide, are available at http://bioware.ucd.ie.
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Because the direction of the rays is recorded, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact, digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to select the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information on the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Biondich, Paul G.; Overhage, J. Marc; Dexter, Paul R.; Downs, Stephen M.; Lemmon, Larry; McDonald, Clement J.
2002-01-01
Advances in optical character recognition (OCR) software and computer hardware have stimulated a reevaluation of the technology and its ability to capture structured clinical data from preexisting paper forms. In our pilot evaluation, we measured the accuracy and feasibility of capturing vitals data from a pediatric encounter form that has been in use for over twenty years. We found that the software had a digit recognition rate of 92.4% (95% confidence interval: 91.6 to 93.2) overall. More importantly, this system was approximately three times as fast as our existing method of data entry. These preliminary results suggest that with further refinements in the approach and additional development, we may be able to incorporate OCR as another method for capturing structured clinical data. PMID:12463786
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
2014-07-08
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software helps 3D tumor spheroids become a routine in vitro model for drug screens in industry and academia.
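The abstract above names the active contour (Snakes) method and major/minor axis measurements; purely as an illustration (not SpheroidSizer's own code), the following Python sketch segments a spheroid image with a morphological Chan-Vese active contour from scikit-image and reads off the axial lengths. The input file name and the length x width^2 / 2 volume approximation are assumptions.

```python
# Illustrative sketch (not SpheroidSizer): active-contour-style segmentation of a
# spheroid image followed by axis measurement with region properties.
from skimage import io, color, img_as_float
from skimage.segmentation import morphological_chan_vese
from skimage.measure import label, regionprops

img = img_as_float(color.rgb2gray(io.imread("spheroid.png")))   # hypothetical RGB input

# Morphological Chan-Vese: an edge-free active-contour variant, robust to uneven illumination.
mask = morphological_chan_vese(img, 50, init_level_set="checkerboard", smoothing=3)

# Keep the largest labelled region (assumes the spheroid ends up in the foreground phase).
regions = regionprops(label(mask))
spheroid = max(regions, key=lambda r: r.area)
L, W = spheroid.major_axis_length, spheroid.minor_axis_length
volume = 0.5 * L * W ** 2    # common length x width^2 / 2 approximation (assumption)
print(f"major={L:.1f}px minor={W:.1f}px approx_volume={volume:.0f}px^3")
```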
Mapping HL7 CDA R2 Formatted Mass Screening Data to OpenEHR Archetypes.
Kobayashi, Shinji; Kume, Naoto; Yoshihara, Hiroyuki
2017-01-01
Mass screening of adults was performed to manage employee healthcare. The screening service defined the data collection format as HL7 Clinical Document Architecture (CDA) R2. To capture mass screening data for nationwide electronic health records (EHRs), we programmed a model within the CDA format and mapped the data items to ISO 13606/openEHR archetypes for semantic interoperability.
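As a rough, hedged illustration of the kind of mapping described (not the authors' implementation), the sketch below extracts coded observation values from a CDA R2 document with Python's xml.etree and re-keys them by openEHR-style archetype paths; the element layout, the LOINC codes chosen, and the archetype paths are simplified assumptions.

```python
# Minimal sketch: pull coded observation values out of an HL7 CDA R2 document and
# re-key them by openEHR-style archetype paths (simplified, illustrative only).
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}
ARCHETYPE_MAP = {            # hypothetical code -> archetype path mapping table
    "8480-6": "openEHR-EHR-OBSERVATION.blood_pressure.v1/systolic",
    "29463-7": "openEHR-EHR-OBSERVATION.body_weight.v1/weight",
}

def cda_to_archetype_items(path):
    root = ET.parse(path).getroot()
    items = {}
    for obs in root.iter("{urn:hl7-org:v3}observation"):
        code = obs.find("hl7:code", NS)
        value = obs.find("hl7:value", NS)
        if code is None or value is None:
            continue
        target = ARCHETYPE_MAP.get(code.get("code"))
        if target:
            items[target] = (value.get("value"), value.get("unit"))
    return items

print(cda_to_archetype_items("mass_screening_cda.xml"))  # hypothetical input file
```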
NASA Astrophysics Data System (ADS)
Israel, Maya; Wherfel, Quentin M.; Shehab, Saadeddine; Ramos, Evan A.; Metzger, Adam; Reese, George C.
2016-07-01
This paper describes the development, validation, and uses of the Collaborative Computing Observation Instrument (C-COI), a web-based analysis instrument that classifies individual and/or collaborative behaviors of students during computing problem-solving (e.g. coding, programming). The C-COI analyzes data gathered through video and audio screen recording software that captures students' computer screens as they program, and their conversations with their peers or adults. The instrument allows researchers to organize and quantify these data to track behavioral patterns that could be further analyzed for deeper understanding of persistence and/or collaborative interactions. The article provides a rationale for the C-COI including the development of a theoretical framework for measuring collaborative interactions in computer-mediated environments. This theoretical framework relied on the computer-supported collaborative learning literature related to adaptive help seeking, the joint problem-solving space in which collaborative computing occurs, and conversations related to outcomes and products of computational activities. Instrument development and validation also included ongoing advisory board feedback from experts in computer science, collaborative learning, and K-12 computing as well as classroom observations to test out the constructs in the C-COI. These processes resulted in an instrument with rigorous validation procedures and a high inter-rater reliability.
NASA Technical Reports Server (NTRS)
Hejduk, M. D.; Pachura, D. A.
2017-01-01
Conjunction Assessment screening volumes used in the protection of NASA satellites are constructed as geometric volumes about these satellites, of a size expected to capture a certain percentage of the serious conjunction events by a certain time before closest approach. However, the analyses that established these sizes were grounded in covariance-based projections rather than empirical screening results, did not tailor the volume sizes to ensure operational actionability of those results, and did not consider the adjunct ability to produce data that could provide prevenient assistance for maneuver planning. The present study effort seeks to reconsider these questions based on a six-month dataset of empirical screening results using an extremely large screening volume. The results, pursued here for a highly-populated orbit regime near 700 km altitude, identify theoretical limits of screening volume performance, explore volume configurations that facilitate both maneuver remediation planning and basic asset protection, and recommend sizing principles that maximize volume performance while minimizing the capture of "chaff" conjunctions that are unlikely ever to become serious events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHEN, JOANNA; SIMIRENKO, LISA; TAPASWI, MANJIRI
The DIVA software supports a process in which researchers design their DNA with a web-based graphical user interface, submit their designs to a central queue, and a few weeks later receive their sequence-verified clonal constructs. Each researcher independently designs the DNA to be constructed with a web-based BioCAD tool, and presses a button to submit their designs to a central queue. Researchers have web-based access to their DNA design queues, and can track the progress of their submitted designs as they progress from "evaluation", to "waiting for reagents", to "in progress", to "complete". Researchers access their completed constructs through the central DNA repository. Along the way, all DNA construction success/failure rates are captured in a central database. Once a design has been submitted to the queue, a small number of dedicated staff evaluate the design for feasibility and provide feedback to the responsible researcher if the design is either unreasonable (e.g., encompasses a combinatorial library of a billion constructs) or small design changes could significantly facilitate the downstream implementation process. The dedicated staff then use DNA assembly design automation software to optimize the DNA construction process for the design, leveraging existing parts from the DNA repository where possible and ordering synthetic DNA where necessary. SynTrack software manages the physical locations and availability of the various requisite reagents and process inputs (e.g., DNA templates). Once all requisite process inputs are available, the design progresses from "waiting for reagents" to "in progress" in the design queue. Human-readable and machine-parseable DNA construction protocols output by the DNA assembly design automation software are then executed by the dedicated staff exploiting lab automation devices wherever possible. Since all of the employed DNA construction methods are sequence-agnostic and standardized (they use the same enzymatic master mixes and reaction conditions), completely independent DNA construction tasks can be aggregated into the same multi-well plates and pursued in parallel. The resulting sets of cloned constructs can then be screened by high-throughput next-gen sequencing platforms for sequence correctness. A combination of long read-length (e.g., PacBio) and paired-end read platforms (e.g., Illumina) would be exploited depending on the particular task at hand (e.g., PacBio might be sufficient to screen a set of pooled constructs with significant gene divergence). Post sequence verification, designs for which at least one correct clone was identified will progress to a "complete" status, while designs for which no correct clones were identified will progress to a "failure" status. Depending on the failure mode (e.g., no transformants), and how many prior attempts/variations of the assembly protocol have already been made for a given design, subsequent attempts may be made or the design can progress to a "permanent failure" state. All success and failure rate information will be captured during the process, including at which stage a given clonal construction procedure failed (e.g., no PCR product) and what the exact failure was (e.g., assembly piece 2 missing). This success/failure rate data can be leveraged to refine the DNA assembly design process.
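The queue statuses described above can be pictured as a small state machine. The following sketch is an illustrative model only (not DIVA code); the retry path after a failure is an assumption based on the text.

```python
# Illustrative state model of the design-queue statuses described above (not DIVA code).
from enum import Enum, auto

class Status(Enum):
    EVALUATION = auto()
    WAITING_FOR_REAGENTS = auto()
    IN_PROGRESS = auto()
    COMPLETE = auto()
    FAILURE = auto()
    PERMANENT_FAILURE = auto()

# Allowed transitions as the text describes them (the retry path is an assumption).
TRANSITIONS = {
    Status.EVALUATION: {Status.WAITING_FOR_REAGENTS},
    Status.WAITING_FOR_REAGENTS: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.COMPLETE, Status.FAILURE},
    Status.FAILURE: {Status.IN_PROGRESS, Status.PERMANENT_FAILURE},  # retry or give up
}

def advance(current: Status, target: Status) -> Status:
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

# Example: a design that needs one retry before completing.
state = advance(Status.EVALUATION, Status.WAITING_FOR_REAGENTS)
state = advance(state, Status.IN_PROGRESS)
state = advance(state, Status.FAILURE)
state = advance(state, Status.IN_PROGRESS)
print(advance(state, Status.COMPLETE).name)
```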
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Yuhua
2012-11-02
Since current technologies for capturing CO2 to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO2 sorbent candidates were further considered for experimental validation. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such a methodology not only can be used to search for good candidates from existing databases of solid materials, but also can provide guidelines for synthesizing new materials. In this presentation, we first introduce our screening methodology and the results on a testing set of solids with known thermodynamic properties to validate our methodology. Then, by applying our computational method to several different kinds of solid systems, we demonstrate that our methodology can predict useful information to help develop CO2 capture technologies.
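As a hedged illustration of how such a screening criterion can be expressed (the study's exact functional forms are not reproduced here), the free energy of a generic solid carbonation reaction can be evaluated as a function of temperature and CO2 partial pressure, and a sorbent is retained only if capture is favorable at absorption conditions and reversible at regeneration conditions:

```latex
% Hedged illustration of a thermodynamic screening criterion (generic carbonation reaction).
\mathrm{MO(s)} + \mathrm{CO_2(g)} \rightleftharpoons \mathrm{MCO_3(s)},
\qquad
\Delta G(T, P_{\mathrm{CO_2}}) = \Delta G^{\circ}(T) + RT \,\ln\!\frac{P^{\circ}}{P_{\mathrm{CO_2}}}

% Candidate retained if capture is favorable at absorption conditions and the reaction
% can be reversed at regeneration conditions:
\Delta G(T_{\mathrm{abs}},  P_{\mathrm{abs}})  < 0
\quad\text{and}\quad
\Delta G(T_{\mathrm{regen}},P_{\mathrm{regen}}) > 0
```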
PrimerSuite: A High-Throughput Web-Based Primer Design Program for Multiplex Bisulfite PCR.
Lu, Jennifer; Johnston, Andrew; Berichon, Philippe; Ru, Ke-Lin; Korbie, Darren; Trau, Matt
2017-01-24
The analysis of DNA methylation at CpG dinucleotides has become a major research focus due to its regulatory role in numerous biological processes, but the need for assays that amplify bisulfite-converted DNA represents a major bottleneck due to the unique design constraints imposed on bisulfite-PCR primers. Moreover, a review of the literature indicated no available software solution that accommodates high-throughput primer design, multiplex amplification assays, and primer-dimer prediction. In response, the tri-modular software package PrimerSuite was developed to support bisulfite multiplex PCR applications. This software was constructed to (i) design bisulfite primers against multiple regions simultaneously (PrimerSuite), (ii) screen for primer-primer dimerizing artefacts (PrimerDimer), and (iii) support multiplex PCR assays (PrimerPlex). Moreover, a major focus in the development of this software package was the emphasis on extensive empirical validation, and over 1300 unique primer pairs have been successfully designed and screened, with over 94% of them producing amplicons of the expected size, and an average mapping efficiency of 93% when screened using bisulfite multiplex resequencing. The potential use of the software in other bisulfite-based applications such as methylation-specific PCR is under consideration for future updates. This resource is freely available for use at the PrimerSuite website (www.primer-suite.com).
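PrimerDimer's actual scoring rules are not given above; as a simple, hedged illustration of the kind of check involved, the sketch below flags a primer pair whose 3' ends are mutually complementary over a short window (the window length, slack, and example sequences are arbitrary assumptions).

```python
# Illustrative 3'-end primer-dimer check (not the PrimerDimer algorithm itself).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def three_prime_dimer(p1: str, p2: str, window: int = 5) -> bool:
    """Flag a pair if the last `window` bases of p1 can base-pair near the 3' end of p2."""
    tail = p1[-window:]
    return revcomp(tail) in p2[-(window + 3):]   # small slack; threshold is an assumption

fwd = "AGGTTAGGAGTTTAGGGTTT"   # arbitrary bisulfite-style example primers
rev = "TTTGGGATTTGAGGAAACCC"
print(three_prime_dimer(fwd, rev))
```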
Architecture of the software for LAMOST fiber positioning subsystem
NASA Astrophysics Data System (ADS)
Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin
2004-09-01
The architecture of the software which controls the LAMOST fiber positioning sub-system is described. The software is composed of two parts: a main control program running on a computer and a unit controller program stored in the ROM of an MCS51 single-chip microcomputer. The functions of the software include client/server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which different parts of the software communicate. Software techniques for multithreading, socket programming, Microsoft Windows message handling, and serial communications are also discussed.
Engineering Quality Software: 10 Recommendations for Improved Software Quality Management
2010-04-27
Common quality problems include lack of user involvement, inadequate software process management and control by contractors, and no "team" of vendors and users with little SME participation. Quality perspectives since 1990 include process quality (CMMI) and product quality (ISO/IEC 2500x), covering both internal and external quality attributes. CMMI/ISO 9000 assessments capture organizational knowledge and identify best practices and lessons learned: know where you are, and where you need to be.
Evaluation of the Microsoft Kinect for screening ACL injury.
Stone, Erik E; Butler, Michael; McRuer, Aaron; Gray, Aaron; Marks, Jeffrey; Skubic, Marjorie
2013-01-01
A study was conducted to evaluate the use of the skeletal model generated by the Microsoft Kinect SDK in capturing four biomechanical measures during the Drop Vertical Jump test. These measures, which include: knee valgus motion from initial contact to peak flexion, frontal plane knee angle at initial contact, frontal plane knee angle at peak flexion, and knee-to-ankle separation ratio at peak flexion, have proven to be useful in screening for future knee anterior cruciate ligament (ACL) injuries among female athletes. A marker-based Vicon motion capture system was used for ground truth. Results indicate that the Kinect skeletal model likely has acceptable accuracy for use as part of a screening tool to identify elevated risk for ACL injury.
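The Kinect SDK exposes 3-D joint positions; the sketch below shows one plausible way, not necessarily the study's exact definitions, to derive a frontal-plane knee angle and the knee-to-ankle separation ratio from hip, knee and ankle coordinates (the axis conventions and example numbers are assumptions).

```python
# Illustrative computation of two screening measures from 3-D joint positions
# (hip/knee/ankle coordinates as a Kinect-style skeletal model provides them).
import numpy as np

def frontal_plane_knee_angle(hip, knee, ankle):
    """Angle (degrees) between thigh and shank projected onto the frontal (x-y) plane."""
    thigh = np.asarray(hip)[:2] - np.asarray(knee)[:2]    # drop z (depth) to project
    shank = np.asarray(ankle)[:2] - np.asarray(knee)[:2]
    cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def knee_to_ankle_separation_ratio(l_knee, r_knee, l_ankle, r_ankle):
    """Mediolateral knee separation divided by ankle separation (x = mediolateral, assumed)."""
    return abs(l_knee[0] - r_knee[0]) / abs(l_ankle[0] - r_ankle[0])

# Example joint positions in metres (made-up numbers for illustration).
print(frontal_plane_knee_angle((0.10, 0.90, 2.5), (0.12, 0.50, 2.5), (0.14, 0.10, 2.5)))
print(knee_to_ankle_separation_ratio((0.12, 0.5, 2.5), (-0.12, 0.5, 2.5),
                                     (0.16, 0.1, 2.5), (-0.16, 0.1, 2.5)))
```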
The Software Design Document: More than a User's Manual.
ERIC Educational Resources Information Center
Bowers, Dennis
1989-01-01
Discusses the value of creating design documentation for computer software so that it may serve as a model for similar design efforts. Components of the software design document are described, including program flowcharts, graphic representation of screen displays, storyboards, and evaluation procedures. An example is given using HyperCard. (three…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Steven Adriel
The following discussion contains a high-level description of methods used to implement software for data processing. It describes the required directory structures and file handling required to use Excel's Visual Basic for Applications programming language and how to identify shot, test and capture types to appropriately process data. It also describes how to interface with the software.
Automation of the Environmental Control and Life Support System
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, J. Ray
1990-01-01
The objective of the Environmental Control and Life Support System (ECLSS) Advanced Automation Project is to recommend and develop advanced software for the initial and evolutionary Space Station Freedom (SSF) ECLS system which will minimize the crew and ground manpower needed for operations. Another objective includes capturing ECLSS design and development knowledge for future missions. This report summarizes our results from Phase I, the ECLSS domain analysis phase, which we broke down into three steps: 1) analyze and document the baselined ECLS system, 2) envision as our goal an evolution to a fully automated regenerative life support system, built upon an augmented baseline, and 3) document the augmentations (hooks and scars) and advanced software systems which we see as necessary in achieving minimal manpower support for ECLSS operations. In addition, Phase I included selection of an advanced software life cycle in preparation for Phases II and III, the development and integration phases, respectively. Automated knowledge acquisition, engineering, verification, and testing tools will be used in the development of the software. In this way, we can capture ECLSS development knowledge for future use, develop more robust and complex software, provide feedback to the KBS tool community, and ensure proper visibility of our efforts.
Zong, Shenfei; Wang, Zhuyuan; Chen, Hui; Hu, Guohua; Liu, Min; Chen, Peng; Cui, Yiping
2014-01-01
As an important biomarker and therapeutic target, telomerase has attracted considerable attention concerning its detection and monitoring. Here, we present a colorimetry and surface enhanced Raman scattering (SERS) dual-mode telomerase activity detection method, which has several distinctive advantages. First, colorimetric functionality allows rapid preliminary discrimination of telomerase activity by the naked eye. Second, the employment of SERS technique results in greatly improved detection sensitivity. Third, the combination of colorimetry and SERS into one detection system can ensure highly efficacious and sensitive screening of numerous samples. Besides, the avoidance of polymerase chain reaction (PCR) procedures further guarantees fine reliability and simplicity. Generally, the presented method is realized by an "elongate and capture" procedure. To be specific, gold nanoparticles modified with Raman molecules and telomeric repeat complementary oligonucleotide are employed as the colorimetric-SERS bifunctional reporting nanotag, while magnetic nanoparticles functionalized with telomerase substrate oligonucleotide are used as the capturing substrate. Telomerase can synthesize and elongate telomeric repeats onto the capturing substrate. The elongated telomeric repeats subsequently facilitate capturing of the reporting nanotag via hybridization between telomeric repeat and its complementary strand. The captured nanotags can cause a significant difference in the color and SERS intensity of the magnetically separated sediments. Thus both the color and SERS can be used as indicators of the telomerase activity. With fast screening ability and outstanding sensitivity, we anticipate that this method would greatly promote practical application of telomerase-based early-stage cancer diagnosis.
Estimating the Overdiagnosis Fraction in Cancer Screening | Division of Cancer Prevention
By Stuart G. Baker, 2017. This software supports the mathematical investigation into estimating the fraction of cancers detected on screening that are overdiagnosed. Reference: Baker SG, Prorok PC. Estimating the overdiagnosis fraction in cancer screening. Requirements: Mathematica Version 11 or later.
NASA Astrophysics Data System (ADS)
Zong, Shenfei; Wang, Zhuyuan; Chen, Hui; Hu, Guohua; Liu, Min; Chen, Peng; Cui, Yiping
2014-01-01
As an important biomarker and therapeutic target, telomerase has attracted considerable attention concerning its detection and monitoring. Here, we present a colorimetry and surface enhanced Raman scattering (SERS) dual-mode telomerase activity detection method, which has several distinctive advantages. First, colorimetric functionality allows rapid preliminary discrimination of telomerase activity by the naked eye. Second, the employment of SERS technique results in greatly improved detection sensitivity. Third, the combination of colorimetry and SERS into one detection system can ensure highly efficacious and sensitive screening of numerous samples. Besides, the avoidance of polymerase chain reaction (PCR) procedures further guarantees fine reliability and simplicity. Generally, the presented method is realized by an "elongate and capture" procedure. To be specific, gold nanoparticles modified with Raman molecules and telomeric repeat complementary oligonucleotide are employed as the colorimetric-SERS bifunctional reporting nanotag, while magnetic nanoparticles functionalized with telomerase substrate oligonucleotide are used as the capturing substrate. Telomerase can synthesize and elongate telomeric repeats onto the capturing substrate. The elongated telomeric repeats subsequently facilitate capturing of the reporting nanotag via hybridization between telomeric repeat and its complementary strand. The captured nanotags can cause a significant difference in the color and SERS intensity of the magnetically separated sediments. Thus both the color and SERS can be used as indicators of the telomerase activity. With fast screening ability and outstanding sensitivity, we anticipate that this method would greatly promote practical application of telomerase-based early-stage cancer diagnosis. Electronic supplementary information (ESI) available: TEM images of individual MB@Au NPs, results of dynamic light scattering analysis and extinction spectrum obtained using colorimetry detection. See DOI: 10.1039/c3nr04942f
Student Perceptions of Online Tutoring Videos
ERIC Educational Resources Information Center
Sligar, Steven R.; Pelletier, Christopher D.; Bonner, Heidi Stone; Coghill, Elizabeth; Guberman, Daniel; Zeng, Xiaoming; Newman, Joyce J.; Muller, Dorothy; Dennis, Allen
2017-01-01
Online tutoring is made possible by using videos to replace or supplement face to face services. The purpose of this research was to examine student reactions to the use of lecture capture technology in a university tutoring setting and to assess student knowledge of some features of Tegrity lecture capture software. A survey was administered to…
NASA Technical Reports Server (NTRS)
Dunne, Matthew J.
2011-01-01
The development of computer software as a tool to generate visual displays has led to an overall expansion of automated computer generated images in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this computer software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). In order for Mission Control Center personnel to be a contributing factor in the VV berthing process, computer software similar to that on the ISS must be readily available on the ground to be used for analysis. In addition, this software must perform engineering calculations and save data for further analysis.
Bawankar, Pritam; Shanbhag, Nita; K., S. Smitha; Dhawan, Bodhraj; Palsule, Aratee; Kumar, Devesh; Chandel, Shailja
2017-01-01
Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate and requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity for the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm, and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image: these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95% respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images, and can greatly simplify screening for DR. This also has major implications for telemedicine in the use of screening for retinopathy in patients with diabetes mellitus. PMID:29281690
Bawankar, Pritam; Shanbhag, Nita; K, S Smitha; Dhawan, Bodhraj; Palsule, Aratee; Kumar, Devesh; Chandel, Shailja; Sood, Suneet
2017-01-01
Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate and requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity for the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm, and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image: these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95% respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images, and can greatly simplify screening for DR. This also has major implications for telemedicine in the use of screening for retinopathy in patients with diabetes mellitus.
Geologic and Landuse Controls of the Risk for Domestic Well Pollution from Septic Tank Leachate
NASA Astrophysics Data System (ADS)
Horn, J.; Harter, T.
2006-12-01
A highly resolved three-dimensional groundwater model containing a domestic drinking water well and its surrounding gravel pack is simulated with MODFLOW. Typical recharge rates, domestic well depths and well sealing lengths are obtained by analyzing well log data from eastern Stanislaus County, California, an area with a significant rural and suburban population relying on domestic wells and septic tank systems. The domestic well model is run for a range of hydraulic conductivities of both the gravel pack and the aquifer. Reverse particle tracking with MODPATH 3D is carried out to determine the capture zone of the well as a function of hydraulic conductivity. The resulting capture zone is divided into two areas: particles entering the top of the well screen represent water that flows downward through the gravel pack from below the well seal and above the well screen. The source area associated with these particles forms a narrow well-ward elongation of the main capture zone, which itself corresponds to particles flowing horizontally across the gravel pack into the well screen. The properties of the modeled capture zones are compared to existing analytical capture zone models. A clear influence of the gravel pack on capture zone shape and size is shown. Using the information on capture zone geometry, a risk assessment tool is developed to estimate the chance that a domestic well capture zone intersects at least one septic tank drainfield in a checkerboard of rural or suburban lots of a given size, but with random drainfield and domestic well placement. Risk is computed as a function of aquifer and gravel pack hydraulic conductivity, and as a function of lot size. We show the risk of collocation of a septic tank leach field with a domestic well capture zone for various scenarios. This risk is generally highest for high hydraulic conductivities of the gravel pack and the aquifer, limited anisotropy, and higher septic system densities. Under typical conditions, the risk of septic leachate reaching a domestic well is significant and may range from 5% to over 50%.
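For the comparison with analytical models mentioned above, the classical capture-zone expressions for a fully penetrating well in uniform regional flow can be evaluated directly; the sketch below uses arbitrary illustrative parameter values and is not the study's numerical model.

```python
# Classical capture-zone geometry for a single well in uniform regional flow
# (fully penetrating well; parameter values are arbitrary illustration only).
import math

Q = 2.0          # pumping rate, m^3/day (small domestic well, example value)
K = 10.0         # aquifer hydraulic conductivity, m/day
i = 0.001        # regional hydraulic gradient, dimensionless
b = 30.0         # saturated thickness contributing to the well, m

q = K * i                                   # Darcy flux of the regional flow, m/day
x_stagnation = Q / (2 * math.pi * b * q)    # downgradient stagnation-point distance
width_at_well = Q / (2 * b * q)             # capture-zone width at the well
width_max = Q / (b * q)                     # asymptotic (maximum) capture-zone width

print(f"stagnation point {x_stagnation:.1f} m downgradient, "
      f"width {width_at_well:.1f} m at the well, {width_max:.1f} m far upgradient")
```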
Approaches to virtual screening and screening library selection.
Wildman, Scott A
2013-01-01
The ease of access to virtual screening (VS) software in recent years has resulted in a large increase in literature reports. Over 300 publications in the last year report the use of virtual screening techniques to identify new chemical matter or present the development of new virtual screening techniques. The increased use is accompanied by a corresponding increase in misuse and misinterpretation of virtual screening results. This review aims to identify many of the common difficulties associated with virtual screening and allow researchers to better assess the reliability of their virtual screening effort.
Model-based engineering for medical-device software.
Ray, Arnab; Jetley, Raoul; Jones, Paul L; Zhang, Yi
2010-01-01
This paper demonstrates the benefits of adopting model-based design techniques for engineering medical device software. By using a patient-controlled analgesic (PCA) infusion pump as a candidate medical device, the authors show how using models to capture design information allows for (i) fast and efficient construction of executable device prototypes, (ii) creation of a standard, reusable baseline software architecture for a particular device family, (iii) formal verification of the design against safety requirements, and (iv) creation of a safety framework that reduces verification costs for future versions of the device software.
Image processing system design for microcantilever-based optical readout infrared arrays
NASA Astrophysics Data System (ADS)
Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu
2012-12-01
Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory indicates that the technology offers high thermal detection sensitivity, so it has broad application prospects in the field of high-performance infrared detection. The paper mainly focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build our image processing core hardware platform around TI's high-performance TMS320DM642 DSP, and design our image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver based on TI's class/mini-driver model and our network output program based on the NDK kit, for image capturing, processing, and transmission. Experiments show that the system offers high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.
NASA Astrophysics Data System (ADS)
Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake
2003-07-01
4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.
List, Markus
2017-06-10
Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach using the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
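As a hedged illustration of the approach (service and image names are hypothetical, not the actual screening platform's configuration), a Docker Compose file of roughly the following shape lets several containers plus shared infrastructure come up together with a single `docker compose up -d` after cloning the repository:

```yaml
# Hypothetical docker-compose.yml sketch: two illustrative services sharing one
# database, brought up together with `docker compose up -d`. Image and service
# names are placeholders, not the platform described above.
version: "3.8"
services:
  webapp:
    image: example/screening-webapp:latest   # placeholder image name
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example             # do not hard-code real secrets
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```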
Applying Trustworthy Computing to End-to-End Electronic Voting
ERIC Educational Resources Information Center
Fink, Russell A.
2010-01-01
"End-to-End (E2E)" voting systems provide cryptographic proof that the voter's intention is captured, cast, and tallied correctly. While E2E systems guarantee integrity independent of software, most E2E systems rely on software to provide confidentiality, availability, authentication, and access control; thus, end-to-end integrity is not…
The Toxicity Estimation Software Tool (T.E.S.T.)
The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to estimate toxicological values for aquatic and mammalian species considering acute and chronic endpoints for screening purposes within TSCA and REACH programs.
Predicting Software Suitability Using a Bayesian Belief Network
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.
2005-01-01
The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
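The paper's actual network structure and probability tables are not reproduced here; as an illustration of the idea, the sketch below uses the pgmpy library to let the three hypothesized driving factors feed a single suitability node. All probabilities are made-up.

```python
# Toy Bayesian belief network in the spirit of the model described above
# (structure simplified; all probabilities are made-up illustrations).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("TeamSkill", "Suitability"),
                         ("ProcessMaturity", "Suitability"),
                         ("ProblemComplexity", "Suitability")])

cpd_skill = TabularCPD("TeamSkill", 2, [[0.6], [0.4]])            # 0 = low, 1 = high
cpd_process = TabularCPD("ProcessMaturity", 2, [[0.5], [0.5]])
cpd_complex = TabularCPD("ProblemComplexity", 2, [[0.7], [0.3]])
cpd_suit = TabularCPD(
    "Suitability", 2,
    # P(Suitability | TeamSkill, ProcessMaturity, ProblemComplexity), 8 parent states
    [[0.9, 0.7, 0.8, 0.5, 0.6, 0.4, 0.5, 0.2],    # row: Suitability = 0 (unsuitable)
     [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.5, 0.8]],   # row: Suitability = 1 (suitable)
    evidence=["TeamSkill", "ProcessMaturity", "ProblemComplexity"],
    evidence_card=[2, 2, 2])

model.add_cpds(cpd_skill, cpd_process, cpd_complex, cpd_suit)
assert model.check_model()

infer = VariableElimination(model)
print(infer.query(["Suitability"], evidence={"TeamSkill": 1, "ProcessMaturity": 1}))
```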
Model Driven Engineering with Ontology Technologies
NASA Astrophysics Data System (ADS)
Staab, Steffen; Walter, Tobias; Gröner, Gerd; Parreiras, Fernando Silva
Ontologies constitute formal models of some aspect of the world that may be used for drawing interesting logical conclusions even for large models. Software models capture relevant characteristics of a software artifact to be developed, yet, most often these software models have limited formal semantics, or the underlying (often graphical) software language varies from case to case in a way that makes it hard if not impossible to fix its semantics. In this contribution, we survey the use of ontology technologies for software modeling in order to carry over advantages from ontology technologies to the software modeling domain. It will turn out that ontology-based metamodels constitute a core means for exploiting expressive ontology reasoning in the software modeling domain while remaining flexible enough to accommodate varying needs of software modelers.
NASA Astrophysics Data System (ADS)
Pavlis, Terry; Hurtado, Jose; Langford, Richard; Serpa, Laura
2014-05-01
Although many geologists refuse to admit it, it is time to put paper-based geologic mapping into the historical archives and move to the full potential of digital mapping techniques. For our group, flat map digital geologic mapping is now a routine operation in both research and instruction. Several software options are available, and basic proficiency with the software can be learned in a few hours of instruction and practice. The first practical field GIS software, ArcPad, remains a viable, stable option on Windows-based systems. However, the vendor seems to be moving away from ArcPad in favor of mobile software solutions that are difficult to implement without GIS specialists. Thus, we have pursued a second software option based on the open source program QGIS. Our QGIS system uses the same shapefile-centric data structure as our ArcPad system, including similar pop-up data entry forms and generic graphics for easy data management in the field. The advantage of QGIS is that the same software runs on virtually all common platforms except iOS, although the Android version remains unstable as of this writing. A third software option we are experimenting with for flat map-based field work is Fieldmove, a derivative of the 3D-capable program Move developed by Midland Valley. Our initial experiments with Fieldmove are positive, particularly with the new, inexpensive (<300 euros) Windows tablets. However, the lack of flexibility in data structure makes for cumbersome workflows when trying to interface our existing shapefile-centric data structures to Move. Nonetheless, in spring 2014 we will experiment with full-3D immersion in the field using the full Move software package in combination with ground based LiDAR and photogrammetry. One new workflow suggested by our initial experiments is that field geologists should consider using photogrammetry software to capture 3D visualizations of key outcrops. This process is now straightforward in several software packages, and it affords a previously unheard-of potential for communicating the complexity of key exposures. For example, in studies of metamorphic structures we often search for days to find "Rosetta Stone" outcrops that display key geometric relationships. While conventional photographs rarely can capture the essence of the field exposure, capturing a true 3D representation of the exposure with multiple photos from many orientations can solve this communication problem. As spatial databases evolve, these 3D models should be readily importable into the database.
Operational experience with DICOM for the clinical specialties in the healthcare enterprise
NASA Astrophysics Data System (ADS)
Kuzmak, Peter M.; Dayhoff, Ruth E.
2004-04-01
A number of clinical specialties routinely use images in treating patients, for example ophthalmology, dentistry, cardiology, endoscopy, and surgery. These images are captured by a variety of commercial digital image acquisition systems. The US Department of Veterans Affairs has been working for several years on advancing the use of the Digital Imaging and Communications in Medicine (DICOM) Standard in these clinical specialties. This is an effort that has involved several facets: (1) working with the vendors to ensure that they satisfy existing DICOM requirements, (2) developing interface software to the VistA hospital information system (HIS), (3) field testing DICOM systems, (4) deploying these DICOM interfaces nation-wide to all VA medical centers, (5) working with the healthcare providers using the system, and (6) participating in the DICOM working groups to improve the standard. The VA is now beginning to develop clinical applications that make use of the DICOM interfaces in the clinical specialties. The first of these will be in ophthalmology to remotely screen patients for diabetic retinopathy.
NASA Astrophysics Data System (ADS)
Sulistyo Dwi K., P.; Arindra Trisna, W.; Vindri Catur P., W.; Wijayanti, Erna; Ichsan, Mochammad
2016-03-01
One of the efforts to prevent Alzheimer's disease from becoming more severe is to inhibit the activity of the human acetylcholinesterase enzyme (PDB ID: 4BDT). In this study, virtual screening against 885 natural compounds from AfroDB was performed using MTIOpenScreen, and this step successfully identified ZINC15121024 (-12.9) and ZINC95486216 (-12.7) as the top-ranked compounds. These results were then strengthened by a second docking step using the AutoDock software integrated into PyRx 0.8. From this stage, ZINC95486216 (-11.3 kcal/mol) was the compound with the most negative binding affinity compared with four Alzheimer's drugs that have been officially used to date, including Rivastigmine (-6.3 kcal/mol), Donepezil (-7.9 kcal/mol), Galantamine (-8.4 kcal/mol), and Huprine W (-7.3 kcal/mol). In addition, based on 2D and 3D visualization using the LigPlus and PyMol software, respectively, the five compounds above were all found to be capable of binding to several amino acids (Trp286, Phe295, and Tyr341) located in the active site of the human acetylcholinesterase enzyme.
Nugen, Sam R; Leonard, Barbara; Baeumner, Antje J
2007-05-15
We developed a software program for the rapid selection of detection probes to be used in nucleic acid-based assays. In comparison to commercially available software packages, our program allows the addition of oligotags as required by nucleic acid sequence-based amplification (NASBA), as well as automatic BLAST searches for all probe/primer pairs. We then demonstrated the usefulness of the program by designing a novel lateral flow biosensor for Streptococcus pyogenes that does not rely on amplification methods such as the polymerase chain reaction (PCR) or NASBA to obtain low limits of detection, but instead uses multiple reporter and capture probes per target sequence and instantaneous amplification via dye-encapsulating liposomes. These assays decrease the detection time to just a 20 min hybridization reaction and avoid costly enzymatic gene amplification reactions. The lateral flow assay was developed to quantify the 16S rRNA of S. pyogenes by designing reporter and capture probes that specifically hybridize with the RNA and form a sandwich. DNA reporter probes were tagged with dye-encapsulating liposomes, and biotinylated DNA oligonucleotides were used as capture probes. From the initial set of capture and reporter probes chosen, a combination of two capture and three reporter probes was found to provide optimal signal generation and significant enhancement over single capture/reporter probe combinations. The selectivity of the biosensor was proven by analyzing organisms closely related to S. pyogenes, such as other Streptococcus and Enterococcus species. All probes had been selected by the software program within minutes, and no iterative optimization or redesign of the oligonucleotides was required, which enabled very rapid biosensor prototyping. While the sensitivity obtained with the biosensor was only 135 ng, future experiments will decrease this significantly by the addition of more reporter and capture probes for either the same rRNA or a different nucleic acid target molecule. This will lead to the possibility of detecting S. pyogenes with a rugged assay that does not require a cell culturing or gene amplification step and will therefore enable rapid, specific and sensitive onsite testing.
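The probe-selection program itself is not listed above; as an illustration of the kind of screening it performs, the hedged sketch below uses Biopython to compute nearest-neighbour melting temperatures and to check that a capture/reporter pair binds non-overlapping regions of a target fragment (all sequences are arbitrary placeholders).

```python
# Illustrative probe screening (not the authors' program): Tm calculation and a
# simple binding-site check for one capture/reporter pair against a target fragment.
from Bio.Seq import Seq
from Bio.SeqUtils import MeltingTemp as mt

target = Seq("GGAGTACGACCGCAAGGTTGAAACTCAAAGGAATTGACGGGGGCCCGCAC")  # placeholder fragment

capture = Seq("GTTTCAACCTTGCGGTCGTA")     # arbitrary example probes, written 5'->3'
reporter = Seq("GTGCGGGCCCCCGTCAATTC")

for name, probe in [("capture", capture), ("reporter", reporter)]:
    site = str(target).find(str(probe.reverse_complement()))   # -1 if no binding site
    print(f"{name}: Tm = {mt.Tm_NN(probe):.1f} C, binds target at position {site}")
```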
The field analytical screening program (FASP) polychlorinated biphenyl (PCB) method uses a temperature-programmable gas chromatograph (GC) equipped with an electron capture detector (ECD) to identify and quantify PCBs. Gas chromatography is an EPA-approved method for determi...
FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT
The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...
DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.
Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques
2008-09-08
Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
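The wrapper-script rescoring interface mentioned above suggests a simple general pattern; the hedged sketch below (not DOVIS code) iterates over an sd-file with RDKit and hands each pose to an external scorer, where "my_scorer" is a hypothetical placeholder command rather than a real tool.

```python
# Generic rescoring-wrapper pattern (illustration only, not DOVIS code): read docked
# poses from an sd-file and pass each one to an external scoring program.
import os
import subprocess
import tempfile
from rdkit import Chem

def rescore_sdf(sdf_path, scorer_cmd=("my_scorer", "--ligand")):   # hypothetical command
    scores = {}
    for mol in Chem.SDMolSupplier(sdf_path, removeHs=False):
        if mol is None:
            continue                          # skip unparsable records
        with tempfile.NamedTemporaryFile(suffix=".sdf", delete=False) as tmp:
            writer = Chem.SDWriter(tmp.name)
            writer.write(mol)
            writer.close()
        result = subprocess.run([*scorer_cmd, tmp.name], capture_output=True, text=True)
        name = mol.GetProp("_Name") if mol.HasProp("_Name") else "unnamed"
        scores[name] = result.stdout.strip()  # keep whatever the scorer printed
        os.unlink(tmp.name)
    return scores
```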
Bruland, Philipp; Dugas, Martin
2017-01-07
Data capture for clinical registries or pilot studies is often performed in spreadsheet-based applications like Microsoft Excel or IBM SPSS, and the data are usually transferred into statistics software, such as SAS, R or IBM SPSS Statistics, for analysis afterwards. Spreadsheet-based solutions suffer from several drawbacks: it is generally not possible to ensure sufficient rights and role management, and it is not traced who has changed data, when, and why. Therefore, such systems are not able to comply with regulatory requirements for electronic data capture in clinical trials. In contrast, Electronic Data Capture (EDC) software enables a reliable, secure and auditable collection of data. In this regard, most EDC vendors support the CDISC ODM standard to define, communicate and archive clinical trial meta- and patient data. Advantages of EDC systems are support for multi-user and multicenter clinical trials as well as auditable data. Migration from spreadsheet-based data collection to EDC systems is currently labor-intensive and time-consuming. Hence, the objectives of this work were to develop a mapping model and implement a converter between the IBM SPSS and CDISC ODM standards and to evaluate this approach regarding syntactic and semantic correctness. A mapping model between IBM SPSS and CDISC ODM data structures was developed. SPSS variables and patient values can be mapped and converted into ODM. Statistical and display attributes from SPSS do not correspond to any ODM elements, and study-related ODM elements are not available in SPSS. The S2O conversion tool was implemented as a command-line tool using the SPSS-internal Java plugin. Syntactic and semantic correctness was validated with different ODM tools and by reverse transformation from ODM into the SPSS format. Clinical data values were also successfully transformed into the ODM structure. Transformation between the IBM SPSS spreadsheet format and the ODM standard for definition and exchange of trial data is feasible. S2O facilitates migration from Excel- or SPSS-based data collections towards reliable EDC systems, making the advantages of EDC systems, such as a reliable software architecture for secure and traceable data collection and, in particular, compliance with regulatory requirements, achievable.
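As a rough, hedged illustration of the kind of transformation S2O performs (this is not the S2O code, and the ODM document is reduced to a minimal fragment), the sketch below turns a few SPSS-style variable definitions into CDISC ODM ItemDef elements; the SPSS-format-to-ODM-data-type table is an assumption.

```python
# Rough illustration of an SPSS-variable -> ODM ItemDef conversion (not the S2O code).
import xml.etree.ElementTree as ET

ODM_NS = "http://www.cdisc.org/ns/odm/v1.3"
SPSS_TO_ODM_TYPE = {"F8.2": "float", "F8.0": "integer", "A50": "text"}  # assumed mapping

spss_variables = [                      # toy variable dictionary, as exported from SPSS
    {"name": "AGE", "label": "Age at enrolment", "format": "F8.0"},
    {"name": "WEIGHT", "label": "Body weight (kg)", "format": "F8.2"},
]

ET.register_namespace("", ODM_NS)
odm = ET.Element(f"{{{ODM_NS}}}ODM")
mdv = ET.SubElement(odm, f"{{{ODM_NS}}}MetaDataVersion", OID="MDV.1", Name="S2O demo")
for var in spss_variables:
    ET.SubElement(mdv, f"{{{ODM_NS}}}ItemDef",
                  OID=f"I.{var['name']}", Name=var["label"],
                  DataType=SPSS_TO_ODM_TYPE.get(var["format"], "text"))

print(ET.tostring(odm, encoding="unicode"))
```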
Lecture-Capture Software and the Teaching of Soils
NASA Astrophysics Data System (ADS)
Brevik, Eric C.
2014-05-01
Several companies now offer software that can record academic lectures and place them on password-protected course websites for future review by students. Using lecture-capture software offers several advantages for the instructor and the students, including: 1) The ability for students who miss class for legitimate reasons (e.g., participation in school-sanctioned extra-curricular activities, illness or family emergencies) to get lecture materials by logging into the class website. This provides these students with a more complete exposure to the material than simply copying a classmate's notes. 2) The instructor is able to direct students who miss class for legitimate reasons to the recorded lecture rather than needing to spend time going over the material with those students and that recap does not end up being rushed. 3) The ability to address course conflicts for graduating seniors by allowing them to take the lecture portion of the class via recorded lecture. 4) Students who desire more in-depth learning are able to go back to selected portions of previous lectures to review and reconsider a topic of discussion or to fill in vague sections of their notes. There are also potential disadvantages to the use of lecture-capture software, including: 1) decreased student attendance in class because they feel they can watch class later at a time of their own choosing, 2) additional time spent by the instructor dealing with the technology, and 3) problems with hardware or software during class time that prevents recording a given day's lecture. These problems can often be addressed or justified relatively easily. If problem 1 is of concern to an instructor it can be addressed by blocking online access to individual students who have a poor record of class attendance. In the case of problem 2, the extra time spent with the technology is often offset by a reduction in time answering questions from students who have missed class. Problem 3 does happen, but in the author's experience it is fairly rare, representing less than 5% of class sessions per semester. Student comments have been overwhelmingly favorable towards the use of captured lectures since the technology was first adopted in the author's classes in 2009.
USDA-ARS?s Scientific Manuscript database
The ability to reliably analyze cellular and molecular profiles of normal or diseased tissues is frequently obfuscated by the inherent heterogeneous nature of tissues. Laser Capture Microdissection (LCM) is an innovative technique that allows the isolation and enrichment of pure subpopulations of c...
FAIMS Mobile: Flexible, open-source software for field research
NASA Astrophysics Data System (ADS)
Ballsun-Stanton, Brian; Ross, Shawn A.; Sobotkova, Adela; Crook, Penny
2018-01-01
FAIMS Mobile is a native Android application supported by an Ubuntu server facilitating human-mediated field research across disciplines. It consists of 'core' Java and Ruby software providing a platform for data capture, which can be deeply customised using 'definition packets' consisting of XML documents (data schema and UI) and Beanshell scripts (automation). Definition packets can also be generated using an XML-based domain-specific language, making customisation easier. FAIMS Mobile includes features allowing rich and efficient data capture tailored to the needs of fieldwork. It also promotes synthetic research and improves transparency and reproducibility through the production of comprehensive datasets that can be mapped to vocabularies or ontologies as they are created.
Lyon, Jennifer A; Garcia-Milian, Rolando; Norton, Hannah F; Tennant, Michele R
2014-01-01
Expert-mediated literature searching, a keystone service in biomedical librarianship, would benefit significantly from regular methodical review. This article describes the novel use of Research Electronic Data Capture (REDCap) software to create a database of literature searches conducted at a large academic health sciences library. An archive of paper search requests was entered into REDCap, and librarians now prospectively enter records for current searches. Having search data readily available allows librarians to reuse search strategies and track their workload. In aggregate, this data can help guide practice and determine priorities by identifying users' needs, tracking librarian effort, and focusing librarians' continuing education.
Applying Minimal Manual Principles for Documentation of Graphical User Interfaces.
ERIC Educational Resources Information Center
Nowaczyk, Ronald H.; James, E. Christopher
1993-01-01
Investigates the need to include computer screens in documentation for software using a graphical user interface. Describes the uses and purposes of "minimal manuals" and their principles. Studies student reaction to their use of one of three on-screen manuals: screens, icon, and button. Finds some benefit for including icon and button…
Designing a Visual Factors-Based Screen Display Interface: The New Role of the Graphic Technologist.
ERIC Educational Resources Information Center
Faiola, Tony; DeBloois, Michael L.
1988-01-01
Discusses the role of the graphic technologist in preparing computer screen displays for interactive videodisc systems, and suggests screen design guidelines. Topics discussed include the grid system; typography; visual factors research; color; course mobility through branching and software menus; and a model of course integration. (22 references)…
In silico screening of carbon-capture materials
NASA Astrophysics Data System (ADS)
Lin, Li-Chiang; Berger, Adam H.; Martin, Richard L.; Kim, Jihan; Swisher, Joseph A.; Jariwala, Kuldeep; Rycroft, Chris H.; Bhown, Abhoyjit S.; Deem, Michael W.; Haranczyk, Maciej; Smit, Berend
2012-07-01
One of the main bottlenecks to deploying large-scale carbon dioxide capture and storage (CCS) in power plants is the energy required to separate the CO2 from flue gas. For example, near-term CCS technology applied to coal-fired power plants is projected to reduce the net output of the plant by some 30% and to increase the cost of electricity by 60-80%. Developing capture materials and processes that reduce the parasitic energy imposed by CCS is therefore an important area of research. We have developed a computational approach to rank adsorbents for their performance in CCS. Using this analysis, we have screened hundreds of thousands of zeolite and zeolitic imidazolate framework structures and identified many different structures that have the potential to reduce the parasitic energy of CCS by 30-40% compared with near-term technologies.
Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F
2018-01-01
Ensuring reliable administration and reporting of cognitive screening tests is fundamental to good clinical practice and research. This study captured the rate and type of errors in clinical practice using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or had arithmetical errors. In study 2, error rates seen in the ACE-III were reduced by 85%-93% using ACEmobile. Error rates are ubiquitous in routine clinical use of cognitive screening tests and the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.
ERIC Educational Resources Information Center
Pendzick, Richard E.; Downs, Robert L.
2002-01-01
Describes software for electronic visitor management (EVM) called EasyLobby™, currently in use in thousands of federal and corporate installations throughout the world, and its application for school and campus environments. Explains EasyLobby™'s use to replace visitor logs, capture and store visitor data electronically, and provide badges that…
Warden, Graham I.; Farkas, Cameron E.; Ikuta, Ichiro; Prevedello, Luciano M.; Andriole, Katherine P.; Khorasani, Ramin
2012-01-01
Purpose: To develop and validate an informatics toolkit that extracts anatomy-specific computed tomography (CT) radiation exposure metrics (volume CT dose index and dose-length product) from existing digital image archives through optical character recognition of CT dose report screen captures (dose screens) combined with Digital Imaging and Communications in Medicine attributes. Materials and Methods: This institutional review board–approved HIPAA-compliant study was performed in a large urban health care delivery network. Data were drawn from a random sample of CT encounters that occurred between 2000 and 2010; images from these encounters were contained within the enterprise image archive, which encompassed images obtained at an adult academic tertiary referral hospital and its affiliated sites, including a cancer center, a community hospital, and outpatient imaging centers, as well as images imported from other facilities. Software was validated by using 150 randomly selected encounters for each major CT scanner manufacturer, with outcome measures of dose screen retrieval rate (proportion of correctly located dose screens) and anatomic assignment precision (proportion of extracted exposure data with correctly assigned anatomic region, such as head, chest, or abdomen and pelvis). The 95% binomial confidence intervals (CIs) were calculated for discrete proportions, and CIs were derived from the standard error of the mean for continuous variables. After validation, the informatics toolkit was used to populate an exposure repository from a cohort of 54 549 CT encounters, of which 29 948 had available dose screens. Results: Validation yielded a dose screen retrieval rate of 99% (597 of 605 CT encounters; 95% CI: 98%, 100%) and an anatomic assignment precision of 94% (summed DLP fraction correct, 563 of 600 CT encounters; 95% CI: 92%, 96%). Patient safety applications of the resulting data repository include benchmarking between institutions, CT protocol quality control and optimization, and cumulative patient- and anatomy-specific radiation exposure monitoring. Conclusion: Large-scale anatomy-specific radiation exposure data repositories can be created with high fidelity from existing digital image archives by using open-source informatics tools. ©RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12111822/-/DC1 PMID:22668563
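The general idea of combining OCR of dose screens with DICOM attributes can be sketched as follows; this is not the authors' validated toolkit, and the packages (pydicom, Pillow, pytesseract) and the regular expressions are assumptions for illustration.

```python
# Minimal sketch of dose-screen text extraction (not the authors' validated toolkit).
# Assumes 'pydicom', 'Pillow' and 'pytesseract' are installed and that the dose
# screen is a readable secondary-capture DICOM image; the regular expressions are
# simplified examples of CTDIvol / DLP patterns.
import re
import numpy as np
import pydicom
import pytesseract
from PIL import Image

def extract_dose_metrics(dose_screen_path):
    ds = pydicom.dcmread(dose_screen_path)
    pixels = ds.pixel_array.astype(np.float32)
    # Normalize to 8-bit for OCR.
    lo, hi = float(pixels.min()), float(pixels.max())
    pixels = ((pixels - lo) / max(hi - lo, 1.0) * 255).astype(np.uint8)
    text = pytesseract.image_to_string(Image.fromarray(pixels))

    ctdi = re.findall(r"CTDIvol\D*([\d.]+)", text, flags=re.IGNORECASE)
    dlp = re.findall(r"DLP\D*([\d.]+)", text, flags=re.IGNORECASE)
    return {
        "StudyDate": ds.get("StudyDate", ""),   # DICOM attributes complement the OCR text
        "CTDIvol_mGy": [float(v) for v in ctdi],
        "DLP_mGycm": [float(v) for v in dlp],
    }
```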
Clinical Validation of Anyplex II HPV HR Detection Test for Cervical Cancer Screening in Korea.
Jung, Sunkyung; Lee, Byungdoo; Lee, Kap No; Kim, Yonggoo; Oh, Eun-Jee
2016-03-01
The Anyplex II HPV HR detection kit (Seegene Inc, Seoul, Korea) is a new, multiplex, real-time polymerase chain reaction assay to detect individual 14 high-risk (HR) human papillomavirus (HPV) types in a single tube. To evaluate the clinical performance of the HPV HR kit in predicting high-grade squamous intraepithelial lesions and cervical intraepithelial lesions grade 2 or worse in cervical cancer screening. We analyzed 1137 cervical samples in Huro Path medium (CelltraZone, Seoul, Korea) from Korean women. The clinical performance of the HPV HR kit was compared with Hybrid Capture 2 (Qiagen, Valencia, California) using the noninferiority score test in a routine cervical cancer screening setting. The intralaboratory and interlaboratory agreements of HPV HR were also evaluated. Overall agreement between the 2 assays was 92.4% (1051 of 1137) with a κ value of 0.787. Clinical sensitivity of HPV HR for high-grade squamous intraepithelial lesions and cervical intraepithelial lesions grade 2 or worse was 94.4% (95% confidence interval [CI], 89.2-99.7) and 92.5% (95% CI, 84.3-100.0), respectively. The respective values for Hybrid Capture 2 were 93.1% (95% CI, 87.2-98.9) and 87.5% (95% CI, 77.3-99.7). Clinical sensitivity and specificity of HPV HR were not inferior to those of Hybrid Capture 2 (P = .005 and P = .04, respectively). The HPV HR showed good intralaboratory and interlaboratory reproducibility at 98.0% (κ = 0.953) and 97.4% (κ = 0.940), respectively. The HPV HR demonstrates comparable performance to the Hybrid Capture 2 test and can be useful for HPV-based cervical cancer screening testing.
IMCS reflight certification requirements and design specifications
NASA Technical Reports Server (NTRS)
1984-01-01
The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMC Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.
Towards Archetypes-Based Software Development
NASA Astrophysics Data System (ADS)
Piho, Gunnar; Roost, Mart; Perkins, David; Tepandi, Jaak
We present a framework for the archetype-based engineering of domains, requirements and software (Archetypes-Based Software Development, ABD). An archetype is defined as a primordial object that occurs consistently and universally in business domains and in business software systems. An archetype pattern is a collaboration of archetypes. Archetypes and archetype patterns are used to capture conceptual information into domain-specific models that are utilized by ABD. The focus of ABD is on software factories: family-based development artefacts (domain-specific languages, patterns, frameworks, tools, micro processes, and others) that can be used to build the family members. We demonstrate the usage of ABD for developing laboratory information management system (LIMS) software for the Clinical and Biomedical Proteomics Group at the Leeds Institute of Molecular Medicine, University of Leeds.
Akbari, Samin; Pirbodaghi, Tohid
2014-09-07
High throughput heterogeneous immunoassays that screen antigen-specific antibody secreting cells are essential to accelerate monoclonal antibody discovery for therapeutic applications. Here, we introduce a heterogeneous single cell immunoassay based on alginate microparticles as permeable cell culture chambers. Using a microfluidic device, we encapsulated single antibody secreting cells in 35-40 μm diameter alginate microbeads. We functionalized the alginate to capture the secreted antibodies inside the microparticles, enabling single cell analysis and preventing the cross-talk between the neighboring encapsulated cells. We demonstrated non-covalent functionalization of alginate microparticles by adding three secondary antibodies to the alginate solution to form high molecular weight complexes that become trapped in the porous nanostructure of alginate and capture the secreted antibodies. We screened anti-TNF-alpha antibody-secreting cells from a mixture of antibody-secreting cells.
Herbst, Christian T; Oh, Jinook; Vydrová, Jitka; Švec, Jan G
2015-07-01
In this short report we introduce DigitalVHI, a free open-source software application for obtaining Voice Handicap Index (VHI) and other questionnaire data, which can be put on a computer in clinics and used in clinical practice. The software can simplify performing clinical studies since it makes the VHI scores directly available for analysis in a digital form. It can be downloaded from http://www.christian-herbst.org/DigitalVHI/.
X-Eye: a novel wearable vision system
NASA Astrophysics Data System (ADS)
Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye
2011-03-01
This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small physical volume but can project a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by a color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a look-up table (LUT) technique. Fingertips are then extracted, and geometrical features of the fingertip shapes are matched to recognize the user's gesture commands. To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flicker. The whole system, including gesture recognition, runs at a frame rate of 22.9 FPS with a 99% recognition rate. The experimental results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
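The GMM-plus-lookup-table idea described above can be sketched as follows (not the X-Eye implementation); scikit-learn's GaussianMixture stands in for the trained color model, the training pixels are synthetic placeholders, and the decision threshold is hypothetical.

```python
# Minimal sketch of GMM color classification accelerated by a lookup table
# (the general idea described above, not the X-Eye implementation).
# Assumes scikit-learn and NumPy; 'skin_pixels' would come from training images.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
skin_pixels = rng.normal(loc=[180, 120, 100], scale=20, size=(5000, 3))  # placeholder training data

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(skin_pixels)

# Precompute a 32x32x32 LUT over quantized RGB so per-pixel classification
# becomes a table lookup instead of a GMM likelihood evaluation.
bins = 32
step = 256 // bins
centers = (np.arange(bins) * step + step // 2).astype(np.float64)
grid = np.stack(np.meshgrid(centers, centers, centers, indexing="ij"), axis=-1).reshape(-1, 3)
log_likelihood = gmm.score_samples(grid).reshape(bins, bins, bins)
LUT = log_likelihood > -18.0          # hypothetical decision threshold

def is_skin(frame_rgb):
    """Classify every pixel of an HxWx3 uint8 frame via the LUT."""
    q = (frame_rgb // step).astype(np.intp)
    return LUT[q[..., 0], q[..., 1], q[..., 2]]
```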
Integrated Spacesuit Audio System Enhances Speech Quality and Reduces Noise
NASA Technical Reports Server (NTRS)
Huang, Yiteng Arden; Chen, Jingdong; Chen, Shaoyan Sharyl
2009-01-01
A new approach has been proposed for increasing astronaut comfort and improving speech capture. Currently, the special design of a spacesuit creates an extreme acoustic environment, making it difficult to capture clear speech without compromising comfort. The proposed Integrated Spacesuit Audio (ISA) system incorporates the microphones into the helmet and uses software to extract voice signals from background noise.
Engineering graphics data entry for space station data base
NASA Technical Reports Server (NTRS)
Lacovara, R. C.
1986-01-01
The entry of graphical engineering data into the Space Station Data Base was examined. Discussed were: representation of graphics objects; representation of connectivity data; graphics capture hardware; graphics display hardware; site-wide distribution of graphics; and consolidation of tools and hardware. A fundamental assumption was that existing equipment such as IBM-based graphics capture software and VAX networked facilities would be exploited. Defensible conclusions reached after study and simulations of use of these systems at the engineering level are: (1) existing IBM-based graphics capture software is an adequate and economical means of entry of schematic and block diagram data for present and anticipated electronic systems for Space Station; (2) connectivity data from the aforementioned system may be incorporated into the envisioned Space Station Data Base with modest effort; (3) graphics and connectivity data captured on the IBM-based system may be exported to the VAX network in a simple and direct fashion; (4) graphics data may be displayed site-wide on VT-125 terminals and look-alikes; (5) graphics hard copy may be produced site-wide on various dot-matrix printers; and (6) the system may provide integrated engineering services at both the engineering and engineering management levels.
Coded aperture solution for improving the performance of traffic enforcement cameras
NASA Astrophysics Data System (ADS)
Masoudifar, Mina; Pourreza, Hamid Reza
2016-10-01
A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies in defocused regions. It is obtained by minimizing an objective function that computes the expected value of the perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and consequently the ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, whereas it is 11 m for the circular aperture. Moreover, since deblurring images captured with our aperture yields fewer artifacts than deblurring those captured with the circular aperture, images can first be deblurred and then analyzed by the ALPR software. In this way, the DoF and the recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m, whereas it is limited to 11 m with the conventional aperture.
Prete-moi Ton Logiciel pour Ecrire un Mot (Lend Me Your Software Program So I Can Write a Letter).
ERIC Educational Resources Information Center
Mangenot, Francois
1993-01-01
A brief discussion and description of one commercially available software package ("Pour Ecrire un Mot") for writing letters of various types uses the love letter as an example of the software's functioning. Answers to the prompting questions on the screen determine the few variable parameters of the text to be generated. (four…
Design and development of an upper extremity motion capture system for a rehabilitation robot.
Nanda, Pooja; Smith, Alan; Gebregiorgis, Adey; Brown, Edward E
2009-01-01
Human-robot interaction is a new and rapidly growing field, and its application in the realm of rehabilitation and physical care is a major focus of research worldwide. This paper discusses the development and implementation of a wireless motion capture system for the human arm which can be used for physical therapy or real-time control of a robotic arm, among many other potential applications. The system comprises a mechanical brace with rotary potentiometers inserted at the different joints to capture position data. It also contains surface electrodes which acquire electromyographic signals through the CleveMed BioRadio device. The brace interfaces with a software subsystem which displays the data signals in real time. The software includes a 3D arm model which imitates the actual movement of a subject's arm under testing. This project began as part of the Rochester Institute of Technology's Undergraduate Multidisciplinary Senior Design curriculum and has been integrated into the overall research objectives of the Biomechatronic Learning Laboratory.
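A minimal sketch of the core sensing step, converting rotary-potentiometer readings into joint angles, is shown below; the calibration constants are hypothetical, and the actual brace firmware and CleveMed BioRadio interface are not reproduced.

```python
# Minimal sketch of converting rotary-potentiometer ADC readings into joint
# angles for an arm model (hypothetical calibration values, not the actual
# brace firmware or CleveMed BioRadio interface).
from dataclasses import dataclass

@dataclass
class JointCalibration:
    adc_min: int      # ADC count at the joint's minimum angle
    adc_max: int      # ADC count at the joint's maximum angle
    angle_min: float  # degrees
    angle_max: float  # degrees

    def to_angle(self, adc_value: int) -> float:
        """Linearly interpolate an ADC reading to a joint angle in degrees."""
        span = self.adc_max - self.adc_min
        fraction = (adc_value - self.adc_min) / span
        fraction = min(max(fraction, 0.0), 1.0)          # clamp to the calibrated range
        return self.angle_min + fraction * (self.angle_max - self.angle_min)

# Example: a 10-bit ADC on the elbow joint, calibrated from 0 to 140 degrees.
elbow = JointCalibration(adc_min=80, adc_max=950, angle_min=0.0, angle_max=140.0)
print(elbow.to_angle(515))   # roughly mid-range flexion
```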
Applying Content Management to Automated Provenance Capture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuchardt, Karen L.; Gibson, Tara D.; Stephan, Eric G.
2008-04-10
Workflows and data pipelines are becoming increasingly valuable in both computational and experimental sciences. These automated systems are capable of generating significantly more data within the same amount of time than their manual counterparts. Automatically capturing and recording data provenance and annotation as part of these workflows is critical for data management, verification, and dissemination. Our goal in addressing the provenance challenge was to develop an end-to-end system that demonstrates real-time capture, persistent content management, and ad-hoc searches of both provenance and metadata using open source software and standard protocols. We describe our prototype, which extends the Kepler workflow tools for the execution environment, the Scientific Annotation Middleware (SAM) content management software for data services, and an existing HTTP-based query protocol. Our implementation offers several unique capabilities and, through the use of standards, is able to provide access to the provenance record to a variety of commonly available client tools.
Software Design Methodology Migration for a Distributed Ground System
NASA Technical Reports Server (NTRS)
Ritter, George; McNair, Ann R. (Technical Monitor)
2002-01-01
The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has been developed and has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes. The new software processes still emphasize requirements capture, software configuration management, design documentation, and making sure the products that have been developed are accountable to the initial requirements. This paper gives an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we mention the COTS tools that have been integrated into the processes and how the COTS products have provided value to the project.
Applications of Mobile GIS in Forestry South Australia
NASA Astrophysics Data System (ADS)
Battad, D. T.; Mackenzie, P.
2012-07-01
South Australian Forestry Corporation (ForestrySA) has been actively investigating applications of mobile GIS in forestry for the past few years. The main objective is to develop an integrated mobile GIS capability that allows staff to collect new spatial information, verify existing data, and remotely access and post data from the field. Two prototype mobile GIS applications have already been developed using Environmental Systems Research Institute (ESRI) ArcGIS® technology as the main spatial component: the Forest Health Surveillance System and the Mobile GIS for Wetlands System. The Forest Health Surveillance System prototype is used primarily for aerial forest health surveillance. It was developed on a tablet PC with ArcMap® GIS. A customised toolbar was developed using ArcObjects® in the Visual Basic 6 Integrated Development Environment (IDE). The resulting dynamic linked library provides a suite of custom tools which enables the following: rapid creation of spatial features and attribution of data; full utilisation of global positioning system (GPS) technology; excellent screen display navigation tools (pan, rotate map, capture of flight path); seamless integration of data into GIS as geodatabase (GDB) feature classes; and screen entry of text with conversion to annotation feature classes. The Mobile GIS for Wetlands System prototype was developed for verifying existing wetland areas within ForestrySA's plantation estate, collecting new wetland data, and recording wetland conditions. Mapping of actual wetlands within ForestrySA's plantation estate is critical because of the need to establish protection buffers around these features during plantation operations. System development has focussed on a mobile phone platform (HTC HD2®) with Windows® Mobile 6, ESRI's ArcGIS® Mobile software development kit (SDK) employing ArcObjects® written in the C#.NET IDE, and ArcGIS Server® technology. The system is also implemented on the VILIV® X70. It has undergone testing by ForestrySA staff, and the resulting refinements have been incorporated in the latest version. The system has the following functionalities: display and query of strategic data layers; collection and editing of spatial and attribute data; full utilisation of GPS technology; distance and area measurements; display of high-resolution imagery; seamless integration of data into GIS as feature classes; screen display and navigation tools (pan, zoom in/out, rotate map); and capture of flight path. The next stage in the development of mobile GIS technologies at ForestrySA is to enhance the systems' capabilities as one of the organisation's main data capture systems. This includes incorporating other applications (e.g., roads/tracks mapping, mapping of significant sites) and migrating the system to Windows Phone 7.
A software tool for analyzing multichannel cochlear implant signals.
Lai, Wai Kong; Bögli, Hans; Dillier, Norbert
2003-10-01
A useful and convenient means to analyze the radio frequency (RF) signals being sent by a speech processor to a cochlear implant would be to actually capture and display them with appropriate software. This is particularly useful for development or diagnostic purposes. sCILab (Swiss Cochlear Implant Laboratory) is such a PC-based software tool intended for the Nucleus family of Multichannel Cochlear Implants. Its graphical user interface provides a convenient and intuitive means for visualizing and analyzing the signals encoding speech information. Both numerical and graphic displays are available for detailed examination of the captured CI signals, as well as an acoustic simulation of these CI signals. sCILab has been used in the design and verification of new speech coding strategies, and has also been applied as an analytical tool in studies of how different parameter settings of existing speech coding strategies affect speech perception. As a diagnostic tool, it is also useful for troubleshooting problems with the external equipment of the cochlear implant systems.
Motion-Capture-Enabled Software for Gestural Control of 3D Models
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony
2012-01-01
Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
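One simple way to realize such a mapping, sketched below under the assumption that a motion-capture system supplies two tracked hand positions, is to drive the object's position from the hand midpoint, its scale from the hand separation, and its orientation from the hand-to-hand direction; this is an illustration, not the NASA software's actual mapping.

```python
# Minimal sketch of mapping two tracked hand positions (e.g., from a commercial
# motion-capture system) to the position, scale, and orientation of a 3D object.
# The capture interface and reference hand separation are hypothetical.
import numpy as np

REFERENCE_HAND_SEPARATION = 0.4   # meters; distance that maps to scale = 1.0

def gesture_to_transform(left_hand, right_hand):
    """Return (position, scale, yaw_degrees) derived from two hand positions."""
    left = np.asarray(left_hand, dtype=float)
    right = np.asarray(right_hand, dtype=float)

    position = (left + right) / 2.0                       # object follows the midpoint
    separation = np.linalg.norm(right - left)
    scale = separation / REFERENCE_HAND_SEPARATION        # spread hands to enlarge

    dx, dy = (right - left)[0], (right - left)[1]
    yaw = np.degrees(np.arctan2(dy, dx))                  # rotate by turning both hands
    return position, scale, yaw

pos, scale, yaw = gesture_to_transform([0.1, 0.0, 1.2], [0.5, 0.1, 1.2])
print(pos, round(scale, 2), round(yaw, 1))
```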
Knowledge-based requirements analysis for automating software development
NASA Technical Reports Server (NTRS)
Markosian, Lawrence Z.
1988-01-01
We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable, and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.
Autonomous Science on the EO-1 Mission
NASA Technical Reports Server (NTRS)
Chien, S.; Sherwood, R.; Tran, D.; Castano, R.; Cichy, B.; Davies, A.; Rabideau, G.; Tang, N.; Burl, M.; Mandl, D.;
2003-01-01
In mid-2003, we will fly software to detect science events that will drive autonomous scene selection on board the New Millennium Earth Observing 1 (EO-1) spacecraft. This software will demonstrate the potential for future space missions to use onboard decision-making to detect science events and respond autonomously in order to capture short-lived science events and to downlink only the highest value science data.
ERIC Educational Resources Information Center
1996
This software product presents multi-level stories to capture the interest of children in grades two through five, while teaching them crucial reading comprehension skills. With stories touching on everything from the invention of velcro to the journey of food through the digestive system, the open-ended reading comprehension program is versatile…
Towards an Interoperability Ontology for Software Development Tools
2003-03-01
The description of feature models was tied to the introduction of the Feature-Oriented Domain Analysis (FODA*) [KANG90] approach in the late eighties... Feature-Oriented Domain Analysis (FODA) is a domain analysis method developed at the Software... ...ese obstacles was to construct a "pilot" ontology that is extensible. We applied the Feature-Oriented Domain Analysis approach to capture the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckerman, Keith F.; Sjoreen, Andrea L.
2013-05-01
The Radiological Toolbox software developed by Oak Ridge National Laboratory (ORNL) for the U.S. Nuclear Regulatory Commission (NRC) is designed to provide electronic access to the vast and varied data that underlie the field of radiation protection. These data represent physical, chemical, anatomical, physiological, and mathematical parameters detailed in the various handbooks which a health physicist might consult while in his office. The initial motivation for the software was to serve the needs of the health physicist away from his office and without access to his handbooks, e.g., NRC inspectors. The earlier releases of the software were widely used and accepted around the world not only by practicing health physicists but also by those within educational programs. This release updates the software to accommodate changes in Windows operating systems and, in some aspects, radiation protection. This release has been tested on Windows 7 and 8 and on 32- and 64-bit machines. The nuclear decay data have been updated, and thermal neutron capture cross sections and cancer risk coefficients have been included. This document and the software's user's guide provide further details and documentation of the information captured within the Radiological Toolbox.
MoKey: A versatile exergame creator for everyday usage.
Eckert, Martina; López, Marcos; Lázaro, Carlos; Meneses, Juan
2017-11-27
Currently, virtual applications for physical exercise are highly appreciated as rehabilitation instruments. This article presents a middleware called "MoKey" (Motion Keyboard), which converts standard off-the-shelf software into exergames (exercise games). A configurable set of gestures, captured by a motion capture camera, is translated into the key strokes required by the chosen software. The present study assesses the tool regarding usability and viability with a heterogeneous group of 11 participants, aged 5 to 51, with moderate to severe disabilities, most of them bound to a wheelchair. In comparison with FAAST (the Flexible Action and Articulated Skeleton Toolkit), MoKey achieved better results in terms of ease of use and computational load. Its viability as an exergame creation tool was proven with the help of four applications (PowerPoint®, an e-book reader, Skype®, and Tetris). Success rates of up to 91% were achieved, and subjective perception was rated 4.5 points (on a 0-5 scale). The middleware provides increased motivation due to the use of favorite software and the advantage of exploiting it for exercise. Used together with communication software or online games, social inclusion can be stimulated. Therapists can employ the tool to monitor the correctness and progress of the exercises.
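The gesture-to-keystroke translation at the heart of such a middleware can be sketched as follows; this is not MoKey itself, the pynput package is assumed to be available, and the gesture names, key bindings, and detection flow are hypothetical.

```python
# Minimal sketch of translating recognized gestures into key strokes for
# off-the-shelf software (the general MoKey idea, not its implementation).
# Assumes the 'pynput' package; gesture names and bindings are hypothetical,
# and the motion-capture pipeline that detects gestures is omitted.
from pynput.keyboard import Controller, Key

keyboard = Controller()

GESTURE_TO_KEY = {
    "raise_right_arm": Key.right,     # e.g., next slide in a presentation
    "raise_left_arm": Key.left,       # previous slide
    "lean_forward": Key.space,        # pause/resume
}

def send_gesture(gesture_name):
    key = GESTURE_TO_KEY.get(gesture_name)
    if key is None:
        return
    keyboard.press(key)
    keyboard.release(key)

# Example: a detected gesture advances the slide in whatever program has focus.
send_gesture("raise_right_arm")
```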
Using In Silico Fragmentation to Improve Routine Residue Screening in Complex Matrices.
Kaufmann, Anton; Butcher, Patrick; Maden, Kathryn; Walker, Stephan; Widmer, Mirjam
2017-12-01
Targeted residue screening requires the use of reference substances in order to identify potential residues. This becomes a difficult issue when using multi-residue methods capable of analyzing several hundreds of analytes. Therefore, the capability of in silico fragmentation based on a structure database ("suspect screening") instead of physical reference substances for routine targeted residue screening was investigated. The detection of fragment ions that can be predicted or explained by in silico software was utilized to reduce the number of false positives. These "proof of principle" experiments were done with a tool that is integrated into a commercial MS vendor instrument operating software (UNIFI) as well as with a platform-independent MS tool (Mass Frontier). A total of 97 analytes belonging to different chemical families were separated by reversed phase liquid chromatography and detected in a data-independent acquisition (DIA) mode using ion mobility hyphenated with quadrupole time of flight mass spectrometry. The instrument was operated in the MSE mode with alternating low and high energy traces. The fragments observed from product ion spectra were investigated using a "chopping" bond disconnection algorithm and a rule-based algorithm. The bond disconnection algorithm clearly explained more analyte product ions and a greater percentage of the spectral abundance than the rule-based software (92 out of the 97 compounds produced ≥1 explainable fragment ions). On the other hand, tests with a complex blank matrix (bovine liver extract) indicated that the chopping algorithm reports significantly more false positive fragments than the rule-based software.
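A bond-disconnection ("chopping") enumeration of candidate fragments can be sketched with RDKit as follows; this is illustrative only and not the UNIFI or Mass Frontier implementation, and the restriction to single non-ring bond cuts and the example molecule are simplifying assumptions.

```python
# Minimal sketch of a "chopping" bond-disconnection fragment enumeration using
# RDKit (illustrative only; not the UNIFI or Mass Frontier algorithms). Each
# single, non-ring bond is cut in turn and the neutral exact mass of every
# resulting piece is recorded; in a real workflow these masses would be compared
# against observed product ions within a mass tolerance.
from rdkit import Chem
from rdkit.Chem import Descriptors

def candidate_fragments(smiles):
    """Return {fragment_smiles: exact_mass} from single non-ring bond cuts."""
    mol = Chem.MolFromSmiles(smiles)
    fragments = {}
    for bond in mol.GetBonds():
        if bond.IsInRing() or bond.GetBondType() != Chem.BondType.SINGLE:
            continue
        pieces = Chem.FragmentOnBonds(mol, [bond.GetIdx()], addDummies=False)
        for frag in Chem.GetMolFrags(pieces, asMols=True):
            fragments[Chem.MolToSmiles(frag)] = Descriptors.ExactMolWt(frag)
    return fragments

# Example with a sulfonamide residue (sulfamethazine): list candidate fragments.
for smi, mass in sorted(candidate_fragments("Cc1cc(C)nc(NS(=O)(=O)c2ccc(N)cc2)n1").items()):
    print(f"{mass:10.4f}  {smi}")
```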
Yiadom, Maame Yaa A B; Baugh, Christopher W; McWade, Conor M; Liu, Xulei; Song, Kyoung Jun; Patterson, Brian W; Jenkins, Cathy A; Tanski, Mary; Mills, Angela M; Salazar, Gilberto; Wang, Thomas J; Dittus, Robert S; Liu, Dandan; Storrow, Alan B
2017-02-23
Timely diagnosis of ST-segment elevation myocardial infarction (STEMI) in the emergency department (ED) is made solely by ECG. Obtaining this test within 10 minutes of ED arrival is critical to achieving the best outcomes. We investigated variability in the timely identification of STEMI across institutions and whether performance variation was associated with ED characteristics, the comprehensiveness of screening criteria, and the STEMI screening processes. We examined STEMI screening performance in 7 EDs, with the missed case rate (MCR) as our primary end point. The MCR is the proportion of primarily screened ED patients diagnosed with STEMI who did not receive an ECG within 15 minutes of ED arrival. STEMI was defined by hospital discharge diagnosis. Relationships between the MCR and ED characteristics, screening criteria, and STEMI screening processes were assessed, along with differences in door-to-ECG times for captured versus missed patients. The overall MCR for all 7 EDs was 12.8%. The lowest and highest MCRs were 3.4% and 32.6%, respectively. The mean difference in door-to-ECG times for captured and missed patients was 31 minutes, with a range of 14 to 80 minutes of additional myocardial ischemia time for missed cases. The prevalence of primarily screened ED STEMIs was 0.09%. EDs with the greatest informedness (sensitivity + specificity - 1) demonstrated superior performance across all other screening measures. The 29.2% difference in MCRs between the highest- and lowest-performing EDs demonstrates room for improving timely STEMI identification among primarily screened ED patients. The MCR and informedness can be used to compare screening across EDs and to understand variable performance. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
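The two screening measures used above can be written down directly; the counts in the example calls are hypothetical, not the study's data.

```python
# Worked sketch of the screening measures defined above, with hypothetical counts.
def missed_case_rate(stemi_without_timely_ecg, total_stemi):
    """Proportion of screened STEMI patients without an ECG within 15 minutes of arrival."""
    return stemi_without_timely_ecg / total_stemi

def informedness(sensitivity, specificity):
    """Youden's J statistic: sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

print(missed_case_rate(11, 86))          # e.g., 11 of 86 STEMI cases missed -> ~0.128
print(informedness(0.92, 0.71))          # e.g., informedness of 0.63
```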
Nomura, Yukihiro; Higaki, Toru; Fujita, Masayo; Miki, Soichiro; Awaya, Yoshikazu; Nakanishi, Toshio; Yoshikawa, Takeharu; Hayashi, Naoto; Awai, Kazuo
2017-02-01
This study aimed to evaluate the effects of iterative reconstruction (IR) algorithms on computer-assisted detection (CAD) software for lung nodules in ultra-low-dose computed tomography (ULD-CT) for lung cancer screening. We selected 85 subjects who underwent both a low-dose CT (LD-CT) scan and an additional ULD-CT scan in our lung cancer screening program for high-risk populations. The LD-CT scans were reconstructed with filtered back projection (FBP; LD-FBP). The ULD-CT scans were reconstructed with FBP (ULD-FBP), adaptive iterative dose reduction 3D (AIDR 3D; ULD-AIDR 3D), and forward projected model-based IR solution (FIRST; ULD-FIRST). CAD software for lung nodules was applied to each image dataset, and the performance of the CAD software was compared among the different IR algorithms. The mean volume CT dose indexes were 3.02 mGy (LD-CT) and 0.30 mGy (ULD-CT). For overall nodules, the sensitivities of CAD software at 3.0 false positives per case were 78.7% (LD-FBP), 9.3% (ULD-FBP), 69.4% (ULD-AIDR 3D), and 77.8% (ULD-FIRST). Statistical analysis showed that the sensitivities of ULD-AIDR 3D and ULD-FIRST were significantly higher than that of ULD-FBP (P < .001). The performance of CAD software in ULD-CT was improved by using IR algorithms. In particular, the performance of CAD in ULD-FIRST was almost equivalent to that in LD-FBP. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Treweek, Shaun; Pearson, Ewan; Smith, Natalie; Neville, Ron; Sargeant, Paul; Boswell, Brian; Sullivan, Frank
2010-01-01
Recruitment to trials in primary care is often difficult, particularly when practice staff need to identify study participants with acute conditions during consultations. The Scottish Acute Recruitment Management Application (SARMA) system is linked to general practice electronic medical record (EMR) systems and is designed to provide recruitment support to multi-centre trials by screening patients against trial inclusion criteria and alerting practice staff if the patient appears eligible. For patients willing to learn more about the trial, the software allows practice staff to send the patient's contact details to the research team by text message. To evaluate the ability of the software to support trial recruitment. Software evaluation embedded in a randomised controlled trial. Five general practices in Tayside and Fife, Scotland. SARMA was used to support recruitment to a feasibility trial (the Response to Oral Agents in Diabetes, or ROAD trial) looking at users of oral therapy in diabetes. The technical performance of the software and its utility as a recruitment tool were evaluated. The software was successfully installed at four of the five general practices and recruited 11 of the 29 participants for ROAD (other methods were letter and direct invitation by a practice nurse) and had a recruitment return of 35% (11 of 31 texts sent led to a recruitment). Screen failures were relatively low (7 of 31 referred). Practice staff members were positive about the system. An automated recruitment tool can support primary care trials in Scotland and has the potential to support recruitment in other jurisdictions. It offers a low-cost supplement to other trial recruitment methods and is likely to have a much lower screen failure rate than blanket approaches such as mailshots and newspaper campaigns.
Seamless presentation capture, indexing, and management
NASA Astrophysics Data System (ADS)
Hilbert, David M.; Cooper, Matthew; Denoue, Laurent; Adcock, John; Billsus, Daniel
2005-10-01
Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.
Virtual Environment TBI Screen (VETS)
2014-10-01
...balance challenges performed on a modified Wii Balance Board. Implementation of this device will enhance current approaches in TBI and mild TBI (i.e., ...TBI) screen (VETS) device in measuring standing balance. This system consists of software, a Wii Balance Board, and a large-screen television that... Validate Wii™ Balance Board relative to NeuroCom forceplate; running Wii Balance Board validation protocol; milestone achieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, Lizhi
Advanced Ultra Supercritical Boiler (AUSC) technology requires materials that can operate in a corrosive environment at temperatures and pressures as high as 760°C (1400°F) and 5000 psi, respectively, while at the same time maintaining good ductility at low temperature. We developed automated simulation software tools to enable fast, large-scale screening studies of candidate designs. While direct evaluation of creep rupture strength and ductility is currently not feasible, properties such as energy, elastic constants, surface energy, interface energy, and stacking fault energy can be used to assess relative ductility and creep strength. We implemented software to automate the complex calculations and minimize human input in the tedious screening studies, which involve model structure generation, settings for first-principles calculations, results analysis, and reporting. The software developed in the project, and the library of computed mechanical properties of phases found in ferritic steels, many of them complex solid solutions estimated for the first time, will help the development of low-cost ferritic steel for AUSC.
ERIC Educational Resources Information Center
Whu, Lin Fai; Zhang, Ai Ling
This document describes how National Workplace Literacy Program (NWLP) software was developed for Chinese garment workers in New York City. First, it discusses the goal of the workplace literacy program and the nature of the population served. Then, it indicates how NWLP software was designed to supplement the NWLP curriculum and to reinforce the…
Micromotors to capture and destroy anthrax simulant spores.
Orozco, Jahir; Pan, Guoqing; Sattayasamitsathit, Sirilak; Galarnyk, Michael; Wang, Joseph
2015-03-07
Towards addressing the need for detecting and eliminating biothreats, we describe a micromotor-based approach for screening, capturing, isolating and destroying anthrax simulant spores in a simple and rapid manner with minimal sample processing. The B. globigii antibody-functionalized micromotors can recognize, capture and transport B. globigii spores in environmental matrices, while showing no interaction with an excess of non-target bacteria. Efficient destruction of the anthrax simulant spores is demonstrated via the micromotor-induced mixing of a mild oxidizing solution. The new micromotor-based approach paves the way to dynamic multifunctional systems that rapidly recognize, isolate, capture and destroy biological threats.
Influence of Smartphones and Software on Acoustic Voice Measures
GRILLO, ELIZABETH U.; BROSIOUS, JENNA N.; SORRELL, STACI L.; ANAND, SUPRAJA
2016-01-01
This study assessed the within-subject variability of voice measures captured using different recording devices (i.e., smartphones and a head-mounted microphone) and software programs (i.e., Analysis of Dysphonia in Speech and Voice (ADSV), Multi-Dimensional Voice Program (MDVP), and Praat). Correlations between the software programs that calculated the voice measures were also analyzed. Results demonstrated no significant within-subject variability across devices and software, and that some of the measures were highly correlated across software programs. The study suggests that certain smartphones may be appropriate for recording daily voice measures representing the effects of vocal loading within individuals. In addition, even though different algorithms are used to compute voice measures across software programs, some of the programs and measures share a similar relationship. PMID:28775797
Automatic detection of spermatozoa for laser capture microdissection.
Vandewoestyne, Mado; Van Hoofstat, David; Van Nieuwerburgh, Filip; Deforce, Dieter
2009-03-01
In sexual assault crimes, differential extraction of spermatozoa from vaginal swab smears is often ineffective, especially when only a few spermatozoa are present in an overwhelming amount of epithelial cells. Laser capture microdissection (LCM) enables the precise separation of spermatozoa and epithelial cells. However, standard sperm-staining techniques are non-specific and rely on sperm morphology for identification. Moreover, manual screening of the microscope slides is time-consuming and labor-intensive. Here, we describe an automated screening method to detect spermatozoa stained with Sperm HY-LITER. Different ratios of spermatozoa and epithelial cells were used to assess the automatic detection method. In addition, real postcoital samples were also screened. Detected spermatozoa were isolated using LCM and DNA analysis was performed. Robust DNA profiles without allelic dropout could be obtained from as little as 30 spermatozoa recovered from postcoital samples, showing that the staining had no significant influence on DNA recovery.
ERIC Educational Resources Information Center
Revell, Kevin D.
2014-01-01
Three emerging technologies were used in a large introductory chemistry class: a tablet PC, a lecture capture and replay software program, and an online homework program. At the end of the semester, student usage of the lecture replay and online homework systems was compared to course performance as measured by course grade and by a standardized…
Retina Image Screening and Analysis Software Version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Aykac, Deniz
2009-04-01
The software allows physicians or researchers to ground-truth images of retinas, identifying key physiological features and lesions that are indicative of disease. The software features methods to automatically detect the physiological features and lesions. The software contains code to measure the quality of images received from a telemedicine network; create and populate a database for a telemedicine network; review and report the diagnosis of a set of images; and also contains components to transmit images from a Zeiss camera to the network through SFTP.
Web Extensible Display Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slominski, Ryan; Larrieu, Theodore L.
Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a WebSocket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG), following EDM's edl-file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real time, allowing existing screens to work without modification. Alternatively, the familiar drag-and-drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones, which are then rendered by WEDM.
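The translation idea, turning parsed screen widgets into SVG markup, can be sketched as follows; this is not the WEDM translator, the real edl grammar is not parsed here, and the widget dictionaries are hypothetical.

```python
# Minimal sketch of rendering already-parsed screen widgets as SVG, in the
# spirit of WEDM's edl-to-HTML translation. The widget dictionaries are
# hypothetical; parsing real .edl files is not shown here.
def widgets_to_svg(widgets, width=400, height=300):
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for w in widgets:
        if w["type"] == "rectangle":
            parts.append(
                f'<rect x="{w["x"]}" y="{w["y"]}" width="{w["w"]}" height="{w["h"]}" '
                f'fill="none" stroke="{w.get("color", "black")}"/>'
            )
        elif w["type"] == "text":
            parts.append(f'<text x="{w["x"]}" y="{w["y"]}">{w["value"]}</text>')
    parts.append("</svg>")
    return "\n".join(parts)

screen = [
    {"type": "rectangle", "x": 10, "y": 10, "w": 120, "h": 40, "color": "blue"},
    {"type": "text", "x": 20, "y": 35, "value": "Beam Current"},
]
print(widgets_to_svg(screen))
```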
Open-source meteor detection software for low-cost single-board computers
NASA Astrophysics Data System (ADS)
Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.
2016-01-01
This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.
FoilSim: Basic Aerodynamics Software Created
NASA Technical Reports Server (NTRS)
Peterson, Ruth A.
1999-01-01
FoilSim is interactive software that simulates the airflow around various shapes of airfoils. The graphical user interface, which looks more like a video game than a learning tool, captures and holds the students' interest. The software is a product of NASA Lewis Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program (HPCCP). The airfoil view panel is a simulated view of a wing being tested in a wind tunnel. As students create new wing shapes by moving slider controls that change parameters, the software calculates their lift. FoilSim also displays plots of pressure or airspeed above and below the airfoil surface.
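FoilSim's internal model is not reproduced here, but the standard incompressible lift relation such a tool is built around can be sketched with hypothetical flight conditions.

```python
# The standard lift relation that tools like FoilSim are built around
# (illustrative only; FoilSim's own model and coefficients are not reproduced here).
def lift_newtons(air_density, velocity, wing_area, lift_coefficient):
    """L = 1/2 * rho * V^2 * S * Cl"""
    return 0.5 * air_density * velocity**2 * wing_area * lift_coefficient

# Hypothetical conditions: sea-level air, 60 m/s, 16 m^2 wing, Cl = 0.8.
print(lift_newtons(air_density=1.225, velocity=60.0, wing_area=16.0, lift_coefficient=0.8))
```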
Solar Electric Propulsion Triple-Satellite-Aided Capture With Mars Flyby
NASA Astrophysics Data System (ADS)
Patrick, Sean
Triple-satellite-aided capture sequences use gravity assists at three of Jupiter's four massive Galilean moons to reduce the DeltaV required to enter Jupiter orbit. A triple-satellite-aided capture at Callisto, Ganymede, and Io is proposed to capture a SEP spacecraft into Jupiter orbit from an interplanetary Earth-Jupiter trajectory that employs low-thrust maneuvers. The principal advantage of this method is that it combines the Isp efficiency of ion propulsion with nearly impulsive but propellant-free gravity assists. In this thesis, two main chapters are devoted to the exploration of low-thrust triple-flyby capture trajectories, focusing on their design and optimization. The first chapter explores the design of two solar electric propulsion (SEP) low-thrust trajectories developed using JPL's MALTO software. Together, the two trajectories represent a full Earth-to-Jupiter capture, split into a heliocentric Earth-to-Jupiter sphere of influence (SOI) trajectory and a Joviocentric capture trajectory. The Joviocentric trajectory makes use of gravity-assist flybys of Callisto, Ganymede, and Io to capture into Jupiter orbit with a period of 106.3 days. In chapter two, three more SEP low-thrust trajectories were developed based upon those in chapter one. These trajectories, devised using the high-fidelity Mystic software, also developed by JPL, improve upon the original trajectories of chapter one; each is a separate, full Earth-to-Jupiter capture trajectory. As in chapter one, a Mars gravity assist is used to augment the heliocentric trajectories, and gravity-assist flybys of Callisto, Ganymede, and Io or Europa are used to capture into Jupiter orbit. With periods between 89.8 and 137.2 days, the orbits developed in chapters one and two are shorter than most Jupiter capture orbits achieved using low-thrust propulsion techniques. Finally, chapter three presents an original trajectory design for a Very-Long-Baseline Interferometry (VLBI) satellite constellation. The design was created for the 8th Global Trajectory Optimization Competition (GTOC8), in which participants were tasked with creating and optimizing low-thrust trajectories to place a series of three spacecraft into formation to map given radio sources.
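The benefit of the flybys can be seen in the textbook patched-conic expression for a capture burn at Jupiter periapsis, sketched below; the thesis's low-thrust trajectories are optimized numerically, so this relation is only an illustration of why lowering the hyperbolic excess speed v-infinity with gravity assists reduces the required DeltaV.

```latex
% Patched-conic capture burn at Jupiter periapsis radius r_p into an ellipse of
% semi-major axis a; mu_J is Jupiter's gravitational parameter. The Galilean-moon
% flybys reduce v_infinity before this burn (textbook relation, not the thesis's
% numerically optimized low-thrust model).
\Delta v_{\mathrm{capture}}
  = \sqrt{\,v_{\infty}^{2} + \frac{2\mu_J}{r_p}\,}
  - \sqrt{\,\mu_J\!\left(\frac{2}{r_p} - \frac{1}{a}\right)\,}
```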
Toward Software Both Seen and Heard.
ERIC Educational Resources Information Center
Lazzaro, Joseph J.
1996-01-01
Visually impaired users are hampered by current PC software written for graphical user interfaces. Screen readers that vocalize displayed text require standardization that remains missing in the programming industry; the readers cannot interpret many cues in the Windows environment. More programming standards and adaptive technology for computers…
Watershed Management Optimization Support Tool (WMOST) is a software application designed to facilitate integrated water resources management across wet and dry climate regions. It allows water resources managers and planners to screen a wide range of practices across their watersh...
CellProfiler and KNIME: open source tools for high content screening.
Stöter, Martin; Niederlein, Antje; Barsacchi, Rico; Meyenhofer, Felix; Brandl, Holger; Bickle, Marc
2013-01-01
High content screening (HCS) has established itself in the pharmaceutical industry as an essential tool for drug discovery and drug development. HCS is currently starting to enter the academic world and might become a widely used technology. Given the diversity of problems tackled in academic research, HCS could experience some profound changes in the future, mainly with more imaging modalities and smart microscopes being developed. Two of the limitations to the establishment of HCS in academia are flexibility and cost. Flexibility is important in order to adapt the HCS setup to accommodate the multiple different assays typical of academia. Many cost factors cannot be avoided, but the cost of the software packages necessary to analyze large datasets can be reduced by using open source software. We present and discuss the open source software CellProfiler for image analysis and KNIME for data analysis and data mining, which provide software solutions that increase flexibility and keep costs low.
NASA Astrophysics Data System (ADS)
Gastounioti, Aimilia; Hsieh, Meng-Kang; Pantalone, Lauren; Conant, Emily F.; Kontos, Despina
2018-03-01
Mammographic density is an established risk factor for breast cancer. However, area-based density (ABD) measured in 2D mammograms considers the projection rather than the actual volume of dense tissue, which may be an important limitation. With the increasing utilization of digital breast tomosynthesis (DBT) in screening, there is an opportunity to routinely estimate volumetric breast density (VBD). In this study, we investigate associations between DBT-VBD and ABD extracted from standard-dose mammography (DM) and from synthetic 2D digital mammography (sDM), which is increasingly replacing DM. We retrospectively analyzed bilateral imaging data from a random sample of 1000 women, acquired over a transitional period at our institution when all women had DBT, sDM, and DM acquired as part of their routine breast screening. For each exam, ABD was measured in DM and sDM images with the publicly available "LIBRA" software, while DBT-VBD was measured using a previously validated, fully automated computer algorithm. Spearman correlation (r) was used to compare VBD to ABD measurements. For each density measure, we also estimated the within-woman intraclass correlation (ICC), and finally, to compare with clinical assessments, we performed analysis of variance (ANOVA) to evaluate the variation across the assigned clinical BI-RADS breast density categories. DBT-VBD was moderately correlated with ABD from DM (r=0.70) and sDM (r=0.66). All density measures showed strong bilateral symmetry (ICC = 0.85-0.95) but differed significantly across BI-RADS density categories (ANOVA, p<0.001). Our results help to further elaborate the clinical implications of breast density measures estimated with DBT, which may better capture the volumetric amount of dense tissue within the breast than area-based measures and visual assessment.
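For readers unfamiliar with the statistics reported above, the following is a minimal sketch (not the study's code) of how a Spearman correlation and a one-way ANOVA across BI-RADS categories can be computed with SciPy; the density values and category assignments below are synthetic stand-ins.

```python
# Illustrative sketch on synthetic data: Spearman correlation between two density
# measures and a one-way ANOVA of density across BI-RADS categories.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
vbd = rng.uniform(2, 30, n)                          # hypothetical volumetric density (%)
abd = 0.8 * vbd + rng.normal(0, 3, n)                # hypothetical area-based density (%)
birads = np.clip((vbd // 8).astype(int) + 1, 1, 4)   # crude stand-in for BI-RADS a-d

r, p = stats.spearmanr(vbd, abd)
print(f"Spearman r = {r:.2f} (p = {p:.3g})")

groups = [vbd[birads == k] for k in (1, 2, 3, 4)]
F, p_anova = stats.f_oneway(*groups)
print(f"ANOVA across BI-RADS categories: F = {F:.1f}, p = {p_anova:.3g}")
```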
A Proven Method for Meeting Export Control Objectives in Postal and Shipping Sectors
2015-02-01
Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center sponsored by the United... Export Control at USPS... Objectives for Improving Export Screening at USPS... Development of the New Screening Process... "Walking the Model"... Export Screening Development Process... Induction and Processing of International Mail... The Export Screening Process
Modeling Amorphous Microporous Polymers for CO2 Capture and Separations.
Kupgan, Grit; Abbott, Lauren J; Hart, Kyle E; Colina, Coray M
2018-06-13
This review concentrates on the advances of atomistic molecular simulations to design and evaluate amorphous microporous polymeric materials for CO2 capture and separations. A description of atomistic molecular simulations is provided, including simulation techniques, structural generation approaches, relaxation and equilibration methodologies, and considerations needed for validation of simulated samples. The review provides general guidelines and a comprehensive update of the recent literature (since 2007) to promote the acceleration of the discovery and screening of amorphous microporous polymers for CO2 capture and separation processes.
NASA Astrophysics Data System (ADS)
West, P.; Michaelis, J.; Lebot, T.; McGuinness, D. L.; Fox, P. A.
2014-12-01
Providing proper citation and attribution for published data, derived data products, and the software tools used to generate them has always been an important aspect of scientific research. However, this type of detailed citation and attribution is often lacking. This is in part because it usually requires manual markup, since dynamic generation of such provenance information is not typically done by the tools used to access, manipulate, transform, and visualize data. In addition, the tools themselves lack the information needed to be properly cited. The OPeNDAP Hyrax Software Framework provides access to, and the ability to constrain, manipulate, and transform, different types of data in different formats into a common format, the DAP (Data Access Protocol), in order to derive new data products. A user, or another software client, specifies an HTTP URL to access a particular piece of data and transform it to suit a specific purpose. The resulting data products, however, contain no information about what data were used to create them or the software process used to generate them, let alone information that would allow proper citation and attribution by downstream researchers and tool developers. We will present our approach to provenance capture in Hyrax, including a mechanism that can be used to report back to the hosting site any derived products, such as publications and reports, using the W3C PROV recommendation's pingback service. We will demonstrate our use of Semantic Web and Web standards, the development of an information model that extends the PROV model for provenance capture, and the development of the pingback service. We will also present our findings and practices for providing and visualizing provenance information and for developing pingback services, to better enable scientists and tool developers to be recognized and properly cited for their contributions.
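As a hedged illustration of the pingback idea (not the Hyrax implementation), the sketch below assumes a pingback endpoint that accepts a POST of provenance URIs as a text/uri-list body, in the spirit of the W3C PROV-AQ pingback convention; both URLs are hypothetical placeholders.

```python
# Minimal sketch: notify a data provider's provenance pingback endpoint that a
# derived product exists, by POSTing a text/uri-list of provenance URIs.
import requests

pingback_url = "https://example.org/opendap/provenance/pingback"   # hypothetical endpoint
derived_product_prov = [
    "https://example.edu/papers/2014/ocean-temps/prov.ttl",        # hypothetical PROV record
]

resp = requests.post(
    pingback_url,
    data="\n".join(derived_product_prov),
    headers={"Content-Type": "text/uri-list"},
    timeout=10,
)
resp.raise_for_status()
print("Pingback accepted with status", resp.status_code)
```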
Software Development and Test Methodology for a Distributed Ground System
NASA Technical Reports Server (NTRS)
Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)
2002-01-01
The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The software processes that have evolved still emphasize requirements capture, software configuration management, design documentation, and making sure the products that have been developed remain accountable to the initial requirements. This paper will give an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how they have provided value to the project.
Software Engineering Laboratory (SEL) relationships, models, and management rules
NASA Technical Reports Server (NTRS)
Decker, William; Hendrick, Robert; Valett, Jon D.
1991-01-01
Over 50 individual Software Engineering Laboratory (SEL) research results, extracted from a review of published SEL documentation, that can be applied directly to managing software development projects are captured. Four basic categories of results are defined and discussed - environment profiles, relationships, models, and management rules. In each category, research results are presented as a single page that summarizes the individual result, lists potential uses of the result by managers, and references the original SEL documentation where the result was found. The document serves as a concise reference summary of applicable research for SEL managers.
Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing
NASA Technical Reports Server (NTRS)
Logan, Thomas L.; Bryant, Nevin A.
1987-01-01
The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image-Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.
Using a Geographic Information System to Improve Childhood Lead-Screening Efforts
2013-01-01
The Idaho Division of Public Health conducted a pilot study to produce a lead-exposure–risk map to help local and state agencies better target childhood lead-screening efforts. Priority lead-screening areas, at the block group level, were created by using county tax assessor data and geographic information system software. A series of maps were produced, indicating childhood lead-screening prevalence in areas in which there was high potential for exposure to lead. These maps could enable development of more systematically targeted and cost-effective childhood lead-screening efforts. PMID:23764346
Software Review: A program for testing capture-recapture data for closure
Stanley, Thomas R.; Richards, Jon D.
2005-01-01
Capture-recapture methods are widely used to estimate population parameters of free-ranging animals. Closed-population capture-recapture models, which assume there are no additions to or losses from the population over the period of study (i.e., the closure assumption), are preferred for population estimation over open-population models, which do not assume closure, because heterogeneity in detection probabilities can be accounted for, and this improves estimates. In this paper we introduce CloseTest, a new Microsoft® Windows-based program that computes the Otis et al. (1978) and Stanley and Burnham (1999) closure tests for capture-recapture data sets. Information on CloseTest features and where to obtain the program is provided.
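To make the closed-population idea concrete, the sketch below shows the textbook Chapman-corrected Lincoln-Petersen estimator for a two-occasion study; it is only an illustration of closed-population estimation, not the closure tests that CloseTest computes, and the counts are made up.

```python
# Chapman-corrected Lincoln-Petersen estimator for a closed two-occasion study.
def chapman_estimate(n1: int, n2: int, m2: int) -> float:
    """Population size estimate from n1 animals marked on occasion one,
    n2 animals captured on occasion two, and m2 marked recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Example: 120 animals marked, 100 captured later, 30 of those already marked.
print(f"Estimated population size: {chapman_estimate(120, 100, 30):.0f}")
```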
ERIC Educational Resources Information Center
1996
This software product presents multi-level stories to capture the interest of children in grades two through five, while teaching them crucial reading comprehension skills. With stories touching on everything from superstars to sports facts, the open-ended reading comprehension program is versatile and easy to use for educators and children alike.…
Schema for Spacecraft-Command Dictionary
NASA Technical Reports Server (NTRS)
Laubach, Sharon; Garcia, Celina; Maxwell, Scott; Wright, Jesse
2008-01-01
An Extensible Markup Language (XML) schema was developed as a means of defining and describing a structure for capturing spacecraft command-definition and tracking information in a single location, in a form readable both by engineers and by the software used to generate flight- and ground-system software. A structure defined within this schema is then used as the basis for creating an XML file that contains command definitions.
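The abstract does not give the schema's actual element names, so the tags in the sketch below (commandDictionary, command, arg) are hypothetical; it only illustrates how such a command-definition XML file could be read with the Python standard library.

```python
# Hedged sketch with made-up element names: parse a small command-definition
# XML document and list each command's opcode and arguments.
import xml.etree.ElementTree as ET

xml_text = """
<commandDictionary mission="EXAMPLE">
  <command stem="PWR_ON" opcode="0x01A4">
    <arg name="device" type="enum" values="CAMERA,HEATER"/>
  </command>
</commandDictionary>
"""

root = ET.fromstring(xml_text)
for cmd in root.findall("command"):
    args = [a.get("name") for a in cmd.findall("arg")]
    print(cmd.get("stem"), cmd.get("opcode"), "args:", args)
```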
Laboratory Connections: Review of Two Commercial Interfacing Packages.
ERIC Educational Resources Information Center
Powers, Michael H.
1989-01-01
Evaluates two Apple II interfacing packages designed to measure pH: (1) "Experiments in Chemistry" by HRM Software and (2) "Voltage Plotter III" by Vernier Software. Provides characteristics and screen dumps of each package. Reports both systems are suitable for high school or beginning college laboratories. (MVL)
Prototyping with Data Dictionaries for Requirements Analysis.
1985-03-01
statistical packages and software for screen layout. These items work at a higher level than another category of prototyping tool, program generators... Program generators are software packages which, when given specifications, produce source listings, usually in a high-order language such as COBOL...with users and this will not happen if he must stop to develop a detailed program. [Ref. 241] Hardware as well as software should be considered in
Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji
2012-07-01
With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using a phantom with artificial nodules. These phantom studies are limited, however, in their ability to reproduce nodules accurately and in the variety of sizes and densities required. Therefore, we propose a new approach that uses computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules regarding the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate to within an error of 20% for nodules >5 mm when the difference in CT value between the nodule and the background (lung) was 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We concluded that the proposed method is effective for evaluating the performance of computer-aided volumetry software.
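The following is a schematic of the simulation idea only, with assumed parameters rather than the authors' measured point spread function: an ideal spherical nodule is blurred with a Gaussian stand-in for the CT PSF and its volume is then measured by half-maximum thresholding.

```python
# Illustrative sketch: simulate a blurred nodule and measure its volume.
import numpy as np
from scipy.ndimage import gaussian_filter

voxel_mm = 0.5
grid = np.arange(-20, 20) * voxel_mm                      # 40-voxel cube, -10 to +9.5 mm
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
radius_mm = 4.0
nodule = ((x**2 + y**2 + z**2) <= radius_mm**2).astype(float) * 500.0  # 500 HU contrast

simulated = gaussian_filter(nodule, sigma=1.5)            # Gaussian PSF blur (sigma assumed)

true_vol = (4 / 3) * np.pi * radius_mm**3
measured_vol = np.sum(simulated >= 250.0) * voxel_mm**3   # half-maximum threshold
print(f"true {true_vol:.1f} mm^3, measured {measured_vol:.1f} mm^3, "
      f"error {100 * (measured_vol - true_vol) / true_vol:+.1f}%")
```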
A Taxonomy of Object-Oriented Measures Modeling the Object-Oriented Space
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; Weistroffer, H. Roland; Coppins, Richard J.
1997-01-01
In order to control the quality of software and the software development process, it is important to understand the measurement of software. A first step toward a better comprehension of software measurement is the categorization of software measures by some meaningful taxonomy. The most worthwhile taxonomy would capture the fundamental nature of the object-oriented (O-O) space. The principal characteristics of object-oriented software offer a starting point for such a categorization of measures. This paper introduces a taxonomy of measures based upon fourteen characteristics of object-oriented software gathered from the literature. This taxonomy allows us to easily see gaps or redundancies in the existing O-O measures. The taxonomy also clearly differentiates among taxa so that there is no ambiguity as to the taxon to which a measure belongs. The taxonomy has been populated with measures taken from the literature.
Idea Paper: The Lifecycle of Software for Scientific Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, Anshu; McInnes, Lois C.
The software lifecycle is a well-researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of this development process and provides needed lifecycle guidance to the scientific software community.
Software Development Standard Processes (SDSP)
NASA Technical Reports Server (NTRS)
Lavin, Milton L.; Wang, James J.; Morillo, Ronald; Mayer, John T.; Jamshidian, Barzia; Shimizu, Kenneth J.; Wilkinson, Belinda M.; Hihn, Jairus M.; Borgen, Rosana B.; Meyer, Kenneth N.;
2011-01-01
A JPL-created set of standard processes is to be used throughout the lifecycle of software development. These SDSPs cover a range of activities, from management and engineering activities to assurance and support activities. These processes must be applied to software tasks per a prescribed set of procedures. JPL's Software Quality Improvement Project is currently working at the behest of the JPL Software Process Owner to ensure that all applicable software tasks follow these procedures. The SDSPs are captured as a set of 22 standards in JPL's software process domain. They were developed in-house at JPL by a number of Subject Matter Experts (SMEs) residing primarily within the Engineering and Science Directorate, but also from the Business Operations Directorate and the Safety and Mission Success Directorate. These practices include not only currently performed best practices, but also JPL-desired future practices in key thrust areas like software architecting and software reuse analysis. Additionally, these SDSPs conform to many standards and requirements to which JPL projects are beholden.
Analyzing gene perturbation screens with nested effects models in R and bioconductor.
Fröhlich, Holger; Beissbarth, Tim; Tresch, Achim; Kostka, Dennis; Jacob, Juby; Spang, Rainer; Markowetz, F
2008-11-01
Nested effects models (NEMs) are a class of probabilistic models introduced to analyze the effects of gene perturbation screens visible in high-dimensional phenotypes like microarrays or cell morphology. NEMs reverse engineer upstream/downstream relations of cellular signaling cascades. NEMs take as input a set of candidate pathway genes and phenotypic profiles of perturbing these genes. NEMs return a pathway structure explaining the observed perturbation effects. Here, we describe the package nem, an open-source software to efficiently infer NEMs from data. Our software implements several search algorithms for model fitting and is applicable to a wide range of different data types and representations. The methods we present summarize the current state of the art in NEMs. Our software is written in the R language and freely available via the Bioconductor project at http://www.bioconductor.org.
Space Station Mission Planning System (MPS) development study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Klus, W. J.
1987-01-01
The basic objective of the Space Station (SS) Mission Planning System (MPS) Development Study was to define a baseline Space Station mission plan and the associated hardware and software requirements for the system. A detailed definition of the Spacelab (SL) payload mission planning process and the SL Mission Integration Planning System (MIPS) software was derived. A baseline concept was developed for performing SS manned-base payload mission planning that is consistent with current Space Station design and operations concepts and philosophies. The SS MPS software requirements were defined. Requirements for new software include candidate programs for the application of artificial intelligence techniques to capture and make more effective use of mission planning expertise. An SS MPS Software Development Plan was also developed, which phases the efforts for developing the software needed to implement the SS mission planning concept.
A Computer Supported Teamwork Project for People with a Visual Impairment.
ERIC Educational Resources Information Center
Hale, Greg
2000-01-01
Discussion of the use of computer supported teamwork (CSTW) in team-based organizations focuses on problems that visually impaired people have reading graphical user interface software via screen reader software. Describes a project that successfully used email for CSTW, and suggests issues needing further research. (LRW)
Automated support for experience-based software management
NASA Technical Reports Server (NTRS)
Valett, Jon D.
1992-01-01
To effectively manage a software development project, the software manager must have access to key information concerning a project's status. This information includes not only data relating to the project of interest, but also, the experience of past development efforts within the environment. This paper describes the concepts and functionality of a software management tool designed to provide this information. This tool, called the Software Management Environment (SME), enables the software manager to compare an ongoing development effort with previous efforts and with models of the 'typical' project within the environment, to predict future project status, to analyze a project's strengths and weaknesses, and to assess the project's quality. In order to provide these functions the tool utilizes a vast corporate memory that includes a data base of software metrics, a set of models and relationships that describe the software development environment, and a set of rules that capture other knowledge and experience of software managers within the environment. Integrating these major concepts into one software management tool, the SME is a model of the type of management tool needed for all software development organizations.
Capturing the Data: Nutrition Risk Screening of Adults in Hospital
Frew, Elizabeth; Cant, Robyn; Sequeira, Jennifer
2010-01-01
This study aims to explore limitations of the Malnutrition Screening Tool in identifying malnutrition risk in a cohort of 3,033 adult Australian medical and surgical hospital inpatients. Seventy-two percent of patients were screened; illness and medical care limited access to the others. Malnutrition risk (16.5%; n = 501) was found in all age groups, with a trend toward higher risk in medical wards; 10% (n = 300) of patients with communication barriers were excluded. Systematic screening increased dietitians' referrals by 39%. Further research is required to enable screening of all patients, including those with communication issues, with an easy-to-use, valid tool. PMID:22254032
An adsorption of carbon dioxide on activated carbon controlled by temperature swing adsorption
NASA Astrophysics Data System (ADS)
Tomas, Korinek; Karel, Frana
2017-09-01
This work deals with a method of capturing carbon dioxide (CO2) from indoor air. Temperature Swing Adsorption (TSA) on a solid adsorbent was chosen for CO2 capture, with commercial activated carbon (AC) in the form of extruded pellets used as the adsorbent. A simple device was constructed to test the effectiveness of CO2 capture in a fixed bed of AC. The TSA cycle was also simulated using the open-source software OpenFOAM. Good agreement was obtained between the results of the numerical simulations and the experimental data for the adsorption process.
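To illustrate why a temperature swing releases the captured CO2, the sketch below evaluates a Langmuir isotherm at an adsorption and a regeneration temperature; the isotherm parameters are assumed for illustration and are not taken from the paper.

```python
# Illustrative Langmuir/TSA calculation with assumed parameters.
import math

R = 8.314          # J/(mol K)
q_max = 3.0        # mol CO2 per kg AC, assumed saturation capacity
b0 = 1.0e-7        # 1/Pa, assumed affinity pre-exponential factor
dH = -25_000.0     # J/mol, assumed (exothermic) heat of adsorption

def langmuir_loading(p_co2_pa: float, temperature_k: float) -> float:
    """Equilibrium loading q (mol/kg) from the Langmuir isotherm q = q_max*b*p/(1 + b*p)."""
    b = b0 * math.exp(-dH / (R * temperature_k))
    return q_max * b * p_co2_pa / (1.0 + b * p_co2_pa)

p_co2 = 100.0                                   # roughly 1000 ppm CO2 in indoor air, in Pa
q_ads = langmuir_loading(p_co2, 298.0)          # adsorption step near 25 degC
q_des = langmuir_loading(p_co2, 373.0)          # regeneration step near 100 degC
print(f"adsorbed {q_ads:.2f}, retained after heating {q_des:.2f}, "
      f"working capacity {q_ads - q_des:.2f} mol CO2/kg AC")
```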
Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu
2018-05-01
The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective function values for the walking scenario were significantly lower than those for the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on this real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
Golberg, Alexander; Linshiz, Gregory; Kravets, Ilia; Stawski, Nina; Hillson, Nathan J; Yarmush, Martin L; Marks, Robert S; Konry, Tania
2014-01-01
We report an all-in-one platform - ScanDrop - for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a "cloud" network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2-4 days for other currently available standard detection methods.
In-theater piracy: finding where the pirate was
NASA Astrophysics Data System (ADS)
Chupeau, Bertrand; Massoudi, Ayoub; Lefèbvre, Frédéric
2008-02-01
Pirate copies of feature films are proliferating on the Internet. DVD rip or screener recording methods involve the duplication of officially distributed media, whereas 'cam' versions are illicitly captured with handheld camcorders in movie theaters. Several complementary multimedia forensic techniques, such as copy identification, forensic tracking marks, or sensor forensics, can deter these clandestine recordings. In the case of camcorder capture in a theater, the image is often geometrically distorted, the main artifact being the trapezoidal effect, also known as 'keystoning', caused by a capture viewing axis that is not perpendicular to the screen. In this paper we propose to analyze the geometric distortions in a pirate copy to determine the camcorder's viewing angle with respect to the screen perpendicular and to derive the approximate position of the pirate in the theater. The problem is first defined geometrically, by describing the general projection and capture setup and identifying the unknown parameters and estimates. The estimation approach, based on the identification of an eight-parameter homographic model of the 'keystoning' effect, is then presented. A validation experiment based on ground truth collected in a real movie theater is reported, and the accuracy of the proposed method is assessed.
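As a sketch of the starting point of such an analysis (under assumed correspondences, not the authors' pipeline), the eight-parameter homography mapping the rectangular screen to the quadrilateral seen in the pirate copy can be estimated from four corner correspondences with the direct linear transform; the pixel coordinates below are made up for illustration. The viewing angles would then follow from decomposing this homography given the camera intrinsics.

```python
# Direct linear transform (DLT) estimate of the 3x3 homography H from four
# screen-corner correspondences; H has 8 free parameters after fixing its scale.
import numpy as np

screen = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], float)        # projected frame
observed = np.array([[102, 88], [1730, 140], [1695, 980], [75, 1010]], float)  # camcorder copy

A = []
for (x, y), (u, v) in zip(screen, observed):
    A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
    A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
_, _, Vt = np.linalg.svd(np.asarray(A))
H = Vt[-1].reshape(3, 3)
H /= H[2, 2]                        # normalize so h33 = 1
print(np.round(H, 5))
```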
Does the use of automated fetal biometry improve clinical work flow efficiency?
Espinoza, Jimmy; Good, Sara; Russell, Evie; Lee, Wesley
2013-05-01
This study was designed to compare the work flow efficiency of manual measurements of 5 fetal parameters with a novel technique that automatically measures these parameters from 2-dimensional sonograms. This prospective study included 200 singleton pregnancies between 15 and 40 weeks' gestation. Patients were randomly allocated to either manual (n = 100) or automatic (n = 100) fetal biometry. The automatic measurement was performed using a commercially available software application. A digital video recorder captured all on-screen activity associated with the sonographic examination. The examination time and number of steps required to obtain fetal measurements were compared between manual and automatic methods. The mean time required to obtain the biometric measurements was significantly shorter using the automated technique than the manual approach (P < .001 for all comparisons). Similarly, the mean number of steps required to perform these measurements was significantly fewer with automatic measurements compared to the manual technique (P < .001). In summary, automated biometry reduced the examination time required for standard fetal measurements. This approach may improve work flow efficiency in busy obstetric sonography practices.
Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing
Invitto, Sara; Faggiano, Chiara; Sammarco, Silvia; De Luca, Valerio; De Paolis, Lucio T.
2016-01-01
In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user's hand and fingers, which are reproduced on a computer screen by the appropriate software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex and chosen among university students. The subjects took part in motor imagery training and an immersive affordance condition (a virtual training with Leap Motion and a haptic training with real objects). After each training session the subject performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which handle visual sensory processing; in contrast, latencies decreased in the frontal lobe, where the brain is mainly activated for attention and action planning. PMID:26999151
Use of laser 3D surface digitizer in data collection and 3D modeling of anatomical structures
NASA Astrophysics Data System (ADS)
Tse, Kelly; Van Der Wall, Hans; Vu, Dzung H.
2006-02-01
A laser digitizer (Konica-Minolta Vivid 910) is used to obtain 3-dimensional surface scans of anatomical structures with a maximum resolution of 0.1 mm. Placing the specimen on a turntable allows multiple scans all around, because the scanner only captures data from the portion facing its lens. A computer model is generated using 3D modeling software such as Geomagic. The 3D model can be manipulated on screen for repeated analysis of anatomical features, a useful capability when the specimens are rare or inaccessible (e.g., museum collections, fossils, imprints in rock formations). Because accurate measurements can be performed on the computer model instead of only on actual specimens at an archeological excavation site, a variety of quantitative data can later be obtained on the computer model in the laboratory as new ideas come to mind. Our group had used a mechanical contact digitizer (Microscribe) for this purpose, but with the surface digitizer we have been obtaining data sets more accurately and more quickly.
Joint Measurement Operations Controller (JMOC)
2011-01-01
This work included evaluation of electronic paper and handwriting recognition software. Neither of these technologies was sufficiently robust to...is header information saying this is the Dynamic Targeting Cell set of questions. <Module webEnabled="false" appName="DTC" displayGlobalPre="true...translation of their handwriting captures. The one exception is Logitech, which provides its own software but is also compatible with MyScript Notes
Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design
2016-10-01
study of the resulting videos led to a new prosthetics-use taxonomy that is generalizable to various levels of amputation and terminal devices. The...taxonomy was applied to classification of the recorded videos via custom tagging software with a MIDI controller interface. The software creates...a motion capture studio and video cameras to record accurate and detailed upper body motion during a series of standardized tasks. These tasks are
Advanced Extravehicular Mobility Unit Informatics Software Design
NASA Technical Reports Server (NTRS)
Wright, Theodore
2014-01-01
This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.
Automated software development workstation
NASA Technical Reports Server (NTRS)
Prouty, Dale A.; Klahr, Philip
1988-01-01
A workstation is being developed that provides a computational environment for all NASA engineers across application boundaries, automates reuse of existing NASA software and designs, and efficiently and effectively allows new programs and/or designs to be developed, catalogued, and reused. The generic workstation is made domain-specific by specializing the user interface, capturing engineering design expertise for the domain, and constructing and using a library of pertinent information. The incorporation of software reusability principles and expert system technology into this workstation provides the obvious benefits of increased productivity, improved software use and design reliability, and enhanced engineering quality by bringing engineering to higher levels of abstraction based on a well-tested and classified library.
Big Software for Big Data: Scaling Up Photometry for LSST (Abstract)
NASA Astrophysics Data System (ADS)
Rawls, M.
2017-06-01
(Abstract only) The Large Synoptic Survey Telescope (LSST) will capture mosaics of the sky every few nights, each containing more data than your computer's hard drive can store. As a result, the software to process these images is as critical to the science as the telescope and the camera. I discuss the algorithms and software being developed by the LSST Data Management team to handle such a large volume of data. All of our work is open source and available to the community. Once LSST comes online, our software will produce catalogs of objects and a stream of alerts. These will bring exciting new opportunities for follow-up observations and collaborations with LSST scientists.
Virtual Exercise Training Software System
NASA Technical Reports Server (NTRS)
Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.
2018-01-01
The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture at a small form factor and minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.
FreeTure: A Free software to capTure meteors for FRIPON
NASA Astrophysics Data System (ADS)
Audureau, Yoan; Marmo, Chiara; Bouley, Sylvain; Kwon, Min-Kyung; Colas, François; Vaubaillon, Jérémie; Birlan, Mirel; Zanda, Brigitte; Vernazza, Pierre; Caminade, Stephane; Gattecceca, Jérôme
2014-02-01
The Fireball Recovery and Interplanetary Observation Network (FRIPON) is a French project, started in 2014, that will monitor the sky using 100 all-sky cameras to detect meteors and to retrieve the related meteorites on the ground. Several detection software packages already exist; some are proprietary, and some are hardware dependent. We present here the open-source software for meteor detection to be installed on the FRIPON network's stations. The software will run on Linux with gigabit Ethernet cameras, and we plan to make it cross-platform. This paper focuses on the meteor detection method used for the pipeline development and on the software's present capabilities.
Wang, Zhen; Yuan, Xinxin; Cong, Shan; Chen, Zhigang; Li, Qingwen; Geng, Fengxia; Zhao, Zhigang
2018-05-02
Air pollution is one of the most serious issues affecting the world today. Instead of expensive and energy-intensive air filtering devices, a fiber-based transparent air filter coated on a window screen is seen as one of the state-of-the-art filtration technologies to combat this seriously growing problem, delivering the advantages of simplicity, convenience, and high filtering efficiency. However, such window screens are currently limited to particulate matter (PM) filtration and are ineffective against other air pollutants. Here, we report the use of a new type of color-changing fiber, porous Prussian blue analogue (CuHCF)/polymer composite microfibers, for transparent window screens for air pollutant filtration. To increase pollutant capture, pores and dimples are purposely introduced into the fibers using binary solvent systems through a nonsolvent-induced phase separation mechanism. Such composite microfibers overcome some of the limitations of previously used fibers and can simultaneously capture PM2.5, PM10, and NH3 with high efficiency. More interestingly, a distinct color change is observed upon exposure to air pollutants in such window screens, which provides the multifunctional capability of simultaneous pollutant capture and naked-eye screening of the pollutant amount. Specifically, in the case of long-term exposure to low-concentration NH3, the symbol displayed in such window screens changes from yellow to brown, and the coloration rate is directly controlled by the NH3 concentration, which may serve as a careful reminder for people who are repeatedly exposed to low-concentration ammonia gas (chronic poisoning). In contrast, after short-term exposure to a high concentration of ammonia gas, the yellow symbol immediately becomes blackened, which provides timely information about the risk of acute ammonia poisoning or even ammonia explosion. Further spectroscopic results show that the chromatic behaviors in response to different concentrations of NH3 are fundamentally different, which is related to the different locations of ammonia in the CuHCF lattice, either in its interstitial sites or at the Fe(CN)6 vacancy sites, largely distinguished by the absence or presence of atmospheric moisture.
Database Access Manager for the Software Engineering Laboratory (DAMSEL) user's guide
NASA Technical Reports Server (NTRS)
1990-01-01
Operating instructions for the Database Access Manager for the Software Engineering Laboratory (DAMSEL) system are presented. Step-by-step instructions for performing various data entry and report generation activities are included. Sample sessions showing the user interface display screens are also included. Instructions for generating reports are accompanied by sample outputs for each of the reports. The document groups the available software functions by the classes of users that may access them.
Data Standards for Flow Cytometry
SPIDLEN, JOSEF; GENTLEMAN, ROBERT C.; HAALAND, PERRY D.; LANGILLE, MORGAN; MEUR, NOLWENN LE; OCHS, MICHAEL F.; SCHMITT, CHARLES; SMITH, CLAYTON A.; TREISTER, ADAM S.; BRINKMAN, RYAN R.
2009-01-01
Flow cytometry (FCM) is an analytical tool widely used for cancer and HIV/AIDS research, and treatment, stem cell manipulation and detecting microorganisms in environmental samples. Current data standards do not capture the full scope of FCM experiments and there is a demand for software tools that can assist in the exploration and analysis of large FCM datasets. We are implementing a standardized approach to capturing, analyzing, and disseminating FCM data that will facilitate both more complex analyses and analysis of datasets that could not previously be efficiently studied. Initial work has focused on developing a community-based guideline for recording and reporting the details of FCM experiments. Open source software tools that implement this standard are being created, with an emphasis on facilitating reproducible and extensible data analyses. As well, tools for electronic collaboration will assist the integrated access and comprehension of experiments to empower users to collaborate on FCM analyses. This coordinated, joint development of bioinformatics standards and software tools for FCM data analysis has the potential to greatly facilitate both basic and clinical research—impacting a notably diverse range of medical and environmental research areas. PMID:16901228
Towards automated traceability maintenance
Mäder, Patrick; Gotel, Orlena
2012-01-01
Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided. PMID:23471308
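The following is a hypothetical, heavily simplified sketch of the rule-based idea described above (not the authors' tool): captured model-change events are grouped into a development activity and matched against predefined rules that prescribe how the impacted traceability relations should be updated.

```python
# Simplified rule matching over captured UML change events.
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    kind: str          # e.g. "add_class", "rename_class", "delete_class"
    element: str       # UML element identifier

# rule name -> (event kinds that make up the activity, update action for trace links)
RULES = {
    "element_renamed": ({"rename_class"}, "retain links, update link label"),
    "element_replaced": ({"delete_class", "add_class"}, "redirect links to the new element"),
}

def classify_activity(events: list[ChangeEvent]) -> str:
    """Match the set of captured event kinds against the predefined rules."""
    kinds = {e.kind for e in events}
    for rule, (pattern, action) in RULES.items():
        if pattern == kinds:
            return f"{rule}: {action}"
    return "no rule matched: flag affected links for manual review"

print(classify_activity([ChangeEvent("delete_class", "Order"),
                         ChangeEvent("add_class", "PurchaseOrder")]))
```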
Valdés-Martiní, José R; Marrero-Ponce, Yovani; García-Jacas, César R; Martinez-Mayorga, Karina; Barigye, Stephen J; Vaz d'Almeida, Yasser Silveira; Pham-The, Hai; Pérez-Giménez, Facundo; Morell, Carlos A
2017-06-07
In previous reports, Marrero-Ponce et al. proposed algebraic formalisms for characterizing topological (2D) and chiral (2.5D) molecular features through atom- and bond-based ToMoCoMD-CARDD (acronym for Topological Molecular Computational Design-Computer Aided Rational Drug Design) molecular descriptors. These MDs codify molecular information based on the bilinear, quadratic and linear algebraic forms and the graph-theoretical electronic-density and edge-adjacency matrices in order to consider atom- and bond-based relations, respectively. These MDs have been successfully applied in the screening of chemical compounds of different therapeutic applications ranging from antimalarials, antibacterials, tyrosinase inhibitors and so on. To compute these MDs, a computational program with the same name was initially developed. However, this in house software barely offered the functionalities required in contemporary molecular modeling tasks, in addition to the inherent limitations that made its usability impractical. Therefore, the present manuscript introduces the QuBiLS-MAS (acronym for Quadratic, Bilinear and N-Linear mapS based on graph-theoretic electronic-density Matrices and Atomic weightingS) software designed to compute topological (0-2.5D) molecular descriptors based on bilinear, quadratic and linear algebraic forms for atom- and bond-based relations. The QuBiLS-MAS module was designed as standalone software, in which extensions and generalizations of the former ToMoCoMD-CARDD 2D-algebraic indices are implemented, considering the following aspects: (a) two new matrix normalization approaches based on double-stochastic and mutual probability formalisms; (b) topological constraints (cut-offs) to take into account particular inter-atomic relations; (c) six additional atomic properties to be used as weighting schemes in the calculation of the molecular vectors; (d) four new local-fragments to consider molecular regions of interest; (e) number of lone-pair electrons in chemical structure defined by diagonal coefficients in matrix representations; and (f) several aggregation operators (invariants) applied over atom/bond-level descriptors in order to compute global indices. This software permits the parallel computation of the indices, contains a batch processing module and data curation functionalities. This program was developed in Java v1.7 using the Chemistry Development Kit library (version 1.4.19). The QuBiLS-MAS software consists of two components: a desktop interface (GUI) and an API library allowing for the easy integration of the latter in chemoinformatics applications. The relevance of the novel extensions and generalizations implemented in this software is demonstrated through three studies. Firstly, a comparative Shannon's entropy based variability study for the proposed QuBiLS-MAS and the DRAGON indices demonstrates superior performance for the former. A principal component analysis reveals that the QuBiLS-MAS approach captures chemical information orthogonal to that codified by the DRAGON descriptors. Lastly, a QSAR study for the binding affinity to the corticosteroid-binding globulin using Cramer's steroid dataset is carried out. From these analyses, it is revealed that the QuBiLS-MAS approach for atom-pair relations yields similar-to-superior performance with regard to other QSAR methodologies reported in the literature. 
Therefore, the QuBiLS-MAS approach constitutes a useful tool for the diversity analysis of chemical compound datasets and high-throughput screening of structure-activity data.
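As a schematic of the algebraic forms mentioned above (not the exact QuBiLS-MAS definitions), a quadratic-form topological index can be written as q = w^T M w, where M is a graph-theoretic matrix of the molecule and w holds atomic property weights; the matrix and weights below are toy values.

```python
# Toy quadratic- and linear-form indices over a molecular graph matrix.
import numpy as np

# adjacency matrix of a hypothetical 4-atom chain A-B-C-D
M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

w = np.array([2.1, 1.0, 1.0, 3.5])     # hypothetical atomic property values

quadratic_index = w @ M @ w            # quadratic/bilinear aggregation over atom pairs
linear_index = np.ones(4) @ M @ w      # linear-form counterpart
print(quadratic_index, linear_index)
```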
Water Distribution System Risk Tool for Investment Planning (WaterRF Report 4332)
Product Description/Abstract The product consists of the Pipe Risk Screening Tool (PRST), and a report on the development and use of the tool. The PRST is a software-based screening aid to identify and rank candidate pipes for actions that range from active monitoring (including...
Touch Screen Tablets and Emergent Literacy
ERIC Educational Resources Information Center
Neumann, Michelle M.; Neumann, David L.
2014-01-01
The use of touch screen tablets by young children is increasing in the home and in early childhood settings. The simple tactile interface and finger-based operating features of tablets may facilitate preschoolers' use of tablet application software and support their educational development in domains such as literacy. This article reviews…
Designing Colorectal Cancer Screening Decision Support: A Cognitive Engineering Enterprise.
Militello, Laura G; Saleem, Jason J; Borders, Morgan R; Sushereba, Christen E; Haverkamp, Donald; Wolf, Steven P; Doebbeling, Bradley N
2016-03-01
Adoption of clinical decision support has been limited. Important barriers include an emphasis on algorithmic approaches to decision support that do not align well with clinical work flow and human decision strategies, and the expense and challenge of developing, implementing, and refining decision support features in existing electronic health records (EHRs). We applied decision-centered design to create a modular software application to support physicians in managing and tracking colorectal cancer screening. Using decision-centered design facilitates a thorough understanding of cognitive support requirements from an end user perspective as a foundation for design. In this project, we used an iterative design process, including ethnographic observation and cognitive task analysis, to move from an initial design concept to a working modular software application called the Screening & Surveillance App. The beta version is tailored to work with the Veterans Health Administration's EHR Computerized Patient Record System (CPRS). Primary care providers using the beta version Screening & Surveillance App more accurately answered questions about patients and found relevant information more quickly compared to those using CPRS alone. Primary care providers also reported reduced mental effort and rated the Screening & Surveillance App positively for usability.
Application of Polynomial Neural Networks to Classification of Acoustic Warfare Signals
1993-04-01
on Neural Networks, Vol. II, Jun’e, 1987. [66] Shynk, J.J., "Adaptive IIR filtering," IEEE ASSP Magazine, Vol. 6, No. 2, Apr. 1989. 175 I [67] Specht ...rows This is the size of the yellow capture window which will be displayed on the screen. The best setting for pixel-rows is two greater than exemplar...exemplar size of 4 to be captured by the PNN. The pixel-rows setting is 6, which allows all four rows of I the retina data to fit inside yellow capture
NASA Astrophysics Data System (ADS)
Kopps, Anna M.; Palsbøll, Per J.
2016-02-01
The assessment of the status of endangered species or populations typically draws generously on the plethora of population genetic software available to detect population genetic structuring. However, despite the many available analytical approaches, population genetic inference methods [of neutral genetic variation] essentially capture three basic processes: migration, random genetic drift and mutation. Consequently, different analytical approaches essentially capture the same basic processes and should yield consistent results.
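To illustrate one of the three processes named above, the sketch below runs a standard Wright-Fisher simulation of random genetic drift: the allele frequency wanders purely by binomial sampling in a finite population, with no migration or mutation in this minimal version.

```python
# Minimal Wright-Fisher drift simulation (neutral, no migration or mutation).
import numpy as np

rng = np.random.default_rng(42)

def drift_trajectory(n_diploid: int = 100, p0: float = 0.5, generations: int = 200):
    """Return the allele-frequency trajectory of one replicate population."""
    freqs = [p0]
    for _ in range(generations):
        copies = rng.binomial(2 * n_diploid, freqs[-1])   # sample 2N gene copies
        freqs.append(copies / (2 * n_diploid))
    return freqs

finals = [drift_trajectory()[-1] for _ in range(5)]
print("final allele frequencies in 5 replicate populations:", finals)
```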
High-Rate Data-Capture for an Airborne Lidar System
NASA Technical Reports Server (NTRS)
Valett, Susan; Hicks, Edward; Dabney, Philip; Harding, David
2012-01-01
A high-rate data system was required to capture the data for an airborne lidar system. A data system was developed that achieved a sustained data rate of up to 22 million (64-bit) events per second (1408 million bits per second), as well as short bursts (less than 4 s) at higher rates. All hardware used for the system was off-the-shelf, but carefully selected to achieve these rates. The system was used to capture laser fire, single-photon detection, and GPS data for the Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL). However, the system has applications for other laser altimeter systems (waveform-recording), mass spectrometry, X-ray radiometry imaging, high-background-rate ranging lidar, and other similar areas where very high-speed data capture is needed. The data capture software was used for the SIMPL instrument, which employs a micropulse, single-photon ranging measurement approach and has 16 data channels. The detected single photons come from two sources: photons reflected from the target and solar background photons. The instrument is non-gated, so background photons are acquired for a range window of 13 km and can comprise many times the number of target photons. The highest background rate occurs when the atmosphere is clear, the Sun is high, and the target is a highly reflective surface such as snow. Under these conditions, the total data rate for the 16 channels combined is expected to be approximately 22 million events per second. For each photon detection event, the data capture software reads the relative time of receipt, with respect to a one-per-second absolute time pulse from a GPS receiver, from an event timer card with 0.1-ns precision, and records that information to a RAID (Redundant Array of Independent Disks) storage device. The relative time of laser pulse firings must also be read and recorded with the same precision. Each of the four event timer cards handles the throughput from four of the channels. For each detection event, a flag is recorded that indicates the source channel. To accommodate the expected maximum count rate and also handle the other extreme of very low rates occurring during nighttime operations, the software requests a set amount of data from each of the event timer cards and buffers the data. The software notes if any of the cards did not return all the data requested and then accommodates that lower rate. The data are buffered to minimize the I/O overhead of writing the data to storage. Care was taken to optimize the reads from the cards, the speed of the I/O bus, and the RAID configuration.
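The following is a hypothetical sketch of the buffering strategy described above, not the SIMPL flight code: a reader thread polls each of four stand-in event-timer "cards" for a fixed request size, backs off on short reads, and a writer thread streams the buffered data to disk.

```python
# Simplified producer/consumer buffering for multi-channel event capture.
import queue
import random
import struct
import threading
import time

REQUEST_EVENTS = 4096
buffers: queue.Queue = queue.Queue(maxsize=64)

def read_card(card_id: int) -> bytes:
    """Stand-in for a driver call; returns up to REQUEST_EVENTS 64-bit event words."""
    n = random.randint(0, REQUEST_EVENTS)             # short read when the event rate is low
    return struct.pack(f"<{n}Q", *([card_id] * n))

def producer(stop: threading.Event) -> None:
    while not stop.is_set():
        for card in range(4):                         # four event timer cards
            data = read_card(card)
            if len(data) < REQUEST_EVENTS * 8:
                time.sleep(0.001)                     # card returned less than requested; back off
            if data:
                buffers.put(data)

def consumer(stop: threading.Event) -> None:
    with open("events.bin", "wb") as out:             # RAID-backed file in the real system
        while not stop.is_set() or not buffers.empty():
            try:
                out.write(buffers.get(timeout=0.1))
            except queue.Empty:
                pass

stop = threading.Event()
threads = [threading.Thread(target=producer, args=(stop,)),
           threading.Thread(target=consumer, args=(stop,))]
for t in threads:
    t.start()
time.sleep(0.5)
stop.set()
for t in threads:
    t.join()
```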
An oppositely charged insect exclusion screen with gap-free multiple electric fields
NASA Astrophysics Data System (ADS)
Matsuda, Yoshinori; Kakutani, Koji; Nonomura, Teruo; Kimbara, Junji; Kusakari, Shin-ichi; Osamura, Kazumi; Toyoda, Hideyoshi
2012-12-01
An electric field screen was constructed to examine insect attraction mechanisms in multiple electric fields generated inside the screen. The screen consisted of two parallel insulated conductor wires (ICWs) charged with equal but opposite voltages and two separate grounded nets connected to each other and placed on each side of the ICW layer. Insects released inside the fields were charged either positively or negatively as a result of electricity flow from or to the insect, respectively. The force generated between the charged insects and opposite ICW charges was sufficient to capture all insects.
Samal, Himanshu Bhusan; Das, Jugal Kishore; Mahapatra, Rajani Kanta; Suar, Mrutyunjay
2015-01-01
The Mur enzymes of the peptidoglycan biosynthesis pathway constitute ideal targets for the design of new classes of antimicrobial inhibitors in Gram-negative bacteria. We built a homology model of MurD of Salmonella typhimurium LT2 using MODELLER (9v12) software. The homology model was subjected to energy minimization by a molecular dynamics (MD) simulation study with GROMACS software for a simulation time of 20 ns in a water environment. The model was then used in a virtual screening study against the ZINC database using DOCK Blaster software. An inhibition assay of the best inhibitor, 3-(aminomethyl)-N-(4-methoxyphenyl)aniline, by flow cytometric analysis revealed effective inhibition of peptidoglycan biosynthesis. Results from this study provide new insights for the molecular understanding and development of new antibacterial drugs against the pathogen. Copyright © 2015 Elsevier Inc. All rights reserved.
Tank Monitoring and Document control System (TMACS) As Built Software Design Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
GLASSCOCK, J.A.
This document describes the software design for the Tank Monitor and Control System (TMACS). This document captures the existing as-built design of TMACS as of November 1999. It will be used as a reference document by the system maintainers who will be maintaining and modifying the TMACS functions as necessary. The heart of the TMACS system is the "point-processing" functionality, where a sample value is received from the field sensors and the value is analyzed, logged, or alarmed as required. This Software Design Document focuses on the point-processing functions.
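The point-processing path described here, receive a sample from a field sensor, then analyze, log, or alarm, reduces to a small check against configured limits. A minimal sketch follows; the point names, limit values, and handler hooks are invented for illustration and are not taken from TMACS.

```python
from dataclasses import dataclass

@dataclass
class Point:
    name: str
    low_alarm: float
    high_alarm: float

def process_sample(point, value, log, alarm):
    """Analyze one sample value for a monitored point, then log and/or alarm.

    `log` and `alarm` are callables supplied by the caller (e.g. writers to a
    history file and an operator console); the thresholds are illustrative.
    """
    log(f"{point.name}={value}")
    if value < point.low_alarm or value > point.high_alarm:
        alarm(f"{point.name} out of range: {value} "
              f"(limits {point.low_alarm}..{point.high_alarm})")

# Example: a tank temperature point with made-up limits
tank_temp = Point("TK-101-TEMP", low_alarm=10.0, high_alarm=90.0)
process_sample(tank_temp, 95.2, log=print, alarm=print)
```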
Observations on online educational materials for powder diffraction crystallography software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toby, B. H.
2010-10-01
This article presents a series of approaches used to educate potential users of crystallographic software for powder diffraction. The approach that has been most successful in the author's opinion is the web lecture, where an audio presentation is coupled to a video-like record of the contents of the presenter's computer screen.
Educational Software Employing Group Competition Using an Interactive Electronic Whiteboard
ERIC Educational Resources Information Center
Otsuki, Yoko; Bandoh, Hirokazu; Kato, Naoki; Indurkhya, Bipin; Nakagawa, Masaki
2004-01-01
This article presents a design of educational software employing group competition using a large interactive electronic whiteboard, and a report on its experimental use. Group competition and collaboration are useful methods to cultivate originality and communication skills. To share the same space, the same large screen, and face-to-face…
Designing Better Camels: Developing Effective Documentation for Computer Software.
ERIC Educational Resources Information Center
Zacher, Candace M.
This guide to the development of effective documentation for users of computer software begins by identifying five types of documentation, i.e., training manuals, user guides, tutorials, on-screen help comments, and troubleshooting manuals. Six steps in the development process are then outlined and briefly described: (1) planning and preparation;…
Captivate MenuBuilder: Creating an Online Tutorial for Teaching Software
ERIC Educational Resources Information Center
Yelinek, Kathryn; Tarnowski, Lynn; Hannon, Patricia; Oliver, Susan
2008-01-01
In this article, the authors, students in an instructional technology graduate course, describe a process to create an online tutorial for teaching software. They created the tutorial for a cyber school's use. Five tutorial modules were linked together through one menu screen using the MenuBuilder feature in the Adobe Captivate program. The…
Software Graphical User Interface For Analysis Of Images
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn
1992-01-01
CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.
Transit safety retrofit package development : applications requirements document.
DOT National Transportation Integrated Search
2014-05-01
This Application Requirements Document for the Transit Safety Retrofit Package (TRP) Development captures the system, hardware and software requirements towards fulfilling the technical objectives stated within the contract. To achieve the objective ...
Pevzner, Yuri; Frugier, Emilie; Schalk, Vinushka; Caflisch, Amedeo; Woodcock, H Lee
2014-09-22
Web-based user interfaces to scientific applications are important tools that allow researchers to utilize a broad range of software packages with just an Internet connection and a browser. One such interface, CHARMMing (CHARMM interface and graphics), facilitates access to the powerful and widely used molecular software package CHARMM. CHARMMing incorporates tasks such as molecular structure analysis, dynamics, multiscale modeling, and other techniques commonly used by computational life scientists. We have extended CHARMMing's capabilities to include a fragment-based docking protocol that allows users to perform molecular docking and virtual screening calculations either directly via the CHARMMing Web server or on computing resources using the self-contained job scripts generated via the Web interface. The docking protocol was evaluated by performing a series of "re-dockings" with direct comparison to top commercial docking software. Results of this evaluation showed that CHARMMing's docking implementation is comparable to many widely used software packages and validates the use of the new CHARMM generalized force field for docking and virtual screening.
Modeling the Object-Oriented Space Through Validated Measures
NASA Technical Reports Server (NTRS)
Neal, Ralph D.
1996-01-01
In order to truly understand software and the software development process, software measurement must be better understood. A beginning step toward a better understanding of software measurement is the categorization of measurements by some meaningful taxonomy. The most meaningful taxonomy would capture the basic nature of the object-oriented (O-O) space. The interesting characteristics of object-oriented software offer a starting point for such a categorization of measures. A taxonomy has been developed based on fourteen characteristics of object-oriented software gathered from the literature. This taxonomy allows us to easily see gaps and redundancies in the O-O measures. The taxonomy also clearly differentiates among taxa so that there is no ambiguity as to the taxon to which a measure belongs. The taxonomy has been populated with thirty-two measures that have been validated in the narrow sense of Fenton, using measurement theory with Zuse's augmentation.
ACES: Space shuttle flight software analysis expert system
NASA Technical Reports Server (NTRS)
Satterwhite, R. Scott
1990-01-01
The Analysis Criteria Evaluation System (ACES) is a knowledge based expert system that automates the final certification of the Space Shuttle onboard flight software. Guidance, navigation and control of the Space Shuttle through all its flight phases are accomplished by a complex onboard flight software system. This software is reconfigured for each flight to allow thousands of mission-specific parameters to be introduced and must therefore be thoroughly certified prior to each flight. This certification is performed in ground simulations by executing the software in the flight computers. Flight trajectories from liftoff to landing, including abort scenarios, are simulated and the results are stored for analysis. The current methodology of performing this analysis is repetitive and requires many man-hours. The ultimate goals of ACES are to capture the knowledge of the current experts and improve the quality and reduce the manpower required to certify the Space Shuttle onboard flight software.
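The certification analysis ACES automates amounts to checking simulated flight results against predefined analysis criteria. A generic sketch of that rule evaluation is shown below; the parameter names and bounds are illustrative placeholders, not the actual Shuttle certification criteria.

```python
def evaluate_criteria(results, criteria):
    """Compare simulation results to analysis criteria and report violations.

    `results` maps parameter names to values extracted from a simulated
    trajectory; `criteria` maps the same names to (min, max) bounds.
    """
    findings = []
    for name, (lo, hi) in criteria.items():
        value = results.get(name)
        if value is None:
            findings.append((name, "missing from simulation output"))
        elif not (lo <= value <= hi):
            findings.append((name, f"{value} outside [{lo}, {hi}]"))
    return findings

# Illustrative values only -- not real certification criteria.
criteria = {"max_dynamic_pressure_psf": (0, 819), "touchdown_speed_kts": (155, 205)}
results = {"max_dynamic_pressure_psf": 742.0, "touchdown_speed_kts": 212.0}
for name, problem in evaluate_criteria(results, criteria):
    print(f"FLAG {name}: {problem}")
```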
SU-E-P-05: Electronic Brachytherapy: A Physics Perspective On Field Implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pai, S; Ayyalasomayajula, S; Lee, S
2015-06-15
Purpose: We summarize our experience implementing a successful program of electronic brachytherapy at several dermatology clinics with the help of cloud-based software, in order to define the key program parameters and capture physics QA aspects. Optimally developed software helps the physicist in peer review and in qualifying the physical parameters. Methods: Using the XOFT™ Axxent™ electronic brachytherapy system in conjunction with cloud-based software, a process was set up to capture and record treatments. It was implemented initially at about 10 sites in California. For dosimetric purposes, the software facilitated storage of the physics parameters of surface applicators used in treatment and other source calibration parameters. In addition, the patient prescription, pathology, and other setup considerations were input by the radiation oncologist and the therapist. This facilitated physics planning of the treatment parameters and also an independent check of the dwell time. From 2013 to 2014, nearly 1500 such calculations were completed by a group of physicists. A total of 800 patients with multiple lesions were treated successfully during this period. The treatment log files were uploaded and documented in the software, which facilitated physics peer review of treatments per the standards in place by AAPM and ACR. Results: The program model was implemented successfully at multiple sites. The cloud-based software allowed for proper peer review and compliance of the program at 10 clinical sites. Dosimetry was done on 800 patients and executed in a timely fashion to suit the clinical needs. Accumulated physics data in the software from the clinics allows for robust analysis and future development. Conclusion: Electronic brachytherapy implementation experience from a quality assurance perspective was greatly enhanced by using cloud-based software. The comprehensive database will pave the way for future developments to yield superior physics outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guyer, H.B.; McChesney, C.A.
The overall primary objective of HDAR is to create a repository of historical personnel security documents and provide the functionality needed for archival and retrieval use by other software modules and application users of the DISS/ET system. The software product to be produced from this specification is the Historical Document Archival and Retrieval Subsystem. The product will provide the functionality to capture, retrieve, and manage documents currently contained in the personnel security folders in DOE Operations Offices' vaults at various locations across the United States. The long-term plan for DISS/ET includes the requirement to allow for capture and storage of arbitrary, currently undefined, clearance-related documents that fall outside the scope of the "cradle-to-grave" electronic processing provided by DISS/ET. However, this requirement is not within the scope of the requirements specified in this document.
Salazar-Gamarra, Rodrigo; Seelaus, Rosemary; da Silva, Jorge Vicente Lopes; da Silva, Airton Moreira; Dib, Luciano Lauria
2016-05-25
The aim of this study is to present the development of a new technique to obtain 3D models using photogrammetry with a mobile device and free software, as a method for making digital facial impressions of patients with maxillofacial defects for the final purpose of 3D printing of facial prostheses. With the use of a mobile device, free software, and a photo capture protocol, 2D captures of the anatomy of a patient with a facial defect were transformed into a 3D model. The resultant digital models were evaluated for visual and technical integrity. The technical process and resultant models were described and analyzed for technical and clinical usability. Generating 3D models to make digital face impressions was possible by the use of photogrammetry with photos taken by a mobile device. The facial anatomy of the patient was reproduced as a *.3dp and a *.stl file with no major irregularities. 3D printing was possible. An alternative method for capturing facial anatomy is possible using a mobile device for the purpose of obtaining and designing 3D models for facial rehabilitation. Further studies should be conducted to compare 3D modeling among different techniques and systems. Free software and low-cost equipment could be a feasible solution for obtaining 3D models for making digital face impressions for maxillofacial prostheses, improving access for clinical centers that do not already have high-cost technology.
Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K
2014-09-04
In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
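The screening step described, a trained classifier flagging a small fraction of a large MOF library for expensive simulation, can be sketched with a generic machine-learning library. The descriptors, labels, model choice, and thresholds below are synthetic illustrations, not the published QSPR models or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic descriptors (standing in for e.g. void fraction, surface area, pore
# size) and synthetic labels standing in for "CO2 uptake > 1 mmol/g at 0.15 bar".
X_train = rng.random((5000, 3))
y_train = (0.6 * X_train[:, 0] + 0.4 * X_train[:, 1] > 0.55).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Screen a much larger hypothetical library and keep only flagged candidates.
X_library = rng.random((200_000, 3))
scores = clf.predict_proba(X_library)[:, 1]
flagged = np.flatnonzero(scores > 0.8)   # only these go on to costly simulation
print(f"flagged {flagged.size} of {X_library.shape[0]} structures")
```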
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (a collaborative web environment).
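The orchestration behavior described, decomposing a project into tasks, enforcing the correct sequence, and notifying a team member only when a task's predecessors are complete, can be sketched as a small dependency-driven scheduler. The task names and notification hook are invented for illustration and are not part of SDA.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def run_process(tasks, notify):
    """Release tasks in dependency order, notifying assignees when ready.

    `tasks` maps task name -> (assignee, set of prerequisite task names);
    `notify` is any callable, standing in for e-mail or portal alerts.
    """
    sorter = TopologicalSorter({name: deps for name, (_, deps) in tasks.items()})
    sorter.prepare()
    while sorter.is_active():
        for name in sorter.get_ready():
            assignee, _ = tasks[name]
            notify(f"{assignee}: task '{name}' is ready to start")
            sorter.done(name)   # in a real system, done() fires on task completion

# Hypothetical slice of a software process
tasks = {
    "write requirements": ("analyst", set()),
    "design":             ("architect", {"write requirements"}),
    "code review":        ("peer", {"design"}),
}
run_process(tasks, notify=print)
```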
Sun, Yunan; Zhou, Hui; Zhu, Hongmei; Leung, Siu-wai
2016-01-25
Sirtuin 1 (SIRT1) is a nicotinamide adenine dinucleotide-dependent deacetylase, and its dysregulation can lead to ageing, diabetes, and cancer. From 346 experimentally confirmed SIRT1 inhibitors, an inhibitor structure pattern was generated by inductive logic programming (ILP) with DMax Chemistry Assistant software. The pattern contained amide, amine, and hetero-aromatic five-membered rings, each of which had a hetero-atom and an unsubstituted atom at a distance of 2. According to this pattern, a ligand-based virtual screening of 1 444 880 active compounds from Chinese herbs identified 12 compounds as inhibitors of SIRT1. Three compounds (ZINC08790006, ZINC08792229, and ZINC08792355) had high affinity (-7.3, -7.8, and -8.6 kcal/mol, respectively) for SIRT1 as estimated by molecular docking software AutoDock Vina. This study demonstrated a use of ILP and background knowledge in machine learning to facilitate virtual screening.
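The final selection step, keeping pattern-matching compounds whose docking scores indicate high affinity, reduces to a simple filter over (compound, score) pairs. The three ZINC scores come from the abstract; the extra compound and the cutoff value are illustrative additions.

```python
# Docking scores in kcal/mol: more negative means higher estimated affinity.
docked = {
    "ZINC08790006": -7.3,
    "ZINC08792229": -7.8,
    "ZINC08792355": -8.6,
    "EXAMPLE-WEAK": -4.1,   # hypothetical compound that would be rejected
}

AFFINITY_CUTOFF = -7.0  # illustrative threshold, not taken from the study

hits = sorted(
    (name for name, score in docked.items() if score <= AFFINITY_CUTOFF),
    key=lambda name: docked[name],
)
print(hits)  # ['ZINC08792355', 'ZINC08792229', 'ZINC08790006']
```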
NASA Astrophysics Data System (ADS)
Jayaraman, Shrisudersan; Baeck, Sung-Hyeon; Jaramillo, Thomas F.; Kleiman-Shwarsctein, Alan; McFarland, Eric W.
2005-06-01
An automated system for high-throughput electrochemical synthesis and screening of fuel cell electro-oxidation catalysts is described. This system consists of an electrode probe that contains counter and reference electrodes that can be positioned inside an array of electrochemical cells created within a polypropylene block. The electrode probe is attached to an automated X-Y-Z motion system. An externally controlled potentiostat is used to apply the electrochemical potential to the catalyst substrate. The motion and electrochemical control are integrated using a user-friendly software interface. During automated synthesis, the deposition potential and/or current may be controlled by a pulse program triggered by the software using a data acquisition board. The screening includes automated experiments to obtain cyclic voltammograms. As an example, a platinum-tungsten oxide (Pt-WO3) library was synthesized and characterized for reactivity towards methanol electro-oxidation.
Localization-based super-resolution imaging meets high-content screening.
Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste
2017-12-01
Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.
The Warfighter Associate: Decision-Support and Metrics for Mission Command
2013-01-01
complex situations can be captured it makes sense to use software to provide this important adjunct to complex human cognitive problems. As a software...tasks that could distract the user from the important events occurring. An Associate System also observes the actions undertaken by a human operator...the Commander's Critical Information Requirements. It is important to note that the Warfighter Associate maintains a human-in-the-loop for decision
Agile Software Development in Defense Acquisition: A Mission Assurance Perspective
2012-03-23
based information retrieval system, we might say that this program works like a hive of bees, going out for pollen and bringing it back to the hive...developers. Six Sigma® is registered in the U.S. Patent and Trademark Office by Motorola. Major areas in a typical software...requirements - capturing and evaluating quality metrics, identifying common problem areas. Despite its positive impact on quality, pair programming
Results obtained with a low cost software-based audiometer for hearing screening.
Ferrari, Deborah Viviane; Lopez, Esteban Alejandro; Lopes, Andrea Cintra; Aiello, Camila Piccini; Jokura, Pricila Reis
2013-07-01
The implementation of hearing screening programs can be facilitated by reducing operating costs, including the cost of equipment. The Telessaúde (TS) audiometer is a low-cost, software-based, and easy-to-use piece of equipment for conducting audiometric screening. To evaluate the TS audiometer for conducting audiometric screening, a prospective randomized study was performed. Sixty subjects, divided into those who did not have (group A, n = 30) and those who had otologic complaints (group B, n = 30), underwent audiometric screening with conventional and TS audiometers in a randomized order. Pure tones at 25 dB HL were presented at frequencies of 500, 1000, 2000, and 4000 Hz. A "fail" result was recorded when the individual failed to respond to at least one of the stimuli. Pure-tone audiometry was also performed on all participants. The concordance of the results of screening with both audiometers was evaluated. The sensitivity, specificity, and positive and negative predictive values of screening with the TS audiometer were calculated. For group A, 100% of the ears tested passed the screening. For group B, "pass" results were obtained in 34.2% (TS) and 38.3% (conventional) of the ears tested. The agreement between procedures (TS vs. conventional) ranged from 93% to 98%. For group B, screening with the TS audiometer showed 95.5% sensitivity, 90.4% specificity, and positive and negative predictive values equal to 94.9% and 91.5%, respectively. The results of the TS audiometer were similar to those obtained with the conventional audiometer, indicating that the TS audiometer can be used for audiometric screening.
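The reported agreement statistics follow from a standard 2x2 comparison against the conventional audiometer as the reference. A short sketch of those definitions is given below; the counts are invented to show the arithmetic and are not the study's data.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table.

    tp/fp/fn/tn are counts of ears where the test audiometer's pass/fail
    result is compared against the reference (conventional) audiometer.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only -- not the counts from the study.
print(screening_metrics(tp=42, fp=2, fn=2, tn=21))
```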
ERIC Educational Resources Information Center
Daugherty, Lindsay; Dossani, Rafiq; Johnson, Erin-Elizabeth; Wright, Cameron
2014-01-01
Conversations about what constitutes "developmentally appropriate" use of technology in early childhood education have, to date, focused largely on a single, blunt measure--screen time--that fails to capture important nuances, such as what type of media a child is accessing and whether technology use is taking place solo or with peers.…
Conceptual schematic for capture of biomethane released from hydroelectric power facilities.
Kikuchi, R; Amaral, P Bingre do
2008-09-01
Though dam-related biomethane was identified in the 1960s, its capture has not been sufficiently discussed. Captured biomethane could be burned to produce energy, and burning turns its carbon into CO2, which is a far less potent greenhouse gas; this paper therefore aims to technically discuss the capture and use of dam-related biomethane. A great number of bubbles would be formed by the rapid drop in water pressure (i.e., cavitation) after turbine passage, so it is proposed to capture methane-bearing bubbles by means of a flow tube for adjusting residence time and hydrophilic screens for trapping these bubbles. The results of the performed calculation show that biomethane can be trapped at a yield of 60%.
Johnson, S R; Leo, P J; McInerney-Leo, A M; Anderson, L K; Marshall, M; McGown, I; Newell, F; Brown, M A; Conwell, L S; Harris, M; Duncan, E L
2018-06-01
To assess the utility of whole-exome sequencing (WES) for mutation detection in maturity-onset diabetes of the young (MODY) and congenital hyperinsulinism (CHI). MODY and CHI are the two commonest monogenic disorders of glucose-regulated insulin secretion in childhood, with 13 causative genes known for MODY and 10 causative genes identified for CHI. The large number of potential genes makes comprehensive screening using traditional methods expensive and time-consuming. Ten subjects with MODY and five with CHI with known mutations underwent WES using two different exome capture kits (Nimblegen SeqCap EZ Human v3.0 Exome Enrichment Kit, Nextera Rapid Capture Exome Kit). Analysis was blinded to previously identified mutations, and included assessment for large deletions. The target capture of five exome capture technologies was also analyzed using sequencing data from >2800 unrelated samples. Four of five MODY mutations were identified using Nimblegen (including a large deletion in HNF1B). Although targeted, one mutation (in INS) had insufficient coverage for detection. Eleven of eleven mutations (six MODY, five CHI) were identified using Nextera Rapid (including the previously missed mutation). On reconciliation, all mutations concorded with previous data and no additional variants in MODY genes were detected. There were marked differences in the performance of the capture technologies. WES can be useful for screening for MODY/CHI mutations, detecting both point mutations and large deletions. However, capture technologies require careful selection. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Internet-Based Cervical Cancer Screening Program
2008-05-01
information technology have facilitated the Internet transmission and archival storage of digital images and other clinical information. The combination of...Phase included: 1) development of hardware, software, and interfaces between a computerized scanning device and Internet-linked servers and reading...Award Number: W81XWH-04-C-0083.
Mobile phones for retinopathy of prematurity screening in Lagos, Nigeria, sub-Saharan Africa.
Oluleye, Tunji S; Rotimi-Samuel, Adekunle; Adenekan, Adetunji
2016-01-01
Retinopathy of prematurity (ROP), thought to be rare in Nigeria, sub-Saharan Africa, has been reported in recent studies. Developing cost-effective screening is crucial for detecting retinal changes amenable to treatment. This study describes the use of an iPhone combined with a 20-D lens in screening for ROP in Lagos, Nigeria. The ROP screening program was approved by the Lagos University Teaching Hospital Ethical Committee. Preterm infants with a birthweight of less than 1.5 kg or gestational age of less than 32 weeks were screened. In conjunction with the neonatologist, topical tropicamide (0.5%) and phenylephrine (2.5%) were used to dilate the pupils. A pediatric lid speculum was used. Indirect ophthalmoscopy was used to examine the fundus to ensure there were no missed diagnoses. An iPhone 5 with a 20-D lens was used to examine the fundus. The app Filmic Pro was launched in video mode. The camera flash served as the source of illumination; its intensity was controlled by the app. The 20-D lens was used to capture the image of the retina, which was picked up by the camera system of the mobile phone. Another app, Aviary, was used to edit the picture. The images captured by the system were satisfactory for staging and determining the need for treatment. An iPhone combined with a 20-D lens appears to be useful in screening for ROP in resource-poor settings. More studies are needed in this area.
General Mode Scanning Probe Microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somnath, Suhas; Jesse, Stephen
A critical part of SPM measurements is the information transfer from the probe-sample junction to the measurement system. Current information transfer methods heavily compress the information-rich data stream by averaging the data over a time interval, or via heterodyne detection approaches such as lock-in amplifiers and phase-locked loops. As a consequence, highly valuable information at the sub-microsecond time scales or information from frequencies outside the measurement band is lost. We have developed a fundamentally new approach called General Mode (G-mode), where we can capture the complete information stream from the detectors in the microscope. The availability of the complete information allows the microscope operator to analyze the data via information-theory analysis or comprehensive physical models. Furthermore, the complete data stream enables advanced data-driven filtering algorithms, multi-resolution imaging, ultrafast spectroscopic imaging, spatial mapping of multidimensional variability in material properties, etc. Though we applied this approach to scanning probe microscopy, the general philosophy of G-mode can be applied to many other modes of microscopy. G-mode data is captured by completely custom software written in LabVIEW and Matlab. The software generates the waveforms to electrically, thermally, or mechanically excite the SPM probe. It handles real-time communications with the microscope software for operations such as moving the SPM probe position and also controls other instrumentation hardware. The software also controls multiple variants of high-speed data acquisition cards to excite the SPM probe with the excitation waveform and simultaneously measure multiple channels of information from the microscope detectors at sampling rates of 1-100 MHz. The software also saves the raw data to the computer and allows the microscope operator to visualize processed or filtered data during the experiment. The software performs all these features while offering a user-friendly interface.
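At its core, the capture path described, generate an excitation waveform, digitize the full detector stream, and store the raw record for later filtering or modeling, can be sketched with synthetic data. The sample rate, drive frequency, signal model, and file name below are illustrative assumptions; no real DAQ driver is involved.

```python
import numpy as np

FS = 4_000_000        # sample rate in Hz (illustrative; G-mode cites 1-100 MHz)
DRIVE_HZ = 70_000     # cantilever excitation frequency (illustrative)
DURATION = 0.01       # seconds of capture

t = np.arange(int(FS * DURATION)) / FS
excitation = np.sin(2 * np.pi * DRIVE_HZ * t)          # waveform sent to the probe

# Stand-in for the digitized detector response: drive plus broadband noise.
response = 0.8 * np.sin(2 * np.pi * DRIVE_HZ * t + 0.3) + 0.05 * np.random.randn(t.size)

# Save the complete raw stream (no lock-in averaging), so any filter or
# physical model can be applied offline.
np.savez("gmode_capture_0000.npz", t=t, excitation=excitation, response=response)
```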
Can You See Me Now Visualizing Battlefield Facial Recognition Technology in 2035
2010-04-01
County Sheriff's Department, use certain measurements such as the distance between eyes, the length of the nose, or the shape of the ears. However...captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software...stream of data. High-resolution video systems, such as those described below, will be able to capture orders of magnitude more data in one video frame
Lowering the Barrier to Reproducible Research by Publishing Provenance from Common Analytical Tools
NASA Astrophysics Data System (ADS)
Jones, M. B.; Slaughter, P.; Walker, L.; Jones, C. S.; Missier, P.; Ludäscher, B.; Cao, Y.; McPhillips, T.; Schildhauer, M.
2015-12-01
Scientific provenance describes the authenticity, origin, and processing history of research products and promotes scientific transparency by detailing the steps in computational workflows that produce derived products. These products include papers, findings, input data, software products to perform computations, and derived data and visualizations. The geosciences community values this type of information, and, at least theoretically, strives to base conclusions on computationally replicable findings. In practice, capturing detailed provenance is laborious and thus has been a low priority; beyond a lab notebook describing methods and results, few researchers capture and preserve detailed records of scientific provenance. We have built tools for capturing and publishing provenance that integrate into analytical environments that are in widespread use by geoscientists (R and Matlab). These tools lower the barrier to provenance generation by automating capture of critical information as researchers prepare data for analysis, develop, test, and execute models, and create visualizations. The 'recordr' library in R and the `matlab-dataone` library in Matlab provide shared functions to capture provenance with minimal changes to normal working procedures. Researchers can capture both scripted and interactive sessions, tag and manage these executions as they iterate over analyses, and then prune and publish provenance metadata and derived products to the DataONE federation of archival repositories. Provenance traces conform to the ProvONE model extension of W3C PROV, enabling interoperability across tools and languages. The capture system supports fine-grained versioning of science products and provenance traces. By assigning global identifiers such as DOIs, researchers can cite the computational processes used to reach findings. And, finally, DataONE has built a web portal to search, browse, and clearly display provenance relationships between input data, the software used to execute analyses and models, and derived data and products that arise from these computations. This provenance is vital to interpretation and understanding of science, and provides an audit trail that researchers can use to understand and replicate computational workflows in the geosciences.
Assessing the Relative Risk of Aerocapture Using Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Percy, Thomas K.; Bright, Ellanee; Torres, Abel O.
2005-01-01
A recent study performed for the Aerocapture Technology Area in the In-Space Propulsion Technology Projects Office at the Marshall Space Flight Center investigated the relative risk of various capture techniques for Mars missions. Aerocapture has been proposed as a possible capture technique for future Mars missions but has been perceived by many in the community as a higher risk option as compared to aerobraking and propulsive capture. By performing a probabilistic risk assessment on aerocapture, aerobraking and propulsive capture, a comparison was made to uncover the projected relative risks of these three maneuvers. For mission planners, this knowledge will allow them to decide if the mass savings provided by aerocapture warrant any incremental risk exposure. The study focuses on a Mars Sample Return mission currently under investigation at the Jet Propulsion Laboratory (JPL). In each case (propulsive, aerobraking and aerocapture), the Earth return vehicle is inserted into Martian orbit by one of the three techniques being investigated. A baseline spacecraft was established through initial sizing exercises performed by JPL's Team X. While Team X design results provided the baseline and common thread between the spacecraft, in each case the Team X results were supplemented by historical data as needed. Propulsion, thermal protection, guidance, navigation and control, software, solar arrays, navigation and targeting and atmospheric prediction were investigated. A qualitative assessment of human reliability was also included. Results show that different risk drivers contribute significantly to each capture technique. For aerocapture, the significant drivers include propulsion system failures and atmospheric prediction errors. Software and guidance hardware contribute the most to aerobraking risk. Propulsive capture risk is mainly driven by anomalous solar array degradation and propulsion system failures. While each subsystem contributes differently to the risk of each technique, results show that there exists little relative difference in the reliability of these capture techniques although uncertainty for the aerocapture estimates remains high given the lack of in-space demonstration.
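The comparison rests on rolling subsystem-level failure probabilities up into a maneuver-level estimate. A toy Monte Carlo version of that roll-up, assuming independent subsystems, is sketched below; the subsystem names and probabilities are placeholders, not values from the study.

```python
import random

def maneuver_failure_prob(subsystem_probs, trials=200_000, seed=1):
    """Estimate maneuver failure probability, assuming independent subsystems.

    `subsystem_probs` maps a subsystem name to its per-mission failure
    probability; the maneuver is counted as failed if any subsystem fails.
    """
    rng = random.Random(seed)
    failures = sum(
        any(rng.random() < p for p in subsystem_probs.values())
        for _ in range(trials)
    )
    return failures / trials

# Placeholder numbers for illustration only.
aerocapture = {"propulsion": 0.004, "atmosphere_prediction": 0.006, "gnc": 0.002}
propulsive  = {"propulsion": 0.007, "solar_array_degradation": 0.004}
print(maneuver_failure_prob(aerocapture), maneuver_failure_prob(propulsive))
```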
Software tools for interactive instruction in radiologic anatomy.
Alvarez, Antonio; Gold, Garry E; Tobin, Brian; Desser, Terry S
2006-04-01
To promote active learning in an introductory Radiologic Anatomy course through the use of computer-based exercises. DICOM datasets from our hospital PACS system were transferred to a networked cluster of desktop computers in a medical school classroom. Medical students in the Radiologic Anatomy course were divided into four small groups and assigned to work on a clinical case for 45 minutes. The groups used iPACS viewer software, a free DICOM viewer, to view images and annotate anatomic structures. The classroom instructor monitored and displayed each group's work sequentially on the master screen by running SynchronEyes, a software tool for controlling PC desktops remotely. Students were able to execute the assigned tasks using the iPACS software with minimal oversight or instruction. Course instructors displayed each group's work on the main display screen of the classroom as the students presented the rationale for their decisions. The interactive component of the course received high ratings from the students and overall course ratings were higher than in prior years when the course was given solely in lecture format. DICOM viewing software is an excellent tool for enabling students to learn radiologic anatomy from real-life clinical datasets. Interactive exercises performed in groups can be powerful tools for stimulating students to learn radiologic anatomy.
Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R.; Paintsil, Elijah
2015-01-01
Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure (https://hubzero.org), an open source software platform. The hub database components support: (1) data management – the “databases” component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection – the “forms” component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration – the “dataviewer” component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child–caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team. PMID:26616131
VecScreen_plus_taxonomy: imposing a tax(onomy) increase on vector contamination screening.
Schäffer, Alejandro A; Nawrocki, Eric P; Choi, Yoon; Kitts, Paul A; Karsch-Mizrachi, Ilene; McVeigh, Richard
2018-03-01
Nucleic acid sequences in public databases should not contain vector contamination, but many sequences in GenBank do (or did) contain vectors. The National Center for Biotechnology Information uses the program VecScreen to screen submitted sequences for contamination. Additional tools are needed to distinguish true-positive (contamination) from false-positive (not contamination) VecScreen matches. A principal reason for false-positive VecScreen matches is that the sequence and the matching vector subsequence originate from closely related or identical organisms (for example, both originate in Escherichia coli). We collected information on the taxonomy of sources of vector segments in the UniVec database used by VecScreen. We used that information in two overlapping software pipelines for retrospective analysis of contamination in GenBank and for prospective analysis of contamination in new sequence submissions. Using the retrospective pipeline, we identified and corrected over 8000 contaminated sequences in the nonredundant nucleotide database. The prospective analysis pipeline has been in production use since April 2017 to evaluate some new GenBank submissions. Data on the sources of UniVec entries were included in release 10.0 (ftp://ftp.ncbi.nih.gov/pub/UniVec/). The main software is freely available at https://github.com/aaschaffer/vecscreen_plus_taxonomy. aschaffe@helix.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and are in the public domain in the US.
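The core idea, treating a VecScreen match as a likely false positive when the submitted sequence and the matching vector segment come from the same or a closely related organism, can be sketched as a lineage comparison. The data structures, rank threshold, and example taxa below are invented for illustration and are not the pipeline's actual interfaces.

```python
def is_likely_false_positive(query_lineage, vector_segment_lineage, min_shared_ranks=6):
    """Flag a match as a probable false positive if the lineages largely overlap.

    Lineages are lists ordered from domain to species; sharing the first
    `min_shared_ranks` ranks is used here as a simple proxy for "closely related".
    """
    shared = 0
    for a, b in zip(query_lineage, vector_segment_lineage):
        if a != b:
            break
        shared += 1
    return shared >= min_shared_ranks

ecoli = ["Bacteria", "Proteobacteria", "Gammaproteobacteria",
         "Enterobacterales", "Enterobacteriaceae", "Escherichia", "Escherichia coli"]
human = ["Eukaryota", "Chordata", "Mammalia",
         "Primates", "Hominidae", "Homo", "Homo sapiens"]

print(is_likely_false_positive(ecoli, ecoli))   # True: same source organism
print(is_likely_false_positive(human, ecoli))   # False: likely real contamination
```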
Software for Collaborative Use of Large Interactive Displays
NASA Technical Reports Server (NTRS)
Trimble, Jay; Shab, Thodore; Wales, Roxana; Vera, Alonso; Tollinger, Irene; McCurdy, Michael; Lyubimov, Dmitriy
2006-01-01
The MERBoard Collaborative Workspace, which is currently being deployed to support the Mars Exploration Rover (MER) missions, is the first instantiation of a new computing architecture designed to support collaborative and group computing using computing devices situated in NASA mission operations rooms. It is a software system for the generation of large-screen interactive displays by multiple users.
NASA Technical Reports Server (NTRS)
1989-01-01
Loredan Biomedical, Inc.'s LIDO, a computerized physical therapy system, was purchased by NASA in 1985 for evaluation as a Space Station Freedom exercise program. In 1986, while involved in an ARC muscle conditioning project, Malcom Bond, Loredan's chairman, designed an advanced software package for NASA which became the basis for LIDOSOFT software used in the commercially available system. The system employs a "proprioceptive" software program which perceives internal body conditions, induces perturbations to muscular effort and evaluates the response. Biofeedback on a screen allows a patient to observe his own performance.
Perez, Susan L; Paterniti, Debora A; Wilson, Machelle; Bell, Robert A; Chan, Man Shan; Villareal, Chloe C; Nguyen, Hien Huy; Kravitz, Richard L
2015-07-20
Little is known about the processes people use to find health-related information on the Internet or the individual characteristics that shape selection of information-seeking approaches. Our aim was to describe the processes by which users navigate the Internet for information about a hypothetical acute illness and to identify individual characteristics predictive of their information-seeking strategies. Study participants were recruited from public settings and agencies. Interested individuals were screened for eligibility using an online questionnaire. Participants listened to one of two clinical scenarios—consistent with influenza or bacterial meningitis—and then conducted an Internet search. Screen-capture video software captured Internet search mouse clicks and keystrokes. Each step of the search was coded as hypothesis testing (etiology), evidence gathering (symptoms), or action/treatment seeking (behavior). The coded steps were used to form a step-by-step pattern of each participant's information-seeking process. A total of 78 Internet health information seekers ranging from 21-35 years of age and who experienced barriers to accessing health care services participated. We identified 27 unique patterns of information seeking, which were grouped into four overarching classifications based on the number of steps taken during the search, whether a pattern consisted of developing a hypothesis and exploring symptoms before ending the search or searching an action/treatment, and whether a pattern ended with action/treatment seeking. Applying dual-processing theory, we categorized the four overarching pattern classifications as either System 1 (41%, 32/78), unconscious, rapid, automatic, and high capacity processing; or System 2 (59%, 46/78), conscious, slow, and deliberative processing. Using multivariate regression, we found that System 2 processing was associated with higher education and younger age. We identified and classified two approaches to processing Internet health information. System 2 processing, a methodical approach, most resembles the strategies for information processing that have been found in other studies to be associated with higher-quality decisions. We conclude that the quality of Internet health-information seeking could be improved through consumer education on methodical Internet navigation strategies and the incorporation of decision aids into health information websites.
Walker, Melissa J; Dorrestein, Annabel; Camacho, Jasmin J; Meckler, Lauren A; Silas, Kirk A; Hiller, Thomas; Haelewaters, Danny
2018-01-01
The Darién province in eastern Panama is one of the most unexplored and biodiverse regions in the world. The Chucantí Nature Reserve, in Serranía de Majé, consists of a diverse tropical cloud forest ecosystem. The aim of this research was to explore and study host associations of a tripartite system of bats, ectoparasitic flies on bats (Diptera, Streblidae), and ectoparasitic fungi (Ascomycota, Laboulbeniales) that use bat flies as hosts. We captured bats at Chucantí, screened each bat for presence of bat flies, and screened collected bat flies for presence of Laboulbeniales. We mistnetted for 68 mistnet hours and captured 227 bats representing 17 species. We captured Micronycteris schmidtorum, a species previously unreported in Darién. In addition, we encountered the rarely collected Platyrrhinus dorsalis, representing the westernmost report for this species. Of all captured bats, 148 carried bat flies (65%). The number of sampled bat flies was 437, representing 16 species. One species represents a new country record (Trichobius anducei) and five species represent first reports for Darién (Basilia anceps, Anatrichobius scorzai, Nycterophilia parnelli, T. johnsonae, T. parasiticus). All 74 bat fly species currently reported in Panama are presented in tabulated form. Of all screened bat flies, 30 bore Laboulbeniales fungi (7%). Based on both morphology and large ribosomal subunit (LSU) sequence data, we delimited 7 species of Laboulbeniales: Gloeandromyces nycteribiidarum (newly reported for Panama), G. pageanus, G. streblae, Nycteromyces streblidinus, and 3 undescribed species. Of the 30 infected flies, 21 were Trichobius joblingi. This species was the only host on which we observed double infections of Laboulbeniales. © M.J. Walker et al., published by EDP Sciences, 2018.
Gray scale enhances display readability of bitmapped documents
NASA Astrophysics Data System (ADS)
Ostberg, Olov; Disfors, Dennis; Feng, Yingduo
1994-05-01
Bitmapped images of high resolution, say 300 dpi rastered documents, stored in the memory of a PC are at best only borderline readable on the PC's display screen (say a 72 dpi VGA monitor). Results from a series of exploratory psycho-physical experiments, using the Adobe Photoshop® software, show that the readability can be significantly enhanced by making use of the monitor's capability to display shades of gray. It is suggested that such a gray scale adaptation module should be bundled with all software products for electronic document management. In fact, fax modems are already available in which this principle is employed, thereby making it possible to read incoming fax documents directly on the screen.
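The adaptation the authors advocate amounts to downsampling the 300-dpi bilevel bitmap to the monitor resolution with area averaging, so each screen pixel takes an intermediate gray level instead of pure black or white. A minimal sketch using Pillow follows; the file paths are hypothetical.

```python
from PIL import Image

SOURCE_DPI, SCREEN_DPI = 300, 72

img = Image.open("scanned_page.tif").convert("L")   # bilevel page as 8-bit gray
scale = SCREEN_DPI / SOURCE_DPI
target = (round(img.width * scale), round(img.height * scale))

# Area-averaging resampling blends black and white source pixels into
# intermediate gray levels, which is what improves on-screen readability.
screen_img = img.resize(target, resample=Image.Resampling.BOX)
screen_img.save("screen_view.png")
```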
Large-scale virtual screening on public cloud resources with Apache Spark.
Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola
2017-01-01
Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against approximately 2.2 million compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
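As a rough illustration of the MapReduce-style parallelization described above, the sketch below distributes a molecular library across Spark partitions and pipes each partition through an external docking command. The wrapper script name (dock.sh), its SMILES-in/score-out interface, and the file paths are assumptions for illustration; this is not the Spark-VS implementation itself.

    # Sketch of docking-based virtual screening parallelized with Apache Spark.
    # Assumption: './dock.sh' is a hypothetical wrapper around a docking engine that
    # reads SMILES lines on stdin and writes "smiles<TAB>score" lines on stdout.
    import subprocess
    from pyspark.sql import SparkSession

    def dock_partition(lines):
        # Run one external docking process per partition and stream molecules through it.
        proc = subprocess.Popen(["./dock.sh"], stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE, text=True)
        out, _ = proc.communicate("\n".join(lines))
        for row in out.splitlines():
            smiles, score = row.split("\t")
            yield smiles, float(score)

    spark = SparkSession.builder.appName("docking-screen").getOrCreate()
    library = spark.sparkContext.textFile("library.smi", minPartitions=200)
    top_hits = (library.mapPartitions(dock_partition)
                       .top(1000, key=lambda kv: kv[1]))   # keep best-scoring molecules
    spark.sparkContext.parallelize(top_hits).saveAsTextFile("top_hits")

Because each partition is an independent task, a failed docking run is simply re-executed by Spark on another node, which is the software-level fault tolerance the abstract relies on.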
Human Engineering Modeling and Performance Lab Study Project
NASA Technical Reports Server (NTRS)
Oliva-Buisson, Yvette J.
2014-01-01
The HEMAP (Human Engineering Modeling and Performance) Lab is a joint effort between the Industrial and Human Engineering group and the KAVE (Kennedy Advanced Visualizations Environment) group. The lab consists of a sixteen-camera system that is used to capture human motions and operational tasks, through the use of a Velcro suit equipped with sensors, and then simulate these tasks in an ergonomic software package known as Jack. The Jack software is able to identify potential risk hazards.
Building confidence and credibility amid growing model and computing complexity
NASA Astrophysics Data System (ADS)
Evans, K. J.; Mahajan, S.; Veneziani, C.; Kennedy, J. H.
2017-12-01
As global Earth system models are developed to answer an ever-wider range of science questions, software products that provide robust verification, validation, and evaluation must evolve in tandem. Measuring the degree to which these new models capture past behavior, predict the future, and provide the certainty of predictions is becoming ever more challenging for reasons that are generally well known, yet are still challenging to address. Two specific and divergent needs for analysis of the Accelerated Climate Model for Energy (ACME) model - but with a similar software philosophy - are presented to show how a model developer-based focus can address analysis needs during expansive model changes to provide greater fidelity and execute on multi-petascale computing facilities. A-PRIME is a python script-based quick-look overview of a fully-coupled global model configuration to determine quickly if it captures specific behavior before significant computer time and expense is invested. EVE is an ensemble-based software framework that focuses on verification of performance-based ACME model development, such as compiler or machine settings, to determine the equivalence of relevant climate statistics. The challenges and solutions for analysis of multi-petabyte output data are highlighted from the aspect of the scientist using the software, with the aim of fostering discussion and further input from the community about improving developer confidence and community credibility.
Automated registration of tail bleeding in rats.
Johansen, Peter B; Henriksen, Lars; Andresen, Per R; Lauritzen, Brian; Jensen, Kåre L; Juhl, Trine N; Tranholm, Mikael
2008-05-01
An automated system for registration of tail bleeding in rats using a camera and a user-designed PC-based software program has been developed. The live and processed images are displayed on the screen and are exported together with a text file for later statistical processing of the data, allowing calculation of, e.g., the number of bleeding episodes, bleeding times, and bleeding areas. Proof-of-principle was achieved when the camera captured the blood stream after infusion of rat whole blood into saline. Suitability was assessed by recording of bleeding profiles in heparin-treated rats, demonstrating that the system was able to capture on/off bleedings and that the data transfer and analysis were conducted successfully. Then, bleeding profiles were visually recorded by two independent observers simultaneously with the automated recordings after tail transection in untreated rats. Linear relationships were found in the number of bleedings, demonstrating, however, a statistically significant difference in the recording of bleeding episodes between observers. Also, the bleeding time was longer for visual compared to automated recording. No correlation was found between blood loss and bleeding time in untreated rats, but in heparinized rats a correlation was suggested. Finally, the blood loss correlated with the automated recording of bleeding area. In conclusion, the automated system has proven suitable for replacing visual recordings of tail bleedings in rats. Inter-observer differences can be eliminated, monotonous repetitive work avoided, and a higher throughput of animals achieved in less time. The automated system will lead to an increased understanding of the nature of bleeding following tail transection in different rodent models.
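The abstract does not disclose the image-processing details, so the following OpenCV sketch only illustrates the general idea of camera-based bleeding registration: count frames in which a red (blood-colored) region exceeds an area threshold and accumulate bleeding time and area. The file name, HSV thresholds, and area cutoff are illustrative assumptions, not calibrated values.

    # Generic sketch of camera-based bleeding registration: flag frames in which a
    # red (blood) region exceeds an area threshold and accumulate bleeding time/area.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("tail_bleeding.avi")     # placeholder file name
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    bleeding_frames, areas = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))      # low-hue reds
        mask |= cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))  # high-hue reds
        area = int(np.count_nonzero(mask))
        if area > 500:               # assumed minimum blood area in pixels
            bleeding_frames += 1
            areas.append(area)
    cap.release()
    print("bleeding time (s):", bleeding_frames / fps)
    print("mean bleeding area (px):", np.mean(areas) if areas else 0.0)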
Process control charts in infection prevention: Make it simple to make it happen.
Wiemken, Timothy L; Furmanek, Stephen P; Carrico, Ruth M; Mattingly, William A; Persaud, Annuradha K; Guinn, Brian E; Kelley, Robert R; Ramirez, Julio A
2017-03-01
Quality improvement is central to Infection Prevention and Control (IPC) programs. Challenges may occur when applying quality improvement methodologies like process control charts, often due to typical infection preventionists' limited exposure to these methods. Because of this, our team created an open-source database with a process control chart generator for IPC programs. The objectives of this report are to outline the development of the application and demonstrate its application using simulated data. We used Research Electronic Data Capture (REDCap Consortium, Vanderbilt University, Nashville, TN), R (R Foundation for Statistical Computing, Vienna, Austria), and R Studio Shiny (R Foundation for Statistical Computing) to create an open-source data collection system with automated process control chart generation. We used simulated data to test and visualize both in-control and out-of-control processes for commonly used metrics in IPC programs. The R code for implementing the control charts and the Shiny application can be found on our Web site (https://github.com/ul-research-support/spcapp). Screen captures of the workflow and simulated data indicating both common cause and special cause variation are provided. Process control charts can be easily developed based on individual facility needs using freely available software. By providing our work free to all interested parties, we hope that others will be able to harness the power and ease of use of the application for improving the quality of care and patient safety in their facilities. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
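The authors' generator is built with REDCap, R, and Shiny; purely to make the underlying statistic concrete, here is a minimal u-chart calculation (a control chart commonly used for infection rates per device-days) written in Python with illustrative data rather than the authors' code.

    # u-chart sketch for device-associated infections per 1,000 device-days.
    import numpy as np

    infections = np.array([3, 1, 4, 2, 0, 6, 2, 1])                    # events per month
    device_days = np.array([812, 790, 845, 760, 802, 798, 830, 815], float)

    n = device_days / 1000.0                    # exposure in units of 1,000 device-days
    u = infections / n                          # monthly rates
    center = infections.sum() / n.sum()         # pooled rate (center line)
    ucl = center + 3 * np.sqrt(center / n)      # upper control limits
    lcl = np.maximum(0.0, center - 3 * np.sqrt(center / n))

    for month, (rate, lo, hi) in enumerate(zip(u, lcl, ucl), start=1):
        flag = "SPECIAL CAUSE" if (rate > hi or rate < lo) else "in control"
        print(f"month {month:2d}: rate={rate:5.2f}  limits=[{lo:5.2f}, {hi:5.2f}]  {flag}")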
Diagnostic report acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Brooks, Everett G.; Rothman, Melvyn L.
1991-07-01
The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle on/off the report on the display screen. This report describes process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.
Kuder, Margaret; Goheen, Mary Jett; Dize, Laura; Barnes, Mathilda; Gaydos, Charlotte A
2015-05-01
The www.iwantthekit.org Web site provides Internet-based, at-home sexually transmitted infection screening. The Web site implemented an automated test result access system. To evaluate potential deleterious effects of the new system, we analyzed demographics, Web site usage, and treatment. The redesigned Web site captured more participant information, with no decrease in requests, kit returns, or treatment adherence.
ERIC Educational Resources Information Center
Young, Phillip; Young, Karen Holsey
2010-01-01
A 2 x 2 x 2 factorial design involving sex of superintendents, sex of applicants, and national origin of applicants (Hispanic vs. non-Hispanic) is used to assess screening decisions for a middle school principalship. Screening decisions are analyzed from a sequential model to capture selection as a process. Results indicate that biases surface…
Quantifying swallowing function for healthy adults in different age groups using acoustic analysis
NASA Astrophysics Data System (ADS)
Leung, Man-Yin
Dysphagia is a medical condition that can lead to devastating complications including weight loss, aspiration pneumonia, dehydration, and malnutrition; hence, timely identification is essential. Current dysphagia evaluation tools are either invasive, time consuming, or highly dependent on the experience of an individual clinician. The present study aims to develop a non-invasive, quantitative screening tool for dysphagia identification by capturing acoustic data from swallowing and mastication. The first part of this study explores the feasibility of using acoustic data to quantify swallowing and mastication. This study then further identifies mastication and swallowing trends in a neurotypical adult population. An acoustic capture protocol for dysphagia screening is proposed. Finally, the relationship among speaking, lingual and mastication rates are explored. Results and future directions are discussed.
Space Telecommunications Radio System (STRS) Application Repository Design and Analysis
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2013-01-01
The Space Telecommunications Radio System (STRS) Application Repository Design and Analysis document describes the STRS application repository for software-defined radio (SDR) applications intended to be compliant with the STRS Architecture Standard. The document provides information about the submission of artifacts to the STRS application repository, provides information to the potential users of those artifacts, and helps the systems engineer understand the requirements, concepts, and approach to the STRS application repository. The STRS application repository is intended to capture knowledge, documents, and other artifacts for each waveform application or other application outside of its project so that when the project ends, the knowledge is retained. The document describes the transmission of technology from mission to mission, capturing lessons learned that are used for continuous improvement across projects and supporting NASA Procedural Requirements (NPRs) for performing software engineering projects and NASA's release process.
NASA Technical Reports Server (NTRS)
2002-01-01
Ames Research Center granted Reality Capture Technologies (RCT), Inc., a license to further develop NASA's Mars Map software platform. The company incorporated NASA's innovation into software that uses the Virtual Plant Model (VPM)(TM) to structure, modify, and implement the construction sites of industrial facilities, as well as develop, validate, and train operators on procedures. The VPM orchestrates the exchange of information between engineering, production, and business transaction systems. This enables users to simulate, control, and optimize work processes while increasing the reliability of critical business decisions. Engineers can complete the construction process and test various aspects of it in virtual reality before building the actual structure. With virtual access to and simulation of the construction site, project personnel can manage, access, control, and respond to changes on complex constructions more effectively. Engineers can also create operating procedures, training, and documentation. Virtual Plant Model(TM) is a trademark of Reality Capture Technologies, Inc.
Molded underfill (MUF) encapsulation for flip-chip package: A numerical investigation
NASA Astrophysics Data System (ADS)
Azmi, M. A.; Abdullah, M. K.; Abdullah, M. Z.; Ariff, Z. M.; Saad, Abdullah Aziz; Hamid, M. F.; Ismail, M. A.
2017-07-01
This paper presents a numerical simulation of epoxy molding compound (EMC) filling in multi-flip-chip packages during the encapsulation process. Both an empty mold cavity and a group of flip-chip packages were considered in order to study the flow profile of the EMC. SOLIDWORKS software was used for three-dimensional modeling, and the model was imported into the fluid analysis software ANSYS FLUENT. The volume of fluid (VOF) technique was used for capturing the flow front profiles, and a Power Law model was applied for the rheology. The numerical results were compared with previous experimental data and showed good conformity, validating the model. The predicted flow front was observed and analyzed at different filling times. Void formation in the package was captured visually, and the number of flip chips was identified as one factor contributing to void formation.
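For readers unfamiliar with the rheology model named above, the power-law (Ostwald-de Waele) relation gives the apparent viscosity as eta = K * (shear rate)^(n - 1). The sketch below evaluates it with illustrative constants; the actual EMC material parameters used in the simulation are not given in the abstract.

    # Power-law (Ostwald-de Waele) viscosity model of the kind named for the EMC rheology:
    #     eta(gamma_dot) = K * gamma_dot**(n - 1)
    # K (consistency index) and n (power-law index) below are illustrative values only.
    def power_law_viscosity(shear_rate, K=100.0, n=0.7):
        """Apparent viscosity in Pa.s for a shear rate in 1/s (shear-thinning if n < 1)."""
        return K * shear_rate ** (n - 1.0)

    for rate in (0.1, 1.0, 10.0, 100.0):
        print(f"shear rate {rate:7.1f} 1/s -> viscosity {power_law_viscosity(rate):8.2f} Pa.s")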
Requirements UML Tool (RUT) Expanded for Extreme Programming (CI02)
NASA Technical Reports Server (NTRS)
McCoy, James R.
2003-01-01
A procedure for capturing and managing system requirements that incorporates XP user stories is presented. Because costs associated with identifying problems in requirements increase dramatically over the lifecycle of a project, a method for identifying sources of software risk in user stories is urgently needed. This initiative aims to determine a set of guidelines for user stories that will result in high-quality requirements. To further this initiative, a tool is needed to analyze user stories that can assess the quality of individual user stories, detect sources of software risk, produce software metrics, and identify areas in user stories that can be improved.
End-to-end observatory software modeling using domain specific languages
NASA Astrophysics Data System (ADS)
Filgueira, José M.; Bec, Matthieu; Liu, Ning; Peng, Chien; Soto, José
2014-07-01
The Giant Magellan Telescope (GMT) is a 25-meter extremely large telescope that is being built by an international consortium of universities and research institutions. Its software and control system is being developed using a set of Domain Specific Languages (DSL) that supports a model driven development methodology integrated with an Agile management process. This approach promotes the use of standardized models that capture the component architecture of the system, that facilitate the construction of technical specifications in a uniform way, that facilitate communication between developers and domain experts and that provide a framework to ensure the successful integration of the software subsystems developed by the GMT partner institutions.
WISE: Automated support for software project management and measurement. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ramakrishnan, Sudhakar
1995-01-01
One important aspect of software development and IV&V is measurement. Unless a software development effort is measured in some way, it is difficult to judge the effectiveness of current efforts and predict future performance. Collection of metrics and adherence to a process are difficult tasks in a software project. Change activity is a powerful indicator of project status. Automated systems that can handle change requests, issues, and other process documents provide an excellent platform for tracking the status of the project. A World Wide Web based architecture is developed for (a) making metrics collection an implicit part of the software process, (b) providing metric analysis dynamically, (c) supporting automated tools that can complement current practices of in-process improvement, and (d) overcoming geographical barriers. An operational system (WISE) instantiates this architecture, allowing for the improvement of the software process in a realistic environment. The tool tracks issues in the software development process, provides informal communication between users with different roles, supports to-do lists (TDL), and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis, and captures software change data. Automated tools like WISE focus on understanding and managing the software process. The goal is improvement through measurement.
Summers, Ronald M; Baecher, Nicolai; Yao, Jianhua; Liu, Jiamin; Pickhardt, Perry J; Choi, J Richard; Hill, Suvimol
2011-01-01
To show the feasibility of calculating the bone mineral density (BMD) from computed tomographic colonography (CTC) scans using fully automated software. Automated BMD measurement software was developed that measures the BMD of the first and second lumbar vertebrae on computed tomography and calculates the mean of the 2 values to provide a per patient BMD estimate. The software was validated in a reference population of 17 consecutive women who underwent quantitative computed tomography and in a population of 475 women from a consecutive series of asymptomatic patients enrolled in a CTC screening trial conducted at 3 medical centers. The mean (SD) BMD was 133.6 (34.6) mg/mL (95% confidence interval, 130.5-136.7; n = 475). In women aged 42 to 60 years (n = 316) and 61 to 79 years (n = 159), the mean (SD) BMDs were 143.1 (33.5) and 114.7 (28.3) mg/mL, respectively (P < 0.0001). Fully automated BMD measurements were reproducible for a given patient with 95% limits of agreement of -9.79 to 8.46 mg/mL for the mean difference between paired assessments on supine and prone CTC. Osteoporosis screening can be performed simultaneously with screening for colorectal polyps.
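A small sketch of the two calculations mentioned above: the per-patient estimate as the mean of the first and second lumbar vertebral BMD values, and Bland-Altman 95% limits of agreement between paired supine and prone assessments. The numbers are illustrative, not study data.

    import numpy as np

    # Per-patient BMD estimate: mean of the L1 and L2 measurements (illustrative values, mg/mL).
    l1 = np.array([145.0, 131.2, 120.0, 152.1, 98.4])
    l2 = np.array([137.4, 128.4, 117.0, 148.5, 96.8])
    supine = (l1 + l2) / 2.0                                 # per-patient BMD, supine scan

    prone = np.array([139.0, 131.5, 120.1, 148.8, 99.0])     # paired prone estimates (illustrative)

    # Bland-Altman 95% limits of agreement between the paired assessments.
    diff = supine - prone
    loa_low = diff.mean() - 1.96 * diff.std(ddof=1)
    loa_high = diff.mean() + 1.96 * diff.std(ddof=1)
    print(f"mean difference: {diff.mean():.2f} mg/mL")
    print(f"95% limits of agreement: [{loa_low:.2f}, {loa_high:.2f}] mg/mL")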
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, F.; Flach, G.
This report describes work performed by the Savannah River National Laboratory (SRNL) in fiscal year 2014 to develop a new Cementitious Barriers Project (CBP) software module designated as FLOExcel. FLOExcel incorporates a uniform database to capture material characterization data and a GoldSim model to define flow properties for both intact and fractured cementitious materials and estimate Darcy velocity based on specified hydraulic head gradient and matric tension. The software module includes hydraulic parameters for intact cementitious and granular materials in the database and a standalone GoldSim framework to manipulate the data. The database will be updated with new data as it becomes available. The software module will later be integrated into the next release of the CBP Toolbox, Version 3.0. This report documents the development efforts for this software module. The FY14 activities described in this report focused on the following two items that form the FLOExcel package: 1) Development of a uniform database to capture CBP data for cementitious materials; in particular, the inclusion and use of hydraulic properties of the materials are emphasized; and 2) Development of algorithms and a GoldSim User Interface to calculate hydraulic flow properties of degraded and fractured cementitious materials. Hydraulic properties are required in a simulation of flow through cementitious materials such as Saltstone, waste tank fill grout, and concrete barriers. At SRNL these simulations have been performed using the PORFLOW code as part of Performance Assessments for salt waste disposal and waste tank closure.
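To make the Darcy-velocity estimate concrete, the sketch below combines Darcy's law with one common unsaturated-conductivity model (van Genuchten-Mualem) to go from a matric tension and head gradient to a flux. The functional form and all parameter values are assumptions chosen for illustration; FLOExcel's actual parameterization may differ.

    # Sketch of a Darcy-velocity estimate from hydraulic head gradient and matric tension.
    def van_genuchten_mualem_K(psi_cm, Ks, alpha=0.01, n=1.8):
        """Unsaturated hydraulic conductivity K(psi) for matric tension psi (cm of water)."""
        if psi_cm <= 0.0:                                    # saturated case
            return Ks
        m = 1.0 - 1.0 / n
        Se = (1.0 + (alpha * psi_cm) ** n) ** (-m)           # effective saturation
        return Ks * Se ** 0.5 * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

    def darcy_velocity(K, head_gradient):
        """Darcy flux q = -K * dh/dL (same length/time units as K)."""
        return -K * head_gradient

    Ks = 1.0e-7                      # assumed saturated conductivity of intact grout, cm/s
    K = van_genuchten_mualem_K(psi_cm=50.0, Ks=Ks)
    print("Darcy velocity (cm/s):", darcy_velocity(K, head_gradient=-0.01))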
NASA Astrophysics Data System (ADS)
Gómez-Gutiérrez, Álvaro; Juan de Sanjosé-Blasco, José; Schnabel, Susanne; de Matías-Bejarano, Javier; Pulido-Fernández, Manuel; Berenguer-Sempere, Fernando
2015-04-01
In this work, the hypothesis that 3D models obtained with Structure from Motion (SfM) approaches can be improved by using images pre-processed with High Dynamic Range (HDR) techniques is tested. Photographs of the Veleta Rock Glacier in Spain were captured with different exposure values (EV0, EV+1 and EV-1), two focal lengths (35 and 100 mm) and under different weather conditions for the years 2008, 2009, 2011, 2012 and 2014. HDR images were produced using the different EV steps within the Fusion F.1 software. Point clouds were generated using commercial and freely available SfM software: Agisoft Photoscan and 123D Catch. Models obtained using pre-processed images and non-preprocessed images were compared in a 3D environment with a benchmark 3D model obtained by means of a Terrestrial Laser Scanner (TLS). A total of 40 point clouds were produced, georeferenced and compared. Results indicated that for the Agisoft Photoscan software, differences in accuracy between models obtained with pre-processed and non-preprocessed images were not significant from a statistical viewpoint. However, in the case of the freely available software 123D Catch, models obtained using images pre-processed by HDR techniques presented a higher point density and were more accurate. This tendency was observed across the 5 studied years and under different capture conditions. More work should be done in the near future to corroborate whether the results of similar software packages can be improved by HDR techniques (e.g. ARC3D, Bundler and PMVS2, CMP SfM, Photosynth and VisualSFM).
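The cloud-to-cloud comparison against a TLS benchmark can be summarized with nearest-neighbour error statistics, as in the sketch below. File names are placeholders and the clouds are assumed to be already georeferenced in a common coordinate system; this is a generic illustration, not the authors' workflow.

    # Compare an SfM point cloud against a TLS reference via nearest-neighbour distances.
    import numpy as np
    from scipy.spatial import cKDTree

    tls = np.loadtxt("tls_reference.xyz")      # N x 3 array of reference points (placeholder)
    sfm = np.loadtxt("sfm_model.xyz")          # M x 3 array of SfM points (placeholder)

    distances, _ = cKDTree(tls).query(sfm, k=1)
    print(f"mean error  : {distances.mean():.3f} m")
    print(f"RMS error   : {np.sqrt((distances**2).mean()):.3f} m")
    print(f"95th pctile : {np.percentile(distances, 95):.3f} m")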
Computer Decision Support to Improve Autism Screening and Care in Community Pediatric Clinics
ERIC Educational Resources Information Center
Bauer, Nerissa S.; Sturm, Lynne A.; Carroll, Aaron E.; Downs, Stephen M.
2013-01-01
An autism module was added to an existing computer decision support system (CDSS) to facilitate adherence to recommended guidelines for screening for autism spectrum disorders in primary care pediatric clinics. User satisfaction was assessed by survey and informal feedback at monthly meetings between clinical staff and the software team. To assess…
Yang, Peng; Wu, Min; Guo, Jing; Kwoh, Chee Keong; Przytycka, Teresa M; Zheng, Jie
2014-02-17
As fundamental genomic elements, meiotic recombination hotspots play important roles in the life sciences; thus, uncovering their regulatory mechanisms has broad impact on biomedical research. Despite the recent identification of the zinc finger protein PRDM9 and its 13-mer binding motif as major regulators for meiotic recombination hotspots, other regulators remain to be discovered. Existing methods for finding DNA sequence motifs of recombination hotspots often rely on the enrichment of co-localizations between hotspots and short DNA patterns, which ignore the cross-individual variation of recombination rates and sequence polymorphisms in the population. Our objective in this paper is to capture signals encoded in genetic variations for the discovery of recombination-associated DNA motifs. Recently, an algorithm called "LDsplit" has been designed to detect the association between single nucleotide polymorphisms (SNPs) and proximal meiotic recombination hotspots. The association is measured by the difference of population recombination rates at a hotspot between two alleles of a candidate SNP. Here we present an open source software tool of LDsplit, with integrative data visualization for recombination hotspots and their proximal SNPs. Applying LDsplit on SNPs inside an established 7-mer motif bound by PRDM9 we observed that SNP alleles preserving the original motif tend to have higher recombination rates than the opposite alleles that disrupt the motif. Running on SNP windows around hotspots each containing an occurrence of the 7-mer motif, LDsplit is able to guide the established motif finding algorithm of MEME to recover the 7-mer motif. In contrast, without LDsplit the 7-mer motif could not be identified. LDsplit is a software tool for the discovery of cis-regulatory DNA sequence motifs stimulating meiotic recombination hotspots by screening and narrowing down to hotspot associated SNPs. It is the first computational method that utilizes the genetic variation of recombination hotspots among individuals, opening a new avenue for motif finding. Tested on an established motif and simulated datasets, LDsplit shows promise to discover novel DNA motifs for meiotic recombination hotspots.
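Conceptually, the LDsplit test contrasts a hotspot's recombination rate between the sample subsets carrying each allele of a candidate SNP. The sketch below illustrates that contrast with a simple permutation test on synthetic data; the real LDsplit estimates population recombination rates from haplotype data, which is not reproduced here.

    # Toy allele-split comparison of hotspot recombination rates (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    alleles = np.array([0] * 60 + [1] * 40)             # SNP allele per sample (assumed)
    rates = np.r_[rng.gamma(2.0, 2.0, 60),              # hotspot rate estimates, allele 0
                  rng.gamma(2.0, 3.0, 40)]              # hotspot rate estimates, allele 1

    def allele_rate_diff(alleles, rates):
        return rates[alleles == 1].mean() - rates[alleles == 0].mean()

    observed = allele_rate_diff(alleles, rates)
    perms = np.array([allele_rate_diff(rng.permutation(alleles), rates) for _ in range(10000)])
    p_value = np.mean(np.abs(perms) >= abs(observed))
    print(f"observed rate difference = {observed:.3f}, permutation p = {p_value:.4f}")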
Applied research of embedded WiFi technology in the motion capture system
NASA Astrophysics Data System (ADS)
Gui, Haixia
2012-04-01
Embedded WiFi technology is currently one of the hot topics in wireless network applications. This paper first introduces the definition and characteristics of WiFi. Given the advantages of WiFi, such as requiring no wiring, simple operation and stable transmission, the paper then presents a system design for applying embedded WiFi technology in a motion capture system. It also verifies the effectiveness of the design through the WiFi-based wireless sensor hardware and software implementation.
77 FR 75144 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-19
...: Diversion screens, aiding/rescue of salmon and artificial propagation, 5 hours each; road maintenance... species (``take'' includes actions that harass, harm, pursue, kill, or capture). The first salmonid...
Julian, Timothy R; Bustos, Carla; Kwong, Laura H; Badilla, Alejandro D; Lee, Julia; Bischel, Heather N; Canales, Robert A
2018-05-08
Quantitative data on human-environment interactions are needed to fully understand infectious disease transmission processes and conduct accurate risk assessments. Interaction events occur during an individual's movement through, and contact with, the environment, and can be quantified using diverse methodologies. Methods that utilize videography, coupled with specialized software, can provide a permanent record of events, collect detailed interactions in high resolution, be reviewed for accuracy, capture events difficult to observe in real-time, and gather multiple concurrent phenomena. In the accompanying video, the use of specialized software to capture human-environment interactions for human exposure and disease transmission is highlighted. Use of videography, combined with specialized software, allows for the collection of accurate quantitative representations of human-environment interactions in high resolution. Two specialized programs include the Virtual Timing Device for the Personal Computer, which collects sequential microlevel activity time series of contact events and interactions, and LiveTrak, which is optimized to facilitate annotation of events in real-time. Opportunities to annotate behaviors at high resolution using these tools are promising, permitting detailed records that can be summarized to gain information on infectious disease transmission and incorporated into more complex models of human exposure and risk.
Bucchi, L; Pierri, C; Caprara, L; Cortecchia, S; De Lillo, M; Bondi, A
2003-02-01
This paper presents a computerised system for the monitoring of integrated cervical screening, i.e. the integration of spontaneous Pap smear practice into organised screening. The general characteristics of the system are described, including background and rationale (integrated cervical screening in European countries, impact of integration on monitoring, decentralised organization of screening and levels of monitoring), general methods (definitions, sections, software description, and setting of application), and indicators of participation (distribution by time interval since previous Pap smear, distribution by screening sector--organised screening centres vs public and private clinical settings--, distribution by time interval between the last two Pap smears, and movement of women between the two screening sectors). Also, the paper reports the results of the application of these indicators in the general database of the Pathology Department of Imola Health District in northern Italy.
Perfusion CT to assess angiogenesis in colon cancer: technical limitations and practical challenges.
Dighe, S; Castellano, E; Blake, H; Jeyadevan, N; Koh, M U; Orten, M; Swift, I; Brown, G
2012-10-01
Perfusion CT may have the potential to quantify the degree of angiogenesis of solid tumours in vivo. This study aims to identify the practical and technical challenges inherent to the technique, and evaluate its feasibility in colorectal tumours. 51 patients from 2 institutions prospectively underwent a single perfusion CT on 2 different multidetector scanners. The patients were advised to breath-hold as long as possible, followed by shallow breathing, and were given intravenous buscopan to reduce movement. Numerous steps were explored to identify the challenges. 43 patients successfully completed the perfusion CT as per protocol. Inability to detect the tumour (n=3), misplacement of dynamic sequence co-ordinates (n=2), failure of contrast injection (n=2) and displacement of tumour (n=1) were the reasons for failure. In 14 cases excessive respiratory motion displaced the tumour out of the scanning field along the temporal sequence, leading to erroneous data capture. In nine patients, minor displacements of the tumour were corrected by repositioning the region of interest (ROI) to its original position after reviewing each dynamic sequence slice. In 20 patients the tumour was stable, and data captured from the ROI were representative, and could have been analysed by commercially available Body Tumor Perfusion 3.0® software (GE Healthcare, Waukesha, WI). Hence all data were manually analysed by MATLAB® processing software (MathWorks, Cambridge, UK). Perfusion CT in tumours susceptible to motion during acquisition makes accurate data capture challenging and requires meticulous attention to detail. Motion correction software is essential if perfusion CT is to be used routinely in colorectal cancer.
Ipsiroglu, Osman S.; Hung, Yi-Hsuan Amy; Chan, Forson; Ross, Michelle L.; Veer, Dorothee; Soo, Sonja; Ho, Gloria; Berger, Mai; McAllister, Graham; Garn, Heinrich; Kloesch, Gerhard; Barbosa, Adriano Vilela; Stockler, Sylvia; McKellin, William; Vatikiotis-Bateson, Eric
2015-01-01
Introduction: Advanced video technology is available for sleep-laboratories. However, low-cost equipment for screening in the home setting has not been identified and tested, nor has a methodology for analysis of video recordings been suggested. Methods: We investigated different combinations of hardware/software for home-videosomnography (HVS) and established a process for qualitative and quantitative analysis of HVS-recordings. A case vignette (HVS analysis for a 5.5-year-old girl with major insomnia and several co-morbidities) demonstrates how methodological considerations were addressed and how HVS added value to clinical assessment. Results: We suggest an “ideal set of hardware/software” that is reliable, affordable (∼$500) and portable (=2.8 kg) to conduct non-invasive HVS, which allows time-lapse analyses. The equipment consists of a net-book, a camera with infrared optics, and a video capture device. (1) We present an HVS-analysis protocol consisting of three steps of analysis at varying replay speeds: (a) basic overview and classification at 16× normal speed; (b) second viewing and detailed descriptions at 4–8× normal speed, and (c) viewing, listening, and in-depth descriptions at real-time speed. (2) We also present a custom software program that facilitates video analysis and note-taking (Annotator©), and Optical Flow software that automatically quantifies movement for internal quality control of the HVS-recording. The case vignette demonstrates how the HVS-recordings revealed the dimension of insomnia caused by restless legs syndrome, and illustrated the cascade of symptoms, challenging behaviors, and resulting medications. Conclusion: The strategy of using HVS, although requiring validation and reliability testing, opens the floor for a new “observational sleep medicine,” which has been useful in describing discomfort-related behavioral movement patterns in patients with communication difficulties presenting with challenging/disruptive sleep/wake behaviors. PMID:25852578
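The movement quantification used for internal quality control can be approximated with a dense optical-flow pass over the recording, as in the sketch below. This is a generic OpenCV illustration, not the authors' Optical Flow software, and the file name is a placeholder.

    # Per-frame movement index from dense Farneback optical flow.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("hvs_recording.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    motion = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)      # pixel displacement per frame
        motion.append(float(magnitude.mean()))
        prev_gray = gray
    cap.release()
    print("frames analysed:", len(motion))
    print("mean movement index:", np.mean(motion) if motion else 0.0)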
2015-01-01
... class within Microsoft Visual Studio. It has been tested on and is compatible with Microsoft Vista, 7, and 8 and Visual Studio Express 2008 ... the ScreenRecorder utility assumes a basic understanding of compiling and running C++ code within Microsoft Visual Studio. This report does not ... of Microsoft Visual Studio, the ScreenRecorder utility was developed as a C++ class that can be compiled as a library (static or dynamic) to be ...
NASA Technical Reports Server (NTRS)
Wilber, George F.
2017-01-01
This Software Description Document (SDD) captures the design for developing the Flight Interval Management (FIM) system Configurable Graphics Display (CGD) software. Specifically this SDD describes aspects of the Boeing CGD software and the surrounding context and interfaces. It does not describe the Honeywell components of the CGD system. The SDD provides the system overview, architectural design, and detailed design with all the necessary information to implement the Boeing components of the CGD software and integrate them into the CGD subsystem within the larger FIM system. Overall system and CGD system-level requirements are derived from the CGD SRS (in turn derived from the Boeing System Requirements Design Document (SRDD)). Display and look-and-feel requirements are derived from Human Machine Interface (HMI) design documents and working group recommendations. This Boeing CGD SDD is required to support the upcoming Critical Design Review (CDR).
NASA Technical Reports Server (NTRS)
Hardwick, Charles
1991-01-01
Field studies were conducted by MCC to determine areas of research of mutual interest to MCC and JSC. NASA personnel from the Information Systems Directorate and research faculty from UHCL/RICIS visited MCC in Austin, Texas to examine tools and applications under development in the MCC Software Technology Program. MCC personnel presented workshops in hypermedia, design knowledge capture, and design recovery on site at JSC for ISD personnel. The following programs were installed on workstations in the Software Technology Lab, NASA/JSC: (1) GERM (Graphic Entity Relations Modeler); (2) gIBIS (Graphic Issues Based Information System); and (3) DESIRE (Design Recovery tool). These applications were made available to NASA for inspection and evaluation. Programs developed in the MCC Software Technology Program run on the SUN workstation. The programs do not require special configuration, but they will require larger than usual amounts of disk space and RAM to operate properly.
Xu, Shen; Rogers, Toby; Fairweather, Elliot; Glenn, Anthony; Curran, James; Curcin, Vasa
2018-01-01
Data provenance is a technique that describes the history of digital objects. In health data settings, it can be used to deliver auditability and transparency, and to achieve trust in a software system. However, implementing data provenance in analytics software at an enterprise level presents a different set of challenges from the research environments where data provenance was originally devised. In this paper, the challenges of reporting provenance information to the user are presented. Provenance captured from analytics software can be large and complex, and visualizing a series of tasks over a long period can be overwhelming even for a domain expert, requiring visual aggregation mechanisms that fit with the complex human cognitive activities involved in the process. This research studied how provenance-based reporting can be integrated into health data analytics software, using the example of the Atmolytics visual reporting tool. PMID:29888084
Learning from examples - Generation and evaluation of decision trees for software resource analysis
NASA Technical Reports Server (NTRS)
Selby, Richard W.; Porter, Adam A.
1988-01-01
A general solution method for the automatic generation of decision (or classification) trees is investigated. The approach is to provide insights through in-depth empirical characterization and evaluation of decision trees for software resource data analysis. The trees identify classes of objects (software modules) that had high development effort. Sixteen software systems ranging from 3,000 to 112,000 source lines were selected for analysis from a NASA production environment. The collection and analysis of 74 attributes (or metrics), for over 4,700 objects, captured information about the development effort, faults, changes, design style, and implementation style. A total of 9,600 decision trees were automatically generated and evaluated. The trees correctly identified 79.3 percent of the software modules that had high development effort or faults, and the trees generated from the best parameter combinations correctly identified 88.4 percent of the modules on the average.
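A present-day analogue of the automatically generated decision trees described above, using scikit-learn to flag modules likely to have high development effort. The feature names, labels, and data are synthetic illustrations, not the NASA dataset.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(42)
    n = 400
    X = np.column_stack([rng.integers(50, 2000, n),     # source lines (illustrative metric)
                         rng.integers(1, 40, n),        # cyclomatic complexity
                         rng.integers(0, 30, n)])       # number of changes
    y = ((X[:, 1] > 20) & (X[:, 2] > 10)).astype(int)   # synthetic "high effort" label

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print("test accuracy:", tree.score(X_test, y_test))
    print(export_text(tree, feature_names=["sloc", "complexity", "changes"]))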
The making of the mechanical universe
NASA Technical Reports Server (NTRS)
Blinn, James
1989-01-01
The Mechanical Universe project required the production of over 550 different animated scenes, totaling about 7.5 hours of screen time. The project required the use of a wide range of techniques and motivated the development of several different software packages. Documentation is presented of many aspects of the project, encompassing artistic design issues, scientific simulations, software engineering, and video engineering.
User's manual for the VAX-Gerber link software package. Revision 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isobe, G.W.
1985-10-01
This manual provides a user the information necessary to run the VAX-Gerber link software package. It is expected that the user already knows how to login to the VAX, and is familiar with the Gerber Photo Plotter. It is also highly desirable that the user be familiar with the full screen editor on the VAX, EDT.
The Projectile Inside the Loop
ERIC Educational Resources Information Center
Varieschi, Gabriele U.
2006-01-01
The loop-the-loop demonstration can be easily adapted to study the kinematics of projectile motion, when the moving body falls inside the apparatus. Video capturing software can be used to reveal peculiar geometrical effects of this simple but educational experiment.
DOT National Transportation Integrated Search
2012-07-01
This project has developed and implemented a software environment to utilize data collected by Traffic Management Centers (TMC) in Florida, in combination with data from other sources to support various applications. The environment allows capturing ...
Developing inexpensive crash countermeasures for Louisiana local roads : request for proposals
DOT National Transportation Integrated Search
2010-09-17
The intelligent transportation system (ITS) includes detectors that capture data from Florida's transportation network and computer hardware and software that process these data. Data processed in real-time can, for example, be used to develop mess...
2005-12-19
Using the JMars targeting software, eighth grade students from Charleston Middle School in Charleston, IL, selected the location of -8.37N and 276.66E for capture by the THEMIS visible camera during Mars Odyssey's sixth orbit of Mars on Nov. 22, 2005.
Parallel Computing for the Computed-Tomography Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2008-01-01
This software computes the tomographic reconstruction of spatial-spectral data from raw detector images of the Computed-Tomography Imaging Spectrometer (CTIS), which enables transient-level, multi-spectral imaging by capturing spatial and spectral information in a single snapshot.
Khunamornpong, Surapan; Settakorn, Jongkolnee; Sukpan, Kornkanok; Suprasert, Prapaporn; Srisomboon, Jatupol; Intaraphet, Suthida; Siriaunkgul, Sumalee
2016-01-01
Background Testing for high-risk human papillomavirus DNA (HPV test) has gained increasing acceptance as an alternative method to cytology in cervical cancer screening. Compared to cytology, HPV test has a higher sensitivity for the detection of histologic high-grade squamous intraepithelial lesion or worse (HSIL+), but this could lead to a large colposcopy burden. Genotyping for HPV16/18 has been recommended in triaging HPV-positive women. This study aimed to evaluate the screening performance of HPV testing and the role of genotyping triage in Northern Thailand. Methods A population-based cervical screening program was performed in Chiang Mai (Northern Thailand) using cytology (conventional Pap test) and HPV test (Hybrid Capture 2). Women who had abnormal cytology or were HPV-positive were referred for colposcopy. Cervical samples from these women were genotyped using the Linear Array assay. Results Of 5,456 women, 2.0% had abnormal Pap test results and 6.5% tested positive with Hybrid Capture 2. Of 5,433 women eligible for analysis, 355 with any positive test had histologic confirmation and 57 of these had histologic HSIL+. The sensitivity for histologic HSIL+ detection was 64.9% for Pap test and 100% for Hybrid Capture 2, but the ratio of colposcopy per detection of each HSIL+ was more than two-fold higher with Hybrid Capture 2 than Pap test (5.9 versus 2.8). Genotyping results were available in 316 samples. HPV52, HPV16, and HPV58 were the three most common genotypes among women with histologic HSIL+. Performance of genotyping triage using HPV16/18/52/58 was superior to that of HPV16/18, with a higher sensitivity (85.7% versus 28.6%) and negative predictive value (94.2% versus 83.9%). Conclusions In Northern Thailand, HPV testing with genotyping triage shows better screening performance than cervical cytology alone. In this region, the addition of genotyping for HPV52/58 to HPV16/18 is deemed necessary in triaging women with positive HPV test. PMID:27336913
Fluidica CFD software for fluids instruction
NASA Astrophysics Data System (ADS)
Colonius, Tim
2008-11-01
Fluidica is an open-source, freely available Matlab graphical user interface (GUI) to an immersed-boundary Navier-Stokes solver. The algorithm is programmed in Fortran and compiled into Matlab as a MEX-function. The user can create external flows about arbitrarily complex bodies and collections of free vortices. The code runs fast enough for complex 2D flows to be computed and visualized in real-time on the screen. This facilitates its use in homework and in the classroom for demonstrations of various potential-flow and viscous-flow phenomena. The GUI has been written with the goal of allowing the student to learn how to use the software as she goes along. The user can select which quantities are viewed on the screen, including contours of various scalars, velocity vectors, streamlines, particle trajectories, streaklines, and finite-time Lyapunov exponents. In this talk, we demonstrate the software in the context of worked classroom examples demonstrating lift and drag, starting vortices, separation, and vortex dynamics.
Distribution Feeder Modeling for Time-Series Simulation of Voltage Management Strategies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giraldez Miner, Julieta I; Gotseff, Peter; Nagarajan, Adarsh
This paper presents techniques to create baseline distribution models using a utility feeder from Hawai'ian Electric Company. It describes the software-to-software conversion, steady-state, and time-series validations of a utility feeder model. It also presents a methodology to add secondary low-voltage circuit models to accurately capture the voltage at the customer meter level. This enables preparing models to perform studies that simulate how customer-sited resources integrate into legacy utility distribution system operations.
Earth Global Reference Atmospheric Model (Earth-GRAM) GRAM Virtual Meeting
NASA Technical Reports Server (NTRS)
White, Patrick
2017-01-01
What is Earth-GRAM? Earth-GRAM provides the monthly mean and standard deviation of atmospheric variables for any point in the atmosphere, capturing monthly, geographic, and altitude variation. Earth-GRAM is a C++ software package, currently distributed as Earth-GRAM 2016. Atmospheric variables included are pressure, density, temperature, horizontal and vertical winds, speed of sound, and atmospheric constituents. It is used by the engineering community because of its ability to create atmospheric dispersions at a rapid runtime, and it is often embedded in trajectory simulation software. Earth-GRAM is not a forecast model and does not readily capture localized atmospheric effects.
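A sketch of how a trajectory simulation might consume GRAM-style output: perturb the monthly mean profile by Gaussian noise scaled by the reported standard deviation to create dispersed cases. The profile values below are illustrative, and real Earth-GRAM dispersions are correlated in altitude rather than independent as assumed here.

    # Generate dispersed density profiles from a mean profile and standard deviations.
    import numpy as np

    altitude_km   = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
    density_mean  = np.array([1.225, 0.736, 0.414, 0.089, 0.018])   # kg/m^3 (illustrative)
    density_sigma = 0.03 * density_mean                              # assumed 3% variability

    rng = np.random.default_rng(1)
    for case in range(5):
        dispersed = density_mean + density_sigma * rng.standard_normal(density_mean.size)
        print(f"case {case}:", np.round(dispersed, 4))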
2014-10-01
... designed an Internet-based and mobile application (software) to assist with the following domains pertinent to diabetes self-management: 1 ... management that provides education, reminders, and support. The new tool is an Internet-based and mobile application (software), now called Tracking ... is mobile, provides decision support with actionable options, and is based on user input, will enhance diabetes self-care, improve glycemic control ...
Realtime Decision Making on EO-1 Using Onboard Science Analysis
NASA Technical Reports Server (NTRS)
Sherwood, Robert; Chien, Steve; Davies, Ashley; Mandl, Dan; Frye, Stu
2004-01-01
Recent autonomy experiments conducted on Earth Observing 1 (EO-1) have used the Autonomous Sciencecraft Experiment (ASE) flight software to classify key features in hyperspectral images captured by EO-1. Furthermore, this analysis is performed by the software onboard EO-1 and then used to modify the operational plan without interaction from the ground. This paper outlines the overall operations concept and provides some details and examples of the onboard science processing, science analysis, and replanning.
NASA Astrophysics Data System (ADS)
Drachova-Strang, Svetlana V.
As computing becomes ubiquitous, software correctness has a fundamental role in ensuring the safety and security of the systems we build. To design and develop software correctly according to their formal contracts, CS students, the future software practitioners, need to learn a critical set of skills that are necessary and sufficient for reasoning about software correctness. This dissertation presents a systematic approach to both introducing these reasoning skills into the curriculum, and assessing how well the students have learned them. Specifically, it introduces a comprehensive Reasoning Concept Inventory (RCI) that captures the fine details of basic reasoning skills that are ideally learned across the undergraduate curriculum to reason about software correctness, to develop high quality software, and to understand why software works as specified. The RCI forms the basis for developing learning outcomes that help educators to assess the adequacy of current techniques and pinpoint necessary improvements. This dissertation contains results from experimentation and assessment over the past few years in multiple CS courses. The results show that the finer principles of mathematical reasoning of software correctness can be taught effectively and continuously improved with the help of the RCI using suitable teaching practices, and supporting methods and tools.
Sawaya, Helen; Atoui, Mia; Hamadeh, Aya; Zeinoun, Pia; Nahas, Ziad
2016-05-30
The Patient Health Questionnaire - 9 (PHQ-9) and Generalized Anxiety Disorder - 7 (GAD-7) are short screening measures used in medical and community settings to assess depression and anxiety severity. The aim of this study is to translate the screening tools into Arabic and evaluate their psychometric properties in an Arabic-speaking Lebanese psychiatric outpatient sample. The patients completed the questionnaires, among others, prior to being evaluated by a clinical psychiatrist or psychologist. The scales' internal consistency and factor structure were measured and convergent and discriminant validity were established by comparing the scores with clinical diagnoses and the Psychiatric Diagnostic Screening Questionnaire - MDD subset (PDSQ - MDD). Results showed that the PHQ-9 and GAD-7 are reliable screening tools for depression and anxiety and their factor structures replicated those reported in the literature. Sensitivity and specificity analyses showed that the PHQ-9 is sensitive but not specific at capturing depressive symptoms when compared to clinician diagnoses whereas the GAD-7 was neither sensitive nor specific at capturing anxiety symptoms. The implications of these findings are discussed in reference to the scales themselves and the cultural specificity of the Lebanese population. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
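The sensitivity/specificity comparison against clinician diagnoses reduces to a 2x2 table, as in the sketch below; the PHQ-9 cut-off of 10 and the example scores are assumptions for illustration only, not the study data.

    import numpy as np

    phq9 = np.array([4, 12, 18, 7, 15, 9, 22, 3, 11, 6])       # illustrative scores
    clinician_mdd = np.array([0, 1, 1, 0, 1, 1, 1, 0, 0, 0])   # 1 = clinician-diagnosed depression

    screen_positive = phq9 >= 10                # assumed screening cut-off
    tp = np.sum(screen_positive & (clinician_mdd == 1))
    fn = np.sum(~screen_positive & (clinician_mdd == 1))
    tn = np.sum(~screen_positive & (clinician_mdd == 0))
    fp = np.sum(screen_positive & (clinician_mdd == 0))
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))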
Eye Carduino: A Car Control System using Eye Movements
NASA Astrophysics Data System (ADS)
Kumar, Arjun; Nagaraj, Disha; Louzardo, Joel; Hegde, Rajeshwari
2011-12-01
Modern automotive systems are rapidly becoming highly defined and characterized by embedded electronics and software. With new technologies, the vehicle industry is facing new opportunities and also new challenges. Electronics have improved the performance of vehicles and, at the same time, new, more complex applications are introduced. Examples of high-level applications include adaptive cruise control and electronic stability programs (ESP). Further, a modern vehicle does not have to be merely a means of transportation, but can be a web-integrated media centre. This paper explains the implementation of a vehicle control system using only eye movements. The EyeWriter's native hardware and software work to return the co-ordinates of where the user is looking. These co-ordinates are then used to control the car. A centre-point is defined on the screen. The higher on the screen the user's gaze is, the faster the car will accelerate. Braking is done by looking below centre. Steering is done by looking left and right on the screen.
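A minimal sketch of the gaze-to-control mapping described above: vertical offset from screen centre sets throttle or brake and horizontal offset sets steering. The screen resolution, normalization, and clipping are assumptions; the actual EyeWriter/Arduino interface is not shown.

    # Map gaze coordinates (pixels) to throttle and steering commands in [-1, 1].
    def gaze_to_command(x, y, width=1920, height=1080):
        cx, cy = width / 2.0, height / 2.0
        throttle = (cy - y) / cy          # +1 at top of screen, -1 (brake) at bottom
        steering = (x - cx) / cx          # -1 full left, +1 full right
        return max(-1.0, min(1.0, throttle)), max(-1.0, min(1.0, steering))

    # Example: gaze in the upper-right quadrant -> accelerate and steer right.
    print(gaze_to_command(1500, 300))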
Automated phenotype pattern recognition of zebrafish for high-throughput screening.
Schutera, Mark; Dickmeis, Thomas; Mione, Marina; Peravali, Ravindra; Marcato, Daniel; Reischl, Markus; Mikut, Ralf; Pylatiuk, Christian
2016-07-03
Over the last years, the zebrafish (Danio rerio) has become a key model organism in genetic and chemical screenings. A growing number of experiments and an expanding interest in zebrafish research makes it increasingly essential to automatize the distribution of embryos and larvae into standard microtiter plates or other sample holders for screening, often according to phenotypical features. Until now, such sorting processes have been carried out by manually handling the larvae and manual feature detection. Here, a prototype platform for image acquisition together with a classification software is presented. Zebrafish embryos and larvae and their features such as pigmentation are detected automatically from the image. Zebrafish of 4 different phenotypes can be classified through pattern recognition at 72 h post fertilization (hpf), allowing the software to classify an embryo into 2 distinct phenotypic classes: wild-type versus variant. The zebrafish phenotypes are classified with an accuracy of 79-99% without any user interaction. A description of the prototype platform and of the algorithms for image processing and pattern recognition is presented.
Mobile Care (Moca) for Remote Diagnosis and Screening
Celi, Leo Anthony; Sarmenta, Luis; Rotberg, Jhonathan; Marcelo, Alvin; Clifford, Gari
2010-01-01
Moca is a cell phone-facilitated clinical information system to improve diagnostic, screening and therapeutic capabilities in remote resource-poor settings. The software allows transmission of any medical file, whether a photo, x-ray, audio or video file, through a cell phone to (1) a central server for archiving and incorporation into an electronic medical record (to facilitate longitudinal care, quality control, and data mining), and (2) a remote specialist for real-time decision support (to leverage expertise). The open source software is designed as an end-to-end clinical information system that seamlessly connects health care workers to medical professionals. It is integrated with OpenMRS, an existing open source medical records system commonly used in developing countries. PMID:21822397
The capture and recreation of 3D auditory scenes
NASA Astrophysics Data System (ADS)
Li, Zhiyun
The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D directions digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraint. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating them by exploring the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach for headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
Dalecki, Alex G; Wolschendorf, Frank
2016-07-01
Facing totally resistant bacteria, traditional drug discovery efforts have proven to be of limited use in replenishing our depleted arsenal of therapeutic antibiotics. Recently, the natural anti-bacterial properties of metal ions in synergy with metal-coordinating ligands have shown potential for generating new molecule candidates with potential therapeutic downstream applications. We recently developed a novel combinatorial screening approach to identify compounds with copper-dependent anti-bacterial properties. Through a parallel screening technique, the assay distinguishes between copper-dependent and independent activities against Mycobacterium tuberculosis with hits being defined as compounds with copper-dependent activities. These activities must then be linked to a compound master list to process and analyze the data and to identify the hit molecules, a labor intensive and mistake-prone analysis. Here, we describe a software program built to automate this analysis in order to streamline our workflow significantly. We conducted a small, 1440 compound screen against M. tuberculosis and used it as an example framework to build and optimize the software. Though specifically adapted to our own needs, it can be readily expanded for any small- to medium-throughput screening effort, parallel or conventional. Further, by virtue of the underlying Linux server, it can be easily adapted for chemoinformatic analysis of screens through packages such as OpenBabel. Overall, this setup represents an easy-to-use solution for streamlining processing and analysis of biological screening data, as well as offering a scaffold for ready functionality expansion. Copyright © 2016 Elsevier B.V. All rights reserved.
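The analysis step the software automates amounts to joining the copper-supplemented and copper-free plate reads back to the compound master list and keeping wells that are active only in the presence of copper. A minimal pandas sketch of that join is shown below; the column names, file names and the 50% growth threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: flag copper-dependent hits by merging parallel plate
# reads with a compound master list.  Column/file names and the 50% growth
# threshold are illustrative assumptions.
import pandas as pd

master  = pd.read_csv("compound_master.csv")     # plate, well, compound_id
plus_cu = pd.read_csv("growth_plus_copper.csv")  # plate, well, growth_pct
no_cu   = pd.read_csv("growth_no_copper.csv")    # plate, well, growth_pct

df = (master
      .merge(plus_cu.rename(columns={"growth_pct": "growth_plus_cu"}),
             on=["plate", "well"])
      .merge(no_cu.rename(columns={"growth_pct": "growth_no_cu"}),
             on=["plate", "well"]))

# Hit definition: growth is inhibited only when copper is present.
df["copper_dependent_hit"] = (df["growth_plus_cu"] < 50) & (df["growth_no_cu"] >= 50)
df[df["copper_dependent_hit"]].to_csv("copper_dependent_hits.csv", index=False)
```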
NASA Astrophysics Data System (ADS)
Zemek, Peter G.; Plowman, Steven V.
2010-04-01
Advances in hardware have miniaturized the emissions spectrometer and associated optics, rendering them easily deployed in the field. Such systems are also suitable for vehicle mounting, and can provide high quality data and concentration information in minutes. Advances in software have accompanied this hardware evolution, enabling the development of portable point-and-click OP-FTIR systems that weigh less than 16 lbs. These systems are ideal for first responders, military, law enforcement, forensics, and screening applications using optical remote sensing (ORS) methodologies. With canned methods and interchangeable detectors, the new generation of OP-FTIR technology is coupled to the latest forward reference-type model software to provide point-and-click technology. These software models have been established for some time. However, refined user-friendly models that use active, passive, and solar occultation methodologies now allow the user to quickly field-screen and quantify plumes, fence-lines, and combustion incident scenarios at high temporal resolution. Synthetic background generation is now redundant, as the models use highly accurate instrument line shape (ILS) convolutions and several other parameters, in conjunction with radiative transfer model databases, to fit a single calibration spectrum to the collected sample spectra. Data retrievals are performed directly on single beam spectra using non-linear classical least squares (NLCLS). Typically, the Hitran line database is used to generate the initial calibration spectrum contained within the software.
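Conceptually, a retrieval of this kind fits a scaled calibration spectrum plus a smooth baseline to the measured spectrum. The sketch below illustrates the fitting idea only, using synthetic arrays and treating concentration, a small wavenumber shift and a quadratic baseline as the free parameters; it is not the vendor's NLCLS algorithm.

```python
# Illustrative NLCLS-style fit: scale a single calibration spectrum (plus a
# quadratic baseline and a small wavenumber shift) to a measured spectrum.
# Synthetic data; not the commercial retrieval code.
import numpy as np
from scipy.optimize import least_squares

wn = np.linspace(900.0, 1100.0, 512)                     # wavenumber grid, cm^-1
ref = np.exp(-0.5 * ((wn - 1000.0) / 4.0) ** 2)          # unit-concentration reference
measured = 0.8 * np.exp(-0.5 * ((wn - 1000.3) / 4.0) ** 2) + 0.02   # "observed" spectrum

def residual(p):
    conc, shift, b0, b1, b2 = p
    model = conc * np.interp(wn, wn + shift, ref) + b0 + b1 * wn + b2 * wn ** 2
    return model - measured

fit = least_squares(residual, x0=[1.0, 0.0, 0.0, 0.0, 0.0])
print("retrieved concentration ~", fit.x[0], "shift ~", fit.x[1], "cm^-1")
```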
Indoor Modelling Benchmark for 3D Geometry Extraction
NASA Astrophysics Data System (ADS)
Thomson, C.; Boehm, J.
2014-06-01
A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance has laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real-world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require considerable operator time (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user, and some commercial packages have appeared that provide some degree of automation. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result, this paper presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each accompanied by an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.
Development of a Software Safety Process and a Case Study of Its Use
NASA Technical Reports Server (NTRS)
Knight, J. C.
1996-01-01
Research in the year covered by this reporting period has been primarily directed toward: continued development of mock-ups of computer screens for the operator of a digital reactor control system; development of a reactor simulation to permit testing of various elements of the control system; formal specification of user interfaces; fault-tree analysis including software; evaluation of formal verification techniques; and continued development of a software documentation system. Technical results relating to this grant and the remainder of the principal investigator's research program are contained in various reports and papers.
Dresen, S; Ferreirós, N; Gnann, H; Zimmermann, R; Weinmann, W
2010-04-01
The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).
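Library identification of this kind ultimately reduces to a spectral similarity score between the acquired product-ion spectrum and each library entry. The sketch below shows a simple binned dot-product (cosine) match; the bin width, the toy fragment lists and the scoring choice are illustrative assumptions, not the instrument vendor's search algorithm.

```python
# Simple binned dot-product (cosine) match of an acquired MS/MS spectrum
# against a small library.  Fragment lists and bin width are illustrative.
import numpy as np

def binned(spectrum, bin_width=1.0, max_mz=1000.0):
    """spectrum: list of (m/z, intensity) pairs -> fixed-length unit vector."""
    vec = np.zeros(int(max_mz / bin_width) + 1)
    for mz, inten in spectrum:
        vec[int(mz / bin_width)] += inten
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def match_score(query, library_entry):
    return float(np.dot(binned(query), binned(library_entry)))

library = {"compound_A": [(154.1, 30.0), (193.1, 100.0), (257.1, 45.0)],
           "compound_B": [(165.1, 55.0), (201.1, 100.0), (286.1, 60.0)]}
acquired = [(154.0, 28.0), (193.1, 100.0), (257.2, 40.0)]

scores = {name: match_score(acquired, spec) for name, spec in library.items()}
print(max(scores, key=scores.get), scores)
```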
NASA Technical Reports Server (NTRS)
Perusich, Stephen; Moos, Thomas; Muscatello, Anthony
2011-01-01
This innovation provides the user with autonomous on-screen monitoring, embedded computations, and tabulated output for two new processes. The software was originally written for the Continuous Lunar Water Separation Process (CLWSP), but was found to be general enough to be applicable to the Lunar Greenhouse Amplifier (LGA) as well, with minor alterations. The resultant program should have general applicability to many laboratory processes. The objective for these programs was to create a software application that would provide both autonomous monitoring and data storage, along with manual manipulation. The software also allows operators the ability to input experimental changes and comments in real time without modifying the code itself. Common process elements, such as thermocouples, pressure transducers, and relative humidity sensors, are easily incorporated into the program in various configurations, along with specialized devices such as photodiode sensors. The goal of the CLWSP research project is to design, build, and test a new method to continuously separate, capture, and quantify water from a gas stream. The intended application is any In-Situ Resource Utilization (ISRU) process that aims to extract or produce water from lunar or planetary regolith. The present work is aimed at circumventing current problems and ultimately producing a system capable of continuous operation at moderate temperatures that can be scaled over a large capacity range depending on the ISRU process. The goal of the LGA research project is to design, build, and test a new type of greenhouse that could be used on the moon or Mars. The LGA uses super greenhouse gases (SGGs) to absorb long-wavelength radiation, thus creating a highly efficient greenhouse at a future lunar or Mars outpost. Silica-based glass, although highly efficient at trapping heat, is heavy, fragile, and not suitable for space greenhouse applications. Plastics are much lighter and more resilient, but are not efficient at absorbing long-wavelength infrared radiation and therefore will lose more heat to the environment compared to glass. The LGA unit uses a transparent polymer antechamber that surrounds part of the greenhouse and encases the SGGs, thereby minimizing infrared losses through the plastic windows. With ambient temperatures at the lunar poles of 50 C, the LGA should provide a substantial enhancement to currently conceived lunar greenhouses. Positive results obtained from this project could lead to a future large-scale system capable of running autonomously on the Moon, Mars, and beyond. The software for both applications needs to run the entire unit and all subprocesses; however, throughout testing, many variables and parameters need to be changed as more is learned about the system operation. The software provides the versatility to permit its operation to change as user requirements evolve.
Reverse screening methods to search for the protein targets of chemopreventive compounds
NASA Astrophysics Data System (ADS)
Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan
2018-05-01
This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction.
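As a concrete example of the ligand-based (similarity) branch of reverse screening described above, the sketch below ranks candidate protein targets by the maximum Tanimoto similarity between a query compound and each target's known ligands, using RDKit Morgan fingerprints. The SMILES strings and target names are placeholders; a real workflow would draw known ligands from ChEMBL or BindingDB, and reverse docking or shape screening would use other tools.

```python
# Minimal ligand-based target fishing: rank targets by the best fingerprint
# similarity between the query and each target's known ligands.
# SMILES and target names are illustrative placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fp(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

known_ligands = {                        # target -> known ligand SMILES (placeholders)
    "target_A": ["CC(=O)Oc1ccccc1C(=O)O"],
    "target_B": ["COc1cc2ncnc(Nc3ccc(F)c(Cl)c3)c2cc1OC"],
}
query = fp("CC(=O)Oc1ccccc1C(=O)OC")     # hypothetical query compound

ranking = sorted(
    ((target, max(DataStructs.TanimotoSimilarity(query, fp(s)) for s in ligands))
     for target, ligands in known_ligands.items()),
    key=lambda t: t[1], reverse=True)
print(ranking)       # most similar targets first = putative targets of the query
```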
Reverse Screening Methods to Search for the Protein Targets of Chemopreventive Compounds.
Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan
2018-01-01
This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget, and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB, and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction.
Reverse Screening Methods to Search for the Protein Targets of Chemopreventive Compounds
Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan
2018-01-01
This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget, and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB, and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction. PMID:29868550
Barrier screens: a method to sample blood-fed and host-seeking exophilic mosquitoes
2013-01-01
Background Determining the proportion of blood meals on humans by outdoor-feeding and resting mosquitoes is challenging. This is largely due to the difficulty of finding an adequate and unbiased sample of resting, engorged mosquitoes to enable the identification of host blood meal sources. This is particularly difficult in the south-west Pacific countries of Indonesia, the Solomon Islands and Papua New Guinea where thick vegetation constitutes the primary resting sites for the exophilic mosquitoes that are the primary malaria and filariasis vectors. Methods Barrier screens of shade-cloth netting attached to bamboo poles were constructed between villages and likely areas where mosquitoes might seek blood meals or rest. Flying mosquitoes, obstructed by the barrier screens, would temporarily stop and could then be captured by aspiration at hourly intervals throughout the night. Results In the three countries where this method was evaluated, blood-fed females of Anopheles farauti, Anopheles bancroftii, Anopheles longirostris, Anopheles sundaicus, Anopheles vagus, Anopheles kochi, Anopheles annularis, Anopheles tessellatus, Culex vishnui, Culex quinquefasciatus and Mansonia spp were collected while resting on the barrier screens. In addition, female Anopheles punctulatus and Armigeres spp as well as male An. farauti, Cx. vishnui, Cx. quinquefasciatus and Aedes species were similarly captured. Conclusions Building barrier screens as temporary resting sites in areas where mosquitoes were likely to fly was an extremely time-effective method for collecting an unbiased representative sample of engorged mosquitoes for determining the human blood index. PMID:23379959
A knowledge-based system design/information tool
NASA Technical Reports Server (NTRS)
Allen, James G.; Sikora, Scott E.
1990-01-01
The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.
NASA Technical Reports Server (NTRS)
Oluwole, Oluwayemisi O.; Wong, Hsi-Wu; Green, William
2012-01-01
AdapChem software enables high efficiency, low computational cost, and enhanced accuracy in computational fluid dynamics (CFD) numerical simulations used for combustion studies. The software dynamically allocates smaller, reduced chemical models instead of the larger, full chemistry models to evolve the calculation, while ensuring that the same accuracy is obtained for steady-state CFD reacting flow simulations. The software enables detailed chemical kinetic modeling in combustion CFD simulations. AdapChem adapts the reaction mechanism used in the CFD to the local reaction conditions. Instead of a single, comprehensive reaction mechanism throughout the computation, a dynamic distribution of smaller, reduced models is used to accurately capture the chemical kinetics at a fraction of the cost of the traditional single-mechanism approach.
A Data-Driven Solution for Performance Improvement
NASA Technical Reports Server (NTRS)
2002-01-01
Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user-friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Yuhua; Luebke, David; Pennline, Henry
2012-01-01
It is generally accepted that current technologies for capturing CO2 are still too energy intensive. Hence, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies, and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. These CO2 sorbent candidates were further considered for experimental validation. In this presentation, we first introduce our screening methodology, validated against a dataset of alkali and alkaline-earth metal oxides, hydroxides and bicarbonates whose thermodynamic properties are available. Then, by studying a series of lithium silicates, we found that increasing the Li2O/SiO2 ratio in the lithium silicates increases their corresponding turnover temperatures for CO2 capture reactions. Compared to anhydrous K2CO3, the hydrated K2CO3·1.5H2O can only be applied for post-combustion CO2 capture technology at temperatures lower than its phase-transition (to anhydrous phase) temperature, which depends on the CO2 pressure and the steam pressure, with the best range being P_H2O ≤ 1.0 bar. Above the phase-transition temperature, the sorbent will be regenerated into anhydrous K2CO3. Our theoretical investigations on Na-promoted MgO sorbents revealed that the sorption process takes place through formation of the Na2Mg(CO3)2 double carbonate, with better reaction kinetics over porous MgO than for the pure MgO sorbent. The experimental sorption tests also indicated that the Na-promoted MgO sorbent has high reactivity and capacity towards CO2 sorption and can be easily regenerated either through pressure or temperature swing processes.
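The turnover temperature used in this kind of screening is the temperature at which the free energy of the capture reaction, corrected for the CO2 partial pressure, changes sign. Assuming temperature-independent reaction enthalpy and entropy, it can be estimated in closed form as sketched below; the numerical dH/dS values are placeholders, not the paper's DFT results.

```python
# Turnover temperature T_t of a capture reaction  MO(s) + CO2(g) -> MCO3(s),
# assuming T-independent reaction enthalpy/entropy:
#   dG(T) = dH - T*dS - R*T*ln(p_CO2/p0) = 0   =>   T_t = dH / (dS + R*ln(p_CO2/p0))
# dH and dS below are illustrative placeholders, not the paper's calculated values.
import math

R = 8.314   # J/(mol K)

def turnover_temperature(dH, dS, p_co2=1.0, p0=1.0):
    """dH in J/mol (negative for capture), dS in J/(mol K) (negative)."""
    return dH / (dS + R * math.log(p_co2 / p0))

print(turnover_temperature(dH=-120e3, dS=-160.0, p_co2=0.1))   # post-combustion-like CO2 pressure
print(turnover_temperature(dH=-120e3, dS=-160.0, p_co2=20.0))  # pre-combustion-like CO2 pressure
```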
A Low-Cost Real Color Picker Based on Arduino
Agudo, Juan Enrique; Pardo, Pedro J.; Sánchez, Héctor; Pérez, Ángel Luis; Suero, María Isabel
2014-01-01
Color measurements have traditionally been linked to expensive and difficult-to-handle equipment. The set of mathematical transformations needed to transfer a color that we observe in any object that doesn't emit its own light (usually called a color-object) so that it can be displayed on a computer screen or printed on paper is not at all trivial. This usually requires a thorough knowledge of color spaces, colorimetric transformations and color management systems. This paper presents a system for capturing, processing and managing color, based on the TCS3414CS color sensor (I2C Sensor Color Grove), that allows the color of any non-self-luminous object to be measured using low-cost hardware based on Arduino. Specific software has been developed in Matlab, and a study of the linearity of the chromatic channels and the accuracy of color measurements for this device has been undertaken. All scripts used (Arduino and Matlab) are attached as supplementary material. The results show acceptable accuracy values that, although they obviously do not reach the levels obtained with other scientific instruments, represent a good low-cost option given the price difference. PMID:25004152
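The core of such a picker is the colorimetric chain from raw sensor channels to a device-independent space and then to a display space. A minimal sketch is shown below: the sensor-to-XYZ matrix must come from calibrating the TCS3414CS against patches of known color and is only a placeholder here, while the XYZ-to-sRGB matrix and gamma are the standard sRGB definitions.

```python
# Sketch of the colour pipeline: raw sensor RGB -> CIE XYZ -> sRGB for display.
# SENSOR_TO_XYZ is a placeholder; it must be obtained by calibrating the
# TCS3414CS against patches of known XYZ values.
import numpy as np

SENSOR_TO_XYZ = np.array([[0.40, 0.35, 0.15],     # hypothetical calibration matrix
                          [0.20, 0.70, 0.10],
                          [0.02, 0.10, 0.80]])

XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],   # standard sRGB (D65) matrix
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def srgb_gamma(c):
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def sensor_to_srgb(raw_rgb, white_raw):
    rgb = np.asarray(raw_rgb, float) / np.asarray(white_raw, float)  # normalise to white reference
    xyz = SENSOR_TO_XYZ @ rgb
    return np.round(255 * srgb_gamma(XYZ_TO_SRGB @ xyz)).astype(int)

print(sensor_to_srgb([520, 610, 430], white_raw=[900, 1000, 800]))
```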
NASA Astrophysics Data System (ADS)
Li, Qingli; Liu, Hongying; Wang, Yiting; Sun, Zhen; Guo, Fangmin; Zhu, Jianzhong
2014-12-01
Histological observation of dual-stained colon sections is usually performed by visual observation under a light microscope, or by viewing on a computer screen with the assistance of image processing software, in both research and clinical settings. These traditional methods are usually not sufficient to reliably differentiate spatially overlapping chromogens generated by different dyes. Hyperspectral microscopic imaging technology offers a solution for these constraints, as the hyperspectral microscopic images contain information that allows differentiation between spatially co-located chromogens with similar but distinct spectra. In this paper, a hyperspectral microscopic imaging (HMI) system is used to identify methyl green and nitrotetrazolium blue chloride in dual-stained colon sections. Hyperspectral microscopic images are captured and a normalized score algorithm is proposed to identify the stains and generate the co-expression results. Experimental results show that the proposed normalized score algorithm can generate more accurate co-localization results than the spectral angle mapper algorithm. Hyperspectral microscopic imaging technology can enhance the visualization of dual-stained colon sections and improve the contrast and legibility of each stain using its spectral signature, which is helpful for pathologists performing histological analyses.
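For reference, the spectral angle mapper used as the baseline is the standard per-pixel angle between a pixel spectrum and a reference chromogen spectrum; it is sketched below together with a simple min-max-normalised correlation score that only stands in for the paper's normalized score (whose exact definition is not given in the abstract). The reference spectra and band grid are synthetic placeholders.

```python
# Per-pixel spectral angle mapper (standard definition) and a simple
# normalised-correlation stand-in for the paper's "normalized score".
import numpy as np

def spectral_angle(pixel, reference):
    cosang = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))        # radians; smaller = better match

def normalized_score(pixel, reference):
    p = (pixel - pixel.min()) / (np.ptp(pixel) + 1e-12)  # min-max normalise both spectra
    r = (reference - reference.min()) / (np.ptp(reference) + 1e-12)
    return float(np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r) + 1e-12))

bands = np.linspace(400, 720, 33)                        # nm, illustrative grid
methyl_green = np.exp(-0.5 * ((bands - 630) / 30) ** 2)  # placeholder reference spectrum
pixel        = 0.7 * methyl_green + 0.05 * np.random.rand(bands.size)

print(spectral_angle(pixel, methyl_green), normalized_score(pixel, methyl_green))
```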
A low-cost real color picker based on Arduino.
Agudo, Juan Enrique; Pardo, Pedro J; Sánchez, Héctor; Pérez, Ángel Luis; Suero, María Isabel
2014-07-07
Color measurements have traditionally been linked to expensive and difficult-to-handle equipment. The set of mathematical transformations needed to transfer a color that we observe in any object that doesn't emit its own light (usually called a color-object) so that it can be displayed on a computer screen or printed on paper is not at all trivial. This usually requires a thorough knowledge of color spaces, colorimetric transformations and color management systems. This paper presents a system for capturing, processing and managing color, based on the TCS3414CS color sensor (I2C Sensor Color Grove), that allows the color of any non-self-luminous object to be measured using low-cost hardware based on Arduino. Specific software has been developed in Matlab, and a study of the linearity of the chromatic channels and the accuracy of color measurements for this device has been undertaken. All scripts used (Arduino and Matlab) are attached as supplementary material. The results show acceptable accuracy values that, although they obviously do not reach the levels obtained with other scientific instruments, represent a good low-cost option given the price difference.
Luster measurements of lips treated with lipstick formulations.
Yadav, Santosh; Issa, Nevine; Streuli, David; McMullen, Roger; Fares, Hani
2011-01-01
In this study, digital photography in combination with image analysis was used to measure the luster of several lipstick formulations containing varying amounts and types of polymers. A weighed amount of lipstick was applied to a mannequin's lips and the mannequin was illuminated by a uniform beam of a white light source. Digital images of the mannequin were captured with a high-resolution camera and the images were analyzed using image analysis software. Luster analysis was performed using the Stamm (L(Stamm)) and Reich-Robbins (L(R-R)) luster parameters. Statistical analysis was performed on each luster parameter (L(Stamm) and L(R-R)), peak height, and peak width. Peak heights for the lipstick formulations containing 11% and 5% VP/eicosene copolymer were statistically different from those of the control. The L(Stamm) and L(R-R) parameters for the treatment containing 11% VP/eicosene copolymer were statistically different from those of the control. Based on the results obtained in this study, we are able to determine whether a polymer is a good pigment dispersant and contributes to the visually detected shine of a lipstick upon application. The methodology presented in this paper could serve as a tool for investigators to screen their ingredients for shine in lipstick formulations.
Cultural expressions of depression and the development of the Indonesian Depression Checklist.
Widiana, Herlina Siwi; Simpson, Katrina; Manderson, Lenore
2018-06-01
Depression may manifest differently across cultural settings, suggesting the value of an assessment tool that is sensitive enough to capture these variations. The study reported in this article aimed to develop a depression screening tool for Indonesians derived from ethnographic interviews with 20 people who had been diagnosed as having depression by clinical psychologists at primary health centers. The tool, which we have termed the Indonesian Depression Checklist (IDC), consists of 40 items. The tool was administered to 125 people assessed to have depression by 40 clinical psychologists in primary health centers. The data were analyzed with Confirmatory Factor Analysis (CFA) (IBM SPSS AMOS Software). CFA identified a five-factor hierarchical model (χ2 = 168.157, p = .091; CFI = .963; TLI = .957; RMSEA = .036). A 19-item inventory of the IDC, with five factors - Physical Symptoms, Affect, Cognition, Social Engagement and Religiosity - was identified. There was a strong correlation between the total score of the IDC and total score of the Center for Epidemiological Studies-Depression scale (revised version CES-D), a standard tool for assessing symptoms of depression. The IDC accommodates culturally distinctive aspects of depression among Indonesians that are not included in the CES-D.
2012-01-01
Background Clinical trials are the primary mechanism for advancing clinical care and evidence-based practice, yet challenges with the recruitment of participants for such trials are widely recognized as a major barrier to these types of studies. Data warehouses (DW) store large amounts of heterogeneous clinical data that can be used to enhance recruitment practices, but multiple challenges exist when using a data warehouse for such activities, due to the manner of collection, management, integration, analysis, and dissemination of the data. A critical step in leveraging the DW for recruitment purposes is being able to match trial eligibility criteria to discrete and semi-structured data types in the data warehouse, though trial eligibility criteria tend to be written without concern for their computability. We present the multi-modal evaluation of a web-based tool that can be used for pre-screening patients for clinical trial eligibility and assess the ability of this tool to be practically used for clinical research pre-screening and recruitment. Methods The study used a validation study, usability testing, and a heuristic evaluation to evaluate and characterize the operational characteristics of the software as well as human factors affecting its use. Results Clinical trials from the Division of Cardiology and the Department of Family Medicine were used for this multi-modal evaluation, which included a validation study, usability study, and a heuristic evaluation. From the results of the validation study, the software demonstrated a positive predictive value (PPV) of 54.12% and 0.7%, respectively, and a negative predictive value (NPV) of 73.3% and 87.5%, respectively, for two types of clinical trials. Heuristic principles concerning error prevention and documentation were characterized as the major usability issues during the heuristic evaluation. Conclusions This software is intended to provide an initial list of eligible patients to clinical study coordinators, providing a starting point for further eligibility screening by the coordinator. Because this software has a high “rule in” ability, meaning that it is able to remove patients who are not eligible for the study, the use of an automated tool built to leverage an existing enterprise DW can be beneficial to determining eligibility and facilitating clinical trial recruitment through pre-screening. While the results of this study are promising, further refinement and study of this and related approaches to automated eligibility screening, including comparison to other approaches and stakeholder perceptions, are needed and future studies are planned to address these needs. PMID:22646313
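The reported operating characteristics follow directly from the pre-screening tool's confusion matrix against coordinator-confirmed eligibility. The helper below shows how PPV and NPV (and, for completeness, sensitivity and specificity) are computed from those counts; the counts in the example are arbitrary illustrations, not the study's data.

```python
# PPV/NPV (plus sensitivity/specificity) from a confusion matrix of
# tool-flagged vs. coordinator-confirmed eligibility.  Counts are arbitrary.
def screening_metrics(tp, fp, tn, fn):
    return {
        "PPV":         tp / (tp + fp),   # flagged patients who are truly eligible
        "NPV":         tn / (tn + fn),   # excluded patients who are truly ineligible
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

print(screening_metrics(tp=46, fp=39, tn=110, fn=40))
```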
Pressler, Taylor R; Yen, Po-Yin; Ding, Jing; Liu, Jianhua; Embi, Peter J; Payne, Philip R O
2012-05-30
Clinical trials are the primary mechanism for advancing clinical care and evidence-based practice, yet challenges with the recruitment of participants for such trials are widely recognized as a major barrier to these types of studies. Data warehouses (DW) store large amounts of heterogeneous clinical data that can be used to enhance recruitment practices, but multiple challenges exist when using a data warehouse for such activities, due to the manner of collection, management, integration, analysis, and dissemination of the data. A critical step in leveraging the DW for recruitment purposes is being able to match trial eligibility criteria to discrete and semi-structured data types in the data warehouse, though trial eligibility criteria tend to be written without concern for their computability. We present the multi-modal evaluation of a web-based tool that can be used for pre-screening patients for clinical trial eligibility and assess the ability of this tool to be practically used for clinical research pre-screening and recruitment. The study used a validation study, usability testing, and a heuristic evaluation to evaluate and characterize the operational characteristics of the software as well as human factors affecting its use. Clinical trials from the Division of Cardiology and the Department of Family Medicine were used for this multi-modal evaluation, which included a validation study, usability study, and a heuristic evaluation. From the results of the validation study, the software demonstrated a positive predictive value (PPV) of 54.12% and 0.7%, respectively, and a negative predictive value (NPV) of 73.3% and 87.5%, respectively, for two types of clinical trials. Heuristic principles concerning error prevention and documentation were characterized as the major usability issues during the heuristic evaluation. This software is intended to provide an initial list of eligible patients to clinical study coordinators, providing a starting point for further eligibility screening by the coordinator. Because this software has a high "rule in" ability, meaning that it is able to remove patients who are not eligible for the study, the use of an automated tool built to leverage an existing enterprise DW can be beneficial to determining eligibility and facilitating clinical trial recruitment through pre-screening. While the results of this study are promising, further refinement and study of this and related approaches to automated eligibility screening, including comparison to other approaches and stakeholder perceptions, are needed and future studies are planned to address these needs.
Integration of time as a factor in ergonomic simulation.
Walther, Mario; Muñoz, Begoña Toledo
2012-01-01
The paper describes the application of a simulation-based ergonomic evaluation. Within a pilot project, the algorithms of the screening method of the European Assembly Worksheet were transferred into an existing digital human model. Movement data was recorded with a specially developed hybrid motion capture system. A prototype of the system was built and is currently being tested at the Volkswagen Group. First results showed the feasibility of the simulation-based ergonomic evaluation with motion capture.
Peron, Guillaume; Hines, James E.
2014-01-01
Many industrial and agricultural activities involve wildlife fatalities by collision, poisoning or other involuntary harvest: wind turbines, highway networks, utility networks, tall structures, pesticides, etc. Impacted wildlife may benefit from official protection, including the requirement to monitor the impact. Carcass counts can often be conducted to quantify the number of fatalities, but they need to be corrected for carcass persistence time (removal by scavengers and decay) and detection probability (searcher efficiency). In this article we introduce a new piece of software that fits a superpopulation capture-recapture model to raw count data. It uses trial data to estimate detection and daily persistence probabilities. A recurrent issue is that fatalities of rare, protected species are infrequent, in which case the software offers the option to switch to an ‘evidence of absence’ mode, i.e., to estimate the number of carcasses that may have been missed by field crews. The software allows distinguishing between different turbine types (e.g., different vegetation cover under turbines, or different technical properties), as well as between two carcass age classes or states, with transitions between those classes (e.g., fresh and dry). There is a data simulation capacity that may be used at the planning stage to optimize sampling design. Resulting mortality estimates can be used to 1) quantify the required amount of compensation, 2) inform mortality projections for proposed development sites, and 3) inform decisions about the management of existing sites.
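At its core, an estimator of this kind inflates the raw carcass count by the probability that a carcass both persists until a search and is then detected. A deliberately simplified version is sketched below, assuming constant daily persistence, searches every I days and uniform arrival times within the interval; the actual software fits a full superpopulation capture-recapture model with covariates, carcass classes and uncertainty, which this sketch does not attempt.

```python
# Deliberately simplified fatality estimator: inflate the carcass count by the
# probability that a carcass persists until the next search and is detected.
# Assumes constant daily persistence, searches every I days, uniform arrivals.
def estimated_fatalities(carcass_count, detection_prob, daily_persistence, search_interval_days):
    s, I = daily_persistence, search_interval_days
    # Mean probability that a carcass arriving on a random day of the interval
    # is still present at the next search.
    mean_persistence = sum(s ** d for d in range(1, I + 1)) / I
    inclusion_prob = detection_prob * mean_persistence
    return carcass_count / inclusion_prob

print(estimated_fatalities(carcass_count=12, detection_prob=0.7,
                           daily_persistence=0.85, search_interval_days=7))
```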
Analyzing the cost of screening selectee and non-selectee baggage.
Virta, Julie L; Jacobson, Sheldon H; Kobza, John E
2003-10-01
Determining how to effectively operate security devices is as important to overall system performance as developing more sensitive security devices. In light of recent federal mandates for 100% screening of all checked baggage, this research studies the trade-offs between screening only selectee checked baggage and screening both selectee and non-selectee checked baggage for a single baggage screening security device deployed at an airport. This trade-off is represented using a cost model that incorporates the cost of the baggage screening security device, the volume of checked baggage processed through the device, and the outcomes that occur when the device is used. The cost model captures the cost of deploying, maintaining, and operating a single baggage screening security device over a one-year period. The study concludes that as excess baggage screening capacity is used to screen non-selectee checked bags, the expected annual cost increases, the expected annual cost per checked bag screened decreases, and the expected annual cost per expected number of threats detected in the checked bags screened increases. These results indicate that the marginal increase in security per dollar spent is significantly lower when non-selectee checked bags are screened than when only selectee checked bags are screened.
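The structure of such a cost model can be made concrete with a small function: an annualized device cost, a per-bag operating cost and the expected cost of alarm resolution, evaluated for a chosen volume and threat rate. Every numeric parameter below is a placeholder and the terms are simplified; the paper's actual model and figures are not reproduced here.

```python
# Skeleton of an annual cost model for one checked-baggage screening device.
# All parameter values are placeholders, not the paper's data.
def annual_cost(bags_per_year, threat_rate, device_annualized, op_cost_per_bag,
                false_alarm_rate, alarm_resolution_cost, detection_prob):
    expected_threats  = bags_per_year * threat_rate
    expected_detected = expected_threats * detection_prob
    expected_alarms   = bags_per_year * false_alarm_rate
    total = (device_annualized
             + bags_per_year * op_cost_per_bag
             + expected_alarms * alarm_resolution_cost)
    return {"total": total,
            "per_bag": total / bags_per_year,
            "per_expected_detection": total / expected_detected}

# Selectee-only volume vs. selectee + non-selectee volume (placeholder numbers;
# the added non-selectee bags carry a lower threat rate).
print(annual_cost(150_000, 1e-6, 400_000, 0.75, 0.08, 25.0, 0.9))
print(annual_cost(900_000, 2e-7, 400_000, 0.75, 0.08, 25.0, 0.9))
```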
Web-Based Environment for Maintaining Legacy Software
NASA Technical Reports Server (NTRS)
Tigges, Michael; Thompson, Nelson; Orr, Mark; Fox, Richard
2007-01-01
Advanced Tool Integration Environment (ATIE) is the name of both a software system and a Web-based environment created by the system for maintaining an archive of legacy software and the expertise involved in developing the legacy software. ATIE can also be used in modifying legacy software and developing new software. The information that can be encapsulated in ATIE includes experts' documentation, input and output data of test cases, source code, and compilation scripts. All of this information is available within a common environment and retained in a database for ease of access and recovery by use of powerful search engines. ATIE also accommodates the embedding of supporting software that users require for their work, and even enables access to supporting commercial-off-the-shelf (COTS) software within the flow of the expert's work. The flow of work can be captured by saving the sequence of computer programs that the expert uses. A user gains access to ATIE via a Web browser. A modern Web-based graphical user interface promotes efficiency in the retrieval, execution, and modification of legacy code. Thus, ATIE saves time and money in the support of new and pre-existing programs.
Methods, Knowledge Support, and Experimental Tools for Modeling
2006-10-01
open source software entities: the PostgreSQL relational database management system (http://www.postgres.org), the Apache web server (http...past. The revision control system allows the program to capture disagreements, and allows users to explore the history of such disagreements by
Lykkesfeldt, Jens
2016-08-01
In recent years, several online tools have appeared capable of identifying potential plagiarism in science. While such tools may help to maintain or even increase the originality and ethical quality of the scientific literature, no apparent consensus exists among editors on the degree of plagiarism or self-plagiarism necessary to reject or retract manuscripts. In this study, two entire volumes of published original papers and reviews from Basic & Clinical Pharmacology & Toxicology were retrospectively scanned for similarity in anonymized form using iThenticate software to explore measures to predictively identify true plagiarism and self-plagiarism and to potentially provide guidelines for future screening of incoming manuscripts. Several filters were applied, all of which appeared to lower the noise from irrelevant hits. The main conclusions were that plagiarism software offers a unique opportunity to screen for plagiarism easily but also that it has to be employed with caution as automated or uncritical use is far too unreliable to allow a fair basis for judging the degree of plagiarism in a manuscript. This remains the job of senior editors. Whereas a few cases of self-plagiarism that would not likely have been accepted with today's guidelines were indeed identified, no cases of fraud or serious plagiarism were found. Potential guidelines are discussed. © 2016 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
2012-03-19
Peter Ma, EV74, wears a suit covered with spherical reflectors that enable his motions to be tracked by the motion capture system. The human model in red on the screen in the background represents the system-generated image of Peter's position.
Creep Measurement Video Extensometer
NASA Technical Reports Server (NTRS)
Jaster, Mark; Vickerman, Mary; Padula, Santo, II; Juhas, John
2011-01-01
Understanding material behavior under load is critical to the efficient and accurate design of advanced aircraft and spacecraft. Technologies such as the one disclosed here allow accurate creep measurements to be taken automatically, reducing error. The goal was to develop a non-contact, automated system capable of capturing images that could subsequently be processed to obtain the strain characteristics of these materials during deformation, while maintaining adequate resolution to capture the true deformation response of the material. The measurement system comprises a high-resolution digital camera, computer, and software that work collectively to interpret the image.
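Once two fiducial markers on the specimen are tracked in each captured frame, the strain measurement itself is straightforward: engineering strain is the change in marker separation divided by the initial gauge length. A minimal sketch follows; marker tracking (e.g., by template matching) is assumed to have produced the pixel coordinates already, and the coordinate values are illustrative.

```python
# Engineering creep strain from tracked marker positions in successive frames.
# Marker detection/tracking is assumed upstream; coordinates are illustrative.
import math

def strain(marker_a, marker_b, gauge_length_px):
    """Engineering strain from the current marker pixel positions."""
    length = math.dist(marker_a, marker_b)
    return (length - gauge_length_px) / gauge_length_px

frames = [((100.0, 50.0), (100.0, 450.0)),     # (marker A, marker B) per frame
          ((100.1, 49.8), (100.2, 452.6)),
          ((100.1, 49.5), (100.3, 455.9))]

L0 = math.dist(*frames[0])                     # initial gauge length in pixels
for t, (a, b) in enumerate(frames):
    print(f"frame {t}: strain = {strain(a, b, L0):.4%}")
```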
Capturing User Reading Behaviors for Personalized Document Summarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Jiang, Hao; Lau, Francis
2011-01-01
We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm produces superior personalized document summaries compared with all the other methods, in that the summaries generated by our algorithm can better satisfy a user's personal preferences.
A Sign Language Screen Reader for Deaf
NASA Astrophysics Data System (ADS)
El Ghoul, Oussama; Jemni, Mohamed
Screen reader technology first appeared to allow blind people and people with reading difficulties to use computers and access digital information. Until now, this technology has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can facilitate the manipulation of computers and the reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf users. The output of the reader is a visual translation of the text into sign language. The screen reader is composed of two essential modules: the first is designed to capture the activities of users (mouse and keyboard events); for this purpose, we adopted the Microsoft MSAA application programming interfaces. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine. This module converts text into sign language animation based on avatar technology.
Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J
2012-01-01
Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow is comprised of three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow.
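The label-data stage of such a workflow is commonly bootstrapped with off-the-shelf OCR applied to the label region of the specimen image; a minimal sketch using Tesseract via pytesseract is shown below. The file name and crop coordinates are placeholders, and real label transcription would still be queued for human verification.

```python
# Minimal OCR pass over the label region of a herbarium specimen image using
# Tesseract via pytesseract.  File name and crop box are placeholders.
from PIL import Image
import pytesseract

image = Image.open("specimen_E001234.jpg")
label_region = image.crop((2200, 3000, 3600, 3800))   # assumed label position in pixels
text = pytesseract.image_to_string(label_region)
print(text)                                            # raw transcription, to be verified by a person
```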
Heckeroth, J; Boywitt, C D
2017-06-01
Considering the increasing relevance of handwritten electronically captured signatures, we evaluated the ability of forensic handwriting examiners (FHEs) to distinguish between authentic and simulated electronic signatures. Sixty-six professional FHEs examined the authenticity of electronic signatures captured with software by signotec on a smartphone Galaxy Note 4 by Samsung and signatures made with a ballpoint pen on paper (conventional signatures). In addition, we experimentally varied the name ("J. König" vs. "A. Zaiser") and the status (authentic vs. simulated) of the signatures in question. FHEs' conclusions about the authenticity did not show a statistically significant general difference between electronic and conventional signatures. Furthermore, no significant discrepancies between electronic and conventional signatures were found with regard to other important aspects of the authenticity examination such as questioned signatures' graphic information content, the suitability of the provided sample signatures, the necessity of further examinations and the levels of difficulty of the cases under examination. Thus, this study did not reveal any indications that electronic signatures captured with software by signotec on a Galaxy Note 4 are less well suited than conventional signatures for the examination of authenticity, precluding potential technical problems concerning the integrity of electronic signatures. Copyright © 2017 Elsevier B.V. All rights reserved.
Trade Space Specification Tool (TSST) for Rapid Mission Architecture (Version 1.2)
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Schrock, Mitchell; Borden, Chester S.; Moeller, Robert C.
2013-01-01
Trade Space Specification Tool (TSST) is designed to quickly capture ideas in early spacecraft and mission architecture design and to categorize them into trade space dimensions and options for later analysis. It is implemented as an Eclipse RCP application, which can be run as a standalone program. Users rapidly create concept items with single clicks on a graphical canvas, and can organize and create linkages between the ideas using drag-and-drop actions within the same graphical view. Various views, such as a trade view, rules view, and architecture view, are provided to help users visualize the trade space. This software can identify, explore, and assess aspects of the mission trade space, as well as capture and organize linkages/dependencies between trade space components. The tool supports a user-in-the-loop preliminary logical examination and filtering of trade space options to help identify which paths in the trade space are feasible (and preferred) and what analyses need to be done later with executable models. This tool provides multiple user views of the trade space to guide the analyst/team, to facilitate interpretation and communication of the trade space components and linkages, to identify gaps in combining and selecting trade space options, and to guide user decision-making on which combinations of architectural options should be pursued for further evaluation. This software provides an environment to capture mission trade space elements rapidly and to assist users with their architecture analysis. It is primarily focused on mission and spacecraft architecture design rather than general-purpose design applications. In addition, it provides flexibility to create concepts and organize ideas. The software is developed as an Eclipse plug-in and potentially can be integrated with other Eclipse-based tools.
Real-time animation software for customized training to use motor prosthetic systems.
Davoodi, Rahman; Loeb, Gerald E
2012-03-01
Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite its ability to create sophisticated animation and high-quality rendering, existing animation software is not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact, and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training, or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations, but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.
An interactive, multi-touch videowall for scientific data exploration
NASA Astrophysics Data System (ADS)
Blower, Jon; Griffiths, Guy; van Meersbergen, Maarten; Lusher, Scott; Styles, Jon
2014-05-01
The use of videowalls for scientific data exploration is rising as hardware becomes cheaper and the availability of software and multimedia content grows. Most videowalls are used primarily for outreach and communication purposes, but there is increasing interest in using large display screens to support exploratory visualization as an integral part of scientific research. In this PICO presentation we will present a brief overview of a new videowall system at the University of Reading, which is designed specifically to support interactive, exploratory visualization activities in climate science and Earth Observation. The videowall consists of eight 42-inch full-HD screens (in 4x2 formation), giving a total resolution of about 16 megapixels. The display is managed by a videowall controller, which can direct video to the screen from up to four external laptops, a purpose-built graphics workstation, or any combination thereof. A multi-touch overlay provides the capability for the user to interact directly with the data. There are many ways to use the videowall, and a key technical challenge is to make the most of the touch capabilities - touch has the potential to greatly reduce the learning curve in interactive data exploration, but most software is not yet designed for this purpose. In the PICO we will present an overview of some ways in which the wall can be employed in science, seeking feedback and discussion from the community. The system was inspired by an existing and highly-successful system (known as the "Collaboratorium") at the Netherlands e-Science Center (NLeSC). We will demonstrate how we have adapted NLeSC's visualization software to our system for touch-enabled multi-screen climate data exploration.
Computer Vision Tool and Technician as First Reader of Lung Cancer Screening CT Scans.
Ritchie, Alexander J; Sanghera, Calvin; Jacobs, Colin; Zhang, Wei; Mayo, John; Schmidt, Heidi; Gingras, Michel; Pasian, Sergio; Stewart, Lori; Tsai, Scott; Manos, Daria; Seely, Jean M; Burrowes, Paul; Bhatia, Rick; Atkar-Khattra, Sukhinder; van Ginneken, Bram; Tammemagi, Martin; Tsao, Ming Sound; Lam, Stephen
2016-05-01
To implement a cost-effective low-dose computed tomography (LDCT) lung cancer screening program at the population level, accurate and efficient interpretation of a large volume of LDCT scans is needed. The objective of this study was to evaluate a workflow strategy to identify abnormal LDCT scans in which a technician assisted by computer vision (CV) software acts as a first reader with the aim to improve speed, consistency, and quality of scan interpretation. Without knowledge of the diagnosis, a technician reviewed 828 randomly batched scans (136 with lung cancers, 556 with benign nodules, and 136 without nodules) from the baseline Pan-Canadian Early Detection of Lung Cancer Study that had been annotated by the CV software CIRRUS Lung Screening (Diagnostic Image Analysis Group, Nijmegen, The Netherlands). The scans were classified as either normal (no nodules ≥1 mm or benign nodules) or abnormal (nodules or other abnormality). The results were compared with the diagnostic interpretation by Pan-Canadian Early Detection of Lung Cancer Study radiologists. The overall sensitivity and specificity of the technician in identifying an abnormal scan were 97.8% (95% confidence interval: 96.4-98.8) and 98.0% (95% confidence interval: 89.5-99.7), respectively. Of the 112 prevalent nodules that were found to be malignant in follow-up, 92.9% were correctly identified by the technician plus CV compared with 84.8% by the study radiologists. The average time taken by the technician to review a scan after CV processing was 208 ± 120 seconds. Prescreening CV software and a technician as first reader is a promising strategy for improving the consistency and quality of screening interpretation of LDCT scans. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo
2018-05-01
Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse & keyboard-based interface both for expert and non-expert users.
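For readers unfamiliar with how a captured prop orientation ends up posing a character, the snippet below shows the kind of retargeting step involved, written against Blender's Python API (bpy) and runnable from Blender's Python console. The armature name "Armature", the bone name "forearm", and the incoming quaternion are hypothetical stand-ins for data that would come from the instrumented prop or the tracked skeleton; this is not the authors' implementation.

```python
import bpy

def pose_bone_from_prop(armature_name, bone_name, quat, frame):
    """Apply one captured orientation to a pose bone and keyframe it."""
    arm = bpy.data.objects[armature_name]   # the rigged character
    bone = arm.pose.bones[bone_name]        # bone driven by the prop
    bone.rotation_mode = 'QUATERNION'
    bone.rotation_quaternion = quat         # (w, x, y, z) from the sensor
    bone.keyframe_insert(data_path="rotation_quaternion", frame=frame)

# Hypothetical reading streamed from the tangible prop at frame 10.
pose_bone_from_prop("Armature", "forearm", (0.966, 0.0, 0.259, 0.0), frame=10)
```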
Bietz, Stefan; Inhester, Therese; Lauck, Florian; Sommer, Kai; von Behren, Mathias M; Fährrolfes, Rainer; Flachsenberg, Florian; Meyder, Agnes; Nittinger, Eva; Otto, Thomas; Hilbig, Matthias; Schomburg, Karen T; Volkamer, Andrea; Rarey, Matthias
2017-11-10
Nowadays, computational approaches are an integral part of life science research. Problems related to interpretation of experimental results, data analysis, or visualization tasks highly benefit from the achievements of the digital era. Simulation methods facilitate predictions of physicochemical properties and can assist in understanding macromolecular phenomena. Here, we will give an overview of the methods developed in our group that aim at supporting researchers from all life science areas. Based on state-of-the-art approaches from structural bioinformatics and cheminformatics, we provide software covering a wide range of research questions. Our all-in-one web service platform ProteinsPlus (http://proteins.plus) offers solutions for pocket and druggability prediction, hydrogen placement, structure quality assessment, ensemble generation, protein-protein interaction classification, and 2D-interaction visualization. Additionally, we provide a software package that contains tools targeting cheminformatics problems like file format conversion, molecule data set processing, SMARTS editing, fragment space enumeration, and ligand-based virtual screening. Furthermore, it also includes structural bioinformatics solutions for inverse screening, binding site alignment, and searching interaction patterns across structure libraries. The software package is available at http://software.zbh.uni-hamburg.de. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Yu, Wei; Clyne, Melinda; Dolan, Siobhan M; Yesupriya, Ajay; Wulf, Anja; Liu, Tiebin; Khoury, Muin J; Gwinn, Marta
2008-04-22
Synthesis of data from published human genetic association studies is a critical step in the translation of human genome discoveries into health applications. Although genetic association studies account for a substantial proportion of the abstracts in PubMed, identifying them with standard queries is not always accurate or efficient. Further automating the literature-screening process can reduce the burden of a labor-intensive and time-consuming traditional literature search. The Support Vector Machine (SVM), a well-established machine learning technique, has been successful in classifying text, including biomedical literature. The GAPscreener, a free SVM-based software tool, can be used to assist in screening PubMed abstracts for human genetic association studies. The data source for this research was the HuGE Navigator, formerly known as the HuGE Pub Lit database. Weighted SVM feature selection based on a keyword list obtained by the two-way z score method demonstrated the best screening performance, achieving 97.5% recall, 98.3% specificity and 31.9% precision in performance testing. Compared with the traditional screening process based on a complex PubMed query, the SVM tool reduced by about 90% the number of abstracts requiring individual review by the database curator. The tool also ascertained 47 articles that were missed by the traditional literature screening process during the 4-week test period. We examined the literature on genetic associations with preterm birth as an example. Compared with the traditional, manual process, the GAPscreener both reduced effort and improved accuracy. GAPscreener is the first free SVM-based application available for screening the human genetic association literature in PubMed with high recall and specificity. The user-friendly graphical user interface makes this a practical, stand-alone application. The software can be downloaded at no charge.
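GAPscreener itself is a stand-alone application, but the core idea, an SVM over weighted text features that separates genetic-association abstracts from the rest, can be sketched with off-the-shelf tools. The following is a minimal illustration using scikit-learn, not the authors' implementation; the toy abstracts and labels are invented, and TF-IDF features stand in for the paper's z-score-weighted keyword list.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: 1 = human genetic association study, 0 = other abstract.
abstracts = [
    "polymorphism in the IL6 gene associated with preterm birth risk",
    "genotype frequencies of APOE alleles and Alzheimer disease odds ratio",
    "surgical outcomes of laparoscopic appendectomy in a single center",
    "qualitative study of nurse attitudes toward electronic health records",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus a linear SVM, a common setup for abstract screening.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(abstracts, labels)

# Flag new PubMed abstracts for curator review.
print(classifier.predict(["association of TNF variants with asthma susceptibility"]))
```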
Screening in perturbative approaches to LSS
Fasiello, Matteo; Vlah, Zvonimir
2017-08-24
A specific value for the cosmological constant Λ can account for late-time cosmic acceleration. However, motivated by the so-called cosmological constant problem(s), several alternative mechanisms have been explored. To date, a host of well-studied dynamical dark energy and modified gravity models exists. Going beyond ΛCDM often comes with additional degrees of freedom (dofs). For these to pass existing observational tests, an efficient screening mechanism must be in place. Furthermore, the linear and quasi-linear regimes of structure formation are ideal probes of such dofs and can capture the onset of screening. We propose here a semi-phenomenological “filter” to account for screening dynamics on LSS observables, with special emphasis on Vainshtein-type screening.
Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis
ERIC Educational Resources Information Center
LoPresto, Michael C.
2013-01-01
What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
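The same demonstration can be reproduced numerically: superpose two sine waves at the frequencies of a musical interval and inspect the Fourier spectrum. The sketch below uses NumPy's FFT with a perfect fifth (440 Hz and 660 Hz) chosen as an example; the tuning-fork frequencies in the classroom setup may differ.

```python
import numpy as np

fs = 44100                       # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # half a second of signal

# Superposition of two tones a perfect fifth apart (example frequencies).
signal = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)

# Fourier analysis: the spectrum should show two clear peaks.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Report the two strongest spectral components.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))             # expected: approximately [440.0, 660.0]
```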
A class hierarchical, object-oriented approach to virtual memory management
NASA Technical Reports Server (NTRS)
Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.
1989-01-01
The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
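The abstract-class pattern described here, a generic memory object with read and write methods specialized by concrete subclasses at each level of the hierarchy, can be illustrated compactly. The sketch below is in Python rather than the C++ used by Choices, and the class and method names are illustrative, not the actual Choices interfaces.

```python
from abc import ABC, abstractmethod

class MemoryObject(ABC):
    """Abstract memory object: encapsulated storage with read/write methods."""

    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class PhysicalMemory(MemoryObject):
    """Concrete subclass backed by an in-core byte buffer."""

    def __init__(self, size: int):
        self._store = bytearray(size)

    def read(self, offset, length):
        return bytes(self._store[offset:offset + length])

    def write(self, offset, data):
        self._store[offset:offset + len(data)] = data

class PagedMemory(MemoryObject):
    """Concrete subclass that forwards to a slower backing store on demand."""

    def __init__(self, backing: MemoryObject):
        self._backing = backing   # e.g., secondary or remote storage

    def read(self, offset, length):
        return self._backing.read(offset, length)   # fault in from backing store

    def write(self, offset, data):
        self._backing.write(offset, data)            # write through, for brevity

ram = PhysicalMemory(4096)
vm = PagedMemory(ram)
vm.write(0, b"hello")
print(vm.read(0, 5))
```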
Engineering Software Suite Validates System Design
NASA Technical Reports Server (NTRS)
2007-01-01
EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDAstar-created models. Initial commercialization for EDAstar included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.
Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G
2000-08-01
The original software FISHMet has been developed and tested to improve the efficiency of diagnosing hereditary diseases caused by chromosome aberrations and to support chromosome mapping by the fluorescent in situ hybridization (FISH) method. The program allows creation and analysis of pseudocolor chromosome images and hybridization signals in the Windows 95 system, allows computer analysis and editing of the results of pseudocolor hybridization in situ, including successive superimposition of initial black-and-white images created using fluorescent filters (blue, green, and red), and editing of each image individually or of a summary pseudocolor image in BMP, TIFF, and JPEG formats. Components of the image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescent microscopes, COHU 4910 and Sanyo VCB-3512P CCD cameras, Miro-Video, Scion LG-3 and VG-5 image capture boards, and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) have been used with good results in the study.
Inselect: Automating the Digitization of Natural History Collections
Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.
2015-01-01
The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208
Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.
Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian
2015-12-16
Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.
Inselect: Automating the Digitization of Natural History Collections.
Hudson, Lawrence N; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W; van der Walt, Stéfan; Smith, Vincent S
2015-01-01
The world's natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect-a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization.
2015-05-01
application, while the simulated PLC software is the open-source ModbusPal Java application. When queried using the Modbus TCP protocol, ModbusPal reports... and programmable logic controller (PLC) components. The HMI and PLC components were instantiated with software and installed in multiple virtual... creating and capturing HMI–PLC network traffic over a 24-h period in the virtualized network and inspect the packets for errors. Test the
Assess/Mitigate Risk through the Use of Computer-Aided Software Engineering (CASE) Tools
NASA Technical Reports Server (NTRS)
Aguilar, Michael L.
2013-01-01
The NASA Engineering and Safety Center (NESC) was requested to perform an independent assessment of the mitigation of the Constellation Program (CxP) Risk 4421 through the use of computer-aided software engineering (CASE) tools. With the cancellation of the CxP, the assessment goals were modified to capture lessons learned and best practices in the use of CASE tools. The assessment goal was to prepare the next program for the use of these CASE tools. The outcome of the assessment is contained in this document.
Earth Global Reference Atmospheric Model (GRAM) Overview and Updates: DOLWG Meeting
NASA Technical Reports Server (NTRS)
White, Patrick
2017-01-01
What is Earth-GRAM (Global Reference Atmospheric Model): Provides monthly mean and standard deviation for any point in atmosphere - Monthly, Geographic, and Altitude Variation; Earth-GRAM is a C++ software package - Currently distributed as Earth-GRAM 2016; Atmospheric variables included: pressure, density, temperature, horizontal and vertical winds, speed of sound, and atmospheric constituents; Used by engineering community because of ability to create dispersions in atmosphere at a rapid runtime - Often embedded in trajectory simulation software; Not a forecast model; Does not readily capture localized atmospheric effects.
General Nonlinear Ferroelectric Model v. Beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Wen; Robbins, Josh
2017-03-14
The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.
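As a rough illustration of what "nonlinear and dependent on loading history" means for domain switching, the toy model below tracks a single scalar remanent polarization that flips sign only when the applied field exceeds a coercive threshold. This is a deliberately simplified sketch, not the model described above; the coercive field, saturation polarization, and susceptibility are made-up parameters of the kind a user would supply.

```python
class ToyFerroelectric:
    """History-dependent polarization with simple domain switching."""

    def __init__(self, coercive_field=1.0e6, saturation_pol=0.3, susceptibility=1.0e-8):
        self.ec = coercive_field          # V/m, field needed to switch domains
        self.ps = saturation_pol          # C/m^2, remanent polarization magnitude
        self.chi = susceptibility         # linear (reversible) response
        self.remanent = +self.ps          # current domain state (history variable)

    def update(self, e_field):
        """Return total polarization for an applied field, updating history."""
        # Domain switching: flip the remanent state if the field opposes it strongly.
        if e_field < -self.ec and self.remanent > 0:
            self.remanent = -self.ps
        elif e_field > self.ec and self.remanent < 0:
            self.remanent = +self.ps
        # Total response = switched (irreversible) part + linear (reversible) part.
        return self.remanent + self.chi * e_field

model = ToyFerroelectric()
for e in [0.0, 5e5, 1.5e6, 0.0, -1.5e6, 0.0]:
    print(f"E = {e:+.1e} V/m -> P = {model.update(e):+.4f} C/m^2")
```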
Mobile application for field data collection and query: Example from wildlife research (Invited)
NASA Astrophysics Data System (ADS)
Bateman, H.; Lindquist, T.; Whitehouse, R.
2013-12-01
Field data collection is used in many scientific disciplines, and effective approaches rely on accurate data collection and recording. We designed a smartphone and tablet application (app) for field-collected data and tested it during a study on wildlife. The objective of our study was to determine the effectiveness of mobile applications in wildlife field research. Student software developers designed applications for mobile devices on the iOS and Android operating systems. Both platforms had similar user interactions via data entry on a touch screen using pre-programmed fields, checkboxes, drop-down menus, and keypad entry. The mobile application included features to ensure collection of all measurements in the field through pop-up messages and could proof entries for valid formats. We used undergraduate student subjects to compare the duration of data recording and data entry, and the frequency of errors between the mobile application and traditional (paper) techniques. We field-tested the mobile application using an existing study on wildlife. From the field, technicians could query a database stored on a mobile device to view histories of previously captured animals. Overall, we found that because the mobile application allowed us to enter data in a digital format in the field, we could eliminate the time-consuming steps of processing handwritten data sheets and double-checking data entries. We estimated that, for a 2-month project, using the mobile application instead of traditional data entry and proofing reduced our total project time by 10%. To our knowledge, this is the first application developed for mobile devices for wildlife users interested in viewing animal capture histories from the field, and it could be developed for use in other areas of field research.
Perez, Susan L; Kravitz, Richard L; Bell, Robert A; Chan, Man Shan; Paterniti, Debora A
2016-08-09
The Internet is valuable for those with limited access to health care services because of its low cost and wealth of information. Our objectives were to investigate how the Internet is used to obtain health-related information and how individuals with differing socioeconomic resources navigate it when presented with a health decision. Study participants were recruited from public settings and social service agencies. Participants listened to one of two clinical scenarios - consistent with influenza or bacterial meningitis - and then conducted an Internet search. Screen-capture video software captured the Internet search. Participant Internet search strategies were analyzed and coded for pre- and post-Internet search guess at diagnosis and information seeking patterns. Individuals who did not have a college degree and were recruited from locations offering social services were categorized as "lower socioeconomic status" (SES); the remainder was categorized as "higher SES." Participants were 78 Internet health information seekers, ranging from 21-35 years of age, who experienced barriers to accessing health care services. Lower-SES individuals were more likely to use an intuitive, rather than deliberative, approach to Internet health information seeking. Lower- and higher-SES participants did not differ in the tendency to make diagnostic guesses based on Internet searches. Lower-SES participants were more likely than their higher-SES counterparts to narrow the scope of their search. Our findings suggest that individuals with different levels of socioeconomic status vary in the heuristics and search patterns they rely upon to direct their searches. The influence and use of credible information in the process of making a decision is associated with education and prior experiences with healthcare services. Those with limited resources may be disadvantaged when turning to the Internet to make a health decision.
Naraghi, Safa; Mutsvangwa, Tinashe; Goliath, René; Rangaka, Molebogeng X; Douglas, Tania S
2018-05-08
The tuberculin skin test is the most widely used method for detecting latent tuberculosis infection in adults and active tuberculosis in children. We present the development of a mobile-phone based screening tool for measuring the tuberculin skin test induration. The tool makes use of a mobile application developed on the Android platform to capture images of an induration, and photogrammetric reconstruction using Agisoft PhotoScan to reconstruct the induration in 3D, followed by 3D measurement of the induration with the aid of functions from the Python programming language. The system enables capture of images by the person being screened for latent tuberculosis infection. Measurement precision was tested using a 3D printed induration. Real-world use of the tool was simulated by application to a set of mock skin indurations, created by a make-up artist, and the performance of the tool was evaluated. The usability of the application was assessed with the aid of a questionnaire completed by participants. The tool was found to measure the 3D printed induration with greater precision than the current ruler and pen method, as indicated by the lower standard deviation produced (0.3 mm versus 1.1 mm in the literature). There was high correlation between manual and algorithm measurement of mock skin indurations. The height of the skin induration and the definition of its margins were found to influence the accuracy of 3D reconstruction and therefore the measurement error, under simulated real-world conditions. Based on assessment of the user experience in capturing images, a simplified user interface would benefit wide-spread implementation. The mobile application shows good agreement with direct measurement. It provides an alternative method for measuring tuberculin skin test indurations and may remove the need for an in-person follow-up visit after test administration, thus improving latent tuberculosis infection screening throughput. Copyright © 2018 Elsevier Ltd. All rights reserved.
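The measurement step, turning a reconstructed 3D surface into an induration diameter, can be approximated with a few lines of linear algebra: fit a base plane to the surrounding skin, take points that rise above it by some height, and report their maximum extent. The sketch below, using NumPy on a synthetic point cloud, is only a plausible reading of that step; the authors' actual Python routines, threshold, and units are not given in this abstract.

```python
import numpy as np

def induration_diameter(points, height_threshold_mm=0.5):
    """Estimate induration diameter from an (N, 3) point cloud in millimetres."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Fit a base plane z = a*x + b*y + c to approximate the surrounding skin.
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    heights = z - A @ coeffs                      # elevation above the base plane

    raised = points[heights > height_threshold_mm][:, :2]
    if len(raised) < 2:
        return 0.0
    # Diameter = largest pairwise distance among raised points (brute force).
    diffs = raised[:, None, :] - raised[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).max())

# Synthetic example: flat skin with a smooth 2 mm-high bump in the middle.
rng = np.random.default_rng(0)
xy = rng.uniform(-20, 20, size=(2000, 2))
bump = 2.0 * np.exp(-((xy ** 2).sum(1)) / (2 * 5.0 ** 2))
cloud = np.column_stack([xy, bump])
print(round(induration_diameter(cloud), 1), "mm")
```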
Pascolutti, Mauro; Campitelli, Marc; Nguyen, Bao; Pham, Ngoc; Gorse, Alain-Dominique; Quinn, Ronald J.
2015-01-01
Natural products are universally recognized to contribute valuable chemical diversity to the design of molecular screening libraries. The analysis undertaken in this work, provides a foundation for the generation of fragment screening libraries that capture the diverse range of molecular recognition building blocks embedded within natural products. Physicochemical properties were used to select fragment-sized natural products from a database of known natural products (Dictionary of Natural Products). PCA analysis was used to illustrate the positioning of the fragment subset within the property space of the non-fragment sized natural products in the dataset. Structural diversity was analysed by three distinct methods: atom function analysis, using pharmacophore fingerprints, atom type analysis, using radial fingerprints, and scaffold analysis. Small pharmacophore triplets, representing the range of chemical features present in natural products that are capable of engaging in molecular interactions with small, contiguous areas of protein binding surfaces, were analysed. We demonstrate that fragment-sized natural products capture more than half of the small pharmacophore triplet diversity observed in non fragment-sized natural product datasets. Atom type analysis using radial fingerprints was represented by a self-organizing map. We examined the structural diversity of non-flat fragment-sized natural product scaffolds, rich in sp3 configured centres. From these results we demonstrate that 2-ring fragment-sized natural products effectively balance the opposing characteristics of minimal complexity and broad structural diversity when compared to the larger, more complex fragment-like natural products. These naturally-derived fragments could be used as the starting point for the generation of a highly diverse library with the scope for further medicinal chemistry elaboration due to their minimal structural complexity. This study highlights the possibility to capture a high proportion of the individual molecular interaction motifs embedded within natural products using a fragment screening library spanning 422 structural clusters and comprised of approximately 2800 natural products. PMID:25902039
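A first computational step toward such a library, selecting fragment-sized molecules by physicochemical properties, is easy to sketch with RDKit. The filter below applies the commonly used "rule of three" for fragments; it is an illustration only, and the exact property cut-offs applied by the authors to the Dictionary of Natural Products may differ. The SMILES strings are arbitrary examples.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski, rdMolDescriptors

def is_fragment_sized(smiles):
    """Rough 'rule of three' fragment filter (illustrative cut-offs)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 300
            and Descriptors.MolLogP(mol) <= 3
            and Lipinski.NumHDonors(mol) <= 3
            and Lipinski.NumHAcceptors(mol) <= 3
            and rdMolDescriptors.CalcNumRotatableBonds(mol) <= 3)

examples = {
    "coumarin": "O=C1C=Cc2ccccc2O1",
    "quinine-like alkaloid (too large)": "COc1ccc2nccc(C(O)C3CC4CCN3CC4C=C)c2c1",
}
for name, smi in examples.items():
    print(name, "->", "fragment-sized" if is_fragment_sized(smi) else "excluded")
```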
VA's Integrated Imaging System on three platforms.
Dayhoff, R E; Maloney, D L; Majurski, W J
1992-01-01
The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.
VA's Integrated Imaging System on three platforms.
Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.
1992-01-01
The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983
Software metrics: The key to quality software on the NCC project
NASA Technical Reports Server (NTRS)
Burns, Patricia J.
1993-01-01
Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.
iMSRC: converting a standard automated microscope into an intelligent screening platform.
Carro, Angel; Perez-Martinez, Manuel; Soriano, Joaquim; Pisano, David G; Megias, Diego
2015-05-27
Microscopy in the context of biomedical research is demanding new tools to automatically detect and capture objects of interest. The few extant packages addressing this need, however, have enjoyed limited uptake due to complexity of use and installation. To overcome these drawbacks, we developed iMSRC, which combines ease of use and installation with high flexibility and enables applications such as rare event detection and high-resolution tissue sample screening, saving time and resources.
Business Intelligence Applied to the ALMA Software Integration Process
NASA Astrophysics Data System (ADS)
Zambrano, M.; Recabarren, C.; González, V.; Hoffstadt, A.; Soto, R.; Shen, T.-C.
2012-09-01
Software quality assurance and planning of an astronomy project is a complex task, especially if it is a distributed collaborative project such as ALMA, where the development centers are spread across the globe. When you execute a software project there is much valuable information about the process itself that you might be able to collect. One of the ways you can receive this input is via an issue tracking system that gathers the problem reports related to software bugs captured during testing, during the integration of the different components or, even worse, problems that occur during production. Usually, little time is spent on analyzing these reports, but with some multidimensional processing you can extract valuable information from them, and it might help with long-term planning and resource allocation. We present an analysis of the information collected at ALMA from a collection of key unbiased indicators. We describe here the extraction, transformation and load process and how the data were processed. The main goal is to assess a software process and get insights from this information.
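The extraction-transformation-load step described above amounts to pulling issue records out of the tracker and aggregating them along a few dimensions such as component, severity, and month. The sketch below shows that kind of aggregation with pandas on a hypothetical CSV export; the column names (component, severity, created) and the file name are assumptions, not the ALMA tracker's actual schema.

```python
import pandas as pd

# Hypothetical export from the issue tracking system.
issues = pd.read_csv("issues_export.csv", parse_dates=["created"])

# Transform: derive a reporting month from the creation timestamp.
issues["month"] = issues["created"].dt.to_period("M")

# Aggregate: problem reports per component per month, split by severity.
report = (issues
          .groupby(["component", "month", "severity"])
          .size()
          .rename("reports")
          .reset_index())

# A simple indicator for planning: components generating the most reports.
top = report.groupby("component")["reports"].sum().sort_values(ascending=False)
print(top.head(10))
```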
NASA Technical Reports Server (NTRS)
Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan
1993-01-01
Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.
Cannell, John; Jovic, Emelyn; Rathjen, Amy; Lane, Kylie; Tyson, Anna M; Callisaya, Michele L; Smith, Stuart T; Ahuja, Kiran Dk; Bird, Marie-Louise
2018-02-01
To compare the efficacy of novel interactive, motion capture-rehabilitation software to usual care stroke rehabilitation on physical function. Randomized controlled clinical trial. Two subacute hospital rehabilitation units in Australia. In all, 73 people less than six months after stroke with reduced mobility and clinician-determined capacity to improve. Both groups received functional retraining and individualized programs for up to an hour, on weekdays for 8-40 sessions (dose matched). For the intervention group, this individualized program used motivating virtual reality rehabilitation and novel gesture-controlled interactive motion capture software. For usual care, the individualized program was delivered in a group class on one unit and by a rehabilitation assistant 1:1 on the other. The primary outcome was standing balance (functional reach). Secondary outcomes were lateral reach, step test, sitting balance, arm function, and walking. Participants (mean 22 days post-stroke) attended a mean of 14 sessions. Both groups improved (mean (95% confidence interval)) on the primary outcome of functional reach (usual care 3.3 (0.6 to 5.9), intervention 4.1 (-3.0 to 5.0) cm) with no difference between groups (P = 0.69) on this or any secondary measure. No differences between the rehabilitation units were seen except in lateral reach (less affected side) (P = 0.04). No adverse events were recorded during therapy. Interactive, motion capture rehabilitation for inpatients post stroke produced functional improvements that were similar to those achieved by usual care stroke rehabilitation, safely delivered by either a physical therapist or a rehabilitation assistant.
Singularity: Scientific containers for mobility of compute.
Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
Singularity: Scientific containers for mobility of compute
Kurtzer, Gregory M.; Bauer, Michael W.
2017-01-01
Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science. PMID:28494014
Smart roadside initiative : system design document.
DOT National Transportation Integrated Search
2015-09-01
This document describes the software design for the Smart Roadside Initiative (SRI) for the delivery of capabilities related to wireless roadside inspections, electronic screening/virtual weigh stations, universal electronic commercial vehicle identi...
Identifying Dyscalculia Symptoms Related to Magnocellular Reasoning Using Smartphones.
Knudsen, Greger Siem; Babic, Ankica
2016-01-01
This paper presents a study that has developed a mobile software application for assisting diagnosis of learning disabilities in mathematics, called dyscalculia, and for measuring correlations between dyscalculia symptoms and magnocellular reasoning. Usually, software aids for dyscalculic individuals are focused on both assisting diagnosis and teaching the material. The software developed in this study, however, maintains a specific focus on the former, and in the process attempts to capture alleged correlations between dyscalculia symptoms and possible underlying causes of the condition. Classification of symptoms is performed by a k-Nearest Neighbor algorithm operating on five parameters that evaluate the user's skills, returning calculated performance in each category as well as the correlation strength between detected symptoms and magnocellular reasoning abilities. Expert evaluations have found the application to be appropriate and productive for its intended purpose, showing that mobile software is a suitable and valuable tool for assisting dyscalculia diagnosis and identifying root causes of developing the condition.
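To make the classification step concrete, the sketch below trains a k-Nearest Neighbor classifier on five skill parameters per user and returns a predicted symptom category for a new assessment. It uses scikit-learn as a stand-in; the feature names, scores, labels, and k value are invented for illustration and are not the application's actual data or model.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Five hypothetical skill parameters per user (e.g., number comparison, counting,
# arithmetic fluency, subitizing, magnocellular/motion-coherence score), scaled 0-1.
X = np.array([
    [0.90, 0.80, 0.85, 0.90, 0.80],   # typical performance
    [0.85, 0.90, 0.80, 0.95, 0.90],
    [0.40, 0.35, 0.30, 0.50, 0.45],   # dyscalculia-like profile
    [0.35, 0.40, 0.25, 0.45, 0.30],
])
y = ["no symptoms", "no symptoms", "symptoms", "symptoms"]

# k-NN with feature scaling; k=3 is an arbitrary illustrative choice.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)

new_user = [[0.45, 0.50, 0.35, 0.55, 0.40]]
print(model.predict(new_user))            # predicted category
print(model.predict_proba(new_user))      # neighbor-vote proportions
```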
Model-Based Development of Automotive Electronic Climate Control Software
NASA Astrophysics Data System (ADS)
Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan
With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on their domain of expertise rather than on writing large amounts of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. A back-to-back testing approach is presented that ensures a flawless and smooth transition from legacy designs to model-based development. The Simulink Report Generator, used to create design documents from the models, is presented along with its use to run the simulation model and capture the results in the test report. Test automation using a model-based development tool that supports the use of a unique set of test cases across several testing levels, together with a test procedure that is independent of the software and hardware platform, is also presented.
Knowledge-based assistance in costing the space station DMS
NASA Technical Reports Server (NTRS)
Henson, Troy; Rone, Kyle
1988-01-01
The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.
Expert system verification and validation study. Delivery 3A and 3B: Trip summaries
NASA Technical Reports Server (NTRS)
French, Scott
1991-01-01
Key results are documented from attending the 4th workshop on verification, validation, and testing. The most interesting part of the workshop was when representatives from the U.S., Japan, and Europe presented surveys of VV&T within their respective regions. Another interesting part focused on current efforts to define industry standards for artificial intelligence and how that might affect approaches to VV&T of expert systems. The next part of the workshop focused on VV&T methods for applying mathematical techniques to the verification of rule bases and on techniques for capturing information relating to the process of developing software. The final part focused on software tools. A summary is also presented of the EPRI conference on 'Methodologies, Tools, and Standards for Cost Effective Reliable Software Verification and Validation.' The conference was divided into discussion sessions on the following issues: development process, automated tools, software reliability, methods, standards, and cost/benefit considerations.
DigiSeis—A software component for digitizing seismic signals using the PC sound card
NASA Astrophysics Data System (ADS)
Amin Khan, Khalid; Akhter, Gulraiz; Ahmad, Zulfiqar
2012-06-01
An innovative software-based approach to develop an inexpensive experimental seismic recorder is presented. This approach requires no hardware as the built-in PC sound card is used for digitization of seismic signals. DigiSeis, an ActiveX component is developed to capture the digitized seismic signals from the sound card and deliver them to applications for processing and display. A seismic recorder application software SeisWave is developed over this component, which provides real-time monitoring and display of seismic events picked by a pair of external geophones. This recorder can be used as an educational aid for conducting seismic experiments. It can also be connected with suitable seismic sensors to record earthquakes. The software application and the ActiveX component are available for download. This component can be used to develop seismic recording applications according to user specific requirements.
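The core trick, treating the sound card as a two-channel analog-to-digital converter for geophone signals, can be reproduced in a few lines on any PC. The sketch below uses the Python sounddevice library as an analogous, cross-platform stand-in; it is not the DigiSeis ActiveX component, and the sample rate, duration, and channel mapping are arbitrary example values.

```python
import numpy as np
import sounddevice as sd

fs = 8000          # samples per second; seismic signals need far less than audio rates
duration = 10      # seconds of recording

# Record two channels from the sound card's line-in (one geophone per channel).
samples = sd.rec(int(duration * fs), samplerate=fs, channels=2, dtype="float32")
sd.wait()          # block until the recording is finished

# Simple event pick: report the time of the largest amplitude on each channel.
for ch in range(samples.shape[1]):
    trace = samples[:, ch]
    t_peak = np.argmax(np.abs(trace)) / fs
    print(f"channel {ch}: peak amplitude {np.abs(trace).max():.4f} at {t_peak:.2f} s")
```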
Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering
NASA Astrophysics Data System (ADS)
Atkinson, Colin
The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods such as the Booch method and OMT supported a number of different diagram types (e.g. structural, behavioral, operational) and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages such as the UML and SysML, are also oriented towards supporting different views (i.e. diagram types) each able to portray a different facets of a system's architecture. More recently, so called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.
NASA Astrophysics Data System (ADS)
Vidal, Borja; Lafuente, Juan A.
2016-03-01
A simple technique to avoid color limitations in image capture systems based on chroma key video composition using retroreflective screens and light-emitting diodes (LED) rings is proposed and demonstrated. The combination of an asynchronous temporal modulation onto the background illumination and simple image processing removes the usual restrictions on foreground colors in the scene. The technique removes technical constraints in stage composition, allowing its design to be purely based on artistic grounds. Since it only requires adding a very simple electronic circuit to widely used chroma keying hardware based on retroreflective screens, the technique is easily applicable to TV and filming studios.
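The image-processing half of the technique, deciding per pixel whether it belongs to the retroreflective background by comparing frames captured with the LED ring on and off, can be sketched directly with NumPy. This is an illustrative reconstruction of the idea under stated assumptions (already-aligned frames, a simple brightness-difference threshold), not the authors' exact algorithm.

```python
import numpy as np

def composite(frame_led_on, frame_led_off, new_background, threshold=40):
    """Keep foreground pixels; replace modulated-background pixels.

    All images are uint8 arrays of shape (H, W, 3), assumed temporally aligned.
    Background pixels brighten strongly when the LED ring is on; foreground
    pixels (of any colour) change little, so no colour is off-limits in the scene.
    """
    on = frame_led_on.astype(np.int16)
    off = frame_led_off.astype(np.int16)
    brightness_change = np.abs(on - off).mean(axis=2)   # per-pixel change
    background_mask = brightness_change > threshold     # retroreflective screen
    out = frame_led_off.copy()
    out[background_mask] = new_background[background_mask]
    return out

# Tiny synthetic example: 2x2 frames where the left column is the screen.
on_frame = np.array([[[200, 200, 200], [30, 90, 50]],
                     [[210, 205, 200], [120, 40, 60]]], dtype=np.uint8)
off_frame = np.array([[[60, 60, 60], [28, 88, 52]],
                      [[55, 58, 62], [118, 42, 58]]], dtype=np.uint8)
bg = np.zeros((2, 2, 3), dtype=np.uint8)
print(composite(on_frame, off_frame, bg))
```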
Lam, Christopher T.; Krieger, Marlee S.; Gallagher, Jennifer E.; Asma, Betsy; Muasher, Lisa C.; Schmitt, John W.; Ramanujam, Nimmi
2015-01-01
Introduction Current guidelines by WHO for cervical cancer screening in low- and middle-income countries involve visual inspection with acetic acid (VIA) of the cervix, followed by treatment during the same visit or a subsequent visit with cryotherapy if a suspicious lesion is found. Implementation of these guidelines is hampered by a lack of trained health workers, reliable technology, and access to screening facilities. A low cost ultra-portable Point of Care Tampon based digital colposcope (POCkeT Colposcope) for use at the community level setting, which has the unique form factor of a tampon, can be inserted into the vagina to capture images of the cervix, which are on par with those of a state-of-the-art colposcope, at a fraction of the cost. A repository of images is to be compiled that can be used to empower front line workers to become more effective through virtual dynamic training. By task shifting to the community setting, this technology could potentially provide significantly greater cervical screening access to where the most vulnerable women live. The POCkeT Colposcope's concentric LED ring provides comparable white and green field illumination at a fraction of the electrical power required in commercial colposcopes. Standard optical imaging targets were used to evaluate the POCkeT Colposcope against a state-of-the-art digital colposcope and other VIAM technologies. Results Our POCkeT Colposcope has comparable resolving power, color reproduction accuracy, minimal lens distortion, and illumination when compared to commercially available colposcopes. In vitro and pilot in vivo imaging results are promising, with our POCkeT Colposcope capturing images of comparable quality to commercial systems. Conclusion The POCkeT Colposcope is capable of capturing images suitable for cervical lesion analysis. Our portable low cost system could potentially increase access to cervical cancer screening in limited resource settings through task shifting to community health workers. PMID:26332673
A Test Bed for Detection of Botnet Infections in Low Data Rate Tactical Networks
2009-09-01
perimeter, their effectiveness in preventing further proliferation within a LAN is almost non-existent. Conficker disables antivirus and firewall software... enable: Yes; Disable “strict” capture filtering: no; Fencelist location: /etc/fencelist.txt (IP addresses and CIDR blocks); Enable Fencelist filtering
...written the portions of the offline software and simulations that involve the electronics and calibrations; responsible for the pieces of the detector calibration and simulation that are connected to the electronics that process and capture the signal produced by Cerenkov light in the photomultiplier tubes.
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting life cycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
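As a concrete picture of what Markov process modeling of a forecasting mental model looks like, the toy example below defines a transition matrix over a handful of hypothetical forecasting activities and simulates one engineer's walk through them. The states, probabilities, and termination rule are invented for illustration; they are not the models elicited in the JPL study.

```python
import numpy as np

# Hypothetical forecasting activities (Markov states).
states = ["understand task", "decompose system", "estimate size",
          "estimate effort", "review/adjust", "done"]

# Row i gives transition probabilities out of state i (rows sum to 1).
P = np.array([
    [0.0, 0.7, 0.2, 0.1, 0.0, 0.0],   # understand task
    [0.0, 0.1, 0.5, 0.3, 0.1, 0.0],   # decompose system
    [0.0, 0.0, 0.1, 0.7, 0.2, 0.0],   # estimate size
    [0.0, 0.0, 0.0, 0.1, 0.7, 0.2],   # estimate effort
    [0.0, 0.1, 0.1, 0.2, 0.1, 0.5],   # review/adjust
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],   # done (absorbing state)
])

rng = np.random.default_rng(1)
state = 0
trace = [states[state]]
while states[state] != "done":
    state = rng.choice(len(states), p=P[state])
    trace.append(states[state])
print(" -> ".join(trace))
```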
Farahani, Navid; Liu, Zheng; Jutt, Dylan; Fine, Jeffrey L
2017-10-01
Pathologists' computer-assisted diagnosis (pCAD) is a proposed framework for alleviating challenges through the automation of their routine sign-out work. Currently, hypothetical pCAD is based on a triad of advanced image analysis, deep integration with heterogeneous information systems, and a concrete understanding of traditional pathology workflow. Prototyping is an established method for designing complex new computer systems such as pCAD. The objective of this work was to describe, in detail, a prototype of pCAD for the sign-out of a breast cancer specimen. Deidentified glass slides and data from breast cancer specimens were used. Slides were digitized into whole-slide images with an Aperio ScanScope XT, and screen captures were created by using vendor-provided software. The advanced workflow prototype was constructed by using PowerPoint software. We modeled an interactive, computer-assisted workflow: pCAD previews whole-slide images in the context of integrated, disparate data and predefined diagnostic tasks and subtasks. Relevant regions of interest (ROIs) would be automatically identified and triaged by the computer. A pathologist's sign-out work would consist of an interactive review of important ROIs, driven by required diagnostic tasks. The interactive session would generate a pathology report automatically. Using animations and real ROIs, the pCAD prototype demonstrates the hypothetical sign-out in a stepwise fashion, illustrating various interactions and explaining how steps can be automated. The file is publicly available and should be widely compatible. This mock-up is intended to spur discussion and to help usher in the next era of digitization for pathologists by providing desperately needed and long-awaited automation.
Chakravorty, Rajib; Rawlinson, David; Zhang, Alan; Markham, John; Dowling, Mark R; Wellard, Cameron; Zhou, Jie H S; Hodgkin, Philip D
2014-01-01
Interest in cell heterogeneity and differentiation has recently led to increased use of time-lapse microscopy. Previous studies have shown that cell fate may be determined well in advance of the event. We used a mixture of automation and manual review of time-lapse live cell imaging to track the positions, contours, divisions, deaths and lineage of 44 B-lymphocyte founders and their 631 progeny in vitro over a period of 108 hours. Using this data to train a Support Vector Machine classifier, we were retrospectively able to predict the fates of individual lymphocytes with more than 90% accuracy, using only time-lapse imaging captured prior to mitosis or death of 90% of all cells. The motivation for this paper is to explore the impact of labour-efficient assistive software tools that allow larger and more ambitious live-cell time-lapse microscopy studies. After training on this data, we show that machine learning methods can be used for realtime prediction of individual cell fates. These techniques could lead to realtime cell culture segregation for purposes such as phenotype screening. We were able to produce a large volume of data with less effort than previously reported, due to the image processing, computer vision, tracking and human-computer interaction tools used. We describe the workflow of the software-assisted experiments and the graphical interfaces that were needed. To validate our results we used our methods to reproduce a variety of published data about lymphocyte populations and behaviour. We also make all our data publicly available, including a large quantity of lymphocyte spatio-temporal dynamics and related lineage information.
Gray, Aaron D; Willis, Brad W; Skubic, Marjorie; Huo, Zhiyu; Razu, Swithin; Sherman, Seth L; Guess, Trent M; Jahandar, Amirhossein; Gulbrandsen, Trevor R; Miller, Scott; Siesener, Nathan J
Noncontact anterior cruciate ligament (ACL) injury in adolescent female athletes is an increasing problem. The knee-ankle separation ratio (KASR), calculated at initial contact (IC) and peak flexion (PF) during the drop vertical jump (DVJ), is a measure of dynamic knee valgus. The Microsoft Kinect V2 has shown promise as a reliable and valid marker-less motion capture device. The Kinect V2 will demonstrate good to excellent correlation between KASR results at IC and PF during the DVJ, as compared with a "gold standard" Vicon motion analysis system. Descriptive laboratory study. Level 2. Thirty-eight healthy volunteer subjects (20 male, 18 female) performed 5 DVJ trials, simultaneously measured by a Vicon MX-T40S system, 2 AMTI force platforms, and a Kinect V2 with customized software. A total of 190 jumps were completed. The KASR was calculated at IC and PF during the DVJ. The intraclass correlation coefficient (ICC) assessed the degree of KASR agreement between the Kinect and Vicon systems. The ICCs of the Kinect V2 and Vicon KASR at IC and PF were 0.84 and 0.95, respectively, showing excellent agreement between the 2 measures. The Kinect V2 successfully identified the KASR at PF and IC frames in 182 of 190 trials, demonstrating 95.8% reliability. The Kinect V2 demonstrated excellent ICC of the KASR at IC and PF during the DVJ when compared with the Vicon system. A customized Kinect V2 software program demonstrated good reliability in identifying the KASR at IC and PF during the DVJ. Reliable, valid, inexpensive, and efficient screening tools may improve the accessibility of motion analysis assessment of adolescent female athletes.
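The key quantity here, the knee-ankle separation ratio, is simply the frontal-plane distance between the knees divided by the distance between the ankles at a given frame. The sketch below computes it from per-frame joint positions such as those the Kinect V2 or Vicon would supply; the array layout and the example initial-contact and peak-flexion frame indices are assumptions for illustration.

```python
import numpy as np

def kasr(knee_left, knee_right, ankle_left, ankle_right):
    """Knee-ankle separation ratio per frame from (n_frames, 3) joint positions."""
    knee_sep = np.linalg.norm(knee_left - knee_right, axis=1)
    ankle_sep = np.linalg.norm(ankle_left - ankle_right, axis=1)
    return knee_sep / ankle_sep

# Synthetic 3-frame example (units: metres); real data would come from the
# motion-capture stream, with IC and PF frames identified from force plates
# or the vertical trajectory of the pelvis.
kl = np.array([[0.10, 0.5, 0.0], [0.08, 0.4, 0.0], [0.07, 0.3, 0.0]])
kr = np.array([[-0.10, 0.5, 0.0], [-0.08, 0.4, 0.0], [-0.05, 0.3, 0.0]])
al = np.array([[0.12, 0.1, 0.0], [0.12, 0.1, 0.0], [0.12, 0.1, 0.0]])
ar = np.array([[-0.12, 0.1, 0.0], [-0.12, 0.1, 0.0], [-0.12, 0.1, 0.0]])

ratios = kasr(kl, kr, al, ar)
ic_frame, pf_frame = 0, 2          # hypothetical indices for IC and PF
print(f"KASR at IC: {ratios[ic_frame]:.2f}, at PF: {ratios[pf_frame]:.2f}")
```

A lower ratio at peak flexion than at initial contact indicates the knees collapsing inward relative to the ankles, the dynamic valgus pattern the screening is designed to flag.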
NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images
Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.
2007-01-01
Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152
Adverse Outcome Pathways – Tailoring Development to Support Use
Adverse Outcome Pathways (AOPs) represent an ideal framework for connecting high-throughput screening (HTS) data and other toxicity testing results to adverse outcomes of regulatory importance. The AOP Knowledgebase (AOP-KB) captures AOP information to facilitate the development,...
Method and system for rendering and interacting with an adaptable computing environment
Osbourn, Gordon Cecil [Albuquerque, NM]; Bouchard, Ann Marie [Albuquerque, NM]
2012-06-12
An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.
ScreenMasker: An Open-source Gaze-contingent Screen Masking Environment.
Orlov, Pavel A; Bednarik, Roman
2016-09-01
The moving-window paradigm, based on gaze-contingent techniques, is traditionally used in studies of the visual perceptual span. There is a strong demand for new environments that can be employed by non-technical researchers. We have developed an easy-to-use tool with a graphical user interface (GUI) allowing both execution and control of visual gaze-contingency studies. This work describes ScreenMasker, an environment for creating gaze-contingent textured displays used together with stimulus-presentation software. ScreenMasker has an architecture that meets the requirements of low-latency real-time eye-movement experiments. It also provides a variety of settings and functions. Effective rendering times and performance are ensured by means of GPU processing under CUDA technology. Performance tests show ScreenMasker's latency to be 67-74 ms on a typical office computer, and about 25-28 ms on a high-end 144-Hz screen. ScreenMasker is an open-source system distributed under the GNU Lesser General Public License and is available at https://github.com/PaulOrlov/ScreenMasker.
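ScreenMasker itself renders on the GPU via CUDA; the sketch below only illustrates the gaze-contingent moving-window idea in plain Python. The frame size, window radius, and placeholder gaze samples are assumptions, not details of the tool.

    import numpy as np

    def apply_gaze_mask(frame, gaze_xy, window_radius=60):
        """Return a masked copy of `frame` in which everything outside a
        square window centred on the current gaze position is blanked,
        i.e., a software moving-window display."""
        masked = np.zeros_like(frame)
        h, w = frame.shape[:2]
        gx, gy = gaze_xy
        x0, x1 = max(0, gx - window_radius), min(w, gx + window_radius)
        y0, y1 = max(0, gy - window_radius), min(h, gy + window_radius)
        masked[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
        return masked

    # Placeholder gaze samples (pixels); a real study would poll the eye tracker.
    frame = np.random.randint(0, 255, (768, 1024), dtype=np.uint8)
    for gaze in [(100, 200), (512, 384), (900, 700)]:
        stimulus = apply_gaze_mask(frame, gaze)
        # hand `stimulus` to the presentation software for display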
Autonomous Scheduling Requirements for Agile Cubesat Constellations in Earth Observation
NASA Astrophysics Data System (ADS)
Nag, S.; Li, A. S. X.; Kumar, S.
2017-12-01
Distributed Space Missions, such as formation flight and constellations, are being recognized as important Earth Observation solutions to increase measurement samples over space and time. Cubesats are increasing in size (27U, 40 kg) with increasing capabilities to host imager payloads. Given the precise attitude control systems emerging commercially, Cubesats now have the ability to slew and capture images on short notice. Prior literature has demonstrated a modular framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying orientation of agile Cubesats in a constellation such that they maximize the number of observed images, within the constraints of hardware specs. Schedule optimization is performed on the ground autonomously, using dynamic programming with two levels of heuristics, verified and improved upon using mixed integer linear programming. Our algorithm-in-the-loop simulation, applied to Landsat's use case, captured up to 161% more Landsat images than nadir-pointing sensors with the same field of view, on a 2-satellite constellation over a 12-hour simulation. In this paper, we will derive the requirements for the above algorithm to run onboard small satellites such that the constellation can make time-sensitive decisions to slew and capture images autonomously, without ground support. We will apply the above autonomous algorithm to a time-critical use case - monitoring of precipitation and subsequent effects on floods, landslides and soil moisture, as quantified by the NASA Unified Weather Research and Forecasting Model. Since the latency between these event occurrences is quite low, they make a strong case for autonomous decisions among satellites in a constellation. The algorithm can be implemented in the Plan Execution Interchange Language - NASA's open source technology for automation, used to operate the International Space Station and LADEE's in-flight software - enabling a controller-in-the-loop demonstration. The autonomy software can then be integrated with NASA's open source Core Flight Software, ported onto a Raspberry Pi 3 for a software-in-the-loop demonstration. Future use cases can be time-critical events such as cloud movement, storms or other disasters, and in conjunction with other platforms in a Sensor Web.
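The paper's scheduler uses dynamic programming with heuristics checked against mixed-integer linear programming; purely as a simplified sketch of the kind of slew-constrained decision involved, the greedy selection below keeps a candidate image only if the required pointing change is reachable in the available time. The candidate data, slew model, and rate limit are invented.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        time_s: float      # when the target is visible
        angle_deg: float   # required off-nadir pointing angle
        value: float       # science value of the image

    def greedy_schedule(candidates, max_slew_rate_deg_s=1.0):
        """Pick images in time order, keeping one only if the spacecraft
        can slew from the previously scheduled attitude in time."""
        schedule = []
        last_t, last_angle = None, 0.0
        for c in sorted(candidates, key=lambda c: c.time_s):
            if last_t is None:
                feasible = True
            else:
                needed = abs(c.angle_deg - last_angle)
                available = (c.time_s - last_t) * max_slew_rate_deg_s
                feasible = needed <= available
            if feasible:
                schedule.append(c)
                last_t, last_angle = c.time_s, c.angle_deg
        return schedule

    cands = [Candidate(0, 0, 1.0), Candidate(20, 35, 2.0),
             Candidate(40, 10, 1.5), Candidate(45, -30, 3.0)]
    plan = greedy_schedule(cands)
    print([c.time_s for c in plan])   # [0, 40]: the 35-degree slew by t=20 is not reachable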
Computational materials chemistry for carbon capture using porous materials
NASA Astrophysics Data System (ADS)
Sharma, Abhishek; Huang, Runhong; Malani, Ateeque; Babarao, Ravichandar
2017-11-01
Control over carbon dioxide (CO2) release is extremely important to decrease its hazardous effects on the environment such as global warming, ocean acidification, etc. For CO2 capture and storage at industrial point sources, nanoporous materials offer an energetically viable and economically feasible approach compared to chemisorption in amines. There is a growing need to design and synthesize new nanoporous materials with enhanced capability for carbon capture. Computational materials chemistry offers tools to screen and design cost-effective materials for CO2 separation and storage, and it is less time consuming compared to trial and error experimental synthesis. It also provides a guide to synthesize new materials with better properties for real world applications. In this review, we briefly highlight the various carbon capture technologies and the need of computational materials design for carbon capture. This review discusses the commonly used computational chemistry-based simulation methods for structural characterization and prediction of thermodynamic properties of adsorbed gases in porous materials. Finally, simulation studies reported on various potential porous materials, such as zeolites, porous carbon, metal organic frameworks (MOFs) and covalent organic frameworks (COFs), for CO2 capture are discussed.
Chung, Hae-Sun; Hahm, Chorong; Lee, Miae
2014-09-01
The clinical performance of three human papillomavirus (HPV) DNA commercial assays for cervical cancer screening was evaluated; the AdvanSure HPV Screening Real-Time PCR (AdvanSure PCR; LG Life Sciences) that was developed recently for the detection of both high-risk and low-risk genotypes, the Abbott RealTime High-Risk HPV Test (Abbott PCR; Abbott Molecular) and the Hybrid Capture High-Risk HPV DNA test (HC2; Qiagen). The three different HPV DNA tests were compared using cytology samples obtained from 619 women who underwent routine cervical cancer screening. The gold-standard assay was histopathological confirmation of cervical intraepithelial neoplasia of grade 2 or worse. The clinical sensitivities of the AdvanSure PCR, the Abbott PCR and the HC2 for the detection of cervical intraepithelial neoplasia of grade 2 or worse were 95.5%, 95.5% and 100%, respectively, while the clinical specificities were 61.6%, 86.4% and 83.3%, respectively. There were no significant differences in the clinical sensitivities of the Abbott PCR and the AdvanSure PCR compared to the HC2. The clinical specificities of the Abbott PCR and the AdvanSure PCR for the detection of HPV types 16/18 were 97.8% and 98.5%, respectively. For cervical cancer screening, all three tests showed relatively good clinical sensitivities, but the AdvanSure PCR had lower clinical specificity than the Abbott PCR and the HC2. The AdvanSure PCR and the Abbott PCR assays have the advantage of being automated and the ability to distinguish between HPV types 16/18 and other HPV types. The two real-time PCR assays could be useful tools in HPV testing for cervical cancer screening. Copyright © 2014 Elsevier B.V. All rights reserved.
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on a disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
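The real system drives 4Sight instances over a network; the sketch below only shows the generic controller/worker pattern using Python's standard multiprocessing pool, with a stand-in processing function in place of the proprietary wavefront analysis.

    from multiprocessing import Pool
    import numpy as np

    def process_frame(frame):
        """Stand-in for the per-image wavefront analysis: remove the mean
        (piston) and return the RMS of the residual."""
        residual = frame - frame.mean()
        return float(np.sqrt((residual ** 2).mean()))

    def collate(results):
        """Stand-in for the final combination step done by one worker."""
        return sum(results) / len(results)

    if __name__ == "__main__":
        frames = [np.random.rand(256, 256) for _ in range(100)]  # captured frames
        with Pool() as pool:                 # one worker per local CPU core
            per_frame = pool.map(process_frame, frames)
        print("mean residual RMS:", collate(per_frame))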
PcapDB: Search Optimized Packet Capture, Version 0.1.0.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrell, Paul; Steinfadt, Shannon
PcapDB is a packet capture system designed to optimize the captured data for fast search in the typical (network incident response) use case. The technology involved in this software has been submitted via the IDEAS system and has been filed as a provisional patent. It includes the following primary components: capture: The capture component utilizes existing capture libraries to retrieve packets from network interfaces. Once retrieved, the packets are passed to additional threads for sorting into flows and indexing. The sorted flows and indexes are passed to other threads so that they can be written to disk. These components are written in the C programming language. search: The search components provide a means to find relevant flows and the associated packets. A search query is parsed and represented as a search tree. Various search commands, written in C, are then used to resolve this tree into a set of search results. The tree generation and search execution management components are written in Python. interface: The PcapDB web interface is written in Python on the Django framework. It provides a series of pages, APIs, and asynchronous tasks that allow the user to manage the capture system, perform searches, and retrieve results. Web page components are written in HTML, CSS, and JavaScript.
Video capture virtual reality as a flexible and effective rehabilitation tool
Weiss, Patrice L; Rand, Debbie; Katz, Noomi; Kizony, Rachel
2004-01-01
Video capture virtual reality (VR) uses a video camera and software to track movement in a single plane without the need to place markers on specific bodily locations. The user's image is thereby embedded within a simulated environment such that it is possible to interact with animated graphics in a completely natural manner. Although this technology first became available more than 25 years ago, it is only within the past five years that it has been applied in rehabilitation. The objective of this article is to describe the way this technology works, to review its assets relative to other VR platforms, and to provide an overview of some of the major studies that have evaluated the use of video capture technologies for rehabilitation. PMID:15679949
Using timed event sequential data in nursing research.
Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony
2015-01-01
Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
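The study collected observations with dedicated software on handheld computers; purely to illustrate the data structure described, the sketch below represents timed events as (start, end, behavior) records and derives frequency, duration, and behavior-to-behavior transitions. The example behaviors and times are invented.

    from collections import defaultdict

    # Timed event records: (start_s, end_s, behavior) within one observation period.
    events = [
        (0.0, 12.5, "standing assistance"),
        (12.5, 30.0, "documentation"),
        (30.0, 41.0, "standing assistance"),
        (41.0, 55.0, "patient education"),
    ]

    frequency = defaultdict(int)
    duration = defaultdict(float)
    for start, end, behavior in events:
        frequency[behavior] += 1
        duration[behavior] += end - start

    for behavior in frequency:
        print(f"{behavior}: n={frequency[behavior]}, total={duration[behavior]:.1f}s")

    # The ordering of `events` preserves the sequence, so transitions
    # (which behavior follows which) can be tabulated as well.
    labels = [b for _, _, b in events]
    transitions = list(zip(labels, labels[1:]))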
Ross, Abigail M; White, Erina; Powell, Daniel; Nelson, Sally; Horowitz, Lisa; Wharff, Elizabeth
2016-03-01
To describe opinions about suicide risk screening in a pediatric medical inpatient sample. As part of a larger instrument validation study, 200 pediatric medical inpatients (ages 10-21 years) were screened for suicide risk. Participants completed demographic self-report forms and were asked their opinions about suicide risk screening. Patient responses were recorded verbatim by trained research social workers. Qualitative data was analyzed using thematic analysis. The majority of adolescents who participated had not been previously asked about suicide (N = 101; 62.3%) and were supportive of suicide risk screening (81.0%). Five salient themes emerged from the qualitative analysis of patient opinions: prevention, elevated risk, emotional benefits, provider responsibility, and lack of harm in asking. The majority of youth screened for suicide risk on medical inpatient units were supportive of suicide risk screening. Opinion data have the potential to inform screening practices and assure clinicians that suicide risk screening will be acceptable to pediatric patients and their parents. Given the lack of screening in these patients' past experiences, the medical setting is a unique opportunity to capture youth at risk for suicide. Copyright © 2016 Elsevier Inc. All rights reserved.
Pointright: a system to redirect mouse and keyboard control among multiple machines
Johanson, Bradley E [Palo Alto, CA]; Winograd, Terry A [Stanford, CA]; Hutchins, Gregory M [Mountain View, CA]
2008-09-30
The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.
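PointRight is a production system; the sketch below only illustrates the core idea of re-mapping a pointer that crosses a screen edge onto an adjacent display, using an invented screen-topology table and equal-width screens.

    # Invented topology: which screen lies past each edge of each screen.
    ADJACENT = {
        ("wall-left", "right"): "wall-center",
        ("wall-center", "left"): "wall-left",
        ("wall-center", "right"): "wall-right",
        ("wall-right", "left"): "wall-center",
    }
    WIDTH = 1920  # assume equal-width screens for simplicity

    def move_pointer(screen, x, y, dx, dy):
        """Apply a pointer delta; if the cursor crosses a vertical edge,
        hand control to the adjacent screen (and, implicitly, its machine)."""
        x, y = x + dx, y + dy
        if x < 0 and (screen, "left") in ADJACENT:
            screen, x = ADJACENT[(screen, "left")], x + WIDTH
        elif x >= WIDTH and (screen, "right") in ADJACENT:
            screen, x = ADJACENT[(screen, "right")], x - WIDTH
        return screen, max(0, min(WIDTH - 1, x)), y

    print(move_pointer("wall-center", 1915, 500, dx=20, dy=0))
    # ('wall-right', 15, 500) -- keyboard/mouse events now go to that machine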
Bertollo, David N; Alexander, Mary Jane; Shinn, Marybeth; Aybar, Jalila B
2007-06-01
This column describes the nonproprietary software Talker, used to adapt screening instruments to audio computer-assisted self-interviewing (ACASI) systems for low-literacy populations and other populations. Talker supports ease of programming, multiple languages, on-site scoring, and the ability to update a central research database. Key features include highly readable text display, audio presentation of questions and audio prompting of answers, and optional touch screen input. The scripting language for adapting instruments is briefly described as well as two studies in which respondents provided positive feedback on its use.
Cosmic Deuterium and Social Networking Software
NASA Astrophysics Data System (ADS)
Pasachoff, J. M.; Suer, T.-A.; Lubowich, D. A.; Glaisyer, T.
2006-08-01
For the education of newcomers to a scientific field and for the convenience of students and workers in the field, it is helpful to have all the basic scientific papers gathered. For the study of deuterium in the Universe, in 2004-5 we set up http://www.cosmicdeuterium.info with clickable links to all the historic and basic papers in the field and to many of the current papers. Cosmic deuterium is especially important because all deuterium in the Universe was formed in the epoch of nucleosynthesis in the first 1000 seconds after the Big Bang, so study of its relative abundance (D:H~1:100,000) gives us information about those first minutes of the Universe's life. Thus the understanding of cosmic deuterium is one of the pillars of modern cosmology, joining the cosmic expansion, the 3 degree cosmic background radiation, and the ripples in that background radiation. Studies of deuterium are also important for understanding Galactic chemical evolution, astrochemistry, interstellar processes, and planetary formation. Some papers had to be scanned, while others are available at the Astrophysical Data System, adswww.harvard.edu, or at publishers' websites. By 2006, social networking software (http://tinyurl.com/zx5hk) had advanced with popular sites like facebook.com and MySpace.com; the Astrophysical Data System had even set up MyADS. Social tagging software sites like http://del.icio.us have made it easy to share sets of links to papers already available online. We have set up http://del.icio.us/deuterium to provide links to many of the papers on cosmicdeuterium.info, furthering previous del.icio.us work on /eclipses and /plutocharon. It is easy for the site owner to add links to a del.icio.us site; it merely takes clicking on a button on the browser screen once the site is opened and the desired link is viewed in a browser. Categorizing different topics by keywords allows subsets to be easily displayed. The opportunity to expose knowledge and build an ecosystem of web pages that use the functionality of a facebook-type application to capture knowledge collaboratively is considerable. Setting up such a system would marry one of the youngest isotopes with the latest software technologies.
Non-Target Screening of Veterinary Drugs Using Tandem Mass Spectrometry on SmartMass
NASA Astrophysics Data System (ADS)
Xia, Bing; Liu, Xin; Gu, Yu-Cheng; Zhang, Zhao-Hui; Wang, Hai-Yan; Ding, Li-Sheng; Zhou, Yan
2013-05-01
Non-target screening of veterinary drugs using tandem mass spectrometric data was performed on the SmartMass platform. This newly developed software uses the characteristic fragmentation patterns (CFP) to identify chemicals, especially those containing particular substructures. A mixture of 17 sulfonamides was separated by ultra performance liquid chromatography (UPLC), and SmartMass was used to process the tandem mass spectrometry (MS/MS) data acquired on an Orbitrap mass spectrometer. The data were automatically extracted, and each sulfonamide was recognized and analyzed with a prebuilt analysis rule. By using this software, over 98% of the false candidate structures were eliminated, and all the correct structures were found within the top 10 of the ranking lists. Furthermore, SmartMass could also be used to identify slightly modified contraband drugs and metabolites with simple prebuilt rules.
iMSRC: converting a standard automated microscope into an intelligent screening platform
Carro, Angel; Perez-Martinez, Manuel; Soriano, Joaquim; Pisano, David G.; Megias, Diego
2015-01-01
Microscopy in the context of biomedical research is demanding new tools to automatically detect and capture objects of interest. The few extant packages addressing this need, however, have enjoyed limited uptake due to complexity of use and installation. To overcome these drawbacks, we developed iMSRC, which combines ease of use and installation with high flexibility and enables applications such as rare event detection and high-resolution tissue sample screening, saving time and resources. PMID:26015081
BGS·SIGMA - Digital mapping at the British Geological Survey
NASA Astrophysics Data System (ADS)
Smith, Nichola; Lawrie, Ken
2017-04-01
Geological mapping methods have evolved significantly over recent decades and this has included the transition to digital field data capture. BGS has been developing methodologies and technologies for this since 2001, and has now reached a stage where our custom built data capture and map compilation system (BGS·SIGMAv2015) is the default toolkit, within BGS, for bedrock and superficial mapping across the UK and overseas. In addition, BGS scientists also use the system for other data acquisition projects, such as landslide assessment, geodiversity audits and building stone studies. BGS·SIGMAv2015 is an integrated toolkit which enables assembly, interrogation and visualisation of existing geological information; capture of, and integration with, new data and geological interpretations; and delivery of digital products and services. From its early days as a system which used PocketGIS run on Husky Fex21 hardware, to the present day system, developed using ESRI's ArcGIS built on top of a bespoke relational data model, running on ruggedized tablet PCs with integrated GPS units, the system has evolved into a comprehensive system for digital geological data capture, mapping and compilation. The benefits, for BGS, of digital data capture are huge. Not only are the data being gathered in a standardised format, with the use of dictionaries to ensure consistency, but project teams can start building their digital geological map in the field by merging data collected by colleagues, building line-work and polygons, and subsequently identifying areas for further investigation. This digital data can then be easily incorporated into corporate databases and used in 3D modelling and visualisation software once back in the office. BGS is now at a stage where the free external release of our digital mapping system is in demand across the world, with 3000 licences being issued to date, and is successfully being used by other geological surveys, universities and exploration companies. However, we recognise that in some areas usage is restricted due to access to the software platform used by the system. To combat this, and to try and facilitate access to the system for all, BGS is now developing the BGS·SIGMA companion app. This will be developed for smart phones and tablets, and as well as enabling users of open source software to access to the system it will also facilitate rapid point based mapping, something BGS geologists are increasingly required to carry out. Alongside this, BGS is also developing a set of modular, re-usable tools for data capture, storage, manipulation and delivery that will help organisations, which are just starting their journey into the digital world, to learn from our experiences and implement a system that is already fully integrated and can be customised for specific user requirements.
John Weisberg; Jay Beaman
2001-01-01
Progress in the options for survey data collection and its effective processing continues. This paper focuses on the rapidly evolving capabilities of handheld computers, and their effective exploitation including links to data captured from scanned questionnaires (OMR and barcodes). The paper describes events in Parks Canada that led to the creation of survey software...
Profiling a Mind Map User: A Descriptive Appraisal
ERIC Educational Resources Information Center
Tucker, Joanne M.; Armstrong, Gary R.; Massad, Victor J.
2010-01-01
Whether manually or through the use of software, a non-linear information organization framework known as mind mapping offers an alternative method for capturing thoughts, ideas and information to linear thinking modes such as outlining. Mind mapping is brainstorming, organizing, and problem solving. This paper examines mind mapping techniques,…
78 FR 16392 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-14
... Broadcast messaging system to a self-managed software with almost immediate dissemination; and (9) upgraded... exporting used self-propelled vehicles. The requirement to file in the AES for all used self- propelled... the AES are for used self-propelled vehicles. The Census Bureau does not capture statistics for used...
Social and Collaborative Interactions for Educational Content Enrichment in ULEs
ERIC Educational Resources Information Center
Araújo, Rafael D.; Brant-Ribeiro, Taffarel; Mendonça, Igor E. S.; Mendes, Miller M.; Dorça, Fabiano A.; Cattelan, Renan G.
2017-01-01
This article presents a social and collaborative model for content enrichment in Ubiquitous Learning Environments. Designed as a loosely coupled software architecture, the proposed model was implemented and integrated into the Classroom eXperience, a multimedia capture platform for educational environments. After automatically recording a lecture…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-28
... bring together experts from diverse backgrounds and experiences including electric system operators... transmission switching; AC optimal power flow modeling; and use of active and dynamic transmission ratings. In... variability of the system, including forecast error? [cir] How can outage probability be captured in...
Recent meteor observing activities in Japan
NASA Astrophysics Data System (ADS)
Yamamoto, M.
2005-02-01
The meteor train observation (METRO) campaign is described as an example of recent meteor observing activity in Japan. Other topics of meteor observing activities in Japan, including Ham-band radio meteor observation, the "Japan Fireball Network", the automatic video-capture software "UFOCapture", and the Astro-classroom programme are also briefly introduced.
Introduction to the Graduation Tracking System (GTS)
ERIC Educational Resources Information Center
Alabama Department of Education, 2011
2011-01-01
This guide is a training and supportive tool for use by local education agencies (LEAs) in the state of Alabama that are utilizing the Science, Technology and Innovation (STI) Information-INow-INFocus information system software. The Graduation Tracking System (GTS) utilizes existing STI technology to capture student information pertaining to…
Code of Federal Regulations, 2014 CFR
2014-04-01
... identification and data capture (AIDC) means any technology that conveys the unique device identifier or the... use. Human cell, tissue, or cellular or tissue-based product (HCT/P) regulated as a device means an... device or more that consist of a single type, model, class, size, composition, or software version that...
A Fifth Grader's Guide to the World
ERIC Educational Resources Information Center
Purcell, April D.; Ponomarenko, Alyson L.; Brown, Stephen C.
2006-01-01
The challenge for today's elementary teachers is not "whether" but rather "how" to use computers to effectively teach students essential skills and concepts. One exciting way of meeting this challenge is to use Geographic Information Systems (GIS), computer software that captures, manipulates, analyzes, and displays data on specialized layered…
Using SysML to model complex systems for security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, Lester Arturo
2010-08-01
As security systems integrate more information technology, the design of these systems has tended to become more complex. Some of the most difficult issues in designing Complex Security Systems (CSS) are capturing requirements, defining hardware interfaces, defining software interfaces, and integrating technologies such as radio systems, Voice over IP systems, and situational awareness systems.
High-Throughput Screening of a Luciferase Reporter of Gene Silencing on the Inactive X Chromosome.
Keegan, Alissa; Plath, Kathrin; Damoiseaux, Robert
2018-01-01
Assays of luciferase gene activity are a sensitive and quantitative reporter system suited to high-throughput screening. We adapted a luciferase assay to a screening strategy for identifying factors that reactivate epigenetically silenced genes. This epigenetic luciferase reporter is subject to endogenous gene silencing mechanisms on the inactive X chromosome (Xi) in primary mouse cells and thus captures the multilayered nature of chromatin silencing in development. Here, we describe the optimization of an Xi-linked luciferase reactivation assay in 384-well format and adaptation of the assay for high-throughput siRNA and chemical screening. Xi-luciferase reactivation screening has applications in stem cell biology and cancer therapy. We have used the approach described here to identify chromatin-modifying proteins and to identify drug combinations that enhance the gene reactivation activity of the DNA demethylating drug 5-aza-2'-deoxycytidine.
Sample, Renee Beach; Kinney, Allison L; Jackson, Kurt; Diestelkamp, Wiebke; Bigelow, Kimberly Edginton
2017-09-01
The Timed Up and Go (TUG) has been commonly used for fall risk assessment. The instrumented Timed Up and Go (iTUG) adds wearable sensors to capture sub-movements and may be more sensitive. Posturography assessments have also been used for determining fall risk. This study used stepwise logistic regression models to identify key outcome measures for the iTUG and posturography protocols. The effectiveness of the models containing these measures in differentiating fallers from non-fallers was then compared for each: iTUG total time duration only, iTUG, posturography, and combined iTUG and posturography assessments. One hundred and fifty older adults participated in this study. The iTUG measures were calculated utilizing APDM Inc.'s Mobility Lab software. Traditional and non-linear posturography measures were calculated from center of pressure during quiet standing. The key outcome measures incorporated in the iTUG assessment model (sit-to-stand lean angle and height) resulted in a model sensitivity of 48.1% and a max-rescaled R² value of 0.19. This was a higher sensitivity, indicating better differentiation, compared to the model only including total time duration (outcome of the traditional TUG), which had a sensitivity of 18.2%. When the key outcome measures of the iTUG and the posturography assessments were combined into a single model, the sensitivity was approximately the same as the iTUG model alone. Overall the findings of this study support that the iTUG demonstrates greater sensitivity than the total time duration, but that carrying out both iTUG and posturography does not greatly improve sensitivity when used as a fall risk screening tool. Copyright © 2017 Elsevier B.V. All rights reserved.
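As a minimal sketch of the kind of model and sensitivity calculation described, the code below fits a logistic regression on synthetic data using scikit-learn. The predictor names mirror the ones reported above, but the data, coefficients, and classification threshold are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    n = 150
    lean_angle = rng.normal(25, 8, n)          # sit-to-stand lean angle (deg)
    lean_height = rng.normal(0.30, 0.08, n)    # sit-to-stand lean height (m)
    # Synthetic faller labels loosely tied to the predictors.
    p = 1 / (1 + np.exp(-(-4 + 0.10 * lean_angle + 3.0 * lean_height)))
    faller = rng.random(n) < p

    X = np.column_stack([lean_angle, lean_height])
    model = LogisticRegression().fit(X, faller)

    pred = model.predict(X)
    tn, fp, fn, tp = confusion_matrix(faller, pred).ravel()
    sensitivity = tp / (tp + fn)   # proportion of fallers correctly flagged
    specificity = tn / (tn + fp)
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")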
Small Particles Intact Capture Experiment (SPICE)
NASA Technical Reports Server (NTRS)
Nishioka, Ken-Ji; Carle, G. C.; Bunch, T. E.; Mendez, David J.; Ryder, J. T.
1994-01-01
The Small Particles Intact Capture Experiment (SPICE) will develop technologies and engineering techniques necessary to capture nearly intact, uncontaminated cosmic and interplanetary dust particles (IDP's). Successful capture of such particles will benefit the exobiology and planetary science communities by providing particulate samples that may have survived unaltered since the formation of the solar system. Characterization of these particles may contribute fundamental data to our knowledge of how these particles could have formed into our planet Earth and, perhaps, contributed to the beginnings of life. The term 'uncontaminated' means that captured cosmic and IDP particles are free of organic contamination from the capture process and the term 'nearly intact capture' means that their chemical and elemental components are not materially altered during capture. The key to capturing cosmic and IDP particles that are organic-contamination free and nearly intact is the capture medium. Initial screening of capture media included organic foams, multiple thin foil layers, and aerogel (a silica gel); but, with the exception of aerogel, the requirements of no contamination or nearly intact capture were not met. To ensure no contamination of particles in the capture process, high-purity aerogel was chosen. High-purity aerogel results in high clarity (visual clearness), a useful quality in detection and recovery of embedded captured particles from the aerogel. P. Tsou at the Jet Propulsion Laboratory (JPL) originally described the use of aerogel for this purpose and reported laboratory test results. He has flown aerogel as a 'GAS-can Lid' payload on STS-47 and is evaluating the results. The Timeband Capture Cell Experiment (TICCE), a Eureca 1 experiment, is also flying aerogel and is scheduled for recovery in late April.
Entirely irrelevant distractors can capture and captivate attention.
Forster, Sophie; Lavie, Nilli
2011-12-01
The question of whether a stimulus onset may capture attention when it is entirely irrelevant to the task and even in the absence of any attentional settings for abrupt onset or any dynamic changes has been highly controversial. In the present study, we designed a novel irrelevant capture task to address this question. Participants engaged in a continuous task making sequential forced choice (letter or digit) responses to each item in an alphanumeric matrix that remained on screen throughout many responses. This task therefore involved no attentional settings for onset or indeed any dynamic changes, yet the brief onset of an entirely irrelevant distractor (a cartoon picture) resulted in significant slowing of the two (Experiment 1) or three (Experiment 2) responses immediately following distractor appearance. These findings provide a clear demonstration of attention being captured and captivated by a distractor that is entirely irrelevant to any attentional settings of the task.
Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J
2012-01-01
Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow is comprised of three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow. PMID:22859881
Discrete Element Modeling (DEM) of Triboelectrically Charged Particles: Revised Experiments
NASA Technical Reports Server (NTRS)
Hogue, Michael D.; Calle, Carlos I.; Curry, D. R.; Weitzman, P. S.
2008-01-01
In a previous work, the addition of basic screened Coulombic electrostatic forces to an existing commercial discrete element modeling (DEM) software was reported. Triboelectric experiments were performed to charge glass spheres rolling on inclined planes of various materials. Charge generation constants and the Q/m ratios for the test materials were calculated from the experimental data and compared to the simulation output of the DEM software. In this paper, we will discuss new values of the charge generation constants calculated from improved experimental procedures and data. Also, planned work to include dielectrophoretic, Van der Waals forces, and advanced mechanical forces into the software will be discussed.
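The paper adds screened Coulombic forces to a commercial DEM code; purely as an illustration of the force law involved, the sketch below evaluates a screened (Yukawa-type) Coulomb force between two point charges. The charges, separation, and screening length are assumed example values, not the vendor's implementation or the paper's parameters.

    import numpy as np

    K_E = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

    def screened_coulomb_force(q1, q2, r, screening_length):
        """Magnitude (signed) of the force from a screened Coulomb potential
        V = k*q1*q2*exp(-r/L)/r, i.e. F = k*q1*q2*exp(-r/L)*(1 + r/L)/r^2."""
        return K_E * q1 * q2 / r**2 * np.exp(-r / screening_length) * (1 + r / screening_length)

    # Two spheres carrying a few nC at 1 mm separation, 0.5 mm screening length (assumed).
    f = screened_coulomb_force(2e-9, -3e-9, 1e-3, 0.5e-3)
    print(f"{f:.3e} N")  # negative sign indicates attraction here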
van Dyck, Peter C; Rinaldo, Piero; McDonald, Clement; Howell, R Rodrey; Zuckerman, Alan; Downing, Gregory
2010-01-01
Capture, coding and communication of newborn screening (NBS) information represent a challenge for public health laboratories, health departments, hospitals, and ambulatory care practices. An increasing number of conditions targeted for screening and the complexity of interpretation contribute to a growing need for integrated information-management strategies. This makes NBS an important test of tools and architecture for electronic health information exchange (HIE) in this convergence of individual patient care and population health activities. For this reason, the American Health Information Community undertook three tasks described in this paper. First, a newborn screening use case was established to facilitate standards harmonization for common terminology and interoperability specifications guiding HIE. Second, newborn screening coding and terminology were developed for integration into electronic HIE activities. Finally, clarification of privacy, security, and clinical laboratory regulatory requirements governing information exchange was provided, serving as a framework to establish pathways for improving screening program timeliness, effectiveness, and efficiency of quality patient care services. PMID:20064796
Descriptions of Free and Freeware Software in the Mathematics Teaching
NASA Astrophysics Data System (ADS)
Antunes de Macedo, Josue; Neves de Almeida, Samara; Voelzke, Marcos Rincon
2016-05-01
This paper presents the analysis and cataloging of free and freeware mathematical software available on the internet, a brief explanation of each, and the types of licenses for use in teaching and learning. The methodology is based on qualitative research. Among the different types of software found, Winmat stands out in algebra, handling linear algebra, matrices, and linear systems. In geometry, GeoGebra can be used in the study of functions, plane and spatial geometry, algebra, and calculus. For graphing, Graph and Graphequation can be cited. With the Graphmatica software, it is possible to build various graphs of mathematical equations on the same screen, representing Cartesian equations, inequalities, and parametric functions, among others. Winplot allows the user to build graphs of functions and mathematical equations in two and three dimensions. Thus, this work aims to present teachers with some free mathematics software that can be used in the classroom.
Wearable Notification via Dissemination Service in a Pervasive Computing Environment
2015-09-01
context, state, and environment in a manner that would be transparent to a Soldier's common operations. Subject terms: pervasive computing, Android ... of user context shifts, i.e., changes in the user's position, history, workflow, or resource interests. If the PCE is described as a 2-component ... convenient viewing on the Glass's screen just above the line of sight. All of the software developed uses Google's Android open-source software stack
NASA Astrophysics Data System (ADS)
Schiwietz, G.; Grande, P. L.
2011-11-01
Recent developments in the theoretical treatment of electronic energy losses of bare and screened ions in gases are presented. Specifically, the unitary-convolution-approximation (UCA) stopping-power model has proven its strengths for the determination of nonequilibrium effects for light as well as heavy projectiles at intermediate to high projectile velocities. The focus of this contribution will be on the UCA and its extension to specific projectile energies far below 100 keV/u, by considering electron-capture contributions at charge-equilibrium conditions.
Computer Reconstruction of Spirit Predicament
2009-11-04
A screen shot from software used by the Mars Exploration Rover team for assessing movements by Spirit and Opportunity illustrates the degree to which Spirit's wheels have become embedded in soft material at the location called Troy.
ERIC Educational Resources Information Center
Vietzke, Robert; And Others
1996-01-01
This special section explains the latest developments in networking technologies, profiles school districts benefiting from successful implementations, and reviews new products for building networks. Highlights include ATM (asynchronous transfer mode), cable modems, networking switches, Internet screening software, file servers, network management…
Automated measurement of zebrafish larval movement
Cario, Clinton L; Farrell, Thomas C; Milanese, Chiara; Burton, Edward A
2011-01-01
The zebrafish is a powerful vertebrate model that is readily amenable to genetic, pharmacological and environmental manipulations to elucidate the molecular and cellular basis of movement and behaviour. We report software enabling automated analysis of zebrafish movement from video recordings captured with cameras ranging from a basic camcorder to more specialized equipment. The software, which is provided as open-source MATLAB functions, can be freely modified and distributed, and is compatible with multiwell plates under a wide range of experimental conditions. Automated measurement of zebrafish movement using this technique will be useful for multiple applications in neuroscience, pharmacology and neuropsychiatry. PMID:21646414
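The published tool is a set of open-source MATLAB functions; as a rough Python sketch of the same frame-differencing idea, the code below scores per-frame movement from an array of already-decoded grayscale frames. The threshold, frame size, and synthetic stand-in data are assumptions.

    import numpy as np

    def movement_trace(frames, threshold=15):
        """Per-frame movement score for one well: count of pixels whose
        intensity changed by more than `threshold` between consecutive frames.

        `frames` has shape (n_frames, height, width), grayscale 0-255.
        """
        diffs = np.abs(np.diff(frames.astype(np.int16), axis=0))
        return (diffs > threshold).sum(axis=(1, 2))

    # Synthetic stand-in for a decoded well video: 100 frames of 64x64 noise.
    frames = np.random.randint(0, 30, (100, 64, 64), dtype=np.uint8)
    trace = movement_trace(frames)
    print("frame transitions with movement:", int((trace > 50).sum()))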
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pachuilo, Andrew R; Ragan, Eric; Goodall, John R
Visualization tools can take advantage of multiple coordinated views to support analysis of large, multidimensional data sets. Effective design of such views and layouts can be challenging, but understanding users' analysis strategies can inform design improvements. We outline an approach for intelligent design configuration of visualization tools with multiple coordinated views, and we discuss a proposed software framework to support the approach. The proposed software framework could capture and learn from user interaction data to automate new compositions of views and widgets. Such a framework could reduce the time needed for meta-analysis of visualization use and lead to more effective visualization design.
Measuring and assessing maintainability at the end of high level design
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.
1993-01-01
Software architecture appears to be one of the main factors affecting software maintainability. Therefore, in order to be able to predict and assess maintainability early in the development process we need to be able to measure the high-level design characteristics that affect the change process. To this end, we propose a measurement approach that is based on precise assumptions derived from the change process, builds on Object-Oriented Design principles, and is partially language independent. We define metrics for cohesion, coupling, and visibility in order to capture the difficulty of isolating, understanding, designing and validating changes.
Digital PIV (DPIV) Software Analysis System
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A software package was developed to provide a Digital PIV (DPIV) capability for NASA LaRC. The system provides an automated image capture, test correlation, and autocorrelation analysis capability for the Kodak Megaplus 1.4 digital camera system for PIV measurements. The package includes three separate programs that, when used together with the PIV data validation algorithm, constitutes a complete DPIV analysis capability. The programs are run on an IBM PC/AT host computer running either Microsoft Windows 3.1 or Windows 95 using a 'quickwin' format that allows simple user interface and output capabilities to the windows environment.
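This is not the LaRC package itself, but a minimal sketch of the FFT-based cross-correlation at the heart of PIV interrogation: estimate the displacement between two interrogation windows from the location of the correlation peak. The window size and synthetic shift are assumed example values.

    import numpy as np

    def piv_displacement(win_a, win_b):
        """Estimate the integer pixel displacement between two interrogation
        windows via FFT-based circular cross-correlation."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices so shifts larger than half the window come out negative.
        dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
        dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
        return dx, dy

    # Synthetic particle image and a copy shifted by 5 rows and 3 columns.
    rng = np.random.default_rng(1)
    frame_a = rng.random((32, 32))
    frame_b = np.roll(frame_a, shift=(5, 3), axis=(0, 1))   # dy=5, dx=3
    print(piv_displacement(frame_a, frame_b))   # expected (3, 5)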
Inexpensive Audio Activities: Earbud-based Sound Experiments
NASA Astrophysics Data System (ADS)
Allen, Joshua; Boucher, Alex; Meggison, Dean; Hruby, Kate; Vesenka, James
2016-11-01
Inexpensive alternatives to a number of classic introductory physics sound laboratories are presented including interference phenomena, resonance conditions, and frequency shifts. These can be created using earbuds, economical supplies such as Giant Pixie Stix® wrappers, and free software available for PCs and mobile devices. We describe two interference laboratories (beat frequency and two-speaker interference) and two resonance laboratories (quarter- and half-wavelength). Lastly, a Doppler laboratory using rotating earbuds is explained. The audio signal captured by all experiments is analyzed on free spectral analysis software and many of the experiments incorporate the unifying theme of measuring the speed of sound in air.
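The labs analyze the captured audio in free spectral-analysis software; as a sketch of the underlying beat-frequency arithmetic (f_beat = |f1 - f2|), the code below synthesizes two tones and writes a WAV file that can be played through earbuds. The frequencies, duration, and file name are arbitrary choices.

    import wave
    import numpy as np

    f1, f2 = 440.0, 444.0          # two tones; expect a 4 Hz beat
    rate, seconds = 44100, 5
    t = np.arange(rate * seconds) / rate
    signal = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
    samples = (signal * 32767 / np.max(np.abs(signal))).astype(np.int16)

    with wave.open("beats.wav", "w") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(samples.tobytes())

    print("beat frequency:", abs(f1 - f2), "Hz")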
Watson, Verity; Ryan, Mandy; Watson, Emma
2009-06-01
To examine women's preferences for characteristics of chlamydia screening. Chlamydia trachomatis is the most common curable sexually transmitted disease. To design effective screening programs, it is important to fully capture the benefits of screening to patients. Thus, the value of experience factors must be considered alongside health outcomes. A self-complete discrete choice experiment questionnaire was administered to women attending a family planning clinic. Chlamydia screening was described by five characteristics: location of screening; type of screening test; cost of screening test; risk of developing pelvic inflammatory disease if chlamydia is untreated; and support provided when receiving results. One hundred twenty-six women completed the questionnaire. Respondents valued characteristics of the care experience. Screening was valued at £15; less invasive screening tests increase willingness to pay by £7, and more invasive tests reduce willingness to pay by £3.50. The most preferred screening location was the family planning clinic, valued at £5. The support of a trained health-care professional when receiving results was valued at £4. Respondents under 25 years and those in a casual relationship were less likely to be screened. Women valued experience factors in the provision of chlamydia screening. To correctly value these screening programs and to predict uptake, cost-effectiveness studies should take such values into account. Failure to do this may result in incorrect policy recommendations.
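Willingness-to-pay values in a discrete choice experiment are typically derived as the ratio of an attribute coefficient to the negative of the cost coefficient; the sketch below shows that arithmetic with invented coefficients chosen only to echo the magnitudes above, not the study's estimates.

    # Invented conditional-logit coefficients for illustration only.
    coefficients = {
        "cost": -0.20,              # per £1 of test cost
        "non_invasive_test": 1.40,
        "family_planning_clinic": 1.00,
        "professional_support": 0.80,
    }

    def willingness_to_pay(attribute):
        """WTP = -(attribute coefficient) / (cost coefficient)."""
        return -coefficients[attribute] / coefficients["cost"]

    for attr in ("non_invasive_test", "family_planning_clinic", "professional_support"):
        print(f"{attr}: £{willingness_to_pay(attr):.2f}")
    # non_invasive_test: £7.00, family_planning_clinic: £5.00, professional_support: £4.00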
The GeoClaw software for depth-averaged flows with adaptive refinement
Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.
2011-01-01
Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw. ?? 2011.
Assuring NASA's Safety and Mission Critical Software
NASA Technical Reports Server (NTRS)
Deadrick, Wesley
2015-01-01
What is IV&V? Independent Verification and Validation (IV&V) is an objective examination of safety and mission critical software processes and products. Independence rests on three key parameters: technical independence, managerial independence, and financial independence. From the NASA IV&V perspective, the questions are whether the system's software will do what it is supposed to do, not do what it is not supposed to do, and respond as expected under adverse conditions. Systems engineering determines whether the right system has been built and whether it has been built correctly. IV&V technical approaches are aligned with IEEE 1012, captured in a Catalog of Methods, and span the full project lifecycle. The IV&V Assurance Strategy is the IV&V project's strategy for providing mission assurance; it is driven by the specific needs of an individual project, implemented via an Assurance Design, and communicated via Assurance Statements.