Video streaming technologies using ActiveX and LabVIEW
NASA Astrophysics Data System (ADS)
Panoiu, M.; Rat, C. L.; Panoiu, C.
2015-06-01
The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server; LabVIEW can play either role. Both programs exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1] [2], most ActiveX controls can only display the data and are incapable of transforming it into a data type that LabVIEW can process, which becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few possibilities for video streaming, and the methods it does offer are usually not high performance; it does, however, possess high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. We therefore chose to use existing software specialized in video streaming alongside LabVIEW and to capture the data it provides for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that use streaming technology) provides high-quality data with a very small transmission delay, ensuring the reliability of the image-processing results.
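The capture-then-process pattern described above (raw frames obtained from an external streaming component, then converted into a data type the processing environment can handle) can be sketched in Python; the frame format and dimensions here are hypothetical stand-ins for whatever the streaming control delivers:

```python
# Hypothetical sketch: an external streaming component hands over a flat
# buffer of 8-bit grayscale pixels, which we reshape into rows so that
# ordinary image-processing code can work on it.

def frame_bytes_to_rows(raw: bytes, width: int, height: int):
    """Reinterpret a flat buffer of 8-bit pixels as a row-major image."""
    if len(raw) != width * height:
        raise ValueError("buffer size does not match frame dimensions")
    return [list(raw[r * width:(r + 1) * width]) for r in range(height)]

# A 2x3 dummy frame standing in for data captured from the player.
frame = frame_bytes_to_rows(bytes([10, 20, 30, 40, 50, 60]), width=3, height=2)
```

The same idea applies regardless of the host environment: once the buffer is an indexable array, any downstream image-processing routine can consume it.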
Integrating Robotic Observatories into Astronomy Labs
NASA Astrophysics Data System (ADS)
Ruch, Gerald T.
2015-01-01
The University of St. Thomas (UST) and a consortium of five local schools are using the UST Robotic Observatory, which houses a 17-inch telescope, to develop labs and image processing tools that allow easy integration of observational labs into the existing introductory astronomy curriculum. Our lab design removes the burden of equipment ownership by sharing access to a common resource and removes the burden of data processing by automating processing tasks that are not relevant to the learning objectives. Each laboratory exercise takes place over two lab periods. During period one, students design and submit observation requests via the lab website. Between periods, the telescope automatically acquires the data and our image processing pipeline produces data ready for student analysis. During period two, the students retrieve their data from the website and perform the analysis. The first lab, 'Weighing Jupiter,' was successfully implemented at UST and several of our partner schools. We are currently developing a second lab to measure the age of and distance to a globular cluster.
Increasing the speed of medical image processing in MatLab®
Bister, M; Yap, CS; Ng, KH; Tok, CH
2007-01-01
MatLab® has often been considered an excellent environment for fast algorithm development but is generally perceived as slow and hence unfit for routine medical image processing, where large data sets are now common, e.g. high-resolution CT image sets with typically hundreds of 512x512 slices. Yet, with proper programming practices – vectorization, pre-allocation and specialization – applications in MatLab® can run as fast as in the C language. In this article, this point is illustrated with fast implementations of bilinear interpolation, watershed segmentation and volume rendering. PMID:21614269
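The vectorization point generalizes beyond MatLab®. As a rough sketch in Python/NumPy (array sizes here are illustrative, and this is a generic implementation rather than the authors' code), bilinear interpolation can be written without any Python-level per-pixel loop:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Vectorized bilinear interpolation: no per-pixel loop."""
    in_h, in_w = img.shape
    # Sample coordinates in the input image for every output pixel.
    y = np.linspace(0, in_h - 1, out_h)
    x = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[None, :]
    # Gather the four neighbours of every output pixel at once.
    tl = img[np.ix_(y0, x0)]; tr = img[np.ix_(y0, x1)]
    bl = img[np.ix_(y1, x0)]; br = img[np.ix_(y1, x1)]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

img = np.array([[0.0, 2.0], [4.0, 6.0]])
out = bilinear_resize(img, 3, 3)  # centre pixel is the mean of all four
```

The whole interpolation runs as a handful of array operations, which is exactly the style the article advocates.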
In-Situ Mosaic Production at JPL/MIPL
NASA Technical Reports Server (NTRS)
Deen, Bob
2012-01-01
The Multimission Image Processing Lab (MIPL) at JPL is responsible for (among other things) the ground-based operational image processing of all the recent in-situ Mars missions: (1) Mars Pathfinder, (2) Mars Polar Lander, (3) Mars Exploration Rovers (MER), (4) Phoenix, and (5) Mars Science Lab (MSL). Mosaics are probably the most visible products from MIPL: they are (1) generated for virtually every rover position at which a panorama is taken, (2) provide better environmental context than single images, (3) are valuable to operations and science personnel, and (4) are arguably the signature products for public engagement.
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high-quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this end, I have implemented a simple, inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one playback frame per second, and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user-friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and display as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW-driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
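One way an eye-rotation angle can be recovered from a digitized frame is by circularly cross-correlating intensity profiles sampled around the iris; the following is a hedged sketch of that general idea, not the system's actual algorithm, and the profiles are invented:

```python
import numpy as np

def torsion_shift(ref, cur):
    """Circular shift (in samples) that best aligns cur back onto ref,
    found by brute-force circular cross-correlation."""
    scores = [np.dot(ref, np.roll(cur, -s)) for s in range(len(ref))]
    return int(np.argmax(scores))

# Intensity profile sampled along a circle on the iris (made-up values),
# and the same profile after the eye "rotates" by two sample bins.
ref = np.array([0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0])
cur = np.roll(ref, 2)
shift = torsion_shift(ref, cur)          # recovers the 2-bin rotation
angle_deg = shift * (360.0 / len(ref))   # convert bins to degrees
```

With a realistic sampling density (hundreds of bins around the iris), the same brute-force search yields sub-degree torsion estimates.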
EarthTutor: An Interactive Intelligent Tutoring System for Remote Sensing
NASA Astrophysics Data System (ADS)
Bell, A. M.; Parton, K.; Smith, E.
2005-12-01
Earth science classes in colleges and high schools use a variety of satellite image processing software to teach earth science and remote sensing principles. However, current tutorials for image processing software are often paper-based or lecture-based and do not take advantage of the full potential of the computer context to teach, immerse, and stimulate students. We present EarthTutor, an adaptive, interactive Intelligent Tutoring System (ITS) being built for NASA (National Aeronautics and Space Administration) that is integrated directly with an image processing application. The system aims to foster the use of satellite imagery in classrooms and encourage inquiry-based, hands-on earth science study by providing students with an engaging imagery analysis learning environment. EarthTutor's software is available as a plug-in to ImageJ, a free image processing system developed by the NIH (National Institutes of Health). Since it is written in Java, it can run on almost any platform and also as an applet from the Web. Labs developed for EarthTutor combine lesson content (such as HTML web pages) with interactive activities and questions. In each lab the student learns to measure, calibrate, color, slice, plot and otherwise process and analyze earth science imagery. During the activities, EarthTutor monitors students closely as they work, which allows it to provide immediate feedback customized to a particular student's needs. As the student moves through the labs, EarthTutor assesses the student and tailors the presentation of the content to the student's demonstrated skill level. EarthTutor's adaptive approach is based on emerging Artificial Intelligence (AI) research. Bayesian networks are employed to model a student's proficiency with different earth science and image processing concepts. Agent behaviors are used to track the student's progress through activities and provide guidance when a student encounters difficulty.
Through individual feedback and adaptive instruction, EarthTutor aims to offer the benefits of a one-on-one human instructor in a cost-effective, easy-to-use application. We are currently working with remote sensing experts to develop EarthTutor labs for diverse earth science subjects such as global vegetation, stratospheric ozone, oceanography, polar sea ice and natural hazards. These labs will be packaged with the first public release of EarthTutor in December 2005. Custom labs can be designed with the EarthTutor authoring tool. The tool is basic enough to allow teachers to construct tutorials to fit their classroom's curriculum and locale, but also powerful enough to allow advanced users to create highly interactive labs. Preliminary results from an ongoing pilot study indicate that EarthTutor is an effective and enjoyable teaching tool relative to traditional satellite imagery teaching methods.
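As a rough illustration of the Bayesian student-modeling idea (not EarthTutor's actual network), a single-skill Bayesian Knowledge Tracing update can be written as follows; all parameter values are made up:

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: Bayes' rule gives the
    posterior probability of mastery given the answer, then a learning
    transition is applied. Parameters are illustrative only."""
    if correct:
        num = p_know * (1 - slip)
        den = num + (1 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1 - p_know) * (1 - guess)
    post = num / den
    return post + (1 - post) * learn

# A student's estimated mastery after three observed answers.
p = 0.3
for answer in (True, True, False):
    p = bkt_update(p, answer)
```

A full tutor would track one such estimate per concept and use the network of estimates to pick the next activity, as the abstract describes.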
NASA Astrophysics Data System (ADS)
di, L.; Deng, M.
2010-12-01
Remote sensing (RS) is an essential method for collecting data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent development in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promises the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are transparently accessible and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory Digital Image Processing: A Remote Sensing Perspective" by John Jensen. The textbook is widely adopted by geography departments around the world for training students in digital processing of remote sensing images. In the traditional teaching setting for the course, the instructor prepares a set of sample remote sensing images, and commercial desktop remote sensing software, such as ERDAS, is used for the lab exercises. The students have to do the exercises in the lab and can only use the sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and any place, as long as they can access the Internet through a Web browser. Feedback from students has been very positive about the learning experience in digital image processing with the help of GeoBrain web processing services. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.
Wang, Chunliang; Ritter, Felix; Smedby, Orjan
2010-07-01
To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system but is still able to achieve seamless integration between the two, and the IPC procedure is totally transparent to the end user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time, while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
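The shared-memory flavor of IPC the authors benchmark (fast because the image is never serialized through a socket or pipe) can be sketched in Python with the standard library's multiprocessing.shared_memory module; this is a generic illustration, not the OsiriX/MeVisLab protocol:

```python
import numpy as np
from multiprocessing import shared_memory

# "Producer" side: write an image into a named shared-memory block.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
shm = shared_memory.SharedMemory(create=True, size=img.nbytes)
src = np.ndarray(img.shape, dtype=img.dtype, buffer=shm.buf)
src[:] = img

# "Consumer" side: attach to the block by name and read it back,
# without any copy through a socket or pipe.
peer = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(img.shape, dtype=img.dtype, buffer=peer.buf)
received = view.copy()

peer.close()
shm.close()
shm.unlink()
```

In a real deployment the two sides would be separate processes and only the block name (plus shape and dtype metadata) would travel over the control channel, which is what keeps the per-image overhead in the millisecond range.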
Real-time blood flow visualization using the graphics processing unit
NASA Astrophysics Data System (ADS)
Yang, Owen; Cuccia, David; Choi, Bernard
2011-01-01
Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and incorporated into a LabVIEW Virtual Instrument (VI) interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.
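A simplified CPU version of the speckle processing might look like the following; the speckle flow index is taken here as the inverse squared speckle contrast over a sliding window, which is one common definition and not necessarily the authors' exact formula. On the GPU, each output pixel would be computed by an independent CUDA thread:

```python
import numpy as np

def sfi_map(raw, win=3):
    """Speckle flow index sketch: speckle contrast K = sigma/mean over a
    sliding win x win window, with SFI taken as 1/K^2. Uniform regions
    (zero variance) are assigned zero flow."""
    h, w = raw.shape
    r = win // 2
    out = np.zeros((h - 2 * r, w - 2 * r))
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = raw[i - r:i + r + 1, j - r:j + r + 1]
            m, s = patch.mean(), patch.std()
            out[i - r, j - r] = (m / s) ** 2 if s > 0 else 0.0
    return out

flow = sfi_map(np.arange(25.0).reshape(5, 5))  # 3x3 SFI map
```

Lower speckle contrast (blurring caused by moving scatterers) yields a higher index, which is why SFI tracks relative blood flow.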
Data Visualization and Animation Lab (DVAL) overview
NASA Technical Reports Server (NTRS)
Stacy, Kathy; Vonofenheim, Bill
1994-01-01
The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.
Applications of a digital darkroom in the forensic laboratory
NASA Astrophysics Data System (ADS)
Bullard, Barry D.; Birge, Brian
1997-02-01
Through a joint agreement with the Indiana-Marion County Forensic Laboratory Services Agency, the Institute for Forensic Imaging conducted a pilot program to investigate crime lab applications of a digital darkroom. IFI installed and staffed a state-of-the-art digital darkroom in the photography laboratory of the Indianapolis-Marion County crime lab located in Indianapolis, Indiana. The darkroom consisted of several high-resolution color digital cameras, an image-processing computer, dye-sublimation continuous-tone digital printers, and a CD-ROM writer. This paper describes the use of the digital darkroom in several crime lab investigations conducted during the program.
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully being used in clinical and research applications, including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread owing to the complexity of imaging, image workflow, and post-processing, and to a lack of algorithmic standards that hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to community physicians' uncertainty about how to integrate fMRI into practice. In addition, training of physicians in fMRI is in its infancy and requires both clinical and technical understanding. Therefore, many institutions that perform fMRI rely on a team of basic researchers and physicians to run fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and post-processing must be streamlined, standardized, and made available to institutions that lack these resources. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise using the Internet and HealthGrid Web Services technology.
Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M
2004-01-01
In this paper, we present a text input system for the seriously disabled using lips-image recognition based on LabVIEW. The system can be divided into a software subsystem and a hardware subsystem. In the software subsystem, we adopt image-processing techniques to recognize whether the mouth is open or closed, depending on the relative distance between the upper and lower lips. In the hardware subsystem, the parallel port built into the PC is used to transmit the recognized mouth status to the Morse-code text input system. Integrating the software subsystem with the hardware subsystem, we implement a text input system using lips-image recognition programmed in the LabVIEW language. We hope the system can help the seriously disabled to communicate with others more easily.
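The open/closed decision and its mapping onto Morse elements can be sketched as follows; the pixel threshold and duration cutoff are invented for illustration and would be calibrated per user in practice:

```python
def mouth_state(lip_distance, threshold=12.0):
    """Classify the mouth as open or closed from the measured distance
    (in pixels) between the upper and lower lips. The threshold here is
    an illustrative value."""
    return "open" if lip_distance >= threshold else "closed"

def opens_to_morse(durations_ms, dash_ms=300):
    """Map a sequence of mouth-open durations (in ms) to Morse dots and
    dashes: long openings become dashes, short ones dots."""
    return "".join("-" if d >= dash_ms else "." for d in durations_ms)

code = opens_to_morse([120, 450, 100])  # short, long, short
```

A full system would additionally segment letters and words from the pauses between openings before decoding the Morse stream into text.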
NASA Astrophysics Data System (ADS)
Tufts, Joseph R.; Lobdill, Rich; Haldeman, Benjamin J.; Haynes, Rachel; Hawkins, Eric; Burleson, Ben; Jahng, David
2008-07-01
The Las Cumbres Observatory Global Telescope Network (LCOGT) is an ambitious project to build and operate, within 5 years, a worldwide robotic network of fifty 0.4 m, 1 m, and 2 m telescopes sharing identical instrumentation and optimized for precision photometry of time-varying sources. The telescopes, instrumentation, and software are all developed in house, with two 2 m telescopes already installed. The LCOGT Imaging Lab is responsible for assembly and characterization of the network's cameras and instrumentation. In addition to a fully equipped CNC machine shop, two electronics labs, and a future optics lab, the Imaging Lab is designed from the ground up to be a superb environment for bare detectors, precision filters, and assembled instruments. At the heart of the lab is an ISO class 5 cleanroom with full ionization. Surrounding this, the class 7 main lab houses equipment for detector characterization, including QE and CTE, and equipment for measuring transmission and reflection of optics. Although the first science cameras installed, two TEC-cooled e2v 42-40 deep-depletion units and two CryoTiger-cooled Fairchild Imaging CCD486-BI units, are from outside manufacturers, their 18-position filter wheels and the remainder of the network's science cameras, controllers, and instrumentation will be built in house. Currently being designed, the first-generation LCOGT cameras for the network's 1 m telescopes use existing CCD486-BI devices and an in-house controller. Additionally, the controller uses digital signal processing to optimize readout noise vs. speed, and all instrumentation uses embedded microprocessors for communication over ethernet.
Cardiac catheterization laboratory management: the fundamentals.
Newell, Amy
2012-01-01
Increasingly, imaging administrators are gaining oversight of the cardiac cath lab as part of imaging services. Significant daily challenges include physician and staff demands, as well as patients who in many cases require higher-acuity care. Along with strategic, program-driven responsibilities, the management role is complex. Critical elements that have a major impact on cath lab management, as well as on the overall success of a cardiac and vascular program, include program quality, patient safety, operational efficiency (including inventory management), and customer service. It is critically important to have a well-qualified cath lab manager who acts as a leader by example, a mentor and motivator of the team, and an expert in the organization's processes and procedures. Such qualities will result in a streamlined cath lab with outstanding results.
Using Storyboarding to Model Gene Expression
ERIC Educational Resources Information Center
Korb, Michele; Colton, Shannon; Vogt, Gina
2015-01-01
Students often find it challenging to create images of complex, abstract biological processes. Using modified storyboards, which contain predrawn images, students can visualize the process and anchor ideas from activities, labs, and lectures. Storyboards are useful in assessing students' understanding of content in larger contexts. They enable…
Navarro, Pedro J; Alonso, Diego; Stathis, Kostas
2016-01-01
We develop an automated image processing system for detecting microaneurysms (MAs) in diabetic patients. Diabetic retinopathy is one of the main causes of preventable blindness among working-age diabetic people, and the presence of an MA is one of its first signs. We transform the eye fundus images to the L*a*b* color space in order to process the L* and a* channels separately, looking for MAs in each of them. We then fuse the results and finally send the MA candidates to a k-nearest-neighbours classifier for final assessment. The performance of the method, measured against 50 images with an ophthalmologist's hand-drawn ground truth, shows high sensitivity (100%) and accuracy (84%), with running times around 10 s. This kind of automatic image processing application is important for reducing the burden on the public health system associated with the diagnosis of diabetic retinopathy, given the high number of potential patients that need periodic screening.
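The final classification step can be illustrated with a minimal k-nearest-neighbours vote; the two-dimensional features and training points below are made up, whereas the real system would use features extracted from the L* and a* channels:

```python
import numpy as np

def knn_classify(train_x, train_y, x, k=3):
    """Plain k-nearest-neighbours majority vote for binary 0/1 labels.
    Training data and query are illustrative, not the paper's features."""
    d = np.linalg.norm(train_x - x, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]               # indices of k closest points
    return int(round(train_y[nearest].mean()))  # majority of 0/1 votes

# Made-up feature vectors: label 1 = microaneurysm candidate confirmed.
X = np.array([[0.1, 0.2], [0.0, 0.1], [0.9, 0.8], [1.0, 0.9], [0.2, 0.0]])
y = np.array([0, 0, 1, 1, 0])
label = knn_classify(X, y, np.array([0.95, 0.85]))
```

With an odd k the rounded mean of 0/1 votes is exactly the majority decision, which keeps the assessment step trivially simple.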
Easily Transported CCD Systems for Use in Astronomy Labs
NASA Astrophysics Data System (ADS)
Meisel, D.
1992-12-01
Relatively inexpensive CCD cameras and portable computers are now easily obtained as commercially available products. I will describe a prototype system that can be used by introductory astronomy students, even in urban environments, to obtain useful observations of the night sky. It is based on the ST-4 CCDs made by the Santa Barbara Instrument Group and Macintosh PowerBook 145 computers. Students take outdoor images directly from the college campus, bring the exposures back into the lab, and download the images onto our networked server. These stored images can then be processed (at a later time) using a variety of image processing programs, including a new astronomical version of the popular "freeware" NIH Image package that is currently under development at Geneseo. The prototype of this system will be demonstrated and available for hands-on use during the meeting. This work is supported by NSF ILI Demonstration Grant USE9250493 and grants from SUNY-GENESEO.
Li, Xiaofang; Deng, Linhong; Lu, Hu; He, Bin
2014-08-01
A measurement system based on image processing technology and developed in LabVIEW was designed to quickly obtain the range of motion (ROM) of the spine. The NI-Vision module was used to pre-process the original images and calculate the angles of marked needles in order to obtain ROM data. Six human cadaveric thoracic spine segments (T7-T10) were selected and subjected to six kinds of loads: left/right lateral bending, flexion, extension, and clockwise/counterclockwise torsion. The system was used to measure the ROM of segment T8-T9 under loads from 1 Nm to 5 Nm. The experimental results showed that the system is able to measure the ROM of the spine accurately and quickly, providing a simple and reliable tool for spine biomechanics investigators.
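The needle-angle measurement reduces to an atan2 over two marker points per needle, with the ROM given by the change in angle between load states; a minimal sketch with invented coordinates follows:

```python
import math

def needle_angle_deg(p1, p2):
    """Angle of a marker needle relative to horizontal, in degrees,
    from two detected points along the needle."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def rom_deg(before, after):
    """Range of motion: change in needle angle between two load states.
    Each state is a pair of (x, y) marker points."""
    return needle_angle_deg(*after) - needle_angle_deg(*before)

# Needle rotated from horizontal to 30 degrees under load (made-up points).
p30 = (10 * math.cos(math.radians(30)), 10 * math.sin(math.radians(30)))
motion = rom_deg(((0, 0), (10, 0)), ((0, 0), p30))
```

The vision toolkit's job in the real system is to locate the marker points reliably; the angle arithmetic itself is this simple.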
LabVIEW application for motion tracking using USB camera
NASA Astrophysics Data System (ADS)
Rob, R.; Tirian, G. O.; Panoiu, M.
2017-05-01
The technical state of the contact line and its associated equipment in electric rail transport is very important for the repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. This paper proposes a LabVIEW application able to track the motion of a laboratory pantograph in real time and to acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system that simulates the real motion. The tracked parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires appropriate toolkits for vision development; the paper therefore describes the subroutines programmed for real-time image acquisition and for data processing.
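Frame-by-frame tracking of this kind can be sketched as a thresholded centroid computation, with the horizontal coordinate giving the zigzag trace and the vertical coordinate the height trace; the threshold and frames below are illustrative, not the paper's actual vision-toolkit pipeline:

```python
import numpy as np

def bright_centroid(frame, threshold=200):
    """Centroid (x, y) of above-threshold pixels, a stand-in for
    detecting the tracked target in one frame. Threshold is illustrative."""
    ys, xs = np.nonzero(frame >= threshold)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def zigzag_trace(frames):
    """Horizontal (zigzag) coordinate of the target, frame by frame."""
    return [bright_centroid(f)[0] for f in frames]

# Two tiny synthetic frames with the target moving to the right.
f1 = np.zeros((4, 6), dtype=np.uint8); f1[1, 1] = 255
f2 = np.zeros((4, 6), dtype=np.uint8); f2[1, 4] = 255
trace = zigzag_trace([f1, f2])
```

Plotting the horizontal and vertical traces separately yields exactly the two diagrams the abstract mentions.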
Cui, Yang; Hanley, Luke
2015-06-01
ChiMS is an open-source data acquisition and control software program written in LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally emulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can easily be written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science.
Slice-thickness evaluation in CT and MRI: an alternative computerised procedure.
Acri, G; Tripepi, M G; Causa, F; Testagrossa, B; Novario, R; Vermiglio, G
2012-04-01
The efficient use of computed tomography (CT) and magnetic resonance imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, assessing the accuracy of slice thickness (ST) requires scan exploration of phantoms containing test objects (plane, cone or spiral). To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the full width at half maximum (FWHM) in real time. The phantom consists of a polymethyl methacrylate (PMMA) box diagonally crossed by a PMMA septum dividing the box into two sections. The phantom images were acquired and processed using the LabView-based procedure. The LabView (LV) results were compared with those obtained by processing the same phantom images with commercial software, and the Fisher exact test was conducted on the resulting data sets to validate the proposed methodology. In all cases, there was no statistically significant variation between the two procedures; the LV procedure can therefore be proposed as a valuable alternative to other commonly used procedures and be reliably used on any CT or MRI scanner.
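The core FWHM computation can be sketched in a few lines; this linear-interpolation version is a generic implementation, not the authors' LabView code:

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a 1-D profile, with linear
    interpolation at the two half-maximum crossings. dx is the
    sample spacing (e.g. mm per pixel)."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    i, j = above[0], above[-1]
    # Interpolate the left crossing between i-1 and i, and the right
    # crossing between j and j+1.
    left = i - (y[i] - half) / (y[i] - y[i - 1]) if i > 0 else float(i)
    right = j + (y[j] - half) / (y[j] - y[j + 1]) if j < len(y) - 1 else float(j)
    return (right - left) * dx

width = fwhm([0, 1, 2, 1, 0])  # triangular profile
```

In the QC procedure, the profile would be sampled across the imaged septum and dx would come from the scanner's pixel spacing.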
1998-12-05
This view of Jupiter was taken by Voyager 1. The image was taken through color filters and the frames recombined to produce the color view. The photo was assembled from three black-and-white negatives by the Image Processing Lab at the Jet Propulsion Laboratory. http://photojournal.jpl.nasa.gov/catalog/PIA01384
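Recombining three filtered black-and-white frames into a color image amounts to stacking them as channels; a toy sketch (pixel values invented):

```python
def combine_filters(red, green, blue):
    """Stack three single-filter grayscale frames (nested lists of equal
    shape) into one RGB image, mirroring the three-negative
    recombination described above."""
    h, w = len(red), len(red[0])
    return [[(red[y][x], green[y][x], blue[y][x]) for x in range(w)]
            for y in range(h)]

rgb = combine_filters([[10]], [[20]], [[30]])  # one-pixel "image"
```

Real pipelines additionally register the three frames against each other and balance the channels before recombination, since the negatives are exposed at different times.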
CT and MRI slice separation evaluation by LabView developed software.
Acri, Giuseppe; Testagrossa, Barbara; Sestito, Angela; Bonanno, Lilla; Vermiglio, Giuseppe
2018-02-01
The efficient use of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, assessing the accuracy of slice separation during multislice acquisition requires scan exploration of phantoms containing test objects. To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the midpoint of the full width at half maximum (FWHM) in real time, while the distance between the profile midpoints of two successive images is evaluated and measured. The results were compared with those obtained by processing the same phantom images with commercial software. To validate the proposed methodology, the Fisher test was conducted on the resulting data sets. In all cases, there was no statistically significant variation between the commercial procedure and the LabView one, which can be used on any CT or MRI diagnostic device. Copyright © 2017. Published by Elsevier GmbH.
Imaging live cells at high spatiotemporal resolution for lab-on-a-chip applications.
Chin, Lip Ket; Lee, Chau-Hwang; Chen, Bi-Chang
2016-05-24
Conventional optical imaging techniques are limited by the diffraction limit and struggle to image biomolecular and sub-cellular processes in living specimens. Novel optical imaging techniques are constantly evolving, driven by the desire for an imaging tool capable of seeing sub-cellular processes in a biological system, especially in three dimensions (3D) over time, i.e. 4D imaging. For fluorescence imaging of live cells, the trade-offs among imaging depth, spatial resolution, temporal resolution and photo-damage are constrained by the limited photon budget of the emitters. The fundamental solution to this dilemma is to enlarge the photon budget, for example through the development of photostable and bright fluorophores, which has driven innovations in optical imaging techniques such as super-resolution microscopy and light-sheet microscopy. With the synergy of microfluidic technology, which can manipulate biological cells and control their microenvironments to mimic in vivo physiological environments, studies of sub-cellular processes in various biological systems can be simplified and investigated systematically. In this review, we provide an overview of current state-of-the-art super-resolution and 3D live-cell imaging techniques and their lab-on-a-chip applications, and finally discuss future research trends in the new and breakthrough research area of 4D imaging of live specimens in controlled 3D microenvironments.
Human perception testing methodology for evaluating EO/IR imaging systems
NASA Astrophysics Data System (ADS)
Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.
2018-04-01
The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities, recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.
Going fully digital: Perspective of a Dutch academic pathology lab
Stathonikos, Nikolas; Veta, Mitko; Huisman, André; van Diest, Paul J.
2013-01-01
In recent years, whole slide imaging has become more affordable and widely accepted in pathology labs. Digital slides are increasingly being used for digital archiving of routinely produced clinical slides, remote consultation and tumor boards, quantitative image analysis for research purposes, and education. However, the implementation of a fully digital pathology department requires an in-depth look into the suitability of digital slides for routine clinical use (the image quality of the produced digital slides and the factors that affect it) and the required infrastructure to support such use (the storage requirements and integration with lab management and hospital information systems). Optimization of the digital pathology workflow requires communication between several systems, which can be facilitated by the use of open standards for digital slide storage and scanner management. Consideration of these aspects, along with appropriate validation of the use of digital slides for routine pathology, can pave the way for pathology departments to go “fully digital.” In this paper, we summarize our experiences so far in implementing a fully digital workflow at our Pathology Department and the steps needed to complete this process. PMID:23858390
Affordable Imaging Lab for Noninvasive Analysis of Biomass and Early Vigour in Cereal Crops
2018-01-01
Plant phenotyping by imaging allows automated analysis of plants for various morphological and physiological traits. In this work, we developed a low-cost RGB imaging phenotyping lab (LCP lab) for low-throughput imaging and analysis using affordable imaging equipment and freely available software. The LCP lab, comprising an RGB imaging and analysis pipeline, was set up and demonstrated through early vigour analysis in wheat. Using this lab, a few hundred pots can be photographed in a day, with the pots tracked by QR codes. The software pipeline for both imaging and analysis is built from freely available software. The LCP lab was evaluated through early vigour analysis of five wheat cultivars. A high coefficient of determination (R2 = 0.94) was obtained between the dry weight and the projected leaf area of 20-day-old wheat plants, and an R2 of 0.9 for the relative growth rate between 10 and 20 days of plant growth. A detailed description of setting up such a lab is provided, together with the custom scripts built for imaging and analysis. The LCP lab is an affordable alternative for analysis of cereal crops when access to a high-throughput phenotyping facility is unavailable or when experiments require growing plants in highly controlled climate chambers. The protocols described in this work are useful for building affordable imaging systems for small-scale research projects and for education. PMID:29850536
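The paper's own scripts are not reproduced here; as a minimal sketch of the two measurements it reports, the following Python code estimates projected leaf area with a simple excess-green segmentation rule (an assumption, standing in for the lab's actual pipeline) and computes a coefficient of determination between measured and predicted values:

```python
import numpy as np

def projected_leaf_area(rgb, green_margin=10):
    """Count plant pixels in an RGB image (H x W x 3, uint8).

    A pixel is classified as 'plant' when its green channel exceeds
    both red and blue by a margin; this excess-green rule is a common
    heuristic, not the LCP lab's published segmentation.
    """
    r, g, b = (rgb[..., k].astype(int) for k in range(3))
    mask = (g > r + green_margin) & (g > b + green_margin)
    return int(mask.sum())

def r_squared(y_true, y_pred):
    """Coefficient of determination between measured and predicted values."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy image: a 20 x 20 green square on a dark background
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:60, 40:60, 1] = 200
area = projected_leaf_area(img)
```

In the study's design, `area` (in pixels, calibrated to cm²) would be regressed against dry weight to obtain the reported R2 of 0.94.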
This view of Jupiter was taken by Voyager 1
NASA Technical Reports Server (NTRS)
1998-01-01
This view of Jupiter was taken by Voyager 1. This image was taken through color filters and recombined to produce the color image. This photo was assembled from three black and white negatives by the Image Processing Lab at Jet Propulsion Laboratory. JPL manages and controls the Voyager project for NASA's Office of Space Science.
CCDs in the Mechanics Lab--A Competitive Alternative? (Part I).
ERIC Educational Resources Information Center
Pinto, Fabrizio
1995-01-01
Reports on the implementation of a relatively low-cost, versatile, and intuitive system to teach basic mechanics based on the use of a Charge-Coupled Device (CCD) camera and inexpensive image-processing and analysis software. Discusses strengths and limitations of CCD imaging technologies. (JRH)
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests at the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, the large search spaces caused by complex plans required the use of hand-encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level and then uses a classical operator-based planner to solve subproblems in contexts defined by the high-level decomposition.
Newe, Axel
2015-01-01
The Portable Document Format (PDF) allows for embedding three-dimensional (3D) models and is therefore particularly suitable for communicating such data, especially in scholarly articles. The generation of the necessary model data, however, is still challenging, especially for inexperienced users. This prevents an unrestrained proliferation of 3D PDF usage in scholarly communication. This article introduces a new solution for the creation of three types of 3D geometry (point clouds, polylines and triangle meshes) that is based on MeVisLab, a framework for biomedical image processing. This solution enables even novice users to generate the model data files without requiring programming skills and without the need for intensive training, by simply using it as a conversion tool. Advanced users can benefit from the full capability of MeVisLab to generate and export the model data as part of an overall processing chain. Although MeVisLab is primarily designed for handling biomedical image data, the new module is not restricted to this domain. It can be used for all scientific disciplines.
2010-07-15
Operations of mathematical morphology applied to the analysis of images are a way of extracting information from an image.
NASA Technical Reports Server (NTRS)
2003-01-01
Helen Cole, the project manager for the Lab-on-a-Chip Applications Development program, and Lisa Monaco, the project scientist for the program, insert a lab on a chip into the Caliper 42 which is specialized equipment that controls processes on commercial chips to support development of lab-on-a-chip applications. The system has special microscopes and imaging systems, so scientists can process and study different types of fluid, chemical, and medical tests conducted on chips. For example, researchers have examined fluorescent bacteria as it flows through the chips' fluid channels or microfluidic capillaries. Researchers at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, have been studying how the lab-on-a-chip technology can be used for microbial detection, water quality monitoring, and detecting biosignatures of past or present life on Mars. The Marshall Center team is also collaborating with scientists at other NASA centers and at universities to develop custom chip designs for not only space applications, but for many Earth applications, such as for detecting deadly microbes in heating and air systems. (NASA/MSFC/D.Stoffer)
Multi-threaded integration of HTC-Vive and MeVisLab
NASA Astrophysics Data System (ADS)
Gunacker, Simon; Gall, Markus; Schmalstieg, Dieter; Egger, Jan
2018-03-01
This work presents how Virtual Reality (VR) can easily be integrated into medical applications via a plugin for a medical image processing framework called MeVisLab. A multi-threaded plugin has been developed using OpenVR, a VR library that can be used for developing vendor and platform independent VR applications. The plugin is tested using the HTC Vive, a head-mounted display developed by HTC and Valve Corporation.
Impacts of Digital Imaging versus Drawing on Student Learning in Undergraduate Biodiversity Labs
ERIC Educational Resources Information Center
Basey, John M.; Maines, Anastasia P.; Francis, Clinton D.; Melbourne, Brett
2014-01-01
We examined the effects of documenting observations with digital imaging versus hand drawing in inquiry-based college biodiversity labs. Plant biodiversity labs were divided into two treatments, digital imaging (N = 221) and hand drawing (N = 238). Graduate-student teaching assistants (N = 24) taught one class in each treatment. Assessments…
Aided target recognition processing of MUDSS sonar data
NASA Astrophysics Data System (ADS)
Lau, Brian; Chao, Tien-Hsin
1998-09-01
The Mobile Underwater Debris Survey System (MUDSS) is a collaborative effort by the Navy and the Jet Propulsion Lab to demonstrate multi-sensor, real-time survey of underwater sites for ordnance and explosive waste (OEW). We describe the sonar processing algorithm, a novel target recognition algorithm incorporating wavelets, morphological image processing, expansion by Hermite polynomials, and neural networks. This algorithm has found all planted targets in MUDSS tests and has achieved spectacular success on another Coastal Systems Station (CSS) sonar image database.
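Of the pipeline's four components, only the morphological image processing step lends itself to a compact sketch. The following pure-NumPy code (an illustration, not the MUDSS implementation) shows how a binary opening removes single-pixel speckle from a detection mask while preserving target-sized blobs:

```python
import numpy as np

def binary_dilate(mask):
    """3x3 binary dilation implemented with array shifts (no SciPy needed)."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def binary_erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole neighbourhood is set."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def morphological_open(mask):
    """Opening (erosion then dilation) removes features smaller than the 3x3 element."""
    return binary_dilate(binary_erode(mask))

# A target-sized 5x5 blob plus single-pixel speckle noise
scene = np.zeros((32, 32), dtype=bool)
scene[10:15, 10:15] = True   # blob survives the opening
scene[3, 3] = True           # isolated speckle is removed
cleaned = morphological_open(scene)
```

Note that `np.roll` wraps around the image borders, which is harmless here but would need explicit padding in a production detector.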
Nuclear Medicine at Berkeley Lab: From Pioneering Beginnings to Today (LBNL Summer Lecture Series)
Budinger, Thomas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Dept. of Nuclear Medicine & Functional Imaging
2018-01-23
Summer Lecture Series 2006: Thomas Budinger, head of Berkeley Lab's Center for Functional Imaging, discusses Berkeley Lab's rich history pioneering the field of nuclear medicine, from radioisotopes to medical imaging.
On-line determination of pork color and intramuscular fat by computer vision
NASA Astrophysics Data System (ADS)
Liao, Yi-Tao; Fan, Yu-Xia; Wu, Xue-Qian; Xie, Li-juan; Cheng, Fang
2010-04-01
In this study, the potential of computer vision for on-line determination of CIE L*a*b* color and intramuscular fat (IMF) content of pork was evaluated. Images of pork chops from 211 pig carcasses were captured while the samples were on a conveyor belt moving at 0.25 m/s to simulate the on-line environment. CIE L*a*b* and IMF content were measured with a colorimeter and by chemical extraction as references. The KSW algorithm combined with region selection was employed to eliminate the fat surrounding the longissimus dorsi muscle (MLD). RGB values of the pork were counted, and five methods were applied for transforming RGB values to CIE L*a*b* values. A region-growing algorithm with multiple seed points was applied to mask out the IMF pixels within the intensity-corrected images. The performance of the proposed algorithms was verified by comparing the measured reference values with the quality characteristics obtained by image processing. The MLD region of six samples could not be identified using the KSW algorithm. Intensity nonuniformity of the pork surface in the image could be eliminated efficiently, although the IMF region of three corrected images failed to be extracted. Given the considerable variety of color and the complexity of the pork surface, the CIE L*, a* and b* color of the MLD could be predicted with correlation coefficients of 0.84, 0.54 and 0.47 respectively, and IMF content could be determined with a correlation coefficient of more than 0.70. The study demonstrated that it is feasible to evaluate CIE L*a*b* values and IMF content on-line using computer vision.
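The five RGB-to-L*a*b* transforms compared in the study are not reproduced in the abstract. As one common reference route (an assumption about which transform family they include, not their calibrated coefficients), the following Python sketch converts an sRGB triple to CIE L*a*b* via XYZ under the D65 white point:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIE L*a*b* under the D65 white point.

    This is the standard sRGB -> XYZ -> L*a*b* route; camera-specific
    calibrations like those compared in the paper would replace the
    fixed matrix below with fitted coefficients.
    """
    c = np.asarray(rgb, dtype=float) / 255.0
    # inverse sRGB gamma (linearisation)
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    # linear sRGB to XYZ (D65)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ c
    # normalise by the D65 reference white
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

L, a, b = rgb_to_lab((255, 255, 255))  # white: L* near 100, a* and b* near 0
```

In the on-line setting, this conversion would be applied per pixel over the segmented MLD region and the results averaged for comparison against the colorimeter readings.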
NASA Astrophysics Data System (ADS)
Suen, Ricky Wai
The work described in this thesis covers the conversion of HiLo image processing into the MATLAB architecture and the use of speckle-illumination HiLo microscopy for ex-vivo and in-vivo imaging of thick tissue models. HiLo microscopy is a wide-field fluorescence imaging technique that has been demonstrated to produce optically sectioned images comparable to confocal microscopy in thin samples. The imaging technique was developed by Jerome Mertz and the Boston University Biomicroscopy Lab and has been implemented in our lab as a stand-alone optical setup and as a modification to a conventional fluorescence microscope. Speckle-illumination HiLo microscopy combines two images, taken under speckle illumination and standard uniform illumination, to generate an optically sectioned image that rejects out-of-focus fluorescence. The evaluated speckle contrast in the images is used as a weighting function, where elements that move out of focus have a speckle contrast that decays to zero. The experiments shown here demonstrate the capability of our HiLo microscopes to produce optically sectioned images of the microvasculature of ex-vivo and in-vivo thick tissue models. The HiLo microscopes were used to image the microvasculature of ex-vivo mouse heart sections prepared for optical histology and the microvasculature of in-vivo rodent dorsal window chamber models. Studies in label-free surface profiling with HiLo microscopy are also presented.
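The thesis code is in MATLAB and not reproduced here; a heavily simplified Python sketch of the HiLo fusion idea follows. Local speckle contrast weights the in-focus low-frequency content of the uniform image, while the high-frequency content comes from the uniform image directly. The filter scale `sigma` and weight `eta` are illustrative placeholders, not the thesis parameters, and real implementations calibrate both carefully:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in pure NumPy."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

def hilo(uniform, speckle, sigma=4.0, eta=1.0):
    """Fuse a uniform- and a speckle-illumination image (simplified HiLo).

    Out-of-focus regions have low local speckle contrast, so their
    low-frequency content is suppressed in the fused result.
    """
    u = uniform.astype(float)
    s = speckle.astype(float)
    mean = gaussian_blur(s, sigma)
    var = gaussian_blur((s - mean) ** 2, sigma)
    contrast = np.sqrt(np.maximum(var, 0)) / np.maximum(mean, 1e-9)
    lo = gaussian_blur(contrast * u, sigma)   # contrast-weighted low frequencies
    hi = u - gaussian_blur(u, sigma)          # high frequencies from uniform image
    return eta * lo + hi

rng = np.random.default_rng(0)
uniform = np.full((64, 64), 100.0)
speckle = 100.0 + 20.0 * rng.standard_normal((64, 64))
img = hilo(uniform, speckle)
```

The actual HiLo algorithm applies band-pass filtering and a calibrated `eta` so the two bands join seamlessly in frequency space; this sketch only conveys the contrast-weighting principle.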
Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision
NASA Astrophysics Data System (ADS)
Hendrawan, Y.; Hawa, L. C.; Damayanti, R.
2018-03-01
This study applied a machine vision-based chip-drying monitoring system to optimise the drying process of cassava chips. The objective of this study is to propose a fish swarm intelligence (FSI) optimization algorithm to find the most significant set of image features for predicting the water content of cassava chips during the drying process using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-objective optimization (MOO) was used in this study, consisting of prediction-accuracy maximization and feature-subset size minimization. The results showed that the best feature subset comprised grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset was tested successfully in the ANN model to describe the relationship between image features and the water content of cassava chips during drying, with an R2 between real and predicted data of 0.9.
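The multi-objective idea (accuracy up, subset size down) can be sketched compactly. In the Python illustration below, a linear least-squares fit stands in for the paper's ANN and an exhaustive search stands in for fish-swarm optimisation; both substitutions are my own simplifications to keep the example self-contained:

```python
import numpy as np
from itertools import combinations

def subset_score(X, y, cols, size_penalty=0.01):
    """Fitness of a feature subset: linear-model R2 minus a size penalty.

    The penalty term implements the second objective (subset-size
    minimization) as a scalarised trade-off against prediction accuracy.
    """
    A = np.column_stack([X[:, list(cols)], np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return r2 - size_penalty * len(cols)

def best_subset(X, y, max_size=3):
    """Exhaustive search over small subsets (stand-in for FSI)."""
    n = X.shape[1]
    candidates = [c for k in range(1, max_size + 1)
                  for c in combinations(range(n), k)]
    return max(candidates, key=lambda c: subset_score(X, y, c))

# Synthetic 'image features': the target depends only on features 0 and 2
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.05 * rng.standard_normal(200)
chosen = best_subset(X, y)
```

A swarm-based search becomes worthwhile exactly when the feature pool is too large for this exhaustive enumeration, which is the situation the paper addresses.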
Software Reuse in the Planetary Context: The JPL/MIPL Mars Program Suite
NASA Technical Reports Server (NTRS)
Deen, Robert
2012-01-01
Reuse greatly reduces development costs; the savings can be invested in new or improved capabilities or returned to the sponsor, so it is worth the extra time to "do it right." Operator training is also greatly reduced: MIPL MER personnel can step into MSL easily because the programs are familiar, and application programs are much easier to write because core capabilities can be assumed to exist already. The Multimission Image Processing Lab (MIPL) is responsible for the ground-based instrument data processing for, among other things, all recent in-situ Mars missions: Mars Pathfinder, Mars Polar Lander (MPL), the Mars Exploration Rovers (MER), Phoenix, and the Mars Science Lab (MSL). Its responsibilities for in-situ missions include reconstruction of instrument data from telemetry, systematic creation of Reduced Data Records (RDRs) for images, and creation of special products for operations, science, and public outreach. MIPL is in the critical path for operations: its products are required for planning the next Sol's activities.
Neves Tafula, Sérgio M; Moreira da Silva, Nádia; Rozanski, Verena E; Silva Cunha, João Paulo
2014-01-01
Neuroscience is an increasingly multidisciplinary and highly cooperative field where neuroimaging plays an important role. Neuroimaging rapid evolution is demanding for a growing number of computing resources and skills that need to be put in place at every lab. Typically each group tries to setup their own servers and workstations to support their neuroimaging needs, having to learn from Operating System management to specific neuroscience software tools details before any results can be obtained from each setup. This setup and learning process is replicated in every lab, even if a strong collaboration among several groups is going on. In this paper we present a new cloud service model - Brain Imaging Application as a Service (BiAaaS) - and one of its implementation - Advanced Brain Imaging Lab (ABrIL) - in the form of an ubiquitous virtual desktop remote infrastructure that offers a set of neuroimaging computational services in an interactive neuroscientist-friendly graphical user interface (GUI). This remote desktop has been used for several multi-institution cooperative projects with different neuroscience objectives that already achieved important results, such as the contribution to a high impact paper published in the January issue of the Neuroimage journal. The ABrIL system has shown its applicability in several neuroscience projects with a relatively low-cost, promoting truly collaborative actions and speeding up project results and their clinical applicability.
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image-processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, enhancing the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence-mitigation capability is such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence-mitigation tests, this could be, for example, the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test-scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path are discussed.
Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data
NASA Astrophysics Data System (ADS)
Kumar, C.; Shetty, A.; Raval, S.; Champatiray, P. K.; Sharma, R.
2014-11-01
This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using the Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in the hostile mountainous terrain of the Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) computation, and endmember extraction from the reflectance image for surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. These endmembers were then assessed against the USGS mineral spectral library and lab spectra of rock samples collected in the field for spectral inspection. Subsequently, the MTTCIMF algorithm was applied to the processed image to obtain a distribution map of each detected mineral. A virtual verification method, which uses image information directly to evaluate the result, was adopted to assess the classified image, confirming an overall accuracy of 68% and a kappa coefficient of 0.6. Sub-pixel mineral information with reasonable accuracy can be a valuable guide for the geological and exploration community before committing to expensive ground and/or lab experiments to discover economic deposits. Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using the MTTCIMF algorithm in a cost- and time-effective manner.
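MTTCIMF itself involves matched filtering with interference minimization and is too involved to reproduce faithfully here. A much simpler relative, the Spectral Angle Mapper, conveys the core step of matching image spectra against library or lab reference spectra; the two-band toy "library" below is hypothetical (real Hyperion spectra have 242 bands):

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Angle (radians) between a pixel spectrum and a reference spectrum.

    Smaller angles mean closer spectral matches; the measure is
    insensitive to overall illumination scaling.
    """
    p = np.asarray(pixel, float)
    e = np.asarray(endmember, float)
    cos = np.dot(p, e) / (np.linalg.norm(p) * np.linalg.norm(e))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(pixel, library):
    """Return the name of the library spectrum with the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Hypothetical four-band reflectance 'library' for illustration only
library = {
    "illite":   [0.30, 0.55, 0.40, 0.62],
    "dolomite": [0.50, 0.48, 0.70, 0.45],
}
match = classify([0.31, 0.54, 0.41, 0.60], library)
```

Sub-pixel methods such as MTTCIMF go further by estimating the fractional abundance of each endmember within a pixel rather than assigning a single best-match label.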
Teaching Chemistry Lab Safety through Comics
NASA Astrophysics Data System (ADS)
di Raddo, Pasquale
2006-04-01
As a means for raising students' interest in aspects pertaining to chemistry lab safety, this article presents a novel approach to teaching this important subject. Comic book lab scenes that involve fictional characters familiar to many students are presented and discussed as to the safety concerns represented in those images. These are discussed in a safety prelab session. For the sake of comparison, students are then shown images taken from current chemistry journals of safety-conscious contemporary chemists at work in their labs. Finally the need to adhere to copyright regulations for the use of the images is discussed so as to increase students' awareness of academic honesty and copyright issues.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
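The abstract names flat-field correction and recursive temporal filtering among the implemented processing steps. The LabVIEW block diagrams are not published, but both operations have standard forms, sketched here in NumPy with illustrative parameter values:

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Classic gain/offset correction: (raw - dark) / (flat - dark).

    'flat' is an image of a uniform exposure and 'dark' an offset frame;
    the result removes pixel-to-pixel gain variation.
    """
    num = raw.astype(float) - dark
    den = np.maximum(flat.astype(float) - dark, 1e-9)
    return num / den

def recursive_filter(frames, alpha=0.25):
    """Recursive temporal filter: out = alpha*new + (1 - alpha)*previous.

    This first-order lag is the standard fluoroscopy noise-reduction
    filter; alpha trades noise suppression against motion lag.
    """
    out = frames[0].astype(float)
    for f in frames[1:]:
        out = alpha * f + (1.0 - alpha) * out
    return out

# A noisy constant scene converges toward its true level under filtering
rng = np.random.default_rng(2)
frames = [100.0 + 10.0 * rng.standard_normal((8, 8)) for _ in range(60)]
filtered = recursive_filter(frames)
```

At steady state the per-pixel noise variance is reduced by a factor of alpha/(2 - alpha), so alpha = 0.25 cuts the noise standard deviation roughly in half while keeping lag acceptable for fluoroscopy.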
StagLab: Post-Processing and Visualisation in Geodynamics
NASA Astrophysics Data System (ADS)
Crameri, Fabio
2017-04-01
Despite being simplifications of nature, today's geodynamic numerical models can, often do, and sometimes have to become very complex. Additionally, a steadily increasing amount of raw model data results from more elaborate numerical codes and the continuously increasing computational power available for their execution. The current need for efficient post-processing and sensible visualisation is thus apparent. StagLab (www.fabiocrameri.ch/software) provides this much-needed, strongly automated post-processing in combination with state-of-the-art visualisation. Written in MATLAB, StagLab is simple, flexible, efficient and reliable. It produces figures and movies that are both fully reproducible and publication-ready. StagLab's post-processing capabilities include numerous diagnostics for plate tectonics and mantle dynamics. Featured are accurate plate-boundary identification, slab-polarity recognition, plate-bending derivation, mantle-plume detection, and surface-topography component splitting. These and many other diagnostics are derived conveniently from only a few parameter fields thanks to powerful image processing tools and other capable algorithms. Additionally, StagLab aims to prevent scientific visualisation pitfalls that are, unfortunately, still too common in the geodynamics community. The misinterpretation of raw data and the exclusion of colour-blind readers caused by the continued use of the rainbow (a.k.a. jet) colour scheme is just one, but a dramatic, example (e.g., Rogowitz and Treinish, 1998; Light and Bartlein, 2004; Borland and Taylor, 2007). StagLab is currently optimised for binary StagYY output (e.g., Tackley, 2008), but is adjustable for potential use with other geodynamic codes. Additionally, StagLab's post-processing routines are open-source. REFERENCES Borland, D., and R. M. Taylor II (2007), Rainbow color map (still) considered harmful, IEEE Computer Graphics and Applications, 27(2), 14-17. Light, A., and P. J.
Bartlein (2004), The end of the rainbow? Color schemes for improved data graphics, Eos Trans. AGU, 85(40), 385-391. Rogowitz, B. E., and L. A. Treinish (1998), Data visualization: the end of the rainbow, IEEE Spectrum, 35(12), 52-59, doi:10.1109/6.736450. Tackley, P. J. (2008), Modelling compressible mantle convection with large viscosity contrasts in a three-dimensional spherical shell using the yin-yang grid, Physics of the Earth and Planetary Interiors, 171(1-4), 7-18.
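The colour-scheme pitfall the abstract cites can be made concrete: a perceptually ordered scheme has monotonically increasing lightness, whereas a jet-like rainbow does not. A minimal pure-Python sketch, using the standard sRGB-to-CIE-L* formulas; the four rainbow control points below are an illustrative stand-in for the full jet colormap, not its exact samples:

```python
def srgb_to_lightness(rgb):
    """Approximate CIE L* (0-100) of an sRGB colour (D65 white, Yn = 1)."""
    def lin(c):  # sRGB gamma expansion
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # relative luminance
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

def is_monotonic(vals):
    return all(a <= b for a, b in zip(vals, vals[1:]))

# A few control points along a jet-like rainbow ramp ...
jet = [(0, 0, 1), (0, 1, 1), (1, 1, 0), (1, 0, 0)]   # blue, cyan, yellow, red
# ... versus a plain grayscale ramp.
gray = [(v, v, v) for v in (0.0, 0.25, 0.5, 0.75, 1.0)]

jet_L = [srgb_to_lightness(c) for c in jet]
gray_L = [srgb_to_lightness(c) for c in gray]
```

Running this shows the rainbow ramp's lightness rising toward cyan and yellow and then falling toward red, which is exactly the ordering ambiguity Borland and Taylor describe.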
LabVIEW-based control software for para-hydrogen induced polarization instrumentation.
Agraz, Jose; Grunfeld, Alexander; Li, Debiao; Cunningham, Karl; Willey, Cindy; Pozos, Robert; Wagner, Shawn
2014-04-01
The elucidation of cell metabolic mechanisms is the modern underpinning of the diagnosis, treatment, and in some cases the prevention of disease. Para-hydrogen induced polarization (PHIP) enhances magnetic resonance imaging (MRI) signals over 10,000-fold, allowing for the MRI of cell metabolic mechanisms. This signal enhancement is the result of hyperpolarizing endogenous substances used as contrast agents during imaging. PHIP instrumentation hyperpolarizes carbon-13 (¹³C) based substances using a process that requires control of a number of factors: chemical reaction timing, gas flow, monitoring of a static magnetic field (B₀), radio frequency (RF) irradiation timing, reaction temperature, and gas pressures. Current PHIP instruments control the hyperpolarization process manually, which precludes precise control of the factors listed above and yields non-reproducible results. We discuss the design and implementation of a LabVIEW-based computer program that automatically and precisely controls the delivery and manipulation of gases and samples while monitoring gas pressures, environmental temperature, and RF sample irradiation. We show that automated control over the hyperpolarization process results in the hyperpolarization of hydroxyethylpropionate. The implementation of this software enables fast prototyping of PHIP instrumentation for the evaluation of a myriad of ¹³C-based endogenous contrast agents used in molecular imaging.
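The paper's contribution is replacing manual timing with automated, repeatable sequencing. A minimal sketch of such a timed step sequencer, with a hypothetical `Hardware` stub standing in for the real LabVIEW instrument drivers; all step names and dwell times here are illustrative, not taken from the paper:

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    action: Callable[[], None]
    duration_s: float            # dwell time after the action fires

@dataclass
class Sequencer:
    steps: List[Step]
    log: List[str] = field(default_factory=list)

    def run(self, time_scale: float = 1.0) -> None:
        """Execute every step in order with deterministic timing."""
        for step in self.steps:
            step.action()
            self.log.append(step.name)
            time.sleep(step.duration_s * time_scale)

# Hypothetical hardware stub; a real system would drive valves and an RF coil.
class Hardware:
    def __init__(self):
        self.events = []
    def open_gas_valve(self):  self.events.append("gas_on")
    def close_gas_valve(self): self.events.append("gas_off")
    def fire_rf_pulse(self):   self.events.append("rf")

hw = Hardware()
seq = Sequencer([
    Step("deliver parahydrogen", hw.open_gas_valve, 2.0),
    Step("stop gas flow",        hw.close_gas_valve, 0.5),
    Step("RF irradiation",       hw.fire_rf_pulse,   1.0),
])
seq.run(time_scale=0.0)   # time_scale=0 for a dry run
```

The point of encoding the protocol as data is that every run executes the identical sequence, which is precisely the reproducibility the manual instruments lacked.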
From synchrotron radiation to lab source: advanced speckle-based X-ray imaging using abrasive paper
NASA Astrophysics Data System (ADS)
Wang, Hongchang; Kashyap, Yogesh; Sawhney, Kawal
2016-02-01
X-ray phase and dark-field imaging techniques provide complementary information that is inaccessible to conventional X-ray absorption or visible-light imaging. However, such methods typically require sophisticated experimental apparatus or X-ray beams with specific properties. Recently, an X-ray speckle-based technique has shown great potential for X-ray phase and dark-field imaging using a simple experimental arrangement. However, it still suffers from either poor resolution or the time-consuming process of collecting a large number of images. To overcome these limitations, in this report we demonstrate that absorption, dark-field, phase-contrast, and two orthogonal differential phase-contrast images can be generated simultaneously by scanning a piece of abrasive paper in only one direction. We propose a novel theoretical approach to quantitatively extract the above five images by utilising the remarkable properties of speckles. Importantly, the technique has been extended from a synchrotron light source to a lab-based microfocus X-ray source and flat-panel detector. Removing the need to raster the optics in two directions significantly reduces the acquisition time and absorbed dose, which can be of vital importance for many biological samples. This new imaging method could potentially provide a breakthrough for numerous practical imaging applications in biomedical research and materials science.
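Speckle-based methods recover the differential phase signal from local displacements of a reference speckle pattern, typically found by a cross-correlation search. A toy 1-D illustration of that core step (integer shifts only and a synthetic pattern; the authors' actual scheme works in 2-D with subpixel precision):

```python
import random

def track_shift(ref, sample, max_shift):
    """Return the integer shift of `sample` relative to `ref` that
    maximises the zero-mean cross-correlation over the overlap."""
    def score(shift):
        pairs = [(ref[i], sample[i + shift])
                 for i in range(len(ref))
                 if 0 <= i + shift < len(sample)]
        mr = sum(r for r, _ in pairs) / len(pairs)
        ms = sum(s for _, s in pairs) / len(pairs)
        return sum((r - mr) * (s - ms) for r, s in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

random.seed(7)
speckle = [random.random() for _ in range(200)]   # synthetic speckle row
shifted = speckle[3:] + speckle[:3]               # pattern displaced by 3 samples
recovered = track_shift(speckle, shifted, 10)     # expect -3
```

In the real technique the recovered displacement map, taken along each scan direction, yields the two orthogonal differential phase images, while the loss of speckle visibility yields the dark-field image.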
Hybrid imaging: a quantum leap in scientific imaging
NASA Astrophysics Data System (ADS)
Atlas, Gene; Wadsworth, Mark V.
2004-01-01
ImagerLabs has advanced its patented next-generation imaging technology, called the Hybrid Imaging Technology (HIT), which offers scientific-quality performance. The key to HIT is the merging of the CCD and CMOS technologies through hybridization rather than process integration. HIT offers the exceptional quantum efficiency (QE), fill factor, broad spectral response and very low noise of the CCD. In addition, it provides the very high-speed readout, low power, high linearity and high integration capability of CMOS sensors. In this work, we present the benefits of this technology and report the latest advances in its performance.
Lithospheric Deformation Along the Southern and Western Suture Zones of the Wyoming Province
NASA Astrophysics Data System (ADS)
Nuyen, C.; Porritt, R. W.; O'Driscoll, L.
2014-12-01
The Wyoming Province is an Archean craton that played an early role in the construction and growth of the North American continent. This region, which encompasses the majority of modern-day Wyoming and southern Montana, initially collided with other Archean blocks in the Paleoproterozoic (2.0-1.8 Ga), creating the Canadian Shield. From 1.8-1.68 Ga, the Yavapai Province collided with the Wyoming Province, suturing the two together. The accretion of the Yavapai Province gave rise to the Cheyenne Belt, a deformational zone along the southern border of the Wyoming Province where earlier studies have found evidence for crustal imbrication and a double Moho. Current deformation within the Wyoming Province is due to its interaction with the Yellowstone Hotspot, which is currently located in the northwest portion of the region. This study images the lithosphere-asthenosphere boundary (LAB) along the western and southern borders of the Wyoming Province in order to understand how the region's Archean lithosphere has responded to deformation over time. These results shed light on the inherent strength of Archean cratonic lithosphere in general. We employ two methods for this study: common conversion point (CCP) stacking of S-to-P receiver functions and teleseismic and ambient-noise Rayleigh wave dispersion. The former is used to image the LAB structure while the latter is used to create a velocity gradient for the region. Results from both methods reveal a notably shallower LAB depth to the west of the boundary. The shallower LAB west of the Wyoming Province is interpreted to be a result of lithospheric thinning due to the region's interaction with the Yellowstone Hotspot and post-Laramide deformation and extension of the western United States. We interpret the deeper LAB east of the boundary to be evidence for the Wyoming Province's resistance to lithospheric deformation from the hotspot and tectonic processes.
CCP images across the Cheyenne Belt also reveal a shallower LAB under the western perimeter of the belt. We believe that this is a result of the LAB jumping up to a mid-lithospheric discontinuity (MLD) as the less stable lower lithosphere was thinned or removed. This same MLD appears above the intact LAB in the eastern portion of the Cheyenne Belt. This suggests that the western end of the Cheyenne Belt has undergone more deformation over time than the eastern end.
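The CCP stacking used above can be reduced to its essential step: map each receiver-function arrival to a conversion depth and average amplitudes in depth bins, so that a coherent negative Sp pulse marks the LAB. A simplified sketch with a constant-velocity depth mapping and synthetic picks; real implementations migrate arrivals through a layered velocity model:

```python
import random

def ccp_stack(picks, bin_km=10.0, max_depth_km=300.0):
    """Stack (depth_km, amplitude) picks from many receiver functions
    into common-conversion-point depth bins; returns bin centers and means."""
    nbins = int(max_depth_km / bin_km)
    sums, counts = [0.0] * nbins, [0] * nbins
    for depth, amp in picks:
        b = int(depth / bin_km)
        if 0 <= b < nbins:
            sums[b] += amp
            counts[b] += 1
    centers = [(b + 0.5) * bin_km for b in range(nbins)]
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return centers, means

# Synthetic example: noisy negative Sp arrivals clustered near a 150 km LAB,
# on top of weak random background picks.
random.seed(1)
picks = [(150 + random.uniform(-4, 4), -1.0) for _ in range(50)]
picks += [(random.uniform(0, 300), random.uniform(-0.1, 0.1)) for _ in range(200)]
centers, means = ccp_stack(picks)
lab_depth = centers[min(range(len(means)), key=lambda b: means[b])]
```

The stack suppresses incoherent noise because it averages out across bins, while the converted-phase amplitude adds coherently at the true boundary depth.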
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-052). March 2005. LOCAL INJECTOR, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-066). March 2005. LOCAL INJECTOR, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Applications in Digital Image Processing
ERIC Educational Resources Information Center
Silverman, Jason; Rosen, Gail L.; Essinger, Steve
2013-01-01
Students are immersed in a mathematically intensive, technological world. They engage daily with iPods, HDTVs, and smartphones--technological devices that rely on sophisticated but accessible mathematical ideas. In this article, the authors provide an overview of four lab-type activities that have been used successfully in high school mathematics…
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-107). March 2005. NORTH FAN, FAN ROOM, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-106). March 2005. SOUTH FAN, FAN ROOM, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-087). March 2005. GENERATOR PIT AREA, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-054). March 2005. LOCAL INJECTOR ENTERING SHIELDING, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-027). March 2005. MOUSE AT EAST TANGENT, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Imaging performance of LabPET APD-based digital PET scanners for pre-clinical research
NASA Astrophysics Data System (ADS)
Bergeron, Mélanie; Cadorette, Jules; Tétrault, Marc-André; Beaudoin, Jean-François; Leroux, Jean-Daniel; Fontaine, Réjean; Lecomte, Roger
2014-02-01
The LabPET is an avalanche photodiode (APD) based digital PET scanner with quasi-individual detector read-out and a highly parallel electronic architecture for high-performance in vivo molecular imaging of small animals. The scanner is based on LYSO and LGSO scintillation crystals (2 × 2 × 12/14 mm³), assembled side-by-side in phoswich pairs read out by an APD. High spatial resolution is achieved through the independent read-out of each APD detector for recording impinging annihilation photons. The LabPET exists in three versions: the LabPET4 (3.75 cm axial length), LabPET8 (7.5 cm axial length) and LabPET12 (11.4 cm axial length). This paper focuses on the systematic characterization of the three LabPET versions using two different energy window settings to implement a high-efficiency mode (250-650 keV) and a high-resolution mode (350-650 keV) in the most suitable operating conditions. Prior to measurements, a global timing alignment of the scanners and optimization of the APD operating bias were carried out. Characteristics such as spatial resolution, absolute sensitivity, count rate performance and image quality have been thoroughly investigated following the NEMA NU 4-2008 protocol. Phantom and small-animal images were acquired to assess the scanners' suitability for the most demanding imaging tasks in preclinical biomedical research. The three systems achieve the same radial FBP spatial resolution at 5 mm from the field-of-view center: 1.65/3.40 mm (FWHM/FWTM) for an energy threshold of 250 keV and 1.51/2.97 mm for an energy threshold of 350 keV. The absolute sensitivity for an energy window of 250-650 keV is 1.4%/2.6%/4.3% for the LabPET4/8/12, respectively. The best count rate performance, peaking at 362 kcps, is achieved by the LabPET12 with an energy window of 250-650 keV and a mouse phantom (2.5 cm diameter) at an activity of 2.4 MBq ml⁻¹.
With the same phantom, the scatter fraction for all scanners is about 17% for an energy threshold of 250 keV and 10% for an energy threshold of 350 keV. The results obtained with two energy window settings confirm the relevance of high-efficiency and high-resolution operating modes to take full advantage of the imaging capabilities of the LabPET scanners for molecular imaging applications.
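The FWHM/FWTM values quoted follow the NEMA NU 4-2008 convention: the width of the reconstructed point-spread profile at 50% and 10% of its peak, with linear interpolation between neighbouring samples. A sketch of that measurement (the Gaussian test profile is illustrative, not LabPET data):

```python
import math

def width_at_fraction(xs, ys, frac):
    """Width of a single-peaked profile at `frac` of its maximum,
    using linear interpolation between neighbouring samples."""
    peak = max(ys)
    level = frac * peak
    def crossing(index_pairs):
        for i, j in index_pairs:
            if (ys[i] - level) * (ys[j] - level) <= 0 and ys[i] != ys[j]:
                t = (level - ys[i]) / (ys[j] - ys[i])
                return xs[i] + t * (xs[j] - xs[i])
        raise ValueError("no crossing found")
    k = ys.index(peak)
    left = crossing([(i, i + 1) for i in range(k)])                       # scan from the left
    right = crossing([(i, i + 1) for i in range(len(ys) - 2, k - 1, -1)]) # scan from the right
    return right - left

# Gaussian test profile: FWHM should be ~2.355 sigma, FWTM ~4.292 sigma.
sigma = 1.5
xs = [i * 0.05 - 10 for i in range(401)]
ys = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
fwhm = width_at_fraction(xs, ys, 0.5)
fwtm = width_at_fraction(xs, ys, 0.1)
```

For a Gaussian the two widths are fixed multiples of sigma, which makes this a convenient self-check before applying the routine to measured point-source profiles.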
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hendricks, J. Lynne; Whalen, Mike F.; Bodis, James R.; Martin, Katherine
1996-01-01
This article describes the commercial implementation of ultrasonic velocity imaging methods developed and refined at NASA Lewis Research Center on the Sonix c-scan inspection system. Two velocity imaging methods were implemented: thickness-based and non-thickness-based reflector plate methods. The article demonstrates capabilities of the commercial implementation and gives the detailed operating procedures required for Sonix customers to achieve optimum velocity imaging results. This commercial implementation of velocity imaging provides a 100x speed increase in scanning and processing over the lab-based methods developed at LeRC. The significance of this cooperative effort is that the aerospace and other materials development-intensive industries which use extensive ultrasonic inspection for process control and failure analysis will now have an alternative, highly accurate imaging method commercially available.
2004-02-04
KENNEDY SPACE CENTER, FLA. - Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu oversees the image lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.
A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT.
Badea, Cristian T; Hedlund, Laurence W; Johnson, G Allan
2013-01-01
CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging.
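The subtraction at the heart of DSA is usually performed on log-transformed intensities: Beer-Lambert attenuation is multiplicative in intensity but additive in the log domain, so static anatomy present in both the pre-contrast mask and the contrast frame cancels exactly. A toy sketch with 1 x 4 'images' and made-up attenuation values:

```python
import math

def dsa_subtract(contrast, mask):
    """Per-pixel log subtraction: positive values mark added attenuation
    (iodinated vessels); static anatomy cancels out."""
    return [[math.log(m) - math.log(c) for c, m in zip(crow, mrow)]
            for crow, mrow in zip(contrast, mask)]

# Toy frames: detector intensity I = I0 * exp(-mu*t) is multiplicative.
I0 = 1000.0
bone = math.exp(-1.2)      # static anatomy, present in both frames
vessel = math.exp(-0.8)    # iodine, present only in the contrast frame
mask     = [[I0 * bone, I0 * bone,          I0,          I0]]
contrast = [[I0 * bone, I0 * bone * vessel, I0 * vessel, I0]]
dsa = dsa_subtract(contrast, mask)
```

Note that the vessel pixel over bone and the vessel pixel over soft background come out identical (0.8, the iodine attenuation), which is why log subtraction, rather than plain intensity subtraction, is the standard formulation.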
NASA Astrophysics Data System (ADS)
Kurtz, N.; Marks, N.; Cooper, S. K.
2014-12-01
Scientific ocean drilling through the International Ocean Discovery Program (IODP) has contributed extensively to our knowledge of Earth systems science. However, many of its methods and discoveries can seem abstract and complicated to students. Collaborations between scientists and educators/artists to create accurate yet engaging demonstrations and activities have been crucial to increasing understanding and stimulating interest in fascinating geological topics. One such collaboration, which came out of Expedition 345 to the Hess Deep Rift, resulted in an interactive lab that explores sampling rocks from the usually inaccessible lower oceanic crust, offering insight into the geological processes that form the structure of the Earth's crust. This Hess Deep Interactive Lab aims to explain several significant discoveries made by oceanic drilling, utilizing images of actual thin sections and core samples recovered from IODP expeditions. Participants can interact with a physical model to learn about the coring and drilling processes and gain an understanding of seafloor structures. The lab grew out of the need to explain fundamental notions of the ocean crust formed at fast-spreading ridges. A complementary interactive online lab can be accessed at www.joidesresolution.org for students to engage further with these concepts. This project explores the relationship between physical and online models to further understanding, including what we can learn from the pros and cons of each.
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-110). March 2005. SOUTH FAN FROM MEZZANINE, FAN ROOM, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-143). March 2005. BUILDING 51A, EXTERIOR WALL, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-006). March 2005. JACKBOLTS BETWEEN MAGNET AND MAGNET FOUNDATION, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-047). March 2005. AREA OF MAGNET REMOVAL, NORTHEAST QUADRANT, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-043). March 2005. MOUSE AT EAST TANGENT, PLUNGING MECHANISM, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-077). March 2005. STUB OF SUPERHILAC BEAM, ENTERING SHIELDING, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-012). March 2005. PASSAGEWAY UNDER QUADRANT AND DIFFUSION PUMPS, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-050). March 2005. DIFFUSION PUMPS UNDER WEST TANGENT, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Land classification of south-central Iowa from computer enhanced images
NASA Technical Reports Server (NTRS)
Lucas, J. R.; Taranik, J. V.; Billingsley, F. C. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes because these images could be photographically printed at large scales such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low-cost photographic processing systems for color printing have proved effective in the utilization of computer-enhanced LANDSAT products for land classification purposes. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black-and-white photo lab. The necessary technical expertise can be acquired by reading a color printing and processing manual.
Spectral gamuts and spectral gamut mapping
NASA Astrophysics Data System (ADS)
Rosen, Mitchell R.; Derhak, Maxim W.
2006-01-01
All imaging devices have two gamuts: the stimulus gamut and the response gamut. The response gamut of a print engine is typically described in CIE colorimetry units, a system derived to quantify human color response. More fundamental than colorimetric gamuts are spectral gamuts, based on radiance, reflectance or transmittance units. Spectral gamuts depend on the physics of light or on how materials interact with light and do not involve the human's photoreceptor integration or brain processing. Methods for visualizing a spectral gamut raise challenges as do considerations of how to utilize such a data-set for producing superior color reproductions. Recent work has described a transformation of spectra reduced to 6-dimensions called LabPQR. LabPQR was designed as a hybrid space with three explicit colorimetric axes and three additional spectral reconstruction axes. In this paper spectral gamuts are discussed making use of LabPQR. Also, spectral gamut mapping is considered in light of the colorimetric-spectral duality of the LabPQR space.
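The LabPQR idea (three colorimetric axes plus three spectral-reconstruction axes) can be illustrated with any orthonormal basis: projecting a spectrum onto six vectors reconstructs it better than onto the first three alone, and the extra coefficients carry the spectral detail that colorimetry discards. A sketch using a cosine basis as a stand-in; this is not the actual LabPQR transform, whose axes are derived from colorimetry and measured spectra:

```python
import math

N = 16  # wavelength samples (stand-in for, e.g., 380-730 nm sampling)

def basis_vec(k):
    """Orthonormal cosine (DCT-II) basis over N samples; a stand-in for
    the real colorimetric + PQR axes."""
    if k == 0:
        return [1 / math.sqrt(N)] * N
    return [math.sqrt(2 / N) * math.cos(math.pi * k * (i + 0.5) / N)
            for i in range(N)]

def project(spectrum, nvecs):
    """Project onto the first `nvecs` basis vectors and reconstruct."""
    recon = [0.0] * N
    for k in range(nvecs):
        b = basis_vec(k)
        c = sum(s * bi for s, bi in zip(spectrum, b))   # coefficient
        recon = [r + c * bi for r, bi in zip(recon, b)]
    return recon

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# A smooth reflectance-like spectrum (baseline plus a broad peak).
spectrum = [0.2 + 0.6 * math.exp(-((i - 10) / 4) ** 2) for i in range(N)]
err3 = rms_error(spectrum, project(spectrum, 3))   # "colorimetric" part only
err6 = rms_error(spectrum, project(spectrum, 6))   # adding three PQR-like axes
```

The gap between err3 and err6 is the spectral information that a purely colorimetric gamut description cannot represent, which is what the PQR axes are designed to retain.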
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection XBD200503-00117-089). March 2005. GENERATOR PIT AREA, CONCRETE FOUNDATION FOR EQUIPMENT MOUNTS, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-082). June 2005. CEILING AND CRANE OF BUILDING 51A, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-108). March 2005. FAN ROOM WITH STAIR TO FILTER BANKS, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-158). March 2005. CONNECTION OF MAGNET ROOM CRANE TO OUTER TRACK, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-015). March 2005. INTERIOR WALL OF MAGNET INSIDE CENTER OF BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-004). March 2005. ENTRY TO IGLOO, ILLUSTRATING THICKNESS OF IGLOO WALL, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-026). March 2005. MOUSE AT EAST TANGENT, LOOKING TOWARD EAST TANGENT, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-005). March 2005. PASSAGEWAY UNDER SOUTHEAST QUADRANT, AIR DUCT OPENINGS, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
NASA Technical Reports Server (NTRS)
2001-01-01
Image of a soot (smoke) plume made for the Laminar Soot Processes (LSP) experiment during the Microgravity Sciences Lab-1 mission in 1997. LSP-2 will fly on the STS-107 Research 1 mission in 2002. The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner, similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in.) long. Diagnostics include color TV cameras, a temperature sensor, and laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, OH, manages the project.
A Deep Narrowband Imaging Search for C IV and He II Emission from Lyα Blobs
NASA Astrophysics Data System (ADS)
Arrigoni Battaia, Fabrizio; Yang, Yujin; Hennawi, Joseph F.; Prochaska, J. Xavier; Matsuda, Yuichi; Yamada, Toru; Hayashino, Tomoki
2015-05-01
We conduct a deep narrowband imaging survey of 13 Lyα blobs (LABs) located in the SSA22 proto-cluster at z ∼ 3.1 in the C IV and He II emission lines in an effort to constrain the physical process powering the Lyα emission in LABs. Our observations probe down to unprecedented surface brightness (SB) limits of (2.1-3.4) × 10⁻¹⁸ erg s⁻¹ cm⁻² arcsec⁻² per 1 arcsec² aperture (5σ) for the He II λ1640 and C IV λ1549 lines, respectively. We do not detect extended He II and C IV emission in any of the LABs, placing strong upper limits on the He II/Lyα and C IV/Lyα line ratios, of 0.11 and 0.16, for the brightest two LABs in the field. We conduct detailed photoionization modeling of the expected line ratios and find that, although our data constitute the deepest ever observations of these lines, they are still not deep enough to rule out a scenario where the Lyα emission is powered by the ionizing radiation from an obscured active galactic nucleus. Our models can accommodate He II/Lyα and C IV/Lyα ratios as low as ≃0.05 and ≃0.07, respectively, implying that one needs to reach SB as low as (1-1.5) × 10⁻¹⁸ erg s⁻¹ cm⁻² arcsec⁻² (at 5σ) in order to rule out a photoionization scenario. These depths will be achievable with the new generation of image-slicing integral field units such as the Multi Unit Spectroscopic Explorer (MUSE) on the VLT and the Keck Cosmic Web Imager (KCWI). We also model the expected He II/Lyα and C IV/Lyα ratios in a different scenario, where the Lyα emission is powered by shocks generated in a large-scale superwind, but find that our observational constraints can only be met for shock velocities v_s ≳ 250 km s⁻¹, which appear to be in conflict with recent observations of quiescent kinematics in LABs.
How to Build a Hybrid Neurofeedback Platform Combining EEG and fMRI
Mano, Marsel; Lécuyer, Anatole; Bannier, Elise; Perronnet, Lorraine; Noorzadeh, Saman; Barillot, Christian
2017-01-01
Multimodal neurofeedback estimates brain activity using information acquired with more than one neurosignal measurement technology. In this paper we describe how to set up and use a hybrid platform based on simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), then illustrate how to use it to conduct bimodal neurofeedback experiments. The paper is intended for those who wish to build a multimodal neurofeedback system, guiding them through the different steps of design, setup, and experimental application, and helping them choose a suitable hardware and software configuration. Furthermore, it reports practical information from bimodal neurofeedback experiments conducted in our lab. The platform presented here has a modular parallel-processing architecture that promotes real-time signal processing performance and simple future addition and/or replacement of processing modules. Various unimodal and bimodal neurofeedback experiments conducted in our lab have shown high performance and accuracy. Currently, the platform provides neurofeedback based on electroencephalography and functional magnetic resonance imaging, but the architecture and working principles described here are valid for any other combination of two or more real-time brain activity measurement technologies. PMID:28377691
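The modular parallel architecture described above can be sketched as one producer per modality feeding a fusion stage through queues. Everything in this sketch (feature values, weights, function names) is an illustrative stand-in for the platform's real acquisition and processing modules:

```python
import threading
import queue

def acquire(samples, out_q):
    """Producer: pushes one modality's feature samples, then a sentinel."""
    for s in samples:
        out_q.put(s)
    out_q.put(None)

def fuse(eeg_q, fmri_q, weights=(0.5, 0.5)):
    """Consumer: combines paired EEG/fMRI features into one feedback score."""
    feedback = []
    while True:
        e, f = eeg_q.get(), fmri_q.get()
        if e is None or f is None:
            return feedback
        feedback.append(weights[0] * e + weights[1] * f)

eeg_q, fmri_q = queue.Queue(), queue.Queue()
# Hypothetical pre-extracted features (e.g. band power, ROI activation).
t1 = threading.Thread(target=acquire, args=([1.0, 2.0, 3.0], eeg_q))
t2 = threading.Thread(target=acquire, args=([0.4, 0.6, 0.8], fmri_q))
t1.start(); t2.start()
scores = fuse(eeg_q, fmri_q)
t1.join(); t2.join()
```

Because each modality runs in its own thread and hands data over via a queue, a module can be replaced (or a third modality added) without touching the fusion stage, which mirrors the modularity argument made in the paper.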
Photocopy of photograph (digital image maintained in LBNL Photo Lab ...
Photocopy of photograph (digital image maintained in LBNL Photo Lab Collection, XBD200503-00117-176). March 2005. CENTRAL COLUMN SUPPORT TO ROOF SHOWING CRANES CENTER SUPPORT TRACK, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-046). March 2005. ROOF SHIELDING BLOCK AND I-BEAM SUPPORT CONSTRUCTION, CENTER OF BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-129). March 2005. ENTRY TO ROOM 24, MAIN FLOOR, OFFICE-AND-SHOPS SECTION, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-009). March 2005. OPENINGS OF AIR DUCTS INTO PASSAGEWAY UNDER SOUTHEAST QUADRANT, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
PIFEX: An advanced programmable pipelined-image processor
NASA Technical Reports Server (NTRS)
Gennery, D. B.; Wilcox, B.
1985-01-01
PIFEX is a pipelined-image processor being built in the JPL Robotics Lab. It will operate on digitized raster-scanned images (at 60 frames per second for images up to about 300 by 400 pixels and at lesser rates for larger images), performing a variety of operations simultaneously under program control. It is thus a powerful, flexible tool for image processing and low-level computer vision. It also has applications in other two-dimensional problems such as route planning for obstacle avoidance and the numerical solution of two-dimensional partial differential equations (although its low numerical precision limits its use in the latter field). The concept and design of PIFEX are described herein, and some examples of its use are given.
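The pipelined-processing idea can be illustrated with a software analogy: each stage is a small local operator, and a frame flows through the chained stages in order. The specific stages below (a box smooth and a threshold) are illustrative, not PIFEX's actual instruction set.

```python
# Software sketch of a stage-chained image pipeline (an analogy to
# hardware pipelined processing, not the PIFEX design itself).
import numpy as np

def smooth(frame):
    # 3x3 box filter via shifted sums, a typical low-level pipeline stage
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    acc = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return acc / 9.0

def threshold(frame, level):
    return (frame > level).astype(np.uint8)

def run_pipeline(frame, stages):
    for stage in stages:
        frame = stage(frame)
    return frame

frame = np.zeros((8, 8))
frame[3:5, 3:5] = 9.0          # a bright 2x2 feature
out = run_pipeline(frame, [smooth, lambda f: threshold(f, 3.0)])
print(int(out.sum()))          # count of pixels surviving the pipeline
```

In hardware the stages run concurrently on successive pixels of the raster stream, which is how a device like PIFEX sustains frame-rate throughput.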
PScan 1.0: flexible software framework for polygon based multiphoton microscopy
NASA Astrophysics Data System (ADS)
Li, Yongxiao; Lee, Woei Ming
2016-12-01
Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy offers significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, several flexible software platforms catered to custom-built microscopy systems, e.g. ScanImage, HelioScan, and MicroManager, perform at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB image processing toolbox. It has the capability to communicate directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and contains sufficient flexibility for users to adapt it to their high speed imaging systems.
Wang, Ke; Zhao, Yang; Chen, Deyong; Huang, Chengjun; Fan, Beiyuan; Long, Rong; Hsieh, Chia-Hsun; Wang, Junbo; Wu, Min-Hsien; Chen, Jian
2017-06-19
This paper presents the instrumentation of a microfluidic analyzer enabling the characterization of single-cell biophysical properties, which includes seven key components: a microfluidic module, a pressure module, an imaging module, an impedance module, two LabVIEW platforms for instrument operation and raw data processing, respectively, and a Python code for data translation. Under the control of the LabVIEW platform for instrument operation, the pressure module flushes single cells into the microfluidic module with raw biophysical parameters sampled by the imaging and impedance modules and processed by the LabVIEW platform for raw data processing, which were further translated into intrinsic cellular biophysical parameters using the code developed in Python. Based on this system, specific membrane capacitance, cytoplasm conductivity, and instantaneous Young's modulus of three cell types were quantified as 2.76 ± 0.57 μF/cm², 1.00 ± 0.14 S/m, and 3.79 ± 1.11 kPa for A549 cells ( n cell = 202); 1.88 ± 0.31 μF/cm², 1.05 ± 0.16 S/m, and 3.74 ± 0.75 kPa for 95D cells ( n cell = 257); 2.11 ± 0.38 μF/cm², 0.87 ± 0.11 S/m, and 5.39 ± 0.89 kPa for H460 cells ( n cell = 246). As a semi-automatic instrument with a throughput of roughly 1 cell per second, this prototype instrument can be potentially used for the characterization of cellular biophysical properties.
2004-02-04
KENNEDY SPACE CENTER, FLA. - Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Behind him at right is Mike Rein, External Affairs division chief. Oliu oversees the image lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.
Note: A simple image processing based fiducial auto-alignment method for sample registration.
Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne
2015-08-01
A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
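The "locate and measure the diameter and center point of circular fiducials" step can be sketched with plain array operations: threshold the image, take the intensity centroid as the center, and estimate the diameter from the blob area (the diameter of the equal-area circle). This is a hedged stand-in for the actual MATLAB/LabVIEW routines, which are not reproduced here.

```python
# Hypothetical sketch of circular-fiducial location and measurement.
import numpy as np

def locate_fiducial(image, level=0.5):
    mask = image > level
    ys, xs = np.nonzero(mask)
    center = (ys.mean(), xs.mean())           # intensity centroid
    diameter = 2.0 * np.sqrt(mask.sum() / np.pi)  # equal-area circle
    return center, diameter

# Synthetic circular fiducial of radius 10 centered at (32, 40):
yy, xx = np.mgrid[0:64, 0:64]
img = ((yy - 32) ** 2 + (xx - 40) ** 2 <= 10 ** 2).astype(float)
(cy, cx), diam = locate_fiducial(img)
print(cy, cx, round(diam, 1))
```

Comparing the recovered diameter against the known fiducial size gives the distance self-calibration factor mentioned in the note; repeating the measurement after each stage move gives the iterative alignment loop.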
Advanced Digital Signal Processing for Hybrid Lidar FY 2014
2014-10-30
processing steps on raw data, with a PC running LabVIEW performing the final calculations to obtain range measurements. A MATLAB-based system...regarding the object, and it reduces the image contrast and resolution as well as the object ranging measurement accuracy. There have been various...frequency (>100 MHz) approach that uses high speed modulation to help suppress backscatter while also providing an unambiguous range measurement. In general
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-035). March 2005. WEST TANGENT VIEWED FROM INTERIOR OF BEVATRON. EQUIPMENT ACCESS STAIRWAY ON LEFT - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-034). March 2005. MOUSE AT EAST TANGENT WITH COVER CLOSED, LOOKING TOWARD CENTER IGLOO, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-031). March 2005. MOUSE AT EAST TANGENT, WITH COVER OPEN, LOOKING TOWARD CENTER IGLOO, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Imaging the Moon II: Webcam CCD Observations and Analysis (a Two-Week Lab for Non-Majors)
NASA Astrophysics Data System (ADS)
Sato, T.
2014-07-01
Imaging the Moon is a successful two-week lab involving real sky observations of the Moon in which students make telescopic observations and analyze their own images. Originally developed around the 35 mm film camera, a common household object adapted for astronomical work, the lab now uses webcams as film photography has evolved into an obscure specialty technology and increasing numbers of students have little familiarity with it. The printed circuit board with the CCD is harvested from a commercial webcam and affixed to a tube to mount on a telescope in place of an eyepiece. Image frames are compiled to form a lunar mosaic, and crater sizes are measured. Students also work through the logistical steps of telescope time assignment and scheduling. They learn to keep a schedule and work with uncertainties of weather in ways paralleling research observations. Because there is no need for a campus observatory, this lab can be replicated at a wide variety of institutions.
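The crater-measurement step in this lab reduces to small-angle geometry: a crater's extent in pixels, multiplied by the camera plate scale, gives an angle, which the Earth-Moon distance converts to kilometers. The plate scale and pixel count below are illustrative, not values from the lab write-up.

```python
# Back-of-the-envelope crater sizing from a webcam lunar image.
# The plate scale (arcsec/pixel) and pixel count are assumed examples.
import math

MOON_DISTANCE_KM = 384_400  # mean Earth-Moon distance

def crater_size_km(pixels, arcsec_per_pixel):
    angle_rad = pixels * arcsec_per_pixel * math.pi / (180 * 3600)
    return MOON_DISTANCE_KM * angle_rad  # small-angle approximation

# A crater spanning 50 pixels at 1.0 arcsec/pixel subtends 50 arcsec:
size = crater_size_km(50, 1.0)
print(round(size))  # about 93 km at the mean lunar distance
```

Students calibrate the plate scale from a feature of known size (or the known lunar diameter) before applying it to unknown craters.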
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200506-00218-12). June 2005. DEEP TUNNEL INTO FOUNDATION UNDER BEVATRON, VIEW OF CART ON RAILS FOR TRANSPORTING EQUIPMENT - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-049). March 2005. TUNNEL ENTRY FROM MAIN FLOOR OF MAGNET ROOM INTO CENTER OF BEVATRON, BENEATH SOUTHWEST QUADRANT - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Quantification of Confocal Images Using LabVIEW for Tissue Engineering Applications
Sfakis, Lauren; Kamaldinov, Tim; Larsen, Melinda; Castracane, James
2016-01-01
Quantifying confocal images to enable location of specific proteins of interest in three dimensions (3D) is important for many tissue engineering (TE) applications. Quantification of protein localization is essential for evaluation of specific scaffold constructs for cell growth and differentiation for application in TE and tissue regeneration strategies. Although obtaining information regarding protein expression levels is important, the location of proteins within cells grown on scaffolds is often the key to evaluating scaffold efficacy. Functional epithelial cell monolayers must be organized with apicobasal polarity with proteins specifically localized to the apical or basolateral regions of cells in many organs. In this work, a customized program was developed using the LabVIEW platform to quantify protein positions in Z-stacks of confocal images of epithelial cell monolayers. The program's functionality is demonstrated through salivary gland TE, since functional salivary epithelial cells must correctly orient many proteins on the apical and basolateral membranes. Bio-LabVIEW Image Matrix Evaluation (Bio-LIME) takes 3D information collected from confocal Z-stack images and processes the fluorescence at each pixel to determine cell heights, nuclei heights, nuclei widths, protein localization, and cell count. As a demonstration of its utility, Bio-LIME was used to quantify the 3D location of the Zonula occludens-1 protein contained within tight junctions and its change in 3D position in response to chemical modification of the scaffold with laminin. Additionally, Bio-LIME was used to demonstrate that there is no advantage of sub-100 nm poly(lactic-co-glycolic acid) nanofibers over 250 nm fibers for epithelial apicobasal polarization. Bio-LIME will be broadly applicable for quantification of proteins in 3D that are grown in many different contexts. PMID:27758134
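The per-pixel Z-stack processing described above can be illustrated with a much-simplified stand-in: estimate the height of a protein signal at each (x, y) pixel as the fluorescence-weighted centroid along z. This is a hypothetical simplification for illustration, not the actual Bio-LIME algorithm.

```python
# Simplified sketch of per-pixel height estimation from a confocal Z-stack.
import numpy as np

def protein_height_map(zstack, z_step_um):
    """zstack: array of shape (nz, ny, nx); returns height map in microns,
    computed as the fluorescence-weighted centroid along z."""
    nz = zstack.shape[0]
    z = np.arange(nz).reshape(nz, 1, 1) * z_step_um
    total = zstack.sum(axis=0)
    return (zstack * z).sum(axis=0) / np.maximum(total, 1e-12)

# Toy stack: all signal in slice 4 of 8 with 0.5 um steps -> 2.0 um height
stack = np.zeros((8, 2, 2))
stack[4] = 1.0
heights = protein_height_map(stack, 0.5)
print(heights[0, 0])
```

Comparing such height maps for an apical marker against the cell and nucleus heights is the kind of readout that distinguishes apical from basolateral localization.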
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200503-00117-139). March 2005. TOP OF BEVATRON, INCLUDING WOOD STAIRWAY FROM OUTER EDGE OF SHIELDING TO TOP OF ROOF BLOCK SHIELDING - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Crazy Engineering Starshade and Coronagraph
2016-04-26
Episode 7 of Crazy Engineering series. Host Mike Meacham, Mechanical Engineer at JPL, learns about the two technologies NASA is investing in to image exoplanets: the Starshade and the Coronagraph. Mike interviews Nick Siegler, Program Chief Technologist, NASA Exoplanet Program in the Starshade lab and the High Contrast Imaging Testbed lab.
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200506-00198-11). June 2005. DUCTWORK BETWEEN FAN ROOM AND PASSAGEWAY UNDER BEVATRON, NORTH SIDE OF ROOM 10, MAIN FLOOR, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Photocopy of photograph (digital image located in LBNL Photo Lab ...
Photocopy of photograph (digital image located in LBNL Photo Lab Collection, XBD200506-00198-08). June 2005. DUCTWORK BETWEEN FAN ROOM AND PASSAGEWAY UNDER BEVATRON, SOUTH SIDE OF ROOM 10, MAIN FLOOR, BEVATRON - University of California Radiation Laboratory, Bevatron, 1 Cyclotron Road, Berkeley, Alameda County, CA
Breast cancer histopathology image analysis: a review.
Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A
2014-05-01
This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining, and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients.
Recent Applications of Neutron Imaging Methods
NASA Astrophysics Data System (ADS)
Lehmann, E.; Mannes, D.; Kaestner, A.; Grünzweig, C.
Methodological progress in neutron imaging is evident across the field, though at different levels in individual labs. Access to the most suitable beam ports, the use of advanced imaging detector systems, and professional image processing have made the technique competitive with other non-destructive tools such as X-ray imaging. Based on this performance gain and on new methodological approaches, several new application fields have emerged in addition to the already established ones. Accordingly, new image data are now mostly available in three dimensions, in the format of tomography volumes. The radiography mode is still the basis of neutron imaging, but the information extracted from superimposed image data (as with a grating interferometer) enables completely new insights. As a consequence, many new applications have been created.
ERIC Educational Resources Information Center
Alexiadis, D. S.; Mitianoudis, N.
2013-01-01
Digital signal processing (DSP) has been an integral part of most electrical, electronic, and computer engineering curricula. The applications of DSP in multimedia (audio, image, video) storage, transmission, and analysis are also widely taught at both the undergraduate and post-graduate levels, as digital multimedia can be encountered in most…
NASA Astrophysics Data System (ADS)
Ainiwaer, A.; Gurrola, H.
2017-12-01
In traditional Ps receiver function (RF) imaging, PPs and PSs phases from shallow layers (the near surface and crust) can be mis-stacked as Ps phases or interfere with deeper Ps phases. To overcome interference between phases, we developed a method to produce phase-specific Ps, PPs, and PSs receiver functions (wavefield iterative deconvolution, or WID). Rather than performing a separate deconvolution of each seismogram recorded at a station, WID processes all the seismograms from a seismic station in a single run. Each iteration of WID identifies the most prominent phase remaining in the data set, based on the shape of its wavefield (or moveout curve), and then places this phase on the appropriate phase-specific RF. As a result, we produce PsRFs that are free of PPs and PSs phases and reverberations thereof. We also produce phase-specific PPsRFs and PSsRFs, but the moveout curves for these phases and their higher-order reverberations are not as distinct from one another, so the PPsRFs and PSsRFs are not as clean as the PsRFs. These phase-specific RFs can be stacked to image 2-D or 3-D Earth structure using common conversion point (CCP) stacking or migration. We applied WID to 524 Southern California seismic stations to construct a 3-D PsRF image of the lithosphere beneath southern California. These CCP images exhibit Ps phases from the Moho and the lithosphere-asthenosphere boundary (LAB) that are free of interference from crustal reverberations. The Moho and LAB were found to be deepest beneath the Sierra Nevada, Transverse Ranges, and Peninsular Ranges. A shallow Moho and LAB are apparent beneath the Inner Borderland and Salton Trough. The LAB depth that we estimate is in close agreement with recently published results that used Sp imaging (Lekic et al., 2011). We also found complicated structure beneath the Mojave Block, where mid-crustal features are apparent and anomalous Ps phases at 60 km depth are observed beneath the western Mojave Desert.
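One iteration of the iterative-deconvolution idea can be sketched in a scalar form: cross-correlate the data with the source wavelet, pick the lag of the most prominent remaining arrival, place a scaled spike there, and subtract the predicted arrival before the next iteration. This is ordinary iterative time-domain deconvolution; the moveout-based phase sorting that makes WID phase-specific is not reproduced here.

```python
# Schematic single iteration of iterative time-domain deconvolution
# (a simplified analogue of one WID step, not the WID algorithm itself).
import numpy as np

def iterate_once(data, wavelet):
    # Cross-correlate to find the most prominent remaining arrival
    corr = np.correlate(data, wavelet, mode="valid")
    lag = int(np.argmax(np.abs(corr)))
    amp = corr[lag] / np.dot(wavelet, wavelet)
    residual = data.copy()
    residual[lag:lag + wavelet.size] -= amp * wavelet
    return lag, amp, residual

wavelet = np.array([1.0, 2.0, 1.0])
data = np.zeros(20)
data[5:8] += 0.7 * wavelet          # one arrival at lag 5, amplitude 0.7
lag, amp, residual = iterate_once(data, wavelet)
print(lag, round(amp, 3))
```

In WID the "pick the peak" step is replaced by matching against the moveout curve of each candidate phase across all seismograms at once, which is what lets each recovered arrival be routed to the correct phase-specific RF.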
Calibrating AIS images using the surface as a reference
NASA Technical Reports Server (NTRS)
Smith, M. O.; Roberts, D. A.; Shipman, H. M.; Adams, J. B.; Willis, S. C.; Gillespie, A. R.
1987-01-01
A method of evaluating the initial assumptions and uncertainties of the physical connection between Airborne Imaging Spectrometer (AIS) image data and laboratory/field spectrometer data was tested. The Tucson AIS-2 image connects to lab reference spectra by an alignment to the image spectral endmembers through a system gain and offset for each band. Images were calibrated to reflectance so as to transform the image into a measure that is independent of the solar radiant flux. This transformation also makes the image spectra directly comparable to data from lab and field spectrometers. A method was tested for calibrating AIS images using the surface as a reference. The surface heterogeneity is defined by lab/field spectral measurements. It was found that the Tucson AIS-2 image is consistent with each of the initial hypotheses: (1) that the AIS-2 instrument calibration is nearly linear; (2) that the spectral variance is caused by sub-pixel mixtures of spectrally distinct materials and shade; and (3) that sub-pixel mixtures can be treated as linear mixtures of pure endmembers. It was also found that the image can be characterized by relatively few endmembers using the AIS-2 spectra.
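Hypothesis (3), linear mixing of pure endmembers, has a direct numerical form: each pixel spectrum is modeled as a combination of endmember spectra solved by least squares. The endmember spectra below are made up for illustration; in the noise-free case the mixing fractions are recovered exactly.

```python
# Minimal sketch of linear spectral unmixing with invented endmembers.
import numpy as np

endmembers = np.array([
    [0.9, 0.7, 0.5, 0.3],   # hypothetical "soil" spectrum (4 bands)
    [0.2, 0.5, 0.3, 0.8],   # hypothetical "vegetation" spectrum
]).T                         # shape (bands, endmembers)

def unmix(pixel, E):
    """Least-squares endmember fractions for one pixel spectrum."""
    fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return fractions

# A synthetic 60/40 mixture is recovered exactly in this noise-free case:
pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 1]
f = unmix(pixel, endmembers)
print(np.round(f, 3))
```

The per-band gain and offset in the calibration step enter as an affine transform applied to the measured radiances before this mixing model is fit.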
NASA Astrophysics Data System (ADS)
Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.
1999-12-01
The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government, and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices, and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry-standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality, and the status of various implementations.
Pc-Based Floating Point Imaging Workstation
NASA Astrophysics Data System (ADS)
Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin
1989-07-01
The medical, military, scientific, and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective, the challenge of meeting these demands requires attacking the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL), we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful and flexible floating point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.
A midas plugin to enable construction of reproducible web-based image processing pipelines
Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A.; Oguz, Ipek
2013-01-01
Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline. PMID:24416016
Interactive, Online, Adsorption Lab to Support Discovery of the Scientific Process
NASA Astrophysics Data System (ADS)
Carroll, K. C.; Ulery, A. L.; Chamberlin, B.; Dettmer, A.
2014-12-01
Science students require more than methods practice in lab activities; they must gain an understanding of the application of the scientific process through lab work. Large classes, time constraints, and funding may limit student access to science labs, denying students access to the types of experiential learning needed to motivate and develop new scientists. Interactive, discovery-based computer simulations and virtual labs provide an alternative, low-risk opportunity for learners to engage in lab processes and activities. Students can conduct experiments, collect data, draw conclusions, and even abort a session. We have developed an online virtual lab, through which students can interactively develop as scientists as they learn about scientific concepts, lab equipment, and proper lab techniques. Our first lab topic is adsorption of chemicals to soil, but the methodology is transferrable to other topics. In addition to learning the specific procedures involved in each lab, the online activities will prompt exploration and practice in key scientific and mathematical concepts, such as unit conversion, significant digits, assessing risks, evaluating bias, and assessing quantity and quality of data. These labs are not designed to replace traditional lab instruction, but to supplement instruction on challenging or particularly time-consuming concepts. To complement classroom instruction, students can engage in a lab experience outside the lab and over a shorter time period than often required with real-world adsorption studies. More importantly, students can reflect, discuss, review, and even fail at their lab experience as part of the process to see why natural processes and scientific approaches work the way they do. Our Media Productions team has completed a series of online digital labs available at virtuallabs.nmsu.edu and scienceofsoil.com, and these virtual labs are being integrated into coursework to evaluate changes in student learning.
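The adsorption topic such a virtual lab walks students through has a standard batch-experiment calculation behind it: the mass of chemical lost from solution per kilogram of soil, divided by the equilibrium solution concentration, gives a distribution coefficient Kd. The numbers below are hypothetical classroom values, not data from the lab.

```python
# Illustrative batch-sorption calculation of a distribution coefficient.
# All concentrations, volumes, and masses are assumed example values.

def distribution_coefficient(c_initial, c_equilibrium, volume_l, soil_mass_kg):
    """Kd in L/kg: sorbed concentration (mg/kg) over solution conc. (mg/L)."""
    sorbed_mg_per_kg = (c_initial - c_equilibrium) * volume_l / soil_mass_kg
    return sorbed_mg_per_kg / c_equilibrium

# A 10 mg/L solution equilibrates at 4 mg/L over 0.02 kg soil in 0.05 L:
kd = distribution_coefficient(10.0, 4.0, 0.05, 0.02)
print(kd)  # 15 mg/kg sorbed over 4 mg/L in solution -> 3.75 L/kg
```

The unit-conversion and significant-digit bookkeeping in this one formula is exactly the kind of embedded scientific-math practice the lab design targets.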
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years, more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier, but clinical research can also profit remarkably from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Although images are available at one's fingertips, difficulties arise when image data needs to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique, we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.
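The abstract does not name the cluster software, but the job structure it describes (one cost-intensive correction per fMRI volume, farmed out to nodes) can be sketched with a worker pool. In this hedged sketch a thread pool stands in for the cluster, and the "correction" is a toy integer-shift registration via FFT cross-correlation, not a real rigid-body motion-correction algorithm; all names are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def register_to_reference(vol, ref):
    """Undo an integer (row, col) shift of `vol` relative to `ref` by
    locating the peak of the FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(vol))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices into the symmetric shift range
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return np.roll(vol, (dy, dx), axis=(0, 1))

def motion_correct(volumes, reference, workers=4):
    """Dispatch one registration job per volume, the way a cluster
    scheduler would dispatch jobs to nodes (threads stand in here)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda v: register_to_reference(v, reference),
                             volumes))
```

In a real deployment each job would be a full rigid-body (6-parameter) registration executed on a cluster node rather than a thread.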
NeuroSeek dual-color image processing infrared focal plane array
NASA Astrophysics Data System (ADS)
McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.
1998-09-01
Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, on-focal-plane biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The very-large-scale-integration readout and processing integrated circuit chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.
Cordilleran Longevity, Elevation and Heat Driven by Lithospheric Mantle Removal
NASA Astrophysics Data System (ADS)
Mackay-Hill, A.; Currie, C. A.; Audet, P.; Schaeffer, A. J.
2017-12-01
Cordilleran evolution is controlled by subduction zone back-arc processes that generate and maintain high topography due to elevated uppermost mantle temperatures. In the northern Canadian Cordillera (NCC), the persisting high mean elevation long after subduction has stopped (>50 Ma) requires a sustained source of heat, either from small-scale mantle convection or lithospheric mantle removal; however, direct structural constraints on these processes are sparse. We image the crust and uppermost mantle beneath the NCC using scattered teleseismic waves recorded on an array of broadband seismograph stations. We resolve two sharp and flat seismic discontinuities: a downward velocity increase at 35 km that we interpret as the Moho, and a deeper discontinuity with opposite velocity contrast at 50 km depth. Based on petrologic estimates, we interpret the deeper interface as the lithosphere-asthenosphere boundary (LAB), which implies an extremely thin (~15 km) lithospheric mantle. We calculate temperatures at the Moho and the LAB in the range 800-900 °C and 1200-1300 °C, respectively. Far below the LAB beneath the eastern NCC, we find west-dipping features that we associate with laminar downwelling of Cordilleran lithosphere. Whether these structures are fossilized or active, they suggest that lithospheric mantle removal near the Cordillera-Craton boundary may have provided the source of heat and elevation and therefore played a role in the longevity and stability of the Cordillera.
Magnetically engineered smart thin films: toward lab-on-chip ultra-sensitive molecular imaging.
Hassan, Muhammad A; Saqib, Mudassara; Shaikh, Haseeb; Ahmad, Nasir M; Elaissari, Abdelhamid
2013-03-01
Magnetically responsive engineered smart thin films of nanoferrites are employed as contrast agents to develop surface-based magnetic resonance imaging for simple yet fast molecular imaging. The work presented here holds significant potential for future lab-on-chip point-of-care diagnostics from the whole blood pool on almost any substrate, reducing or even avoiding clinical studies involving living organisms, thereby enhancing non-invasive imaging and advancing the '3Rs' of work in animals: replacement, refinement and reduction.
Post-processing images from the WFIRST-AFTA coronagraph testbed
NASA Astrophysics Data System (ADS)
Zimmerman, Neil T.; Ygouf, Marie; Pueyo, Laurent; Soummer, Remi; Perrin, Marshall D.; Mennesson, Bertrand; Cady, Eric; Mejia Prada, Camilo
2016-01-01
The concept for the exoplanet imaging instrument on WFIRST-AFTA relies on the development of mission-specific data processing tools to reduce the speckle noise floor. No instrument has yet functioned on the sky in the planet-to-star contrast regime of the proposed coronagraph (1E-8). Therefore, starlight subtraction algorithms must be tested on a combination of simulated and laboratory data sets to give confidence that the scientific goals can be reached. The High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Lab has carried out several technology demonstrations for the instrument concept, demonstrating 1E-8 raw (absolute) contrast. Here, we have applied a mock reference differential imaging strategy to HCIT data sets, treating one subset of images as a reference star observation and another subset as a science target observation. We show that algorithms like KLIP (Karhunen-Loève Image Projection), by suppressing residual speckles, enable the recovery of exoplanet signals at a contrast of order 2E-9.
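The KLIP step described above has a compact linear-algebra core (after Soummer, Pueyo & Larkin 2012): build a Karhunen-Loève basis from the reference frames and subtract the science frame's projection onto the leading modes. The following NumPy sketch is a hedged illustration with made-up frame shapes, not the mission pipeline itself.

```python
import numpy as np

def klip_subtract(science, references, n_modes):
    """science: (H, W) frame; references: (N, H, W) reference stack."""
    refs = references.reshape(len(references), -1).astype(float)
    refs = refs - refs.mean(axis=1, keepdims=True)   # per-frame mean removal
    sci = science.ravel().astype(float) - science.mean()
    # Rows of vt are the orthonormal Karhunen-Loeve eigenimages
    _, _, vt = np.linalg.svd(refs, full_matrices=False)
    modes = vt[:n_modes]
    speckle_model = modes.T @ (modes @ sci)          # projection onto modes
    return (sci - speckle_model).reshape(science.shape)
```

With a reference library that spans the speckle pattern, the projection removes the starlight residuals while an injected point source largely survives the subtraction.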
A Video Lecture and Lab-Based Approach for Learning of Image Processing Concepts
ERIC Educational Resources Information Center
Chiu, Chiung-Fang; Lee, Greg C.
2009-01-01
The current practice of traditional in-class lecture for learning computer science (CS) in the high schools of Taiwan is in need of revamping. Teachers instruct on the use of commercial software instead of teaching CS concepts to students. The lack of more suitable teaching materials and limited classroom time are the main reasons for the…
Imaging Buried Culverts Using Ground Penetrating Radar: Comparing 100 MHZ Through 1 GHZ Antennae
NASA Astrophysics Data System (ADS)
Abdul Aziz, A.; Stewart, R. R.; Green, S. L.
2013-12-01
A 3D ground penetrating radar (GPR) survey, using three different frequency antennae, was undertaken to image buried steel culverts at the University of Houston's La Marque Geophysical Observatory, 30 miles south of Houston, Texas. The four culverts under study support a road crossing one of the area's bayous. A 32 m by 4.5 m survey grid was designed on the road above the culverts, and data were collected with 100 MHz, 250 MHz, and 1 GHz antennae. We used an orthogonal acquisition geometry for the three surveys. Inline sampling ranged from 1.0 cm to 10 cm (from the 1 GHz to the 100 MHz antenna), with inline and crossline spacings ranging from 0.2 m to 0.5 m. We used an initial velocity of 0.1 m/ns (from previous CMP work at the site) for display purposes. The main objective of the study was to analyze the effect of different frequency antennae on the resultant GPR images. We were also interested in the accuracy and resolution of the various images, in addition to developing an optimal processing flow. The data were initially processed with standard steps that included gain enhancement, dewow and temporal filtering, background suppression, and 2D migration. Various radar velocities were tested in the 2D migration and ultimately 0.12 m/ns was used. The data are complicated by multipathing from the surface and between culverts (from modeling). Some of this is ameliorated via deconvolution. The top of each of the four culverts was evident in the GPR images acquired with the 250 MHz and 100 MHz antennae. For 1 GHz, the top of the culvert was not clear due to the signal's attenuation.
The 250 MHz shielded antenna provides a vertical resolution of about 0.1 m and is the antenna of choice to image the culverts. The 100 MHz antenna provided an increase in depth of penetration, but at the expense of substantially diminished resolution (0.25 m).
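Two of the standard processing steps named above, dewow and background suppression, have simple signal-processing cores. The sketch below assumes the radargram is a NumPy array of shape (samples, traces); it is a generic illustration, not the authors' actual processing flow.

```python
import numpy as np

def dewow(radargram, window=15):
    """Remove low-frequency 'wow' by subtracting a running mean along
    each trace (the time/sample axis)."""
    kernel = np.ones(window) / window
    trend = np.apply_along_axis(
        lambda tr: np.convolve(tr, kernel, mode="same"), 0, radargram)
    return radargram - trend

def background_removal(radargram):
    """Suppress horizontally coherent ringing by subtracting the
    average trace from every trace."""
    return radargram - radargram.mean(axis=1, keepdims=True)
```

Gain enhancement and migration would follow these steps in a full flow; both depend on the velocity model (0.12 m/ns here) and are omitted from the sketch.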
2001-01-24
Image of a soot (smoke) plume made for the Laminar Soot Processes (LSP) experiment during the Microgravity Sciences Lab-1 mission in 1997. LSP-2 will fly on the STS-107 Research 1 mission in 2002. The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner, similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in) long. Diagnostics include color TV cameras, a temperature sensor, and laser images whose darkness indicates the quantity of soot produced in the flame. NASA's Glenn Research Center in Cleveland, OH, manages the project.
Monitoring CO2 invasion processes at the pore scale using geological labs on chip.
Morais, S; Liu, N; Diouf, A; Bernard, D; Lecoutre, C; Garrabos, Y; Marre, S
2016-09-21
In order to investigate at the pore scale the mechanisms involved during CO2 injection in a water-saturated pore network, a series of displacement experiments is reported using high pressure micromodels (geological labs on chip - GLoCs) working under real geological conditions (25 < T (°C) < 75 and 4.5 < p (MPa) < 8). The experiments focused on the influence of three experimental parameters: (i) the p, T conditions, (ii) the injection flow rates and (iii) the pore network characteristics. Using on-chip optical characterization and imaging approaches, the CO2 saturation curves as a function of either time or the number of pore volumes injected were determined. Three main mechanisms were observed during CO2 injection, namely invasion, percolation and drying, which are discussed in this paper. Interestingly, besides conventional mechanisms, two counterintuitive situations were observed during the invasion and drying processes.
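In image-processing terms, the saturation curves described above reduce to counting CO2-occupied pore pixels in each frame. A minimal sketch, assuming the micromodel images have already been segmented into boolean masks (True = CO2 or pore pixel; this is an illustration, not the authors' pipeline):

```python
import numpy as np

def co2_saturation(co2_mask, pore_mask):
    """Fraction of the pore space occupied by CO2 in one frame."""
    pore_pixels = np.count_nonzero(pore_mask)
    if pore_pixels == 0:
        return 0.0
    return np.count_nonzero(co2_mask & pore_mask) / pore_pixels

def saturation_curve(co2_masks, pore_mask):
    """Saturation per frame, i.e. versus time or pore volumes injected."""
    return [co2_saturation(m, pore_mask) for m in co2_masks]
```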
Constructing a modern cytology laboratory: A toolkit for planning and design.
Roberson, Janie; Wrenn, Allison; Poole, John; Jaeger, Andrew; Eltoum, Isam A
2013-01-01
Constructing or renovating a laboratory can be both challenging and rewarding. UAB Cytology (UAB CY) recently undertook a project to relocate from a building constructed in 1928 to new space. UAB CY is part of an academic center that provides service to a large set of patients, supports the training of one cytotechnology program and one cytopathology fellowship training program, and is actively involved in research and scholarly activity. Our objectives were to provide a safe, aesthetically pleasing space and to gain efficiencies through lean processes. The phases of any laboratory design project are Planning, Schematic Design (SD), Design Development (DD), Construction Documents (CD) and Construction. Lab personnel are most critical in the Planning phase. During this time, stakeholders, relationships, budget, square footage and equipment were identified. Equipment lists, including what would be relocated, purchased new, and projected for future growth, ensured that utilities were matched to expected need. A chemical inventory was prepared and adequate storage space was planned. Regulatory and safety requirements were discussed. Tours and high-level process flow diagrams helped architects and engineers understand the laboratory's daily work. Future needs were addressed through a questionnaire which identified potential areas of growth and technological change. Throughout the project, decisions were driven by data from the planning phase. During the SD phase, objective information from the first phase was used by architects and planners to create a general floor plan. This was the basis of a series of meetings to brainstorm and suggest modifications. DD brings more detail to the plans with engineering, casework, equipment specifics, and finishes. Design changes should be completed at this phase. The next phase, CD, took the project from the lab purview into a purely technical mode.
Construction documents were used by the contractor for the bidding process and ultimately the Construction phase. The project fitted out a total of 9,000 square feet: 4,000 of laboratory and 5,000 of office/support space. Lab space includes areas for Prep, CT screening, sign-out and Imaging. Adjacent space houses faculty offices and conferencing facilities. Transportation time was reduced (waste removal) by a pneumatic tube system, a specimen drop window to the Prep Lab, and a pass-through window to the screening area. Open screening and prep areas allow visual management control. Efficiencies were gained by ergonomically placing CT manual and imaging microscopes and computers in close proximity, also facilitating a paperless workflow for additional savings. Logistically, closer proximity to Surgical Pathology maximized the natural synergies between the areas. Lab construction should be a systematic process based on sound principles for safety, high-quality testing, and finance. Our detailed planning and design process can be a model for others undertaking similar projects.
NASA Technical Reports Server (NTRS)
Kent, J. J.; Berger, E. L.; Fries, M. D.; Bastien, R.; McCubbin, F. M.; Pace, L.; Righter, K.; Sutter, B.; Zeigler, R. A.; Zolensky, M.
2017-01-01
On the early morning of September 15th, 2016, on the first floor of Building 31 at NASA-Johnson Space Center, the hose from a water chiller ruptured and began spraying water onto the floor. The water had been circulating through old metal pipes, and the leaked water contained rust-colored particulates. The water flooded much of the western wing of the building's ground floor before the leak was stopped, and it left behind a residue of rust across the floor, most notably in the Apollo and Meteorite Thin Section Labs and the Sample Preparation Lab. No samples were damaged in the event, and the affected facilities are in the process of remediation. At the beginning of 2016, a separate leak occurred in the Cosmic Dust Lab, located in the same building. In that lab, a water leak occurred at the bottom of the sink used to clean the lab's tools and containers with ultra-pure water. Over years of use, the ultra-pure water eroded the metal sink piping and leaked water onto the inside of the lab's flow bench. This water also left behind a film of rusty material. The material was cleaned up and the metal piping was replaced with PVC pipe and sealed with Teflon plumber's tape. Samples of the rust detritus were collected from both incidents. These samples were imaged and analyzed to determine their chemical and mineralogical compositions. The purpose of these analyses is to document the nature of the detritus for future reference in the unlikely event that these materials occur as contaminants in the Cosmic Dust samples or Apollo or Meteorite thin sections.
Microarthroscopy System With Image Processing Technology Developed for Minimally Invasive Surgery
NASA Technical Reports Server (NTRS)
Steele, Gynelle C.
2001-01-01
In a joint effort, NASA, Micro Medical Devices, and the Cleveland Clinic have developed a microarthroscopy system with digital image processing. This system consists of a disposable endoscope the size of a needle that is aimed at expanding the use of minimally invasive surgery on the knee, ankle, and other small joints. This device not only allows surgeons to make smaller incisions (by improving the clarity and brightness of images), but it gives them a better view of the injured area to make more accurate diagnoses. Because of its small size, the endoscope helps reduce physical trauma and speeds patient recovery. The faster recovery rate also makes the system cost-effective for patients. The digital image processing software used with the device was originally developed by the NASA Glenn Research Center to conduct computer simulations of satellite positioning in space. It was later modified to reflect lessons learned in enhancing photographic images in support of the Center's microgravity program. Glenn's Photovoltaic Branch and Graphics and Visualization Lab (G-VIS) computer programmers and software developers enhanced and sped up graphic imaging for this application. Mary Vickerman at Glenn developed algorithms that enabled Micro Medical Devices to eliminate interference and improve the images.
The Influence of Pets on Infants’ Processing of Cat and Dog Images
Hurley, Karinna B.; Kovack-Lesh, Kristine A.; Oakes, Lisa M.
2010-01-01
We examined how experience at home with pets is related to infants’ processing of animal stimuli in a standard laboratory procedure. We presented 6-month-old infants with photographs of cats or dogs and found that infants with pets at home (N = 40) responded differently to the pictures than infants without pets (N = 40). These results suggest that infants’ experience in one context (at home) contributes to their processing of similar stimuli in a different context (the lab), and have implications for how infants’ early experience shapes basic cognitive processing. PMID:20728223
Intelligent image capture of cartridge cases for firearms examiners
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Guerci, Joseph R.
1997-02-01
The FBI's DRUGFIRE(TM) system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of human examiners' visual perception and the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' which assesses the quality of the image under test utilizing a custom-designed neural network.
Volumetric brain tumour detection from MRI using visual saliency.
Mitra, Somosmita; Banerjee, Subhashis; Hayashi, Yoichi
2017-01-01
Medical image processing has become a major player in the world of automatic tumour region detection and is integral to the incipient stages of computer-aided detection. Saliency detection is a crucial application of medical image processing, and can aid medical practitioners by making the affected area stand out in the foreground from the rest of the background image. The algorithm developed here is a new approach to the detection of saliency in a three-dimensional multichannel MR image sequence for glioblastoma multiforme (a form of malignant brain tumour). First we enhance the three channels, FLAIR (Fluid Attenuated Inversion Recovery), T2 and T1C (contrast enhanced with gadolinium), to generate a pseudo-coloured RGB image. This is then converted to the CIE L*a*b* colour space. Processing on cubes of sizes k = 4, 8, 16, the L*a*b* 3D image is then compressed into volumetric units, each representing the neighbourhood information of the surrounding 64 voxels for k = 4, 512 voxels for k = 8 and 4096 voxels for k = 16, respectively. The spatial distances between these units are then compared along the three major axes to generate the novel 3D saliency map of a 3D image, which unambiguously highlights the tumour region. The algorithm operates along the three major axes to maximise computational efficiency while minimising the loss of valuable 3D information. Thus the 3D multichannel MR image saliency detection algorithm is useful in generating a uniform and logistically correct 3D saliency map with pragmatic applicability in Computer Aided Detection (CADe). Assignment of uniform importance to all three axes proves to be an important factor in volumetric processing, which helps in noise reduction and reduces the possibility of compromising essential information. The effectiveness of the algorithm was evaluated over the BRATS MICCAI 2015 dataset of 274 glioma cases, consisting of both high-grade and low-grade GBM.
The results were compared with those of the 2D saliency detection algorithm applied over the entire sequence of brain data. For all comparisons, the Area Under the receiver operating characteristic (ROC) Curve (AUC) was found to be more than 0.99 ± 0.01 over various tumour types, structures and locations.
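The abstract does not fully specify the axis-wise distance comparison, but the block-compression step it describes can be sketched directly. Below, a (Z, Y, X, C) volume is reduced to k x k x k units, and a simplified global-contrast measure stands in for the paper's saliency computation; treat both the shapes and the contrast measure as illustrative assumptions.

```python
import numpy as np

def block_units(volume, k):
    """Average a (Z, Y, X, C) volume over non-overlapping k^3 blocks,
    so each unit summarizes k*k*k voxels (64 for k=4, 512 for k=8...)."""
    z, y, x, c = volume.shape
    v = volume[: z - z % k, : y - y % k, : x - x % k]
    v = v.reshape(z // k, k, y // k, k, x // k, k, c)
    return v.mean(axis=(1, 3, 5))                  # -> (Z/k, Y/k, X/k, C)

def saliency_map(volume, k=4):
    """Per-unit saliency as feature distance from the global mean
    (a stand-in for the paper's axis-wise comparisons)."""
    units = block_units(volume, k)
    diff = units - units.mean(axis=(0, 1, 2))
    return np.linalg.norm(diff, axis=-1)
```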
Interfacing Lab-on-a-Chip Embryo Technology with High-Definition Imaging Cytometry.
Zhu, Feng; Hall, Christopher J; Crosier, Philip S; Wlodkowic, Donald
2015-08-01
To spearhead deployment of zebrafish embryo biotests in large-scale drug discovery studies, automated platforms are needed to integrate embryo in-test positioning and immobilization (suitable for high-content imaging) with fluidic modules for continuous drug and medium delivery under microperfusion to developing embryos. In this work, we present an innovative design of a high-throughput three-dimensional (3D) microfluidic chip-based device for automated immobilization and culture and time-lapse imaging of developing zebrafish embryos under continuous microperfusion. The 3D Lab-on-a-Chip array was fabricated in poly(methyl methacrylate) (PMMA) transparent thermoplastic using infrared laser micromachining, while the off-chip interfaces were fabricated using additive manufacturing processes (fused deposition modelling and stereolithography). The system's design facilitated rapid loading and immobilization of a large number of embryos in predefined clusters of traps during continuous microperfusion of drugs/toxins. It was conceptually designed to seamlessly interface with both upright and inverted fluorescent imaging systems and also to directly interface with conventional microtiter plate readers that accept 96-well plates. Compared with the conventional Petri dish assays, the chip-based bioassay was much more convenient and efficient as only small amounts of drug solutions were required for the whole perfusion system running continuously over 72 h. Embryos were spatially separated in the traps that assisted tracing single embryos, preventing interembryo contamination and improving imaging accessibility.
NASA Astrophysics Data System (ADS)
Giordano, N.; Arato, A.; Comina, C.; Mandrone, G.
2017-05-01
A Borehole Thermal Energy Storage living lab was built near Torino (Northern Italy). This living lab aims at testing the ability of the alluvial deposits of the north-western Po Plain to store the thermal energy collected by solar thermal panels, and the efficiency of energy storage systems in this climatic context. Different monitoring approaches have been tested and analyzed since the start of the thermal injection in April 2014. Underground temperature monitoring is constantly undertaken by means of several temperature sensors located along the borehole heat exchangers and within the hydraulic circuit. Nevertheless, this can provide only pointwise information about the underground temperature distribution. For this reason, a geophysical approach is proposed in order to image the thermally affected zone (TAZ) caused by the heat injection: surface electrical resistivity measurements were carried out for this purpose. In the present paper, results of time-lapse acquisitions during a heating day are reported with the aim of imaging the thermal plume evolution within the subsoil. Resistivity data, calibrated on local temperature measurements, have shown their potential for imaging the heated plume of the system and depicting its evolution throughout the day. Different types of data processing were adopted to address issues mainly related to the highly urbanized environment. The use of apparent resistivity proved to be in good agreement with the results of different inversion approaches. The inversion processes did not significantly improve the qualitative and quantitative TAZ imaging in comparison to the pseudo-sections. This suggests the usefulness of apparent resistivity data alone for rough monitoring of the TAZ in this kind of application.
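The resistivity-to-temperature calibration is not detailed in the abstract; a commonly used petrophysical model assumes a roughly linear decrease of resistivity with temperature, on the order of 2% per °C. The coefficient and reference values below are illustrative assumptions, not the authors' calibration.

```python
ALPHA = 0.02          # fractional resistivity change per degC (assumed)

def temperature_from_resistivity(rho, rho_ref, t_ref):
    """Invert rho = rho_ref / (1 + ALPHA * (T - t_ref)) for T."""
    return t_ref + (rho_ref / rho - 1.0) / ALPHA

def delta_t_map(rho_now, rho_baseline, t_ref):
    """Temperature change imaged from a time-lapse resistivity ratio."""
    return temperature_from_resistivity(rho_now, rho_baseline, t_ref) - t_ref
```

Applied cell-by-cell to apparent-resistivity pseudo-sections, this yields the kind of rough TAZ temperature imaging the abstract describes.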
Advanced Digital Signal Processing for Hybrid Lidar
2014-09-30
with a PC running LabVIEW performing the final calculations to obtain range measurements. A MATLAB-based system developed at Clarkson University in...the image contrast and resolution as well as the object ranging measurement accuracy. There have been various methods that attempt to reduce the...high speed modulation to help suppress backscatter while also providing an unambiguous range measurement. In general, it is desired to determine which
Nanofiber alignment of a small diameter elastic electrospun scaffold
NASA Astrophysics Data System (ADS)
Patel, Jignesh
Cardiovascular disease is the leading cause of death in western countries, with coronary heart disease making up 50% of these deaths. As a treatment option, tissue-engineered grafts have great potential. Elastic scaffolds that mimic arterial extracellular matrix (ECM) may hold the key to creating viable vascular grafts. Electrospinning is a widely used scaffold fabrication technique to engineer tubular scaffolds. In this study, we investigated how the collector rotation speed altered the nanofiber alignment, which may improve mechanical characteristics, making the scaffold more suitable for arterial grafts. The scaffold was fabricated from a blend of PCL/elastin. The 2D Fast Fourier Transform (FFT) image processing tool in ImageJ and MATLAB were used to quantitatively analyze nanofiber orientation at different collector speeds (13500 to 15500 rpm). Both ImageJ and MATLAB showed graphical peaks indicating predominant fiber orientation angles. A collector speed of 15000 rpm was found to produce the best nanofiber alignment, with narrow peaks at 90 and 270 degrees and a relative amplitude of 200. This indicates a narrow distribution of circumferentially aligned nanofibers. Collector speeds below and above 15000 rpm caused a decrease in fiber alignment with a broader orientation distribution. Uniformity of fiber diameter was also measured. Of 600 measurements from the 15000 rpm scaffolds, the fiber diameter range from 500 nm to 899 nm was most prevalent. This diameter range was slightly larger than that of native ECM, which ranges from 50 nm to 500 nm. The second most prevalent diameter range had an average of 404 nm, which is within the diameter range of collagen. This study concluded that with proper electrospinning technique and collector speed, it is possible to fabricate highly aligned small-diameter elastic scaffolds. ImageJ 2D FFT results confirmed the MATLAB findings for the analyses of circumferentially aligned nanofibers.
In addition, the MATLAB analyses simplified the FFT orientation data, providing an accurate, user-friendly orientation measurement tool.
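The FFT-based orientation analysis used here has a compact NumPy analogue: the power spectrum of a fibrous image concentrates energy along the direction perpendicular to the fibers, so an angular histogram of spectral power reveals the dominant alignment. This is a sketch of the idea, not the authors' ImageJ/MATLAB scripts.

```python
import numpy as np

def dominant_orientation(image, n_bins=180):
    """Return the dominant spectral angle of an image in degrees [0, 180)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    yy = yy - h // 2
    xx = xx - w // 2
    spectrum[h // 2, w // 2] = 0.0            # ignore the DC term
    # Fold angles into [0, 180): a spatial frequency and its conjugate
    # describe the same orientation.
    angles = np.degrees(np.arctan2(yy, xx)) % 180.0
    hist, edges = np.histogram(angles, bins=n_bins, range=(0, 180),
                               weights=spectrum)
    return edges[np.argmax(hist)]
```

Note the returned angle is the spectral peak direction; the fibers themselves run perpendicular to it, which is why circumferential fibers appear as FFT peaks at 90/270 degrees in the study above.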
3D Animations for Exploring Nucleon Structure
NASA Astrophysics Data System (ADS)
Gorman, Waverly; Burkardt, Matthias
2016-09-01
Over the last few years, many intuitive pictures have been developed for the interpretation of electron-hadron scattering experiments, such as a mechanism for transverse single-spin asymmetries in semi-inclusive deep-inelastic scattering experiments. While Dr. Burkardt's pictures have been helpful for many researchers in the field, they are still difficult to visualize for broader audiences, since they rely mostly on 2-dimensional static images. In order to make what can be learned from Jefferson Lab experiments more accessible to a broader audience, we have started to work on developing 3-dimensional animations of these processes. The goal is to enable the viewer to repeatedly look at the same microscopic mechanism for a specific reaction, with the viewpoint of the observer changing. This should help an audience that is not so familiar with these reactions to better understand what can be learned from various experiments at Jefferson Lab aimed at exploring nucleon structure. Jefferson Lab Minority/Female Undergraduate Research Assistantship.
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always necessary to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly or beyond reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps are reported, scientists not only follow good laboratory practices but avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
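The flatfield correction mentioned above has a standard form that can be sketched in a few lines of Python (the synthetic illumination ramp and intensity values are illustrative assumptions): the raw frame is divided by a blank reference frame, after subtracting a dark frame from both, then rescaled so overall brightness is preserved.

```python
import numpy as np

def flatfield_correct(raw, flat, dark):
    """Classic flatfield correction: divide out the illumination
    pattern captured in a blank (flat) reference frame, then rescale
    by the mean gain so overall intensity is preserved."""
    gain = flat.astype(float) - dark
    corrected = (raw.astype(float) - dark) / gain
    return corrected * gain.mean()

# synthetic example: a uniform sample imaged under a left-to-right
# illumination ramp; after correction the field should be flat
dark = np.zeros((4, 4))
illum = np.linspace(0.5, 1.5, 4)          # uneven illumination profile
flat = np.tile(illum, (4, 1)) * 100.0     # blank-field reference image
raw = np.tile(illum, (4, 1)) * 80.0       # sample with true intensity 80
out = flatfield_correct(raw, flat, dark)
print(np.round(out, 6))
```

After correction every pixel recovers the sample's true intensity, independent of the illumination gradient.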
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu oversees the image lab that is using an advanced SGI TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA's Marshall Space Flight Center in Alabama in reviewing the tape.
2011-12-01
versatility has allowed for an additional investigation on the use of the SH coating for Lab on Chip (LOC) and Lab on Paper (LOP) applications by spraying the ... Lab on Chip (LOC) and Lab on Paper (LOP) devices. The study concluded that the newly developed SH coating formulation can withstand prolonged ... Microscope (Carl Zeiss LEO 1430). Before SEM imaging, a gold layer of 10 nm was deposited on the sample surface. Care was taken such that CSM, SEM
Real-time biochemical sensor based on Raman scattering with CMOS contact imaging.
Muyun Cao; Yuhua Li; Yadid-Pecht, Orly
2015-08-01
This work presents a biochemical sensor based on Raman scattering with complementary metal-oxide-semiconductor (CMOS) contact imaging, designed for detecting the concentration of solutions. The system is built with a laser diode, an optical filter, a sample holder and a commercial CMOS sensor, and its output is analyzed by an image processing program. The system provides instant measurements with a resolution of 0.2 to 0.4 Mol. This low-cost, easy-to-operate, small-scale system is useful in chemical, biomedical and environmental labs for quantitative biochemical concentration detection, with results reported to be comparable to those of a high-cost commercial spectrometer.
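The image-processing step of such a sensor typically reduces to a calibration curve: mean scattered intensity is fit against known concentrations, and the fit is inverted for unknown samples. A minimal Python sketch follows (the calibration values are invented for illustration; the paper does not give its actual numbers or fitting method):

```python
import numpy as np

# hypothetical calibration data: mean image intensity recorded at
# known solution concentrations (arbitrary units vs Mol)
conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
intensity = np.array([12.1, 24.3, 35.8, 48.2, 60.1])

# least-squares linear fit: intensity = slope * conc + offset
slope, offset = np.polyfit(conc, intensity, 1)

def estimate_concentration(i):
    """Invert the linear calibration to map a measured mean
    intensity back to a concentration estimate."""
    return (i - offset) / slope

est = estimate_concentration(36.0)
print(round(est, 2))
```

An unknown sample reading of 36.0 maps back to roughly 1.5 Mol under this made-up calibration; the quoted 0.2-0.4 Mol resolution would correspond to the intensity noise divided by the slope.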
Radiation calibration for LWIR Hyperspectral Imager Spectrometer
NASA Astrophysics Data System (ADS)
Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong
2014-11-01
The radiometric calibration of a LWIR hyperspectral imaging spectrometer is presented. Our lab developed a LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I) to study laboratory radiometric calibration. A two-point linear calibration was carried out for the spectrometer using blackbody measurements at two temperatures. First, the measured relative intensity is converted to the absolute radiance of the object; then the radiance is converted to a brightness temperature spectrum by the brightness temperature method. The results indicate that this radiometric calibration method performs well.
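The two-point linear calibration above solves for a per-channel gain and offset from the two blackbody views and then maps raw counts to radiance. A sketch in Python (all numeric values below are invented for illustration; the prototype's real counts and radiances are not given in the abstract):

```python
import numpy as np

# hypothetical two-point blackbody calibration for one spectral channel:
# instrument counts and the corresponding known blackbody radiances
S_cold, S_hot = 2000.0, 9000.0     # raw counts at cold / hot blackbody
L_cold, L_hot = 1.2, 5.4           # known radiances (W m^-2 sr^-1 um^-1)

# solve L = gain * S + offset from the two reference points
gain = (L_hot - L_cold) / (S_hot - S_cold)
offset = L_cold - gain * S_cold

def counts_to_radiance(counts):
    """Linear (two-point) radiometric calibration of raw counts."""
    return gain * np.asarray(counts, dtype=float) + offset

scene = counts_to_radiance([2000.0, 5500.0, 9000.0])
print(scene)
```

The brightness-temperature step would then invert the Planck function at each wavelength on the calibrated radiance, which is omitted here.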
NASA Technical Reports Server (NTRS)
Payne, Meredith Lindsay
1995-01-01
The main objective was to assist in the production of electronic images in the Electronic Photography Lab (EPL). The EPL is a new facility serving the electronic photographic needs of the Langley community by providing access to digital imaging technology. Although the EPL has been in operation for less than one year, almost 1,000 images have been produced. The decision to establish the lab was made after careful determination of the center's needs for electronic photography. The LaRC community requires electronic photography for electronic printing, Web sites, and desktop publications, and for its increased enhancement capabilities. In addition to general use, other considerations went into the planning of the EPL. For example, electronic photography is much less of a burden on the environment than conventional photography. Also, an on-line database and retrieval system could make locating past work more efficient. Finally, the information in an electronic image is quantified, making measurements and calculations easier for the researcher.
NASA Astrophysics Data System (ADS)
Hopper, E.; Fischer, K. M.
2017-12-01
The contiguous U.S.A. is a rich tapestry of tectonism spanning over two billion years. On the broadest scale, this complex history can be simplified to three regimes: the tectonically active western U.S., the largely quiescent Archean and Proterozoic cratons of the central U.S., and the Phanerozoic orogen and rifted margin of the eastern U.S. The transitions between these regions can be clearly observed in Sp converted-wave images of the uppermost mantle. We use common conversion point stacked Sp waves recorded by EarthScope's Transportable Array and other permanent and temporary broadband stations to image the transition from a strong, sharp velocity decrease in the shallow upper mantle of the western U.S. (the lithosphere-asthenosphere boundary, or LAB) to deeper, more diffuse features moving east that largely lie within the lithosphere. Only sparse, localized, weak phases are seen at LAB depths beneath the cratonic interior. This transition is clearly revealed by cluster analysis, which also shows the eastern U.S. as more similar to the western U.S. than to the ancient interior, particularly beneath New England. In the western U.S., the observed strong LAB indicates a velocity decrease large enough to imply that melt has ponded beneath the lithosphere. We compare western U.S. LAB properties to the age distribution of most recent volcanism from the NAVDAT database. While LAB properties vary widely within a given age range, their distributions indicate a relationship between the age of surface volcanism and LAB phase strength and breadth; LAB depth does not show a clear correlation. In general, the LAB is strongest and broadest beneath zones that have been magmatically active in the last 50 Myr, suggesting an observable fraction of melt distributed over a depth range of tens of kilometers, perhaps due to variations in the degree of thermochemical erosion of the lithosphere even on very local scales. The LAB is strongest and broadest for magmatic ages of 5-10 Ma, but beneath the youngest volcanism (<5 Ma) the LAB is significantly weaker, suggesting more complete destruction of the high-velocity lid. The timescale of these changes in LAB character suggests the presence, and possibly production, of melt in the asthenosphere for many tens of Myr after surface volcanism ceases.
Using LabVIEW for real-time monitoring and tracking of multiple biological objects
NASA Astrophysics Data System (ADS)
Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika
2017-04-01
Real-time study and tracking of the movement dynamics of various biological objects is important and widely researched today. The features of the objects, the conditions of their visualization, and the model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the adaptation of recognition and tracking algorithms, several LabVIEW tracker projects are considered in this article. The projects allow templates for training and retraining the system to be changed quickly, and they adapt to the speed of the objects and the statistical characteristics of noise in the images. New functions for comparing images or their features and descriptors, along with pre-processing methods, are discussed. Experiments carried out to test the trackers on real video files are presented and analyzed.
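A common core of such template trackers is normalized cross-correlation (NCC): each frame is searched for the window most correlated with the stored template. The exhaustive-search Python sketch below illustrates the idea (the random frame and template location are illustrative assumptions; the article's LabVIEW implementations are not reproduced here):

```python
import numpy as np

def best_match(frame, template):
    """Exhaustive normalized cross-correlation template search.

    Returns the top-left (row, col) of the window with the highest
    NCC score; NCC is invariant to local brightness and contrast,
    which is why it is popular for simple object trackers.
    """
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            if denom == 0:
                continue            # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

rng = np.random.default_rng(0)
frame = rng.random((30, 30))
template = frame[12:18, 7:13].copy()   # "object" known to sit at (12, 7)
print(best_match(frame, template))
```

Real trackers restrict the search to a window around the previous position and update the template over time, which is the adaptation the article automates.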
Counterfeit Electronics Detection Using Image Processing and Machine Learning
NASA Astrophysics Data System (ADS)
Asadizanjani, Navid; Tehranipoor, Mark; Forte, Domenic
2017-01-01
Counterfeiting is an increasing concern for businesses and governments as greater numbers of counterfeit integrated circuits (ICs) infiltrate the global market. There is an ongoing effort in experimental and national labs in the United States to detect and prevent such counterfeits as efficiently as possible. However, a piece is still missing: automatically detecting counterfeit ICs and keeping proper records of those detected. Here, we introduce a web application that allows users to share previous examples of counterfeits through an online database and to obtain statistics regarding the prevalence of known defects. We also investigate automated techniques based on image processing and machine learning to detect different physical defects and to determine whether or not an IC is counterfeit.
Updates to FuncLab, a Matlab based GUI for handling receiver functions
NASA Astrophysics Data System (ADS)
Porritt, Robert W.; Miller, Meghan S.
2018-02-01
Receiver functions are a versatile tool commonly used in seismic imaging. Depending on how they are processed, they can be used to image discontinuity structure within the crust or mantle, or they can be inverted for seismic velocity, either directly or jointly with complementary datasets. However, modern studies generally require large datasets, which can be challenging to handle; FuncLab was therefore originally written as an interactive Matlab GUI to assist in handling these large datasets. The software uses a project database to allow interactive trace editing, data visualization, H-κ stacking for crustal thickness and Vp/Vs ratio, and common conversion point stacking while minimizing computational costs. Since its initial release, significant advances have been made in the implementation of web services, and changes in the underlying Matlab platform have necessitated a significant revision of the software. Here, we present these revisions, including new features such as data downloading via irisFetch.m, receiver function calculation via processRFmatlab, on-the-fly cross-section tools, interface picking, and more. Alongside the descriptions of the tools, we present an application to a test dataset in Michigan, Wisconsin, and neighboring areas following the passage of USArray's Transportable Array. The software is available online at https://robporritt.wordpress.com/software.
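The H-κ stacking mentioned above grid-searches crustal thickness H and Vp/Vs ratio κ by summing receiver-function amplitude at the predicted arrival times of the Moho Ps conversion and its multiples. A toy Python version on a synthetic receiver function (the Vp, ray parameter, phase weights, and pulse shapes are illustrative assumptions, not FuncLab's defaults):

```python
import numpy as np

def gauss_pulse(t, t0, sig=0.2):
    return np.exp(-0.5 * ((t - t0) / sig) ** 2)

def phase_times(H, k, p, vp):
    """Predicted Moho phase delays: direct Ps and the two crustal
    multiples (PpPs and PpSs+PsPs), for ray parameter p (s/km)."""
    vs = vp / k
    a = np.sqrt(vs ** -2 - p ** 2)
    b = np.sqrt(vp ** -2 - p ** 2)
    return H * (a - b), H * (a + b), 2 * H * a

def hk_stack(t, rf, p, Hs, ks, vp=6.5, w=(0.7, 0.2, 0.1)):
    """Weighted H-kappa stack; the PpSs+PsPs multiple has reversed
    polarity, hence the negative weight on the third phase."""
    stack = np.zeros((len(Hs), len(ks)))
    for i, H in enumerate(Hs):
        for j, k in enumerate(ks):
            t1, t2, t3 = phase_times(H, k, p, vp)
            stack[i, j] = (w[0] * np.interp(t1, t, rf)
                           + w[1] * np.interp(t2, t, rf)
                           - w[2] * np.interp(t3, t, rf))
    return stack

# synthetic receiver function for a crust with H = 35 km, kappa = 1.8
p, vp = 0.06, 6.5
t = np.arange(0, 60, 0.05)
t1, t2, t3 = phase_times(35.0, 1.8, p, vp)
rf = gauss_pulse(t, t1) + 0.5 * gauss_pulse(t, t2) - 0.3 * gauss_pulse(t, t3)

Hs = np.arange(25.0, 45.01, 0.25)
ks = np.arange(1.6, 2.001, 0.005)
i, j = np.unravel_index(np.argmax(hk_stack(t, rf, p, Hs, ks)),
                        (len(Hs), len(ks)))
print(Hs[i], round(ks[j], 3))
```

Using all three phases breaks the H-κ trade-off that the Ps phase alone cannot resolve, which is why the stack maximum recovers the input model.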
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Armando Oliu, Final Inspection Team lead for the Shuttle program, speaks to reporters about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Behind him at right is Mike Rein, External Affairs division chief. Oliu oversees the image lab that is using an advanced SGI TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA's Marshall Space Flight Center in Alabama in reviewing the tape.
Applying Enhancement Filters in the Pre-processing of Images of Lymphoma
NASA Astrophysics Data System (ADS)
Henrique Silva, Sérgio; Zanchetta do Nascimento, Marcelo; Alves Neves, Leandro; Ramos Batista, Valério
2015-01-01
Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is among the ten most common cancers worldwide, accounting for three to four percent of all malignant neoplasms diagnosed. Our work presents a study of filters for enhancing images of lymphoma at the pre-processing step, where enhancement is useful for removing noise from the digital images. We analysed noise caused by different sources, such as room vibration, debris and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. We conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because it yields the best enhancement.
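The Structural Similarity Index used above compares luminance, contrast, and structure between two images. A single-window (global) Python version is sketched below; note that production SSIM averages the statistic over local sliding windows, so this simplified form (with the usual c1/c2 stabilizers and an assumed 8-bit dynamic range) is only enough to rank filter outputs:

```python
import numpy as np

def ssim(x, y, L=255.0):
    """Global Structural Similarity Index (single-window form).

    Combines luminance (means), contrast (variances) and structure
    (covariance) with the standard stabilizing constants c1, c2.
    """
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cxy + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)
identical = ssim(img, img)                              # should be 1.0
noisy = ssim(img, img + rng.normal(0, 25, img.shape))   # degraded copy
print(round(identical, 3), noisy < identical)
```

A filtered image that stays structurally close to the original scores near 1, which is how the 75-99% similarity figures above would be computed.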
NASA Technical Reports Server (NTRS)
2001-01-01
The Laminar Soot Processes (LSP) experiment under way during the Microgravity Sciences Lab-1 mission in 1997. LSP-2 will fly on the STS-107 Research 1 mission in 2001. The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner, similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in) long. Diagnostics include color TV cameras, a temperature sensor, and laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, OH, manages the project.
A versatile scalable PET processing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Dong, A. Weisenberger, J. McKisson, Xi Wenze, C. Cuevas, J. Wilson, L. Zukerman
2011-06-01
Positron Emission Tomography (PET) historically has major clinical and preclinical applications in oncology, neurology, and cardiovascular disease. Recently, in a new direction, an application-specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, the University of Maryland at Baltimore (UMAB), and West Virginia University (WVU), targeted at plant eco-physiology research. The new plant-imaging PET system is versatile and scalable such that it can adapt to several plant-imaging needs, imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring that the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system adaptable to the requirements of these unique and versatile detectors.
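A core task of any PET DAQ is coincidence sorting: pairing single events on opposing detectors whose time stamps fall within a short coincidence window. The Python sketch below illustrates the idea on made-up time stamps (the 6 ns window and the two-detector simplification are illustrative assumptions, not the Jefferson Lab design):

```python
import numpy as np

def find_coincidences(t_a, t_b, window_ns=6.0):
    """Pair single events from two detectors whose time stamps fall
    within the coincidence window. Both input arrays are assumed
    sorted, so a single forward pass over t_b suffices."""
    pairs = []
    j = 0
    for i, ta in enumerate(t_a):
        # skip detector-B events that are too early to ever match
        while j < len(t_b) and t_b[j] < ta - window_ns:
            j += 1
        if j < len(t_b) and abs(t_b[j] - ta) <= window_ns:
            pairs.append((i, j))
    return pairs

t_a = np.array([10.0, 250.0, 600.0, 905.0])   # detector A stamps (ns)
t_b = np.array([12.0, 400.0, 903.0])          # detector B stamps (ns)
print(find_coincidences(t_a, t_b))
```

Only the (10, 12) and (905, 903) event pairs fall inside the window; the unmatched singles would be discarded as random or scattered events.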
Real-time catheter localization and visualization using three-dimensional echocardiography
NASA Astrophysics Data System (ADS)
Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil
2017-03-01
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only guidance modality. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beamformed image was further post-processed with the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image simultaneously on a clinical ultrasound scanner. A frame rate of 15 FPS was achieved.
McDonald, S A; Holzner, C; Lauridsen, E M; Reischig, P; Merkle, A P; Withers, P J
2017-07-12
Pressureless sintering of loose or compacted granular bodies at elevated temperature occurs by a combination of particle rearrangement, rotation, local deformation and diffusion, and grain growth. Understanding of how each of these processes contributes to the densification of a powder body is still immature. Here we report a fundamental study coupling the crystallographic imaging capability of laboratory diffraction contrast tomography (LabDCT) with conventional computed tomography (CT) in a time-lapse experiment. We are able to follow and differentiate these processes non-destructively and in three dimensions during the sintering of a simple copper powder sample at 1050 °C. LabDCT quantifies particle rotation (to <0.05° accuracy) and grain growth, while absorption CT simultaneously records the diffusion- and deformation-related morphological changes of the sintering particles. We find that the rate of particle rotation is lowest for the more highly coordinated particles and decreases during sintering. Consequently, rotations are greater for surface-breaking particles than for more highly coordinated interior ones. Both rolling (cooperative) and sliding particle rotations are observed. By tracking individual grains, the grain growth/shrinkage kinetics during sintering are quantified grain by grain for the first time. Rapid, abnormal grain growth is observed for one grain while others either grow or are consumed more gradually.
Interfacing LabVIEW With Instrumentation for Electronic Failure Analysis and Beyond
NASA Technical Reports Server (NTRS)
Buchanan, Randy K.; Bryan, Coleman; Ludwig, Larry
1996-01-01
The Laboratory Virtual Instrument Engineering Workbench (LabVIEW) software is designed so that equipment and processes related to control systems can be operationally linked and controlled through a computer. Various processes within the failure analysis laboratories of NASA's Kennedy Space Center (KSC) demonstrate the need for modernization and, in some cases, automation using LabVIEW. An examination of procedures and practices within the Failure Analysis Laboratory led to the conclusion that some means was necessary to bring potential users of LabVIEW to an operational level in minimum time. This paper outlines the process of creating a tutorial application that enables personnel to apply LabVIEW to their specific projects. Suggestions for extending the use of LabVIEW are provided in the areas of data acquisition and process control.
Automated visual inspection system based on HAVNET architecture
NASA Astrophysics Data System (ADS)
Burkett, K.; Ozbayoglu, Murat A.; Dagli, Cihan H.
1994-10-01
In this study, the HAusdorff-Voronoi NETwork (HAVNET) developed at the UMR Smart Engineering Systems Lab is tested on the recognition of mounted circuit components commonly used in printed circuit board assembly. The automated visual inspection system consists of a CCD camera, neural-network-based image processing software, and a data acquisition card connected to a PC. The experiments were run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising, and the system can be used in real manufacturing environments. Currently the system is being customized for a specific manufacturing application.
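The shape measure underlying HAVNET is the Hausdorff distance between point sets: the worst-case distance from a point of one set to its nearest neighbour in the other. A small NumPy sketch (the square/shifted-square example is illustrative; HAVNET's actual network architecture is not reproduced here):

```python
import numpy as np

def directed_hausdorff(a, b):
    """Directed Hausdorff distance between two 2-D point sets:
    max over points of `a` of the distance to the nearest point
    of `b`, computed from the full pairwise distance matrix."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance (max of both directions)."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
shifted = square + [0.1, 0.0]          # same shape, slightly translated
print(round(hausdorff(square, shifted), 3))
```

Small distances indicate matching component outlines; a recognizer built on this measure tolerates partial edge sets, which helps under the uncontrolled lighting described above.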
The laboratory report: A pedagogical tool in college science courses
NASA Astrophysics Data System (ADS)
Ferzli, Miriam
When viewed as a product rather than a process that aids student learning, the lab report may become rote busywork for both students and instructors. Students fail to see the purpose of the lab report, and instructors see it as a heavy grading load. If lab reports are taught as part of a process rather than as a product that aims to "get the right answer," they may serve as pedagogical tools in college science courses. In response to these issues, an in-depth, web-based tutorial named LabWrite (www.ncsu.edu/labwrite) was developed to help students and instructors (www.ncsu.edu/labwrite/instructors) understand the purpose of the lab report as grounded in the written discourse and processes of science. The objective of this post-test-only quasi-experimental study was to examine the role that in-depth instruction such as LabWrite plays in helping students develop skills characteristic of scientifically literate individuals. Student lab reports from an introductory-level biology course at NC State University were scored for overall understanding of scientific concepts and scientific ways of thinking. The study also looked at students' attitudes toward science and lab report writing, as well as students' perceptions of lab reports in general. Statistically significant findings from this study show that students using LabWrite were able to write lab reports that showed a greater understanding of scientific investigations (p < .003) and scientific ways of thinking (p < .0001) than students receiving traditional lab report writing instruction. LabWrite also helped students develop more positive attitudes toward lab reports than non-LabWrite users (p < .01). Students using LabWrite seemed to perceive the lab report as a valuable tool for determining learning objectives, understanding science concepts, revisiting the lab experience, and documenting their learning.
Göröcs, Zoltán; Ozcan, Aydogan
2012-01-01
Lab-on-a-chip systems have been rapidly emerging to pave the way toward ultra-compact, efficient, mass-producible and cost-effective biomedical research and diagnostic tools. Although such microfluidic and microelectromechanical systems have achieved high levels of integration and are capable of performing various important tasks on the same chip, such as cell culturing, sorting and staining, they still rely on conventional microscopes for their imaging needs. Recently, several alternative on-chip optical imaging techniques have been introduced that have the potential to substitute for conventional microscopes in various lab-on-a-chip applications. Here we present a critical review of these recently emerging on-chip biomedical imaging modalities, including contact shadow imaging, lensfree holographic microscopy, fluorescent on-chip microscopy and lensfree optical tomography. PMID:23558399
Dobrescu, Andrei; Scorza, Livia C T; Tsaftaris, Sotirios A; McCormick, Alistair J
2017-01-01
Improvements in high-throughput phenotyping technologies are rapidly expanding the scope and capacity of plant biology studies to measure growth traits. Nevertheless, the costs of commercial phenotyping equipment and infrastructure remain prohibitively expensive for wide-scale uptake, while academic solutions can require significant local expertise. Here we present a low-cost methodology for plant biologists to build their own phenotyping system for quantifying growth rates and phenotypic characteristics of Arabidopsis thaliana rosettes throughout the diel cycle. We constructed an image capture system consisting of a near-infrared (NIR, 940 nm) LED panel with a mounted Raspberry Pi NoIR camera and developed a MATLAB-based software module (iDIEL Plant) to characterise rosette expansion. Our software was able to accurately segment and characterise multiple rosettes within an image, regardless of plant arrangement or genotype, and to batch-process image sets. To further validate our system, wild-type Arabidopsis plants (Col-0) and two mutant lines with reduced Rubisco contents, pale leaves and slow-growth phenotypes (1a3b and 1a2b) were grown on a single plant tray. Plants were imaged from 9 to 24 days after germination, every 20 min, throughout the 24-h light-dark growth cycle (i.e. the diel cycle). The resulting dataset provided a dynamic and uninterrupted characterisation of differences in rosette growth and expansion rates over time for the three lines tested. Our methodology offers a straightforward solution for setting up automated, scalable and low-cost phenotyping facilities in a wide range of lab environments that could greatly increase the processing power and scalability of Arabidopsis soil growth experiments.
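The core segmentation step of such a pipeline — thresholding the NIR frame and measuring the pixel area of each connected rosette — can be sketched in pure Python (the synthetic frame, threshold, and 4-connectivity below are illustrative assumptions, not the iDIEL Plant implementation, which is MATLAB-based):

```python
import numpy as np
from collections import deque

def rosette_areas(img, thresh):
    """Segment bright rosettes by thresholding and 4-connected flood
    fill, returning the sorted pixel area of each detected plant.
    Tracking these areas over a time series gives expansion rates."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    areas = []
    h, w = mask.shape
    for sr in range(h):
        for sc in range(w):
            if mask[sr, sc] and not seen[sr, sc]:
                area, q = 0, deque([(sr, sc)])
                seen[sr, sc] = True
                while q:                      # BFS over one component
                    r, c = q.popleft()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < h and 0 <= cc < w
                                and mask[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            q.append((rr, cc))
                areas.append(area)
    return sorted(areas)

# synthetic NIR frame: two plants of different size on a dark background
frame = np.zeros((20, 20))
frame[2:6, 2:6] = 1.0       # 16-pixel rosette
frame[10:16, 10:17] = 1.0   # 42-pixel rosette
print(rosette_areas(frame, 0.5))
```

Running this per frame of the 20-minute time-lapse series and converting pixel counts to mm² yields the growth curves described above.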
Control code for laboratory adaptive optics teaching system
NASA Astrophysics Data System (ADS)
Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael
2017-09-01
By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available on the market. One such is the Thorlabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects, since the supplied software offers limited flexibility. In this paper, we describe a MATLAB-based software suite that interfaces with the Thorlabs AO kit via the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, down to the command signals sent to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as for student engagement in an academic environment. We plan to make the code freely available to the community.
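The control loop at the heart of such an AO system can be illustrated in a few lines: wavefront-sensor slopes are mapped to actuator commands through the pseudo-inverse of a calibrated interaction matrix, applied with an integrator gain. The Python sketch below uses a random made-up interaction matrix and a fictitious static aberration purely for illustration (the paper's suite is MATLAB-based and hardware-specific):

```python
import numpy as np

# toy closed-loop AO update: slopes -> commands via the pseudo-inverse
# of the interaction matrix, accumulated by an integrator with gain g
rng = np.random.default_rng(2)
n_slopes, n_act = 8, 4
M = rng.normal(size=(n_slopes, n_act))      # interaction matrix (calibration)
R = np.linalg.pinv(M)                       # least-squares reconstructor
true_cmd = np.array([0.3, -0.1, 0.2, 0.05]) # static aberration, actuator space
g = 0.5                                     # integrator loop gain

cmd = np.zeros(n_act)
for _ in range(20):                         # closed-loop iterations
    slopes = M @ (true_cmd - cmd)           # residual seen by the WFS
    cmd += g * (R @ slopes)                 # integrator update
print(np.round(cmd, 4))
```

With gain g the residual shrinks by a factor (1 - g) per iteration, so after 20 iterations the commands have converged onto the aberration; exposing R, g, and the per-step slopes is exactly the kind of access the suite above provides for testing new control algorithms.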
Technique of diffusion weighted imaging and its application in stroke
NASA Astrophysics Data System (ADS)
Li, Enzhong; Tian, Jie; Han, Ying; Wang, Huifang; Li, Wu; He, Huiguang
2003-05-01
To study the application of diffusion-weighted imaging and image post-processing in the diagnosis of stroke, especially acute stroke, 205 patients were examined with a 1.5 T or 1.0 T MRI scanner, and T1-, T2- and diffusion-weighted images were obtained. Image post-processing was done with the "3D Med System" developed by our lab to analyze the data and compute the apparent diffusion coefficient (ADC) map. In the acute and subacute stages of stroke, the signal in cerebral infarction areas became hyperintense in T2- and diffusion-weighted images and normal or hypointense in T1-weighted images. In the hyperacute stage, however, the signal was hyperintense only in the diffusion-weighted images; the others were normal. In the chronic stage, the signal was hypointense in T1- and diffusion-weighted images and hyperintense in T2-weighted images. Because the ADC declines markedly in the acute and subacute stages of stroke, the lesion area appears hypointense in the ADC map. As the disease evolves, the ADC gradually recovers, and the lesion becomes hyperintense in the ADC map in the chronic stage. Diffusion-weighted imaging and ADC mapping can therefore support the diagnosis of stroke, especially in the hyperacute stage, and can differentiate acute from chronic stroke.
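With a b=0 image and one diffusion-weighted image, the ADC map follows from the monoexponential signal model S = S0 exp(-b * ADC), i.e. ADC = -ln(S/S0)/b. A NumPy sketch on made-up signal values (the b-value and intensities are illustrative assumptions, not data from the 205-patient study):

```python
import numpy as np

def adc_map(s0, s_dwi, b=1000.0):
    """Apparent diffusion coefficient from a b=0 image and one
    diffusion-weighted image: ADC = -ln(S/S0) / b, with the ratio
    clipped to avoid log(0) in background voxels."""
    ratio = np.clip(s_dwi / s0, 1e-6, None)
    return -np.log(ratio) / b

s0 = np.array([[1000.0, 1000.0],
               [1000.0, 1000.0]])
# acute infarct -> restricted diffusion -> less signal loss on DWI
# -> low ADC; row 0 is normal tissue, row 1 a hypothetical lesion
s_dwi = np.array([[480.0, 470.0],
                  [800.0, 820.0]])
adc = adc_map(s0, s_dwi) * 1e6     # in units of 1e-6 mm^2/s
print(np.round(adc).astype(int))
```

The lesion voxels come out with markedly lower ADC than normal tissue, which is why acute infarcts appear hypointense on the ADC map while remaining bright on DWI.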
Phoenix Carries Soil to Wet Chemistry Lab
NASA Technical Reports Server (NTRS)
2008-01-01
This image taken by the Surface Stereo Imager on NASA's Phoenix Mars Lander shows the lander's Robotic Arm scoop positioned over the Wet Chemistry Lab delivery funnel on Sol 29, the 29th Martian day after landing, or June 24, 2008. The soil will be delivered to the instrument on Sol 30. This image has been enhanced to brighten the scene. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de
2012-10-01
The standardization of medical images was achieved in 1993 with the DICOM (Digital Imaging and Communications in Medicine) standard. Many examinations use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, such software is usually neither free nor open-source, which hinders its adaptation to diverse interests. Our goal was to develop and validate a free, open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 examinations randomly selected from a database. Two observers carried out 600 evaluations using ImageLab and another software package sold with Philips Brilliance computed tomography scanners, assessing coronary lesions and plaques in the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreement, we used simple agreement and kappa statistics. The agreement observed between the software applications was generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography examinations, especially in patients without lesions and with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
A High-Resolution Minimicroscope System for Wireless Real-Time Monitoring.
Wang, Zongjie; Boddeda, Akash; Parker, Benjamin; Samanipour, Roya; Ghosh, Sanjoy; Menard, Frederic; Kim, Keekyoung
2018-07-01
A compact, cost-effective, high-performance microscope that enables real-time imaging of cells and lab-on-a-chip devices is in high demand in cell biology and biomedical engineering. This paper presents the design and application of an inexpensive wireless minimicroscope with resolution up to 2592 × 1944 pixels and frame rates up to 90 f/s. The minimicroscope system was built on a commercial embedded system (Raspberry Pi). We modified a camera module and adopted an inverse dual-lens system to obtain a clear field of view and appropriate magnification for objects tens of micrometers in size. The system is capable of capturing time-lapse images and transferring image data wirelessly, and the entire system can be operated wirelessly and cordlessly in a conventional cell-culturing incubator. The developed minimicroscope was used to monitor the attachment and proliferation of NIH-3T3 and HEK 293 cells inside an incubator for 50 h. In addition, it was used to monitor a droplet-generation process in a microfluidic device. The high-quality images captured by the minimicroscope enabled automated analysis of experimental parameters. These successful applications demonstrate the great potential of the minimicroscope for the long-term monitoring of various biological samples and microfluidic devices, and for obtaining high-quality images and videos for automated quantitative analysis.
Design and Implementation of the Retinoblastoma Collaborative Laboratory.
Qaiser, Seemi; Limo, Alice; Gichana, Josiah; Kimani, Kahaki; Githanga, Jessie; Waweru, Wairimu; Dimba, Elizabeth A O; Dimaras, Helen
2017-01-01
The purpose of this work was to describe the design and implementation of a digital pathology laboratory, the Retinoblastoma Collaborative Laboratory (RbCoLab) in Kenya. The RbCoLab is a central lab in Nairobi that receives retinoblastoma specimens from all over Kenya. Specimens were processed using evidence-based standard operating procedures. Images were produced by a digital scanner, and pathology reports were disseminated online. The lab implemented standard operating procedures aimed at improving the accuracy, completeness, and timeliness of pathology reports, enhancing the care of Kenyan retinoblastoma patients. Integration of digital technology to support pathology services supported knowledge transfer and skills transfer. A bidirectional educational network of local pathologists and other clinicians in the circle of care of the patients emerged and served to emphasize the clinical importance of cancer pathology at multiple levels of care. A 'Robin Hood' business model of health care service delivery was developed to support sustainability and scale-up of cancer pathology services. The application of evidence-based protocols, comprehensive training, and collaboration were essential to bring improvements to the care of retinoblastoma patients in Kenya. When embraced as an integrated component of retinoblastoma care, digital pathology offers the opportunity for frequent connection and consultation for development of expertise over time.
2004-02-04
KENNEDY SPACE CENTER, FLA. - Reporters are eager to hear from Armando Oliu about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu, Final Inspection Team lead for the Shuttle program, oversees the lab that is using an advanced SGI® TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA’s Marshall Space Flight Center in Alabama in reviewing the tape.
Pedestrian Validation in Infrared Images by Means of Active Contours and Neural Networks
2010-01-01
Massimo Bertozzi, Pietro Cerri, Mirko Felisa (VisLab, Dipartimento di Ingegneria dell'Informazione, Università di Parma, 43124 Parma, Italy); Stefano Ghidoni (IAS-Lab, Dipartimento di Ingegneria dell'Informazione, Università di Padova, 35131 Padova, Italy); Michael Del Rose (Vetronics Research Center, U.S. Army TARDEC, MI 48397, USA)
Breast cancer mitosis detection in histopathological images with spatial feature extraction
NASA Astrophysics Data System (ADS)
Albayrak, Abdülkadir; Bilgin, Gökhan
2013-12-01
In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell shows more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor is computed with different spatial window sizes in the RGB and La*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared across various sample sizes using Support Vector Machines with k-fold cross-validation. The results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
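The windowed Haralick computation described above can be sketched in a few lines of numpy. This is an illustrative re-implementation, not the paper's code: the quantization to 2 gray levels, the window size, and the single (dx, dy) offset are choices made for the example.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalised gray-level co-occurrence matrix for one offset."""
    h, w = img.shape
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    P = P + P.T                      # count both directions
    return P / P.sum()

def haralick_contrast(P):
    """Haralick contrast: sum of P(i, j) * (i - j)^2."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

def contrast_map(img, levels, win):
    """Haralick contrast computed in non-overlapping win x win windows."""
    h, w = img.shape
    out = np.zeros((h // win, w // win))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            block = img[r * win:(r + 1) * win, c * win:(c + 1) * win]
            out[r, c] = haralick_contrast(glcm(block, levels))
    return out

# A 2-level checkerboard (texture) next to a flat region (background):
board = np.indices((8, 8)).sum(axis=0) % 2      # alternating 0/1
flat = np.zeros((8, 8), dtype=int)
img = np.hstack([board, flat])
cm = contrast_map(img, levels=2, win=8)
print(cm)   # checkerboard window -> contrast 1.0, flat window -> 0.0
```

Enlarging `win` pools co-occurrence statistics over a wider neighborhood, which is the spatial-window effect the abstract studies.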
Twofold processing for denoising ultrasound medical images.
Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y
2015-01-01
Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. The degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by AMMA hospital radiology labs at Vijayawada, India.
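The first-fold block thresholding can be sketched with a one-level Haar transform in plain numpy. This is a minimal stand-in for the paper's wavelet-domain processing, not its implementation: the universal-threshold rule, the Haar basis, and the block size below are illustrative assumptions.

```python
import numpy as np

def haar2(x):
    """One-level orthonormal 2D Haar transform of an even-sized array."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # row-wise approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # row-wise detail
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t): return np.sign(c) * np.maximum(np.abs(c) - t, 0)   # BST-style
def hard(c, t): return np.where(np.abs(c) > t, c, 0)               # BHT-style

def block_threshold(img, block=8, thresh=hard, t=None):
    """Threshold Haar detail coefficients independently in each block."""
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            ll, lh, hl, hh = haar2(img[r:r+block, c:c+block].astype(float))
            if t is None:
                # universal threshold from the block's own HH coefficients
                sigma = np.median(np.abs(hh)) / 0.6745
                tt = sigma * np.sqrt(2 * np.log(block * block))
            else:
                tt = t
            out[r:r+block, c:c+block] = ihaar2(
                ll, thresh(lh, tt), thresh(hl, tt), thresh(hh, tt))
    return out

rng = np.random.default_rng(0)
noisy = 100 + rng.normal(0, 5, (64, 64))
denoised = block_threshold(noisy, block=8)   # per-block hard thresholding
print(noisy.var(), denoised.var())           # detail-band noise energy removed
```

With `t=0` the transform round-trips exactly, which makes the sketch easy to verify; with the per-block threshold, noise in the detail sub-bands is suppressed while the approximation band (where object blurring originates, as the abstract notes) survives.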
NASA Astrophysics Data System (ADS)
Zhu, Feng; Macdonald, Niall; Skommer, Joanna; Wlodkowic, Donald
2015-06-01
Current microfabrication methods are often restricted to two-dimensional (2D) or two-and-a-half-dimensional (2.5D) structures. These fabrication issues can potentially be addressed by emerging additive manufacturing technologies. Despite the rapid growth of additive manufacturing in tissue engineering, microfluidics has seen relatively few developments with regard to adopting 3D printing for rapid fabrication of complex chip-based devices. This has been due to two major factors: the insufficient resolution of current rapid-prototyping methods (usually >100 μm) and the need for optically transparent polymers to allow in vitro imaging of specimens. We postulate that adopting innovative fabrication processes can provide effective solutions for prototyping and manufacturing chip-based devices with high aspect ratios (i.e. above a ratio of 20:1). This work provides a comprehensive investigation of commercially available additive manufacturing technologies as an alternative for rapid prototyping of complex monolithic Lab-on-a-Chip devices for biological applications. We explored both multi-jet modelling (MJM) and several stereolithography (SLA) processes with five different 3D printing resins. Compared with other rapid prototyping technologies such as PDMS soft lithography and infrared laser micromachining, we demonstrated that selected SLA technologies had superior resolution and feature quality. We also optimised, for the first time, the post-processing protocols, and characterised the polymer features under a scanning electron microscope (SEM). Finally, we demonstrate that selected SLA polymers have optical properties enabling high-resolution biological imaging. Caution should, however, be exercised, as more work is needed to develop fully bio-compatible and non-toxic polymer chemistries.
Upper mantle structure across the Trans-European Suture Zone imaged by S-receiver functions
NASA Astrophysics Data System (ADS)
Knapmeyer-Endrun, Brigitte; Krüger, Frank; Geissler, Wolfram H.; Passeq Working Group
2017-01-01
We present a high-resolution study of the upper mantle structure of Central Europe, including the western part of the East European Platform, based on S-receiver functions of 345 stations. A distinct contrast is found between Phanerozoic Europe and the East European Craton across the Trans-European Suture Zone. To the west, a pronounced velocity reduction with depth interpreted as lithosphere-asthenosphere boundary (LAB) is found at an average depth of 90 km. Beneath the craton, no strong and continuous LAB conversion is observed. Instead we find a distinct velocity reduction within the lithosphere, at 80-120 km depth. This mid-lithospheric discontinuity (MLD) is attributed to a compositional boundary between depleted and more fertile lithosphere created by late Proterozoic metasomatism. A potential LAB phase beneath the craton is very weak and varies in depth between 180 and 250 km, consistent with a reduced velocity contrast between the lower lithosphere and the asthenosphere. Within the Trans-European Suture Zone, lithospheric structure is characterized by strong heterogeneity. A dipping or step-wise increase to LAB depth of 150 km is imaged from Phanerozoic Europe to 20-22° E, whereas no direct connection to the cratonic LAB or MLD to the east is apparent. At larger depths, a positive conversion associated with the lower boundary of the asthenosphere is imaged at 210-250 km depth beneath Phanerozoic Europe, continuing down to 300 km depth beneath the craton. Conversions from both 410 km and 660 km discontinuities are found at their nominal depth beneath Phanerozoic Europe, and the discontinuity at 410 km depth can also be traced into the craton. A potential negative conversion on top of the 410 km discontinuity found in migrated images is analyzed by modeling and attributed to interference with other converted phases.
Current trends in nanobiosensor technology
Wu, Diana; Langer, Robert S
2014-01-01
The development of tools and processes used to fabricate, measure, and image nanoscale objects has led to a wide range of work devoted to producing sensors that interact with extremely small numbers (or an extremely small concentration) of analyte molecules. These advances are particularly exciting in the context of biosensing, where the demands for low-concentration detection and high specificity are great. Nanoscale biosensors, or nanobiosensors, provide researchers with an unprecedented level of sensitivity, often down to the single-molecule level. The use of biomolecule-functionalized surfaces can dramatically boost the specificity of the detection system, but can also yield reproducibility problems and increased complexity. Several nanobiosensor architectures based on mechanical devices, optical resonators, functionalized nanoparticles, nanowires, nanotubes, and nanofibers have been demonstrated in the lab. As nanobiosensor technology becomes more refined and reliable, it is likely to make its way from the lab to the clinic, where future lab-on-a-chip devices incorporating an array of nanobiosensors could be used for rapid screening of a wide variety of analytes at low cost using small samples of patient material. PMID:21391305
The Precise and Efficient Identification of Medical Order Forms Using Shape Trees
NASA Astrophysics Data System (ADS)
Henker, Uwe; Petersohn, Uwe; Ultsch, Alfred
A powerful and flexible technique to identify, classify and process documents using images from a scanning process is presented. The document types are described to the system as sets of differentiating features in a case base using shape trees. The features are filtered and abstracted from an extremely reduced scanner image of the document. Classification rules are stored with the cases to enable precise recognition and subsequent mark reading and Optical Character Recognition (OCR) processing. The method is implemented in a system which processes the majority of requests for medical lab procedures in Germany. A large practical experiment with data from practitioners was performed: an average of 97% of the forms were correctly identified, and none were identified incorrectly. This meets the quality requirements of most medical applications. The modular description of the recognition process allows flexible adaptation to future changes in the form and content of the documents' structures.
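The "identify or reject" behaviour reported above (97% identified, none incorrectly) can be sketched as nearest-case matching with a rejection rule. Everything in this sketch is hypothetical: the coarse ink-density signature, the Euclidean distance, the `accept` and `margin` thresholds, and the two form types stand in for the system's actual shape-tree features and classification rules.

```python
import numpy as np

def form_signature(img, grid=(8, 6)):
    """Coarse ink-density signature from a strongly reduced scan image."""
    gh, gw = grid
    h, w = img.shape
    cells = img[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)).ravel()

def classify(img, case_base, accept=0.1, margin=2.0):
    """Nearest case wins, but only when clearly better than the runner-up;
    otherwise reject rather than risk a misidentification."""
    sig = form_signature(img)
    ranked = sorted((np.linalg.norm(sig - ref), name)
                    for name, ref in case_base.items())
    best_d, best_name = ranked[0]
    runner_d = ranked[1][0] if len(ranked) > 1 else np.inf
    if best_d < accept and best_d * margin < runner_d:
        return best_name
    return None   # ambiguous: route to manual handling

# Two hypothetical form types distinguished by where a dark band sits:
formA = np.zeros((64, 48)); formA[:8, :] = 1.0     # header band at top
formB = np.zeros((64, 48)); formB[-8:, :] = 1.0    # band at bottom
case_base = {"lab-request": form_signature(formA),
             "referral": form_signature(formB)}
```

The rejection branch is what keeps the false-identification rate at zero: an unfamiliar or noisy document falls outside the `accept` radius (or too close to two cases) and is escalated instead of guessed.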
The value of core lab stress echocardiography interpretations: observations from the ISCHEMIA Trial.
Kataoka, Akihisa; Scherrer-Crosbie, Marielle; Senior, Roxy; Gosselin, Gilbert; Phaneuf, Denis; Guzman, Gabriela; Perna, Gian; Lara, Alfonso; Kedev, Sasko; Mortara, Andrea; El-Hajjar, Mohammad; Shaw, Leslee J; Reynolds, Harmony R; Picard, Michael H
2015-12-18
Stress echocardiography (SE) is dependent on subjective interpretation. As a prelude to the International Study of Comparative Health Effectiveness with Medical and Invasive Approaches (ISCHEMIA) Trial, potential sites were required to submit two SE studies, one with moderate or severe left ventricular (LV) myocardial ischemia and one with mild ischemia. We evaluated the concordance of site and core lab interpretations. Eighty-one SE studies were submitted from 41 international sites. Ischemia was classified by the number of new or worsening segmental LV wall motion abnormalities (WMA): none, mild (1 or 2) or moderate or severe (3 or more), by both the sites and the core lab. The core lab classified 6 SE as no ischemia, 35 as mild and 40 as moderate or greater. There was agreement between the site and the core lab in 66 of 81 total cases (81%, weighted kappa coefficient [K] = 0.635). Agreement was similar by SE type: 24 of 30 exercise (80%, K = 0.571) vs. 41 of 49 pharmacologic (84%, K = 0.685). Agreement for cases with poor or fair image quality (27 of 36 cases, 75%, K = 0.492) was not as good as for cases with good or excellent image quality (39 of 45 cases, 87%, K = 0.755). Differences in concordance were noted by degree of ischemia, with the majority of discordant interpretations (87%) occurring in patients with no or mild LV myocardial ischemia. While site SE interpretations are largely concordant with core lab interpretations, this appears dependent on image quality and the extent of WMA. Thus, core lab interpretations remain important in clinical trials where consistency of interpretation across a range of cases is critical. ClinicalTrials.gov NCT01471522.
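The weighted kappa used above to quantify site versus core-lab agreement can be computed directly from a confusion matrix over the ordinal ischemia categories. A generic numpy sketch follows; the example matrix is illustrative, not the trial's data.

```python
import numpy as np

def weighted_kappa(confusion, weights="linear"):
    """Cohen's weighted kappa from a confusion matrix over ordinal
    categories (rows: one rater's reading, columns: the other's)."""
    O = np.asarray(confusion, dtype=float)
    n = O.shape[0]
    i, j = np.indices((n, n))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    # expected (chance) matrix from the marginal totals
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()

# Illustrative 3x3 matrix over {none, mild, moderate-or-severe} ischemia:
example = np.array([[4, 1, 1],
                    [2, 30, 5],
                    [0, 4, 34]])
print(round(weighted_kappa(example), 3))   # -> 0.729
```

Disagreements one category apart are penalised less than two-category jumps, which matches the trial's observation that most discordance sat near the none/mild boundary.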
2004-01-05
KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., weighs samples of onion tissue for processing in the elemental analyzer behind it. The equipment analyzes for carbon, hydrogen, nitrogen and sulfur. The 100,000 square-foot SLS houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.
Boix, Macarena; Cantó, Begoña
2013-04-01
Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet, namely the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best; it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
Gorgolewski, Krzysztof J; Auer, Tibor; Calhoun, Vince D; Craddock, R Cameron; Das, Samir; Duff, Eugene P; Flandin, Guillaume; Ghosh, Satrajit S; Glatard, Tristan; Halchenko, Yaroslav O; Handwerker, Daniel A; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B Nolan; Nichols, Thomas E; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A; Varoquaux, Gaël; Poldrack, Russell A
2016-06-21
The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
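A minimal example of the layout BIDS prescribes: per-subject directories with modality subfolders, key-value filenames ending in a suffix, and JSON sidecars alongside a required `dataset_description.json`. The dataset content below is a stand-in; the naming conventions (`sub-`, `task-`, `_T1w`, `_bold`) are taken from the standard.

```python
from pathlib import Path
import json
import re
import tempfile

# Build a minimal BIDS skeleton: one subject, anatomical + functional data.
root = Path(tempfile.mkdtemp()) / "ds-example"
(root / "sub-01" / "anat").mkdir(parents=True)
(root / "sub-01" / "func").mkdir(parents=True)
(root / "dataset_description.json").write_text(json.dumps(
    {"Name": "Example dataset", "BIDSVersion": "1.0.2"}))
(root / "sub-01" / "anat" / "sub-01_T1w.nii.gz").touch()
(root / "sub-01" / "func" / "sub-01_task-rest_bold.nii.gz").touch()
(root / "sub-01" / "func" / "sub-01_task-rest_bold.json").write_text(
    json.dumps({"RepetitionTime": 2.0, "TaskName": "rest"}))

# Because filenames are key-value chains ending in a suffix, tools can
# parse them mechanically -- this is what enables automatic pipelines:
pattern = re.compile(
    r"sub-(?P<sub>[0-9A-Za-z]+)(_task-(?P<task>[0-9A-Za-z]+))?_(?P<suffix>\w+)\.")
for f in sorted(root.rglob("*.nii.gz")):
    print(f.name, pattern.match(f.name).groupdict())
```

The same mechanical parsability is what lets quality-assurance protocols and processing pipelines discover data without per-lab configuration.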
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie
2014-02-01
Sentinel lymph node (SLN) in vivo detection is vital in breast cancer surgery. A new near-infrared fluorescence-based surgical navigation system (SNS) imaging software package, developed by our research group, is presented for SLN detection surgery in this paper. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) developed in our lab, and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of the following modules: control, image grabbing, real-time display, data saving, and image processing. Several algorithms have been designed to achieve the performance of the software, for example an image registration algorithm based on correlation matching. Key features of the software include: setting the control parameters of the SNS; acquiring, displaying and storing the intraoperative imaging data automatically in real time; and analyzing and processing the saved image data. The developed software has been used to successfully detect the SLNs in 21 breast cancer patients. In the near future, we plan to improve the software performance, and it will be used extensively for clinical purposes.
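Translation registration by correlation matching, as mentioned for the image processing module, can be sketched as an FFT-based cross-correlation peak search. This is an illustrative method in numpy, not the SNS implementation:

```python
import numpy as np

def register_translation(ref, mov):
    """Integer-pixel shift that re-aligns `mov` with `ref`, found at the
    peak of the circular cross-correlation (computed via FFT). Apply the
    result with np.roll(mov, shift, axis=(0, 1))."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    size = np.array(ref.shape)
    peak[peak > size // 2] -= size[peak > size // 2]   # wrap to signed shifts
    return tuple(int(p) for p in peak)

# Demo: displace a random frame and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (3, -5), axis=(0, 1))
print(register_translation(ref, mov))   # -> (-3, 5)
```

Computing the correlation in the Fourier domain keeps the search fast enough for real-time display loops; sub-pixel refinement around the peak would be the natural next step.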
Astronomy for Everyone: Harvard's Move Toward an All-Inclusive Astronomy Lab and Telescope
NASA Astrophysics Data System (ADS)
Bieryla, Allyson
2016-01-01
Harvard University has a growing astronomy program that offers various courses to the undergraduate concentrators, secondaries and non-majors. Many of the courses involve labs that use the 16-inch DFM Clay Telescope for night-time observations and the heliostat for observing the Sun. The goal is to proactively adapt the lab and telescope facilities to accommodate all students with disabilities. The current focus is converting the labs to accommodate visually impaired students. Using tactile images and sound, the intention is to create an experience equivalent to that of a student with full sight.
Phoenix Again Carries Soil to Wet Chemistry Lab
NASA Technical Reports Server (NTRS)
2008-01-01
This image taken by the Surface Stereo Imager on NASA's Phoenix Mars Lander shows the lander's Robotic Arm scoop positioned over the Wet Chemistry Lab Cell 1 delivery funnel on Sol 41, the 42nd Martian day after landing, or July 6, 2008, after a soil sample was delivered to the instrument. The instrument's Cell 1 is the second one from the foreground of the image. The first cell, Cell 0, received a soil sample two weeks earlier. This image has been enhanced to brighten the scene. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Astrophysics Data System (ADS)
Hopper, E.; Fischer, K. M.
2016-12-01
The lithosphere preserves a record of past and present tectonic processes in its internal structures and its boundary with the underlying asthenosphere. We use common conversion point stacked Sp converted waves recorded by EarthScope's Transportable Array, as well as other available permanent and temporary broadband stations, to image such structures in the lithospheric mantle of the contiguous U.S. In the tectonically youngest western U.S., a shallow, sharp velocity gradient at the base of the lithosphere suggests a boundary defined by ponded melt. The lithosphere thickens with age of volcanism, implying the lithosphere is a melt-mitigated, conductively cooling thermal boundary layer. Beneath older, colder lithosphere where melt fractions are likely much lower, the velocity gradient at the base of such a layer should be a more diffuse, primarily thermal boundary. This is consistent with observations in the eastern U.S. where the lithosphere-asthenosphere boundary (LAB) is locally sharp and shallower only in areas of inferred enhanced upwelling - such as ancient hot spot tracks and areas of inferred delamination. In the cratonic interior, the LAB is even more gradual in depth, and is transparent to Sp waves with dominant periods of 10 s. Although seismic imaging only provides a snapshot of the lithosphere as it is today, preserved internal structures extend the utility of this imaging back into deep geological time. Ancient accretion within the cratonic lithospheric mantle is preserved as dipping structures associated with relict subducted slabs from Paleoproterozoic continental accretion, suggesting that lateral accretion was integral to the cratonic mantle root formation process. Metasomatism, melt migration and ponding below a carbonated peridotite solidus explain a sub-horizontal mid-lithospheric discontinuity (MLD) commonly observed at 70-100 km depth. 
This type of MLD is strongest in Mesoproterozoic and older lithosphere, suggesting that it formed more vigorously in the deep past, that a billion years or more are required to build up an observable volatile-rich layer, or that strong, ancient lithosphere is required to support an inherently weak, volatilized layer.
Development of the Science Data System for the International Space Station Cold Atom Lab
NASA Technical Reports Server (NTRS)
van Harmelen, Chris; Soriano, Melissa A.
2015-01-01
Cold Atom Laboratory (CAL) is a facility that will enable scientists to study ultra-cold quantum gases in a microgravity environment on the International Space Station (ISS) beginning in 2016. The primary science data for each experiment consist of two images taken in quick succession: the first image is of the trapped cold atoms and the second is of the background. The two images are subtracted to obtain optical density. These raw Level 0 atom and background images are processed into the Level 1 optical density data product, and then into the Level 2 data products: atom number, Magneto-Optical Trap (MOT) lifetime, magnetic chip-trap atom lifetime, and condensate fraction. These products can also be used as diagnostics of instrument health. With experiments being conducted for 8 hours every day, the amount of data being generated poses many technical challenges, such as downlinking and managing the required data volume. A parallel processing design is described, implemented, and benchmarked. In addition to optimizing the data pipeline, accuracy and speed in producing the Level 1 and Level 2 data products are key. Algorithms for feature recognition are explored, facilitating image cropping and accurate atom number calculations.
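The Level 0 to Level 1 step described above, and a Level 2 atom-number estimate, follow the standard absorption-imaging relations: OD = -ln(I_atom / I_background) pixel-wise, and N = sum(OD) * A_pixel / sigma0, with sigma0 the absorption cross-section. This numpy sketch is generic, not CAL's pipeline; the floor guard, pixel area, and cross-section values are placeholders.

```python
import numpy as np

def optical_density(atom, background, floor=1.0):
    """Level 1 product: pixel-wise OD = -ln(I_atom / I_background).
    `floor` guards against log-of-zero in dark pixels."""
    atom = np.maximum(np.asarray(atom, dtype=float), floor)
    background = np.maximum(np.asarray(background, dtype=float), floor)
    return -np.log(atom / background)

def atom_number(od, pixel_area, sigma0):
    """Level 2 product: integrate OD over the frame and divide by the
    absorption cross-section to estimate the number of atoms."""
    return float(od.sum() * pixel_area / sigma0)

# Demo: a synthetic atom frame absorbing against a flat background.
bg = np.full((32, 32), 1000.0)
od_true = np.zeros((32, 32))
od_true[10:20, 10:20] = 0.5          # square "cloud" with OD 0.5
atom = bg * np.exp(-od_true)         # forward model of absorption
od = optical_density(atom, bg)
print(atom_number(od, pixel_area=1.0, sigma0=1.0))   # -> 50.0
```

Because each pixel is independent, this step parallelises trivially across image tiles or frames, which is the shape of the parallel processing design the abstract benchmarks.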
Complex Microfluidic Systems Architectures and Applications to Micropower Generation
2010-07-07
… signal. Images are recorded via a Hamamatsu Orca camera and processed with MATLAB. The observed results show the ability of the micromixer to distribute … Generator was produced.
Nanoscale Imaging with a Single Quantum Dot
2011-12-19
… mL round-bottom flask equipped with a condenser, a thermometer and a magnetic stirring bar. After the EG was heated to 160 °C in an oil bath, 0.5 mL … radiatively transferred to the wire's SPP mode through an electric dipole interaction. The efficiency of this process scales as the spontaneous emission …
NASA Technical Reports Server (NTRS)
2001-01-01
Interior of the Equipment Module for the Laminar Soot Processes (LSP-2) experiment that will fly on the STS-107 Research 1 mission in 2002 (LSP-1 flew on the Microgravity Sciences Lab-1 mission in 1997). The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner (yellow ellipse), similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in) long. Measurements are made with color TV cameras, a radiometer or heat sensor (blue circle), and laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, OH, manages the project.
Pandiyan, Vimal Prabhu; John, Renu
2016-01-20
We propose a versatile 3D phase-imaging microscope platform for real-time imaging of optomicrofluidic devices based on the principle of digital holographic microscopy (DHM). Lab-on-chip microfluidic devices fabricated on transparent polydimethylsiloxane (PDMS) and glass substrates have attained wide popularity in biological sensing applications. However, monitoring, visualization, and characterization of microfluidic devices, microfluidic flows, and the biochemical kinetics happening in these devices are difficult due to the lack of proper techniques for real-time imaging and analysis. Traditional bright-field microscopic techniques fail in such imaging applications, as the microfluidic channels and the fluids carrying biological samples are transparent and not visible in bright light. Phase-based microscopy techniques, which can image the phase of the microfluidic channel and the changes in refractive index due to the fluids and biological samples present in the channel, are ideal for imaging the fluid flow dynamics in a microfluidic channel at high resolution. This paper demonstrates three-dimensional imaging of a microfluidic device with nanometric depth precision and high SNR. We demonstrate imaging of microelectrodes of nanometric thickness patterned on a glass substrate and of the microfluidic channel. Three-dimensional imaging of a transparent PDMS optomicrofluidic channel, fluid flow, and live yeast cell flow in this channel has been demonstrated using DHM. We also quantify the average velocity of fluid flow through the channel. In comparison to any conventional bright-field microscope, the 3D depth information in the images illustrated in this work carries much more information about the biological system under observation. The results demonstrated in this paper prove the high potential of DHM for imaging optofluidic devices; for detection of pathogens, cells, and bioanalytes on lab-on-chip devices; and for studying microfluidic dynamics in real time based on phase changes.
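The nanometric depth quantification rests on the standard quantitative-phase relation between measured (unwrapped) phase and physical height, h = phi * lambda / (2 * pi * dn). The sketch below uses illustrative values: the PDMS-in-air index contrast and the He-Ne wavelength are assumptions, not parameters from the paper.

```python
import numpy as np

def height_from_phase(phase, wavelength, dn):
    """Physical height from unwrapped DHM phase: h = phi * lambda / (2*pi*dn),
    where dn is the refractive-index contrast between feature and medium."""
    return phase * wavelength / (2 * np.pi * dn)

# Hypothetical numbers: a 100 nm PDMS step (n ~ 1.43) in air (n = 1.0),
# imaged at the He-Ne wavelength of 632.8 nm.
wavelength, dn, h_true = 632.8e-9, 0.43, 100e-9
phase = 2 * np.pi * dn * h_true / wavelength   # forward model
print(height_from_phase(phase, wavelength, dn))  # recovers ~1e-7 m
```

The same relation explains the "nanometric" claim: a phase sensitivity of a few milliradians translates, through lambda / (2 * pi * dn), into sub-nanometre height sensitivity.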
Ames Lab 101: Real-Time 3D Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Song
2010-08-02
Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.
Vihavainen, Elina; Lundström, Hanna-Saara; Susiluoto, Tuija; Koort, Joanna; Paulin, Lars; Auvinen, Petri; Björkroth, K. Johanna
2007-01-01
Some psychrotrophic lactic acid bacteria (LAB) are specific meat spoilage organisms in modified-atmosphere-packaged (MAP), cold-stored meat products. To determine if incoming broilers or the production plant environment is a source of spoilage LAB, a total of 86, 122, and 447 LAB isolates from broiler carcasses, production plant air, and MAP broiler products, respectively, were characterized using a library of HindIII restriction fragment length polymorphism (RFLP) patterns of the 16S and 23S rRNA genes as operational taxonomic units in numerical analyses. Six hundred thirteen LAB isolates from the total of 655 clustered in 29 groups considered to be species specific. Sixty-four percent of product isolates clustered either with Carnobacterium divergens or with Carnobacterium maltaromaticum type strains. The third major product-associated cluster (17% of isolates) was formed by unknown LAB. Representative strains from these three clusters were analyzed for the phylogeny of their 16S rRNA genes. This analysis verified that the two largest RFLP clusters consisted of carnobacteria and showed that the unknown LAB group consisted of Lactococcus spp. No product-associated LAB were detected in broiler carcasses sampled at the beginning of slaughter, whereas carnobacteria and lactococci, along with some other specific meat spoilage LAB, were recovered from processing plant air at many sites. This study reveals that incoming broiler chickens are not major sources of psychrotrophic spoilage LAB, whereas the detection of these organisms from the air of the processing environment highlights the role of processing facilities as sources of LAB contamination. PMID:17142357
Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A
2017-02-11
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
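The abstract's theoretical wall-clock models are not given in closed form here; a toy version under stated assumptions can still convey the trade-off. In the sketch below, a shared-NFS cluster pays a network transfer cost that scales with the number of concurrent readers, while the Hadoop/HBase arrangement replaces that with a small local-read overhead. All parameters (link speed, overheads) are illustrative assumptions, not figures from the paper.

```python
def wall_clock_traditional(n_jobs, cores, data_gb, job_min, nfs_gbps=1.0):
    """Rough wall-clock model (minutes) for a cluster with shared NFS storage.

    Jobs run in waves of `cores` concurrent jobs; each wave shares the NFS
    link, so transfer time per wave scales with the number of readers.
    All parameters are assumed values for illustration.
    """
    waves = -(-n_jobs // cores)  # ceiling division
    concurrent = min(n_jobs, cores)
    transfer_per_wave_min = (data_gb * 8.0 * concurrent / nfs_gbps) / 60.0
    return waves * (transfer_per_wave_min + job_min)

def wall_clock_hadoop(n_jobs, cores, data_gb, job_min, local_overhead_min=0.1):
    """Co-located storage and computation: network transfer is replaced by a
    small fixed local-read overhead per job (assumed)."""
    waves = -(-n_jobs // cores)
    return waves * (job_min + local_overhead_min)

# "Short" jobs on "large" data: the regime where the paper expects Hadoop to win.
t_sge = wall_clock_traditional(100, 50, data_gb=1.0, job_min=1.0)
t_hadoop = wall_clock_hadoop(100, 50, data_gb=1.0, job_min=1.0)
```

For long jobs on small data the transfer term becomes negligible and the two models converge, which matches the paper's point that only some workloads cross into "big data" territory.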
NASA Technical Reports Server (NTRS)
Coles, J. B.; Richardson, Brandon S.; Eastwood, Michael L.; Sarture, Charles M.; Quetin, Gregory R.; Hernandez, Marco A.; Kroll, Linley A.; Nolte, Scott H.; Porter, Michael D.; Green, Robert O.
2011-01-01
The quality of the quantitative spectral data collected by an imaging spectrometer instrument is critically dependent upon the accuracy of the spectral and radiometric calibration of the system. In order for the collected spectra to be scientifically useful, the calibration of the instrument must be precisely known not only prior to, but also during, data collection. Thus, in addition to a rigorous in-lab calibration procedure, the airborne instruments designed and built by the NASA/JPL Imaging Spectroscopy Group incorporate an on-board calibrator (OBC) system with the instrument to provide auxiliary in-use system calibration data. The output of the OBC source illuminates a target panel on the backside of the foreoptics shutter both before and after data collection. The OBC and in-lab calibration data sets are then used to validate and post-process the collected spectral image data. The resulting accuracy of the spectrometer output data is therefore integrally dependent upon the stability of the OBC source. In this paper we describe the design and application of the latest iteration of this novel device developed at NASA/JPL, which integrates a halogen-cycle source with a precisely designed fiber coupling system and a fiber-based intensity monitoring feedback loop. The OBC source in this Airborne Testbed Spectrometer was run over a period of 15 hours while both the radiometric and spectral stability of the output were measured; the output demonstrated stability to within 1% of nominal.
A novel blinding digital watermark algorithm based on lab color space
NASA Astrophysics Data System (ADS)
Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao
2010-02-01
A blind digital image watermarking algorithm must be able to extract the watermark without any extra information beyond the watermarked image itself. However, most current blind watermarking algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermarking algorithm based on the Lab color space, which does not have the disadvantage mentioned above. The algorithm first marks the watermark region size and position by embedding some regular blocks, called anchor points, in the image spatial domain, and then embeds the watermark into the image. In doing so, the watermark information can be easily extracted even after cropping and scale changes to the image. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformation. This algorithm has already been used in a copyright protection project and works very well.
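Since the embedding operates in the Lab color space, the first processing step is an sRGB-to-CIELAB conversion. The sketch below implements the standard conversion formulas (D65 white point); it is generic colorimetry, not the paper's own code.

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point), per the standard formulas."""
    # sRGB -> linear RGB (inverse gamma)
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # XYZ -> Lab
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b_out = 200.0 * (fy - fz)
    return L, a, b_out
```

A useful sanity check is that pure white maps to L ≈ 100 with a ≈ b ≈ 0, and black to L ≈ 0.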
NASA Astrophysics Data System (ADS)
Farmer, J. D.; Nunez, J. I.; Sellar, R. G.; Gardner, P. B.; Manatt, K. S.; Dingizian, A.; Dudik, M. J.; McDonnell, G.; Le, T.; Thomas, J. A.; Chu, K.
2011-12-01
The Multispectral Microscopic Imager (MMI) is a prototype instrument presently under development for future astrobiological missions to Mars. The MMI is designed to be an arm-mounted rover instrument for use in characterizing the microtexture and mineralogy of materials along geological traverses [1,2,3]. Such geological information is regarded as essential for interpreting petrogenesis and geological history, and when acquired in near real-time, can support hypothesis-driven exploration and optimize science return. Correlated microtexture and mineralogy also provide essential data for selecting samples for analysis with onboard lab instruments, and for prioritizing samples for potential Earth return. The MMI design employs multispectral light-emitting diodes (LEDs) and an uncooled focal plane array to achieve the low mass (<1 kg), low cost, and high reliability (no moving parts) required for an arm-mounted instrument on a planetary rover [2,3]. The MMI acquires multispectral reflectance images at 62 μm/pixel, in which each image pixel comprises a 21-band VNIR spectrum (0.46 to 1.73 μm). This capability enables the MMI to discriminate and resolve the spatial distribution of minerals and textures at the microscale [2,3]. By extending the spectral range into the infrared and increasing the number of spectral bands, the MMI exceeds the capabilities of current microimagers, including the MER Microscopic Imager (MI) [4], the Phoenix mission Robotic Arm Camera (RAC) [5], and the Mars Science Laboratory's Mars Hand Lens Imager (MAHLI) [6].
In this report we will review the capabilities of the MMI by highlighting recent lab and field applications, including: 1) glove box deployments in the Astromaterials lab at Johnson Space Center to analyze Apollo lunar samples; 2) GeoLab glove box deployments during the 2011 Desert RATS field trials in northern AZ to characterize analog materials collected by astronauts during simulated EVAs; 3) field deployments on Mauna Kea Volcano, Hawaii, during NASA's 2010 ISRU field trials, to analyze materials at the primary feedstock mining site; 4) lab characterization of geological samples from a complex, volcanic-hydrothermal terrain in the Cady Mts., SE Mojave Desert, California. We will show how field and laboratory applications have helped drive the development and refinement of MMI capabilities, while identifying synergies with other potential payload instruments (e.g. X-ray Diffraction) for solving real geological problems.
NASA Astrophysics Data System (ADS)
Sunarya, I. Made Gede; Yuniarno, Eko Mulyanto; Purnomo, Mauridhi Hery; Sardjono, Tri Arief; Sunu, Ismoyo; Purnama, I. Ketut Eddy
2017-06-01
The Carotid Artery (CA) is one of the vital organs in the human body. Useful CA features include position, size, and volume; the position feature can be used to determine the initial position for tracking. Examination of CA features can use ultrasound. Ultrasound imaging is operator-dependent, so the images obtained by two or more different operators may differ, which can affect the process of locating the CA. To reduce the level of subjectivity among operators, the position of the CA can be determined automatically. In this study, the proposed method segments the CA in B-mode ultrasound images based on morphology, geometry, and gradient direction. The study consists of three steps: data collection, preprocessing, and artery segmentation. The data used in this study were taken directly by the researchers and taken from the Brno University signal processing lab database. Each data set contains 100 carotid artery B-mode ultrasound images. The artery is modeled as an ellipse with center c, major axis a, and minor axis b. The proposed method achieves a high score on each data set: 97% (data set 1), 73% (data set 2), and 87% (data set 3). These segmentation results will then be used in the process of tracking the CA.
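The ellipse artery model and a segmentation score can be sketched compactly. Below, an axis-aligned ellipse mask stands in for the fitted artery, and a Dice overlap scores it against a reference; the paper does not specify its scoring metric, so Dice here is an illustrative assumption, as are all the toy parameters.

```python
import numpy as np

def ellipse_mask(shape, c, a, b):
    """Boolean mask of an axis-aligned ellipse with centre c=(cy, cx),
    semi-axis a along columns and b along rows (parameter meanings assumed)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return ((xx - c[1]) / a) ** 2 + ((yy - c[0]) / b) ** 2 <= 1.0

def dice(seg, truth):
    """Dice overlap between a segmentation and ground truth (both boolean)."""
    inter = np.logical_and(seg, truth).sum()
    return 2.0 * inter / (seg.sum() + truth.sum())

# Toy check: a fitted ellipse shifted one pixel from the "true" artery
truth = ellipse_mask((64, 64), (32, 32), 10, 8)
fitted = ellipse_mask((64, 64), (33, 32), 10, 8)
score = dice(fitted, truth)
```

A one-pixel misfit of a ~10-pixel artery still scores high, which is why per-data-set percentages in the 70-97% range are meaningful measures of localization quality.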
Berkeley Lab Wins Seven 2015 R&D 100 Awards | Berkeley Lab
Pu, Yuan-Yuan; Sun, Da-Wen
2015-12-01
Mango slices were dried by microwave-vacuum drying using a domestic microwave oven equipped with a vacuum desiccator inside. Two lab-scale hyperspectral imaging (HSI) systems were employed for moisture prediction. The Page and the Two-term thin-layer drying models were suitable to describe the current drying process, with a goodness of fit of R^2 = 0.978. Partial least squares (PLS) regression was applied to correlate the mean spectrum of each slice with the reference moisture content. With three waveband selection strategies, optimal wavebands corresponding to moisture prediction were identified. The best model, RC-PLS-2 (Rp^2 = 0.972 and RMSEP = 4.611%), was implemented in the moisture visualization procedure. The moisture distribution map clearly showed that the moisture content in the central part of the mango slices was lower than that of other parts. The present study demonstrated that hyperspectral imaging is a useful tool for non-destructively and rapidly measuring and visualizing the moisture content during the drying process. Copyright © 2015 Elsevier Ltd. All rights reserved.
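The Page thin-layer model mentioned above, MR = exp(-k·t^n), can be fitted by a standard log-log linearization: ln(-ln MR) = ln k + n·ln t is linear in ln t, so ordinary least squares recovers k and n. The drying-curve values below are synthetic, chosen only to demonstrate the fit; they are not the paper's data.

```python
import numpy as np

def fit_page_model(t, mr):
    """Fit the Page thin-layer drying model MR = exp(-k * t**n).

    Linearised form: ln(-ln(MR)) = ln(k) + n * ln(t); a first-degree
    polynomial fit then recovers k and n.
    """
    t, mr = np.asarray(t, float), np.asarray(mr, float)
    y = np.log(-np.log(mr))
    x = np.log(t)
    n, ln_k = np.polyfit(x, y, 1)
    return np.exp(ln_k), n

# Synthetic drying curve generated with k=0.05, n=1.3 (assumed, for demonstration):
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
mr = np.exp(-0.05 * t ** 1.3)
k_hat, n_hat = fit_page_model(t, mr)
```

On noise-free synthetic data the fit recovers the generating parameters exactly; with measured moisture ratios the same procedure yields the R^2 reported in the abstract.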
Crasto, Chiquito J.; Marenco, Luis N.; Liu, Nian; Morse, Thomas M.; Cheung, Kei-Hoi; Lai, Peter C.; Bahl, Gautam; Masiar, Peter; Lam, Hugo Y.K.; Lim, Ernest; Chen, Huajin; Nadkarni, Prakash; Migliore, Michele; Miller, Perry L.; Shepherd, Gordon M.
2009-01-01
This article presents the latest developments in neuroscience information dissemination through the SenseLab suite of databases: NeuronDB, CellPropDB, ORDB, OdorDB, OdorMapDB, ModelDB and BrainPharm. These databases include information related to: (i) neuronal membrane properties and neuronal models, and (ii) genetics, genomics, proteomics and imaging studies of the olfactory system. We describe here: the new features for each database, the evolution of SenseLab’s unifying database architecture and instances of SenseLab database interoperation with other neuroscience online resources. PMID:17510162
Fahed, Robert; Ben Maacha, Malek; Ducroux, Célina; Khoury, Naim; Blanc, Raphaël; Piotin, Michel; Lapergue, Bertrand
2018-05-14
We aimed to assess the agreement between study investigators and the core laboratory (core lab) of a thrombectomy trial for imaging scores. The Alberta Stroke Program Early CT Score (ASPECTS), the European Collaborative Acute Stroke Study (ECASS) hemorrhagic transformation (HT) classification, and the Thrombolysis In Cerebral Infarction (TICI) scores as recorded by study investigators were compared with the core lab scores in order to assess interrater agreement, using Cohen's unweighted and weighted kappa statistics. There were frequent discrepancies between study sites and core lab for all the scores. Agreement for ASPECTS and ECASS HT classification was less than substantial, with disagreement occurring in more than one-third of cases. Agreement was higher on MRI-based scores than on CT, and was improved after dichotomization on both CT and MRI. Agreement for TICI scores was moderate (with disagreement occurring in more than 25% of patients), and went above the substantial level (less than 10% disagreement) after dichotomization (TICI 0/1/2a vs 2b/3). Discrepancies between scores assessed by the imaging core lab and those reported by study sites occurred in a significant proportion of patients. Disagreement in the assessment of ASPECTS and day 1 HT scores was more frequent on CT than on MRI. The agreement for the dichotomized TICI score (the trial's primary outcome) was substantial, with less than 10% of disagreement between study sites and core lab. NCT02523261, Post-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
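The unweighted Cohen's kappa used for the interrater comparison above is straightforward to compute: observed agreement minus chance agreement, normalized by one minus chance agreement. The dichotomized TICI scores below are hypothetical example data, not values from the trial.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa between two raters' categorical scores."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (po - pe) / (1.0 - pe)

# Hypothetical dichotomised TICI data: 0 = TICI 0/1/2a, 1 = TICI 2b/3
site = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
core = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
kappa = cohens_kappa(site, core)
```

With 8/10 raw agreement but imbalanced marginals, kappa here lands near 0.52, illustrating why raw agreement alone overstates concordance between sites and core lab.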
Scientific Visualization, Seeing the Unseeable
LBNL
2017-12-09
June 24, 2008 Berkeley Lab lecture: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.
Holographic quantitative imaging of sample hidden by turbid medium or occluding objects
NASA Astrophysics Data System (ADS)
Bianco, V.; Miccio, L.; Merola, F.; Memmolo, P.; Gennari, O.; Paturzo, Melania; Netti, P. A.; Ferraro, P.
2015-03-01
Digital Holography (DH) numerical procedures have been developed to allow imaging through turbid media. A fluid is considered turbid when dispersed particles provoke strong light scattering, thus destroying the image formation by any standard optical system. Here we show that sharp amplitude imaging and phase-contrast mapping of objects hidden behind a turbid medium and/or occluding objects are possible in harsh noise conditions and with a large field of view by Multi-Look DH microscopy. In particular, it will be shown that both amplitude imaging and phase-contrast mapping of cells hidden behind a flow of red blood cells can be obtained. This allows, in a noninvasive way, the quantitative evaluation of living processes in Lab-on-Chip platforms where conventional microscopy techniques fail. The combination of this technique with endoscopic imaging can pave the way for holographic blood vessel inspection, e.g. to look for settled cholesterol plaques as well as blood clots for rapid diagnosis of blood diseases.
A user-friendly LabVIEW software platform for grating based X-ray phase-contrast imaging.
Wang, Shenghao; Han, Huajie; Gao, Kun; Wang, Zhili; Zhang, Can; Yang, Meng; Wu, Zhao; Wu, Ziyu
2015-01-01
X-ray phase-contrast imaging can provide greatly improved contrast over conventional absorption-based imaging for weakly absorbing samples, such as biological soft tissues and fibre composites. In this study, we introduce an easy and fast way to develop a user-friendly software platform dedicated to the new grating-based X-ray phase-contrast imaging setup at the National Synchrotron Radiation Laboratory of the University of Science and Technology of China. The software controls 21 motorized stages, a piezoelectric stage, and an X-ray tube, and also covers image acquisition with a flat-panel detector for automatic phase-stepping scans. Moreover, a data post-processing module for signal retrieval and other custom features is available. With a seamless integration of all the necessary functions in one software package, this platform greatly facilitates users' activities during experimental runs with this grating-based X-ray phase-contrast imaging setup.
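The signal-retrieval step of a phase-stepping scan is conventionally done by Fourier analysis of the per-pixel stepping curve: the zeroth harmonic gives absorption, the first harmonic's angle gives differential phase, and its magnitude gives the dark-field visibility. The sketch below shows this standard analysis, not the platform's actual code; the synthetic stepping curve is an assumption for demonstration.

```python
import numpy as np

def phase_stepping_retrieval(stack):
    """Retrieve absorption, differential-phase and visibility signals from a
    phase-stepping scan (standard Fourier analysis).

    `stack` has shape (n_steps, H, W): one flat-corrected frame per grating
    step, covering exactly one grating period.
    """
    coeffs = np.fft.fft(stack, axis=0)
    n = stack.shape[0]
    a0 = np.abs(coeffs[0]) / n          # mean intensity -> absorption signal
    a1 = coeffs[1] / n                  # first harmonic
    phase = np.angle(a1)                # differential phase
    visibility = 2.0 * np.abs(a1) / a0  # dark-field / fringe visibility
    return a0, phase, visibility

# Synthetic 8-step curve per pixel: I_k = 10 + 3*cos(2*pi*k/8 + 0.5)
k = np.arange(8).reshape(-1, 1, 1)
stack = 10.0 + 3.0 * np.cos(2 * np.pi * k / 8 + 0.5) * np.ones((8, 2, 2))
a0, phi, vis = phase_stepping_retrieval(stack)
```

For the synthetic curve the retrieval returns mean 10, phase 0.5 rad, and visibility 2·(3/2)/10 = 0.3, matching the generating parameters.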
Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier
2018-06-01
Since its first description, the Western blot has been widely used in molecular biology labs. It is a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. Western blot quantification constitutes a critical step in obtaining accurate and reproducible results. Given the technical knowledge required for densitometry analysis and the constraints of resource availability, standard office scanners are often used to image developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with the ImageJ software, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate, and reproducible approach that could be used where resources are limited. Copyright © 2018 Elsevier B.V. All rights reserved.
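The core densitometry idea behind such quantification can be sketched in a few lines: estimate a per-column background baseline from an empty region of the lane, subtract it from the band region, and integrate what remains. This is a generic background-subtraction sketch under assumed geometry, not the specific method proposed in the paper.

```python
import numpy as np

def band_density(lane, band_rows, bg_rows):
    """Integrated band density with local background subtraction.

    `lane` is a 2-D array of inverted pixel intensities (higher = darker band),
    `band_rows` the row slice covering the band, and `bg_rows` a nearby empty
    slice used to estimate the background per column.
    """
    band = lane[band_rows].astype(float)
    background = lane[bg_rows].astype(float).mean(axis=0)  # per-column baseline
    corrected = band - background                          # subtract baseline
    corrected[corrected < 0] = 0.0                         # clip negative noise
    return corrected.sum()

# Toy lane: uniform background of 10 with a band adding 50 in rows 8-11
lane = np.full((20, 5), 10.0)
lane[8:12] += 50.0
density = band_density(lane, slice(8, 12), slice(0, 4))
```

Because the baseline is estimated per column, a tilted or unevenly scanned film background is removed before integration, which is the main failure mode of naive global thresholding.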
Double sided grating fabrication for high energy X-ray phase contrast imaging
Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick; ...
2018-04-19
State of the art grating fabrication currently limits the maximum source energy that can be used in lab based x-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm, demonstrating a methodology that can be applied to any grating fabrication approach, extending the attainable aspect ratios and allowing higher energy lab based XPCI systems.
1984-06-01
[Report-form residue; recoverable details: Georgia Institute of Technology, School of Electrical Engineering, Atlanta, Georgia 30332; controlling office: U.S. Army Research Office, Post Office Box 12211, Research Triangle Park, NC 27709; report date June 1984. Surviving abstract fragment: "…could be attached to it to produce a permanent record of images. A video control unit, designed and built in the Optics Lab, was employed to direct and…"]
Simulating Planet-Hunting in a Lab
2007-04-11
Three simulated planets -- one as bright as Jupiter, one half as bright as Jupiter and one as faint as Earth -- stand out plainly in this image created from a sequence of 480 images captured by the High Contrast Imaging Testbed at NASA JPL.
Ultrafast Imaging using Spectral Resonance Modulation
NASA Astrophysics Data System (ADS)
Huang, Eric; Ma, Qian; Liu, Zhaowei
2016-04-01
CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
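The compressive-sensing idea behind the single-pixel camera can be made concrete: each measurement is the scene summed through one modulator mask, and a sparse scene is recovered from far fewer measurements than pixels. The sketch below simulates this with random ±1 masks and recovers the scene with orthogonal matching pursuit; the scene, mask design, and sparsity level are all illustrative assumptions, not details of the SRM device.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(Phi, y, n_nonzero):
    """Orthogonal matching pursuit: recover a sparse x from y = Phi @ x."""
    residual = y.astype(float).copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        # Pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Single-pixel camera model (illustrative): each measurement sums the scene
# through one random binary mask displayed on the modulator.
n_pixels, n_meas = 64, 32
scene = np.zeros(n_pixels)
scene[[5, 20, 41]] = [1.0, 0.7, 0.4]  # sparse "image"
masks = rng.choice([-1.0, 1.0], size=(n_meas, n_pixels)) / np.sqrt(n_meas)
y = masks @ scene                      # one detector reading per mask
recovered = omp(masks, y, n_nonzero=3)
```

Here 32 detector readings stand in for 64 pixels; the speed advantage of an SRM-style modulator is that those masks could be switched without mechanical or readout-rate limits.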
Real-time landmark-based unrestrained animal tracking system for motion-corrected PET/SPECT imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.S. Goddard; S.S. Gleason; M.J. Paulus
2003-08-01
Oak Ridge National Laboratory (ORNL) and Jefferson Lab are collaborating to develop a new high-resolution single photon emission tomography (SPECT) instrument to image unrestrained laboratory animals. This technology development will allow functional imaging studies to be performed on the animals without the use of anesthetic agents. It could have eventual clinical applications for performing functional imaging studies on patients that cannot remain still (Parkinson's patients, Alzheimer's patients, small children, etc.) during a PET or SPECT scan. A key component of this new device is the position tracking apparatus. The tracking apparatus is an integral part of the gantry and is designed to measure the spatial position of the animal at a rate of 10-15 frames per second with sub-millimeter accuracy. Initial work focuses on brain studies, where anesthetic agents or physical restraint can significantly impact physiologic processes.
Recognition and inference of crevice processing on digitized paintings
NASA Astrophysics Data System (ADS)
Karuppiah, S. P.; Srivatsa, S. K.
2013-03-01
This paper addresses the detection and removal of cracks in digitized paintings. Cracks are detected by thresholding. Afterwards, thin dark brush strokes that have been misidentified as cracks are removed using a median radial basis function neural network on hue and saturation data, together with a semi-automatic procedure based on region growing. Finally, the cracks are filled using a Wiener filter. The method identifies and removes most of the cracks in digitized paintings, achieving an improvement of about 90%. The approach applies not only to digitized paintings but also to medical images and bitmap images, and is implemented in MATLAB.
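The thresholding stage can be sketched simply: cracks appear as pixels much darker than their surroundings, so flagging pixels well below a baseline intensity yields candidate crack pixels. The sketch below uses a global mean as a crude baseline and an assumed margin; it illustrates only the detection step, not the MRBF false-positive removal or the Wiener-filter inpainting.

```python
import numpy as np

def detect_cracks(gray, margin=60.0):
    """Flag candidate crack pixels in a grayscale painting image.

    Cracks are locally dark structures, so pixels falling more than `margin`
    below the image mean are marked. The global mean is a simplification;
    a local baseline would be used in practice.
    """
    baseline = gray.mean()
    return gray < baseline - margin

# Toy "painting": bright canvas (200) with a dark crack line (20)
img = np.full((10, 10), 200.0)
img[5, 2:9] = 20.0
mask = detect_cracks(img)
```

On the toy image the seven crack pixels are flagged and the canvas is not; on real paintings the misidentified dark brush strokes this step produces are exactly what the MRBF stage then removes.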
The NOAO Data Lab PHAT Photometry Database
NASA Astrophysics Data System (ADS)
Olsen, Knut; Williams, Ben; Fitzpatrick, Michael; PHAT Team
2018-01-01
We present a database containing both the combined photometric object catalog and the single epoch measurements from the Panchromatic Hubble Andromeda Treasury (PHAT). This database is hosted by the NOAO Data Lab (http://datalab.noao.edu), and as such exposes a number of data services to the PHAT photometry, including access through a Table Access Protocol (TAP) service, direct PostgreSQL queries, web-based and programmatic query interfaces, remote storage space for personal database tables and files, and a JupyterHub-based Notebook analysis environment, as well as image access through a Simple Image Access (SIA) service. We show how the Data Lab database and Jupyter Notebook environment allow for straightforward and efficient analyses of PHAT catalog data, including maps of object density, depth, and color, extraction of light curves of variable objects, and proper motion exploration.
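Because the database is exposed through a Table Access Protocol (TAP) service, a catalog query is just an ADQL string posted to the service's sync endpoint with the parameters the IVOA TAP standard requires. The endpoint path, table name, and column names below are illustrative assumptions; the actual PHAT schema should be taken from the Data Lab documentation.

```python
from urllib.parse import urlencode

TAP_SYNC = "https://datalab.noao.edu/tap/sync"  # assumed endpoint path

# Hypothetical table/column names, for illustration only:
adql = """
SELECT ra, dec, f475w_vega, f814w_vega
FROM phat.photometry
WHERE f475w_vega - f814w_vega BETWEEN 0.5 AND 1.5
""".strip()

params = urlencode({
    "REQUEST": "doQuery",  # required TAP parameters per the IVOA spec
    "LANG": "ADQL",
    "FORMAT": "csv",
    "QUERY": adql,
})
request_url = TAP_SYNC + "?" + params
# The URL could then be fetched with urllib.request.urlopen(request_url),
# or the same query run through the Data Lab's own query interfaces.
```

The same query could equally be issued as a direct PostgreSQL query or from the JupyterHub Notebook environment mentioned above; TAP is simply the most portable of the listed access routes.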
Gao, Yuan; Peters, Ove A; Wu, Hongkun; Zhou, Xuedong
2009-02-01
The purpose of this study was to customize an application framework by using the MeVisLab image processing and visualization platform for three-dimensional reconstruction and assessment of tooth and root canal morphology. One maxillary first molar was scanned before and after preparation with ProTaper by using micro-computed tomography. With a customized application framework based on MeVisLab, internal and external anatomy was reconstructed. Furthermore, the dimensions of root canal and radicular dentin were quantified, and effects of canal preparation were assessed. Finally, a virtual preparation with risk analysis was performed to simulate the removal of a broken instrument. This application framework provided an economical platform and met current requirements of endodontic research. The broad-based use of high-quality free software and the resulting exchange of experience might help to improve the quality of endodontic research with micro-computed tomography.
NASA Astrophysics Data System (ADS)
Haruzi, Peleg; Halisch, Matthias; Katsman, Regina; Waldmann, Nicolas
2016-04-01
Lower Cretaceous sandstone serves as a hydrocarbon reservoir in several places around the world, and potentially in the Hatira formation in the Golan Heights, northern Israel. The purpose of the current research is to characterize the petrophysical properties of these sandstone units. The study is carried out by two alternative methods: conventional macroscopic lab measurements, and CT scanning, image processing, and subsequent fluid mechanics simulations at the microscale, followed by upscaling to the conventional macroscopic rock parameters (porosity and permeability). A comparison between the upscaled properties and those measured in the lab will be conducted. The best way to upscale the microscopic rock characteristics will be analyzed based on the models suggested in the literature. Proper characterization of the potential reservoir will provide the analytical parameters necessary for future experiments and modeling of macroscopic fluid flow behavior in the Lower Cretaceous sandstone.
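The upscaling step from a segmented CT volume to macroscopic parameters can be illustrated: voxel porosity is the pore-voxel fraction, and a permeability estimate can come from a relation such as Kozeny-Carman. The grain size and the Kozeny constant below are assumed values, and Kozeny-Carman is only one of the candidate literature models the study would compare.

```python
import numpy as np

def porosity(segmented):
    """Voxel porosity of a segmented CT volume (True/1 = pore, False/0 = grain)."""
    return segmented.mean()

def kozeny_carman(phi, grain_diameter_m, c=180.0):
    """Kozeny-Carman permeability estimate (m^2) for a granular pack:

        k = d^2 * phi^3 / (c * (1 - phi)^2),   c ~ 180 for spheres.

    One of several upscaling relations to compare against lab measurements.
    """
    return grain_diameter_m ** 2 * phi ** 3 / (c * (1.0 - phi) ** 2)

# Toy segmented volume with 20% pore voxels
vol = np.zeros((10, 10, 10), dtype=bool)
vol[:2] = True
phi = porosity(vol)
k = kozeny_carman(phi, grain_diameter_m=250e-6)  # 250 um sand grains (assumed)
```

With these assumed inputs the estimate lands in the darcy range typical of clean sandstones, the kind of value one would then check against the macroscopic lab permeametry.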
Seismic constraints on the lithosphere-asthenosphere boundary
NASA Astrophysics Data System (ADS)
Rychert, Catherine A.
2014-05-01
The basic tenet of plate tectonics is that a rigid plate, or lithosphere, moves over a weaker asthenospheric layer. However, the exact location and defining mechanism of the boundary at the base of the plate, the lithosphere-asthenosphere boundary (LAB), is debated. The oceans should represent a simple scenario, since the lithosphere is predicted to thicken with seafloor age if it is thermally defined, whereas a constant plate thickness might indicate a compositional definition. However, the oceans are remote and difficult to constrain, and studies with different sensitivities and resolutions have come to different conclusions. Hotspot regions lend additional insight, since they are relatively well instrumented with seismic stations, and also since the effect of a thermal plume on the LAB should depend on the defining mechanism of the plate. Here I present new results using S-to-P receiver functions to image upper mantle discontinuity structure beneath volcanically active regions including Hawaii, Iceland, Galapagos, and Afar. In particular I focus on the lithosphere-asthenosphere boundary and discontinuities related to the base of melting, which can be used to highlight plume locations. I image a lithosphere-asthenosphere boundary in the 50-95 km depth range beneath Hawaii, Galapagos, and Iceland. Although LAB depth variations exist within these regions, significant thinning is not observed in the receiver functions at the locations of hypothesized plume impingement. Since a purely thermally defined lithosphere is expected to thin significantly in the presence of a thermal plume anomaly, a compositional component in the definition of the LAB is implied. Beneath Afar, an LAB is imaged at 75 km depth on the flank of the rift, but no LAB is imaged beneath the rift itself. The transition from the rift flank to the rift is relatively abrupt, again suggesting something other than a purely thermally defined lithosphere.
Melt may also exist in the asthenosphere in these regions of hotspot volcanism. Indeed, S-to-P imaging also reveals strong velocity increases that are likely related to the base of a melt-rich layer beneath the oceanic LAB. This discontinuity may highlight plume locations, since melting is predicted to begin deeper at thermal anomalies. For instance, beneath Hawaii the base of melting deepens from 110 km to 155 km roughly 100 km west of Hawaii, i.e., the likely location of plume impingement on the lithosphere. Beneath Galapagos the discontinuity is deeper in three sectors, all off the island axis, suggesting multiple plume diversions and complex plume-ridge interactions. Beneath Iceland, deepening is imaged to the northeast of the island. Beneath the Afar rift, a shallow melt discontinuity is imaged at ~75 km, suggesting that the plume is located outside the study region. Overall, the deepest realizations of the discontinuities agree with the slowest velocities from surface waves, but are not located directly beneath surface volcanoes. This suggests either that plumes approach the surface at an angle or that restite roots beneath hotspots divert plumes at shallow depths. In either case, mantle melts are likely guided from the location of impingement on the lithosphere to present-day surface volcanoes by pre-existing structures of the lithosphere.
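The thermal-plate end-member above has a simple quantitative expression: in half-space cooling, the depth of a given isotherm grows with the square root of seafloor age. A minimal sketch of that textbook scaling (the constants are generic values, not taken from this study):

```python
import math

def thermal_plate_thickness_km(age_myr, kappa=1e-6, scale=2.32):
    """Half-space cooling estimate of thermal lithosphere thickness.

    The depth of a chosen isotherm grows as z ~ scale * sqrt(kappa * t).
    kappa is a typical mantle thermal diffusivity in m^2/s; scale = 2.32
    picks the depth where the temperature anomaly has decayed to ~10%
    of its initial value. Both are generic textbook choices.
    """
    t_seconds = age_myr * 1e6 * 365.25 * 24 * 3600
    return scale * math.sqrt(kappa * t_seconds) / 1000.0  # metres -> km
```

With these generic constants a purely thermal plate would be roughly 40 km thick at 10 My and roughly 90 km thick at 50 My, which is why a LAB that stays in the 50-95 km range without thinning near plumes argues for a compositional component.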
SU-E-CAMPUS-T-01: Automation of the Winston-Lutz Test for Stereotactic Radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litzenberg, D; Irrer, J; Kessler, M
Purpose: To optimize clinical efficiency and shorten patient wait time by minimizing the time and effort required to perform the Winston-Lutz test before stereotactic radiosurgery (SRS) through automation of the delivery, analysis, and documentation of results. Methods: The radiation fields of the Winston-Lutz (WL) test were created in a “machine-QA patient” saved in ARIA for use before SRS cases. Images of the BRW target ball placed at mechanical isocenter are captured with the portal imager for each of four 2 cm × 2 cm MLC-shaped beams. When the WL plan is delivered and closed, this event is detected by in-house software called EventNet, which automates subsequent processes with the aid of the ARIA web services. Images are automatically retrieved from the ARIA database and analyzed to determine the offset of the target ball from radiation isocenter. The results are posted to a website, and a composite summary image of the results is pushed back into ImageBrowser for review and authenticated documentation. Results: The total time to perform the test was reduced from 20-25 minutes to less than 4 minutes. The results were found to be more accurate and consistent than the previous method, which used radiochromic film. The images were also analyzed with DoseLab for comparison. The differences between the film and automated WL results in the X and Y directions and the radius were (−0.17 ± 0.28) mm, (0.21 ± 0.20) mm, and (−0.14 ± 0.27) mm, respectively. The differences between the DoseLab and automated WL results were (−0.05 ± 0.06) mm, (−0.01 ± 0.02) mm, and (0.01 ± 0.07) mm, respectively. Conclusions: This process reduced patient wait times by 15-20 minutes, making the treatment machine available to treat another patient. Accuracy and consistency of results were improved over the previous method and were comparable to other commercial solutions.
Access to the ARIA web services is made possible through an Eclipse co-development agreement with Varian Medical Systems.
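The per-beam analysis step reduces to comparing two centroids per portal image and summarizing them as mean ± standard deviation, as in the Results above. A minimal sketch of that bookkeeping (the data layout and function name are hypothetical, not from the described software):

```python
from statistics import mean, stdev

def wl_offsets(detections):
    """Summarize a Winston-Lutz run.

    detections: list of ((field_x, field_y), (ball_x, ball_y)) pairs,
    one per MLC-shaped beam, in mm in the imager plane.
    Returns per-axis (mean, stdev) offsets and the mean radial offset.
    """
    dx = [f[0] - b[0] for f, b in detections]
    dy = [f[1] - b[1] for f, b in detections]
    radius = [(x ** 2 + y ** 2) ** 0.5 for x, y in zip(dx, dy)]
    return {
        "x": (mean(dx), stdev(dx)),
        "y": (mean(dy), stdev(dy)),
        "radius_mean": mean(radius),
    }
```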
More than a Picture: Helping Undergraduates Learn to Communicate through Scientific Images
Watson, Fiona L.
2008-01-01
Images are powerful means of communicating scientific results; a strong image can underscore an experimental result more effectively than any words, whereas a poor image can readily undermine a result or conclusion. Developmental biologists rely extensively on images to compare normal versus abnormal development and communicate their results. Most undergraduate lab science courses do not actively teach students skills to communicate effectively through images. To meet this need, we developed a series of image portfolio assignments and imaging workshops in our Developmental Biology course to encourage students to develop communication skills using images. The improvements in their images over the course of the semester were striking, and on anonymous course evaluations, 73% of students listed imaging skills as the most important skill or concept they learned in the course. The image literacy skills acquired through simple lab assignments and in-class workshops appeared to give students confidence in their own evaluations of images in the current scientific literature when assessing research conclusions. In this essay, we discuss our experiences and methodology in teaching undergraduates the basic criteria involved in generating images that communicate scientific content, and we provide a road map for integrating this curriculum into any upper-level biology laboratory course. PMID:18316805
[Preliminary Study on Linear Alkylbenzenes as Indicator for Process of Urbanization].
Xu, Te; Zeng, Hui; Ni, Hong-Gang
2016-01-15
In this study, we selected Shenzhen City as a typical region of urbanization and took linear alkylbenzenes (LABs) as an environmental molecular marker to investigate the relationship between soil LAB levels and urbanization indexes, on the basis of an analysis of the spatial distribution of LABs in surface soil. Our results indicated correlations between the LAB levels in soil and five urbanization indexes: population, water supply, urban construction, income and expenditure, and industrial structure. These results suggest that LAB levels are correlated with urbanization and could be used as an environmental molecular indicator of the process of urbanization.
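At its simplest, the reported marker/urbanization relationship is a correlation between soil LAB concentration and an index value across sampling sites. A minimal sketch of that test (illustrative only; the study's actual statistical treatment is not detailed in the abstract):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series,
    e.g. soil LAB levels vs. one urbanization index across sites."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

A coefficient near +1 across sites would support using LABs as an urbanization indicator; values near zero would not.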
NASA Astrophysics Data System (ADS)
Bachche, Shivaji; Oka, Koichi
2013-06-01
This paper presents a comparative study of various color space models to determine the most suitable one for detection of green sweet peppers. The images were captured using CCD cameras and infrared cameras and processed with Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For color images, the CIELab, YIQ, YUV, HSI, and HSV color space models were selected for image processing, whereas for infrared images a grayscale model was used. For color images, the HSV color space model achieved the highest percentage of green sweet pepper detection, followed by the HSI model, as both provide information in terms of hue/lightness/chroma or hue/lightness/saturation, which is often more relevant for discriminating the fruit in an image at a specific threshold value. Overlapped fruits or fruits covered by leaves were detected better with the HSV model because the reflection feature from fruits produced a higher histogram response than that from leaves. The IR 80 optical filter failed to distinguish fruits in the images because the filter blocks useful feature information. Computation of the 3D coordinates of recognized green sweet peppers was also conducted, in which the Halcon software provided the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined, and a distance of 500 to 600 mm between the cameras and the fruits was found to give precise depth estimates when the distance between the two cameras was maintained at 100 mm.
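The HSV thresholding idea described above can be sketched for a single pixel as follows; the hue window and the saturation/value floors are illustrative placeholders, not the thresholds used with Halcon in the study:

```python
import colorsys

def is_green_pepper_pixel(r, g, b,
                          hue_range=(70 / 360, 170 / 360),
                          min_sat=0.25, min_val=0.2):
    """Classify one RGB pixel (0-255 channels) as 'green fruit' by
    thresholding in HSV space: hue inside a green window, with enough
    saturation and brightness to reject gray background and shadow.
    All threshold values here are illustrative assumptions.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_range[0] <= h <= hue_range[1] and s >= min_sat and v >= min_val
```

In practice the same test is applied per pixel over the whole image to form a binary mask, which is then cleaned up morphologically before blob detection.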
Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments
Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina
2016-01-01
Background Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria, since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) is analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Conclusion Presented is the software molyso, a ready-to-use open source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996
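A per-channel growth rate, the kind of readout used above to compare automated and manual evaluation, can be estimated from a cell-count time series by a log-linear fit. A minimal sketch (not molyso's implementation):

```python
import math

def growth_rate(counts, dt_hours=1.0):
    """Exponential growth rate mu (1/h) from a cell-count time series,
    via least-squares slope of log(count) against time."""
    t = [i * dt_hours for i in range(len(counts))]
    y = [math.log(c) for c in counts]
    tm, ym = sum(t) / len(t), sum(y) / len(y)
    num = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
    den = sum((ti - tm) ** 2 for ti in t)
    return num / den  # slope of the log-linear fit
```

For counts doubling every frame at one frame per hour, this returns ln 2 ≈ 0.693 per hour.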
ALMA OBSERVATIONS OF Ly α BLOB 1: HALO SUBSTRUCTURE ILLUMINATED FROM WITHIN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geach, J. E.; Narayanan, D.; Matsuda, Y.
2016-11-20
We present new Atacama Large Millimeter/Submillimeter Array (ALMA) 850 μm continuum observations of the original Lyα Blob (LAB) in the SSA22 field at z = 3.1 (SSA22-LAB01). The ALMA map resolves the previously identified submillimeter source into three components with a total flux density of S_850 = 1.68 ± 0.06 mJy, corresponding to a star-formation rate of ∼150 M_⊙ yr^-1. The submillimeter sources are associated with several faint (m ≈ 27 mag) rest-frame ultraviolet sources identified in Hubble Space Telescope Imaging Spectrograph (STIS) clear filter imaging (λ ≈ 5850 Å). One of these companions is spectroscopically confirmed with the Keck Multi-Object Spectrometer For Infra-Red Exploration to lie within 20 projected kpc and 250 km s^-1 of one of the ALMA components. We postulate that some of these STIS sources represent a population of low-mass star-forming satellites surrounding the central submillimeter sources, potentially contributing to their growth and activity through accretion. Using a high-resolution cosmological zoom simulation of a 10^13 M_⊙ halo at z = 3, including stellar, dust, and Lyα radiative transfer, we can model the ALMA+STIS observations and demonstrate that Lyα photons escaping from the central submillimeter sources are expected to resonantly scatter in neutral hydrogen, the majority of which is predicted to be associated with halo substructure. We show how this process gives rise to extended Lyα emission with similar surface brightness and morphology to observed giant LABs.
Berger, Andrew J; Page, Michael R; Jacob, Jan; Young, Justin R; Lewis, Jim; Wenzel, Lothar; Bhallamudi, Vidya P; Johnston-Halperin, Ezekiel; Pelekhov, Denis V; Hammel, P Chris
2014-12-01
Understanding the complex properties of electronic and spintronic devices at the micro- and nano-scale is a topic of intense current interest as it becomes increasingly important for scientific progress and technological applications. In operando characterization of such devices by scanning probe techniques is particularly well-suited for the microscopic study of these properties. We have developed a scanning probe microscope (SPM) which is capable of both standard force imaging (atomic, magnetic, electrostatic) and simultaneous electrical transport measurements. We utilize flexible and inexpensive FPGA (field-programmable gate array) hardware and a custom software framework developed in National Instrument's LabVIEW environment to perform the various aspects of microscope operation and device measurement. The FPGA-based approach enables sensitive, real-time cantilever frequency-shift detection. Using this system, we demonstrate electrostatic force microscopy of an electrically biased graphene field-effect transistor device. The combination of SPM and electrical transport also enables imaging of the transport response to a localized perturbation provided by the scanned cantilever tip. Facilitated by the broad presence of LabVIEW in the experimental sciences and the openness of our software solution, our system permits a wide variety of combined scanning and transport measurements by providing standardized interfaces and flexible access to all aspects of a measurement (input and output signals, and processed data). Our system also enables precise control of timing (synchronization of scanning and transport operations) and implementation of sophisticated feedback protocols, and thus should be broadly interesting and useful to practitioners in the field.
The Lithosphere-asthenosphere Boundary beneath the South Island of New Zealand
NASA Astrophysics Data System (ADS)
Hua, J.; Fischer, K. M.; Savage, M. K.
2017-12-01
Lithosphere-asthenosphere boundary (LAB) properties beneath the South Island of New Zealand have been imaged by Sp receiver function common-conversion point stacking. In this transpressional boundary between the Australian and Pacific plates, dextral offset on the Alpine fault and convergence have occurred for the past 20 My, with the Alpine fault now bounded by Australian plate subduction to the south and Pacific plate subduction to the north. This study takes advantage of the long-duration and high-density seismometer networks deployed on or near the South Island, especially 29 broadband stations of the New Zealand permanent seismic network (GeoNet). We obtained 24,980 individual receiver functions by extended-time multi-taper deconvolution, mapping to three-dimensional space using a Fresnel zone approximation. Pervasive strong positive Sp phases are observed in the LAB depth range indicated by surface wave tomography (Ball et al., 2015) and geochemical studies. These phases are interpreted as conversions from a velocity decrease across the LAB. In the central South Island, the LAB is observed to be deeper and broader to the west of the Alpine fault. The deeper LAB to the west of the Alpine fault is consistent with oceanic lithosphere attached to the Australian plate that was partially subducted while also translating parallel to the Alpine fault (e.g. Sutherland, 2000). However, models in which the Pacific lithosphere has been underthrust to the west past the Alpine fault cannot be ruled out. Further north, a zone of thin lithosphere with a strong and vertically localized LAB velocity gradient occurs to the west of the fault, juxtaposed against a region of anomalously weak LAB conversions to the east of the fault. This structure, similar to results of Sp imaging beneath the central segment of the San Andreas fault (Ford et al., 2014), also suggests that lithospheric blocks with contrasting LAB properties meet beneath the Alpine fault. 
The observed variations in LAB properties indicate strong modification of the LAB by the interplay of convergence and strike-slip deformation along and across this transpressional plate boundary.
The LabTube - a novel microfluidic platform for assay automation in laboratory centrifuges.
Kloke, A; Fiebach, A R; Zhang, S; Drechsel, L; Niekrawietz, S; Hoehl, M M; Kneusel, R; Panthel, K; Steigert, J; von Stetten, F; Zengerle, R; Paust, N
2014-05-07
Assay automation is the key for successful transformation of modern biotechnology into routine workflows. Yet, it requires considerable investment in processing devices and auxiliary infrastructure, which is not cost-efficient for laboratories with low or medium sample throughput or for point-of-care testing. To close this gap, we present the LabTube platform, which is based on assay-specific disposable cartridges for processing in laboratory centrifuges. LabTube cartridges comprise interfaces for sample loading and downstream applications and fluidic unit operations for release of prestored reagents, mixing, and solid phase extraction. Process control is achieved by a centrifugally-actuated ballpen mechanism. To demonstrate the workflow and functionality of the LabTube platform, we show two LabTube-automated sample preparation assays from laboratory routines: DNA extraction from whole blood and purification of His-tagged proteins. Equal DNA and protein yields were observed compared to manual reference runs, while LabTube automation significantly reduced the hands-on time to one minute per extraction.
[Design of visualized medical images network and web platform based on MeVisLab].
Xiang, Jun; Ye, Qing; Yuan, Xun
2017-04-01
With the development of the "Internet +" trend, further requirements for the mobility of medical images have arisen in the medical field. In view of this demand, this paper presents a web-based visual medical imaging platform. First, the feasibility and technical points of web-based medical imaging are analyzed. CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) images are reconstructed three-dimensionally by MeVisLab and packaged as X3D (Extensible 3D Graphics) files, as shown in the present paper. Then, a B/S (Browser/Server) system specially designed for 3D images is built using HTML5 and the WebGL rendering engine library, and the X3D image files are parsed and rendered by the system. The results of this study showed that the platform is suitable for multiple operating systems, realizing cross-platform access and mobility of medical image data. The paper also discusses the future development of medical imaging platforms, noting that web application technology will not only promote the sharing of medical image data, but also facilitate image-based remote medical consultations and distance learning.
A novel algorithm for thermal image encryption.
Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen
2018-04-16
Thermal images play a vital role at nuclear plants, power stations, forensic labs, biological research facilities, and petroleum extraction sites, so their security is very important. Image data has unique features such as intensity, contrast, homogeneity, entropy, and correlation among pixels, which makes image encryption trickier than other kinds of encryption; conventional image encryption schemes normally struggle to handle these features. Cryptographers have therefore turned to attractive properties of chaotic maps, such as randomness and sensitivity, to build novel cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes derived from the S8 symmetric group of permutations. First, the parameters of the chaotic Chebyshev map are chosen as a secret key to confuse the initial image. Then, the plaintext image is encrypted using the substitution boxes and the Chebyshev map. This process yields a ciphertext image that is thoroughly confused and diffused. The outcomes of well-known experiments, key sensitivity tests, and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach to real-time image encryption.
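The keystream side of such a scheme can be sketched as follows: iterate the Chebyshev map x ← cos(k·arccos(x)) from a secret (x0, k) and mix the resulting bytes into the pixel stream. This toy version uses a plain XOR and omits the paper's S8 S-box substitution stage, so it illustrates the chaotic-keystream idea only, not the proposed algorithm:

```python
import math

def chebyshev_keystream(x0, k, n):
    """n bytes from the Chebyshev map x <- cos(k*arccos(x)).

    (x0, k) play the role of the secret key; x stays in [-1, 1], and
    each iterate is quantized to one byte.
    """
    x, out = x0, []
    for _ in range(n):
        x = math.cos(k * math.acos(x))
        out.append(int(abs(x) * 255.999) & 0xFF)
    return bytes(out)

def xor_encrypt(pixels, x0=0.31, k=4.0):
    """XOR a flat byte sequence of pixels with the chaotic keystream."""
    ks = chebyshev_keystream(x0, k, len(pixels))
    return bytes(p ^ s for p, s in zip(pixels, ks))
```

Because XOR is its own inverse, calling `xor_encrypt` again with the same key recovers the plaintext; a wrong (x0, k) produces a completely different keystream, which is the key-sensitivity property the abstract refers to.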
Bottom-up laboratory testing of the DKIST Visible Broadband Imager (VBI)
NASA Astrophysics Data System (ADS)
Ferayorni, Andrew; Beard, Andrew; Cole, Wes; Gregory, Scott; Wöeger, Friedrich
2016-08-01
The Daniel K. Inouye Solar Telescope (DKIST) is a 4-meter solar observatory under construction at Haleakala, Hawaii [1]. The Visible Broadband Imager (VBI) is a first light instrument that will record images at the highest possible spatial and temporal resolution of the DKIST at a number of scientifically important wavelengths [2]. The VBI is a pathfinder for DKIST instrumentation and a test bed for developing processes and procedures in the areas of unit, systems integration, and user acceptance testing. These test procedures have been developed and repeatedly executed during VBI construction in the lab as part of a "test early and test often" philosophy aimed at identifying and resolving issues early thus saving cost during integration test and commissioning on summit. The VBI team recently completed a bottom up end-to-end system test of the instrument in the lab that allowed the instrument's functionality, performance, and usability to be validated against documented system requirements. The bottom up testing approach includes four levels of testing, each introducing another layer in the control hierarchy that is tested before moving to the next level. First the instrument mechanisms are tested for positioning accuracy and repeatability using a laboratory position-sensing detector (PSD). Second the real-time motion controls are used to drive the mechanisms to verify speed and timing synchronization requirements are being met. Next the high-level software is introduced and the instrument is driven through a series of end-to-end tests that exercise the mechanisms, cameras, and simulated data processing. Finally, user acceptance testing is performed on operational and engineering use cases through the use of the instrument engineering graphical user interface (GUI). In this paper we present the VBI bottom up test plan, procedures, example test cases and tools used, as well as results from test execution in the laboratory. 
We also discuss the benefits realized through completion of this testing and share lessons learned from the bottom-up testing process.
NASA Astrophysics Data System (ADS)
Oware, E. K.; Moysey, S. M.
2016-12-01
Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. 
While mass recovery generally deteriorated with an increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy. POD is also computationally superior, requiring only 2.5 minutes to complete each inversion compared to 3 hours for CTL.
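The core of the POD framework sketched above — building a reduced basis from training images and searching only over its coefficients — comes down to an SVD of the snapshot matrix. A toy version (hypothetical names; the actual inversion conditions the coefficients on the ER measurements rather than on a known image):

```python
import numpy as np

def pod_basis(training_images, n_modes):
    """POD basis from training images (TIs), one flattened image per row.

    SVD of the mean-removed snapshot matrix yields orthonormal spatial
    modes; keeping the first n_modes reduces the inversion unknowns
    from one value per pixel to n_modes coefficients.
    """
    X = np.asarray(training_images, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]                 # modes are rows of vt

def reconstruct(image, mean, modes):
    """Project an image onto the reduced basis and rebuild it."""
    coeffs = modes @ (np.asarray(image, dtype=float) - mean)
    return mean + modes.T @ coeffs
```

If the target plume morphology lies in the span of the training images (the unimodal case above), a handful of modes reconstructs it almost exactly; the bimodal case falls partly outside that span, which is the failure mode the experiment probes.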
Image-guided thoracic surgery in the hybrid operation room.
Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro
2017-01-01
There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on the advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, featuring a dual-source, dual-energy computed tomography (CT) scanner, robotic cone-beam CT (CBCT)/fluoroscopy, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. These novel multimodality image-guidance systems allow physicians to image patients quickly and accurately while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and carry out innovative multi-modality therapeutics. Multiple preclinical translational studies of innovative minimally invasive technology are being developed in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with technology and multimodality image-guidance systems similar to those of the GTx OR and acts as an appropriate platform for translating research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, and preclinical imaging, and then translate that research into the GTx OR.
This OR allows for the utilization of new technologies in cancer therapy, including molecular imaging, and other innovative imaging modalities, and therefore enables a better quality of life for patients, both during and after the procedure. In this article, we describe capabilities of the GTx systems, and discuss the first-in-human technologies used, and evaluated in GTx OR.
Spitzer Telemetry Processing System
NASA Technical Reports Server (NTRS)
Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.
2013-01-01
The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.
Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.
2016-01-01
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
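The trade-off that the paper's wall-clock models capture can be illustrated with a toy model: on a shared-NFS cluster, every job first pulls its data over a shared link before computing, whereas with Hadoop-style co-location the transfer term drops out. This sketch only shows the mechanism; the paper derives and empirically validates its own models:

```python
def nfs_wall_clock(n_jobs, cores, t_compute_s, data_gb, link_gbps):
    """Toy wall-clock time (s) for a shared-storage cluster: data
    transfers serialize on the shared link, then jobs run in
    ceil(n_jobs / cores) scheduling waves. Illustrative only."""
    transfer = n_jobs * data_gb * 8 / link_gbps   # serialized, seconds
    waves = -(-n_jobs // cores)                   # ceiling division
    return transfer + waves * t_compute_s

def colocated_wall_clock(n_jobs, cores, t_compute_s):
    """With data co-located on compute nodes, only compute waves remain."""
    return -(-n_jobs // cores) * t_compute_s
```

The model makes the paper's point visible: the shorter the per-job compute time relative to the transfer term, the bigger the advantage of co-location, which is exactly the "short processing times and/or large datasets" regime.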
2004-01-05
KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences Lab, Lanfang Levine, with Dynamac Corp., transfers material into a sample bottle for analysis. She is standing in front of new equipment in the lab that will provide gas chromatography and mass spectrometry. The equipment will enable analysis of volatile compounds, such as from plants. The 100,000 square-foot facility houses labs for NASA’s ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA’s Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA’s Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC being developed in partnership with Florida Space Authority.
Linear-constraint wavefront control for exoplanet coronagraphic imaging systems
NASA Astrophysics Data System (ADS)
Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean
2017-01-01
A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets with a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are the two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently, in the High Contrast Imaging Lab at Princeton University (HCIL), a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated in numerical simulation. Instead of constraining only the average contrast over the entire search area, the new controller constrains the electric field of each individual pixel using linear programming, which could lead to significant increases in the speed of wavefront correction and also create more uniform dark holes. As a follow-up to this work, another linear-constraint controller, modified from EFC, is demonstrated theoretically and numerically, and laboratory verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with those of EFC and stroke minimization.
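The EFC controller described above can be sketched in miniature. This toy example uses a real-valued system with two image-plane pixels and two DM actuators and a made-up Jacobian; real coronagraph fields are complex-valued with thousands of pixels and actuators, and this is not the HCIL implementation:

```python
# Toy electric field conjugation (EFC) step:
# minimize |E + G u|^2 + alpha |u|^2  ->  u = -(G^T G + alpha I)^(-1) G^T E

def solve2(a, b):
    """Solve a 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def efc_step(G, E, alpha=1e-3):
    """One regularized least-squares DM command update."""
    GtG = [[sum(G[k][i] * G[k][j] for k in range(2)) + (alpha if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    GtE = [sum(G[k][i] * E[k] for k in range(2)) for i in range(2)]
    return solve2(GtG, [-g for g in GtE])

G = [[1.0, 0.2], [0.1, 0.8]]   # made-up field response of each pixel to each actuator
E = [0.05, -0.02]              # current (aberrated) field at the two pixels
u = efc_step(G, E)
residual = [E[k] + sum(G[k][i] * u[i] for i in range(2)) for k in range(2)]
print(u, residual)             # residual field is driven near zero
```

The per-pixel linear-constraint controller differs from this in replacing the single quadratic objective with a linear program bounding each pixel's field individually.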
Characterization of Biogenic Gas and Mineral Formation Process by Denitrification in Porous Media
NASA Astrophysics Data System (ADS)
Hall, C. A.; Kim, D.; Mahabadi, N.; van Paassen, L. A.
2017-12-01
Biologically mediated processes have been regarded and developed as an alternative approach to traditional ground improvement techniques. Denitrification has been investigated as a potential ground improvement process for liquefaction hazard mitigation. During denitrification, microorganisms reduce nitrate to dinitrogen gas and, under adequate environmental conditions, facilitate calcium carbonate precipitation as a by-product. The formation of dinitrogen gas desaturates soils and allows for potential pore pressure dampening during earthquake events, while precipitation of calcium carbonate can improve the mechanical properties by filling the voids and cementing soil particles. As a result of small changes in gas and mineral phases, the mechanical properties of soils can be significantly affected. Prior research has primarily focused on quantitative analysis of the overall residual calcium carbonate mineral and biogenic gas products in lab-scale porous media. However, the distribution of these products at the pore scale has not been well investigated. In this research, denitrification is activated in a microfluidic chip simulating a homogeneous pore structure. The denitrification process is monitored by sequential image capture, and gas and mineral phase changes are evaluated by image processing. Analysis of these images corresponds with previous findings, which demonstrate that biogenic gas behaviour at the pore scale is affected by the balance between reaction, diffusion, and convection rates.
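The image-processing step, reducing each captured frame to gas, mineral, and fluid area fractions, might look like the following sketch. The intensity thresholds and the tiny synthetic frame are illustrative assumptions, not the study's calibrated values:

```python
# Classify each pixel of a grayscale frame by intensity, then report the
# area fraction of each phase; tracking these fractions across sequential
# frames quantifies gas formation and mineral precipitation over time.

def phase_fractions(frame, gas_thresh=200, mineral_thresh=60):
    counts = {"gas": 0, "mineral": 0, "fluid": 0}
    for row in frame:
        for px in row:
            if px >= gas_thresh:
                counts["gas"] += 1        # bright: dinitrogen gas bubble
            elif px <= mineral_thresh:
                counts["mineral"] += 1    # dark: calcium carbonate precipitate
            else:
                counts["fluid"] += 1      # intermediate: pore fluid
    n = sum(counts.values())
    return {k: v / n for k, v in counts.items()}

frame = [[255, 255, 30, 120],
         [255, 120, 40, 120],
         [120, 120, 120, 50]]
print(phase_fractions(frame))  # {'gas': 0.25, 'mineral': 0.25, 'fluid': 0.5}
```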
1979-01-17
Photo by Voyager 1. Jupiter's satellite Io poses before the giant planet in this photo returned Jan. 17, 1979, from a distance of 29 million miles (47 million kilometers). The satellite's shadow can be seen falling on the face of Jupiter at left. Io is traveling from left to right in its one-and-three-quarter-day orbit around Jupiter. Even from this great distance the image of Io shows dark poles and a bright equatorial region. Voyager 1 will make its closest approach to Jupiter, 174,000 miles (280,000 kilometers), on March 5. It will then continue to Saturn in November 1980. This color photo was assembled at Jet Propulsion Laboratory's Image Processing Lab from three black and white images taken through filters. The Voyagers are managed for NASA's Office of Space Science by Jet Propulsion Laboratory. (JPL Ref: P-20946C)
Bioengineering/Biophysicist Post-doctoral Fellow | Center for Cancer Research
A post-doctoral fellow position is available in the Tissue Morphodynamics Unit headed by Dr. Kandice Tanner at the National Cancer Institute. The Tanner lab combines biophysical and cell biological approaches to understand the interplay between tissue architecture and metastasis. We use a combination of imaging modalities, cell biology and animal models. It is expected that as a member of this lab, one will have an opportunity to be exposed to all these areas. We value a vibrant and collaborative environment where lab members share ideas, reagents and expertise and want to work on fundamental problems in the establishment of metastatic lesions. Our lab is located in the NIH main campus in Bethesda. The research facilities at NIH are outstanding and the lab has state-of-the-art equipment such as multi-photon and confocal microscopes, FACS facilities and animal vivarium.
Semi-automatic mapping for identifying complex geobodies in seismic images
NASA Astrophysics Data System (ADS)
Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid
2017-03-01
Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low-tone colors, creating zones with different patterns whose features are not evident to the automated 3D mapping options available in commercial software. In this work, a workflow is proposed for semi-automatic mapping of seismic images, focused on areas with low-intensity colored zones that may be associated with geobodies of petroleum interest. The CIE L*a*b* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound regions of low-intensity color. Projection of the three-dimensional masks allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The results are encouraging because geobodies of interest are obtained from a minimum of information.
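The core masking step can be sketched as follows: convert sRGB pixels to CIE L*a*b* and keep only low-lightness pixels as a binary mask. The sRGB-to-Lab conversion below is the standard D65 formulation; the L* threshold of 40 is an illustrative assumption, not the authors' tuned value:

```python
# Binary mask of "low intensity" (low L*) pixels in the CIE L*a*b* color space.

def srgb_to_lab(r, g, b):
    # sRGB [0, 255] -> linear RGB
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (D65 white point)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> L*a*b*
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    xn, yn, zn = 0.95047, 1.0, 1.08883
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def low_intensity_mask(image, l_max=40.0):
    """1 where L* is below l_max (candidate geobody pixel), else 0."""
    return [[1 if srgb_to_lab(*px)[0] < l_max else 0 for px in row]
            for row in image]

image = [[(250, 250, 250), (60, 60, 90)],
         [(70, 80, 60), (255, 0, 0)]]
print(low_intensity_mask(image))  # [[0, 1], [1, 0]]
```

Note that pure red is excluded even though it is saturated: in L*a*b*, lightness is decoupled from hue, which is why the paper's small tonal differences become separable.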
NASA Astrophysics Data System (ADS)
Yin, Leilei; Chen, Ying-Chieh; Gelb, Jeff; Stevenson, Darren M.; Braun, Paul A.
2010-09-01
High-resolution x-ray computed tomography is a powerful non-destructive 3-D imaging method. It can offer superior resolution on objects that are opaque or low-contrast for optical microscopy. Synchrotron-based x-ray computed tomography systems have been available for scientific research but remain difficult for the broader user community to access. This work introduces a lab-based high-resolution x-ray nanotomography system with 50 nm resolution in absorption and Zernike phase contrast modes. Using this system, we have demonstrated high-quality 3-D images of polymerized photonic crystals, which have been analyzed for band gap structures. The isotropic volumetric data show excellent consistency with other characterization results.
Meteorological Instruction Software
NASA Technical Reports Server (NTRS)
1990-01-01
At Florida State University and the Naval Postgraduate School, meteorology students have the opportunity to apply theoretical studies to current weather phenomena, and even to prepare forecasts and see how their predictions stand up, using GEMPAK. GEMPAK can display data quickly in both conventional and non-traditional ways, allowing students to view multiple perspectives of the complex three-dimensional atmospheric structure. With GEMPAK, mathematical equations come alive as students do homework and laboratory assignments on the weather events happening around them. Since GEMPAK provides data on a 'today' basis, each homework assignment is new. At the Naval Postgraduate School, students are now using electronically managed environmental data in the classroom. The School's Departments of Meteorology and Oceanography have developed the Interactive Digital Environment Analysis (IDEA) Laboratory. GEMPAK is the IDEA Lab's general-purpose display package; the IDEA image processing package is a modified version of NASA's Device Management System. Bringing the graphic and image processing packages together is NASA's product, the Transportable Application Executive (TAE).
Atomic force microscopy and spectroscopy to probe single membrane proteins in lipid bilayers.
Sapra, K Tanuj
2013-01-01
The atomic force microscope (AFM) has opened vast avenues hitherto inaccessible to the biological scientist. The high temporal (millisecond) and spatial (nanometer) resolutions of the AFM are suited for studying many biological processes in their native conditions. The AFM cantilever stylus is aptly termed a "lab on a tip" owing to its versatility as an imaging tool as well as a handle to manipulate single bonds and proteins. Recent examples assert that the AFM can be used to study the mechanical properties and monitor processes of single proteins and single cells, thus affording insight into important mechanistic details. This chapter specifically focuses on practical and analytical protocols of single-molecule AFM methodologies related to high-resolution imaging and single-molecule force spectroscopy of membrane proteins. Both techniques are operator oriented and require specialized working knowledge of the instrument as well as theoretical and practical skills.
The 3D model control of image processing
NASA Technical Reports Server (NTRS)
Nguyen, An H.; Stark, Lawrence
1989-01-01
Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well-understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.
2004-01-05
KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., places samples of onion tissue in the elemental analyzer, which analyzes for carbon, hydrogen, nitrogen and sulfur.
2004-01-05
KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., measures photosynthesis on Bibb lettuce being grown hydroponically for study in the Space Life Sciences Lab.
2004-01-05
KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the roots of green onions being grown hydroponically for study in the Space Life Sciences Lab.
2004-01-05
KENNEDY SPACE CENTER, FLA. -- Lanfang Levine, with Dynamac Corp., helps install a Dionex DX-500 IC/HPLC system in the Space Life Sciences Lab. The equipment will enable analysis of volatile compounds, such as from plants.
2004-01-05
KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the growth of radishes being grown hydroponically for study in the Space Life Sciences Lab.
2006-11-01
NON DESTRUCTIVE 3D X-RAY IMAGING OF NANO STRUCTURES & COMPOSITES AT SUB-30 NM RESOLUTION, WITH A NOVEL LAB BASED X-RAY MICROSCOPE
S H Lau
In this article we describe a 3D x-ray microscope based on a laboratory x-ray source operating at 2.7, 5.4 or 8.0 keV hard x-ray energies. X-ray computed tomography (XCT) is used to obtain detailed 3D structural information inside optically opaque materials with sub-30 nm resolution. Applications include
Zheng, Yu; Mou, Jun; Niu, Jiwei; Yang, Shuai; Chen, Lin; Xia, Menglei; Wang, Min
2018-03-01
Lactic acid bacteria (LAB) are essential microbiota for the fermentation and flavor formation of Shanxi aged vinegar, a famous Chinese traditional cereal vinegar that is manufactured using open solid-state fermentation (SSF) technology. However, the dynamics of LAB in this SSF process and the underlying mechanism remain poorly understood. Here, the diversity of LAB and the potential driving factors of the entire process were analyzed by combining culture-independent and culture-dependent methods. Canonical correlation analysis indicated that the ethanol, acetic acid, and temperature resulting from the metabolism of microorganisms serve as potential driving factors for LAB succession. LAB strains were periodically isolated, and the environmental-factor tolerance and substrate utilization of 57 isolates were characterized to understand the succession sequence. The environmental tolerance of LAB from different stages was in accordance with their fermentation conditions. Remarkable correlations were identified between LAB growth and environmental factors, with coefficients of 0.866 for ethanol (70 g/L), 0.756 for acetic acid (10 g/L), and 0.803 for temperature (47 °C). Gentler or harsher environments (below 60 or above 80 g/L of ethanol, below 5 or above 20 g/L of acetic acid, and below 30 °C or above 55 °C) did not affect the LAB succession. Evaluation of the 57 isolates' capability to utilize 95 compounds showed that strains from different fermentation stages exhibited different predilections for substrates, contributing to the fermentation at different stages. The results demonstrated that LAB succession in the SSF process was driven by the capabilities of environmental tolerance and substrate utilization.
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
MatLab(TradeMark) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. Although MatLab can be used as a glorified calculator through its interpreted programming language, its real strength is in matrix manipulations. Computer algebra functionality is available within the MatLab environment through the "symbolic" toolbox; this feature is similar to computer algebra programs such as Maple or Mathematica, which calculate with mathematical equations using symbolic operations. In its interpreted programming language form (command interface), MatLab is similar to well-known programming languages such as C/C++; it supports data structures and cell arrays and can define classes for object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher-level programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods that ensure the resulting solutions are incorporated into the design and analysis of data processing and visualization can benefit engineers and scientists by giving them wider insight into the actual implementation of their respective experiments. This presentation will focus on the data processing and visualization aspects of engineering and scientific applications. Specifically, it will discuss methods and techniques for intermediate-level data processing of engineering and scientific problems. MatLab programming techniques will be discussed, including reading various data file formats, producing customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data for use by other software programs such as Microsoft Excel, and data presentation and visualization.
In vivo multiphoton tomography and fluorescence lifetime imaging of human brain tumor tissue.
Kantelhardt, Sven R; Kalasauskas, Darius; König, Karsten; Kim, Ella; Weinigel, Martin; Uchugonova, Aisada; Giese, Alf
2016-05-01
High-resolution multiphoton tomography and fluorescence lifetime imaging differentiates glioma from adjacent brain in native tissue samples ex vivo. Presently, multiphoton tomography is applied in clinical dermatology and in experimental settings. We here present the first application of multiphoton and fluorescence lifetime imaging for in vivo imaging on humans during a neurosurgical procedure. We used a MPTflex™ Multiphoton Laser Tomograph (JenLab, Germany). We examined cultured glioma cells in an orthotopic mouse tumor model and native human tissue samples. Finally, the multiphoton tomograph was applied to provide optical biopsies during resection of a clinical case of glioblastoma. All tissues imaged by multiphoton tomography were sampled and processed for conventional histopathology. The multiphoton tomograph allowed fluorescence intensity and fluorescence lifetime imaging with submicron spatial resolution and 200 picosecond temporal resolution. Morphological fluorescence intensity imaging and fluorescence lifetime imaging of tumor-bearing mouse brains and native human tissue samples clearly differentiated tumor and adjacent brain tissue. Intraoperative imaging was found to be technically feasible, with image quality comparable to ex vivo examinations. To our knowledge, we here present the first intraoperative application of high-resolution multiphoton tomography and fluorescence lifetime imaging of human brain tumors in situ. It allowed in vivo identification and determination of cell density of tumor tissue on a cellular and subcellular level within seconds. The technology shows the potential for rapid intraoperative identification of native glioma tissue without the need for tissue processing or staining.
Conducting On-orbit Gene Expression Analysis on ISS: WetLab-2
NASA Technical Reports Server (NTRS)
Parra, Macarena; Almeida, Eduardo; Boone, Travis; Jung, Jimmy; Lera, Matthew P.; Ricco, Antonio; Souza, Kenneth; Wu, Diana; Richey, C. Scott
2013-01-01
WetLab-2 will enable expanded genomic research on orbit by developing tools that support in situ sample collection, processing, and analysis on ISS. This capability will reduce the time-to-results for investigators and define new pathways for discovery on the ISS National Lab. The primary objective is to develop a research platform on ISS that will facilitate real-time quantitative gene expression analysis of biological samples collected on orbit. WetLab-2 will be capable of processing multiple sample types ranging from microbial cultures to animal tissues dissected on orbit. WetLab-2 will significantly expand the analytical capabilities onboard ISS and enhance science return from ISS.
Deep Learning Methods for Quantifying Invasive Benthic Species in the Great Lakes
NASA Astrophysics Data System (ADS)
Billings, G.; Skinner, K.; Johnson-Roberson, M.
2017-12-01
In recent decades, invasive species such as the round goby and dreissenid mussels have greatly impacted the Great Lakes ecosystem. It is critical to monitor these species, model their distribution, and quantify the impacts on the native fisheries and surrounding ecosystem in order to develop an effective management response. However, data collection in underwater environments is challenging and expensive. Furthermore, the round goby is typically found in rocky habitats, which are inaccessible to standard survey techniques such as bottom trawling. In this work we propose a robotic system for visual data collection to automatically detect and quantify invasive round gobies and mussels in the Great Lakes. Robotic platforms equipped with cameras can perform efficient, cost-effective, low-bias benthic surveys. This data collection can be further optimized through automatic detection and annotation of the target species. Deep learning methods have shown success in image recognition tasks. However, these methods often rely on a labelled training dataset, with up to millions of labelled images. Hand labeling large numbers of images is expensive and often impracticable. Furthermore, data collected in the field may be sparse when only considering images that contain the objects of interest. It is easier to collect dense, clean data in controlled lab settings, but this data is not a realistic representation of real field environments. In this work, we propose a deep learning approach to generate a large set of labelled training data realistic of underwater environments in the field. To generate these images, first we draw random sample images of individual fish and mussels from a library of images captured in a controlled lab environment. Next, these randomly drawn samples will be automatically merged into natural background images. 
Finally, we will use a generative adversarial network (GAN) that incorporates constraints from the physical model of underwater light propagation to simulate the process of underwater image formation in various water conditions. The output of the GAN will be realistic-looking annotated underwater images. This generated dataset will be used to train a classifier to identify round gobies and mussels in order to measure the biomass and abundance of these invasive species in the Great Lakes.
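The first stage of the proposed pipeline, compositing lab-captured samples into background images with automatic annotations, can be sketched as follows. Grayscale 2-D lists stand in for real images, and the GAN-based water-column simulation stage is not sketched here:

```python
# Paste randomly drawn "sprites" (lab-captured fish/mussel crops) onto a
# background image, recording a bounding-box label for each paste.
import random

def paste(background, sprite, top, left):
    """Return a copy of `background` with `sprite` pasted, plus its bounding box."""
    out = [row[:] for row in background]
    for i, srow in enumerate(sprite):
        for j, px in enumerate(srow):
            out[top + i][left + j] = px
    return out, (top, left, top + len(sprite), left + len(sprite[0]))

def make_training_image(background, sprites, rng):
    """Composite every sprite at a random valid position; return image + labels."""
    img, labels = [row[:] for row in background], []
    for sprite in sprites:
        top = rng.randrange(len(background) - len(sprite) + 1)
        left = rng.randrange(len(background[0]) - len(sprite[0]) + 1)
        img, bbox = paste(img, sprite, top, left)
        labels.append(bbox)
    return img, labels

rng = random.Random(0)                     # seeded for reproducibility
background = [[10] * 8 for _ in range(8)]  # flat lake-bottom stand-in
goby = [[200, 200], [200, 200]]            # bright 2x2 "fish" crop
img, labels = make_training_image(background, [goby], rng)
print(labels)                              # one bounding-box annotation
```

Because every paste position is known, the bounding-box labels come for free, which is exactly what makes this synthesis route cheaper than hand-labeling field imagery.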
Highly Sophisticated Virtual Laboratory Instruments in Education
NASA Astrophysics Data System (ADS)
Gaskins, T.
2006-12-01
Many areas of science have advanced or stalled according to the ability to see what cannot normally be seen. Visual understanding has been key to many of the world's greatest breakthroughs, such as the discovery of DNA's double helix. Scientists use sophisticated instruments to see what the human eye cannot. Light microscopes, scanning electron microscopes (SEM), spectrometers and atomic force microscopes are employed to examine and learn the details of the extremely minute. It is rare that students prior to university have access to such instruments, or are granted full ability to probe and magnify as desired. Virtual Lab, by providing highly authentic software instruments and comprehensive imagery of real specimens, provides them this opportunity. Virtual Lab's instruments let explorers operate virtual devices on a personal computer to examine real specimens. Exhaustive sets of images, systematically and robotically photographed at thousands of positions and multiple magnifications and focal points, allow students to zoom in and focus on the most minute detail of each specimen. Controls on each Virtual Lab device interactively and smoothly move the viewer through these images to display the specimen as the instrument saw it. Users control position, magnification, focal length, filters and other parameters. Energy dispersion spectrometry is combined with SEM imagery to enable exploration of chemical composition at minute scale and arbitrary location. Annotation capabilities allow scientists, teachers and students to indicate important features or areas. Virtual Lab is a joint project of NASA and the Beckman Institute at the University of Illinois at Urbana-Champaign. Four instruments currently compose the Virtual Lab suite: a scanning electron microscope and companion energy dispersion spectrometer, a high-power light microscope, and a scanning probe microscope that captures surface properties to the level of atoms.
Descriptions of instrument operating principles and uses are also part of Virtual Lab. The Virtual Lab software and its increasingly rich collection of specimens are free to anyone. This presentation describes Virtual Lab and its uses in formal and informal education.
Evaluation of Petrifilm Lactic Acid Bacteria Plates for Counting Lactic Acid Bacteria in Food.
Kanagawa, Satomi; Ohshima, Chihiro; Takahashi, Hajime; Burenqiqige; Kikuchi, Misato; Sato, Fumina; Nakamura, Ayaka; Mohamed, Shimaa M; Kuda, Takashi; Kimura, Bon
2018-06-01
Although lactic acid bacteria (LAB) are used widely as starter cultures in the production of fermented foods, they are also responsible for food decay and deterioration. The undesirable growth of LAB in food causes spoilage, discoloration, and slime formation. Because of these adverse effects, food companies test for the presence of LAB in production areas and processed foods and consistently monitor the behavior of these bacteria. The 3M Petrifilm LAB Count Plates have recently been launched as a time-saving and simple-to-use plate designed for detecting and quantifying LAB. This study compares the abilities of Petrifilm LAB Count Plates and the de Man Rogosa Sharpe (MRS) agar medium to determine the LAB count in a variety of foods and swab samples collected from a food production area. Bacterial strains isolated from Petrifilm LAB Count Plates were identified by 16S rDNA sequence analysis to confirm the specificity of these plates for LAB. The results showed no significant difference in bacterial counts measured by using Petrifilm LAB Count Plates and MRS medium. Furthermore, all colonies growing on Petrifilm LAB Count Plates were confirmed to be LAB, while yeast colonies also formed in MRS medium. Petrifilm LAB Count Plates eliminated the plate preparation and plate inoculation steps, and the cultures could be started as soon as a diluted food sample was available. Food companies are required to establish quality controls and perform tests to check the quality of food products; the use of Petrifilm LAB Count Plates can simplify this testing process for food companies.
A learning tool for optical and microwave satellite image processing and analysis
NASA Astrophysics Data System (ADS)
Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.
2016-04-01
This paper presents a self-learning tool that contains a number of virtual experiments for processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named the Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the learning tool are related to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrices, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. A professional-quality polarimetric SAR software package can be found at [8], part of whose functionality is present in our system. The learning tool also contains other modules besides the executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material, and user feedback. Students can gain an understanding of optical and SAR remotely sensed images through discussion of basic principles, supported by a structured procedure for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this learning tool self-contained. Users can download results after performing experiments.
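As an illustration of one experiment listed above, interferogram generation in SAR interferometry multiplies one co-registered single-look complex (SLC) image by the complex conjugate of the other, so the per-pixel phase encodes the path-length difference between acquisitions. The 2x2 complex samples below are made up for illustration and are not from the v-SIPLAB tool:

```python
# Per-pixel interferogram from two co-registered SLC images:
# ifg = s1 * conj(s2); phase carries the interferometric fringe pattern.
import cmath

def interferogram(slc1, slc2):
    return [[a * b.conjugate() for a, b in zip(r1, r2)]
            for r1, r2 in zip(slc1, slc2)]

slc1 = [[1 + 1j, 2 + 0j], [0 + 1j, 1 - 1j]]   # made-up complex backscatter samples
slc2 = [[1 + 0j, 1 + 1j], [1 + 1j, 1 + 0j]]
ifg = interferogram(slc1, slc2)
phases = [[cmath.phase(px) for px in row] for row in ifg]
print(phases)   # per-pixel interferometric phase in radians
```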
Platform for Postprocessing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don
2008-01-01
Taking advantage of the similarities that exist among all waveform-based nondestructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC running Windows XP or Windows Vista. It has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed when the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information in the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple X-Y paired data set in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from a scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse filtering, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations.
The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially available image-processing programs (such as Adobe Photoshop), and some that are not (outlier removal, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
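The inverse-filter deconvolution mode lends itself to a short sketch. Below is a minimal NumPy illustration of the idea (regularized spectral division of the measured wave by the system response); the function name and the regularization rule are ours, not the NASA software's API.

```python
import numpy as np

def inverse_filter_deconvolve(measured, system_response, eps=1e-3):
    """Recover a material/defect response from a measured NDE waveform.

    Divides the measured spectrum by the system (reference) spectrum,
    with a small regularizer `eps` to suppress noise amplification where
    the system response is weak. Illustrative only, not the actual
    implementation in the NASA NDE software.
    """
    M = np.fft.rfft(measured)
    H = np.fft.rfft(system_response, n=len(measured))
    # Regularized spectral division: conj(H) * M / (|H|^2 + eps)
    D = np.conj(H) * M / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(D, n=len(measured))

# Synthetic check: the measured wave is the system wavelet delayed 40 samples.
n = 256
h = np.zeros(n); h[:8] = np.hanning(8)   # toy system wavelet
m = np.roll(h, 40)                        # echo delayed by 40 samples
d = inverse_filter_deconvolve(m, h, eps=1e-6)
print(int(np.argmax(d)))                  # peak recovered at sample 40
```

The deconvolved trace collapses the wavelet to a sharp peak at the echo delay, which is what makes subsequent image formation (e.g. peak-amplitude C-scans) cleaner.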
NASA Astrophysics Data System (ADS)
Mertens, James Charles Edwin
For decades, microelectronics manufacturing has been concerned with failures related to electromigration phenomena in conductors experiencing high current densities. With reduced individual solder interconnect volumes, the influence of interconnect microstructure on electromigration-related failures in BGA and flip-chip solder interconnects has become a significant interest. A survey indicates that X-ray computed micro-tomography (μXCT) is an emerging, novel means for characterizing the role microstructure plays in governing electromigration failures. This work details the design and construction of a lab-scale μXCT system to characterize electromigration in the Sn-0.7Cu lead-free solder system by leveraging in situ imaging. To enhance the attenuation contrast observed in multi-phase material systems, a modeling approach has been developed to predict settings for the controllable imaging parameters that yield relatively high detection rates over the range of X-ray energies for which maximum attenuation contrast is expected in the polychromatic X-ray imaging system. To build this predictive tool, a model has been constructed for the bremsstrahlung spectrum of an X-ray tube, the detector's efficiency has been calculated over the relevant range of X-ray energies, and the product of the emitted and detected spectra has been used to compute the effective X-ray imaging spectrum. An approach has also been established for filtering 'zinger' noise in X-ray radiographs, which has proven problematic at the high X-ray energies used for solder imaging. The performance of this filter has been compared with a known existing method, and the results indicate a significant increase in the accuracy of zinger-filtered radiographs. The resulting system constitutes a powerful means for studying failure-causing processes in solder systems used as interconnects in microelectronic packaging devices.
These results include the volumetric quantification of parameters indicative of both the electromigration tolerance of solders and the dominant mechanisms of atomic migration in response to current stressing. This work aims to further the community's understanding of failure-causing electromigration processes in industrially relevant material systems for microelectronic interconnect applications and to advance the capability of available characterization techniques for their interrogation.
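The zinger-filtering step can be illustrated compactly. The sketch below implements one common dual-exposure strategy (a zinger is an isolated spike that appears in only one of two radiographs of the same view); it is a generic stand-in, not the filter developed in this work.

```python
import numpy as np

def remove_zingers(radiograph_a, radiograph_b, threshold=0.2):
    """Suppress 'zinger' spikes by comparing two radiographs of the same view.

    Where the two frames disagree by more than `threshold` (relative),
    the darker (un-spiked) value is kept; elsewhere the frames are
    averaged. A simplified dual-exposure sketch, not the thesis code.
    """
    a = np.asarray(radiograph_a, float)
    b = np.asarray(radiograph_b, float)
    rel_diff = np.abs(a - b) / np.maximum(np.minimum(a, b), 1e-9)
    cleaned = 0.5 * (a + b)                    # average where frames agree
    spike = rel_diff > threshold
    cleaned[spike] = np.minimum(a, b)[spike]   # take darker pixel at spikes
    return cleaned

# Two identical frames except one zinger pixel in frame A.
a = np.full((4, 4), 100.0)
b = a.copy()
a[2, 2] = 1000.0                               # zinger hit
clean = remove_zingers(a, b)
print(clean[2, 2])                             # spike replaced -> 100.0
```

The dual-exposure approach works because zingers are statistically independent between exposures, so a large frame-to-frame disagreement at a pixel almost always flags a spike rather than the sample.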
NASA Astrophysics Data System (ADS)
Stanley, Jacob T.; Su, Weifeng; Lewandowski, H. J.
2017-12-01
We demonstrate how students' use of modeling can be examined and assessed using student notebooks collected from an upper-division electronics lab course. The use of models is a ubiquitous practice in undergraduate physics education, but the process of constructing, testing, and refining these models is much less common. We focus our attention on a lab course that has been transformed to engage students in this modeling process during lab activities. The design of the lab activities was guided by a framework that captures the different components of model-based reasoning, called the Modeling Framework for Experimental Physics. We demonstrate how this framework can be used to assess students' written work and to identify how students' model-based reasoning differed from activity to activity. Broadly speaking, we were able to identify the different steps of students' model-based reasoning and assess the completeness of their reasoning. Varying degrees of scaffolding present across the activities had an impact on how thoroughly students would engage in the full modeling process, with more scaffolded activities resulting in more thorough engagement with the process. Finally, we identified that the step in the process with which students had the most difficulty was the comparison between their interpreted data and their model prediction. Students did not use sufficiently sophisticated criteria in evaluating such comparisons, which had the effect of halting the modeling process. This may indicate that in order to engage students further in using model-based reasoning during lab activities, the instructor needs to provide further scaffolding for how students make these types of experimental comparisons. This is an important design consideration for other such courses attempting to incorporate modeling as a learning goal.
Airborne imaging spectrometers developed in China
NASA Astrophysics Data System (ADS)
Wang, Jianyu; Xue, Yongqi
1998-08-01
Airborne imaging spectral technology, a principal means of airborne remote sensing, has developed rapidly in recent years, both worldwide and in China. This paper describes the Modular Airborne Imaging Spectrometer (MAIS), the Operational Modular Airborne Imaging Spectrometer (OMAIS), and the Pushbroom Hyperspectral Imager (PHI), which have been developed or are being developed at the Airborne Remote Sensing Lab of the Shanghai Institute of Technical Physics, CAS.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., weighs samples of onion tissue for processing in the elemental analyzer behind it. The equipment analyzes for carbon, hydrogen, nitrogen, and sulfur. The 100,000-square-foot SLS Lab houses labs for NASA's ongoing research efforts, microbiology/microbial ecology studies, and analytical chemistry. Also calling the new lab home are facilities for spaceflight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground-control experiments in simulated flight conditions for spaceflight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory (SERPL), provides space for NASA's Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA's Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC, which is being developed in partnership with the Florida Space Authority.
Individual differences in face-looking behavior generalize from the lab to the world.
Peterson, Matthew F; Lin, Jing; Zaun, Ian; Kanwisher, Nancy
2016-05-01
Recent laboratory studies have found large, stable individual differences in the location people first fixate when identifying faces, ranging from the brows to the mouth. Importantly, this variation is strongly associated with differences in fixation-specific identification performance such that individuals' recognition ability is maximized when looking at their preferred location (Mehoudar, Arizpe, Baker, & Yovel, 2014; Peterson & Eckstein, 2013). This finding suggests that face representations are retinotopic and individuals enact gaze strategies that optimize identification, yet the extent to which this behavior reflects real-world gaze behavior is unknown. Here, we used mobile eye trackers to test whether individual differences in face gaze generalize from lab to real-world vision. In-lab fixations were measured with a speeded face identification task, while real-world behavior was measured as subjects freely walked around the Massachusetts Institute of Technology campus. We found a strong correlation between the patterns of individual differences in face gaze in the lab and real-world settings. Our findings support the hypothesis that individuals optimize real-world face identification by consistently fixating the same location and thus strongly constraining the space of retinotopic input. The methods developed for this study entailed collecting a large set of high-definition, wide field-of-view natural videos from head-mounted cameras and the viewer's fixation position, allowing us to characterize subjects' actually experienced real-world retinotopic images. These images enable us to ask how vision is optimized not just for the statistics of the "natural images" found in web databases, but of the truly natural, retinotopic images that have landed on actual human retinae during real-world experience.
NASA Astrophysics Data System (ADS)
Hahlweg, Cornelius; Rothe, Hendrik
2016-09-01
For more than two decades, lessons in optics, digital image processing, and optronics have been compulsory elective subjects and, as such, integral parts of the mechanical engineering courses at the University of the Federal Armed Forces in Hamburg. They are provided by the Chair for Measurement and Information Technology. Historically, the curricula started as typical basic lessons in optics, digital image processing, and related sensors. Practical sessions originally concentrated on image processing procedures in Pascal, C, and later Matlab. They evolved into a broad portfolio of practical hands-on lessons in lab and field, including high-tech and especially military equipment, but also homemade, primitive-style experiments, of which the paper gives a methodical overview. A special topic, as always with optics in education, is the introduction to the various levels of abstraction in conjunction with the highly complex and wide-ranging matter squeezed into only two trimesters (instead of the semesters at civilian universities) for an audience subject to strains from both study and duty. The talk will be accompanied by striking multimedia material, which will also be part of the multimedia attachment of the paper.
A coherent through-wall MIMO phased array imaging radar based on time-duplexed switching
NASA Astrophysics Data System (ADS)
Chen, Qingchao; Chetty, Kevin; Brennan, Paul; Lok, Lai Bun; Ritchie, Matthiew; Woodbridge, Karl
2017-05-01
Through-the-Wall (TW) radar sensors are gaining increasing interest for security, surveillance and search and rescue applications. Additionally, the integration of Multiple-Input, Multiple-Output (MIMO) techniques with phased array radar is allowing higher performance at lower cost. In this paper we present a 4-by-4 TW MIMO phased array imaging radar operating at 2.4 GHz with 200 MHz bandwidth. To achieve high imaging resolution in a cost-effective manner, the 4 Tx and 4 Rx elements are used to synthesize a uniform linear array (ULA) of 16 virtual elements. Furthermore, the transmitter is based on a single-channel 4-element time-multiplexed switched array. In transmission, the radar utilizes frequency modulated continuous wave (FMCW) waveforms that undergo de-ramping on receive to allow digitization at relatively low sampling rates, which then simplifies the imaging process. This architecture has been designed for the short-range TW scenarios envisaged, and permits sufficient time to switch between antenna elements. The paper first outlines the system characteristics before describing the key signal processing and imaging algorithms which are based on traditional Fast Fourier Transform (FFT) processing. These techniques are implemented in LabVIEW software. Finally, we report results from an experimental campaign that investigated the imaging capabilities of the system and demonstrated the detection of personnel targets. Moreover, we show that multiple targets within a room with greater than approximately 1 meter separation can be distinguished from one another.
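The de-ramp-then-FFT processing chain described above can be sketched numerically. The toy example below, in Python rather than the authors' LabVIEW, shows how a de-ramped FMCW beat signal maps a target range onto an FFT bin; all parameter values except the 200 MHz bandwidth are illustrative.

```python
import numpy as np

# After mixing the received chirp with the transmitted one (de-ramping),
# a target at range R appears as a sinusoid with beat frequency
# f_b = 2*B*R / (c*T), which a low-rate ADC can digitize directly.
c = 3e8            # speed of light (m/s)
B = 200e6          # sweep bandwidth (Hz), as quoted in the abstract
T = 1e-3           # sweep duration (s), illustrative
fs = 1e6           # ADC sample rate (Hz): low, thanks to de-ramping
R = 6.0            # true target range (m), illustrative

t = np.arange(int(fs * T)) / fs
fb = 2 * B * R / (c * T)                  # beat frequency for range R
beat = np.cos(2 * np.pi * fb * t)

# Range profile: FFT of the beat signal; bin k maps to range k*c/(2*B),
# i.e. a range resolution of c/(2*B) = 0.75 m per bin here.
spectrum = np.abs(np.fft.rfft(beat))
k = int(np.argmax(spectrum[1:])) + 1      # strongest bin, skipping DC
range_est = k * c / (2 * B)
print(round(range_est, 2))                # recovers ~6.0 m
```

Repeating this per virtual array element and applying a second FFT across the 16 virtual channels yields the angle dimension of the image, which is the essence of the FFT-based imaging the paper describes.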
NASA Astrophysics Data System (ADS)
Balasubramanian, Kunjithapatham; Riggs, A. J. Eldorado; Cady, Eric; White, Victor; Yee, Karl; Wilson, Daniel; Echternach, Pierre; Muller, Richard; Mejia Prada, Camilo; Seo, Byoung-Joon; Shi, Fang; Ryan, Daniel; Fregoso, Santos; Metzman, Jacob; Wilson, Robert Casey
2017-09-01
The NASA WFIRST mission plans to include a coronagraph instrument to find and characterize exoplanets. Masks are needed to suppress the host star light to better than 10^-8 to 10^-9 contrast over a broad bandwidth to enable the coronagraph mission objectives. Such masks for high-contrast coronagraphic imaging require various fabrication technologies to meet a wide range of specifications, including precise shapes, micron-scale island features, ultra-low-reflectivity regions, uniformity, wavefront quality, etc. We present the technologies employed at JPL to produce these pupil-plane and image-plane coronagraph masks, and lab-scale external occulter masks, highlighting accomplishments from the High Contrast Imaging Testbed (HCIT) at JPL and from the High Contrast Imaging Lab (HCIL) at Princeton University. Inherent systematic and random errors in fabrication and their impact on coronagraph performance are discussed with model predictions and measurements.
Study on the high-frequency laser measurement of slot surface difference
NASA Astrophysics Data System (ADS)
Bing, Jia; Lv, Qiongying; Cao, Guohua
2017-10-01
To measure slot surface step differences in large-scale mechanical assembly processes, this paper designs a double-galvanometer pulsed laser scanning system based on high-frequency laser scanning technology and the laser detection imaging principle. The scanning system architecture consists of three parts: laser ranging, mechanical scanning, and data acquisition and processing. The laser ranging part uses a high-frequency laser range finder to measure distance information over the target's shape, producing a large volume of point cloud data. The mechanical scanning part includes a high-speed rotary table, a high-speed transit, and the related structural design, so that the whole system can perform three-dimensional laser scanning of the target along the designed scanning path. The data processing part is built around FPGA hardware with LabVIEW software; it processes the point cloud data collected by the laser range finder at high speed and performs fitting calculations on the point cloud data to establish a three-dimensional model of the target, thereby realizing laser scanning imaging.
Journeying to Make Reggio Emilia "Our Own" in a University Lab School and Teacher Education Program
ERIC Educational Resources Information Center
Zehrt, J. E. R.
2010-01-01
This study was undertaken to develop a rich image and understanding of the actions taken by the leaders in charge to translate the Reggio Emilia approach into their university Child Development Lab School and associated teacher education classes. As the university setting is one in which the links between theory, research and practice are highly…
Enhanced modeling and simulation of EO/IR sensor systems
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; May, Christopher
2015-05-01
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be designed and modeled in NV-IPM, and then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. The measurement-to-high-fidelity-modeling-and-simulation process can then be repeated throughout the entire life cycle of an EO/IR sensor as needed, including LRIP, full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.
Research on the underwater target imaging based on the streak tube laser lidar
NASA Astrophysics Data System (ADS)
Cui, Zihao; Tian, Zhaoshuo; Zhang, Yanchao; Bi, Zongjie; Yang, Gang; Gu, Erdan
2018-03-01
A high-frame-rate streak tube imaging lidar (STIL) for real-time 3D imaging of underwater targets is presented in this paper. The system uses a 532 nm pulsed laser as the light source, with a maximum repetition rate of 120 Hz and a pulse width of 8 ns. The system is built on the LabVIEW platform; system control, synchronous image acquisition, 3D data processing, and display are realized through a PC. A 3D imaging experiment on underwater targets was carried out in a flume with an attenuation coefficient of 0.2, and images of targets at different depths and of different materials were obtained; the imaging frame rate is 100 Hz, and the maximum detection depth is 31 m. For an underwater target at a distance of 22 m, high-resolution 3D images were acquired in real time with a range resolution of 1 cm and a spatial resolution of 0.3 cm, and the spatial relationship of the targets can be clearly identified in the image. The experimental results show that STIL has good application prospects in underwater terrain detection, underwater search and rescue, and other fields.
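As a back-of-envelope check on the quoted depths, the round-trip time of flight in water converts to depth as sketched below (light travels at c/n in water, n ≈ 1.33, and covers the distance twice); the 275 ns figure is our illustration, not a number from the paper.

```python
# Time-of-flight to depth for underwater lidar:
#   depth = c * t / (2 * n_water)
C = 299_792_458.0     # speed of light in vacuum (m/s)
N_WATER = 1.33        # refractive index of water (assumed)

def depth_from_tof(round_trip_seconds):
    """Convert a round-trip echo time to target depth in water."""
    return C * round_trip_seconds / (2 * N_WATER)

# A 275 ns round trip corresponds to roughly 31 m of water,
# matching the maximum detection depth quoted in the abstract.
print(round(depth_from_tof(275e-9), 1))
```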
2001-01-24
The Laminar Soot Processes (LSP) experiment under way during the Microgravity Sciences Lab-1 mission in 1997. LSP-2 will fly on the STS-107 Research 1 mission in 2001. The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner, similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in.) long. Instrumentation includes color TV cameras and a temperature sensor, along with laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, OH, manages the project.
Imaging Ruptured Lithosphere Beneath the Arabian Peninsula Using S-wave Receiver Functions
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Rodgers, A. J.; Schwartz, S. Y.; Al-Amri, A. M.
2006-12-01
The lithospheric thickness beneath the Arabian Peninsula has important implications for understanding the tectonic processes associated with continental rifting along the Red Sea. However, estimates of the lithospheric thickness are limited by the lack of high-resolution seismic observations sampling the lithosphere-asthenosphere boundary (LAB). The S-wave receiver function technique allows point determinations of Moho and LAB depths by identifying S-to-P conversions from these discontinuities beneath individual seismic stations. This method is superior to P-wave receiver functions for identifying the LAB because P-to-S multiple reverberations from shallower discontinuities (such as the Moho) often mask the direct conversion from the LAB, while S-to-P boundary conversions arrive earlier than the direct S phase and all multiples arrive later. We interpret crustal and lithospheric structure across the entire Arabian Peninsula from S-wave receiver functions computed at 29 stations from four different seismic networks. Generally, both the Moho and the LAB are shallowest near the Red Sea and become deeper towards the Arabian interior. Near the coast, the Moho increases from about 12 to 35 km, with a few exceptions showing a deeper Moho beneath stations that are situated on higher topography in the Asir Province. The crustal thickening continues until an average depth of about 40-45 km is reached over both the central Arabian Shield and Platform. The LAB near the coast is at a depth of about 50 km, increases rapidly, and reaches an average maximum depth of about 120 km beneath the Arabian Shield. At the Shield-Platform boundary, a distinct step is observed in the lithospheric thickness where the LAB depth increases to about 160 km. This step may reflect remnant lithospheric thickening associated with the Shield's accretion onto the Platform and has an important role in guiding asthenospheric flow beneath the eastern margin of the Red Sea.
This work was performed in part under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under contract W-7405-Eng-48.
Low Budget Biology 3: A Collection of Low Cost Labs and Activities.
ERIC Educational Resources Information Center
Wartski, Bert; Wartski, Lynn Marie
This document contains biology labs, demonstrations, and activities that use low-budget materials. The goal is to get students involved in the learning process by experiencing biology. Each lab has a teacher preparation section that outlines the purpose of the lab, some basic information, a list of materials, and how to prepare the different…
Computational imaging of light in flight
NASA Astrophysics Data System (ADS)
Hullin, Matthias B.
2014-10-01
Many computer vision tasks are hindered by image formation itself, a process that is governed by the so-called plenoptic integral. By averaging light falling into the lens over space, angle, wavelength and time, a great deal of information is irreversibly lost. The emerging idea of transient imaging operates on a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements used to require high-end optical equipment and could only be acquired under extremely restricted lab conditions. To address this challenge, we introduced a family of computational imaging techniques operating on standard time-of-flight image sensors, for the first time allowing the user to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.
In vivo and in vitro hyperspectral imaging of cervical neoplasia
NASA Astrophysics Data System (ADS)
Wang, Chaojian; Zheng, Wenli; Bu, Yanggao; Chang, Shufang; Tong, Qingping; Zhang, Shiwu; Xu, Ronald X.
2014-02-01
Cervical cancer is a prevalent disease in many developing countries. Colposcopy is the most common approach for screening cervical intraepithelial neoplasia (CIN). However, its clinical efficacy relies heavily on the examiner's experience. Spectroscopy is a potentially effective method for noninvasive diagnosis of cervical neoplasia. In this paper, we introduce a hyperspectral imaging technique for noninvasive detection and quantitative analysis of cervical neoplasia. A hyperspectral camera is used to collect reflectance images of the entire cervix under xenon lamp illumination, followed by standard colposcopy examination and cervical tissue biopsy at both normal and abnormal sites in different quadrants. The collected reflectance data are calibrated and the hyperspectral signals are extracted. Further spectral analysis and image processing are carried out to classify tissue into different types based on the spectral characteristics at different stages of cervical intraepithelial neoplasia. The hyperspectral camera is also coupled with a lab microscope to acquire hyperspectral transmittance images of the pathological slides. The in vivo and in vitro imaging results are compared with clinical findings to assess the accuracy and efficacy of the method.
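As one illustration of spectrum-based tissue classification (the abstract does not specify the classifier used), the spectral angle mapper below assigns each pixel to the closest reference spectrum; the reference spectra and values shown are hypothetical.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle between a pixel spectrum and a reference spectrum.

    Smaller angle -> more similar. A generic measure often applied to
    hyperspectral data; shown here only as an illustration, not the
    authors' actual classification method.
    """
    p = np.asarray(pixel, float)
    r = np.asarray(reference, float)
    cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixel, references):
    """Assign the pixel to the reference spectrum with the smallest angle."""
    angles = {name: spectral_angle(pixel, ref)
              for name, ref in references.items()}
    return min(angles, key=angles.get)

# Toy library of two tissue spectra over four bands (hypothetical values).
refs = {"normal": [0.6, 0.5, 0.4, 0.3], "CIN": [0.2, 0.3, 0.5, 0.7]}
print(classify([0.58, 0.52, 0.38, 0.33], refs))   # closest to "normal"
```

Because the spectral angle ignores overall brightness, it is robust to the illumination variations that complicate reflectance imaging of curved tissue surfaces.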
Development of an imaging method for quantifying a large digital PCR droplet
NASA Astrophysics Data System (ADS)
Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang
2017-02-01
Portable devices have been recognized as the future link between end-users and lab-on-a-chip devices. They have user-friendly interfaces and provide apps to interface with headphones, cameras, communication channels, etc. In particular, the digital cameras installed in smartphones or pads already offer high imaging resolution with a large number of pixels. This unique feature has prompted research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images of a digital PCR biochip taken with a smartphone in a black box. Each reaction droplet is recorded in pixels and analyzed in the sRGB (red, green, blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise contributed by the CCD camera and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated into an app, providing a user-friendly interface that requires no professional training.
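The threshold-label-size-filter pipeline can be sketched as follows. This is a minimal stand-in using SciPy, with an assumed mean-plus-2σ threshold rule in place of the paper's multistep filtering and auto-threshold algorithms.

```python
import numpy as np
from scipy import ndimage

def count_positive_droplets(green_channel, min_area=4, max_area=400):
    """Count fluorescence-positive dPCR droplets in one color channel.

    Sketch of the pipeline described above: (1) auto-threshold,
    (2) connected-component labeling, (3) size filter to reject noise
    specks and oversized artifacts. The mean + 2*std threshold and the
    area bounds are assumptions, not the paper's parameters.
    """
    img = np.asarray(green_channel, float)
    thresh = img.mean() + 2.0 * img.std()       # assumed auto-threshold rule
    binary = img > thresh
    labels, n = ndimage.label(binary)           # connected components
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return int(np.sum((areas >= min_area) & (areas <= max_area)))

# Synthetic frame: dark background, two real droplets, one 1-px noise speck.
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0     # droplet 1 (16 px)
frame[40:45, 30:35] = 1.0     # droplet 2 (25 px)
frame[5, 50] = 1.0            # noise speck (1 px), rejected by size filter
print(count_positive_droplets(frame))   # counts the two real droplets
```

The positive-droplet count then feeds the standard dPCR Poisson statistics to estimate the target concentration.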
A Window into Longer Lasting Batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-11-29
There’s a new tool in the push to engineer rechargeable batteries that last longer and charge more quickly. An X-ray microscopy technique recently developed at Berkeley Lab has given scientists the ability to image nanoscale changes inside lithium-ion battery particles as they charge and discharge. The real-time images provide a new way to learn how batteries work, and how to improve them. The method was developed at Berkeley Lab’s Advanced Light Source, a DOE Office of Science User Facility, by a team of researchers from the Department of Energy’s SLAC National Accelerator Laboratory, Berkeley Lab, Stanford University, and other institutions.
Evaluation and recommendations for work group integration within the Materials and Processes Lab
NASA Technical Reports Server (NTRS)
Farrington, Phillip A.
1992-01-01
The goal of this study was to evaluate and make recommendations for improving the level of integration of several work groups within the Materials and Processes Lab at the Marshall Space Flight Center. This evaluation has uncovered a variety of projects that could improve the efficiency and operation of the work groups as well as the overall integration of the system. In addition, this study provides the foundation for specification of a computer integrated manufacturing test bed environment in the Materials and Processes Lab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher Martin, D.; Chang, Daphne; Matuszewski, Matt
The intergalactic medium (IGM) is the dominant reservoir of baryons, delineates the large-scale structure of the universe at low to moderate overdensities, and provides gas from which galaxies form and evolve. Simulations of a cold-dark-matter- (CDM-) dominated universe predict that the IGM is distributed in a cosmic web of filaments and that galaxies should form along and at the intersections of these filaments. While observations of QSO absorption lines and the large-scale distribution of galaxies have confirmed the CDM paradigm, the cosmic web of IGM has never been confirmed by direct imaging. Here we report our observation of the Lyα blob 2 (LAB2) in SSA22 with the Cosmic Web Imager (CWI), an integral field spectrograph optimized for low-surface-brightness, extended emission. With 22 hr of total on- and off-source exposure, CWI has revealed that LAB2 has extended Lyα emission organized into azimuthal zones consistent with filaments. We perform numerous tests with simulations and the data to secure the robustness of this result, which relies on data with modest signal-to-noise ratios. We have developed a smoothing algorithm that permits visualization of data-cube slices along image or spectral-image planes. With both raw and smoothed data cubes, we demonstrate that the filaments are kinematically associated with LAB2 and display double-peaked profiles characteristic of optically thick Lyα emission. The flux is 10-20 times brighter than expected for the average emission from the IGM but is consistent with boosted fluorescence from a buried QSO or gravitational cooling radiation. Using simple emission models, we infer a baryon mass in the filaments of at least 1-4 × 10^11 M☉, and a dark halo mass of at least 2 × 10^12 M☉. The spatial-kinematic morphology is more consistent with inflow from the cosmic web than outflow from LAB2, although an outflow feature may be present at one azimuth.
LAB2 and the surrounding gas have significant and coaligned angular momentum, strengthening the case for their association.
Digital processing techniques and film density calibration for printing image data
Chavez, Pat S.; McSweeney, Joseph A.; Binnie, Douglas R.
1987-01-01
Satellite image data that cover a wide range of environments are being used to make prints that represent a map-type product. If wide distribution of these products is desired, they are printed using lithographic rather than photographic procedures to reduce the cost per print. Problems are encountered in the photo lab if the film products to be used for lithographic printing have the same density range and density-curve characteristics as the film used for photographic printing. A method is presented that keeps the film densities within the 1.1 range required for lithographic printing, yet generates film products with contrast similar to that of photographic film for the majority (80 percent) of the data. Also, spatial filters can be used to enhance local detail in dark and bright regions, as well as to sharpen the final image product using edge enhancement techniques.
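The density-compression idea admits a simple sketch: stretch the central ~80% of the data to photographic-like contrast while clipping the tails so the total density stays within the 1.1 range. The percentile rule and parameter names below are our assumptions, not the authors' calibration.

```python
import numpy as np

def compress_to_density_range(dn, d_min=0.2, d_range=1.1, core_pct=(10, 90)):
    """Map image DN values into a film density window of width `d_range`.

    The central portion of the data (between the `core_pct` percentiles)
    receives a linear contrast stretch, while the tails saturate so the
    overall density range stays within what lithographic printing
    tolerates. Illustrative sketch only.
    """
    dn = np.asarray(dn, float)
    lo, hi = np.percentile(dn, core_pct)
    scaled = (dn - lo) / (hi - lo)        # 0..1 over the central 80%
    scaled = np.clip(scaled, 0.0, 1.0)    # tails saturate, range held
    return d_min + d_range * scaled

densities = compress_to_density_range(np.arange(256))
print(round(densities.max() - densities.min(), 2))   # total range held to 1.1
```

Any residual shadow and highlight detail lost to the clipping is what the spatial filtering and edge enhancement mentioned above are meant to recover locally.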
Mitigation Approaches for Optical Imaging through Clouds and Fog
2009-11-01
Spatially Multiplexed Optical MIMO Imaging System in Cloudy Turbulent Atmosphere ...This atmospheric attenuation imposes a big challenge on laser imaging systems, and it can be as severe as 300 dB/km in heavy fog [3]. As a result, the...MIT Lincoln Lab [8][9][10]. In this report, we propose MIMO imaging systems and investigate their performance under various atmospheric conditions
Self-sensing paper-based actuators employing ferromagnetic nanoparticles and graphite
NASA Astrophysics Data System (ADS)
Phan, Hoang-Phuong; Dinh, Toan; Nguyen, Tuan-Khoa; Vatani, Ashkan; Md Foisal, Abu Riduan; Qamar, Afzaal; Kermany, Atieh Ranjbar; Dao, Dzung Viet; Nguyen, Nam-Trung
2017-04-01
Paper-based microfluidics and sensors have attracted great attention. Although a large number of paper-based devices have been developed, surprisingly there are only a few studies investigating paper actuators. To fulfill the requirements for the integration of both sensors and actuators into paper, this work presents an unprecedented platform which utilizes ferromagnetic particles for actuation and graphite for motion monitoring. The use of the integrated mechanical sensing element eliminates the reliance on image processing for motion detection and also allows real-time measurements of the dynamic response in paper-based actuators. The proposed platform can also be quickly fabricated using a simple process, indicating its potential for controllable paper-based lab on chip.
Designed tools for analysis of lithography patterns and nanostructures
NASA Astrophysics Data System (ADS)
Dervillé, Alexandre; Baderot, Julien; Bernard, Guilhem; Foucher, Johann; Grönqvist, Hanna; Labrosse, Aurélien; Martinez, Sergio; Zimmermann, Yann
2017-03-01
We introduce a set of designed tools for the analysis of lithography patterns and nanostructures. The classical metrological analysis of these objects has the drawbacks of being time-consuming, requiring manual tuning, and lacking robustness and user friendliness. With the goal of improving the current situation, we propose new image processing tools at different levels: semi-automatic, automatic, and machine-learning-enhanced tools. The complete set of tools has been integrated into a software platform designed to transform the lab into a virtual fab. The underlying idea is to master nano processes at the research and development level by accelerating access to knowledge and hence speeding up implementation in product lines.
Data-Oriented Astrophysics at NOAO: The Science Archive & The Data Lab
NASA Astrophysics Data System (ADS)
Juneau, Stephanie; NOAO Data Lab, NOAO Science Archive
2018-06-01
As we keep progressing into an era of increasingly large astronomy datasets, NOAO’s data-oriented mission is growing in prominence. The NOAO Science Archive, which captures and processes the pixel data from mountaintops in Chile and Arizona, now contains holdings at Petabyte scales. Working at the intersection of astronomy and data science, the main goal of the NOAO Data Lab is to provide users with a suite of tools to work closely with these data, the catalogs derived from them, and externally provided datasets, and thus optimize the scientific productivity of the astronomy community. These tools and services include databases, query tools, virtual storage space, workflows through our Jupyter Notebook server, and scripted analysis. We currently host datasets from NOAO facilities such as the Dark Energy Survey (DES), the DESI imaging Legacy Surveys (LS), the Dark Energy Camera Plane Survey (DECaPS), and the nearly all-sky NOAO Source Catalog (NSC). We are further preparing for large spectroscopy datasets such as DESI. After a brief overview of the Science Archive, the Data Lab, and the datasets, I will briefly showcase scientific applications that use our data holdings. Lastly, I will describe our vision for future developments as we tackle the next technical and scientific challenges.
Investigating an Aerial Image First
ERIC Educational Resources Information Center
Wyrembeck, Edward P.; Elmer, Jeffrey S.
2006-01-01
Most introductory optics lab activities begin with students locating the real image formed by a converging lens. The method is simple and straightforward--students move a screen back and forth until the real image is in sharp focus on the screen. Students then draw a simple ray diagram to explain the observation using only two or three special…
Recent developments in computer vision-based analytical chemistry: A tutorial review.
Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J
2015-10-29
Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Fluorescence intensity positivity classification of Hep-2 cells images using fuzzy logic
NASA Astrophysics Data System (ADS)
Sazali, Dayang Farzana Abang; Janier, Josefina Barnachea; May, Zazilah Bt.
2014-10-01
Indirect immunofluorescence (IIF) is the gold standard for antinuclear autoantibody (ANA) testing using Hep-2 cells to determine specific diseases. Different classifier algorithms have been proposed in previous works; however, there is still no validated standard for classifying fluorescence intensity. This paper presents the use of fuzzy logic to classify fluorescence intensity and to determine the positivity of Hep-2 cell serum samples. The algorithm involves image pre-processing by filtering noise and smoothing the image; converting the images from red, green, and blue (RGB) color space to the luminosity and chromaticity ("a" and "b") LAB color space; extracting the mean values of the lightness and chromaticity "a" layers; and classifying them with a fuzzy logic algorithm based on the standard score ranges of ANA fluorescence intensity. Using 100 data sets of positive and intermediate fluorescence intensity to test performance, the fuzzy logic approach obtained accuracies of 85% and 87% for the intermediate and positive classes, respectively.
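The feature-extraction stage described above (RGB-to-LAB conversion followed by taking the mean lightness and mean chromaticity-a values) can be sketched as follows. This is a generic implementation of the standard sRGB → XYZ → CIE LAB conversion with a D65 white point, not the paper's code; the random image stands in for a pre-processed Hep-2 image.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape (..., 3)) to CIE LAB
    via the standard sRGB -> XYZ -> LAB path with a D65 white point."""
    rgb = np.asarray(rgb, dtype=float)
    # Gamma expansion (linearize sRGB)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear sRGB -> XYZ (D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])   # D65 white normalization
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Mean L and mean a over the cell image are the two features
# fed to the fuzzy classifier.
img = np.random.rand(64, 64, 3)        # stand-in for a filtered Hep-2 image
lab = rgb_to_lab(img)
mean_L, mean_a = lab[..., 0].mean(), lab[..., 1].mean()
```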
2009-10-01
Bottenus, Brienne N.; Fugate, Glenn A.; Benny, Paul. Actinides Separations Conference, Pacific Northwest National Lab, 6/2006. In situ formation of...Bottenus, Brienne N.; Benny, Paul. Actinides Separations Conference, Pacific Northwest National Lab, 3/12/2006. S-functionalized cysteine ligands...cancer imaging. The successful preparation and radiolabeling of the first generation of compounds illustrates one of the key critical objectives being
A seismic reflection image for the base of a tectonic plate.
Stern, T A; Henrys, S A; Okaya, D; Louie, J N; Savage, M K; Lamb, S; Sato, H; Sutherland, R; Iwasaki, T
2015-02-05
Plate tectonics successfully describes the surface of Earth as a mosaic of moving lithospheric plates. But it is not clear what happens at the base of the plates, the lithosphere-asthenosphere boundary (LAB). The LAB has been well imaged with converted teleseismic waves, whose 10-40-kilometre wavelength controls the structural resolution. Here we use explosion-generated seismic waves (of about 0.5-kilometre wavelength) to form a high-resolution image for the base of an oceanic plate that is subducting beneath North Island, New Zealand. Our 80-kilometre-wide image is based on P-wave reflections and shows an approximately 15° dipping, abrupt, seismic wave-speed transition (less than 1 kilometre thick) at a depth of about 100 kilometres. The boundary is parallel to the top of the plate and seismic attributes indicate a P-wave speed decrease of at least 8 ± 3 per cent across it. A parallel reflection event approximately 10 kilometres deeper shows that the decrease in P-wave speed is confined to a channel at the base of the plate, which we interpret as a sheared zone of ponded partial melts or volatiles. This is independent, high-resolution evidence for a low-viscosity channel at the LAB that decouples plates from mantle flow beneath, and allows plate tectonics to work.
Xiao, Peng; Huang, Junhua; Dong, Ting; Xie, Jianing; Yuan, Jian; Luo, Dongxiang; Liu, Baiquan
2018-06-06
For the first time, compounds of lanthanum with the main-group element boron (LaB x ) were investigated as an active layer for thin-film transistors (TFTs). Detailed studies showed that the room-temperature-fabricated LaB x thin film was in the crystalline state with a relatively narrow optical band gap of 2.28 eV. The La/B atomic ratio depended on the working pressure during the sputtering process and increased with increasing working pressure, resulting in more free electrons in the LaB x thin film. The LaB x -TFT, without any intentional annealing steps, exhibited a saturation mobility of 0.44 cm²·V⁻¹·s⁻¹, a subthreshold swing (SS) of 0.26 V/decade, and an I on / I off ratio larger than 10⁴. The room-temperature process is attractive for its compatibility with almost all kinds of flexible substrates, and the LaB x semiconductor may be a new choice for channel materials in TFTs.
NASA Astrophysics Data System (ADS)
Stetson, Suzanne; Weber, Hadley; Crosby, Frank J.; Tinsley, Kenneth; Kloess, Edmund; Nevis, Andrew J.; Holloway, John H., Jr.; Witherspoon, Ned H.
2004-09-01
The Airborne Littoral Reconnaissance Technologies (ALRT) project has developed and tested a nighttime operational minefield detection capability using commercial off-the-shelf high-power Laser Diode Arrays (LDAs). The Coastal System Station's ALRT project, under funding from the Office of Naval Research (ONR), has been designing, developing, integrating, and testing commercial arrays using a Cessna airborne platform over the last several years. This has led to the development of the Airborne Laser Diode Array Illuminator wide field-of-view (ALDAI-W) imaging test bed system. The ALRT project tested ALDAI-W at the Army's Night Vision Lab's Airborne Mine Detection Arid Test. By participating in Night Vision's test, ALRT was able to collect initial prototype nighttime operational data using ALDAI-W, showing impressive results and pioneering the way for the final test bed demonstration conducted in September 2003. This paper describes the ALDAI-W Arid Test and results, along with the processing steps used to generate imagery.
Fiber optic interferometry for industrial process monitoring and control applications
NASA Astrophysics Data System (ADS)
Marcus, Michael A.
2002-02-01
Over the past few years we have been developing applications for a high-resolution (sub-micron accuracy) fiber optic coupled dual Michelson interferometer-based instrument. It is being utilized in a variety of applications, including monitoring liquid layer thickness uniformity on coating hoppers, film base thickness uniformity measurement, digital camera focus assessment, optical cell path length assessment, and imager and wafer surface profile mapping. The instrument includes both coherent and non-coherent light sources, custom application-dependent optical probes and sample interfaces, a Michelson interferometer, custom electronics, a Pentium-based PC with data acquisition cards, and LabWindows/CVI- or LabVIEW-based application-specific software. This paper describes the evolution of this instrument platform and its applications, highlighting robust instrument design and the development of hardware, software, and user interfaces. The talk concludes with a discussion of a new high-speed instrument configuration, which can be utilized for high-speed surface profiling and as an on-line web thickness gauge.
NASA Astrophysics Data System (ADS)
Steinberg, S. J.; Howard, M. D.
2016-02-01
Collecting algae samples from the field presents issues of specimen damage or degradation caused by preservation methods, handling, and transport to laboratory facilities for identification. Traditionally, in-field collection of high quality microscopic images has not been possible due to the size, weight, and fragility of high quality instruments and the need to train field staff in species identification. Scientists at the Southern California Coastal Water Research Project (SCCWRP), in collaboration with the Fletcher Lab, University of California Berkeley, Department of Bioengineering, tested and translated Fletcher's original medical CellScope for use in environmental monitoring applications. Field tests conducted by SCCWRP in 2014 led to modifications of the clinical CellScope to one better suited to in-field microscopic imaging of aquatic organisms. SCCWRP subsequently developed a custom cell-phone application to acquire microscopic imagery using the "CellScope Aquatic" in combination with other cell-phone-derived field data (e.g. GPS location, date, time, and other field observations). Data and imagery collected in-field may be transmitted in real-time to a web-based data system for tele-taxonomy evaluation and assessment by experts in the office. These hardware and software tools were tested in the field in a variety of conditions and settings by multiple algae experts during the spring and summer of 2015 to further test and refine the CellScope Aquatic platform. The CellScope Aquatic provides an easy-to-use, affordable, lightweight, professional-quality data collection platform for environmental monitoring. Our ongoing efforts will focus on development of real-time expert systems for data analysis and image processing, to provide onsite feedback to field scientists.
NASA Astrophysics Data System (ADS)
Topp, C. N.
2016-12-01
Our ability to harness the power of plant genomics for basic and applied science depends on how well and how fast we can quantify the phenotypic ramifications of genetic variation. Plants can be considered from many vantage points: at scales from cells to organs, over the course of development or evolution, and from biophysical, physiological, and ecological perspectives. In all of these ways, our understanding of plant form and function is greatly limited by our ability to study subterranean structures and processes. The limitations to accessing this knowledge are well known - soil is opaque, roots are morphologically complex, and root growth can be heavily influenced by a myriad of environmental factors. Nonetheless, recent technological innovations in imaging science have generated a renewed focus on roots and thus new opportunities to understand the plant as a whole. The Topp Lab is interested in crop root system growth dynamics and function in response to environmental stresses such as drought, rhizosphere interactions, and as a consequence of artificial selection for agronomically important traits such as nitrogen uptake and high plant density. Studying roots requires the development of imaging technologies, computational infrastructure, and statistical methods that can capture and analyze morphologically complex networks over time and at high-throughput. The lab uses several imaging tools (optical, X-ray CT, PET, etc.) along with quantitative genetics and molecular biology to understand the dynamics of root growth and physiology. We aim to understand the relationships among root traits that can be effectively measured both in controlled laboratory environments and in the field, and to identify genes and gene networks that control root, and ultimately whole plant architectural features useful for crop improvement.
NASA Astrophysics Data System (ADS)
Zhu, Feng; Akagi, Jin; Hall, Chris J.; Crosier, Kathryn E.; Crosier, Philip S.; Delaage, Pierre; Wlodkowic, Donald
2013-12-01
Drug discovery screenings performed on zebrafish embryos mirror, with a high level of accuracy, the tests usually performed on mammalian animal models, and the fish embryo toxicity (FET) assay is one of the most promising alternative approaches to acute ecotoxicity testing with adult fish. Notwithstanding this, conventional methods utilising 96-well microtiter plates and manual dispensing of fish embryos are very time-consuming; they rely on laborious and iterative manual pipetting that is a main source of analytical errors and low throughput. In this work, we present the development of a miniaturised and high-throughput Lab-on-a-Chip (LOC) platform for automation of FET assays. The 3D high-density LOC array was fabricated in poly-methyl methacrylate (PMMA) transparent thermoplastic using infrared laser micromachining, while the off-chip interfaces were fabricated using additive manufacturing processes (FDM and SLA). The system's design facilitates rapid loading and immobilization of a large number of embryos in predefined clusters of traps during continuous microperfusion of drugs/toxins. It has been conceptually designed to seamlessly interface with both upright and inverted fluorescent imaging systems and also to directly interface with conventional microtiter plate readers that accept 96-well plates. We also present proof-of-concept interfacing with a high-speed imaging cytometer, Plate RUNNER HD®, capable of multispectral image acquisition with a resolution of up to 8192 x 8192 pixels and a depth of field of about 40 μm. Furthermore, we developed a miniaturized and self-contained analytical device interfaced with a miniaturized USB microscope. This system modification is capable of performing rapid imaging of multiple embryos at low resolution for drug toxicity analysis.
Rohawi, Nur Syakila; Ramasamy, Kalavathy; Agatonovic-Kustrin, Snezana; Lim, Siong Meng
2018-06-05
A quantitative assay using high-performance thin-layer chromatography (HPTLC) was developed to investigate bile salt hydrolase (BSH) activity in Pediococcus pentosaceus LAB6 and Lactobacillus plantarum LAB12, probiotic bacteria isolated from Malaysian fermented food. The lactic acid bacteria (LAB) were cultured in de Man, Rogosa and Sharpe (MRS) broth containing 1 mmol/L of sodium-based glyco- and tauro-conjugated bile salts for 24 h. The cultures were centrifuged and the resultant cell-free supernatant was subjected to chromatographic separation on an HPTLC plate. Conjugated bile salts were quantified by densitometric scans at 550 nm, and the results were compared to digital image analysis of the chromatographic plates after derivatisation with anisaldehyde/sulfuric acid. Standard curves for bile salt determination with both methods show good linearity, with high coefficients of determination (R 2 ) between 0.97 and 0.99. Method validation indicates good sensitivity with low relative standard deviation (RSD) (<10%), low limits of detection (LOD) of 0.4 versus 0.2 μg, and limits of quantification (LOQ) of 1.4 versus 0.7 μg for the densitometric versus digital image analysis methods, respectively. Bile salt hydrolase activity was found to be higher against glyco- than tauro-conjugated bile salts (LAB6: 100% vs >38%; LAB12: 100% vs >75%). The present findings show that digitally enhanced HPTLC offers rapid quantitative analysis of bile salt deconjugation by probiotics. Copyright © 2018. Published by Elsevier B.V.
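The validation figures quoted above (R², LOD, LOQ, linearity of the standard curve) follow from an ordinary least-squares calibration fit. A minimal sketch using the common ICH-style formulas LOD = 3.3·σ/slope and LOQ = 10·σ/slope is shown below; the calibration data are made up for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration data: amount of bile salt applied (ug)
# versus densitometric peak area (arbitrary units)
amount = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
area = np.array([120.0, 235.0, 480.0, 950.0, 1910.0])

# Ordinary least-squares calibration line
slope, intercept = np.polyfit(amount, area, 1)
pred = slope * amount + intercept

# Coefficient of determination R^2
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Residual standard deviation of the regression (n - 2 degrees of freedom)
sigma = np.sqrt(ss_res / (len(amount) - 2))
lod = 3.3 * sigma / slope   # limit of detection
loq = 10 * sigma / slope    # limit of quantification
```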
Open web system of Virtual labs for nuclear and applied physics
NASA Astrophysics Data System (ADS)
Saldikov, I. S.; Afanasyev, V. V.; Petrov, V. I.; Ternovykh, M. Yu
2017-01-01
An example of a virtual lab based on unique experimental equipment is presented. The virtual lab is software built on a model of the real equipment. Virtual labs can be used in the educational process in the nuclear safety and analysis field. The example presented is the virtual lab “Experimental determination of the material parameter depending on the pitch of a uranium-water lattice”; this paper includes a general description of it. A description of a database supporting laboratory work on unique experimental equipment, which includes this lab, and its development concept are also presented.
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
MatLab(R) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. MatLab does purely numerical calculations and can be used as a glorified calculator or an interpreted programming language; its real strength is in matrix manipulations. Computer algebra functionality is achieved within the MatLab environment using the "symbolic" toolbox. This feature is similar to computer algebra programs, such as Maple or Mathematica, that calculate with mathematical equations using symbolic operations. MatLab in its interpreted-language form (command interface) is similar to well-known programming languages such as C/C++ and supports data structures and cell arrays to define classes in object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods to ensure foregoing solutions are incorporated in the design and analysis of data processing and visualization can benefit engineers and scientists in gaining wider insight into the actual implementation of their respective experiments. This presentation will focus on the data processing and visualization aspects of engineering and scientific applications. Specifically, it will discuss methods and techniques to perform intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques will be discussed, including reading various data file formats to produce customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data for use by other software programs such as Microsoft Excel, and data presentation and visualization.
The presentation will emphasize creating practical scripts (programs) that extend the basic features of MatLab. Topics include: (1) matrix and vector analysis and manipulations; (2) mathematical functions; (3) symbolic calculations and functions; (4) import/export of data files; (5) program logic and flow control; (6) writing functions and passing parameters; (7) test application programs.
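The matrix-manipulation and import/export workflow outlined above is native to MatLab, but it maps directly onto NumPy; a minimal Python sketch of the equivalent steps is shown below (the data and CSV round-trip are illustrative, not from the presentation).

```python
import io
import numpy as np

# Matrix and vector manipulation, MatLab's core strength:
# solve A x = v, the equivalent of MatLab's backslash operator A \ v
A = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([1.0, 1.0])
x = np.linalg.solve(A, v)

# Export/import in tabular (CSV) form, e.g. for exchange with
# spreadsheet tools such as Microsoft Excel
buf = io.StringIO()
np.savetxt(buf, np.column_stack([v, x]), delimiter=",")
buf.seek(0)
loaded = np.loadtxt(buf, delimiter=",")
```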
NASA Astrophysics Data System (ADS)
Lyu, Bo-Han; Wang, Chen; Tsai, Chun-Wei
2017-08-01
Jasper Display Corp. (JDC) offers a high-reflectivity, high-resolution Liquid Crystal on Silicon Spatial Light Modulator (LCoS-SLM), which includes an associated controller ASIC and LabVIEW-based modulation software. Based on this LCoS-SLM, also called the Education Kit (EDK), we provide a training platform that includes a series of optical theory lessons and experiments for university students. The EDK not only provides LabVIEW-based operation software to produce Computer Generated Holograms (CGHs) that generate basic diffraction or holographic images, but also provides simulation software to verify the experimental results simultaneously. We believe that a robust LCoS-SLM, operation software, simulation software, training system, and training course can help students study fundamental optics, wave optics, and Fourier optics more easily. Based on this fundamental knowledge, they can develop their own unique skills and create new innovations in optoelectronic applications in the future.
UNMANNED AERIAL VEHICLE (UAV) HYPERSPECTRAL REMOTE SENSING FOR DRYLAND VEGETATION MONITORING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nancy F. Glenn; Jessica J. Mitchell; Matthew O. Anderson
2012-06-01
UAV-based hyperspectral remote sensing capabilities developed by the Idaho National Lab and Idaho State University, Boise Center Aerospace Lab, were recently tested via demonstration flights that explored the influence of altitude on geometric error, image mosaicking, and dryland vegetation classification. The test flights successfully acquired usable flightline data capable of supporting classifiable composite images. Unsupervised classification results support vegetation management objectives that rely on mapping shrub cover and distribution patterns. Overall, supervised classifications performed poorly despite spectral separability in the image-derived endmember pixels. Future mapping efforts that leverage ground reference data, ultra-high spatial resolution photos, and time series analysis should be able to effectively distinguish native grasses such as Sandberg bluegrass (Poa secunda) from invasives such as burr buttercup (Ranunculus testiculatus) and cheatgrass (Bromus tectorum).
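Unsupervised classification of hyperspectral pixels, as used above for shrub-cover mapping, is commonly done by clustering pixel spectra; a minimal k-means sketch is shown below. This is a generic illustration of the technique (the deterministic center seeding and the toy two-cluster data are my assumptions, not the study's algorithm or imagery).

```python
import numpy as np

def kmeans(pixels, k=3, iters=20):
    """Minimal k-means for unsupervised classification of pixel spectra.
    `pixels` has one row per pixel and one column per spectral band.
    Centers are seeded with evenly spaced samples for determinism."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float).copy()
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Toy data: two well-separated "spectral classes" in 4 bands
pixels = np.vstack([np.zeros((10, 4)), np.full((10, 4), 10.0)])
labels, centers = kmeans(pixels, k=2)
```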
H CANYON PROCESSING IN CORRELATION WITH FH ANALYTICAL LABS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weinheimer, E.
2012-08-06
Management of radioactive chemical waste can be a complicated business. H Canyon and F/H Analytical Labs are two facilities at the Savannah River Site in Aiken, SC that are at the forefront of this work. In fact, H Canyon is the only large-scale radiochemical processing facility in the United States, and its processing is enhanced by the aid given by F/H Analytical Labs. As H Canyon processes incoming materials, F/H Labs provide support through a variety of chemical analyses. Necessary checks of the chemical makeup, processing, and accountability of the samples taken from H Canyon process tanks are performed at the labs, along with further checks on waste leaving the canyon after processing. Used nuclear material taken in by the canyon is actually not waste. Only a small portion of the radioactive material itself is actually consumed in nuclear reactors. As a result, various radioactive elements such as uranium, plutonium, and neptunium are commonly found in waste and may be useful to recover. Specific processing is needed to allow for separation of these products from the waste. This is H Canyon's specialty. Furthermore, H Canyon has the capacity to initiate the process for weapons-grade nuclear material to be converted into nuclear fuel. This is one of the main campaigns being set up for the fall of 2012. Once usable material is separated and purified of impurities such as fission products, it can be converted to an oxide and ultimately turned into commercial fuel. The processing of weapons-grade material for commercial fuel is important in the necessary disposition of plutonium. Another processing campaign to start in the fall in H Canyon involves the reprocessing of used nuclear fuel for disposal in improved containment units. The importance of this campaign involves the proper disposal of nuclear waste in order to ensure the safety and well-being of future generations and the environment.
As processing proceeds in the fall, H Canyon will have a substantial number of samples being sent to F/H Labs. All analyses of these samples are imperative to safe and efficient processing. The important campaigns to occur would be impossible without feedback from analyses such as the chemical makeup of solutions, concentrations of dissolution acids and nuclear material, as well as nuclear isotopic data. The necessity of analysis for radiochemical processing is evident. Processing devoid of F/H Labs' feedback would go against the ideals of a safety-conscious and highly accomplished processing facility such as H Canyon.
XNAT Central: Open sourcing imaging research data.
Herrick, Rick; Horton, William; Olsen, Timothy; McKay, Michael; Archie, Kevin A; Marcus, Daniel S
2016-01-01
XNAT Central is a publicly accessible medical imaging data repository based on the XNAT open-source imaging informatics platform. It hosts a wide variety of research imaging data sets. The primary motivation for creating XNAT Central was to provide a central repository to host and provide access to a wide variety of neuroimaging data. In this capacity, XNAT Central hosts a number of data sets from research labs and investigative efforts from around the world, including the OASIS Brains imaging studies, the NUSDAST study of schizophrenia, and more. Over time, XNAT Central has expanded to include imaging data from many different fields of research, including oncology, orthopedics, cardiology, and animal studies, but continues to emphasize neuroimaging data. Through the use of XNAT's DICOM metadata extraction capabilities, XNAT Central provides a searchable repository of imaging data that can be referenced by groups, labs, or individuals working in many different areas of research. The future development of XNAT Central will be geared towards greater ease of use as a reference library of heterogeneous neuroimaging data and associated synthetic data. It will also become a tool for making data available supporting published research and academic articles. Copyright © 2015 Elsevier Inc. All rights reserved.
Femtosecond laser direct-write of optofluidics in polymer-coated optical fiber
NASA Astrophysics Data System (ADS)
Joseph, Kevin A. J.; Haque, Moez; Ho, Stephen; Aitchison, J. Stewart; Herman, Peter R.
2017-03-01
Multifunctional lab-in-fiber technology seeks to translate the accomplishments of optofluidic, lab-on-chip devices into silica fibers, a robust, flexible, and ubiquitous optical communication platform that can underpin the `Internet of Things' with distributed sensors or enable lab-on-chip functions deep inside our bodies. Femtosecond lasers have driven significant advances in three-dimensional processing, enabling optical circuits, microfluidics, and micro-mechanical structures to be formed around the core of the fiber. However, such processing typically requires stripping and recoating of the polymer buffer or jacket, increasing processing time and mechanically weakening the device. This paper reports a comprehensive assessment of laser damage in urethane-acrylate-coated fiber. The results show that a sufficient processing window is available for femtosecond laser processing of the fiber without damaging the polymer jacket. The fiber core, cladding, and buffer could be simultaneously processed without removal of the buffer jacket. Three-dimensional lab-in-fiber devices were successfully fabricated by distortion-free immersion-lens focusing, presenting fiber-cladding optical circuits and progress towards chemically etched channels, microfluidic cavities, and MEMS structures inside buffer-coated fiber.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J; Shi, W; Andrews, D
2015-06-15
Purpose To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac imaging systems. Methods Tests were performed on a Varian TrueBeam STx linear accelerator (Version 2.0), which is integrated with a BrainLab ExacTrac imaging system (Version 6.0.5). The study was focused on comparing the online image registrations for translational shifts. A Rando head phantom was placed on treatment couch and immobilized with a BrainLab mask. The phantom was shifted by moving the couch translationally for 8 mm with a step size of 1 mm, in vertical, longitudinal, and lateral directions, respectively. At each location, the phantom wasmore » imaged with CBCT and ExacTrac x-ray. CBCT images were registered with TrueBeam and ExacTrac online registration algorithms, respectively. And ExacTrac x-ray image registrations were performed. Shifts calculated from different registrations were compared with nominal couch shifts. Results The averages and ranges of absolute differences between couch shifts and calculated phantom shifts obtained from ExacTrac x-ray registration, ExacTrac CBCT registration with default window, ExaxTrac CBCT registration with adjusted window (bone), Truebeam CBCT registration with bone window, and Truebeam CBCT registration with soft tissue window, were: 0.07 (0.02–0.14), 0.14 (0.01–0.35), 0.12 (0.02–0.28), 0.09 (0–0.20), and 0.06 (0–0.10) mm, in vertical direction; 0.06 (0.01–0.12), 0.27 (0.07–0.57), 0.23 (0.02–0.48), 0.04 (0–0.10), and 0.08 (0– 0.20) mm, in longitudinal direction; 0.05 (0.01–0.21), 0.35 (0.14–0.80), 0.25 (0.01–0.56), 0.19 (0–0.40), and 0.20 (0–0.40) mm, in lateral direction. Conclusion The shifts calculated from ExacTrac x-ray and TrueBeam CBCT registrations were close to each other (the differences between were less than 0.40 mm in any direction), and had better agreements with couch shifts than those from ExacTrac CBCT registrations. 
There were no significant differences between TrueBeam CBCT registrations using different windows. In ExacTrac CBCT registrations, using the bone window led to better agreement than using the default window.
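The comparison metric the abstract reports, the average and range of absolute differences between nominal couch shifts and registration-derived shifts, can be sketched in a few lines. The shift values below are illustrative placeholders, not the study's data:

```python
import numpy as np

# Hypothetical example: nominal couch shifts (mm) and shifts reported by one
# registration algorithm at the same phantom positions. Values are invented
# for illustration only.
nominal = np.arange(1.0, 9.0)  # 1..8 mm in 1 mm steps
calculated = nominal + np.array([0.02, -0.05, 0.01, 0.10, -0.03, 0.04, -0.08, 0.06])

abs_diff = np.abs(calculated - nominal)  # per-position absolute error
print(f"mean abs. difference: {abs_diff.mean():.3f} mm")
print(f"range: {abs_diff.min():.2f}-{abs_diff.max():.2f} mm")
```

Running the same computation per direction and per algorithm reproduces the style of the table of averages and ranges quoted in the Results.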
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the growth of radishes being grown hydroponically for study in the Space Life Sciences Lab. The 100,000 square-foot facility houses labs for NASA's ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA's Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA's Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC, being developed in partnership with the Florida Space Authority.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences (SLS) Lab, Jan Bauer, with Dynamac Corp., places samples of onion tissue in the elemental analyzer, which analyzes for carbon, hydrogen, nitrogen and sulfur.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., checks the roots of green onions being grown hydroponically for study in the Space Life Sciences Lab.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- Sharon Edney, with Dynamac Corp., measures photosynthesis on Bibb lettuce being grown hydroponically for study in the Space Life Sciences Lab.
Selection of the thermal imaging approach for the XM29 combat rifle fire control system
NASA Astrophysics Data System (ADS)
Brindley, Eric; Lillie, Jack; Plocki, Peter; Volz, Robert T.
2003-09-01
The paper briefly describes the XM29 (formerly OICW) weapon, its fire control system and the requirements for thermal imaging. System-level constraints on the in-hand weight dictate the need for a high degree of integration with other elements of the system such as the laser rangefinder, direct view optics and daylight video, all operating at different wavelengths. The available Focal Plane Array technology choices are outlined and the evaluation process is described, including characterization at the US Army Night Vision and Electronic Sensors Directorate (NVESD) and recent field-testing at the Quantico USMC base, Virginia. This paper addresses the trade study, technology assessment and test-bed effort. The relationship between field and lab testing performance is compared and a path forward is recommended.
Differential dynamic microscopy to characterize Brownian motion and bacteria motility
NASA Astrophysics Data System (ADS)
Germain, David; Leocmach, Mathieu; Gibaud, Thomas
2016-03-01
We have developed a lab module for undergraduate students, which involves the process of quantifying the dynamics of a suspension of microscopic particles using Differential Dynamic Microscopy (DDM). DDM is a relatively new technique that constitutes an alternative method to more classical techniques such as dynamic light scattering (DLS) or video particle tracking (VPT). The technique consists of imaging a particle dispersion with a standard light microscope and a camera and analyzing the images using a digital Fourier transform to obtain the intermediate scattering function, an autocorrelation function that characterizes the dynamics of the dispersion. We first illustrate DDM in the textbook case of colloids under Brownian motion, where we measure the diffusion coefficient. Then we show that DDM is a pertinent tool to characterize biological systems such as motile bacteria.
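The core DDM computation described above, Fourier analysis of frame differences as a function of lag time, can be sketched as follows. The synthetic noise frames stand in for real microscope images, and the radial averaging and fitting needed to extract the intermediate scattering function are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic image stack standing in for microscope frames: T frames of N x N pixels.
frames = rng.normal(size=(50, 64, 64))

def ddm_matrix(frames, lags):
    """Mean power spectrum of frame differences at each lag time (the DDM signal)."""
    T = frames.shape[0]
    out = []
    for dt in lags:
        diffs = frames[dt:] - frames[:T - dt]      # all frame pairs separated by dt
        spectra = np.abs(np.fft.fft2(diffs)) ** 2  # 2D power spectrum of each difference
        out.append(spectra.mean(axis=0))           # average over pairs
    return np.array(out)

D = ddm_matrix(frames, lags=[1, 2, 5])
print(D.shape)  # → (3, 64, 64)
```

In a real analysis, `D` would then be azimuthally averaged over wavevector magnitude q and fitted to a model of the intermediate scattering function, e.g. exponential decay with rate Dq² for Brownian motion.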
A method for rapid quantitative assessment of biofilms with biomolecular staining and image analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larimer, Curtis J.; Winder, Eric M.; Jeters, Robert T.
2015-12-07
Here, the accumulation of bacteria in surface-attached biofilms, or biofouling, can be detrimental to human health, dental hygiene, and many industrial processes. A critical need in identifying and preventing the deleterious effects of biofilms is the ability to observe and quantify their development. Analytical methods capable of assessing early-stage fouling are cumbersome or lab-confined, subjective, and qualitative. Herein, a novel photographic method is described that uses biomolecular staining and image analysis to enhance the contrast of early-stage biofouling. A robust algorithm was developed to objectively and quantitatively measure surface accumulation of Pseudomonas putida from photographs, and results were compared to independent measurements of cell density. Results from image analysis quantified biofilm growth intensity accurately and with approximately the same precision as the more laborious cell counting method. This simple method for early-stage biofilm detection enables quantifiable measurement of surface fouling and is flexible enough to be applied from the laboratory to the field. Broad-spectrum staining highlights fouling biomass, photography quickly captures a large area of interest, and image analysis rapidly quantifies fouling in the image.
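A minimal sketch of the kind of image analysis described, assuming a grayscale photograph in which stain darkens fouled regions. The synthetic image and threshold value are illustrative stand-ins, not the paper's algorithm:

```python
import numpy as np

# Hypothetical 8-bit grayscale photo of a stained coupon: darker pixels = stained biomass.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(100, 100)).astype(np.uint8)
img[20:40, 30:70] = 30  # simulated fouled patch (uniformly dark)

threshold = 60            # assumed stain threshold, tuned per imaging setup
fouled = img < threshold  # boolean mask of pixels classified as fouled
coverage = fouled.mean()  # fraction of surface area covered by biofilm
print(f"fouled area fraction: {coverage:.3f}")
```

A real pipeline would add illumination correction and contrast enhancement before thresholding, and the area fraction would be calibrated against independent cell-density counts, as the abstract describes.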
2016-10-27
This archival image was released as part of a gallery comparing JPL’s past and present, commemorating the 80th anniversary of NASA’s Jet Propulsion Laboratory on Oct. 31, 2016. This is what greeted visitors to the Jet Propulsion Laboratory in December 1957, before NASA was created and the lab became one of its centers. There is no sign at this location today -- there is just a stairway that runs up the side of the main Administration Building (Building 180). The official lab sign has moved farther south, just as the lab itself has expanded farther south out from the base of the San Gabriel Mountains. http://photojournal.jpl.nasa.gov/catalog/PIA21115
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- Reporters are eager to hear from Armando Oliu about the aid the Image Analysis Lab is giving the FBI in a kidnapping case. Oliu, Final Inspection Team lead for the Shuttle program, oversees the lab, which is using an advanced SGI TP9500 data management system to review the tape of the kidnapping in progress in Sarasota, Fla. KSC installed the new $3.2 million system in preparation for Return to Flight of the Space Shuttle fleet. The lab is studying the Sarasota kidnapping video to provide any new information possible to law enforcement officers. KSC is joining NASA's Marshall Space Flight Center in Alabama in reviewing the tape.
Nanoscale Optical Imaging and Spectroscopy from Visible to Mid-Infrared
2015-11-13
field characterization of nanoscale materials; it also complements the near-field scanning optical microscope currently available in the PI's lab. This equipment will begin making major impacts on at least three current DoD…
Visual recognition system of cherry picking robot based on Lab color model
NASA Astrophysics Data System (ADS)
Zhang, Qirong; Zuo, Jianjun; Yu, Tingzhong; Wang, Yan
2017-12-01
This paper presents a visual recognition system suitable for cherry picking. First, the system filters the image with a vector median filter. It then extracts the a channel of the Lab color model to separate the cherries from the background. The cherry contour was then fitted by the least squares method, and the centroid and radius of each cherry were extracted. Finally, the cherries were successfully segmented.
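The least-squares step above can be sketched with an algebraic (Kasa) circle fit, one common formulation of least-squares circle fitting, not necessarily the authors' exact method. The contour points below are synthetic stand-ins for an extracted cherry contour:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares (Kasa) circle fit: returns (cx, cy, r).

    Rewrites (x-cx)^2 + (y-cy)^2 = r^2 as a linear system
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Synthetic contour points on a circle of radius 5 centered at (10, 20).
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 10 + 5 * np.cos(theta)
y = 20 + 5 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
print(round(cx, 2), round(cy, 2), round(r, 2))  # → 10.0 20.0 5.0
```

The recovered centroid (cx, cy) and radius r are exactly the quantities the abstract says are extracted per cherry.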
Gerotziafas, Grigoris T; Ray, Patrick; Gkalea, Vasiliki; Benzarti, Ahlem; Khaterchi, Amir; Cast, Claire; Pernet, Julie; Lefkou, Eleftheria; Elalamy, Ismail
2016-12-01
Easy-to-use point-of-care assays for D-Dimers measurement in whole blood from patients with clinical suspicion of venous thromboembolism (VTE) would facilitate the diagnostic strategy in the Emergency Department (ED) setting. We prospectively evaluated the diagnostic performance of the point-of-care mLabs® Whole Blood D-Dimers test and compared it with the Vidas® D-Dimers assay. As part of the diagnostic algorithm applied in patients with clinical suspicion of VTE, the VIDAS® D-Dimers Test was prescribed by the emergency physician in charge. The mLabs® Whole Blood D-Dimers Test was used on the same samples. All patients underwent exploration with the recommended imaging techniques for VTE diagnosis. Both assays were performed on 99 emergency patients (mean age 65 years) with clinical suspicion of VTE. In 3% of patients, VTE was documented with a reference imaging technique. The Bland and Altman test showed significant agreement between the two methods. Both assays showed equal sensitivity and negative predictive value for VTE. The mLabs® whole blood assay is a promising point-of-care method for measurement of D-Dimers and exclusion of VTE diagnosis in the emergency setting, which should be validated in a larger prospective study.
Safer Soldering Guidelines and Instructional Resources
ERIC Educational Resources Information Center
Love, Tyler S.; Tomlinson, Joel
2018-01-01
Soldering is a useful and necessary process for many classroom, makerspace, Fab Lab, technology and engineering lab, and science lab activities. As described in this article, soldering can pose many safety risks without proper engineering controls, standard operating procedures, and direct instructor supervision. There are many safety hazards…
ERIC Educational Resources Information Center
Peters, Erin
2005-01-01
Deconstructing cookbook labs to require the students to be more thoughtful could break down perceived teacher barriers to inquiry learning. Simple steps that remove or disrupt the direct transfer of step-by-step procedures in cookbook labs make students think more critically about their process. Through trials in the author's middle school…
ARS labs update to California Cotton Ginners and Growers
USDA-ARS?s Scientific Manuscript database
There are four USDA-ARS labs involved in cotton harvesting, processing & fiber quality research; The Southwestern Cotton Ginning Research Laboratory (Mesilla Park, NM); The Cotton Production and Processing Unit (Lubbock, TX); The Cotton Ginning Research Unit (Stoneville, MS); and The Cotton Structur...
Natural Alternatives for Chemicals Used in Histopathology Lab- A Literature Review.
Ramamoorthy, Ananthalakshmi; Ravi, Shivani; Jeddy, Nadeem; Thangavelu, Radhika; Janardhanan, Sunitha
2016-11-01
Histopathology lab is the place where the specimen gets processed and stained to view under microscope for interpretation. Exposure to the chemicals used in these processes cause various health hazards to the laboratory technicians, pathologists, and scientists working in the laboratory. Hence, there is a dire need to introduce healthy and bio-friendly alternatives in the field. This literature review explores the natural products and their efficiency to be used as alternatives for chemicals in the histopathology lab.
Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.
ERIC Educational Resources Information Center
Mills, David A.; Kelley, Kevin; Jones, Michael
2001-01-01
Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)
The Americas and Hurricane Andrew
NASA Technical Reports Server (NTRS)
1992-01-01
Image taken on August 25, 1992 by NOAA GOES-7 of the Americas and Hurricane Andrew.
Photo Credit: Image produced by F. Hasler, M. Jentoft-Nilsen, H. Pierce, K. Palaniappan, and M. Manyin, NASA Goddard Lab for Atmospheres. Data from the National Oceanic and Atmospheric Administration (NOAA).
NASA Astrophysics Data System (ADS)
Xu, Yuanhong; Liu, Jingquan; Zhang, Jizhen; Zong, Xidan; Jia, Xiaofang; Li, Dan; Wang, Erkang
2015-05-01
A portable lab-on-a-chip methodology to generate ionic liquid-functionalized carbon nanodots (CNDs) was developed via electrochemical oxidation of screen-printed carbon electrodes. The CNDs can be successfully applied for efficient cell imaging and solid-state electrochemiluminescence sensor fabrication on paper-based chips.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image-intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible-region photons and an amplification process to produce high-contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible-region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high-fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms.
This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
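One stage of a multi-stage sensor model of the kind described could look like the sketch below, assuming a Gaussian MTF, Poisson shot noise, and hard saturation clipping. The parameter values are arbitrary, and this is an illustrative toy, not DIRSIG's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def sensor_stage(radiance, blur_sigma=1.5, gain=50.0, full_well=4000.0):
    """One illustrative intensifier stage: MTF blur, shot noise, gain, saturation."""
    # Apply a Gaussian MTF by attenuating high spatial frequencies
    # (the frequency-domain transform of a Gaussian PSF with std blur_sigma).
    F = np.fft.fft2(radiance)
    fy = np.fft.fftfreq(radiance.shape[0])[:, None]
    fx = np.fft.fftfreq(radiance.shape[1])[None, :]
    mtf = np.exp(-2 * (np.pi * blur_sigma) ** 2 * (fx**2 + fy**2))
    blurred = np.real(np.fft.ifft2(F * mtf))
    # Shot noise: Poisson statistics on the (non-negative) photon image.
    photons = rng.poisson(np.clip(blurred, 0, None))
    # Gain, then hard saturation ("blooming" spill-over is not modeled here).
    return np.clip(photons * gain, 0, full_well)

scene = rng.uniform(0, 20, size=(32, 32))  # toy low-light radiance field
out = sensor_stage(scene)
print(out.min() >= 0 and out.max() <= 4000.0)  # → True
```

Chaining several such stages with different noise and MTF parameters mirrors the paper's per-stage processing chain; a fuller model would add blooming and per-stage quantization.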
Intelligence algorithms for autonomous navigation in a ground vehicle
NASA Astrophysics Data System (ADS)
Petkovsek, Steve; Shakya, Rahul; Shin, Young Ho; Gautam, Prasanna; Norton, Adam; Ahlgren, David J.
2012-01-01
This paper will discuss the approach to autonomous navigation used by "Q," an unmanned ground vehicle designed by the Trinity College Robot Study Team to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2011 competition, Q's intelligence was upgraded in several different areas, resulting in a more robust decision-making process and a more reliable system. In 2010-2011, the software of Q was modified to operate in a modular parallel manner, with all subtasks (including motor control, data acquisition from sensors, image processing, and intelligence) running simultaneously in separate software processes using the National Instruments (NI) LabVIEW programming language. This eliminated processor bottlenecks and increased flexibility in the software architecture. Though overall throughput was increased, the long runtime of the image processing process (150 ms) reduced the precision of Q's real-time decisions. Q had slow reaction times to obstacles detected only by its cameras, such as white lines, and was limited to slow speeds on the course. To address this issue, the image processing software was simplified and pipelined to increase throughput and minimize the robot's reaction times. The vision software was also modified to detect differences in the texture of the ground, so that specific surfaces (such as ramps and sand pits) could be identified. While previous iterations of Q failed to detect white lines that were not on a grassy surface, this new software allowed Q to dynamically alter its image processing state so that appropriate thresholds could be applied to detect white lines in changing conditions. In order to maintain an acceptable target heading, a path history algorithm was used to deal with local obstacle fields, and GPS waypoints were added to provide a global target heading. These modifications resulted in Q placing 5th in the autonomous challenge and 4th in the navigation challenge at IGVC.
Hooked on Inquiry: History Labs in the Methods Course
ERIC Educational Resources Information Center
Wood, Linda Sargent
2012-01-01
Methods courses provide a rich opportunity to unpack what it means to "learn history by doing history." To help explain what "doing history" means, the author has created history labs to walk teacher candidates through the historical process. Each lab poses a historical problem, requires analysis of primary and secondary…
Traditional Labs + New Questions = Improved Student Performance.
ERIC Educational Resources Information Center
Rezba, Richard J.; And Others
1992-01-01
Presents three typical lab activities involving the breathing rate of fish, the behavior of electromagnets, and tests for water hardness to demonstrate how labs can be modified to teach process skills. Discusses how basic concepts about experimentation are developed and ways of generating and improving science experiments. Includes a laboratory…
Alcohol Fuel By-Product Utilization and Production.
ERIC Educational Resources Information Center
Boerboom, Jim
Ten lessons comprise this curriculum intended to assist vocational teachers in establishing and conducting an alcohol fuels workshop on engine modification and plant design. A glossary is provided first. The 10 lessons cover these topics: the alcohol fuel plant, feedstock preparation lab, distillation lab, fuel plant processes, plant design lab,…
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- In the Space Life Sciences Lab, Lanfang Levine, with Dynamac Corp., transfers material into a sample bottle for analysis. She is standing in front of new equipment in the lab that will provide gas chromatography and mass spectrometry. The equipment will enable analysis of volatile compounds, such as from plants.
NASA Astrophysics Data System (ADS)
Madaras, Eric I.; Anastasi, Robert F.; Smith, Stephen W.; Seebo, Jeffrey P.; Walker, James L.; Lomness, Janice K.; Hintze, Paul E.; Kammerer, Catherine C.; Winfree, William P.; Russell, Richard W.
2008-02-01
There is currently no method for detecting corrosion under Shuttle tiles except for the expensive process of tile removal and replacement; hence NASA is investigating new NDE methods for detecting hidden corrosion. Time domain terahertz radiation has been applied to corrosion detection under tiles in samples ranging from small lab samples to a Shuttle with positive results. Terahertz imaging methods have been able to detect corrosion at thicknesses of 5 mils or greater under 1" thick Shuttle tiles and 7-12 mils or greater under 2" thick Shuttle tiles.
Photographer : JPL Range :12.2 million kilometers (7.6 million miles) The view in this photo shows
NASA Technical Reports Server (NTRS)
1979-01-01
Photographer : JPL Range :12.2 million kilometers (7.6 million miles) The view in this photo shows Jupiter's Great Red Spot emerging from the five-hour Jovian night. One of the three bright, oval clouds which were observed to form approximately 40 years ago can be seen immediately below the Red Spot. Most of the other features appearing in this view are too small to be seen clearly from Earth. The color picture was assembled from three black and white photos in the Image Processing Lab at JPL.
2001-01-24
Interior of the Equipment Module for the Laminar Soot Processes (LSP-2) experiment that will fly on the STS-107 Research 1 mission in 2002 (LSP-1 flew on the Microgravity Sciences Lab-1 mission in 1997). The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner (yellow ellipse), similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in) long. Measurements include color TV cameras and a radiometer or heat sensor (blue circle), and laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, OH, manages the project.
Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion
Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang
2016-01-01
Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to the various disturbances present in the image background. The bottleneck to robust fruit recognition is reducing the influence of the two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to find the optimal threshold. The final segmentation result was processed by a morphology operation to reduce a small amount of noise. In the detection tests, 93% of the 200 target tomato samples were recognized. This indicates that the proposed method is suitable for low-cost robotic tomato harvesting in an uncontrolled environment. PMID:26840313
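The feature extraction and fusion steps can be sketched as below. The NTSC YIQ I component uses the standard coefficients; a simple red-green opponent channel stands in for the L*a*b* a* component (a full sRGB to L*a*b* conversion would normally come from an image library); and a plain average replaces the paper's wavelet fusion, with a mean threshold replacing its adaptive threshold:

```python
import numpy as np

rng = np.random.default_rng(3)
rgb = rng.random((48, 48, 3))  # toy RGB image with channels in [0, 1]
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# I component of NTSC YIQ (standard transform coefficients).
I = 0.596 * R - 0.274 * G - 0.322 * B

# Rough red-green opponent channel standing in for the L*a*b* a* component:
# red (ripe tomato) pixels score high, green (foliage) pixels score low.
a_star = R - G

def normalize(ch):
    """Rescale a feature image to [0, 1] so the two channels are comparable."""
    return (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)

# Pixel-level fusion: plain average as the simplest stand-in for wavelet fusion.
fused = 0.5 * (normalize(a_star) + normalize(I))
mask = fused > fused.mean()  # crude threshold; the paper uses an adaptive one
print(mask.shape)  # → (48, 48)
```

Morphological opening/closing on `mask` would then remove the small noise regions mentioned in the abstract before contour extraction.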
Sourdough microbial community dynamics: An analysis during French organic bread-making processes.
Lhomme, Emilie; Urien, Charlotte; Legrand, Judith; Dousset, Xavier; Onno, Bernard; Sicard, Delphine
2016-02-01
Natural sourdoughs are commonly used in bread-making processes, especially for organic bread. Despite its role in bread flavor and dough rise, the stability of the sourdough microbial community during and between bread-making processes is debated. We investigated the dynamics of lactic acid bacteria (LAB) and yeast communities in traditional organic sourdoughs of five French bakeries during the bread-making process and several months apart, using classical and molecular microbiology techniques. Sourdoughs were sampled at four steps of the bread-making process, with repetition. The analysis of microbial density over 68 sourdough/dough samples revealed that both LAB and yeast counts changed over the course of the bread-making process and between bread-making runs. The species composition was less variable. A total of six LAB and nine yeast species was identified from 520 and 1675 isolates, respectively. The dominant LAB species was Lactobacillus sanfranciscensis, found in all bakeries and in each bread-making run. The dominant yeast species changed only once between bread-making processes but differed between bakeries. They mostly belonged to the Kazachstania clade. Overall, this study highlights the change in population density within the bread-making process and between bread-making runs, and the relative stability of the sourdough species community during the bread-making process.
Optical biosensor system with integrated microfluidic sample preparation and TIRF based detection
NASA Astrophysics Data System (ADS)
Gilli, Eduard; Scheicher, Sylvia R.; Suppan, Michael; Pichler, Heinz; Rumpler, Markus; Satzinger, Valentin; Palfinger, Christian; Reil, Frank; Hajnsek, Martin; Köstler, Stefan
2013-05-01
There is a steadily growing demand for miniaturized bioanalytical devices allowing for on-site or point-of-care detection of biomolecules or pathogens in applications like diagnostics, food testing, or environmental monitoring. These so-called labs-on-a-chip or micro-total analysis systems (μ-TAS) should ideally enable convenient sample-in, result-out operation. Therefore, the entire process from sample preparation, metering, reagent incubation, etc. to detection should be performed on a single disposable device (on-chip). In the early days, such devices were mainly fabricated using glass or silicon substrates, adapting established fabrication technologies from the electronics and semiconductor industry. More recently, development has focused on the use of thermoplastic polymers, as they allow for low-cost, high-volume fabrication of disposables. Among the most promising materials for the development of plastic-based lab-on-a-chip systems are cyclic olefin polymers and copolymers (COP/COC), due to their excellent optical properties (high transparency and low autofluorescence) and ease of processing. We present a bioanalytical system for whole blood samples comprising a disposable plastic chip based on TIRF (total internal reflection fluorescence) optical detection. The chips were fabricated by compression moulding of COP, and microfluidic channels were structured by hot embossing. These microfluidic structures integrate several sample pretreatment steps: the separation of erythrocytes, metering of sample volume using passive valves, and reagent incubation for competitive bioassays. The surface of the following optical detection zone is functionalized with specific capture probes in an array format. The plastic chips comprise dedicated structures for simple and effective coupling of excitation light from low-cost laser diodes. This enables TIRF excitation of fluorescently labeled probes selectively bound to detection spots at the microchannel surface.
The fluorescence of these detection arrays is imaged using a simple set-up based on a digital consumer camera. Image processing for spot detection and intensity calculation is accomplished using customized software. This combined TIRF excitation and imaging-based detection approach allows for effective suppression of background fluorescence from the sample, multiplexed detection in an array format, as well as internal calibration and background correction.
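The customized spot-analysis software is not described in detail; the core calculation it performs (mean spot intensity with local background subtraction) might look like the sketch below. The circular-ROI layout and all names are illustrative assumptions, not details from the paper.

```python
def spot_intensity(image, cx, cy, r_spot, r_bg):
    """Mean intensity inside a circular detection spot minus the mean of a
    surrounding background annulus (local background correction).
    image: 2-D list of gray values; (cx, cy): spot centre; radii in pixels."""
    spot, bg = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= r_spot ** 2:
                spot.append(v)
            elif d2 <= r_bg ** 2:
                bg.append(v)
    return sum(spot) / len(spot) - sum(bg) / len(bg)
```

For a detection array, this would be evaluated once per spot center, with the centers taken from the known array layout.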
Incorporating learning goals about modeling into an upper-division physics laboratory experiment
NASA Astrophysics Data System (ADS)
Zwickl, Benjamin M.; Finkelstein, Noah; Lewandowski, H. J.
2014-09-01
Implementing a laboratory activity involves a complex interplay among learning goals, available resources, feedback about the existing course, best practices for teaching, and an overall philosophy about teaching labs. Building on our previous work, which described a process of transforming an entire lab course, we now turn our attention to how an individual lab activity on the polarization of light was redesigned to include a renewed emphasis on one broad learning goal: modeling. By using this common optics lab as a concrete case study of a broadly applicable approach, we highlight many aspects of the activity development and show how modeling is used to integrate sophisticated conceptual and quantitative reasoning into the experimental process through the various aspects of modeling: constructing models, making predictions, interpreting data, comparing measurements with predictions, and refining models. One significant outcome is a natural way to integrate an analysis and discussion of systematic error into a lab activity.
Agarwal, Shikhar; Gallo, Justin J; Parashar, Akhil; Agarwal, Kanika K; Ellis, Stephen G; Khot, Umesh N; Spooner, Robin; Murat Tuzcu, Emin; Kapadia, Samir R
2016-03-01
Operational inefficiencies are ubiquitous in healthcare processes. To improve the operational efficiency of our catheterization laboratory (Cath Lab), we implemented a lean six sigma process improvement initiative, starting in June 2010. We aimed to study the impact of lean six sigma implementation on improving the efficiency and the patient throughput in our Cath Lab. All elective and urgent cardiac catheterization procedures, including diagnostic coronary angiography, percutaneous coronary interventions, structural interventions and peripheral interventions, performed between June 2009 and December 2012 were included in the study. Performance metrics utilized for analysis included turn-time, physician downtime, on-time patient arrival, on-time physician arrival, on-time start and manual sheath-pulls inside the Cath Lab. After implementation of lean six sigma in the Cath Lab, we observed a significant improvement in all of these metrics. The percentage of cases with optimal turn-time increased from 43.6% in 2009 to 56.6% in 2012 (p-trend<0.001). Similarly, the percentage of cases with an aggregate on-time start increased from 41.7% in 2009 to 62.8% in 2012 (p-trend<0.001). In addition, the percentage of manual sheath-pulls performed in the Cath Lab decreased from 60.7% in 2009 to 22.7% in 2012 (p-trend<0.001). This longitudinal study illustrates the impact of successful implementation of a well-known process improvement initiative, lean six sigma, on improving and sustaining the efficiency of our Cath Lab operation.
Copyright © 2016 Elsevier Inc. All rights reserved.
Computer Modeling of Complete IC Fabrication Process.
1987-05-28
Stereotactic radiosurgery for trigeminal neuralgia utilizing the BrainLAB Novalis system.
Zahra, Hadi; Teh, Bin S; Paulino, Arnold C; Yoshor, Daniel; Trask, Todd; Baskin, David; Butler, E Brian
2009-12-01
Stereotactic radiosurgery (SRS) is one of the least invasive treatments for trigeminal neuralgia (TN). To date, most reports have concerned Cobalt-based treatments (i.e., Gamma Knife), with limited data on image-guided stereotactic linear accelerator treatments. We describe our initial experience using the BrainLAB Novalis stereotactic system for the radiosurgical treatment of TN. A total of 20 patients were treated between July 2004 and February 2007. Each SRS procedure was performed using the BrainLAB Novalis System. Thin-cut MRI images with 1.5 mm slice thickness were acquired and fused with the simulation CT of each patient. The majority of patients received a maximum dose of 90 Gy. The median brainstem dose to 1.0 cc and 0.1 cc was 2.3 Gy and 13.5 Gy, respectively. In addition, a specially acquired three-dimensional fast imaging sequence employing steady-state acquisition (FIESTA) MRI was utilized to improve target delineation of the proximal trigeminal nerve root entry zone. The Barrow Neurological Index (BNI) pain scale for TN was used for assessing treatment outcome. At a median follow-up time of 14.2 months, 19 patients (95%) reported at least some improvement in pain. Eight (40%) patients were completely pain-free and stopped all medications (BNI Grade I), while another 2 (10%) patients also stopped medications but reported occasional pain (BNI Grade II). Another 2 (10%) patients reported no pain and 7 (35%) patients only occasional pain while continuing medications (BNI Grade IIIA and IIIB, respectively). Median time to pain control was 8.5 days (range: 1-70 days). No patient reported severe pain, worsening pain or any pain not controlled by their previous medication. Intermittent or persistent facial numbness following treatment occurred in 35% of patients. No other complications were reported. Stereotactic radiosurgery using the BrainLAB Novalis system is a safe and effective treatment for TN.
This information is important as more centers are obtaining image-guided stereotactic-based linear accelerators capable of performing radiosurgery.
Selected KSC Applied Physics Lab Responses to Shuttle Processing Measurement Requests
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.
2010-01-01
The KSC Applied Physics Lab has been supporting Shuttle Ground Processing for over 20 years by solving problems brought to us by Shuttle personnel. Roughly half of the requests to our lab have been to find ways to make measurements, or to improve on an existing measurement process. This talk will briefly cover: 1) Centering the aft end of the External Tank between the Solid Rocket Boosters; 2) Positioning the GOX Vent Hood over the External Tank; 3) Remote Measurements of External Tank Damage; 4) Strain Measurement in the Orbiter Sling; and 5) Over-center Distance Measurement in an Over-center Mechanism.
Imaging electric field dynamics with graphene optoelectronics.
Horng, Jason; Balch, Halleh B; McGuire, Allister F; Tsai, Hsin-Zon; Forrester, Patrick R; Crommie, Michael F; Cui, Bianxiao; Wang, Feng
2016-12-16
The use of electric fields for signalling and control in liquids is widespread, spanning bioelectric activity in cells to electrical manipulation of microstructures in lab-on-a-chip devices. However, an appropriate tool to resolve the spatio-temporal distribution of electric fields over a large dynamic range has yet to be developed. Here we present a label-free method to image local electric fields in real time and under ambient conditions. Our technique combines the unique gate-variable optical transitions of graphene with a critically coupled planar waveguide platform that enables highly sensitive detection of local electric fields with a voltage sensitivity of a few microvolts, a spatial resolution of tens of micrometres and a frequency response over tens of kilohertz. Our imaging platform enables parallel detection of electric fields over a large field of view and can be tailored to broad applications spanning lab-on-a-chip device engineering to analysis of bioelectric phenomena.
Li, Lin; Yin, Heyu; Mason, Andrew J
2018-04-01
The integration of biosensors, microfluidics, and CMOS instrumentation provides a compact lab-on-CMOS microsystem well suited for high-throughput measurement. This paper describes a new epoxy chip-in-carrier integration process and two planar metallization techniques for lab-on-CMOS that enable on-CMOS electrochemical measurement with multichannel microfluidics. Several design approaches with different fabrication steps and materials were experimentally analyzed to identify an ideal process that achieves the desired capability with high yield and low material and tool cost. On-chip electrochemical measurements of the integrated assembly were performed to verify the functionality of the chip-in-carrier packaging and its capability for microfluidic integration. The newly developed CMOS-compatible epoxy chip-in-carrier process paves the way for full implementation of many lab-on-CMOS applications with CMOS ICs as core electronic instruments.
Quantitative phase imaging characterization of tumor-associated blood vessel formation on a chip
NASA Astrophysics Data System (ADS)
Guo, Peng; Huang, Jing; Moses, Marsha A.
2018-02-01
Angiogenesis, the formation of new blood vessels from existing ones, is a biological process that has an essential role in solid tumor growth, development, and progression. Recent advances in lab-on-a-chip technology have created an opportunity for scientists to observe endothelial cell (EC) behaviors during the dynamic process of angiogenesis using a simple and economical in vitro platform that recapitulates in vivo blood vessel formation. Here, we use quantitative phase imaging (QPI) microscopy to continuously and non-invasively characterize the dynamic process of tumor cell-induced angiogenic sprout formation on a microfluidic chip. The live tumor cell-induced angiogenic sprouts are generated by multicellular endothelial sprouting into three-dimensional (3D) Matrigel using human umbilical vein endothelial cells (HUVECs). By using QPI, we quantitatively measure a panel of cellular morphological and behavioral parameters of each individual EC participating in this sprouting. In this proof-of-principle study, we demonstrate that QPI is a powerful tool that can provide real-time quantitative analysis of biological processes in in vitro 3D biomimetic devices, which, in turn, can improve our understanding of the biology underlying functional tissue engineering.
NASA Astrophysics Data System (ADS)
Swartz, Charles S.
2003-05-01
The process of distributing and exhibiting a motion picture has changed little since the Lumière brothers presented the first motion picture to an audience in 1895. While this analog photochemical process is capable of producing screen images of great beauty and expressive power, more often the consumer experience is diminished by third generation prints and by the wear and tear of the mechanical process. Furthermore, the film industry globally spends approximately $1B annually manufacturing and shipping prints. Alternatively, distributing digital files would theoretically yield great benefits in terms of image clarity and quality, lower cost, greater security, and more flexibility in the cinema (e.g., multiple language versions). In order to understand the components of the digital cinema chain and evaluate the proposed technical solutions, the Entertainment Technology Center at USC in 2000 established the Digital Cinema Laboratory as a critical viewing environment, with the highest quality film and digital projection equipment. The presentation describes the infrastructure of the Lab, test materials, and testing methodologies developed for compression evaluation, and lessons learned up to the present. In addition to compression, the Digital Cinema Laboratory plans to evaluate other components of the digital cinema process as well.
Molecular Innovations Towards Theranostics of Aggressive Prostate Cancer
2013-09-01
In this project, we propose to develop a new drug delivery vehicle based on dendrimer nanotechnology for personalized medicine. This new...PI’s lab will make dendrimers bearing functional handles to conjugate with chelating agents provided by the Initiating PI’s lab for PET imaging and...has designed and synthesized the proposed bifunctional chelator scaffold system, CB-TE2A(tBu)2-N3, for the further construction of dendrimer-based
NASA Astrophysics Data System (ADS)
Cobb, Bethany E.
2018-01-01
Since 2013, the Physics Department at GWU has used student-centered active learning in the introductory astronomy course “Introduction to the Cosmos.” Class time is spent in groups on questions, math problems, and hands-on activities, with multiple instructors circulating to answer questions and engage with the students. The students have responded positively to this active-learning approach. Unfortunately, in transitioning to active learning there was no time to rewrite the labs. Very quickly, the contrast between the dynamic classroom and the traditional labs became apparent. The labs were almost uniformly “cookie-cutter” in that the procedure and analysis were specified step-by-step and there was just one right answer. Students rightly criticized the labs for lacking a clear purpose and including busy-work. Furthermore, this class fulfills the GWU scientific reasoning general education requirement and thus includes learning objectives related to understanding the scientific method, testing hypotheses with data, and considering uncertainty – but the traditional labs did not require these skills. I set out to rejuvenate the lab sequence by writing new inquiry labs based on both topic-specific and scientific reasoning learning objectives. While inquiry labs can be challenging for the students, as they require active thinking and creativity, these labs engage the students more thoroughly in the scientific process. In these new labs, whenever possible, I include real astronomical data and ask the students to use digital tools (SDSS SkyServer, SOHO archive) as if they are real astronomers. To allow students to easily plot, manipulate and analyze data, I built “smart” Excel files using formulas, dropdown menus and macros. The labs are now much more authentic and thought-provoking.
Whenever possible, students independently develop questions, hypotheses, and procedures and the scientific method is “scaffolded” over the semester by providing more guidance in the early labs and more independence later on. Finally, in every lab, students must identify and reflect on sources of error. These labs are more challenging for the instructors to run and to grade, but they are much more satisfying when it comes to student learning.
Refining Southern California Geotherms Using Seismologic, Geologic, and Petrologic Constraints
NASA Astrophysics Data System (ADS)
Thatcher, W. R.; Chapman, D. S.; Allam, A. A.; Williams, C. F.
2017-12-01
Lithospheric deformation in tectonically active regions depends on the 3D distribution of rheology, which is in turn critically controlled by temperature. Under the auspices of the Southern California Earthquake Center (SCEC) we are developing a 3D Community Thermal Model (CTM) to constrain rheology and so better understand deformation processes within this complex but densely monitored and relatively well-understood region. The San Andreas transform system has sliced southern California into distinct blocks, each with characteristic lithologies, seismic velocities and thermal structures. Guided by the geometry of these blocks we use more than 250 surface heat-flow measurements to define 13 geographically distinct heat flow regions (HFRs). Model geotherms within each HFR are constrained by averages and variances of surface heat flow q0 and the 1D depth distribution of thermal conductivity (k) and radiogenic heat production (A), which are strongly dependent on rock type. Crustal lithologies are not always well known and we turn to seismic imaging for help. We interrogate the SCEC Community Velocity Model (CVM) to determine averages and variances of Vp, Vs and Vp/Vs versus depth within each HFR. We bound (A, k) versus depth by relying on empirical relations between seismic wave speed and rock type and laboratory and modeling methods relating (A, k) to rock type. Many 1D conductive geotherms for each HFR are allowed by the variances in surface heat flow and subsurface (A, k). An additional constraint on the lithosphere temperature field is provided by comparing lithosphere-asthenosphere boundary (LAB) depths identified seismologically with those defined thermally as the depth of onset of partial melting. Receiver function studies in Southern California indicate LAB depths that range from 40 km to 90 km. Shallow LAB depths are correlated with high surface heat flow and deep LAB with low heat flow. 
The much-restricted families of geotherms that intersect peridotite solidi at the seismological LAB depth in each region require that LAB temperatures lie between 1050 and 1250 °C, a range consistent with a hydrous rather than anhydrous mantle below Southern California.
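The 1-D conductive geotherms referred to above follow from steady-state heat conduction; for uniform conductivity k and heat production A, the textbook closed form is T(z) = T0 + (q0/k) z − (A/2k) z². A sketch with illustrative parameter values (not values from the CTM):

```python
def conductive_geotherm(z_m, t_surface=10.0, q0=0.065, k=3.0, heat_prod=1.0e-6):
    """1-D steady-state conductive geotherm with uniform thermal
    conductivity k (W/m/K) and radiogenic heat production A (W/m^3):
        T(z) = T0 + (q0 / k) * z - (A / (2 k)) * z^2
    q0: surface heat flow (W/m^2); z_m: depth in metres; returns deg C."""
    return t_surface + (q0 / k) * z_m - heat_prod / (2.0 * k) * z_m ** 2
```

With these illustrative defaults (65 mW/m² surface heat flow, k = 3 W/m/K, A = 1 µW/m³) the model gives roughly 510 °C at 30 km depth; in the CTM, many such geotherms per heat-flow region are generated by sampling q0, k, and A within their variances.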
Mercury: Photomosaic of the Michelangelo Quadrangle H-12
NASA Technical Reports Server (NTRS)
2000-01-01
The Michelangelo Quadrangle, which lies in Mercury's southern polar region, was named in memory of the famous Italian artist. The Mercurian surface is heavily marred by numerous impact craters. Ejecta deposits, seen as bright lines or rays, radiate outward from the point of impact, along the planet's surface indicating the source craters are young topographical features. The rays found on Mercury are similar to ones found on the surface of Earth's moon.
Several large lobate scarps, steep and long escarpments which usually show a largely lobate outline on a scale of a few to tens of kilometers, are clearly visible in the lower left side of the image, slicing through a variety of terrains including several large impact craters. The Image Processing Lab at NASA's Jet Propulsion Laboratory produced this photomosaic using computer software and techniques developed for use in processing planetary data. The images used to construct the Michelangelo Quadrangle were taken during Mariner 10's second flyby of Mercury. The Mariner 10 spacecraft was launched in 1974. The spacecraft took images of Venus in February 1974 on the way to three encounters with Mercury in March and September 1974 and March 1975. The spacecraft took more than 7,000 images of Mercury, Venus, the Earth and the Moon during its mission. The Mariner 10 Mission was managed by the Jet Propulsion Laboratory for NASA's Office of Space Science in Washington, D.C.
NASA Astrophysics Data System (ADS)
Holmes, N. G.; Olsen, Jack; Thomas, James L.; Wieman, Carl E.
2017-06-01
Instructional labs are widely seen as a unique, albeit expensive, way to teach scientific content. We measured the effectiveness of introductory lab courses at achieving this educational goal across nine different lab courses at three very different institutions. These institutions and courses encompassed a broad range of student populations and instructional styles. The nine courses studied had two key things in common: the labs aimed to reinforce the content presented in lectures, and the labs were optional. By comparing the performance of students who did and did not take the labs (with careful normalization for selection effects), we found universally and precisely no added value to learning course content from taking the labs as measured by course exam performance. This work should motivate institutions and departments to reexamine the goals and conduct of their lab courses, given their resource-intensive nature. We show why these results make sense when looking at the comparative mental processes of students involved in research and instructional labs, and offer alternative goals and instructional approaches that would make lab courses more educationally valuable.
Staying Mindful in Action: The Challenge of "Double Awareness" on Task and Process in an Action Lab
ERIC Educational Resources Information Center
Svalgaard, Lotte
2016-01-01
Action Learning is a well-proven method to integrate "task" and "process", as learning about team and self (process) takes place while delivering on a task or business challenge of real importance (task). An Action Lab® is an intensive Action Learning programme lasting for 5 days, which aims at balancing and integrating…
ERIC Educational Resources Information Center
Ende, Fred
2012-01-01
Ask students to name the aspects of science class they enjoy most, and working on labs will undoubtedly be mentioned. What often won't be included, however, is writing lab reports. For many students, the process of exploration and data collection is paramount, while the explanation and analysis of findings often takes a backseat. After all, if…
ERIC Educational Resources Information Center
Stanley, Jacob T.; Lewandowski, H. J.
2016-01-01
In experimental physics, lab notebooks play an essential role in the research process. For all of the ubiquity of lab notebooks, little formal attention has been paid to addressing what is considered "best practice" for scientific documentation and how researchers come to learn these practices in experimental physics. Using interviews…
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. -- Lanfang Levine, with Dynamac Corp., helps install new equipment for gas chromatography and mass spectrometry in the Space Life Sciences Lab. The equipment will enable analysis of volatile compounds, such as from plants. The 100,000 square-foot facility houses labs for NASA's ongoing research efforts, microbiology/microbial ecology studies and analytical chemistry labs. Also calling the new lab home are facilities for space flight-experiment and flight-hardware development, new plant growth chambers, and an Orbiter Environment Simulator that will be used to conduct ground control experiments in simulated flight conditions for space flight experiments. The SLS Lab, formerly known as the Space Experiment Research and Processing Laboratory or SERPL, provides space for NASA's Life Sciences Services contractor Dynamac Corporation, Bionetics Corporation, and researchers from the University of Florida. NASA's Office of Biological and Physical Research will use the facility for processing life sciences experiments that will be conducted on the International Space Station. The SLS Lab is the magnet facility for the International Space Research Park at KSC, being developed in partnership with Florida Space Authority.
Java Image I/O for VICAR, PDS, and ISIS
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Levoe, Steven R.
2011-01-01
This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises the low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) to provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java, and has a number of codecs for formats such as GIF, TIFF, JPEG, etc. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec spec to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included, but is format-dependent. The PDS codec, when processing PDS images with an embedded VIAR label ("dual-labeled images," such as used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plugins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.
NASA Astrophysics Data System (ADS)
Flowers, R. M.; Arrowsmith, R.; Metcalf, J. R.; Rittenour, T. M.; Schoene, B.; Hole, J. A.; Pavlis, T. L.; Wagner, L. S.; Whitmeyer, S. J.; Williams, M. L.
2015-12-01
The EarthScope AGeS (Awards for Geochronology Student Research) program is a multi-year educational initiative aimed at enhancing interdisciplinary, innovative, and high-impact science by promoting training and new interactions between students, scientists, and geochronology labs at different institutions. The program offers support of up to $10,000 for graduate students to collect and interpret geochronology data that contribute to EarthScope science targets through visits to participating geochronology labs (www.earthscope.org/geochronology). The program was launched by a 2-day short course held before the 2014 National GSA meeting in Vancouver, at which 16 geochronology experts introduced 43 participants to the basic theory and applications of geochronology methods. By the first proposal submission deadline in spring 2015, 33 labs representing a broad range of techniques had joined the program by submitting lab plans that were posted on the EarthScope website. The lab plans provide information about preparation, realistic time frames for visits, and analytical costs. In the first year of the program, students submitted 47 proposals from 32 different institutions. Proposals were ranked by an independent panel, 10 were funded, and research associated with these projects is currently underway. The next proposal deadline will be in spring 2016. The 4D-Earth initiative is a proposed successor to the EarthScope program, aimed at expanding the primarily 3D geophysical focus that captured a snapshot of present-day North America into the 4th dimension of time (hence the connection to the prototypical AGeS program), and illuminating the crustal component that was below the resolution of much of the USArray image. Like EarthScope, the notion is that this initiative would integrate new infrastructure and usher in a new way of doing science.
The overarching scientific motivation is to develop a Community Geologic Model for the 4-D Evolution of the North American continent to firmly answer long-standing questions of how the time-integrated processes of plate tectonics and surface processes produce the mantle and crustal structures we see today. A breakout session on this topic was held at the 2015 EarthScope National Meeting, and efforts are underway to solicit feedback to shape these ideas.
caNanoLab: data sharing to expedite the use of nanotechnology in biomedicine
Gaheen, Sharon; Hinkal, George W.; Morris, Stephanie A.; Lijowski, Michal; Heiskanen, Mervi
2014-01-01
The use of nanotechnology in biomedicine involves the engineering of nanomaterials to act as therapeutic carriers, targeting agents and diagnostic imaging devices. The application of nanotechnology in cancer aims to transform early detection, targeted therapeutics and cancer prevention and control. To assist in expediting and validating the use of nanomaterials in biomedicine, the National Cancer Institute (NCI) Center for Biomedical Informatics and Information Technology, in collaboration with the NCI Alliance for Nanotechnology in Cancer (Alliance), has developed a data sharing portal called caNanoLab. caNanoLab provides access to experimental and literature curated data from the NCI Nanotechnology Characterization Laboratory, the Alliance and the greater cancer nanotechnology community. PMID:25364375
Developing Inquiry-Based Labs Using Micro-Column Chromatography
ERIC Educational Resources Information Center
Barden-Gabbei, Laura M.; Moffitt, Deborah L.
2006-01-01
Chromatography is a process by which mixtures can be separated or substances can be purified. Biological and chemical laboratories use many different types of chromatographic processes. For example, the pharmaceutical industry uses chromatographic techniques to purify drugs, medical labs use them to identify blood components such as cholesterol,…
Automated Estimation of the Orbital Parameters of Jupiter's Moons
NASA Astrophysics Data System (ADS)
Western, Emma; Ruch, Gerald T.
2016-01-01
Every semester, the Physics Department at the University of St. Thomas has the Physics 104 class complete a Jupiter lab. This involves taking around twenty images of Jupiter and its moons with the telescope at the University of St. Thomas Observatory over the course of a few nights. The students then take each image, find the distance from each moon to Jupiter, and plot the distances versus the elapsed time for the corresponding image. Students use the plot to fit four sinusoidal curves, one for each of Jupiter's Galilean moons. I created a script that automates this process for the professor. It takes the list of images and creates a region file used by the students to measure the distance from the moons to Jupiter, a PNG image of the graph of all the data points and the fitted curves of the four moons, and a CSV file that contains the list of images, the date and time each image was taken, the elapsed time since the first image, and the distances to Jupiter for Io, Europa, Ganymede, and Callisto. This is important because it lets the professor spend more time working with the students and answering questions, as opposed to spending time fitting the curves of the moons on the graph, which can be time-consuming.
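Once a moon's orbital period is fixed, fitting d(t) = A sin(ωt + φ) to the measured distances becomes a linear least-squares problem in a = A cos φ and b = A sin φ. The script's internals are not given in the abstract, so the sketch below is one standard way to do such a fit (the function name and the known-period assumption are mine, not the script's):

```python
import math

def fit_sinusoid(times, dists, period):
    """Least-squares fit of d(t) = a*sin(w t) + b*cos(w t) for a known
    orbital period, solving the 2x2 normal equations directly.
    Returns (amplitude, phase) so that d(t) = A*sin(w t + phi)."""
    w = 2.0 * math.pi / period
    s = [math.sin(w * t) for t in times]
    c = [math.cos(w * t) for t in times]
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(x * y for x, y in zip(s, c))
    sd = sum(x * d for x, d in zip(s, dists))
    cd = sum(x * d for x, d in zip(c, dists))
    det = ss * cc - sc * sc
    a = (sd * cc - cd * sc) / det
    b = (cd * ss - sd * sc) / det
    return math.hypot(a, b), math.atan2(b, a)
```

The fitted amplitude corresponds to the moon's maximum apparent separation from Jupiter, which is what students ultimately use (via Kepler's third law) to estimate Jupiter's mass.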
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
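The abstract does not spell out the iterative algorithm; as a hedged illustration, a SIRT-style (Simultaneous Iterative Reconstruction Technique) update is one common choice for laboratory micro-CT with few projections. The toy system below is for intuition only; the paper's optimized implementation will differ.

```python
import numpy as np

def sirt(A, b, n_iters=200):
    """SIRT iterations for A x = b, where A is a (rays x pixels)
    projection matrix and b the measured sinogram. Row- and column-sum
    normalization keeps the update stable even with few projections."""
    row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1
    col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1
    R = 1.0 / row_sums
    C = 1.0 / col_sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        residual = b - A @ x
        x += C * (A.T @ (R * residual))  # back-project the weighted residual
    return x

# Toy 2x2 "image" probed by 4 rays (two row sums and two column sums).
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
truth = np.array([1., 2., 3., 4.])
x = sirt(A, A @ truth)
```

On consistent data the iterates converge to a solution of the system; real reconstructions add constraints (e.g. non-negativity) and stop early to trade noise against resolution.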
Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes
NASA Astrophysics Data System (ADS)
Bachlechner, Martina E.
2009-03-01
Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.
SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects
NASA Technical Reports Server (NTRS)
Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M
1998-01-01
SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of existing soft-computing software by supporting comprehensive multidisciplinary functionalities, from management tools to engineering systems. Furthermore, the built-in features help the user process and analyze information more efficiently through a friendly yet powerful interface, and allow the user to specify user-specific processing modules, adding to the standard configuration of the software environment.
NASA Technical Reports Server (NTRS)
Berrios, Daniel C.; Thompson, Terri G.
2015-01-01
NASA GeneLab is expected to capture and distribute omics data, along with the experimental and processing conditions most relevant to the research community for statistical and theoretical analysis of NASA's omics data.
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
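The cost/benefit formulae themselves are not reproduced in the abstract; the following break-even sketch, with entirely hypothetical job counts and node prices, illustrates the kind of comparison described between local serial execution and pay-per-hour cloud nodes.

```python
import math

# Illustrative sketch only (not the paper's exact formulae); all
# parameters below are hypothetical.

def local_hours(n_jobs, hours_per_job):
    """Serial execution: total wall-clock time on one local machine."""
    return n_jobs * hours_per_job

def cloud_cost_and_hours(n_jobs, hours_per_job, n_nodes, usd_per_node_hour):
    """Parallel execution across n_nodes identical pay-per-hour instances."""
    wall_hours = math.ceil(n_jobs / n_nodes) * hours_per_job
    usd = wall_hours * n_nodes * usd_per_node_hour
    return usd, wall_hours

# Example: 120 imaging jobs, 0.5 h each, 20 nodes at $0.10 per node-hour.
serial = local_hours(120, 0.5)                        # 60 h of wall-clock time
usd, wall = cloud_cost_and_hours(120, 0.5, 20, 0.10)  # a few dollars, 3 h
```

The decision then reduces to whether the dollar cost of the cloud run is worth the wall-clock hours saved, which is the trade-off the paper formalizes.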
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-03-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-01-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335
Experimental Characterization of the Jet Wiping Process
NASA Astrophysics Data System (ADS)
Mendez, Miguel Alfonso; Enache, Adriana; Gosset, Anne; Buchlin, Jean-Marie
2018-06-01
This paper presents an experimental characterization of the jet wiping process, used in continuous coating applications to control the thickness of a liquid coat with an impinging gas jet. Time-Resolved Particle Image Velocimetry (TR-PIV) is used to characterize the impinging gas flow, while an automatic interface-detection algorithm is developed to track the liquid interface at the impact. The study of the flow interaction is combined with time-resolved 3D thickness measurements of the liquid film remaining after wiping, via Time-Resolved Light Absorption (TR-LAbs). The simultaneous frequency analysis of the liquid and gas flows makes it possible to correlate their respective instabilities, provides an experimental data set for the validation of numerical studies, and supports a working hypothesis on the origin of the coat non-uniformity encountered in many jet wiping processes.
The lithosphere-asthenosphere boundary beneath the South Island of New Zealand
NASA Astrophysics Data System (ADS)
Hua, Junlin; Fischer, Karen M.; Savage, Martha K.
2018-02-01
Lithosphere-asthenosphere boundary (LAB) properties beneath the South Island of New Zealand have been imaged by Sp receiver function common-conversion point stacking. In this transpressional boundary between the Australian and Pacific plates, dextral offset on the Alpine fault and convergence have occurred for the past 20 My, with the Alpine fault now bounded by Australian plate subduction to the south and Pacific plate subduction to the north. Using data from onland seismometers, especially the 29 broadband stations of the New Zealand permanent seismic network (GeoNet), we obtained 24,971 individual receiver functions by extended-time multi-taper deconvolution, and mapped them to three-dimensional space using a Fresnel zone approximation. Pervasive strong positive Sp phases are observed in the LAB depth range indicated by surface wave tomography. These phases are interpreted as conversions from a velocity decrease across the LAB. In the central South Island, the LAB is observed to be deeper and broader to the northwest of the Alpine fault. The deeper LAB to the northwest of the Alpine fault is consistent with models in which oceanic lithosphere attached to the Australian plate was partially subducted, or models in which the Pacific lithosphere has been underthrust northwest past the Alpine fault. Further north, a zone of thin lithosphere with a strong and vertically localized LAB velocity gradient occurs to the northwest of the fault, juxtaposed against a region of anomalously weak LAB conversions to the southeast of the fault. This structure could be explained by lithospheric blocks with contrasting LAB properties that meet beneath the Alpine fault, or by the effects of Pacific plate subduction. The observed variations in LAB properties indicate strong modification of the LAB by the interplay of convergence and strike-slip deformation along and across this transpressional plate boundary.
Many-core computing for space-based stereoscopic imaging
NASA Astrophysics Data System (ADS)
McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry
The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.
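The Energy-Delay-Product metric used above is conventionally the product of energy consumed and execution time, so it penalizes platforms that are either power-hungry or slow. A minimal sketch with hypothetical power and runtime figures:

```python
# EDP = energy (J) x execution time (s); lower is better on both axes.
# The power and runtime numbers below are hypothetical, not the paper's data.

def metrics(power_watts, runtime_s):
    energy_j = power_watts * runtime_s
    edp = energy_j * runtime_s
    return energy_j, edp

# Hypothetical comparison: a low-power many-core chip vs a desktop CPU.
scc_energy, scc_edp = metrics(power_watts=25.0, runtime_s=40.0)
pc_energy, pc_edp = metrics(power_watts=95.0, runtime_s=30.0)
```

In this illustration the many-core chip is slower but wins on both energy and EDP, which is the kind of trade-off the paper's metrics are designed to expose.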
Flow Cytometry: Impact on Early Drug Discovery.
Edwards, Bruce S; Sklar, Larry A
2015-07-01
Modern flow cytometers can make optical measurements of 10 or more parameters per cell at tens of thousands of cells per second and more than five orders of magnitude dynamic range. Although flow cytometry is used in most drug discovery stages, "sip-and-spit" sampling technology has restricted it to low-sample-throughput applications. The advent of HyperCyt sampling technology has recently made possible primary screening applications in which tens of thousands of compounds are analyzed per day. Target-multiplexing methodologies in combination with extended multiparameter analyses enable profiling of lead candidates early in the discovery process, when the greatest numbers of candidates are available for evaluation. The ability to sample small volumes with negligible waste reduces reagent costs, compound usage, and consumption of cells. Improved compound library formatting strategies can further extend primary screening opportunities when samples are scarce. Dozens of targets have been screened in 384- and 1536-well assay formats, predominantly in academic screening lab settings. In concert with commercial platform evolution and trending drug discovery strategies, HyperCyt-based systems are now finding their way into mainstream screening labs. Recent advances in flow-based imaging, mass spectrometry, and parallel sample processing promise dramatically expanded single-cell profiling capabilities to bolster systems-level approaches to drug discovery. © 2015 Society for Laboratory Automation and Screening.
Flow Cytometry: Impact On Early Drug Discovery
Edwards, Bruce S.; Sklar, Larry A.
2015-01-01
Modern flow cytometers can make optical measurements of 10 or more parameters per cell at tens-of-thousands of cells per second and over five orders of magnitude dynamic range. Although flow cytometry is used in most drug discovery stages, “sip-and-spit” sampling technology has restricted it to low sample throughput applications. The advent of HyperCyt sampling technology has recently made possible primary screening applications in which tens-of-thousands of compounds are analyzed per day. Target-multiplexing methodologies in combination with extended multi-parameter analyses enable profiling of lead candidates early in the discovery process, when the greatest numbers of candidates are available for evaluation. The ability to sample small volumes with negligible waste reduces reagent costs, compound usage and consumption of cells. Improved compound library formatting strategies can further extend primary screening opportunities when samples are scarce. Dozens of targets have been screened in 384- and 1536-well assay formats, predominantly in academic screening lab settings. In concert with commercial platform evolution and trending drug discovery strategies, HyperCyt-based systems are now finding their way into mainstream screening labs. Recent advances in flow-based imaging, mass spectrometry and parallel sample processing promise dramatically expanded single cell profiling capabilities to bolster systems level approaches to drug discovery. PMID:25805180
LabVIEW control software for scanning micro-beam X-ray fluorescence spectrometer.
Wrobel, Pawel; Czyzycki, Mateusz; Furman, Leszek; Kolasinski, Krzysztof; Lankosz, Marek; Mrenca, Alina; Samek, Lucyna; Wegrzynek, Dariusz
2012-05-15
A confocal micro-beam X-ray fluorescence microscope was constructed. The system was assembled from commercially available components - a low-power X-ray tube source, polycapillary X-ray optics and a silicon drift detector - controlled by in-house developed LabVIEW software. A video camera coupled to an optical microscope was utilized to display the area excited by the X-ray beam. The camera image calibration and scan area definition software were also based entirely on LabVIEW code. Presently, the main area of application of the newly constructed spectrometer is 2-dimensional mapping of element distributions in environmental, biological and geological samples with micrometer spatial resolution. The hardware and the developed software can already handle volumetric 3-D confocal scans. In this work, the front-panel graphical user interface as well as the communication protocols between hardware components are described. Two applications of the spectrometer, to homogeneity testing of titanium layers and to imaging of various types of grains in air particulate matter collected on membrane filters, are presented. Copyright © 2012 Elsevier B.V. All rights reserved.
Height Measuring System On Video Using Otsu Method
NASA Astrophysics Data System (ADS)
Sandy, C. L. M.; Meiyanti, R.
2017-01-01
A measurement of height compares the magnitude of an object against a standard measuring tool. Existing practice still relies on simple apparatus, such as a meter rule, and this method requires a relatively long time. To overcome these problems, this research aims to create image-processing software for the measurement of height. The image captured by the video camera is then processed so that the object can be identified and its height measured using Otsu's method. The system was built in Delphi 7 using the VisionLab VCL 4.5 component. In future research, the developed system could be combined with other methods to improve its performance.
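Otsu's method itself selects the gray-level threshold that maximizes the between-class variance of the image histogram. A minimal NumPy sketch, independent of the Delphi implementation described above:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # first moment up to each level
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # 0/0 at empty tails -> 0
    return int(np.argmax(sigma_b))

# Bimodal toy image: two overlapping intensity classes around 80 and 160.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(80, 20, 5000), rng.normal(160, 20, 5000)])
img = np.clip(img, 0, 255)
t = otsu_threshold(img)
```

Pixels above the threshold are taken as the object; in the height-measurement application, the object's extent in the thresholded mask is then converted to physical units via camera calibration.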
Machine learning for micro-tomography
NASA Astrophysics Data System (ADS)
Parkinson, Dilworth Y.; Pelt, Daniël. M.; Perciano, Talita; Ushizima, Daniela; Krishnan, Harinarayan; Barnard, Harold S.; MacDowell, Alastair A.; Sethian, James
2017-09-01
Machine learning has revolutionized a number of fields, but many micro-tomography users have never used it for their work. The micro-tomography beamline at the Advanced Light Source (ALS), in collaboration with the Center for Applied Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory, has now deployed a series of tools to automate data processing for ALS users using machine learning. These include new reconstruction algorithms, feature extraction tools, and image classification and recommendation systems for scientific images. Some of these tools run either in automated pipelines that operate on data as it is collected or as stand-alone software. Others are deployed on computing resources at Berkeley Lab, from workstations to supercomputers, and made accessible to users through either scripting or easy-to-use graphical interfaces. This paper presents a progress report on this work.
Nonnegative Matrix Factorization for Efficient Hyperspectral Image Projection
NASA Technical Reports Server (NTRS)
Iacchetta, Alexander S.; Fienup, James R.; Leisawitz, David T.; Bolcar, Matthew R.
2015-01-01
Hyperspectral imaging for remote sensing has prompted development of hyperspectral image projectors that can be used to characterize hyperspectral imaging cameras and techniques in the lab. One such emerging astronomical hyperspectral imaging technique is wide-field double-Fourier interferometry. NASA's current, state-of-the-art, Wide-field Imaging Interferometry Testbed (WIIT) uses a Calibrated Hyperspectral Image Projector (CHIP) to generate test scenes and provide a more complete understanding of wide-field double-Fourier interferometry. Given enough time, the CHIP is capable of projecting scenes with astronomically realistic spatial and spectral complexity. However, this would require a very lengthy data collection process. For accurate but time-efficient projection of complicated hyperspectral images with the CHIP, the field must be decomposed both spectrally and spatially in a way that provides a favorable trade-off between accurately projecting the hyperspectral image and the time required for data collection. We apply nonnegative matrix factorization (NMF) to decompose hyperspectral astronomical datacubes into eigenspectra and eigenimages that allow time-efficient projection with the CHIP. Included is a brief analysis of NMF parameters that affect accuracy, including the number of eigenspectra and eigenimages used to approximate the hyperspectral image to be projected. For the chosen field, the normalized mean squared synthesis error is under 0.01 with just 8 eigenspectra. NMF of hyperspectral astronomical fields better utilizes the CHIP's capabilities, providing time-efficient and accurate representations of astronomical scenes to be imaged with the WIIT.
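The NMF decomposition described above can be illustrated with the classic Lee-Seung multiplicative updates on a toy datacube. This plain-NumPy sketch is not the authors' code; the matrix sizes and iteration count are illustrative.

```python
import numpy as np

def nmf(V, k, n_iters=500, seed=0):
    """Lee-Seung multiplicative updates: V (pixels x bands) ~ W @ H, all >= 0.

    Columns of W play the role of eigenimages and rows of H of eigenspectra
    in the projector decomposition described above.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update spectra
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update images
    return W, H

# Toy "datacube": 100 pixels x 32 bands built from 3 nonnegative sources,
# so a rank-3 factorization should reconstruct it almost exactly.
rng = np.random.default_rng(1)
V = rng.random((100, 3)) @ rng.random((3, 32))
W, H = nmf(V, k=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Non-negativity of both factors is what makes the decomposition physically projectable: each eigenimage/eigenspectrum pair corresponds to a realizable non-negative light pattern.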
2014-10-01
applications of present nano-/bio-technology include advanced health and fitness monitoring, high-resolution imaging, new environmental sensor platforms...other areas where nano-/bio-technology development is needed: • Sensors: Diagnostic and detection kits (gene-chips, protein-chips, lab-on-chips, etc...studies on chemo-bio nano-sensors, ultra-sensitive biochips (“lab-on-a-chip” and “cells-on-chips” devices) have been prepared for routine medical
NASA Astrophysics Data System (ADS)
Mancinelli, N. J.; Fischer, K. M.
2018-03-01
We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at deeper depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.
Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija
2017-01-01
After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole-body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions of iodine-avid tissue; precise topographic localization of such regions is sometimes practically impossible. To address this problem, we have developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneous with routine scintigraphic whole-body acquisition, including an algorithm for fusing the images from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS modality, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a model C905 web camera (Logitech, USA) with Carl Zeiss® optics, native resolution 1600×1200 pixels, 34° field of view, and 30 g weight, with the autofocus option turned "off" and auto white balance turned "on". The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a Plexiglas adapter. Our Vision-Fusion software for image acquisition and coregistration was developed in the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) with two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web camera. The Vision Development Module is an image-processing library used for image preprocessing and fusion. The software starts the web-camera image acquisition before image acquisition starts on the GC and stops it when the GC completes its acquisition.
The web camera runs in continuous acquisition mode with a frame rate f that depends on the speed v of the patient-bed movement (f = v/Δcm, where Δcm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, Δcm is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient's bed is moving are processed. Movement of the patient's bed is checked using cross-correlation of two successive images. After each image capture, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image. Stacking of narrow central ROIs introduces negligible distortion in the overall whole-body image. The first step in fusing the scintigram and the optical image was determination of the spatial transformation between them. We performed an experiment with two markers (point sources of 99mTc-pertechnetate, 1 MBq) visible in both images (WBS and optical) to find the transformation of coordinates between the images. The distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, average age 50.10 ± 12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq. Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of the radioiodine therapy.
Based on our first results from clinical testing, we conclude that the system can improve the ability of whole-body scintigraphy to detect remnant thyroid tissue in patients with DTC after radioiodine therapy.
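As a hedged illustration of the bed-movement check mentioned above (cross-correlation of two successive images), the vertical shift between frames can be estimated from row-intensity profiles. The clinical LabVIEW implementation will differ in detail; the frame sizes and shift below are synthetic.

```python
import numpy as np

def vertical_shift(frame_a, frame_b):
    """Estimate the row offset of frame_b relative to frame_a by
    cross-correlating mean row-intensity profiles."""
    prof_a = frame_a.mean(axis=1) - frame_a.mean()
    prof_b = frame_b.mean(axis=1) - frame_b.mean()
    corr = np.correlate(prof_b, prof_a, mode="full")
    # In 'full' mode, index len(prof_a)-1 corresponds to zero lag.
    return int(np.argmax(corr)) - (len(prof_a) - 1)

# Synthetic frames: the second is the first shifted down by 15 rows,
# mirroring the default 1 cm = 15 px displacement step described above.
rng = np.random.default_rng(0)
base = rng.random((240, 320))
shifted = np.roll(base, 15, axis=0)
shift = vertical_shift(base, shifted)
```

A detected shift close to the expected Δp confirms that the bed actually moved between captures, so the central ROI can safely be appended to the whole-body mosaic.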
ERIC Educational Resources Information Center
Hasper, Eric; Windhorst, Rogier; Hedgpeth, Terri; Van Tuyl, Leanne; Gonzales, Ashleigh; Martinez, Britta; Yu, Hongyu; Farkas, Zolton; Baluch, Debra P.
2015-01-01
Project 3D IMAGINE or 3D Image Arrays to Graphically Implement New Education is a pilot study that researches the effectiveness of incorporating 3D tactile images, which are critical for learning science, technology, engineering, and mathematics, into entry-level lab courses. The focus of this project is to increase the participation and…
The Role of Cortical Plasticity in Recovery of Function Following Allogeneic Hand Transplantation
2015-10-01
Keywords: transplantation, functional magnetic resonance imaging, hand replantation, cortical reorganization, functional recovery...functional magnetic resonance imaging (fMRI) data suggest that areas of the sensory and motor cortex devoted to representing the hand prior to...Accomplishments, major goals achieved (Year Two): my lab has relocated to Washington University
Quality Control in Clinical Laboratory Samples
2015-01-01
is able to find and correct flaws in the analytical processes of a lab before potentially incorrect patient results are released. According to...verifies that the results produced are accurate and precise. Clinical labs use management of documentation as well as incorporation of a continuous...improvement process to streamline the overall quality control process. QC samples are expected to be identical and tested identically to patient
Application of Lactic Acid Bacteria (LAB) in freshness keeping of tilapia fillets as sashimi
NASA Astrophysics Data System (ADS)
Cao, Rong; Liu, Qi; Chen, Shengjun; Yang, Xianqing; Li, Laihao
2015-08-01
Aquatic products are extremely perishable food commodities, and developing methods to keep fish fresh is a major task of the fishery processing industry. Application of lactic acid bacteria (LAB) as a food preservative is a novel approach. In the present study, the possibility of using lactic acid bacteria for freshness keeping of tilapia fillets intended as sashimi was examined. Fish fillets were dipped in a suspension of Lactobacillus plantarum 1.19 (obtained from the China General Microbiological Culture Collection Center) as the LAB-treated group. Changes in K-value, aerobic plate count (APC), sensory properties and microbial flora were analyzed. Results showed that LAB treatment slowed the increase of K-value and APC during early storage, and caused a smooth decrease in sensory score. Gram-negative bacteria dominated during refrigerated storage, with Pseudomonas and Aeromonas being relatively abundant. Lactobacillus plantarum 1.19 had no obvious inhibitory effect against these Gram-negatives. However, it changed the composition of the Gram-positive bacteria: no Micrococcus were detected, and the proportion of Staphylococcus decreased in the spoiled LAB-treated samples. The period during which tilapia fillets could be used as sashimi material was extended from 24 h to 48 h by LAB treatment. The potential of using LAB in sashimi processing was confirmed.
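The K-value used above is the standard nucleotide-breakdown freshness index, K(%) = (HxR + Hx) / (ATP + ADP + AMP + IMP + HxR + Hx) × 100, where HxR is inosine and Hx hypoxanthine. A one-function sketch with hypothetical concentrations:

```python
# K-value freshness index from ATP breakdown products. The concentration
# values below are hypothetical (e.g. micromol per gram of muscle).

def k_value(atp, adp, amp, imp, hxr, hx):
    """Percentage of late breakdown products (inosine + hypoxanthine)
    relative to all ATP-related compounds; lower means fresher."""
    return 100.0 * (hxr + hx) / (atp + adp + amp + imp + hxr + hx)

# A fillet dominated by IMP with little inosine/hypoxanthine is still
# fresh: sashimi grade is commonly taken as K below about 20%.
k = k_value(atp=0.1, adp=0.2, amp=0.3, imp=7.0, hxr=0.8, hx=0.2)
```

As spoilage proceeds, IMP converts to HxR and Hx, driving K upward, which is why slowing that rise extends the window in which fillets remain sashimi grade.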
SWT voting-based color reduction for text detection in natural scene images
NASA Astrophysics Data System (ADS)
Ikica, Andrej; Peer, Peter
2013-12-01
In this article, we propose a novel stroke width transform (SWT) voting-based color reduction method for detecting text in natural scene images. Unlike other text detection approaches, which mostly rely on either text structure or color, the proposed method combines both by supervising a text-oriented color reduction process with additional SWT information. SWT pixels mapped to color space vote in favor of the color they correspond to. Colors receiving a high SWT vote most likely belong to text areas and are blocked from being mean-shifted away. The literature does not explicitly address the SWT search direction issue; thus, we propose an adaptive sub-block method for determining the correct SWT direction. Both the SWT voting-based color reduction and SWT direction determination methods are evaluated on binary (text/non-text) images obtained from a challenging Computer Vision Lab optical character recognition database. The SWT voting-based color reduction method outperforms the state-of-the-art text-oriented color reduction approach.
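The voting idea can be sketched as follows; the quantization granularity and the inverse-stroke-width weighting are assumptions for illustration, not the authors' exact formulation:

```python
from collections import defaultdict

def swt_color_votes(pixels, swt, bins=8):
    """Accumulate SWT votes per quantized RGB color.

    pixels: list of (r, g, b) tuples, 0-255
    swt:    list of stroke-width values (None where no stroke was found)
    Colors with high vote totals are candidate text colors and would be
    protected from mean-shift merging in a full pipeline.
    """
    votes = defaultdict(float)
    step = 256 // bins
    for (r, g, b), w in zip(pixels, swt):
        if w is None:
            continue
        key = (r // step, g // step, b // step)  # quantized color bin
        votes[key] += 1.0 / w  # thinner strokes vote more strongly (assumed weighting)
    return dict(votes)

# Toy example: dark pixels lie on strokes, the light background has no SWT
pixels = [(10, 10, 10)] * 5 + [(240, 240, 240)] * 20
swt = [2.0] * 5 + [None] * 20
v = swt_color_votes(pixels, swt)
print(max(v, key=v.get))  # the dark bin collects all the votes
```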
Web-based system for surgical planning and simulation
NASA Astrophysics Data System (ADS)
Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.
1998-10-01
The growing body of scientific knowledge and the rapid progress in medical imaging techniques have led to an increasing demand for better and more efficient methods of remote access to high-performance computer facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach uses a client-server architecture based on internet technology in which clients use an ordinary web browser to view, send, receive, and manipulate patients' medical records, while the server uses a supercomputer facility to provide online semi-automatic segmentation, 3D visualization, surgical simulation/planning, and navigation for neuroendoscopic procedures. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. The system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM standard interface that enables authorized clients to access patients' images from different medical modalities.
Experimental teaching and training system based on volume holographic storage
NASA Astrophysics Data System (ADS)
Jiang, Zhuqing; Wang, Zhe; Sun, Chan; Cui, Yutong; Wan, Yuhong; Zou, Rufei
2017-08-01
An experiment on volume holographic storage for teaching and for training the practical skills of senior students in Applied Physics is introduced. Through this experiment, students learn to use advanced optoelectronic devices and automated control techniques, and deepen their theoretical understanding of optical information processing and photonics topics studied in their courses. In the experiment, multiplexed holographic recording and readout are based on the Bragg selectivity of a volume holographic grating, in which the Bragg diffraction angle depends on the grating-recording angle. By using different interference angles between the reference and object beams, holograms can be recorded in a photorefractive crystal, and the object images can then be read out from these holograms via angular addressing using the original reference beam. In this system, experimental data acquisition and control of the optoelectronic devices, such as shutter on-off, image loading in the SLM, and image acquisition by a CCD sensor, are automated using LabVIEW programming.
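The angular multiplexing described here rests on the Bragg condition 2Λ sin θ = λ. A small sketch of the readout-angle calculation (the wavelength and grating period are illustrative values, not the system's parameters):

```python
import math

def bragg_angle(wavelength_nm, grating_period_nm):
    """Bragg angle (degrees, inside the medium) from 2 * Lambda * sin(theta) = lambda."""
    return math.degrees(math.asin(wavelength_nm / (2.0 * grating_period_nm)))

# Illustrative numbers (not from the paper): 532 nm laser, 600 nm grating period
print(round(bragg_angle(532, 600), 2))
```

Because each recording angle satisfies the Bragg condition for its own grating, detuning the reference beam away from that angle suppresses diffraction from the other stored holograms, which is what makes angular addressing selective.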
Automated Blazar Light Curves Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Spencer James
Every night in a remote clearing called Fenton Hill, high in the Jemez Mountains of central New Mexico, a bank of robotically controlled telescopes tilts its lenses to the sky for another round of observation through digital imaging. Los Alamos National Laboratory's Thinking Telescopes project watches for celestial transients, including high-power cosmic flashes called blazars, and like all science, it can be messy work. To keep the project clicking along, Los Alamos scientists routinely install equipment upgrades, maintain the site, and refine the sophisticated machine-learning computer programs that process those images and extract useful data from them. Each week the system amasses 100,000 digital images of the heavens, some of which are compromised by clouds, wind gusts, focus problems, and so on. For a graduate student at the Lab taking a year's break between master's and Ph.D. studies, working with state-of-the-art autonomous telescopes that can make fundamental discoveries feels light years beyond the classroom.
NASA Astrophysics Data System (ADS)
Maloney, A.; Walsh, E.
2012-12-01
A solid understanding of timescales is crucial for any climate change discussion. This hands-on lab was designed as part of a dual-credit climate change course in which high school students can receive college credit. Using homemade ice cores, students have the opportunity to participate in scientific practices associated with collecting, processing, and interpreting temperature and CO2 data. Exploring millennial-scale cycles in ice core data and extending the CO2 record to the present allows students to discover timescales from an investigator's perspective. The Ice Core Lab has been piloted in two high school classrooms, and student engagement and epistemological and conceptual understanding were evaluated using quantitative pre- and post-assessment surveys. The process of creating this lab involved a partnership between an education assessment professional, high school teachers, and University of Washington professors and graduate students in Oceanography, Earth and Space Sciences, Atmospheric Sciences, and the Learning Sciences as part of the NASA Global Climate Change University of Washington in the High School program. This interdisciplinary collaboration led to the inception of the lab and was necessary to ensure that the lesson plan was pedagogically appropriate and scientifically accurate. The lab fits into a unit about natural variability and is paired with additional hands-on activities created by other graduate students that explore short-timescale temperature variations, Milankovitch cycles, isotopes, and other proxies. While the Ice Core Lab is intended to follow units that review the scientific process, global energy budget, and transport, it can be modified to fit any teaching platform.
Experiences in supporting the structured collection of cancer nanotechnology data using caNanoLab
Gaheen, Sharon; Lijowski, Michal; Heiskanen, Mervi; Klemm, Juli
2015-01-01
Summary The cancer Nanotechnology Laboratory (caNanoLab) data portal is an online nanomaterial database that allows users to submit and retrieve information on well-characterized nanomaterials, including composition, in vitro and in vivo experimental characterizations, experimental protocols, and related publications. Initiated in 2006, caNanoLab serves as an established resource with an infrastructure supporting the structured collection of nanotechnology data to address the needs of the cancer biomedical and nanotechnology communities. The portal contains over 1,000 curated nanomaterial data records that are publicly accessible for review, comparison, and re-use, with the ultimate goal of accelerating the translation of nanotechnology-based cancer therapeutics, diagnostics, and imaging agents to the clinic. In this paper, we will discuss challenges associated with developing a nanomaterial database and recognized needs for nanotechnology data curation and sharing in the biomedical research community. We will also describe the latest version of caNanoLab, caNanoLab 2.0, which includes enhancements and new features to improve usability such as personalized views of data and enhanced search and navigation. PMID:26425409
ERIC Educational Resources Information Center
Knabb, Maureen T.; Misquith, Geraldine
2006-01-01
Incorporating inquiry-based learning in the college-level introductory biology laboratory is challenging because the labs serve the dual purpose of providing a hands-on opportunity to explore content while also emphasizing the development of scientific process skills. Time limitations and variations in student preparedness for college further…
2003-09-10
KENNEDY SPACE CENTER, FLA. - The Space Life Sciences Lab (SLSL), formerly known as the Space Experiment Research and Processing Laboratory (SERPL), is nearing completion. The new lab is a state-of-the-art facility being built for ISS biotechnology research. Developed as a partnership between NASA-KSC and the State of Florida, NASA’s life sciences contractor will be the primary tenant of the facility, leasing space to conduct flight experiment processing and NASA-sponsored research. About 20 percent of the facility will be available for use by Florida’s university researchers through the Florida Space Research Institute.
NASA Astrophysics Data System (ADS)
Mohottala, Hashini
2014-03-01
The general student population enrolled in any college-level class is highly diverse. An increasing number of ``nontraditional'' students return to college, and many of them follow distance-learning degree programs while juggling other commitments, such as work and family. However, those students tend to avoid science courses with labs, mostly because the lab components of such courses cannot be completed remotely. To address this issue, we have developed a method by which introductory-level physics labs can be taught remotely. A lab kit containing the critical, easily accessible lab components is packed into a box and distributed to students at the beginning of the semester. Once the students have the apparatus, they perform the experiments at home and gather data. All communication related to the lab is handled through an interactive, user-friendly webpage, Wikispaces (WikiS). Students create pages on WikiS to submit their lab write-ups, embed videos of the experiments they perform, post pictures, and direct questions to the lab instructor. Students enrolled in the same lab can interact with each other through WikiS to discuss labs and even get assistance.
Biotic games and cloud experimentation as novel media for biophysics education
NASA Astrophysics Data System (ADS)
Riedel-Kruse, Ingmar; Blikstein, Paulo
2014-03-01
First-hand, open-ended experimentation is key for effective formal and informal biophysics education. We developed, tested and assessed multiple new platforms that enable students and children to directly interact with and learn about microscopic biophysical processes: (1) biotic games that enable local and online play using galvano- and photo-tactic stimulation of micro-swimmers, illustrating concepts such as biased random walks, low-Reynolds-number hydrodynamics, and Brownian motion; (2) an undergraduate course where students learn optics, electronics, micro-fluidics, real-time image analysis, and instrument control by building biotic games; and (3) a graduate class on the biophysics of multi-cellular systems that contains a cloud experimentation lab enabling students to execute open-ended chemotaxis experiments on slime molds online, analyze their data, and build biophysical models. Our work aims to generate the equivalent excitement and educational impact for biophysics as robotics and video games have had for mechatronics and computer science, respectively. We also discuss how scaled-up cloud experimentation systems can support MOOCs with true lab components and life-science research in general.
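The biased random walk mentioned above can be illustrated in a few lines; the step model and bias value are teaching simplifications, not the platform's actual stimulus response:

```python
import random

def biased_random_walk(steps, bias=0.6, seed=1):
    """1-D biased random walk: each step is +1 with probability `bias`, else -1.

    A toy model of a micro-swimmer whose tumbles are biased by a stimulus
    (e.g. phototaxis); bias > 0.5 produces net drift toward the stimulus.
    """
    rng = random.Random(seed)
    pos = 0
    for _ in range(steps):
        pos += 1 if rng.random() < bias else -1
    return pos

# With bias 0.6, the expected displacement after 1000 steps is (2*0.6 - 1)*1000 = 200
print(biased_random_walk(1000, bias=0.6))
```

An unbiased walk (bias = 0.5) is pure Brownian-like diffusion with zero mean displacement; the bias term is what a tactic stimulus adds.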
Objective research on tongue manifestation of patients with eczema.
Yu, Zhifeng; Zhang, Haifang; Fu, Linjie; Lu, Xiaozuo
2017-07-20
Tongue observation often depends on subjective judgment, so it is necessary to establish an objective and quantifiable standard for it. The aims of this study were to characterize the tongue manifestations of patients suffering from different types of eczema and to reveal the clinical significance of the tongue images. Two hundred patients with eczema were recruited and divided into three groups according to the diagnostic criteria: the acute group had 47 patients, the subacute group 82, and the chronic group 71. A computerized tongue-image digital analysis device was used to measure tongue parameters, and the L*a*b* color model was applied to quantify them. For parameters such as tongue color, tongue shape, color of the tongue coating, and thickness of the tongue coating, there was a significant difference among the acute, subacute, and chronic groups (P < 0.05). For the L*a*b* values of both the tongue and the tongue coating, the differences among the above types of eczema were also statistically significant (P < 0.05). Tongue images can reflect some features of eczema, and different types of eczema may be related to changes in the tongue images. The computerized tongue-image digital analysis device can objectively reflect the tongue characteristics of patients with eczema.
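The L*a*b* quantification mentioned above can be sketched with the standard sRGB-to-CIELAB conversion (D65 white point); the device's actual calibration pipeline is not described in the abstract, so this is only illustrative:

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB color to CIE L*a*b* (D65 white point)."""
    # 1. Undo the sRGB gamma curve
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2. Linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # 3. XYZ -> Lab via the piecewise cube-root function
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b_ = srgb_to_lab(255, 0, 0)  # pure red
print(round(L, 1), round(a, 1), round(b_, 1))
```

L* separates lightness from the chromatic a* (green-red) and b* (blue-yellow) axes, which is why the model suits quantitative comparisons of tongue and coating color.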
Bringing the Digital Camera to the Physics Lab
ERIC Educational Resources Information Center
Rossi, M.; Gratton, L. M.; Oss, S.
2013-01-01
We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…
Autonomous system for Web-based microarray image analysis.
Bozinov, Daniel
2003-12-01
Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input confined solely to a single microarray image, and with a data table of measurements for all gene spots as output, would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced iterative correction procedures can overcome inherent challenges in image processing. Herein we introduce an integrated software system with a Java-based interface on the client side that allows for decentralized access and enables scientists to instantly employ the most up-to-date software version at any given time. The tool extends PixClust, as used in Extractiff, and incorporates Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing a fully automated service to their users.
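The core spot quantification step that such a pipeline automates can be sketched as follows; the median-background model is a common simplification, not necessarily the one this system uses:

```python
from statistics import median

def quantify_spot(cell):
    """Quantify one microarray spot within a grid cell (2-D list of intensities).

    Background is estimated as the median pixel of the cell; the signal is the
    mean of pixels above background, minus background. A real pipeline adds
    grid alignment and artifact rejection; this is only the core measurement.
    """
    pixels = [p for row in cell for p in row]
    bg = median(pixels)
    signal = [p for p in pixels if p > bg]
    return (sum(signal) / len(signal) - bg) if signal else 0.0

# Toy 4x4 cell: a bright 2x2 spot on a dark background
cell = [
    [10, 10, 10, 10],
    [10, 200, 210, 10],
    [10, 205, 195, 10],
    [10, 10, 10, 10],
]
print(quantify_spot(cell))  # 192.5
```

Repeating this over every grid cell yields exactly the per-gene data table the abstract describes as the system's output.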
Enhancing image classification models with multi-modal biomarkers
NASA Astrophysics Data System (ADS)
Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry
2011-03-01
Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that use only image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of pulmonary fibrosis.
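A common way to combine image features with lab and EKG values, as described above, is to normalize each modality before feeding a single classifier. A sketch under that assumption (all feature values below are invented):

```python
from statistics import mean, stdev

def fuse_features(image_feats, lab_feats, ekg_feats):
    """Fuse multi-modal features by z-scoring each modality, then concatenating.

    Each argument is a list of per-subject feature vectors. Normalizing per
    modality keeps lab values (e.g. ESR, fibrinogen) on the same scale as
    image features before they enter a single classifier.
    """
    def zscore(rows):
        cols = []
        for col in zip(*rows):
            m, s = mean(col), stdev(col)
            cols.append([(v - m) / s if s else 0.0 for v in col])
        return [list(row) for row in zip(*cols)]
    fused = []
    for parts in zip(zscore(image_feats), zscore(lab_feats), zscore(ekg_feats)):
        fused.append([v for part in parts for v in part])
    return fused

# Three hypothetical subjects: 2 image features, 2 lab values (ESR, fibrinogen), 1 EKG (QRS)
img = [[0.1, 3.0], [0.4, 2.0], [0.7, 1.0]]
lab = [[20.0, 300.0], [35.0, 400.0], [50.0, 500.0]]
ekg = [[90.0], [100.0], [110.0]]
fused = fuse_features(img, lab, ekg)
print(len(fused), len(fused[0]))  # 3 subjects x 5 fused features
```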
Zago, Miriam; Scaltriti, Erika; Fornasari, Maria Emanuela; Rivetti, Claudio; Grolli, Stefano; Giraffa, Giorgio; Ramoni, Roberto; Carminati, Domenico
2012-01-01
Bacteriophages attacking lactic acid bacteria (LAB) still represent a crucial problem in industrial dairy fermentations. The consequences of a phage infection of LAB can include fermentation delay, alteration of product quality, and, in the most severe cases, loss of the product. Phage particle enumeration and phage-host interactions are normally evaluated by conventional plaque count assays, but in many cases these methods can be unsuccessful. Bacteriophages of Lactobacillus helveticus, a LAB species widely used in dairy starter or probiotic cultures, are often unable to form lysis plaques, thus impairing their enumeration by plate assay. In this study, we used epifluorescence microscopy to enumerate L. helveticus phage particles from phage-infected cultures and atomic force microscopy (AFM) to visualize both phages and bacteria during the different stages of the lytic cycle. As a preliminary step, we tested the sensitivity of phage counting by epifluorescence microscopy. To this end, particles of ΦAQ113, a lytic phage of L. helveticus isolated from a whey starter culture, were stained with SYBR Green I and enumerated by epifluorescence microscopy. Values obtained by the microscopic method were 10 times higher than plate counts, with a lowest sensitivity limit of ≥6 log phage/ml. The interaction of phage ΦAQ113 with its host cell L. helveticus Lh1405 was imaged by AFM at 0, 2, and 5 h after phage-host adsorption. The lytic cycle was followed by epifluorescence microscopy counting, and the concomitant cell-wall changes were visualized by AFM imaging. Our results showed that these two methods can be combined for reliable phage enumeration and for studying phage and host morphology during infection, giving a complete overview of phage-host interactions in L. helveticus strains involved in dairy production. Copyright © 2011 Elsevier B.V. All rights reserved.
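The direct-count arithmetic behind epifluorescence enumeration can be sketched as follows; the filter and field geometry below are illustrative values, not the study's setup:

```python
def phage_titer(counts_per_field, filter_area_mm2, field_area_mm2, volume_ml):
    """Estimate phage/ml from epifluorescence microscopy field counts.

    Standard direct-count arithmetic: average particles per field, scale by
    the number of fields on the filter, divide by the filtered volume.
    """
    mean_count = sum(counts_per_field) / len(counts_per_field)
    fields_on_filter = filter_area_mm2 / field_area_mm2
    return mean_count * fields_on_filter / volume_ml

# Hypothetical: 10 fields counted on a 201 mm^2 filter, 0.01 mm^2 field, 0.1 ml filtered
counts = [42, 38, 45, 40, 41, 39, 44, 43, 37, 41]
print(f"{phage_titer(counts, 201.0, 0.01, 0.1):.2e}")
```

Because every stained particle in a field is counted, whether or not it could form a plaque, direct counts naturally exceed plate counts, consistent with the tenfold difference reported above.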
Lab on a Chip Packing of Submicron Particles for High Performance EOF Pumping
2010-08-26
...and wet etching techniques, using a soda lime glass substrate coated with chromium and photoresist (Nanofilm, Westlake Village, CA). A weir structure...observed previously for these soda lime glass microchips [8]. Images of the three segments of different-sized particles contained within the packed... Keywords: silica beads; high pressure; lab on a chip. Abstract: The packing of submicrometer-sized silica beads inside a microchannel was enabled by a novel
Is This Real Life? Is This Just Fantasy?: Realism and Representations in Learning with Technology
NASA Astrophysics Data System (ADS)
Sauter, Megan Patrice
Students often engage in hands-on activities during science learning; however, financial and practical constraints often limit the availability of these activities. Recent advances in technology have led to increased use of simulations and remote labs, which attempt to recreate hands-on science learning via computer. Remote labs and simulations are interesting from a cognitive perspective because they allow for different relations between representations and their referents. Remote labs are unique in that they provide a yoked representation, meaning that the representation of the lab on the computer screen is actually linked to that which it represents: a real scientific device. Simulations merely represent the lab and are not connected to any real scientific devices. However, the type of visual representation used in the lab may modify the effects of the lab technology. The purpose of this dissertation is to examine the relation between representation and technology and its effects on students' psychological experiences using online science labs. Undergraduates participated in two studies that investigated the relation between technology and representation. In the first study, participants performed either a remote lab or a simulation incorporating one of two visual representations, either a static image or a video of the equipment. Although participants in both lab conditions learned, participants in the remote lab condition had more authentic experiences. However, effects were moderated by the realism of the visual representation. Participants who saw a video were more invested and felt the experience was more authentic. In a second study, participants performed a remote lab and either saw the same video as in the first study, an animation, or the video and an animation. Most participants had an authentic experience because both representations evoked strong feelings of presence.
However, participants who saw the video were more likely to believe the remote technology was real. Overall, the findings suggest that participants' experiences with technology were shaped by representation. Students had more authentic experiences using the remote lab than the simulation. However, incorporating visual representations that enhance presence made these experiences even more authentic and meaningful than afforded by the technology alone.
ERIC Educational Resources Information Center
Stanley, Jacob T.; Su, Weifeng; Lewandowski, H. J.
2017-01-01
We demonstrate how students' use of modeling can be examined and assessed using student notebooks collected from an upper-division electronics lab course. The use of models is a ubiquitous practice in undergraduate physics education, but the process of constructing, testing, and refining these models is much less common. We focus our attention on…
ERIC Educational Resources Information Center
Houston, Linda; Johnson, Candice
After much trial and error, the Agricultural Technical Institute of the Ohio State University (ATI/OSU) discovered that training of writing lab tutors can best be done through collaboration of the Writing Lab Coordinator with the "Development of Tutor Effectiveness" course offered at the institute. The ATI/OSU main computer lab and…
Mobile Robot Lab Project to Introduce Engineering Students to Fault Diagnosis in Mechatronic Systems
ERIC Educational Resources Information Center
Gómez-de-Gabriel, Jesús Manuel; Mandow, Anthony; Fernández-Lozano, Jesús; García-Cerezo, Alfonso
2015-01-01
This paper proposes lab work for learning fault detection and diagnosis (FDD) in mechatronic systems. These skills are important for engineering education because FDD is a key capability of competitive processes and products. The intended outcome of the lab work is that students become aware of the importance of faulty conditions and learn to…
Color digital halftoning taking colorimetric color reproduction into account
NASA Astrophysics Data System (ADS)
Haneishi, Hideaki; Suzuki, Toshiaki; Shimoyama, Nobukatsu; Miyake, Yoichi
1996-01-01
Taking colorimetric color reproduction into account, the conventional error diffusion method is modified for color digital halftoning. Assuming that the input to a bilevel color printer is given in CIE XYZ tristimulus values or CIELAB values instead of the more conventional RGB or YMC values, two modified versions based on vector operations in (1) the XYZ color space and (2) the LAB color space were tested. Experimental results show that the modified methods, especially the one using the LAB color space, achieve better color reproduction than the conventional methods. Spatial artifacts that appear in the modified methods are presented and analyzed. It is also shown that modified method (2), combined with a thresholding technique, achieves good spatial image quality.
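Vector error diffusion in a perceptual space, as tested in method (2), can be sketched in one dimension; the bilevel palette and next-pixel error propagation are simplifications for illustration, not the paper's exact algorithm:

```python
def vector_error_diffusion(row, palette):
    """1-D vector error diffusion in a perceptual (e.g. CIELAB) space.

    `row` is a list of 3-component color vectors; `palette` holds the printable
    colors in the same space. Each pixel snaps to the nearest palette color
    (Euclidean distance) and the full vector error is pushed to the next pixel,
    a 1-D analogue of Floyd-Steinberg weighting.
    """
    out = []
    err = [0.0, 0.0, 0.0]
    for px in row:
        target = [p + e for p, e in zip(px, err)]
        best = min(palette, key=lambda c: sum((t - v) ** 2 for t, v in zip(target, c)))
        err = [t - v for t, v in zip(target, best)]
        out.append(best)
    return out

# Bilevel palette: black and white in Lab; a mid-gray input row dithers to alternation
black, white = (0.0, 0.0, 0.0), (100.0, 0.0, 0.0)
row = [(50.0, 0.0, 0.0)] * 4
print(vector_error_diffusion(row, [black, white]))
```

Quantizing by distance in LAB rather than RGB means the "nearest printable color" is nearest in perceptual terms, which is the colorimetric motivation of the modified methods.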
LabVIEW: a software system for data acquisition, data analysis, and instrument control.
Kalkman, C J
1995-01-01
Computer-based data acquisition systems play an important role in clinical monitoring and in the development of new monitoring tools. LabVIEW (National Instruments, Austin, TX) is a data acquisition and programming environment that allows flexible acquisition and processing of analog and digital data. The main feature that distinguishes LabVIEW from other data acquisition programs is its highly modular graphical programming language, "G," and a large library of mathematical and statistical functions. The advantage of graphical programming is that the code is flexible, reusable, and self-documenting. Subroutines can be saved in a library and reused without modification in other programs. This dramatically reduces development time and enables researchers to develop or modify their own programs. LabVIEW uses a large amount of processing power and computer memory, thus requiring a powerful computer. A large-screen monitor is desirable when developing larger applications. LabVIEW is excellently suited for testing new monitoring paradigms, analysis algorithms, or user interfaces. The typical LabVIEW user is the researcher who wants to develop a new monitoring technique, a set of new (derived) variables by integrating signals from several existing patient monitors, closed-loop control of a physiological variable, or a physiological simulator.
Li, Jun; Shi, Wenyin; Andrews, David; Werner-Wasik, Maria; Lu, Bo; Yu, Yan; Dicker, Adam; Liu, Haisong
2017-06-01
This study aimed to compare online 6 degree-of-freedom image registrations of the TrueBeam cone-beam computed tomography and BrainLab ExacTrac X-ray imaging systems for intracranial radiosurgery. Phantom and patient studies were performed on a Varian TrueBeam STx linear accelerator (version 2.5), which is integrated with a BrainLab ExacTrac imaging system (version 6.1.1). The phantom study was based on a Rando head phantom and was designed to evaluate the isocenter-location dependence of the image registrations. Ten isocenters at various locations representing clinical treatment sites were selected in the phantom, and cone-beam computed tomography and ExacTrac X-ray images were taken with the phantom located at each isocenter. The patient study included 34 patients; cone-beam computed tomography and ExacTrac X-ray images were taken at each patient's treatment position. The 6 degree-of-freedom image registrations were performed on cone-beam computed tomography and ExacTrac, and the residual errors calculated from the two systems were compared. In the phantom study, the average residual error differences (absolute values) between cone-beam computed tomography and ExacTrac image registrations were 0.17 ± 0.11 mm, 0.36 ± 0.20 mm, and 0.25 ± 0.11 mm in the vertical, longitudinal, and lateral directions, respectively. The average residual error differences in the rotation, roll, and pitch were 0.34° ± 0.08°, 0.13° ± 0.09°, and 0.12° ± 0.10°, respectively. In the patient study, the average residual error differences in the vertical, longitudinal, and lateral directions were 0.20 ± 0.16 mm, 0.30 ± 0.18 mm, and 0.21 ± 0.18 mm, respectively. The average residual error differences in the rotation, roll, and pitch were 0.40° ± 0.16°, 0.17° ± 0.13°, and 0.20° ± 0.14°, respectively. Overall, the average residual error differences were <0.4 mm in the translational directions and <0.5° in the rotational directions.
ExacTrac X-ray image registration is comparable to TrueBeam cone-beam computed tomography image registration in intracranial treatments.
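The per-axis comparisons above reduce to computing the mean and standard deviation of absolute residual differences between the two systems. A sketch with invented numbers (not the study's data):

```python
from statistics import mean, stdev

def residual_differences(cbct, exactrac):
    """Mean and SD of absolute per-fraction differences between two systems.

    `cbct` and `exactrac` are residual errors (mm or degrees) reported by the
    two systems for the same fractions along one axis; this mirrors how the
    per-axis "average residual error differences" above are summarized.
    """
    diffs = [abs(a - b) for a, b in zip(cbct, exactrac)]
    return mean(diffs), stdev(diffs)

# Hypothetical vertical residuals (mm) for five fractions
cbct_vert = [0.3, -0.1, 0.5, 0.2, -0.4]
exact_vert = [0.1, 0.1, 0.2, 0.4, -0.2]
m, s = residual_differences(cbct_vert, exact_vert)
print(f"{m:.2f} +/- {s:.2f} mm")
```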
Development of Low Cost Gas Atomization of Precursor Powders for Simplified ODS Alloy Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Iver
2014-08-05
A novel gas atomization reaction synthesis (GARS) method was developed in this project to enable production (at our partner's facility) of a precursor Ni-Cr-Y-Ti powder with a surface oxide and an internal rare earth (RE) containing intermetallic compound (IMC) phase. Consolidation and heat-treatment experiments were performed at Ames Lab to promote the exchange of oxygen from the surface oxide to the RE intermetallic to form nano-metric oxide dispersoids. Alloy selection was aided by internal oxidation and serial grinding experiments at Ames Lab, which found that Hf-containing alloys may form more stable dispersoids than Ti-containing alloys; i.e., the Hf-containing system exhibited five different oxide phases and two different intermetallics, compared to the two oxide phases and one intermetallic in the Ti-containing alloys. Since the Ti-containing system was less complex to characterize and to use for observing the effects of processing parameters, it was selected by Ames Lab for experimental atomization trials at our partner. An internal oxidation model was developed at Ames Lab and used to predict the heat-treatment times necessary for dispersoid formation as a function of powder size and temperature. A new high-pressure gas atomization (HPGA) nozzle was developed at Ames Lab with the aim of promoting fine powder production at scales similar to the high gas-flow and melt-flow rates of industrial atomizers. The atomization nozzle was characterized using schlieren imaging and aspiration pressure testing at Ames Lab to determine the optimum melt delivery tip geometry and atomization pressure for promoting enhanced secondary atomization mechanisms. Six atomization trials were performed at our partner to investigate the effects of gas atomization pressure and reactive gas concentration on the particle size distribution (PSD) and the oxygen content of the resulting powder.
The effect on the rapidly solidified microstructure (as a function of powder size) was also investigated at Ames Lab as a function of reactive gas composition and bulk alloy composition. The results indicated that the pulsatile gas atomization mechanism and the significantly enhanced yield of fine powders reported in the literature for this type of process were not observed. It was also determined that reactive gas may marginally improve the fine powder yield, but further experiments are required. The oxygen content in the gas did not have any detrimental effect on the microstructure (i.e., it did not significantly reduce undercooling). On the contrary, the oxygen addition to the atomization gas may have mitigated some potent catalytic nucleation sites, but not enough to significantly alter the microstructure vs. particle size relationship. Overall, the downstream injection of oxygen was not found to significantly affect either the particle size distribution or undercooling (as inferred from microstructure and XRD observations), but injection further upstream, including in the gas atomization nozzle, remains to be investigated in later work.
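The internal oxidation model mentioned above is not specified in the abstract, but Wagner-type parabolic kinetics give the usual scaling of heat-treatment time with powder size; every coefficient in this sketch is a placeholder, not a value fitted in the project:

```python
import math

R = 8.314  # J/(mol K)

def oxidation_time_s(radius_m, temp_K, c_o_surf, c_solute, nu=1.5,
                     d0=1e-6, q=200e3):
    """Heat-treatment time for oxygen to penetrate a powder particle.

    Classic Wagner internal-oxidation kinetics, X^2 = 2*D_O*C_O*t / (nu*C_B),
    solved for t with the penetration depth X set to the particle radius.
    D_O follows an Arrhenius law D = d0 * exp(-q / (R*T)). All coefficients
    here are illustrative placeholders.
    """
    d_o = d0 * math.exp(-q / (R * temp_K))
    return radius_m ** 2 * nu * c_solute / (2.0 * d_o * c_o_surf)

# Illustrative: time scales with the square of the particle radius at fixed T
t_small = oxidation_time_s(10e-6, 1200.0, c_o_surf=1e-4, c_solute=0.01)
t_large = oxidation_time_s(20e-6, 1200.0, c_o_surf=1e-4, c_solute=0.01)
print(round(t_large / t_small, 1))  # doubling the radius quadruples the time
```

The parabolic radius-squared dependence is why predicted heat-treatment times must be tabulated as a function of powder size, as the project's model does.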
Research to Develop and Apply Biophotonics to Military Medicine Needs
2012-06-14
brains were hit by a pneumatic (cortical) impact device and imaged by intravital two-photon confocal scanning microscopy via a polished and...Doppler optical frequency domain imaging . In this proposal, we will develop a windowed model of TBI. Using this model, we will characterize for the...following approach to study the microvascular kinetics following TBI. Optical Frequency Domain Imaging . We have developed an instrument in our lab
Characterization of Lactic Acid Bacteria (LAB) isolated from Indonesian shrimp paste (terasi)
NASA Astrophysics Data System (ADS)
Amalia, U.; Sumardianto; Agustini, T. W.
2018-02-01
Shrimp paste is a fermented product, popular as a taste enhancer in many dishes. Shrimp paste is processed by natural fermentation, which depends on the shrimp itself and the presence of salt. The salt inhibits the growth of undesirable microorganisms and allows the salt-tolerant lactic acid bacteria (LAB) to ferment the protein source to lactic acids. The objective of this study was to characterize LAB isolated from Indonesian shrimp paste, or "Terasi", at different fermentation times (30, 60 and 90 days). Vitech analysis showed that there were four strains of microorganisms referred to as lactic acid bacteria (named LABS1, LABS2, LABS3 and LABS4) with 95% sequence similarity. On the basis of biochemical characteristics, the four isolates represented Lactobacillus, for which the name Lactobacillus plantarum is proposed. L. plantarum plays a role in producing secondary metabolites, which give umami flavor to shrimp paste.
X-ray microtomography-based measurements of meniscal allografts.
Mickiewicz, P; Binkowski, M; Bursig, H; Wróbel, Z
2015-05-01
X-ray microcomputed tomography (XMT) is a technique widely used to image hard and soft tissues. Meniscal allografts, as collagen structures, can be imaged and analyzed using XMT. The aim of this study was to present an XMT scanning protocol that can be used to obtain the 3D geometry of menisci. It was further applied to compare two methods of meniscal allograft measurement: traditional (based on manual measurement) and novel (based on digital measurement of 3D models of menisci obtained with the use of an XMT scanner). The XMT-based menisci measurement is a reliable method for assessing the geometry of a meniscal allograft by measuring the basic meniscal dimensions known from the traditional protocol. Thirteen dissected menisci were measured according to the same principles traditionally applied in a tissue bank. Next, the same specimens were scanned by a laboratory scanner in the XMT Lab. The images were processed to obtain a 3D mesh. 3D models of allograft geometry were then measured using a novel protocol enhanced by computer software. Then, both measurements were compared using statistical tests. The results showed significant differences (P<0.05) between the lengths of the medial and lateral menisci measured in the tissue bank and the XMT Lab. Also, medial meniscal widths were significantly different (P<0.05). Differences in meniscal lengths may result from difficulties in dissected meniscus measurements in tissue banks, and may be related to the elastic structure of the dissected meniscus. Errors may also be caused by the lack of highlighted landmarks on the meniscal surface in this study. The XMT may be a good technique for assessing meniscal dimensions without actually touching the specimen. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
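The statistical comparison described above (the same menisci measured by two protocols) is naturally a paired test. A minimal sketch using only the standard library is below; the measurement values are invented for illustration and are not data from the study.

```python
import math
from statistics import mean, stdev

def paired_t(manual, digital):
    """Paired t statistic for two repeated measurements of the same specimens.

    Returns (t, degrees_of_freedom). The input data here are invented;
    the study's real meniscal dimensions are not reproduced.
    """
    diffs = [m - d for m, d in zip(manual, digital)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical lengths (mm) of the same six menisci measured two ways.
manual  = [45.1, 47.3, 44.8, 46.9, 45.5, 48.0]
digital = [43.9, 46.0, 43.5, 45.8, 44.1, 46.7]
t, dof = paired_t(manual, digital)
```

With a consistent offset between the two protocols, |t| exceeds the two-sided 5% critical value (about 2.571 for 5 degrees of freedom), which is the P<0.05 situation the abstract reports.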
NASA Astrophysics Data System (ADS)
Poniger, S. S.; Tochon-Danguy, H. J.; Panopoulos, H. P.; O'Keefe, G. J.; Peake, D.; Rasool, R.; Scott, A. M.
2012-12-01
There is growing worldwide interest in the production of long-lived positron emitters for molecular imaging and the development of novel immuno-PET techniques for drug discovery. The desire to produce solid target isotopes in Australia has significantly increased over the years, and several research projects on the labelling of peptides, proteins and biomolecules, including the labelling of recombinant antibodies, have been limited by the availability of suitable isotopes. This has led to the recent installation and commissioning of a new lab dedicated to fully automated solid target isotope production, including 124I, 64Cu, 89Zr and 86Y.
Neuroanatomical phenotyping of the mouse brain with three-dimensional autofluorescence imaging
Wong, Michael D.; Dazai, Jun; Altaf, Maliha; Mark Henkelman, R.; Lerch, Jason P.; Nieman, Brian J.
2012-01-01
The structural organization of the brain is important for normal brain function and is critical to understand in order to evaluate changes that occur during disease processes. Three-dimensional (3D) imaging of the mouse brain is necessary to appreciate the spatial context of structures within the brain. In addition, the small scale of many brain structures necessitates resolution at the ∼10 μm scale. 3D optical imaging techniques, such as optical projection tomography (OPT), have the ability to image intact large specimens (1 cm3) with ∼5 μm resolution. In this work we assessed the potential of autofluorescence optical imaging methods, and specifically OPT, for phenotyping the mouse brain. We found that both specimen size and fixation methods affected the quality of the OPT image. Based on these findings we developed a specimen preparation method to improve the images. Using this method we assessed the potential of optical imaging for phenotyping. Phenotypic differences between wild-type male and female mice were quantified using computer-automated methods. We found that optical imaging of the endogenous autofluorescence in the mouse brain allows for 3D characterization of neuroanatomy and detailed analysis of brain phenotypes. This will be a powerful tool for understanding mouse models of disease and development and is a technology that fits easily within the workflow of biology and neuroscience labs. PMID:22718750
NASA Astrophysics Data System (ADS)
Lin, Yongping; Zhang, Xiyang; He, Youwu; Cai, Jianyong; Li, Hui
2018-02-01
The Jones matrix and the Mueller matrix are the main tools for studying polarization devices. The Mueller matrix can also be used in biological tissue research to obtain complete tissue properties, but commercial optical coherence tomography systems do not provide the relevant analysis functions. Based on LabVIEW, a near-real-time display method for the Mueller matrix image of biological tissue is developed, which simultaneously gives the corresponding phase retardance image. A quarter-wave plate was placed at 45° in the sample arm. Experimental results from the two orthogonal channels show that the phase retardance based on the fixed incident-light-vector mode and the Mueller matrix based on the dynamic incident-light-vector mode can provide an effective analysis method for the existing system.
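As a sketch of the polarization algebra behind the method (not the authors' LabVIEW code), the quarter-wave plate at 45° in the sample arm can be modeled with the standard Mueller matrix of a linear retarder; applying it to horizontally polarized light yields circularly polarized light.

```python
import math

def retarder_mueller(theta_deg, delta_deg):
    """Mueller matrix of a linear retarder with fast axis at theta
    and retardance delta (standard textbook form).

    This is a generic polarization-optics sketch, not code from the
    system described in the abstract.
    """
    c = math.cos(math.radians(2 * theta_deg))
    s = math.sin(math.radians(2 * theta_deg))
    cd = math.cos(math.radians(delta_deg))
    sd = math.sin(math.radians(delta_deg))
    return [
        [1, 0,                0,                0],
        [0, c*c + s*s*cd,     c*s*(1 - cd),    -s*sd],
        [0, c*s*(1 - cd),     s*s + c*c*cd,     c*sd],
        [0, s*sd,            -c*sd,             cd],
    ]

def apply(M, S):
    """Multiply a 4x4 Mueller matrix by a Stokes vector."""
    return [sum(M[i][j] * S[j] for j in range(4)) for i in range(4)]

# A quarter-wave plate (retardance 90 degrees) at 45 degrees turns
# horizontal linear light [1, 1, 0, 0] into circular light [1, 0, 0, 1].
qwp45 = retarder_mueller(45, 90)
S_out = apply(qwp45, [1, 1, 0, 0])
```

This is exactly why the plate is set at 45°: it maps the linear input states onto circular ones, so the two orthogonal channels carry the information needed to recover the full Mueller matrix.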
Snow White Trench Prepared for Sample Collection
NASA Technical Reports Server (NTRS)
2008-01-01
The informally named 'Snow White' trench is the source for the next sample to be acquired by NASA's Phoenix Mars Lander for analysis by the wet chemistry lab. The Surface Stereo Imager on Phoenix took this shadow-enhanced image of the trench, on the eastern end of Phoenix's work area, on Sol 103, or the 103rd day of the mission, Sept. 8, 2008. The trench is about 23 centimeters (9 inches) wide. The wet chemistry lab is part of Phoenix's Microscopy, Electrochemistry and Conductivity suite of instruments. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

The purpose of this SOP is to describe how lab results are organized and processed into the official database known as the Complete Dataset (CDS); to describe the structure and creation of the Analysis-ready Dataset (ADS); and to describe the structure and process of creating the...
NASA Astrophysics Data System (ADS)
Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.
2010-01-01
This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course, while avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as way points, with given GPS coordinates and avoid obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities, including image processing, sensor interfacing and data processing, path planning and navigation algorithms, and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to last year's design, an NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping that is equipped with a real-time processor, an FPGA and modular input/output. Under the current system, the real-time processor handles the path planning and navigation algorithms, while the FPGA gathers and processes sensor data. This setup leaves the laptop free to focus on running the image processing algorithm. Image processing, as previously presented by Nepal et al., is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in a faster image processing algorithm, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real-time processor due to its deterministic operation. The implementation of this architecture required exploration of various inter-system communication techniques.
Data transfer between the laptop and the real-time processor using UDP packets was established as the most reliable protocol after testing various options. Improvements can be made to the system by migrating more algorithms to the hardware-based FPGA to further speed up the operations of the vehicle.
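The UDP link described above can be sketched with a loopback example: one socket stands in for the laptop's vision process sending detected line points, the other for the real-time processor receiving them. The port number and message format here are invented for illustration and do not reflect Q's actual protocol.

```python
import socket

PORT = 50007  # arbitrary port for this loopback demonstration

# "Real-time processor" side: bind and wait for a datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", PORT))
receiver.settimeout(1.0)

# "Laptop" side: send a hypothetical payload of x,y line points.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"12.5,3.4;13.1,3.6", ("127.0.0.1", PORT))

data, addr = receiver.recvfrom(1024)
points = [tuple(map(float, p.split(","))) for p in data.decode().split(";")]

sender.close()
receiver.close()
```

UDP suits this role because each packet is an independent, low-latency message; a dropped coordinate update is simply superseded by the next one, which matters more here than guaranteed delivery.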
Electrostatic Levitation for Studies of Additive Manufactured Materials
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Rogers, Jan R.; Tramel, Terri
2014-01-01
The electrostatic levitation (ESL) laboratory at NASA's Marshall Space Flight Center is a unique facility for investigators studying high temperature materials. The laboratory boasts two levitators in which samples can be levitated, heated, melted, undercooled, and resolidified. Electrostatic levitation minimizes gravitational effects and allows materials to be studied without contact with a container or instrumentation. The lab also has a high temperature emissivity measurement system, which provides normal spectral and normal total emissivity measurements at use temperature. The ESL lab has been instrumental in many pioneering materials investigations of thermophysical properties, e.g., creep measurements, solidification, triggered nucleation, and emissivity at high temperatures. Research in the ESL lab has already led to the development of advanced high temperature materials for aerospace applications, coatings for rocket nozzles, improved medical and industrial optics, metallic glasses, ablatives for reentry vehicles, and materials with memory. Modeling of additive manufacturing materials processing is necessary for the study of the resulting materials properties. In addition, modeling of the selective laser melting process and prediction of its materials properties are underway. Unfortunately, there is very little data on the properties of these materials, especially in the liquid state. Some method to measure the thermophysical properties of additive manufacturing materials is necessary. The ESL lab is ideal for these studies. The lab can provide surface tension and viscosity of molten materials, density measurements, emissivity measurements, and even creep strength measurements. The ESL lab can also determine melting temperature, surface temperatures, and phase transition temperatures of additive manufactured materials.
This presentation will provide background on the ESL lab and its capabilities, provide an approach to using the ESL in supporting the development and modeling of the selective laser melting process for metals, and provide an overview of the results to date.
NASA Astrophysics Data System (ADS)
Susilowati, Agustine; Melanie, Hakiki; Maryati, Yati; Aspiyanto
2017-01-01
Fermentation by lactic acid bacteria (LAB), a mixture of Lactobacillus acidophilus, Bifidobacterium bifidum, Lactobacillus bulgaricus and Streptococcus thermophilus, of the hydrolysate resulting from inulin hydrolysis using inulinase enzymes obtained from the endophytic fungi Scopulariopsis sp.-CBS1 (inulin hydrolysate S) and a member of the class Deuteromycetes, CBS4 (inulin hydrolysate D), generates fermented inulin fiber with potential as a cholesterol binder. The fermentation process was conducted at concentrations of 50% (w/v) inulin hydrolysate, 15% (v/v) LAB and 12.5% (w/v) skim milk, at room temperature and 40°C for 0, 12, 24, 36 and 48 hours, respectively. The experimental results showed that longer LAB fermentation times increased total acids, TPC and CBC at pH 2, but decreased total sugar, reducing sugar, IDF, SDF, and CBC at pH 2 and pH 7. Based on Cholesterol Binding Capacity (CBC), the optimal fermentation of inulin hydrolysate S was achieved at 40°C for 24 hours, giving a CBC at pH 2 of 19.11 mg/g TDF, and that of inulin hydrolysate D at 40°C for 48 hours, giving a CBC at pH 2 of 24.28 mg/g TDF. Inulin hydrolysate D (class Deuteromycetes, CBS4) fermented by LAB had better functional properties as a cholesterol binder than inulin hydrolysate S fermented by LAB. This is attributed to the binding of cholesterol and cholesterol derivatives resulting from LAB degradation in the digestive system (stomach) as compared to the colon, under optimal process conditions.
In Situ Bioremediation by Natural Attenuation: from Lab to Field Scale
NASA Astrophysics Data System (ADS)
Banwart, S. A.; Thornton, S.; Rees, H.; Lerner, D.; Wilson, R.; Romero-Gonzalez, M.
2007-03-01
In Situ Bioremediation is a passive technology to degrade soil and groundwater contamination in order to reduce environmental and human health risk. Natural attenuation is the application of engineering biotechnology principles to soil and groundwater systems as natural bioreactors to transform or immobilize contamination to less toxic or less bioavailable forms. Current advances in computational methods and site investigation techniques now allow detailed numerical models to be adequately parameterized for interpretation of processes and their interactions in the complex sub-surface system. Clues about biodegradation processes point to the dominant but poorly understood behaviour of attached growth microbial populations that exist within the context of biofilm formation. New techniques that combine biological imaging with non-destructive chemical analysis are providing new insights into attached growth influence on Natural Attenuation. Laboratory studies have been carried out in porous media packed bed reactors that physically simulate plume formation in aquifers. Key results show that only a small percentage of the total biomass within the plume is metabolically active and that activity is greatest at the plume fringe. This increased activity coincides with the zone where dispersive mixing brings dissolved O2 from outside the plume in contact with the contamination and microbes. The exciting new experimental approaches in lab systems offer tremendous potential to move Natural Attenuation and other in situ bioremediation approaches away from purely empirical engineering approaches, to process descriptions that are far more strongly based on first principles and that have a far greater predictive capacity for remediation performance assessment.
DC High Voltage Conditioning of Photoemission Guns at Jefferson Lab FEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Garcia, C.; Benson, S. V.; Biallas, G.
2009-08-04
DC high voltage photoemission electron guns with GaAs photocathodes have been used to produce polarized electron beams for nuclear physics experiments for about three decades with great success. In the late 1990s, Jefferson Lab adopted this gun technology for a free electron laser (FEL), but to assist with high bunch charge operation, considerably higher bias voltage is required compared to the photoguns used at the Jefferson Lab Continuous Electron Beam Accelerator Facility. The FEL gun has been conditioned above 400 kV several times, albeit encountering non-trivial challenges with ceramic insulators and field emission from electrodes. Recently, high voltage processing with krypton gas was employed to process very stubborn field emitters. This work presents a summary of the techniques used to high voltage condition the Jefferson Lab FEL photoemission gun.
Positron emission tomography and optical tissue imaging
Falen, Steven W [Carmichael, CA; Hoefer, Richard A [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; McKisson, John [Hampton, VA; Kross, Brian [Yorktown, VA; Proffitt, James [Newport News, VA; Stolin, Alexander [Newport News, VA; Weisenberger, Andrew G [Yorktown, VA
2012-05-22
A mobile compact imaging system that combines both PET imaging and optical imaging into a single system which can be located in the operating room (OR) and provides faster feedback to determine if a tumor has been fully resected and if there are adequate surgical margins. While final confirmation is obtained from the pathology lab, such a device can reduce the total time necessary for the procedure and the number of iterations required to achieve satisfactory resection of a tumor with good margins.
... The sample is taken to the laboratory for evaluation. The lab evaluates the enzymes acetylcholinesterase and pseudocholinesterase, which act to break down acetylcholine. Acetylcholine is a critical chemical in the transmission of nerve impulses.
Amniocentesis - series (image)
... during which you must lie very still. A technician locates your fetus with an ultrasound. Using the ... fluid. This fluid contains fetal cells that a technician grows in a lab and analyzes. Test results ...
NASA Astrophysics Data System (ADS)
Kalaskas, Anthony Bacaoat
The lab report is a genre commonly assigned by lab instructors and written by science majors in undergraduate science programs. The teaching and learning of the lab report, however, is a complicated and complex process that both instructors and students regularly contend with. This thesis is a qualitative study that aims to mediate the mismatch between students and instructors by ascertaining their attitudes, beliefs, and values regarding lab report writing. In this way, this thesis may suggest changes to teaching and learning strategies that lead to an improvement of lab report writing done by students. Given that little research has been conducted in this area thus far, this thesis also serves as a pilot study. A literature review is first conducted on the history of the lab report to delineate its development since its inception into American postsecondary education in the late 19th century. Genre theory and Vygotsky's zone of proximal development (ZPD) serve as the theoretical lenses for this thesis. Surveys and interviews are conducted with biology majors and instructors in the Department of Biology at George Mason University. Univariate analysis and coding are applied to elucidate responses from participants. The findings suggest that students may lack the epistemological background to understand lab reports as a process of doing science. This thesis also finds that both instructors and students consider the lab report primarily as a pedagogical genre as opposed to an apprenticeship genre. Additionally, although instructors were found to have utilized an effective piecemeal teaching strategy, there remains a lack of empathy among instructors for students. Collectively, these findings suggest that instructors should modify teaching strategies to determine and address student weaknesses more directly.
Imaging electric field dynamics with graphene optoelectronics
Horng, Jason; Balch, Halleh B.; McGuire, Allister F.; ...
2016-12-16
The use of electric fields for signalling and control in liquids is widespread, spanning bioelectric activity in cells to electrical manipulation of microstructures in lab-on-a-chip devices. However, an appropriate tool to resolve the spatio-temporal distribution of electric fields over a large dynamic range has yet to be developed. Here we present a label-free method to image local electric fields in real time and under ambient conditions. Our technique combines the unique gate-variable optical transitions of graphene with a critically coupled planar waveguide platform that enables highly sensitive detection of local electric fields with a voltage sensitivity of a few microvolts, a spatial resolution of tens of micrometres and a frequency response over tens of kilohertz. Our imaging platform enables parallel detection of electric fields over a large field of view and can be tailored to broad applications spanning lab-on-a-chip device engineering to analysis of bioelectric phenomena.
My Brother’s Keeper National Lab Week
2016-03-02
Harold (Russ) McAmis demonstrates machinery inside NASA Kennedy Space Center’s Prototype Lab for students in the My Brother’s Keeper program. The Florida spaceport is one of six NASA centers that participated in My Brother’s Keeper National Lab Week. The event is a nationwide effort to bring youth from underrepresented communities into federal labs and centers for hands-on activities, tours and inspirational speakers. Sixty students from the nearby cities of Orlando and Sanford visited Kennedy, where they toured the Vehicle Assembly Building, the Space Station Processing Facility and the center’s innovative Swamp Works Labs. The students also had a chance to meet and ask questions of a panel of subject matter experts from across Kennedy.
ERIC Educational Resources Information Center
Pilarz, Matthew
2013-01-01
For this study, a research-based lab module was implemented in two high school chemistry classes for the purpose of examining classroom dynamics throughout the process of students completing the module. A research-based lab module developed for use in undergraduate laboratories by the Center for Authentic Science Practice in Education (CASPiE) was…
MO-DE-BRA-06: MrRSCAL: A Radiological Simulation Tool for Resident Education
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, W; Yanasak, N
Purpose: The goal of this project was to create a readily accessible, comprehensive-yet-flexible interactive MRI simulation tool for use in training and education of radiology residents in particular. This tool was developed to take the place of an MR scanner in laboratory activities, as magnet time has become scarce while hospitals are optimizing clinical scheduling for improved throughput. Methods: MrRSCAL (Magnetic resonance Resident Simulation Console for Active Learning) was programmed and coded using Matlab on a Mac workstation utilizing the OS X platform. MR-based brain images were obtained from one of the co-authors and processed to generate parametric maps. Scanner sounds are also generated via mp3 convolution of a single MR gradient slew with a time-profile of gradient waveforms. Results: MrRSCAL facilitates the simulation of multiple MR sequences with the ability to alter MR parameters via an intuitive GUI control panel. The application allows the user to gain real-time understanding of image transformation when varying these parameters by examining the resulting images. Lab procedures can be loaded and displayed for more directed study. The panel is also configurable, providing a simple interface for elementary labs or a full array of controls for the expert user. Conclusion: Our introduction of MrRSCAL, which is readily available to users with a current laptop or workstation, allows for individual or group study of MR image acquisition with immediate educational feedback as the MR parameters are manipulated. MrRSCAL can be used at any time and any place once installed, offering a new tool for reviewing relaxometric and artifact principles when studying for boards or investigating properties of a pulse sequence. This tool promises to be extremely useful in conveying traditionally difficult and abstract concepts involved with MR to the radiology resident and other medical professionals at large.
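A simulator of this kind recomputes image contrast from parametric maps whenever the user changes sequence parameters. The idea can be sketched with the idealized spin-echo signal equation S = PD·(1 − exp(−TR/T1))·exp(−TE/T2); the tissue values below are textbook-style approximations for 1.5 T, not MrRSCAL's internal model.

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Idealized spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).

    A generic contrast sketch, not MrRSCAL's actual simulation engine.
    Times are in milliseconds; pd is relative proton density.
    """
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Approximate 1.5 T tissue parameters (ms): white matter vs CSF.
wm  = dict(pd=0.7, t1=600,  t2=80)
csf = dict(pd=1.0, t1=4000, t2=2000)

# T1-weighted settings (short TR, short TE): white matter is brighter.
t1w_wm  = spin_echo_signal(tr=500, te=15, **wm)
t1w_csf = spin_echo_signal(tr=500, te=15, **csf)

# T2-weighted settings (long TR, long TE): CSF is brighter.
t2w_wm  = spin_echo_signal(tr=4000, te=100, **wm)
t2w_csf = spin_echo_signal(tr=4000, te=100, **csf)
```

Evaluating the same parametric maps under both settings reproduces the familiar contrast inversion between T1- and T2-weighted images, which is the kind of immediate feedback the tool aims to provide.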
Imaging the North Anatolian Fault using the scattered teleseismic wavefield
NASA Astrophysics Data System (ADS)
Thompson, D. A.; Rost, S.; Houseman, G. A.; Cornwell, D. G.; Turkelli, N.; Teoman, U.; Kahraman, M.; Altuncu Poyraz, S.; Gülen, L.; Utkucu, M.; Frederiksen, A. W.; Rondenay, S.
2013-12-01
The North Anatolian Fault Zone (NAFZ) is a major continental strike-slip fault system, similar in size and scale to the San Andreas system, that extends ˜1200 km across Turkey. In 2012, a new multidisciplinary project (FaultLab) was instigated to better understand deformation throughout the entire crust in the NAFZ, in particular the expected transition from narrow zones of brittle deformation in the upper crust to possibly broader shear zones in the lower crust/upper mantle and how these features contribute to the earthquake loading cycle. This contribution will discuss the first results from the seismic component of the project, a 73 station network encompassing the northern and southern branches of the NAFZ in the Sakarya region. The Dense Array for North Anatolia (DANA) is arranged as a 6×11 grid with a nominal station spacing of 7 km, with a further 7 stations located outside of the main grid. With the excellent resolution afforded by the DANA network, we will present images of crustal structure using the technique of teleseismic scattering tomography. The method uses a full waveform inversion of the teleseismic scattered wavefield coupled with array processing techniques to infer the properties and location of small-scale heterogeneities (with scales on the order of the seismic wavelength) within the crust. We will also present preliminary results of teleseismic scattering migration, another powerful method that benefits from the dense data coverage of the deployed seismic network. Images obtained using these methods together with other conventional imaging techniques will provide evidence for how the deformation is distributed within the fault zone at depth, providing constraints that can be used in conjunction with structural analyses of exhumed fault segments and models of geodetic strain-rate across the fault system. 
By linking together results from the complementary techniques being employed in the FaultLab project, we aim to produce a comprehensive picture of fault structure and dynamics throughout the crust and shallow upper mantle of this major active fault zone.
Hamilton, Liberty S; Chang, David L; Lee, Morgan B; Chang, Edward F
2017-01-01
In this article, we introduce img_pipe, our open source python package for preprocessing of imaging data for use in intracranial electrocorticography (ECoG) and intracranial stereo-EEG analyses. The process of electrode localization, labeling, and warping for use in ECoG currently varies widely across laboratories, and it is usually performed with custom, lab-specific code. This python package aims to provide a standardized interface for these procedures, as well as code to plot and display results on 3D cortical surface meshes. It gives the user an easy interface to create anatomically labeled electrodes that can also be warped to an atlas brain, starting with only a preoperative T1 MRI scan and a postoperative CT scan. We describe the full capabilities of our imaging pipeline and present a step-by-step protocol for users.
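The electrode-warping step can be illustrated with a toy example: coordinates are mapped through a transform into atlas space. Atlas warps are typically nonlinear; this minimal sketch uses a plain affine, and the matrix and coordinates below are invented for illustration (this is not img_pipe's API).

```python
def warp_electrodes(coords, affine):
    """Apply a 4x4 affine (row-major nested lists) to 3D electrode points.

    A conceptual stand-in for atlas warping; real pipelines use
    nonlinear registration fields rather than a single matrix.
    """
    out = []
    for x, y, z in coords:
        p = (x, y, z, 1.0)                      # homogeneous coordinates
        out.append(tuple(
            sum(affine[i][j] * p[j] for j in range(4)) for i in range(3)
        ))
    return out

# Hypothetical transform: uniform 10% scale plus a translation.
affine = [
    [1.1, 0.0, 0.0,  2.0],
    [0.0, 1.1, 0.0, -3.0],
    [0.0, 0.0, 1.1,  1.5],
    [0.0, 0.0, 0.0,  1.0],
]
electrodes = [(10.0, 20.0, 30.0), (0.0, 0.0, 0.0)]
warped = warp_electrodes(electrodes, affine)
```

Once electrodes sit in a common atlas space, anatomical labels and cross-subject comparisons follow from the same coordinate frame, which is the point of the warping stage in the pipeline.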
Polyphony: A Workflow Orchestration Framework for Cloud Computing
NASA Technical Reports Server (NTRS)
Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom
2010-01-01
Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
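The resilience property described above (work lost to a failed node is not lost to the system) can be caricatured with a retry loop over a worker pool. The failing-task simulation below is invented and far simpler than the transactional cloud implementation the paper describes.

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(task_id, fail_once):
    """Pretend image-processing task that fails on its first attempt
    for ids listed in fail_once, simulating a lost worker node."""
    if task_id in fail_once:
        fail_once.discard(task_id)
        raise RuntimeError(f"worker lost task {task_id}")
    return task_id * 2  # stand-in for a processed image

def run_all(task_ids, flaky):
    """Keep re-queuing failed tasks until every task has a result."""
    results = {}
    pending = list(task_ids)
    with ThreadPoolExecutor(max_workers=4) as pool:
        while pending:
            futures = {t: pool.submit(process_image, t, flaky) for t in pending}
            pending = []
            for t, f in futures.items():
                try:
                    results[t] = f.result()
                except RuntimeError:
                    pending.append(t)  # re-queue, as if handed to another node
    return results

# Tasks 1 and 3 "crash" once each, yet every task completes.
results = run_all(range(5), flaky={1, 3})
```

The design point is that correctness lives in the task queue rather than in any individual worker, so elastic, unreliable resources (spot cloud instances, idle desktops) can be mixed freely.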
Building Structured Personal Health Records from Photographs of Printed Medical Records.
Li, Xiang; Hu, Gang; Teng, Xiaofei; Xie, Guotong
2015-01-01
Personal health records (PHRs) provide patient-centric healthcare by making health records accessible to patients. In China, it is very difficult for individuals to access electronic health records. Instead, individuals can easily obtain the printed copies of their own medical records, such as prescriptions and lab test reports, from hospitals. In this paper, we propose a practical approach to extract structured data from printed medical records photographed by mobile phones. An optical character recognition (OCR) pipeline is performed to recognize text in a document photo, which addresses the problems of low image quality and content complexity by image pre-processing and multiple OCR engine synthesis. A series of annotation algorithms that support flexible layouts are then used to identify the document type, entities of interest, and entity correlations, from which a structured PHR document is built. The proposed approach was applied to real world medical records to demonstrate the effectiveness and applicability.
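The "multiple OCR engine synthesis" step can be sketched as a per-line majority vote across engines. The engine outputs below are fabricated, and a real synthesis would also weigh per-engine confidence scores rather than counting votes alone.

```python
from collections import Counter

def synthesize(engine_outputs):
    """Pick, for each line, the transcription most engines agree on.

    engine_outputs is a list of per-engine line lists, all the same
    length. This majority vote is an illustrative simplification of
    the synthesis step, not the paper's exact algorithm.
    """
    merged = []
    for lines in zip(*engine_outputs):
        vote = Counter(lines).most_common(1)[0][0]
        merged.append(vote)
    return merged

# Fabricated lab-report lines with typical single-engine misreads.
outputs = [
    ["Glucose 5.4 mmol/L", "HbA1c 6.1 %"],   # engine A
    ["Glucose 5.4 mmol/L", "HbAlc 6.1 %"],   # engine B misreads "1" as "l"
    ["Glucose 5,4 mmol/L", "HbA1c 6.1 %"],   # engine C misreads "." as ","
]
merged = synthesize(outputs)
```

Because each engine tends to make different errors on low-quality phone photos, combining them recovers lines that no single engine reads perfectly.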
Multispectral Photogrammetric Data Acquisition and Processing for Wall Paintings Studies
NASA Astrophysics Data System (ADS)
Pamart, A.; Guillon, O.; Faraci, S.; Gattet, E.; Genevois, M.; Vallet, J. M.; De Luca, L.
2017-02-01
In the field of wall paintings studies, different imaging techniques are commonly used for documentation and for decision making in terms of conservation and restoration. There are nowadays some challenging issues in merging scientific imaging techniques in a multimodal context (i.e. multi-sensor, multi-dimensional, multi-spectral and multi-temporal approaches). For decades those CH objects have been widely documented with Technical Photography (TP), which gives precious information to understand or retrieve the painting layouts and history. More recently there is an increasing demand for the use of digital photogrammetry in order to provide, as one of the possible outputs, an orthophotomosaic, which brings a possibility for metrical quantification of conservators'/restorers' observations and action planning. This paper presents some ongoing experimentations of the LabCom MAP-CICRP relying on the assumption that those techniques can be merged through a common pipeline to share their own benefits and create a more complete documentation.
Hamilton, Liberty S.; Chang, David L.; Lee, Morgan B.; Chang, Edward F.
2017-01-01
In this article, we introduce img_pipe, our open source python package for preprocessing of imaging data for use in intracranial electrocorticography (ECoG) and intracranial stereo-EEG analyses. The process of electrode localization, labeling, and warping for use in ECoG currently varies widely across laboratories, and it is usually performed with custom, lab-specific code. This python package aims to provide a standardized interface for these procedures, as well as code to plot and display results on 3D cortical surface meshes. It gives the user an easy interface to create anatomically labeled electrodes that can also be warped to an atlas brain, starting with only a preoperative T1 MRI scan and a postoperative CT scan. We describe the full capabilities of our imaging pipeline and present a step-by-step protocol for users. PMID:29163118
Crescent-shaped Earth and Moon
NASA Technical Reports Server (NTRS)
1978-01-01
This picture of a crescent-shaped Earth and Moon -- the first of its kind ever taken by a spacecraft -- was recorded Sept. 18, 1977, by NASA's Voyager 1 when it was 7.25 million miles (11.66 million kilometers) from Earth. The Moon is at the top of the picture and beyond the Earth as viewed by Voyager. In the picture are eastern Asia, the western Pacific Ocean and part of the Arctic. Voyager 1 was directly above Mt. Everest (on the night side of the planet at 25 degrees north latitude) when the picture was taken. The photo was made from three images taken through color filters, then processed by the Jet Propulsion Laboratory's Image Processing Lab. Because the Earth is many times brighter than the Moon, the Moon was artificially brightened by a factor of three relative to the Earth by computer enhancement so that both bodies would show clearly in the print. Voyager 2 was launched Aug. 20, 1977, followed by Voyager 1 on Sept. 5, 1977, en route to encounters at Jupiter in 1979 and Saturn in 1980 and 1981. JPL manages the Voyager mission for NASA.
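The relative-brightening step described above can be illustrated with a toy routine (pure Python, hypothetical pixel values; not JPL's actual enhancement code):

```python
def brighten_region(img, mask, factor):
    """Scale pixel values inside `mask` by `factor`, clipping to [0, 255].

    A toy version of the enhancement described above: the dim body
    (the Moon) is multiplied by 3 so both bodies print clearly.
    img and mask are nested lists of the same shape.
    """
    return [[min(255, round(p * factor)) if m else p
             for p, m in zip(row, mrow)]
            for row, mrow in zip(img, mask)]

frame = [[200, 200],   # bright "Earth" pixels
         [30,  40]]    # dim "Moon" pixels
moon = [[False, False],
        [True,  True]]
print(brighten_region(frame, moon, 3.0))  # prints [[200, 200], [90, 120]]
```

Note that the scaling is purely cosmetic: it changes the printed appearance, not the underlying radiometric relationship between the two bodies.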
Berkeley Lab Scientists to Play Role in New Space Telescope
The Wide Field Infrared Survey Telescope (WFIRST) will search for planets circling distant suns, among other science aims. Its imager will cover a field of view far larger than that of the Hubble Space Telescope's Wide Field Camera 3 infrared imager: a Hubble large-scale mapping survey of the M31 galaxy (shown here) required 432 "pointings" of its imager, while WFIRST would need only two.
Bringing the Digital Camera to the Physics Lab
NASA Astrophysics Data System (ADS)
Rossi, M.; Gratton, L. M.; Oss, S.
2013-03-01
We discuss how compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. The difficulties stem from the nonlinear treatment of light-intensity values stored in compressed files. To overcome these problems, one has to adopt uncompressed, native (raw) formats, as we examine in this work.
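The nonlinearity at issue can be illustrated with the standard sRGB transfer curve (an assumption for illustration; real cameras layer proprietary tone curves on top of this):

```python
def srgb_to_linear(value):
    """Invert the standard sRGB transfer curve.

    `value` is a normalized channel value in [0, 1] as stored in a
    JPEG; the return value is proportional to scene light intensity.
    Constants are from the sRGB specification (IEC 61966-2-1).
    """
    if value <= 0.04045:
        return value / 12.92
    return ((value + 0.055) / 1.055) ** 2.4

# A pixel twice as bright in the JPEG is NOT twice the light:
a, b = srgb_to_linear(0.25), srgb_to_linear(0.5)
print(b / a)  # roughly 4, not 2
```

This is why intensity ratios read directly from compressed files are misleading: quantitative work needs either raw data or an explicit linearization step like the one above.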
Karakaş, H M; Karakaş, S; Ozkan Ceylan, A; Tali, E T
2009-08-01
Event-related potentials (ERPs) have high temporal resolution, but insufficient spatial resolution; the converse is true for the functional imaging techniques. The purpose of the study was to test the utility of a multimodal EEG/ERP-MRI technique which combines electroencephalography (EEG) and magnetic resonance imaging (MRI) for a simultaneously high temporal and spatial resolution. The sample consisted of 32 healthy young adults of both sexes. Auditory stimuli were delivered according to the active and passive oddball paradigms in the MRI environment (MRI-e) and in the standard conditions of the electrophysiology laboratory environment (Lab-e). Tasks were presented in a fixed order. Participants were exposed to the recording environments in a counterbalanced order. EEG data were preprocessed for MRI-related artifacts. Source localization was made using a current density reconstruction technique. The ERP waveforms for the MRI-e were morphologically similar to those for the Lab-e. The effect of the recording environment, experimental paradigm and electrode location were analyzed using a 2x2x3 analysis of variance for repeated measures. The ERP components in the two environments showed parametric variations and characteristic topographical distributions. The calculated sources were in line with the related literature. The findings indicated effortful cognitive processing in MRI-e. The study provided preliminary data on the feasibility of the multimodal EEG/ERP-MRI technique. It also indicated lines of research that are to be pursued for a decisive testing of this technique and its implementation to clinical practice.
Smartphone technology can be transformative to the deployment of lab-on-chip diagnostics.
Erickson, David; O'Dell, Dakota; Jiang, Li; Oncescu, Vlad; Gumus, Abdurrahman; Lee, Seoho; Mancuso, Matthew; Mehta, Saurabh
2014-09-07
The rapid expansion of mobile technology is transforming the biomedical landscape. By 2016 there will be 260 M active smartphones in the US and millions of health accessories and software "apps" running off them. In parallel with this have come major technical achievements in lab-on-a-chip technology leading to incredible new biochemical sensors and molecular diagnostic devices. Despite these advancements, the uptake of lab-on-a-chip technologies at the consumer level has been somewhat limited. We believe that the widespread availability of smartphone technology and the capabilities they offer in terms of computation, communication, social networking, and imaging will be transformative to the deployment of lab-on-a-chip type technology both in the developed and developing world. In this paper we outline why we believe this is the case, the new business models that may emerge, and detail some specific application areas in which this synergy will have long term impact, namely: nutrition monitoring and disease diagnostics in limited resource settings.
In vivo endoscopic Doppler optical coherence tomography imaging of mouse colon
NASA Astrophysics Data System (ADS)
Welge, Weston A.; Barton, Jennifer K.
2016-03-01
Colorectal cancer remains the second deadliest cancer in the United States, despite the high sensitivity and specificity of colonoscopy and sigmoidoscopy. While these standard imaging procedures can accurately detect medium and large polyps, some studies have shown miss rates up to 25% for polyps less than 5 mm in diameter. An imaging modality capable of detecting small lesions could potentially improve patient outcomes. Optical coherence tomography (OCT) has been shown to be a powerful imaging modality for adenoma detection in a mouse model of colorectal cancer. While previous work has focused on analyzing the structural OCT images based on thickening of the mucosa and changes in light attenuation in depth, imaging the microvasculature of the colon may enable earlier detection of polyps. The structure and function of vessels grown to support tumor growth are markedly different from healthy vessels. Doppler OCT is capable of imaging microvessels in vivo. We developed a method of processing raw fringe data from a commercial swept-source OCT system using a lab-built miniature endoscope to extract microvessels. This method can be used to measure vessel count and density and to measure flow velocities. This may improve early detection and aid in the development of new chemopreventive and chemotherapeutic drugs. We present, to the best of our knowledge, the first endoscopic Doppler OCT images of in vivo mouse colon.
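The velocity estimate underlying such Doppler processing can be sketched from the standard phase-shift relation (illustrative parameter values, not the authors' instrument settings):

```python
import cmath
import math

def doppler_velocity(phase_shift, wavelength, n_refr, t_interval):
    """Axial flow velocity from the phase shift between adjacent A-scans.

    Standard Doppler OCT relation: v = dphi * lambda0 / (4 * pi * n * T),
    where lambda0 is the center wavelength, n the tissue refractive
    index, and T the A-line period.
    """
    return phase_shift * wavelength / (4 * math.pi * n_refr * t_interval)

# Phase shift between two complex A-scan samples at the same depth
# (values made up for illustration; in practice these come from the
# FFT of the raw fringe data).
a1, a2 = complex(1.0, 0.2), complex(0.6, 0.9)
dphi = cmath.phase(a2 * a1.conjugate())

# Example: a 1.0 rad shift at 1310 nm, tissue index 1.38, 10 us line period
v = doppler_velocity(1.0, 1310e-9, 1.38, 10e-6)
print(f"{v * 1e3:.2f} mm/s")  # prints "7.55 mm/s"
```

Only the axial velocity component is measured this way; recovering total flow requires knowing the angle between the beam and the vessel.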
Providing Guidance in Virtual Lab Experimentation: The Case of an Experiment Design Tool
ERIC Educational Resources Information Center
Efstathiou, Charalampos; Hovardas, Tasos; Xenofontos, Nikoletta A.; Zacharia, Zacharias C.; deJong, Ton; Anjewierden, Anjo; van Riesen, Siswa A. N.
2018-01-01
The present study employed a quasi-experimental design to assess a computer-based tool, which was intended to scaffold the task of designing experiments when using a virtual lab for the process of experimentation. In particular, we assessed the impact of this tool on primary school students' cognitive processes and inquiry skills before and after…
ERIC Educational Resources Information Center
Goacher, Robyn E.; Kline, Cynthia M.; Targus, Alexis; Vermette, Paul J.
2017-01-01
We describe how a practical instructional development process helped a first-year assistant professor rapidly develop, implement, and assess the impact on her Analytical Chemistry course caused by three changes: (a) moving the lab into the same semester as the lecture, (b) developing a more collaborative classroom environment, and (c) increasing…
Improved LCI profile of LAB based on latest technology advances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berna, J.L.; Renta, C.
1995-12-31
The first technology used to produce LAB was introduced in the early 60s, and since then a continuous optimization process has taken place for this highly competitive product, on which additional cost-effectiveness improvements became highly challenging. The latest technology introduced in the market (CEPSA-UOP DETAL), based on a fixed-bed alkylation process, has already been proven on a commercial scale. The simplicity of the new technology as compared to current ones, namely HF, has proven very effective in substantially reducing the impact of several major components of the Life Cycle Inventory (LCI), in particular the emissions of the overall operation. Additional improvements in other aspects, such as energy consumption, are extremely difficult to achieve today, as this parameter has already been highly optimized during the last two decades, making LAB a highly effective chemical in terms of energy requirements as well as raw material consumption. The results of the first LCI of the new LAB technology indicate a reduction of CO process emissions to nearly 1/2 compared to the standard HF process, as well as a reduction in solid waste to 1/3 of the corresponding HF process. Important reductions have also been observed in NOx emissions with the new technology.
Three-dimensional femtosecond laser processing for lab-on-a-chip applications
NASA Astrophysics Data System (ADS)
Sima, Felix; Sugioka, Koji; Vázquez, Rebeca Martínez; Osellame, Roberto; Kelemen, Lóránd; Ormos, Pal
2018-02-01
The extremely high peak intensity associated with the ultrashort pulse width of the femtosecond laser allows us to induce nonlinear interactions, such as multiphoton absorption and tunneling ionization, with materials that are transparent to the laser wavelength. More importantly, focusing the femtosecond laser beam inside the transparent materials confines the nonlinear interaction to within the focal volume, enabling three-dimensional (3D) micro- and nanofabrication. This 3D capability offers three different schemes, which involve undeformative, subtractive, and additive processing. The undeformative processing performs internal refractive index modification to construct optical microcomponents including optical waveguides. Subtractive processing can realize the direct fabrication of 3D microfluidics, micromechanics, microelectronics, and photonic microcomponents in glass. Additive processing, represented by two-photon polymerization, enables the fabrication of 3D polymer micro- and nanostructures for photonic and microfluidic devices. These different schemes can be integrated to realize more functional microdevices, including lab-on-a-chip devices, which are miniaturized laboratories that can perform reaction, detection, analysis, separation, and synthesis of biochemical materials with high efficiency, high speed, high sensitivity, low reagent consumption, and low waste production. This review paper describes the principles and applications of femtosecond laser 3D micro- and nanofabrication for lab-on-a-chip applications. A hybrid technique that promises to enhance the functionality of lab-on-a-chip devices is also introduced.
2014-10-01
…imaging technique used to capture T cell/APC interaction and infiltration in CNS during the disease course of EAE; and finally 3) characterize the… …period, we aim to understand the mechanism of APC/T cell interaction by standardizing the available mouse model and imaging techniques in our lab… …resulted in the development of new triterpenoids, mouse imaging techniques and biochemistry and chemical library construction. For example, work…
My Green Car: Painting Motor City Green (Ep. 2) - DOE Lab-Corps Video Series
Saxena, Samveg; Shah, Nihar; Hansen, Dana
2018-06-12
The Lab's MyGreenCar team kicks off its customer discovery process in Detroit with a business boot camp designed for scientists developing energy-related technologies. Customer interviews lead to late night discussions and insights on less-than-receptive consumers. Back in Berkeley, the team decides to fine tune targeted customer segments. What makes a new technology compelling enough to transition out of the lab and become a consumer product? That's the question Berkeley Lab researchers Samveg Saxena, Nihar Shah, and Dana Hansen plus industry mentor Russell Carrington set out to answer for MyGreenCar, an app providing personalized fuel economy or electric vehicle range estimates for consumers researching new cars. DOE's Lab-Corps program offered the technology team some answers. The EERE-funded program, based on the National Science Foundation's I-Corps™ model for entrepreneurial training, provides tools and training to move energy-related inventions to the marketplace. During Lab-Corps' intensive six-week session, technology teams interview 100 customer and value chain members to discover which potential products based on their technologies will have significant market pull. A six-video series follows the MyGreenCar team's Lab-Corps experience, from pre-training preparation with the Lab's Innovation and Partnerships Office through the ups and downs of the customer discovery process. Will the app make it to the marketplace? You'll just have to watch.
Lab-X-ray multidimensional imaging of processes inside porous media
NASA Astrophysics Data System (ADS)
Godinho, Jose
2017-04-01
Time-lapse and other multidimensional X-ray imaging techniques have mostly been applied using synchrotron radiation, which limits accessibility and complicates data analysis. Here, we present new time-lapse imaging approaches using laboratory X-ray computed microtomography (CT) to study transformations inside porous media. Specifically, three methods will be presented: 1) Quantitative time-lapse radiography to study sub-second processes, for example the penetration of particles into fractures and pores, which is essential to understand how proppants keep fractures open during hydraulic fracturing and how filter cakes form during borehole drilling. 2) Combination of time-lapse CT with diffraction tomography to study the transformation between bio-inspired polymorphs in 6D, e.g. mineral phase transformations between ACC, vaterite and calcite (CaCO3), and between ACS, anhydrite and gypsum (CaSO4). Crystals can be resolved in nanopores down to 7 nm (over 100 times smaller than the resolution of CT), which allows studying the effect of confinement on phase stability and growth rates. 3) Fast iterative helical micro-CT scanning to study samples with a high height-to-width ratio (e.g. long cores) at optimal resolution. Here we show how this can be useful to study the distribution of the products of fluid-mediated mineral reactions throughout longer reaction paths and more representative volumes. Using state-of-the-art reconstruction algorithms allows reducing scanning times from over ten hours to below two hours, enabling time-lapse studies. It is expected that these new techniques will open new possibilities for time-lapse imaging of a wider range of geological processes using laboratory X-ray CT, thereby increasing the accessibility of multidimensional imaging to a larger number of users and applications in geology.
Spaceport Processing System Development Lab
NASA Technical Reports Server (NTRS)
Dorsey, Michael
2013-01-01
The Spaceport Processing System Development Lab (SPSDL), developed and maintained by the Systems Hardware and Engineering Branch (NE-C4), is a development lab with its own private/restricted networks. A private/restricted network is a network with restricted or no communication with other networks. This allows users from different groups to work on their own projects in their own configured environments without interfering with others utilizing the resources in the lab. The different networks in the lab have no way to talk to each other because of the way they are configured, so how a user configures his software, operating system, or equipment doesn't interfere with or carry over to any of the other networks in the lab. The SPSDL is available for any project at KSC that needs a lab environment. My job in the SPSDL was to assist in maintaining the lab to make sure it is accessible for users. This includes, but is not limited to, making sure the computers in the lab are running properly and patched with updated hardware/software. In addition, I assisted users who had issues utilizing the resources in the lab, which could include helping to configure a restricted network for their own environment. All of this was to ensure workers were able to use the SPSDL to work on their projects without difficulty, which would, in turn, benefit the work done throughout KSC. When I wasn't working in the SPSDL, I helped other coworkers with smaller tasks, which included, but were not limited to, the proper disposal, moving of, or search for essential equipment. During my free time, I also used NASA's resources to increase my knowledge and skills in a variety of subjects related to my major as a computer engineer, particularly UNIX, networking, and embedded systems.
SOAR Telescope Progress Report
NASA Astrophysics Data System (ADS)
Sebring, T.; Cecil, G.; Krabbendam, V.
1999-12-01
The 4.3m SOAR telescope is fully funded and under construction. A partnership between the country of Brazil, NOAO, Michigan State University, and the University of North Carolina at Chapel Hill, SOAR is being designed for high-quality imaging and imaging spectroscopy in the optical and near-IR over a field of view up to 12' diameter. US astronomers outside MSU and UNC will access 30% of the observing time through the standard NOAO TAC process. The telescope is being designed to support remote and synoptic observations. First light is scheduled for July 2002 at Cerro Pachon in Chile, a site with median seeing of 2/3" at 500 nm. The telescope will be operated by CTIO. Corning Inc. has fused the mirror blanks from boules of ULE glass. RSI in Richardson, Texas and Raytheon Optical Systems Inc. in Danbury, Conn. are designing and will fabricate the mount and active optics systems, respectively. The mount supports an instrument payload in excess of 5000 kg, at 2 Nasmyth locations and 3 bent Cass. ports. The mount and facility building have space for a laser to generate an artificial AO guide star. LabVIEW running under the Linux OS on compactPCI hardware has been adopted to control all telescope, detector, and instrument systems. The primary mirror is 10 cm thick and will be mounted on 120 electro-mechanical actuators to maintain its ideal optical figure at all elevations. The position of the light-weighted secondary mirror is adjusted to maintain collimation through use of a Shack-Hartmann wavefront sensor. The tertiary mirror feeds instruments and also jitters at up to 50 Hz to compensate for telescope shake and atmosphere wavefront tilt. The dome is a steel framework, with fiberglass panels. Air in the observing volume will be exchanged with that outside every few minutes by using large fans under computer control. All systems will be assembled and checked at the manufacturer's facility, then shipped to Chile. 
A short integration period is planned, and limited science operations will begin in late 2002. The telescope will deliver an f/16 tip/tilt/focus stabilized image. Optical spectrographs (5' field and IFU) using volume-phase holographic gratings for high efficiency, and wide-field optical and near-IR imagers are under development at partner institutions and at partner expense. These instruments are being designed to exploit the excellent image quality of the telescope. SOAR is participating in consortia for Rockwell 2x2K HgCdTe arrays, and MIT/Lincoln Labs 2x4K CCD's. Most detectors will be run with SDSU-2 array controllers, and custom LabVIEW software. CTIO is also responsible for CCD integration.
Michalsky, Marc P; Inge, Thomas H; Teich, Steven; Eneli, Ihuoma; Miller, Rosemary; Brandt, Mary L; Helmrath, Michael; Harmon, Carroll M; Zeller, Meg H; Jenkins, Todd M; Courcoulas, Anita; Buncher, Ralph C
2014-02-01
The number of adolescents undergoing weight loss surgery (WLS) has increased in response to the increasing prevalence of severe childhood obesity. Adolescents undergoing WLS require unique support, which may differ from adult programs. The aim of this study was to describe institutional and programmatic characteristics of centers participating in Teen Longitudinal Assessment of Bariatric Surgery (Teen-LABS), a prospective study investigating safety and efficacy of adolescent WLS. Data were obtained from the Teen-LABS database, and site survey completed by Teen-LABS investigators. The survey queried (1) institutional characteristics, (2) multidisciplinary team composition, (3) clinical program characteristics, and (4) clinical research infrastructure. All centers had extensive multidisciplinary involvement in the assessment, pre-operative education, and post-operative management of adolescents undergoing WLS. Eligibility criteria and pre-operative clinical and diagnostic evaluations were similar between programs. All programs have well-developed clinical research infrastructure, use adolescent-specific educational resources, and maintain specialty equipment, including high weight capacity diagnostic imaging equipment. The composition of clinical team and institutional resources is consistent with current clinical practice guidelines. These characteristics, coupled with dedicated research staff, have facilitated enrollment of 242 participants into Teen-LABS. © 2013 Published by Elsevier Inc.
Teaching computer interfacing with virtual instruments in an object-oriented language.
Gulotta, M
1995-01-01
LabVIEW is a graphic object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments that aesthetically look like real instruments but are controlled by sophisticated computer programs. There are several levels of data acquisition VIs that make it easy to control data flow, and many signal processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given. PMID:8580361
Exploring problem-based cooperative learning in undergraduate physics labs: student perspectives
NASA Astrophysics Data System (ADS)
Bergin, S. D.; Murphy, C.; Shuilleabhain, A. Ni
2018-03-01
This study examines the potential of problem-based cooperative learning (PBCL) in expanding undergraduate physics students' understanding of, and engagement with, the scientific process. Two groups of first-year physics students (n = 180) completed a questionnaire which compared their perceptions of learning science with their engagement in physics labs. One cohort completed a lab based on a PBCL approach, whilst the other completed the same experiment using a more traditional, manual-based lab. Utilising a participant research approach, the questionnaire was co-constructed by researchers and student advisers from each cohort in order to improve shared meaning between researchers and participants. Analysis of students' responses suggests that students in the PBCL cohort engaged more in higher-order problem-solving skills and evidenced a deeper understanding of the scientific process than students in the more traditional, manual-based cohort. However, the latter cohort's responses placed more emphasis on accuracy and measurement in lab science than the PBCL cohort's. The students in the PBCL cohort were also more positively engaged with their learning than their counterparts in the manual-led group.
SU-E-P-10: Imaging in the Cardiac Catheterization Lab - Technologies and Clinical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fetterly, K
2014-06-01
Purpose: Diagnosis and treatment of cardiovascular disease in the cardiac catheterization laboratory is often aided by a multitude of imaging technologies. The purpose of this work is to highlight the contributions to patient care offered by the various imaging systems used during cardiovascular interventional procedures. Methods: Imaging technologies used in the cardiac catheterization lab were characterized by their fundamental technology and by the clinical applications for which they are used. Whether the modality is external to the patient, intravascular, or intracavity was specified. Specific clinical procedures for which multiple modalities are routinely used will be highlighted. Results: X-ray imaging modalities include fluoroscopy/angiography and angiography CT. Ultrasound imaging is performed with external, trans-esophageal echocardiography (TEE), and intravascular (IVUS) transducers. Intravascular infrared optical coherence tomography (IVOCT) is used to assess vessel endothelium. Relatively large (>0.5 mm) anatomical structures are imaged with x-ray and ultrasound. IVUS and IVOCT provide high resolution images of vessel walls. Cardiac CT and MRI images are used to plan complex cardiovascular interventions. Advanced applications are used to spatially and temporally merge images from different technologies. Diagnosis and treatment of coronary artery disease frequently utilizes angiography and intra-vascular imaging, and treatment of complex structural heart conditions routinely includes use of multiple imaging modalities. Conclusion: There are several imaging modalities which are routinely used in the cardiac catheterization laboratory to diagnose and treat both coronary artery and structural heart disease. Multiple modalities are frequently used to enhance the quality and safety of procedures. The cardiac catheterization laboratory includes many opportunities for medical physicists to contribute substantially toward advancing patient care.
Soro-Yao, Amenan Anastasie; Brou, Kouakou; Amani, Georges; Thonart, Philippe; Djè, Koffi Marcelin
2014-12-01
Lactic acid bacteria (LAB) are the primary microorganisms used to ferment maize-, sorghum- or millet-based foods that are processed in West Africa. Fermentation contributes to desirable changes in taste, flavour, acidity, digestibility and texture in gruels (ogi, baca, dalaki), doughs (agidi, banku, komé) or steam-cooked granulated products (arraw, ciacry, dégué). Similar to other fermented cereal foods that are available in Africa, these products suffer from inconsistent quality. The use of LAB starter cultures during cereal dough fermentation is a subject of increasing interest in efforts to standardise this step and guarantee product uniformity. However, their use by small-scale processing units or small agro-food industrial enterprises is still limited. This review aims to illustrate and discuss major issues that influence the use of LAB starter cultures during the processing of fermented cereal foods in West Africa.
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Qianqian
2008-12-01
When a laser ranger is transported or used in field operations, the transmitting axis, receiving axis and aiming axis may no longer be parallel. Nonparallelism of the three optical axes degrades the range-measuring ability or prevents the laser ranger from being operated accurately, so testing and adjusting the three-axis parallelism during the production and maintenance of laser rangers is important to ensure reliable operation. After comparing several common measurement methods for three-axis parallelism, this paper proposes a new measurement method based on digital image processing. A large-aperture off-axis paraboloid reflector is used to acquire images of the laser spot and of a white-light cross line, which are then processed on the LabVIEW platform. The center of the white-light cross line is located by a matching algorithm implemented in a LabVIEW DLL, and the center of the laser spot is located by gray-level transformation, binarization and area filtering in turn. The software system configures the CCD, aligns the off-axis paraboloid reflector, measures the parallelism of the transmitting and aiming axes and controls the attenuation device. The hardware system uses the SAA7111A, a programmable video decoding chip, to perform A/D conversion; a FIFO (first-in, first-out) buffer; and a USB bus to transmit data to the PC. The three-axis parallelism is then obtained from the position offset between the two centers. A device based on this method is already in use, and its application demonstrates high precision, speed and automation.
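The spot-location steps the abstract describes (binarization, an area filter, then a center estimate) can be sketched outside LabVIEW. The Python snippet below is an illustrative reimplementation under stated assumptions, not the authors' code; the threshold and minimum-area values are arbitrary.

```python
import numpy as np

def laser_spot_center(img, threshold=200, min_area=50):
    """Estimate the laser-spot center from a grayscale image:
    binarize, reject tiny noise regions (area filter), then take
    the intensity-weighted centroid of the remaining pixels."""
    mask = img >= threshold                      # binarization
    if mask.sum() < min_area:                    # crude area filter
        return None
    ys, xs = np.nonzero(mask)
    w = img[ys, xs].astype(float)                # weight by brightness
    return (float((xs * w).sum() / w.sum()),     # x center
            float((ys * w).sum() / w.sum()))     # y center

# synthetic test image: an 11x11 bright blob centered at (x=30, y=20)
img = np.zeros((64, 64), dtype=np.uint8)
img[15:26, 25:36] = 255
cx, cy = laser_spot_center(img)                  # → (30.0, 20.0)
```

A matching step for the white-light cross line would be analogous; the axis offset reported by the instrument is then the difference between the two recovered centers.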
R, Jini; HC, Swapna; Rai, Amit Kumar; R, Vrinda; PM, Halami; NM, Sachindra; N, Bhaskar
2011-01-01
Proteolytic and/or lipolytic lactic acid bacteria (LAB) were isolated from visceral wastes of different fresh water fishes. The LAB count was found to be highest in the visceral wastes of mrigal (5.88 log cfu/g) and lowest in those of tilapia (4.22 log cfu/g). Morphological, biochemical and molecular characterization of the selected LAB isolates was carried out. Two isolates, FJ1 (E. faecalis NCIM 5367) and LP3 (P. acidilactici NCIM 5368), showed both proteolytic and lipolytic properties. All six native isolates selected for characterization showed antagonistic properties against several human pathogens. All the native isolates were sensitive to the antibiotics cephalothin and clindamycin, and resistant to cotrimoxazole and vancomycin. Considered individually, P. acidilactici FM37, P. acidilactici MW2 and E. faecalis FD3 were sensitive to erythromycin. The two strains FJ1 (E. faecalis NCIM 5367) and LP3 (P. acidilactici NCIM 5368), which had both proteolytic and lipolytic properties, have potential for application in the fermentative recovery of lipids and proteins from fish processing wastes. PMID:24031786
1995-07-08
Marshall researchers, in the Astrionics lab, study rotating unbalanced mass devices. These require less power, and are lighter than current devices used for scanning images, a slice at a time. They have a wide range of space-based applications.
The neuroscience of investing: fMRI of the reward system.
Peterson, Richard L
2005-11-15
Functional magnetic resonance imaging (fMRI) has proven a useful tool for observing neural BOLD signal changes during complex cognitive and emotional tasks. Yet the meaning and applicability of the fMRI data being gathered is still largely unknown. The brain's reward system underlies the fundamental neural processes of goal evaluation, preference formation, positive motivation, and choice behavior. fMRI technology allows researchers to dynamically visualize reward system processes. Experimenters can then correlate reward system BOLD activations with experimental behavior from carefully controlled experiments. In the SPAN lab at Stanford University, directed by Brian Knutson Ph.D., researchers have been using financial tasks during fMRI scanning to correlate emotion, behavior, and cognition with the reward system's fundamental neural activations. One goal of the SPAN lab is the development of predictive models of behavior. In this paper we extrapolate our fMRI results toward understanding and predicting individual behavior in the uncertain and high-risk environment of the financial markets. The financial market price anomalies of "value versus glamour" and "momentum" may be real-world examples of reward system activation biasing collective behavior. On the individual level, the investor's bias of overconfidence may similarly be related to reward system activation. We attempt to understand selected "irrational" investor behaviors and anomalous financial market price patterns through correlations with findings from fMRI research of the reward system.
A practical and objective approach to scar colour assessment.
Hallam, M J; McNaught, K; Thomas, A N; Nduka, C
2013-10-01
Scarring is a significant clinical problem following dermal injury. However, scars are not a single describable entity, and huge phenotypic variability is evident. Quantitative, reproducible inter-observer scar assessment is essential to monitor wound healing and the effect of scar treatments. Scar colour, reflecting the biological processes occurring within a scar, is integral to any assessment. The objective of this study was to analyse scar colour using the non-invasive Eykona® Wound Measurement System (the System) as compared against the Manchester Scar Scale (MSS). Three-dimensional images of 43 surgical scars were acquired post-operatively from 35 patients at 3-6 months, and the colour difference between the scar and surrounding skin was calculated (giving ΔLab values). The colourimetric results were then compared against subjective MSS gradings. Significant differences in ΔLab values were noted between MSS gradings of "slight mismatch" and "obvious mismatch" (p<0.025) and between "obvious mismatch" and "gross mismatch" (p<0.05). The System creates objective, reproducible data without the need for any specialist expertise, and compares favourably with the MSS. Greater scar numbers are required to further clinically validate this device; however, with its potential to calculate scar length, width, volume and other characteristics, it could provide a complete, objective, quantitative record of scarring throughout the wound-healing process. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
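The ΔLab values above are colour differences in CIELAB space; the study does not spell out its metric, but the classic CIE76 ΔE*ab (a Euclidean distance in L*a*b*) is the usual choice and serves as a plausible sketch. The Lab triplets below are invented for illustration.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference ΔE*ab: the Euclidean distance
    between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

scar = (62.0, 18.0, 12.0)        # hypothetical scar colour (L*, a*, b*)
skin = (65.0, 14.0, 15.0)        # hypothetical surrounding skin
dE = delta_e_cie76(scar, skin)   # ≈ 5.83
```

As a rough rule of thumb, a ΔE*ab around 2.3 corresponds to a just-noticeable colour difference, which gives the scar/skin mismatch gradings a quantitative anchor.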
Katz, Jonathan E
2017-01-01
Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, it is a burdensome process to reinstall and is fraught with "gotchas" that can derail the process: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have running legacy instrumentation, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.
Lithospheric thickness variations across the North Anatolian Fault Zone
NASA Astrophysics Data System (ADS)
Thompson, D. A.; Rost, S.; Cornwell, D. G.; Houseman, G.; Turkelli, N.; Teoman, U.; Altuncu Poyraz, S.; Kahraman, M.; Gulen, L.; Utkucu, M.; Williams, J. R.
2017-12-01
The North Anatolian Fault Zone (NAFZ) is a major continental strike-slip fault zone, similar in size and scale to the San Andreas system, that extends 1200 km across Turkey. Faults of this type may broaden significantly with depth or penetrate as narrow features all the way to the lithosphere-asthenosphere boundary (LAB), potentially providing pathways for fluids and magma to shallower levels. The Dense Array for North Anatolia (DANA) was a 73-station broadband seismic network arranged in a rectangular grid (7 km station spacing) deployed to image the deep structure of the fault zone. We present here new S-receiver function images that map out both the depth to the Moho and to negative velocity gradients commonly ascribed to the LAB, with preliminary results suggesting lithospheric thicknesses on the order of 80-100 km for the region.
Program Processes Thermocouple Readings
NASA Technical Reports Server (NTRS)
Quave, Christine A.; Nail, William, III
1995-01-01
Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.
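DART's exact conversion method is not spelled out here, but precise wide-range thermocouple conversion is conventionally done with NIST-style inverse polynomials, T = Σ cᵢ·vⁱ, with separate coefficient sets per thermocouple type and voltage sub-range. The sketch below evaluates such a polynomial with Horner's method; the coefficient list is illustrative only, not a real NIST set.

```python
def volts_to_temp(v_mV, coeffs):
    """Evaluate an inverse thermocouple polynomial T = sum(c_i * v**i),
    where coeffs[i] is the coefficient of v**i in millivolts.
    Horner's method keeps the evaluation fast and numerically stable."""
    t = 0.0
    for c in reversed(coeffs):
        t = t * v_mV + c
    return t

# hypothetical truncated coefficient set, for illustration only
coeffs = [0.0, 25.08355, 0.07860106]
temp_C = volts_to_temp(4.096, coeffs)
```

A production converter like DART would first select the coefficient set for the measured voltage's sub-range, then apply cold-junction compensation, before evaluating the polynomial.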
Severtson, Dolores J; Henriques, Jeffrey B
2009-11-01
Lay people have difficulty understanding the meaning of environmental health risk information. Visual images can use features that leverage visual perception capabilities and semiotic conventions to promote meaningful comprehension. Such evidence-based features were employed to develop two images of a color-coded visual scale to convey drinking water test results. The effect of these images and a typical alphanumeric (AN) lab report were explored in a repeated-measures randomized trial among 261 undergraduates. Outcome measures included risk beliefs, emotions, personal safety threshold, mitigation intentions, the durability of beliefs and intentions over time, and test result recall. The plain image conveyed the strongest risk message overall, likely due to increased visual salience. The more detailed graded image conveyed a stronger message than the AN format only for females. Images only prompted meaningful risk reduction intentions among participants with optimistically biased safety threshold beliefs. Fuzzy trace theory supported some findings, as follows. Images appeared to promote the consolidation of beliefs over time from an initial meaning of safety to an integrated meaning of safety and health risk; emotion potentially shaped this process. Although the AN report fostered more accurate recall, images were related to more appropriate beliefs and intentions at both time points. Findings hinted at the potential for images to prompt appropriate beliefs independent of accurate factual knowledge. Overall, results indicate that images facilitated meaningful comprehension of environmental health risk information and suggest foci for further research.
My Brother’s Keeper National Lab Week
2016-03-02
Students in the My Brother’s Keeper program line the railings of an observation deck overlooking the Granular Mechanics and Regolith Operations Lab at NASA’s Kennedy Space Center in Florida. The spaceport is one of six NASA centers that participated in My Brother’s Keeper National Lab Week. The event is a nationwide effort to bring youth from underrepresented communities into federal labs and centers for hands-on activities, tours and inspirational speakers. Sixty students from the nearby cities of Orlando and Sanford visited Kennedy, where they toured the Vehicle Assembly Building, the Space Station Processing Facility and the center’s innovative Swamp Works Labs. The students also had a chance to meet and ask questions of a panel of subject matter experts from across Kennedy.
My Brother’s Keeper National Lab Week
2016-03-02
Students in the My Brother’s Keeper program try out some of the machinery inside the Prototype Lab at NASA’s Kennedy Space Center. The Florida spaceport is one of six NASA centers that participated in My Brother’s Keeper National Lab Week. The event is a nationwide effort to bring youth from underrepresented communities into federal labs and centers for hands-on activities, tours and inspirational speakers. Sixty students from the nearby cities of Orlando and Sanford visited Kennedy, where they toured the Vehicle Assembly Building, the Space Station Processing Facility and the center’s innovative Swamp Works Labs. The students also had a chance to meet and ask questions of a panel of subject matter experts from across Kennedy.
My Brother’s Keeper National Lab Week
2016-03-02
Mike Lane demonstrates a 3D scanner inside the NASA Kennedy Space Center Prototype Lab for students in the My Brother’s Keeper program. The Florida spaceport is one of six NASA centers that participated in My Brother’s Keeper National Lab Week. The event is a nationwide effort to bring youth from underrepresented communities into federal labs and centers for hands-on activities, tours and inspirational speakers. Sixty students from the nearby cities of Orlando and Sanford visited Kennedy, where they toured the Vehicle Assembly Building, the Space Station Processing Facility and the center’s innovative Swamp Works Labs. The students also had a chance to meet and ask questions of a panel of subject matter experts from across Kennedy.
My Brother’s Keeper National Lab Week
2016-03-02
Jose Nunez of NASA Kennedy Space Center’s Exploration Research and Technology Programs talks to students in the My Brother’s Keeper program outside the Florida spaceport’s Swamp Works Lab. Kennedy is one of six NASA centers that participated in My Brother’s Keeper National Lab Week. The event is a nationwide effort to bring youth from underrepresented communities into federal labs and centers for hands-on activities, tours and inspirational speakers. Sixty students from the nearby cities of Orlando and Sanford visited Kennedy, where they toured the Vehicle Assembly Building, the Space Station Processing Facility and the center’s innovative Swamp Works Labs. The students also had a chance to meet and ask questions of a panel of subject matter experts from across Kennedy.
Nondestructive Evaluation of Foam Insulation for the External Tank Return to Flight
NASA Technical Reports Server (NTRS)
Walker, James L.; Richter, Joel D.
2006-01-01
Nondestructive evaluation methods have been developed to identify defects in the foam thermal protection system (TPS) of the Space Shuttle External Tank (ET). Terahertz imaging and backscatter radiography have been brought from prototype lab systems to production-hardened inspection tools in just a few years. These methods have been demonstrated to be capable of detecting void-type defects under many inches of foam which, if not repaired, could lead to detrimental foam loss. The evolution of these methods from lab tools to implementation on the ET will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Robert K.
Ernest Orlando Lawrence Berkeley National Laboratory (Berkeley Lab) is the oldest of America's national laboratories and has been a leader in science and engineering technology for more than 65 years, serving as a powerful resource to meet US national needs. As a multi-program Department of Energy laboratory, Berkeley Lab is dedicated to performing leading edge research in the biological, physical, materials, chemical, energy, environmental and computing sciences. Ernest Orlando Lawrence, the Lab's founder and the first of its nine Nobel prize winners, invented the cyclotron, which led to a Golden Age of particle physics and revolutionary discoveries about the nature of the universe. To this day, the Lab remains a world center for accelerator and detector innovation and design. The Lab is the birthplace of nuclear medicine and the cradle of invention for medical imaging. In the field of heart disease, Lab researchers were the first to isolate lipoproteins and the first to determine that the ratio of high density to low density lipoproteins is a strong indicator of heart disease risk. The demise of the dinosaurs--the revelation that they had been killed off by a massive comet or asteroid that had slammed into the Earth--was a theory developed here. The invention of the chemical laser, the unlocking of the secrets of photosynthesis--this is a short preview of the legacy of this Laboratory.
NASA Astrophysics Data System (ADS)
Gurrola, H.; Berdine, A.; Pulliam, J.
2017-12-01
Interference between Ps phases and reverberations (PPs and PSs phases and reverberations thereof) makes it difficult to use Ps receiver functions (RF) in regions with thick sediments. Crustal reverberations typically interfere with Ps phases from the lithosphere-asthenosphere boundary (LAB). We have developed a method to separate Ps phases from reverberations by deconvolution of all the data recorded at a seismic station, removing the phases from a single wavefront at each iteration of the deconvolution (wavefield iterative deconvolution, or WID). We applied WID to data collected in the Gulf Coast and Llano Front regions of Texas by the EarthScope Transportable Array and by a temporary deployment of 23 broadband seismometers (deployed by Texas Tech and Baylor Universities). The 23-station temporary deployment was 300 km long, crossing from Matagorda Island onto the Llano uplift. 3-D imaging using these data shows that the deepest part of the sedimentary basin may be inboard of the coastline. The Moho beneath the Gulf Coast plain does not appear in many of the images. This could be due to interference from reverberations from shallower layers, or it may indicate the lack of a strong velocity contrast at the Moho, perhaps due to serpentinization of the uppermost mantle. The Moho appears to be flat, at about 40 km, beneath most of the Llano uplift but may thicken to the south and thin beneath the Coastal plain. After application of WID, we were able to identify a negatively polarized Ps phase consistent with LAB depths identified in Sp RF images. The LAB appears to be 80-100 km deep beneath most of the coast but is 100 to 120 km deep beneath the Llano uplift. There are other negatively polarized phases between 160 and 200 km depth beneath the Gulf Coast and the Llano uplift. These deeper phases may indicate that, in this region, the LAB is transitional in nature rather than a discrete boundary.
Asmaro, Deyar; Liotti, Mario
2014-01-10
There has been a great deal of interest in understanding how the human brain processes appetitive food cues, and knowing how such cues elicit craving responses is particularly relevant when current eating behavior trends within Westernized societies are considered. One substance that holds a special place with regard to food preference is chocolate, and studies that used functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) have identified neural regions and electrical signatures that are elicited by chocolate cue presentations. This review will examine fMRI and ERP findings from studies that used high-caloric food and chocolate cues as stimuli, with a focus on responses observed in samples of healthy participants, as opposed to those with eating-related pathology. The utility of using high-caloric and chocolate stimuli as a means of understanding the human reward system will also be highlighted, as these findings may be particularly important for understanding processes related to pathological overeating and addiction to illicit substances. Finally, research from our own lab that focused on chocolate stimulus processing in chocolate cravers and non-cravers will be discussed, as the approach used may help bridge fMRI and ERP findings so that a more complete understanding of appetitive stimulus processing in the temporal and spatial domains may be established.
NDE of ceramics and ceramic composites
NASA Technical Reports Server (NTRS)
Vary, Alex; Klima, Stanley J.
1991-01-01
Although nondestructive evaluation (NDE) techniques for ceramics are fairly well developed, they are difficult to apply in many cases for high-probability detection of the minute flaws that can cause failure in monolithic ceramics. Conventional NDE techniques are available for monolithic and fiber-reinforced ceramic matrix composites, but the more exact quantitative techniques needed are still being investigated and developed. Needs range from the detection of flaws below the 100-micron level in monolithic ceramics to global imaging of fiber architecture and matrix densification anomalies in ceramic composites. NDE techniques that will ultimately be applicable to production and quality control of ceramic structures are still emerging from the lab. Needs differ depending on the processing stage, fabrication method, and nature of the finished product. NDE techniques are being developed in concert with materials processing research, where they can provide feedback to processing development and quality improvement. NDE techniques also serve as research tools for materials characterization and for understanding failure processes, e.g., during thermomechanical testing.
The Opaque Projector: The Inverse of the Camera Obscura
ERIC Educational Resources Information Center
Greenslade, Thomas B., Jr.
2011-01-01
Many years ago I was running the standard laboratory experiment on thin lens optics. The source was the usual self illuminated object mounted on an optical bench, and a converging lens formed a real image on a screen. One of the students sitting near one wall of the darkened lab was having some trouble with the idea of image formation. Her face…
ERIC Educational Resources Information Center
Murray, Michael
This report describes the use of the Internet as an image and information resource in an introductory television and radio production class (COMM 223: Principles of Radio and Television Production) at Western Illinois University. The report states that the class's two lab sections spent the first half of the semester preparing a television…
Multimodal imaging of the human knee down to the cellular level
NASA Astrophysics Data System (ADS)
Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.
2017-06-01
Computed tomography reaches the best spatial resolution for the three-dimensional visualization of human tissues among the available nondestructive clinical imaging techniques. Nowadays, sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aim of the present study is, firstly, the hierarchical investigation of a human knee post mortem using hard X-ray μCT and, secondly, multimodal imaging using absorption and phase contrast modes in order to investigate hard (bone) and soft (cartilage) tissues at the cellular level. After visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution, with a pixel length of 3 μm, was achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm it was possible to visualize individual chondrocytes within the cartilage.
Desai, Sunita; Hatfield, Laura A; Hicks, Andrew L; Sinaiko, Anna D; Chernew, Michael E; Cowling, David; Gautam, Santosh; Wu, Sze-Jung; Mehrotra, Ateev
2017-08-01
Insurers, employers, and states increasingly encourage price transparency so that patients can compare health care prices across providers. However, the evidence on whether price transparency tools encourage patients to receive lower-cost care and reduce overall spending remains limited and mixed. We examined the experience of a large insured population that was offered a price transparency tool, focusing on a set of "shoppable" services (lab tests, office visits, and advanced imaging services). Overall, offering the tool was not associated with lower shoppable services spending. Only 12 percent of employees who were offered the tool used it in the first fifteen months after it was introduced, and use of the tool was not associated with lower prices for lab tests or office visits. The average price paid for imaging services preceded by a price search was 14 percent lower than that paid for imaging services not preceded by a price search. However, only 1 percent of those who received advanced imaging conducted a price search. Simply offering a price transparency tool is not sufficient to meaningfully decrease health care prices or spending. Project HOPE—The People-to-People Health Foundation, Inc.
NASA Astrophysics Data System (ADS)
Pal, Robert; Beeby, Andrew
2014-09-01
An inverted microscope has been adapted to allow time-gated imaging and spectroscopy to be carried out on samples containing responsive lanthanide probes. The adaptation employs readily available components, including a pulsed light source, time-gated camera, spectrometer and photon counting detector, allowing imaging, emission spectroscopy and lifetime measurements. Each component is controlled by a suite of software written in LabVIEW and is powered via conventional USB ports.
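The lifetime measurements mentioned rely on time-gated detection; for a mono-exponential lanthanide decay, a lifetime can be estimated from as few as two gated intensity samples. The sketch below shows this textbook rapid-lifetime-determination relation, which is not necessarily the algorithm used by the instrument's LabVIEW software.

```python
import math

def lifetime_two_gate(I1, I2, t1, t2):
    """Two-gate lifetime estimate for a mono-exponential decay.
    From I(t) = I0 * exp(-t / tau) it follows that
    tau = (t2 - t1) / ln(I1 / I2)."""
    return (t2 - t1) / math.log(I1 / I2)

# synthetic decay with tau = 0.5 ms, sampled at gate delays 0.1 and 0.6 ms
tau = 0.5
I1 = math.exp(-0.1 / tau)
I2 = math.exp(-0.6 / tau)
est = lifetime_two_gate(I1, I2, 0.1, 0.6)   # ≈ 0.5 ms
```

Gating the camera after a microsecond-scale delay suppresses nanosecond-lived autofluorescence, which is why long-lived lanthanide probes pair naturally with this detection scheme.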
Reyes, D R; Halter, M; Hwang, J
2015-07-01
The characterization of internal structures in a polymeric microfluidic device, especially in a final product, requires a different set of optical metrology tools than those traditionally used for microelectronic devices. We demonstrate that optical coherence tomography (OCT) imaging is a promising technique for characterizing the internal structures of poly(methyl methacrylate) devices, whose subsurface structures often cannot be imaged by conventional wide-field optical microscopy. The channels in the devices were imaged with OCT and analyzed with an in-house ImageJ macro to extract their structural details. The dimensional values obtained with OCT were compared with laser-scanning confocal microscopy images of channels filled with a fluorophore solution. Attempts were also made to measure the channel dimensions using confocal reflectance and interferometry microscopy, but artefacts present in the images precluded quantitative analysis. OCT provided the most accurate estimates of channel height, based on an analysis of optical micrographs obtained after destructively slicing the channel with a microtome. OCT may be a promising technique for the future of three-dimensional metrology of critical internal structures in lab-on-a-chip devices because scans can be performed rapidly and noninvasively prior to their use. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
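The channel-height measurement can be illustrated on a single OCT depth profile (A-scan): locate the reflections from the top and bottom channel walls, then convert the optical path between them to a geometric height via the medium's refractive index. This is a hypothetical sketch of that analysis, not the in-house ImageJ macro; the threshold and pixel size are assumptions.

```python
import numpy as np

def channel_height_um(a_scan, pixel_um, n_medium=1.0, threshold=0.5):
    """Estimate channel height from one OCT A-scan: take the first and
    last samples above a reflectance threshold as the top and bottom
    walls, and scale the sample spacing by the medium's refractive
    index to convert optical path length to geometric height."""
    idx = np.nonzero(a_scan >= threshold * a_scan.max())[0]
    if idx.size < 2:
        return None                      # fewer than two interfaces found
    return (idx[-1] - idx[0]) * pixel_um / n_medium

# synthetic A-scan: two wall reflections 40 samples apart, 2 µm/sample
scan = np.zeros(128)
scan[[30, 70]] = 1.0
h = channel_height_um(scan, pixel_um=2.0)   # → 80.0 µm
```

For a fluid-filled channel, n_medium would be the fluid's group refractive index rather than 1.0; using the wrong index is a classic source of bias in OCT dimensional metrology.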
Commerce Lab - An enabling facility and test bed for commercial flight opportunities
NASA Technical Reports Server (NTRS)
Robertson, Jack; Atkins, Harry L.; Williams, John R.
1986-01-01
Commerce Lab is conceived as an adjunct to the National Space Transportation System (NSTS) by providing a focal point for commercial missions which could utilize existing NSTS carrier and resource capabilities for on-orbit experimentation in the microgravity sciences. In this context, the Commerce Lab provides an enabling facility and test bed for commercial flight opportunities. Commerce Lab program activities to date have focused on mission planning for private sector involvement in the space program to facilitate the commercial exploitation of the microgravity environment for materials processing research and development. It is expected that Commerce Lab will provide a logical transition between currently planned NSTS missions and future microgravity science and commercial R&D missions centered around the Space Station. The present study identifies candidate Commerce Lab flight experiments and their development status and projects a mission traffic model that can be used in commercial mission planning.