Survey of currently available high-resolution raster graphics systems
NASA Technical Reports Server (NTRS)
Jones, Denise R.
1987-01-01
Presented are data obtained on high-resolution raster graphics engines currently available on the market. The data were obtained through survey responses received from various vendors and also from product literature. The questionnaire developed for this survey was basically a list of characteristics desired in a high performance color raster graphics system which could perform real-time aircraft simulations. Several vendors responded to the survey, with most reporting on their most advanced high-performance, high-resolution raster graphics engine.
A High Resolution Graphic Input System for Interactive Graphic Display Terminals. Appendix B.
ERIC Educational Resources Information Center
Van Arsdall, Paul Jon
The search for a satisfactory computer graphics input system led to this version of an analog sheet encoder which is transparent and requires no special probes. The goal of the research was to provide high resolution touch input capabilities for an experimental minicomputer based intelligent terminal system. The technique explored is compatible…
ERIC Educational Resources Information Center
Fletcher, Richard K., Jr.
This description of procedures for dumping high and low resolution graphics using the Apple IIe microcomputer system focuses on two special hardware configurations that are commonly used in schools--the Apple Dot Matrix Printer with the Apple Parallel Interface Card, and the Imagewriter Printer with the Apple Super Serial Interface Card. Special…
Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.
2016-07-05
A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.
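The multi-instance learning idea described above can be sketched in a few lines. This is a minimal illustration of one common MIL decision rule (max-pooling over instance scores), not the patented algorithm; the features and weights are hypothetical.

```python
import numpy as np

# Minimal sketch of multi-instance learning over image patches
# (hypothetical features and weights; not the patented algorithm).
# Each "bag" is a candidate settlement region made of patch instances;
# the bag is flagged as a settlement if its best patch scores positive.

def score_instances(patches, w, b):
    """Linear instance scores from per-patch spectral-band features."""
    return patches @ w + b

def classify_bag(patches, w, b):
    """Max-pooling MIL rule: bag label = sign of the best instance score."""
    return bool(np.max(score_instances(patches, w, b)) > 0)

# Toy example: 3 patches x 4 spectral-band features per patch.
w = np.array([1.0, -0.5, 0.25, 0.0])   # assumed weights
b = -1.0
bag = np.array([[0.2, 0.1, 0.0, 0.3],
                [2.0, 0.5, 1.0, 0.1],   # strong "built-up" patch
                [0.1, 0.9, 0.2, 0.4]])

print(classify_bag(bag, w, b))
```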
Computer-Graphics Emulation of Chemical Instrumentation: Absorption Spectrophotometers.
ERIC Educational Resources Information Center
Gilbert, D. D.; And Others
1982-01-01
Describes interactive, computer-graphics program emulating behavior of high resolution, ultraviolet-visible analog recording spectrophotometer. Graphics terminal behaves as recording absorption spectrophotometer. Objective of the emulation is study of optimization of the instrument to yield accurate absorption spectra, including…
NASA Technical Reports Server (NTRS)
1987-01-01
Genigraphics Corporation's Masterpiece 8770 FilmRecorder is an advanced high resolution system designed to improve and expand a company's in-house graphics production. The GRAFTIME software package was designed to allow office personnel with minimal training to produce professional-level graphics for business communications and presentations. The products are no longer being manufactured.
3D Graphics For Interactive Surgical Simulation And Implant Design
NASA Astrophysics Data System (ADS)
Dev, P.; Fellingham, L. L.; Vassiliadis, A.; Woolson, S. T.; White, D. N.; Young, S. L.
1984-10-01
The combination of user-friendly, highly interactive software, 3D graphics, and the high-resolution detailed views of anatomy afforded by X-ray computer tomography and magnetic resonance imaging can provide surgeons with the ability to plan and practice complex surgeries. In addition to providing a realistic and manipulable 3D graphics display, this system can drive a milling machine in order to produce physical models of the anatomy or prosthetic devices and implants which have been designed using its interactive graphics editing facilities.
Dumping Low and High Resolution Graphics on the Apple IIe Microcomputer System.
ERIC Educational Resources Information Center
Fletcher, Richard K., Jr.; Ruckman, Frank, Jr.
This paper discusses and outlines procedures for obtaining a hard copy of the graphic output of a microcomputer or "dumping a graphic" using the Apple Dot Matrix Printer with the Apple Parallel Interface Card, and the Imagewriter Printer with the Apple Super Serial Interface Card. Hardware configurations and instructions for high…
NASA Astrophysics Data System (ADS)
Gao, Jerry Z.; Zhu, Eugene; Shim, Simon
2003-01-01
With the increasing application of the Web in e-commerce, advertising, and publication, new technologies are needed to overcome the limitations of current Web graphics. SVG (Scalable Vector Graphics) is a new solution to the problems of existing Web graphics formats. It provides precise, high-resolution Web graphics using plain-text format commands, and it sets a new standard for Web graphics that allows complicated graphics to be presented with rich text fonts and colors, high printing quality, and dynamic layout capabilities. This paper provides a tutorial overview of SVG technology and its essential features, capabilities, and advantages, and reports a comparative study of SVG and other Web graphics technologies.
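The "plain text format commands" mentioned above look like the following minimal SVG file, shown here purely as an illustration of the format:

```xml
<!-- A minimal SVG file: resolution-independent shapes and text
     expressed as plain-text commands (illustrative example). -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <rect x="10" y="10" width="80" height="40" fill="steelblue"/>
  <circle cx="150" cy="50" r="30" fill="none" stroke="black"/>
  <text x="10" y="90" font-size="14">Scales without pixelation</text>
</svg>
```

Because the shapes are described geometrically rather than as pixels, the same file renders sharply at any zoom level or print resolution.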
Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures
NASA Astrophysics Data System (ADS)
Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.
2014-03-01
We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans which consisted of larger-sized triangular elements resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced and so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is calculated for each vertex point once instead of the number of times that a given vertex appears in multiple triangles. By reformatting the graphic file, we were able to subdivide the triangular elements by a factor of 64 times with an increase in the file size of only 1.3 times. This allows a much greater number of smaller triangular elements and improves resolution of the patient graphic without compromising the real-time performance of the DTS and also gives a smoother graphic display for better visualization of the dose distribution.
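The shared-vertex bookkeeping described above can be sketched as follows. The dose function here is a stand-in, not the DTS code; the point is that when triangles index into a single vertex list, a per-vertex calculation runs once per unique vertex rather than once per triangle corner.

```python
# Sketch of the shared-vertex idea: triangles index into a single
# vertex list, so a per-vertex calculation runs once per unique vertex
# instead of once per triangle corner. (Hypothetical dose function.)

def compute_dose(vertex):
    # Stand-in for the real exposure/geometry-based dose calculation.
    x, y, z = vertex
    return 0.1 * (x + y + z)

vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]  # listed once each
triangles = [(0, 1, 2), (1, 3, 2)]  # two triangles sharing an edge

# One dose calculation per unique vertex (4 calls)...
dose = [compute_dose(v) for v in vertices]

# ...instead of one per triangle corner (6 calls in the naive layout).
naive_calls = sum(len(t) for t in triangles)
print(len(dose), naive_calls)
```

On a large mesh the saving grows with the number of triangles sharing each vertex, which is what makes the 64-fold subdivision affordable in real time.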
Robot graphic simulation testbed
NASA Technical Reports Server (NTRS)
Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.
1991-01-01
The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.
Astronomy Simulation with Computer Graphics.
ERIC Educational Resources Information Center
Thomas, William E.
1982-01-01
"Planetary Motion Simulations" is a system of programs designed for students to observe motions of a superior planet (one whose orbit lies outside the orbit of the earth). Programs run on the Apple II microcomputer and employ high-resolution graphics to present the motions of Saturn. (Author/JN)
Intelligent Visual Input: A Graphical Method for Rapid Entry of Patient-Specific Data
Bergeron, Bryan P.; Greenes, Robert A.
1987-01-01
Intelligent Visual Input (IVI) provides a rapid, graphical method of data entry for both expert system interaction and medical record keeping purposes. Key components of IVI include: a high-resolution graphic display; an interface supportive of rapid selection, i.e., one utilizing a mouse or light pen; algorithm simplification modules; and intelligent graphic algorithm expansion modules. A prototype IVI system, designed to facilitate entry of physical exam findings, is used to illustrate the potential advantages of this approach.
Realtime multi-plot graphics system
NASA Technical Reports Server (NTRS)
Shipkowski, Michael S.
1990-01-01
The increased complexity of test operations and customer requirements at Langley Research Center's National Transonic Facility (NTF) surpassed the capabilities of the initial realtime graphics system. The analysis of existing hardware and software and the enhancements made to develop a new realtime graphics system are described. The result of this effort is a cost-effective system, based on hardware already in place, that supports high-speed, high-resolution generation and display of multiple realtime plots. The enhanced graphics system (EGS) meets the current and foreseeable future realtime graphics requirements of the NTF. While this system was developed to support wind tunnel operations, its overall design and capability are applicable to other realtime data acquisition systems that have realtime plot requirements.
Computer graphics application in the engineering design integration system
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.
1975-01-01
The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct-coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 BAUD), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.
Mouse Driven Window Graphics for Network Teaching.
ERIC Educational Resources Information Center
Makinson, G. J.; And Others
Computer enhanced teaching of computational mathematics on a network system driving graphics terminals is being redeveloped for a mouse-driven, high resolution, windowed environment of a UNIX work station. Preservation of the features of networked access by heterogeneous terminals is provided by the use of the X Window environment. A demonstrator…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanescu, C.
1990-08-01
Complex software for shower reconstruction in the DELPHI barrel electromagnetic calorimeter, which deals, for each event, with great amounts of information due to the high spatial resolution of this detector, needs powerful verification tools. An interactive graphics program, running on the high-performance graphics display system Whizzard 7555 from Megatek, was developed to display the logical steps in the reconstruction of showers and their axes. The program allows both operations on the image in real time (rotation, translation and zoom) and the use of non-geometrical criteria to modify it (such as energy thresholds) for the representation of the elements that compose the showers (or of the associated lego plots). For this purpose, graphics objects associated with user parameters were defined. Instancing and modelling features of the native graphics library were extensively used.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
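Among the capabilities listed above, recursive temporal filtering is commonly implemented as a first-order IIR blend of each new frame with the running filtered frame. The following is a sketch of that general technique, not the HSMAF software; the weight alpha is an assumed illustrative parameter.

```python
import numpy as np

# Sketch of recursive temporal filtering for noise reduction in
# fluoroscopy: a first-order IIR blend of each new frame with the
# running filtered frame. The weight alpha is an assumed parameter.

def temporal_filter(frames, alpha=0.25):
    """Return the filtered sequence: out = alpha*frame + (1-alpha)*prev."""
    out = []
    prev = frames[0].astype(float)
    for frame in frames:
        prev = alpha * frame + (1 - alpha) * prev
        out.append(prev)
    return out

# Noisy constant-intensity frames converge toward the true level
# with reduced frame-to-frame noise.
rng = np.random.default_rng(0)
frames = [100 + rng.normal(0, 10, size=(4, 4)) for _ in range(50)]
filtered = temporal_filter(frames)
print(float(filtered[-1].mean()))
```

Smaller alpha gives stronger noise suppression at the cost of more motion lag, which is why such filters are typically adjustable during fluoroscopy.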
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
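The per-A-scan workload that GPUs parallelize in spectral-domain OCT is, at its core, an FFT of each spectral interferogram. The sketch below shows that standard reconstruction step with NumPy on the CPU; it illustrates the general technique, not the authors' exact pipeline, and a GPU version would run the same transform across many A-scans in parallel.

```python
import numpy as np

# Sketch of the core OCT reconstruction step that GPUs parallelize:
# an FFT of each spectral interferogram yields a depth profile (A-scan).
# Standard spectral-domain OCT technique, not the authors' exact pipeline.

def reconstruct_ascan(spectrum):
    """Depth-profile magnitude from a (DC-removed) spectral fringe."""
    fringe = spectrum - spectrum.mean()          # remove the DC term
    return np.abs(np.fft.fft(fringe))[: len(fringe) // 2]

# A reflector at depth bin d produces a cosine fringe across k-samples.
n, d = 1024, 100
k = np.arange(n)
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * d * k / n)
ascan = reconstruct_ascan(spectrum)
print(int(np.argmax(ascan)))  # peak lands at the reflector's depth bin
```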
Li, Xiangrui; Lu, Zhong-Lin
2012-02-29
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer.
The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
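The gain in luminance resolution from weighting two 8-bit channels can be seen with a simple count. The attenuation factor below is an assumed illustrative value, not the VideoSwitcher's actual circuit constant.

```python
# Sketch of the two-channel weighting idea behind the VideoSwitcher:
# combining an 8-bit "coarse" channel with an attenuated 8-bit "fine"
# channel yields far more distinct luminance steps than 8 bits alone.
# The attenuation factor w is an assumed illustrative value.

w = 128.0  # fine channel attenuated by this factor (assumption)

def combined_level(coarse, fine):
    """Continuous luminance level from two 8-bit DAC values."""
    return coarse + fine / w

# Distinct achievable levels with one channel vs. two:
one_channel = len({c for c in range(256)})
two_channel = len({round(combined_level(c, f), 6)
                   for c in range(256) for f in range(256)})
print(one_channel, two_channel)
```

With this weighting the combined signal steps in increments of 1/128 of a coarse gray level, which is the kind of fine luminance control that contrast-threshold experiments require.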
Chemical Engineering and Instructional Computing: Are They in Step? (Part 2).
ERIC Educational Resources Information Center
Seider, Warren D.
1988-01-01
Describes the use of "CACHE IBM PC Lessons for Courses Other than Design and Control" as open-ended design oriented problems. Presents graphics from some of the software and discusses high-resolution graphics workstations. Concludes that computing tools are in line with design and control practice in chemical engineering. (MVL)
Liya Thomas; R. Edward Thomas
2011-01-01
We have developed an automated defect detection system and a state-of-the-art Graphic User Interface (GUI) for hardwood logs. The algorithm identifies defects at least 0.5 inch high and at least 3 inches in diameter on barked hardwood log and stem surfaces. To summarize defect features and to build a knowledge base, hundreds of defects were measured, photographed, and...
Developments in the CCP4 molecular-graphics project.
Potterton, Liz; McNicholas, Stuart; Krissinel, Eugene; Gruber, Jan; Cowtan, Kevin; Emsley, Paul; Murshudov, Garib N; Cohen, Serge; Perrakis, Anastassis; Noble, Martin
2004-12-01
Progress towards structure determination that is both high-throughput and high-value is dependent on the development of integrated and automatic tools for electron-density map interpretation and for the analysis of the resulting atomic models. Advances in map-interpretation algorithms are extending the resolution regime in which fully automatic tools can work reliably, but at present human intervention is required to interpret poor regions of macromolecular electron density, particularly where crystallographic data are only available to modest resolution [for example, I/sigma(I) < 2.0 for minimum resolution 2.5 A]. In such cases, a set of manual and semi-manual model-building molecular-graphics tools is needed. At the same time, converting the knowledge encapsulated in a molecular structure into understanding is dependent upon visualization tools, which must be able to communicate that understanding to others by means of both static and dynamic representations. CCP4mg is a program designed to meet these needs in a way that is closely integrated with the ongoing development of CCP4 as a program suite suitable for both low- and high-intervention computational structural biology. As well as providing a carefully designed user interface to advanced algorithms of model building and analysis, CCP4mg is intended to present a graphical toolkit to developers of novel algorithms in these fields.
Procurement specification color graphic camera system
NASA Technical Reports Server (NTRS)
Prow, G. E.
1980-01-01
The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster-scan computer color terminal into permanent, high-resolution photographic prints and transparencies. Images usually displayed will be remotely sensed LANDSAT imagery scenes.
NASA Technical Reports Server (NTRS)
Case, Jonathan; Spratt, Scott; Sharp, David
2006-01-01
The Applied Meteorology Unit (AMU) located at the Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) implemented an operational configuration of the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS), as well as the ARPS numerical weather prediction (NWP) model. Operational, high-resolution ADAS analyses have been produced from this configuration at the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) over the past several years. Since that time, ADAS fields have become an integral part of forecast operations at both NWS MLB and SMG. To continue providing additional utility, the AMU has been tasked to implement visualization products to assess the potential for supercell thunderstorms and significant tornadoes, and to improve assessments of short-term cloud-to-ground (CG) lightning potential. This paper and presentation focus on the visualization products developed by the AMU for the operational high-resolution ADAS and ARPS at the NWS MLB and SMG. The two severe weather threat graphics implemented within ADAS/ARPS are the Supercell Composite Parameter (SCP) and Significant Tornado Parameter (STP). The SCP was designed to identify areas with supercell thunderstorm potential through a combination of several instability and shear parameters. The STP was designed to identify areas that favor supercells producing significant tornadoes (F2 or greater intensity) versus non-tornadic supercells. Both indices were developed by the NOAA/NWS Storm Prediction Center (SPC) and were normalized by key threshold values based on previous studies. The indices apply only to discrete storms, not other convective modes. In a post-analysis mode, the AMU calculated SCP and STP for graphical output using an ADAS configuration similar to the operational set-ups at NWS MLB and SMG.
Graphical images from ADAS were generated every 15 minutes for 13 August 2004, the day that Hurricane Charley approached and made landfall on the Florida peninsula. Several tornadoes struck the interior of the Florida peninsula in advance of Hurricane Charley's landfall during the daylight hours of 13 August. Since SPC had previously examined this case using SCP and STP graphics generated from output of the Rapid Update Cycle (RUC) model, this day served as a good benchmark to compare and validate the high-resolution ADAS graphics against the smoother RUC analyses, which serve as background fields to the ADAS analyses. The ADAS-generated SCP and STP graphics have been integrated into the suite of products examined operationally by NWS MLB forecasters and are used to provide additional guidance for assessment of the near-storm environment during convective situations.
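The "normalized by key threshold values" construction of these indices can be sketched numerically. The formulation below follows the commonly cited SPC effective-layer version of SCP; it is stated here as an assumption for illustration, since the abstract itself does not give the formula.

```python
# Sketch of the threshold-normalized index idea behind SCP.
# Follows the commonly cited SPC effective-layer formulation
# (an assumption here, not taken from the abstract):
#   SCP = (MUCAPE/1000) * (ESRH/50) * shear_term
# where the effective bulk shear term is 0 below 10 m/s, ramps
# linearly to 1 at 20 m/s, and is capped at 1 above that.

def scp(mucape, esrh, ebwd):
    """Supercell Composite Parameter from CAPE (J/kg), storm-relative
    helicity (m^2/s^2), and effective bulk wind difference (m/s)."""
    if ebwd < 10.0:
        shear_term = 0.0
    elif ebwd > 20.0:
        shear_term = 1.0
    else:
        shear_term = ebwd / 20.0
    return (mucape / 1000.0) * (esrh / 50.0) * shear_term

# A strongly sheared, unstable environment scores well above 1,
# the usual supercell-potential threshold; weak shear zeroes it out.
print(scp(2000.0, 150.0, 25.0))
```

Normalizing each ingredient by a climatological threshold is what lets forecasters read a single gridded field: values near zero are unremarkable, values above 1 flag an environment where all ingredients are simultaneously supportive.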
Fine Scale Modeling and Forecasts of Upper Atmospheric Turbulence for Operational Use
2014-11-30
Weather Center Digital Data Service (ADDS) [http://www.aviationweather.gov/adds, http://weather.aero/] Graphical Turbulence Guidance product, GTG-2.5...analysis GTG - Graphical Turbulence Guidance HRMM - High Resolution Mesoscale/Microscale ICD - Interface Control Document IDE - Integrated Development...site (with GTG 2.5 data) http://www.aviationweather.gov/turbulence • ADDS Experimental site http://weather.aero/ • NCEP FNL data - http
Stork Color Proofing Technology
NASA Astrophysics Data System (ADS)
Ekman, C. Frederick
1989-04-01
For the past few years, Stork Colorproofing B.V. has been marketing an analog color proofing system in Europe based on electrophotographic technology it pioneered for the purpose of high resolution, high fidelity color imaging in the field of the Graphic Arts. Based in part on this technology, it will make available on a commercial basis a digital color proofing system in 1989. Proofs from both machines will provide an exact reference for the user and will look, feel, and behave in a reproduction sense like the printed press sheet.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information.
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
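The aggregate resolution of the wall described above follows directly from the panel grid. The orientation (4 columns by 3 rows) is assumed from the wall's 14' x 7' landscape dimensions; bezel gaps are ignored.

```python
# Aggregate pixel dimensions of the tiled wall described above:
# assumed 4 columns x 3 rows of 1920 x 1080 panels (landscape wall).
# Bezel gaps are ignored in this simple count.

cols, rows = 4, 3
panel_w, panel_h = 1920, 1080

wall_w = cols * panel_w    # total width in pixels
wall_h = rows * panel_h    # total height in pixels
total_pixels = wall_w * wall_h

print(wall_w, wall_h, total_pixels)  # 7680 3240 24883200
```

At roughly 24.9 megapixels, the wall carries about twelve times the information of a single HD display, which is the point of driving it as one continuous SAGE desktop.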
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information.
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
Technical Directions In High Resolution Non-Impact Printers
NASA Astrophysics Data System (ADS)
Dunn, S. Thomas; Dunn, Patrice M.
1987-04-01
There are several factors to consider when addressing the issue of non-impact printer resolution. One will find differences between the imaging resolution and the final output resolution, and most assuredly differences exist between the advertised and actual resolution of many of these systems. Beyond that, some of the technical factors that affect the resolution of a system include scan line density, overlap, spot size, energy profile, and symmetry of imaging. Generally speaking, the user of graphic arts equipment is best advised to view output to determine the degree of acceptable quality.
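Two of the factors listed above are linked by simple geometry: scan line density sets the line pitch, and the writing spot must be at least that large for adjacent lines to overlap. The sketch below is an illustrative assumption about that relationship, not a statement about any particular printer.

```python
# Illustrative relationship between two of the factors listed above:
# scan line density (lines per inch) sets the line pitch, and the
# writing spot must be at least as large as the pitch for adjacent
# scan lines to overlap (avoiding banding). Simple geometric
# assumption, not a statement about any particular printer.

def line_pitch_microns(lines_per_inch):
    """Center-to-center spacing of scan lines, in microns."""
    return 25400.0 / lines_per_inch

def overlap_fraction(spot_diameter_um, lines_per_inch):
    """Fraction by which the spot exceeds the pitch (0 = just touching)."""
    pitch = line_pitch_microns(lines_per_inch)
    return spot_diameter_um / pitch - 1.0

# At 1200 lpi the pitch is ~21.2 um; a 30 um spot overlaps by ~42%.
print(round(line_pitch_microns(1200), 1), round(overlap_fraction(30, 1200), 2))
```

This is one reason advertised addressability (lines per inch) can exceed delivered resolution: a spot much larger than the pitch smears adjacent lines together rather than resolving them.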
Bibliography of In-House and Contract Reports, Supplement 12.
1984-03-01
A134 952 Karow, Kenneth ADVANCE EDIT SYSTEM January 1983 Sonicraft, Inc. DAAK70-79-C-0180 Keywords: Automated Cartography, Digital Data Editing...Interactive Graphics. An advanced edit system with high resolution interactive graphic workstations and support software for editing digital cartographic...J.R. OF INERTIAL SURVEY DATA Wei, S.Y. December 1982 Litton Guidance and Control Systems DAAK-70-81-C-0082 Keywords: Collocation, Gravity vector
The Navy’s Application of Ocean Forecasting to Decision Support
2014-09-01
Prediction Center (OPC) website for graphics or the National Operational Model Archive and Distribution System ( NOMADS ) for data files. Regional...inputs: » GLOBE = Global Land One-km Base Elevation » WVS = World Vector Shoreline » DBDB2 = Digital Bathymetry Data Base 2 minute resolution » DBDBV... Digital Bathymetry Data Base variable resolution Oceanography | Vol. 27, No.3130 Very High-Resolution Coastal Circulation Models Nearshore
NASA Technical Reports Server (NTRS)
Panthaki, Malcolm J.
1987-01-01
Three general tasks in general-purpose, interactive color graphics postprocessing for three-dimensional computational mechanics were accomplished. First, the existing program (POSTPRO3D) was ported to a high-resolution device. In the course of this transfer, numerous enhancements were implemented in the program. The performance of the hardware was evaluated from the point of view of engineering postprocessing, and the characteristics of future hardware were discussed. Second, interactive graphical tools were implemented to facilitate qualitative mesh evaluation from a single analysis. The literature was surveyed and a bibliography compiled. Qualitative mesh sensors were examined, and the use of two-dimensional plots of unaveraged responses on the surface of three-dimensional continua was emphasized in an interactive color raster graphics environment. Finally, a postprocessing environment was designed for state-of-the-art workstation technology. Modularity, personalization of the environment, integration of the engineering design processes, and the development and use of high-level graphics tools are some of the features of the intended environment.
National Centers for Environmental Prediction
Data graphing methods, articles of manufacture, and computing devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak Chung; Mackey, Patrick S.; Cook, Kristin A.
Data graphing methods, articles of manufacture, and computing devices are described. In one aspect, a method includes accessing a data set; displaying a graphical representation including data of the data set which is arranged according to a first of different hierarchical levels, wherein the first hierarchical level represents the data at a first of a plurality of different resolutions which respectively correspond to respective ones of the hierarchical levels; selecting a portion of the graphical representation wherein the data of the portion is arranged according to the first hierarchical level at the first resolution; modifying the graphical representation by arranging the data of the portion according to a second of the hierarchical levels at a second of the resolutions; and, after the modifying, displaying the graphical representation wherein the data of the portion is arranged according to the second hierarchical level at the second resolution.
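The drill-down behavior described in the claim can be sketched as re-binning a selected span of the data at a finer aggregation level. The two-level scheme, bin sizes, and mean aggregation below are illustrative assumptions, not the patented method:

```python
# Illustrative two-level drill-down: a series is first displayed
# aggregated into coarse bins (first hierarchical level / resolution);
# selecting a portion re-bins just that span at a finer resolution
# (second hierarchical level).
def aggregate(data, bin_size):
    """Mean of each consecutive bin_size-sized chunk."""
    return [sum(data[i:i + bin_size]) / bin_size
            for i in range(0, len(data) - bin_size + 1, bin_size)]

data = list(range(32))          # stand-in data set
coarse = aggregate(data, 8)     # first hierarchical level
# user selects the span covered by coarse bins 1..2 -> raw indices 8..24
selected = data[8:24]
fine = aggregate(selected, 2)   # second, finer hierarchical level
print(coarse)
print(fine)
```

The selected span is simply redisplayed at the second resolution while the rest of the graph stays at the first, which is the essence of the claimed modification step.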
Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I
2015-01-01
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of current commodity graphics processors to accelerate the generation of high-resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.
Goodman, Thomas C.; Hardies, Stephen C.; Cortez, Carlos; Hillen, Wolfgang
1981-01-01
Computer programs are described that direct the collection, processing, and graphical display of numerical data obtained from high resolution thermal denaturation (1-3) and circular dichroism (4) studies. Besides these specific applications, the programs may also be useful, either directly or as programming models, in other types of spectrophotometric studies employing computers, programming languages, or instruments similar to those described here (see Materials and Methods). PMID:7335498
A High Performance VLSI Computer Architecture For Computer Graphics
NASA Astrophysics Data System (ADS)
Chin, Chi-Yuan; Lin, Wen-Tai
1988-10-01
A VLSI computer architecture consisting of multiple processors is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.
NASA Astrophysics Data System (ADS)
Schiefele, Jens; Bader, Joachim; Kastner, S.; Wiesemann, Thorsten; von Viebahn, Harro
2002-07-01
The next generation of cockpit display systems will display mass data, including terrain, obstacle, and airport databases. Display formats will be two-dimensional and eventually three-dimensional. A prerequisite for the introduction of these new functions is the availability of certified graphics hardware. The paper describes the functionality and required features of an aviation-certified 2D/3D graphics board. This graphics board should be based on low-level and high-level API calls very similar to OpenGL. All software and the API must be aviation certified. As example applications, a 2D airport navigation function and a 3D terrain visualization are presented. The airport navigation format is based on a highly precise airport database following EUROCAE ED-99/RTCA DO-272 specifications. Terrain resolution is based on EUROCAE ED-98/RTCA DO-276 requirements.
An Electronic Pressure Profile Display system for aeronautic test facilities
NASA Technical Reports Server (NTRS)
Woike, Mark R.
1990-01-01
The NASA Lewis Research Center has installed an Electronic Pressure Profile Display system. This system provides for the real-time display of pressure readings on high resolution graphics monitors. The Electronic Pressure Profile Display system will replace manometer banks currently used in aeronautic test facilities. The Electronic Pressure Profile Display system consists of an industrial type Digital Pressure Transmitter (DPT) unit which interfaces with a host computer. The host computer collects the pressure data from the DPT unit, converts it into engineering units, and displays the readings on a high resolution graphics monitor in bar graph format. Software was developed to accomplish the above tasks and also draw facility diagrams as background information on the displays. Data transfer between the host computer and DPT unit is done with serial communications. Up to 64 channels are displayed with a one second update time. This paper describes the system configuration, its features, and its advantages over existing systems.
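The conversion to engineering units that the host computer performs can be sketched as a linear two-point calibration per channel; the 12-bit count range and 0-50 psi span below are illustrative assumptions, not Lewis facility parameters:

```python
# Convert raw transducer counts to engineering units (psi) with a
# linear two-point calibration, as a host computer might do before
# drawing each channel's bar. The 0..4095 count range and 0..50 psi
# span are assumed for illustration only.
def counts_to_psi(counts, c_zero=0, c_full=4095, p_zero=0.0, p_full=50.0):
    return p_zero + (counts - c_zero) * (p_full - p_zero) / (c_full - c_zero)

channels = [0, 819, 2048, 4095]          # raw readings for 4 channels
psi = [round(counts_to_psi(c), 2) for c in channels]
print(psi)
```

Each of the up-to-64 channels would be converted this way on every one-second update before being rendered as a bar.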
A fast mass spring model solver for high-resolution elastic objects
NASA Astrophysics Data System (ADS)
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for surface geometry models, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, using the mean value coordinate method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, and it has great potential for applications in computer animation.
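The Cholesky-to-conjugate-gradient substitution the authors describe can be sketched with the textbook CG iteration for the symmetric positive-definite system solved each time step (a plain NumPy version on a toy matrix, not the authors' GPU-parallel kernel):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by plain CG."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # conjugate next direction
        rs = rs_new
    return x

# small SPD system standing in for the mass-spring system matrix
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

Unlike a Cholesky factorization, each CG iteration needs only matrix-vector products, which is what makes the method attractive for large, sparse systems and for GPU parallelization.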
enhancedGraphics: a Cytoscape app for enhanced node graphics
Morris, John H.; Kuchinsky, Allan; Ferrin, Thomas E.; Pico, Alexander R.
2014-01-01
enhancedGraphics ( http://apps.cytoscape.org/apps/enhancedGraphics) is a Cytoscape app that implements a series of enhanced charts and graphics that may be added to Cytoscape nodes. It enables users and other app developers to create pie, line, bar, and circle plots that are driven by columns in the Cytoscape Node Table. Charts are drawn using vector graphics to allow full-resolution scaling. PMID:25285206
A Large Scale, High Resolution Agent-Based Insurgency Model
2013-09-30
CUDA) is NVIDIA Corporation’s software development model for General Purpose Programming on Graphics Processing Units (GPGPU) ( NVIDIA Corporation ...Conference. Argonne National Laboratory, Argonne, IL, October, 2005. NVIDIA Corporation . NVIDIA CUDA Programming Guide 2.0 [Online]. NVIDIA Corporation
Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model
NASA Technical Reports Server (NTRS)
Putnam, Williama
2011-01-01
The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude, and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.
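The scaling figures quoted above are mutually consistent, which a quick check confirms (all numbers taken from the abstract):

```python
# Check: a 15-40x per-core speedup bridges the gap between the
# 6,000,000 conventional cores quoted for 1-km global modeling and
# the 200,000 GPU-enabled cores claimed to suffice.
cpu_cores = 6_000_000
gpu_cores = 200_000
required_speedup = cpu_cores / gpu_cores
print(required_speedup)                 # 30.0
print(15 <= required_speedup <= 40)     # True: within the demonstrated range
```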
Wachter, S. Blake; Johnson, Ken; Albert, Robert; Syroid, Noah; Drews, Frank; Westenskow, Dwayne
2006-01-01
Objective Authors developed a picture-graphics display for pulmonary function to present typical respiratory data used in perioperative and intensive care environments. The display utilizes color, shape and emergent alerting to highlight abnormal pulmonary physiology. The display serves as an adjunct to traditional operating room displays and monitors. Design To evaluate the prototype, nineteen clinician volunteers each managed four adverse respiratory events and one normal event using a high-resolution patient simulator which included the new displays (intervention subjects) and traditional displays (control subjects). Between-group comparisons included (i) time to diagnosis and treatment for each adverse respiratory event; (ii) the number of unnecessary treatments during the normal scenario; and (iii) self-reported workload estimates while managing study events. Measurements Two expert anesthesiologists reviewed video-taped transcriptions of the volunteers to determine time to treat and time to diagnosis. Time values were then compared between groups using a Mann-Whitney-U Test. Estimated workload for both groups was assessed using the NASA-TLX and compared between groups using an ANOVA. P-values < 0.05 were considered significant. Results Clinician volunteers detected and treated obstructed endotracheal tubes and intrinsic PEEP problems faster with graphical rather than conventional displays (p < 0.05). During the normal scenario simulation, 3 clinicians using the graphical display, and 5 clinicians using the conventional display gave unnecessary treatments. Clinician-volunteers reported significantly lower subjective workloads using the graphical display for the obstructed endotracheal tube scenario (p < 0.001) and the intrinsic PEEP scenario (p < 0.03). Conclusion Authors conclude that the graphical pulmonary display may serve as a useful adjunct to traditional displays in identifying adverse respiratory events. PMID:16929038
The PyRosetta Toolkit: a graphical user interface for the Rosetta software suite.
Adolf-Bryfogle, Jared; Dunbrack, Roland L
2013-01-01
The Rosetta Molecular Modeling suite is a command-line-only collection of applications that enable high-resolution modeling and design of proteins and other molecules. Although extremely useful, Rosetta can be difficult to learn for scientists with little computational or programming experience. To that end, we have created a Graphical User Interface (GUI) for Rosetta, called the PyRosetta Toolkit, for creating and running protocols in Rosetta for common molecular modeling and protein design tasks and for analyzing the results of Rosetta calculations. The program is highly extensible so that developers can add new protocols and analysis tools to the PyRosetta Toolkit GUI.
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient.
Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems.
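The binary identifiability-matrix idea can be illustrated with a toy reduction: each row records which parameters appear in one identifiability equation, and repeatedly solving out any parameter that appears alone in some row mimics the matrix-reduction step. This is a deliberately simplified stand-in for the general idea, not a reimplementation of the paper's algorithm:

```python
# Toy illustration of identifiability-matrix reduction: rows are
# equations, columns are parameters; a 1 means the parameter appears
# in that equation. A parameter appearing alone in some row is
# identifiable; substituting it out zeroes its column, which may
# expose further singleton rows. (Simplified stand-in only.)
def reduce_identifiability(matrix):
    n_params = len(matrix[0])
    rows = [row[:] for row in matrix]     # work on a copy
    identifiable = set()
    changed = True
    while changed:
        changed = False
        for row in rows:
            live = [j for j in range(n_params) if row[j]]
            if len(live) == 1 and live[0] not in identifiable:
                identifiable.add(live[0])
                for r in rows:            # substitute the solved parameter out
                    r[live[0]] = 0
                changed = True
    return sorted(identifiable)

# equations involving {p0}, {p0, p1}, {p1, p2, p3}
M = [[1, 0, 0, 0],
     [1, 1, 0, 0],
     [0, 1, 1, 1]]
print(reduce_identifiability(M))  # [0, 1]: p0, then p1, become identifiable
```

Here p2 and p3 only ever appear together, so neither is individually identifiable, mirroring the per-parameter resolution the authors emphasize.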
EnviroNET: An online environmental interactions resource
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1991-01-01
EnviroNET is a centralized depository for technical information on environmentally induced interactions likely to be encountered by spacecraft in both low-altitude and high-altitude orbits. It provides a user-friendly, menu-driven format on networks that are connected globally and is available 24 hours a day - every day. The service pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. This information contains text, tables and over one hundred high resolution figures and graphs based on empirical data. These graphics can be accessed while still in the chapters, making it easy to flip from text to graphics and back. Interactive graphics programs are also available on space debris, the neutral atmosphere, magnetic field, and ionosphere. EnviroNET can help designers meet tough environmental flight criteria before committing to flight hardware built for experiments, instrumentation, or payloads.
Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr
2005-09-01
We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.
Displaying Geographically-Based Domestic Statistics
NASA Technical Reports Server (NTRS)
Quann, J.; Dalton, J.; Banks, M.; Helfer, D.; Szczur, M.; Winkert, G.; Billingsley, J.; Borgstede, R.; Chen, J.; Chen, L.;
1982-01-01
Decision Information Display System (DIDS) is rapid-response information-retrieval and color-graphics display system. DIDS transforms tables of geographically-based domestic statistics (such as population or unemployment by county, energy usage by county, or air-quality figures) into high-resolution, color-coded maps on television display screen.
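The statistic-to-color transformation at the heart of such a display can be sketched as simple value binning; the breakpoints, palette, and county figures below are illustrative assumptions:

```python
# Map a county statistic to a color bin for a choropleth display.
# The breakpoints and color names are illustrative assumptions.
import bisect

BREAKS = [5.0, 10.0, 15.0]                 # e.g. unemployment-rate thresholds
COLORS = ["green", "yellow", "orange", "red"]

def color_for(value):
    """Return the color of the bin containing value."""
    return COLORS[bisect.bisect_right(BREAKS, value)]

counties = {"Adams": 3.2, "Baker": 7.9, "Clark": 12.4, "Dane": 18.0}
print({name: color_for(v) for name, v in counties.items()})
```

Each county polygon on the map would then be filled with its assigned color, turning a statistical table into the color-coded map the abstract describes.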
From Geocentrism to Allocentrism: Teaching the Phases of the Moon in a Digital Full-Dome Planetarium
ERIC Educational Resources Information Center
Chastenay, Pierre
2016-01-01
An increasing number of planetariums worldwide are turning digital, using ultra-fast computers, powerful graphic cards, and high-resolution video projectors to create highly realistic astronomical imagery in real time. This modern technology makes it so that the audience can observe astronomical phenomena from a geocentric as well as an…
Ripesi, P; Biferale, L; Schifano, S F; Tripiccione, R
2014-04-01
We study the turbulent evolution originating from a system subjected to a Rayleigh-Taylor instability with a double density, at high resolution, in a two-dimensional geometry, using a highly optimized thermal lattice-Boltzmann code for GPUs. The initial condition of our investigation, given by the superposition of three layers with three different densities, leads to the development of two Rayleigh-Taylor fronts that expand upward and downward and collide in the middle of the cell. Using high-resolution numerical data, we highlight the effects induced by the collision of the two turbulent fronts in the long-time asymptotic regime. We also provide details on the optimized lattice-Boltzmann code that we ran on a cluster of GPUs.
Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking
NASA Astrophysics Data System (ADS)
Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.
2016-02-01
High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.
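The focus-stacking step, merging volumes focused at different depths into one sharp dataset, can be sketched per pixel by keeping the sample from whichever acquisition is locally sharpest. The sum-of-squared-differences sharpness proxy and toy frames below are illustrative; the actual OCT pipeline registers and stitches full volumes:

```python
import numpy as np

def focus_stack(images, window=3):
    """Merge same-size images: at each pixel keep the value from the
    image with the highest local contrast (sum of squared differences
    to its window neighbors, a simple sharpness proxy)."""
    pad = window // 2
    sharp = []
    for img in images:
        padded = np.pad(img, pad, mode="edge")
        score = np.zeros_like(img, dtype=float)
        for dy in range(window):
            for dx in range(window):
                score += (padded[dy:dy + img.shape[0],
                                 dx:dx + img.shape[1]] - img) ** 2
        sharp.append(score)
    choice = np.argmax(sharp, axis=0)       # sharpest source per pixel
    stacked = np.choose(choice, images)
    return stacked, choice

# toy frames: each is sharp (high contrast) in a different half
a = np.array([[0., 9., 0., 9.], [9., 0., 9., 0.],
              [5., 5., 5., 5.], [5., 5., 5., 5.]])
b = np.array([[5., 5., 5., 5.], [5., 5., 5., 5.],
              [0., 9., 0., 9.], [9., 0., 9., 0.]])
stacked, choice = focus_stack([a, b])
print(choice)
```

In the toy case the top half is taken from the first frame and the bottom half from the second, which is the per-pixel analogue of stitching depth-focused volumes.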
Fast generation of computer-generated hologram by graphics processing unit
NASA Astrophysics Data System (ADS)
Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi
2009-02-01
A cylindrical hologram is well known to be viewable over 360 degrees. Such a hologram demands high pixel resolution; therefore, a Computer-Generated Cylindrical Hologram (CGCH) requires a huge amount of calculation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz. It took 480 hours to calculate a high-resolution CGCH (504,000 x 63,000 pixels, with an average of 27,000 object points). To improve the quality of the CGCH reconstructed image, the fringe pattern requires higher spatial frequency and resolution; therefore, to increase the calculation speed, we have to change the calculation method. In this paper, to reduce the calculation time of a CGCH (912,000 x 108,000 pixels), we employ the Graphics Processing Unit (GPU). It took 4,406 hours to calculate this high-resolution CGCH on a Xeon at 3.4 GHz. Since the GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor. In addition, the GPU gives maximum performance on two-dimensional and streaming data. Recently, GPUs have also been utilized for general-purpose computation (GPGPU). For example, NVIDIA's GeForce 7 series became a programmable processor with the Cg programming language, and the subsequent GeForce 8 series has CUDA, a software development kit made by NVIDIA. Theoretically, the calculation ability of the GPU is announced as 500 GFLOPS. From the experimental results, we achieved calculation 47 times faster than our previous work, which used the CPU. Therefore, the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
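The per-pixel workload behind those timings is essentially a sum of spherical-wave contributions over all object points; a minimal CPU sketch of that inner loop (flat hologram, with wavelength, pixel pitch, and object points chosen arbitrarily for illustration, not the authors' cylindrical look-up-table or GPU code):

```python
import numpy as np

# Point-source hologram fringe: each hologram pixel accumulates
# cos(k * r) over all object points, r being the pixel-to-point
# distance. Geometry, wavelength, and points are illustrative.
wavelength = 0.633e-6                  # m (assumed red laser)
k = 2 * np.pi / wavelength             # wavenumber
pitch = 10e-6                          # hologram pixel pitch, m (assumed)

n = 256                                # small hologram for illustration
ys, xs = np.mgrid[0:n, 0:n] * pitch
points = [(1.0e-3, 1.2e-3, 0.05),      # (x, y, z) object points, m
          (1.5e-3, 0.8e-3, 0.06)]

fringe = np.zeros((n, n))
for px, py, pz in points:
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
    fringe += np.cos(k * r)

print(fringe.shape)
```

Every pixel-point pair costs a square root and a cosine, so the work scales as (pixels x object points), which is why a 912,000 x 108,000 pixel hologram with tens of thousands of points takes thousands of CPU hours and maps so well onto GPU parallelism.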
Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.
2012-01-01
We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient’s skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures. PMID:24027616
Mori, Shinichiro; Inaniwa, Taku; Kumagai, Motoki; Kuwae, Tsunekazu; Matsuzaki, Yuka; Furukawa, Takuji; Shirai, Toshiyuki; Noda, Koji
2012-06-01
To increase the accuracy of carbon ion beam scanning therapy, we have developed a graphical user interface-based digitally-reconstructed radiograph (DRR) software system for use in routine clinical practice at our center. The DRR software is used in particular scenarios in the new treatment facility to achieve the same level of geometrical accuracy at treatment as at the imaging session. DRR calculation is implemented simply as the summation of CT image voxel values along the X-ray projection ray. Since we implemented graphics processing unit-based computation, the DRR images are calculated with a speed sufficient for the particular clinical practice requirements. Since high spatial resolution flat panel detector (FPD) images must be registered to the reference DRR images during the patient setup process in any scenario, the DRR images also need a spatial resolution close to that of the FPD images. To overcome the limitation imposed on DRR spatial resolution by the CT voxel size, we applied image processing to improve the calculated DRR spatial resolution. The DRR software introduced here enables patient positioning with sufficient accuracy for the implementation of carbon-ion beam scanning therapy at our center.
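The DRR model stated above, summation of CT voxel values along each projection ray, reduces in a parallel-beam simplification to a sum along one volume axis; a minimal sketch (parallel rays assumed for brevity, whereas the clinical system projects along diverging X-ray rays):

```python
import numpy as np

def drr_parallel(ct_volume, axis=0):
    """Digitally reconstructed radiograph as the sum of voxel values
    along one axis (parallel-beam simplification of ray summation)."""
    return ct_volume.sum(axis=axis)

# toy 4x3x3 "CT" volume: a dense 2x2 block embedded in air
vol = np.zeros((4, 3, 3))
vol[1:3, 0:2, 0:2] = 100.0
drr = drr_parallel(vol, axis=0)        # project along the first axis
print(drr)
```

The dense block projects to a bright 2x2 patch on an otherwise empty radiograph; a cone-beam version replaces the axis sum with per-ray sampling from source to detector pixel, which is the part the authors offload to the GPU.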
Reading Outside Micrometers. Courseware Evaluation for Vocational and Technical Education.
ERIC Educational Resources Information Center
Sommer, Sandra; And Others
This courseware evaluation rates the Reading Outside Micrometers program developed by EMC Publishing Company. (The program--not contained in this document--uses high resolution graphics to illustrate the micrometer's components, functions, and practical applications.) Part A describes the program in terms of subject area and equipment requirements…
Reading Vernier Calipers. Courseware Evaluation for Vocational and Technical Education.
ERIC Educational Resources Information Center
Goldstine, James; And Others
This courseware evaluation rates the Reading Vernier Calipers program developed by EMC Publishing Company. (The program--not contained in this document--uses high resolution graphics to illustrate the vernier caliper and describe its components, functions, and practical applications.) Part A describes the program in terms of subject area (technical…
- spac0118: Overhead view of a TIROS satellite showing the interior arrangement of satellite sensing packages, including TV cameras and infra-red sensors. In: "TIROS: A Story of Achievement," RCA, February 28. High resolution photo available. Publication of the U.S. Department of Commerce.
Exploratory visualization of astronomical data on ultra-high-resolution wall displays
NASA Astrophysics Data System (ADS)
Pietriga, Emmanuel; del Campo, Fernando; Ibsen, Amanda; Primet, Romain; Appert, Caroline; Chapuis, Olivier; Hempel, Maren; Muñoz, Roberto; Eyheramendy, Susana; Jordan, Andres; Dole, Hervé
2016-07-01
Ultra-high-resolution wall displays feature a very high pixel density over a large physical surface, which makes them well-suited to the collaborative, exploratory visualization of large datasets. We introduce FITS-OW, an application designed for such wall displays, that enables astronomers to navigate in large collections of FITS images, query astronomical databases, and display detailed, complementary data and documents about multiple sources simultaneously. We describe how astronomers interact with their data using both the wall's touch-sensitive surface and handheld devices. We also report on the technical challenges we addressed in terms of distributed graphics rendering and data sharing over the computer clusters that drive wall displays.
NASA Technical Reports Server (NTRS)
King, James D.
2004-01-01
Using high resolution transmission electron images of carbon nanotubes and carbon particles, we are able to use an image analysis program to determine several carbon fringe properties, including length, separation, curvature, and orientation. Results are shown in the form of histograms for each of those quantities. The combination of those measurements can give a better indication of the graphitic structure within nanotubes and particles of carbon and can distinguish carbons based upon fringe properties. Carbons with longer, straighter, and more closely spaced fringes are considered graphitic, while amorphous carbons contain shorter, less structured fringes.
NASA Astrophysics Data System (ADS)
Eichenlaub, Jesse B.
1995-03-01
Mounting a lenticular lens in front of a flat panel display is a well known, inexpensive, and easy way to create an autostereoscopic system. Such a lens produces half-resolution 3D images because half the pixels on the LCD are seen by the left eye and half by the right eye. This may be acceptable for graphics, but it makes full resolution text, as displayed by common software, nearly unreadable. Very fine alignment tolerances normally preclude the possibility of removing and replacing the lens in order to switch between 2D and 3D applications. Lenticular lens based displays are therefore limited to use as dedicated 3D devices. DTI has devised a technique which removes this limitation, allowing switching between full resolution 2D and half resolution 3D imaging modes. A second element, in the form of a concave lenticular lens array whose shape is exactly the negative of the first lens, is mounted on a hinge so that it can be swung down over the first lens array. When so positioned, the two lenses cancel optically, allowing the user to see full resolution 2D for text or numerical applications. The two lenses, having complementary shapes, naturally tend to nestle together and snap into perfect alignment when pressed together--thus obviating any need for user operated alignment mechanisms. This system represents an ideal solution for laptop and notebook computer applications. It was devised to meet the stringent requirements of a laptop computer manufacturer, including very compact size, very low cost, little impact on existing manufacturing or assembly procedures, and compatibility with existing full resolution 2D text-oriented software as well as 3D graphics. Similar requirements apply to hand-held electronic calculators, several models of which now use LCDs for the display of graphics.
Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.
Dematté, Lorenzo
2012-01-01
Space is a very important aspect of the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture a system's behavior reliably using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular, and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output.
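The diffusion step that dominates such simulations is, per molecule, a Gaussian random displacement with standard deviation sqrt(2*D*dt) per coordinate. A vectorized NumPy sketch of this data-parallel update follows; the GPU implementation runs one thread per molecule, and the parameters here are illustrative, not taken from Smoldyn:

```python
import numpy as np

def brownian_step(positions, D, dt, rng):
    """One Brownian-dynamics diffusion step: each molecule moves by a
    Gaussian displacement with std sqrt(2*D*dt) in each coordinate.
    The same update applied independently to every molecule is what the
    GPU computes in parallel, one thread per molecule.
    """
    sigma = np.sqrt(2.0 * D * dt)
    return positions + rng.normal(0.0, sigma, size=positions.shape)

rng = np.random.default_rng(0)
pos = np.zeros((10000, 3))                    # 10,000 molecules at the origin
for _ in range(100):                          # simulate to t = 1.0
    pos = brownian_step(pos, D=1.0, dt=0.01, rng=rng)

# in 3-D, mean squared displacement should approach 6*D*t = 6.0
msd = (pos ** 2).sum(axis=1).mean()
```

Because every molecule's update is independent, the step maps directly onto a GPU kernel; the serial bottlenecks are the bimolecular reactions, which require neighbor searches.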
Augmenting reality in Direct View Optical (DVO) overlay applications
NASA Astrophysics Data System (ADS)
Hogan, Tim; Edwards, Tim
2014-06-01
The integration of overlay displays into rifle scopes can transform precision Direct View Optical (DVO) sights into intelligent interactive fire-control systems. Overlay displays can provide ballistic solutions within the sight for dramatically improved targeting, can fuse sensor video to extend targeting into nighttime or dirty battlefield conditions, and can overlay complex situational awareness information over the real-world scene. High brightness overlay solutions for dismounted soldier applications have previously been hindered by excessive power consumption, weight, and bulk, making them unsuitable for man-portable, battery-powered applications. This paper describes the advancements and capabilities of a high brightness, ultra-low power text and graphics overlay display module developed specifically for integration into DVO weapon sight applications. Central to the overlay display module was the development of a new general purpose low power graphics controller and dual-path display driver electronics. The graphics controller interface is a simple 2-wire RS-232 serial interface compatible with existing weapon systems such as the IBEAM ballistic computer and the RULR and STORM laser rangefinders (LRF). The module features include multiple graphics layers, user configurable fonts and icons, and parameterized vector rendering, making it suitable for general purpose DVO overlay applications. The module is configured for graphics-only operation for daytime use and overlays graphics with video for nighttime applications. The miniature footprint and ultra-low power consumption of the module enable a new generation of intelligent DVO systems; the module has been implemented for resolutions from VGA to SXGA, in monochrome and color, and in graphics applications with and without sensor video.
ERIC Educational Resources Information Center
Dwyer, Daniel J.
Designed to assess the effect of alternative display (CRT) screen sizes and resolution levels on user ability to identify and locate printed circuit (PC) board points, this study is the first in a protracted research program on the legibility of graphics in computer-based job aids. Air Force maintenance training pipeline students (35 male and 1…
Chem Lab Simulation #3 and #4.
ERIC Educational Resources Information Center
Pipeline, 1983
1983-01-01
Two copy-protected chemistry simulations (for Apple II) are described. The first demonstrates Hess' law of heat reaction. The second illustrates how heat of vaporization can be used to determine an unknown liquid and shows how to find thermodynamic parameters in an equilibrium reaction. Both are self-instructing and use high-resolution graphics.…
Groucho: An Energy Conservation Computer Game.
ERIC Educational Resources Information Center
Canipe, Stephen L.
Groucho is a computer game designed to teach energy conservation concepts to upper elementary and junior high school students. The game is written in Applesoft Basic for the Apple II microcomputer. A complete listing of the program is provided. The game utilizes low resolution graphics to reward students for correct answers to 10 questions…
High resolution ultrasonic spectroscopy system for nondestructive evaluation
NASA Technical Reports Server (NTRS)
Chen, C. H.
1991-01-01
With increased demand for high resolution ultrasonic evaluation, computer based systems or work stations become essential. The ultrasonic spectroscopy method of nondestructive evaluation (NDE) was used to develop a high resolution ultrasonic inspection system supported by modern signal processing, pattern recognition, and neural network technologies. The basic system which was completed consists of a 386/20 MHz PC (IBM AT compatible), a pulser/receiver, a digital oscilloscope with serial and parallel communications to the computer, an immersion tank with motor control of X-Y axis movement, and the supporting software package, IUNDE, for interactive ultrasonic evaluation. Although the hardware components are commercially available, the software development is entirely original. By integrating signal processing, pattern recognition, maximum entropy spectral analysis, and artificial neural network functions into the system, many NDE tasks can be performed. The high resolution graphics capability provides visualization of complex NDE problems. The phase 3 efforts involve intensive marketing of the software package and collaborative work with industrial sectors.
Tourtellotte, W G; Lawrence, D T; Getting, P A; Van Hoesen, G W
1989-07-01
This report describes a computerized microscope charting system based on the IBM personal computer or compatible. Stepping motors are used to control the movement of the microscope stage and to encode its position by hand manipulation of a joystick. Tissue section contours and the location of cells labeled with various compounds are stored by the computer, plotted at any magnification and manipulated into composites created from several charted sections. The system has many advantages: (1) it is based on an industry standardized computer that is affordable and familiar; (2) compact and commercially available stepping motor microprocessors control the stage movement. These controllers increase reliability, simplify implementation, and increase efficiency by relieving the computer of time consuming control tasks; (3) the system has an interactive graphics interface allowing the operator to view the image during data collection. Regions of the graphics display can be enlarged during the charting process to provide higher resolution and increased accuracy; (4) finally, the digitized data are stored at 0.5 micron resolution and can be routed directly to a multi-pen plotter or exported to a computer-aided design (CAD) program to generate a publication-quality montage composed of several computerized chartings. The system provides a useful tool for the acquisition and qualitative analysis of data representing stained cells or chemical markers in tissue. The modular design, together with data storage at high resolution, allows for potential analytical enhancements involving planimetric, stereologic and 3-D serial section reconstruction.
NASA Technical Reports Server (NTRS)
Putnam, William M.
2011-01-01
Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breathtaking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading-edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system-resolving resolutions in data assimilation and weather prediction, at resolutions near 3.5 km, improving our ability to extract detailed information from high-resolution satellite observations and ultimately produce better weather and climate predictions.
NASA Astrophysics Data System (ADS)
Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.
2015-12-01
The emergence of interactive art has blurred the line between electronics, computer graphics, and art. Here we apply this art form to numerical models, showing how the transformation of a numerical model into an interactive painting can both provide insight and solve real-world problems. The example cases include forensic reconstructions, dredging optimization, and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different temporal and spatial scales. High resolution numerical models become interactive paintings by exchanging their velocity fields with a high resolution (>=1M cells) image-based flow visualization that runs in an html5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz performance on modest graphics cards. The software is provided as open source software. By using different sources for the drawing, one can gain insight into several aspects of the velocity fields, including not only the commonly represented magnitude and direction, but also divergence, topology, and turbulence.
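The three-image composition described above can be sketched in NumPy: advect the current image backward along the velocity field, then blend in the drawing. The WebGL version performs this per pixel in a fragment shader; the nearest-neighbor sampling and the blend factor here are illustrative simplifications:

```python
import numpy as np

def advect(image, u, v):
    """One image-based flow visualization step: each pixel samples the
    image at its upstream position (backward advection, nearest neighbor)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - v).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - u).astype(int), 0, w - 1)
    return image[src_y, src_x]

def flow_viz_step(current, drawing, u, v, blend=0.05):
    """Combine the advected current image with the injected drawing,
    mirroring the three-image composition (current, drawing, uv field)."""
    return (1.0 - blend) * advect(current, u, v) + blend * drawing

img = np.zeros((8, 8))
img[4, 2] = 1.0                             # one bright "painted" pixel
u = np.ones((8, 8))                         # uniform flow to the right
v = np.zeros((8, 8))
out = flow_viz_step(img, np.zeros((8, 8)), u, v, blend=0.0)
```

Iterating this step carries the drawing along the velocity field; in the shader version both the image and the uv + mask field are textures, which is what makes 1M cells at 60 Hz feasible.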
Matrix Recrystallization for MALDI-MS Imaging of Maize Lipids at High-Spatial Resolution.
Dueñas, Maria Emilia; Carlucci, Laura; Lee, Young Jin
2016-09-01
Matrix recrystallization is optimized and applied to improve lipid ion signals in maize embryos and leaves. A systematic study was performed varying solvent and incubation time. During this study, unexpected side reactions were found when methanol was used as a recrystallization solvent, resulting in the formation of a methyl ester of phosphatidic acid. Using an optimum recrystallization condition with isopropanol, there is no apparent delocalization, as demonstrated with a transmission electron microscopy (TEM) pattern and maize leaf images obtained at 10 μm spatial resolution.
Three-Dimensional Media Technologies: Potentials for Study in Visual Literacy.
ERIC Educational Resources Information Center
Thwaites, Hal
This paper presents an overview of three-dimensional media technologies (3Dmt). Many of the new 3Dmt are the direct result of interactions of computing, communications, and imaging technologies. Computer graphics are particularly well suited to the creation of 3D images due to the high resolution and programmable nature of the current displays.…
Motion control of 7-DOF arms - The configuration control approach
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Long, Mark K.; Lee, Thomas S.
1993-01-01
Graphics simulation and real-time implementation of configuration control schemes for a redundant 7-DOF Robotics Research arm are described. The arm kinematics and motion control schemes are described briefly. This is followed by a description of a graphics simulation environment for 7-DOF arm control on the Silicon Graphics IRIS Workstation. Computer simulation results are presented to demonstrate elbow control, collision avoidance, and optimal joint movement as redundancy resolution goals. The laboratory setup for experimental validation of motion control of the 7-DOF Robotics Research arm is then described. The configuration control approach is implemented on a Motorola-68020/VME-bus-based real-time controller, with elbow positioning for redundancy resolution. Experimental results demonstrate the efficacy of configuration control for real-time control.
Development of a Low Cost Graphics Terminal.
ERIC Educational Resources Information Center
Lehr, Ted
1985-01-01
Describes modifications made to expand the capabilities of a display unit (Lear Siegler ADM-3A) to include medium resolution graphics. The modifying circuitry is detailed along with software subroutines written in Z-80 machine language for controlling the video display. (JN)
Knowledge representation in space flight operations
NASA Technical Reports Server (NTRS)
Busse, Carl
1989-01-01
In space flight operations rapid understanding of the state of the space vehicle is essential. Representation of knowledge depicting space vehicle status in a dynamic environment presents a difficult challenge. The NASA Jet Propulsion Laboratory has pursued areas of technology associated with the advancement of spacecraft operations environment. This has led to the development of several advanced mission systems which incorporate enhanced graphics capabilities. These systems include: (1) Spacecraft Health Automated Reasoning Prototype (SHARP); (2) Spacecraft Monitoring Environment (SME); (3) Electrical Power Data Monitor (EPDM); (4) Generic Payload Operations Control Center (GPOCC); and (5) Telemetry System Monitor Prototype (TSM). Knowledge representation in these systems provides a direct representation of the intrinsic images associated with the instrument and satellite telemetry and telecommunications systems. The man-machine interface includes easily interpreted contextual graphic displays. These interactive video displays contain multiple display screens with pop-up windows and intelligent, high resolution graphics linked through context and mouse-sensitive icons and text.
Dimensionality of visual complexity in computer graphics scenes
NASA Astrophysics Data System (ADS)
Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce
2008-02-01
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the subject responses using multidimensional scaling of pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
Electronic data generation and display system
NASA Technical Reports Server (NTRS)
Wetekamm, Jules
1988-01-01
The Electronic Data Generation and Display System (EDGADS) is a field tested paperless technical manual system. The authoring provides subject matter experts the option of developing procedureware from digital or hardcopy inputs of technical information from text, graphics, pictures, and recorded media (video, audio, etc.). The display system provides multi-window presentations of graphics, pictures, animations, and action sequences with text and audio overlays on high resolution color CRT and monochrome portable displays. The database management system allows direct access via hierarchical menus, keyword name, ID number, voice command, or touch of a screen pictorial of the item (icon). It contains operations and maintenance technical information at three levels of intelligence for a total system.
GSMS and space views: Advanced spacecraft monitoring tools
NASA Technical Reports Server (NTRS)
Carlton, Douglas; Vaules, David, Jr.; Mandl, Daniel
1993-01-01
The Graphical Spacecraft Monitoring System (GSMS) processes and translates real-time telemetry data from the Gamma Ray Observatory (GRO) spacecraft into high resolution 2-D and 3-D color displays showing the spacecraft's position relative to the Sun, Earth, Moon, and stars, its predicted orbit path, its attitude, instrument field of views, and other items of interest to the GRO Flight Operations Team (FOT). The GSMS development project is described and the approach being undertaken for implementing Space Views, the next version of GSMS, is presented. Space Views is an object-oriented graphical spacecraft monitoring system that will become a standard component of Goddard Space Flight Center's Transportable Payload Operations Control Center (TPOCC).
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
Pasin, Daniel; Cawley, Adam; Bidny, Sergei; Fu, Shanlin
2017-10-01
The proliferation of new psychoactive substances (NPS) in recent years has resulted in the development of numerous analytical methods for the detection and identification of known and unknown NPS derivatives. High-resolution mass spectrometry (HRMS) has been identified as the method of choice for broad screening of NPS in a wide range of analytical contexts because of its ability to measure accurate masses using data-independent acquisition (DIA) techniques. Additionally, it has shown promise for non-targeted screening strategies that have been developed in order to detect and identify novel analogues without the need for certified reference materials (CRMs) or comprehensive mass spectral libraries. This paper reviews the applications of HRMS for the analysis of NPS in forensic drug chemistry and analytical toxicology. It provides an overview of the sample preparation procedures in addition to data acquisition, instrumental analysis, and data processing techniques. Furthermore, it gives an overview of the current state of non-targeted screening strategies with discussion of future directions and perspectives of this technique. Graphical Abstract: Missing the bullseye - a graphical representation of non-targeted screening. Image courtesy of Christian Alonzo.
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of the graphical-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined by each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body parts by combining different levels of part context. Thirdly, a LMR-CNN based hierarchical model is defined to explore the context information of limb parts. Finally, the experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device-control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.
Gustafsson, Nils; Culley, Siân; Ashdown, George; Owen, Dylan M.; Pereira, Pedro Matos; Henriques, Ricardo
2016-01-01
Despite significant progress, high-speed live-cell super-resolution studies remain limited to specialized optical setups, generally requiring intense phototoxic illumination. Here, we describe a new analytical approach, super-resolution radial fluctuations (SRRF), provided as a fast graphics processing unit-enabled ImageJ plugin. In the most challenging data sets for super-resolution, such as those obtained in low-illumination live-cell imaging with GFP, we show that SRRF is generally capable of achieving resolutions better than 150 nm. Meanwhile, for data sets similar to those obtained in PALM or STORM imaging, SRRF achieves resolutions approaching those of standard single-molecule localization analysis. The broad applicability of SRRF and its performance at low signal-to-noise ratios allows super-resolution using modern widefield, confocal or TIRF microscopes with illumination orders of magnitude lower than methods such as PALM, STORM or STED. We demonstrate this by super-resolution live-cell imaging over timescales ranging from minutes to hours. PMID:27514992
ERIC Educational Resources Information Center
Callejo, Maria Luz
1994-01-01
Reports, in French, an investigation on the use of graphic representations in problem-solving tasks of the type in Spanish Mathematical Olympiads. Analysis showed that the choice and interpretation of the first graphic representation played a decisive role in the discovery of the solution. (34 references) (Author/MKR)
Real-time high-resolution PC-based system for measurement of errors on compact disks
NASA Astrophysics Data System (ADS)
Tehranchi, Babak; Howe, Dennis G.
1994-10-01
Hardware and software utilities are developed to directly monitor the Eight-to-Fourteen Modulation (EFM) demodulated data bytes at the input of a CD player's Cross-Interleaved Reed-Solomon Code (CIRC) block decoder. The hardware is capable of identifying erroneous data with single-byte resolution in the serial data stream read from a Compact Disc by a CDD 461 Philips CD-ROM drive. In addition, the system produces graphical maps that show the physical location of the measured errors on the entire disc, or, via a zooming and panning feature, on user-selectable local disc regions.
SeaTrack: Ground station orbit prediction and planning software for sea-viewing satellites
NASA Technical Reports Server (NTRS)
Lambert, Kenneth S.; Gregg, Watson W.; Hoisington, Charles M.; Patt, Frederick S.
1993-01-01
An orbit prediction software package (SeaTrack) was designed to assist High Resolution Picture Transmission (HRPT) stations in the acquisition of direct broadcast data from sea-viewing spacecraft. Such spacecraft will be common in the near future, with the launch of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) in 1994, along with the continued Advanced Very High Resolution Radiometer (AVHRR) series on NOAA platforms. The Brouwer-Lyddane model was chosen for orbit prediction because it meets the needs of HRPT tracking accuracies, provided orbital elements can be obtained frequently (up to within 1 week). SeaTrack requires elements from the U.S. Space Command (NORAD Two-Line Elements) for the satellite's initial position. Updated Two-Line Elements are routinely available from many electronic sources (some are listed in the Appendix). SeaTrack is a menu-driven program that allows users to alter input and output formats. The propagation period is entered as a start date and end date, with times in either Greenwich Mean Time (GMT) or local time. Antenna pointing information is provided in tabular form and includes azimuth/elevation pointing angles, sub-satellite longitude/latitude, acquisition of signal (AOS), loss of signal (LOS), pass orbit number, and other pertinent pointing information. One version of SeaTrack (non-graphical) runs under DOS (for IBM-compatible personal computers) and UNIX (for Sun and Silicon Graphics workstations). A second, graphical, version displays orbit tracks and azimuth/elevation plots on IBM-compatible PCs, but requires a VGA card and Microsoft FORTRAN.
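The azimuth/elevation pointing angles such a package tabulates come from rotating the station-to-satellite vector into the station's local East-North-Up frame. A spherical-Earth toy sketch follows; it is not SeaTrack's Brouwer-Lyddane propagation, and the function name and coordinates are illustrative:

```python
import numpy as np

def look_angles(sat_ecef, site_ecef, site_lat, site_lon):
    """Antenna pointing: rotate the site-to-satellite vector from Earth-fixed
    (ECEF) coordinates into the local East-North-Up frame, then convert to
    azimuth/elevation in degrees. Latitude/longitude are in radians."""
    rho = sat_ecef - site_ecef
    sl, cl = np.sin(site_lat), np.cos(site_lat)
    so, co = np.sin(site_lon), np.cos(site_lon)
    east = np.array([-so, co, 0.0])
    north = np.array([-sl * co, -sl * so, cl])
    up = np.array([cl * co, cl * so, sl])
    e, n, u = rho @ east, rho @ north, rho @ up
    az = np.degrees(np.arctan2(e, n)) % 360.0          # clockwise from north
    el = np.degrees(np.arcsin(u / np.linalg.norm(rho)))
    return az, el

# satellite 1000 km directly above a station on the equator at 0 deg longitude
site = np.array([6378.0, 0.0, 0.0])            # km, spherical-Earth approximation
sat = np.array([7378.0, 0.0, 0.0])
az, el = look_angles(sat, site, 0.0, 0.0)      # elevation -> 90 degrees
```

In a full implementation the satellite ECEF position comes from propagating the Two-Line Elements, and the tabulated AOS/LOS times are simply the crossings of the elevation through the station's horizon mask.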
Spectral atlases of the Sun from 3980 to 7100 Å at the center and at the limb
NASA Astrophysics Data System (ADS)
Fathivavsari, H.; Ajabshirizadeh, A.; Koutchmy, S.
2014-10-01
In this work, we present digital and graphical atlases of spectra of the solar disk center and of the limb near the solar poles, using data taken at the UTS-IAP & RIAAM (the University of Tabriz siderostat, telescope, and spectrograph jointly developed with the Institut d'Astrophysique de Paris and the Research Institute for Astronomy and Astrophysics of Maragha). High-resolution, high signal-to-noise ratio (SNR) CCD slit spectra of the Sun for two regions of the disk, namely μ=1.0 (disk center) and μ=0.3 (limb), are provided and discussed. While there are several spectral atlases of the solar disk center, this is the first spectral atlas produced for the solar limb in this spectral range. The resolution of the spectra is about R ≈ 70,000 (Δλ ≈ 0.09 Å), with an SNR of 400-600. The full atlas covers the 3980 to 7100 Å spectral region and contains 44 pages, with three partial spectra of the solar spectrum on each page to make it compact. The difference spectrum of the normalized solar disk center and solar limb is also included in the graphic presentation of the atlas to show the differences in line profiles, including far wings, and the most significant solar lines are identified. Telluric lines produce a distinct, easily noticed signature in the difference spectra. At the end of this paper we present two sample pages of the whole atlas; the graphic presentation of the whole atlas, along with its ASCII file, can be accessed via anonymous ftp to the CDS server in Strasbourg at cdsarc.u-strasbg.fr (130.79.128.5) or via this link: http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/other/ApSS.
PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.
Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong
2018-05-01
The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.
Principled Design of an Augmented Reality Trainer for Medics
2018-04-13
… retake test is scheduled. In addition, extensive simulation capstone scenarios are run with a full-body manikin that includes airway management … platform so they could run with high-quality graphical resolution. We updated the underlying data models to reflect the training scenario parameters … Sedeh, P., Schumann, M., & Groeben, H. (2009). Laryngoscopy via Macintosh blade versus GlideScope: Success rate and time for endotracheal intubation.
NASA Astrophysics Data System (ADS)
Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki
2006-01-01
In addition to the great advances in high-resolution and large-screen imaging technology, color is now receiving considerable attention as an aspect of image quality distinct from resolution. Conventional imaging systems have difficulty reproducing the original color of a subject, and this obstructs the application of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitation of conventional three-primary RGB systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.
Khomtchouk, Bohdan B; Van Booven, Derek J; Wahlestedt, Claes
2014-01-01
The graphical visualization of gene expression data using heatmaps has become an integral component of modern-day medical research. Heatmaps are used extensively to plot quantitative differences in gene expression levels, such as those measured with RNA-seq and microarray experiments, to provide qualitative large-scale views of the transcriptomic landscape. Creating high-quality heatmaps is a computationally intensive task, often requiring considerable programming experience, particularly for customizing features to the specific dataset at hand. Software to create publication-quality heatmaps is developed with the R programming language, the C++ programming language, and the OpenGL application programming interface (API) to create industry-grade high-performance graphics. We create a graphical user interface (GUI) software package called HeatmapGenerator for Windows OS and Mac OS X as an intuitive, user-friendly tool that lets researchers with minimal prior coding experience create publication-quality heatmaps using R graphics without sacrificing their desired level of customization. HeatmapGenerator only requires the user to upload a preformatted input file and to download the publicly available R software language, among a few other operating-system-specific requirements. Advanced features such as color, text labels, scaling, legend construction, and even database storage can be easily customized with no prior programming knowledge. We provide an intuitive and user-friendly software package, HeatmapGenerator, to create high-quality, customizable heatmaps generated using the high-resolution color graphics capabilities of R. The software is available for Microsoft Windows and Apple Mac OS X. HeatmapGenerator is released under the GNU General Public License and publicly available at: http://sourceforge.net/projects/heatmapgenerator/.
The Mac OS X direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_MAC_OSX.tar.gz/download. The Windows OS direct download is available at: http://sourceforge.net/projects/heatmapgenerator/files/HeatmapGenerator_WINDOWS.zip/download.
High-resolution Single Particle Analysis from Electron Cryo-microscopy Images Using SPHIRE
Moriya, Toshio; Saur, Michael; Stabrin, Markus; Merino, Felipe; Voicu, Horatiu; Huang, Zhong; Penczek, Pawel A.; Raunser, Stefan; Gatsogiannis, Christos
2017-01-01
SPHIRE (SPARX for High-Resolution Electron Microscopy) is a novel open-source, user-friendly software suite for the semi-automated processing of single particle electron cryo-microscopy (cryo-EM) data. The protocol presented here describes in detail how to obtain a near-atomic resolution structure starting from cryo-EM micrograph movies by guiding users through all steps of the single particle structure determination pipeline. These steps are controlled from the new SPHIRE graphical user interface and require minimum user intervention. Using this protocol, a 3.5 Å structure of TcdA1, a Tc toxin complex from Photorhabdus luminescens, was derived from only 9500 single particles. This streamlined approach will help novice users without extensive processing experience and a priori structural information, to obtain noise-free and unbiased atomic models of their purified macromolecular complexes in their native state. PMID:28570515
Higher Resolution Neutron Velocity Spectrometer Measurements of Enriched Uranium
DOE R&D Accomplishments Database
Rainwater, L. J.; Havens, W. W. Jr.
1950-08-09
The slow neutron transmission of a sample of enriched U containing 3.193 g/cm² was investigated with a resolution width of 1 µsec/m. Results of transmission measurements are shown graphically. (B.J.H.)
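A transmission measurement of this kind relates to the total neutron cross section through T = exp(-nσ), with n the areal atom density of the sample. A hedged sketch using the quoted 3.193 g/cm² thickness (the atomic mass of 235 is an illustrative assumption about the enrichment, and the transmission value below is made up):

```python
import math

N_A = 6.022e23  # Avogadro's number, atoms/mol

def total_cross_section(transmission, areal_density_g_cm2, atomic_mass=235.0):
    """Total cross section in barns from a measured transmission
    T = exp(-n * sigma), where n is the areal atom density."""
    n = areal_density_g_cm2 * N_A / atomic_mass   # atoms per cm^2
    sigma_cm2 = -math.log(transmission) / n
    return sigma_cm2 * 1e24                        # 1 barn = 1e-24 cm^2

# e.g. a hypothetical measured transmission of 0.5 through the quoted sample:
sigma = total_cross_section(0.5, 3.193)
```

Scanning the transmission as a function of neutron time-of-flight (hence the 1 µsec/m resolution width) maps out the resonance structure of the cross section.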
Graphics-Printing Program For The HP Paintjet Printer
NASA Technical Reports Server (NTRS)
Atkins, Victor R.
1993-01-01
IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.
Caple, Jodi; Stephan, Carl N
2017-05-01
Graphic exemplars of cranial sex and ancestry are essential to forensic anthropology for standardizing casework, training analysts, and communicating group trends. To date, graphic exemplars have comprised hand-drawn sketches, or photographs of individual specimens, which risks bias/subjectivity. Here, we performed quantitative analysis of photographic data to generate new photo-realistic and objective exemplars of skull form. Standardized anterior and left lateral photographs of skulls for each sex were analyzed in the computer graphics program Psychomorph for the following groups: South African Blacks, South African Whites, American Blacks, American Whites, and Japanese. The average cranial form was calculated for each photographic view, before the color information for every individual was warped to the average form and combined to produce statistical averages. These mathematically derived exemplars-and their statistical exaggerations or extremes-retain the high-resolution detail of the original photographic dataset, making them the ideal casework and training reference standards. © 2016 American Academy of Forensic Sciences.
Fast Low-Cost Multiple Sensor Readout System
Carter-Lewis, David; Krennich, Frank; Le Bohec, Stephane; Petry, Dirk; Sleege, Gary
2004-04-06
A low resolution data acquisition system is presented. The data acquisition system has a plurality of readout modules serially connected to a controller. Each readout module has a FPGA in communication with analog to digital (A/D) converters, which are connected to sensors. The A/D converter has eight bit or lower resolution. The FPGA detects when a command is addressed to it and commands the A/D converters to convert analog sensor data into digital data. The digital data is sent on a high speed serial communication bus to the controller. A graphical display is used in one embodiment to indicate if a sensor reading is outside of a predetermined range.
Kronholm, Scott C.; Capel, Paul D.
2015-01-01
Quantifying the relative contributions of different sources of water to a stream hydrograph is important for understanding the hydrology and water quality dynamics of a given watershed. To compare the performance of two methods of hydrograph separation, a graphical program [baseflow index (BFI)] and an end-member mixing analysis that used high-resolution specific conductance measurements (SC-EMMA) were used to estimate daily and average long-term slowflow additions of water to four small, primarily agricultural streams with different dominant sources of water (natural groundwater, overland flow, subsurface drain outflow, and groundwater from irrigation). Because the result of hydrograph separation by SC-EMMA is strongly related to the choice of slowflow and fastflow end-member values, a sensitivity analysis was conducted based on the various approaches reported in the literature to inform the selection of end-members. There were substantial discrepancies among the BFI and SC-EMMA, and neither method produced reasonable results for all four streams. Streams that had a small difference in the SC of slowflow compared with fastflow or did not have a monotonic relationship between streamflow and stream SC posed a challenge to the SC-EMMA method. The utility of the graphical BFI program was limited in the stream that had only gradual changes in streamflow. The results of this comparison suggest that the two methods may be quantifying different sources of water. Even though both methods are easy to apply, they should be applied with consideration of the streamflow and/or SC characteristics of a stream, especially where anthropogenic water sources (irrigation and subsurface drainage) are present.
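For two components, the SC-EMMA step described above reduces to solving the standard mixing equations Q = Qs + Qf and SC·Q = SCs·Qs + SCf·Qf for the slowflow fraction. A minimal sketch (the end-member values are user-supplied choices, which is exactly the sensitivity the text emphasizes):

```python
def slowflow_fraction(sc_stream, sc_slow, sc_fast):
    """Fraction of streamflow attributed to the slowflow end-member in a
    two-component specific-conductance mixing model."""
    if sc_slow == sc_fast:
        raise ValueError("end-members must differ for EMMA to resolve sources")
    frac = (sc_stream - sc_fast) / (sc_slow - sc_fast)
    # Clamp values outside [0, 1], which arise when the stream SC falls
    # outside the end-member range (a sign of poorly chosen end-members)
    return min(1.0, max(0.0, frac))
```

The small SC contrast mentioned above shows up here directly: as sc_slow approaches sc_fast, the denominator shrinks and the estimate becomes unstable.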
Writing a Scientific Paper II. Communication by Graphics
NASA Astrophysics Data System (ADS)
Sterken, C.
2011-07-01
This paper discusses facets of visual communication by way of images, graphs, diagrams and tabular material. Design types and elements of graphical images are presented, along with advice on how to create graphs, and on how to read graphical illustrations. This is done in astronomical context, using case studies and historical examples of good and bad graphics. Design types of graphs (scatter and vector plots, histograms, pie charts, ternary diagrams and three-dimensional surface graphs) are explicated, as well as the major components of graphical images (axes, legends, textual parts, etc.). The basic features of computer graphics (image resolution, vector images, bitmaps, graphical file formats and file conversions) are explained, as well as concepts of color models and of color spaces (with emphasis on aspects of readability of color graphics by viewers suffering from color-vision deficiencies). Special attention is given to the verity of graphical content, and to misrepresentations and errors in graphics and associated basic statistics. Dangers of dot joining and curve fitting are discussed, with emphasis on the perception of linearity, the issue of nonsense correlations, and the handling of outliers. Finally, the distinction between data, fits and models is illustrated.
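One pitfall the paper highlights, nonsense correlation, is easy to reproduce: two independent trending series frequently show a large Pearson r. A small illustrative sketch (not taken from the paper itself):

```python
import math
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def random_walk(n, rng):
    """Cumulative sum of +/-1 steps: a trending, non-stationary series."""
    pos, out = 0, []
    for _ in range(n):
        pos += rng.choice((-1, 1))
        out.append(pos)
    return out

# Two *independent* random walks often yield a large spurious |r|,
# the classic "nonsense correlation" between trending series:
r = pearson_r(random_walk(500, random.Random(1)), random_walk(500, random.Random(2)))
```

Plotting two such series on shared axes produces exactly the kind of misleading graph the paper warns against.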
Resolution-independent surface rendering using programmable graphics hardware
Loop, Charles T.; Blinn, James Frederick
2008-12-16
Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.
Combining endoscopic ultrasound with Time-Of-Flight PET: The EndoTOFPET-US Project
NASA Astrophysics Data System (ADS)
Frisch, Benjamin
2013-12-01
The EndoTOFPET-US collaboration develops a multimodal imaging technique for endoscopic exams of the pancreas or the prostate. It combines the benefits of high resolution metabolic imaging with Time-Of-Flight Positron Emission Tomography (TOF PET) and anatomical imaging with ultrasound (US). EndoTOFPET-US consists of a PET head extension for a commercial US endoscope and a PET plate outside the body in coincidence with the head. The high level of miniaturization and integration creates challenges in fields such as scintillating crystals, ultra-fast photo-detection, highly integrated electronics, system integration and image reconstruction. Amongst the developments, fast scintillators as well as fast and compact digital SiPMs with single SPAD readout are used to obtain the best coincidence time resolution (CTR). Highly integrated ASICs and DAQ electronics contribute to the timing performances of EndoTOFPET. In view of the targeted resolution of around 1 mm in the reconstructed image, we present a prototype detector system with a CTR better than 240 ps FWHM. We discuss the challenges in simulating such a system and introduce reconstruction algorithms based on graphics processing units (GPU).
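The quoted coincidence time resolution translates directly into a localization uncertainty along the line of response via Δx = c·Δt/2; a one-line sketch:

```python
C = 299792458.0  # speed of light, m/s

def tof_position_fwhm_mm(ctr_ps):
    """Spatial FWHM along the line of response implied by a coincidence
    time resolution given in picoseconds: dx = c * dt / 2."""
    return C * ctr_ps * 1e-12 / 2.0 * 1e3  # millimeters
```

The 240 ps FWHM prototype thus constrains the annihilation point to roughly 36 mm FWHM along each line of response, which is why TOF information sharpens the reconstruction rather than replacing it.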
MrEnt: an editor for publication-quality phylogenetic tree illustrations.
Zuccon, Alessandro; Zuccon, Dario
2014-09-01
We developed MrEnt, a Windows-based, user-friendly software that allows the production of complex, high-resolution, publication-quality phylogenetic trees in few steps, directly from the analysis output. The program recognizes the standard Nexus tree format and the annotated tree files produced by BEAST and MrBayes. MrEnt combines in a single software a large suite of tree manipulation functions (e.g. handling of multiple trees, tree rotation, character mapping, node collapsing, compression of large clades, handling of time scale and error bars for chronograms) with drawing tools typical of standard graphic editors, including handling of graphic elements and images. The tree illustration can be printed or exported in several standard formats suitable for journal publication, PowerPoint presentation or Web publication. © 2014 John Wiley & Sons Ltd.
Analysis of IUE spectra using the interactive data language
NASA Technical Reports Server (NTRS)
Joseph, C. L.
1981-01-01
The Interactive Data Language (IDL) is used to analyze high resolution spectra from the IUE. Like other interactive languages, IDL is designed for use by the scientist rather than the professional programmer, allowing him to conceive of his data as simple entities and to operate on this data with minimal difficulty. A package of programs created to analyze interstellar absorption lines is presented as an example of the graphical power of IDL.
Tang, Yunqing; Dai, Luru; Zhang, Xiaoming; Li, Junbai; Hendriks, Johnny; Fan, Xiaoming; Gruteser, Nadine; Meisenberg, Annika; Baumann, Arnd; Katranidis, Alexandros; Gensch, Thomas
2015-01-01
Single molecule localization based super-resolution fluorescence microscopy offers significantly higher spatial resolution than predicted by Abbe’s resolution limit for far field optical microscopy. Such super-resolution images are reconstructed from wide-field or total internal reflection single molecule fluorescence recordings. Discrimination between emission of single fluorescent molecules and background noise fluctuations remains a great challenge in current data analysis. Here we present a real-time and robust single molecule identification and localization algorithm, SNSMIL (Shot Noise based Single Molecule Identification and Localization). This algorithm is based on the intrinsic nature of noise, i.e., its Poisson or shot noise characteristics, and a new identification criterion, Q_SNSMIL, is defined. SNSMIL improves the identification accuracy of single fluorescent molecules in experimental or simulated datasets with high and inhomogeneous background. The implementation of SNSMIL relies on a graphics processing unit (GPU), making real-time analysis feasible as shown for real experimental and simulated datasets. PMID:26098742
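The abstract does not spell out the Q_SNSMIL criterion, but the underlying idea of shot-noise-aware thresholding can be sketched with a toy Poisson model: a Poisson background of mean B has standard deviation √B, so candidate molecules are pixels exceeding B by several of those units (k = 3 below is an arbitrary illustrative choice, not the paper's):

```python
import math

def shot_noise_candidates(image, background, k=3.0):
    """Flag pixels whose counts exceed the local background estimate by
    more than k Poisson standard deviations (sqrt of the mean)."""
    hits = []
    for y, row in enumerate(image):
        for x, counts in enumerate(row):
            b = background[y][x]
            if counts > b + k * math.sqrt(b):
                hits.append((x, y))
    return hits
```

Because the threshold scales with the local background, the criterion adapts to the inhomogeneous backgrounds that a single global threshold handles poorly.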
[Graphic reconstruction of anatomic surfaces].
Ciobanu, O
2004-01-01
The paper deals with the graphic reconstruction of anatomic surfaces in a virtual 3D setting. Scanning technologies and software provide greater flexibility in the digitization of surfaces, along with higher resolution and accuracy. An inexpensive alternative method for the reconstruction of 3D anatomic surfaces is presented in connection with studies and international projects developed by the Medical Design research team.
Super-Resolution in Plenoptic Cameras Using FPGAs
Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime
2014-01-01
Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable graphic array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246
Print-to-pattern dry film photoresist lithography
NASA Astrophysics Data System (ADS)
Garland, Shaun P.; Murphy, Terrence M., Jr.; Pan, Tingrui
2014-05-01
Here we present facile microfabrication processes, referred to as print-to-pattern dry film photoresist (DFP) lithography, that utilize the combined advantages of wax printing and DFP to produce micropatterned substrates with high resolution over a large surface area in a non-cleanroom setting. The print-to-pattern methods can be performed in an out-of-cleanroom environment making microfabrication much more accessible to minimally equipped laboratories. Two different approaches employing either wax photomasks or wax etchmasks from a solid ink desktop printer have been demonstrated that allow the DFP to be processed in a negative tone or positive tone fashion, respectively, with resolutions of 100 µm. The effect of wax melting on resolution and as a bonding material was also characterized. In addition, solid ink printers have the capacity to pattern large areas with high resolution, which was demonstrated by stacking DFP layers in a 50 mm × 50 mm woven pattern with 1 mm features. By using an office printer to generate the masking patterns, the mask designs can be easily altered in a graphic user interface to enable rapid prototyping.
Zhang, Shuai; Li, PeiPei; Yan, Zhongyong; Long, Ju; Zhang, Xiaojun
2017-03-01
An ultraperformance liquid chromatography-quadrupole time-of-flight high-resolution mass spectrometry method was developed and validated for the determination of nitrofurazone metabolites. Precolumn derivatization with 2,4-dinitrophenylhydrazine and p-dimethylaminobenzaldehyde as an internal standard was used successfully to determine the biomarker 5-nitro-2-furaldehyde. In negative electrospray ionization mode, the precise molecular weights of the derivatives were 320.0372 for the biomarker and 328.1060 for the internal standard (relative error 1.08 ppm). The matrix effect was evaluated and the analytical characteristics of the method and derivatization reaction conditions were validated. For comparison purposes, spiked samples were tested by both internal and external standard methods. The results show high precision can be obtained with p-dimethylaminobenzaldehyde as an internal standard for the identification and quantification of nitrofurazone metabolites in complex biological samples. Graphical Abstract: A simplified preparation strategy for biological samples.
Ultrathin high-resolution flexographic printing using nanoporous stamps
Kim, Sanha; Sojoudi, Hossein; Zhao, Hangbo; Mariappan, Dhanushkodi; McKinley, Gareth H.; Gleason, Karen K.; Hart, A. John
2016-01-01
Since its invention in ancient times, relief printing, commonly called flexography, has been used to mass-produce artifacts ranging from decorative graphics to printed media. Now, higher-resolution flexography is essential to manufacturing low-cost, large-area printed electronics. However, because of contact-mediated liquid instabilities and spreading, the resolution of flexographic printing using elastomeric stamps is limited to tens of micrometers. We introduce engineered nanoporous microstructures, comprising polymer-coated aligned carbon nanotubes (CNTs), as a next-generation stamp material. We design and engineer the highly porous microstructures to be wetted by colloidal inks and to transfer a thin layer to a target substrate upon brief contact. We demonstrate printing of diverse micrometer-scale patterns of a variety of functional nanoparticle inks, including Ag, ZnO, WO3, and CdSe/ZnS, onto both rigid and compliant substrates. The printed patterns have highly uniform nanoscale thickness (5 to 50 nm) and match the stamp features with high fidelity (edge roughness, ~0.2 μm). We derive conditions for uniform printing based on nanoscale contact mechanics, characterize printed Ag lines and transparent conductors, and achieve continuous printing at a speed of 0.2 m/s. The latter represents a combination of resolution and throughput that far surpasses industrial printing technologies. PMID:27957542
2000-01-01
The U.S. Geological Survey's (USGS) Earth Explorer Web site provides access to millions of land-related products, including the following: satellite images from Landsat, Advanced Very High Resolution Radiometer (AVHRR), and Corona data sets; aerial photographs from the National Aerial Photography Program, NASA, and USGS data sets; digital cartographic data from digital elevation models, digital line graphs, digital raster graphics, and digital orthophoto quadrangles; and USGS paper maps. Digital, film, and paper products are available, and many products can be previewed before ordering.
The use of hypermedia to increase the productivity of software development teams
NASA Technical Reports Server (NTRS)
Coles, L. Stephen
1991-01-01
Rapid progress in low-cost commercial PC-class multimedia workstation technology will potentially have a dramatic impact on the productivity of distributed work groups of 50-100 software developers. Hypermedia/multimedia involves the seamless integration, in a graphical user interface (GUI), of a wide variety of data structures, including high-resolution graphics, maps, images, voice, and full-motion video. Hypermedia will normally require the manipulation of large dynamic files, for which relational database technology and SQL servers are essential. Basic machine architecture, special-purpose video boards, video equipment, optical memory, software needed for animation, network technology, and the anticipated increase in productivity that will result from the introduction of hypermedia technology are covered. It is suggested that the cost of the hardware and software to support an individual multimedia workstation will be on the order of $10,000.
Maurer, S A; Kussmann, J; Ochsenfeld, C
2014-08-07
We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPU). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2-expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows to replace the rate-determining contraction step with a modified J-engine algorithm, that has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU-server.
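The Laplace-transform step rests on the identity 1/x = ∫₀^∞ e^(−xt) dt applied to the positive orbital-energy denominators ε_a + ε_b − ε_i − ε_j. A toy numerical check (simple trapezoidal quadrature stands in here for the optimized few-point exponential quadratures used in production codes):

```python
import math

def laplace_inverse_x(x, t_max=40.0, n=4000):
    """Approximate 1/x for x > 0 via 1/x = integral_0^inf exp(-x*t) dt,
    using trapezoidal quadrature on [0, t_max]."""
    h = t_max / n
    total = 0.5 * (1.0 + math.exp(-x * t_max))      # endpoint terms
    total += sum(math.exp(-x * h * i) for i in range(1, n))
    return total * h
```

In AO-basis Laplace MP2 this expansion replaces the canonical 1/(ε_a + ε_b − ε_i − ε_j) sum with products of exponentially weighted matrices, which is what allows the contraction to be recast as the J-engine-like step mentioned above.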
Graphical user interface for a dual-module EMCCD x-ray detector array
NASA Astrophysics Data System (ADS)
Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen
2011-03-01
A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000x to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2kx1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.
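As background to the on-chip electron-multiplication gain mentioned above, the standard EMCCD noise model (generic textbook figures, not SSXII-specific ones) shows the trade: gain M divides the effective read noise by M, at the cost of an excess noise factor F² ≈ 2 on the shot-noise variance:

```python
import math

def emccd_snr(signal_e, read_noise_e, em_gain, excess_noise_sq=2.0):
    """Per-pixel SNR under the standard EMCCD noise model: shot-noise
    variance inflated by F^2, read noise suppressed by the EM gain."""
    shot_var = excess_noise_sq * signal_e
    read_var = (read_noise_e / em_gain) ** 2
    return signal_e / math.sqrt(shot_var + read_var)
```

At a gain of 2000x, even tens of electrons of readout noise become negligible next to shot noise, which is what makes the modules usable at fluoroscopic dose levels.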
Scalable large format 3D displays
NASA Astrophysics Data System (ADS)
Chang, Nelson L.; Damera-Venkata, Niranjan
2010-02-01
We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.
Hybrid parallel computing architecture for multiview phase shifting
NASA Astrophysics Data System (ADS)
Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun
2014-11-01
The multiview phase-shifting method shows its powerful capability in achieving high-resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs, and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit can cooperate with the graphics processing unit (GPU) to achieve hybrid parallel computing. The high-computation-cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained using an NVIDIA GT560Ti graphics card, compared with a sequential C implementation on a 3.4 GHz Intel Core i7-3770.
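The per-pixel phase computation moved to the GPU can be illustrated with the standard four-step phase-shifting formula φ = atan2(I₄ − I₂, I₁ − I₃), a textbook variant; the paper's multiview pipeline adds rectification, correspondence, and reconstruction on top of this kernel:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    # Wrapped phase from four fringe images shifted by pi/2 each:
    # I_k = A + B*cos(phi + k*pi/2), k = 0..3, so
    # I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi).
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: recover a known phase map from generated fringes.
h, w = 4, 4
phi = np.linspace(0.0, np.pi / 2, h * w).reshape(h, w)
imgs = [100.0 + 50.0 * np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*imgs)
```

Because the formula is independent per pixel, it maps directly onto fine-grained GPU threads, which is why this step parallelizes so well.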
A CAMAC display module for fast bit-mapped graphics
NASA Astrophysics Data System (ADS)
Abdel-Aal, R. E.
1992-10-01
In many data acquisition and analysis facilities for nuclear physics research, utilities for the display of two-dimensional (2D) images and spectra on graphics terminals suffer from low speed, poor resolution, and limited accuracy. Development of CAMAC bit-mapped graphics modules for this purpose has been discouraged in the past by the large device count needed and the long times required to load the image data from the host computer into the CAMAC hardware, particularly since many such facilities have been designed to support fast DMA block transfers only for data acquisition into the host. This paper describes the design and implementation of a prototype CAMAC graphics display module with a resolution of 256×256 pixels at eight colours, for which all components can be easily accommodated in a single-width package. A hardware technique is employed which reduces the number of programmed CAMAC data transfer operations needed for writing 2D images into the display memory by approximately an order of magnitude, with attendant improvements in display speed and CPU time consumption. Hardware and software details are given together with sample results. Information on the performance of the module in a typical VAX/MBD data acquisition environment is presented, including data on the mutual effects of simultaneous data acquisition traffic. Suggestions are made for further improvements in performance.
Liang, Yicheng; Peng, Hao
2015-02-07
Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.
ChelomEx: Isotope-assisted discovery of metal chelates in complex media using high-resolution LC-MS.
Baars, Oliver; Morel, François M M; Perlman, David H
2014-11-18
Chelating agents can control the speciation and reactivity of trace metals in biological, environmental, and laboratory-derived media. A large number of trace metals (including Fe, Cu, Zn, Hg, and others) show characteristic isotopic fingerprints that can be exploited for the discovery of known and unknown organic metal complexes and related chelating ligands in very complex sample matrices using high-resolution liquid chromatography mass spectrometry (LC-MS). However, there is currently no free open-source software available for this purpose. We present a novel software tool, ChelomEx, which identifies isotope pattern-matched chromatographic features associated with metal complexes along with free ligands and other related adducts in high-resolution LC-MS data. High sensitivity and exclusion of false positives are achieved by evaluation of the chromatographic coherence of the isotope pattern within chromatographic features, which we demonstrate through the analysis of bacterial culture media. A built-in graphical user interface and compound library aid in identification and efficient evaluation of results. ChelomEx is implemented in MATLAB. The source code, binaries for Microsoft Windows and Mac OS X, as well as test LC-MS data are available for download from SourceForge (http://sourceforge.net/projects/chelomex).
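The signature-matching idea can be sketched as a search for peak pairs separated by a metal's characteristic isotope spacing with a matching abundance ratio. The mass difference and abundance ratio below are standard ⁵⁴Fe/⁵⁶Fe isotope data; the peak list and tolerances are invented for illustration, and this is not ChelomEx's actual algorithm:

```python
def find_isotope_pairs(peaks, delta_mz=1.9953, ratio=0.0637,
                       mz_tol=0.005, ratio_tol=0.3):
    # peaks: list of (mz, intensity) tuples from one spectrum.
    # Return (light, heavy) m/z pairs whose spacing matches the
    # 54Fe/56Fe mass difference and whose intensity ratio is within
    # ratio_tol (relative) of the natural 54Fe/56Fe abundance ratio.
    hits = []
    for mz1, i1 in peaks:
        for mz2, i2 in peaks:
            if abs((mz2 - mz1) - delta_mz) < mz_tol and i2 > 0:
                r = i1 / i2
                if abs(r - ratio) / ratio < ratio_tol:
                    hits.append((mz1, mz2))
    return hits

# Hypothetical Fe-complex peaks: the 54Fe satellite sits 1.9953 u below
# the 56Fe peak at ~6.4% of its intensity; the third peak is unrelated.
peaks = [(601.1210, 6.2), (603.1163, 100.0), (610.0, 50.0)]
pairs = find_isotope_pairs(peaks)
```

The real software additionally requires the isotope pattern to be chromatographically coherent across a feature, which is what suppresses false positives.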
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
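A WMS client retrieves rendered maps from a server of this kind via the standard GetMap operation. The sketch below assembles such a request; the endpoint and layer name are hypothetical, while the parameter names are the standard WMS 1.1.1 set:

```python
from urllib.parse import urlencode

def build_getmap_url(base, layer, bbox, width, height,
                     fmt="image/jpeg", srs="EPSG:4326"):
    # Assemble a WMS 1.1.1 GetMap request URL.
    # bbox is (min_lon, min_lat, max_lon, max_lat) in the given SRS.
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)

# Hypothetical global basemap request against an OnMoon-style endpoint:
url = build_getmap_url("http://example.org/wms", "lunar_basemap",
                       (-180, -90, 180, 90), 1024, 512)
```

A Moon-specific server extends this scheme mainly through its supported coordinate systems and output formats; the request structure itself is unchanged.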
Providing Internet Access to High-Resolution Mars Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
Chavez, P.S.; Sides, S.C.; Anderson, J.A.
1991-01-01
The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect.
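A generic form of the HPF merge (a textbook version; the kernel size and weighting used in the compared study may differ) adds the high-pass-filtered panchromatic band to each multispectral band resampled to the panchromatic grid:

```python
import numpy as np

def hpf_merge(pan, ms_upsampled, weight=1.0):
    # pan: 2-D high-resolution panchromatic band.
    # ms_upsampled: 3-D (bands, H, W) multispectral data already
    # resampled to the panchromatic grid.
    # High-pass = pan minus a 3x3 box-filtered (low-pass) version,
    # so only spatial detail, not spectral content, is injected.
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(pan, 1, mode="edge")
    low = sum(pad[i:i + pan.shape[0], j:j + pan.shape[1]] * k[i, j]
              for i in range(3) for j in range(3))
    high = pan - low
    return ms_upsampled + weight * high  # broadcasts over bands

# Demo: a flat panchromatic band contributes no high-pass detail,
# leaving the multispectral values (here zeros) untouched.
pan = np.full((4, 4), 7.0)
ms = np.zeros((2, 4, 4))
merged = hpf_merge(pan, ms)
```

Because only the high spatial frequencies of the panchromatic band are added, the low-frequency spectral content of each band is preserved, which matches the abstract's finding that HPF distorts spectral characteristics the least.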
NASA Technical Reports Server (NTRS)
1994-01-01
A software management system, originally developed for Goddard Space Flight Center (GSFC) by Century Computing, Inc., has evolved from a menu- and command-oriented system to a state-of-the-art user interface development system supporting high-resolution graphics workstations. Transportable Applications Environment (TAE) was initially distributed through COSMIC and backed by a TAE support office at GSFC. In 1993, Century Computing assumed the support and distribution functions and began marketing TAE Plus, the system's latest version. The software is easy to use and does not require programming experience.
Laser Speckle Imaging of Cerebral Blood Flow
NASA Astrophysics Data System (ADS)
Luo, Qingming; Jiang, Chao; Li, Pengcheng; Cheng, Haiying; Wang, Zhen; Wang, Zheng; Tuchin, Valery V.
Monitoring the spatio-temporal characteristics of cerebral blood flow (CBF) is crucial for studying the normal and pathophysiologic conditions of brain metabolism. By illuminating the cortex with laser light and imaging the resulting speckle pattern, relative CBF images with tens of microns spatial and millisecond temporal resolution can be obtained. In this chapter, a laser speckle imaging (LSI) method for monitoring dynamic, high-resolution CBF is introduced. To improve the spatial resolution of current LSI, a modified LSI method is proposed. To accelerate the speed of data processing, three LSI data processing frameworks based on graphics processing unit (GPU), digital signal processor (DSP), and field-programmable gate array (FPGA) are also presented. Applications for detecting the changes in local CBF induced by sensory stimulation and thermal stimulation, the influence of a chemical agent on CBF, and the influence of acute hyperglycemia following cortical spreading depression on CBF are given.
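The quantity underlying all three processing frameworks is the local speckle contrast K = σ/⟨I⟩ computed over small pixel windows, from which relative flow maps are derived. A minimal sketch using non-overlapping windows in plain NumPy (the chapter's GPU/DSP/FPGA pipelines parallelize essentially this kernel):

```python
import numpy as np

def speckle_contrast(img, win=5):
    # Local contrast K = std/mean over non-overlapping win x win blocks.
    # Higher flow blurs the speckle within the exposure, lowering K.
    h = (img.shape[0] // win) * win
    w = (img.shape[1] // win) * win
    blocks = img[:h, :w].reshape(h // win, win, w // win, win)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(h // win, w // win, -1)
    return blocks.std(axis=-1) / blocks.mean(axis=-1)

# Demo: a uniform (fully blurred) image has zero contrast everywhere.
img = np.full((10, 10), 5.0)
K = speckle_contrast(img)
```

Real implementations typically use sliding windows and convert K to a flow index (e.g. proportional to 1/K²); both refinements fit the same per-window structure.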
Graphics simulation and training aids for advanced teleoperation
NASA Technical Reports Server (NTRS)
Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.
1993-01-01
Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.
Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware
NASA Astrophysics Data System (ADS)
Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe
We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework thereby harnesses the powerful computational resources inside graphics hardware and maximizes arithmetic intensity to achieve beyond-real-time performance of up to 42 frames per second for 800×600 resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing further algorithmic advancement without losing real-time capability.
High-resolution studies of the HF ionospheric modification interaction region
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Sheerin, J. P.
1985-01-01
The use of the pulse edge analysis technique to explain ionospheric modifications caused by high-power HF radio waves is discussed. The technique, implemented at the Arecibo Observatory, uses long radar pulses and very rapid data sampling. A comparison of the pulse leading and trailing edge characteristics is obtained and the comparison is used to estimate the relative changes in the interaction region height and layer width; an example utilizing this technique is provided. Main plasma line overshoot and miniovershoot were studied from the pulse edge observations; the observations at various HF pulsings and radar resolutions are graphically presented. From the pulse edge data the development and the occurrence of main plasma line overshoot and miniovershoot are explained. The theories of soliton formation and collapse, wave ducting, profile modification, and parametric instabilities are examined as a means of explaining main plasma line overshoots and miniovershoots.
Dier, Tobias K F; Fleckenstein, Marco; Militz, Holger; Volmer, Dietrich A
2017-05-01
Chemical degradation is an efficient method to obtain bio-oils and other compounds from lignin. Lignin bio-oils are potential substitutes for the phenol component of phenol formaldehyde (PF) resins. Here, we developed an analytical method based on high resolution mass spectrometry that provided structural information for the synthesized lignin-derived resins and supported the prediction of their properties. Different model resins based on typical lignin degradation products were analyzed by electrospray ionization in negative ionization mode. Utilizing enhanced mass defect filter techniques provided detailed structural information of the lignin-based model resins and readily complemented the analytical data from differential scanning calorimetry and thermogravimetric analysis. Relative reactivity and chemical diversity of the phenol substitutes were significant determinants of the outcome of the PF resin synthesis and thus controlled the areas of application of the resulting polymers.
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists in its LR version, through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bicubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple in implementation and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to averaged, subsampled one-dimensional signals.
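The pre/post-processing idea can be illustrated with stand-ins: a standard Laplacian sharpening in place of the authors' MLF, nearest-neighbor upsampling in place of a real interpolator, and a block-mean correction that enforces consistency with the averaging-and-subsampling acquisition model in place of the exact ICP. None of this reproduces the paper's precise definitions:

```python
import numpy as np

def laplacian_sharpen(lr, alpha=0.5):
    # Stand-in pre-filter: boost the high frequencies attenuated by
    # the averaging step of the LR acquisition model.
    pad = np.pad(lr, 1, mode="edge")
    lap = (4 * lr - pad[:-2, 1:-1] - pad[2:, 1:-1]
           - pad[1:-1, :-2] - pad[1:-1, 2:])
    return lr + alpha * lap

def nearest_upsample(img, factor=2):
    # Placeholder for any interpolation method (bilinear, bicubic, ...).
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def intensity_correction(sr, lr, factor=2):
    # Stand-in post-process: shift each factor x factor block of the SR
    # image so its average reproduces the observed LR pixel.
    h, w = lr.shape
    means = sr.reshape(h, factor, w, factor).mean(axis=(1, 3))
    delta = (lr - means).repeat(factor, axis=0).repeat(factor, axis=1)
    return sr + delta

lr = np.arange(16.0).reshape(4, 4)
sr = nearest_upsample(laplacian_sharpen(lr))
sr_corrected = intensity_correction(sr, lr)
```

After correction, averaging-and-subsampling the SR result returns the original LR image exactly, which is the consistency property the post-process is meant to guarantee.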
Funderburg, Rebecca; Arevalo, Ricardo; Locmelis, Marek; Adachi, Tomoko
2017-11-01
Laser ablation ICP-MS enables streamlined, high-sensitivity measurements of rare earth element (REE) abundances in geological materials. However, many REE isotope mass stations are plagued by isobaric interferences, particularly from diatomic oxides and argides. In this study, we compare REE abundances quantitated from mass spectra collected with low-resolution (m/Δm = 300 at 5% peak height) and medium-resolution (m/Δm = 2500) mass discrimination. A wide array of geological samples was analyzed, including USGS and NIST glasses ranging from mafic to felsic in composition, with NIST 610 employed as the bracketing calibrating reference material. The medium-resolution REE analyses are shown to be significantly more accurate and precise (at the 95% confidence level) than low-resolution analyses, particularly in samples characterized by low (<μg/g levels) REE abundances. A list of preferred mass stations that are least susceptible to isobaric interferences is reported. These findings impact the reliability of REE abundances derived from LA-ICP-MS methods, particularly those relying on mass analyzers that do not offer tuneable mass-resolution and/or collision cell technologies that can reduce oxide and/or argide formation.
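Whether a given resolution setting can split an analyte from a diatomic interference follows from the required resolving power m/Δm. As an example, consider ¹⁵⁶Gd against the ¹⁴⁰Ce¹⁶O oxide, using isotope masses from standard tables:

```python
def required_resolution(m_analyte, m_interf):
    # Resolving power m/dm needed to separate two peaks at nearly
    # the same nominal mass.
    return m_analyte / abs(m_analyte - m_interf)

m_gd156 = 155.92213                  # 156Gd atomic mass (u)
m_ce140_o16 = 139.90544 + 15.99491   # 140Ce16O diatomic mass (u)
r = required_resolution(m_gd156, m_ce140_o16)
```

The result, roughly m/Δm ≈ 7200, illustrates why REE oxide interferences are demanding: fully resolving this pair exceeds even the medium-resolution setting compared in the study, so careful choice of interference-poor mass stations remains important.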
Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M
2018-03-05
Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality, while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi (Toxicological Prioritization Index) that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely-available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org .
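At its core, a ToxPi score is a weighted combination of normalized per-slice scores; a minimal rendition of that scoring idea (the slice values and weights below are invented for illustration):

```python
def toxpi_score(slice_values, weights):
    # slice_values: per-slice scores, each normalized into [0, 1].
    # weights: relative slice weights (drawn as slice widths in the
    # ToxPi graphic). The overall score is the weighted mean.
    total_w = sum(weights)
    return sum(v * w for v, w in zip(slice_values, weights)) / total_w

# Three evidence slices, with the second weighted double:
score = toxpi_score([0.2, 0.9, 0.5], [1, 2, 1])
```

The GUI layers interactive visualization, clustering, and reporting on top of profiles built from exactly this kind of per-slice aggregation.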
NASA Astrophysics Data System (ADS)
Pototschnig, Johann V.; Meyer, Ralf; Hauser, Andreas W.; Ernst, Wolfgang E.
2017-02-01
Research on ultracold molecules has seen a growing interest recently in the context of high-resolution spectroscopy and quantum computation. After forming weakly bound molecules from atoms in cold collisions, the preparation of molecules in low vibrational levels of the ground state is experimentally challenging, and typically achieved by population transfer using excited electronic states. Accurate potential energy surfaces are needed for a correct description of processes such as the coherent de-excitation from the highest and therefore weakly bound vibrational levels in the electronic ground state via couplings to electronically excited states. This paper is dedicated to the vibrational analysis of potentially relevant electronically excited states in the alkali-metal (Li, Na, K, Rb) and alkaline-earth-metal (Ca, Sr) diatomic series. Graphical maps of Franck-Condon overlap integrals are presented for all molecules of the group. By comparison to overlap graphics produced for idealized potential surfaces, we judge the usability of the selected states for future experiments on laser-enhanced molecular formation from mixtures of quantum degenerate gases.
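A Franck-Condon overlap integral of the kind mapped in the paper can be sketched numerically for the simplest case: two displaced harmonic-oscillator ground states of equal width, where the analytic value exp(−d²/4σ²) provides a check (real diatomic potentials require the actual vibrational wavefunctions, of course):

```python
import numpy as np

def fc_overlap(d, sigma=1.0, n=20001, span=12.0):
    # Overlap <psi1|psi2> of two normalized Gaussian ground-state
    # wavefunctions displaced by d, evaluated on a dense grid.
    x = np.linspace(-span, span, n)
    dx = x[1] - x[0]
    norm = (np.pi * sigma**2) ** -0.25
    psi1 = norm * np.exp(-x**2 / (2 * sigma**2))
    psi2 = norm * np.exp(-(x - d) ** 2 / (2 * sigma**2))
    return np.sum(psi1 * psi2) * dx

s = fc_overlap(1.0)  # analytic value: exp(-1/4)
```

The overlap decays as a Gaussian in the displacement between the two potential minima, which is why the relative positioning of ground- and excited-state surfaces controls which transfer pathways are usable.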
Integration of Modelling and Graphics to Create an Infrared Signal Processing Test Bed
NASA Astrophysics Data System (ADS)
Sethi, H. R.; Ralph, John E.
1989-03-01
The work reported in this paper was carried out as part of a contract with MoD (PE) UK. It considers the problems associated with realistic modelling of a passive infrared system in an operational environment. Ideally, all aspects of the system and environment should be integrated into a complete end-to-end simulation, but in the past limited computing power has prevented this. Recent developments in workstation technology and the increasing availability of parallel processing techniques make the end-to-end simulation possible. However, the complexity and speed of such simulations mean difficulties for the operator in controlling the software and understanding the results. These difficulties can be greatly reduced by providing an extremely user-friendly interface and a very flexible, high-power, high-resolution colour graphics capability. Most system modelling is based on separate software simulation of the individual components of the system itself and its environment. These component models may have their own characteristic inbuilt assumptions and approximations, may be written in the language favoured by the originator, and may have a wide variety of input and output conventions and requirements. The models and their limitations need to be matched to the range of conditions appropriate to the operational scenario. A comprehensive set of data bases needs to be generated by the component models, and these data bases must be made readily available to the investigator. Performance measures need to be defined and displayed in some convenient graphics form. Some options are presented for combining available hardware and software to create an environment within which the models can be integrated, and which provides the required man-machine interface, graphics, and computing power. The impact of massively parallel processing and artificial intelligence will be discussed.
Parallel processing will make real time end-to-end simulation possible and will greatly improve the graphical visualisation of the model output data. Artificial intelligence should help to enhance the man-machine interface.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Case, Jonathan L.; Venner, Jason; Moreno-Madrinan, Max. J.; Delgado, Francisco
2012-01-01
Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.
cisTEM, user-friendly software for single-particle image processing.
Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus
2018-03-07
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200k-300k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.
High Resolution Visualization Applied to Future Heavy Airlift Concept Development and Evaluation
NASA Technical Reports Server (NTRS)
FordCook, A. B.; King, T.
2012-01-01
This paper explores the use of high resolution 3D visualization tools for exploring the feasibility and advantages of future military cargo airlift concepts and evaluating compatibility with existing and future payload requirements. Realistic 3D graphic representations of future airlifters are immersed in rich, supporting environments to demonstrate concepts of operations to key personnel for evaluation, feedback, and development of critical joint support. Accurate concept visualizations are reviewed by commanders, platform developers, loadmasters, soldiers, scientists, engineers, and key principal decision makers at various stages of development. The insight gained through the review of these physically and operationally realistic visualizations is essential to refining design concepts to meet competing requirements in a fiscally conservative defense finance environment. In addition, highly accurate 3D geometric models of existing and evolving large military vehicles are loaded into existing and proposed aircraft cargo bays. In this virtual aircraft test-loading environment, materiel developers, engineers, managers, and soldiers can realistically evaluate the compatibility of current and next-generation airlifters with proposed cargo.
NASA Astrophysics Data System (ADS)
Reilly, B. T.; Stoner, J. S.; Wiest, J.; Abbott, M. B.; Francus, P.; Lapointe, F.
2015-12-01
Computed Tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down-core profiles, generated through the attenuation of X-rays as a function of density and atomic number. When using a medical CT scanner, these quantitative data are stored in pixels using the Hounsfield scale, which is defined relative to the attenuation of X-rays in water and air at standard temperature and pressure. Here we present MATLAB-based software specifically designed for sedimentary applications, with a user-friendly graphical interface to process DICOM files and stitch overlapping CT scans. For visualization, the software allows easy generation of core slice images in grayscale and false color relative to a user-defined Hounsfield number range. For comparison to other high-resolution non-destructive methods, down-core Hounsfield number profiles are extracted using a method robust to coring imperfections such as deformation, bowing, gaps, and gas expansion. We demonstrate the usefulness of this technique with lacustrine sediment cores from the Western United States and the Canadian High Arctic, including Fish Lake, Oregon, and Sawtooth Lake, Ellesmere Island. These sites represent two different depositional environments and provide examples of a variety of common coring defects and lithologies. The Hounsfield profiles and images can be used in combination with other high-resolution data sets, including sediment magnetic parameters, XRF core scans, and many other types of data, to provide unique insights into how lithology influences paleoenvironmental and paleomagnetic records and their interpretations.
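The Hounsfield windowing and down-core profile extraction described above can be sketched in a few lines. This is a hypothetical Python/NumPy illustration, not the MATLAB software the abstract describes; the window limits and the air threshold are assumed values.

```python
import numpy as np

def hounsfield_to_grayscale(slice_hu, hu_min=-100.0, hu_max=2000.0):
    """Map a 2D array of Hounsfield units to 8-bit grayscale,
    clipping to a user-defined HU window (as in the described GUI)."""
    clipped = np.clip(slice_hu, hu_min, hu_max)
    scaled = (clipped - hu_min) / (hu_max - hu_min)  # normalize to 0..1
    return (scaled * 255).astype(np.uint8)

def downcore_profile(slice_hu, air_threshold=-500.0):
    """Median HU across each row of a core slice, masking air-filled
    voids (gaps, gas expansion) so they do not bias the profile."""
    masked = np.where(slice_hu > air_threshold, slice_hu, np.nan)
    return np.nanmedian(masked, axis=1)
```

Taking the median across the slice (rather than the mean) is one simple way to stay robust against localized defects such as cracks or deformation at the core liner.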
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, S. A.; Kussmann, J.; Ochsenfeld, C., E-mail: Christian.Ochsenfeld@cup.uni-muenchen.de
2014-08-07
We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPUs). The scaling is reduced from O(N^5) to O(N^3) by a reformulation of the MP2 expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals, in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post-Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU server.
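The step that makes the AO-basis reformulation possible is the Laplace-transform identity 1/Δ = ∫₀^∞ e^(−Δt) dt, which removes the orbital-energy denominator of the MP2 expression. A minimal numerical check of the identity, using simple trapezoidal quadrature on a truncated interval (the grid parameters here are arbitrary assumptions, not the quadratures used in production codes):

```python
import numpy as np

def laplace_inverse_denominator(delta, t_max=100.0, n=2000):
    """Approximate 1/delta via the Laplace identity
    1/delta = integral_0^inf exp(-delta * t) dt,
    truncated at t_max and integrated by the trapezoidal rule."""
    t = np.linspace(0.0, t_max, n)
    y = np.exp(-delta * t)
    return np.sum((y[:-1] + y[1:]) / 2.0) * (t[1] - t[0])
```

In practical Laplace-MP2 implementations the integral is replaced by a handful of optimized quadrature points, which is what decouples the orbital indices and enables the sparse AO-basis factorization the abstract describes.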
Ray Casting of Large Multi-Resolution Volume Datasets
NASA Astrophysics Data System (ADS)
Lux, C.; Fröhlich, B.
2009-04-01
High-quality volume visualization through ray casting on graphics processing units (GPUs) has become an important approach in many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree; these nodes hold the volume at its highest resolution. Coarser resolutions are represented by inner nodes of the hierarchy, which are generated by downsampling eight neighboring nodes on a finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function, and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory hereby acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume, which enables the ray casting algorithm to access the whole working set through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations.
We encode this information into a small 3D index texture which represents the current octree subdivision on its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set in only a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower-precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling, and preintegrated transfer functions in a very straightforward manner with virtually no extra cost. Our interactive volume ray casting implementation allows high-quality visualizations of massive volume data sets of tens of gigabytes in size on standard desktop workstations.
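The coarser-level generation described above (eight neighboring finer-level bricks downsampled into one parent node) can be sketched as follows. This is a hypothetical NumPy illustration of the box-averaging step, not the authors' GPU code:

```python
import numpy as np

def downsample_brick(children):
    """children: a 2x2x2 nested list of equally-sized 3D bricks.
    Returns one parent brick of the same size, one octree level coarser,
    by averaging each 2x2x2 voxel block of the assembled volume."""
    # assemble the eight neighboring bricks into one contiguous volume
    volume = np.block([[[children[i][j][k] for k in range(2)]
                        for j in range(2)] for i in range(2)])
    z, y, x = volume.shape
    # box-average: each output voxel is the mean of a 2x2x2 input block
    return volume.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))
```

Applying this recursively from the leaf bricks upward yields the full multi-resolution hierarchy, with every node (inner or leaf) occupying the same fixed brick size.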
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
... cards, and other commercial printing applications requiring high quality print graphics. Specifically... Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From the People's Republic of China... on certain coated paper suitable for high-quality print graphics using sheet-fed presses (``coated...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-27
... Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From Indonesia: Final Determination of... high-quality print graphics using sheet-fed presses (certain coated paper) from Indonesia is being, or... certain coated paper from Indonesia. See Certain Coated Paper Suitable for High-Quality Print Graphics...
NASA Astrophysics Data System (ADS)
Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki
2016-04-01
The smartphone application developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU) in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.
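The core of such vision-based displacement sensing is locating a known high-contrast target patch in each camera frame. A toy sum-of-squared-differences search in Python/NumPy follows; the RINO app's actual GPU pipeline and color-pattern matching are not described in detail, so the method and names here are illustrative assumptions:

```python
import numpy as np

def locate_target(frame, template):
    """Find the (row, col) of the best match of `template` in `frame`
    by exhaustive sum-of-squared-differences (SSD) search."""
    fh, fw = frame.shape
    th, tw = template.shape
    best_ssd, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            ssd = np.sum((frame[r:r + th, c:c + tw] - template) ** 2)
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos

# displacement in physical units = pixel offset between frames x mm-per-pixel
```

A real-time implementation would restrict the search to a small crop window around the last known position, which is what the user-adjustable crop filter in the abstract suggests.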
State of the art in video system performance
NASA Technical Reports Server (NTRS)
Lewis, Michael J.
1990-01-01
The closed-circuit television (CCTV) system onboard the Space Shuttle comprises a camera, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.
The 1984 ASEE-NASA summer faculty fellowship program (aeronautics and research)
NASA Technical Reports Server (NTRS)
Dah-Nien, F.; Hodge, J. R.; Emad, F. P.
1984-01-01
The 1984 NASA-ASEE Summer Faculty Fellowship Program (SFFP) is reported. The report includes: (1) a list of participants; (2) abstracts of research projects; (3) the seminar schedule; (4) the evaluation questionnaire; and (5) the agenda of visitation by the faculty programs committee. Topics discussed include: effects of multiple scattering on laser beam propagation; information management; computer techniques; guidelines for writing user documentation; 3D graphics software; high energy electron and antiproton cosmic rays; high resolution Fourier transform infrared spectra; average monthly annual zonal and global albedos; laser backscattering from the ocean surface; image processing systems; geomorphological mapping; low redshift quasars; and application of artificial intelligence to command management systems.
Distributed health care imaging information systems
NASA Astrophysics Data System (ADS)
Thompson, Mary R.; Johnston, William E.; Guojun, Jin; Lee, Jason; Tierney, Brian; Terdiman, Joseph F.
1997-05-01
We have developed an ATM network-based system to collect and catalogue cardio-angiogram videos from the source at a Kaiser central facility and make them available for viewing by doctors at primary care Kaiser facilities. This is an example of the general problem of diagnostic data being generated at tertiary facilities, while the images, or other large data objects they produce, need to be used from a variety of other locations such as doctors' offices or local hospitals. We describe the use of a highly distributed computing and storage architecture to provide all aspects of collecting, storing, analyzing, and accessing such large data objects in a metropolitan area ATM network. Our large data-object management system provides the network interface between the object sources, the data management system, and the users of the data. As the data is being stored, a cataloguing system automatically creates and stores condensed versions of the data, textual metadata, and pointers to the original data. The catalogue system provides a Web-based graphical interface to the data. The user is able to view the low-resolution data with a standard Internet connection and Web browser. If high resolution is required, a high-speed connection and special application programs can be used to view the high-resolution original data.
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Putt, C. W.; Giamati, C. C.
1981-01-01
Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer-generated color graphics were found to be useful in reconstructing the measured flow field from low-resolution experimental data, giving more physical meaning to this information, and in scanning and interpreting the large volume of computer-generated data from the three-dimensional viscous computer code used in the analysis.
Mass Defect from Nuclear Physics to Mass Spectral Analysis.
Pourshahian, Soheil
2017-09-01
Mass defect is associated with the binding energy of the nucleus. It is a fundamental property of the nucleus and the principle behind nuclear energy. Mass defect has also entered mass spectrometry terminology with the availability of high-resolution mass spectrometry and has found application in mass spectral analysis, where isobaric masses are differentiated and identified by their mass defect. What is the relationship between nuclear mass defect and the mass defect used in mass spectral analysis, and are they the same?
Reid, Jeffrey C.
1989-01-01
Computer processing and high resolution graphics display of geochemical data were used to quickly, accurately, and efficiently obtain important decision-making information for tin (cassiterite) exploration, Seward Peninsula, Alaska (USA). Primary geochemical dispersion patterns were determined for tin-bearing intrusive granite phases of Late Cretaceous age with exploration bedrock lithogeochemistry at the Kougarok tin prospect. Expensive diamond drilling footage was required to reach exploration objectives. Recognition of element distribution and dispersion patterns was useful in subsurface interpretation and correlation, and to aid location of other holes.
Graphical tools for TV weather presentation
NASA Astrophysics Data System (ADS)
Najman, M.
2010-09-01
Contemporary meteorology and its media presentation face, in my opinion, the following key tasks: - Delivering the meteorological information to the end user/spectator in an understandable and modern fashion that follows the industry standard of video output (HD, 16:9). - Showing, besides weather icons, the outputs of numerical weather prediction models, climatological data, satellite and radar images, and observed weather, as current as possible. - Not compromising the accuracy of the presented data. - The ability to prepare and adjust the weather show according to the actual synoptic situation. - The ability to refocus and completely adjust the weather show to actual extreme weather events. - Ground map resolution for weather data presentation needs to be at least 20 m/pixel to follow the numerical weather prediction model resolution. - The ability to switch between different numerical weather prediction models each day, each show, or even in the middle of one weather show. - The graphical weather software needs to be flexible and fast; graphical changes need to be implementable and airable within minutes before the show, or even live. These tasks are so demanding that the usual approach of custom graphics cannot deal with them: it could not change the show every day, so the shows were static and identical day after day. Changing the content of the weather show daily was costly and most of the time impossible with the usual approach. Development in this area is fast, though, and there are several options for weather-predicting organisations, such as national meteorological offices and private meteorological companies, to solve this problem. What are the ways to solve it? What are the limitations and advantages of contemporary graphical tools for meteorologists? All these questions will be answered.
Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.
Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji
2015-12-01
A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden, so that the acceleration is nearly linear in the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards is up to 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large-dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
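The COCG iteration exploits complex symmetry (A = Aᵀ, not Hermitian) by replacing the Hermitian inner product of standard CG with the unconjugated bilinear form rᵀr. A single-threaded NumPy sketch of the algorithm for reference (the paper's multi-GPU implementation is far more involved; breakdown safeguards are omitted):

```python
import numpy as np

def cocg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate Orthogonal Conjugate Gradient for a complex *symmetric*
    (not Hermitian) matrix A: structurally identical to CG, but all inner
    products use the unconjugated form r.T @ r."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                      # unconjugated: sum(r * r)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (p @ Ap)       # unconjugated bilinear form again
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x
```

Because only one matrix-vector product and a few vector updates are needed per iteration, the method maps naturally onto GPUs, and the symmetric bilinear form halves the storage relative to methods that need both A and Aᴴ products.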
SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.
Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao
2014-08-08
Next-generation sequencers (NGSs) have become one of the main tools of current biology. To obtain useful insights from NGS data, it is essential to control for low-quality portions of the data affected by technical errors such as air bubbles in sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data with a user-friendly graphical user interface (GUI) and interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signals of technical errors during sequencing. The sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function, based on sequence read quality (Phred) scores, was applied to public whole-human-genome sequencing data, and we showed that the overall mapping quality improved. The detailed data evaluation and cleaning enabled by SUGAR would reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. The software will therefore be especially useful for controlling the quality of variant calls for low-population cells, e.g., cancers, in samples with technical errors introduced by sequencing procedures.
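Phred scores, which drive the automated cleaning step, encode the base-call error probability as Q = −10·log₁₀(p). A small self-contained sketch of the score arithmetic and a mean-quality read filter (the threshold and the Sanger FASTQ offset of 33 are common defaults assumed here, not SUGAR's actual parameters):

```python
def phred_to_error_prob(q):
    """Phred score Q = -10*log10(p)  =>  p = 10**(-Q/10)."""
    return 10 ** (-q / 10)

def mean_quality(ascii_qual, offset=33):
    """Mean Phred score of a FASTQ quality string (Sanger encoding)."""
    scores = [ord(c) - offset for c in ascii_qual]
    return sum(scores) / len(scores)

def passes_filter(ascii_qual, min_mean_q=20):
    """Keep a read only if its mean base quality meets the threshold."""
    return mean_quality(ascii_qual) >= min_mean_q
```

A per-read filter like this is the simplest form of quality control; SUGAR's subtile-based approach additionally localizes low-quality regions spatially on the flowcell before removing the affected reads.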
Graphene Oxide as a Novel Evenly Continuous Phase Matrix for TOF-SIMS.
Cai, Lesi; Sheng, Linfeng; Xia, Mengchan; Li, Zhanping; Zhang, Sichun; Zhang, Xinrong; Chen, Hongyuan
2017-03-01
Using a matrix to enhance molecular ion signals for biomolecule identification, without the loss of spatial resolution caused by matrix crystallization, is a great challenge for the application of TOF-SIMS in real-world biological research. In this report, graphene oxide (GO) was used as a matrix for TOF-SIMS to improve the secondary ion yields of intact molecular ions ([M + H]+). Identifying and distinguishing the molecular ions of lipids (m/z > 700) therefore became straightforward. The spatial resolution of TOF-SIMS imaging could also be improved, as GO forms a homogeneous matrix layer instead of crystalline domains, which otherwise prevent high spatial resolution in TOF-SIMS imaging. Lipid mapping in the presence of GO revealed the delicate morphology and distribution of single vesicles with a diameter of 800 nm. On the GO matrix, vesicles with similar shape but different chemical composition could be distinguished using molecular ions. This novel matrix holds potential for applications such as the analysis and imaging of complex biological samples by TOF-SIMS.
GPU-accelerated two dimensional synthetic aperture focusing for photoacoustic microscopy
NASA Astrophysics Data System (ADS)
Liu, Siyu; Feng, Xiaohua; Gao, Fei; Jin, Haoran; Zhang, Ruochong; Luo, Yunqi; Zheng, Yuanjin
2018-02-01
Acoustic-resolution photoacoustic microscopy (AR-PAM) generally suffers from a limited depth of focus, which has been extended by synthetic aperture focusing techniques (SAFTs). However, for three-dimensional AR-PAM, current one-dimensional (1D) SAFT and its improved versions, like cross-shaped SAFT, do not provide isotropic resolution in the lateral direction; the full potential of the SAFT remains to be tapped. To this end, a two-dimensional (2D) SAFT with a fast computing architecture is proposed in this work. As explained by geometric modeling and Fourier acoustics theory, 2D-SAFT provides the narrowest post-focusing capability and thus achieves the best lateral resolution. Compared with previous 1D-SAFT techniques, the proposed 2D-SAFT improved the lateral resolution by at least 1.7 times and the signal-to-noise ratio (SNR) by about 10 dB in both simulation and experiments. Moreover, the improved 2D-SAFT algorithm is accelerated by a graphics processing unit, which reduces the long reconstruction period to only a few seconds. The proposed 2D-SAFT is demonstrated to outperform previously reported 1D SAFT in depth of focus, imaging resolution, and SNR, with fast computational efficiency. This work facilitates future studies on in vivo deeper, high-resolution photoacoustic microscopy beyond several centimeters.
Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy
NASA Astrophysics Data System (ADS)
Ford, Tim N.; Mertz, Jerome
2013-06-01
Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.
Mapping detailed 3D information onto high resolution SAR signatures
NASA Astrophysics Data System (ADS)
Anglberger, H.; Speck, R.
2017-05-01
Due to challenges in the visual interpretation of radar signatures and in the subsequent information extraction, fusion with other data sources can be beneficial. The most accurate basis for fusing any kind of remote sensing data is the mapping of the acquired 2D image space onto the true 3D geometry of the scenery. In the case of radar images this is a challenging task, because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. This paper describes a method that accurately maps the detailed 3D information of a scene to the slant-range-based coordinate system of imaging radars. Through this mapping, all the geometrical parts contributing to one resolution cell can be determined in 3D space. The proposed method is highly efficient, because computationally expensive operations can be performed directly on graphics card hardware. The described approach is an ideal basis for sophisticated methods that extract data from multiple complementary sensors, such as radar and optical images, especially because true 3D information for whole cities will be available in the near future. The performance of the developed methods is demonstrated with high-resolution radar data acquired by the space-borne SAR sensor TerraSAR-X.
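The essence of the mapping is that each 3D scene point collapses to a slant-range bin measured from the sensor, so several points can land in the same resolution cell (layover). A minimal NumPy illustration of that projection (the range-bin origin and spacing are arbitrary assumptions, not the TerraSAR-X geometry):

```python
import numpy as np

def to_slant_range_bin(points, sensor_pos, r0, dr):
    """Map 3D scene points (N, 3) to slant-range bin indices for a radar
    at sensor_pos: range = |P - S|. Several 3D points mapping to the same
    bin is exactly the layover ambiguity the abstract describes."""
    ranges = np.linalg.norm(points - sensor_pos, axis=1)
    return np.round((ranges - r0) / dr).astype(int)
```

Inverting this many-to-one mapping is what requires the detailed 3D model: only with known scene geometry can all contributors to one range cell be enumerated in 3D space.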
G.A.M.E.: GPU-accelerated mixture elucidator.
Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J
2017-09-15
GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. This elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits of input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures in practical procedures.
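The pseudo-polynomial DP referred to above is essentially an integerized subset-sum/composition count: masses are scaled by 10^d, and a table proportional to the scaled target mass is filled, which is why each extra decimal digit multiplies the cost by ten. A toy single-threaded sketch (the fragment masses and the plain counting variant are illustrative assumptions, not G.A.M.E.'s actual scaffold/sidechain model):

```python
def count_compositions(target_mass, fragment_masses, decimals=3):
    """Count the ways target_mass can be written as a sum of fragment
    masses (with repetition), after scaling to integers with `decimals`
    decimal digits. Table size, and hence runtime, grows 10x per digit."""
    scale = 10 ** decimals
    target = round(target_mass * scale)
    frags = [round(m * scale) for m in fragment_masses]
    ways = [0] * (target + 1)
    ways[0] = 1
    for f in frags:                     # unbounded: each fragment reusable
        for s in range(f, target + 1):
            ways[s] += ways[s - f]
    return ways[target]
```

The inner loop over table entries is independent enough to parallelize well, which is the property a CUDA implementation exploits.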
NASA Astrophysics Data System (ADS)
Beria, H.; Nanda, T., Sr.; Chatterjee, C.
2015-12-01
High-resolution satellite precipitation products, such as the Tropical Rainfall Measuring Mission (TRMM), Climate Forecast System Reanalysis (CFSR), and European Centre for Medium-Range Weather Forecasts (ECMWF) products, offer a promising alternative for flood forecasting in data-scarce regions. At the current state of the art, these products cannot be used in raw form for flood forecasting, even at small lead times. In the current study, these precipitation products are bias-corrected using statistical techniques, such as additive and multiplicative bias corrections and wavelet multi-resolution analysis (MRA), against the India Meteorological Department (IMD) gridded precipitation product obtained from gauge-based rainfall estimates. Neural network-based rainfall-runoff modeling using these bias-corrected products provides encouraging results for flood forecasting up to 48 hours of lead time. We will present various statistical and graphical interpretations of catchment response to high rainfall events using both the raw and bias-corrected precipitation products at different lead times.
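The two simplest corrections named above match the satellite series to the gauge-based (IMD) reference in the mean, either by shifting or by scaling. A minimal sketch (the wavelet-MRA correction is considerably more involved and omitted here):

```python
import numpy as np

def additive_bias_correction(sat, gauge):
    """Shift the satellite series by the mean gauge-satellite difference."""
    return sat + (np.mean(gauge) - np.mean(sat))

def multiplicative_bias_correction(sat, gauge):
    """Scale the satellite series so its mean matches the gauge mean."""
    return sat * (np.mean(gauge) / np.mean(sat))
```

Multiplicative correction is usually preferred for precipitation because it cannot produce negative rainfall, whereas the additive shift can.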
Diserens, Gaëlle; Vermathen, Martina; Gjuroski, Ilche; Eggimann, Sandra; Precht, Christina; Boesch, Chris; Vermathen, Peter
2016-08-01
The study aim was to unambiguously assign nucleotide sugars, mainly UDP-X, which are known to be important as sugar donors in glycosylation processes, and glucose phosphates, which are important intermediate metabolites for the storage and transfer of energy, directly in spectra of intact cells as well as in skeletal muscle biopsies by (1)H high-resolution magic-angle-spinning (HR-MAS) NMR. The results demonstrate that sugar phosphates can be determined quickly and non-destructively in cells and biopsies by HR-MAS, which may prove valuable considering the importance of phosphate sugars in cell metabolism for nucleic acid synthesis. As proof of principle, an example of phosphate-sugar reaction and degradation kinetics after unfreezing the sample is shown for a cardiac muscle, suggesting the possibility of following some metabolic pathways by HR-MAS NMR. Graphical abstract: Glucose-phosphate sugars (Glc-1P and Glc-6P) detected in muscle by 1H HR-MAS NMR.
Aoyagi, Satoka; Abe, Kiyoshi; Yamagishi, Takayuki; Iwai, Hideo; Yamaguchi, Satoru; Sunohara, Takashi
2017-11-01
Blood adsorption onto the inside surface of hollow fiber dialysis membranes was investigated by means of time-of-flight secondary ion mass spectrometry (TOF-SIMS) and near-field infrared microscopy (NFIR) in order to evaluate the biocompatibility and permeability of dialysis membranes. TOF-SIMS is useful for imaging particular molecules with a high spatial resolution of approximately 100 nm. In contrast, infrared spectra provide quantitative information, and NFIR enables analysis with a high spatial resolution of less than 1 μm, which is close to the resolution of TOF-SIMS. A comparison was made between one of the most widely used dialysis membranes, made of polysulfone (PSf), which has an asymmetric and inhomogeneous pore structure, and a newly developed asymmetric cellulose triacetate (ATA) membrane that also has an asymmetric pore structure, even though the conventional cellulose triacetate membrane has a symmetric and homogeneous pore structure. As a result, it was demonstrated that blood adsorption on the inside surface of the ATA membrane is reduced compared with that on the PSf membrane. Graphical abstract: Analysis of blood adsorption on the inside surface of a hollow fiber membrane.
GPUs benchmarking in subpixel image registration algorithm
NASA Astrophysics Data System (ADS)
Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier
2015-05-01
Image registration techniques are used across different scientific fields, like medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is to use cross correlation, taking the highest value of the correlation image. The shift resolution is then given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique as before, but the memory needed by the system is significantly higher. To avoid this memory consumption we are implementing a subpixel shifting method based on the FFT. With the original images, a subpixel shift can be achieved by multiplying the discrete Fourier transform by linear phases with different slopes. This method is highly time consuming, because checking each candidate shift means new calculations. The algorithm, however, is highly parallelizable and very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by doing a first approach with FFT-based correlation, and then doing the subpixel approach using the technique described before. We consider this a `brute force' method. We will present a benchmark of the algorithm consisting of a first approach (pixel resolution) followed by subpixel refinement, decreasing the shifting step in every loop and achieving high resolution in a few steps. The program was executed on three different computers. At the end, we will present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
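The two ingredients of the benchmark, FFT-based integer-pixel correlation and subpixel shifting via a linear spectral phase, can be sketched as follows. This is a NumPy illustration of the principle, not the authors' GPU code:

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift an image by (dy, dx), possibly subpixel, by multiplying its
    DFT with a linear phase ramp -- the slope encodes the shift."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def estimate_shift(a, b):
    """Integer-pixel shift s such that b ~= np.roll(a, s), found from the
    peak of the FFT-based circular cross correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    return tuple(int((-p) % n) for p, n in zip(peak, a.shape))
```

The "brute force" refinement then applies `fourier_shift` with progressively smaller candidate steps around the integer estimate, keeping the shift that maximizes the similarity between the two images.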
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vijayan, S; Rana, V; Setlur Nagesh, S
2014-06-15
Purpose: Our real-time skin dose tracking system (DTS) has been upgraded to monitor dose for the micro-angiographic fluoroscope (MAF), a high-resolution, small field-of-view x-ray detector. Methods: The MAF has been mounted on a changer on a clinical C-Arm gantry so it can be used interchangeably with the standard flat-panel detector (FPD) during neuro-interventional procedures when high resolution is needed in a region-of-interest. To monitor patient skin dose when using the MAF, our DTS has been modified to automatically account for the change in scatter for the very small MAF FOV and to provide separated dose distributions for each detector. The DTS is able to provide a color-coded mapping of the cumulative skin dose on a 3D graphic model of the patient. To determine the correct entrance skin exposure to be applied by the DTS, a correction factor was determined by measuring the exposure at the entrance surface of a skull phantom with an ionization chamber as a function of entrance beam size for various beam filters and kVps. Entrance exposure measurements included primary radiation, patient backscatter and table forward scatter. To allow separation of the dose from each detector, a parameter log is kept that allows a replay of the procedure exposure events and recalculation of the dose components. The graphic display can then be constructed showing the dose distribution from the MAF and FPD separately or together. Results: The DTS is able to provide separate displays of dose for the MAF and FPD with field-size specific scatter corrections. These measured corrections change from about 49% down to 10% when changing from the FPD to the MAF. Conclusion: The upgraded DTS allows identification of the patient skin dose delivered when using each detector in order to achieve improved dose management as well as to facilitate peak skin-dose reduction through dose spreading.
Research supported in part by Toshiba Medical Systems Corporation and NIH Grants R43FD0158401, R44FD0158402 and R01EB002873.
A high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen
NASA Astrophysics Data System (ADS)
Luo, ZhiJie; Luo, JianKun; Zhao, WenWen; Cao, Yang; Lin, WeiJie; Zhou, GuoFu
2018-02-01
Electrowetting display technology is realized by tuning the surface energy of a hydrophobic surface through an applied voltage, based on the electrowetting mechanism. With the rapid development of the electrowetting industry, how to efficiently analyze the quality of an electrowetting display screen is of great significance. There are two kinds of dead pixels on an electrowetting display screen. In one, the oil of the pixel cannot completely cover the display area. In the other, the indium tin oxide wire connecting the pixel and the foil has been burned. In this paper, we propose a high-resolution and intelligent dead pixel detection scheme for an electrowetting display screen. First, we built an aperture ratio-capacitance model based on the electrical characteristics of electrowetting displays. A field-programmable gate array is used as the integrated logic hub of the system for highly reliable and efficient control of the circuit. Dead pixels can be detected and displayed on a PC-based 2D graphical interface in real time. The proposed dead pixel detection scheme reported in this work holds promise for automating electrowetting display experiments.
ERIC Educational Resources Information Center
Poehls, Eddie; And Others
This course guide for a graphic arts course is one of four developed for the graphic communications area in the North Dakota senior high industrial arts education program. (Eight other guides are available for two other areas of Industrial Arts--energy/power and production.) Part 1 provides such introductory information as a definition and…
Use of ground-penetrating radar techniques in archaeological investigations
NASA Technical Reports Server (NTRS)
Doolittle, James A.; Miller, W. Frank
1991-01-01
Ground-penetrating radar (GPR) techniques are increasingly being used to aid reconnaissance and pre-excavation surveys at many archaeological sites. As a 'remote sensing' tool, GPR provides a high resolution graphic profile of the subsurface. Radar profiles are used to detect, identify, and locate buried artifacts. Ground-penetrating radar provides a rapid, cost effective, and nondestructive method for identification and location analyses. The GPR can be used to facilitate excavation strategies, provide greater areal coverage per unit time and cost, minimize the number of unsuccessful exploratory excavations, and reduce unnecessary or unproductive expenditures of time and effort.
Extracting microtubule networks from superresolution single-molecule localization microscopy data
Zhang, Zhen; Nishimura, Yukako; Kanchanawong, Pakorn
2017-01-01
Microtubule filaments form ubiquitous networks that specify spatial organization in cells. However, quantitative analysis of microtubule networks is hampered by their complex architecture, limiting insights into the interplay between their organization and cellular functions. Although superresolution microscopy has greatly facilitated high-resolution imaging of microtubule filaments, extraction of complete filament networks from such data sets is challenging. Here we describe a computational tool for automated retrieval of microtubule filaments from single-molecule-localization–based superresolution microscopy images. We present a user-friendly, graphically interfaced implementation and a quantitative analysis of microtubule network architecture phenotypes in fibroblasts. PMID:27852898
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
Analysis of the upper massif of the craniofacial with the radial method – practical use
Lepich, Tomasz; Dąbek, Józefa; Stompel, Daniel; Gielecki, Jerzy S.
2011-01-01
Introduction The analysis of the upper massif of the craniofacial (UMC) is widely used in many fields of science. The aim of the study was to create a high-resolution computer system, based on a digital information record and on vector graphics, that could enable dimension measuring and evaluation of craniofacial shape using the radial method. Material and methods The study was carried out on 184 skulls from the early middle ages, all in a good state of preservation. The examined skulls were fixed into Molisson's craniostat in the author's own modification. They were oriented in space toward the Frankfurt plane and photographed in frontal norm with a digital camera. The parameters describing the planar and dimensional structure of the UMC and orbits were obtained through computer analysis of the recordings picturing the craniofacial structures, using software combining raster graphics with vector graphics. Results Mean values of both orbits were compared separately for the male and female groups. In female skulls the comparison of the left and right sides did not show statistically significant differences. In the male group, higher values were observed for the right side; only the circularity index presented higher values for the left side. Conclusions Computer graphics, with the software used for analysing digital pictures of the UMC and orbits, increase the precision of measurements as well as the calculation possibilities. Recognition of the face in the post mortem examination is crucial for those working on identification in anthropology and criminology laboratories. PMID:22291834
Analysis of impact of general-purpose graphics processor units in supersonic flow modeling
NASA Astrophysics Data System (ADS)
Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.
2017-06-01
Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPU) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations with both high performance and low cost. Possibilities for the use of GPUs in the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered, and the speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is measured. Performance measurements show that the numerical schemes developed achieve a 20-50× speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
VennDIS: a JavaFX-based Venn and Euler diagram software to generate publication quality figures.
Ignatchenko, Vladimir; Ignatchenko, Alexandr; Sinha, Ankit; Boutros, Paul C; Kislinger, Thomas
2015-04-01
Venn diagrams are graphical representations of the relationships among multiple sets of objects and are often used to illustrate similarities and differences among genomic and proteomic datasets. All currently existing tools for producing Venn diagrams evince one of two shortcomings: they either require expertise in specific statistical software packages (such as R) or lack the flexibility required to produce publication-quality figures. We describe a simple tool that addresses both shortcomings, Venn Diagram Interactive Software (VennDIS), a JavaFX-based solution for producing highly customizable, publication-quality Venn and Euler diagrams of up to five sets. The strengths of VennDIS are its simple graphical user interface and its large array of customization options, including the ability to modify attributes such as font, style and position of the labels, background color, size of the circle/ellipse, and outline color. It is platform independent and provides real-time visualization of figure modifications. The created figures can be saved as XML files for future modification or exported as high-resolution images for direct use in publications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Point Analysis in Java applied to histological images of the perforant pathway: a user's account.
Scorcioni, Ruggero; Wright, Susan N; Patrick Card, J; Ascoli, Giorgio A; Barrionuevo, Germán
2008-01-01
The freeware Java tool Point Analysis in Java (PAJ), created to perform 3D point analysis, was tested in an independent laboratory setting. The input data consisted of images of the hippocampal perforant pathway from serial immunocytochemical localizations of the rat brain in multiple views at different resolutions. The low magnification set (x2 objective) comprised the entire perforant pathway, while the high magnification set (x100 objective) allowed the identification of individual fibers. A preliminary stereological study revealed a striking linear relationship between the fiber count at high magnification and the optical density at low magnification. PAJ enabled fast analysis for down-sampled data sets and a friendly interface with automated plot drawings. Noted strengths included the multi-platform support as well as the free availability of the source code, conducive to a broad user base and maximum flexibility for ad hoc requirements. PAJ has great potential to extend its usability by (a) improving its graphical user interface, (b) increasing its input size limit, (c) improving response time for large data sets, and (d) potentially being integrated with other Java graphical tools such as ImageJ.
A GPU-based incompressible Navier-Stokes solver on moving overset grids
NASA Astrophysics Data System (ADS)
Chandar, Dominic D. J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.
2013-07-01
In pursuit of obtaining high fidelity solutions to the fluid flow equations in a short span of time, graphics processing units (GPUs), which were originally intended for gaming applications, are currently being used to accelerate computational fluid dynamics (CFD) codes. With a high peak throughput of about 1 TFLOPS on a PC, GPUs seem to be favourable for many high-resolution computations. One such computation that involves a lot of number crunching is computing time-accurate flow solutions past moving bodies. The aim of the present paper is thus to discuss the development of a flow solver on unstructured and overset grids and its implementation on GPUs. In its present form, the flow solver solves the incompressible fluid flow equations on unstructured/hybrid/overset grids using a fully implicit projection method. The resulting discretised equations are solved using a matrix-free Krylov solver with several GPU kernels such as gradient, Laplacian and reduction. Some of the simple arithmetic vector calculations are implemented using the CU++ approach (CU++: An Object Oriented Framework for Computational Fluid Dynamics Applications using Graphics Processing Units, Journal of Supercomputing, 2013, doi:10.1007/s11227-013-0985-9), in which GPU kernels are automatically generated at compile time. Results are presented for two- and three-dimensional computations on static and moving grids.
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.
2014-12-01
Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
A Hybrid CPU-GPU Accelerated Framework for Fast Mapping of High-Resolution Human Brain Connectome
Ren, Ling; Xu, Mo; Xie, Teng; Gong, Gaolang; Xu, Ningyi; Yang, Huazhong; He, Yong
2013-01-01
Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between each pair of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graph properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process cost 80-150 minutes per network, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states. PMID:23675425
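The core of the described pipeline (Pearson correlations between voxel time series, then binarizing at a threshold chosen to hit a target sparsity) can be sketched at toy scale in NumPy. The function name and the tie-free thresholding rule are assumptions for illustration, not the authors' code:

```python
import numpy as np

def build_network(ts, sparsity):
    """Build an unweighted, undirected functional network from voxel
    time series (rows of `ts`), keeping the strongest Pearson
    correlations so that the edge density equals `sparsity`."""
    r = np.corrcoef(ts)                    # n_voxels x n_voxels
    np.fill_diagonal(r, -np.inf)           # exclude self-connections
    iu = np.triu_indices_from(r, k=1)
    n_edges = int(round(sparsity * iu[0].size))
    if n_edges == 0:
        return np.zeros(r.shape, dtype=np.uint8)
    thresh = np.sort(r[iu])[::-1][n_edges - 1]
    keep = np.triu(r >= thresh, k=1)       # strongest pairs, upper triangle
    return (keep | keep.T).astype(np.uint8)
```

At the paper's scale (58 k nodes) the correlation matrix alone is several gigabytes, which is exactly the workload pushed to the GPU in the hybrid framework.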
An Integrated Coastal Observation and Flood Warning System: Rapid Prototype Development
2006-09-01
And Ranging (LIDAR) tiles describing the area of interest are critical to the accuracy of the associated graphical representations of the inundation...Elevation Dataset (NED) with 30-meter resolution for the upper Potomac area and USGS 0.3-meter resolution orthophotos for viewing when zoomed down...on the areas of interest. Using orthophotos is much easier than trying to recreate the landscape with point, line, and polygon features, and it
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
Powars, David S.; Catchings, Rufus D.; Goldman, Mark R.; Gohn, Gregory S.; Horton, J. Wright; Edwards, Lucy E.; Rymer, Michael J.; Gandhok, Gini
2009-01-01
The U.S. Geological Survey (USGS) acquired two 1.4-km-long, high-resolution (~5 m vertical resolution) seismic-reflection lines in 2006 that cross near the International Continental Scientific Drilling Program (ICDP)-USGS Eyreville deep drilling site located above the late Eocene Chesapeake Bay impact structure in Virginia, USA. Five-meter spacing of seismic sources and geophones produced high-resolution images of the subsurface adjacent to the 1766-m-depth Eyreville core holes. Analysis of these lines, in the context of the core hole stratigraphy, shows that moderate-amplitude, discontinuous, dipping reflections below ~527 m correlate with a variety of Chesapeake Bay impact structure sediment and rock breccias recovered in the cores. High-amplitude, continuous, subhorizontal reflections above ~527 m depth correlate with the uppermost part of the Chesapeake Bay impact structure crater-fill sediments and postimpact Eocene to Pleistocene sediments. Reflections with ~20-30 m of relief in the uppermost part of the crater-fill and lowermost part of the postimpact section suggest differential compaction of the crater-fill materials during early postimpact time. The top of the crater-fill section also shows ~20 m of relief that appears to represent an original synimpact surface. Truncation surfaces, locally dipping reflections, and depth variations in reflection amplitudes generally correlate with the lithostratigraphic and sequence-stratigraphic units and contacts in the core. Seismic images show apparent postimpact paleochannels that include the first possible Miocene paleochannels in the Mid-Atlantic Coastal Plain. Broad downwarping in the postimpact section unrelated to structures in the crater fill indicates postimpact sediment compaction.
Integrating Commercial Off-The-Shelf (COTS) graphics and extended memory packages with CLIPS
NASA Technical Reports Server (NTRS)
Callegari, Andres C.
1990-01-01
This paper addresses the question of how to mix CLIPS with graphics and how to overcome the PC's memory limitations by using the extended memory available in the computer. By adding graphics and extended memory capabilities, CLIPS can be converted into a complete and powerful system development tool on the most economical and popular computer platform. New models of PCs have processing capabilities and graphics resolutions that cannot be ignored and should be used to the fullest. CLIPS is a powerful expert system development tool, but it cannot be complete without the support of a graphics package needed to create user interfaces and general-purpose graphics, or without enough memory to handle large knowledge bases. A well-known limitation on the PC is its use of real memory, which restricts CLIPS to only 640 KB; that problem can be solved by developing a version of CLIPS that uses extended memory. The user then has access to up to 16 MB of memory on 80286-based computers and practically all the available memory (4 GB) on computers that use the 80386 processor. If we give CLIPS a self-configuring graphics package that automatically detects the graphics hardware and pointing device present in the computer, and we add the availability of the extended memory that exists in the computer (with no special hardware needed), the user will be able to create more powerful systems at a fraction of the cost and on the most popular, portable, and economical platform available, the PC.
Development and application of GIS-based PRISM integration through a plugin approach
NASA Astrophysics Data System (ADS)
Lee, Woo-Seop; Chun, Jong Ahn; Kang, Kwangmin
2014-05-01
A PRISM (Parameter-elevation Regressions on Independent Slopes Model) QGIS plugin was developed on the Quantum GIS platform in this study. The plugin provides user-friendly graphical user interfaces (GUIs) so that users can obtain gridded meteorological data at high resolution (1 km × 1 km). The software is designed to run on a personal computer, so it requires neither internet access nor a sophisticated computer system, and it lets a user generate PRISM data with ease. The proposed PRISM QGIS plugin is a hybrid statistical-geographic model system that uses coarse-resolution datasets (APHRODITE datasets in this study) together with digital elevation data to generate fine-resolution gridded precipitation. To validate the performance of the software, the Prek Thnot River Basin in Kandal, Cambodia was selected for application. Overall, statistical analysis shows promising outputs generated by the proposed plugin. Error measures such as RMSE (root mean square error) and MAPE (mean absolute percentage error) were used to evaluate the performance of the developed PRISM QGIS plugin; the evaluation results were 2.76 mm (RMSE) and 4.2% (MAPE), respectively. This study suggests that the plugin can be used to generate high-resolution precipitation datasets for hydrological and climatological studies at watersheds where observed weather datasets are limited.
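The two error measures used in the evaluation are standard; a minimal sketch in NumPy, assuming paired arrays of observed and estimated precipitation (the function names are mine):

```python
import numpy as np

def rmse(obs, est):
    """Root mean square error between observed and estimated values."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def mape(obs, est):
    """Mean absolute percentage error; observed values must be nonzero."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    return float(np.mean(np.abs((est - obs) / obs)) * 100.0)
```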
NASA Astrophysics Data System (ADS)
Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.
2017-02-01
Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 × 1 × 0.6 mm³, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.
46 CFR 78.01-2 - Incorporation by reference.
Code of Federal Regulations, 2011 CFR
2011-10-01
...), and at the U.S. Coast Guard, Lifesaving and Fire Safety Division (CG-5214), 2100 2nd St., SW., Stop... Kingdom. Resolution A.654(16), Graphical Symbols for Fire Control Plans—78.45-1 [CGD 95-028, 62 FR 51204...
46 CFR 78.01-2 - Incorporation by reference.
Code of Federal Regulations, 2012 CFR
2012-10-01
...), and at the U.S. Coast Guard, Lifesaving and Fire Safety Division (CG-ENG-4), 2100 2nd St., SW., Stop... Kingdom. Resolution A.654(16), Graphical Symbols for Fire Control Plans—78.45-1 [CGD 95-028, 62 FR 51204...
46 CFR 78.01-2 - Incorporation by reference.
Code of Federal Regulations, 2010 CFR
2010-10-01
...), and at the U.S. Coast Guard, Lifesaving and Fire Safety Division (CG-5214), 2100 2nd St., SW., Stop... Kingdom. Resolution A.654(16), Graphical Symbols for Fire Control Plans—78.45-1 [CGD 95-028, 62 FR 51204...
Now I "See": The Impact of Graphic Novels on Reading Comprehension in High School English Classrooms
ERIC Educational Resources Information Center
Cook, Mike P.
2017-01-01
Few empirical studies have been conducted to investigate the educational uses of graphic novels. Because of this, misconceptions and stereotypes exist. This article presents findings from a study examining the effects of graphic novels on high school students' (N = 217) reading comprehension. A graphic adaptation of a traditionally taught text…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-09
... Suitable For High-Quality Print Graphics Using Sheet-Fed Presses from the People's Republic of China...-quality print graphics using sheet-fed presses from the People's Republic of China (``PRC''). For... Graphics Using Sheet-Fed Presses from the People's Republic of China: Initiation of Countervailing Duty...
An Early-Warning System for Volcanic Ash Dispersal: The MAFALDA Procedure
NASA Astrophysics Data System (ADS)
Barsotti, S.; Nannipieri, L.; Neri, A.
2006-12-01
Forecasting the dispersal of volcanic ash is a fundamental goal in order to mitigate its potential impact on urbanized areas and transport routes surrounding explosive volcanoes. To this aim we developed an early-warning procedure named MAFALDA (Modeling And Forecasting Ash Loading and Dispersal in the Atmosphere). The tool is able to quantitatively forecast the atmospheric concentration of ash as well as the ground deposition as a function of time over a 3D spatial domain. The main features of MAFALDA are: (1) the use of the hybrid Lagrangian-Eulerian code VOL-CALPUFF, able to describe both the rising column phase and the atmospheric dispersal as a function of weather conditions; (2) the use of high-resolution weather forecasting data; (3) the short execution time, which allows a set of scenarios to be analysed; and (4) the web-based CGI application (written in the Perl programming language) that shows the results in a standard graphical web interface and makes the procedure suitable as an early-warning system during volcanic crises. MAFALDA is composed of a computational part that simulates the ash cloud dynamics and a graphical interface for visualizing the modelling results. The computational part includes the codes for elaborating the meteorological data, the dispersal code and the post-processing programs. These produce hourly 2D maps of aerial ash concentration at several vertical levels, the extension of the "threat" area in the air and 2D maps of ash deposit on the ground, in addition to graphs of hourly variations of column height. The processed results are made available on the web through the graphical interface, and users can choose, from a drop-down menu, which data to visualize. A first partial application of the procedure has been carried out for Mt. Etna (Italy). In this case, the procedure simulates four volcanological scenarios characterized by different plume intensities and uses 48-hr weather forecasting data with a resolution of 7 km provided by the Italian Air Force.
Using an analytical geometry method to improve tiltmeter data presentation
Su, W.-J.
2000-01-01
The tiltmeter is a useful tool for geologic and geotechnical applications. To obtain full benefit from the tiltmeter, easy and accurate data presentations should be used. Unfortunately, the most commonly used method for tilt data reduction now may yield inaccurate and low-resolution results. This article describes a simple, accurate, and high-resolution approach developed at the Illinois State Geological Survey for data reduction and presentation. The orientation of tiltplates is determined first by using a trigonometric relationship, followed by a matrix transformation, to obtain the true amount of rotation change of the tiltplate at any given time. The mathematical derivations used for the determination and transformation are then coded into an integrated PC application by adapting the capabilities of commercial spreadsheet, database, and graphics software. Examples of data presentation from tiltmeter applications in studies of landfill covers, characterizations of mine subsidence, and investigations of slope stability are also discussed.
Towards real-time image deconvolution: application to confocal and STED microscopy
Zanella, R.; Zanghirati, G.; Cavicchioli, R.; Zanni, L.; Boccacci, P.; Bertero, M.; Vicidomini, G.
2013-01-01
Although deconvolution can improve the quality of images from any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the algorithms most used in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve speed-up factors of about 25 to 690 with respect to the conventional algorithm. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Further, real-time processing preserves one of the most important properties of STED microscopy, i.e. the ability to provide fast sub-diffraction resolution recordings. PMID:23982127
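A minimal sketch of the gradient-projection idea for Poisson (Richardson-Lucy type) deblurring is given below. All names and the fixed unit step length are assumptions: with the classical scaling D = x and step 1 the iteration reduces to Richardson-Lucy, whereas the paper's SGP adds adaptive step-length and scaling rules that are omitted here.

```python
import numpy as np

def sgp_deconvolve(y, psf_fft, n_iter=50):
    """Scaled gradient projection sketch for deblurring under the
    Kullback-Leibler (Poisson) objective: x <- P_+[x - alpha * D * grad],
    with scaling D = diag(x) and alpha = 1 (illustrative choices)."""
    x = np.full_like(y, y.mean())             # flat nonnegative start
    ones = np.ones_like(y)
    # H^T 1, needed for the KL gradient (H applied via FFT convolution)
    Ht_ones = np.real(np.fft.ifft2(np.conj(psf_fft) * np.fft.fft2(ones)))
    for _ in range(n_iter):
        Hx = np.real(np.fft.ifft2(psf_fft * np.fft.fft2(x)))
        ratio = y / np.maximum(Hx, 1e-12)
        grad = Ht_ones - np.real(np.fft.ifft2(np.conj(psf_fft) * np.fft.fft2(ratio)))
        x = np.maximum(x - x * grad, 0.0)     # scaled step + projection onto x >= 0
    return x
```

With a delta-function PSF (all-ones transfer function) the iteration recovers the data exactly in one step, a quick sanity check on the gradient.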
Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L
2004-02-09
Array comparative genomic hybridization (CGH) is a technique which detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome has made high resolution copy number analysis throughout the genome possible. Since array CGH provides a signal ratio for each DNA segment, visualization requires the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments to displays of high resolution chromosome profiles. Data is imported from a simple tab-delimited text file obtained from standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data is displayed, users have the option of hiding or flagging DNA segments based on user-defined criteria, and of retrieving annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH represents a novel software tool used to view and analyze array CGH data. The software gives users the ability to view the data in an overall genomic view as well as magnify specific chromosomal regions, facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
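The replicate-averaging step SeeGH performs on import can be sketched as below; the clone names and input shape are hypothetical, and SeeGH's database layer is omitted.

```python
from statistics import mean, stdev

def summarize_replicates(spots):
    """Collapse replicate spot ratios into one (mean, stdev) pair per clone,
    mirroring the per-segment averaging SeeGH does before display.
    `spots` maps a clone name to its list of signal ratios from the
    microarray image analysis software (sample stdev; 0.0 for singletons)."""
    summary = {}
    for clone, ratios in spots.items():
        summary[clone] = (mean(ratios),
                          stdev(ratios) if len(ratios) > 1 else 0.0)
    return summary
```

The (mean, stdev) pairs are what a viewer would plot against chromosome position, with the stdev available for flagging noisy segments.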
Joseph, Arun A; Kalentev, Oleksandr; Merboldt, Klaus-Dietmar; Voit, Dirk; Roeloffs, Volkert B; van Zalk, Maaike; Frahm, Jens
2016-01-01
Objective: To develop a novel method for rapid myocardial T1 mapping at high spatial resolution. Methods: The proposed strategy represents a single-shot inversion recovery experiment triggered to early diastole during a brief breath-hold. The measurement combines an adiabatic inversion pulse with a real-time readout by highly undersampled radial FLASH, iterative image reconstruction and T1 fitting with automatic deletion of systolic frames. The method was implemented on a 3-T MRI system using a graphics processing unit-equipped bypass computer for online application. Validations employed a T1 reference phantom including analyses at simulated heart rates from 40 to 100 beats per minute. In vivo applications involved myocardial T1 mapping in short-axis views of healthy young volunteers. Results: At 1-mm in-plane resolution and 6-mm section thickness, the inversion recovery measurement could be shortened to 3 s without compromising T1 quantitation. Phantom studies demonstrated T1 accuracy and high precision for values ranging from 300 to 1500 ms and up to a heart rate of 100 beats per minute. Similar results were obtained in vivo yielding septal T1 values of 1246 ± 24 ms (base), 1256 ± 33 ms (mid-ventricular) and 1288 ± 30 ms (apex), respectively (mean ± standard deviation, n = 6). Conclusion: Diastolic myocardial T1 mapping with use of single-shot inversion recovery FLASH offers high spatial resolution, T1 accuracy and precision, and practical robustness and speed. Advances in knowledge: The proposed method will be beneficial for clinical applications relying on native and post-contrast T1 quantitation. PMID:27759423
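The T1 fitting step can be illustrated with the standard three-parameter inversion-recovery model S(t) = A - B·exp(-t/T1*) followed by the Look-Locker correction T1 = T1*·(B/A - 1). The coarse grid-search solver below is a stand-in, not the authors' reconstruction pipeline; units are milliseconds.

```python
import numpy as np

def fit_t1(times, signal):
    """Three-parameter IR fit S(t) = A - B*exp(-t/T1s), then Look-Locker
    correction T1 = T1s*(B/A - 1). For each candidate T1s (1-ms grid),
    A and B follow from a linear least-squares solve; the best residual wins.
    A sketch: real pipelines use a proper nonlinear solver."""
    best = None
    for t1s in np.linspace(50.0, 3000.0, 2951):
        basis = np.column_stack([np.ones_like(times), np.exp(-times / t1s)])
        coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
        err = np.sum((basis @ coef - signal) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], -coef[1], t1s)   # (residual, A, B, T1*)
    _, A, B, t1s = best
    return t1s * (B / A - 1.0)
```

For an ideal inversion (B = 2A) the correction leaves T1 = T1*, which makes synthetic data easy to verify.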
Scanner observations of hot helium-carbon stars.
NASA Technical Reports Server (NTRS)
Fay, T.; Honeycutt, R. K.; Warren, W. H., Jr.
1973-01-01
Photoelectric spectral scans at 20 A resolution of four hot helium-carbon-rich stars have been reduced to fluxes and are presented in graphical form. Similar flux curves for several normal (hydrogen-rich) stars in the same temperature range are presented for comparison.
Optical Fiber In The Loop: Features And Applications
NASA Astrophysics Data System (ADS)
Shariati, Ross
1986-01-01
It is expected that there will be various demands for digital capacity, ranging from a few kilobits per second for services such as facsimile, data entry, and audio and graphics for teleconferencing, to about 56 kb/s for electronic mail and integrated workstations, and higher speeds for cable television, high-resolution TV, and computer-aided engineering. Fiber optics has proven in from an economic standpoint for providing the above-mentioned services. This is primarily because, in less than five years, optical line rates have leaped from 45 Mb/s to gigabit rates, reducing the cost per DS3 of capacity, while the price of high-quality fiber cable has dropped sharply.
An Assessment of Gigabit Ethernet Technology and Its Applications at the NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Bakes, Catherine Murphy; Kim, Chan M.; Ramos, Calvin T.
2000-01-01
This paper describes Gigabit Ethernet and its role in supporting R&D programs at NASA Glenn. These programs require an advanced high-speed network capable of transporting multimedia traffic, including real-time visualization, high-resolution graphics, and scientific data. GigE is a 1 Gbps extension to 10 and 100 Mbps Ethernet. The IEEE 802.3z and 802.3ab standards define the MAC layer and 1000BASE-X and 1000BASE-T physical layer specifications for GigE. GigE switches and buffered distributors support IEEE 802.3x flow control. The paper also compares GigE with ATM in terms of quality of service, data rate, throughput, scalability, interoperability, network management, and cost of ownership.
Computation of nonstationary strong shock diffraction by curved surfaces
NASA Technical Reports Server (NTRS)
Yang, J. Y.; Lombard, C. K.; Bershader, D.
1986-01-01
A two-dimensional, high resolution shock-capturing algorithm was used on a supercomputer to solve Eulerian gasdynamic equations in order to simulate nonstationary strong shock diffraction by a circular arc model in a shock tube. The hypersonic Mach shock wave was assumed to arrive at a high angle of incidence, and attention was given to the effect of varying values of the ratio of specific heats on the shock diffraction process. Details of the conservation equations of the numerical algorithm, written in curvilinear coordinates, are provided, and model output is illustrated with the results generated for a Mach shock encountering a 15 deg circular arc. The sample graphics include isopycnics, a shock surface density profile, and pressure and Mach number contours.
Körsgen, Martin; Pelster, Andreas; Dreisewerd, Klaus; Arlinghaus, Heinrich F
2016-02-01
The analytical sensitivity in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is largely affected by the specific analyte-matrix interaction, in particular by the possible incorporation of the analytes into crystalline MALDI matrices. Here we used time-of-flight secondary ion mass spectrometry (ToF-SIMS) to visualize the incorporation of three peptides with different hydrophobicities, bradykinin, Substance P, and vasopressin, into two classic MALDI matrices, 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (HCCA). For depth profiling, an Ar cluster ion beam was used to gradually sputter through the matrix crystals without causing significant degradation of matrix or biomolecules. A pulsed Bi3 ion cluster beam was used to image the lateral analyte distribution in the center of the sputter crater. Using this dual beam technique, the 3D distribution of the analytes and spatial segregation effects within the matrix crystals were imaged with sub-μm resolution. The technique could in the future enable matrix-enhanced (ME)-ToF-SIMS imaging of peptides in tissue slices at ultra-high resolution.
Supporting Scientific Analysis within Collaborative Problem Solving Environments
NASA Technical Reports Server (NTRS)
Watson, Velvin R.; Kwak, Dochan (Technical Monitor)
2000-01-01
Collaborative problem solving environments for scientists should contain the analysis tools the scientists require in addition to the remote collaboration tools used for general communication. Unfortunately, most scientific analysis tools have been designed for a "stand-alone mode" and cannot be easily modified to work well in a collaborative environment. This paper addresses the questions, "What features are desired in a scientific analysis tool contained within a collaborative environment?", "What are the tool design criteria needed to provide these features?", and "What support is required from the architecture to support these design criteria?" First, the features of scientific analysis tools that are important for effective analysis in collaborative environments are listed. Next, several design criteria for developing analysis tools that will provide these features are presented. Then requirements for the architecture to support these design criteria are listed. Some proposed architectures for collaborative problem solving environments are reviewed and their capabilities to support the specified design criteria are discussed. A deficiency in the most popular architecture for remote application sharing, the ITU T.120 architecture, prevents it from supporting highly interactive, dynamic, high-resolution graphics. To illustrate that the specified design criteria can provide a highly effective analysis tool within a collaborative problem solving environment, a scientific analysis tool that meets the specified design criteria has been integrated into a collaborative environment and tested for effectiveness. The tests were conducted in collaborations between remote sites in the US and between remote sites on different continents. The tests showed that the tool (a tool for the visual analysis of computer simulations of physics) was highly effective for both synchronous and asynchronous collaborative analyses.
The important features provided by the tool (and made possible by the specified design criteria) are: 1. The tool provides highly interactive, dynamic, high resolution, 3D graphics. 2. All remote scientists can view the same dynamic, high resolution, 3D scenes of the analysis as the analysis is being conducted. 3. The responsiveness of the tool is nearly identical to the responsiveness of the tool in a stand-alone mode. 4. The scientists can transfer control of the analysis between themselves. 5. Any analysis session or segment of an analysis session, whether done individually or collaboratively, can be recorded and posted on the Web for other scientists or students to download and play in either a collaborative or individual mode. 6. The scientist or student who downloaded the session can, individually or collaboratively, modify or extend the session with his/her own "what if" analysis of the data and post his/her version of the analysis back onto the Web. 7. The peak network bandwidth used in the collaborative sessions is only 1K bit/second even though the scientists at all sites are viewing high resolution (1280 x 1024 pixels), dynamic, 3D scenes of the analysis. The links between the specified design criteria and these performance features are presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
... Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From Indonesia: Countervailing Duty Order... Indonesia. DATES: Effective Date: November 17, 2010. FOR FURTHER INFORMATION CONTACT: Gene Calvert or... from Indonesia. See Certain Coated Paper Suitable for High-Quality Print Graphics Using Sheet-Fed...
Using 1H2O MR to measure and map sodium pump activity in vivo
NASA Astrophysics Data System (ADS)
Springer, Charles S.
2018-06-01
The cell plasma membrane Na+,K+-ATPase [NKA] is one of biology's most [if not the most] significant enzymes. By actively transporting Na+ out [and K+ in], it maintains the vital trans-membrane ion concentration gradients and the membrane potential. The forward NKA reaction is shown in the Graphical Abstract [which is elaborated in the text]. Crucially, NKA does not operate in isolation. There are other transporters that conduct K+ back out of [II, Graphical Abstract] and Na+ back into [III, Graphical Abstract] the cell. Thus, NKA must function continually. Principal routes for ATP replenishment include mitochondrial oxidative phosphorylation, glycolysis, and creatine kinase [CrK] activity. However, it has never been possible to measure, let alone map, this integrated, cellular homeostatic NKA activity in vivo. Active trans-membrane water cycling [AWC] promises a way to do this with 1H2O MR. In the Graphical Abstract, the AWC system is characterized by active contributions to the unidirectional rate constants for steady-state water efflux and influx, respectively, kio(a) and koi(a). The discovery, validation, and initial exploration of active water cycling are reviewed here. Promising applications in cancer, cardiological, and neurological MRI are covered. This initial work employed paramagnetic Gd(III) chelate contrast agents [CAs]. However, the significant problems associated with in vivo CA use are also reviewed. A new analysis of water diffusion-weighted MRI [DWI] is presented. Preliminary results suggest a non-invasive way to measure the cell number density [ρ (cells/μL)], the mean cell volume [V (pL)], and the cellular NKA metabolic rate [cMRNKA (fmol(ATP)/s/cell)] with high spatial resolution. These crucial cell biology properties have not before been accessible in vivo. Furthermore, initial findings indicate their absolute values can be determined.
System analysis of graphics processor architecture using virtual prototyping
NASA Astrophysics Data System (ADS)
Hancock, William R.; Groat, Jeff; Steeves, Todd; Spaanenburg, Henk; Shackleton, John
1995-06-01
Honeywell has been actively involved in the definition of the next generation display processors for military and commercial cockpits. A major concern is how to achieve super graphics workstation performance in avionics applications. Most notable are requirements for low volume, low power, harsh environmental conditions, real-time performance and low cost. This paper describes the application of VHDL to the system analysis tasks associated with achieving these goals in a cost effective manner. The paper will describe the top level architecture identified to provide the graphical and video processing power needed to drive future high resolution display devices and to generate more natural panoramic 3D formats. The major discussion, however, will be on the use of VHDL to model the processing elements and customized pipelines needed to realize the architecture and for doing the complex system tradeoff studies necessary to achieve a cost effective implementation. New software tools have been developed to allow 'virtual' prototyping in the VHDL environment. This results in a hardware/software codesign using VHDL performance and functional models. This unique architectural tool allows simulation and tradeoffs within a standard and tightly integrated toolset, which eventually will be used to specify and design the entire system from the top level requirements and system performance to the lowest level individual ASICs. New processing elements, algorithms, and standard graphical inputs can be designed, tested and evaluated without costly hardware prototyping, using the innovative 'virtual' prototyping techniques which are evolving on this project. In addition, virtual prototyping of the display processor does not bind the preliminary design to point solutions as a physical prototype will. When the development schedule is known, one can extrapolate processing-element performance and design the system around the most current technology.
NASA Technical Reports Server (NTRS)
1998-01-01
SYMED, Inc., developed a unique electronic medical records and information management system. The S2000 Medical Interactive Care System (MICS) incorporates both a comprehensive and interactive medical care support capability and an extensive array of digital medical reference materials in either text or high resolution graphic form. The system was designed, in cooperation with NASA, to improve the effectiveness and efficiency of physician practices. The S2000 is a MS (Microsoft) Windows based software product which combines electronic forms, medical documents, records management, and features a comprehensive medical information system for medical diagnostic support and treatment. SYMED, Inc. offers access to its medical systems to all companies seeking competitive advantages.
FIEStool: Automated data reduction for FIber-fed Echelle Spectrograph (FIES)
NASA Astrophysics Data System (ADS)
Stempels, Eric; Telting, John
2017-08-01
FIEStool automatically reduces data obtained with the FIber-fed Echelle Spectrograph (FIES) at the Nordic Optical Telescope, a high-resolution spectrograph available on a stand-by basis, while also allowing the basic properties of the reduction to be controlled in real time by the user. It provides a Graphical User Interface and offers bias subtraction, flat-fielding, scattered-light subtraction, and specialized reduction tasks from the external packages IRAF (ascl:9911.002) and NumArray. The core of FIEStool is instrument-independent; the software, written in Python, could with minor modifications also be used for automatic reduction of data from other instruments.
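The first two reduction tasks listed (bias subtraction and flat-fielding) can be illustrated with a generic CCD calibration step. This is a sketch under the usual conventions, not FIEStool's actual code, which adds scattered-light subtraction and echelle order extraction on top.

```python
import numpy as np

def calibrate_frame(raw, bias, flat):
    """Basic CCD calibration as done early in an echelle pipeline:
    subtract the master bias, then divide by the bias-subtracted master
    flat normalized to unit median, so pixel-to-pixel sensitivity
    variations cancel without changing the overall flux scale."""
    debiased = raw.astype(float) - bias
    norm_flat = (flat - bias) / np.median(flat - bias)
    return debiased / norm_flat
```

A frame whose signal sits uniformly 5 counts above the bias should come out as a flat field of 5 after calibration, which the check below exercises.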
Real-time blood flow visualization using the graphics processing unit
NASA Astrophysics Data System (ADS)
Yang, Owen; Cuccia, David; Choi, Bernard
2011-01-01
Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were performed also at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.
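The contrast-to-flow conversion behind SFI maps can be sketched as below, assuming the common simplified relation SFI = 1/(2TK²) with local speckle contrast K = σ/μ. The window size, exposure time, and the exact formula in the paper's CUDA kernel are assumptions; a GPU version would evaluate all windows in parallel rather than loop.

```python
import numpy as np

def speckle_flow_index(raw, win=7):
    """Map one raw speckle image to a speckle flow index image.
    Local speckle contrast K = sigma/mean over a win x win window;
    SFI = 1/(2*T*K^2) with exposure time T (illustrative value).
    Border pixels narrower than the window are left at zero."""
    T = 0.01                      # exposure time in seconds (assumed)
    pad = win // 2
    img = raw.astype(float)
    h, w = img.shape
    sfi = np.zeros_like(img)
    for i in range(pad, h - pad):
        for j in range(pad, w - pad):
            patch = img[i - pad:i + pad + 1, j - pad:j + pad + 1]
            m, s = patch.mean(), patch.std()
            k2 = (s / m) ** 2 if m > 0 else 0.0
            sfi[i, j] = 1.0 / (2 * T * k2) if k2 > 0 else 0.0
    return sfi
```

Higher flow blurs the speckle within the exposure, lowering K and raising SFI, so a static (uniform) region maps to zero and a textured one to a positive value.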
NASA Technical Reports Server (NTRS)
Schaffner, Philip R.; Daniels, Taumi S.; West, Leanne L.; Gimmestad, Gary G.; Lane, Sarah E.; Burdette, Edward M.; Smith, William L.; Kireev, Stanislav; Cornman, Larry; Sharman, Robert D.
2012-01-01
The Forward-Looking Interferometer (FLI) is an airborne sensor concept for detection and estimation of potential atmospheric hazards to aircraft. The FLI concept is based on high-resolution Infrared Fourier Transform Spectrometry technologies that have been developed for satellite remote sensing. The FLI is being evaluated for its potential to address multiple hazards, during all phases of flight, including clear air turbulence, volcanic ash, wake vortices, low slant range visibility, dry wind shear, and icing. In addition, the FLI is being evaluated for its potential to detect hazardous runway conditions during landing, such as wet or icy asphalt or concrete. The validation of model-based instrument and hazard simulation results is accomplished by comparing predicted performance against empirical data. In the mountain lee wave data collected in the previous FLI project, the data showed a damped, periodic mountain wave structure. The wave data itself will be of use in forecast and nowcast turbulence products such as the Graphical Turbulence Guidance and Graphical Turbulence Guidance Nowcast products. Determining how turbulence hazard estimates can be derived from FLI measurements will require further investigation.
Adaptive mesh fluid simulations on GPU
NASA Astrophysics Data System (ADS)
Wang, Peng; Abel, Tom; Kaehler, Ralf
2010-10-01
We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
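The scheme skeleton named above (method of lines, second-order TVD Runge-Kutta, piecewise linear reconstruction with a limiter, HLL Riemann solver) can be shown on a scalar stand-in. The sketch below solves Burgers' equation f(u) = u²/2 on a periodic 1D grid; the paper's solver applies the same skeleton to the Euler and MHD systems on AMR grids in CUDA.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def hll_flux(uL, uR):
    """HLL flux for Burgers' equation; wave-speed bounds from f'(u) = u."""
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    sL, sR = np.minimum(uL, uR), np.maximum(uL, uR)
    hll = (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL + 1e-30)
    return np.where(sL >= 0, fL, np.where(sR <= 0, fR, hll))

def rhs(u, dx):
    """Method-of-lines RHS: MUSCL reconstruction + HLL face fluxes."""
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    uL = u + 0.5 * du                  # left state at right face of cell i
    uR = np.roll(u - 0.5 * du, -1)     # right state at the same face
    F = hll_flux(uL, uR)               # F[i] = flux at face i+1/2
    return -(F - np.roll(F, 1)) / dx

def step(u, dx, dt):
    """Second-order TVD (SSP) Runge-Kutta step."""
    u1 = u + dt * rhs(u, dx)
    return 0.5 * (u + u1 + dt * rhs(u1, dx))
```

Because the update is in flux form on a periodic grid, the cell sum is conserved to round-off, and the TVD combination keeps the solution free of new extrema even after a shock forms.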
Designing an ultrafast laser virtual laboratory using MATLAB GUIDE
NASA Astrophysics Data System (ADS)
Cambronero-López, F.; Gómez-Varela, A. I.; Bao-Varela, C.
2017-05-01
In this work we present a virtual simulator, developed in the MATLAB GUIDE environment, based on the numerical solution of the nonlinear Schrödinger equation (NLS) by the split-step method, for the study of the spatio-temporal propagation of nonlinear ultrashort laser pulses. This allows us to study the spatio-temporal propagation of ultrafast pulses as well as the influence of high-order spectral phases, such as group delay dispersion and third-order dispersion, on pulse compression in time. The NLS can describe several nonlinear effects; in this paper we consider the Kerr effect, cross-polarized wave generation, and cubic-quintic propagation in order to highlight the potential of this equation combined with the GUIDE environment. Graphical user interfaces are commonly used in science and engineering teaching because of their educational value, and have proven to be an effective way to engage and motivate students. Specifically, the interactive graphical interfaces presented here provide visualization of some of the most important nonlinear optics phenomena and allow users to vary the values of the main parameters involved.
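The split-step method at the core of such a simulator can be sketched in a few lines for the temporal NLS with dispersion and the Kerr term; the sign convention and coefficients below are illustrative choices (with β₂ = -1, γ = 1 the fundamental sech soliton propagates unchanged), not the simulator's actual settings, and the cross-polarized and cubic-quintic terms are omitted.

```python
import numpy as np

def split_step_nls(u, t, z, nz, beta2=-1.0, gamma=1.0):
    """Symmetric (Strang) split-step Fourier propagation of the NLS
        i u_z = (beta2/2) u_tt - gamma |u|^2 u
    over distance z in nz steps: half a dispersion step in the Fourier
    domain, a full Kerr phase step in time, then the other half."""
    dz = z / nz
    w = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)   # exp(i*beta2*w^2*dz/4)
    for _ in range(nz):
        u = np.fft.ifft(half_disp * np.fft.fft(u))
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)   # Kerr nonlinearity
        u = np.fft.ifft(half_disp * np.fft.fft(u))
    return u
```

Both sub-steps are pure phase multiplications, so the pulse energy is conserved to machine precision, and launching a sech pulse checks the soliton balance between dispersion and Kerr nonlinearity.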
Overview of the NASA Wallops Flight Facility Mobile Range Control System
NASA Technical Reports Server (NTRS)
Davis, Rodney A.; Semancik, Susan K.; Smith, Donna C.; Stancil, Robert K.
1999-01-01
The NASA GSFC's Wallops Flight Facility (WFF) Mobile Range Control System (MRCS) is based on the functionality of the WFF Range Control Center at Wallops Island, Virginia. The MRCS provides real time instantaneous impact predictions, real time flight performance data, and other critical information needed by mission and range safety personnel in support of range operations at remote launch sites. The MRCS integrates a PC telemetry processing system (TELPro), a PC radar processing system (PCDQS), multiple Silicon Graphics display workstations (IRIS), and communication links within a mobile van for worldwide support of orbital, suborbital, and aircraft missions. This paper describes the MRCS configuration; the TELPro's capability to provide single/dual telemetry tracking and vehicle state data processing; the PCDQS' capability to provide real time positional data and instantaneous impact prediction for up to 8 data sources; and the IRIS' user interface for setup/display options. With portability, PC-based data processing, high resolution graphics, and flexible multiple source support, the MRCS system is proving to be responsive to the ever-changing needs of a variety of increasingly complex missions.
The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector
Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...
2014-06-11
We have developed and tested a novel ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation, including alpha, neutron, beta, and fission fragment particles. The detector's response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified, and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the energy resolution of iQID is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.
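The per-event processing described (find each flash, estimate its sub-pixel position and energy) can be sketched on the CPU as follows; the threshold, 4-connectivity, and summed-intensity energy estimate are illustrative assumptions, and the real system runs such analysis on graphics hardware frame by frame.

```python
import numpy as np

def detect_events(frame, thresh=50.0):
    """Find connected above-threshold blobs in a camera frame and return,
    per event, the intensity-weighted centroid (y, x) to sub-pixel
    accuracy plus a summed-intensity energy estimate."""
    mask = frame > thresh
    visited = np.zeros_like(mask)
    h, w = frame.shape
    events = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                stack, blob = [(i, j)], []          # flood-fill one blob
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                wts = np.array([frame[p] for p in blob])
                ys = np.array([p[0] for p in blob], dtype=float)
                xs = np.array([p[1] for p in blob], dtype=float)
                events.append((float((ys * wts).sum() / wts.sum()),
                               float((xs * wts).sum() / wts.sum()),
                               float(wts.sum())))
    return events
```

Weighting pixel coordinates by intensity is what pushes the position estimate below the pixel pitch, the key to the tens-of-microns resolution quoted above.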
Righetti, Laura; Dellafiora, Luca; Cavanna, Daniele; Rolli, Enrico; Galaverna, Gianni; Bruni, Renato; Suman, Michele; Dall'Asta, Chiara
2018-04-30
Zearalenone (ZEN) major biotransformation pathways described so far are based on glycosylation and sulfation, although acetylation of trichothecenes has been reported as well. We investigated herein the ZEN acetylation metabolism route in micropropagated durum wheat leaf, artificially contaminated with ZEN. We report the first experimental evidence of the formation of novel ZEN acetylated forms in wheat, attached both to the aglycone backbone as well as on the glucose moiety. Thanks to the advantages provided by high-resolution mass spectrometry, identification and structure annotation of 20 metabolites were achieved. In addition, a preliminary assessment of the toxicity of the annotated metabolites was performed in silico, focusing on the toxicodynamics of ZEN group toxicity. All the metabolites showed a worse fitting within the estrogen receptor pocket in comparison with ZEN. Nevertheless, possible hydrolysis to the respective parent compounds (i.e., ZEN) may raise concern from the health perspective because these are well-known xenoestrogens. These results further enrich the biotransformation profile of ZEN, providing a helpful reference for assessing the risks to animals and humans.
Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction
Agulleiro, Jose-Ignacio; Fernández, José Jesús
2012-01-01
Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popa, Karin; Raison, Philippe E., E-mail: philippe.raison@ec.europa.eu; Martel, Laura
2015-10-15
PuPO{sub 4} was prepared by a solid state reaction method and its crystal structure at room temperature was solved by powder X-ray diffraction combined with Rietveld refinement. High-resolution XANES measurements confirm the +III valence state of plutonium, in agreement with valence bond derivation. The presence of americium (the β{sup −} decay product of plutonium) in the +III oxidation state was determined by XANES spectroscopy. High-resolution solid state {sup 31}P NMR agrees with the XANES results and with the presence of a solid solution. - Graphical abstract: A full structural analysis of PuPO{sub 4}, based on Rietveld analysis of room temperature X-ray diffraction data, XANES and MAS NMR measurements, was performed. - Highlights: • The crystal structure of PuPO{sub 4} monazite is solved. • In PuPO{sub 4} plutonium is strictly trivalent. • The presence of a minute amount of Am{sup III} is highlighted. • We propose PuPO{sub 4} as a potential reference material for spectroscopic and microscopic studies.
MASH Suite Pro: A Comprehensive Software Tool for Top-Down Proteomics*
Cai, Wenxuan; Guner, Huseyin; Gregorich, Zachery R.; Chen, Albert J.; Ayaz-Guner, Serife; Peng, Ying; Valeja, Santosh G.; Liu, Xiaowen; Ge, Ying
2016-01-01
Top-down mass spectrometry (MS)-based proteomics is arguably a disruptive technology for the comprehensive analysis of all proteoforms arising from genetic variation, alternative splicing, and posttranslational modifications (PTMs). However, the complexity of top-down high-resolution mass spectra presents a significant challenge for data analysis. In contrast to the well-developed software packages available for data analysis in bottom-up proteomics, the data analysis tools in top-down proteomics remain underdeveloped. Moreover, despite recent efforts to develop algorithms and tools for the deconvolution of top-down high-resolution mass spectra and the identification of proteins from complex mixtures, a multifunctional software platform, which allows for the identification, quantitation, and characterization of proteoforms with visual validation, is still lacking. Herein, we have developed MASH Suite Pro, a comprehensive software tool for top-down proteomics with multifaceted functionality. MASH Suite Pro is capable of processing high-resolution MS and tandem MS (MS/MS) data using two deconvolution algorithms to optimize protein identification results. In addition, MASH Suite Pro allows for the characterization of PTMs and sequence variations, as well as the relative quantitation of multiple proteoforms in different experimental conditions. The program also provides visualization components for validation and correction of the computational outputs. Furthermore, MASH Suite Pro facilitates data reporting and presentation via direct output of the graphics. Thus, MASH Suite Pro significantly simplifies and speeds up the interpretation of high-resolution top-down proteomics data by integrating tools for protein identification, quantitation, characterization, and visual validation into a customizable and user-friendly interface. We envision that MASH Suite Pro will play an integral role in advancing the burgeoning field of top-down proteomics. PMID:26598644
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-27
... (``BOPP'') or to an exterior ply of paper that is suitable for high quality print graphics; \\4\\ printed... suitable for high quality print graphics,'' as used herein, means paper having an ISO brightness of 82 or... high quality print graphics. Effective July 1, 2007, laminated woven sacks are classifiable under...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-27
... Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From the People's Republic of China: Final... suitable for high-quality print graphics using sheet-fed presses from the People's Republic of China (``PRC...-Fed Presses from the People's Republic of China: Preliminary Affirmative Countervailing Duty...
Dickel, Timo; Plaß, Wolfgang R; Lippert, Wayne; Lang, Johannes; Yavor, Mikhail I; Geissel, Hans; Scheidenberger, Christoph
2017-06-01
A novel method for (ultra-)high-resolution spatial mass separation in time-of-flight mass spectrometers is presented. Ions are injected into a time-of-flight analyzer from a radio frequency (rf) trap, dispersed in time-of-flight according to their mass-to-charge ratios and then re-trapped dynamically in the same rf trap. This re-trapping technique is highly mass-selective and, after sufficiently long flight times, can provide even isobaric separation. A theoretical treatment of the method is presented and the conditions for optimum performance are derived. The method has been implemented in a multiple-reflection time-of-flight mass spectrometer; mass separation powers (FWHM) in excess of 70,000 and re-trapping efficiencies of up to 35% have been obtained for the protonated molecular ion of caffeine. The isobars glutamine and lysine (relative mass difference of 1/4000) have been separated after a flight time of only 0.2 ms. Higher mass separation powers can be achieved using longer flight times. The method will have important applications, including isobar separation in nuclear physics and (ultra-)high-resolution precursor ion selection in multiple-stage tandem mass spectrometry. Graphical Abstract.
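Because flight time in a TOF analyzer scales as sqrt(m/q), a relative mass difference dm/m maps, to first order, to a relative time separation of (dm/m)/2. A quick check of the glutamine/lysine figures quoted in the abstract (the helper function is hypothetical, but the numbers come from the text):

```python
def tof_separation(flight_time_s, rel_mass_diff):
    """First-order time separation of two isobars in a TOF analyzer.
    t is proportional to sqrt(m/q), so dt/t = (dm/m) / 2."""
    return flight_time_s * rel_mass_diff / 2.0

# numbers from the abstract: 0.2 ms flight, glutamine/lysine dm/m = 1/4000
dt = tof_separation(0.2e-3, 1 / 4000)   # 25 ns
```

A 25 ns arrival-time gap after only 0.2 ms of flight is consistent with the claim that dynamic re-trapping can already resolve these isobars at short flight times.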
O'Rourke, Matthew B; Raymond, Benjamin B A; Padula, Matthew P
2017-05-01
Matrix assisted laser desorption ionization imaging mass spectrometry (MALDI-IMS) is a technique that has seen a sharp rise in both use and development. Despite this rapid adoption, there have been few thorough investigations into the actual physical mechanisms that underlie the acquisition of IMS images. We therefore set out to characterize the effect of IMS laser ablation patterns on the surface of a sample. We also concluded that the governing factors that control spatial resolution have not been correctly defined and therefore propose a new definition of resolution. Graphical Abstract ᅟ.
Real-Time Compositing of Procedural Facade Textures on the GPU
NASA Astrophysics Data System (ADS)
Krecklau, L.; Kobbelt, L.
2011-09-01
The real-time rendering of complex virtual city models has become more important in the last few years for many practical applications like realistic navigation or urban planning. For maximum rendering performance, the complexity of the geometry or textures can be reduced by decreasing the resolution until the data set fully resides in the memory of the graphics card. This typically results in a low quality of the virtual city model. Alternatively, a streaming algorithm can load the high quality data set from the hard drive. However, this approach requires a large amount of persistent storage providing several gigabytes of static data. We present a system that uses a texture atlas containing atomic tiles like windows, doors or wall patterns, and that combines those elements on-the-fly directly on the graphics card. The presented approach benefits from a sophisticated randomization scheme that produces many different facades while the grammar description itself remains small. By using a ray casting approach, we are able to trace through transparent windows, revealing procedurally generated rooms that further contribute to the realism of the rendering. The presented method enables real-time rendering of city models with a high level of detail for facades while still relying on a small memory footprint.
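For such randomization to work, each facade must look "random" yet be reproduced identically every frame. One common way to achieve that (a sketch of the general idea, not the paper's actual scheme) is to derive each grid cell's atlas tile index from a hash of its coordinates:

```python
import hashlib

def tile_for_cell(facade_id, row, col, n_tiles):
    """Deterministically pick an atlas tile for one facade grid cell.
    Hash-based seeding reproduces the same 'random' facade every frame,
    which is what lets the grammar description stay small while the
    generated facades still vary."""
    key = f"{facade_id}:{row}:{col}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_tiles
```

On the GPU the same effect is usually obtained with a cheap integer hash in the shader rather than a cryptographic one; the point is only that the tile choice is a pure function of the cell's identity.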
NASA Technical Reports Server (NTRS)
Laudeman, Irene V.; Brasil, Connie L.; Stassart, Philippe
1998-01-01
The Planview Graphical User Interface (PGUI) is the primary display of air traffic for the Conflict Prediction and Trial Planning function of the Center TRACON Automation System. The PGUI displays air traffic information that assists the user in making decisions related to conflict detection, conflict resolution, and traffic flow management. The intent of this document is to outline the human factors issues related to the design of the conflict prediction and trial planning portions of the PGUI, document all human factors related design changes made to the PGUI from December 1996 to September 1997, and outline future plans for the ongoing PGUI design.
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
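As a minimal illustration of the class of iterative schemes whose memory demands motivate the multiresolution approach, here is a Landweber iteration on a toy dense system. The paper's operators are vastly larger and never stored densely; everything here is an illustrative assumption:

```python
import numpy as np

def landweber(A, b, x0, iters, lam):
    """Landweber iteration x <- x + lam * A^T (b - A x).
    Converges for 0 < lam < 2 / sigma_max(A)^2."""
    x = x0.copy()
    for _ in range(iters):
        x = x + lam * (A.T @ (b - A @ x))
    return x
```

In a multiresolution setting, one would first run such an iteration on a coarse grid and use the upsampled result to warm-start the fine grid, so the full-resolution volume is only touched late in the process.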
Programming Language Software For Graphics Applications
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1993-01-01
New approach reduces repetitive development of features common to different applications. High-level programming language and interactive environment with access to graphical hardware and software created by adding graphical commands and other constructs to standardized, general-purpose programming language, "Scheme". Designed for use in developing other software incorporating interactive computer-graphics capabilities into application programs. Provides alternative to programming entire applications in C or FORTRAN, specifically ameliorating design and implementation of complex control and data structures typifying applications with interactive graphics. Enables experimental programming and rapid development of prototype software, and yields high-level programs serving as executable versions of software-design documentation.
A Linux Workstation for High Performance Graphics
NASA Technical Reports Server (NTRS)
Geist, Robert; Westall, James
2000-01-01
The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-13
... that is suitable for high quality print graphics; \\8\\ printed with three colors or more in register... goods such as pet foods and bird seed. \\8\\ ``Paper suitable for high quality print graphics,'' as used.... Coated free sheet is an example of a paper suitable for high quality print graphics. Effective July 1...
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images and to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, e.g. AMD graphics cards, ARM-based processors and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
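BOAST itself is a Ruby-based generator with far richer capabilities, but the core idea of single-source meta-programming, i.e. emitting both CUDA and OpenCL from one kernel description, can be sketched with simple templating (illustrative only; this is not BOAST's API):

```python
KERNEL_TEMPLATE = """{qualifier} void axpy({args}) {{
  int i = {index_expr};
  if (i < n) y[i] = a * x[i] + y[i];
}}"""

TARGETS = {
    "cuda": {
        "qualifier": "__global__",
        "args": "int n, float a, const float *x, float *y",
        "index_expr": "blockIdx.x * blockDim.x + threadIdx.x",
    },
    "opencl": {
        "qualifier": "__kernel",
        "args": "int n, float a, __global const float *x, __global float *y",
        "index_expr": "get_global_id(0)",
    },
}

def generate(target):
    """Emit the same kernel body for either backend from one description."""
    return KERNEL_TEMPLATE.format(**TARGETS[target])
```

Keeping one abstract kernel description and specializing only qualifiers, address spaces and index expressions is what keeps the two backends numerically identical and testable against each other.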
cp-R, an interface to the R programming language for clinical laboratory method comparisons.
Holmes, Daniel T
2015-02-01
Clinical scientists frequently need to compare two different bioanalytical methods as part of assay validation/monitoring. As a matter of necessity, regression methods for quantitative comparison in clinical chemistry, hematology and other clinical laboratory disciplines must allow for error in both the x and y variables. Traditionally the methods popularized by 1) Deming and 2) Passing and Bablok have been recommended. While commercial tools exist, no simple open source tool is available. The purpose of this work was to develop an entirely open-source, GUI-driven program for bioanalytical method comparisons capable of performing these regression methods and able to produce highly customized graphical output. The GUI is written in Python and PyQt4, with R scripts performing regression and graphical functions. The program can be run from source code or as a pre-compiled binary executable. The software performs three forms of regression and offers weighting where applicable. Confidence bands of the regression are calculated using bootstrapping for the Deming and Passing-Bablok methods. Users can customize regression plots according to the tools available in R and can produce output in any of jpg, png, tiff or bmp format at any desired resolution, or in ps and pdf vector formats. Bland-Altman plots and some regression diagnostic plots are also generated. Correctness of regression parameter estimates was confirmed against existing R packages. The program allows for rapid and highly customizable graphical output capable of conforming to the publication requirements of any clinical chemistry journal. Quick method comparisons can also be performed and cut and pasted into spreadsheet or word processing applications. We present a simple and intuitive open source tool for quantitative method comparison in a clinical laboratory environment. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
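The Deming fit at the heart of such comparisons has a closed-form solution. A minimal unweighted sketch (no bootstrap confidence bands, which cp-R adds on top) is:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Unweighted Deming regression (errors in both x and y).
    `delta` is the ratio of the y- to x-measurement error variances.
    Assumes the x/y covariance is non-zero."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum((x - x.mean()) ** 2)
    syy = np.sum((y - y.mean()) ** 2)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    slope = ((syy - delta * sxx
              + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
             / (2 * sxy))
    return slope, y.mean() - slope * x.mean()
```

For exactly linear data y = 2x + 1 this recovers slope 2 and intercept 1; unlike ordinary least squares, the estimate remains unbiased when the comparison method (x) is itself noisy.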
A streaming-based solution for remote visualization of 3D graphics on mobile devices.
Lamberti, Fabrizio; Sanna, Andrea
2007-01-01
Mobile devices such as Personal Digital Assistants (PDAs), Tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications are now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a hard task to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, PDAs, and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more, depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients, computing a video stream for each one; resolution and quality of each stream are tailored according to the screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency time, bit rate and quality of the generated stream, screen resolutions, as well as frames per second displayed.
Tunable, mixed-resolution modeling using library-based Monte Carlo and graphics processing units
Mamonov, Artem B.; Lettieri, Steven; Ding, Ying; Sarver, Jessica L.; Palli, Rohith; Cunningham, Timothy F.; Saxena, Sunil; Zuckerman, Daniel M.
2012-01-01
Building on our recently introduced library-based Monte Carlo (LBMC) approach, we describe a flexible protocol for mixed coarse-grained (CG)/all-atom (AA) simulation of proteins and ligands. In the present implementation of LBMC, protein side chain configurations are pre-calculated and stored in libraries, while bonded interactions along the backbone are treated explicitly. Because the AA side chain coordinates are maintained at minimal run-time cost, arbitrary sites and interaction terms can be turned on to create mixed-resolution models. For example, an AA region of interest such as a binding site can be coupled to a CG model for the rest of the protein. We have additionally developed a hybrid implementation of the generalized Born/surface area (GBSA) implicit solvent model suitable for mixed-resolution models, which in turn was ported to a graphics processing unit (GPU) for faster calculation. The new software was applied to study two systems: (i) the behavior of spin labels on the B1 domain of protein G (GB1) and (ii) docking of randomly initialized estradiol configurations to the ligand binding domain of the estrogen receptor (ERα). The performance of the GPU version of the code was also benchmarked in a number of additional systems. PMID:23162384
ERIC Educational Resources Information Center
Cook, Michael P.
2014-01-01
There have been few empirical studies investigating the uses of graphic novels in education, fewer still in English Language Arts (ELA). As a result, there remain misconceptions about possible uses and potential benefits of graphic texts in ELA classrooms. The purpose of this study was to investigate the effects of graphic novels on the reading…
50 CFR 660.15 - Equipment requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... perceived weight of water, slime, mud, debris, or other materials. Scale printouts must show: (A) The vessel... with Pentium 75-MHz or higher. Random Access Memory (RAM) must have sufficient megabyte (MB) space to... space of 217 MB or greater. A CD-ROM drive with a Video Graphics Adapter (VGA) or higher resolution...
ERIC Educational Resources Information Center
Gasevic, Dragan; Devedzic, Vladan
2004-01-01
This paper presents Petri net software tool P3 that is developed for training purposes of the Architecture and organization of computers (AOC) course. The P3 has the following features: graphical modeling interface, interactive simulation by single and parallel (with previous conflict resolution) transition firing, two well-known Petri net…
Automatic Perceptual Color Map Generation for Realistic Volume Visualization
Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor
2008-01-01
Advances in computed tomography imaging technology and inexpensive high performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual, thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, have motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown that have been created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609
A Correlational Study of Graphic Organizers and Science Achievement of English Language Learners
NASA Astrophysics Data System (ADS)
Clarke, William Gordon
English language learners (ELLs) demonstrate lower academic performance and have lower graduation and higher dropout rates than their non-ELL peers. The primary purpose of this correlational quantitative study was to investigate the relationship between the use of graphic organizer-infused science instruction and science learning of high school ELLs. Another objective was to determine if the method of instruction, socioeconomic status (SES), gender, and English language proficiency (ELP) were predictors of academic achievement of high school ELLs. Data were gathered from the fall 2012-2013 archival records of 145 ninth-grade ELLs at a New York City (NYC) high school who had received biology instruction in freestanding English as a second language (ESL) classes, followed by a test of their learning of the material. Fifty-four (37.2%) of these records were of students who had learned science by the conventional textbook method, and 91 (62.8%) by using graphic organizers. Data analysis employed the Statistical Package for the Social Sciences (SPSS) software for multiple regression analysis, which found graphic organizer use to be a significant predictor of New York State Regents Living Environment (NYSRLE) test scores (p < .01). One significant regression model was returned whereby, when combined, the four predictor variables (method of instruction, SES, gender, and ELP) explained 36% of the variance of the NYSRLE score. Implications of the study findings noted graphic organizer use as advantageous for ELL science achievement. Recommendations made for practice were for (a) the adoption of graphic organizer-infused instruction, (b) establishment of a protocol for the implementation of graphic organizer-infused instruction, and (c) increased length of graphic organizer instructional time.
Recommendations made for future research were (a) a replication quantitative correlational study in two or more high schools, (b) a quasi-experimental quantitative study to determine the influence of graphic organizer instructional intervention on ELL science achievement, (c) a quantitative quasi-experimental study to determine the effect of teacher-based factors on graphic organizer-infused instruction, and (d) a causal comparative study to determine the efficacy of graphic organizer use in testing modifications for high school ELL science.
NASA Astrophysics Data System (ADS)
Ding, Huanjun; Gao, Hao; Zhao, Bo; Cho, Hyo-Min; Molloi, Sabee
2014-10-01
Both computer simulations and experimental phantom studies were carried out to investigate the radiation dose reduction achievable with tensor framelet based iterative image reconstruction (TFIR) for a dedicated high-resolution spectral breast computed tomography (CT) system based on a silicon strip photon-counting detector. The simulation was performed with a 10 cm-diameter water phantom including three contrast materials (polyethylene, 8 mg ml-1 iodine and B-100 bone-equivalent plastic). In the experimental study, the data were acquired with a 1.3 cm-diameter polymethylmethacrylate (PMMA) phantom containing iodine in three concentrations (8, 16 and 32 mg ml-1) at various radiation doses (1.2, 2.4 and 3.6 mGy), and CT images were then reconstructed using the filtered-back-projection (FBP) technique and the TFIR technique, respectively. The image quality of the two techniques was evaluated by quantitative analysis of the contrast-to-noise ratio (CNR) and of the spatial resolution, which was assessed using the task-based modulation transfer function (MTF). Both the simulation and experimental results indicated that the task-based MTF obtained from TFIR reconstruction with one-third of the radiation dose was comparable to that from the FBP reconstruction for the low contrast target. For the high contrast target, the TFIR was substantially superior to the FBP reconstruction in terms of spatial resolution. In addition, TFIR was able to achieve a factor of 1.6-1.8 increase in CNR, depending on the target contrast level. This study demonstrates that the TFIR can reduce the required radiation dose by two-thirds compared to the FBP technique for CT image reconstruction. It achieves much better CNR and spatial resolution for the high contrast target while retaining similar spatial resolution for the low contrast target.
This TFIR technique has been implemented with a graphic processing unit system and it takes approximately 10 s to reconstruct a single-slice CT image, which can potentially be used in a future multi-slit multi-slice spiral CT system.
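The CNR figure used in such comparisons is conventionally computed as the absolute mean difference between a target region of interest and a background region, divided by the background noise (the paper's exact ROI definitions may differ; this is the common textbook form):

```python
import numpy as np

def cnr(target_roi, background_roi):
    """Contrast-to-noise ratio: |mean contrast| over background noise.
    Inputs are arrays of pixel values from the two regions of interest."""
    return (abs(np.mean(target_roi) - np.mean(background_roi))
            / np.std(background_roi))
```

Because CNR scales roughly with the square root of dose in photon-counting systems, a 1.6-1.8x CNR gain at fixed dose is what allows a comparable image at roughly one-third of the dose.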
Carbohydrate structure: the rocky road to automation.
Agirre, Jon; Davies, Gideon J; Wilson, Keith S; Cowtan, Kevin D
2017-06-01
With the introduction of intuitive graphical software, structural biologists who are not experts in crystallography are now able to build complete protein or nucleic acid models rapidly. In contrast, carbohydrates are in a wholly different situation: scant automation exists, with manual building attempts being sometimes toppled by incorrect dictionaries or refinement problems. Sugars are the most stereochemically complex family of biomolecules and, as pyranose rings, have clear conformational preferences. Despite this, all refinement programs may produce high-energy conformations at medium to low resolution, without any support from the electron density. This problem renders the affected structures unusable in glyco-chemical terms. Bringing structural glycobiology up to 'protein standards' will require a total overhaul of the methodology. Time is of the essence, as the community is steadily increasing the production rate of glycoproteins, and electron cryo-microscopy has just started to image them in precisely that resolution range where crystallographic methods falter most. Copyright © 2016 Elsevier Ltd. All rights reserved.
Aspects of the "Design Space" in high pressure liquid chromatography method development.
Molnár, I; Rieger, H-J; Monks, K E
2010-05-07
The present paper describes a multifactorial optimization of 4 critical HPLC method parameters, i.e. gradient time (t(G)), temperature (T), pH and ternary composition (B(1):B(2)) based on 36 experiments. The effect of these experimental variables on critical resolution and selectivity was carried out in such a way as to systematically vary all four factors simultaneously. The basic element is a gradient time-temperature (t(G)-T) plane, which is repeated at three different pH's of the eluent A and at three different ternary compositions of eluent B between methanol and acetonitrile. The so-defined volume enables the investigation of the critical resolution for a part of the Design Space of a given sample. Further improvement of the analysis time, with conservation of the previously optimized selectivity, was possible by reducing the gradient time and increasing the flow rate. Multidimensional robust regions were successfully defined and graphically depicted. Copyright (c) 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Or, D.; von Ruette, J.; Lehmann, P.
2017-12-01
Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and compare model outcomes with satellite-based information.
Utilizing a Graphic Organizer for Promoting Pupils' Argumentation
ERIC Educational Resources Information Center
Hsieh, Fu-Pei; Lee, Sung-Tao
2011-01-01
The purpose of this study was utilizing a GO (graphic organizer) for promoting pupils' argumentation. The method of case study was employed. A total of eight fifth grade pupils from two classes were assigned (n = 4, two high achievers, two low achievers) with GOI (graphic organizer instruction), and the others (n = 4, 2 high achievers, 2 low…
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithm. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in details how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
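The ordered-subsets scheme accelerated on the GPU here is a variant of the basic MLEM update. On a dense toy system (standing in for the sparse list-mode projectors of the paper), one MLEM iteration looks like this:

```python
import numpy as np

def mlem(A, b, iters=1000):
    """Basic MLEM update for emission tomography:
    x <- (x / A^T 1) * A^T (b / (A x)), all quantities positive.
    A maps image x to projections; b are the measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity (normalization) image
    for _ in range(iters):
        ratio = b / (A @ x)            # measured / estimated projections
        x = x / sens * (A.T @ ratio)
    return x
```

Each iteration is dominated by one forward and one back projection, which is exactly the pair of line-projection operations the GPU implementation accelerates per recorded event.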
Scherer, Sebastian; Kowal, Julia; Chami, Mohamed; Dandey, Venkata; Arheit, Marcel; Ringler, Philippe; Stahlberg, Henning
2014-05-01
The introduction of direct electron detectors (DED) to cryo-electron microscopy has tremendously increased the signal-to-noise ratio (SNR) and quality of the recorded images. We discuss the optimal use of DEDs for cryo-electron crystallography, introduce a new automatic image processing pipeline, and demonstrate the vast improvement in the resolution achieved by the use of both together, especially for highly tilted samples. The new processing pipeline (now included in the software package 2dx) exploits the high SNR and frame readout frequency of DEDs to automatically correct for beam-induced sample movement, and reliably processes individual crystal images without human interaction as data are being acquired. A new graphical user interface (GUI) condenses all information required for quality assessment in one window, allowing the imaging conditions to be verified and adjusted during the data collection session. With this new pipeline an automatically generated unit cell projection map of each recorded 2D crystal is available less than 5 min after the image was recorded. The entire processing procedure yielded a three-dimensional reconstruction of the 2D-crystallized ion-channel membrane protein MloK1 with a much-improved resolution of 5Å in-plane and 7Å in the z-direction, within 2 days of data acquisition and simultaneous processing. The results obtained are superior to those delivered by conventional photographic film-based methodology of the same sample, and demonstrate the importance of drift-correction. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
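Correcting beam-induced sample movement from a DED frame stack is commonly done by cross-correlating frames and shifting them into register before averaging. A minimal integer-pixel phase-correlation sketch (the 2dx pipeline's actual alignment is more sophisticated, e.g. subpixel and drift-model based):

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer (dy, dx) shift that aligns `frame` to `ref`
    via phase correlation (normalized cross-power spectrum)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak position into the range [-N/2, N/2)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

After estimating per-frame shifts against a reference, each frame is rolled back by its shift and the stack is averaged, recovering the SNR that uncorrected drift would smear away.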
ERIC Educational Resources Information Center
Metraglia, Riccardo; Villa, Valerio; Baronio, Gabriele; Adamini, Riccardo
2015-01-01
Today's students enter engineering colleges with different technical backgrounds and prior graphics experience. This may be due to the high school they attended, which can be technical or non-technical. This prior experience affects students' ability to learn, and hence their motivation and self-efficacy beliefs. This study intended to evaluate…
A Symbolic and Graphical Computer Representation of Dynamical Systems
NASA Astrophysics Data System (ADS)
Gould, Laurence I.
2005-04-01
AUTONO is a Macsyma/Maxima program, designed at the University of Hartford, for solving autonomous systems of differential equations as well as for relating Lagrangians and Hamiltonians to their associated dynamical equations. AUTONO can be used in a number of fields to decipher a variety of complex dynamical systems with ease, producing their Lagrangian and Hamiltonian equations in seconds. These equations can then be incorporated into VisSim, a modeling and simulation program, which yields graphical representations of motion in a given system through easily chosen input parameters. The program, along with the VisSim differential-equations graphical package, allows complex problems to be resolved and easily understood in a relatively short time, enabling quicker and more advanced computing of dynamical systems on any number of platforms: from a network of sensors on a space probe, to the behavior of neural networks, to the effects of an electromagnetic field on components in a dynamical system. A flowchart of AUTONO, along with some simple applications and VisSim output, will be shown.
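As a one-equation illustration of what such a program automates, applying the Euler-Lagrange equation to a textbook harmonic-oscillator Lagrangian (our example, not one of AUTONO's outputs) produces the dynamical equation directly:

```latex
L = \tfrac{1}{2} m \dot{x}^2 - \tfrac{1}{2} k x^2,
\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x}
  = m\ddot{x} + kx = 0 .
```

AUTONO performs this symbolic differentiation for whole systems of coordinates at once, handing the resulting equations to VisSim for numerical integration and plotting.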
New technologies for HWIL testing of WFOV, large-format FPA sensor systems
NASA Astrophysics Data System (ADS)
Fink, Christopher
2016-05-01
Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced the current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high resolution MWIR Camera deployed in some UAV reconnaissance systems features 16MP resolution at 60Hz, while the current upper limit of IR emitter arrays is ~1MP, and single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1600 pixels at 60Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large-format FPAs, including the size and spatial detail required for very large area terrains, and multi-channel low-latency synchronization to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, where digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity physics-based IR scene simulator in conjunction with a novel image composition hardware unit, to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.
Images of Struggle: Teaching Human Rights with Graphic Novels
ERIC Educational Resources Information Center
Carano, Kenneth T.; Clabough, Jeremiah
2016-01-01
The authors explore how graphic novels can be used in the middle and high school social studies classroom to teach human rights. The article begins with a rationale on the benefits of using graphic novels. It next focuses on four graphic novels related to human rights issues: "Maus I: A Survivor's Tale: My Father Bleeds" (Spiegelman…
Cluster Active Archive: lessons learnt
NASA Astrophysics Data System (ADS)
Laakso, H. E.; Perry, C. H.; Taylor, M. G.; Escoubet, C. P.; Masson, A.
2010-12-01
The ESA Cluster Active Archive (CAA) was opened to the public in February 2006 after an initial three-year development phase. It provides access (both a web GUI and a command-line tool are available) to the calibrated full-resolution datasets of the four-satellite Cluster mission. The data archive is publicly accessible and suitable for science use and publication by the world-wide scientific community. There are more than 350 datasets from each spacecraft, including high-resolution magnetic and electric DC and AC fields as well as full 3-dimensional electron and ion distribution functions and moments from a few eV to hundreds of keV. The Cluster mission has been in operation since February 2001; although the CAA can provide access to some recent observations, the ingestion of other datasets can be delayed by a few years owing to the large and difficult calibration effort required for aging detectors. The quality of the datasets is a central concern of the CAA. Having the same instrument on four spacecraft allows cross-instrument comparisons and provides confidence in some of the instrumental calibration parameters. Furthermore, it is highly important that many physical parameters are measured by more than one instrument, which allows extensive and continuous cross-calibration analyses to be performed. In addition, some of the instruments can be regarded as absolute or reference measurements for other instruments. The CAA avoids mission-specific acronyms and concepts as much as possible and tends to use more generic terms in describing the datasets and their contents, in order to ease the use of CAA data by “non-Cluster” scientists. Currently the CAA has more than 1000 users, and every month more than 150 different users log in to the CAA to plot and/or download observations. Users download about one terabyte of data every month.
The CAA has separated the graphical tool from the download tool because full-resolution datasets can be visualized in many ways, so there is no one-to-one correspondence between graphical products and full-resolution datasets. The CAA encourages users to contact the CAA team about all kinds of issues, whether they concern the user interface, the content of the datasets, the quality of the observations or the provision of new types of services. The CAA runs regular annual reviews of the data products and user services in order to improve the quality and usability of the CAA system for the world-wide user community. The CAA is continuously being upgraded in terms of datasets and services.
visnormsc: A Graphical User Interface to Normalize Single-cell RNA Sequencing Data.
Tang, Lijun; Zhou, Nan
2017-12-26
Single-cell RNA sequencing (RNA-seq) allows the analysis of gene expression with high resolution. The intrinsic defects of this promising technology introduce technical noise into single-cell RNA-seq data, increasing the difficulty of accurate downstream inference. Normalization is a crucial step in single-cell RNA-seq data pre-processing. SCnorm is an accurate and efficient method that can be used for this purpose, and an R implementation of it is currently available. On one hand, the R package inherits many excellent features from R; on the other hand, it requires R programming ability, which keeps biologists who lack those skills from learning to use it quickly. To make the method more user-friendly, we developed a graphical user interface, visnormsc, for normalization of single-cell RNA-seq data. It is implemented in Python and is freely available at https://github.com/solo7773/visnormsc . Although visnormsc is based on an existing method, it contributes to the field by offering a user-friendly alternative. Its out-of-the-box and cross-platform features make visnormsc easy to learn and use. It is expected to serve biologists by simplifying single-cell RNA-seq normalization.
Circular Data Images for Directional Data
NASA Technical Reports Server (NTRS)
Morpet, William J.
2004-01-01
Directional data includes vectors, points on a unit sphere, axis orientation, angular direction, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale for the measurement of linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described, which affords high resolution without color discontinuity from "cross-over". It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Some examples of the circular data image include direction of earth winds on a global scale, rocket motor internal flow, earth global magnetic field direction, and rocket motor nozzle vector direction vs. time.
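The key idea of the circular data image, a color scale that is periodic in the angle, can be sketched with the standard library's HSV conversion (an illustration of the principle, not the author's implementation):

```python
import colorsys

def angle_to_rgb(theta_deg):
    """Map an angle to a color on a circular (periodic) scale.

    Hue wraps once around the color circle per 360 degrees, so
    -180 and +180 degrees receive the *same* color: no cross-over
    discontinuity, unlike a linear blue-to-red gradient.
    """
    hue = (theta_deg % 360.0) / 360.0          # periodic in the angle
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation and value

# The two representations of the same direction agree:
print(angle_to_rgb(-180.0) == angle_to_rgb(180.0))  # True
```

Applied pixel-by-pixel to a field of directions (wind, magnetic field, nozzle vector), this mapping yields a seamless image wherever the direction varies continuously.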
VisANT 3.0: new modules for pathway visualization, editing, prediction and construction.
Hu, Zhenjun; Ng, David M; Yamada, Takuji; Chen, Chunnuan; Kawashima, Shuichi; Mellor, Joe; Linghu, Bolan; Kanehisa, Minoru; Stuart, Joshua M; DeLisi, Charles
2007-07-01
With the integration of the KEGG and Predictome databases as well as two search engines for coexpressed genes/proteins using data sets obtained from the Stanford Microarray Database (SMD) and Gene Expression Omnibus (GEO) database, VisANT 3.0 supports exploratory pathway analysis, which includes multi-scale visualization of multiple pathways, editing and annotating pathways using a KEGG compatible visual notation and visualization of expression data in the context of pathways. Expression levels are represented either by color intensity or by nodes with an embedded expression profile. Multiple experiments can be navigated or animated. Known KEGG pathways can be enriched by querying either coexpressed components of known pathway members or proteins with known physical interactions. Predicted pathways for genes/proteins with unknown functions can be inferred from coexpression or physical interaction data. Pathways produced in VisANT can be saved as computer-readable XML format (VisML), graphic images or high-resolution Scalable Vector Graphics (SVG). Pathways in the format of VisML can be securely shared within an interested group or published online using a simple Web link. VisANT is freely available at http://visant.bu.edu.
Magnetohydrodynamics with GAMER
NASA Astrophysics Data System (ADS)
Zhang, Ui-Han; Schive, Hsi-Yu; Chiueh, Tzihong
2018-06-01
GAMER, a parallel graphics-processing-unit-accelerated adaptive-mesh-refinement (AMR) hydrodynamic code, has been extended to support magnetohydrodynamics (MHD) with both the corner-transport-upwind and MUSCL-Hancock schemes and the constrained-transport technique. A divergence-preserving operator for AMR has been applied to enforce the divergence-free constraint on the magnetic field. GAMER-MHD fully exploits concurrent execution of the graphics processing unit (GPU) MHD solver and the other central-processing-unit computations pertinent to AMR. We perform various standard tests to demonstrate that GAMER-MHD is both second-order accurate and robust, producing results as accurate as those given by high-resolution uniform-grid runs. We also explore a new 3D MHD test, where the magnetic field assumes the Arnold–Beltrami–Childress configuration, temporarily becomes turbulent with current sheets, and finally settles to a lowest-energy equilibrium state. This 3D problem is adopted for the performance test of GAMER-MHD. The single-GPU performance reaches 1.2 × 10^8 and 5.5 × 10^7 cell updates per second for the single- and double-precision calculations, respectively, on a Tesla P100. We also demonstrate a parallel efficiency of ∼70% for both weak and strong scaling using 1024 XK nodes on the Blue Waters supercomputer.
Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units*
Hardy, David J.; Stone, John E.; Schulten, Klaus
2009-01-01
Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132
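The long-range lattice computation described above is, at heart, a correlation of a fixed weight stencil with the lattice charges. The sketch below shows it in 1-D with a hypothetical softened, truncated kernel; the real method uses 3-D lattices and streams tiles through the GPU memory subsystems:

```python
def cutoff_pair_potential(q, w):
    """1-D sketch of the multilevel-summation long-range part:
    correlate a fixed stencil of 'weights' w (a softened potential
    kernel, truncated at a cutoff) with the lattice charges q."""
    r = len(w) // 2                      # stencil half-width (the cutoff)
    n = len(q)
    phi = [0.0] * n
    for i in range(n):                   # potential at each lattice point
        for k in range(-r, r + 1):
            j = i + k
            if 0 <= j < n:               # truncate at the lattice edge
                phi[i] += w[k + r] * q[j]
    return phi

# Hypothetical softened kernel, cut off after two lattice spacings
w = [0.5, 1.0, 2.0, 1.0, 0.5]
print(cutoff_pair_potential([0.0, 1.0, 0.0, 0.0, -1.0], w))
```

Because the stencil is identical at every lattice point, the work maps naturally onto GPU multiprocessors, which is what the paper exploits.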
Visualization in aerospace research with a large wall display system
NASA Astrophysics Data System (ADS)
Matsuo, Yuichi
2002-05-01
National Aerospace Laboratory of Japan has built a large-scale visualization system with a large wall-type display. The system has been operational since April 2001 and comprises a 4.6x1.5-meter (15x5-foot) rear projection screen with 3 BARCO 812 high-resolution CRT projectors. The reason we adopted the 3-gun CRT projectors is their support for stereoscopic viewing, ease of color/luminosity matching and accuracy of edge-blending. The system is driven by a new SGI Onyx 3400 server of distributed shared-memory architecture with 32 CPUs, 64 GBytes memory, 1.5 TBytes FC RAID disk and 6 IR3 graphics pipelines. Software is another important issue in making full use of the system. We have introduced some applications available in a multi-projector environment, such as AVS/MPE, EnSight Gold and COVISE, and have been developing software tools that create volumetric images using SGI graphics libraries. The system is mainly used for visualization of computational fluid dynamics (CFD) simulations in aerospace research. Visualized CFD results help us design improved configurations of aerospace vehicles and analyze their aerodynamic performance. These days we also use it for various collaborations among researchers.
Cr{sub 2}O{sub 5} as new cathode for rechargeable sodium ion batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Xu-Yong; Chien, Po-Hsiu; Rose, Alyssa M.
2016-10-15
Chromium oxide, Cr{sub 2}O{sub 5}, was synthesized by pyrolyzing CrO{sub 3} at 350 °C and employed as a new cathode in rechargeable sodium ion batteries. Cr{sub 2}O{sub 5}/Na rechargeable batteries delivered high specific capacities up to 310 mAh/g at a current density of C/16 (or 20 mA/g). High-resolution solid-state {sup 23}Na NMR both qualitatively and quantitatively revealed the reversible intercalation of Na ions into the bulk electrode and the participation of Na ions in the formation of the solid-electrolyte interphase, largely at low potentials. Amorphization of the electrode structure occurred during the first discharge, as revealed by both NMR and X-ray diffraction data. CrO{sub 3}-catalyzed electrolyte degradation and loss in electronic conductivity led to gradual capacity fading. The specific capacity stabilized at >120 mAh/g after 50 charge-discharge cycles. Further improvement in electrochemical performance is possible via electrode surface modification, polymer binder incorporation, or designs of new morphologies. - Graphical abstract: Electrochemical profile of a Cr{sub 2}O{sub 5}/Na battery cell and high-resolution solid-state {sup 23}Na MAS NMR spectrum of a Cr{sub 2}O{sub 5} electrode discharged to 2 V. - Highlights: • Cr{sub 2}O{sub 5} was synthesized and used as a new cathode in rechargeable Na ion batteries. • A high capacity of 310 mAh/g and an energy density of 564 Wh/kg were achieved. • High-resolution solid-state {sup 23}Na NMR was employed to follow the reaction mechanisms.
Sub-pixel analysis to support graphic security after scanning at low resolution
NASA Astrophysics Data System (ADS)
Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve
2006-02-01
Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time, and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check 21 Act (Check Clearing for the 21st Century Act) became effective in 2004 and allows images of checks to be handled by banks like usual paper checks. We use this situation of check scanning as our primary benchmark for graphic security features after scanning. We first present a quick review of the most common graphic security features currently found on checks, with their specific purposes, qualities and disadvantages, and we demonstrate their poor survivability under the average scanning conditions expected from the Check 21 Act. We then present a novel method for measuring distances between, and rotations of, line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so that we can determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we apply our method to fraud detection of documents after gray-scale scanning at 300 dpi resolution.
We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.
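Sub-pixel measurement of this kind rests on the fact that a scanned line spreads its ink over several pixels, so an intensity-weighted centroid locates it to a fraction of a pixel. The sketch below is a simplified stand-in for the paper's print-model-based refinement, with made-up scan values:

```python
def subpixel_line_center(profile):
    """Locate a scanned line to sub-pixel accuracy.

    'profile' holds ink density per pixel across the line; the
    intensity-weighted centroid falls between pixel centers, giving
    a position finer than the scanning grid. (A simplified stand-in
    for a print-model-based refinement.)
    """
    total = sum(profile)
    return sum(i * v for i, v in enumerate(profile)) / total

# Hypothetical densities for a line straddling pixels 2 and 3:
center = subpixel_line_center([0.0, 0.1, 0.8, 0.9, 0.2, 0.0])
print(center)  # about 2.6: between pixel centers, finer than one pixel
```

Differences between two such centroids give line spacings, and hence the distance and periodicity irregularities used for alteration detection.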
NASA Astrophysics Data System (ADS)
Jung, E.
1984-05-01
A color recording unit was designed for output and control of digitized picture data within computer-controlled reproduction and picture processing systems. In order to obtain a high-quality color proof picture similar to a color print, with reduced time and material consumption, a photographic color film was exposed pixelwise by modulated laser beams of three wavelengths for red, green and blue light. Components from different manufacturers for lasers, acousto-optic modulators and polygon mirrors were tested, as well as different recording methods (continuous-tone or screened mode, with a drum or flatbed recording principle). Besides its application in the graphic arts - the proof recorder CPR 403 with continuous-tone color recording on a drum - such a color hardcopy peripheral unit with large picture formats and high resolution can be used in medicine, communication, and satellite picture processing.
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
NASA Technical Reports Server (NTRS)
Gott, Charles; Galicki, Peter; Shores, David
1990-01-01
The Helmet Mounted Display system and Part Task Trainer are two projects currently underway that are closely related to the in-flight crew training concept. The first project is a training simulator and an engineering analysis tool. The simulator's unique helmet mounted display actually projects the wearer into the simulated environment of 3-D space. Miniature monitors are mounted in front of the wearer's eyes. The Part Task Trainer is a kinematic simulator for the Shuttle Remote Manipulator System. The simulator consists of a high-end graphics workstation with a high-resolution color screen and a number of input peripherals that create a functional equivalent of the RMS control panel in the back of the Orbiter. It is being used in the training cycle for Shuttle crew members. Activities are underway to expand the capabilities of the Helmet Mounted Display system and the Part Task Trainer.
Map of the Pluto System - Children's Edition
NASA Astrophysics Data System (ADS)
Hargitai, H. I.
2016-12-01
Cartography is a powerful tool in the scientific visualization and communication of spatial data. Cartographic visualization for children requires special methods. Although almost all known solid-surface bodies in the Solar System have been mapped in detail over the last five decades, books and publications that target children, tweens and teens never include any of the cartographic results of these missions. We have developed a series of large-size planetary maps in collaboration with planetary scientists, cartographers and graphic artists. The maps are based on photomosaics and DTMs that were redrawn as artwork. This process necessarily involved generalization, interpretation and transformation into a visual language that can be understood by children. In the first project we selected six planetary bodies (Venus, the Moon, Mars, Io, Europa and Titan) and invited six children's book illustrators. Although the overall structures of the maps look similar, the visual approaches were significantly different. An important addition was that the maps contained a narrative: different characters - astronauts or "alien-like lifeforms" - interacted with the surface. The map contents were translated into 11 languages and published online at https://childrensmaps.wordpress.com. We report here on the newest map of the series. Following the New Horizons Pluto flyby we started working on a map that, unlike the others, depicts a planetary system, not just one body. Since only one hemisphere was imaged in high resolution, this map shows the encounter hemispheres of Pluto and Charon. Projected high-resolution image mosaics with informal nomenclature were provided by the New Horizons team. The graphic artist is Adrienn Gyöngyösi. Our future plan is to produce a book-format Children's Atlas of Solar System bodies that makes planetary cartographic and astrogeologic results more accessible to children, and to the next generation of planetary scientists among them.
Higher-order ice-sheet modelling accelerated by multigrid on graphics cards
NASA Astrophysics Data System (ADS)
Brædstrup, Christian; Egholm, David
2013-04-01
Higher-order ice-flow modelling is very computationally intensive, owing primarily to the nonlinear influence of the horizontal stress coupling. When applied to simulating long-term glacial landscape evolution, ice-sheet models must cover very long time series, while both high temporal and spatial resolution is needed to resolve small effects. Higher-order and full-Stokes models have therefore seen very limited use in this field. However, recent advances in graphics card (GPU) technology for high-performance computing have proven extremely efficient in accelerating many large-scale scientific computations. General-purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists working on ice-flow models. Our current research focuses on utilising the GPU as a tool in ice-sheet and glacier modelling. To this end we have implemented the Integrated Second-Order Shallow Ice Approximation (iSOSIA) equations on the device using the finite difference method. To accelerate the computations, the GPU solver uses a nonlinear Red-Black Gauss-Seidel iterator coupled with a Full Approximation Scheme (FAS) multigrid setup to further aid convergence. The GPU finite difference implementation provides inherent parallelization that scales from hundreds to several thousands of cores on newer cards. We demonstrate the efficiency of the GPU multigrid solver using benchmark experiments.
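A minimal version of the red-black smoother at the core of such a solver, written here for the 2-D Poisson equation rather than the iSOSIA equations, looks like this; the two-color ordering is what exposes the parallelism a GPU needs:

```python
def redblack_gs(u, f, h, sweeps):
    """Red-Black Gauss-Seidel smoothing for the 2-D Poisson problem
    -laplacian(u) = f on a square grid with fixed (Dirichlet) boundaries.

    All points of one color depend only on the other color's values,
    so each half-sweep can update its points in any order (or all at
    once on a GPU) without changing the result.
    """
    n = len(u)
    for _ in range(sweeps):
        for color in (0, 1):                       # red points, then black
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 == color:
                        u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                          u[i][j - 1] + u[i][j + 1] +
                                          h * h * f[i][j])
    return u

# Zero source, zero boundary: the interior must relax toward zero.
u = [[1.0 if 0 < i < 5 and 0 < j < 5 else 0.0 for j in range(6)]
     for i in range(6)]
f = [[0.0] * 6 for _ in range(6)]
u = redblack_gs(u, f, 1.0, 200)
```

In the full FAS multigrid cycle this smoother is applied on each grid level, with the coarse grids correcting the smooth error components the smoother removes only slowly.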
History of one family of atmospheric radiative transfer codes
NASA Astrophysics Data System (ADS)
Anderson, Gail P.; Wang, Jinxue; Hoke, Michael L.; Kneizys, F. X.; Chetwynd, James H., Jr.; Rothman, Laurence S.; Kimball, L. M.; McClatchey, Robert A.; Shettle, Eric P.; Clough, Shepard A.; Gallery, William O.; Abreu, Leonard W.; Selby, John E. A.
1994-12-01
Beginning in the early 1970's, the then Air Force Cambridge Research Laboratory initiated a program to develop computer-based atmospheric radiative transfer algorithms. The first attempts were translations of graphical procedures described in a 1970 report on The Optical Properties of the Atmosphere, based on empirical transmission functions and effective absorption coefficients derived primarily from controlled laboratory transmittance measurements. The fact that spectrally-averaged atmospheric transmittance T does not obey the Beer-Lambert law (T = exp(-σ·η), where σ is a species absorption cross section, independent of η, the species column amount along the path) at any but the finest spectral resolution was already well known. Band models to describe this gross behavior were developed in the 1950's and 60's. Thus began LOWTRAN, the Low Resolution Transmittance Code, first released in 1972. This limited initial effort has now progressed to a set of codes and related algorithms (including line-of-sight spectral geometry, direct and scattered radiance and irradiance, non-local thermodynamic equilibrium, etc.) that contain thousands of coding lines, hundreds of subroutines, and improved accuracy, efficiency, and, ultimately, accessibility. This review will include LOWTRAN, HITRAN (atlas of high-resolution molecular spectroscopic data), FASCODE (Fast Atmospheric Signature Code), and MODTRAN (Moderate Resolution Transmittance Code), their permutations, validations, and applications, particularly as related to passive remote sensing and energy deposition.
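The cited failure of the Beer-Lambert law for band-averaged transmittance is easy to reproduce numerically: averaging exponentials over lines of different strength breaks the exponential dependence on column amount. The two cross sections below are hypothetical, standing in for one strong and one weak line within a band:

```python
import math

def band_avg_T(eta, sigmas=(0.1, 10.0)):
    """Transmittance averaged over a crude two-line 'band'.

    Each monochromatic line obeys Beer-Lambert, T = exp(-sigma * eta),
    but their average over the band does not.
    """
    return sum(math.exp(-s * eta) for s in sigmas) / len(sigmas)

T1 = band_avg_T(1.0)   # one unit of column amount
T2 = band_avg_T(2.0)   # doubling the path...
# ...does NOT square the transmittance, as Beer-Lambert would demand:
print(T2, T1 * T1)     # ~0.409 vs ~0.205
```

This non-exponential dependence on η is precisely the behavior the band models mentioned above were built to describe.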
ERIC Educational Resources Information Center
Boerman-Cornell, William
2012-01-01
Recent studies of graphic novels (book-length fiction or non-fiction narratives that employ the conventions of comic books to convey meaning) and multimodality have hinted that graphic novels (GNs) might offer a great deal of meaning-making potential to readers. Some studies have argued that graphic novels could be useful for English Language…
NASA Technical Reports Server (NTRS)
Stockwell, Alan E.; Cooper, Paul A.
1991-01-01
The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.
Living Color Frame System: PC graphics tool for data visualization
NASA Technical Reports Server (NTRS)
Truong, Long V.
1993-01-01
Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.
NASA Astrophysics Data System (ADS)
Testan, Peter R.
1987-04-01
A number of Color Hard Copy (CHC) market drivers are currently indicating strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to products, software, color monitors and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative powers of full-color computer generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena. Thermal transfer, ink jet, photographic and electrophotographic technologies are all expected to be utilized in the business graphics presentation application area in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent to 1990. Increased availability of packages that allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as PostScript, Interpress and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, the sales of color monitors are expected to surpass the sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer generated color output, multiple copies are required for distribution.
Product introductions of a new generation of color copiers are now underway, with additional introductions expected during 1987. The color hard copy market continues to be in a state of constant change, typical of any immature market. However, much of the change is positive. During 1985, the color hard copy market generated $1.2 billion. By 1990, total market revenue is expected to exceed $5.5 billion. The business graphics CHC application area is expected to grow at a compound annual growth rate greater than 40 percent through 1990.
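The compound annual growth rate figures quoted above follow the standard compounding formula; a quick sketch (with illustrative inputs, not the survey's own data) shows how fast a 40 percent CAGR grows revenue.

```python
# Compound annual growth: revenue_n = revenue_0 * (1 + CAGR) ** years.
# Illustrative numbers only; not figures from the market survey.

def project(revenue_0: float, cagr: float, years: int) -> float:
    """Project revenue forward by compounding at a fixed annual rate."""
    return revenue_0 * (1 + cagr) ** years

# $1.0B growing at 40% per year for 5 years roughly quintuples.
five_year = project(1.0, 0.40, 5)   # about 5.38
```

A 40 percent CAGR therefore more than quintuples a revenue base in five years, consistent with the order of magnitude of the projections above.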
Using 1H2O MR to measure and map sodium pump activity in vivo.
Springer, Charles S
2018-06-01
The cell plasma membrane Na+,K+-ATPase [NKA] is one of biology's most [if not the most] significant enzymes. By actively transporting Na+ out [and K+ in], it maintains the vital trans-membrane ion concentration gradients and the membrane potential. The forward NKA reaction is shown in the Graphical Abstract [which is elaborated in the text]. Crucially, NKA does not operate in isolation. There are other transporters that conduct K+ back out of [II, Graphical Abstract] and Na+ back into [III, Graphical Abstract] the cell. Thus, NKA must function continually. Principal routes for ATP replenishment include mitochondrial oxidative phosphorylation, glycolysis, and creatine kinase [CrK] activity. However, it has never been possible to measure, let alone map, this integrated, cellular homeostatic NKA activity in vivo. Active trans-membrane water cycling [AWC] promises a way to do this with 1H2O MR. In the Graphical Abstract, the AWC system is characterized by active contributions to the unidirectional rate constants for steady-state water efflux and influx, respectively, kio(a) and koi(a). The discovery, validation, and initial exploration of active water cycling are reviewed here. Promising applications in cancer, cardiological, and neurological MRI are covered. This initial work employed paramagnetic Gd(III) chelate contrast agents [CAs]. However, the significant problems associated with in vivo CA use are also reviewed. A new analysis of water diffusion-weighted MRI [DWI] is presented. Preliminary results suggest a non-invasive way to measure the cell number density [ρ (cells/μL)], the mean cell volume [V (pL)], and the cellular NKA metabolic rate [cMR_NKA (fmol(ATP)/s/cell)] with high spatial resolution. These crucial cell biology properties have not before been accessible in vivo. Furthermore, initial findings indicate their absolute values can be determined. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
Computer graphics applications to crew displays
NASA Technical Reports Server (NTRS)
Wyzkoski, J.
1983-01-01
Astronauts are provided much data and information via the monochrome CRT displays on the orbiter. For this project, two areas were investigated for the possible introduction of computer graphics to enhance and extend the utility of these displays. One involved reviewing the current orbiter displays and identifying those which could be improved via computer graphics. As an example, the tabular data on electrical power distribution and control was enhanced by the addition of color and bar charts. The other dealt with the development of an aid to berthing a payload with the Remote Manipulator System (RMS). This aid consists of a graphics display of the top, front and side views of the payload and cargo bay, together with point of resolution (POR) position and attitude data for the current location of the payload. The initial implementation was on an IBM PC clone. The demonstration software installed in the Johnson Space Center Manipulator Development Facility (MDF) was reviewed. Due to current hardware limitations, the MDF version is slow, i.e., about a 40+ second update rate, and hence not real-time. Despite this fact, the evaluation of this additional visual cue as an RMS operator aid indicates that this display, with modifications for speed, etc., can assist the crew. Further development is appropriate.
NASA Astrophysics Data System (ADS)
Oliveira, Henrique; Rodrigues, Marco; Radius, Andrea
2012-01-01
Airport Obstruction Charts (AOCs) are graphical representations of natural or man-made obstructions (their locations and heights) around airfields, according to International Civil Aviation Organization (ICAO) Annexes 4, 14 and 15. One of the most important types of data used in AOC production/update tasks is a Digital Surface Model (first reflective surface) of the surveyed area. The development of advanced remote sensing technologies provides the tools for obstruction data acquisition, while Geographic Information Systems (GIS) present the perfect platform for storing and analyzing this type of data, enabling the production of digital AOCs and thereby greatly contributing to the situational awareness of pilots and to the level of air navigation safety [1]. Data corresponding to the first reflective surface can be acquired through Airborne Laser Scanning and Light Detection and Ranging (ALS/LIDAR) or spaceborne SAR systems. The need to survey broad areas, such as the entire territory of a state, shows that spaceborne SAR systems are the most adequate in economic and feasibility terms for performing the monitoring and producing a high resolution Digital Surface Model (DSM). High resolution DSM generation depends on many factors: the available data set, the technique used and the parameter settings.
To increase the precision and obtain high resolution products, two techniques are available that use a stack of data: the PS (Permanent Scatterers) technique [2], which uses a large stack of data to identify many stable and coherent targets through multi-temporal analysis, removing the atmospheric contribution and minimizing the estimation errors, and the Small Baseline Subset (SBAS) technique ([3],[4]), which relies on the use of small baseline SAR interferograms and on the application of the so-called singular value decomposition (SVD) method in order to link independent SAR acquisition data sets separated by large baselines, thus increasing the number of data used for the analysis.
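As a rough illustration of the SVD-based linking step in SBAS: each interferogram relates a pair of acquisitions through a design matrix, and an SVD-based least-squares solve yields the minimum-norm solution that links the otherwise independent subsets. The dimensions and values below are a toy example, not data from the paper.

```python
# Toy SBAS-style linear system: 4 acquisition dates, 3 interferograms.
# Each row of A differences two dates; d holds the observed phase
# differences. Values are illustrative only.
import numpy as np

A = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, -1.0, 1.0, 0.0],
              [0.0, 0.0, -1.0, 1.0]])
d = np.array([2.0, 1.0, 3.0])

# np.linalg.lstsq solves the rank-deficient system via the SVD and
# returns the minimum-norm solution, which is how SBAS links subsets.
v, *_ = np.linalg.lstsq(A, d, rcond=None)
```

Because the system is underdetermined but consistent, the SVD picks the unique minimum-norm solution among the infinitely many that fit the observations.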
VitaPad: visualization tools for the analysis of pathway data.
Holford, Matthew; Li, Naixin; Nadkarni, Prakash; Zhao, Hongyu
2005-04-15
Packages that support the creation of pathway diagrams are limited by their inability to be readily extended to new classes of pathway-related data. VitaPad is a cross-platform application that enables users to create and modify biological pathway diagrams and incorporate microarray data with them. It improves on existing software in the following areas: (i) It can create diagrams dynamically through graph layout algorithms. (ii) It is open-source and uses an open XML format to store data, allowing for easy extension or integration with other tools. (iii) It features a cutting-edge user interface with intuitive controls, high-resolution graphics and fully customizable appearance. http://bioinformatics.med.yale.edu matthew.holford@yale.edu; hongyu.zhao@yale.edu.
ITA, a portable program for the interactive analysis of data from tracer experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, R.; Ashley, K.
ITA is a portable program for analyzing data from tracer experiments, with most of the mathematical and graphical work carried out by subroutines from the NAG and DASL libraries. The program can be used in batch or interactive mode, with commands typed in an English-like language in free format. Data can be entered from a terminal keyboard or read from a file, and can be validated by printing or plotting them. Erroneous values can be corrected by appropriate editing. Analysis can involve elementary statistics, multiple-isotope crossover corrections, convolution or deconvolution, polyexponential curve-fitting, spline interpolation and/or compartmental analysis. On those installations with the appropriate hardware, high-resolution graphs can be drawn.
NASA Astrophysics Data System (ADS)
Plebe, Alice; Grasso, Giorgio
2016-12-01
This paper describes a system developed for the simulation of flames inside Blender, an open-source 3D computer graphics package, with the aim of analyzing, in virtual reality, hazard scenarios in large-scale industrial plants. Blender's advantages are its ability to render the very complex structure of large industrial plants at high resolution and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distance, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refinery are presented.
Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Crockett, Thomas W.
1999-01-01
This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
M.S.L.A.P. Modular Spectral Line Analysis Program documentation
NASA Technical Reports Server (NTRS)
Joseph, Charles L.; Jenkins, Edward B.
1991-01-01
MSLAP is a software package for analyzing spectra, providing the basic structure to identify spectral features, make quantitative measurements of these features, and store the measurements for convenient access. MSLAP can be used to measure not only the zeroth moment (equivalent width) of a profile, but also the first and second moments. Optical depths and the corresponding column densities across the profile can be measured as well for sufficiently high resolution data. The software was developed for an interactive, graphical analysis in which the computer carries most of the computational and data organizational burden and the investigator is responsible only for judgment decisions. It employs sophisticated statistical techniques for determining the best polynomial fit to the continuum and for calculating the uncertainties.
NASA Astrophysics Data System (ADS)
Duarte, Débora; Santos, Joana; Terrinha, Pedro; Brito, Pedro; Noiva, João; Ribeiro, Carlos; Roque, Cristina
2017-04-01
More than 300 nautical miles of multichannel seismic reflection data were acquired in the scope of the ASTARTE project (Assessment Strategy and Risk Reduction for Tsunamis in Europe), off Quarteira, Algarve, South Portugal. The main goal of this very high resolution multichannel seismic survey was to obtain high-resolution images of the sedimentary record in order to discern the existence of high energy events, possibly tsunami backwash deposits associated with large magnitude earthquakes generated at the Africa-Eurasia plate boundary. This seismic dataset was processed at the Instituto Português do Mar e da Atmosfera (IPMA) with the SeisSpace ProMAX seismic processing software. A tailor-made processing flow was applied, focusing on the removal of the seafloor multiple and on the enhancement of the superficial layers. A sparker source with 300 J of energy and a fire rate of 0.5 s was used onboard the Xunauta, an 18 m long vessel. The preliminary seismostratigraphic interpretation of the Algarve ASTARTE seismic dataset allowed the identification of a complex sequence of seismic units of progradational and aggradational bodies as well as Mass Transport Deposits (MTDs). The MTD package of sediments has a very complex internal structure, is about 20 m thick, and is apparently spatially controlled by an escarpment probably associated with past sea level low stands. The MTDs cover an area of more than 30 km (length) by 5 km (across), approximately parallel to an ancient coastline. Acknowledgements: This work was developed as part of the project ASTARTE, supported by grant agreement No 603839 of the European Union's Seventh Framework Programme (FP7). The Instituto Português do Mar e da Atmosfera acknowledges support by Landmark Graphics (SeisWorks) via the Landmark University Grant Program.
MEMS phase former kit for high-resolution wavefront control
NASA Astrophysics Data System (ADS)
Gehner, Andreas; Wildenhain, Michael; Neumann, Hannes; Elgner, Andreas; Schenk, Harald
2005-08-01
The MEMS Phase Former Kit developed by the Fraunhofer IPMS is a complete spatial light modulator system based on a piston-type Micro Mirror Array (MMA) for use in high-resolution, high-speed optical phase control. It has been designed for easy system integration into a user-specific environment, offering a platform for first practical investigations to open up new applications in adaptive optics. The key component is a finely segmented 240 x 200 array of 40 μm piston-type mirror elements capable of 400 nm analog deflection for a 2π phase modulation in the visible. Each mirror can be addressed and deflected independently by means of an integrated CMOS backplane address circuitry at an 8-bit height resolution. Full user programmability and control is provided by newly developed driver software for Windows XP-based PCs, supporting both a Graphical User Interface (GUI) for stand-alone operation with pre-defined data patterns and an open ActiveX programming interface for closed-loop operation with real-time data from an external source. An IEEE 1394a FireWire interface is used for high-speed data communication with an electronic driving board performing the actual MMA programming and control, allowing for an overall frame rate of up to 500 Hz. Successful proof-of-concept demonstrations have already been given for eye aberration correction in ophthalmology, for error compensation of lightweight primary mirrors of future space telescopes, and for ultra-short laser pulse shaping. Besides a presentation of the basic device concept and system architecture, the paper gives an overview of the results obtained from these applications.
Ultrascale collaborative visualization using a display-rich global cyberinfrastructure.
Jeong, Byungil; Leigh, Jason; Johnson, Andrew; Renambot, Luc; Brown, Maxine; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung
2010-01-01
The scalable adaptive graphics environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.
User's Guide for MapIMG 2: Map Image Re-projection Software Package
Finn, Michael P.; Trent, Jason R.; Buehler, Robert A.
2006-01-01
BACKGROUND Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in commercial software packages, but implementation with data other than points requires specific adaptation of the transformation equations or prior preparation of the data to allow the transformation to succeed. It seems that some of these packages use the U.S. Geological Survey's (USGS) General Cartographic Transformation Package (GCTP) or similar point transformations without adaptation to the specific characteristics of raster data (Usery and others, 2003a). Usery and others (2003b) compiled and tabulated the accuracy of categorical areas in projected raster datasets of global extent. Based on the shortcomings identified in these studies, geographers and applications programmers at the USGS expanded and evolved a USGS software package, MapIMG, for raster map projection transformation (Finn and Trent, 2004). Daniel R. Steinwand of Science Applications International Corporation, National Center for Earth Resources Observation and Science, originally developed MapIMG for the USGS, basing it on GCTP. Through previous and continuing efforts at the USGS' National Geospatial Technical Operations Center, this program has been transformed from an application based on command line input into a software package based on a graphical user interface for Windows, Linux, and other UNIX machines.
NASA Astrophysics Data System (ADS)
Querol, M.; Rodríguez, J.; Toledo, J.; Esteve, R.; Álvarez, V.; Herrero, V.
2016-12-01
Among the different techniques available, the SiPM power supply described in this paper uses output voltage and sensor temperature feedback. A high-resolution ADC digitizes both the output voltage and an analog signal proportional to the SiPM temperature for each of its 16 independent outputs. The appropriate change in the bias voltage is computed in a micro-controller, and this correction is applied via a high resolution DAC to the control input of a DC/DC module that produces the output voltage. This method reduces gain variations from typically 30% to only 0.5% over a 10 °C range. The power supply is housed in a 3U-height aluminum box. A 2.8'' touch screen on the front panel provides local access to the configuration and monitoring functions through a graphical interface. The unit has an Ethernet interface on its rear side for remote operation and integration into slow control systems using the encrypted and secure SSH protocol. A LabVIEW application with an SSH interface has been designed to operate the power supply from a remote computer. The power supply has good characteristics, such as an 85 V output range with 1 mV resolution and stability better than 2 mVP, excellent output load regulation, and programmable rise and fall voltage ramps. Commercial power supplies from well-known manufacturers can show far better specifications, though they can also result in an over-featured and overly costly solution for typical applications.
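The temperature-feedback correction described above can be sketched as follows. The breakdown-voltage drift coefficient and voltage values are illustrative assumptions, not the paper's parameters: SiPM gain tracks the overvoltage (bias minus breakdown voltage), and the breakdown voltage drifts roughly linearly with temperature, so holding the overvoltage constant stabilizes the gain.

```python
# Sketch of temperature-compensated SiPM biasing. All constants are
# illustrative assumptions, not values from the paper.
DVDT = 0.060       # assumed breakdown-voltage drift, V per K
T_REF = 25.0       # reference temperature, deg C
V_BD_REF = 52.0    # assumed breakdown voltage at T_REF, V
OVERVOLTAGE = 3.0  # desired constant overvoltage, V

def corrected_bias(temperature_c: float) -> float:
    """Bias voltage that keeps the overvoltage constant as T drifts.

    This is the correction a micro-controller would compute and send
    to the DAC controlling the DC/DC module.
    """
    v_breakdown = V_BD_REF + DVDT * (temperature_c - T_REF)
    return v_breakdown + OVERVOLTAGE
```

At the reference temperature the bias is simply V_BD_REF + OVERVOLTAGE; a 10 K rise calls for DVDT * 10 = 0.6 V of additional bias, which is the kind of adjustment that shrinks gain variation from tens of percent to well under one percent.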
NASA Astrophysics Data System (ADS)
Idehara, H.; Carbon, D. F.
2004-12-01
We present two new, publicly available tools to support the examination and interpretation of spectra. SCAMP is a specialized graphical user interface for MATLAB. It allows researchers to rapidly intercompare sets of observational, theoretical, and/or laboratory spectra. Users have extensive control over the colors and placement of individual spectra, and over spectrum normalization from one spectral region to another. Spectra can be interactively assigned to user-defined groups and the groupings recalled at a later time. The user can measure/record positions and intensities of spectral features, interactively spline-fit spectra, and normalize spectra by fitted splines. User-defined wavelengths can be automatically highlighted in SCAMP plots. The user can save/print annotated graphical output suitable for a scientific notebook depicting the work at any point. The ASP is a WWW portal that provides interactive access to two spectrum data sets: a library of synthetic stellar spectra and a library of laboratory PAH spectra. The synthetic stellar spectra in the ASP are appropriate to the giant branch with an assortment of compositions. Each spectrum spans the full range from 2 to 600 microns at a variety of resolutions. The ASP is designed to allow users to quickly identify individual features at any resolution that arise from any of the included isotopic species. The user may also retrieve the depth of formation of individual features at any resolution. PAH spectra accessible through the ASP are drawn from the extensive library of spectra measured by the NASA Ames Astrochemistry Laboratory. The user may interactively choose any subset of PAHs in the data set, combine them with user-defined weights and temperatures, and view/download the resultant spectrum at any user-defined resolution. This work was funded by the NASA Advanced Supercomputing Division, NASA Ames Research Center.
Engineering Graphics in Education: Programming and Ready Programs.
ERIC Educational Resources Information Center
Audi, M. S.
1987-01-01
Suggests a method of integrating teaching microcomputer graphics in engineering curricula without encroaching on the fundamental engineering courses. Includes examples of engineering graphics produced by commercial programs and others produced by high-level language programing in a limited credit hour segment of an educational program. (CW)
REQUIREMENTS FOR GRAPHIC TEACHING MACHINES.
ERIC Educational Resources Information Center
HICKEY, ALBERT; AND OTHERS
An experiment was reported which demonstrates that graphics are more effective than symbols in acquiring algebra concepts. The second phase of the study demonstrated that graphics in high school textbooks were reliably classified in a matrix of 480 functional stimulus-response categories. Suggestions were made for extending the classification…
Kobayashi, M; Irino, T; Sweldens, W
2001-10-23
Multiscale computing (MSC) involves the computation, manipulation, and analysis of information at different resolution levels. Widespread use of MSC algorithms and the discovery of important relationships between different approaches to implementation were catalyzed, in part, by the recent interest in wavelets. We present two examples that demonstrate how MSC can help scientists understand complex data. The first is from acoustical signal processing and the second is from computer graphics.
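A one-level Haar transform is the simplest concrete instance of the multiscale decomposition described above: it splits a signal into a coarse (low-resolution) part and detail coefficients, and reconstructs the original exactly. This pure-Python sketch is illustrative and is not code from the article.

```python
# One level of the Haar wavelet transform: averages give the coarse
# (half-resolution) signal, differences give the detail coefficients.

def haar_step(signal):
    """Split an even-length signal into (coarse, detail)."""
    assert len(signal) % 2 == 0
    coarse = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Perfect reconstruction from one Haar level."""
    out = []
    for c, d in zip(coarse, detail):
        out.extend([c + d, c - d])
    return out

coarse, detail = haar_step([4, 2, 5, 7])
# Recursing on the coarse part yields ever lower resolution levels,
# which is the multiscale hierarchy MSC algorithms operate on.
```

Applying `haar_step` repeatedly to the coarse output builds the full multiresolution pyramid used in both signal processing and computer graphics applications.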
Improved crystallization and diffraction of caffeine-induced death suppressor protein 1 (Cid1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yates, Luke A., E-mail: luke@strubi.ox.ac.uk; Durrant, Benjamin P.; Barber, Michael
The use of truncation and RNA-binding mutations of caffeine-induced death suppressor protein 1 (Cid1) as a means to enhance crystallogenesis, leading to an improvement of X-ray diffraction resolution by 1.5 Å, is reported. The post-transcriptional addition of uridines to the 3′-end of RNAs is an important regulatory process that is critical for coding and noncoding RNA stability. In fission yeast and metazoans this untemplated 3′-uridylylation is catalysed by a single family of terminal uridylyltransferases (TUTs) whose members are adapted to specific RNA targets. In Schizosaccharomyces pombe the TUT Cid1 is responsible for the uridylylation of polyadenylated mRNAs, targeting them for destruction. In metazoans, the Cid1 orthologues ZCCHC6 and ZCCHC11 uridylate histone mRNAs, targeting them for degradation, but also uridylate microRNAs, altering their maturation. Cid1 has been studied as a model TUT that has provided insights into the larger and more complex metazoan enzyme system. In this paper, two strategies are described that led to improvements both in the crystallogenesis of Cid1 and in the resolution of diffraction by ∼1.5 Å. These advances have allowed high-resolution crystallographic studies of this TUT system to be initiated.
NASA Astrophysics Data System (ADS)
Cich, Matthew J.; Guillaume, Alexandre; Drouin, Brian; Benner, D. Chris
2017-06-01
Multispectrum analysis can be a challenge for a variety of reasons. It can be computationally intensive to fit a proper line shape model, especially for high resolution experimental data. Band-wide analyses including many transitions along with their interactions, across many pressures and temperatures, are essential to accurately model, for example, atmospherically relevant systems. Labfit is a fast multispectrum analysis program originally developed by D. Chris Benner with a text-based interface. More recently, a graphical user interface was developed at JPL with the goal of increasing the ease of use and also the number of potential users. The HTP lineshape model has been added to Labfit, keeping it up to date with community standards. Recent analyses using Labfit will be shown to demonstrate its ability to competently handle large experimental datasets, including high order lineshape effects, that are otherwise unmanageable.
GIF Animation of Mode Shapes and Other Data on the Internet
NASA Technical Reports Server (NTRS)
Pappa, Richard S.
1998-01-01
The World Wide Web abounds with animated cartoons and advertisements competing for our attention. Most of these figures are animated Graphics Interchange Format (GIF) files. These files contain a series of ordinary GIF images plus control information, and they provide an exceptionally simple, effective way to animate on the Internet. To date, however, this format has rarely been used for technical data, although there is no inherent reason not to do so. This paper describes a procedure for creating high-resolution animated GIFs of mode shapes and other types of structural dynamics data with readily available software. The paper shows three example applications using recent modal test data and video footage of a high-speed sled run. A fairly detailed summary of the GIF file format is provided in the appendix. All of the animations discussed in the paper are posted on the Internet at the following address: http://sdb-www.larc.nasa.gov/.
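The frame-packing step the paper describes — a series of ordinary GIF images plus control information — can be reproduced today with the Pillow imaging library. This is an assumption for illustration; the paper predates Pillow and used other software, and the sweeping-dot frames below merely stand in for rendered mode shapes.

```python
# Pack a sequence of rendered frames into one looping animated GIF
# using Pillow. The frames here are placeholders for mode-shape plots.
from PIL import Image, ImageDraw

frames = []
for step in range(10):
    frame = Image.new("L", (120, 120), color=255)  # white background
    draw = ImageDraw.Draw(frame)
    # A dot sweeping across the frame stands in for one animation frame.
    draw.ellipse((step * 10, 55, step * 10 + 10, 65), fill=0)
    frames.append(frame)

# save_all + append_images writes a single multi-frame GIF; duration is
# the per-frame delay in milliseconds, loop=0 means repeat forever.
frames[0].save("mode_shape.gif", save_all=True,
               append_images=frames[1:], duration=100, loop=0)
```

The control information the paper mentions (per-frame delay, loop count) maps directly onto the `duration` and `loop` arguments.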
Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.
ERIC Educational Resources Information Center
Parkland Coll., Champaign, IL.
A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…
Graphic Communications Objectives. Career Education. DS Manual 2860.1.
ERIC Educational Resources Information Center
Dependents Schools (DOD), Washington, DC.
This instructional guide provides materials for a program in the Department of Defense Dependents Schools designed to provide the high school student with the opportunity to explore graphic communications. Introductory materials include the philosophy of graphic communications, organization and numbering code, and use of symbols. The general and…
Graphic Communications. Curriculum Guide.
ERIC Educational Resources Information Center
North Dakota State Board for Vocational Education, Bismarck.
This guide provides the basic foundation to develop a one-semester course based on the cluster concept, graphic communications. One of a set of six guides for an industrial arts curriculum at the junior high school level, it suggests exploratory experiences designed to (1) develop an awareness and understanding of the drafting and graphic arts…
High Fidelity Images--How They Affect Learning.
ERIC Educational Resources Information Center
Kwinn, Ann
1997-01-01
Discusses the use of graphics in instruction and concludes that cosmetic and motivational graphics can be more realistic and detailed for affective goals, while schematic graphics may be best for the more cognitive functions of focusing attention and presenting actual content. Domains of learning, mental models, and visualization are examined.…
Stable stress‐drop measurements and their variability: Implications for ground‐motion prediction
Hanks, Thomas C.; Baltay, Annemarie S.; Beroza, Gregory C.
2013-01-01
We estimate the a_rms-stress drop, Graphic, (Hanks, 1979) using acceleration time records of 59 earthquakes from two earthquake sequences in eastern Honshu, Japan. These acceleration-based static stress drops compare well to stress drops calculated for the same events by Baltay et al. (2011) using an empirical Green's function (eGf) approach. This agreement supports the assumption that earthquake acceleration time histories in the bandwidth between the corner frequency and a maximum observed frequency can be considered white, Gaussian, noise. Although the Graphic is computationally simpler than the eGf-based Graphic-stress drop, and is used as the "stress parameter" to describe the earthquake source in ground-motion prediction equations, we find that it only compares well to the Graphic at source-station distances of ∼20 km or less because there is no consideration of whole-path anelastic attenuation or scattering. In these circumstances, the correlation between the Graphic and Graphic is strong. Events with high and low stress drops obtained through the eGf method have similarly high and low Graphic. We find that the inter-event standard deviation of stress drop, for the population of earthquakes considered, is similar for both methods, 0.40 for the Graphic method and 0.42 for the Graphic, in log10 units, provided we apply the ∼20 km distance restriction to Graphic. This indicates that the observed variability is inherent to the source, rather than attributable to uncertainties in stress-drop estimates.
Graphic overlays in high-precision teleoperation: Current and future work at JPL
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1989-01-01
In space teleoperation additional problems arise, including signal transmission time delays. These can greatly reduce operator performance. Recent advances in graphics open new possibilities for addressing these and other problems. Currently a multi-camera system with normal 3-D TV and video graphics capabilities is being developed. Trained and untrained operators will be tested for high precision performance using two force reflecting hand controllers and a voice recognition system to control two robot arms and up to 5 movable stereo or non-stereo TV cameras. A number of new techniques of integrating TV and video graphics displays to improve operator training and performance in teleoperation and supervised automation are evaluated.
Transformation of Graphical ECA Policies into Executable PonderTalk Code
NASA Astrophysics Data System (ADS)
Romeikat, Raphael; Sinsel, Markus; Bauer, Bernhard
Rules are becoming more and more important in business modeling and systems engineering and are recognized as a high-level programming paradigm. For the effective development of rules it is desirable to start at a high level, e.g. with graphical rules, and to refine them into code of a particular rule language for implementation purposes later. A model-driven approach is presented in this paper to transform graphical rules into executable code in a fully automated way. The focus is on event-condition-action policies as a special rule type. These are modeled graphically and translated into the PonderTalk language. The approach may be extended to integrate other rule types and languages as well.
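The model-to-text step of such a transformation can be sketched as a template rendering over a simple policy model. The emitted syntax below is invented for illustration only; it is NOT actual PonderTalk, and the policy names are hypothetical.

```python
# Illustrative model-to-text transformation for an ECA policy.
# The policy model mirrors the graphical rule; render() emits rule-
# language source. The output syntax is made up, not PonderTalk.
from dataclasses import dataclass

@dataclass
class EcaPolicy:
    name: str
    event: str       # triggering event
    condition: str   # guard expression
    action: str      # action to execute

def render(policy: EcaPolicy) -> str:
    """Template-based code generation for one event-condition-action rule."""
    return (f"policy {policy.name} {{\n"
            f"  on {policy.event};\n"
            f"  if {policy.condition};\n"
            f"  do {policy.action};\n"
            f"}}")

src = render(EcaPolicy("highLoad", "cpuLoadChanged",
                       "load > 0.9", "notifyAdmin"))
```

A real implementation would walk the graphical model's abstract syntax rather than a flat dataclass, but the fully automated event/condition/action mapping is the same idea.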
Orthorectification by Using Gpgpu Method
NASA Astrophysics Data System (ADS)
Sahin, H.; Kulur, S.
2012-07-01
Thanks to the nature of graphics processing, the newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capabilities and high memory bandwidth compared to central processing units (CPUs). Data-parallel computation can be briefly described as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capabilities has attracted the attention of researchers dealing with complex problems that need high-level calculations. This interest has given rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful hardware that is also cheap and affordable, so they have become an alternative to conventional processors. Graphics chips, once standard application hardware, have been transformed into modern, powerful and programmable processors to meet overall computing needs. Especially in recent years, the phenomenon of using graphics processing units for general purpose computation has drawn researchers and developers to this point. The biggest problem is that graphics processing units use different programming models, unlike current programming methods. Therefore, efficient GPU programming requires re-coding the current program algorithm with the limitations and structure of the graphics hardware in mind. Currently, multi-core processors cannot be programmed using traditional programming methods, and event-driven procedural programming cannot be used to program them. GPUs are especially effective at repeating computing steps over many data elements when high accuracy is needed.
Thus, it provides the computing process more quickly and accurately. Compared to the GPUs, CPUs which perform just one computing in a time according to the flow control are slower in performance. This structure can be evaluated for various applications of computer technology. In this study covers how general purpose parallel programming and computational power of the GPUs can be used in photogrammetric applications especially direct georeferencing. The direct georeferencing algorithm is coded by using GPGPU method and CUDA (Compute Unified Device Architecture) programming language. Results provided by this method were compared with the traditional CPU programming. In the other application the projective rectification is coded by using GPGPU method and CUDA programming language. Sample images of various sizes, as compared to the results of the program were evaluated. GPGPU method can be used especially in repetition of same computations on highly dense data, thus finding the solution quickly.
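The data-parallel idea described above (mapping each data element to its own processing thread) can be illustrated with a short sketch; the NumPy version below is a hypothetical stand-in for the pattern, not the authors' CUDA code.

```python
import numpy as np

def kernel(x):
    # The per-element "kernel": on a GPU each thread would run this once,
    # indexed by its thread ID; here NumPy applies it element-wise.
    return x * x + 1.0

data = np.arange(4, dtype=np.float64)   # [0, 1, 2, 3]
result = kernel(data)                   # same kernel, every element in parallel
print(result)                           # squares plus one, computed element-wise
```

In CUDA the kernel body would be launched once per element over a grid of threads; the vectorized call expresses the identical element-to-thread mapping.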
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Huizhen; Zhao, Dian; Cui, Yuangjing, E-mail: cuiyj@zju.edu.cn
Temperature measurements and thermal mapping using luminescent MOFs operating in the high-temperature range are of great interest for micro-electronic diagnosis. In this paper, we report a thermostable Eu/Tb-mixed MOF, Eu{sub 0.37}Tb{sub 0.63}-BTC-a, exhibiting strong luminescence at elevated temperature, which can serve as a ratiometric luminescent thermometer for the high-temperature range. The high-temperature operating range (313–473 K), high relative sensitivity, and accurate temperature resolution make such a Eu/Tb-mixed MOF useful for micro-electronic diagnosis. - Graphical abstract: A thermostable Eu/Tb-mixed MOF, Eu{sub 0.37}Tb{sub 0.63}-BTC-a, was developed as a ratiometric luminescent thermometer in the high-temperature range of 313–473 K. - Highlights: • A thermostable Eu/Tb-codoped MOF exhibiting strong luminescence at elevated temperature is reported. • The high-temperature operating range of Eu{sub 0.37}Tb{sub 0.63}-BTC-a is 313–473 K. • The mechanism of Eu{sub 0.37}Tb{sub 0.63}-BTC-a used as a thermometer is also discussed.
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
Wind resource assessment in heterogeneous terrain
NASA Astrophysics Data System (ADS)
Vanderwel, C.; Placidi, M.; Ganapathisubramani, B.
2017-03-01
High-resolution particle image velocimetry data obtained in rough-wall boundary layer experiments are re-analysed to examine the influence of surface roughness heterogeneities on wind resource. Two different types of heterogeneities are examined: (i) surfaces with repeating roughness units of the order of the boundary layer thickness (Placidi & Ganapathisubramani. 2015 J. Fluid Mech. 782, 541-566. (doi:10.1017/jfm.2015.552)) and (ii) surfaces with streamwise-aligned elevated strips that mimic adjacent hills and valleys (Vanderwel & Ganapathisubramani. 2015 J. Fluid Mech. 774, 1-12. (doi:10.1017/jfm.2015.228)). For the first case, the data show that the power extraction potential is highly dependent on the surface morphology with a variation of up to 20% in the available wind resource across the different surfaces examined. A strong correlation is shown to exist between the frontal and plan solidities of the rough surfaces and the equivalent wind speed, and hence the wind resource potential. These differences are also found in profiles of
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
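The voxel-based flag map used above to drop redundant points can be sketched roughly as follows; the function name and voxel size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def voxel_filter(points, voxel_size=0.1):
    """Keep one point per occupied voxel, mimicking a voxel-based flag map
    used as a comparison table to remove redundant points (sketch only)."""
    keys = np.floor(points / voxel_size).astype(np.int64)  # 3D grid indices
    seen = set()
    kept = []
    for p, k in zip(points, map(tuple, keys)):
        if k not in seen:              # flag not yet set for this voxel
            seen.add(k)                # set the flag; later points are skipped
            kept.append(p)
    return np.array(kept)

pts = np.array([[0.01, 0.02, 0.00],
                [0.03, 0.04, 0.05],   # falls in the same voxel -> redundant
                [0.25, 0.00, 0.00]])
filtered = voxel_filter(pts)           # two voxels occupied, two points kept
```

A real-time system would replace the Python set with a preallocated 3D flag array so insertion is O(1) per point with no hashing.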
Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L; Wang, Xueding; Liu, Xiaojun
2013-08-01
Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging, all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
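A minimal delay-and-sum back-projection, the unoptimized CPU baseline of the BP algorithm mentioned above, might look like the sketch below (names and parameters are hypothetical; the system's GPU code is far more elaborate).

```python
import numpy as np

def backproject(signals, sensor_xy, grid_xy, fs, c=1500.0):
    """Naive delay-and-sum back-projection.
    signals: (n_sensors, n_samples) PA time series
    sensor_xy, grid_xy: (n, 2) positions in meters
    fs: sampling rate in Hz; c: assumed speed of sound in m/s."""
    image = np.zeros(len(grid_xy))
    for s, pos in zip(signals, sensor_xy):
        d = np.linalg.norm(grid_xy - pos, axis=1)   # pixel-to-sensor distance
        idx = np.round(d / c * fs).astype(int)      # time-of-flight sample
        idx = np.clip(idx, 0, s.size - 1)
        image += s[idx]                             # sum the delayed samples
    return image
```

Each pixel's sum is independent of every other pixel's, which is exactly why the algorithm maps well onto thousands of GPU threads.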
NASA Astrophysics Data System (ADS)
Pascal, Christophe
2004-04-01
Stress inversion programs are nowadays frequently used in tectonic analysis. The purpose of this family of programs is to reconstruct the stress tensor characteristics from fault slip data acquired in the field or derived from earthquake focal mechanisms (i.e. inverse methods). Until now, little attention has been paid to direct methods (i.e. to determine fault slip directions from an inferred stress tensor). During the 1990s, the fast increase in resolution in 3D seismic reflection techniques made it possible to determine the geometry of subsurface faults with a satisfactory accuracy but not to determine precisely their kinematics. This recent improvement allows the use of direct methods. A computer program, namely SORTAN, is introduced. The program is highly portable on Unix platforms, straightforward to install and user-friendly. The computation is based on classical stress-fault slip relationships and allows for fast treatment of a set of faults and graphical presentation of the results (i.e. slip directions). In addition, the SORTAN program permits one to test the sensitivity of the results to input uncertainties. It is a complementary tool to classical stress inversion methods and can be used to check the mechanical consistency and the limits of structural interpretations based upon 3D seismic reflection surveys.
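A direct method of this kind typically computes, for each fault plane, the resolved shear traction under the Wallace-Bott assumption that slip parallels it; the sketch below illustrates that relationship and is not the SORTAN code itself.

```python
import numpy as np

def slip_direction(sigma, normal):
    """Predicted slip direction on a fault plane (Wallace-Bott assumption:
    slip is parallel to the resolved shear traction). Illustrative only."""
    n = normal / np.linalg.norm(normal)
    t = sigma @ n                   # traction vector acting on the plane
    shear = t - (t @ n) * n         # subtract the normal component
    return shear / np.linalg.norm(shear)

# Hypothetical example: principal stresses of 1, 2, 3 along x, y, z and a
# plane whose normal lies in the x-z plane at 45 degrees.
sigma = np.diag([1.0, 2.0, 3.0])
d = slip_direction(sigma, np.array([1.0, 0.0, 1.0]))
```

Running the same computation over a set of fault planes and comparing against field slickenside data is what lets such a program test the mechanical consistency of a structural interpretation.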
Volumetric three-dimensional display system with rasterization hardware
NASA Astrophysics Data System (ADS)
Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua
2001-06-01
An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface
Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321
Pairwise graphical models for structural health monitoring with dense sensor arrays
NASA Astrophysics Data System (ADS)
Mohammadi Ghazi, Reza; Chen, Justin G.; Büyüköztürk, Oral
2017-09-01
Through advances in sensor technology and development of camera-based measurement techniques, it has become affordable to obtain high spatial resolution data from structures. Although measured datasets become more informative by increasing the number of sensors, the spatial dependencies between sensor data are increased at the same time. Therefore, appropriate data analysis techniques are needed to handle the inference problem in presence of these dependencies. In this paper, we propose a novel approach that uses graphical models (GM) for considering the spatial dependencies between sensor measurements in dense sensor networks or arrays to improve damage localization accuracy in structural health monitoring (SHM) application. Because there are always unobserved damaged states in this application, the available information is insufficient for learning the GMs. To overcome this challenge, we propose an approximated model that uses the mutual information between sensor measurements to learn the GMs. The study is backed by experimental validation of the method on two test structures. The first is a three-story two-bay steel model structure that is instrumented by MEMS accelerometers. The second experimental setup consists of a plate structure and a video camera to measure the displacement field of the plate. Our results show that considering the spatial dependencies by the proposed algorithm can significantly improve damage localization accuracy.
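A plug-in estimate of the mutual information between two sensor channels, the quantity on which the proposed approximate model is built, can be sketched as follows (the histogram binning is an assumption; the paper's estimator may differ).

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of mutual information, in nats,
    between two sensor measurement channels. Illustrative sketch."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y
    nz = pxy > 0                                # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a dense array, computing this for each sensor pair yields the edge weights from which a pairwise graphical model can be learned without observing damaged states.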
NASA Astrophysics Data System (ADS)
Reilly, B. T.; Stoner, J. S.; Wiest, J.
2017-08-01
Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
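CT gray values relate to Hounsfield units through the DICOM rescale convention, and scaling a fixed HU window to 8-bit gray is what keeps images from different cores comparable; the sketch below assumes typical slope/intercept values and is not SedCT's actual code.

```python
import numpy as np

def to_hu(raw, slope=1.0, intercept=-1024.0):
    """Convert stored CT pixel values to Hounsfield units using the DICOM
    RescaleSlope/RescaleIntercept convention (typical values assumed)."""
    return raw * slope + intercept

def hu_to_gray(hu, hu_min=-500.0, hu_max=2000.0):
    """Scale a user-defined HU window to 8-bit gray, so cores with
    different HU means and variances remain directly comparable."""
    clipped = np.clip(hu, hu_min, hu_max)
    return ((clipped - hu_min) / (hu_max - hu_min) * 255).astype(np.uint8)
```

Because the gray scale is anchored to an absolute HU range rather than per-image statistics, densities read off one core image mean the same thing on another.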
Acquisition of multiple image stacks with a confocal laser scanning microscope
NASA Astrophysics Data System (ADS)
Zuschratter, Werner; Steffen, Thomas; Braun, Katharina; Herzog, Andreas; Michaelis, Bernd; Scheich, Henning
1998-06-01
Image acquisition at high magnification is inevitably correlated with a limited view over the entire tissue section. To overcome this limitation we designed software for multiple image-stack acquisition (3D-MISA) in confocal laser scanning microscopy (CLSM). The system consists of a 4-channel Leica CLSM equipped with a high-resolution z-scanning stage mounted on an xy-motorized stage. The 3D-MISA software is implemented into the microscope scanning software and uses the microscope settings for the movements of the xy-stage. It allows storage and recall of 70 xyz-positions and the automatic 3D-scanning of image arrays between selected xyz-coordinates. The number of images within one array is limited only by the amount of disk space or memory available. Although for most applications the accuracy of the xy-scanning stage is sufficient for a precise alignment of tiled views, the software provides the possibility of an adjustable overlap between two image stacks by shifting the moving steps of the xy-scanning stage. After scanning, a tiled image gallery of the extended-focus images of each channel is displayed on a graphics monitor. In addition, a tiled image gallery of individual focal planes can be created. In summary, the 3D-MISA allows 3D-image acquisition of coherent regions in combination with high resolution of single images.
Aceña, Jaume; Pérez, Sandra; Eichhorn, Peter; Solé, Montserrat; Barceló, Damià
2017-09-01
The widespread occurrence of pharmaceuticals in the aquatic environment has raised concerns about potential adverse effects on exposed wildlife. Very little is currently known about exposure levels and clearance mechanisms of drugs in marine fish. Within this context, our research was focused on the identification of the main metabolic reactions, the generated metabolites, and the effects caused after exposure of fish to carbamazepine (CBZ) and ibuprofen (IBU). To this end, juveniles of Solea senegalensis acclimated to two temperature regimes of 15 and 20 °C for 60 days received a single intraperitoneal dose of these drugs. A control group was administered the vehicle (sunflower oil). Bile samples were analyzed by ultra-high-performance liquid chromatography-high-resolution mass spectrometry on a Q Exactive (Orbitrap) system, allowing us to propose plausible identities for 11 metabolites of CBZ and 13 metabolites of IBU in fish bile. In the case of CBZ, metabolites originated from aromatic and benzylic hydroxylation, epoxidation, and ensuing O-glucuronidation; O-methylation of a catechol-like metabolite was also postulated. Ibuprofen, in turn, formed multiple hydroxyl metabolites, O-glucuronides, and (hydroxyl)-acyl glucuronides, in addition to several taurine conjugates. Enzymatic responses after drug exposures revealed a water temperature-dependent induction of microsomal carboxylesterases. The metabolite profiling in fish bile provides an important tool for pharmaceutical exposure assessment. Graphical abstract Studies of metabolism of carbamazepine and ibuprofen in fish.
High-performance computing in image registration
NASA Astrophysics Data System (ADS)
Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro
2012-10-01
Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPU are thus presented. The innovative aspects of the implementation are (i) the effectiveness on a large variety of unorganized and complex datasets, (ii) capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
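The matching step described above is classically done with nearest-neighbour descriptor search plus Lowe's ratio test; the sketch below is that CPU baseline (illustrative only, not the LARES implementation).

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with Lowe's ratio test.
    desc_a, desc_b: (n, d) arrays of feature descriptors (needs >= 2 in b).
    Returns (i, j) index pairs of accepted matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        j, k = np.argsort(dists)[:2]                 # best and second-best
        if dists[j] < ratio * dists[k]:              # accept only unambiguous matches
            matches.append((i, j))
    return matches
```

Every query descriptor is matched independently, which is why this O(n*m) search is one of the first stages moved to the GPU in pipelines like the one described.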
High Resolution Mass Spectrometry of Polyfluorinated Polyether-Based Formulation.
Dimzon, Ian Ken; Trier, Xenia; Frömel, Tobias; Helmus, Rick; Knepper, Thomas P; de Voogt, Pim
2016-02-01
High resolution mass spectrometry (HRMS) was successfully applied to elucidate the structure of a polyfluorinated polyether (PFPE)-based formulation. The mass spectrum generated from direct injection into the MS was examined by identifying the different repeating units manually and with the aid of an instrument data processor. Highly accurate mass spectral data enabled the calculation of higher-order mass defects. The different plots of MW and the nth-order mass defects (up to n = 3) could aid in assessing the structure of the different repeating units and estimating their absolute and relative number per molecule. The three major repeating units were -C2H4O-, -C2F4O-, and -CF2O-. Tandem MS was used to identify the end groups that appeared to be phosphates, as well as the possible distribution of the repeating units. Reversed-phase HPLC separated the polymer molecules on the basis of the number of nonpolar repeating units. The elucidated structure resembles the structure in the published manufacturer technical data. This analytical approach to the characterization of a PFPE-based formulation can serve as a guide in analyzing not just other PFPE-based formulations but also other fluorinated and non-fluorinated polymers. The information from MS is essential in studying the physico-chemical properties of PFPEs and can help in assessing the risks they pose to the environment and to human health. Graphical Abstract.
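Mass-defect analysis of this kind builds on the Kendrick transform: rescaling masses so that one repeating unit becomes exactly integral makes all members of a homologous series share the same defect. A first-order sketch for the -CF2O- repeat unit (unit masses are approximate and the function is hypothetical):

```python
def kendrick_mass_defect(m, unit_exact=65.9917, unit_nominal=66):
    """First-order Kendrick mass defect relative to a -CF2O- repeat unit
    (exact unit mass is an approximation; illustrative sketch only)."""
    km = m * unit_nominal / unit_exact   # rescale so the unit weighs exactly 66
    return round(km) - km                # defect: distance to nearest integer

# Two hypothetical members of the same -CF2O- homologous series:
m1 = 500.0
m2 = m1 + 65.9917                        # one additional repeat unit
# both give (numerically) the same Kendrick mass defect
```

Repeating the transform with the residual defects against a second and third repeating unit is what yields the higher-order (up to n = 3) mass-defect plots described above.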
Tempest: GPU-CPU computing for high-throughput database spectral matching.
Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A
2012-07-06
Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphical processing unit (GPU) to produce peptide spectral matches in a very high throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
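The "Accelerated Score" is described as a computationally inexpensive dot product; a generic binned dot-product spectral similarity (not Tempest's code; the bin width, mass range, and normalization are assumptions) looks like this:

```python
import numpy as np

def binned_dot_score(spec_a, spec_b, bin_width=0.01, max_mz=2000.0):
    """Normalized dot-product similarity between two centroided spectra,
    each given as a list of (m/z, intensity) pairs. Illustrative sketch."""
    n = int(max_mz / bin_width)
    a = np.zeros(n)
    b = np.zeros(n)
    for mz, inten in spec_a:
        a[int(mz / bin_width)] += inten     # accumulate into fixed m/z bins
    for mz, inten in spec_b:
        b[int(mz / bin_width)] += inten
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

Because each theoretical/experimental spectrum pair is scored independently, thousands of such dot products can be evaluated concurrently on GPU threads, which is the throughput argument the abstract makes.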
Digitally enhanced GLORIA images for petroleum exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prindle, R.O.; Lanz, K.
1990-05-01
This poster presentation graphically depicts the geological and structural information that can be derived from digitally enhanced Geological Long Range Inclined Asdic (GLORIA) sonar images. This presentation illustrates the advantages of scale enlargement as an interpreter's tool in an offshore area within the Eel River Basin, Northern California. Sonographs were produced from digital tapes originally collected for the exclusive economic zone (EEZ)-SCAN 1984 survey, which was published in the Atlas of the Western Conterminous US at a scale of 1:500,000. This scale is suitable for displaying regional offshore tectonic features but does not have the resolution required for the detailed geological mapping necessary for petroleum exploration. Application of digital enhancement techniques, which utilize contrast stretching and assign false colors to wide-swath sonar imagery (approximately 40 km) with 50-m resolution, enables the acquisition and interpretation of significantly more geological and structural data. This, combined with a scale enlargement to 1:100,000 and high-contrast contact prints vs. the offset prints of the atlas, increases the resolution and sharpness of bathymetric features so that many more subtle features may be mapped in detail. A tectonic interpretation of these digitally enhanced GLORIA sonographs from the Eel River Basin is presented, displaying anticlines, lineaments, ridge axes, pathways of sediment flow, and subtle doming. Many of these features are not present on published bathymetric maps and have not been derived from seismic data because the plan-view spatial resolution is much less than that available from the GLORIA imagery.
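A percentile-based contrast stretch of the kind used to enhance such sonographs can be sketched as follows (percentile limits are illustrative assumptions, not the survey's parameters).

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Stretch image intensities so the chosen percentile range maps to
    [0, 1], clipping outliers. Illustrative sketch of the technique."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```

False coloring then simply maps the stretched values through a lookup table, so subtle backscatter differences that occupied a narrow gray band become visually distinct hues.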
Effectiveness of Using Graphic Illustrations with Social Studies Textual Materials. Final Report.
ERIC Educational Resources Information Center
Davis, O. L., Jr.
This study explores the effectiveness of using graphic illustrations with written text in promoting learning in social studies by junior high students. Two groups of experimental reading materials, one group composed of three narratives with related graphic illustrations and the other composed of three narratives alone, were prepared and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Krishnan, Harinarayan
Fibers provide exceptional strength-to-weight capabilities when woven into ceramic composites, transforming them into materials with exceptional resistance to high temperature and high strength combined with improved fracture toughness. Microcracks are inevitable when the material is under strain, and they can be imaged using synchrotron X-ray computed micro-tomography (mu-CT) for assessment of variation in material mechanical toughness. An important part of this analysis is to recognize fibrillar features. This paper presents algorithms for detecting and quantifying composite cracks and fiber breaks from high-resolution image stacks. First, we propose recognition algorithms to identify the different structures of the composite, including matrix cracks and fiber breaks. Second, we introduce our package F3D for fast filtering of large 3D imagery, implemented in OpenCL to take advantage of graphics cards. Results show that our algorithms automatically identify micro-damage and that the GPU-based implementation introduced here takes minutes, being 17x faster than similar tools on a typical image file.
The PALM-3000 high-order adaptive optics system for Palomar Observatory
NASA Astrophysics Data System (ADS)
Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa
2008-07-01
Deployed as a multi-user shared facility on the 5.1 meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368 active actuator woofer and 349 active actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible light imager.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
... Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From Indonesia: Antidumping Duty Order... Indonesia. DATES: Effective Date: November 17, 2010. FOR FURTHER INFORMATION CONTACT: Gemal Brangman or... duty investigation of certain coated paper from Indonesia. See Certain Coated Paper Suitable for High...
GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging
Pryor, Alan; Yang, Yongsoo; Rana, Arjun; ...
2017-09-05
Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same: a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
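The iterate-between-real-and-reciprocal-space idea can be caricatured in a few lines: enforce the measured Fourier data where it is known, then enforce a real, non-negative object inside a support in real space. GENFIRE itself works on an oversampled 3D grid with angular refinement; the toy below is not the released code.

```python
import numpy as np

def fourier_iterate(measured_k, mask, support, n_iter=100):
    """Toy alternating-projection reconstruction (illustrative sketch).
    measured_k: complex Fourier samples; mask: where they are known;
    support: boolean real-space region allowed to contain the object."""
    obj = np.zeros(support.shape)
    for _ in range(n_iter):
        k = np.fft.fftn(obj)
        k[mask] = measured_k[mask]        # reciprocal-space data constraint
        obj = np.real(np.fft.ifftn(k))
        obj[~support] = 0.0               # real-space support constraint
        obj[obj < 0] = 0.0                # positivity constraint
    return obj
```

With only a limited set of projections the mask is sparse, and the real-space constraints are what drive the iteration toward a solution consistent with both.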
GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, Alan; Yang, Yongsoo; Rana, Arjun
Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same: a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE), for high-resolution 3D reconstruction from a limited number of 2D projections. GENFIRE first assembles a 3D Fourier grid with oversampling and then iterates between real and reciprocal space to search for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques through numerical simulations and by experimentally reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. As a result, equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
A system and method for online high-resolution mapping of gastric slow-wave activity.
Bull, Simon H; O'Grady, Gregory; Du, Peng; Cheng, Leo K
2014-11-01
High-resolution (HR) mapping employs multielectrode arrays to achieve spatially detailed analyses of propagating bioelectrical events. A major limitation is that spatial analyses must currently be performed "off-line" (after experiments), compromising timely recording feedback and restricting experimental interventions. These problems motivated development of a system and method for "online" HR mapping. HR gastric recordings were acquired and streamed to a novel software client. Algorithms were devised to filter data, identify slow-wave events, eliminate corrupt channels, and cluster activation events. A graphical user interface animated data and plotted electrograms and maps. Results were compared against off-line methods. The online system analyzed 256-channel serosal recordings with no unexpected system terminations and a mean delay of 18 s. Activation time marking sensitivity was 0.92; positive predictive value was 0.93. Abnormal slow-wave patterns including conduction blocks, ectopic pacemaking, and colliding wave fronts were reliably identified. Compared to traditional analysis methods, online mapping produced comparable results, with equivalent coverage of 90% of electrodes, average RMS errors of less than 1 s, and a correlation coefficient of 0.99 for activation maps. Accurate slow-wave mapping was achieved in near real-time, enabling monitoring of recording quality and experimental interventions targeted to dysrhythmic onset. This work also advances the translation of HR mapping toward real-time clinical application.
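The sensitivity and positive predictive value quoted above can be computed by matching marked activation times against a reference set within a tolerance; a minimal sketch (hypothetical function, not the paper's software; the tolerance value is an assumption):

```python
def event_marking_metrics(detected, truth, tol=0.5):
    """Sensitivity and positive predictive value for event marking.
    A truth event counts as detected if any marked time lies within
    tol seconds of it; a marked time is a false positive if no truth
    event lies within tol of it."""
    tp = sum(any(abs(d - t) <= tol for d in detected) for t in truth)
    fn = len(truth) - tp
    fp = sum(not any(abs(d - t) <= tol for t in truth) for d in detected)
    sensitivity = tp / (tp + fn) if truth else 0.0
    ppv = tp / (tp + fp) if detected else 0.0
    return sensitivity, ppv
```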
Kaufmann, Anton; Widmer, Mirjam; Maden, Kathryn; Butcher, Patrick; Walker, Stephan
2018-03-05
A reversed-phase ion-pairing chromatographic method was developed for the detection and quantification of inorganic and organic anionic food additives. A single-stage high-resolution mass spectrometer (orbitrap ion trap, Orbitrap) was used to detect the accurate masses of the unfragmented analyte ions. The developed ion-pairing chromatography method was based on a dibutylamine/hexafluoro-2-propanol buffer. Dibutylamine can be charged to serve as a chromatographic ion-pairing agent. This ensures sufficient retention of inorganic and organic anions. Yet, unlike quaternary amines, it can be de-charged in the electrospray to prevent the formation of neutral analyte ion-pairing agent adducts. This process is significantly facilitated by the added hexafluoro-2-propanol. This approach permits the sensitive detection and quantification of additives like nitrate and mono-, di-, and triphosphate as well as citric acid, a number of artificial sweeteners like cyclamate and aspartame, flavor enhancers like glutamate, and preservatives like sorbic acid. This is a major advantage, since the analytical methods currently used in food safety laboratories are only capable of monitoring a few compounds or a particular category of food additives. Graphical abstract Deprotonation of ion pair agent in the electrospray interface.
The Science Behind the NASA/NOAA Electronic Theater 2002
NASA Technical Reports Server (NTRS)
Hasler, A. Fritz; Starr, David (Technical Monitor)
2002-01-01
Details of the science stories and scientific results behind the Etheater Earth Science Visualizations from the major remote sensing institutions around the country will be explained. The NASA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Temple Square and the University of Utah Campus. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US/Europe/Japan global weather data. See the latest images and image sequences from NASA & NOAA missions like Terra, GOES, NOAA, TRMM, SeaWiFS, Landsat 7 visualized with state-of-the-art tools. A similar retrospective of numerical weather models from the 1960s will be compared with the latest "year 2002" high-resolution models. See the inner workings of a powerful hurricane as it is sliced and dissected using the University of Wisconsin Vis-5D interactive visualization system. The largest supercomputers are now capable of realistic modeling of the global oceans. See ocean vortexes and currents that bring up the nutrients to feed phytoplankton and zooplankton as well as draw the krill, fish, whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate regimes. The Internet and networks have appeared while computers and visualizations have vastly improved over the last 40 years. These advances make it possible to present the broad scope and detailed structure of the huge new observed and simulated datasets in a compelling and instructive manner. New visualization tools allow us to interactively roam & zoom through massive global images larger than 40,000 x 20,000 pixels. Powerful movie players allow us to interactively roam, zoom & loop through 4000 x 4000 pixel (bigger than HDTV) movies of up to 5000 frames. New 3D tools allow highly interactive manipulation of detailed perspective views of many changing model quantities.
See the 1 m resolution before and after shots of lower Manhattan and the Pentagon after the September 11 disaster, as well as shots of Afghanistan from the Space Imaging IKONOS and debris plume images from Terra MODIS and SPOT Image. Shown by the SGI-Octane Graphics-Supercomputer are visualizations of hurricanes Michelle 2001, Floyd, Mitch, Fran and Linda. Our visualizations of these storms have been featured on the covers of National Geographic, Time, Newsweek and Popular Science. Highlights will be shown from NASA's large collection of High Definition TV (HDTV) visualization clips. New visualizations of a Los Alamos global ocean model, and high-resolution results of a NASA/JPL Atlantic ocean basin model showing currents and salinity features, will be shown. El Nino/La Nina effects on sea surface temperature and sea surface height of the Pacific Ocean will also be shown. The SST simulations will be compared with GOES Gulf Stream animations and ocean productivity observations. Tours will be given of the entire Earth's land surface at 500 m resolution from recently composited Terra MODIS data. Visualizations will be shown from the Earth Science Etheater 2001, recently presented in New Zealand, Johannesburg, Tokyo, Paris, Munich, Sydney, Melbourne, Honolulu, Washington, New York City, Pasadena, UCAR/Boulder, and Penn State University. The presentation will use a 2-CPU SGI/CRAY Octane Super Graphics workstation with 4 GB RAM and terabyte disk array at 2048 x 768 resolution, plus a multimedia laptop with three high resolution projectors. Visualizations will also be featured from museum exhibits and presentations including: the Smithsonian Air & Space Museum in Washington, the IMAX theater at the Maryland Science Center in Baltimore, the James Lovell Discovery World Science museum in Milwaukee, the American Museum of Natural History (NYC) Hayden Planetarium IMAX theater, etc.
The Etheater is sponsored by NASA, NOAA and the American Meteorological Society. This presentation is brought to you by the University of Utah College of Mines and Earth Sciences and the Utah Museum of Natural History.
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
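The meta-programming idea sketched above — one kernel description emitting both CUDA and OpenCL source — can be illustrated with a toy string template. This is not the BOAST API (BOAST is a Ruby-based DSL); it only shows the concept of backend-specific code generation, and the kernel and parameter names are hypothetical:

```python
# One kernel body, two backends: the qualifier, address-space keyword and
# thread-index expression differ between CUDA and OpenCL.
TEMPLATE = (
    "{qualifier} void saxpy({global_q} float* y, {global_q} const float* x, "
    "float a, int n) {{\n"
    "    int i = {index};\n"
    "    if (i < n) y[i] = a * x[i] + y[i];\n"
    "}}\n"
)

BACKENDS = {
    "cuda": dict(qualifier="__global__", global_q="",
                 index="blockIdx.x * blockDim.x + threadIdx.x"),
    "opencl": dict(qualifier="__kernel", global_q="__global",
                   index="get_global_id(0)"),
}

def generate(backend):
    """Render the kernel template for the requested backend."""
    return TEMPLATE.format(**BACKENDS[backend])
```

The generated strings would then be compiled at build time (CUDA) or at run time (OpenCL), so a single kernel description serves both accelerator families.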
Specification and Analysis of Parallel Machine Architecture
1990-03-17
Parallel Machine Architecture C.V. Ramamoorthy Computer Science Division Dept. of Electrical Engineering and Computer Science University of California...capacity. (4) Adaptive: The overhead in resolution of deadlocks, etc. should be in proportion to their frequency. (5) Avoid rollbacks: Rollbacks can be...snapshots of system state graphically at a rate proportional to simulation time. Some of the examples are as follows: (1) When the simulation clock of
Large scale track analysis for wide area motion imagery surveillance
NASA Astrophysics Data System (ADS)
van Leeuwen, C. J.; van Huis, J. R.; Baan, J.
2016-10-01
Wide Area Motion Imagery (WAMI) enables image based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high quality track information of more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows us to quickly obtain only a part, or a sub-sampling, of the original high resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location based queries allow a user to select tracks around a particular region of interest such as a landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity.
Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
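The multi-scale tile scheme described in the WAMI abstract above amounts to mapping a requested pixel region and scale to a small set of stored tiles. A minimal sketch under assumed conventions (256-pixel square tiles, power-of-two pyramid levels; the function name and layout are hypothetical, not the paper's actual file format):

```python
def tiles_for_region(x0, y0, x1, y1, tile=256, level=0):
    """Return (tx, ty) indices of the tiles covering the pixel region
    [x0, x1] x [y0, y1] at a given pyramid level, where each level
    halves the resolution of the previous one."""
    s = 2 ** level                      # downsampling factor at this level
    tx0, ty0 = (x0 // s) // tile, (y0 // s) // tile
    tx1, ty1 = (x1 // s) // tile, (y1 // s) // tile
    return [(tx, ty) for ty in range(ty0, ty1 + 1)
                     for tx in range(tx0, tx1 + 1)]
```

With tiles stored in a predictable order, these indices translate directly into frame offsets, so only the needed fraction of a multi-gigapixel image is ever read.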
Improving aircraft conceptual design - A PHIGS interactive graphics interface for ACSYNT
NASA Technical Reports Server (NTRS)
Wampler, S. G.; Myklebust, A.; Jayaram, S.; Gelhausen, P.
1988-01-01
A CAD interface has been created for the 'ACSYNT' aircraft conceptual design code that permits the execution and control of the design process via interactive graphics menus. This CAD interface was coded entirely with the new three-dimensional graphics standard, the Programmer's Hierarchical Interactive Graphics System. The CAD/ACSYNT system is designed for use by state-of-the-art high-speed imaging work stations. Attention is given to the approaches employed in modeling, data storage, and rendering.
Commercial Off-The-Shelf (COTS) Graphics Processing Board (GPB) Radiation Test Evaluation Report
NASA Technical Reports Server (NTRS)
Salazar, George A.; Steele, Glen F.
2013-01-01
Large round trip communications latency for deep space missions will require more onboard computational capabilities to enable the space vehicle to undertake many tasks that have traditionally been ground-based, mission control responsibilities. As a result, visual display graphics will be required to provide simpler vehicle situational awareness through graphical representations, as well as provide capabilities never before done in a space mission, such as augmented reality for in-flight maintenance or telepresence activities. These capabilities will require graphics processors and associated support electronic components for high computational graphics processing. In an effort to understand the performance of commercial graphics card electronics operating in the expected radiation environment, a preliminary test was performed on five commercial off-the-shelf (COTS) graphics cards. This paper discusses the preliminary evaluation test results of five COTS graphics processing cards tested to the International Space Station (ISS) low earth orbit radiation environment. Three of the five graphics cards were tested to a total dose of 6000 rads (Si). The test articles, test configuration, preliminary results, and recommendations are discussed.
Effects of and attention to graphic warning labels on cigarette packages.
Süssenbach, Philipp; Niemeier, Sarah; Glock, Sabine
2013-01-01
The present study investigates the effects of graphic cigarette warnings compared to text-only cigarette warnings on smokers' explicit (i.e. ratings of the packages, cognitions about smoking, perceived health risk, quit intentions) and implicit attitudes. In addition, participants' visual attention towards the graphic warnings was recorded using eye-tracking methodology. Sixty-three smokers participated in the present study and either viewed graphic cigarette warnings with aversive and non-aversive images or text-only warnings. Data were analysed using analysis of variance and correlation analysis. In particular, graphic cigarette warnings with aversive content drew attention and elicited high threat. However, whereas attention directed to the textual information of the graphic warnings predicted smokers' risk perceptions, attention directed to the images of the graphic warnings did not. Moreover, smokers in the graphic warning condition reported more positive cognitions about smoking, thus revealing cognitive dissonance. Smokers employ defensive psychological mechanisms when confronted with threatening warnings. Although aversive images attract attention, they do not promote health knowledge. Implications for graphic health warnings and the importance of taking their content (i.e. aversive vs. non-aversive images) into account are discussed.
3D SPH numerical simulation of the wave generated by the Vajont rockslide
NASA Astrophysics Data System (ADS)
Vacondio, R.; Mignosa, P.; Pagani, S.
2013-09-01
A 3D numerical modeling of the wave generated by the Vajont slide, one of the most destructive ever to occur, is presented in this paper. A meshless Lagrangian Smoothed Particle Hydrodynamics (SPH) technique was adopted to simulate the highly fragmented violent flow generated by the falling slide in the artificial reservoir. The speed-up achievable via General Purpose Graphics Processing Units (GP-GPU) made it possible to adopt an adequate resolution to describe the phenomenon. The comparison with the data available in the literature showed that the results of the numerical simulation satisfactorily reproduce the maximum run-up, as well as the water surface elevation in the residual lake after the event. Moreover, the 3D velocity field of the flow during the event and the discharge hydrograph which overtopped the dam were obtained.
LookSeq: a browser-based viewer for deep sequencing data.
Manske, Heinrich Magnus; Kwiatkowski, Dominic P
2009-11-01
Sequencing a genome to great depth can be highly informative about heterogeneity within an individual or a population. Here we address the problem of how to visualize the multiple layers of information contained in deep sequencing data. We propose an interactive AJAX-based web viewer for browsing large data sets of aligned sequence reads. By enabling seamless browsing and fast zooming, the LookSeq program assists the user to assimilate information at different levels of resolution, from an overview of a genomic region to fine details such as heterogeneity within the sample. A specific problem, particularly if the sample is heterogeneous, is how to depict information about structural variation. LookSeq provides a simple graphical representation of paired sequence reads that is more revealing about potential insertions and deletions than are conventional methods.
Qin, Caidie; Bai, Xue; Zhang, Yue; Gao, Kai
2018-05-03
A photoelectrochemical wire microelectrode was constructed based on the use of a TiO2 nanotube array with electrochemically deposited CdSe semiconductor. A strongly amplified photocurrent is generated on the sensor surface. The microsensor has a response in the 0.05-20 μM dopamine (DA) concentration range and a 16.7 μM detection limit at a signal-to-noise ratio of 3. Sensitivity, recovery and reproducibility of the sensor were validated by detecting DA in spiked human urine, and satisfactory results were obtained. Graphical abstract Schematic of a sensitive photoelectrochemical microsensor based on CdSe modified TiO2 nanotube array. The photoelectrochemical microsensor was successfully applied to the determination of dopamine in urine samples.
NASA Technical Reports Server (NTRS)
Gregory, G. L.; Beck, S. M.; Mathis, J. J., Jr.
1981-01-01
In situ correlative measurements were obtained with a NASA aircraft in support of two NASA airborne remote sensors participating in the Environmental Protection Agency's 1980 persistent elevated pollution episode (PEPE) and Northeast regional oxidant study (NEROS) field program, in order to provide data for evaluating the capability of two remote sensors for measuring mixing layer height, and ozone and aerosol concentrations in the troposphere during the 1980 PEPE/NEROS program. The in situ aircraft was instrumented to measure temperature, dewpoint temperature, ozone concentrations, and light scattering coefficient. In situ measurements for ten correlative missions are given and discussed. Each data set is presented in graphical and tabular format; aircraft flight plans are included.
StePS: Stereographically Projected Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-05-01
StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modify the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and naive O(N²) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
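The naive O(N²) pairwise force calculation that maps so well onto graphics cards can be sketched in vectorized form. This is an illustrative direct-summation gravity kernel, not the StePS code; the softening length and units are assumptions:

```python
import numpy as np

def direct_forces(pos, mass, G=1.0, soft=1e-3):
    """Naive O(N^2) gravitational accelerations by direct summation.
    pos: (N, 3) positions; mass: (N,) masses. Every pair is evaluated,
    which is trivially parallel and thus GPU-friendly."""
    d = pos[None, :, :] - pos[:, None, :]      # d[i, j] = pos[j] - pos[i]
    r2 = (d ** 2).sum(-1) + soft ** 2          # softened squared distances
    np.fill_diagonal(r2, np.inf)               # exclude self-interaction
    inv_r3 = r2 ** -1.5
    # a_i = G * sum_j m_j * d_ij / |d_ij|^3
    return G * (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)
```

Because each of the N² pair evaluations is independent, the same loop translates almost directly into a GPU kernel, which is the property the abstract highlights.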
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshimura, Ann S.; Brandt, Larry D.
2009-11-01
The NUclear EVacuation Analysis Code (NUEVAC) has been developed by Sandia National Laboratories to support the analysis of shelter-evacuate (S-E) strategies following an urban nuclear detonation. This tool can model a range of behaviors, including complex evacuation timing and path selection, as well as various sheltering or mixed evacuation and sheltering strategies. The calculations are based on externally generated, high resolution fallout deposition and plume data. Scenario setup and calculation outputs make extensive use of graphics and interactive features. This software is designed primarily to produce quantitative evaluations of nuclear detonation response options. However, the outputs have also proven useful in the communication of technical insights concerning shelter-evacuate tradeoffs to urban planning or response personnel.
Potential digitization/compression techniques for Shuttle video
NASA Technical Reports Server (NTRS)
Habibi, A.; Batson, B. H.
1978-01-01
The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.
Mars Data analysis and visualization with Marsoweb
NASA Astrophysics Data System (ADS)
Gulick, V. G.; Deardorff, D. G.
2003-04-01
Marsoweb is a collaborative web environment that has been developed for the Mars research community to better visualize and analyze Mars orbiter data. Its goal is to enable online data discovery by providing an intuitive, interactive interface to data from the Mars Global Surveyor and other orbiters. Recently Marsoweb has served a prominent role as a resource center for the site selection process for the Mars Explorer Rover 2003 missions. In addition to hosting a repository of landing site memoranda and workshop talks, it includes a Java-based interface to a variety of data maps and images. This interface enables the display and numerical querying of data, and allows data profiles to be rendered from user-drawn cross-sections. High-resolution Mars Orbiter Camera (MOC) images (currently, over 100,000) can be graphically perused; browser-based image processing tools can be used on MOC images of potential landing sites. An automated VRML atlas allows users to construct "flyovers" of their own regions-of-interest in 3D. These capabilities enable Marsoweb to be used for general global data studies, in addition to those specific to landing site selection. As of December 2002, Marsoweb had been viewed by 88,000 distinct users from NASA, USGS, academia, and the general public, with a total of 3.3 million hits (801,000 page requests in all). The High Resolution Imaging Experiment team for the Mars 2005 Orbiter (HiRISE, PI Alfred McEwen) plans to cast a wide net to collect targeting suggestions. Members of the general public as well as the broad Mars science community will be able to submit suggestions of high resolution imaging targets. The web-based interface for target suggestion input (HiWeb) will be based upon Marsoweb (http://marsoweb.nas.nasa.gov).
NASA Technical Reports Server (NTRS)
Wilmouth, David M.; Hanisco, Thomas F.; Donahue, Neil M.; Anderson, James G.
1999-01-01
The first spectra of the A (2)Pi(sub 3/2) from X (2)Pi(sub 3/2) electronic transition of BrO using Fourier transform ultraviolet spectroscopy are obtained. Broadband vibrational spectra acquired at 298 +/- 2 K and 228 +/- 5 K, as well as high-resolution rotational spectra of the A from X 7,0 and 12,0 vibrational bands, are presented. Wavenumber positions for the spectra are obtained with high accuracy, and cross section assignments are made, incorporating the existing literature. With 35 cm(exp -1) (0.40 nm) resolution, the absolute cross section at the peak of the 7,0 band is determined to be (1.58 +/- 0.12) x 10(exp -17) sq cm/molecule at 298 +/- 2 K and (1.97 +/- 0.15) x 10(exp -17) sq cm/molecule at 228 +/- 5 K. BrO dissociation energies are determined with a graphical Birge-Sponer technique, using Le Roy-Bernstein theory to place an upper limit on the extrapolation. From the ground-state dissociation energy, D(sub o)" = 231.0 +/- 1.7 kJ/mol, the heat of formation of BrO(g) is calculated, del(sub f)H(0 K) = 133.7 +/- 1.7 kJ/mol and del(sub f)H(298.15 K) = 126.2 +/- 1.7 kJ/mol. Cross sections for the high-resolution 7,0 and 12,0 rotational peaks are the first to be reported. The band structures are modeled, and improved band origins, rotational constants, centrifugal distortion constants, and linewidths are determined. In particular, J-dependent linewidths and lifetimes are observed for both the 7,0 and 12,0 bands.
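A graphical Birge-Sponer estimate of the dissociation energy, as used above, fits the vibrational spacings ΔG(v+½) to a straight line and takes the area under that line out to the extrapolated zero-spacing intercept. The sketch below is the textbook linear version only; the paper's analysis additionally applies Le Roy-Bernstein theory to bound the extrapolation, which is not reproduced here:

```python
import numpy as np

def birge_sponer_d0(delta_g):
    """Estimate the ground-state dissociation energy D0 from successive
    vibrational spacings Delta G(v + 1/2) via a linear Birge-Sponer
    extrapolation: fit the spacings vs. (v + 1/2), find where the line
    reaches zero, and take the triangular area under it."""
    v = np.arange(len(delta_g)) + 0.5          # abscissa: v + 1/2
    slope, intercept = np.polyfit(v, delta_g, 1)
    v_max = -intercept / slope                 # spacing extrapolates to zero
    return 0.5 * intercept * v_max             # area under the fitted line
```

For a strictly linear (Morse-like) progression of spacings, the triangle area equals the sum of all spacings up to dissociation, which is the quantity the graphical method approximates.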
Vector generator scan converter
Moore, James M.; Leighton, James F.
1990-01-01
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold.
Vector generator scan converter
Moore, J.M.; Leighton, J.F.
1988-02-05
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.
Rusjan, Pablo M; Wilson, Alan A; Miler, Laura; Fan, Ian; Mizrahi, Romina; Houle, Sylvain; Vasdev, Neil; Meyer, Jeffrey H
2014-05-01
This article describes the kinetic modeling of [(11)C]SL25.1188 ([(S)-5-methoxymethyl-3-[6-(4,4,4-trifluorobutoxy)-benzo[d]isoxazol-3-yl]-oxazolidin-2-[(11)C]one]) binding to monoamine oxidase B (MAO-B) in the human brain using high-resolution positron emission tomography (PET). Seven healthy subjects underwent two separate 90-minute PET scans after an intravenous injection of [(11)C]SL25.1188. Complementary arterial blood sampling was acquired. Radioactivity was quickly eliminated from plasma, with 80% of parent compound remaining at 90 minutes. Metabolites were more polar than the parent compound. Time-activity curves showed high brain uptake, an early peak, and a washout rate consistent with known regional MAO-B concentration. A two-tissue compartment model (2-TCM) provided better fits to the data than a 1-TCM. Measurement of total distribution volume (VT) showed very good identifiability (based on coefficient of variation (COV)) for all regions of interest (ROIs) (COV(VT) < 8%), low between-subject variability (~20%), and quick temporal convergence (within 5% of the final value at 45 minutes). The Logan graphical method produces very good estimation of VT. Regional VT highly correlated with a previous postmortem report of MAO-B level (r(2) ≥ 0.9). Specific binding would account for 70% to 90% of VT. Hence, VT measurement with [(11)C]SL25.1188 PET is an excellent estimation of MAO-B concentration.
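The Logan graphical method mentioned above estimates VT as the asymptotic slope of integrated tissue activity plotted against integrated plasma activity, each normalized by the instantaneous tissue activity, for times beyond a linearization time t*. A minimal sketch of the standard method (hypothetical function names; the t* value is an arbitrary assumption, not the paper's choice):

```python
import numpy as np

def logan_vt(t, ct, cp, t_star=30.0):
    """Logan graphical estimate of total distribution volume VT.
    t: time points; ct: tissue time-activity curve; cp: plasma input.
    Plots int(ct)/ct vs. int(cp)/ct for t >= t_star; the slope is VT."""
    def cumtrapz(y):
        # cumulative trapezoidal integral, starting at 0
        return np.concatenate([[0.0],
                               np.cumsum(np.diff(t) * 0.5 * (y[1:] + y[:-1]))])
    int_ct, int_cp = cumtrapz(ct), cumtrapz(cp)
    keep = t >= t_star                      # restrict to the linear portion
    x = int_cp[keep] / ct[keep]
    y = int_ct[keep] / ct[keep]
    slope, _ = np.polyfit(x, y, 1)
    return slope
```

With an instantaneously equilibrated tissue curve (ct proportional to cp) the plot is exactly linear and the slope recovers the proportionality constant, i.e. VT; with realistic compartmental kinetics the slope converges to VT only beyond t*.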
High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment
NASA Astrophysics Data System (ADS)
Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.
2006-12-01
The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high resolution, high quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations are aimed at providing researchers with a broader context of sensor locations relative to geologic characteristics, at promoting their use as an educational resource for informal education settings and increasing public awareness, and also at serving as an aid for researchers' proposals and presentations. These visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.
Structural and Functional Model of Organization of Geometric and Graphic Training of the Students
ERIC Educational Resources Information Center
Poluyanov, Valery B.; Pyankova, Zhanna A.; Chukalkina, Marina I.; Smolina, Ekaterina S.
2016-01-01
The topicality of the investigated problem is stipulated by the social need for training competitive engineers with a high level of graphical literacy; especially geometric and graphic training of students and its projected results in a competence-based approach; individual characteristics and interests of the students, as well as methodological…
ERIC Educational Resources Information Center
Masuchika, Glenn; Boldt, Gail
2010-01-01
American graphic novels are increasingly recognized as high-quality literature and an interesting genre for academic study. Graphic novels of Japan, called manga, have established a strong foothold in American culture. This preliminary survey of 44 United States university libraries demonstrates that Japanese manga in translation are consistently…
The Interpretation of Cellular Transport Graphics by Students with Low and High Prior Knowledge
ERIC Educational Resources Information Center
Cook, Michelle; Carter, Glenda; Wiebe, Eric N.
2008-01-01
The purpose of this study was to examine how prior knowledge of cellular transport influenced how high school students in the USA viewed and interpreted graphic representations of this topic. The participants were Advanced Placement Biology students (n = 65); each participant had previously taken a biology course in high school. After assessing…
O'Hara, Charles J.
1980-01-01
Six hundred seventy kilometers of closely spaced high-resolution seismic-reflection data have been collected from eastern Rhode Island Sound and Vineyard Sound, Mass., by the U.S. Geological Survey in cooperation with the Massachusetts Department of Public Works. These data were obtained during the June 1975 cruise of the R/V ASTERIAS as part of a continuing regional study of the Massachusetts offshore area to assess potential mineral resources, to evaluate the environmental impact of mining of resources and of offshore disposal of solid waste and harbor dredge-spoil materials, and to map the offshore geology and shallow structure. The data were obtained by using a surface-towed EG&G Unit Pulse Boomer* (300 joules; 400 Hz-8 kHz frequency) sound source. Reflected acoustic energy was detected by a 4.6-m, eight-element hydrophone array, amplified, actively filtered (400 Hz-4 kHz bandpass), and graphically displayed on an EPC* dry-paper recorder at a 0.25-second sweep rate. System resolution was generally 1 to 1.5 m. Navigational control was provided by Loran-C (positional accuracy within 0.2 km) and was supplemented by radar and visual fixes. Positional information was logged at 15-minute intervals and at major course changes. The original records may be examined at the Data Library, U.S. Geological Survey, Woods Hole, MA 02543. Microfilm copies of the data are available for purchase from the National Geophysical and Solar-Terrestrial Data Center (NGSDC), Boulder, CO 80302.
SPARX, a new environment for Cryo-EM image processing.
Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J
2007-01-01
SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.
Markov Random Field Based Automatic Image Alignment for ElectronTomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moussavi, Farshid; Amat, Fernando; Comolli, Luis R.
2007-11-30
Cryo electron tomography (cryo-ET) is the primary method for obtaining 3D reconstructions of intact bacteria, viruses, and complex molecular machines ([7],[2]). It first flash-freezes a specimen in a thin layer of ice, and then rotates the ice sheet in a transmission electron microscope (TEM), recording images of different projections through the sample. The resulting images are aligned and then back-projected to form the desired 3D model. The typical resolution of a biological electron microscope is on the order of 1 nm per pixel, which means that small imprecisions in the microscope's stage or lenses can cause large alignment errors. To enable a high-precision alignment, biologists add a small number of spherical gold beads to the sample before it is frozen. These beads generate high-contrast dots in the image that can be tracked across projections. Each gold bead can be seen as a marker with a fixed location in 3D, which provides the reference points to bring all the images to a common frame, as in the classical structure-from-motion problem. A high-accuracy alignment is critical to obtain a high-resolution tomogram (usually on the order of 5-15 nm resolution). While some methods try to automate the task of tracking markers and aligning the images ([8],[4]), they require user intervention if the SNR of the image becomes too low. Unfortunately, cryo-ET often has poor SNR, since the samples are relatively thick (for TEM) and the restricted electron dose usually results in projections with SNR under 0 dB. This paper shows that formulating this problem as a maximum-likelihood estimation task yields an approach that is able to automatically align cryo-ET datasets with high precision using inference in graphical models. This approach has been packaged into publicly available software called RAPTOR (Robust Alignment and Projection estimation for Tomographic Reconstruction).
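The fiducial-based alignment step described above can be illustrated with a toy least-squares version (synthetic bead coordinates and a pure translation model; RAPTOR itself uses probabilistic inference in a graphical model to handle marker tracking under low SNR):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3D gold-bead positions and tilt angles (illustrative only).
beads = rng.uniform(-100, 100, size=(5, 3))
tilts = np.deg2rad([-30, 0, 30])
true_shifts = rng.uniform(-5, 5, size=(len(tilts), 2))  # unknown stage shifts

def project(beads, theta):
    """Project beads onto the detector for a tilt about the y-axis."""
    x, y, z = beads.T
    return np.column_stack([x * np.cos(theta) + z * np.sin(theta), y])

# Simulated observed marker positions = ideal projection + stage shift.
observed = [project(beads, t) + s for t, s in zip(tilts, true_shifts)]

# Least-squares estimate of each image's shift: mean residual over all beads.
est_shifts = np.array([np.mean(obs - project(beads, t), axis=0)
                       for obs, t in zip(observed, tilts)])

print(np.allclose(est_shifts, true_shifts))  # exact recovery on noise-free data
```

With noisy marker detections, the per-image mean residual becomes a least-squares shift estimate rather than an exact recovery; the hard part the paper addresses is deciding which dots correspond to which bead in the first place.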
GPUs: An Emerging Platform for General-Purpose Computation
2007-08-01
ERIC Educational Resources Information Center
Louisiana State Dept. of Education, Baton Rouge.
The tentative guide in graphic arts technology for senior high schools is part of a series of industrial arts curriculum materials developed by the State of Louisiana. The course is designed to provide "hands-on" experience with tools and materials along with a study of the industrial processes in graphic arts technology. In addition,…
Using Graphic Novels in the High School Classroom: Engaging Deaf Students with a New Genre
ERIC Educational Resources Information Center
Smetana, Linda; Odelson, Darah; Burns, Heidi; Grisham, Dana L.
2009-01-01
Two high school teachers of Deaf students and two teacher educators present this article about the use of graphic novels as an important genre for teaching literacy and academic skills in the high school classroom. During a summer session for failing Deaf students at a state-sponsored school, two English teachers taught and documented their…
NASA Astrophysics Data System (ADS)
Huang, Yong; Wicks, Robert; Zhang, Kang; Zhao, Mingtao; Tyler, Betty M.; Hwang, Lee; Pradilla, Gustavo; Kang, Jin U.
2013-03-01
Carotid endarterectomy is a common vascular surgical procedure that may help reduce patients' risk of stroke. A high-resolution, real-time imaging technique that can detect the position and size of vascular plaques would be of great value in reducing the risk level and improving the surgical outcome. Optical coherence tomography (OCT), a high-resolution, high-speed, noninvasive imaging technique, was evaluated in this study. Twenty-four 24-week-old apolipoprotein E-deficient (ApoE-/-) mice were divided into three groups of 8 each. One served as the control group fed a normal diet. One served as the study group fed a high-fat diet to induce atherosclerosis. The last served as the treatment group fed both a high-fat diet and medicine to treat atherosclerosis. Full-range, complex-conjugate-free spectral-domain OCT was used to image the mouse aorta near the neck area in-vivo, with the aorta surgically exposed to the imaging head. 2D and 3D images of the area of interest were presented in real time through a graphics-processing-unit-accelerated algorithm. In-situ imaging of all the mice after perfusion was performed again to validate the in-vivo detection results and to show the potential capability of OCT if combined with a surgical saline flush. Finally, all the imaged arteries were stained with H&E for histology analysis. Preliminary results confirmed the accuracy and fast imaging speed of the OCT imaging technique in detecting atherosclerosis.
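The core of spectral-domain OCT reconstruction, which GPU acceleration speeds up across thousands of A-scans, is an inverse Fourier transform of each detected spectrum. A minimal single-A-scan sketch with a synthetic spectrum (linear-in-wavenumber sampling assumed; the full-range, complex-conjugate-free variant used in the paper additionally suppresses the mirror image, which this sketch ignores):

```python
import numpy as np

# Synthetic interference spectrum: one reflector at depth bin 64 produces
# a cosine fringe across the 512 sampled wavenumbers.
n = 512
k = np.arange(n)
depth_index = 64
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * depth_index * k / n)

# Depth profile (A-scan) = magnitude of the inverse FFT of the spectrum.
a_scan = np.abs(np.fft.ifft(spectrum))

# The reflector appears as a peak at the corresponding depth bin
# (ignoring the DC term and the conjugate mirror peak in the upper half).
peak = int(np.argmax(a_scan[1:n // 2]) + 1)
print(peak)  # 64
```

A 2D B-scan is just this transform applied independently to every spectral line, which is why the workload maps so naturally onto a GPU.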
Piazza, Rocco; Magistroni, Vera; Pirola, Alessandra; Redaelli, Sara; Spinelli, Roberta; Redaelli, Serena; Galbiati, Marta; Valletta, Simona; Giudici, Giovanni; Cazzaniga, Giovanni; Gambacorti-Passerini, Carlo
2013-01-01
Copy number alterations (CNA) are common events occurring in leukaemias and solid tumors. Comparative Genome Hybridization (CGH) is currently the gold-standard technique for analyzing CNAs; however, CGH analysis requires dedicated instruments and is able to perform only low-resolution Loss of Heterozygosity (LOH) analyses. Here we present CEQer (Comparative Exome Quantification analyzer), a new graphical, event-driven tool for coupled CNA/allelic-imbalance (AI) analysis of exome sequencing data. By using case-control matched exome data, CEQer performs a comparative digital exonic quantification to generate CNA data and couples this information with exome-wide LOH and allelic-imbalance detection. These data are used to build mixed statistical/heuristic models allowing the identification of CNA/AI events. To test our tool, we initially used in silico generated data; then we performed whole-exome sequencing on 20 leukemic specimens and corresponding matched controls and analyzed the results using CEQer. Taken globally, these analyses showed that the combined use of comparative digital exon quantification and LOH/AI allows the generation of very accurate CNA data. We therefore propose CEQer as an efficient, robust, and user-friendly graphical tool for the identification of CNA/AI in whole-exome sequencing data.
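In its simplest form, the comparative digital exonic quantification described here reduces to a per-exon log2 ratio of library-size-normalized read counts between case and control. A toy sketch (synthetic counts and a hypothetical ±0.5 cutoff, not CEQer's actual statistical model):

```python
import math

# Synthetic per-exon read counts for a matched case/control pair.
case_counts    = [120, 115, 240, 230, 118, 60]
control_counts = [100, 100, 100, 100, 100, 100]

# Normalize by total library size, then take the per-exon log2 ratio.
case_total = sum(case_counts)
ctrl_total = sum(control_counts)

def log2_ratio(case, ctrl):
    return math.log2((case / case_total) / (ctrl / ctrl_total))

ratios = [log2_ratio(c, k) for c, k in zip(case_counts, control_counts)]

# Call a copy-number gain/loss when the ratio clears the (hypothetical) cutoff.
calls = ["gain" if r > 0.5 else "loss" if r < -0.5 else "normal" for r in ratios]
print(calls)  # ['normal', 'normal', 'gain', 'gain', 'normal', 'loss']
```

Library-size normalization matters here: the raw case counts are inflated overall, and without it exons 1, 2, and 5 would look like spurious gains.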
The End of the Rainbow? Color Schemes for Improved Data Graphics
NASA Astrophysics Data System (ADS)
Light, Adam; Bartlein, Patrick J.
2004-10-01
Modern computer displays and printers enable the widespread use of color in scientific communication, but the expertise for designing effective graphics has not kept pace with the technology for producing them. Historically, even the most prestigious publications have tolerated high defect rates in figures and illustrations, and technological advances that make creating and reproducing graphics easier do not appear to have decreased the frequency of errors. Flawed graphics consequently beget more flawed graphics as authors emulate published examples. Color has the potential to enhance communication, but design mistakes can result in color figures that are less effective than gray scale displays of the same data. Empirical research on human subjects can build a fundamental understanding of visual perception and scientific methods can be used to evaluate existing designs, but creating effective data graphics is a design task and not fundamentally a scientific pursuit. Like writing well, creating good data graphics requires a combination of formal knowledge and artistic sensibility tempered by experience: a combination of "substance, statistics, and design".
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
... INTERNATIONAL TRADE COMMISSION [Investigation Nos. 701-TA-470-471 and 731-TA-1169-1170 (Final)] Certain Coated Paper Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From China and Indonesia AGENCY: United States International Trade Commission. ACTION: Revised schedule for the subject...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-08
... INTERNATIONAL TRADE COMMISSION [Investigation Nos. 701-TA-470-471 and 731-TA-1169-1170 (Final)] Certain Coated Paper Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From China and Indonesia AGENCY: United States International Trade Commission. ACTION: Revised schedule for the subject...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
... Paper Suitable for High-Quality Print Graphics Using Sheet-Fed Presses From Indonesia and the People's...: February 19, 2010. FOR FURTHER INFORMATION CONTACT: Gemal Brangman (Indonesia) or Demitrios Kalogeropoulos... using sheet-fed presses from Indonesia and the People's Republic of China. See Certain Coated Paper...
Program Aids Specification Of Multiple-Block Grids
NASA Technical Reports Server (NTRS)
Sorenson, R. L.; Mccann, K. M.
1993-01-01
3DPREP computer program aids specification of multiple-block computational grids. Highly interactive graphical preprocessing program designed for use on powerful graphical scientific computer workstation. Divided into three main parts, each corresponding to principal graphical-and-alphanumerical display. Relieves user of some burden of collecting and formatting many data needed to specify blocks and grids, and prepares input data for NASA's 3DGRAPE grid-generating computer program.
NASA Astrophysics Data System (ADS)
Aubert, Dominique; Teyssier, Romain
2010-11-01
We present a set of cosmological simulations with radiative transfer in order to model the reionization history of the universe from z = 18 down to z = 6. Galaxy formation and the associated star formation are followed self-consistently with gas and dark matter dynamics using the RAMSES code, while radiative transfer is performed as a post-processing step using a moment-based method with the M1 closure relation in the ATON code. The latter has been ported to a multiple Graphics Processing Unit (GPU) architecture using the CUDA language together with the MPI library, resulting in an overall acceleration that allows us to tackle radiative transfer problems at a significantly higher resolution than previously reported: 1024^3 + 2 levels of refinement for the hydrodynamic adaptive grid and 1024^3 for the radiative transfer Cartesian grid. We reach a typical acceleration factor close to 100× when compared to the CPU version, allowing us to perform a quarter of a million time steps in less than 3000 GPU hours. We observe good convergence properties between our different resolution runs for various volume- and mass-averaged quantities such as neutral fraction, UV background, and Thomson optical depth, as long as the effects of finite resolution on the star formation history are properly taken into account. We also show that the neutral fraction depends on the total mass density, in a way close to the predictions of photoionization equilibrium, as long as the effects of self-shielding are included in the background radiation model. Although our simulation suite has reached unprecedented mass and spatial resolution, we still fail to reproduce the z ~ 6 constraints on the neutral fraction of hydrogen and the intensity of the UV background. In order to account for unresolved density fluctuations, we have modified our chemistry solver with a simple clumping factor model.
Using our most spatially resolved simulation (12.5 Mpc h^-1 with 1024^3 particles) to calibrate our subgrid model, we have resimulated our largest box (100 Mpc h^-1 with 1024^3 particles) with the modified chemistry, successfully reproducing the observed level of neutral hydrogen in the spectra of high-redshift quasars. However, we did not reproduce the average photoionization rate inferred from the same observations. We argue that this discrepancy could be partly explained by the fact that the average radiation intensity and the average neutral fraction depend on different regions of the gas density distribution, so that one quantity cannot be simply deduced from the other.
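The photoionization-equilibrium relation invoked above, including the clumping factor C of the subgrid model, balances ionizations against recombinations: Γ n_HI = C α_B n_e n_HII. A small sketch solving this balance for the neutral fraction x = n_HI/n_H of pure hydrogen (illustrative parameter values, not those of the paper):

```python
import math

def neutral_fraction(n_H, Gamma, clumping=1.0, alpha_B=2.6e-13):
    """Equilibrium neutral fraction x of pure hydrogen, from
    Gamma * x = C * alpha_B * n_H * (1 - x)**2, with n_e = n_HII = (1-x) n_H.
    alpha_B is the case-B recombination coefficient (cm^3/s) near 1e4 K."""
    a = clumping * alpha_B * n_H          # recombination rate scale (1/s)
    b = 2.0 * a + Gamma
    # Smaller root of a*x^2 - b*x + a = 0 lies in [0, 1].
    return (b - math.sqrt(b * b - 4.0 * a * a)) / (2.0 * a)

# Illustrative values: mean IGM hydrogen density near z ~ 6 and a UV-background
# photoionization rate of order 1e-13 1/s (assumed for the example).
x1 = neutral_fraction(n_H=2e-4, Gamma=1e-13)
x3 = neutral_fraction(n_H=2e-4, Gamma=1e-13, clumping=3.0)
print(x1 < x3)  # clumping boosts recombinations, raising the neutral fraction
```

This is exactly why a clumping factor calibrated on a higher-resolution box raises the neutral hydrogen level in the large box without changing the radiation field itself.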
SEURAT: visual analytics for the integrated analysis of microarray data.
Gribov, Alexander; Sill, Martin; Lück, Sonja; Rücker, Frank; Döhner, Konstanze; Bullinger, Lars; Benner, Axel; Unwin, Antony
2010-06-03
In translational cancer research, gene expression data are collected together with clinical data and genomic data arising from other chip-based high-throughput technologies. Software tools for the joint analysis of such high-dimensional data sets together with clinical data are required. We have developed an open-source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data, and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, bar charts, histograms, event charts, and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked, so that any object selected in one graphic will be highlighted in all other graphics. For exploratory data analysis, the software provides unsupervised data analytics such as clustering, seriation algorithms, and biclustering algorithms. The SEURAT software meets the growing need of researchers to perform joint analysis of gene expression, genomic, and clinical data.
NASA Astrophysics Data System (ADS)
Palmeri, Anthony
This research project was developed to provide extensive practice and exposure to data collection and data representation in a high school science classroom. The student population engaged in this study included 40 high school sophomores enrolled in two microbiology classes. Laboratory investigations and activities were deliberately designed to include quantitative data collection that necessitated organization and graphical representation. These activities were embedded into the curriculum and conducted in conjunction with the normal and expected course content, rather than as a separate entity. It was expected that routine practice with graph construction and interpretation would result in improved competency in graphing data and proficiency in analyzing graphs. To objectively test the effectiveness in achieving this goal, a pre-test and post-test that included graph construction, interpretation, interpolation, extrapolation, and analysis were administered. Based on the results of a paired t-test, graphical literacy was significantly enhanced by extensive practice and exposure to data representation.
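The paired t-test used here compares each student's pre- and post-test scores directly, testing whether the mean per-student gain differs from zero. A minimal sketch with hypothetical scores (not the study's data):

```python
from statistics import mean, stdev
import math

# Hypothetical pre/post graphing-assessment scores for the same 8 students.
pre  = [55, 60, 48, 70, 62, 58, 65, 50]
post = [68, 71, 60, 78, 70, 66, 74, 61]

# Paired t statistic: t = mean(d) / (stdev(d) / sqrt(n)) on the differences.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(round(t, 2), "df =", n - 1)  # 14.14 df = 7
```

Pairing matters: it removes between-student variability, so a consistent gain of a few points can be highly significant even with a small class.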
ERIC Educational Resources Information Center
Poehls, Eddie; And Others
This course guide for a design/drafting course is one of four developed for the graphic communications area in the North Dakota senior high industrial arts education program. (Eight other guides are available for two other areas of Industrial Arts--energy/power and production.) Part 1 provides such introductory information as a definition and…
Blakely, Timothy; Ojemann, Jeffrey G.; Rao, Rajesh P.N.
2014-01-01
Background: Electrocorticography (ECoG) signals can provide high spatio-temporal resolution and high signal-to-noise ratio recordings of local neural activity from the surface of the brain. Previous studies have shown that broad-band, spatially focal, high-frequency increases in ECoG signals are highly correlated with movement and other cognitive tasks and can be volitionally modulated. However, significant additional information may be present in inter-electrode interactions, and adding higher-order inter-electrode interactions can be impractical from a computational aspect, if not impossible. New method: In this paper we present a new method of calculating high-frequency interactions between electrodes called Short-Time Windowed Covariance (STWC) that builds on mathematical techniques currently used in neural signal analysis, along with an implementation that accelerates the algorithm by orders of magnitude by leveraging commodity, off-the-shelf graphics processing unit (GPU) hardware. Results: Using the hardware-accelerated implementation of STWC, we identify many types of event-related inter-electrode interactions from human ECoG recordings on global and local scales that have not been identified by previous methods. Unique temporal patterns are observed for digit flexion in both low- (10 mm spacing) and high-resolution (3 mm spacing) electrode arrays. Comparison with existing methods: Covariance is a commonly used metric for identifying correlated signals, but standard covariance calculations do not allow for temporally varying covariance. In contrast, STWC allows and identifies event-driven changes in covariance without identifying spurious noise correlations. Conclusions: STWC can be used to identify event-related neural interactions whose high computational load is well suited to GPU capabilities. PMID:24211499
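The windowed-covariance idea itself (though not the authors' GPU implementation) can be sketched by sliding a short window over two channels and recording the covariance inside each window, so event-driven covariance changes appear as a time series. Toy signals with a shared component injected mid-recording:

```python
import numpy as np

def short_time_windowed_cov(x, y, win, step):
    """Covariance of x and y inside each sliding window (toy STWC)."""
    out = []
    for start in range(0, len(x) - win + 1, step):
        xs, ys = x[start:start + win], y[start:start + win]
        out.append(np.mean((xs - xs.mean()) * (ys - ys.mean())))
    return np.array(out)

rng = np.random.default_rng(1)
n = 2000
a = rng.standard_normal(n)
b = rng.standard_normal(n)
# Simulated "event": the two channels share a component in samples 1000-1499.
shared = rng.standard_normal(500)
a[1000:1500] += shared
b[1000:1500] += shared

cov = short_time_windowed_cov(a, b, win=200, step=100)
# Covariance is near zero outside the event and elevated inside it.
print(int(cov.argmax()) * 100)  # start index of the peak-covariance window
```

A GPU version parallelizes this over every electrode pair and every window, which is where the quadratic cost in electrode count comes from.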
Configurable software for satellite graphics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartzman, P D
An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during real-time operation. The design of the system was done at a very high level using a set-theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.
The development and validation of command schedules for SeaWiFS
NASA Astrophysics Data System (ADS)
Woodward, Robert H.; Gregg, Watson W.; Patt, Frederick S.
1994-11-01
An automated method for developing and assessing spacecraft and instrument command schedules is presented for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) project. SeaWiFS is to be carried on the polar-orbiting SeaStar satellite in 1995. The primary goal of the SeaWiFS mission is to provide global ocean chlorophyll concentrations every four days by employing onboard recorders and a twice-a-day data downlink schedule. Global Area Coverage (GAC) data with about 4.5 km resolution will be used to produce the global coverage. Higher resolution (1.1 km) Local Area Coverage (LAC) data will also be recorded to calibrate the sensor. In addition, LAC will be continuously transmitted from the satellite and received by High Resolution Picture Transmission (HRPT) stations. The methods used to generate commands for SeaWiFS employ numerous hierarchical checks as a means of maximizing coverage of the Earth's surface and fulfilling the LAC data requirements. The software code is modularized and written in Fortran, with constructs that mirror the pre-defined mission rules. The overall method is specifically developed for low-orbit Earth-observing satellites with finite onboard recording capabilities and regularly scheduled data downlinks. Two software packages using the Interactive Data Language (IDL) for graphically displaying and verifying the resultant command decisions are presented. Displays can be generated which show portions of the Earth viewed by the sensor and spacecraft sub-orbital locations during onboard calibration activities. An IDL-based interactive method of selecting and testing LAC targets and calibration activities for command generation is also discussed.
NASA Technical Reports Server (NTRS)
Apodaca, Tony; Porter, Tom
1989-01-01
The two worlds of interactive graphics and realistic graphics have remained separate. Fast graphics hardware runs simple algorithms and generates simple-looking images. Photorealistic image synthesis software runs slowly on large, expensive computers. The time has come for these two branches of computer graphics to merge. The speed and expense of graphics hardware is no longer the barrier to the wide acceptance of photorealism. There is every reason to believe that high-quality image synthesis will become a standard capability of every graphics machine, from superworkstation to personal computer. The significant barrier has been the lack of a common language, an agreed-upon set of terms and conditions, for 3-D modeling systems to talk to 3-D rendering systems in order to compute an accurate rendition of a scene. Pixar has introduced RenderMan to serve as that common language. RenderMan, and specifically the extensibility it offers in shading calculations, is discussed.
NASA Astrophysics Data System (ADS)
Silva, Dilson; Cortez, Celia Martins
2015-12-01
In the present work we used a high-resolution, low-cost apparatus capable of detecting waves within the audio bandwidth, together with the software package Goldwave™ for graphical display, processing, and monitoring of the signals, to study aspects of the electric heart activity of early avian embryos, specifically at the 18th Hamburger & Hamilton stage of embryo development. The species used was the domestic chick (Gallus gallus), and we carried out 23 experiments in which cardiographic spectra of QRS complex waves, representing the propagation of depolarization waves through the ventricles, were recorded using microprobes and reference electrodes placed directly on the embryos. The results show that the technique, using a 16-bit audio card monitored by the Goldwave™ software, was effective for studying signal aspects of the heart electric activity of early avian embryos.
NASA Technical Reports Server (NTRS)
Yoshino, K.; Esmond, J. R.; Cheung, A. S.-C.; Freeman, D. E.; Parkinson, W. H.
1992-01-01
Results are presented of measurements of the absorption cross sections of the Schumann-Runge (S-R) bands of O2 in the window regions between the rotational lines, conducted in the wavelength region 180-195 nm and at different oxygen pressures (2.5-760 torr) in order to separate the pressure-dependent absorption from the main cross sections. The present cross sections supersede the earlier published cross sections (Yoshino et al., 1983). The combined cross sections are presented graphically; they are available at wavenumber intervals of about 0.1/cm from the National Space Science Data Center. The Herzberg continuum cross sections are derived after subtracting calculated contributions from the Schumann-Runge bands; these are significantly smaller than any previous measurements.
Development of critical dimension measurement scanning electron microscope for ULSI (S-8000 series)
NASA Astrophysics Data System (ADS)
Ezumi, Makoto; Otaka, Tadashi; Mori, Hiroyoshi; Todokoro, Hideo; Ose, Yoichi
1996-05-01
The semiconductor industry is moving from half-micron to quarter-micron design rules. To support this evolution, Hitachi has developed a new critical dimension measurement scanning electron microscope (CD-SEM), the model S-8800 series, for quality control of quarter-micron process lines. The new CD-SEM provides detailed examination of process conditions with 5 nm resolution and 5 nm repeatability (3 sigma) at an accelerating voltage of 800 V using secondary electron imaging. In addition, a newly developed load-lock system is capable of achieving a high sample throughput of 20 wafers/hour (5 point measurements per wafer) under continuous operation. For user friendliness, the system incorporates a graphical user interface (GUI), an automated pattern recognition system which helps locate measurement points, both manual and semi-automated operation, and user-programmable operating parameters.
An HTML Tool for Production of Interactive Stereoscopic Compositions.
Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi
2016-12-01
The benefits of stereoscopic vision in medical applications have been appreciated and thoroughly studied for more than a century. The use of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market of 3D-enabled technologies is blooming. New high-resolution stereo cameras, TVs, projectors, monitors, and head-mounted displays are becoming available. This equipment, complemented with a corresponding application program interface (API), could be relatively easily integrated into a system. Such systems could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for the production of interactive stereoscopic graphical user interfaces, which could serve as a software layer for web-based medical systems facilitating the stereoscopic effect. The tool's mode of operation and the results of the subjective and objective performance tests conducted are then presented.
Fiber Optic Communication System For Medical Images
NASA Astrophysics Data System (ADS)
Arenson, Ronald L.; Morton, Dan E.; London, Jack W.
1982-01-01
This paper discusses a fiber optic communication system linking ultrasound devices, computerized tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for the transfer of high-resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques are described. The switching methodology and communication protocols are also discussed.
NASA Technical Reports Server (NTRS)
Lucero, John M.
2003-01-01
A new optically based measuring capability that characterizes surface topography, geometry, and wear has been employed by NASA Glenn Research Center's Tribology and Surface Science Branch. To characterize complex parts in more detail, we are using a three-dimensional surface structure analyzer, the NewView 5000 manufactured by Zygo Corporation (Middlefield, CT). This system provides graphical images and high-resolution numerical analyses to accurately characterize surfaces. Because of the inherent complexity of the various analyzed assemblies, the machine has been pushed to its limits. For example, special hardware fixtures and measuring techniques were developed specifically to characterize Oil-Free thrust bearings. We performed a more detailed wear analysis using scanning white light interferometry to image and measure the bearing structure and topography, enabling a further understanding of the causes of bearing failure.
Research on three-dimensional visualization based on virtual reality and Internet
NASA Astrophysics Data System (ADS)
Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai
2007-06-01
To disclose and display water information, a three-dimensional visualization system based on virtual reality (VR) and the Internet was developed, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after building a high-resolution DEM of reliable quality, topographic analysis, visibility analysis, and reservoir volume computation were studied. In addition, parameters including slope, water level, and NDVI were selected to classify landslide-prone zones in the water-level-fluctuation zone of the reservoir area. To establish the virtual reservoir scene, two methods were used to deliver immersion, interaction, and imagination (3I). The first virtual scene contains more detailed textures to increase realism on a graphical workstation running the virtual reality engine Open Scene Graph (OSG); the second is intended for Internet users, with fewer details to ensure fluent rendering speed.
Combined illumination cylindrical millimeter-wave imaging technique for concealed weapon detection
NASA Astrophysics Data System (ADS)
Sheen, David M.; McMakin, Douglas L.; Hall, Thomas E.
2000-07-01
A novel millimeter-wave imaging technique has been developed for personnel surveillance applications, including the detection of concealed weapons, explosives, drugs, and other contraband material. Millimeter waves are high-frequency radio waves in the 30-300 GHz band that pose no health threat to humans at moderate power levels. These waves readily penetrate common clothing materials and are reflected by the human body and by concealed items. The combined illumination cylindrical imaging concept consists of a vertical, high-resolution millimeter-wave antenna array that is scanned in a cylindrical manner about the person under surveillance. Using a computer, the data from this scan are mathematically reconstructed into a series of focused 3D images of the person. After reconstruction, the images are combined into a single high-resolution 3D image of the person under surveillance, which is then rendered using 3D computer graphics techniques. The combined cylindrical illumination is critical, as it allows the display of information from all angles; this is necessary because millimeter waves do not penetrate the body. Ultimately, the images displayed to the operator will be icon-based to protect the privacy of the person being screened. Novel aspects of this technique include the cylindrical scanning concept and the image reconstruction algorithm, which was developed specifically for this imaging system. An engineering prototype based on this cylindrical imaging technique has been fabricated and tested. This work has been sponsored by the Federal Aviation Administration.
Kling-Petersen, T; Pascher, R; Rydmark, M
1999-01-01
Academic and medical imaging increasingly use computer-based 3D reconstruction and/or visualization. Three-dimensional interactive models play a major role in areas such as preclinical medical education, clinical visualization, and medical research. While 3D is comparatively easy to do on high-end workstations, distribution and use of interactive 3D graphics necessitate the use of personal computers and the web. Several new techniques have been demonstrated that provide interactive 3D via a web browser, thereby allowing a limited version of VR to be experienced by a larger majority of students, medical practitioners, and researchers. These techniques include QuickTimeVR2 (QTVR), VRML2, QuickDraw3D, OpenGL, and Java3D. In order to test the usability of the different techniques, Mednet has initiated a number of projects designed to evaluate the potential of 3D techniques for scientific reporting, clinical visualization, and medical education. These include datasets created by manual tracing followed by triangulation, smoothing, and 3D visualization, by MRI, or by high-resolution laser scanning. Preliminary results indicate that both VRML and QTVR fulfill most of the requirements of web-based, interactive 3D visualization, whereas QuickDraw3D is too limited. At present, Java3D has not yet reached a level where in-depth testing is possible. The use of high-resolution laser scanning is an important addition to 3D digitization.
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
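The source-encoding idea above can be illustrated on a toy linear problem. This is a minimal sketch, not the authors' USCT code: random matrices stand in for the wave-equation forward operators, and the ±1 encoding weights, step size, and dimensions are all assumptions. Because the cross-source terms vanish on average, the encoded gradient is an unbiased estimate of the full-data gradient, so one encoded solve per iteration replaces a sum over all sources.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_det, n_pix = 16, 24, 8
H = rng.standard_normal((n_src, n_det, n_pix))  # stand-in per-source forward operators
c_true = rng.standard_normal(n_pix)             # "speed-of-sound" model to recover
d = np.einsum('sij,j->si', H, c_true)           # noiseless per-source measurement data

c = np.zeros(n_pix)
for _ in range(3000):
    w = rng.choice([-1.0, 1.0], size=n_src)     # random encoding vector
    Hw = np.einsum('s,sij->ij', w, H)           # encoded forward operator
    dw = w @ d                                  # encoded data
    c -= 1e-3 * Hw.T @ (Hw @ c - dw)            # stochastic gradient descent step

print(np.max(np.abs(c - c_true)))
```

In the real WISE method each "matrix product" is a wave-equation solve, which is why collapsing all sources into one encoded source cuts the per-iteration cost so sharply.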
Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F
2014-09-01
Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under Matlab and the open-source Python software. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time sequences of fluorescence ratio signals for identified event sites, along with Excel-compatible tables listing amplitudes and kinetics of events. Copyright © 2014 Elsevier Ltd. All rights reserved.
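The detection pipeline described (spatiotemporal noise filtering, thresholding, sub-pixel centroid localization) can be sketched on a synthetic movie. This is an illustrative reimplementation, not the published routine; the filter widths, threshold, and synthetic event parameters are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label, center_of_mass

rng = np.random.default_rng(1)
# synthetic movie: noisy baseline plus one localized "puff"
T, H, W = 40, 64, 64
movie = rng.normal(0.0, 0.2, (T, H, W))
yy, xx = np.mgrid[0:H, 0:W]
blob = np.exp(-(((yy - 30.4) ** 2 + (xx - 21.7) ** 2) / (2 * 2.0 ** 2)))
movie[15:25] += blob                       # event active for 10 frames

# spatiotemporal smoothing suppresses pixel noise before thresholding
smooth = gaussian_filter(movie, sigma=(1.0, 1.5, 1.5))
dff = smooth - np.median(smooth, axis=0)   # crude per-pixel baseline subtraction
peak = dff.max(axis=0)
mask = peak > 0.4                          # detection threshold

# sub-pixel localization: intensity-weighted centroid of each labelled site
labels, n = label(mask)
sites = center_of_mass(peak, labels, range(1, n + 1))
print(n, sites)
```

The centroid recovers the event position to a fraction of a pixel even though the threshold mask is pixel-quantized.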
Development and Evaluation of a Reverse-Entry Ion Source Orbitrap Mass Spectrometer.
Poltash, Michael L; McCabe, Jacob W; Patrick, John W; Laganowsky, Arthur; Russell, David H
2018-05-23
As a step towards development of a high-resolution ion mobility mass spectrometer using the orbitrap mass analyzer platform, we describe herein a novel reverse-entry ion source (REIS) coupled to the higher-energy C-trap dissociation (HCD) cell of an orbitrap mass spectrometer with extended mass range. Development of the REIS is a first step in the development of a drift tube ion mobility-orbitrap MS. The REIS approach retains the functionality of the commercial instrument ion source, which permits uninterrupted use of the instrument during development as well as performance comparisons between the two ion sources. Ubiquitin (8.5 kDa) and lipid binding to the ammonia transport channel (AmtB, 126 kDa) protein complex were used as model soluble and membrane proteins, respectively, to evaluate the performance of the REIS instrument. Mass resolution obtained with the REIS is comparable to that obtained using the commercial ion source. The charge state distributions for ubiquitin and AmtB obtained on the REIS are in agreement with previous studies, which suggests that the REIS-orbitrap EMR retains native structure in the gas phase.
STOCK: Structure mapper and online coarse-graining kit for molecular simulations
Bevc, Staš; Junghans, Christoph; Praprotnik, Matej
2015-03-15
We present a web toolkit, STructure mapper and Online Coarse-graining Kit (STOCK), for setting up coarse-grained molecular simulations. The kit consists of two tools: a structure mapping tool and a Boltzmann inversion tool. The aim of the first tool is to define a molecular mapping from high (e.g., all-atom) to low (i.e., coarse-grained) resolution. Using a graphical user interface, it generates input files that are compatible with standard coarse-graining packages, e.g., VOTCA and DL_CGMAP. Our second tool generates effective potentials for coarse-grained simulations that preserve the structural properties, e.g., radial distribution functions, of the underlying higher-resolution model. The required distribution functions can be provided by any simulation package. Simulations are performed on a local machine and only the distributions are uploaded to the server. The applicability of the toolkit is validated by mapping atomistic pentane and polyalanine molecules to a coarse-grained representation. Effective potentials are derived for systems of TIP3P (transferable intermolecular potential 3 point) water molecules and salt solution. The presented coarse-graining web toolkit is available at http://stock.cmm.ki.si.
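The Boltzmann inversion step has a simple closed form: the effective pair potential is obtained from the target radial distribution function as U(r) = -kBT ln g(r). A minimal sketch with a made-up RDF; the peak position, width, and temperature are assumptions, not values from the paper.

```python
import numpy as np

kBT = 2.494  # kJ/mol at 300 K (assumed temperature)

# toy radial distribution function g(r) with a single solvation peak
r = np.linspace(0.25, 1.5, 126)                      # nm
g = 1.0 + 0.8 * np.exp(-((r - 0.45) ** 2) / 0.005)   # hypothetical RDF

# Boltzmann inversion: potential of mean force U(r) = -kBT * ln g(r)
U = -kBT * np.log(g)

# the minimum of U coincides with the peak of g by construction
print(r[np.argmin(U)], r[np.argmax(g)])
```

In iterative variants this inversion is only the first guess, refined until the coarse-grained simulation reproduces the target g(r).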
Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao
2014-01-01
An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain reconstruction quality. A three-dimensional dictionary trains its atoms on blocks of data, which exploits the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first use NVIDIA's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations target the dictionary learning and image updating parts, namely the orthogonal matching pursuit (OMP) and k-singular value decomposition (K-SVD) algorithms. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324× is achieved compared with the CPU-only code when the number of MRI slices is 24.
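The OMP step that dominates the sparse-coding cost can be written compactly. The sketch below is a plain NumPy reference implementation on a toy dictionary, not the authors' CUDA kernel; the dimensions, sparsity level, and coefficients are arbitrary.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse coding of y over dictionary D."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # re-fit the coefficients of all selected atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.5, -2.0, 1.0]
y = D @ x_true                              # noiseless sparse signal
x_hat = omp(D, y, 3)
print(sorted(np.flatnonzero(x_hat)))
```

On the GPU, many such pursuits run in parallel, one per image block, which is where most of the reported speedup comes from.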
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbard, G; Khanal, S; Shang, C
Purpose: To validate a novel real-time quality assurance device as a means to test the coincidence of light fields and electron radiation fields. Method and Materials: Using a Raven™ (LAP of America Laser Applications) detector, light fields and electron radiation fields of various electron cones and cutouts from two Clinacs (Varian Medical) were sensed by the phosphor screen and registered by the CCD camera of the device. During measurements, the screen surface of the Raven was placed at isocenter level facing the radiation field at any setup angle. Subsequently, the light field and the electron radiation field (with 100 MU) were captured separately by the device. The measurements were then analyzed using the Raven software with a maximal 25 × 25 cm² field size and 0.25 mm resolution. The results were further compared against those using Gafchromic film (EBT3). To ensure consistency, only the dimensions through the central axis were recorded. All films were analyzed with Dose Lab Pro™ v6.70. Coincidence comparisons were done within the tests on one Clinac and between the two Clinacs. Results: The mean field-coincidence differences were similar for the 18 EBT film samples and the 34 Raven samples: 1.33 ± 0.87 mm versus 1.06 ± 0.58 mm (p = 0.74). The mean coincidence differences between the two Clinacs, 1.16 ± 0.68 mm (n=16) and 1.16 ± 0.48 mm (n=18) respectively, also suggest no statistical difference (p = 0.40). Conclusion: The Raven™ is comparable to Gafchromic film (EBT3) as a quality verification device for checking the coincidence between light and electron radiation fields. In this investigation, the Raven device provided highly reproducible results with real-time analysis. Further improvements in its resolution and automatic analysis capability are warranted.
Planetary Photojournal Home Page Graphic
NASA Technical Reports Server (NTRS)
2004-01-01
This image is an unannotated version of the Planetary Photojournal Home Page graphic. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.
Real-Time Distributed Algorithms for Visual and Battlefield Reasoning
2006-08-01
Keywords: High-Level Task Definition Language, Graphical User Interface (GUI), Story Analysis, Story Interpretation, SensIT Nodes. One or more actions are specified to be taken in the event the conditions are satisfied; we developed graphical user interfaces that may be used to express such tasks.
Computational high-resolution heart phantoms for medical imaging and dosimetry simulations
NASA Astrophysics Data System (ADS)
Gu, Songxiang; Gupta, Rajiv; Kyprianou, Iacovos
2011-09-01
Cardiovascular disease in general and coronary artery disease (CAD) in particular, are the leading cause of death worldwide. They are principally diagnosed using either invasive percutaneous transluminal coronary angiograms or non-invasive computed tomography angiograms (CTA). Minimally invasive therapies for CAD such as angioplasty and stenting are rendered under fluoroscopic guidance. Both invasive and non-invasive imaging modalities employ ionizing radiation and there is concern for deterministic and stochastic effects of radiation. Accurate simulation to optimize image quality with minimal radiation dose requires detailed, gender-specific anthropomorphic phantoms with anatomically correct heart and associated vasculature. Such phantoms are currently unavailable. This paper describes an open source heart phantom development platform based on a graphical user interface. Using this platform, we have developed seven high-resolution cardiac/coronary artery phantoms for imaging and dosimetry from seven high-quality CTA datasets. To extract a phantom from a coronary CTA, the relationship between the intensity distribution of the myocardium, the ventricles and the coronary arteries is identified via histogram analysis of the CTA images. By further refining the segmentation using anatomy-specific criteria such as vesselness, connectivity criteria required by the coronary tree and image operations such as active contours, we are able to capture excellent detail within our phantoms. For example, in one of the female heart phantoms, as many as 100 coronary artery branches could be identified. Triangular meshes are fitted to segmented high-resolution CTA data. We have also developed a visualization tool for adding stenotic lesions to the coronaries. The male and female heart phantoms generated so far have been cross-registered and entered in the mesh-based Virtual Family of phantoms with matched age/gender information. 
Any phantom in this family, along with user-defined stenoses, can be used to obtain clinically realistic projection images with the Monte Carlo code penMesh for optimizing imaging and dosimetry.
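The histogram-analysis step can be illustrated with Otsu's method, which separates two intensity populations by maximizing between-class variance. The paper does not state its exact criterion, so this is a hedged stand-in with simulated myocardium and contrast-filled-vessel intensities.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy CTA voxel intensities: myocardium (low HU) and contrast vessels (high HU)
myo = rng.normal(80, 15, 8000)
vessel = rng.normal(350, 40, 2000)
voxels = np.concatenate([myo, vessel])

# Otsu's method: pick the cut that maximizes between-class variance
hist, edges = np.histogram(voxels, bins=256)
p = hist / hist.sum()
centers = (edges[:-1] + edges[1:]) / 2
w = np.cumsum(p)                       # class-0 weight at each candidate cut
mu = np.cumsum(p * centers)            # class-0 cumulative weighted mean
mu_t = mu[-1]
with np.errstate(divide="ignore", invalid="ignore"):
    between = (mu_t * w - mu) ** 2 / (w * (1 - w))
thresh = centers[np.nanargmax(between)]
print(thresh)
```

In the actual phantoms this global cut is only a starting point, refined by vesselness filters, connectivity constraints, and active contours as the abstract describes.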
Learning with Interactive Computer Graphics in the Undergraduate Neuroscience Classroom
Pani, John R.; Chariker, Julia H.; Naaz, Farah; Mattingly, William; Roberts, Joshua; Sephton, Sandra E.
2014-01-01
Instruction of neuroanatomy depends on graphical representation and extended self-study. As a consequence, computer-based learning environments that incorporate interactive graphics should facilitate instruction in this area. The present study evaluated such a system in the undergraduate neuroscience classroom. The system used the method of adaptive exploration, in which exploration in a high fidelity graphical environment is integrated with immediate testing and feedback in repeated cycles of learning. The results of this study were that students considered the graphical learning environment to be superior to typical classroom materials used for learning neuroanatomy. Students managed the frequency and duration of study, test, and feedback in an efficient and adaptive manner. For example, the number of tests taken before reaching a minimum test performance of 90% correct closely approximated the values seen in more regimented experimental studies. There was a wide range of student opinion regarding the choice between a simpler and a more graphically compelling program for learning sectional anatomy. Course outcomes were predicted by individual differences in the use of the software that reflected general work habits of the students, such as the amount of time committed to testing. The results of this introduction into the classroom are highly encouraging for development of computer-based instruction in biomedical disciplines. PMID:24449123
NASA Astrophysics Data System (ADS)
Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.
2012-01-01
Frequent monitoring of the gingival sulcus can provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a high-resolution, high-speed 3D imaging modality, can simultaneously provide information on pocket depth, gum contour, gum texture, and gum recession. A handheld, forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing using a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and the gum was clearly visualized. Further quantification analysis of the gingival sulcus will be performed on the acquired images.
Thiophene-based rhodamine as a selective fluorescence probe for Fe(III) and Al(III) in living cells.
Wang, Kun-Peng; Chen, Ju-Peng; Zhang, Si-Jie; Lei, Yang; Zhong, Hua; Chen, Shaojin; Zhou, Xin-Hong; Hu, Zhi-Qiang
2017-09-01
The thiophene-modified rhodamine 6G (GYJ) has been synthesized as a novel chemosensor. The sensor has sufficiently high selectivity and sensitivity for the detection of Fe³⁺ and Al³⁺ ions (M³⁺) by fluorescence and ultraviolet spectroscopy, with strong anti-interference performance. The binding ratio of the M³⁺-GYJ complex was determined to be 2:1 according to the Job's plot. The binding constants for Fe³⁺ and Al³⁺ were calculated to be 3.91 × 10⁸ and 5.26 × 10⁸ M⁻², respectively. All these features make it particularly favorable for cellular imaging applications. Fluorescence microscopy experiments demonstrated that the probe could contribute to the detection of Fe³⁺ and Al³⁺ in related cells and biological organs with satisfying resolution. Graphical abstract: GYJ has high selectivity and sensitivity for the detection of Fe(III) and Al(III), with a binding ratio of 2:1.
ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.
Chen, Xinfeng; Li, Haohong
2017-01-01
Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform), an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, ArControl provides a stand-alone and intuitive GUI (graphical user interface) application that does not require users to master scripts. Experimental data are automatically recorded with the built-in DAQ (data acquisition) function. ArControl also allows the behavioral schedule to be entirely stored in and operated on the Arduino chip. This makes ArControl a genuine real-time system with high temporal resolution (<1 ms). We tested ArControl with strict performance measurements and two mouse behavioral experiments. The results showed that ArControl is an adaptive and reliable system suitable for behavioral research.
Document segmentation for high-quality printing
NASA Astrophysics Data System (ADS)
Ancin, Hakan
1997-04-01
A technique to segment dark text on the light background of mixed-mode color documents is presented. The process does not perceptually change graphics and photo regions. Color documents are scanned and printed from various media, which usually do not have a clean background. This is especially the case for printouts generated from thin magazine pages: such printouts usually include text and figures from the back of the page, an artifact called bleeding. Removal of bleeding artifacts improves the perceptual quality of the printed document and reduces color ink usage. By detecting the light background of the document, these artifacts are removed from background regions. Detection of dark text regions also enables the halftoning algorithms to use true black ink for black text pixels instead of composite black. The processed document contains sharp black text on a white background, resulting in improved perceptual quality and better ink utilization. The described method is memory efficient and requires only a small number of scan lines of the high-resolution color document during processing.
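The background-detection idea can be sketched as follows: estimate the paper level from the histogram mode, then push near-background (bleed-through) pixels to white and dark text to true black. A toy grayscale example; the thresholds and synthetic intensities are assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic grayscale scan: light paper, faint bleed-through, dark text
page = rng.normal(235, 5, (64, 64))                  # paper background
page[10:20, 10:40] = rng.normal(200, 5, (10, 30))    # bleed-through from verso
page[40:50, 10:40] = rng.normal(30, 5, (10, 30))     # true foreground text
page = np.clip(page, 0, 255)

# estimate the background level from the histogram mode, then clear
# every pixel that is only slightly darker than the paper (the bleed)
hist, edges = np.histogram(page, bins=64, range=(0, 255))
bg = edges[np.argmax(hist)]
cleaned = page.copy()
cleaned[page > bg - 60] = 255        # bleed and paper -> white
cleaned[page < 80] = 0               # dark text -> true black

print(cleaned[15, 20], cleaned[45, 20])
```

A production pipeline would operate on a small rolling window of scan lines rather than the whole page, matching the memory-efficiency claim in the abstract.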
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
High-speed real-time animated displays on the ADAGE (trademark) RDS 3000 raster graphics system
NASA Technical Reports Server (NTRS)
Kahlbaum, William M., Jr.; Ownbey, Katrina L.
1989-01-01
Techniques which may be used to increase the animation update rate of real-time computer raster graphic displays are discussed. They were developed on the ADAGE RDS 3000 graphic system in support of the Advanced Concepts Simulator at the NASA Langley Research Center. These techniques involve the use of a special-purpose parallel processor for high-speed character generation. The description of the parallel processor includes the Barrel Shifter, which is part of the hardware and is the key to high-speed character rendition. The final result of this effort was a fourfold increase in the update rate of an existing primary flight display, from 4 to 16 frames per second.
The evolvement of pits and dislocations on TiO2-B nanowires via oriented attachment growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Bin; Chen Feng, E-mail: Fengchen@ecust.edu.c; Qu Wenwu
2009-08-15
TiO2-B nanowires were synthesized by an ion-exchange/thermal treatment. The unique morphology of pits and dislocations interspersed on the TiO2-B nanowires was first characterized and studied by high-resolution transmission electron microscopy (HRTEM). Oriented attachment is suggested as an important growth mechanism in the evolution of pits and dislocations on TiO2-B nanowires. Lattice shears and fractures were originally formed during the ion exchange of the sodium titanate nanowires, which resulted in the formation of primary crystalline units and vacancies in the layered hydrogen titanate nanowires. The (110) lattice planes of TiO2-B then grow faster in the [110] direction than the other lattice planes, which causes long dislocations to appear on the TiO2-B nanowires. The enlargement of the vacancies, caused by the rearrangement of primary crystalline units, is the likely reason for the formation of pits. Additionally, the transformation from TiO2-B to anatase could also be elucidated by the oriented attachment mechanism. Graphical abstract: the unique morphology of pits and dislocations on TiO2-B nanowires shown in high-resolution transmission electron microscopy (HRTEM), and a proposed evolution mechanism of pits and dislocations on TiO2-B nanowires.
Owen, R; Ramlakhan, S; Saatchi, R; Burke, D
2018-06-01
Acute limp is a common presenting condition in the paediatric emergency department. Causes of acute limp include traumatic injury, infection, and malignancy, and these causes are not easily distinguished in young children. In this pilot study, an infrared thermographic imaging technique to diagnose acute undifferentiated limp in young children was developed. Following the required ethics approval, 30 children (mean age = 5.2 years, standard deviation = 3.3 years) were recruited. The exposed lower limbs of participants were imaged using a high-resolution thermal camera. Using predefined regions of interest (ROI), any skin surface temperature difference between the healthy and affected legs was statistically analysed, with the aim of identifying limp. In all examined ROIs, the median skin surface temperature of the affected limb was higher than that of the healthy limb. The small sample size recruited for each group, however, means that the statistical tests of significance need to be interpreted in this context. Thermal imaging showed potential in helping with the diagnosis of acute limp in children. Repeating a similar study with a larger sample size will be beneficial to establish reproducibility of the results. Graphical abstract: A young child with an acute undifferentiated limp undergoes thermal imaging, and the follow-on image analysis assists the limp diagnosis.
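The ROI comparison reduces to a median skin-temperature difference between matched regions, which can also be tested nonparametrically. A sketch on simulated ROI temperatures; the means, spread, and choice of a Mann-Whitney test are assumptions, not the study's reported values or method.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
# toy skin-temperature maps (degrees C) for matched ROIs on each leg
healthy_roi = rng.normal(31.0, 0.3, (40, 40))
affected_roi = rng.normal(32.1, 0.3, (40, 40))   # inflamed side runs warmer

# median surface-temperature difference between affected and healthy ROIs
diff = float(np.median(affected_roi) - np.median(healthy_roi))

# nonparametric comparison of the two temperature distributions
stat, p = mannwhitneyu(affected_roi.ravel(), healthy_roi.ravel())
print(round(diff, 2), p < 0.001)
```

With the small per-group sample sizes reported in the study, such a per-ROI statistic would be the input to the group-level significance tests the authors caution about.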
A System and Method for Online High-Resolution Mapping of Gastric Slow-Wave Activity
Bull, Simon H.; O’Grady, Gregory; Du, Peng
2015-01-01
High-resolution (HR) mapping employs multielectrode arrays to achieve spatially detailed analyses of propagating bioelectrical events. A major current limitation is that spatial analyses must be performed “off-line” (after experiments), compromising timely recording feedback and restricting experimental interventions. These problems motivated development of a system and method for “online” HR mapping. HR gastric recordings were acquired and streamed to a novel software client. Algorithms were devised to filter data, identify slow-wave events, eliminate corrupt channels, and cluster activation events. A graphical user interface animated data and plotted electrograms and maps. Results were compared against off-line methods. The online system analyzed 256-channel serosal recordings with no unexpected system terminations and a mean delay of 18 s. Activation time marking sensitivity was 0.92; positive predictive value was 0.93. Abnormal slow-wave patterns including conduction blocks, ectopic pacemaking, and colliding wave fronts were reliably identified. Compared to traditional analysis methods, online mapping achieved comparable results, with equivalent coverage of 90% of electrodes, average RMS errors of less than 1 s, and correlation coefficients (CC) of activation maps of 0.99. Accurate slow-wave mapping was achieved in near real-time, enabling monitoring of recording quality and experimental interventions targeted to dysrhythmic onset. This work also advances the translation of HR mapping toward real-time clinical application. PMID:24860024
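The online pipeline summarized above (filter the signal, mark slow-wave activation times per channel, then cluster events) can be outlined in miniature. The moving-average filter and the falling-edge threshold rule below are illustrative assumptions, not the published algorithms:

```python
def moving_average(signal, window=3):
    """Simple low-pass filter over one channel."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def mark_activations(signal, threshold):
    """Return sample indices where the filtered signal first crosses
    the threshold on a falling edge (slow-wave activation proxy)."""
    events = []
    for i in range(1, len(signal)):
        if signal[i - 1] >= threshold > signal[i]:
            events.append(i)
    return events

# One hypothetical channel with two slow-wave deflections.
channel = [0.0, 0.1, 0.0, -1.2, -0.3, 0.2, 0.1, -1.5, -0.4, 0.0]
filtered = moving_average(channel)
print(mark_activations(filtered, -0.45))  # → [3, 7]
```

In the real system this runs per channel on streamed 256-channel data, with corrupt-channel rejection and spatial clustering of the marked events on top.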
Body as Echoes: Cyber Archiving of Dazu Rock Carvings
NASA Astrophysics Data System (ADS)
Chen, W.-W.
2017-08-01
"Body As Echoes: Cyber Archiving of Dazu Rock Carvings" (BAE project for short) strives to explore the tangible and intangible aspects of digital heritage conservation. Focusing on the Dazu Rock Carvings, a World Heritage Site in Sichuan Province, the BAE project uses photogrammetry and digital sculpting techniques to investigate the digital narrative of cultural heritage conservation. It further provides collaborative opportunities for scholars, institutions, and local authorities to conduct high-resolution site surveys. To preserve and sustain the tangible cultural heritage at the Dazu Rock Carvings, the BAE project cyber-archives selected niches and caves at Dazu and transforms them into high-resolution, three-dimensional models. To extend the established results and make the digital resources available to broader audiences, the BAE project will further develop an interactive info-motion interface and apply the knowledge of digital heritage gained from the project to STEM education. The BAE project expects to provide a bridging platform for archaeology, computer graphics, and interactive info-motion design. Digital sculpting, projection mapping, interactive info-motion, and VR will be the core techniques used to explore the narrative of digital heritage conservation. By further protecting, educating, and consolidating "building dwelling thinking" through digital heritage preservation, the BAE project helps to preserve the digital humanities and to reach out to museum staff and academia. Through the joint effort of global institutions and local authorities, the BAE project will also help to foster and enhance mutual understanding through intercultural collaborations.
Ren, Wei; Han, Lingyu; Luo, Mengyi; Bian, Baolin; Guan, Ming; Yang, Hui; Han, Chao; Li, Na; Li, Tuo; Li, Shilei; Zhang, Yangyang; Zhao, Zhenwen; Zhao, Haiyu
2018-04-28
Traditional Chinese medicines (TCMs) are undoubtedly treasured natural resources for discovering effective medicines to treat and prevent various diseases. However, screening for bioactive compounds remains extremely difficult because of the enormous number of constituents in TCMs. In this work, the chemical composition of toad venom was comprehensively analyzed using ultra-high performance liquid chromatography (UPLC) coupled with high-resolution LTQ-Orbitrap mass spectrometry, and 93 compounds were detected. Among them, 17 constituents were confirmed against standard substances and 8 constituents were detected in toad venom for the first time. A compound database of toad venom, containing the most complete set of compounds to date, was then constructed using UPLC coupled with high-sensitivity Qtrap MS. Next, a target cell-based approach for screening potential bioactive compounds from toad venom was developed by analyzing the target cell extracts. The reliability of this method was validated with negative and positive controls. In total, 17 components in toad venom were found to interact with the target cancer cells. In vitro pharmacological trials were performed to confirm the anti-cancer activity of four of them. The results showed that the six bufogenins and seven bufotoxins detected in our research represent a promising resource for exploring bufogenin/bufotoxin-based anticancer agents with low cardiotoxic effects. The target cell-based screening method, coupled with the compound database of toad venom constructed by high-sensitivity UPLC-Qtrap-MS, provides a new strategy to rapidly screen and identify potential bioactive constituents of low abundance in natural products, which is beneficial for drug discovery from other TCMs. Graphical abstract.
Biological imaging by soft X-ray diffraction microscopy
NASA Astrophysics Data System (ADS)
Shapiro, David
We have developed a microscope for soft x-ray diffraction imaging of dry or frozen hydrated biological specimens. This lensless imaging system does not suffer from the resolution or specimen thickness limitations that other short wavelength microscopes experience. The microscope, currently situated at beamline 9.0.1 of the Advanced Light Source, can collect diffraction data to 12 nm resolution with 750 eV photons and 17 nm resolution with 520 eV photons. The specimen can be rotated with a precision goniometer through an angle of 160 degrees, allowing for the collection of nearly complete three-dimensional diffraction data. The microscope is fully computer controlled through a graphical user interface, and a scripting language automates the collection of both two-dimensional and three-dimensional data. Diffraction data from a freeze-dried dwarf yeast cell, Saccharomyces cerevisiae carrying the CLN3-1 mutation, were collected to 12 nm resolution from 8 specimen orientations spanning a total rotation of 8 degrees. The diffraction data were phased using the difference map algorithm, and the reconstructions provide real space images of the cell to 30 nm resolution from each of the orientations. The agreement of the different reconstructions provides confidence in the recovered, and previously unknown, structure and indicates the three dimensionality of the cell. This work represents the first imaging of the natural complex refractive contrast from a whole unstained cell by the diffraction microscopy method and has achieved a resolution superior to lens-based x-ray tomographic reconstructions of similar specimens. Studies of the effects of exposure to large radiation doses were also carried out. It was determined that the freeze-dried cell suffers an initial collapse, which is followed by a uniform, but slow, shrinkage. This structural damage to the cell is not accompanied by a diminished ability to see small features in the specimen.
Preliminary measurements on frozen-hydrated yeast indicate that the frozen specimens do not exhibit these changes even with doses as high as 5 × 10⁹ Gray.
LaMotte, Andrew E.; Wieczorek, Michael
2010-01-01
This 30-meter resolution data set represents the imperviousness layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
LaMotte, Andrew E.; Wieczorek, Michael
2010-01-01
This 30-meter resolution data set represents the tree canopy layer for the conterminous United States for the 2001 time period. The data have been arranged into four tiles to facilitate timely display and manipulation within a Geographic Information System, browse graphic: nlcd01-partition.jpg. The National Land Cover Data Set for 2001 was produced through a cooperative project conducted by the Multi-Resolution Land Characteristics (MRLC) Consortium. The MRLC Consortium is a partnership of Federal agencies (www.mrlc.gov), consisting of the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (USEPA), the U.S. Department of Agriculture (USDA), the U.S. Forest Service (USFS), the National Park Service (NPS), the U.S. Fish and Wildlife Service (USFWS), the Bureau of Land Management (BLM), and the USDA Natural Resources Conservation Service (NRCS). One of the primary goals of the project is to generate a current, consistent, seamless, and accurate National Land Cover Database (NLCD) circa 2001 for the United States at medium spatial resolution. For a detailed definition and discussion on MRLC and the NLCD 2001 products, refer to Homer and others (2004) and http://www.mrlc.gov/mrlc2k.asp. The NLCD 2001 was created by partitioning the United States into mapping-zones. A total of 68 mapping-zones browse graphic: nlcd01-mappingzones.jpg were delineated within the conterminous United States based on ecoregion and geographical characteristics, edge-matching features, and the size requirement of Landsat mosaics. Mapping-zones encompass the whole or parts of several states. Questions about the NLCD mapping zones can be directed to the NLCD 2001 Land Cover Mapping Team at the USGS/EROS, Sioux Falls, SD (605) 594-6151 or mrlc@usgs.gov.
Volumetric graphics in liquid using holographic femtosecond laser pulse excitations
NASA Astrophysics Data System (ADS)
Kumagai, Kota; Hayasaki, Yoshio
2017-06-01
Much attention has been paid to the development of three-dimensional volumetric displays in the fields of optics and computer graphics; such displays are a long-held dream of display researchers. However, full-color volumetric displays are challenging because many voxels with different colors have to be formed to render volumetric graphics in real three-dimensional space. Here, we show a new volumetric display in which microbubble voxels are three-dimensionally generated in a liquid by focused femtosecond laser pulses. Use of a high-viscosity liquid, which is the key idea of this system, slows down the movement of the microbubbles, and as a result, volumetric graphics can be displayed. This "volumetric bubble display" has a wide viewing angle, refreshes simply, and requires no addressing wires because it relies on optical access to a transparent liquid; it achieves full-color graphics composed of light-scattering voxels controlled by illumination light sources. In addition, a system for bursting the bubble graphics using an ultrasonic vibrator has also been demonstrated. This technology will open up a wide range of applications in three-dimensional displays, augmented reality and computer graphics.
Experimental Methodology for Evaluating the Characteristics of Avionics Graphics Platforms (original title: Méthodologie expérimentale pour évaluer les caractéristiques des plateformes graphiques avioniques)
NASA Astrophysics Data System (ADS)
Legault, Vincent
Within a context where the aviation industry is intensifying the development of new visually appealing features and where time-to-market must be as short as possible, rapid graphics-processing benchmarking in a certified avionics environment becomes an important issue. With this work we intend to demonstrate that it is possible to deploy a high-performance graphics application on an avionics platform that uses certified graphical COTS components. Moreover, we would like to bring to the avionics community a methodology that will allow developers to identify the elements needed for graphics system optimisation, and to provide them with tools that can measure the complexity of this type of application and the amount of resources required to properly scale a graphics system according to their needs. As far as we know, no graphics performance profiling tool dedicated to critical embedded architectures has been proposed. We thus had the idea of implementing a specialized benchmarking tool that would be an appropriate and effective solution to this problem. Our solution resides in extracting the key graphics specifications from an inherited application and using them afterwards in a 3D image generation application.
Atmospheric Pressure Ionization Using a High Voltage Target Compared to Electrospray Ionization.
Lubin, Arnaud; Bajic, Steve; Cabooter, Deirdre; Augustijns, Patrick; Cuyckens, Filip
2017-02-01
A new atmospheric pressure ionization (API) source, viz. UniSpray, was evaluated for mass spectrometry (MS) analysis of pharmaceutical compounds by head-to-head comparison with electrospray ionization (ESI) on the same high-resolution MS system. The atmospheric pressure ionization source is composed of a grounded nebulizer spraying onto a high-voltage, cylindrical stainless steel target. Molecules are ionized in a similar fashion to electrospray ionization, predominantly producing protonated or deprotonated species. Adduct formation (e.g., proton and sodium adducts) and in-source fragmentation are shown to be almost identical between the two sources. The performance of the new API source was compared with electrospray by infusion of a mix of 22 pharmaceutical compounds with a wide variety of functional groups and physico-chemical properties (molecular weight, logP, and pKa) in more than 100 different conditions (mobile phase strength, solvents, pH, and flow rate). The new API source shows an intensity gain of a factor of 2.2 compared with ESI considering all conditions on all compounds tested. Finally, some hypotheses on the ionization mechanism, and its similarities to and differences from ESI, are discussed. Graphical Abstract.
Photojournal Home Page Graphic 2007
NASA Technical Reports Server (NTRS)
2008-01-01
This image is an unannotated version of the Photojournal Home Page graphic released in October 2007. This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added (color, location, etc.). Several data sets from various planetary and astronomy missions were combined to create this image.
Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique
NASA Technical Reports Server (NTRS)
Wargo, M. J.; Witt, A. F.
1992-01-01
A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.
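The multi-pixel averaging algorithm mentioned above can be illustrated with a short sketch: averaging an n × n neighbourhood around a user-selected pixel trades spatial resolution for lower-noise point sensing. The frame values and window size here are hypothetical:

```python
def local_average(frame, row, col, half=1):
    """Mean intensity over a (2*half+1)-square window centred on
    (row, col), clipped at the image borders."""
    total, count = 0.0, 0
    for r in range(max(0, row - half), min(len(frame), row + half + 1)):
        for c in range(max(0, col - half), min(len(frame[r]), col + half + 1)):
            total += frame[r][c]
            count += 1
    return total / count

frame = [
    [10.0, 10.2, 10.1],
    [10.3, 12.0, 10.2],  # noisy spike at the centre pixel
    [10.1, 10.0, 10.2],
]
print(round(local_average(frame, 1, 1), 2))  # → 10.34
```

Tracking this windowed average over successive frames gives a low-noise intensity-versus-time signal at the selected point, as the abstract describes.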
Wang, An-Li; Lowen, Steven B; Romer, Daniel; Giorno, Mario; Langleben, Daniel D
2015-01-01
Background Warning labels on cigarette packages are an important venue for information about the hazards of smoking. The 2009 US Family Smoking Prevention and Tobacco Control Act mandated replacing the current text-only labels with graphic warning labels. However, labels proposed by the Food and Drug Administration (FDA) were challenged in court by the tobacco companies, who argued successfully that the proposed labels needlessly encroached on their right to free speech, in part because they included images of high emotional salience that indiscriminately frightened rather than informed consumers. Methods We used functional MRI to examine the effects of graphic warning labels' emotional salience on smokers' brain activity and cognition. Twenty-four smokers viewed a random sequence of blocks of graphic warning labels that had been rated high or low on an ‘emotional reaction’ scale in previous research. Results We found that labels rated high on emotional reaction were better remembered, associated with reduction in the urge to smoke, and produced greater brain response in the amygdala, hippocampi, inferior frontal gyri and the insulae. Conclusions Recognition memory and craving are, respectively, correlates of effectiveness of addiction-related public health communications and interventions, and amygdala activation facilitates the encoding of emotional memories. Thus, our results suggest that emotional reaction to graphic warning labels contributes to their public health impact and may be an integral part of the neural mechanisms underlying their effectiveness. Given the urgency of the debate about the constitutional risks and public health benefits of graphic warning labels, these preliminary findings warrant consideration while longitudinal clinical studies are underway. PMID:25564288
Creating Realistic 3D Graphics with Excel at High School--Vector Algebra in Practice
ERIC Educational Resources Information Center
Benacka, Jan
2015-01-01
The article presents the results of an experiment in which Excel applications that depict rotatable and sizable orthographic projection of simple 3D figures with face overlapping were developed with thirty gymnasium (high school) students of age 17-19 as an introduction to 3D computer graphics. A questionnaire survey was conducted to find out…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-06
... Industry Support Calculation Coated Paper Suitable For High-Quality Print Graphics Using Sheet-Fed Presses... employs an industry-wide test to determine whether, under section 773(c)(1)(B), available information in... sections 771(33)(E) and (F) of the Act. In addition, we find that Shandong Sun Paper Industry Joint Stock...
Benchmark of Client and Server-Side Catchment Delineation Approaches on Web-Based Systems
NASA Astrophysics Data System (ADS)
Demir, I.; Sermet, M. Y.; Sit, M. A.
2016-12-01
Recent advances in internet and cyberinfrastructure technologies have provided the capability to acquire large-scale spatial data from various gauges and sensor networks. The collection of environmental data has increased demand for applications that are capable of managing and processing large-scale and high-resolution data sets. Given the amount and resolution of the data sets provided, one of the challenging tasks in organizing and customizing hydrological data sets is delineation of watersheds on demand. Watershed delineation is a process for creating a boundary that represents the contributing area for a specific control point or water outlet, with the intent of characterizing and analyzing portions of a study area. Although many GIS tools and software for watershed analysis are available on desktop systems, there is a need for web-based and client-side techniques for creating a dynamic and interactive environment for exploring hydrological data. In this project, we demonstrated several watershed delineation techniques on the web, implemented on the client side using JavaScript and WebGL, and on the server side using Python and C++. We also developed a client-side GPGPU (General Purpose Graphical Processing Unit) algorithm to analyze high-resolution terrain data for watershed delineation, which allows parallelization on the GPU. Web-based real-time analysis of watershed segmentation can be helpful for decision-makers and interested stakeholders while eliminating the need to install complex software packages and deal with large-scale data sets. Utilization of client-side hardware resources also reduces the need for servers by crowdsourcing the computation to users' machines. Our goal for future work is to improve other hydrologic analysis methods, such as rain flow tracking, by adapting the presented approaches.
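As a sketch of the grid-based watershed delineation described above, the classic D8 scheme assigns each DEM cell to its steepest-descent neighbour and then gathers every cell whose flow path reaches a chosen outlet. The tiny DEM and outlet below are illustrative, and distance weighting of the drop is omitted for brevity:

```python
from collections import deque

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_downstream(dem):
    """Map each cell to its steepest-descent neighbour (None for sinks)."""
    down = {}
    n, m = len(dem), len(dem[0])
    for r in range(n):
        for c in range(m):
            best, target = 0.0, None
            for dr, dc in NEIGHBOURS:
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < m:
                    drop = dem[r][c] - dem[rr][cc]
                    if drop > best:
                        best, target = drop, (rr, cc)
            down[(r, c)] = target
    return down

def delineate(dem, outlet):
    """Return the set of cells whose D8 flow path reaches the outlet."""
    upstream = {}
    for cell, target in d8_downstream(dem).items():
        if target is not None:
            upstream.setdefault(target, []).append(cell)
    basin, queue = {outlet}, deque([outlet])
    while queue:  # breadth-first walk against the flow direction
        for cell in upstream.get(queue.popleft(), []):
            if cell not in basin:
                basin.add(cell)
                queue.append(cell)
    return basin

dem = [
    [5.0, 4.0, 5.0],
    [4.0, 3.0, 4.0],
    [5.0, 2.0, 5.0],
]
print(len(delineate(dem, (2, 1))))  # → 9: all cells drain to the outlet
```

Server-side implementations run this kind of traversal over full-resolution terrain grids; the project's GPGPU variant parallelizes the per-cell flow-direction computation.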
Photojournal Home Page Graphic 2009 Artist Concept
2009-07-07
This digital collage contains a highly stylized rendition of our solar system and points beyond. As this graphic was intended to be used as a navigation aid in searching for data within the Photojournal, certain artistic embellishments have been added.
New Integrated Video and Graphics Technology: Digital Video Interactive.
ERIC Educational Resources Information Center
Optical Information Systems, 1987
1987-01-01
Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)
Grace: A cross-platform micromagnetic simulator on graphics processing units
NASA Astrophysics Data System (ADS)
Zhu, Ru
2015-12-01
A micromagnetic simulator running on graphics processing units (GPUs) is presented. Unlike the GPU implementations of other research groups, which predominantly run on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform independent. It runs on GPUs from vendors including NVidia, AMD, and Intel, and achieves a significant performance boost compared with previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paves the way for running large-scale micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.
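The core of any micromagnetic time stepper, whatever the GPU platform, is integration of the Landau-Lifshitz-Gilbert equation at each cell. A minimal explicit-Euler sketch (the constants, the damping value, and the renormalization step are illustrative assumptions, not Grace's implementation):

```python
import numpy as np

def llg_step(m, h_eff, dt, alpha=0.02, gamma=2.211e5):
    """One explicit LLG integration step on a normalized magnetization vector.

    dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H))
    """
    prefactor = -gamma / (1 + alpha ** 2)
    mxh = np.cross(m, h_eff)
    dmdt = prefactor * (mxh + alpha * np.cross(m, mxh))
    m_new = m + dt * dmdt
    # |m| = 1 must be preserved; renormalize after the explicit step
    return m_new / np.linalg.norm(m_new, axis=-1, keepdims=True)
```

In a full simulator this step runs for every cell in parallel, with `h_eff` assembled from exchange, anisotropy, demagnetization, and Zeeman contributions.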
Evans, Abigail T; Peters, Ellen; Shoben, Abigail B; Meilleur, Louise R; Klein, Elizabeth G; Tompkins, Mary Kate; Romer, Daniel; Tusler, Martin
2017-10-01
Cigarette graphic-warning labels elicit negative emotion. Research suggests negative emotion drives greater risk perceptions and quit intentions through multiple processes. The present research compares text-only warning effectiveness to that of graphic warnings eliciting more or less negative emotion. Nationally representative online panels of 736 adult smokers and 469 teen smokers/vulnerable smokers were randomly assigned to view one of three warning types (text-only, text with low-emotion images, or text with high-emotion images) four times over 2 weeks. Participants recorded their emotional reaction to the warnings (measured as arousal), smoking risk perceptions, and quit intentions. Primary analyses used structural equation modeling. Participants in the high-emotion condition reported greater emotional reaction than text-only participants (bAdult = 0.21; bTeen = 0.27, p's < .004); those in the low-emotion condition reported lower emotional reaction than text-only participants (bAdult = -0.18; bTeen = -0.22, p's < .018). Stronger emotional reaction was associated with increased risk perceptions in both samples (bAdult = 0.66; bTeen = 0.85, p's < .001) and greater quit intentions among adults (bAdult = 1.00, p < .001). Compared to text-only warnings, low-emotion warnings were associated with reduced risk perceptions and quit intentions whereas high-emotion warnings were associated with increased risk perceptions and quit intentions. Warning labels with images that elicit more negative emotional reaction are associated with increased risk perceptions and quit intentions in adults and teens relative to text-only warnings. However, graphic warnings containing images which evoke little emotional reaction can backfire and reduce risk perceptions and quit intentions versus text-only warnings. This research is the first to directly manipulate two emotion levels in sets of nine cigarette graphic warning images and compare them with text-only warnings. 
Among adult and teen smokers, high-emotion graphic warnings were associated with increased risk perceptions and quit intentions versus text-only warnings. Low-emotion graphic warnings backfired and tended to reduce risk perceptions and quit intentions versus text-only warnings. Policy makers should be aware that merely placing images on cigarette packaging is insufficient to increase smokers' risk perceptions and quit intentions. Low-emotion graphic warnings will not necessarily produce desired population-level benefits relative to text-only or high-emotion warnings.
NASA Technical Reports Server (NTRS)
Johnson, Walter; Battiste, Vernol
2016-01-01
The 3D-Cockpit Display of Traffic Information (3D-CDTI) is a flight deck tool that presents aircrew with the location, current status, and flight-plan data of proximal traffic aircraft; strategic conflict detection and alerting; automated conflict-resolution strategies; a facility for graphically planning manual route changes; and time-based, in-trail spacing on approach. The CDTI is manipulated via a touchpad on the flight deck, and by mouse when presented as part of a desktop flight simulator.
SEURAT: Visual analytics for the integrated analysis of microarray data
2010-01-01
Background In translational cancer research, gene expression data are collected together with clinical data and genomic data arising from other chip-based high-throughput technologies. Software tools for the joint analysis of such high-dimensional data sets together with clinical data are required. Results We have developed an open-source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data, and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, bar charts, histograms, event charts, and a chromosome browser that displays genetic variations along the genome. All graphics are dynamic and fully linked, so that any object selected in one graphic is highlighted in all the others. For exploratory data analysis, the software provides unsupervised methods such as clustering, seriation algorithms, and biclustering algorithms. Conclusions The SEURAT software meets the growing need of researchers to perform joint analyses of gene expression, genomic, and clinical data. PMID:20525257
Support for fast comprehension of ICU data: visualization using metaphor graphics.
Horn, W; Popow, C; Unterasinger, L
2001-01-01
The time-oriented analysis of electronic patient records in (neonatal) intensive care units is a tedious and time-consuming task. Graphic data visualization should make it easier for physicians to assess the overall situation of a patient and to recognize essential changes over time. Metaphor graphics are used to sketch the most relevant parameters for characterizing a patient's situation. By repeating the graphic object in 24 frames, the situation of the ICU patient is presented in one display, usually summarizing the last 24 h. VIE-VISU is a data visualization system that uses multiples to present the change in the patient's status over time in graphic form. Each multiple is a highly structured metaphor-graphic object visualizing important ICU parameters from circulation, ventilation, and fluid balance. The design using multiples promotes a focus on stability and change. A stable patient is recognizable at first sight, continuous improvement or a worsening condition is easy to analyze, and drastic changes in the patient's situation get the viewer's attention immediately.
Real-time dose computation: GPU-accelerated source modeling and superposition/convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques, Robert; Wong, John; Taylor, Russell
Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high-resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material-centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation.
Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine that provides a substantial performance gain over CPU-based implementations. Real-time dose computation is feasible at the accuracy levels of the superposition/convolution algorithm.
ERIC Educational Resources Information Center
Jordan, Jim
1988-01-01
Summarizes how infographics are produced and how they provide information graphically in high school publications. Offers suggestions concerning information gathering, graphic format, and software selection, and provides examples of computer/student-designed infographics. (MM)
Recent trends in water analysis triggering future monitoring of organic micropollutants.
Schmidt, Torsten C
2018-03-21
Water analysis has been an important area since the beginning of analytical chemistry. The focus, though, has shifted substantially: from minerals and the main constituents of water in the time of Carl Remigius Fresenius to a multitude of, in particular, organic compounds at concentrations down to the sub-nanogram-per-liter level nowadays. This was possible only because of numerous innovations in instrumentation in recent decades, the drivers of which are briefly discussed. In addition to the high demands on sensitivity, high throughput by automation and short analysis times are major requirements. In this article, some recent developments in the chemical analysis of organic micropollutants (OMPs) are presented. These include the analysis of priority pollutants in whole water samples, extension of the analytical window, in particular to encompass highly polar compounds, the trend toward more than one separation dimension before mass spectrometric detection, and ways of coping with unknown analytes by suspect and nontarget screening approaches involving high-resolution mass spectrometry. Furthermore, beyond gathering reliable concentration data for many OMPs, the question of the relevance of such data for the aquatic system under scrutiny is becoming ever more important. To that end, effect-based analytics can be used and may become part of future routine monitoring, mostly with a focus on adverse effects of OMPs in specific test systems mimicking environmental impacts. Despite advances in the field of water analysis in recent years, there are still many challenges for further analytical research. Graphical abstract: Recent trends in water analysis of organic micropollutants that open new opportunities in future water monitoring (HRMS, high-resolution mass spectrometry; PMOC, persistent mobile organic compounds).
Piazza, Rocco; Magistroni, Vera; Pirola, Alessandra; Redaelli, Sara; Spinelli, Roberta; Redaelli, Serena; Galbiati, Marta; Valletta, Simona; Giudici, Giovanni; Cazzaniga, Giovanni; Gambacorti-Passerini, Carlo
2013-01-01
Copy number alterations (CNAs) are common events occurring in leukaemias and solid tumors. Comparative Genome Hybridization (CGH) is currently the gold-standard technique for analyzing CNAs; however, CGH analysis requires dedicated instruments and can perform only low-resolution Loss of Heterozygosity (LOH) analyses. Here we present CEQer (Comparative Exome Quantification analyzer), a new graphical, event-driven tool for coupled CNA/allelic-imbalance (AI) analysis of exome sequencing data. Using case-control matched exome data, CEQer performs a comparative digital exonic quantification to generate CNA data and couples this information with exome-wide LOH and allelic-imbalance detection. These data are used to build mixed statistical/heuristic models allowing the identification of CNA/AI events. To test our tool, we initially used in silico generated data; we then performed whole-exome sequencing on 20 leukemic specimens and corresponding matched controls and analyzed the results using CEQer. Taken globally, these analyses showed that the combined use of comparative digital exon quantification and LOH/AI allows the generation of very accurate CNA data. We therefore propose CEQer as an efficient, robust, and user-friendly graphical tool for the identification of CNA/AI in whole-exome sequencing data. PMID:24124457
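In its simplest form, comparative digital exon quantification reduces to library-size-normalized log-ratios of per-exon read counts between case and control. A toy sketch of that first step (the pseudocount and normalization scheme are illustrative assumptions, not CEQer's actual statistics):

```python
import math

def cna_log_ratios(case_counts, control_counts, pseudocount=0.5):
    """Per-exon log2 copy-number ratios, normalized for library size.

    A value near 0 suggests two copies; sustained positive or negative
    runs across adjacent exons suggest gains or losses.
    """
    case_total = sum(case_counts) or 1
    ctrl_total = sum(control_counts) or 1
    ratios = []
    for a, b in zip(case_counts, control_counts):
        # pseudocount guards against zero-coverage exons
        r = ((a + pseudocount) / case_total) / ((b + pseudocount) / ctrl_total)
        ratios.append(math.log2(r))
    return ratios
```

A tool like CEQer layers segmentation and statistical calling on top of such ratios, and couples them with LOH/AI evidence from the allele counts.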
Bedez, Mathieu; Belhachmi, Zakaria; Haeberlé, Olivier; Greget, Renaud; Moussaoui, Saliha; Bouteiller, Jean-Marie; Bischoff, Serge
2016-01-15
Solving a model describing the electrical activity of neural tissue and its propagation within that tissue is highly demanding in terms of computing time and requires substantial computing power to achieve good results. In this study, we present a method for solving a model of electrical propagation in neuronal tissue using the parareal algorithm, coupled with spatial parallelization using CUDA on a graphics processing unit (GPU). We applied the method to different dimensions of the model's geometry (1-D, 2-D, and 3-D). The GPU results are compared with simulations from a multi-core processor cluster using the message-passing interface (MPI), where the spatial scale was parallelized in order to reach a computation time comparable to that of the presented GPU method. A gain of a factor of 100 in computational time between sequential results and those obtained using the GPU was achieved in the case of 3-D geometry. Given the structure of the GPU, this factor increases with the fineness of the geometry used in the computation. To the best of our knowledge, this is the first time such a method has been used in neuroscience. Time parallelization coupled with GPU spatial parallelization drastically reduces computational time while preserving a fine resolution of the model describing the propagation of the electrical signal in neuronal tissue.
NASA Technical Reports Server (NTRS)
Buckner, J. D.; Council, H. W.; Edwards, T. R.
1974-01-01
Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.
NASA Astrophysics Data System (ADS)
De Martini, Francesco
2017-10-01
The nature of the scalar field responsible for the cosmological inflation is found to be rooted in the most fundamental concept of Weyl's differential geometry: the parallel displacement of vectors in curved space-time. Within this novel geometrical scenario, the standard electroweak theory of leptons based on the SU(2)L⊗U(1)Y as well as on the conformal groups of space-time Weyl's transformations is analysed within the framework of a general-relativistic, conformally covariant scalar-tensor theory that includes the electromagnetic and the Yang-Mills fields. A Higgs mechanism within a spontaneous symmetry breaking process is identified and this offers formal connections between some relevant properties of the elementary particles and the dark energy content of the Universe. An `effective cosmological potential': Veff is expressed in terms of the dark energy potential:
BIM-Sim: Interactive Simulation of Broadband Imaging Using Mie Theory
Berisha, Sebastian; van Dijk, Thomas; Bhargava, Rohit; Carney, P. Scott; Mayerich, David
2017-01-01
Understanding the structure of a scattered electromagnetic (EM) field is critical to improving the imaging process. Mechanisms such as diffraction, scattering, and interference affect an image, limiting the resolution and potentially introducing artifacts. Simulation and visualization of scattered fields thus play an important role in imaging science. However, EM fields are high-dimensional, making them time-consuming to simulate and difficult to visualize. In this paper, we present a framework for interactively computing and visualizing EM fields scattered by micro- and nanoparticles. Our software uses graphics hardware to evaluate the field both inside and outside of these particles. We then use Monte-Carlo sampling to reconstruct and visualize the three-dimensional structure of the field, spectral profiles at individual points, the structure of the field at the surface of the particle, and the resulting image produced by an optical system. PMID:29170738
CheckDen, a program to compute quantum molecular properties on spatial grids.
Pacios, Luis F; Fernandez, Alberto
2009-09-01
CheckDen, a program to compute quantum molecular properties on a variety of spatial grids, is presented. The program reads as its only input wavefunction files written by standard quantum packages and calculates the electron density rho(r), the promolecule and density difference function, the gradient and Laplacian of rho(r), information entropy, electrostatic potential, kinetic energy densities G(r) and K(r), the electron localization function (ELF), and the localized orbital locator (LOL) function. These properties can be calculated on a wide range of one-, two-, and three-dimensional grids that can be processed by widely used graphics programs to render high-resolution images. CheckDen also offers other options, such as extracting separate atom contributions to the property computed, converting grid output data into CUBE and OpenDX volumetric data formats, and performing arithmetic combinations with grid files in all the recognized formats.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
SLIDE - a web-based tool for interactive visualization of large-scale -omics data.
Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon
2018-06-28
Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow real-time customization of graphics based on user interaction. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions in a single screen. SLIDE is publicly available under the BSD license, both as an online version and as a stand-alone version, at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.
Color visualization for fluid flow prediction
NASA Technical Reports Server (NTRS)
Smith, R. E.; Speray, D. E.
1982-01-01
High-resolution raster-scan color graphics allow variables to be presented as a continuum, in a color-coded picture referenced to a geometry such as a flow-field grid or a boundary surface. Software is used to map a scalar variable, such as pressure or temperature, defined on a two-dimensional slice of a flow field. The geometric shape is preserved in the resulting picture, and the relative magnitude of the variable is color-coded onto the geometric shape. The primary numerical process for color coding is an efficient search along a raster scan line to locate the quadrilateral block in the grid that bounds each pixel on the line. Tension-spline interpolation is performed relative to the grid for specific values of the scalar variable, which is then color-coded. When all pixels for the field of view are color-defined, the picture is played back from a memory device onto a television screen.
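Once the scalar value at each pixel has been interpolated, the color-coding step itself is a normalization followed by a colormap lookup. A minimal sketch with a linear blue-to-red ramp (the two-color palette and function name are illustrative choices, not the paper's):

```python
def color_code(field, vmin=None, vmax=None):
    """Map a 2-D scalar field to RGB triples: low values blue, high values red."""
    flat = [v for row in field for v in row]
    lo = min(flat) if vmin is None else vmin
    hi = max(flat) if vmax is None else vmax
    span = hi - lo or 1.0              # avoid division by zero for flat fields
    image = []
    for row in field:
        image.append([])
        for v in row:
            t = (v - lo) / span        # normalized magnitude in [0, 1]
            image[-1].append((int(255 * t), 0, int(255 * (1 - t))))
    return image
```

In the system described above, the per-pixel scalar values would come from the tension-spline interpolation over the bounding quadrilateral rather than being given directly.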
Protein Dynamics from NMR and Computer Simulation
NASA Astrophysics Data System (ADS)
Wu, Qiong; Kravchenko, Olga; Kemple, Marvin; Likic, Vladimir; Klimtchuk, Elena; Prendergast, Franklyn
2002-03-01
Proteins exhibit internal motions from the millisecond to sub-nanosecond time scale. The challenge is to relate these internal motions to biological function. A strategy to address this aim is to apply a combination of several techniques including high-resolution NMR, computer simulation of molecular dynamics (MD), molecular graphics, and finally molecular biology, the latter to generate appropriate samples. Two difficulties that arise are: (1) the time scale which is most directly biologically relevant (ms to μs) is not readily accessible by these techniques and (2) the techniques focus on local and not collective motions. We will outline methods using ¹³C-NMR to help alleviate the second problem, as applied to intestinal fatty acid binding protein, a relatively small intracellular protein believed to be involved in fatty acid transport and metabolism. This work is supported in part by PHS Grant GM34847 (FGP) and by a fellowship from the American Heart Association (QW).
Bankey, Viki; Grauch, V.J.S.; Drenth, B.J.; ,
2006-01-01
This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during high-resolution aeromagnetic surveys in southern Colorado and northern New Mexico in December, 2005. One survey covers the eastern edge of the San Luis basin, including the towns of Questa, New Mexico and San Luis, Colorado. A second survey covers the mountain front east of Santa Fe, New Mexico, including the town of Chimayo and portions of the Pueblos of Tesuque and Nambe. Several derivative products from these data are also presented as grids and images, including reduced-to-pole data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.
Fast optically sectioned fluorescence HiLo endomicroscopy.
Ford, Tim N; Lim, Daryl; Mertz, Jerome
2012-02-01
We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.
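The HiLo fusion itself is compact: low spatial frequencies are taken from a demodulated uniform/structured difference (where only in-focus structure retains contrast) and high frequencies from the uniform image alone. A numpy sketch with a Gaussian frequency split (the simplified demodulation, σ, and the weighting η are illustrative parameters, not the authors' calibrated processing chain):

```python
import numpy as np

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """Fuse uniform- and structured-illumination images into one
    optically sectioned image (simplified HiLo sketch)."""
    fy = np.fft.fftfreq(uniform.shape[0])[:, None]
    fx = np.fft.fftfreq(uniform.shape[1])[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))  # Gaussian low-pass

    def lowpass(img):
        return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

    lo = lowpass(np.abs(uniform - structured))  # contrast survives only in focus
    hi = uniform - lowpass(uniform)             # complementary high-pass
    return eta * lo + hi
```

Because the fusion is a handful of FFTs and pointwise operations per frame, it maps naturally onto the GPU processing that gives the device its 9.5 Hz net frame rate.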
van Agthoven, Maria A; Barrow, Mark P; Chiron, Lionel; Coutouly, Marie-Aude; Kilgour, David; Wootton, Christopher A; Wei, Juan; Soulby, Andrew; Delsuc, Marc-André; Rolando, Christian; O'Connor, Peter B
2015-12-01
Two-dimensional Fourier transform ion cyclotron resonance mass spectrometry is a data-independent analytical method that records the fragmentation patterns of all the compounds in a sample. This study shows the implementation of atmospheric pressure photoionization with two-dimensional (2D) Fourier transform ion cyclotron resonance mass spectrometry. In the resulting 2D mass spectrum, the fragmentation patterns of the radical and protonated species from cholesterol are differentiated. The study further demonstrates the use of fragment ion lines, precursor ion lines, and neutral loss lines in the 2D mass spectrum to determine fragmentation mechanisms of known compounds and to gain information on unknown ion species in the spectrum. In concert with high-resolution mass spectrometry, 2D Fourier transform ion cyclotron resonance mass spectrometry can be a useful tool for the structural analysis of small molecules.
De Marco, Tommaso; Ries, Florian; Guermandi, Marco; Guerrieri, Roberto
2012-05-01
Electrical impedance tomography (EIT) is an imaging technology based on impedance measurements. To retrieve meaningful insights from these measurements, EIT relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of current flows therein. The nonhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility, which at present severely limits the capabilities of EIT. This work presents a complete algorithmic flow for an accurate EIT modeling environment featuring high anatomical fidelity with a spatial resolution equal to that provided by an MRI and a novel realistic complete electrode model implementation. At the same time, we demonstrate that current graphics processing unit (GPU)-based platforms provide enough computational power that a domain discretized with five million voxels can be numerically modeled in about 30 s.
A neural-based remote eye gaze tracker under natural head motion.
Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso
2008-10-01
A novel approach to view-based eye gaze tracking for human computer interfaces (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination, and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
Frequency domain zero padding for accurate autofocusing based on digital holography
NASA Astrophysics Data System (ADS)
Shin, Jun Geun; Kim, Ju Wan; Eom, Tae Joong; Lee, Byeong Ha
2018-01-01
The numerical refocusing feature of digital holography enables the reconstruction of a well-focused image from a digital hologram captured at an arbitrary out-of-focus plane, without supervision by end users. In general, however, the autofocusing process for obtaining a highly focused image incurs considerable computational cost. In this study, to reconstruct a better-focused image, we propose a zero padding technique implemented in the frequency domain. Zero padding in the frequency domain enhances the visibility, or numerical resolution, of the image, which allows the degree of focus to be measured with more accuracy. A coarse-to-fine search algorithm is used to reduce the computing load, and a graphics processing unit (GPU) is employed to accelerate the process. The performance of the proposed scheme is evaluated with simulation and experiment, and the possibility of obtaining a well-refocused image with enhanced accuracy and speed is presented.
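The frequency-domain zero padding step can be illustrated as follows (a minimal numpy sketch under our own assumptions, not the authors' code; the gradient-based focus metric is one common choice, not necessarily the one used in the paper):

```python
import numpy as np

def upsample_via_freq_zero_padding(field, factor=2):
    """Upsample a complex field by zero padding its centered spectrum."""
    n, m = field.shape
    spec = np.fft.fftshift(np.fft.fft2(field))
    padded = np.zeros((n * factor, m * factor), dtype=complex)
    r0 = (n * factor - n) // 2
    c0 = (m * factor - m) // 2
    padded[r0:r0 + n, c0:c0 + m] = spec
    # Scale so amplitude is preserved after the larger inverse transform
    return np.fft.ifft2(np.fft.ifftshift(padded)) * factor**2

def focus_metric(field):
    """Gradient-energy sharpness metric (one common choice)."""
    amp = np.abs(field)
    gy, gx = np.gradient(amp)
    return float(np.sum(gx**2 + gy**2))
```

The upsampled field has finer pixel spacing, so a sharpness metric evaluated on it discriminates nearby reconstruction distances more reliably.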
[Usefulness of volume rendering stereo-movie in neurosurgical craniotomies].
Fukunaga, Tateya; Mokudai, Toshihiko; Fukuoka, Masaaki; Maeda, Tomonori; Yamamoto, Kouji; Yamanaka, Kozue; Minakuchi, Kiyomi; Miyake, Hirohisa; Moriki, Akihito; Uchida, Yasufumi
2007-12-20
In recent years, advancements in MR technology combined with the development of the multi-channel coil have substantially shortened inspection times. In addition, rapid improvement in the functional performance of workstations has simplified the image-making process. Consequently, graphical images of intracranial lesions can be easily created. For example, three-dimensional spoiled gradient echo (3D-SPGR) volume rendering (VR) after injection of a contrast medium is applied clinically as a preoperative reference image. Recently, improvements in 3D-SPGR VR resolution have enabled accurate surface images of the brain to be obtained. We used stereo-imaging created by weighted maximum intensity projection (weighted MIP) to determine the skin incision line. Furthermore, the stereo-imaging technique utilizing 3D-SPGR VR was used in the cases presented here. The techniques reported here appear to be very useful in the preoperative simulation of neurosurgical craniotomies.
Authoritative Authoring: Software That Makes Multimedia Happen.
ERIC Educational Resources Information Center
Florio, Chris; Murie, Michael
1996-01-01
Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)
ERIC Educational Resources Information Center
Hooper, Kristina
1982-01-01
Provides the rationale for considering communication in a graphic domain and suggests a specific goal for designing work stations which provide graphic capabilities in educational settings. The central element of this recommendation is the "pictorial conversation", a highly interactive exchange that includes pictures as the central elements.…
Using Graphic Novels, Anime, and the Internet in an Urban High School
ERIC Educational Resources Information Center
Frey, Nancy; Fisher, Douglas
2004-01-01
Alternative genres such as graphic novels, manga, and anime are employed to build on students' multiple literacies. It is observed that use of visual stories allowed students to discuss how the authors conveyed mood and tone through images.
NASA Astrophysics Data System (ADS)
Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara
2013-04-01
Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative multidisciplinary efforts of researchers. To do this in a timely and reliable way, one needs a modern information-computational infrastructure supporting integrated studies in the field of environmental sciences. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern Web 2.0 approach, GIS functionality, and capabilities to run climate and meteorological models, process large geophysical datasets, and support relevant analysis. It also supports joint software development by distributed research groups and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the integrated WRF and «Planet Simulator» models, preprocessing of modeling results, and visualization are also provided. All functions of the platform are accessible to a user through a web portal with a common graphical web browser, in the form of an interactive graphical user interface that provides, in particular, selection of a geographical region of interest (pan and zoom), data-layer manipulation (order, enable/disable, feature extraction), and visualization of results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary research efforts. 
Even a user without specialized knowledge can perform reliable computational processing and visualization of large meteorological, climatic, and satellite monitoring datasets through the unified graphical web interface. Partial support of RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2, Projects 69, 131, and 140, and APN project CBA2012-16NSY is acknowledged.
Design of automation tools for management of descent traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1988-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor, which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan-view traffic display presented on a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively, in combination with a mouse input device, to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer-generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system, consisting of the Descent Advisor algorithm, a library of aircraft performance models, national airspace system databases, and interactive display software, has been implemented on a workstation made by Sun Microsystems, Inc. 
It is planned to use this configuration in operational evaluations at an en route center.
NASA Technical Reports Server (NTRS)
Long, D.
1994-01-01
This library is a set of subroutines designed for vector plotting to CRTs, plotters, dot matrix printers, and laser printers. LONGLIB subroutines are invoked by program calls similar to standard CALCOMP routines. In addition to the basic plotting routines, LONGLIB contains an extensive set of routines to allow viewport clipping, extended character sets, graphic input, shading, polar plots, and 3-D plotting with or without hidden line removal. LONGLIB capabilities include surface plots, contours, histograms, logarithmic axes, world maps, and seismic plots. LONGLIB includes master subroutines, which are self-contained series of commonly used individual subroutines. When invoked, a master routine will initialize the plotting package, plot multiple curves, scatter plots, log plots, 3-D plots, etc., and then close the plot package, all with a single call. Supported devices include VT100s equipped with Selanar GR100 or GR100+ boards, VT125s, VT240s, VT220s equipped with Selanar SG220, Tektronix 4010/4014 or 4107/4109 and compatibles, and Graphon GO-235 terminals. Dot matrix printer output is available by using the provided raster scan conversion routines for DEC LA50, Printronix, and high- or low-resolution Trilog printers. Other output devices include QMS laser printers, PostScript-compatible laser printers, and HPGL-compatible plotters. The LONGLIB package includes the graphics library source code, an on-line help library, scan converter and meta file conversion programs, and command files for installing, creating, and testing the library. The latest version, 5.0, is significantly enhanced and more portable. The new version's meta file format has changed and is incompatible with previous versions; a conversion utility is included to port old meta files to the new format. Color terminal plotting has been incorporated. 
LONGLIB is written in FORTRAN 77 for batch or interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985, and last updated in 1988.
Wang, An-Li; Lowen, Steven B; Romer, Daniel; Giorno, Mario; Langleben, Daniel D
2015-05-01
Warning labels on cigarette packages are an important venue for information about the hazards of smoking. The 2009 US Family Smoking Prevention and Tobacco Control Act mandated replacing the current text-only labels with graphic warning labels. However, labels proposed by the Food and Drug Administration (FDA) were challenged in court by the tobacco companies, who argued successfully that the proposed labels needlessly encroached on their right to free speech, in part because they included images of high emotional salience that indiscriminately frightened rather than informed consumers. We used functional MRI to examine the effects of graphic warning labels' emotional salience on smokers' brain activity and cognition. Twenty-four smokers viewed a random sequence of blocks of graphic warning labels that have been rated high or low on an 'emotional reaction' scale in previous research. We found that labels rated high on emotional reaction were better remembered, associated with reduction in the urge to smoke, and produced greater brain response in the amygdala, hippocampi, inferior frontal gyri and the insulae. Recognition memory and craving are, respectively, correlates of effectiveness of addiction-related public health communications and interventions, and amygdala activation facilitates the encoding of emotional memories. Thus, our results suggest that emotional reaction to graphic warning labels contributes to their public health impact and may be an integral part of the neural mechanisms underlying their effectiveness. Given the urgency of the debate about the constitutional risks and public health benefits of graphic warning labels, these preliminary findings warrant consideration while longitudinal clinical studies are underway.
GPU-computing in econophysics and statistical physics
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
A recent trend in computer science and related fields is general-purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction to the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in the financial market context are coded on a graphics card architecture, leading to a significant reduction in computing time. To demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
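The checkerboard decomposition that makes the Ising model amenable to GPU-style parallel updates can be sketched on the CPU with vectorized numpy (an illustrative sketch, not the article's GPU code; sites of one color have no neighbors of the same color, so each half-lattice can be updated in parallel):

```python
import numpy as np

def checkerboard_metropolis(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model using the checkerboard
    scheme: all sites of one color are updated simultaneously, then all
    sites of the other color, mirroring how GPU threads would proceed."""
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        # Sum of the four nearest neighbors with periodic boundaries
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb                 # energy cost of flipping each site
        accept = rng.random(spins.shape) < np.exp(-beta * dE)
        flip = mask & ((dE <= 0) | accept)    # Metropolis acceptance rule
        spins[flip] *= -1
    return spins
```

Because same-colored sites never interact, the parallel half-sweep is statistically equivalent to a sequential single-site sweep.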
Visual design for the user interface, Part 1: Design fundamentals.
Lynch, P J
1994-01-01
Digital audiovisual media and computer-based documents will be the dominant forms of professional communication in both clinical medicine and the biomedical sciences. The design of highly interactive multimedia systems will shortly become a major activity for biocommunications professionals. The problems of human-computer interface design are intimately linked with graphic design for multimedia presentations and on-line document systems. This article outlines the history of graphic interface design and the theories that have influenced the development of today's major graphic user interfaces.
Zeng, Yun; Liu, Gang; Ma, Ying; Chen, Xiaoyuan; Ito, Yoichiro
2012-01-01
A new series of organic-high ionic strength aqueous two-phase solvent systems was designed for separation of highly polar compounds by spiral high-speed counter-current chromatography (HSCCC). A total of 21 solvent systems composed of 1-butanol-ethanol-saturated ammonium sulfate-water at various volume ratios are arranged in increasing order of polarity. Selection of the two-phase solvent system for a single compound or a multi-component sample mixture can be achieved by two steps of partition coefficient measurements using a graphic method. The capability of the method is demonstrated by optimization of the partition coefficient for seven highly polar samples, including tartrazine (K=0.77), tryptophan (K=1.00), methyl green (K=0.93), tyrosine (K=0.81), metanephrine (K=0.89), tyramine (K=0.98), and normetanephrine (K=0.96). Three sulfonic acid components in D&C Green No. 8 were successfully separated by HSCCC using the graphic selection of the two-phase solvent system. PMID:23467197
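The two-step selection idea, measuring K in two systems of the polarity-ordered series and reading off the system expected to give a target K, can be illustrated numerically if one assumes log K varies roughly linearly with system index (our assumption for illustration only; the paper's actual graphic method may differ):

```python
import math

def pick_solvent_system(k_low, k_high, idx_low, idx_high, k_target=1.0):
    """Interpolate between two measured partition coefficients along the
    polarity-ordered series of solvent systems, assuming log K is roughly
    linear in system index, and return the index expected to give
    K ~= k_target (K near 1 is usually preferred in counter-current
    chromatography)."""
    slope = (math.log(k_high) - math.log(k_low)) / (idx_high - idx_low)
    return idx_low + (math.log(k_target) - math.log(k_low)) / slope
```

For example, if a compound shows K=0.5 in system 1 and K=2.0 in system 21, the interpolation points at system 11 for K close to 1.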
Evaluation of display technologies for Internet of Things (IoT)
NASA Astrophysics Data System (ADS)
Sabo, Julia; Fegert, Tobias; Cisowski, Matthäus Stephanus; Marsal, Anatolij; Eichberger, Domenik; Blankenbach, Karlheinz
2017-02-01
Internet of Things (IoT) is a booming industry. We investigated several (semi-)professional IoT devices in combination with displays (with a focus on reflective technologies) and LEDs. First, these displays were compared for reflectance and ambient light performance. Two measurement set-ups with diffuse conditions were used to simulate typical indoor lighting conditions for IoT displays. E-paper displays were evaluated best, as they combine a relatively high reflectance with a large contrast ratio. Reflective monochrome LCDs show a lower reflectance but are widely available. Second, we studied IoT microprocessor interfaces to displays. A µP can drive single LEDs and one or two Seg 8 LED digits directly via GPIOs. Other display technologies require display controllers with a parallel or serial interface to the microprocessor, as they need dedicated waveforms for driving the pixels. Most suitable are display modules with built-in display RAM, since only the pixel data that change have to be transferred. An HDMI output (e.g. Raspberry Pi) results in high display cost; therefore AMLCDs are not suitable for low- to medium-cost IoT systems. We furthermore compared and evaluated status indicator, icon, text, and graphics IoT display systems regarding human-machine interface (HMI) characteristics and effectiveness as well as power consumption. We found that low-resolution graphics bistable e-paper displays are the most appropriate display technology for IoT systems, as they still show information after a power failure or a power switch-off during maintenance, or, e.g., QR codes for installation. LED indicators are the most cost-effective approach, which however has very limited HMI capabilities.
NASA Astrophysics Data System (ADS)
Sterling, K.; Denbo, D. W.; Eble, M. C.
2016-12-01
Short-term Inundation Forecasting for Tsunamis (SIFT) software was developed by NOAA's Pacific Marine Environmental Laboratory (PMEL) for use in tsunami forecasting and has been used by both U.S. Tsunami Warning Centers (TWCs) since 2012, when SIFT v3.1 was operationally accepted. Since then, advancements in research and modeling have resulted in several new features being incorporated into SIFT forecasting. Following the priorities and needs of the TWCs, upgrades to SIFT forecasting were implemented in SIFT v4.0, scheduled to become operational in October 2016. Because every minute counts in the early warning process, two major time-saving features were implemented in SIFT v4.0. To increase processing speeds and generate high-resolution flooding forecasts more quickly, the tsunami propagation and inundation codes were modified to run on graphics processing units (GPUs). To reduce the time demand on duty scientists during an event, an automated DART inversion (or fitting) process was implemented. To increase forecasting accuracy, the forecasted amplitudes and inundations were adjusted to include dynamic tidal oscillations, thereby reducing the over-estimates of flooding common in SIFT v3.1 due to the static tide stage conservatively set at Mean High Water. Further improvements to forecasts were gained through the assimilation of additional real-time observations. Cabled-array measurements from Bottom Pressure Recorders (BPRs) in the Ocean Networks Canada NEPTUNE network are now available to SIFT for use in the inversion process. To better meet the needs of harbor masters and emergency managers, SIFT v4.0 adds a tsunami currents graphical product to the suite of disseminated forecast results. When delivered, these new features will improve the operational tsunami forecasting speed, accuracy, and capabilities at NOAA's Tsunami Warning Centers.
Method for visualization and presentation of priceless old prints based on precise 3D scan
NASA Astrophysics Data System (ADS)
Bunsch, Eryk; Sitnik, Robert
2014-02-01
Graphic prints and manuscripts constitute a major part of the cultural heritage objects created by most known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes in external conditions (temperature, humidity). Today it is possible to use advanced digitization techniques for documentation and visualization of such objects. When presentation of the original heritage object is impossible, a method is needed that allows documentation, and then presentation to the audience, of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy," consists of three series of woodcuts by Albrecht Dürer. The measurement system consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. The device was custom built to meet conservators' requirements, especially the absence of ultraviolet or infrared radiation emitted toward the measured object. Documentation of one page of the book requires about 380 directional measurements, which constitute about 3 billion sample points; the distance between points in the cloud is 20 μm. A measurement sampling density (MSD) of 2500 points makes it possible to show the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of software that works directly on point clouds is the ability to freely manipulate the virtual light source.
Memory-Efficient Analysis of Dense Functional Connectomes.
Loewe, Kristian; Donohue, Sarah E; Schoenfeld, Mircea A; Kruse, Rudolf; Borgelt, Christian
2016-01-01
The functioning of the human brain relies on the interplay and integration of numerous individual units within a complex network. To identify network configurations characteristic of specific cognitive tasks or mental illnesses, functional connectomes can be constructed based on the assessment of synchronous fMRI activity at separate brain sites, and then analyzed using graph-theoretical concepts. In most previous studies, relatively coarse parcellations of the brain were used to define regions as graphical nodes. Such parcellated connectomes are highly dependent on parcellation quality because regional and functional boundaries need to be relatively consistent for the results to be interpretable. In contrast, dense connectomes are not subject to this limitation, since the parcellation inherent to the data is used to define graphical nodes, also allowing for a more detailed spatial mapping of connectivity patterns. However, dense connectomes are associated with considerable computational demands in terms of both time and memory requirements. The memory required to explicitly store dense connectomes in main memory can render their analysis infeasible, especially when considering high-resolution data or analyses across multiple subjects or conditions. Here, we present an object-based matrix representation that achieves a very low memory footprint by computing matrix elements on demand instead of explicitly storing them. In doing so, memory required for a dense connectome is reduced to the amount needed to store the underlying time series data. Based on theoretical considerations and benchmarks, different matrix object implementations and additional programs (based on available Matlab functions and Matlab-based third-party software) are compared with regard to their computational efficiency. 
The matrix implementation based on on-demand computations has very low memory requirements, thus enabling analyses that would be otherwise infeasible to conduct due to insufficient memory. An open source software package containing the created programs is available for download.
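The on-demand strategy described above can be sketched as follows: store only the normalized time series and compute each correlation entry when it is requested (a minimal Python illustration of the idea; the paper's implementation is Matlab-based and more elaborate, and the class name here is ours):

```python
import numpy as np

class OnDemandCorrMatrix:
    """Dense functional connectome whose entries are Pearson correlations,
    computed on demand from the underlying time series instead of being
    stored explicitly. Memory use is O(nodes * timepoints), not O(nodes^2)."""

    def __init__(self, timeseries):
        ts = np.asarray(timeseries, dtype=float)        # (nodes, timepoints)
        ts = ts - ts.mean(axis=1, keepdims=True)        # center each series
        # Normalize so that a dot product of two rows is their correlation
        self._z = ts / np.linalg.norm(ts, axis=1, keepdims=True)

    def __getitem__(self, idx):
        i, j = idx
        return float(self._z[i] @ self._z[j])           # one entry, on demand

    def row(self, i):
        return self._z @ self._z[i]                     # one full row, on demand
```

Since each entry is a dot product of two precomputed unit vectors, graph algorithms can stream over rows without ever materializing the full correlation matrix.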
Virtual Reality Calibration for Telerobotic Servicing
NASA Technical Reports Server (NTRS)
Kim, W.
1994-01-01
A virtual reality calibration technique has been developed that matches a virtual environment of simulated graphics models, in 3-D geometry and perspective, with actual camera views of the remote-site task environment. It enables high-fidelity preview/predictive displays with calibrated graphics overlaid on live video.
Method and System for Air Traffic Rerouting for Airspace Constraint Resolution
NASA Technical Reports Server (NTRS)
Erzberger, Heinz (Inventor); Morando, Alexander R. (Inventor); Sheth, Kapil S. (Inventor); McNally, B. David (Inventor); Clymer, Alexis A. (Inventor); Shih, Fu-tai (Inventor)
2017-01-01
A dynamic constraint avoidance route system automatically analyzes routes of aircraft flying, or to be flown, in or near constraint regions and attempts to find more time and fuel efficient reroutes around current and predicted constraints. The dynamic constraint avoidance route system continuously analyzes all flight routes and provides reroute advisories that are dynamically updated in real time. The dynamic constraint avoidance route system includes a graphical user interface that allows users to visualize, evaluate, modify if necessary, and implement proposed reroutes.
KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery
NASA Astrophysics Data System (ADS)
Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan
2013-05-01
KOLAM is an open, cross-platform, interoperable, scalable, and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video, also known as wide-area motion imagery (WAMI). KOLAM was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. It is platform-, operating system-, and (graphics) hardware-independent, and supports embedded datasets scalable from hundreds of gigabytes to petabytes in size on clusters, workstations, desktops, and mobile computers. In addition to rapid roam, zoom, and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection, and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles of very large format video frames in a temporal cache of tiled pyramid data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization, and target tracking (digital tagging), the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results, and apply geospatial visual analytic tools to the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
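A temporal cache of image tiles can be sketched with a simple least-recently-used (LRU) policy (an illustrative sketch only; KOLAM's dual-cache design is more sophisticated, and all names here are ours):

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache for image tiles keyed by (level, row, col, frame),
    the kind of spatiotemporal tile cache a large-imagery viewer needs."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader              # (level, row, col, frame) -> tile data
        self._cache = OrderedDict()

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)      # cache hit: mark as recently used
            return self._cache[key]
        tile = self.loader(*key)              # cache miss: fetch from disk/net
        self._cache[key] = tile
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)   # evict least recently used tile
        return tile
```

Keying tiles by pyramid level and frame index lets the same structure serve both spatial roam/zoom and temporal WAMI playback.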
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line-integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
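The factored forward and back projection can be sketched with toy matrices (an illustrative sketch using identity blurring matrices and a random geometric matrix for shape only; not the paper's estimated factors):

```python
import numpy as np
from scipy import sparse

# Toy sizes: a 16-pixel image and 24 lines of response (LORs)
n_pix, n_lor = 16, 24

# Factored system matrix P ~= B_sino @ G @ B_img. Identities stand in for
# the blurring factors here, just to show the structure; real blurring
# matrices would be narrow banded matrices estimated from the true system.
B_img = sparse.identity(n_pix, format="csr")                   # image blurring
G = sparse.random(n_lor, n_pix, density=0.2,
                  format="csr", random_state=0)                # geometric projection
B_sino = sparse.identity(n_lor, format="csr")                  # sinogram blurring

def forward(x):
    """Forward projection as three cheap sparse products instead of a
    single dense system-matrix multiply."""
    return B_sino @ (G @ (B_img @ x))

def back(y):
    """Back projection applies the transposed factors in reverse order."""
    return B_img.T @ (G.T @ (B_sino.T @ y))
```

The key property to preserve is that `back` is the exact adjoint of `forward`, which factor-by-factor transposition guarantees.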
Gallego, Sandra F; Højlund, Kurt; Ejsing, Christer S
2018-01-01
Reliable, cost-effective, gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma: high-resolution full-scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, in which fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to the FTMS- and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. This simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high-throughput-oriented clinical lipidomics studies or extended with more advanced MS ALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.
A versatile diffractive maskless lithography for single-shot and serial microfabrication.
Jenness, Nathan J; Hill, Ryan T; Hucknall, Angus; Chilkoti, Ashutosh; Clark, Robert L
2010-05-24
We demonstrate a diffractive maskless lithographic system that is capable of rapidly performing both serial and single-shot micropatterning. Utilizing the diffractive properties of phase holograms displayed on a spatial light modulator, arbitrary intensity distributions were produced to form two- and three-dimensional micropatterns and structures in a variety of substrates. A straightforward graphical user interface was implemented to allow users to load templates and change patterning modes within the span of a few minutes. A minimum resolution of approximately 700 nm is demonstrated for both patterning modes, which compares favorably to the 232 nm resolution limit predicted by the Rayleigh criterion. The presented method is rapid and adaptable, allowing for the parallel fabrication of microstructures in photoresist as well as the fabrication of protein microstructures that retain functional activity.
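The quoted diffraction limit is easy to sanity-check with the Rayleigh criterion, d = 0.61 λ / NA. The wavelength and numerical aperture below are illustrative assumptions (they are not stated in this abstract); they happen to reproduce the quoted 232 nm figure.

```python
def rayleigh_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Minimum resolvable separation d = 0.61 * lambda / NA (Rayleigh criterion)."""
    return 0.61 * wavelength_nm / numerical_aperture

# Illustrative values (assumed, not taken from the paper):
d = rayleigh_limit(532.0, 1.4)  # ~231.8 nm, close to the 232 nm quoted above
```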
NASA Astrophysics Data System (ADS)
Fuhrer, Oliver; Chadha, Tarun; Hoefler, Torsten; Kwasniewski, Grzegorz; Lapillonne, Xavier; Leutwyler, David; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph; Schulthess, Thomas C.; Vogt, Hannes
2018-05-01
The best hope for reducing long-standing global climate model biases is by increasing resolution to the kilometer scale. Here we present results from an ultrahigh-resolution non-hydrostatic climate model for a near-global setup running on the full Piz Daint supercomputer on 4888 GPUs (graphics processing units). The dynamical core of the model has been completely rewritten using a domain-specific language (DSL) for performance portability across different hardware architectures. Physical parameterizations and diagnostics have been ported using compiler directives. To our knowledge this represents the first complete atmospheric model being run entirely on accelerators on this scale. At a grid spacing of 930 m (1.9 km), we achieve a simulation throughput of 0.043 (0.23) simulated years per day and an energy consumption of 596 MWh per simulated year. Furthermore, we propose a new memory usage efficiency (MUE) metric that considers how efficiently the memory bandwidth - the dominant bottleneck of climate codes - is being used.
Interactive Dynamic Mission Scheduling for ASCA
NASA Astrophysics Data System (ADS)
Antunes, A.; Nagase, F.; Isobe, T.
The Japanese X-ray astronomy satellite ASCA (Advanced Satellite for Cosmology and Astrophysics) mission requires scheduling for each 6-month observation phase, further broken down into weekly schedules at a resolution of a few minutes. Two tools, SPIKE and NEEDLE, written in Lisp and C, use artificial intelligence (AI) techniques combined with a graphical user interface for fast creation and alteration of mission schedules. These programs consider viewing and satellite attitude constraints as well as observer-requested criteria and present an optimized set of solutions for review by the planner. Six-month schedules at 1 day resolution are created for an oversubscribed set of targets by the SPIKE software, originally written for HST and presently being adapted for EUVE, XTE and AXAF. The NEEDLE code creates weekly schedules at 1 min resolution using in-house orbital routines and creates output for processing by the command generation software. Schedule creation on both the long- and short-term scale is rapid: less than one day for a long-term schedule and about one hour for a short-term one.
A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations
NASA Astrophysics Data System (ADS)
Jayaram, V.; Crain, K.; Keller, G. R.
2011-12-01
We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core CPU, multi-core CPU, and graphical processing unit (GPU) architectures. Our technique is based on the line element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. The large high-resolution data grids in our studies employ a pre-filtered mipmap pyramid type representation for the grid data known as the Geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on-the-fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies, including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just a single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition is that the observation point cannot be located directly above the line element. Therefore, we perform a location test and then apply the appropriate formulation to those data points.
We will present and compare the computational performance of the traditional prism method versus the line element approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired. This algorithm can be used for all fast forward model calculations of 3D geologic interpretations for data from airborne, space and submarine gravity, and FTG instrumentation.
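The line-element approximation above admits a compact closed form. The sketch below is a hedged illustration (density, cell size, and geometry are invented, and the real algorithm also computes the full tensor and clipmap sampling): integrating the vertical attraction G λ z / (r² + z²)^(3/2) of a vertical line of mass over depth gives an expression that is singular when the station lies directly above the element, which is exactly why the abstract's location test is needed.

```python
import numpy as np

G_CONST = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def line_element_gz(r, z1, z2, lam):
    """Vertical gravity (m/s^2) at the origin from a vertical line element of
    linear density lam (kg/m) at horizontal distance r, spanning depths z1..z2.
    The closed form 1/sqrt(r^2+z1^2) - 1/sqrt(r^2+z2^2) is singular at r = 0,
    i.e. when the station sits directly above the line element."""
    if r == 0.0:
        raise ValueError("station directly above the line element")
    return G_CONST * lam * (1.0 / np.hypot(r, z1) - 1.0 / np.hypot(r, z2))

# Toy cell: a 10 m x 10 m column of crustal rock approximated by a line element.
rho, dx = 2670.0, 10.0   # assumed density (kg/m^3) and cell size (m)
lam = rho * dx * dx      # mass per unit depth of the column
gz = line_element_gz(r=50.0, z1=0.0, z2=100.0, lam=lam)
```

Deepening the column increases the attraction, and the formula works equally well for stations interior or exterior to the model, matching the behavior described in the abstract.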
Robust point cloud classification based on multi-level semantic relationships for urban scenes
NASA Astrophysics Data System (ADS)
Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo
2017-07-01
The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significant noise, the general trend is to exploit more contextual information to compensate for the reduced discriminative power of individual features. However, previous approaches to incorporating contextual information are either too restrictive or operate only over small regions. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile and incrementally propagates classification cues from individual points to the object level, formulating them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacency relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance, with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.
SPIKY: a graphical user interface for monitoring spike train synchrony
Mulansky, Mario; Bozanic, Nebojsa
2015-01-01
Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance, and the SPIKE-synchronization (an improved and simplified extension of event synchronization) that have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector that makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels. PMID:25744888
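The ISI-distance mentioned above admits a compact sketch. The version below samples the dissimilarity profile I(t) = |x₁(t) − x₂(t)| / max(x₁(t), x₂(t)) on a regular time grid rather than integrating it piecewise-exactly as SPIKY does, so it is an approximation for illustration only.

```python
import numpy as np

def instantaneous_isi(spikes, t):
    """Length of the interspike interval of `spikes` that contains time t."""
    spikes = np.asarray(spikes)
    i = np.searchsorted(spikes, t)
    if i == 0 or i == len(spikes):
        raise ValueError("t must lie between the first and last spike")
    return spikes[i] - spikes[i - 1]

def isi_distance(spikes1, spikes2, t_start, t_end, n_samples=1000):
    """Time-averaged ISI-distance profile, sampled on a regular grid
    (a simplification of the exact piecewise integration used by SPIKY)."""
    ts = np.linspace(t_start, t_end, n_samples)
    prof = [abs(instantaneous_isi(spikes1, t) - instantaneous_isi(spikes2, t))
            / max(instantaneous_isi(spikes1, t), instantaneous_isi(spikes2, t))
            for t in ts]
    return float(np.mean(prof))

# Identical trains give distance 0; regular trains with ISIs 0.5 s vs 0.4 s
# give a constant profile of 0.1/0.5 = 0.2.
train1 = np.arange(0.0, 10.0, 0.5)
train2 = np.arange(0.0, 10.0, 0.4)
d_same = isi_distance(train1, train1, 1.0, 9.0)
d_diff = isi_distance(train1, train2, 1.0, 9.0)
```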
NASA Technical Reports Server (NTRS)
Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna
2015-01-01
Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.
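The structured-prediction step above can be sketched minimally, assuming per-pixel class probabilities from a base classifier are already available. The sketch uses a Potts pairwise term solved with iterated conditional modes, a much simpler inference scheme than the full Conditional Random Field machinery the paper describes, purely to illustrate how neighborhood context can overturn a weakly confident per-pixel decision.

```python
import numpy as np

def icm_crf(prob, beta=1.0, n_iters=5):
    """Refine per-pixel class probabilities with a Potts pairwise term using
    iterated conditional modes (a stand-in for full CRF inference).
    prob: (H, W, K) array of class probabilities from a base classifier."""
    H, W, K = prob.shape
    unary = -np.log(np.clip(prob, 1e-9, 1.0))  # negative log-likelihood
    labels = unary.argmin(axis=-1)
    for _ in range(n_iters):
        for y in range(H):
            for x in range(W):
                cost = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts penalty for disagreeing with each neighbor.
                        cost += beta * (np.arange(K) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels

# A mostly-confident 2-class map with one weakly confident outlier pixel:
# the pairwise term flips the isolated pixel to agree with its neighbors.
prob = np.full((8, 8, 2), [0.9, 0.1])  # strong evidence for class 0
prob[4, 4] = [0.45, 0.55]              # weak evidence for class 1
labels = icm_crf(prob, beta=1.0)
```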
Dufresne, Martin; Guneysu, Daniel; Patterson, Nathan Heath; Marcinkiewicz, Mieczyslaw Martin; Regina, Anthony; Demeule, Michel; Chaurand, Pierre
2017-02-01
Mucopolysaccharidosis type II (Hunter's disease) mouse model (IdS-KO) was investigated by both imaging mass spectrometry (IMS) and immunohistochemistry (IHC) performed on the same tissue sections. For this purpose, IdS-KO mice brain sections were coated with sublimated 1,5-diaminonaphtalene and analyzed by high spatial resolution IMS (5 μm) and anti-GM3 IHC on the same tissue sections to characterize the monosialylated ganglioside (GM) deposits found in Hunter's disease. IMS analysis found that two species each of GM3 and GM2, differing only in the length of their fatty acid residue (stearic or arachidic), were overexpressed in the IdS-KO mice compared to a control mouse. GM3 and GM2 were characterized by on-tissue exact mass and MS/MS compared to a GM3 standard. Realignment of the IMS and IHC data sets further confirmed the previously detected regioselective signals by providing a direct correlation of the IMS images for the two overexpressed GM3 MS signals with the anti-GM3 IHC image. Furthermore, these regioselective GM MS signals were also found to have highly heterogeneous distributions within the GM3-IHC staining: some deposits showed high content of the GM3 and GM2 stearic species (r = 0.74), while others had more abundant GM3 and GM2 arachidic species (r = 0.76). Same-section analysis of the Hunter's disease mouse model by both high spatial resolution IMS and IHC provides a more in-depth analysis of the composition of the GM aggregates while providing the spatial distribution of the observed molecular species. Graphical Abstract: Ganglioside imaging mass spectrometry followed by immunohistochemistry performed on the same tissue section.
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
The effect of using graphic organizers in the teaching of standard biology
NASA Astrophysics Data System (ADS)
Pepper, Wade Louis, Jr.
This study was conducted to determine if the use of graphic organizers in the teaching of standard biology would increase student achievement, involvement, and the quality of activities. The subjects were 10th grade standard biology students in a large southern inner city high school. The study was conducted over a six-week period in an instructional setting using action research as the investigative format. After calculation of the homogeneity between classes, random selection was used to determine the graphic organizer class and the control class. The graphic organizer class was taught unit material through a variety of instructional methods along with the use of teacher-generated graphic organizers. The control class was taught the same unit material using the same instructional methods, but without the use of graphic organizers. Data for the study were gathered from in-class written assignments, teacher-generated tests and text-generated tests, and rubric scores of an out-of-class written assignment and project. Also, data were gathered from student reactions, comments, observations, and a teacher's research journal. Results were analyzed using descriptive statistics and qualitative interpretation. By comparing statistical results, it was determined that the use of graphic organizers did not make a statistically significant difference in the understanding of biological concepts and retention of factual information. Furthermore, the use of graphic organizers did not make a significant difference in motivating students to fulfill all class assignments with quality efforts and products. However, based upon student reactions and comments along with observations by the researcher, graphic organizers were viewed by the students as a favorable and helpful instructional tool. Notwithstanding the statistical results, student gains from instructional activities using graphic organizers were positive and merit the continued use of graphic organizers as an instructional tool.
Effects of game-like interactive graphics on risk perceptions and decisions.
Ancker, Jessica S; Weber, Elke U; Kukafka, Rita
2011-01-01
Many patients have difficulty interpreting risks described in statistical terms as percentages. Computer game technology offers the opportunity to experience how often an event occurs, rather than simply read about its frequency. Objective. To assess effects of interactive graphics on risk perceptions and decisions. Design. Electronic questionnaire. Participants and setting. Respondents (n = 165) recruited online or at an urban hospital. Intervention. Health risks were illustrated by either static graphics or interactive game-like graphics. The interactive search graphic was a grid of squares, which, when clicked, revealed stick figures underneath. Respondents had to click until they found a figure affected by the disease. Measurements. Risk feelings, risk estimates, intention to take preventive action. Results. Different graphics did not affect mean risk estimates, risk feelings, or intention. Low-numeracy participants reported significantly higher risk feelings than high-numeracy ones except with the interactive search graphic. Unexpectedly, respondents reported stronger intentions to take preventive action when the intention question followed questions about efficacy and disease severity than when it followed perceived risk questions (65% v. 34%; P < 0.001). When respondents reported risk feelings immediately after using the search graphic, the interaction affected perceived risk (the longer the search to find affected stick figures, the higher the risk feeling: ρ = 0.57; P = 0.009). Limitations. The authors used hypothetical decisions. Conclusions. A game-like graphic that allowed consumers to search for stick figures affected by disease had no main effect on risk perception but reduced differences based on numeracy. In one condition, the game-like graphic increased concern about rare risks. Intentions for preventive action were stronger with a question order that focused first on efficacy and disease severity than with one that focused first on perceived risk.
Rusu, Mirabela; Birmanns, Stefan
2010-04-01
A structural characterization of multi-component cellular assemblies is essential to explain the mechanisms governing biological function. Macromolecular architectures may be revealed by integrating information collected from various biophysical sources - for instance, by interpreting low-resolution electron cryomicroscopy reconstructions in relation to the crystal structures of the constituent fragments. A simultaneous registration of multiple components is beneficial when building atomic models as it introduces additional spatial constraints to facilitate the native placement inside the map. The high-dimensional nature of such a search problem prevents the exhaustive exploration of all possible solutions. Here we introduce a novel method based on genetic algorithms, for the efficient exploration of the multi-body registration search space. The classic scheme of a genetic algorithm was enhanced with new genetic operations, tabu search and parallel computing strategies and validated on a benchmark of synthetic and experimental cryo-EM datasets. Even at a low level of detail, for example 35-40 Å, the technique successfully registered multiple component biomolecules, measuring accuracies within one order of magnitude of the nominal resolutions of the maps. The algorithm was implemented using the Sculptor molecular modeling framework, which also provides a user-friendly graphical interface and enables an instantaneous, visual exploration of intermediate solutions. (c) 2009 Elsevier Inc. All rights reserved.
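The genetic-algorithm core can be sketched as follows. The fitness function here is a stand-in (distance to a hidden target pose) for the cross-correlation scoring actually used against the cryo-EM density map, and the tabu-search and parallel-computing enhancements described in the abstract are omitted; the pose encoding as six genes per body is also an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(pose, target):
    """Stand-in score; the real method scores cross-correlation between the
    placed atomic structures and the cryo-EM map."""
    return -np.sum((pose - target) ** 2)

def genetic_search(target, pop_size=60, n_genes=6, n_gens=200,
                   mut_sigma=0.3, elite=2):
    """Minimal genetic algorithm: rank selection from the fitter half,
    uniform crossover, Gaussian mutation, and elitism."""
    pop = rng.uniform(-5, 5, size=(pop_size, n_genes))
    for _ in range(n_gens):
        scores = np.array([fitness(p, target) for p in pop])
        pop = pop[np.argsort(scores)[::-1]]        # sort by descending fitness
        children = [pop[i].copy() for i in range(elite)]  # keep the elite as-is
        while len(children) < pop_size:
            i, j = rng.integers(0, pop_size // 2, size=2)  # fitter half only
            mask = rng.random(n_genes) < 0.5               # uniform crossover
            child = np.where(mask, pop[i], pop[j])
            child = child + rng.normal(0, mut_sigma, n_genes)  # mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(p, target) for p in pop])
    return pop[scores.argmax()]

# Six genes could encode 3 translations + 3 rotations of one rigid body.
target = np.array([1.0, -2.0, 0.5, 3.0, 0.0, -1.0])
best = genetic_search(target)
```

Elitism guarantees the best-so-far pose is never lost between generations, which is what lets this simple scheme converge steadily toward the optimum.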
Blincoe, William D; Rodriguez-Granillo, Agustina; Saurí, Josep; Pierson, Nicholas A; Joyce, Leo A; Mangion, Ian; Sheng, Huaming
2018-04-01
Benzoic acid/ester/amide derivatives are common moieties in pharmaceutical compounds and present a challenge in positional isomer identification by traditional tandem mass spectrometric analysis. A method is presented for exploiting the gas-phase neighboring group participation (NGP) effect to differentiate ortho-substituted benzoic acid/ester derivatives with high resolution mass spectrometry (HRMS). Significant water/alcohol loss (>30% abundance in MS1 spectra) was observed for ortho-substituted nucleophilic groups; these fragment peaks are not observable for the corresponding para- and meta-substituted analogs. Experiments were also extended to the analysis of two intermediates in the synthesis of suvorexant (Belsomra), with additional analysis conducted with nuclear magnetic resonance (NMR), density functional theory (DFT), and ion mobility spectrometry-mass spectrometry (IMS-MS) studies. Significant water/alcohol loss was also observed for 1-substituted 1,2,3-triazoles but not for the isomeric 2-substituted 1,2,3-triazole analogs. IMS-MS, NMR, and DFT studies were conducted to show that the preferred orientation of the 2-substituted triazole rotamer was away from the electrophilic center of the reaction, whereas the 1-substituted triazole was oriented in close proximity to the center. The abundance of the NGP product was determined to be a function of three factors: (1) the proton affinity of the nucleophilic group; (2) the steric impact of the nucleophile; and (3) the proximity of the nucleophile to the carboxylic acid/ester functional group. Graphical Abstract.
Angeli, Timothy R; O'Grady, Gregory; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; Du, Peng; Pullan, Andrew J; Bissett, Ian P
2013-01-01
Background/Aims Small intestine motility is governed by electrical slow wave activity, and abnormal slow wave events have been associated with intestinal dysmotility. High-resolution (HR) techniques are necessary to analyze slow wave propagation, but progress has been limited by few available electrode options and laborious manual analysis. This study presents novel methods for in vivo HR mapping of small intestine slow wave activity. Methods Recordings were obtained from along the porcine small intestine using flexible printed circuit board arrays (256 electrodes; 4 mm spacing). Filtering options were compared, and analysis was automated through adaptations of the falling-edge variable-threshold (FEVT) algorithm and graphical visualization tools. Results A Savitzky-Golay filter was chosen with polynomial order 9 and window size 1.7 seconds, which maintained 94% of slow wave amplitude, 57% of gradient and achieved a noise correction ratio of 0.083. Optimized FEVT parameters achieved 87% sensitivity and 90% positive-predictive value. Automated activation mapping and animation successfully revealed slow wave propagation patterns, and frequency, velocity, and amplitude were calculated and compared at 5 locations along the intestine (16.4 ± 0.3 cpm, 13.4 ± 1.7 mm/sec, and 43 ± 6 µV, respectively, in the proximal jejunum). Conclusions The methods developed and validated here will greatly assist small intestine HR mapping, and will enable experimental and translational work to evaluate small intestine motility in health and disease. PMID:23667749
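The filtering step can be reproduced with standard tools. The sampling rate below is an assumption (the abstract does not restate it), chosen so that the 1.7 s window maps to an odd sample count as scipy's `savgol_filter` requires; the synthetic slow wave uses the ~16.4 cpm frequency reported above.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 30.0                      # assumed sampling rate (Hz), not from the paper
window = int(round(1.7 * fs))  # 1.7 s window, as in the study
window += (window + 1) % 2     # savgol_filter requires an odd window length

t = np.arange(0, 60, 1 / fs)
slow_wave = np.sin(2 * np.pi * (16.4 / 60) * t)  # ~16.4 cpm synthetic slow wave
noisy = slow_wave + 0.3 * np.random.default_rng(0).normal(size=t.size)

smoothed = savgol_filter(noisy, window_length=window, polyorder=9)

# RMS error against the clean signal: should be well below the 0.3 noise level
# while the slow wave amplitude is largely preserved.
residual = float(np.sqrt(np.mean((smoothed - slow_wave) ** 2)))
```

A high polynomial order with a window much longer than the noise correlation time is what lets the filter suppress noise while, per the abstract, maintaining most of the slow wave amplitude.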
Learning-based 3D surface optimization from medical image reconstruction
NASA Astrophysics Data System (ADS)
Wei, Mingqiang; Wang, Jun; Guo, Xianglin; Wu, Huisi; Xie, Haoran; Wang, Fu Lee; Qin, Jing
2018-04-01
Mesh optimization has mostly been studied from a graphics point of view, focusing on 3D surfaces obtained by optical and laser scanners, even though isosurface meshes from medical image reconstruction suffer from both staircase artifacts and noise: isotropic filters lead to shape distortion, while anisotropic ones preserve pseudo-features. We present a data-driven method for automatically removing these medical artifacts while not introducing additional ones. We consider mesh optimization as a combination of vertex filtering and facet filtering in two stages: offline training and runtime optimization. Specifically, we first detect staircases based on the scanning direction of CT/MRI scanners and design a staircase-sensitive Laplacian filter (vertex-based) to remove them; we then design a unilateral filtered facet normal descriptor (uFND) for measuring the geometry features around each facet of a given mesh, and learn regression functions from a set of medical meshes and their high-resolution reference counterparts that map the uFNDs to the facet normals of the reference meshes (facet-based). At runtime, we first apply the staircase-sensitive Laplacian filter to an input MC (Marching Cubes) mesh, then filter the mesh facet normal field using the learned regression functions, and finally deform the mesh to match the new normal field, obtaining a compact approximation of the high-resolution reference model. Tests show that our algorithm achieves higher quality results than previous approaches regarding surface smoothness and surface accuracy.
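The vertex-filtering stage can be sketched minimally, assuming precomputed neighbor lists and weights. The paper's staircase-sensitive filter derives its anisotropic weights from the CT/MRI slicing direction, which is not modeled here; with uniform weights this reduces to plain Laplacian smoothing, shown on a toy polyline whose step plays the role of a staircase.

```python
import numpy as np

def laplacian_smooth(verts, neighbors, weights, lam=0.5, n_iters=10):
    """Move each vertex toward the weighted average of its neighbors.
    A staircase-sensitive variant would choose `weights` anisotropically
    along the scanner's slicing direction; here they are user-supplied."""
    verts = verts.astype(float).copy()
    for _ in range(n_iters):
        new = verts.copy()
        for i, (nbrs, w) in enumerate(zip(neighbors, weights)):
            avg = np.average(verts[nbrs], axis=0, weights=w)
            new[i] = verts[i] + lam * (avg - verts[i])
        verts = new
    return verts

# A tiny polyline with a unit step in y: smoothing rounds off the "staircase".
verts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0],
                  [2, 1, 0], [3, 1, 0], [4, 1, 0]])
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
weights = [[1.0], [1, 1], [1, 1], [1, 1], [1, 1], [1.0]]
smooth = laplacian_smooth(verts, neighbors, weights, lam=0.5, n_iters=5)
```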
MEMS analog light processing: an enabling technology for adaptive optical phase control
NASA Astrophysics Data System (ADS)
Gehner, Andreas; Wildenhain, Michael; Neumann, Hannes; Knobbe, Jens; Komenda, Ondrej
2006-01-01
Various applications in modern optics demand Spatial Light Modulators (SLMs) with a true analog light processing capability, e.g. the generation of arbitrary analog phase patterns for adaptive optical phase control. For that purpose the Fraunhofer IPMS has developed a high-resolution MEMS Micro Mirror Array (MMA) with an integrated active-matrix CMOS address circuitry. The device provides 240 x 200 piston-type mirror elements with 40 μm pixel size, each of which can be addressed and deflected independently at 8-bit height resolution with a vertical analog deflection range of up to 400 nm, suitable for a 2π phase modulation in the visible. Full user programmability and control is provided by newly developed driver software for Windows XP based PCs, supporting both a Graphical User Interface (GUI) for stand-alone operation with pre-defined data patterns and an open ActiveX programming interface for direct data feed-through within a closed-loop environment. High-speed data communication is established by an IEEE1394a FireWire interface together with an electronic driving board performing the actual MMA programming and control at a maximum frame rate of up to 500 Hz. Successful application demonstrations have been given in eye aberration correction, coupling efficiency optimization into a monomode fiber, ultra-short laser pulse modulation, and diffractive beam shaping. In addition to presenting the basic device concept, the paper gives an overview of the results obtained from these applications.
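The quoted numbers are easy to sanity-check. For a piston (out-of-plane) mirror at normal incidence, reflection doubles the optical path, so a deflection d yields a phase shift of 4πd/λ, and a full 2π modulation needs a stroke of λ/2. The 633 nm wavelength below is an illustrative choice, not taken from the paper.

```python
import numpy as np

def piston_phase(deflection_nm: float, wavelength_nm: float) -> float:
    """Phase shift from a piston mirror at normal incidence: reflection
    doubles the optical path, so phi = 4 * pi * d / lambda."""
    return 4.0 * np.pi * deflection_nm / wavelength_nm

def deflection_for_2pi(wavelength_nm: float) -> float:
    """Mirror stroke needed for a full 2*pi modulation: lambda / 2."""
    return wavelength_nm / 2.0

# At 633 nm a 2*pi wrap needs ~316.5 nm of stroke, within the 400 nm
# range quoted for the device; longer visible wavelengths still fit.
d_633 = deflection_for_2pi(633.0)
```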
GPU-based cone beam computed tomography.
Noël, Peter B; Walczak, Alan M; Xu, Jinhui; Corso, Jason J; Hoffmann, Kenneth R; Schafer, Sebastian
2010-06-01
The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is filtered backprojection, which for a volume of size 256³ takes up to 25 min on a standard system. Recent developments in the area of Graphics Processing Units (GPUs) provide access to high-performance computing solutions at low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), executed on an NVIDIA GeForce GTX 280. Our implementation reduces reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe whether differences occur between CPU and GPU-based reconstructions. By using our approach, the computation time for a 256³ volume is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512³ volumes is 8.5 s. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
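The GPU-friendliness of backprojection comes from voxel independence: every output voxel can be computed by its own thread. The sketch below reduces the problem to 2D parallel-beam, nearest-neighbor, unfiltered backprojection (the actual implementation is filtered cone-beam reconstruction in CUDA), purely to show the one-thread-per-voxel structure of the inner loop.

```python
import numpy as np

def backproject_2d(sinogram, angles, n):
    """Voxel-driven backprojection of a parallel-beam sinogram onto an n x n
    grid. Each output pixel is independent of the others, which is exactly
    the parallelism a GPU exploits with one thread per voxel."""
    xs = np.arange(n) - (n - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    n_det = sinogram.shape[1]
    for proj, theta in zip(sinogram, angles):
        # Detector coordinate of each pixel for this view (nearest neighbor).
        s = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        idx = np.clip(np.round(s).astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon * np.pi / len(angles)

# A point object at the rotation center projects to the detector center in
# every view, so its (unfiltered) backprojection peaks at the center pixel.
n, n_views = 33, 90
angles = np.linspace(0, np.pi, n_views, endpoint=False)
sino = np.zeros((n_views, n))
center = (n - 1) // 2
sino[:, center] = 1.0
recon = backproject_2d(sino, angles, n)
```

In the CUDA version each thread would evaluate the `s`/`idx` lines for a single voxel across all projections, which is why the speedup over a sequential CPU loop is so large.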
The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards
NASA Astrophysics Data System (ADS)
Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.
2015-09-01
The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.
ERIC Educational Resources Information Center
Osterer, Irv
2012-01-01
One of the things that many post-secondary graphic-design schools look for in student portfolios is one or two typography projects with hand-rendered type. At Merivale High School, junior graphic-design classes work with traditional media, and every year they receive an assignment that encourages them to play with letter forms. In this article,…
Animation as a Distractor to Learning.
ERIC Educational Resources Information Center
Rieber, Lloyd P.
1996-01-01
A study of 364 fifth graders investigated distractibility of animated graphics in a computer-based tutorial about Newton's Laws of Motion. Found no difference in post-test performance for those with high, medium, or no distraction graphics. Students in the two distraction conditions took less time to process instructional frames than students in…
Reacting to Graphic Horror: A Model of Empathy and Emotional Behavior.
ERIC Educational Resources Information Center
Tamborini, Ron; And Others
1990-01-01
Studies viewer response to graphic horror films. Reports that undergraduate mass communication students viewed clips from two horror films and a scientific television program. Concludes that people who score high on measures for wandering imagination, fictional involvement, humanistic orientation, and emotional contagion tend to find horror films…
Literacy and Graphic Communication: Getting the Words out
ERIC Educational Resources Information Center
Fletcher, Tina; Sampson, Mary Beth
2012-01-01
Although it may seem logical to assume that giftedness automatically equates with high academic achievement, research has shown that this assumption is not always true, especially in areas that deal with communicating understanding and knowledge of a subject. If problems occur in graphic output venues that include handwriting, intervention…
Assessing the impact of graphical quality on automatic text recognition in digital maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang
2016-08-01
Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With advances in computational power and algorithm design, map processing systems have improved considerably over the last decade. However, fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in the graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results will not meet user expectations unless the user "properly" scans the map of interest, pre-processes the map image (e.g., using compression or not), and trains the processing system accordingly. These issues could slow further advancement of map processing techniques, as unsuccessful attempts create a discouraged user community and less sophisticated tools come to be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (which can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.
Surgical pathology report in the era of desktop publishing.
Pillarisetti, S G
1993-01-01
Since it is believed that "a picture is worth a thousand words," computer-generated line art was incorporated as an adjunct to the gross description in surgical pathology reporting in selected cases. The lack of an integrated software program was overcome by using commercially available graphics and word processing software. A library of drawings was developed over the last few years; most time-consuming is the development of the templates and the graphics library. With some effort it is possible to integrate high-quality graphics into surgical pathology reports.
Full resolution hologram-like autostereoscopic display
NASA Technical Reports Server (NTRS)
Eichenlaub, Jesse B.; Hutchins, Jamie
1995-01-01
Under this program, Dimension Technologies Inc. (DTI) developed a prototype display that uses a proprietary illumination technique to create autostereoscopic hologram-like full resolution images on an LCD operating at 180 fps. The resulting 3D image possesses a resolution equal to that of the LCD along with properties normally associated with holograms, including change of perspective with observer position and lack of viewing position restrictions. Furthermore, this autostereoscopic technique eliminates the need to wear special glasses to achieve the parallax effect. Under the program a prototype display was developed which demonstrates the hologram-like full resolution concept. To implement such a system, DTI explored various concept designs and enabling technologies required to support those designs. Specifically required were: a parallax illumination system with sufficient brightness and control; an LCD with rapid address and pixel response; and an interface to an image generation system for creation of computer graphics. Of the possible parallax illumination system designs, we chose a design which utilizes an array of fluorescent lamps. This system creates six sets of illumination areas to be imaged behind an LCD. This controlled illumination array is interfaced to a lenticular lens assembly which images the light segments into thin vertical light lines to achieve the parallax effect. This light line formation is the foundation of DTI's autostereoscopic technique. The David Sarnoff Research Center (Sarnoff) was subcontracted to develop an LCD that would operate with a fast scan rate and pixel response. Sarnoff chose a surface mode cell technique and produced the world's first large area pi-cell active matrix TFT LCD. The device provided adequate performance to evaluate five different perspective stereo viewing zones. A Silicon Graphics' Iris Indigo system was used for image generation which allowed for static and dynamic multiple perspective image rendering. 
During the development of the prototype display, we identified many critical issues associated with implementing such a technology. Testing and evaluation demonstrated that this illumination technique provides autostereoscopic 3D multi-perspective images with a wide viewing range, smooth transitions, and flicker-free operation, given suitable enabling technologies.
Interactive graphics for expressing health risks: development and qualitative evaluation.
Ancker, Jessica S; Chan, Connie; Kukafka, Rita
2009-01-01
Recent findings suggest that interactive game-like graphics might be useful in communicating probabilities. We developed a prototype for a risk communication module, focusing on eliciting users' preferences for different interactive graphics and assessing usability and user interpretations. Feedback from five focus groups was used to design the graphics. The final version displayed a matrix of square buttons; clicking on any button allowed the user to see whether the stick figure underneath was affected by the health outcome. When participants used this interaction to learn about a risk, they expressed more emotional responses, both positive and negative, than when viewing any static graphic or numerical description of a risk. Their responses included relief about small risks and concern about large risks. The groups also commented on static graphics: arranging the figures affected by disease randomly throughout a group of figures made it more difficult to judge the proportion affected but often was described as more realistic. Interactive graphics appear to have potential for expressing risk magnitude as well as the feeling of risk. This affective impact could be useful in increasing perceived threat of high risks, calming fears about low risks, or comparing risks. Quantitative studies are planned to assess the effect on perceived risks and estimated risk magnitudes.
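The static icon-array contrast the focus groups commented on (affected figures scattered at random versus grouped together) is easy to sketch. This text-based version in Python is purely illustrative and hypothetical, not the authors' interactive button-matrix module; the symbols, function name, and parameters are assumptions:

```python
import random

def icon_array(n_total=100, n_affected=8, cols=10, scattered=True, seed=0):
    """Render a text icon array: 'X' marks a figure affected by the
    health outcome, 'o' an unaffected one.

    scattered=True places affected figures at random, the arrangement
    focus-group participants judged harder to read but "more realistic";
    scattered=False groups them at the start of the array.
    """
    icons = ['X'] * n_affected + ['o'] * (n_total - n_affected)
    if scattered:
        random.Random(seed).shuffle(icons)  # deterministic for a given seed
    rows = [' '.join(icons[r * cols:(r + 1) * cols])
            for r in range(-(-n_total // cols))]  # ceil division
    return '\n'.join(rows)
```

An interactive version would reveal each figure only when its button is clicked, which is the interaction the study found elicited stronger emotional responses than any static display.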
Evaluation of graphic cardiovascular display in a high-fidelity simulator.
Agutter, James; Drews, Frank; Syroid, Noah; Westneskow, Dwayne; Albert, Rob; Strayer, David; Bermudez, Julio; Weinger, Matthew B
2003-11-01
"Human error" in anesthesia can be attributed to misleading information from patient monitors or to the physician's failure to recognize a pattern. A graphic representation of monitored data may provide better support for detection, diagnosis, and treatment. We designed a graphic display to show hemodynamic variables. Twenty anesthesiologists were asked to assume care of a simulated patient. Half the participants used the graphic cardiovascular display; the other half used a Datex As/3 monitor. One scenario was a total hip replacement with a transfusion reaction to mismatched blood. The second scenario was a radical prostatectomy with 1.5 L of blood loss and myocardial ischemia. Subjects who used the graphic display detected myocardial ischemia 2 min sooner than those who did not use the display. Treatment was initiated sooner (2.5 versus 4.9 min). There were no significant differences between groups in the hip replacement scenario. Systolic blood pressure deviated less from baseline, central venous pressure was closer to its baseline, and arterial oxygen saturation was higher at the end of the case when the graphic display was used. The study lends some support for the hypothesis that providing clinical information graphically in a display designed with emergent features and functional relationships can improve clinicians' ability to detect, diagnose, manage, and treat critical cardiovascular events in a simulated environment. A graphic representation of monitored data may provide better support for detection, diagnosis, and treatment. A user-centered design process led to a novel object-oriented graphic display of hemodynamic variables containing emergent features and functional relationships. In a simulated environment, this display appeared to support clinicians' ability to diagnose, manage, and treat a critical cardiovascular event in a simulated environment. We designed a graphic display to show hemodynamic variables. 
The study provides some support for the hypothesis that providing clinical information graphically in a display designed with emergent features and functional relationships can improve clinicians' ability to detect, diagnosis, mange, and treat critical cardiovascular events in a simulated environment.
NASA Astrophysics Data System (ADS)
Zhou, H.; Yu, X.; Chen, C.; Zeng, L.; Lu, S.; Wu, L.
2016-12-01
In this research, we combined synchrotron-based X-ray micro-computed tomography (SR-μCT) with the three-dimensional lattice Boltzmann (LB) method to quantify how changes in pore-space architecture affected the macroscopic hydraulic properties of two clayey soils amended with biochar. SR-μCT was used to characterize the pore structures of the soils before and after biochar addition. The high-resolution soil pore structures were then used directly as internal boundary conditions for three-dimensional water flow simulations with the LB method, which was accelerated by graphics processing unit (GPU) parallel computing. It was shown that, due to the changes in soil pore geometry, the application of biochar increased soil permeability by at least one order of magnitude and decreased tortuosity by 20-30%. This work was the first physics-based modeling study of the effect of biochar amendment on soil hydraulic properties. The developed theories and techniques have promising potential for understanding the mechanisms of water and nutrient transport in soil at the pore scale.
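The structure of a lattice Boltzmann update, and why it parallelizes so well on GPUs, can be sketched with a single D2Q9 BGK collision-and-streaming step. This Python/NumPy version is a minimal 2D sketch with periodic boundaries, not the study's solver, which is 3D, GPU-accelerated, and applies bounce-back at the pore walls imaged by tomography:

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for the 9 directions."""
    cu = 3.0 * (C[:, 0, None, None] * ux + C[:, 1, None, None] * uy)
    usq = 1.5 * (ux ** 2 + uy ** 2)
    return W[:, None, None] * rho * (1.0 + cu + 0.5 * cu ** 2 - usq)

def lbm_step(f, tau=0.8):
    """One BGK collision + streaming step on a periodic grid.

    Collision is purely local to each node, so a GPU implementation
    assigns one thread per node; streaming is a nearest-neighbor shift.
    f has shape (9, ny, nx).
    """
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau          # collision
    for i in range(9):                                    # streaming
        f[i] = np.roll(np.roll(f[i], C[i, 0], axis=1), C[i, 1], axis=0)
    return f
```

In a pore-scale simulation, populations that stream into a solid voxel would instead be reflected back (bounce-back), which enforces no-slip on the SR-μCT-imaged pore walls; permeability then follows from the steady-state flux under a body force.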
2016-09-20
This graphic depicts the Asteroid Redirect Vehicle conducting a flyby of its target asteroid. During these flybys, the Asteroid Redirect Mission (ARM) would come within 0.6 miles (1 kilometer), generating imagery with resolution of up to 0.4 of an inch (1 centimeter) per pixel. The robotic segment of ARM will demonstrate advanced, high-power, high-throughput solar electric propulsion; advanced autonomous precision proximity operations at a low-gravity planetary body; and controlled touchdown and liftoff with a multi-ton mass. The crew segment of the mission will include spacewalk activities for sample selection, extraction, containment and return; and mission operations of integrated robotic and crewed vehicle stack -- all key components of future in-space operations for human missions to the Mars system. After collecting a multi-ton boulder from the asteroid, the robotic spacecraft will redirect the boulder to a crew-accessible orbit around the moon, where NASA plans to conduct a series of proving ground missions in the 2020s that will help validate capabilities needed for NASA's Journey to Mars. http://photojournal.jpl.nasa.gov/catalog/PIA21062
The MAP Autonomous Mission Control System
NASA Technical Reports Server (NTRS)
Breed, Julie; Coyle, Steven; Blahut, Kevin; Dent, Carolyn; Shendock, Robert; Rowe, Roger
2000-01-01
The Microwave Anisotropy Probe (MAP) mission is the second mission in NASA's Office of Space Science low-cost, Medium-class Explorers (MIDEX) program. The Explorers Program is designed to accomplish frequent, low-cost, high-quality space science investigations using innovative, streamlined, efficient management, design, and operations approaches. The MAP spacecraft will produce an accurate full-sky map of the cosmic microwave background temperature fluctuations with high sensitivity and angular resolution. MAP is planned for launch in early 2001 and will be supported by single-shift staffing only; the rest of the time the spacecraft must be operated autonomously, with personnel available only on an on-call basis. Four innovations work cooperatively to enable a significant reduction in operations costs for the MAP spacecraft. First, the use of a common ground system for spacecraft Integration and Test (I&T) as well as operations. Second, the use of finite state modeling for intelligent autonomy. Third, the integration of a graphical planning engine to drive the autonomous systems without an intermediate manual step. And fourth, the ability to conduct distributed operations via Web and pager access.