Considerations on communications network protocols in deep space
NASA Technical Reports Server (NTRS)
Clare, L. P.; Agre, J. R.; Yan, T.
2001-01-01
Communications supporting deep space missions impose numerous unique constraints that impact the architectural choices made for cost-effectiveness. We are entering an era in which networks operating in deep space are needed to support planetary exploration. Cost-effective performance will require a balanced integration of applicable widely used standard protocols with new and innovative designs.
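One constraint the abstract alludes to can be made concrete with a short calculation: at interplanetary distances, propagation delay alone rules out chatty request/response protocols. This sketch (the 83-million-mile range is the Earth-comet distance cited elsewhere in these records) computes the one-way light time.

```python
# One-way light-time calculator: a minimal sketch of why round-trip-heavy
# Internet protocols struggle over deep-space links.
C = 299_792_458.0          # speed of light, m/s
METERS_PER_MILE = 1609.344

def one_way_light_time(miles: float) -> float:
    """Seconds for a radio signal to cross the given distance."""
    return miles * METERS_PER_MILE / C

# At an 83-million-mile range, a single signal takes roughly 7.4 minutes,
# so any acknowledgment-driven exchange costs ~15 minutes per round trip.
delay = one_way_light_time(83e6)
print(f"one-way delay: {delay / 60:.1f} min")
```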
The Deep Impact Network Experiment Operations Center Monitor and Control System
NASA Technical Reports Server (NTRS)
Wang, Shin-Ywan (Cindy); Torgerson, J. Leigh; Schoolcraft, Joshua; Brenman, Yan
2009-01-01
The Interplanetary Overlay Network (ION) software at JPL is an implementation of Delay/Disruption Tolerant Networking (DTN) which has been proposed as an interplanetary protocol to support space communication. The JPL Deep Impact Network (DINET) is a technology development experiment intended to increase the technical readiness of the JPL implemented ION suite. The DINET Experiment Operations Center (EOC) developed by JPL's Protocol Technology Lab (PTL) was critical in accomplishing the experiment. EOC, containing all end nodes of simulated spaces and one administrative node, exercised publish and subscribe functions for payload data among all end nodes to verify the effectiveness of data exchange over ION protocol stacks. A Monitor and Control System was created and installed on the administrative node as a multi-tiered internet-based Web application to support the Deep Impact Network Experiment by allowing monitoring and analysis of the data delivery and statistics from ION. This Monitor and Control System includes the capability of receiving protocol status messages, classifying and storing status messages into a database from the ION simulation network, and providing web interfaces for viewing the live results in addition to interactive database queries.
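The receive/classify/store/query loop described for the Monitor and Control System can be sketched as follows. The message fields, category names, and schema here are illustrative placeholders, not the actual ION status-message formats or the DINET database design.

```python
# Hypothetical sketch of a classify-and-store pipeline for protocol status
# messages: tag each incoming message by type, persist it, then query counts.
import sqlite3

DB = sqlite3.connect(":memory:")
DB.execute("CREATE TABLE status (node TEXT, kind TEXT, detail TEXT)")

def classify(msg: dict) -> str:
    """Map a raw status message to a coarse category (assumed taxonomy)."""
    event = msg.get("event", "")
    if "bundle" in event:
        return "bundle"
    if "link" in event:
        return "link"
    return "other"

def store(msg: dict) -> None:
    """Persist one classified message for later interactive queries."""
    DB.execute("INSERT INTO status VALUES (?, ?, ?)",
               (msg["node"], classify(msg), msg.get("event", "")))

for m in [{"node": "n1", "event": "bundle_received"},
          {"node": "n2", "event": "link_down"}]:
    store(m)

rows = DB.execute("SELECT kind, COUNT(*) FROM status GROUP BY kind").fetchall()
print(rows)
```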
Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii
2015-01-01
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. Ongoing work on the design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations, is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
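The limited-bit-precision constraint studied in this paper can be illustrated by quantizing trained weights to n bits and measuring the rounding error. The uniform range-based quantizer below is an illustrative choice, not the paper's exact scheme or its adapted training mechanism.

```python
# Sketch: uniformly quantize a weight vector to 2**bits levels and report the
# worst-case rounding error at 8, 4, and 2 bits of precision.
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize weights to 2**bits levels over their own range."""
    lo, hi = float(w.min()), float(w.max())
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
for bits in (8, 4, 2):
    err = np.abs(quantize(w, bits) - w).max()
    print(f"{bits} bits: max abs error {err:.4f}")
```

The maximum error is bounded by half the quantization step, which is why precision degrades gracefully until the step size becomes comparable to the weights themselves.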
Deep learning for computational chemistry.
Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav
2017-06-15
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.
Deep learning for computational chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav
The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.
The Deep Impact Network Experiment Operations Center
NASA Technical Reports Server (NTRS)
Torgerson, J. Leigh; Clare, Loren; Wang, Shin-Ywan
2009-01-01
Delay/Disruption Tolerant Networking (DTN) promises solutions in solving space communications challenges arising from disconnections as orbiters lose line-of-sight with landers, long propagation delays over interplanetary links, and other phenomena. DTN has been identified as the basis for the future NASA space communications network backbone, and international standardization is progressing through both the Consultative Committee for Space Data Systems (CCSDS) and the Internet Engineering Task Force (IETF). JPL has developed an implementation of the DTN architecture, called the Interplanetary Overlay Network (ION). ION is specifically implemented for space use, including design for use in a real-time operating system environment and high processing efficiency. In order to raise the Technology Readiness Level of ION, the first deep space flight demonstration of DTN is underway, using the Deep Impact (DI) spacecraft. Called the Deep Impact Network Experiment (DINET), the demonstration is planned for Fall 2008. An essential component of the DINET project is the Experiment Operations Center (EOC), which will generate and receive the test communications traffic as well as "out-of-DTN band" command and control of the DTN experiment, store DTN flight test information in a database, provide display systems for monitoring DTN operations status and statistics (e.g., bundle throughput), and support queries and analyses of the data collected. This paper describes the DINET EOC and its value in the DTN flight experiment and potential for further DTN testing.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. A worker at Astrotech Space Operations in Titusville, Fla., begins fueling the Deep Impact spacecraft. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Workers at Astrotech Space Operations in Titusville, Fla., suit up before fueling the Deep Impact spacecraft. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Workers at Astrotech Space Operations in Titusville, Fla., get ready to begin fueling the Deep Impact spacecraft, seen wrapped in a protective cover in the background. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Workers at Astrotech Space Operations in Titusville, Fla., begin fueling operations of the Deep Impact spacecraft, seen wrapped in a protective cover in the background. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the Boeing Delta II rocket carrying the Deep Impact spacecraft stands out against an early dawn sky. Scheduled for liftoff at 1:47 p.m. EST today, Deep Impact will head for space and a rendezvous with Comet Tempel 1 when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the Boeing Delta II rocket carrying the Deep Impact spacecraft is bathed in light, waiting for tower rollback before launch. Scheduled for liftoff at 1:47 p.m. EST today, Deep Impact will head for space and a rendezvous with Comet Tempel 1 when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
A Whale of a Tale: Creating Spacecraft Telemetry Data Analysis Products for the Deep Impact Mission
NASA Technical Reports Server (NTRS)
Sturdevant, Kathryn
2006-01-01
A description of the Whale product generation utility and its means of analyzing project data for the Deep Impact mission is presented. The topics include: 1) Whale Definition; 2) Whale Overview; 3) Whale Challenges; 4) Network Configuration; 5) Network Diagram; 6) Whale Data Flow: Design Decisions; 7) Whale Data Flow Diagram; 8) Whale Data Flow; 9) Whale Team and Users; 10) Creeping Requirements; 11) Whale Competition; 12) Statistics: Processing Time; 13) CPU and Disk Usage; 14) The Ripple Effect of More Data; and 15) Data Validation and the Automation Challenge.
Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M
2018-03-01
This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state of the art in traffic sign classification. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The proposed Convolutional Neural Network reports an accuracy of 99.71% on the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods while also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.
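The optimiser comparison described above can be sketched on a toy problem: minimising f(x) = x² with two of the evaluated update rules, plain SGD and Adam, implemented from their standard equations. The hyperparameters are illustrative defaults, not the values used in the paper.

```python
# Minimal comparison of SGD and Adam update rules on f(x) = x**2.
def grad(x: float) -> float:
    """Gradient of f(x) = x**2."""
    return 2.0 * x

def sgd(x: float, lr: float = 0.1, steps: int = 200) -> float:
    """Plain stochastic gradient descent (here, deterministic gradient)."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def adam(x: float, lr: float = 0.1, b1: float = 0.9, b2: float = 0.999,
         eps: float = 1e-8, steps: int = 200) -> float:
    """Adam: bias-corrected first/second moment estimates scale each step."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment (variance) estimate
        mhat = m / (1 - b1 ** t)           # bias correction
        vhat = v / (1 - b2 ** t)
        x -= lr * mhat / (vhat ** 0.5 + eps)
    return x

for name, opt in [("SGD", sgd), ("Adam", adam)]:
    print(f"{name}: x* = {opt(5.0):.6f}")
```

Both reach the minimum at x = 0; Adam's per-parameter step normalisation is what makes it less sensitive to the gradient scale, one of the factors such optimiser trials probe.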
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., shadows paint the Boeing Delta II rocket carrying the Deep Impact spacecraft as the mobile service tower at left is rolled back before launch. Scheduled for liftoff at 1:47 p.m. EST today, Deep Impact will head for space and a rendezvous with Comet Tempel 1 when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the Boeing Delta II rocket carrying the Deep Impact spacecraft looms into the night sky as the mobile service tower at right is rolled back before launch. Scheduled for liftoff at 1:47 p.m. EST today, Deep Impact will head for space and a rendezvous with Comet Tempel 1 when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the Boeing Delta II rocket carrying the Deep Impact spacecraft shines under spotlights in the early dawn hours as it waits for launch. Scheduled for liftoff at 1:47 p.m. EST today, Deep Impact will head for space and a rendezvous with Comet Tempel 1 when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. The sun rises behind Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., where the Boeing Delta II rocket carrying the Deep Impact spacecraft waits for launch. Gray clouds above the horizon belie the favorable weather forecast for the afternoon launch. Scheduled for liftoff at 1:47 p.m. EST today, Deep Impact will head for space and a rendezvous with Comet Tempel 1 when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
Disruption Tolerant Network Technology Flight Validation Report: DINET
NASA Technical Reports Server (NTRS)
Jones, Ross M.
2009-01-01
In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then, they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions.
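The custody-transfer and retransmission behaviour exercised by DINET can be sketched as a store-and-forward loop: a node retains custody of each bundle until the transfer to the next hop succeeds, retrying over an unreliable link. The link model and function names below are hypothetical illustrations, not ION's API.

```python
# Store-and-forward sketch: custody-based delivery of bundles over an
# intermittently available link, with retransmission on failure.
import random

def deliver(bundles, link_up_prob: float = 0.5, seed: int = 42):
    """Forward all bundles; sender keeps custody until each hop-transfer succeeds."""
    rng = random.Random(seed)
    custody = list(bundles)          # bundles held pending acknowledgment
    received, attempts = [], 0
    while custody:
        attempts += 1
        if rng.random() < link_up_prob:      # link available: transfer succeeds
            received.append(custody.pop(0))  # custody passes to the next hop
        # else: bundle stays in custody and is retransmitted on a later pass
    return received, attempts

# Mirroring the experiment's scale: 300 items, all eventually delivered in order.
got, tries = deliver(range(300))
print(f"delivered {len(got)} bundles in {tries} attempts")
```

With a 50%-available link, roughly twice as many transmission attempts as bundles are needed, yet nothing is lost: delayed delivery replaces failed delivery, which is the core DTN idea.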
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. From the nearby Press Site at Cape Canaveral Air Force Station, Fla., photographers capture the exciting launch of the Deep Impact spacecraft at 1:47 p.m. EST. A NASA Discovery mission, Deep Impact is heading for space and a rendezvous 83 million miles from Earth with Comet Tempel 1. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Erupting from the flames and smoke beneath it, NASA's Deep Impact spacecraft lifts off at 1:47 p.m. EST today from Launch Pad 17-B, Cape Canaveral Air Force Station, Fla. A NASA Discovery mission, Deep Impact is heading for space and a rendezvous 83 million miles from Earth with Comet Tempel 1. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Engulfed by flames and smoke, NASA's Deep Impact spacecraft lifts off at 1:47 p.m. EST today from Launch Pad 17-B, Cape Canaveral Air Force Station, Fla. A NASA Discovery mission, Deep Impact is heading for space and a rendezvous 83 million miles from Earth with Comet Tempel 1. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. With a burst of flames, NASA's Deep Impact spacecraft lifts off at 1:47 p.m. EST today from Launch Pad 17-B, Cape Canaveral Air Force Station, Fla. A NASA Discovery mission, Deep Impact is heading for space and a rendezvous 83 million miles from Earth with Comet Tempel 1. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2005-01-01
JET PROPULSION LABORATORY, CALIF. At Ball Aerospace in Boulder, Colo., the infrared (IR) spectrometer for the Deep Impact flyby spacecraft is inspected in the instrument assembly area in the Fisher Assembly building clean room. Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measure the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determine the changes in natural outgassing produced by the impact. The spectrometer is part of the High Resolution Instrument in the spacecraft. This imager will be aimed at the ejected matter as the crater forms, and an infrared 'fingerprint' of the material from inside the comet's nucleus will be taken. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission. Launch of Deep Impact is scheduled for Jan. 12 from Launch Pad 17-B, Cape Canaveral Air Force Station, Fla.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Emerging through the smoke and steam, the Boeing Delta II rocket carrying NASA's Deep Impact spacecraft lifts off at 1:47 p.m. EST from Launch Pad 17-B, Cape Canaveral Air Force Station, Fla. A NASA Discovery mission, Deep Impact is heading for space and a rendezvous 83 million miles from Earth with Comet Tempel 1. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. After a perfect liftoff at 1:47 p.m. EST today from Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the Boeing Delta II rocket with the Deep Impact spacecraft aboard soars through the clear blue sky. A NASA Discovery mission, Deep Impact is heading for space and a rendezvous 83 million miles from Earth with Comet Tempel 1. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Guests of NASA gather near the launch site at Cape Canaveral Air Force Station, Fla., to watch the Deep Impact spacecraft as it speeds through the air after a perfect launch at 1:47 p.m. EST. A NASA Discovery mission, Deep Impact is heading for space and a rendezvous 83 million miles from Earth with Comet Tempel 1. After releasing a 3- by 3-foot projectile (impactor) to crash onto the surface July 4, 2005, Deep Impact's flyby spacecraft will reveal the secrets of the comet's interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B at Cape Canaveral Air Force Station, the second stage of the Boeing Delta II rocket arrives at the top of the mobile service tower. The element will be mated to the Delta II, which will launch NASA's Deep Impact spacecraft. A NASA Discovery mission, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing an impactor on a course to hit the comet's sunlit side, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measure the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determine the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. From a vantage point above, a worker observes the Deep Impact spacecraft exposed after removal of the canister and protective cover. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., a second Solid Rocket Booster (SRB) is raised off a transporter to be lifted up the mobile service tower. It will be attached to the Boeing Delta II launch vehicle for launch of the Deep Impact spacecraft. A NASA Discovery mission, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measure the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determine the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact project management is handled by the Jet Propulsion Laboratory in Pasadena, Calif. The spacecraft is scheduled to launch Dec. 30, 2004.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. The Deep Impact spacecraft waits inside the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., for fairing installation. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nosecone, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., the Deep Impact spacecraft is mated to the Boeing Delta II third stage. When the spacecraft and third stage are mated, they will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. There they will be mated to the Delta II rocket and the fairing installed around them for protection during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Inside the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the partly enclosed Deep Impact spacecraft (background) waits while the second half of the fairing (foreground left) moves toward it. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nosecone, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Inside the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the first half of the fairing is moved toward the Deep Impact spacecraft for installation. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nosecone, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Inside the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., the first half of the fairing is moved into place around the Deep Impact spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nosecone, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Boeing technicians at Astrotech Space Operations in Titusville, Fla., prepare the third stage of a Delta II rocket for mating with the Deep Impact spacecraft. When the spacecraft and third stage are mated, they will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. There they will be mated to the Delta II rocket and the fairing installed around them for protection during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. Inside the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., workers attach the two halves of the fairing around the Deep Impact spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth nosecone, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
Bianchini, Monica; Scarselli, Franco
2014-08-01
Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. Experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications that face very complex problems, e.g., vision and human language understanding. However, the theoretical results supporting this claim are still few and incomplete. In this paper, we propose a new approach to studying how the depth of feedforward neural networks affects their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced to evaluate the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared by deriving upper and lower bounds on their complexity, and by studying how the complexity depends on the number of hidden units and on the activation function used. The results support the idea that deep networks actually implement functions of higher complexity, so that, with the same number of resources, they are able to address more difficult problems.
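The deep-versus-shallow comparison in this abstract rests on architectures that hold the hidden-unit budget fixed while varying depth. A minimal sketch of that setup, assuming sigmoidal fully connected networks in NumPy (the layer sizes and function names are illustrative, not from the paper), shows that equal unit budgets still yield different parameter counts:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_net(layer_sizes, rng):
    """Random weights and biases for a fully connected sigmoidal net."""
    return [(rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    # Apply each affine layer followed by the sigmoid nonlinearity.
    for W, b in params:
        x = sigmoid(x @ W + b)
    return x

def n_params(params):
    return sum(W.size + b.size for W, b in params)

rng = np.random.default_rng(0)
shallow = init_net([4, 16, 1], rng)    # one hidden layer of 16 units
deep    = init_net([4, 8, 8, 1], rng)  # two hidden layers of 8 units each

x = rng.standard_normal((5, 4))
assert forward(shallow, x).shape == (5, 1)
assert forward(deep, x).shape == (5, 1)
# Same total of 16 hidden units, but the composed (deep) net has more
# parameters because of its hidden-to-hidden weight matrix.
assert n_params(deep) > n_params(shallow)
```

The paper's topological complexity measure applies to the functions such nets implement; this sketch only sets up the matched-budget architectures being compared.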
On Deep Learning for Trust-Aware Recommendations in Social Networks.
Deng, Shuiguang; Huang, Longtao; Xu, Guandong; Wu, Xindong; Wu, Zhaohui
2017-05-01
With the emergence of online social networks, the social network-based recommendation approach has become popular. The major benefit of this approach is its ability to deal with cold-start users. In addition to social networks, user trust information also plays an important role in obtaining reliable recommendations. Although matrix factorization (MF) has become dominant in recommender systems, the recommendation quality relies heavily on the initialization of the user and item latent feature vectors. To address these challenges, we develop a novel trust-based approach for recommendation in social networks. In particular, we leverage deep learning to determine the initialization in MF for trust-aware social recommendations and to differentiate the community effect in users' trusted friendships. A two-phase recommendation process is proposed that uses deep learning for initialization and then synthesizes the users' interests and their trusted friends' interests, together with the impact of the community effect, for recommendations. We perform extensive experiments on real-world social network data to demonstrate the accuracy and effectiveness of our proposed approach in comparison with other state-of-the-art methods.
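The abstract's key point is that MF quality depends on how the latent factors are initialized, with a deep model supplying the starting point. A minimal NumPy sketch of the MF half, assuming SGD over observed ratings (the function names, hyperparameters, and toy data are illustrative; the deep initializer is stubbed by whatever `U0`, `V0` you pass in):

```python
import numpy as np

def mf_sgd(R, U0, V0, lr=0.02, reg=0.05, epochs=200):
    """Matrix factorization by SGD on observed entries, starting from
    supplied initial factors (e.g. produced by a pretrained deep model,
    as the paper proposes). R holds ratings with np.nan for missing."""
    U, V = U0.copy(), V0.copy()
    rows, cols = np.nonzero(~np.isnan(R))
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]   # prediction error on entry (i, j)
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

rng = np.random.default_rng(1)
R = np.array([[5.0, 4.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 5.0, 2.0]])
# Stand-in for a learned initialization: small random factors.
U0 = 0.1 * rng.standard_normal((3, 2))
V0 = 0.1 * rng.standard_normal((3, 2))

def rmse(U, V):
    mask = ~np.isnan(R)
    return float(np.sqrt(np.mean((R - U @ V.T)[mask] ** 2)))

U, V = mf_sgd(R, U0, V0)
assert rmse(U, V) < rmse(U0, V0)  # training reduces observed-entry error
```

A better-than-random `U0`/`V0` is exactly where the paper's deep-learning phase plugs in; the trust and community terms would enter as additional regularizers on `U`.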
Multi-level deep supervised networks for retinal vessel segmentation.
Mo, Juan; Zhang, Lei
2017-12-01
Changes in the appearance of retinal blood vessels are an important indicator for various ophthalmologic and cardiovascular diseases, including diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Vessel segmentation from retinal images is very challenging because of low blood vessel contrast, intricate vessel topology, and the presence of pathologies such as microaneurysms and hemorrhages. To overcome these challenges, we propose a neural network-based method for vessel segmentation. A deeply supervised fully convolutional network is developed by leveraging multi-level hierarchical features of the deep network. To improve the discriminative capability of features in lower layers of the deep network and to guide gradient backpropagation past the vanishing-gradient problem, deep supervision with auxiliary classifiers is incorporated in some intermediate layers of the network. Moreover, knowledge transferred from other domains is used to alleviate the issue of insufficient medical training data. The proposed approach does not rely on hand-crafted features and needs no problem-specific preprocessing or postprocessing, which reduces the impact of subjective factors. We evaluate the proposed method on three publicly available databases: the DRIVE, STARE, and CHASE_DB1 databases. Extensive experiments demonstrate that our approach achieves performance better than or comparable to state-of-the-art methods with a much faster processing speed, making it suitable for real-world clinical applications. The results of cross-training experiments demonstrate its robustness with respect to the training set. The proposed approach segments retinal vessels accurately with a much faster processing speed and can be easily applied to other biomedical segmentation tasks.
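The deep-supervision mechanism described here amounts to adding weighted auxiliary losses from intermediate side outputs to the main segmentation loss. A minimal sketch, assuming binary vessel/background labels and per-pixel probabilities flattened to vectors (the function names, weights, and toy numbers are illustrative, not from the paper):

```python
import numpy as np

def bce(p, t, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities and targets."""
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(t * np.log(p) + (1 - t) * np.log(1 - p))))

def deeply_supervised_loss(main_pred, aux_preds, target, aux_weights):
    """Final-output loss plus weighted losses from intermediate side
    outputs; the auxiliary terms inject gradient directly into lower
    layers, countering vanishing gradients during backpropagation."""
    return bce(main_pred, target) + sum(
        w * bce(p, target) for w, p in zip(aux_weights, aux_preds))

target = np.array([1.0, 0.0, 1.0, 0.0])          # vessel / background labels
main = np.array([0.9, 0.2, 0.8, 0.1])            # final classifier output
aux = [np.array([0.7, 0.4, 0.6, 0.3]),           # coarser side outputs from
       np.array([0.6, 0.5, 0.6, 0.4])]           # intermediate layers
total = deeply_supervised_loss(main, aux, target, aux_weights=[0.5, 0.25])
assert total > bce(main, target)  # auxiliary terms add supervision signal
```

In training, the auxiliary weights are typically decayed or dropped at test time, so only the main output is used for the final segmentation.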
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. On Launch Pad 17-B, Cape Canaveral Air Force Station, Fla., a crane begins lifting the third in a set of three Solid Rocket Boosters (SRBs). The SRBs will be hoisted up the mobile service tower and join three others already mated to the Boeing Delta II rocket that will launch the Deep Impact spacecraft. A NASA Discovery mission, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing an impactor on a course to hit the comet's sunlit side, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measure the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determine the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. This view from inside the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, shows the Boeing Delta II second stage as it reaches the top. The component will be attached to the interstage adapter on the Delta II. The rocket is the launch vehicle for the Deep Impact spacecraft, scheduled for liftoff no earlier than Jan. 12. A NASA Discovery mission, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measure the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determine the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. At Launch Pad 17-B, Cape Canaveral Air Force Station, the Boeing Delta II second stage reaches the top of the mobile service tower. The component will be attached to the interstage adapter on the Delta II. The rocket is the launch vehicle for the Deep Impact spacecraft, scheduled for liftoff no earlier than Jan. 12. A NASA Discovery mission, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measure the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determine the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians oversee the final movement of the Deep Impact spacecraft being lowered onto the Delta II third stage for mating. When the spacecraft and third stage are mated, they will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. There they will be mated to the Delta II rocket and the fairing installed around them for protection during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. The Deep Impact spacecraft is lifted from its transporter into the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. The spacecraft will be attached to the second stage of the Boeing Delta II rocket. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., the Deep Impact spacecraft is secure in the canister for its move to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians watch as an overhead crane lowers the Deep Impact spacecraft onto the Delta II third stage for mating. When the spacecraft and third stage are mated, they will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. There they will be mated to the Delta II rocket and the fairing installed around them for protection during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. The Deep Impact spacecraft arrives before dawn at the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. The spacecraft will be attached to the second stage of the Boeing Delta II rocket. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. In the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, Fla., workers stand by as the canister is lifted away from the Deep Impact spacecraft. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians watch as an overhead crane lifts the Deep Impact spacecraft, which is being moved for mating to the Delta II third stage. When the spacecraft and third stage are mated, they will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. There they will be mated to the Delta II rocket and the fairing installed around them for protection during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. In the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, Fla., workers watch as the protective cover surrounding the Deep Impact spacecraft is lifted away. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians attach a crane to the Deep Impact spacecraft in order to move it to the Delta II third stage at left for mating. When the spacecraft and third stage are mated, they will be moved to Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. There they will be mated to the Delta II rocket and the fairing installed around them for protection during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. In the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, Fla., workers begin lowering the Deep Impact spacecraft toward the second stage of the Boeing Delta II launch vehicle below for mating. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. In the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, Fla., workers attach the third stage motor, connected to the Deep Impact spacecraft, to the spin table on the second stage of the Boeing Delta II launch vehicle below. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. The Deep Impact spacecraft is lifted into the top of the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. The spacecraft will be attached to the second stage of the Boeing Delta II rocket. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
Structure, functioning, and cumulative stressors of Mediterranean deep-sea ecosystems
NASA Astrophysics Data System (ADS)
Tecchio, Samuele; Coll, Marta; Sardà, Francisco
2015-06-01
Environmental stressors, such as climate fluctuations, and anthropogenic stressors, such as fishing, are of major concern for the management of deep-sea ecosystems. Deep-water habitats are limited by primary productivity and are mainly dependent on the vertical input of organic matter from the surface. Global change over recent decades is driving variations in primary productivity levels across oceans, and thus it affects the amount of organic matter landing on the deep seafloor. In addition, anthropogenic impacts are now reaching the deep ocean. The Mediterranean Sea, the largest enclosed basin on the planet, is no exception. However, ecosystem-level studies of the response of deep-sea ecosystems to varying food input and anthropogenic stressors are still scant. We present here a comparative ecological network analysis of three food webs of the deep Mediterranean Sea with contrasting trophic structures. After modelling the flows of these food webs with the Ecopath with Ecosim approach, we compared indicators of network structure and functioning. We then developed temporal dynamic simulations varying the organic matter input to evaluate its potential effect. Results show that, following the west-to-east gradient of marine snow input in the Mediterranean Sea, organic matter recycling increases, net production decreases to negative values and trophic organisation is overall reduced. The levels of food-web activity followed the gradient of organic matter availability at the seafloor, confirming that deep-water ecosystems directly depend on marine snow and are therefore influenced by variations of energy input, such as climate-driven changes.
In addition, simulations of varying marine snow arrival at the seafloor, combined with a hypothetical fishery expansion onto the lower continental slope in the western basin, indicate that the trawling fishery could have an impact an order of magnitude stronger than a climate-driven reduction in marine snow.
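One of the indicators typically compared in such ecological network analyses is the Finn Cycling Index, the fraction of total system throughflow that is recycled, which the study reports increasing along the west-to-east gradient. The sketch below is a minimal, generic implementation of that index; the flow matrix is an illustrative toy web, not data from the study.

```python
import numpy as np

def finn_cycling_index(F, z):
    """F[i, j]: flow from compartment i to j; z[i]: external input to i.
    Returns the fraction of total system throughflow that is recycled."""
    T = F.sum(axis=0) + z                  # total throughflow per compartment
    G = F / T[:, None]                     # G[i, j]: share of T_i passed on to j
    N = np.linalg.inv(np.eye(len(T)) - G)  # Leontief-style path-count matrix
    cycled = (1.0 - 1.0 / np.diag(N)) * T  # throughflow that returns to its source
    return cycled.sum() / T.sum()

# Toy two-compartment web with a detritus loop (illustrative numbers only)
F = np.array([[0.0, 5.0],
              [2.0, 0.0]])
z = np.array([3.0, 0.0])
fci = finn_cycling_index(F, z)  # → 0.4
```

Higher values indicate more material passing repeatedly through the web before export, which is the sense in which the abstract describes recycling increasing eastward.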
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. The Deep Impact spacecraft waits at Astrotech Space Operations in Titusville, Fla., for placement of a protective cover before the canister is installed around it. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians place the lower segments of a protective canister around the Deep Impact spacecraft. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., technicians lower the upper canister toward the Deep Impact spacecraft. It will be attached to the lower segments already surrounding the spacecraft. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians roll the Deep Impact spacecraft into another area where the upper canister can be lowered around it. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., a protective cover is lowered over the Deep Impact spacecraft before the canister is installed around it. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. The Deep Impact spacecraft leaves Astrotech Space Operations in Titusville, Fla., in the pre-dawn hours on a journey to Launch Pad 17-B at Cape Canaveral Air Force Station, Fla. There the spacecraft will be attached to the second stage of the Boeing Delta II rocket. Next the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch and ascent. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians lower a protective cover over the Deep Impact spacecraft before the canister is installed around it. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., technicians install a crane onto the upper canister before lifting it to install around the Deep Impact spacecraft. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Boeing technicians attach the upper canister to the lower segments surrounding the Deep Impact spacecraft. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2005-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., technicians lower the upper canister toward the Deep Impact spacecraft. It will be attached to the lower segments already surrounding the spacecraft. Once the spacecraft is completely covered, it will be transferred to Launch Pad 17-B on Cape Canaveral Air Force Station, Fla. Then, in the mobile service tower, the fairing will be installed around the spacecraft. The fairing is a molded structure that fits flush with the outside surface of the Delta II upper stage booster and forms an aerodynamically smooth joint, protecting the spacecraft during launch. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will reveal the secrets of its interior by collecting pictures and data of how the crater forms, measuring the crater's depth and diameter as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
Detection and Prediction of Hail Storms in Satellite Imagery using Deep Learning
NASA Astrophysics Data System (ADS)
Pullman, M.; Gurung, I.; Ramachandran, R.; Maskey, M.
2017-12-01
Natural hazards, such as damaging hail storms, dramatically disrupt both industry and agriculture, having significant socio-economic impacts in the United States. In 2016, hail was responsible for $3.5 billion and $23 million in damage to property and crops, respectively, making it the second costliest weather phenomenon in the United States that year. The destructive nature and high cost of hail storms have driven research into the development of more accurate hail-prediction algorithms in an effort to mitigate societal impacts. Recently, weather forecasting efforts have turned to deep learning because neural networks can more effectively model the complex, nonlinear, dynamical phenomena that exist in large datasets through multiple stages of transformation and representation. In an effort to improve hail-prediction techniques, we propose a deep learning technique that leverages satellite imagery to detect and predict the occurrence of hail storms. The technique is applied to satellite imagery from 2006 to 2016 for the contiguous United States and incorporates hail reports obtained from the National Centers for Environmental Information Storm Events Database for training and validation purposes. In this presentation, we describe a novel approach to predicting hail via a neural network model that creates a large labeled dataset of hail storms, the accuracy and results of the model, and its applications for improving hail forecasting.
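The labeled-dataset step described in the abstract amounts to a space-time join between satellite image patches and storm reports. The sketch below illustrates that idea only; the record layout, thresholds, and function names are assumptions, not the authors' actual pipeline.

```python
from datetime import datetime, timedelta

def label_patches(patches, hail_reports,
                  max_dt=timedelta(minutes=30), max_deg=0.5):
    """patches: list of (time, lat, lon, pixels); hail_reports: list of
    (time, lat, lon). Returns (pixels, label) pairs, labelled 1 when a hail
    report falls within the time window and lat/lon box of the patch."""
    labeled = []
    for t, lat, lon, pix in patches:
        hit = any(abs((t - rt).total_seconds()) <= max_dt.total_seconds()
                  and abs(lat - rlat) <= max_deg
                  and abs(lon - rlon) <= max_deg
                  for rt, rlat, rlon in hail_reports)
        labeled.append((pix, 1 if hit else 0))
    return labeled

# Toy example: one report near the first patch, none near the second
patches = [(datetime(2016, 6, 1, 18, 0), 40.0, -100.0, "patch_a"),
           (datetime(2016, 6, 1, 18, 0), 30.0, -90.0, "patch_b")]
reports = [(datetime(2016, 6, 1, 18, 10), 40.1, -100.2)]
labeled = label_patches(patches, reports)  # → [("patch_a", 1), ("patch_b", 0)]
```

The resulting (pixels, label) pairs are the kind of supervised training set a convolutional classifier would then be fit to.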
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. At Astrotech Space Operations in Titusville, Fla., Joe Galamback mounts a bracket on a solar panel on the Deep Impact spacecraft. Galamback is a lead mechanic technician with Ball Aerospace and Technologies Corp. in Boulder, Colo. The spacecraft is undergoing verification testing after its long road trip from Colorado. A NASA Discovery mission, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. The spacecraft is scheduled to launch Dec. 30, 2004, aboard a Boeing Delta II rocket from Launch Complex 17 at Cape Canaveral Air Force Station, Fla.
Kahan, Joshua; Urner, Maren; Moran, Rosalyn; Flandin, Guillaume; Marreiros, Andre; Mancini, Laura; White, Mark; Thornton, John; Yousry, Tarek; Zrinzo, Ludvic; Hariz, Marwan; Limousin, Patricia; Friston, Karl; Foltynie, Tom
2014-04-01
Depleted of dopamine, the dynamics of the parkinsonian brain affect both 'action' and 'resting' motor behaviour. Deep brain stimulation has become an established means of managing these symptoms, although its mechanisms of action remain unclear. Non-invasive characterization of induced brain responses, and the effective connectivity underlying them, generally appeals to dynamic causal modelling of neuroimaging data. When the brain is at rest, however, this sort of characterization has been limited to correlations (functional connectivity). In this work, we model the 'effective' connectivity underlying low frequency blood oxygen level-dependent fluctuations in the resting Parkinsonian motor network, disclosing the distributed effects of deep brain stimulation on cortico-subcortical connections. Specifically, we show that subthalamic nucleus deep brain stimulation modulates all the major components of the motor cortico-striato-thalamo-cortical loop, including the cortico-striatal, thalamo-cortical, direct and indirect basal ganglia pathways, and the hyperdirect subthalamic nucleus projections. The strength of effective subthalamic nucleus afferents and efferents was reduced by stimulation, whereas cortico-striatal, thalamo-cortical and direct pathways were strengthened. Remarkably, regression analysis revealed that the hyperdirect, direct, and basal ganglia afferents to the subthalamic nucleus predicted clinical status and therapeutic response to deep brain stimulation; however, suppression of the sensitivity of the subthalamic nucleus to its hyperdirect afferents by deep brain stimulation may subvert its clinical efficacy. Our findings highlight the distributed effects of stimulation on the resting motor network and provide a framework for analysing effective connectivity in resting state functional MRI with strong a priori hypotheses.
Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks.
Tran, Son N; d'Avila Garcez, Artur S
2018-02-01
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for inserting background knowledge into deep networks and for improving learning performance when that knowledge is available, as well as for extracting knowledge from trained deep networks and thereby offering a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language, a set of logical rules that we call confidence rules, and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. With the use of this method, a deep neural-symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
Buetler, Karin A; de León Rodríguez, Diego; Laganaro, Marina; Müri, René; Nyffeler, Thomas; Spierer, Lucas; Annoni, Jean-Marie
2015-11-01
Referred to as orthographic depth, the degree of consistency of grapheme/phoneme correspondences varies across languages, from high in shallow orthographies to low in deep orthographies. The present study investigates the impact of orthographic depth on the reading route by analyzing evoked potentials to words in a deep (French) and a shallow (German) language presented to highly proficient bilinguals. ERP analyses of German and French words revealed significant topographic modulations 240-280 ms post-stimulus onset, indicative of distinct brain networks engaged in reading over this time window. Source estimations revealed that these effects stemmed from modulations of left insular, inferior frontal and dorsolateral regions (German > French) previously associated with phonological processing. Our results show that reading in a shallow language was associated with a stronger engagement of phonological pathways than reading in a deep language. Thus, the lexical pathways favored in word reading are reinforced by phonological networks more strongly in the shallow than in the deep orthography. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
1973-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Network Control System are described.
Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco
2017-01-01
The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
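As a concrete illustration of the contrastive-divergence training of RBMs discussed in the abstract above, the following is a minimal sketch (not the authors' code; the network sizes, learning rate, and random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0 : (batch, n_visible) data batch
    W  : (n_visible, n_hidden) weights; b, c : visible/hidden biases.
    """
    # Positive phase: hidden activations driven by the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Gradient estimate: data correlations minus model correlations.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy run: 6 visible units, 4 hidden units, random binary data.
W = 0.01 * rng.standard_normal((6, 4))
b = np.zeros(6)
c = np.zeros(4)
data = (rng.random((20, 6)) < 0.5).astype(float)
for _ in range(50):
    W, b, c = cd1_step(data, W, b, c)
```

Stacking such RBMs layerwise, each trained on the hidden activations of the previous one, is the standard construction of a deep belief network.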
Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission
NASA Technical Reports Server (NTRS)
Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve
2009-01-01
In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.
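The store-and-forward behavior that DINET exercised (custody of bundles across link outages, with forwarding resumed at the next contact) can be sketched in miniature; this toy model is illustrative only and is not the ION implementation:

```python
from collections import deque

class DtnNode:
    """Minimal store-and-forward sketch of a DTN node: bundles are held
    in local storage until a contact with the next hop is available."""
    def __init__(self, name):
        self.name = name
        self.storage = deque()   # custody: bundles persist across outages
        self.delivered = []

    def receive(self, bundle):
        self.storage.append(bundle)

    def forward(self, next_hop, link_up):
        """Forward stored bundles only while the link is up; otherwise
        keep custody and retry at the next contact."""
        while self.storage and link_up:
            next_hop.delivered.append(self.storage.popleft())

relay = DtnNode("spacecraft")
ground = DtnNode("JPL")
for i in range(3):
    relay.receive(f"image-{i}")
relay.forward(ground, link_up=False)   # outage: nothing is lost
relay.forward(ground, link_up=True)    # contact: bundles delivered in order
```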
Parallel Distributed Processing Theory in the Age of Deep Networks.
Bowers, Jeffrey S
2017-12-01
Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.
Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.
2017-05-01
Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the refinement of network architectures has received much scholarly attention, from a practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are publicly available to date. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how can deep neural networks be used to augment existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.
Temperature based Restricted Boltzmann Machines
NASA Astrophysics Data System (ADS)
Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping
2016-01-01
Restricted Boltzmann machines (RBMs), which apply graphical models to learning probability distributions over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution from which RBMs originate. However, none of the existing schemes have considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
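A minimal sketch of the temperature idea, assuming the logistic-sharpness parameterization described in the abstract (the weights and sizes are illustrative, not the authors' model):

```python
import numpy as np

def hidden_probs(v, W, c, T=1.0):
    """Hidden-unit activation probabilities of a binary RBM at temperature T.

    Dividing the pre-activation by T sets the sharpness of the logistic:
    small T gives near-deterministic, highly selective units, while large
    T drives every probability toward 0.5 (diffuse firing).
    """
    return 1.0 / (1.0 + np.exp(-(v @ W + c) / T))

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 5))        # illustrative weights
c = np.zeros(5)
v = (rng.random(8) < 0.5).astype(float)

cold = hidden_probs(v, W, c, T=0.1)    # sharp, selective activations
hot = hidden_probs(v, W, c, T=10.0)    # diffuse, near-uniform activations
```

On average the low-temperature probabilities sit much farther from 0.5 than the high-temperature ones, which is the selectivity effect the abstract describes.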
Ghasemi, Fahimeh; Fassihi, Afshin; Pérez-Sánchez, Horacio; Mehri Dehnavi, Alireza
2017-02-05
Thousands of molecules and descriptors are available to a medicinal chemist thanks to technological advancements in different branches of chemistry. This fact, as well as the correlations among descriptors, has raised new problems in quantitative structure-activity relationship studies. Proper parameter initialization in statistical modeling has emerged as another challenge in recent years: random selection of parameters leads to poor performance of a deep neural network (DNN). In this research, deep belief networks (DBNs) were applied to initialize DNNs. A DBN is composed of stacked restricted Boltzmann machines, an energy-based method that requires computing the log-likelihood gradient for all samples. Three different sampling approaches were suggested to estimate this gradient. In this respect, DBNs based on each of the three sampling approaches were applied to initialize the DNN architecture for predicting the biological activity of all fifteen Kaggle targets, which contain more than 70k molecules. As in other fields, these models demonstrated significant superiority over a DNN with randomly initialized parameters. © 2016 Wiley Periodicals, Inc.
Is Multitask Deep Learning Practical for Pharma?
Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay
2017-08-28
Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
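The core idea of a multitask deep network, a shared feature trunk with one output head per task, can be sketched as a toy forward pass (this is not the DeepChem implementation; all sizes and weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk: hidden-layer weights learned jointly across all tasks,
# so every assay's data shapes the common representation.
W_shared = 0.1 * rng.standard_normal((32, 16))
# Per-task heads: independent output weights, one per assay/task.
n_tasks = 3
W_heads = [0.1 * rng.standard_normal(16) for _ in range(n_tasks)]

def forward(x):
    """Multitask forward pass: shared features feed task-specific scores."""
    h = relu(x @ W_shared)
    return [float(h @ w) for w in W_heads]

x = rng.standard_normal(32)   # e.g. a fixed-length molecular descriptor
scores = forward(x)           # one predicted score per task
```

During training, the shared trunk receives gradients from every task's loss, which is the source of the transfer effect multitask studies measure.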
Topology reduction in deep convolutional feature extraction networks
NASA Astrophysics Data System (ADS)
Wiatowski, Thomas; Grohs, Philipp; Bölcskei, Helmut
2017-08-01
Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and 10,000s of nodes. Such network sizes entail significant computational complexity due to the large number of convolutions that need to be carried out; in addition, a large number of parameters needs to be learned and stored. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints as is the case, e.g., in low-power embedded and mobile platforms. This paper aims at understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction capabilities. We address this question for the class of scattering networks that employ either Weyl-Heisenberg filters or wavelets, the modulus non-linearity, and no pooling. The exponential feature map energy decay results in Wiatowski et al., 2017, are generalized to O(a^(-N)), where an arbitrary decay factor a > 1 can be realized through suitable choice of the Weyl-Heisenberg prototype function or the mother wavelet. We then show how networks of fixed (possibly small) depth N can be designed to guarantee that ((1 - ɛ) · 100)% of the input signal's energy are contained in the feature vector. Based on the notion of operationally significant nodes, we characterize, partly rigorously and partly heuristically, the topology-reducing effects of (effectively) band-limited input signals, band-limited filters, and feature map symmetries. Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth N, the average number of operationally significant nodes per layer.
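A one-layer sketch of a scattering network of the kind analyzed above: filter convolution followed by the modulus non-linearity, with no pooling. The 1-D test signal and the Haar-style low-/high-pass filter pair are illustrative assumptions, not the paper's filter banks:

```python
import numpy as np

def scattering_layer(x, filters):
    """One layer of a scattering network: convolve the input with each
    filter and apply the modulus non-linearity (no pooling)."""
    return [np.abs(np.convolve(x, h, mode="same")) for h in filters]

# Toy 1-D signal and a Haar-style low-/high-pass pair.
x = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))
filters = [np.array([0.5, 0.5]), np.array([0.5, -0.5])]
layer1 = scattering_layer(x, filters)

# Feature-map energy at depth 1; the decay results above bound how fast
# this energy shrinks as the maps are propagated to deeper layers.
energy = sum(float(np.sum(u ** 2)) for u in layer1)
```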
Evaluation of Deep Learning Models for Predicting CO2 Flux
NASA Astrophysics Data System (ADS)
Halem, M.; Nguyen, P.; Frankel, D.
2017-12-01
Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability and complexity of interactions between physical variables such as wind and temperature, and indirect variables like latent heat, sensible heat, etc. We evaluate two deep learning models, feed-forward and recurrent neural network models, to learn how each responds to the physical measurements and to the time dependency of the measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE data from tower Atmospheric Radiation Measurement data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy and predictions of surface flux; c) assessing the applicability of the neural network models to estimating CO2 flux using OCO-2 satellite data; d) studying the efficiency of GPU acceleration for neural network performance using IBM PowerAI deep learning software and packages on the IBM Minsky system.
Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.
Nitta, Tohru
2017-10-01
We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better for image retrieval than direct extraction of image visual features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
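The two ingredients named in the abstract, the PReLU activation and an L1 regularization term, can be sketched directly (the slope, penalty weight, and sample values are illustrative, not the authors' settings):

```python
import numpy as np

def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, learnable slope a for negatives."""
    return np.where(x > 0, x, a * x)

def l1_penalty(weights, lam=1e-4):
    """L1 regularization term added to the training loss; it pushes
    weights toward zero, which helps curb over-fitting."""
    return lam * sum(np.abs(w).sum() for w in weights)

x = np.array([-2.0, -0.5, 0.0, 1.5])
y = prelu(x)                      # [-0.5, -0.125, 0.0, 1.5]
W = [np.array([[1.0, -2.0], [0.5, 0.0]])]
reg = l1_penalty(W, lam=0.01)     # 0.01 * (1 + 2 + 0.5 + 0) = 0.035
```

In training, `reg` is simply added to the retrieval loss before back-propagation, and the PReLU slope `a` is learned per channel alongside the other weights.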
The Deep Space Network, volume 17
NASA Technical Reports Server (NTRS)
1973-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Network Control System are described.
Stable architectures for deep neural networks
NASA Astrophysics Data System (ADS)
Haber, Eldad; Ruthotto, Lars
2018-01-01
Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
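A minimal sketch of the ODE view of forward propagation described above: residual layers as forward-Euler steps, with an antisymmetric weight matrix as one stabilization strategy (the sizes, step size, and activation are illustrative assumptions, not the paper's exact architectures):

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_euler_net(y0, K, b, h=0.1, n_layers=20):
    """Residual forward propagation y_{j+1} = y_j + h * tanh(K y_j + b),
    i.e. forward-Euler steps of the ODE y'(t) = tanh(K y + b)."""
    y = y0
    for _ in range(n_layers):
        y = y + h * np.tanh(K @ y + b)
    return y

n = 4
M = rng.standard_normal((n, n))
K = 0.5 * (M - M.T)   # antisymmetric: purely imaginary eigenvalues, so the
                      # continuous dynamics neither blow up nor die out
b = np.zeros(n)
y0 = rng.standard_normal(n)
yT = forward_euler_net(y0, K, b)
```

The stability of the discrete ODE is what controls whether gradients explode or vanish as the depth (number of Euler steps) grows.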
The Deep Space Network. [tracking and communication functions and facilities
NASA Technical Reports Server (NTRS)
1974-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Network Control System are described.
The deep space network, volume 13
NASA Technical Reports Server (NTRS)
1973-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The deep space instrumentation facility, the ground communications facility, and the network control system are described. Other areas reported include: Helios Mission support, DSN support of the Mariner Mars 1971 extended mission, Mariner Venus/Mercury 1973 mission support, Viking mission support, radio science, tracking and ground-based navigation, network control and data processing, and deep space stations.
Sadeghi, Zahra
2016-09-01
In this paper, I investigate conceptual categories derived from developmental processing in a deep neural network. The similarity matrices of the deep representation at each layer of the neural network are computed and compared with the raw representation. While the clusters generated by the raw representation stand at the basic level of abstraction, conceptual categories obtained from the deep representation show a bottom-up transition procedure. Results demonstrate a developmental course of learning from a specific to a general level of abstraction through learned layers of representations in a deep belief network. © The Author(s) 2016.
A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction
Spencer, Matt; Eickholt, Jesse; Cheng, Jianlin
2014-01-01
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80% and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test data set of 198 proteins, achieving a Q3 accuracy of 80.7% and a Sov accuracy of 74.2%. PMID:25750595
A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction.
Spencer, Matt; Eickholt, Jesse; Jianlin Cheng
2015-01-01
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent.
Detection of Hail Storms in Radar Imagery Using Deep Learning
NASA Technical Reports Server (NTRS)
Pullman, Melinda; Gurung, Iksha; Ramachandran, Rahul; Maskey, Manil
2017-01-01
In 2016, hail was responsible for $3.5 billion and $23 million in damage to property and crops, respectively, making it the second costliest weather phenomenon in the United States. In an effort to improve hail-prediction techniques and reduce the societal impacts associated with hail storms, we propose a deep learning technique that leverages radar imagery for automatic detection of hail storms. The technique is applied to radar imagery from 2011 to 2016 for the contiguous United States and achieved a precision of 0.848. Hail storms are primarily detected through the visual interpretation of radar imagery (Mroz et al., 2017). With radars providing data every two minutes, the detection of hail storms has become a big data task. As a result, scientists have turned to neural networks that employ computer vision to identify hail-bearing storms (Marzban et al., 2001). In this study, we propose a deep Convolutional Neural Network (ConvNet) to understand the spatial features and patterns of radar echoes for detecting hailstorms.
Delay/Disruption Tolerant Networking for the International Space Station (ISS)
NASA Technical Reports Server (NTRS)
Schlesinger, Adam; Willman, Brett M.; Pitts, Lee; Davidson, Suzanne R.; Pohlchuck, William A.
2017-01-01
Disruption Tolerant Networking (DTN) is an emerging data networking technology designed to abstract the hardware communication layer from the spacecraft/payload computing resources. DTN is specifically designed to operate in environments where link delays and disruptions are common (e.g., space-based networks). The National Aeronautics and Space Administration (NASA) has demonstrated DTN on several missions, such as the Deep Impact Networking (DINET) experiment, the Earth Observing Mission 1 (EO-1) and the Lunar Laser Communication Demonstration (LLCD). To further the maturation of DTN, NASA is implementing DTN protocols on the International Space Station (ISS). This paper explains the architecture of the ISS DTN network, the operational support for the system, the results from integrated ground testing, and the future work for DTN expansion.
Deep space network energy program
NASA Technical Reports Server (NTRS)
Friesema, S. E.
1980-01-01
If the Deep Space Network is to exist in a cost effective and reliable manner in the next decade, the problems presented by international energy cost increases and energy availability must be addressed. The Deep Space Network Energy Program was established to implement solutions compatible with the ongoing development of the total network.
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-03-01
With the advancement of VLSI technology nodes, lithographic hotspots caused by light diffraction have become a serious problem affecting manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have achieved satisfactory performance, with extreme scaling of transistor feature size and more and more complicated layout patterns, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. In this paper, we present a deep convolutional neural network (CNN) targeting representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyper-parameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always minorities in VLSI mask design, the training data set is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high false negative rate (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply minority upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves highly comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
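The minority-upsampling-with-mirror-flipping step described in the abstract can be sketched as a toy balancing routine on random "layout clips" (illustrative only; not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)

def balance_with_flips(patches, labels):
    """Upsample the minority (hotspot) class by appending random-mirror
    flipped copies until both classes are equally represented."""
    patches, labels = np.asarray(patches), np.asarray(labels)
    minority = 1 if (labels == 1).sum() < (labels == 0).sum() else 0
    pool = patches[labels == minority]
    need = abs(int((labels == 0).sum() - (labels == 1).sum()))
    extra = []
    for _ in range(need):
        p = pool[rng.integers(len(pool))]
        axis = int(rng.integers(2))          # mirror horizontally or vertically
        extra.append(np.flip(p, axis=axis))
    new_patches = np.concatenate([patches, np.stack(extra)])
    new_labels = np.concatenate([labels, np.full(need, minority)])
    return new_patches, new_labels

# Toy layout clips: 10 non-hotspots, 2 hotspots.
X = rng.random((12, 4, 4))
y = np.array([0] * 10 + [1] * 2)
Xb, yb = balance_with_flips(X, y)
```

Because the flipped copies are not pixel-identical to their sources, the balanced set adds variety rather than pure duplication, which helps reduce the false-negative rate on the rare class.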
The U.S. Rosetta Project : eighteen months in flight
NASA Technical Reports Server (NTRS)
Alexander, Claudia J.; Gulkis, Samuel; Frerking, Margaret A.; Holmes, Dwight P.; Weissman, Paul A.; Burch, J.; Stern, A.; Goldstein, R.; Parker, J.; Cravens, T.;
2006-01-01
In this paper we will update the status of the instruments following the commissioning exercise, an exercise that was only partially complete when a report was prepared for the 2005 IEEE conference. We will present an overview of the 2005 Earth/Moon activities, and the Deep Impact set of observations. The paper will also provide an update of the role of NASA's Deep Space Network in supporting an ESA request for Delta Differential One-way Ranging to provide improved tracking and navigation capability in preparation for the Mars flyby in 2007.
Conditional random field modelling of interactions between findings in mammography
NASA Astrophysics Data System (ADS)
Kooi, Thijs; Mordang, Jan-Jurre; Karssemeijer, Nico
2017-03-01
Recent breakthroughs in training deep neural network architectures, in particular deep convolutional neural networks (CNNs), have made a big impact on vision research and are increasingly responsible for advances in Computer Aided Diagnosis (CAD). Since many natural scenes and medical images vary in size and are too large to feed to the networks as a whole, two-stage systems are typically employed, where in the first stage small regions of interest in the image are located and presented to the network as training and test data. These systems allow us to harness accurate region-based annotations, making the problem easier to learn. However, information is processed purely locally and context is not taken into account. In this paper, we present preliminary work on the employment of a Conditional Random Field (CRF), trained on top of the CNN, to model contextual interactions such as the presence of other suspicious regions for mammography CAD. The model can easily be extended to incorporate other sources of information, such as symmetry, temporal change, and various patient covariates, and is general in the sense that it can be applied to other CAD problems.
Extreme Event impacts on Seafloor Ecosystems
NASA Astrophysics Data System (ADS)
Canals, Miquel; Sanchez-Vidal, Anna; Calafat, Antoni; Pedrosa-Pàmies, Rut; Lastras, Galderic
2013-04-01
The Mediterranean region has one of the highest concentrations of cyclogenesis during the northern hemisphere winter, and thus is frequently subjected to sudden extreme weather events. The highest frequency of storm winds occurs in its northwestern basin, associated with NE and NW storms. The occurrence of such extreme climatic events represents an opportunity of high scientific value to investigate how natural processes at their peaks of activity transfer matter and energy, and how they impact ecosystems. Because of the approximately NE-SW orientation of the western Mediterranean coast, wind-forced motion from eastern storms generates the most intense, longest-fetch waves on the continental shelf and coast, causing beach erosion, overwash and inundation of low-lying areas, and damage to infrastructure and coastal resources. On December 26, 2008 a huge storm afforded us the opportunity to understand the effect of storms on deep-sea ecosystems, as it violently impacted an area of the Catalan coast covered by a dense network of monitoring devices, including sediment traps and current meters. The storm, with measured wind gusts of more than 70 km h-1 and associated storm surge reaching 8 m, led to the remobilisation of a large shallow-water reservoir of marine organic carbon associated with fine particles and its redistribution across the deep basin, and also set in motion large amounts of coarse shelf sediment, resulting in the abrasion and burial of benthic communities. In addition to eastern storms, increasing evidence has accumulated during the last few years showing the significance of Dense Shelf Water Cascading (DSWC), a type of marine current driven exclusively by seawater density contrasts caused by strong and persistent NW winds, as a key driver of the deep Mediterranean Sea in many aspects.
A network of mooring lines with sediment traps and current meters deployed in the Cap de Creus canyon in winter 2005-06 recorded a major DSWC event, the latest to date. The data show that DSWC modifies the properties of intermediate and deep waters; carries massive amounts of organic carbon to the basin, thus fuelling the deep ecosystem; transports huge quantities of coarse and fine sedimentary particles that abrade canyon floors and raise the load of suspended particles; and exports pollutants from the coastal area to deeper compartments. Our findings demonstrate that both types of climate-driven extreme events (coastal storms and DSWC) are highly efficient at transporting organic carbon from shallow to deep waters, thus contributing to its sequestration, and have the potential to profoundly impact deep-sea ecosystems.
NASA Technical Reports Server (NTRS)
1979-01-01
Deep Space Network progress in flight project support, tracking and data acquisition, research and technology, network engineering, hardware and software implementation, and operations is cited. Topics covered include: tracking and ground based navigation; spacecraft/ground communication; station control and operations technology; ground communications; and deep space stations.
NASA Technical Reports Server (NTRS)
Thorman, H. C.
1975-01-01
Key characteristics of the Deep Space Network Test and Training System were presented. Completion of the Mark III-75 system implementation is reported. Plans are summarized for upgrading the system to a Mark III-77 configuration to support Deep Space Network preparations for the Mariner Jupiter/Saturn 1977 and Pioneer Venus 1978 missions. A general description of the Deep Space Station, Ground Communications Facility, and Network Operations Control Center functions that comprise the Deep Space Network Test and Training System is also presented.
Development and application of deep convolutional neural network in target detection
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Wang, Chunping; Fu, Qiang
2018-04-01
With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.
NASA Technical Reports Server (NTRS)
1974-01-01
The progress is reported of Deep Space Network (DSN) research in the following areas: (1) flight project support, (2) spacecraft/ground communications, (3) station control and operations technology, (4) network control and processing, and (5) deep space stations. A description of the DSN functions and facilities is included.
The Deep Space Network. An instrument for radio navigation of deep space probes
NASA Technical Reports Server (NTRS)
Renzetti, N. A.; Jordan, J. F.; Berman, A. L.; Wackley, J. A.; Yunck, T. P.
1982-01-01
The Deep Space Network (DSN) network configurations used to generate the navigation observables and the basic process of deep space spacecraft navigation, from data generation through flight path determination and correction are described. Special emphasis is placed on the DSN Systems which generate the navigation data: the DSN Tracking and VLBI Systems. In addition, auxiliary navigational support functions are described.
Two-Stage Approach to Image Classification by Deep Neural Networks
NASA Astrophysics Data System (ADS)
Ososkov, Gennady; Goncharov, Pavel
2018-02-01
The paper demonstrates the advantages of deep learning networks over ordinary neural networks through a comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are then compared as classifiers. Most of the effort in working with deep learning networks goes into the painstaking optimization of their structures and components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve performance and speed up training. It is also shown that deep autoencoders develop a remarkable ability to denoise images after special training. Convolutional neural networks are also used to solve a topical problem in protein genetics, using durum wheat classification as an example. The results of our comparative study demonstrate the clear advantage of deep networks, as well as the denoising power of autoencoders. In our work we use both GPU and cloud services to speed up the calculations.
A deep learning framework for causal shape transformation.
Lore, Kin Gwn; Stoecklein, Daniel; Davies, Michael; Ganapathysubramanian, Baskar; Sarkar, Soumik
2018-02-01
Recurrent neural networks (RNN) and Long Short-Term Memory (LSTM) networks are the common go-to architectures for exploiting sequential information where the output depends on a sequence of inputs. However, in most considered problems, the dependencies typically lie in the latent domain, which may not be suitable for applications involving the prediction of a step-wise transformation sequence that depends on previous states only in the visible domain, with a known terminal state. We propose a hybrid architecture of convolutional neural networks (CNN) and stacked autoencoders (SAE) to learn a sequence of causal actions that nonlinearly transform an input visual pattern or distribution into a target visual pattern or distribution with the same support, and demonstrate its practicality in a real-world engineering problem involving the physics of fluids. We solved a high-dimensional one-to-many inverse mapping problem concerning microfluidic flow sculpting, where the use of deep learning methods as an inverse map has seldom been explored. This work serves as a fruitful use case for applied scientists and engineers in how deep learning can be beneficial as a solution for high-dimensional physical problems, potentially opening doors to impactful advances in fields such as materials science and medical biology, where multistep topological transformations are a key element. Copyright © 2017 Elsevier Ltd. All rights reserved.
Facial expression recognition based on improved deep belief networks
NASA Astrophysics Data System (ADS)
Wu, Yao; Qiu, Weigen
2017-08-01
To improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract facial features, and then uses the improved deep belief networks as the detector and classifier on those LBP features, realizing the combination of LBP and improved deep belief networks for facial expression recognition. Experiments on the JAFFE (Japanese Female Facial Expression) database show that the recognition rate is significantly improved.
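The LBP features this method feeds to the DBNs can be illustrated with a minimal sketch of the basic radius-1, 8-neighbour operator. Real pipelines typically use rotation-invariant variants and block-wise histograms, and the function names here are hypothetical:

```python
import numpy as np

def lbp_image(gray):
    """Compute the basic 8-neighbour Local Binary Pattern code for each
    interior pixel of a grayscale image: each neighbour contributes one
    bit, set when its intensity is >= the centre pixel's intensity."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]
    # Neighbour window offsets in clockwise order from the top-left pixel.
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(gray, bins=256):
    """Normalised LBP code histogram, used as the feature vector."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)
```

The normalised histogram is what would be passed to the DBN classifier; a uniform image produces the all-ones code (255) at every pixel, a useful sanity check.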
The deep space network, volume 7
NASA Technical Reports Server (NTRS)
1972-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Space Flight Operations Facility are described.
Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis
NASA Astrophysics Data System (ADS)
Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr
2017-10-01
Most modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. Deep learning neural networks are the most promising modern technique for separating signal from background and can nowadays be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.
NASA Technical Reports Server (NTRS)
1977-01-01
Presented is Deep Space Network (DSN) progress in flight project support, tracking and data acquisition (TDA) research and technology, network engineering, hardware and software implementation, and operations.
NASA Technical Reports Server (NTRS)
1975-01-01
Summaries are given of Deep Space Network progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations.
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-07-01
With the advancement of very large scale integrated circuits (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacture yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
From principles to practice: a spatial approach to systematic conservation planning in the deep sea.
Wedding, L M; Friedlander, A M; Kittinger, J N; Watling, L; Gaines, S D; Bennett, M; Hardy, S M; Smith, C R
2013-12-22
Increases in the demand and price for industrial metals, combined with advances in technological capabilities, have now made deep-sea mining more feasible and economically viable. In order to balance economic interests with the conservation of abyssal plain ecosystems, it is becoming increasingly important to develop a systematic approach to spatial management and zoning of the deep sea. Here, we describe an expert-driven systematic conservation planning process applied to inform science-based recommendations to the International Seabed Authority for a system of deep-sea marine protected areas (MPAs) to safeguard biodiversity and ecosystem function in an abyssal Pacific region targeted for nodule mining (e.g. the Clarion-Clipperton fracture zone, CCZ). Our use of geospatial analysis and expert opinion in forming the recommendations allowed us to stratify the proposed network by biophysical gradients, maximize the number of biologically unique seamounts within each subregion, and minimize socioeconomic impacts. The resulting proposal for an MPA network (nine replicate 400 × 400 km MPAs) covers 24% (1 440 000 km(2)) of the total CCZ planning region and serves as an example of swift and pre-emptive conservation planning across an unprecedented area in the deep sea. As pressure from resource extraction increases in the future, the scientific guiding principles outlined in this research can serve as a basis for collaborative international approaches to ocean management.
From principles to practice: a spatial approach to systematic conservation planning in the deep sea
Wedding, L. M.; Friedlander, A. M.; Kittinger, J. N.; Watling, L.; Gaines, S. D.; Bennett, M.; Hardy, S. M.; Smith, C. R.
2013-01-01
Increases in the demand and price for industrial metals, combined with advances in technological capabilities, have now made deep-sea mining more feasible and economically viable. In order to balance economic interests with the conservation of abyssal plain ecosystems, it is becoming increasingly important to develop a systematic approach to spatial management and zoning of the deep sea. Here, we describe an expert-driven systematic conservation planning process applied to inform science-based recommendations to the International Seabed Authority for a system of deep-sea marine protected areas (MPAs) to safeguard biodiversity and ecosystem function in an abyssal Pacific region targeted for nodule mining (e.g. the Clarion–Clipperton fracture zone, CCZ). Our use of geospatial analysis and expert opinion in forming the recommendations allowed us to stratify the proposed network by biophysical gradients, maximize the number of biologically unique seamounts within each subregion, and minimize socioeconomic impacts. The resulting proposal for an MPA network (nine replicate 400 × 400 km MPAs) covers 24% (1 440 000 km2) of the total CCZ planning region and serves as an example of swift and pre-emptive conservation planning across an unprecedented area in the deep sea. As pressure from resource extraction increases in the future, the scientific guiding principles outlined in this research can serve as a basis for collaborative international approaches to ocean management. PMID:24197407
Deep Space Networking Experiments on the EPOXI Spacecraft
NASA Technical Reports Server (NTRS)
Jones, Ross M.
2011-01-01
NASA's Space Communications & Navigation Program within the Space Operations Directorate is operating a program to develop and deploy Disruption Tolerant Networking (DTN) technology for a wide variety of mission types by the end of 2011. DTN is an enabling element of the Interplanetary Internet, where terrestrial networking protocols are generally unsuitable because they rely on timely and continuous end-to-end delivery of data and acknowledgments. In the fall of 2008, 2009, and 2011 the Jet Propulsion Laboratory installed and tested essential elements of DTN technology on the Deep Impact spacecraft. These experiments, called the Deep Impact Network Experiments (DINET), were performed in close cooperation with the EPOXI project, which has responsibility for the spacecraft. The DINET 1 software was installed on the backup software partition of the backup flight computer. For DINET 1, the spacecraft was at a distance of about 15 million miles (24 million kilometers) from Earth. During DINET 1, 300 images were transmitted from the JPL nodes to the spacecraft. They were then automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. The first DINET experiment successfully validated many of the essential elements of the DTN protocols. DINET 2 demonstrated additional DTN functionality, automated certain tasks that were performed manually in DINET 1, and installed the ION software on nodes outside of JPL. DINET 3 plans to: 1) upgrade the LTP convergence-layer adapter to conform to the international LTP CL specification, 2) add convergence-layer "stewardship" procedures, and 3) add the BSP security elements (PIB and PCB).
This paper describes the planning and execution of the flight experiment and the validation results.
Next-Generation Machine Learning for Biological Networks.
Camacho, Diogo M; Collins, Katherine M; Powers, Rani K; Costello, James C; Collins, James J
2018-06-14
Machine learning, a collection of data-analytical techniques aimed at building predictive models from multi-dimensional datasets, is becoming integral to modern biological research. By enabling one to generate models that learn from large datasets and make predictions on likely outcomes, machine learning can be used to study complex cellular systems such as biological networks. Here, we provide a primer on machine learning for life scientists, including an introduction to deep learning. We discuss opportunities and challenges at the intersection of machine learning and network biology, which could impact disease biology, drug discovery, microbiome research, and synthetic biology. Copyright © 2018 Elsevier Inc. All rights reserved.
Constructing fine-granularity functional brain network atlases via deep convolutional autoencoder.
Zhao, Yu; Dong, Qinglin; Chen, Hanbo; Iraji, Armin; Li, Yujie; Makkie, Milad; Kou, Zhifeng; Liu, Tianming
2017-12-01
State-of-the-art functional brain network reconstruction methods such as independent component analysis (ICA) or sparse coding of whole-brain fMRI data can effectively infer many thousands of volumetric brain network maps from a large number of human brains. However, due to the variability of individual brain networks and the large scale of such networks needed for statistically meaningful group-level analysis, it is still a challenging and open problem to derive group-wise common networks as network atlases. Inspired by the superior spatial pattern description ability of deep convolutional neural networks (CNNs), a novel deep 3D convolutional autoencoder (CAE) network is designed here to extract spatial brain network features effectively, on top of which an Apache Spark-enabled computational framework is developed for fast clustering of a large number of network maps into fine-granularity atlases. To evaluate this framework, 10 resting state networks (RSNs) were manually labeled from the sparsely decomposed networks of Human Connectome Project (HCP) fMRI data, and 5275 network training samples were obtained in total. The deep CAE models are then trained on these functional networks' spatial maps, and the learned features are used to refine the original 10 RSNs into 17 network atlases that possess fine-granularity functional network patterns. Interestingly, it turned out that some manually mislabeled outliers in the training networks could be corrected by the deep CAE derived features. More importantly, fine granularities of networks can be identified, and they reveal unique network patterns specific to different brain task states. By further applying this method to a mild traumatic brain injury dataset, we show that the technique can effectively identify abnormal small networks in brain injury patients in comparison with controls.
In general, our work presents a promising deep learning and big data analysis solution for modeling functional connectomes, with fine granularities, based on fMRI data. Copyright © 2017 Elsevier B.V. All rights reserved.
Wishart Deep Stacking Network for Fast POLSAR Image Classification.
Jiao, Licheng; Liu, Fang
2016-05-11
Inspired by the popular deep learning architecture, the Deep Stacking Network (DSN), a specific deep model for polarimetric synthetic aperture radar (POLSAR) image classification is proposed in this paper, named the Wishart Deep Stacking Network (W-DSN). First, a fast implementation of the Wishart distance is achieved by a special linear transformation, which speeds up the classification of POLSAR images and makes it possible to use this polarimetric information in the subsequent neural network (NN). Then a single-hidden-layer neural network based on the fast Wishart distance is defined for POLSAR image classification, named the Wishart Network (WN), which improves classification accuracy. Finally, a multi-layer neural network is formed by stacking WNs; this is the proposed deep learning architecture W-DSN for POLSAR image classification, and it improves classification accuracy further. In addition, the structure of the WN can be expanded in a straightforward way by adding hidden units if necessary, as can the structure of the W-DSN. As a preliminary exploration of formulating a specific deep learning architecture for POLSAR image classification, the proposed methods may establish a simple but effective connection between POLSAR image interpretation and deep learning. Experimental results on real POLSAR images show that the fast implementation of the Wishart distance is very efficient (a POLSAR image with 768000 pixels can be classified in 0.53 s), and both the single-hidden-layer architecture WN and the deep learning architecture W-DSN perform well and work efficiently.
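The core idea of casting the Wishart distance as a linear transformation can be sketched using the standard distance d(T, Tm) = ln|Tm| + Tr(Tm⁻¹T) between a pixel's coherency matrix T and a class center Tm: because the trace term is linear in the entries of T, all class distances for all pixels reduce to one affine map over flattened matrices. This is a plausible reading of the paper's fast implementation, not its exact formulation:

```python
import numpy as np

def wishart_distances(T_pixels, class_centers):
    """Wishart distance d(T, Tm) = ln|Tm| + Tr(Tm^{-1} T) between each
    pixel coherency matrix and each class center, computed as a single
    affine transform over flattened matrices.
    T_pixels: (n, k, k), class_centers: (m, k, k)."""
    n, k, _ = T_pixels.shape
    inv_c = np.linalg.inv(class_centers)                      # (m, k, k)
    # Tr(A B) = sum((A^T) * B) elementwise, so each weight row is
    # the flattened transpose of Tm^{-1}.
    W = inv_c.transpose(0, 2, 1).reshape(len(class_centers), -1)
    b = np.log(np.linalg.det(class_centers))                  # (m,)
    X = T_pixels.reshape(n, -1)
    # All pixel-to-class distances in one matrix product plus bias.
    return np.real(X @ W.T + b)

def classify(T_pixels, class_centers):
    """Assign each pixel the class of minimum Wishart distance."""
    return np.argmin(wishart_distances(T_pixels, class_centers), axis=1)
```

Because the distance computation is exactly an affine layer (weights W, bias b), it slots directly into a neural network, which is what makes the WN/W-DSN construction natural.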
deepNF: Deep network fusion for protein function prediction.
Gligorijevic, Vladimir; Barot, Meet; Bonneau, Richard
2018-06-01
The prevalence of high-throughput experimental methods has resulted in an abundance of large-scale molecular and functional interaction networks. The connectivity of these networks provides a rich source of information for inferring functional annotations for genes and proteins. An important challenge has been to develop methods for combining these heterogeneous networks to extract useful protein feature representations for function prediction. Most of the existing approaches for network integration use shallow models that encounter difficulty in capturing complex and highly-nonlinear network structures. Thus, we propose deepNF, a network fusion method based on Multimodal Deep Autoencoders to extract high-level features of proteins from multiple heterogeneous interaction networks. We apply this method to combine STRING networks to construct a common low-dimensional representation containing high-level protein features. We use separate layers for different network types in the early stages of the multimodal autoencoder, later connecting all the layers into a single bottleneck layer from which we extract features to predict protein function. We compare the cross-validation and temporal holdout predictive performance of our method with state-of-the-art methods, including the recently proposed method Mashup. Our results show that our method outperforms previous methods for both human and yeast STRING networks. We also show substantial improvement in the performance of our method in predicting GO terms of varying type and specificity. deepNF is freely available at: https://github.com/VGligorijevic/deepNF. vgligorijevic@flatironinstitute.org, rb133@nyu.edu. Supplementary data are available at Bioinformatics online.
Movahedi, Faezeh; Coyle, James L; Sejdic, Ervin
2018-05-01
Deep learning, a relatively new branch of machine learning, has been investigated for use in a variety of biomedical applications. Deep learning algorithms have been used to analyze different physiological signals and gain a better understanding of human physiology for automated diagnosis of abnormal conditions. In this paper, we provide an overview of deep learning approaches with a focus on deep belief networks in electroencephalography applications. We investigate the state-of-the-art algorithms for deep belief networks and then cover the application of these algorithms and their performance in electroencephalographic applications. We cover various applications of electroencephalography in medicine, including emotion recognition, sleep stage classification, and seizure detection, in order to understand how deep learning algorithms could be modified to better suit the tasks desired. This review is intended to provide researchers with a broad overview of the currently existing deep belief network methodology for electroencephalography signals, as well as to highlight potential challenges for future research.
NASA Technical Reports Server (NTRS)
1977-01-01
A Deep Space Network progress report is presented dealing with flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations.
Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.
Kang, Min-Joo; Kang, Je-Won
2016-01-01
A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters of the DNN are trained with probability-based feature vectors extracted from in-vehicle network packets. For a given packet, the DNN provides the probability of each class, discriminating normal and attack packets, and thus the sensor can identify any malicious attack on the vehicle. Compared to a traditional artificial neural network applied to an IDS, the proposed technique adopts recent advances in deep learning, such as initializing the parameters through unsupervised pre-training of deep belief networks (DBN), thereby improving detection accuracy. Experimental results demonstrate that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on a controller area network (CAN) bus.
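One plausible reading of the probability-based feature vectors is the empirical bit-occurrence statistics of CAN data fields over a window of packets. The sketch below illustrates that idea; the function name and windowing scheme are assumptions, not the authors' exact extraction procedure:

```python
import numpy as np

def bit_probability_features(packets):
    """Convert a window of CAN data fields (each an 8-byte payload)
    into a probability-based feature vector: the empirical probability
    that each of the 64 bit positions is set across the window.
    `packets` is an iterable of 8-byte payloads (bytes or int lists)."""
    arr = np.array([list(p) for p in packets], dtype=np.uint8)  # (n, 8)
    bits = np.unpackbits(arr, axis=1)                           # (n, 64)
    return bits.mean(axis=0)                                    # (64,)
```

A DNN trained on such vectors sees how bit-position usage shifts under attack traffic (e.g. flooding with fixed payloads) relative to normal driving data.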
Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security
Kang, Min-Joo
2016-01-01
A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters of the DNN are trained with probability-based feature vectors extracted from in-vehicle network packets. For a given packet, the DNN provides the probability of each class, discriminating normal and attack packets, and thus the sensor can identify any malicious attack on the vehicle. Compared to a traditional artificial neural network applied to an IDS, the proposed technique adopts recent advances in deep learning, such as initializing the parameters through unsupervised pre-training of deep belief networks (DBN), thereby improving detection accuracy. Experimental results demonstrate that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on a controller area network (CAN) bus. PMID:27271802
NASA Technical Reports Server (NTRS)
1975-01-01
The objectives, functions, and organization of the Deep Space Network are summarized along with deep space station, ground communication, and network operations control capabilities. Mission support of ongoing planetary/interplanetary flight projects is discussed with emphasis on Viking orbiter radio frequency compatibility tests, the Pioneer Venus orbiter mission, and Helios-1 mission status and operations. Progress is also reported in tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations.
NASA Technical Reports Server (NTRS)
1974-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. Deep space stations, ground communications, and network operations control capabilities are described. The network is designed for two-way communications with unmanned spacecraft traveling from approximately 1600 km from Earth to the farthest planets in the solar system. It has provided tracking and data acquisition support for the following projects: Ranger, Surveyor, Mariner, Pioneer, Apollo, Helios, Viking, and Lunar Orbiter.
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task; e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Different from the traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
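The connection-removal and binary-integer representation ideas can be illustrated with a crude magnitude-pruning sketch. The threshold, layer size, and int8 quantization below are invented for illustration; the paper's adaptive criterion is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# A dense weight matrix standing in for one layer of a deep network.
W = rng.standard_normal((64, 64))

# Remove the majority of connections by magnitude (illustrative threshold:
# keep only the top 5% of weights by absolute value).
threshold = np.quantile(np.abs(W), 0.95)
mask = np.abs(W) >= threshold

# Represent surviving connections with low-precision signed integers,
# echoing the binary-integer representation of reserved weights.
scale = np.abs(W[mask]).max() / 127.0
W_q = np.zeros_like(W, dtype=np.int8)
W_q[mask] = np.round(W[mask] / scale).astype(np.int8)

sparsity = 1.0 - mask.mean()
print(f"sparsity: {sparsity:.2%}, stored nonzeros: {mask.sum()}")
```

Only the nonzero int8 entries (plus their indices and one scale factor) would need to reach the on-chip units, which is the memory saving the architecture exploits.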
DeepMitosis: Mitosis detection via deep detection, verification and segmentation networks.
Li, Chao; Wang, Xinggang; Liu, Wenyu; Latecki, Longin Jan
2018-04-01
Mitotic count is a critical predictor of tumor aggressiveness in breast cancer diagnosis. Nowadays, mitosis counting is mainly performed manually by pathologists, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating the mitosis region when only a weak label is given (i.e., only the centroid pixel of the mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from the ICPR 2012 grand challenge merely using the deep detection network. For the ICPR 2014 MITOSIS dataset, which only provides the centroid location of each mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced by the detection model. By fusing scores of the detection and verification models, we achieve the state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice. Copyright © 2018 Elsevier B.V. All rights reserved.
Lee, Christine K; Hofer, Ira; Gabel, Eilon; Baldi, Pierre; Cannesson, Maxime
2018-04-17
The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. The data used to train and validate the algorithm consists of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed improvement of the deep neural network by adding American Society of Anesthesiologists (ASA) Physical Status Classification and robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index. In-hospital mortality in the training and test sets were 0.81% and 0.73%. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve, at 0.97 (95% CI, 0.94 to 0.99). Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
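The training setup described (a feed-forward network with a logistic output, trained by stochastic gradient descent with momentum) can be sketched on synthetic data. For brevity the network is reduced here to a single logistic output unit; the data, sizes, and hyperparameters are invented and do not reproduce the 59,985-patient, 87-feature dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 256 "patients", 8 features, binary in-hospital outcome.
X = rng.standard_normal((256, 8))
true_w = rng.standard_normal(8)
y = (X @ true_w + 0.1 * rng.standard_normal(256) > 0).astype(float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss(w, b):
    p = sigmoid(X @ w + b)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

w, b = np.zeros(8), 0.0
vw, vb = np.zeros(8), 0.0      # momentum buffers
lr, mu = 0.1, 0.9              # illustrative learning rate and momentum

loss0 = loss(w, b)
for epoch in range(50):
    for i in rng.permutation(len(X)):      # stochastic (per-sample) updates
        p = sigmoid(X[i] @ w + b)
        gw, gb = (p - y[i]) * X[i], (p - y[i])
        vw = mu * vw - lr * gw             # classical momentum update
        vb = mu * vb - lr * gb
        w, b = w + vw, b + vb

print(f"log-loss: {loss0:.3f} -> {loss(w, b):.3f}")
```

The paper's networks add hidden layers before the logistic output, but the momentum update rule is the same per parameter.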
Diabetic retinopathy screening using deep neural network.
Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A
2017-09-07
There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. The aim was to determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. The design was a retrospective audit of diabetic retinal photos from the Otago database photographed during October 2016 (485 photos) and 1200 photos from the Messidor international database. A receiver operating characteristic curve was used to illustrate the ability of the deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea); the outcome measures were the area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
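The reported metrics (area under the ROC curve, sensitivity, specificity) can be computed directly from scores and labels; the scores, labels, and threshold below are invented for illustration.

```python
import numpy as np

# Hypothetical network scores and referable (1) / non-referable (0) labels.
scores = np.array([0.95, 0.85, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1,    1,    1,   0,   1,   0,   0,   0  ])

def sens_spec(scores, labels, threshold):
    # Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at one threshold.
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    # Probability that a random positive outranks a random negative,
    # which equals the area under the ROC curve.
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

se, sp = sens_spec(scores, labels, 0.5)
print(f"sensitivity={se:.2f} specificity={sp:.2f} AUC={auc(scores, labels):.4f}")
# -> sensitivity=0.75 specificity=0.75 AUC=0.9375
```

Sweeping the threshold and plotting sensitivity against (1 - specificity) traces the ROC curve whose area the study reports.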
NASA Technical Reports Server (NTRS)
1980-01-01
The functions and facilities of the Deep Space Network are considered. Progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported.
NASA Technical Reports Server (NTRS)
1979-01-01
Progress is reported in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations. The functions and facilities of the Deep Space Network are emphasized.
Zhao, Yu; Ge, Fangfei; Liu, Tianming
2018-07-01
fMRI data decomposition techniques have advanced significantly from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretation of those decomposed networks remains an open question due to the lack of functional brain atlases, the absence of correspondence across decomposed or reconstructed networks from different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent works using 3D CNN for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and will sometimes even introduce label mistakes. Especially for mapping functional networks in large scale datasets, such as the hundreds of thousands of brain networks used in this paper, the manual labelling method becomes almost infeasible. In response, in this work, we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with an automatic weak label initialization, which turns the functional brain network recognition task into a fully automatic large-scale classification procedure. Our extensive experiments based on ABIDE-II 1099 brains' fMRI data showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
1979-01-01
A report is given of the Deep Space Network's progress in (1) flight project support, (2) tracking and data acquisition research and technology, (3) network engineering, (4) hardware and software implementation, and (5) operations.
Magnetoencephalographic imaging of deep corticostriatal network activity during a rewards paradigm.
Kanal, Eliezer Y; Sun, Mingui; Ozkurt, Tolga E; Jia, Wenyan; Sclabassi, Robert
2009-01-01
The human rewards network is a complex system spanning both cortical and subcortical regions. While much is known about the functions of the various components of the network, research on the behavior of the network as a whole has been stymied due to an inability to detect signals at a high enough temporal resolution from both superficial and deep network components simultaneously. In this paper, we describe the application of magnetoencephalographic imaging (MEG) combined with advanced signal processing techniques to this problem. Using data collected while subjects performed a rewards-related gambling paradigm demonstrated to activate the rewards network, we were able to identify neural signals which correspond to deep network activity. We also show that this signal was not observable prior to filtration. These results suggest that MEG imaging may be a viable tool for the detection of deep neural activity.
Robust hepatic vessel segmentation using multi deep convolution network
NASA Astrophysics Data System (ADS)
Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei
2017-03-01
Extraction of the blood vessels of an organ is a challenging task in medical image processing: it is difficult to obtain accurate vessel segmentation results even with manual labeling by a human. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer; all three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 12 CT volumes, in which training data are randomly generated from 5 CT volumes and the remaining 7 are used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.
Future Plans for NASA's Deep Space Network
NASA Technical Reports Server (NTRS)
Deutsch, Leslie J.; Preston, Robert A.; Geldzahler, Barry J.
2008-01-01
This slide presentation reviews the importance of NASA's Deep Space Network (DSN) to space exploration and future planned improvements to the communication capabilities that the network provides, in terms of precision and communication power.
NASA Technical Reports Server (NTRS)
1977-01-01
The facilities, programming system, and monitor and control system for the deep space network are described. Ongoing planetary and interplanetary flight projects are reviewed, along with tracking and ground-based navigation, communications, and network and facility engineering.
Office of Tracking and Data Acquisition. [deep space network and spacecraft tracking
NASA Technical Reports Server (NTRS)
1975-01-01
The Office of Tracking and Data Acquisition (OTDA) and its two worldwide tracking network facilities, the Spaceflight Tracking and Data Network and the Deep Space Network, are described. Other topics discussed include the NASA communications network, the tracking and data relay satellite system, other OTDA tracking activities, and OTDA milestones.
The Future of the Deep Space Network: Technology Development for Ka-Band Deep Space Communications
NASA Technical Reports Server (NTRS)
Bhanji, Alaudin M.
1999-01-01
Projections indicate that in the future the number of NASA's robotic deep space missions is likely to increase significantly. A launch rate of up to 4-6 launches per year is projected, with up to 25 simultaneous missions active [1]. Future high resolution mapping missions to other planetary bodies as well as other experiments are likely to require increased downlink capacity. These future deep space communications requirements will, according to baseline loading analysis, exceed the capacity of NASA's Deep Space Network in its present form. There are essentially two approaches for increasing the channel capacity of the Deep Space Network. Given the near-optimum performance of the network at the two deep space communications bands, S-Band (uplink 2.025-2.120 GHz, downlink 2.2-2.3 GHz) and X-Band (uplink 7.145-7.190 GHz, downlink 8.4-8.5 GHz), additional improvements bring only marginal return for the investment. Thus the first approach to increasing channel capacity is simply to construct more antennas, receivers, transmitters and other hardware. This approach is relatively low-risk but involves increasing both the number of assets in the network and operational costs.
NASA Technical Reports Server (NTRS)
Giorgini, J. D.; Slade, M. A.; Silva, A.; Preston, R. A.; Brozovic, M.; Taylor, P. A.; Magri, C.
2009-01-01
Add radar capability to the existing southern hemisphere 70-m Deep Space Network (DSN) site at Canberra, Australia, thereby increasing by 1.5-2x the observing time available for high-precision NEO trajectory refinement and characterization. Estimated cost: approx.$16 million over 3 years, $2.5 million/year for operations (FY09).
Plant Species Identification by Bi-channel Deep Convolutional Networks
NASA Astrophysics Data System (ADS)
He, Guiqing; Xia, Zhaoqiang; Zhang, Qiqi; Zhang, Haixi; Fan, Jianping
2018-04-01
Plant species identification has attracted much attention recently, as it has potential applications in environmental protection and human life. Although deep learning techniques can be directly applied to plant species identification, they still need to be tailored to this specific task to obtain state-of-the-art performance. In this paper, a bi-channel deep learning framework is developed for identifying plant species. In the framework, two different sub-networks are fine-tuned from their respective pretrained models, and a stacking layer is then used to fuse the outputs of the two sub-networks. We construct a plant dataset of the Orchidaceae family for algorithm evaluation. Our experimental results demonstrate that our bi-channel deep network can achieve very competitive accuracy rates compared to existing deep learning algorithms.
DeepQA: improving the estimation of single protein model quality with deep belief networks.
Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin
2016-12-05
Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method, DeepQA, based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physicochemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves the state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single-model quality assessment and protein structure prediction. The source code, executable, documentation and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Gros, Eric
2018-03-01
Deep Learning (DL) has been successfully applied in numerous fields, fueled by increasing computational power and access to data. However, for medical imaging tasks, limited training set size is a common challenge when applying DL. This paper explores the applicability of DL to the task of classifying a single axial slice from a CT exam into one of six anatomy regions. A total of 29000 images selected from 223 CT exams were manually labeled for ground truth. An additional 54 exams were labeled and used as an independent test set. The network architecture developed for this application is composed of 6 convolutional layers and 2 fully connected layers with ReLU non-linear activations between each layer. Max-pooling was used after every second convolutional layer, and a softmax layer was used at the end. Given this base architecture, the effect of the inclusion of network architecture components such as Dropout and Batch Normalization on network performance and training is explored. The network performance as a function of training and validation set size is characterized by training each network architecture variation using 5, 10, 20, 40, 50 and 100% of the available training data. The performance comparison of the various network architectures was done for anatomy classification as well as two computer vision datasets. The anatomy classifier accuracy varied from 74.1% to 92.3% in this study, depending on the training size and network layout used. Dropout layers improved the model accuracy for all training sizes.
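Dropout, one of the architecture components whose effect the study measures, can be sketched in its common "inverted" form, in which surviving activations are rescaled during training so no change is needed at test time. The rate and activation sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def dropout(activations, rate, training):
    # Inverted dropout: randomly zero units during training and rescale
    # the survivors so the expected activation is unchanged; identity at
    # test time.
    if not training or rate == 0.0:
        return activations
    keep = 1.0 - rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

h = np.ones((4, 1000))                    # a batch of hidden activations
out = dropout(h, rate=0.5, training=True)

# Roughly half the units are zeroed, survivors are scaled to 2.0,
# so the batch mean stays near 1.0.
print(out.mean())
```

At inference (`training=False`) the layer passes activations through unchanged, which is why the same network can be used for evaluation without rescaling.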
The Deep Space Network: A Radio Communications Instrument for Deep Space Exploration
NASA Technical Reports Server (NTRS)
Renzetti, N. A.; Stelzried, C. T.; Noreen, G. K.; Slobin, S. D.; Petty, S. M.; Trowbridge, D. L.; Donnelly, H.; Kinman, P. W.; Armstrong, J. W.; Burow, N. A.
1983-01-01
The primary purpose of the Deep Space Network (DSN) is to serve as a communications instrument for deep space exploration, providing communications between the spacecraft and the ground facilities. The uplink communications channel provides instructions or commands to the spacecraft. The downlink communications channel provides command verification and spacecraft engineering and science instrument payload data.
Kahan, Joshua; Urner, Maren; Moran, Rosalyn; Flandin, Guillaume; Marreiros, Andre; Mancini, Laura; White, Mark; Thornton, John; Yousry, Tarek; Zrinzo, Ludvic; Hariz, Marwan; Limousin, Patricia; Friston, Karl
2014-01-01
Depleted of dopamine, the dynamics of the parkinsonian brain impact on both ‘action’ and ‘resting’ motor behaviour. Deep brain stimulation has become an established means of managing these symptoms, although its mechanisms of action remain unclear. Non-invasive characterizations of induced brain responses, and the effective connectivity underlying them, generally appeals to dynamic causal modelling of neuroimaging data. When the brain is at rest, however, this sort of characterization has been limited to correlations (functional connectivity). In this work, we model the ‘effective’ connectivity underlying low frequency blood oxygen level-dependent fluctuations in the resting Parkinsonian motor network—disclosing the distributed effects of deep brain stimulation on cortico-subcortical connections. Specifically, we show that subthalamic nucleus deep brain stimulation modulates all the major components of the motor cortico-striato-thalamo-cortical loop, including the cortico-striatal, thalamo-cortical, direct and indirect basal ganglia pathways, and the hyperdirect subthalamic nucleus projections. The strength of effective subthalamic nucleus afferents and efferents were reduced by stimulation, whereas cortico-striatal, thalamo-cortical and direct pathways were strengthened. Remarkably, regression analysis revealed that the hyperdirect, direct, and basal ganglia afferents to the subthalamic nucleus predicted clinical status and therapeutic response to deep brain stimulation; however, suppression of the sensitivity of the subthalamic nucleus to its hyperdirect afferents by deep brain stimulation may subvert the clinical efficacy of deep brain stimulation. Our findings highlight the distributed effects of stimulation on the resting motor network and provide a framework for analysing effective connectivity in resting state functional MRI with strong a priori hypotheses. PMID:24566670
The deep space network, Volume 11
NASA Technical Reports Server (NTRS)
1972-01-01
Deep Space Network progress in flight project support, Tracking and Data Acquisition research and technology, network engineering, hardware and software implementation, and operations is presented. Material is presented in each of the following categories: description of the DSN; mission support; radio science; support research and technology; network engineering and implementation; and operations and facilities.
An improved advertising CTR prediction approach based on the fuzzy deep neural network
Jiang, Zilong; Gao, Shu; Li, Mingjiang
2018-01-01
Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to input raw data from advertising datasets. Next, fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) with the unsupervised method layer by layer. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness in advertising click log datasets with noise. PMID:29727443
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes to a modest improvement in classification accuracy. In addition, recent techniques such as the ReLU activation are also utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
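The multi-scale convolution layer (three kernel sizes applied in parallel, outputs combined) can be sketched for a single hyperspectral pixel's spectrum. The kernel sizes, random kernels, and band count are illustrative assumptions, not the paper's trained values.

```python
import numpy as np

rng = np.random.default_rng(4)

# A single pixel's spectrum (e.g., 103 bands for University of Pavia).
spectrum = rng.random(103)

def conv1d_same(x, kernel):
    # 'same'-padded 1D convolution so all scales produce aligned outputs.
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(kernel)] @ kernel for i in range(len(x))])

# Three parallel kernel sizes, as in a multi-scale convolution layer.
features = [conv1d_same(spectrum, rng.standard_normal(k)) for k in (3, 5, 7)]
multi_scale = np.concatenate(features)   # fused multi-scale feature vector

print(multi_scale.shape)
```

In the full network this fused vector would pass through further layers and dropout before the classifier; the point here is only that the three scales see the same bands through different receptive fields.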
Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks
Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei
2017-01-01
Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867
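The grid-representation step (network-wide speeds converted into a series of static images that retain the network's fine-scale structure) can be sketched as follows. The link-to-cell mapping, grid size, and speed values are invented; the paper derives the layout from the real Beijing network geometry.

```python
import numpy as np

rng = np.random.default_rng(5)

n_links, n_steps = 278, 12
# Speed time series for each link (km/h), standing in for real loop data.
speeds = rng.uniform(5, 80, size=(n_steps, n_links))

# Hypothetical mapping from each link to a distinct cell of a 20x20 grid,
# standing in for a layout that preserves the network's spatial structure.
grid_shape = (20, 20)
cells = rng.choice(grid_shape[0] * grid_shape[1], size=n_links, replace=False)

def to_image(speed_vector):
    img = np.zeros(grid_shape)
    img.flat[cells] = speed_vector / 80.0   # normalize speeds to [0, 1]
    return img

# A sequence of static images: the input an SRCN-style CNN+LSTM would consume.
frames = np.stack([to_image(s) for s in speeds])
print(frames.shape)
```

Each frame captures the spatial pattern at one time step (for the DCNN part), while the frame sequence carries the temporal dynamics (for the LSTM part).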
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Due to the development of deep learning networks, salient object detection based on deep learning networks, which are used to extract features, has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks for feature extraction. In deep learning networks, however, a dramatic increase of network depth may instead cause more training errors. In this paper, we use the residual network to increase network depth while mitigating the errors caused by the depth increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of super-pixels, in order to reduce the complexity of images and to improve the accuracy of salient target detection. We refine features at the pixel level with a multi-scale feature correction method to avoid the feature errors introduced when the image is simplified at the above-mentioned region level. The final fully connected layer not only integrates multi-scale and multi-level features but also works as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Student beats the teacher: deep neural networks for lateral ventricles segmentation in brain MR
NASA Astrophysics Data System (ADS)
Ghafoorian, Mohsen; Teuwen, Jonas; Manniesing, Rashindra; Leeuw, Frank-Erik d.; van Ginneken, Bram; Karssemeijer, Nico; Platel, Bram
2018-03-01
Ventricular volume and its progression are known to be linked to several brain diseases such as dementia and schizophrenia. Therefore, accurate measurement of ventricle volume is vital for longitudinal studies on these disorders, making automated ventricle segmentation algorithms desirable. In the past few years, deep neural networks have been shown to outperform the classical models in many imaging domains. However, the success of deep networks depends on manually labeled data sets, which are expensive to acquire, especially for higher dimensional data in the medical domain. In this work, we show that deep neural networks can be trained on much cheaper-to-acquire pseudo-labels (e.g., generated by other, less accurate automated methods) and still produce segmentations more accurate than the labels themselves. To show this, we use noisy segmentation labels generated by a conventional region growing algorithm to train a deep network for lateral ventricle segmentation. Then, on a large manually annotated test set, we show that the network significantly outperforms the conventional region growing algorithm which was used to produce the training labels for the network. Our experiments report a Dice Similarity Coefficient (DSC) of 0.874 for the trained network compared to 0.754 for the conventional region growing algorithm (p < 0.001).
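The Dice Similarity Coefficient used to compare the network against the region-growing baseline is DSC = 2|A∩B| / (|A| + |B|) for two binary masks A and B. A minimal sketch on toy masks (the masks here are invented, not from the study):

```python
import numpy as np

def dice(a, b):
    # Dice Similarity Coefficient between two binary masks.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "ventricle" masks: ground truth vs. an imperfect segmentation.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True              # 16 voxels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True               # 16 voxels, 12 overlapping with truth

print(dice(truth, pred))            # 2*12 / (16+16) = 0.75
```

The same computation over 3D volumes yields the 0.874 vs. 0.754 comparison reported above.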
White blood cells identification system based on convolutional deep neural learning networks.
Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A
2017-11-16
White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, a limited balanced WBC dataset classification is performed through the WBCsNet as a pre-trained network. During our experiments, three different public WBC datasets (2551 images) were used, containing 5 healthy WBC types. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches or even the previous traditional identification system. We also present feature visualizations of the WBCsNet activations, which reflect a higher response than the pre-trained activated one. In conclusion, a novel WBC identification system based on deep learning theory is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
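The first transfer-learning route (frozen deep activation features feeding a traditional identification stage) can be sketched with a fixed random projection standing in for a pre-trained network's activations and a nearest-centroid classifier standing in for the traditional stage. Everything here, the synthetic "images", the frozen weights, and the classifier choice, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "WBC images" flattened to vectors: 3 classes, 30 samples each.
n_per, dim, classes = 30, 64, 3
means = rng.standard_normal((classes, dim)) * 2.0
X = np.vstack([means[c] + rng.standard_normal((n_per, dim))
               for c in range(classes)])
y = np.repeat(np.arange(classes), n_per)

# Frozen feature extractor: a fixed ReLU layer standing in for the
# activations of a pre-trained deep network (never updated).
W = rng.standard_normal((dim, 32))
feats = np.maximum(0.0, X @ W)

# Traditional identification stage: nearest class centroid in feature space.
centroids = np.stack([feats[y == c].mean(axis=0) for c in range(classes)])
pred = np.argmin(((feats[:, None, :] - centroids) ** 2).sum(-1), axis=1)

print(f"training accuracy: {(pred == y).mean():.2f}")
```

The key property illustrated is that only the lightweight final stage is trained, which is what makes this route workable on small limited datasets.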
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in settings where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each `neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth.
We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks accomplished with our layer-wise training paradigm.
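The layer-wise narrowing idea can be illustrated with a toy coarse-to-fine localizer, in which a trivial intensity heuristic stands in for each trained stage (the real stages are trained neural network layers):

```python
import numpy as np

def stage_predict(image):
    """Stand-in for one trained stack stage: pick the quadrant with the
    highest summed intensity as the sub-region containing the organ."""
    h, w = image.shape
    quads = {(0, 0): image[:h//2, :w//2], (0, 1): image[:h//2, w//2:],
             (1, 0): image[h//2:, :w//2], (1, 1): image[h//2:, w//2:]}
    return max(quads, key=lambda q: quads[q].sum())

def stacked_localize(image, n_stages=3):
    """Apply the stages in sequence, each narrowing the region of interest,
    so a tiny structure in a large scan is found coarse-to-fine."""
    y0 = x0 = 0
    region = image
    for _ in range(n_stages):
        h, w = region.shape
        qy, qx = stage_predict(region)
        y0 += qy * (h // 2)
        x0 += qx * (w // 2)
        region = region[qy*(h//2):(qy+1)*(h//2), qx*(w//2):(qx+1)*(w//2)]
    return y0, x0, region.shape

scan = np.zeros((64, 64))
scan[50:54, 10:14] = 1.0      # a small "organ" in a large image
print(stacked_localize(scan))
```

Each stage only has to solve a much easier sub-problem (which quadrant holds the target), which is one intuition for why the stack can be trained from very little data.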
Saliency U-Net: A regional saliency map-driven hybrid deep learning network for anomaly segmentation
NASA Astrophysics Data System (ADS)
Karargyros, Alex; Syeda-Mahmood, Tanveer
2018-02-01
Deep learning networks are gaining popularity in many medical image analysis tasks due to their generalized ability to automatically extract relevant features from raw images. However, this can make the learning problem unnecessarily hard, requiring network architectures of high complexity. In the case of anomaly detection, in particular, there is often sufficient regional difference between the anomaly and the surrounding parenchyma that could be easily highlighted through bottom-up saliency operators. In this paper we propose a new hybrid deep learning network using a combination of the raw image and such regional maps to more accurately learn the anomalies using simpler network architectures. Specifically, we modify a deep learning network called U-Net, using both the raw and pre-segmented images as input to produce joint encoding (contraction) and expansion (decoding) paths in the U-Net. We present results of successfully delineating subdural and epidural hematomas in brain CT imaging and liver hemangioma in abdominal CT images using such a network.
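The role of the regional saliency map as a second input channel can be sketched with a simple bottom-up contrast operator (an assumed stand-in; the paper does not specify this particular operator):

```python
import numpy as np

def box_blur(img, k=3):
    """Local mean via a simple (k x k) box filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def regional_saliency(img):
    """Bottom-up saliency: contrast of each pixel against its neighborhood."""
    return np.abs(img - box_blur(img))

# Toy "scan": uniform parenchyma with a small bright anomaly.
img = np.zeros((16, 16))
img[6:9, 6:9] = 1.0

sal = regional_saliency(img)
# Two-channel input for a hybrid network: raw image plus saliency map.
hybrid_input = np.stack([img, sal])   # shape (2, 16, 16)
print(hybrid_input.shape)
```

Because the anomaly's boundary is already highlighted in the second channel, the network can afford a simpler architecture than one that must learn the contrast cue from raw pixels alone.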
Statistical process control in Deep Space Network operation
NASA Technical Reports Server (NTRS)
Hodder, J. A.
2002-01-01
This report describes how the Deep Space Mission System (DSMS) Operations Program Office at the Jet Propulsion Laboratory (JPL) uses Statistical Process Control (SPC) to monitor performance and evaluate initiatives for improving processes on the National Aeronautics and Space Administration's (NASA) Deep Space Network (DSN).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwong, S.; Jivkov, A.P.
2013-07-01
Deep geologic disposal of high-activity, long-lived radioactive waste is being actively considered and pursued in many countries, where low-permeability geological formations are used to provide long-term waste containment with minimum impact on the environment and risk to the biosphere. A multi-barrier approach that makes use of both engineered and natural barriers (i.e. geological formations) is often used to further enhance the containment performance of the repository. As the deep repository system is subject to a variety of thermo-hydro-chemo-mechanical (THCM) effects over its long 'operational' lifespan (e.g. 0.1 to 1.0 million years), the integrity of the barrier system will decrease over time (e.g. fracturing in rock or clay). This is broadly referred to as media degradation in the present study. This modelling study examines the effects of media degradation on diffusion-dominant solute transport in fractured media that are typical of deep geological environments. In particular, reactive solute transport through fractured media is studied using a 2-D model that considers advection and diffusion, to explore the coupled effects of kinetic and equilibrium chemical processes, while the effects of degradation are studied using a pore network model that considers the media diffusivity and network changes. Model results are presented to demonstrate the use of a 3D pore-network model, using a novel architecture, to calculate macroscopic properties of the medium such as diffusivity, subject to pore space changes as the media degrade. Results from a reactive transport model of a representative geological waste disposal package are also presented to demonstrate the effect of media property change on solute migration behaviour, illustrating the complex interplay between kinetic biogeochemical processes and diffusion-dominant transport.
The initial modelling results demonstrate the feasibility of a coupled modelling approach (using a pore-network model and a reactive transport model) to examine the long-term behaviour of deep geological repositories with media property change under complex geochemical conditions. (authors)
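The pore-network idea, computing an effective macroscopic diffusivity from throat conductances and recomputing it as the pore space changes, can be sketched on a small 2-D lattice (a toy network, not the paper's 3-D architecture):

```python
import numpy as np

def effective_diffusivity(nx, ny, conductance):
    """Steady-state diffusion on a rectangular pore network.
    conductance[(i, j)] is the throat conductance between adjacent pores
    i and j; the left face is held at c=1, the right face at c=0.
    Returns the total flux through the network (proportional to the
    effective diffusivity)."""
    n = nx * ny
    A = np.zeros((n, n)); b = np.zeros(n)
    left = {y * nx for y in range(ny)}
    right = {y * nx + nx - 1 for y in range(ny)}
    for (i, j), g in conductance.items():      # assemble graph Laplacian
        A[i, i] += g; A[j, j] += g
        A[i, j] -= g; A[j, i] -= g
    for i in left:                              # Dirichlet c = 1
        A[i, :] = 0.0; A[i, i] = 1.0; b[i] = 1.0
    for i in right:                             # Dirichlet c = 0
        A[i, :] = 0.0; A[i, i] = 1.0; b[i] = 0.0
    c = np.linalg.solve(A, b)
    flux = 0.0                                  # flux out of the left face
    for (i, j), g in conductance.items():
        if i in left and j not in left:
            flux += g * (c[i] - c[j])
        elif j in left and i not in left:
            flux += g * (c[j] - c[i])
    return flux

def lattice_throats(nx, ny, g=1.0):
    th = {}
    for y in range(ny):
        for x in range(nx):
            i = y * nx + x
            if x + 1 < nx: th[(i, i + 1)] = g
            if y + 1 < ny: th[(i, i + nx)] = g
    return th

intact = lattice_throats(4, 3)
degraded = dict(intact)
degraded[(1, 2)] = 0.1   # one throat's conductance altered by degradation
print(effective_diffusivity(4, 3, intact),
      effective_diffusivity(4, 3, degraded))
```

Re-solving the same linear system with updated throat conductances is how the macroscopic diffusivity tracks media degradation over time.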
NASA Technical Reports Server (NTRS)
Wang, Charles C.; Sue, Miles K.; Manshadi, Farzin; Kinman, Peter
2005-01-01
This paper will first describe the characteristics of interference from a typical EESS satellite, including the intensity, frequency and duration of such interference. The paper will then discuss the DSN interference susceptibility, including the various components in the receiving systems that are susceptible to interference and the recovery time after a strong interference. Finally, the paper will discuss the impact of interference on science data and missions operations.
Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen
2018-09-01
We construct a new, efficient near-duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near-duplicate image detection. The extracted features are used to construct an LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near-duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
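The load-balancing idea, keeping every hash bucket below a size cap so query-time bucket scans stay short, can be sketched with random-hyperplane LSH (a simplified scheme for illustration, not the paper's exact construction):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

def lsh_codes(X, planes):
    """Random-hyperplane LSH: one bit per hyperplane (side of the plane)."""
    return [tuple((x @ planes.T > 0).astype(int)) for x in X]

def build_index(X, n_bits=4, max_load=8):
    """Toy load-balanced index: any bucket holding more than max_load
    items is split with an extra hyperplane bit."""
    planes = rng.normal(size=(n_bits, X.shape[1]))
    buckets = defaultdict(list)
    for idx, code in enumerate(lsh_codes(X, planes)):
        buckets[code].append(idx)
    for _ in range(20):  # cap the number of rebalancing rounds
        if max(len(v) for v in buckets.values()) <= max_load:
            break
        extra = rng.normal(size=X.shape[1])
        new = defaultdict(list)
        for code, idxs in buckets.items():
            if len(idxs) <= max_load:
                new[code].extend(idxs)
            else:
                for i in idxs:
                    new[code + (int(X[i] @ extra > 0),)].append(i)
        buckets = new
    return dict(buckets)

X = rng.normal(size=(200, 32))
index = build_index(X)
print(max(len(v) for v in index.values()))
```

A query hashes to one bucket and only scans its (bounded) contents, which is why balanced bucket sizes translate directly into lower query time.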
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to the STFT output and the SWT output were developed. Our new method requires neither detection of P or R peaks nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% were achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
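The STFT preprocessing step, turning a 1-D ECG segment into a 2-D time-frequency matrix suitable for a 2-D CNN, can be sketched as follows (the sampling rate, frame length and test signal are assumptions for illustration):

```python
import numpy as np

def stft(signal, frame_len=64, hop=32):
    """Short-time Fourier transform: windowed frames -> 2-D magnitude
    matrix (frequency bins x time frames), the kind of input fed to a
    2-D convolutional network."""
    window = np.hanning(frame_len)
    frames = [signal[s:s + frame_len] * window
              for s in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

fs = 250                      # assumed sampling rate, Hz
t = np.arange(5 * fs) / fs    # a 5-second segment, as in the paper
ecg_like = (np.sin(2 * np.pi * 8 * t)
            + 0.1 * np.random.default_rng(0).normal(size=t.size))
spec = stft(ecg_like)
print(spec.shape)             # frequency bins x time frames
```

The resulting matrix is what lets a standard image-style CNN operate on ECG data without any P- or R-peak detection.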
Deep Space Network equipment performance, reliability, and operations management information system
NASA Technical Reports Server (NTRS)
Cooper, T.; Lin, J.; Chatillon, M.
2002-01-01
The Deep Space Mission System (DSMS) Operations Program Office and the Deep Space Network (DSN) facilities utilize the Discrepancy Reporting Management System (DRMS) to collect, process, communicate and manage data discrepancies, equipment resets, and physical equipment status, and to maintain an internal Station Log. A collaborative development effort between JPL and the Canberra Deep Space Communication Complex delivered a system to support DSN Operations.
Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network
NASA Astrophysics Data System (ADS)
Zhang, J.; Zhang, J.; Zhao, Z.
2018-04-01
The imaging principle of Polarimetric Synthetic Aperture Radar (POLSAR) means that image quality is affected by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. In recent years, deep convolutional neural networks have transformed traditional image processing methods and brought the field of computer vision to a new stage, with a strong ability to learn deep features and an excellent ability to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface cover types using deep learning. We fused fully polarimetric SAR features at different scales into RGB images, iteratively trained a GoogLeNet model based on convolutional neural networks, and then used the trained model to test classification on validation data. First, referring to optical imagery, we labelled the surface coverage types of a GF-3 POLSAR image with 8 m resolution, and then collected samples according to the different categories. To meet the GoogLeNet model's requirement of 256 × 256 pixel image input, and taking into account the limited SAR resolution, the original image was pre-processed by resampling. In this paper, POLSAR image slice samples at different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained with the resampled 2 m polarimetric SAR imagery is 94.89%, and that of the model trained with the resampled 1 m imagery is 92.65%.
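The resampling step, mapping a POLSAR slice to the fixed 256 × 256 input GoogLeNet expects, can be sketched with nearest-neighbour resampling (an assumed interpolation choice; the abstract does not state which method was used):

```python
import numpy as np

def resample_patch(patch, out_size=256):
    """Nearest-neighbour resampling of an image slice to the fixed
    out_size x out_size input size a classification CNN expects."""
    h, w = patch.shape
    ys = (np.arange(out_size) * h / out_size).astype(int)
    xs = (np.arange(out_size) * w / out_size).astype(int)
    return patch[np.ix_(ys, xs)]

# A slice sampled at a 2 m interval covers twice the ground distance per
# pixel of a 1 m slice, so fewer native pixels map to the same network
# input; both end up as 256 x 256 arrays.
slice_2m = np.random.default_rng(0).random((64, 64))
print(resample_patch(slice_2m).shape)
```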
Prediction of properties of wheat dough using intelligent deep belief networks
NASA Astrophysics Data System (ADS)
Guha, Paramita; Bhatnagar, Taru; Pal, Ishan; Kamboj, Uma; Mishra, Sunita
2017-11-01
In this paper, the rheological and chemical properties of wheat dough are predicted using deep belief networks. Wheat grains are stored under controlled environmental conditions. The internal parameters of the grains, viz. protein, fat, carbohydrates, moisture and ash, are determined using standard chemical analysis, and the viscosity of the dough is measured using a rheometer. Here, fat, carbohydrates, moisture, ash and temperature are considered as inputs, whereas protein and viscosity are chosen as outputs. The prediction algorithm is developed using a deep neural network where each layer is trained greedily using restricted Boltzmann machine (RBM) networks. The overall network is finally fine-tuned using a standard neural network technique. In most of the literature, fine-tuning is done using the back-propagation technique. In this paper, a new algorithm is proposed in which each layer is tuned using an RBM and the final network is fine-tuned using a deep neural network (DNN). It has been observed that with the proposed algorithm, errors between the actual and predicted outputs are smaller than with the conventional algorithm. Hence, the given network can be considered beneficial, as it predicts the outputs more accurately. Numerical results along with discussions are presented.
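Greedy layer-wise RBM pre-training can be sketched with one-step contrastive divergence (CD-1); this minimal version omits bias terms and is not the paper's exact training recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=50, lr=0.1):
    """One-step contrastive divergence (CD-1) for a single RBM layer
    (bias terms omitted for brevity)."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
    for _ in range(epochs):
        h_prob = sigmoid(data @ W)                       # positive phase
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        v_recon = sigmoid(h_sample @ W.T)                # reconstruction
        h_recon = sigmoid(v_recon @ W)                   # negative phase
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    return W

# Greedy layer-wise stacking: each RBM is trained on the hidden
# activations of the layer below, as in a deep belief network.
X = (rng.random((100, 10)) > 0.5).astype(float)   # toy binary inputs
W1 = train_rbm(X, 8)
H1 = sigmoid(X @ W1)
W2 = train_rbm(H1, 4)
print(W1.shape, W2.shape)
```

After this unsupervised stacking, the whole network is fine-tuned on the supervised prediction targets (protein and viscosity in the paper).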
NASA Astrophysics Data System (ADS)
Miritello, Giovanna; Lara, Rubén; Moro, Esteban
Recent research has shown the deep impact of the dynamics of human interactions (or temporal social networks) on the spreading of information, opinion formation, etc. In general, the bursty nature of human interactions lowers interaction between people to the extent that both the speed and reach of information diffusion are diminished. Using a large database of 20 million mobile phone users, we show evidence that this effect is not homogeneous across the social network; in fact, there is a large correlation between this effect and the topological structure around a given individual. In particular, we show that the social relations of hubs in a network are, from a dynamical point of view, weaker in the information diffusion process than those of more poorly connected individuals. Our results show the importance of temporal patterns of communication when analyzing and modeling dynamical processes on social networks.
Training Deep Spiking Neural Networks Using Backpropagation.
Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael
2016-01-01
Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
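The core idea, treating the membrane potential as the differentiable signal and smoothing over the spike discontinuity, can be sketched for a single leaky integrate-and-fire neuron. The sigmoid-derivative surrogate below is one common realization of this idea, not necessarily the paper's exact formulation:

```python
import numpy as np

def lif_forward(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron: returns the spike train and the
    membrane potential trace. The spike step function itself is
    non-differentiable."""
    v, spikes, potentials = 0.0, [], []
    for x in inputs:
        v = decay * v + x
        potentials.append(v)
        s = 1.0 if v >= threshold else 0.0
        if s:
            v = 0.0                    # reset after a spike
        spikes.append(s)
    return np.array(spikes), np.array(potentials)

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass trick: replace the spike's discontinuous derivative
    with a smooth sigmoid derivative of the membrane potential, so error
    backpropagation can proceed as in a conventional deep network."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

spikes, vs = lif_forward([0.6, 0.6, 0.6, 0.0, 1.2])
print(spikes, surrogate_grad(vs))
```

The forward pass stays fully event-based (binary spikes); only the backward pass uses the smoothed derivative of the potential.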
The deep space network, volume 15
NASA Technical Reports Server (NTRS)
1973-01-01
The DSN progress is reported in flight project support, TDA research and technology, network engineering, hardware and software implementation, and operations. Topics discussed include: DSN functions and facilities, planetary flight projects, tracking and ground-based navigation, communications, data processing, network control system, and deep space stations.
The Deep Space Network, volume 39
NASA Technical Reports Server (NTRS)
1977-01-01
The functions, facilities, and capabilities of the Deep Space Network and its support of the Pioneer, Helios, and Viking missions are described. Progress in tracking and data acquisition research and technology, network engineering and modifications, as well as hardware and software implementation and operations are reported.
Deep space network Mark 4A description
NASA Technical Reports Server (NTRS)
Wallace, R. J.; Burt, R. W.
1986-01-01
The general system configuration for the Mark 4A Deep Space Network is described. The arrangement and complement of antennas at the communications complexes and subsystem equipment at the signal processing centers are described. A description of the Network Operations Control Center is also presented.
Toolkits and Libraries for Deep Learning.
Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth
2017-08-01
Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
Future Mission Trends and their Implications for the Deep Space Network
NASA Technical Reports Server (NTRS)
Abraham, Douglas S.
2006-01-01
This viewgraph presentation discusses the direction of future missions and its significance for the Deep Space Network. The topics include: 1) The Deep Space Network (DSN); 2) Past Missions Driving DSN Evolution; 3) The Changing Mission Paradigm; 4) Assessing Future Mission Needs; 5) Link Support Trends; 6) Downlink Rate Trends; 7) Uplink Rate Trends; 8) End-to-End Link Difficulty Trends; 9) Summary: Future Mission Trend Drivers; and 10) Conclusion: Implications for the DSN.
The deep space network, volume 6
NASA Technical Reports Server (NTRS)
1971-01-01
Progress on Deep Space Network (DSN) supporting research and technology is presented, together with advanced development and engineering, implementation, and DSN operations of flight projects. The DSN is described. Interplanetary and planetary flight projects and radio science experiments are discussed. Tracking and navigational accuracy analysis, communications systems and elements research, and supporting research are considered. Development of the ground communications and deep space instrumentation facilities is also presented. Network allocation schedules and angle tracking and test development are included.
NASA Technical Reports Server (NTRS)
1977-01-01
The various systems and subsystems are discussed for the Deep Space Network (DSN). A description of the DSN is presented along with mission support, program planning, facility engineering, implementation and operations.
NASA Astrophysics Data System (ADS)
Eslami, E.; Choi, Y.; Roy, A.
2017-12-01
Air quality forecasting carried out by chemical transport models often shows significant error. This study uses a deep-learning approach over the Houston-Galveston-Brazoria (HGB) area to overcome this forecasting challenge, for the DISCOVER-AQ period (September 2013). Two approaches were utilized: a deep neural network (DNN) using a multi-layer perceptron (MLP), and a restricted Boltzmann machine (RBM). The proposed approaches analyzed input data by identifying features abstracted from the previous layer in a stepwise method. The approaches predicted hourly ozone and PM in September 2013 using several predictors from the prior three days, including wind fields, temperature, relative humidity, cloud fraction, and precipitation, along with PM, ozone, and NOx concentrations. Model-measurement comparisons for available monitoring sites reported Indexes of Agreement (IOA) of around 0.95 for both the DNN and the RBM. A standard artificial neural network (ANN) with similar architecture (IOA = 0.90) showed poorer performance than the deep networks, clearly demonstrating the superiority of the deep approaches. Additionally, each network (both deep and standard) performed significantly better than a previous CMAQ study, which showed an IOA of less than 0.80. The most influential input variables were identified using their associated weights, which represented the sensitivity of ozone to the input parameters. The results indicate that deep learning approaches can achieve more accurate ozone forecasting and identify the important input variables for ozone predictions in metropolitan areas.
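The index of agreement (IOA) used to score the models is Willmott's d statistic; a minimal implementation, with hypothetical hourly ozone values for illustration:

```python
import numpy as np

def index_of_agreement(obs, pred):
    """Willmott's index of agreement: 1 means perfect agreement, 0 none.
    d = 1 - sum((p - o)^2) / sum((|p - mean(o)| + |o - mean(o)|)^2)"""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

obs = [30, 42, 55, 61, 48, 35]    # hypothetical observed hourly ozone, ppb
pred = [33, 40, 52, 64, 50, 37]   # hypothetical model predictions
print(round(index_of_agreement(obs, pred), 3))
```

Unlike a plain correlation, the IOA penalizes both bias and amplitude errors, which is why it is a common skill score for air-quality models.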
De novo peptide sequencing by deep learning
Tran, Ngoc Hieu; Zhang, Xianglilan; Xin, Lei; Shan, Baozhen; Li, Ming
2017-01-01
De novo peptide sequencing from tandem MS data is the key technology in proteomics for the characterization of proteins, especially for new sequences, such as mAbs. In this study, we propose a deep neural network model, DeepNovo, for de novo peptide sequencing. The DeepNovo architecture combines recent advances in convolutional neural networks and recurrent neural networks to learn features of tandem mass spectra, fragment ions, and sequence patterns of peptides. The networks are further integrated with local dynamic programming to solve the complex optimization task of de novo sequencing. We evaluated the method on a wide variety of species and found that DeepNovo considerably outperformed state-of-the-art methods, achieving 7.7–22.9% higher accuracy at the amino acid level and 38.1–64.0% higher accuracy at the peptide level. We further used DeepNovo to automatically reconstruct the complete sequences of antibody light and heavy chains of mouse, achieving 97.5–100% coverage and 97.2–99.5% accuracy, without assisting databases. Moreover, DeepNovo is retrainable to adapt to any sources of data and provides a complete end-to-end training and prediction solution to the de novo sequencing problem. Not only does our study extend the deep learning revolution to a new field, but it also shows an innovative approach to solving optimization problems by using deep learning and dynamic programming. PMID:28720701
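The coupling of per-residue network scores with dynamic programming can be sketched as a DP over prefix mass (integer-scaled masses; the scores below are made up, standing in for the neural network's outputs for a given spectrum):

```python
# Hypothetical per-residue scores (in a real system these come from the
# neural network reading the spectrum); integer-scaled residue masses
# for a tiny four-letter alphabet.
masses = {"G": 57, "A": 71, "S": 87, "P": 97}
scores = {"G": 0.2, "A": 0.9, "S": 0.5, "P": 0.7}

def best_peptide(total_mass):
    """Dynamic programming over prefix mass: best[m] holds the
    highest-scoring residue string whose masses sum exactly to m."""
    best = {0: (0.0, "")}
    for m in range(1, total_mass + 1):
        for aa, am in masses.items():
            prev = best.get(m - am)
            if prev is not None:
                cand = (prev[0] + scores[aa], prev[1] + aa)
                if m not in best or cand[0] > best[m][0]:
                    best[m] = cand
    return best.get(total_mass)

print(best_peptide(168))
```

The DP guarantees the reconstructed sequence matches the precursor mass exactly while maximizing the accumulated network score, which is the optimization DeepNovo's local dynamic programming addresses at far larger scale.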
Subsidence monitoring network: an Italian example aimed at a sustainable hydrocarbon E&P activity
NASA Astrophysics Data System (ADS)
Dacome, M. C.; Miandro, R.; Vettorel, M.; Roncari, G.
2015-11-01
According to Italian law, in order to start up any new hydrocarbon exploitation activity, an Environmental Impact Assessment study has to be presented, including a monitoring plan designed to foresee, measure and analyze in real time any possible impact of the project on the coastal areas and on the nearby inland areas. The occurrence of subsidence, which could be partly related to hydrocarbon production both on-shore and off-shore, can generate great concern in areas where it may have impacts on the local environment. Following the recommendations of the international scientific community on the matter, ENI has, since the early 1990s, implemented a cutting-edge monitoring network with the aim of preventing, mitigating and controlling geodynamic phenomena generated in the activity areas, with particular attention to conservation and protection of environmental and territorial equilibrium, in line with what is known as "sustainable development". The monitoring surveys currently implemented by ENI can be divided into: - Shallow monitoring: spirit levelling surveys, continuous GPS surveys at permanent stations, SAR surveys, assestimeter subsurface compaction monitoring, groundwater level monitoring, LiDAR surveys, bathymetric surveys. - Deep monitoring: reservoir deep compaction through radioactive markers, reservoir static (bottom-hole) pressure monitoring. All the information gathered through the monitoring network allows: 1. verification that the observed subsidence is evolving in accordance with the simulated forecast; 2. provision of data to revise and adjust the compaction prediction models; 3. implementation of remedial actions if the impact exceeds the threshold magnitude originally agreed among the involved parties.
ENI monitoring plan to measure and monitor the subsidence process, during field production and also after the field closure, is therefore intended to support a sustainable field development and an acceptable exploitation programme in which the actual risk connected with the field production is evaluated in advance, shared and agreed among all the involved subjects: oil company, stakeholders and local community (with interests in the affected area).
Major technological innovations introduced in the large antennas of the Deep Space Network
NASA Technical Reports Server (NTRS)
Imbriale, W. A.
2002-01-01
The NASA Deep Space Network (DSN) is the largest and most sensitive scientific, telecommunications and radio navigation network in the world. Its principal responsibilities are to provide communications, tracking, and science services to most of the world's spacecraft that travel beyond low Earth orbit. The network consists of three Deep Space Communications Complexes. Each of the three complexes consists of multiple large antennas equipped with ultra sensitive receiving systems. A centralized Signal Processing Center (SPC) remotely controls the antennas, generates and transmits spacecraft commands, and receives and processes the spacecraft telemetry.
7.3 Communications and Navigation
NASA Technical Reports Server (NTRS)
Manning, Rob
2005-01-01
This presentation gives an overview of the networks NASA currently uses to support space communications and navigation, and the requirements for supporting future deep space missions, including manned lunar and Mars missions. The presentation addresses the Space Network, Deep Space Network, and Ground Network, why new support systems are needed, and the potential for catastrophic failure of aging antennas. Space communications and navigation are considered during Aerocapture, Entry, Descent and Landing (AEDL) only in order to precisely position, track, and interact with the spacecraft upon arrival at its destination (Moon, Mars, and Earth return). The presentation recommends a combined optical/radio frequency strategy for deep space communications.
Accurate identification of RNA editing sites from primitive sequence with deep neural networks.
Ouyang, Zhangyi; Liu, Feng; Zhao, Chenghui; Ren, Chao; An, Gaole; Mei, Chuan; Bo, Xiaochen; Shu, Wenjie
2018-04-16
RNA editing is a post-transcriptional RNA sequence alteration. Current methods have identified editing sites and facilitated research but require sufficient genomic annotations and prior-knowledge-based filtering steps, resulting in a cumbersome, time-consuming identification process. Moreover, these methods have limited generalizability and applicability in species with insufficient genomic annotations or in conditions of limited prior knowledge. We developed DeepRed, a deep learning-based method that identifies RNA editing from primitive RNA sequences without prior-knowledge-based filtering steps or genomic annotations. DeepRed achieved 98.1% and 97.9% area under the curve (AUC) in training and test sets, respectively. We further validated DeepRed using experimentally verified U87 cell RNA-seq data, achieving 97.9% positive predictive value (PPV). We demonstrated that DeepRed offers better prediction accuracy and computational efficiency than current methods with large-scale, mass RNA-seq data. We used DeepRed to assess the impact of multiple factors on editing identification with RNA-seq data from the Association of Biomolecular Resource Facilities and Sequencing Quality Control projects. We explored developmental RNA editing pattern changes during human early embryogenesis and evolutionary patterns in Drosophila species and the primate lineage using DeepRed. Our work illustrates DeepRed's state-of-the-art performance; it may decipher the hidden principles behind RNA editing, making editing detection convenient and effective.
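A model that reads primitive sequence needs only an encoding of the bases, with no genomic annotation; a minimal one-hot encoder for an RNA window (the window content below is hypothetical):

```python
import numpy as np

def one_hot_rna(seq):
    """Encode a primitive RNA sequence as a 4 x L binary matrix, the kind
    of annotation-free input a deep model like DeepRed can consume."""
    alphabet = "ACGU"
    mat = np.zeros((4, len(seq)), dtype=np.int8)
    for i, base in enumerate(seq.upper()):
        mat[alphabet.index(base), i] = 1
    return mat

# A window centred on a candidate editing site (A-to-I editing targets A).
window = "GGACUAGCAUG"
x = one_hot_rna(window)
print(x.shape, x.sum())
```

Because the input is just the raw sequence window, the same trained model can be applied to species with poor genomic annotation, which is the generalizability the abstract emphasizes.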
MicroRNAs play critical roles during plant development and in response to abiotic stresses.
de Lima, Júlio César; Loss-Morais, Guilherme; Margis, Rogerio
2012-12-01
MicroRNAs (miRNAs) have been identified as key molecules in regulatory networks. The fine-tuning role of miRNAs in addition to the regulatory role of transcription factors has shown that molecular events during development are tightly regulated. In addition, several miRNAs play crucial roles in the response to abiotic stress induced by drought, salinity, low temperatures, and metals such as aluminium. Interestingly, several miRNAs have overlapping roles with regard to development, stress responses, and nutrient homeostasis. Moreover, in response to the same abiotic stresses, different expression patterns for some conserved miRNA families among different plant species revealed different metabolic adjustments. The use of deep sequencing technologies for the characterisation of miRNA frequency and the identification of new miRNAs adds complexity to regulatory networks in plants. In this review, we consider the regulatory role of miRNAs in plant development and abiotic stresses, as well as the impact of deep sequencing technologies on the generation of miRNA data.
Project CONVERGE: Impacts of local oceanographic processes on Adélie penguin foraging ecology
NASA Astrophysics Data System (ADS)
Kohut, J. T.; Bernard, K. S.; Fraser, W.; Oliver, M. J.; Statscewich, H.; Patterson-Fraser, D.; Winsor, P.; Cimino, M. A.; Miles, T. N.
2016-02-01
During the austral summer of 2014-2015, project CONVERGE deployed a multi-platform network to sample the Adélie penguin foraging hotspot associated with Palmer Deep Canyon along the Western Antarctic Peninsula. The focus of CONVERGE was to assess the impact of prey-concentrating ocean circulation dynamics on Adélie penguin foraging behavior. Food web links between phytoplankton and zooplankton abundance and penguin behavior were examined to better understand the within-season variability in Adélie foraging ecology. Since the High Frequency Radar (HFR) network installation in November 2014, the radial component current data from each of the three sites have been combined to provide high-resolution (0.5 km) surface velocity maps. These hourly maps have revealed an incredibly dynamic system with strong fronts and frequent eddies extending across the Palmer Deep foraging area. A coordinated fleet of underwater gliders was used in concert with the HFR fields to sample the hydrography and phytoplankton distributions associated with convergent and divergent features. Three gliders mapped the along- and across-canyon variability of the hydrography, chlorophyll fluorescence and acoustic backscatter in the context of the observed surface currents and simultaneous penguin tracks. This presentation will highlight these synchronized measures of the food web in the context of the observed HFR fronts and eddies. The location and persistence of these features, coupled with ecological sampling through the food web, offer an unprecedented view of the Palmer Deep ecosystem. Specific examples will highlight how the vertical structure of the water column beneath the surface features stacks the primary and secondary producers relative to observed penguin foraging behavior. The coupling from the physics through the food web as observed by our multi-platform network gives strong evidence for the critical role that distribution patterns of lower trophic levels have on Adélie foraging.
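Combining radial component data from several HFR sites into total surface velocity vectors is, per grid cell, a small least-squares problem; a minimal sketch with hypothetical bearings and radial speeds:

```python
import numpy as np

def total_vector(bearings_deg, radial_speeds):
    """Least-squares combination of radial current components from
    multiple HFR sites into a single (u, v) surface velocity vector.
    Each site measures u*cos(theta) + v*sin(theta) along its bearing."""
    th = np.radians(bearings_deg)
    A = np.column_stack([np.cos(th), np.sin(th)])
    uv, *_ = np.linalg.lstsq(A, np.asarray(radial_speeds, float), rcond=None)
    return uv

# Hypothetical bearings (deg) from three radar sites to one grid cell,
# and the radial speed (m/s) each site would observe for a true current
# of u = 0.3, v = -0.1 m/s.
bearings = [20.0, 95.0, 160.0]
true_u, true_v = 0.3, -0.1
radials = [true_u * np.cos(np.radians(b)) + true_v * np.sin(np.radians(b))
           for b in bearings]
u, v = total_vector(bearings, radials)
print(round(u, 3), round(v, 3))
```

With two or more sites viewing a cell from different angles the system is overdetermined, which is why three-site coverage yields well-constrained hourly velocity maps.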
Fujita, Junta; Drumm, David T; Iguchi, Akira; Ueda, Yuji; Yamashita, Yuho; Ito, Masaki; Tominaga, Osamu; Kai, Yoshiaki; Ueno, Masahiro; Yamashita, Yoh
2017-10-01
The deep-sea crangonid shrimp, Argis lar, is a highly abundant species from the northern Pacific Ocean. We investigated its phylogeographic and demographic structure across the species' extensive range, using mitochondrial DNA sequence variation to evaluate the impact of deep-sea paleoenvironmental dynamics in the Sea of Japan on population histories. The haplotype network detected three distinct lineages with allopatric isolation, which roughly corresponded to the Sea of Japan (Lineage A), the northwestern Pacific off the Japanese Archipelago (Lineage B), and the Bering Sea/Gulf of Alaska (Lineage C). Lineage A showed relatively low haplotype and nucleotide diversity, a significantly negative value of Tajima's D, and a star-shaped network, suggesting that anoxic bottom-water in the Sea of Japan over the last glacial period may have brought about a reduction in the Sea of Japan population. Furthermore, unexpectedly, the distributions of Lineages A and B were closely related to the pathways of the two ocean currents, especially along the Sanriku Coast. This result indicated that A. lar could disperse across shallow straits via ocean currents, despite its deep-sea adult habitat. Bayesian inference of divergence time revealed that A. lar separated into three lineages approximately 1 million years before present (BP) in the Pleistocene, was then influenced by deep-sea paleoenvironmental change in the Sea of Japan during the last glacial period, and was followed by more recent larval dispersal via the ocean currents since ca. 6,000 years BP.
NASA Astrophysics Data System (ADS)
Lee, Y. J.; Bonfanti, C. E.; Trailovic, L.; Etherton, B.; Govett, M.; Stewart, J.
2017-12-01
At present, only a fraction of all satellite observations are ultimately used for model assimilation. The satellite data assimilation process is computationally expensive and data are often reduced in resolution to allow timely incorporation into the forecast. This problem is only exacerbated by the recent launch of the Geostationary Operational Environmental Satellite (GOES)-16 and future satellites that will provide several orders of magnitude more data. At the NOAA Earth System Research Laboratory (ESRL) we are researching the use of machine learning to improve the initial selection of satellite data to be used in the model assimilation process. In particular, we are investigating the use of deep learning. Deep learning is being applied to many image processing and computer vision problems with great success. Through our research, we are using convolutional neural networks to find and mark regions of interest (ROIs), leading to intelligent extraction of observations from satellite observation systems. These targeted observations will be used to improve the quality of data selected for model assimilation and ultimately improve the impact of satellite data on weather forecasts. Our preliminary efforts to identify the ROIs are focused in two areas: applying and comparing state-of-the-art convolutional neural network models using the analysis data from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) weather model, and using these results as a starting point to optimize a convolutional neural network model for pattern recognition on the higher resolution water vapor data from GOES-West and other satellites. This presentation will provide an introduction to our convolutional neural network model to identify and process these ROIs, along with the challenges of data preparation, training the model, and parameter optimization.
Quantitative phase microscopy using deep neural networks
NASA Astrophysics Data System (ADS)
Li, Shuai; Sinha, Ayan; Lee, Justin; Barbastathis, George
2018-02-01
Deep learning has been proven to achieve ground-breaking accuracy in various tasks. In this paper, we implemented a deep neural network (DNN) to achieve phase retrieval in a wide-field microscope. Our DNN utilized the residual neural network (ResNet) architecture and was trained using data generated by a phase SLM. The results showed that our DNN was able to reconstruct the profile of the phase target qualitatively. At the same time, large errors remained, indicating that our approach still needs improvement.
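The defining feature of the ResNet architecture named above is the skip connection, y = x + F(x), which lets each block learn only a residual correction. A minimal numpy sketch of one such block (the shapes and weight scales are illustrative, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    # y = x + F(x): the skip connection lets the block learn only a
    # residual correction, which eases training of very deep networks.
    return x + W2 @ relu(W1 @ x)

x = rng.standard_normal(8)
W1 = 0.1 * rng.standard_normal((8, 8))   # small weights: block starts
W2 = 0.1 * rng.standard_normal((8, 8))   # close to the identity map
y = residual_block(x, W1, W2)
```

Because the block reduces to the identity when F is zero, stacking many of them does not degrade the signal path, which is the usual explanation for why ResNets train well at depth.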
Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen
2018-06-13
Deep learning has been increasingly used to solve a number of problems with state-of-the-art performance in a wide variety of fields. In biology, deep learning can be applied to reduce feature extraction time and achieve high levels of performance. In our present work, we apply deep learning via two-dimensional convolutional neural networks and position-specific scoring matrices to classify Rab protein molecules, which are main regulators in membrane trafficking for transferring proteins and other macromolecules throughout the cell. The functional loss of specific Rab molecular functions has been implicated in a variety of human diseases, e.g., choroideremia, intellectual disabilities, cancer. Therefore, creating a precise model for classifying Rabs is crucial in helping biologists understand the molecular functions of Rabs and design drug targets according to such specific human disease information. We constructed a robust deep neural network for classifying Rabs that achieved an accuracy of 99%, 99.5%, 96.3%, and 97.6% for each of four specific molecular functions. Our approach demonstrates superior performance to traditional artificial neural networks. Therefore, from our proposed study, we provide both an effective tool for classifying Rab proteins and a basis for further research that can improve the performance of biological modeling using deep neural networks.
Towards deep learning with segregated dendrites
Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A
2017-01-01
Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations—the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons. PMID:29205151
Katzman, Jared L; Shaham, Uri; Cloninger, Alexander; Bates, Jonathan; Jiang, Tingting; Kluger, Yuval
2018-02-26
Medical practitioners use survival models to explore and understand the relationships between patients' covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients. The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient's characteristics on their risk of failure.
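The loss a Cox proportional hazards network minimizes is the negative log partial likelihood: for each observed failure, the patient's risk score is compared against everyone still at risk at that time. A minimal numpy sketch of that objective (in DeepSurv the risk scores come from a deep network; the toy cohort below is invented for illustration):

```python
import numpy as np

def neg_log_partial_likelihood(risk, time, event):
    # Cox negative log partial likelihood.
    # risk:  log-risk score per patient (network output)
    # time:  observed time per patient
    # event: 1 = failure observed, 0 = censored
    loss = 0.0
    for i in range(len(time)):
        if event[i] == 1:
            at_risk = risk[time >= time[i]]   # risk set at time[i]
            loss -= risk[i] - np.log(np.sum(np.exp(at_risk)))
    return loss

# Toy cohort: three observed failures and one censored patient.
risk  = np.array([0.8, -0.2, 0.1, 0.5])
time  = np.array([2.0, 5.0, 3.0, 7.0])
event = np.array([1, 1, 1, 0])
loss = neg_log_partial_likelihood(risk, time, event)
```

Each term is non-negative (a patient is always in their own risk set), and censored patients contribute only through the risk sets of others, which is how the model uses incomplete follow-up data.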
The Network Information Management System (NIMS) in the Deep Space Network
NASA Technical Reports Server (NTRS)
Wales, K. J.
1983-01-01
In an effort to better manage enormous amounts of administrative, engineering, and management data that are distributed worldwide, a study was conducted which identified the need for a network support system. The Network Information Management System (NIMS) will provide the Deep Space Network with an easily accessible source of valid information to support management activities and a more cost-effective method of acquiring, maintaining, and retrieving data.
Deep learning in bioinformatics.
Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh
2017-09-01
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.
The Deep Space Network: The challenges of the next 20 years - The 21st century
NASA Technical Reports Server (NTRS)
Dumas, L. N.; Edwards, C. D.; Hall, J. R.; Posner, E. C.
1990-01-01
The Deep Space Network (DSN) has been the radio navigation and communications link between NASA's lunar and deep space missions for 30 years. In this paper, new mission opportunities over the next 20 years are discussed. The system design drivers and the DSN architectural concepts for those challenges are briefly considered.
Deep Convolutional Framelet Denoising for Low-Dose CT via Wavelet Residual Network.
Kang, Eunhee; Chang, Won; Yoo, Jaejun; Ye, Jong Chul
2018-06-01
Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using a wavelet residual network which synergistically combines the expressive power of deep learning and the performance guarantee from framelet-based denoising algorithms. The new algorithm was inspired by the recent interpretation of the deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detail texture of the original images.
Alabama Ground Operations during the Deep Convective Clouds and Chemistry Experiment
NASA Technical Reports Server (NTRS)
Carey, Lawrence; Blakeslee, Richard; Koshak, William; Bain, Lamont; Rogers, Ryan; Kozlowski, Danielle; Sherrer, Adam; Saari, Matt; Bigelbach, Brandon; Scott, Mariana;
2013-01-01
The Deep Convective Clouds and Chemistry (DC3) field campaign investigates the impact of deep, midlatitude convective clouds, including their dynamical, physical and lightning processes, on upper tropospheric composition and chemistry. DC3 science operations took place from 14 May to 30 June 2012. The DC3 field campaign utilized instrumented aircraft and ground-based observations. The NCAR Gulfstream-V (GV) observed a variety of gas-phase species, radiation and cloud particle characteristics in the high-altitude outflow of storms while the NASA DC-8 characterized the convective inflow. Ground-based radar networks were used to document the kinematic and microphysical characteristics of storms. In order to study the impact of lightning on convective outflow composition, VHF-based lightning mapping arrays (LMAs) provided detailed three-dimensional measurements of flashes. Mobile soundings were utilized to characterize the meteorological environment of the convection. Radar, sounding and lightning observations were also used in real time to provide forecasting and mission guidance to the aircraft operations. Combined aircraft and ground-based observations were conducted at three locations, 1) northeastern Colorado, 2) Oklahoma/Texas and 3) northern Alabama, to study different modes of deep convection in a variety of meteorological and chemical environments. The objective of this paper is to summarize the Alabama ground operations and provide a preliminary assessment of the ground-based observations collected over northern Alabama during DC3. The multi-Doppler, dual-polarization radar network consisted of the UAHuntsville Advanced Radar for Meteorological and Operational Research (ARMOR), the UAHuntsville Mobile Alabama X-band (MAX) radar and the Hytop (KHTX) Weather Surveillance Radar-1988 Doppler (WSR-88D). Lightning frequency and structure were observed in near real time by the NASA MSFC Northern Alabama LMA (NALMA). Pre-storm and inflow proximity soundings were obtained with the UAHuntsville mobile sounding unit and the Redstone Arsenal (QAG) morning sounding.
NASA Technical Reports Server (NTRS)
1988-01-01
The Deep Space Network (DSN) is the largest and most sensitive scientific telecommunications and radio navigation network in the world. Its principal responsibilities are to support unmanned interplanetary spacecraft missions and to support radio and radar astronomy observations in the exploration of the solar system and the universe. The DSN facilities and capabilities as of January 1988 are described.
The deep space network. [tracking and communication support for space probes
NASA Technical Reports Server (NTRS)
1974-01-01
The objectives, functions, and organization of the deep space network are summarized. Progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported. Interface support for the Mariner Venus Mercury 1973 flight and Pioneer 10 and 11 missions is included.
The deep space network, volume 12
NASA Technical Reports Server (NTRS)
1972-01-01
Progress in the development of the DSN is reported along with TDA research and technology, network engineering, hardware, and software implementation. Included are descriptions of the DSN function and facilities, Helios mission support, Mariner Venus/Mercury 1973 mission support, Viking mission support, tracking and ground-based navigation, communications, network control and data processing, and deep space stations.
Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil
2018-01-01
With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned over from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite in applying CNNs to real world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered as an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy in the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well-classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
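The two operations named in the abstract, resizing and color quantization, are both simple array transforms. A minimal numpy sketch of one plausible form of each (uniform quantization and strided downsampling; the paper's exact scheme may differ):

```python
import numpy as np

def quantize_colors(img, levels=8):
    # Uniform quantization: map each 8-bit channel onto `levels`
    # evenly spaced values, shrinking the symbol alphabet (and hence
    # the encoded size) before transmission.
    step = 256 // levels
    return (img // step) * step + step // 2

def downsample(img, factor=2):
    # Naive resize by striding; a real system might average blocks
    # or use proper interpolation instead.
    return img[::factor, ::factor]

img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # toy grayscale image
small = downsample(quantize_colors(img, levels=4), factor=2)
```

Downsampling by 2 cuts the pixel count by 4x, and quantizing to 4 levels cuts the bits per symbol, which is the intuition behind the ~71% transmission savings the abstract reports.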
Winkler, David A; Le, Tu C
2017-01-01
Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNNs may ameliorate or remove troublesome "activity cliffs" in QSAR data sets.
Automatic Classification of volcano-seismic events based on Deep Neural Networks.
NASA Astrophysics Data System (ADS)
Titos Luzón, M.; Bueno Rodriguez, A.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.
2017-12-01
Seismic monitoring of active volcanoes is a popular remote sensing technique to detect seismic activity, often associated to energy exchanges between the volcano and the environment. As a result, seismographs register a wide range of volcano-seismic signals that reflect the nature and underlying physics of volcanic processes. Machine learning and signal processing techniques provide an appropriate framework to analyze such data. In this research, we propose a new classification framework for seismic events based on deep neural networks. Deep neural networks are composed by multiple processing layers, and can discover intrinsic patterns from the data itself. Internal parameters can be initialized using a greedy unsupervised pre-training stage, leading to an efficient training of fully connected architectures. We aim to determine the robustness of these architectures as classifiers of seven different types of seismic events recorded at "Volcán de Fuego" (Colima, Mexico). Two deep neural networks with different pre-training strategies are studied: stacked denoising autoencoder and deep belief networks. Results are compared to existing machine learning algorithms (SVM, Random Forest, Multilayer Perceptron). We used 5 LPC coefficients over three non-overlapping segments as training features in order to characterize temporal evolution, avoid redundancy and encode the signal, regardless of its duration. Experimental results show that deep architectures can classify seismic events with higher accuracy than classical algorithms, attaining up to 92% recognition accuracy. Pre-training initialization helps these models to detect events that occur simultaneously in time (such as explosions and rockfalls), increase robustness against noisy inputs, and provide better generalization. These results demonstrate deep neural networks are robust classifiers, and can be deployed in real-environments to monitor the seismicity of restless volcanoes.
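The feature extraction described above (5 LPC coefficients over three non-overlapping segments, concatenated) can be sketched with the standard autocorrelation method and the Levinson-Durbin recursion. A minimal numpy version, run on a synthetic AR(1) trace standing in for a seismic record (the paper's preprocessing details may differ):

```python
import numpy as np

def lpc(x, order):
    # LPC via the autocorrelation method and Levinson-Durbin recursion:
    # model x[n] as a weighted sum of the `order` previous samples.
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)   # a[0] is unused
    err = r[0]
    for i in range(1, order + 1):
        k = (r[i] - np.dot(a[1:i], r[1:i][::-1])) / err
        a_new = a.copy()
        a_new[i] = k
        for j in range(1, i):
            a_new[j] = a[j] - k * a[i - j]
        a = a_new
        err *= 1.0 - k * k
    return a[1:]

# Feature vector as in the abstract: 5 coefficients on each of three
# non-overlapping segments, concatenated into 15 features.
rng = np.random.default_rng(1)
x = np.zeros(600)
for n in range(1, 600):                  # synthetic AR(1) stand-in signal
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
features = np.concatenate([lpc(seg, 5) for seg in np.split(x, 3)])
```

Because the coefficient count is fixed per segment, the feature vector has the same length regardless of event duration, which is the duration-independence the abstract points to.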
Evolving Deep Networks Using HPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Steven R.; Rose, Derek C.; Johnston, Travis
While a large number of deep learning networks have been studied and published that produce outstanding results on natural image datasets, these datasets only make up a fraction of those to which deep learning can be applied. These datasets include text data, audio data, and arrays of sensors that have very different characteristics than natural images. As these “best” networks for natural images have been largely discovered through experimentation and cannot be proven optimal on some theoretical basis, there is no reason to believe that they are the optimal network for these drastically different datasets. Hyperparameter search is thus often a very important process when applying deep learning to a new problem. In this work we present an evolutionary approach to searching the possible space of network hyperparameters and construction that can scale to 18,000 nodes. This approach is applied to datasets of varying types and characteristics where we demonstrate the ability to rapidly find best hyperparameters in order to enable practitioners to quickly iterate between idea and result.
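An evolutionary hyperparameter search of the kind described can be sketched with selection, crossover, and mutation over a discrete search space. Everything below is a toy stand-in: the search space, the fitness function (which would really be validation accuracy of a trained network), and the population sizes are all hypothetical:

```python
import random

# Hypothetical search space; the paper's actual hyperparameters differ.
SPACE = {
    "layers":        [2, 3, 4, 5],
    "filters":       [16, 32, 64, 128],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def fitness(cfg):
    # Stand-in for training + validating a network; a toy score that
    # peaks at a known configuration (layers=4, filters=64, lr=1e-3).
    return (-abs(cfg["layers"] - 4)
            - abs(cfg["filters"] - 64) / 32
            - abs(cfg["learning_rate"] - 1e-3) * 100)

def evolve(generations=30, pop_size=20, seed=0):
    rnd = random.Random(seed)
    sample = lambda: {k: rnd.choice(v) for k, v in SPACE.items()}
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rnd.sample(parents, 2)
            child = {k: rnd.choice([a[k], b[k]]) for k in SPACE}  # crossover
            if rnd.random() < 0.2:                                # mutation
                key = rnd.choice(list(SPACE))
                child[key] = rnd.choice(SPACE[key])
            children.append(child)
        pop = parents + children                   # elitism: keep parents
    return max(pop, key=fitness)

best = evolve()
```

At HPC scale the expensive part, evaluating `fitness`, is embarrassingly parallel across nodes, which is what lets this style of search scale to the thousands of nodes the abstract mentions.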
Multispectral embedding-based deep neural network for three-dimensional human pose recovery
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng
2018-01-01
Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses using the corresponding two-dimensional image features. Therefore, the pose recovery performance highly depends on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can explore not only the optimum output from multiple deep networks but also the complementary properties of them. Furthermore, the distribution of each hierarchy discriminative manifold is sufficiently smooth that the training process of our MSEDNN can be effectively implemented using only a small amount of labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN can achieve the best recovery performance compared with the state-of-the-art methods.
Mace, Michael; Pavese, Nicola; Borisyuk, Roman; Bain, Peter
2017-01-01
Essential tremor (ET), a movement disorder characterised by an uncontrollable shaking of the affected body part, is often professed to be the most common movement disorder, affecting up to one percent of adults over 40 years of age. The precise cause of ET is unknown; however, pathological oscillations in a network of brain regions are implicated in the disorder. Deep brain stimulation (DBS) is a clinical therapy used to alleviate the symptoms of a number of movement disorders. DBS involves the surgical implantation of electrodes into specific nuclei in the brain. For ET the targeted region is the ventralis intermedius (Vim) nucleus of the thalamus. Though DBS is effective for treating ET, the mechanism through which the therapeutic effect is obtained is not understood. To elucidate the mechanism underlying the pathological network activity and the effect of DBS on such activity, we take a computational modelling approach combined with electrophysiological data. The pathological brain activity was recorded intra-operatively via implanted DBS electrodes, whilst simultaneously recording muscle activity of the affected limbs. We modelled the network hypothesised to underlie ET using the Wilson-Cowan approach. The modelled network exhibited oscillatory behaviour within the tremor frequency range, as did our electrophysiological data. By applying a DBS-like input we suppressed these oscillations. This study shows that the dynamics of the ET network support oscillations at the tremor frequency and the application of a DBS-like input disrupts this activity, which could be one mechanism underlying the therapeutic benefit. PMID:28068428
NASA Astrophysics Data System (ADS)
Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia
2018-03-01
Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of the Artificial Neural Network (ANN) that uses more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors measure that ability by applying it to text classification. The classification task here considers the sentiment expressed in a text, which is also called sentiment analysis. Using several combinations of text preprocessing and feature extraction techniques, we compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and Support Vector Machine (SVM). The comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and SVM and offers a better F1 score, while the feature extraction technique that most improves the modelling results is the bigram.
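The Naïve Bayes baseline with bigram features mentioned above can be sketched in a few lines: count word-pair features per class and score with add-one smoothing. The tiny English training documents below are invented for illustration (the paper uses Indonesian text):

```python
from collections import Counter
import math

def bigrams(text):
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))   # adjacent word pairs as features

class NaiveBayes:
    # Multinomial Naive Bayes over bigram counts, add-one smoothing.
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for d, c in zip(docs, labels):
            self.counts[c].update(bigrams(d))
        self.vocab = set(b for c in self.classes for b in self.counts[c])
        return self

    def predict(self, doc):
        def score(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.prior[c] + sum(
                math.log((self.counts[c][b] + 1) / total)
                for b in bigrams(doc))
        return max(self.classes, key=score)

train_docs = ["very good movie", "really good film",
              "very bad movie", "really bad film"]
train_labels = ["pos", "pos", "neg", "neg"]
clf = NaiveBayes().fit(train_docs, train_labels)
```

Bigrams capture short-range word order ("not good" vs "good"), which is one plausible reason the abstract finds them the most effective feature extraction choice for sentiment.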
Sub-microradian pointing for deep space optical telecommunications network
NASA Technical Reports Server (NTRS)
Ortiz, G.; Lee, S.; Alexander, J.
2001-01-01
This presentation will cover innovative hardware, algorithms, architectures, techniques and recent laboratory results that are applicable to deep space optical communication links ranging from the Mars Telecommunication Network to future interstellar missions.
Creating a NASA-Wide Museum Alliance
NASA Technical Reports Server (NTRS)
Sohus, Anita M.
2006-01-01
NASA's Museum Alliance is a nationwide network of informal educators at museums, science centers, and planetariums that present NASA information to their local audiences. Begun in 2002 as the Mars Museum Visualization Alliance with advisors from a dozen museums, the network has grown to over 300 people from 200 organizations, including a dozen or so international partners. The network has become a community of practice among these informal educators who work with students, educators, and the general public on a daily basis, presenting information and fielding questions about space exploration. Communications are primarily through an active listserve, regular telecons, and a password-protected website. Professional development is delivered via telecons and downloadable presentations. Current content offerings include Mars exploration, Cassini, Stardust, Genesis, Deep Impact, Earth observations, STEREO, and missions to explore beyond our solar system.
Multi-Lingual Deep Neural Networks for Language Recognition
2016-08-08
training configurations for the NIST 2011 and 2015 language recognition evaluations (LRE11 and LRE15). The best performing multi-lingual BN-DNN...very effective approach in the NIST 2015 language recognition evaluation (LRE15) open training condition [4, 5]. In this work we evaluate the impact...language are summarized in Table 2. Two language recognition tasks are used for evaluating the multi-lingual bottleneck systems. The first is the NIST
Deep Learning: A Primer for Radiologists.
Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An
2017-01-01
Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
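The training mechanism the primer describes, iteratively adjusting weights by back-propagating a corrective error signal from example input/target pairs, can be shown end-to-end on a toy problem. A minimal numpy sketch training a two-layer network on XOR (architecture, learning rate, and iteration count are all illustrative choices, not anything from the article):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2)            # forward pass, output
    dz = p - y                          # error signal (sigmoid + cross-entropy)
    dW2, db2 = h.T @ dz, dz.sum(0)
    dh = (dz @ W2.T) * (1.0 - h ** 2)   # back-propagate through tanh layer
    dW1, db1 = X.T @ dh, dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.1 * G                    # iterative weight adjustment

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
```

The CNNs discussed in the article add convolutional weight sharing on top of this same gradient machinery; the forward/backward/update loop is unchanged in principle.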
Using deep learning in image hyper spectral segmentation, classification, and detection
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Su, Zhenyu
2018-02-01
Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. The hue of a remote sensing image often varies widely, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into parts of differing hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, the machine learning technique known as deep learning with convolutional neural networks has been widely used to develop efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than on traditional image segmentation strategies.
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.
Hohman, Fred; Hodas, Nathan; Chau, Duen Horng
2017-05-01
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.
DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.
Wachinger, Christian; Reuter, Martin; Klein, Tassilo
2018-04-15
We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically, where the first separates foreground from background, and the second identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method has high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun
2016-01-01
The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks. PMID:27754380
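The two-stage routing idea in SCDNN (partition the training data into subsets by cluster centres, then answer each test point with the learner responsible for its nearest subset) can be illustrated with a toy stand-in. Everything below is an assumption for illustration: the deterministic first/last seeding, the plain 2-means in place of spectral clustering, the majority-label "model" in place of the per-subset DNN, and the data points:

```python
import math
from collections import Counter

def nearest(p, centres):
    """Index of the cluster centre closest to point p."""
    return min(range(len(centres)), key=lambda i: math.dist(p, centres[i]))

def two_means(points, iters=10):
    """Tiny 2-means stand-in for the spectral clustering stage
    (deterministic first/last seeding, illustrative only)."""
    centres = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            groups[nearest(p, centres)].append(p)
        centres = [tuple(sum(v) / len(g) for v in zip(*g)) if g else c
                   for c, g in zip(centres, groups)]
    return centres

def fit(train):
    """Split the training set by cluster and keep one majority label per
    subset -- a toy stand-in for the per-subset DNN of the abstract."""
    centres = two_means([p for p, _ in train])
    votes = [Counter(), Counter()]
    for p, y in train:
        votes[nearest(p, centres)][y] += 1
    return centres, [v.most_common(1)[0][0] for v in votes]

def predict(centres, models, p):
    """Route a test point to its nearest centre's model."""
    return models[nearest(p, centres)]

# Hypothetical traffic features: "normal" near the origin, "attack" far away.
train = [((0.0, 0.0), "normal"), ((0.0, 1.0), "normal"), ((1.0, 0.0), "normal"),
         ((5.0, 5.0), "attack"), ((5.0, 6.0), "attack"), ((6.0, 5.0), "attack")]
centres, models = fit(train)
```

A test point near (5, 5) is routed to the second cluster and labelled "attack"; one near the origin is labelled "normal".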
A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks
Wang, Changjian; Liu, Xiaohui; Jin, Shiyao
2018-01-01
Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use hand-crafted image features to complete the task without large amounts of labeled data. Meanwhile, the methods based on deep neural networks can extract image features effectively without manual feature design, but large amounts of training data are required. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm we designed in the paper to highlight image features. Then, the preprocessed images are segmented by deep neural networks, and finally semantic corrections are applied to the segmentation results. The model shows good performance in our experiment. PMID:29955227
Machine Learning and Quantum Mechanics
NASA Astrophysics Data System (ADS)
Chapline, George
The author has previously pointed out some similarities between self-organizing neural networks and quantum mechanics. These types of neural networks were originally conceived of as a way of emulating the cognitive capabilities of the human brain. Recently, extensions of these networks, collectively referred to as deep learning networks, have strengthened the connection between self-organizing neural networks and human cognitive capabilities. In this note we consider whether hardware quantum devices might be useful for emulating neural networks with human-like cognitive capabilities, or alternatively whether implementations of deep learning neural networks using conventional computers might lead to better algorithms for solving the many-body Schrödinger equation.
The Deep Space Network: An instrument for radio astronomy research
NASA Technical Reports Server (NTRS)
Renzetti, N. A.; Levy, G. S.; Kuiper, T. B. H.; Walken, P. R.; Chandlee, R. C.
1988-01-01
The NASA Deep Space Network operates and maintains the Earth-based two-way communications link for unmanned spacecraft exploring the solar system. It is NASA's policy to also make the Network's facilities available for radio astronomy observations. The Network's microwave communication systems and facilities are being continually upgraded. This revised document, first published in 1982, describes the Network's current radio astronomy capabilities and future capabilities that will be made available by the ongoing Network upgrade. The Bibliography, which includes published papers and articles resulting from radio astronomy observations conducted with Network facilities, has been updated to include papers to May 1987.
Deep space communication - A one billion mile noisy channel
NASA Technical Reports Server (NTRS)
Smith, J. G.
1982-01-01
Deep space exploration is concerned with the study of natural phenomena in the solar system with the aid of measurements made at spacecraft on deep space missions. Deep space communication refers to communication between earth and spacecraft in deep space. The Deep Space Network is an earth-based facility employed for deep space communication. It includes a network of large tracking antennas located at various positions around the earth. The goals and achievements of deep space exploration over the past 20 years are discussed along with the broad functional requirements of deep space missions. Attention is given to the differences in space loss between communication satellites and deep space vehicles, effects of the long round-trip light time on spacecraft autonomy, requirements for the use of massive nuclear power plants on spacecraft at large distances from the sun, and the kinds of scientific return provided by a deep space mission. Problems concerning a deep space link of one billion miles are also explored.
NASA Astrophysics Data System (ADS)
Alapaty, K.; Zhang, G. J.; Song, X.; Kain, J. S.; Herwehe, J. A.
2012-12-01
Short lived pollutants such as aerosols play an important role in modulating not only the radiative balance but also cloud microphysical properties and precipitation rates. In the past, to understand the interactions of aerosols with clouds, several cloud-resolving modeling studies were conducted. These studies indicated that in the presence of anthropogenic aerosols, single-phase deep convection precipitation is reduced or suppressed. On the other hand, anthropogenic aerosol pollution led to enhanced precipitation for mixed-phase deep convective clouds. To date, there have not been many efforts to incorporate such aerosol indirect effects (AIE) in mesoscale models or global models that use parameterization schemes for deep convection. Thus, the objective of this work is to implement a diagnostic cloud microphysical scheme directly into a deep convection parameterization facilitating aerosol indirect effects in the WRF-CMAQ integrated modeling systems. Major research issues addressed in this study are: What is the sensitivity of a deep convection scheme to cloud microphysical processes represented by a bulk double-moment scheme? How close are the simulated cloud water paths as compared to observations? Does increased aerosol pollution lead to increased precipitation for mixed-phase clouds? These research questions are addressed by performing several WRF simulations using the Kain-Fritsch convection parameterization and a diagnostic cloud microphysical scheme. In the first set of simulations (control simulations) the WRF model is used to simulate two scenarios of deep convection over the continental U.S. during two summer periods at 36 km grid resolution. In the second set, these simulations are repeated after incorporating a diagnostic cloud microphysical scheme to study the impacts of inclusion of cloud microphysical processes. 
Finally, in the third set, aerosol concentrations simulated by the CMAQ modeling system are supplied to the embedded cloud microphysical scheme to study impacts of aerosol concentrations on precipitation and radiation fields. Observations available from the ARM microbase data, the SURFRAD network, GOES imagery, and other reanalysis and measurements will be used to analyze the impacts of a cloud microphysical scheme and aerosol concentrations on parameterized convection.
The Telecommunications and Data Acquisition Report. [Deep Space Network
NASA Technical Reports Server (NTRS)
Posner, E. C. (Editor)
1986-01-01
This publication, one of a series formerly titled The Deep Space Network Progress Report, documents DSN progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations. In addition, developments in Earth-based radio technology as applied to geodynamics, astrophysics and the radio search for extraterrestrial intelligence are reported.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
NASA Astrophysics Data System (ADS)
Mills, Kyle; Tamblyn, Isaac
2018-03-01
We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4 ×4 Ising model. Using its success at this task, we motivate the study of the larger 8 ×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
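The quantity the network learns in this work is easy to state exactly: the nearest-neighbour Ising energy of a spin grid with periodic boundaries. A plain-Python reference implementation of that target (grid size and spin values illustrative; the paper's CNN approximates this function from examples):

```python
def ising_energy(grid):
    """E = -sum over nearest-neighbour bonds of s_i * s_j, with periodic
    boundary conditions; each bond is counted once by visiting only the
    down and right neighbours of every site."""
    n = len(grid)
    e = 0
    for i in range(n):
        for j in range(n):
            e -= grid[i][j] * grid[(i + 1) % n][j]  # down bond
            e -= grid[i][j] * grid[i][(j + 1) % n]  # right bond
    return e

aligned = [[1] * 4 for _ in range(4)]  # all spins up: the ground state
print(ising_energy(aligned))           # 32 bonds, each contributing -1
```

For the fully aligned 4x4 grid the energy is -32; flipping a single spin breaks its four bonds and raises the energy by 8.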
NASA Astrophysics Data System (ADS)
Chen, Xinyuan; Song, Li; Yang, Xiaokang
2016-09-01
Video denoising can be described as the problem of mapping a given number of noisy frames to a clean one. We propose a deep architecture based on recurrent neural networks (RNNs) for video denoising. The model learns a patch-based end-to-end mapping between clean and noisy video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as a deep recurrent neural network (deep RNN or DRNN), stacks RNN layers where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information in the temporal domain and benefits video denoising, (ii) the deep architecture has sufficient capacity to express the mapping from corrupted input videos to clean output videos, and (iii) the model generalizes to learn different mappings from videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.
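The stacking scheme the abstract describes (each RNN layer consumes the hidden-state sequence of the layer below) can be sketched with scalar toy layers. The weights, depth, and input sequence below are illustrative stand-ins, not the paper's trained model:

```python
import math

def rnn_layer(seq, w_in=0.5, w_rec=0.3):
    """Scalar toy RNN layer: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h, out = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        out.append(h)
    return out

def deep_rnn(seq, layers=3):
    """Stack RNN layers: each layer's input is the hidden-state sequence
    of the previous layer, as in the DRNN architecture."""
    for _ in range(layers):
        seq = rnn_layer(seq)
    return seq

out = deep_rnn([1.0, 0.0, 1.0])  # one hidden value per input frame/patch
```

The output sequence has the same length as the input, and every hidden state stays in (-1, 1) because of the tanh nonlinearity.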
Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system
NASA Astrophysics Data System (ADS)
Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.
2018-03-01
Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are executed on conventional von Neumann processor architectures or GPUs. This is suboptimal. Neuromorphic processors that replicate the structure of the brain are better suited to train and execute deep learning models for image segmentation by relying on massively parallel processing. However, given that they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them. Adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate for the first time spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1,0,1} using the Energy Efficient Deep Neuromorphic (EEDN) networks training algorithm. Given the 1 million neurons and 256 million synapses, the scale and size of the neural network implemented by the IBM TrueNorth allow us to execute the requisite mapping between segmented images and non-uniform intensity MR images >20 times faster than on a GPU-accelerated network and using <0.1 W. This speed and efficiency imply that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
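The constraint the abstract mentions, forcing neuron coefficients into {-1, 0, 1} for neuromorphic deployment, can be illustrated with a simple threshold rule. This is a generic ternarization sketch, not the EEDN algorithm itself, and the threshold and weight values are assumptions:

```python
def ternarize(w, threshold=0.3):
    """Map a trained real-valued weight to {-1, 0, 1}: small magnitudes
    are pruned to 0, the rest keep only their sign. The threshold is an
    illustrative hyperparameter."""
    if w > threshold:
        return 1
    if w < -threshold:
        return -1
    return 0

weights = [0.7, -0.1, -0.9, 0.2]        # hypothetical trained weights
print([ternarize(w) for w in weights])  # -> [1, 0, -1, 0]
```

Schemes used in practice typically apply such a constraint during training (so the network adapts to it) rather than as a one-shot post-hoc rounding.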
Fiber Orientation Estimation Guided by a Deep Network.
Ye, Chuyang; Prince, Jerry L
2017-09-01
Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain's white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs. However, accurate estimation of complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent diffusion signals. To estimate the mixture fractions of the dictionary atoms, a deep network is designed to solve the sparse reconstruction problem. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding a dense basis of FOs is used and a weighted ℓ1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and typical clinical dMRI data. The results demonstrate the benefit of using a deep network for FO estimation.
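The second stage's weighted ℓ1-regularized least squares problem can be solved by iterative soft-thresholding (ISTA). The sketch below is a generic ISTA solver, not FORDN's implementation; the dictionary, signal, regularization strength, and per-atom weights are all illustrative:

```python
def soft(z, t):
    """Soft-thresholding: the proximal operator of the l1 penalty."""
    return z - t if z > t else z + t if z < -t else 0.0

def ista(D, y, lam, w, steps=500, lr=0.1):
    """Minimise 0.5*||D x - y||^2 + lam * sum_k w[k] * |x[k]| by ISTA.
    D is a list of dictionary atoms (columns). Smaller w[k] makes atom k
    cheaper, which is how per-atom weights can encourage atoms consistent
    with a coarse estimate, as in FORDN's guided second stage."""
    m, n = len(y), len(D)
    x = [0.0] * n
    for _ in range(steps):
        # residual r = D x - y, gradient g = D^T r
        r = [sum(D[k][i] * x[k] for k in range(n)) - y[i] for i in range(m)]
        g = [sum(D[k][i] * r[i] for i in range(m)) for k in range(n)]
        # gradient step followed by the weighted soft-threshold
        x = [soft(x[k] - lr * g[k], lr * lam * w[k]) for k in range(n)]
    return x

# Orthonormal two-atom toy dictionary: the active coefficient is shrunk
# by lam * w[0] (closed form: soft(1.0, 0.1) = 0.9).
x_hat = ista(D=[[1.0, 0.0], [0.0, 1.0]], y=[1.0, 0.0], lam=0.1, w=[1.0, 1.0])
```

With an orthonormal dictionary the iterate converges to the closed-form soft-thresholded solution, which makes the sketch easy to check.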
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance on various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
NASA Technical Reports Server (NTRS)
Hartley, R. B.
1974-01-01
The Deep Space Network (DSN) activities in support of Project Apollo during the period of 1971 and 1972 are reported. Beginning with the Apollo 14 mission and concluding with the Apollo 17 mission, the narrative includes, (1) a mission description, (2) the NASA support requirements placed on the DSN, and, (3) a comprehensive account of the support activities provided by each committed DSN deep space communication station. Associated equipment and activities of the three elements of the DSN (the Deep Space Instrumentation Facility (DSIF), the Space Flight Operations Facility (SFOF), and the Ground Communications Facility (GCF)) used in meeting the radio-metric and telemetry demands of the missions are documented.
Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter
2017-11-01
Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
1975-01-01
Work accomplished on the Deep Space Network (DSN) was described, including the following topics: supporting research and technology, advanced development and engineering, system implementation, and DSN operations pertaining to mission-independent or multiple-mission development as well as to support of flight projects.
Assessing the Linguistic Productivity of Unsupervised Deep Neural Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Lawrence A.; Hodas, Nathan O.
Increasingly, cognitive scientists have demonstrated interest in applying tools from deep learning. One use for deep learning is in language acquisition, where it is useful to know whether a linguistic phenomenon can be learned through domain-general means. To assess whether unsupervised deep learning is appropriate, we first pose a smaller question: Can unsupervised neural networks apply linguistic rules productively, using them in novel situations? We draw from the literature on determiner/noun productivity by training an unsupervised autoencoder network and measuring its ability to combine nouns with determiners. Our simple autoencoder creates combinations it has not previously encountered, displaying a degree of overlap similar to actual children. While this preliminary work does not provide conclusive evidence for productivity, it warrants further investigation with more complex models. Further, this work helps lay the foundations for future collaboration between the deep learning and cognitive science communities.
NASA Astrophysics Data System (ADS)
Liyanagedera, Chamika M.; Sengupta, Abhronil; Jaiswal, Akhilesh; Roy, Kaushik
2017-12-01
Stochastic spiking neural networks based on nanoelectronic spin devices can be a possible pathway to achieving "brainlike" compact and energy-efficient cognitive intelligence. These computational models attempt to exploit the intrinsic device stochasticity of nanoelectronic synaptic or neural components to perform learning or inference. However, there has been limited analysis of the scaling effect of stochastic spin devices and its impact on the operation of such stochastic networks at the system level. This work attempts to explore the design space and analyze the performance of nanomagnet-based stochastic neuromorphic computing architectures for magnets with different barrier heights. We illustrate how the underlying network architecture must be modified to account for the random telegraphic switching behavior displayed by magnets with low barrier heights as they are scaled into the superparamagnetic regime. We perform a device-to-system-level analysis of a deep neural-network architecture for a digit-recognition problem on the MNIST data set.
Simple techniques for improving deep neural network outcomes on commodity hardware
NASA Astrophysics Data System (ADS)
Colina, Nicholas Christopher A.; Perez, Carlos E.; Paraan, Francis N. C.
2017-08-01
We benchmark improvements in the performance of deep neural networks (DNN) on the MNIST data set upon implementing two simple modifications to the algorithm that have little overhead computational cost. First is GPU parallelization on a commodity graphics card, and second is initializing the DNN with random orthogonal weight matrices prior to optimization. Eigenspectra analysis of the weight matrices reveals that the initially orthogonal matrices remain nearly orthogonal after training. The probability distributions from which these orthogonal matrices are drawn are also shown to significantly affect the performance of these deep neural networks.
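One way to realize the random orthogonal initialization described above is to orthonormalize a random Gaussian matrix with Gram-Schmidt; library routines typically use a QR decomposition instead. The matrix size and seed below are illustrative:

```python
import random

def orthogonal_init(n, seed=0):
    """Random n x n orthogonal weight matrix: draw Gaussian rows, then
    Gram-Schmidt them so every row is unit length and orthogonal to the
    earlier rows."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for u in rows:
            # subtract the projection of v onto each earlier row
            c = sum(a * b for a, b in zip(v, u))
            v = [a - c * b for a, b in zip(v, u)]
        norm = sum(a * a for a in v) ** 0.5
        rows.append([a / norm for a in v])
    return rows

W = orthogonal_init(4)
```

Orthogonality is easy to verify: every pair of rows has (numerically) zero dot product and every row has unit norm, so W Wᵀ is the identity.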
Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces
NASA Technical Reports Server (NTRS)
Ellman, Alvin; Carlton, Magdi
1993-01-01
The technical challenges, engineering solutions, and results of the NOCC computer-human interface design are presented. The use-centered design process was as follows: determine the design criteria for user concerns; assess the impact of design decisions on the users; and determine the technical aspects of the implementation (tools, platforms, etc.). The NOCC hardware architecture is illustrated. A graphical model of the DSN that represented the hierarchical structure of the data was constructed. The DSN spacecraft summary display is shown. Navigation from top to bottom is accomplished by clicking the appropriate button for the element about which the user desires more detail. The telemetry summary display and the antenna color decision table are also shown.
Yildirim, Özal
2018-05-01
Long-short term memory networks (LSTMs), which have recently emerged in sequential data analysis, are the most widely used type of recurrent neural network (RNN) architecture. Progress on the topic of deep learning includes successful adaptations of deep versions of these architectures. In this study, a new model for deep bidirectional LSTM network-based wavelet sequences called DBLSTM-WS was proposed for classifying electrocardiogram (ECG) signals. For this purpose, a new wavelet-based layer is implemented to generate ECG signal sequences. The ECG signals were decomposed into frequency sub-bands at different scales in this layer. These sub-bands are used as sequences for the input of LSTM networks. New network models that include unidirectional (ULSTM) and bidirectional (BLSTM) structures are designed for performance comparisons. Experimental studies have been performed for five different types of heartbeats obtained from the MIT-BIH arrhythmia database. These five types are Normal Sinus Rhythm (NSR), Ventricular Premature Contraction (VPC), Paced Beat (PB), Left Bundle Branch Block (LBBB), and Right Bundle Branch Block (RBBB). The results show that the DBLSTM-WS model gives a high recognition performance of 99.39%. It has been observed that the wavelet-based layer proposed in the study significantly improves the recognition performance of conventional networks. This proposed network structure is an important approach that can be applied to similar signal processing problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
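The wavelet layer's decomposition of a signal into frequency sub-bands at different scales can be sketched with the simplest discrete wavelet, the Haar transform; the abstract does not specify the wavelet family, so Haar here is an assumed stand-in, and the input signal is a toy example rather than an ECG record:

```python
def haar_step(signal):
    """One level of the Haar DWT: pairwise sums give the low-frequency
    (approximation) sub-band, pairwise differences the high-frequency
    (detail) sub-band. Assumes even length."""
    s2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    return approx, detail

def wavelet_sequences(signal, levels=2):
    """Multi-scale sub-band sequences of the kind fed to the LSTM inputs:
    one detail band per level, plus the final coarse approximation."""
    bands, a = [], list(signal)
    for _ in range(levels):
        a, d = haar_step(a)
        bands.append(d)
    bands.append(a)
    return bands

bands = wavelet_sequences([1.0, 1.0, 1.0, 1.0], levels=2)
```

For a constant signal every detail band is zero and all the energy ends up in the coarse approximation, which is a quick sanity check on the transform.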
Clinical Named Entity Recognition Using Deep Learning Models.
Wu, Yonghui; Jiang, Min; Xu, Jun; Zhi, Degui; Xu, Hua
2017-01-01
Clinical Named Entity Recognition (NER) is a critical natural language processing (NLP) task for extracting important concepts (named entities) from clinical narratives. Researchers have extensively investigated machine learning models for clinical NER. Recently, there have been increasing efforts to apply deep learning models to improve the performance of current clinical NER systems. This study examined two popular deep learning architectures, the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN), for extracting concepts from clinical texts. We compared the two deep neural network architectures with three baseline Conditional Random Fields (CRFs) models and two state-of-the-art clinical NER systems using the i2b2 2010 clinical concept extraction corpus. The evaluation results showed that the RNN model trained with word embeddings achieved a new state-of-the-art performance (a strict F1 score of 85.94%) for the defined clinical NER task, outperforming the best-reported system, which used both manually defined and unsupervised learning features. This study demonstrates the advantages of using deep neural network architectures for clinical concept extraction, including distributed feature representation, automatic feature learning, and the capture of long-term dependencies. This is one of the first studies to compare the two widely used deep learning models and demonstrate the superior performance of the RNN model for clinical NER.
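The strict F1 score cited above is an entity-level metric: a predicted entity counts as correct only if both its span and its type exactly match a gold annotation. A minimal sketch of that computation:

```python
# Minimal sketch of strict (exact-match) entity-level F1 for NER evaluation.
# Entities are represented as (start, end, entity_type) tuples.

def strict_f1(gold, predicted):
    """gold/predicted: sets of (start, end, entity_type) tuples."""
    if not gold or not predicted:
        return 0.0
    true_positives = len(gold & predicted)   # exact span + type matches only
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(0, 2, "problem"), (5, 7, "treatment"), (9, 10, "test")}
pred = {(0, 2, "problem"), (5, 7, "test"), (9, 10, "test")}
score = strict_f1(gold, pred)  # 2 exact matches: precision = recall = 2/3
```

Note that the (5, 7) prediction is wrong under strict matching because its type differs, even though the span is correct; "relaxed" variants of the metric would credit it.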
MUFOLD-SS: New deep inception-inside-inception networks for protein secondary structure prediction.
Fang, Chao; Shang, Yi; Xu, Dong
2018-05-01
Protein secondary structure prediction can provide important information for protein 3D structure prediction and protein function. Deep learning offers a new opportunity to significantly improve prediction accuracy. In this article, a new deep neural network architecture, named the Deep inception-inside-inception (Deep3I) network, is proposed for protein secondary structure prediction and implemented as the software tool MUFOLD-SS. The input to MUFOLD-SS is a carefully designed feature matrix corresponding to the primary amino acid sequence of a protein, consisting of a rich set of information derived from individual amino acids as well as the context of the protein sequence. Specifically, the feature matrix is a composition of physicochemical properties of amino acids, the PSI-BLAST profile, and the HHBlits profile. MUFOLD-SS is composed of a sequence of nested inception modules and maps the input matrix to either eight states or three states of secondary structure. The architecture of MUFOLD-SS enables effective processing of local and global interactions between amino acids to make accurate predictions. In extensive experiments on multiple datasets, MUFOLD-SS significantly outperformed the best existing methods and other deep neural networks. MUFOLD-SS can be downloaded from http://dslsrv8.cs.missouri.edu/~cf797/MUFoldSS/download.html.
Sabokrou, Mohammad; Fayyaz, Mohsen; Fathy, Mahmood; Klette, Reinhard
2017-02-17
This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and the subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of "many" normal cubic patches. This deep network operates on small cubic patches in the first stage, before carefully resizing the remaining candidates of interest and evaluating them at the second stage using a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect "simple" normal patches, such as background patches, while more complex normal patches are detected at deeper layers. We show that the proposed technique (a cascade of two cascaded classifiers) performs comparably to current top-performing detection and localization methods on standard benchmarks, but generally outperforms them with respect to required computation time.
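The control flow of such a cascade can be sketched generically: a cheap first stage confidently dismisses "simple" normal patches, and only the surviving candidates pay the cost of the deeper second stage. The scores and thresholds below are hypothetical, purely to illustrate the early-exit structure:

```python
# Illustrative sketch of a two-stage classifier cascade with early exit.
# The score functions and thresholds are hypothetical placeholders, not the
# paper's actual auto-encoder/CNN stages.

def cascade_classify(patches, cheap_score, expensive_score,
                     cheap_threshold=0.2, final_threshold=0.5):
    """Return a dict mapping patch id to 'normal' or 'anomaly'."""
    labels = {}
    for pid, patch in patches:
        if cheap_score(patch) < cheap_threshold:
            labels[pid] = "normal"          # early exit: clearly normal
        else:                               # escalate to the deeper classifier
            labels[pid] = ("anomaly" if expensive_score(patch) >= final_threshold
                           else "normal")
    return labels

# Toy example: treat the patch value itself as an anomaly score.
patches = [("bg", 0.05), ("odd", 0.9), ("mid", 0.3)]
result = cascade_classify(patches, cheap_score=lambda p: p,
                          expensive_score=lambda p: p)
```

The time saving comes from the first branch: background patches (typically the vast majority) never reach the expensive stage.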
Learning representations for the early detection of sepsis with deep neural networks.
Kam, Hye Jin; Kim, Ha Young
2017-10-01
Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of a regression method using conventional temporal feature extraction. Study group selection adhered to the InSight model, and the results of the deep learning-based models were compared with those of the InSight model. With deep feedforward networks, the area under the ROC curve (AUC) was 0.887 for the InSight feature set and 0.915 for the new feature set. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied, and the AUC improved to 0.929 compared with the InSight model's 0.887. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction based on domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns.
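All the comparisons above are stated in terms of AUC. A minimal sketch of how that metric is computed, via its rank-statistic interpretation (the probability that a randomly chosen positive case is scored above a randomly chosen negative case):

```python
# Minimal sketch of ROC AUC via the Mann-Whitney U statistic: the fraction of
# (positive, negative) pairs where the positive case gets the higher score,
# with ties counted as half.

def roc_auc(labels, scores):
    """labels: 1 for sepsis cases, 0 for controls; scores: model risk scores."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.4, 0.2]
auc = roc_auc(labels, scores)  # one positive is out-ranked by one negative
```

This O(P·N) pairwise form is fine for a sketch; production code sorts once and uses ranks.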
Applications of deep convolutional neural networks to digitized natural history collections.
Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J
2017-01-01
Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.
Background rejection in NEXT using deep neural networks
Renner, J.; Farbin, A.; Vidal, J. Muñoz; ...
2017-01-16
Here, we investigate the potential of using deep learning techniques to reject background events in searches for neutrinoless double beta decay with high pressure xenon time projection chambers capable of detailed track reconstruction. The differences in the topological signatures of background and signal events can be learned by deep neural networks via training over many thousands of events. These networks can then be used to classify further events as signal or background, providing an additional background rejection factor at an acceptable loss of efficiency. The networks trained in this study performed better than previous methods developed based on the use of the same topological signatures by a factor of 1.2 to 1.6, and there is potential for further improvement.
NASA Astrophysics Data System (ADS)
Fang, K.; Shen, C.; Kifer, D.; Yang, X.
2017-12-01
The Soil Moisture Active Passive (SMAP) mission has delivered high-quality and valuable sensing of surface soil moisture since 2015. However, its short time span, coarse resolution, and irregular revisit schedule have limited its use. Utilizing a state-of-the-art deep-in-time neural network, Long Short-Term Memory (LSTM), we created a system that predicts SMAP level-3 soil moisture data using climate forcing, model-simulated moisture, and static physical attributes as inputs. The system removes most of the bias of the model simulations and also improves the predicted moisture climatology, achieving a test error of 0.025 to 0.03 in most parts of the Continental United States (CONUS). As the first application of LSTM in hydrology, we show that it is more robust than simpler methods in either temporal or spatial extrapolation tests. We also discuss the roles of different predictors, the effectiveness of regularization algorithms, and the impacts of training strategies. With high fidelity to SMAP products, our data can aid various applications including data assimilation, weather forecasting, and soil moisture hindcasting.
Rispoli, Marco; Savastano, Maria Cristina; Lumbroso, Bruno
2015-11-01
To analyze the foveal microvasculature features in eyes with branch retinal vein occlusion (BRVO) using optical coherence tomography angiography based on split-spectrum amplitude decorrelation angiography technology. A total of 10 BRVO eyes (mean age 64.2 ± 8.02 years; range, 52-76 years) were evaluated by optical coherence tomography angiography (XR-Avanti; Optovue). The macular angiography scan protocol covered a 3 mm × 3 mm area. The angiography analysis focused on two retinal layers: the superficial vascular network and the deep vascular network. The following vascular morphological congestion parameters were assessed in the vein occlusion area in both the superficial and deep networks: foveal avascular zone enlargement, capillary non-perfusion, microvascular abnormalities, and vascular congestion signs. Image analyses were performed by 2 masked observers, and interobserver agreement was 0.90 (κ = 0.225, P < 0.01). In both the superficial and deep networks of BRVO eyes, a decrease in capillary density with foveal avascular zone enlargement, capillary non-perfusion, and microvascular abnormalities was observed (P < 0.01). The deep network showed the main vascular congestion at the boundary between healthy and nonperfused retina. Optical coherence tomography angiography in BRVO allows detection of foveal avascular zone enlargement, capillary nonperfusion, microvascular abnormalities, and vascular congestion signs in both the superficial and deep capillary networks in all eyes. Optical coherence tomography angiography is a potential clinical tool for BRVO diagnosis and follow-up, providing noninvasive images of the retinal capillaries and stratigraphic vascular details that have not previously been observed by standard fluorescein angiography. The normal retinal vascular nets and areas of nonperfusion and congestion can be identified at various retinal levels.
Deep Space Network Antenna Monitoring Using Adaptive Time Series Methods and Hidden Markov Models
NASA Technical Reports Server (NTRS)
Smyth, Padhraic; Mellstrom, Jeff
1993-01-01
The Deep Space Network (DSN), designed and operated by the Jet Propulsion Laboratory for the National Aeronautics and Space Administration (NASA), provides end-to-end telecommunication capabilities between Earth and various interplanetary spacecraft throughout the solar system.
The deep space network, volume 19
NASA Technical Reports Server (NTRS)
1974-01-01
Progress in the DSN is reported for November and December 1973. Research is described in the following areas: functions and facilities, mission support for flight projects, tracking and ground-based navigation, spacecraft/ground communication, network control and operations technology, and deep space stations.
NASA Astrophysics Data System (ADS)
Gan, Wen-Cong; Shu, Fu-Wen
The quantum many-body problem, with its exponentially large number of degrees of freedom, can be reduced to a tractable computational form by neural network methods [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of a deep neural network (DNN) based on deep learning is clarified by mapping it to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network which reflects the RG process has intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of the DNN. We find that the entanglement structure of the DNN is of Ryu-Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.
Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.
Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann
2017-04-01
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply the residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique can ensure that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. 
Experimental results demonstrate the significant performance gains of the proposed framework, ranking the first in classification and the second in segmentation among 25 teams and 28 teams, respectively. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
An analysis of image storage systems for scalable training of deep neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Young, Steven R; Patton, Robert M
This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets: MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options for retrieving training image data: (1) PNG-formatted image files on a local file system; (2) pushing pixel arrays from image files into a single HDF5 file on a local file system; (3) in-memory arrays holding the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks, and demonstrate reliable performance with key-value storage based back-ends. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements of training time, this study provides in-depth analysis of the causes of the performance advantages and disadvantages of each back-end for training deep neural networks. We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
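The core advantage of the key-value back-ends above is that many small records live in one container with an index, so reads avoid per-file open/close overhead. A minimal sketch of that idea with length-prefixed records in a single buffer (an illustrative toy, not LevelDB's or LMDB's actual on-disk format):

```python
# Illustrative sketch: pack many small "image" records into one container with
# an offset index, then read any record back with a single seek. This mimics
# why single-file key-value stores beat one-file-per-image layouts; it is not
# the actual LevelDB/LMDB format.

import io
import struct

def pack(records):
    """Pack {key: bytes} into one buffer; return (buffer, index of offsets)."""
    buf = io.BytesIO()
    index = {}
    for key, data in records.items():
        index[key] = (buf.tell(), len(data))    # remember where each record starts
        buf.write(struct.pack("<I", len(data))) # 4-byte little-endian length prefix
        buf.write(data)
    return buf, index

def lookup(buf, index, key):
    """Fetch one record: one seek plus two reads, no per-record file open."""
    offset, length = index[key]
    buf.seek(offset)
    (stored_len,) = struct.unpack("<I", buf.read(4))
    assert stored_len == length
    return buf.read(length)

images = {"img0": b"\x00\x01\x02", "img1": b"\xff\xfe"}
buf, idx = pack(images)
```

Real stores add ordering (B+tree / LSM-tree), crash safety, and memory-mapped reads on top of this basic record-plus-index layout.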
Pattern learning with deep neural networks in EMG-based speech recognition.
Wand, Michael; Schultz, Tanja
2014-01-01
We report on classification of phones and phonetic features from facial electromyographic (EMG) data, within the context of our EMG-based Silent Speech interface. In this paper we show that a Deep Neural Network can be used to perform this classification task, yielding a significant improvement over conventional Gaussian Mixture models. Our central contribution is the visualization of patterns which are learned by the neural network. With increasing network depth, these patterns represent more and more intricate electromyographic activity.
Neural network based satellite tracking for deep space applications
NASA Technical Reports Server (NTRS)
Amoozegar, F.; Ruggier, C.
2003-01-01
The objective of this paper is to provide a survey of neural network trends as applied to the tracking of spacecraft in deep space at Ka-band under various weather conditions, and to examine the trade-off between tracking accuracy and communication link performance.
High-power transmitter automation. [deep space network
NASA Technical Reports Server (NTRS)
Gosline, R.
1980-01-01
The current status of the transmitter automation development applicable to all transmitters in the deep space network is described. Interface and software designs are described that improve reliability and reduce the time required for subsystem turn-on and klystron saturation to less than 10 minutes.
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
Shakeout: A New Approach to Regularized Deep Neural Network Training.
Kang, Guoliang; Li, Jun; Tao, Dacheng
2018-05-01
Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a notable statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1 and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Since weights reflect the importance of connections, this sparsity makes Shakeout superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of deep architectures.
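The enhance-or-reverse rule described above can be sketched per weight. With probability tau a weight's contribution is reversed to a fixed magnitude -c·sign(w); otherwise it is rescaled and enhanced so that the operation remains unbiased in expectation (this parameterization is an assumption consistent with the abstract's description, not a verbatim transcription of the paper's equations):

```python
# Sketch of a Shakeout-style weight perturbation (assumed parameterization):
# reverse a weight's contribution with probability tau, otherwise enhance it;
# chosen so the expected perturbed weight equals the original weight.

import random

def sign(x):
    return (x > 0) - (x < 0)

def shakeout_weight(w, tau, c, rng):
    if rng.random() < tau:
        return -c * sign(w)                                   # reverse
    return w / (1 - tau) + c * tau / (1 - tau) * sign(w)      # enhance

def expected_weight(w, tau, c):
    """Closed-form expectation of the perturbed weight; equals w (unbiased)."""
    reverse = -c * sign(w)
    enhance = w / (1 - tau) + c * tau / (1 - tau) * sign(w)
    return tau * reverse + (1 - tau) * enhance

rng = random.Random(0)
w_tilde = shakeout_weight(0.8, tau=0.3, c=0.1, rng=rng)
```

With c = 0 the rule degenerates to ordinary Dropout on weights; the sign(w) term is what pushes small weights toward exactly zero, matching the sparsity claim in the abstract.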
DeepGene: an advanced cancer type classifier based on deep learning and somatic point mutations.
Yuan, Yuchen; Shi, Yi; Li, Changyang; Kim, Jinman; Cai, Weidong; Han, Zeguang; Feng, David Dagan
2016-12-23
With the development of DNA sequencing technology, large amounts of sequencing data have become available in recent years, providing unprecedented opportunities for advanced association studies between somatic point mutations and cancer types/subtypes, which may contribute to more accurate somatic point mutation based cancer classification (SMCC). However, in existing SMCC methods, issues such as high data sparsity, small sample sizes, and the application of simple linear classifiers are major obstacles to improving classification performance. To address these obstacles, we propose DeepGene, an advanced deep neural network (DNN) based classifier that consists of three steps: first, clustered gene filtering (CGF) concentrates the gene data by mutation occurrence frequency, filtering out the majority of irrelevant genes; second, indexed sparsity reduction (ISR) converts the gene data into indexes of its non-zero elements, thereby significantly suppressing the impact of data sparsity; finally, the data after CGF and ISR are fed into a DNN classifier, which extracts high-level features for accurate classification. Experimental results on our curated TCGA-DeepGene dataset, a reformulated subset of the TCGA dataset containing 12 selected types of cancer, show that CGF, ISR and the DNN all contribute to improving the overall classification performance. We further compare DeepGene with three widely adopted classifiers and demonstrate that DeepGene has at least a 24% performance improvement in terms of testing accuracy, which is mainly attributed to its deep learning module's ability to extract high-level features relating combinatorial somatic point mutations to cancer types.
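The indexed sparsity reduction (ISR) step described above is simple to sketch: a long, mostly-zero mutation vector is replaced by the indexes of its non-zero elements, padded to a fixed length so it can feed a DNN. The padding value and fixed length here are hypothetical choices for illustration:

```python
# Minimal sketch of indexed sparsity reduction (ISR): encode a sparse
# mutation vector by the positions of its non-zero entries. The pad value
# and max_len are illustrative assumptions.

def indexed_sparsity_reduction(gene_vector, max_len, pad=-1):
    """Return the first max_len non-zero indexes, padded to fixed length."""
    indexes = [i for i, v in enumerate(gene_vector) if v != 0]
    indexes = indexes[:max_len]                      # truncate if too many
    return indexes + [pad] * (max_len - len(indexes))  # pad if too few

sparse = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
compact = indexed_sparsity_reduction(sparse, max_len=5)
```

A 20,000-gene binary vector with a handful of mutations thus collapses to a short dense index list, which is the sparsity suppression the abstract refers to.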
SYNAPTIC DEPRESSION IN DEEP NEURAL NETWORKS FOR SPEECH PROCESSING.
Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima
2016-03-01
A characteristic property of biological neurons is their ability to dynamically change the synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, significantly contributes to the formation of normalized representation of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity in the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between the biological performance and artificial computing, resulting in networks that better generalize to novel signal conditions.
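The depression mechanism described above can be sketched with a simple resource-depletion model (an assumption: this is a common Tsodyks-Markram-style simplification, not necessarily the exact formulation incorporated into the paper's network layers). Each input consumes a fraction of the synaptic "resources", which slowly recover, so a sustained strong input is dynamically normalized:

```python
# Sketch of a simplified synaptic-depression model (assumed form): a resource
# variable is consumed by activity and recovers toward 1 with time constant
# tau_r, so the effective response to sustained input decays (normalization).

def depress(inputs, usage=0.5, tau_r=10.0):
    """Return effective (depressed) outputs for a sequence of inputs."""
    resources = 1.0
    outputs = []
    for x in inputs:
        outputs.append(resources * x)            # effective drive this step
        resources -= usage * resources * x       # resources consumed by activity
        resources += (1.0 - resources) / tau_r   # slow recovery toward 1
    return outputs

out = depress([1.0] * 5)
# Under constant input the effective response decays step by step,
# which is the adaptive normalization effect exploited in the paper.
```

When such a mechanism is placed after a hidden layer, a steady shift in background activity is progressively suppressed, which matches the reported robustness to unseen noise conditions.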
Deep learning methods for protein torsion angle prediction.
Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin
2017-09-18
Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, but the MAE of the psi angle is 29°, 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
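One subtlety behind the MAE figures above is that torsion angles are periodic: phi and psi live on (-180°, 180°], so the error between a prediction of 179° and a true value of -179° is 2°, not 358°. A minimal sketch of MAE with this wrap-around handled correctly:

```python
# Minimal sketch of mean absolute error for periodic torsion angles:
# differences are wrapped onto [0, 180] degrees before averaging.

def angle_error(pred, true):
    """Smallest absolute angular difference between two angles in degrees."""
    diff = abs(pred - true) % 360.0
    return min(diff, 360.0 - diff)

def angular_mae(preds, trues):
    return sum(angle_error(p, t) for p, t in zip(preds, trues)) / len(preds)

mae = angular_mae([179.0, -30.0], [-179.0, -40.0])  # errors: 2° and 10°
```

Without the wrap, near-boundary predictions would be penalized by almost 360°, badly distorting the reported 20-30° averages.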
Deep Learning for Computer Vision: A Brief Review
Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios
2018-01-01
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, namely Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619
Classification of CT brain images based on deep learning networks.
Gao, Xiaohong W; Hui, Rui; Tian, Zengmin
2017-01-01
While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, three categories of CT images (N = 285) are clustered into three groups, which are AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of Softmax scores obtained from both networks consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for classes of AD, lesion and normal respectively with an average of 87.6%. Additionally, this improved CNN network appears to outperform the others when in comparison with 2D version only of CNN network as well as a number of state of the art hand-crafted approaches. As a result, these approaches deliver accuracy rates in percentage of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60, 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. 
The paper makes two major contributions: a new 3D approach that applies deep learning to extract signature information rooted in both 2D slices and 3D blocks of CT images, and an elaborated hand-crafted 3D KAZE approach. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
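The score-level fusion described above, averaging the softmax outputs of the 2D and 3D networks, can be sketched as follows (the class ordering and example scores are illustrative, not taken from the paper):

```python
import numpy as np

def fuse_softmax(scores_2d, scores_3d):
    """Average per-class softmax scores from the 2D and 3D networks
    and pick the class with the highest fused score."""
    fused = (np.asarray(scores_2d) + np.asarray(scores_3d)) / 2.0
    return fused, int(np.argmax(fused))

# Hypothetical softmax outputs for classes (AD, lesion, normal ageing)
fused, label = fuse_softmax([0.6, 0.3, 0.1], [0.4, 0.5, 0.1])
# label 0 corresponds to AD in this illustrative ordering
```

Averaging (rather than multiplying) the scores gives each network equal weight and degrades gracefully when one network is uncertain.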
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
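As an illustration of the CRPS criterion mentioned above, the closed-form expression for a single Gaussian can be sketched as follows (a Gaussian mixture also admits a closed-form CRPS, but it is lengthier; this single-component version conveys the idea):

```python
import math

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian N(mu, sigma^2) evaluated at
    observation y; lower values indicate a better probabilistic forecast."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# Illustrative redshift forecast N(0.30, 0.05^2) scored against z_spec = 0.31
score = crps_gaussian(0.30, 0.05, 0.31)
```

Unlike a point-estimate metric, CRPS rewards both accuracy and well-calibrated sharpness of the predicted PDF.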
Applications of deep convolutional neural networks to digitized natural history collections
Frandsen, Paul B.; Dikow, Rebecca B.; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A.; Dorr, Laurence J.
2017-01-01
Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools. PMID:29200929
Maximum entropy methods for extracting the learned features of deep neural networks.
Finnegan, Alex; Song, Jun S
2017-10-01
New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
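A toy version of the anchored maximum-entropy sampling idea can be sketched with a Metropolis sampler over DNA sequences, using a stand-in scoring function in place of a trained network (the scorer, the weights `beta` and `gamma`, and the penalty form are all illustrative assumptions, not the paper's formulation):

```python
import math, random

def sample_maxent(x0, f, beta=5.0, gamma=0.5, n_steps=2000, seed=0):
    """Toy Metropolis sampler: high probability is assigned to sequences
    whose score f(x) matches f(x0), softly anchored to the input x0 by a
    Hamming-distance penalty."""
    rng = random.Random(seed)
    alphabet = "ACGT"

    def energy(x):
        ham = sum(a != b for a, b in zip(x, x0))
        return beta * (f(x) - f(x0)) ** 2 + gamma * ham

    x, e = list(x0), 0.0
    samples = []
    for _ in range(n_steps):
        i = rng.randrange(len(x))
        old = x[i]
        x[i] = rng.choice(alphabet)          # propose a single-site mutation
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp(e - e_new):
            e = e_new                        # accept
        else:
            x[i] = old                       # reject
        samples.append("".join(x))
    return samples

# Stand-in for a trained network's output: GC content of the sequence.
gc = lambda s: sum(c in "GC" for c in s) / len(s)
draws = sample_maxent("ACGTACGTACGT", gc)
```

Features shared by the accepted samples (here, sequences preserving GC content) are the ones the "network" is sensitive to, which is the intuition behind the interpretation method.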
NASA Technical Reports Server (NTRS)
Dobinson, E.
1982-01-01
General requirements for an information management system for the deep space network (DSN) are examined. A concise review of available database management system technology is presented. It is recommended that a federation of logically decentralized databases be implemented for the Network Information Management System of the DSN. Overall characteristics of the federation are specified, as well as reasons for adopting this approach.
ChemNet: A Transferable and Generalizable Deep Neural Network for Small-Molecule Property Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Garrett B.; Siegel, Charles M.; Vishnu, Abhinav
With access to large datasets, deep neural networks through representation learning have been able to identify patterns from raw data, achieving human-level accuracy in image and speech recognition tasks. However, in chemistry, availability of large standardized and labelled datasets is scarce, and with a multitude of chemical properties of interest, chemical data is inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model, to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet, learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv datasets, which are 3 separate chemical tasks that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models and contemporary MLP models that train on molecular fingerprints, and it matches the performance of the ConvGraph algorithm, the current state of the art. Furthermore, as ChemNet has been pre-trained on a large diverse chemical database, it can be used as a universal “plug-and-play” deep neural network, which accelerates the deployment of deep neural networks for the prediction of novel small-molecule chemical properties.
Modeling language and cognition with deep unsupervised learning: a tutorial overview
Zorzi, Marco; Testolin, Alberto; Stoianov, Ivilin P.
2013-01-01
Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition. PMID:23970869
Deep convolutional neural network based antenna selection in multiple-input multiple-output system
NASA Astrophysics Data System (ADS)
Cai, Jiaxin; Li, Yan; Hu, Ying
2018-03-01
Antenna selection for wireless communication systems has attracted increasing attention due to the challenge of balancing communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label of the attenuation-coefficient channel matrix is generated by optimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of the attenuation coefficients is learned on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation results demonstrate that our method achieves better performance than state-of-the-art baselines for data-driven wireless antenna selection.
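The label-generation step, choosing the antenna subset that optimizes a key performance indicator, can be sketched with Shannon capacity as the indicator (the capacity criterion, SNR, and matrix dimensions are assumptions for illustration; the paper does not specify its exact KPI here):

```python
import numpy as np
from itertools import combinations

def best_antenna_subset(H, k, snr=10.0):
    """Exhaustively pick the k transmit antennas (columns of channel matrix H)
    maximizing Shannon capacity; the subset index serves as the class label
    for training a CNN-based selector."""
    n_rx, n_tx = H.shape
    subsets = list(combinations(range(n_tx), k))
    best, best_cap = 0, -np.inf
    for idx, s in enumerate(subsets):
        Hs = H[:, s]
        # C = log2 det(I + (SNR/k) * Hs Hs^H), in bits/s/Hz
        cap = np.log2(np.linalg.det(np.eye(n_rx) + (snr / k) * Hs @ Hs.conj().T).real)
        if cap > best_cap:
            best, best_cap = idx, cap
    return best, subsets[best]

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))  # 4x6 Rayleigh channel
label, subset = best_antenna_subset(H, k=2)
```

The exhaustive search is exponential in the number of antennas, which is exactly why learning a classifier that maps H directly to the label is attractive.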
Action-Driven Visual Object Tracking With Deep Reinforcement Learning.
Yun, Sangdoo; Choi, Jongwon; Yoo, Youngjoon; Yun, Kimin; Choi, Jin Young
2018-06-01
In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to changes of target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through evaluation on the object tracking benchmark dataset, the proposed tracker is validated to achieve competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on a graphics processing unit, outperforms state-of-the-art real-time trackers with an accuracy improvement of more than 8%.
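The sequential-action idea can be sketched with a hypothetical discrete action set acting on the bounding box (the paper's exact actions and step sizes are not reproduced here; these are illustrative):

```python
def apply_action(box, action, step=0.1):
    """Sketch of discrete action dynamics for action-driven tracking: each
    network decision nudges or rescales the box (x, y, w, h) until 'stop'."""
    x, y, w, h = box
    moves = {
        "left":    (x - step * w, y, w, h),
        "right":   (x + step * w, y, w, h),
        "up":      (x, y - step * h, w, h),
        "down":    (x, y + step * h, w, h),
        "bigger":  (x, y, w * (1 + step), h * (1 + step)),
        "smaller": (x, y, w * (1 - step), h * (1 - step)),
        "stop":    (x, y, w, h),
    }
    return moves[action]

# A short action sequence, as the network might emit frame by frame.
box = (50.0, 40.0, 20.0, 10.0)
for a in ["right", "right", "down", "stop"]:
    box = apply_action(box, a)
```

Framing tracking as choosing among a few discrete actions is what lets RL train the controller from sparse or partial labels.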
Propagation Effects of Importance to the NASA/JPL Deep Space Network (DSN)
NASA Technical Reports Server (NTRS)
Slobin, Steve
1999-01-01
This paper presents propagation effects of importance to the NASA/JPL Deep Space Network (DSN). The topics include: 1) DSN Antennas; 2) Deep Space Telecom Link Basics; 3) DSN Propagation Region of Interest; 4) Ka-Band Weather Effects Models and Examples; 5) Existing Goldstone Ka-Band Atmosphere Attenuation Model; 6) Existing Goldstone Atmosphere Noise Temperature Model; and 7) Ka-Band delta (G/T) Relative to Vacuum Condition.
A novel deep learning approach for classification of EEG motor imagery signals.
Tabar, Yousef Rezaei; Halici, Ugur
2017-02-01
Signal classification is an important issue in brain computer interface (BCI) systems. Deep learning approaches have been used successfully in many recent studies to learn features and classify different types of data. However, the number of studies that employ these approaches in BCI applications is very limited. In this study we aim to use deep learning methods to improve the classification performance of EEG motor imagery signals. We investigate convolutional neural networks (CNN) and stacked autoencoders (SAE) to classify EEG motor imagery signals. A new form of input is introduced that combines time, frequency and location information extracted from the EEG signal, and it is used in a CNN having one 1D convolutional layer and one max-pooling layer. We also propose a new deep network combining CNN and SAE, in which the features extracted by the CNN are classified through the SAE. The classification performance obtained by the proposed method on BCI competition IV dataset 2b, in terms of the kappa value, is 0.547, a 9% improvement over the winning algorithm of the competition. Our results show that deep learning methods provide better classification performance compared to other state-of-the-art approaches, and that these methods can be applied successfully to BCI systems where the amount of data is large due to daily recording.
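The kappa value used as the competition metric above is standard Cohen's kappa, agreement corrected for chance; a minimal sketch (the example labels are invented):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)                              # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (po - pe) / (1.0 - pe)

# Two-class motor imagery example: 8 of 10 trials agree.
k = cohens_kappa([0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
```

Kappa is preferred over raw accuracy for BCI because a classifier that always guesses the majority class scores near zero rather than near chance accuracy.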
A Deep Neural Network Model for Rainfall Estimation UsingPolarimetric WSR-88DP Radar Observations
NASA Astrophysics Data System (ADS)
Tan, H.; Chandra, C. V.; Chen, H.
2016-12-01
Rainfall estimation based on radar measurements has been an important topic for a few decades. Generally, radar rainfall estimation is conducted through parametric algorithms such as the reflectivity-rainfall relation (i.e., the Z-R relation). On the other hand, neural networks have been developed for ground rainfall estimation based on radar measurements. This nonparametric method, which takes into account both radar observations and rainfall measurements from ground rain gauges, has been demonstrated successfully for rainfall rate estimation. However, neural network-based rainfall estimation is limited in practice due to model complexity and structure, data quality, as well as differing rainfall microphysics. Recently, the deep learning approach has been introduced in pattern recognition and machine learning. Compared to traditional neural networks, deep learning based methodologies have a larger number of hidden layers and more complex structures for data representation. Through a hierarchical learning process, high-level structured information and knowledge can be extracted automatically from low-level features of the data. In this paper, we introduce a novel deep neural network model for rainfall estimation based on ground polarimetric radar measurements. The model is designed to capture the complex abstractions of radar measurements at different levels using multiple layers of feature identification and extraction. The abstractions at different levels can be used independently or fused with other data sources such as satellite-based rainfall products and/or topographic data to represent the rain characteristics at a certain location. In particular, WSR-88DP radar and rain gauge data collected in the Dallas-Fort Worth Metroplex and Florida are used extensively to train the model, and for demonstration purposes. 
Quantitative evaluation of the deep neural network based rainfall products will also be presented, which is based on an independent rain gauge network.
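For contrast with the deep model, the parametric Z-R baseline mentioned above can be sketched with the classic Marshall-Palmer coefficients (a = 200, b = 1.6; operational coefficients vary by radar and rain regime):

```python
def zr_rain_rate(dbz, a=200.0, b=1.6):
    """Parametric Z-R relation Z = a * R^b, inverted for rain rate R (mm/h).
    Input reflectivity is in dBZ; Z is in linear units of mm^6/m^3."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

rate = zr_rain_rate(40.0)   # ~11.5 mm/h of rain at 40 dBZ with these coefficients
```

The fixed (a, b) pair is precisely what the nonparametric neural approaches replace: a single power law cannot capture varying drop-size distributions across storm types.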
NASA Astrophysics Data System (ADS)
Divett, T.; Ingham, M.; Beggan, C. D.; Richardson, G. S.; Rodger, C. J.; Thomson, A. W. P.; Dalzell, M.
2017-10-01
Transformers in New Zealand's South Island electrical transmission network have been impacted by geomagnetically induced currents (GIC) during geomagnetic storms. We explore the impact of GIC on this network by developing a thin-sheet conductance (TSC) model for the region, a geoelectric field model, and a GIC network model. (The TSC model comprises a thin-sheet conductance map with an underlying layered resistivity structure.) Using modeling approaches that have been applied successfully in the United Kingdom and Ireland, we used a thin-sheet model to calculate the electric field as a function of the magnetic field and ground conductance. We developed a TSC model based on magnetotelluric surveys, geology, and bathymetry, modified to account for offshore sediments. Using this representation, the thin-sheet model gave good agreement with measured impedance vectors. Driven by a spatially uniform magnetic field variation, the thin-sheet model yields electric fields dominated by the ocean-land boundary, with effects due to the deep ocean and steep terrain. There is a strong tendency for the electric field to align northwest-southeast, irrespective of the direction of the magnetic field. Applying this electric field to a GIC network model, we show that modeled GIC are dominated by northwest-southeast transmission lines rather than the east-west lines usually assumed to dominate.
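The GIC network model step is commonly formulated with the Lehtinen-Pirjola method; a minimal sketch (the two-substation network values are purely illustrative, not the South Island network):

```python
import numpy as np

def gic(Y_network, Z_earthing, J_perfect):
    """Lehtinen-Pirjola earthing GIC: I = (1 + Y Z)^-1 J, where J are the
    currents that would flow with perfect (zero-impedance) earthings,
    Y is the network admittance matrix and Z the earthing impedance matrix."""
    n = len(J_perfect)
    return np.linalg.solve(np.eye(n) + Y_network @ Z_earthing, J_perfect)

# Toy two-substation network joined by a single line.
Y = np.array([[0.5, -0.5], [-0.5, 0.5]])   # network admittance (S)
Z = np.diag([1.0, 2.0])                     # substation earthing impedances (ohm)
J = np.array([10.0, -10.0])                 # perfect-earthing currents (A)
I = gic(Y, Z, J)
```

With zero earthing impedance the formula reduces to I = J, a handy sanity check on any implementation.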
Microwave analog fiber-optic link for use in the deep space network
NASA Technical Reports Server (NTRS)
Logan, R. T., Jr.; Lutes, G. F.; Maleki, L.
1990-01-01
A novel fiber-optic system with dynamic range of up to 150 dB-Hz for transmission of microwave analog signals is described. The design, analysis, and laboratory evaluations of this system are reported, and potential applications in the NASA/JPL Deep Space Network are discussed.
Evolutionary Scheduler for the Deep Space Network
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Lee, Seungwon; Wang, Yeou-Fang; Zheng, Hua; Chau, Savio; Tung, Yu-Wen; Terrile, Richard J.; Hovden, Robert
2010-01-01
A computer program assists human schedulers in satisfying, to the maximum extent possible, competing demands from multiple spacecraft missions for utilization of the transmitting/receiving Earth stations of NASA's Deep Space Network. The program embodies a concept of optimal scheduling to attain multiple objectives in the presence of multiple constraints.
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixations in free-viewing scenes with an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvement to human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
Cytopathological image analysis using deep-learning networks in microfluidic microscopy.
Gopakumar, G; Hari Babu, K; Mishra, Deepak; Gorthi, Sai Siva; Sai Subrahmanyam, Gorthi R K
2017-01-01
Cytopathologic testing is one of the most critical steps in the diagnosis of diseases, including cancer. However, the task is laborious and demands skill. The associated high cost and low throughput have drawn considerable interest in automating the testing process. Several neural network architectures were designed to provide human expertise to machines. In this paper, we explore the feasibility of using deep-learning networks for cytopathologic analysis by performing the classification of three important unlabeled, unstained leukemia cell lines (K562, MOLT, and HL60). The cell images used in the classification are captured using a low-cost, high-throughput cell imaging technique: microfluidics-based imaging flow cytometry. We demonstrate that, without any conventional fine segmentation followed by explicit feature extraction, the proposed deep-learning algorithms effectively classify the coarsely localized cell lines. We show that the designed deep belief network as well as the deeply pretrained convolutional neural network outperform conventionally used decision systems and are important in the medical domain, where the availability of labeled data for training is limited. We hope that our work enables the development of a clinically significant high-throughput microfluidic microscopy-based tool for disease screening/triaging, especially in resource-limited settings.
Charron, Odelin; Lallement, Alex; Jarnet, Delphine; Noblet, Vincent; Clavier, Jean-Baptiste; Meyer, Philippe
2018-04-01
Stereotactic treatments are today the reference techniques for the irradiation of brain metastases in radiotherapy. The dose per fraction is very high, and delivered in small volumes (diameter <1 cm). As part of these treatments, effective detection and precise segmentation of lesions are imperative. Many methods based on deep-learning approaches have been developed for the automatic segmentation of gliomas, but very little for that of brain metastases. We adapted an existing 3D convolutional neural network (DeepMedic) to detect and segment brain metastases on MRI. At first, we sought to adapt the network parameters to brain metastases. We then explored the single or combined use of different MRI modalities, by evaluating network performance in terms of detection and segmentation. We also studied the interest of increasing the database with virtual patients or of using an additional database in which the active parts of the metastases are separated from the necrotic parts. Our results indicated that a deep network approach is promising for the detection and the segmentation of brain metastases on multimodal MRI. Copyright © 2018 Elsevier Ltd. All rights reserved.
Deep learning for brain tumor classification
NASA Astrophysics Data System (ADS)
Paul, Justin S.; Plassard, Andrew J.; Landman, Bennett A.; Fabbri, Daniel
2017-03-01
Recent research has shown that deep learning methods perform well on supervised machine learning image classification tasks. The purpose of this study is to apply deep learning methods to classify brain images with different tumor types: meningioma, glioma, and pituitary. A publicly released dataset contains 3,064 T1-weighted contrast-enhanced MRI (CE-MRI) brain images from 233 patients with either meningioma, glioma, or pituitary tumors, split across axial, coronal, and sagittal planes. This research focuses on the 989 axial images from 191 patients in order to avoid confusing the neural networks with three different planes containing the same diagnosis. Two types of neural networks were used in classification: fully connected and convolutional neural networks. Within these two categories, further tests were computed via augmentation of the original 512×512 axial images. Training neural networks on the axial data has proven accurate, with an average five-fold cross-validation accuracy of 91.43% for the best trained network. This result demonstrates that a more general method (i.e. deep learning) can outperform specialized methods that require image dilation and ring-forming subregions on tumors.
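The five-fold cross-validation protocol used to report the 91.43% figure can be sketched as follows, with a nearest-centroid stand-in for the neural networks (the data and classifier are illustrative; only the evaluation protocol mirrors the paper):

```python
import numpy as np

def five_fold_accuracy(X, y, seed=0):
    """Shuffle, split into 5 folds, train on 4 and test on 1, and average
    the held-out accuracies (nearest-centroid classifier as a stand-in)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), 5)
    accs = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        cents = {c: X[train][y[train] == c].mean(axis=0) for c in np.unique(y[train])}
        labels = np.array(sorted(cents))
        C = np.stack([cents[c] for c in labels])
        # Assign each test point to the nearest class centroid.
        pred = labels[np.argmin(((X[test][:, None, :] - C[None]) ** 2).sum(-1), axis=1)]
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

# Two well-separated synthetic classes should be classified near perfectly.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(6, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
acc = five_fold_accuracy(X, y)
```

Averaging over folds gives a less optimistic estimate than a single train/test split, which matters when only 989 images are available.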
Processing of chromatic information in a deep convolutional neural network.
Flachot, Alban; Gegenfurtner, Karl R
2018-04-01
Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to the ones performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained on object classification of over 1.2M images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogously to what is known of color responsive cells in the primate thalamus. Moreover, these directions are decorrelated and lead to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found, in analogy to the early stages of the primate visual system, that chromatic and achromatic information were segregated in the early layers of the network. Units in the higher layers of AlexNet exhibit on average a lower responsivity for color than units at earlier stages.
NASA Astrophysics Data System (ADS)
Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo
2018-02-01
Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image of a sharp kernel to that of a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm slice thickness, and two reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to the input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting CT images of the sharp kernel to the standard kernel, with the criterion of minimizing the mean squared error between the input and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the B50f, B30f, and converted B50f data sets. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f is reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique to CT kernel conversion and reducing the kernel-induced variability of EI quantification. 
The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI even when the patient CT scans were performed with different kernels.
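The emphysema indices compared above follow standard definitions: RA950 is the percentage of lung voxels below -950 HU and Perc15 is the 15th percentile of the HU histogram. A minimal sketch (the sample HU values are invented):

```python
import numpy as np

def emphysema_indices(hu):
    """RA950 (% of voxels below -950 HU) and Perc15 (15th HU percentile),
    the two standard CT emphysema indices."""
    hu = np.asarray(hu, dtype=float)
    ra950 = 100.0 * np.mean(hu < -950.0)
    perc15 = float(np.percentile(hu, 15))
    return ra950, perc15

hu = np.array([-980.0, -960.0, -940.0, -900.0, -800.0])
ra950, perc15 = emphysema_indices(hu)
# ra950 is 40.0 here: 2 of the 5 voxels lie below -950 HU
```

Both indices shift with reconstruction kernel because sharp kernels add high-frequency noise to the HU histogram, which is the variability the kernel-conversion network suppresses.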
Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A
2017-03-01
Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous diseases identified from those images. Traditional image classification methods, which combine hand-crafted image feature descriptors with various classifiers, are not able to effectively improve the accuracy rate and meet the high requirements of biomedical image classification. The same holds true for artificial neural network models directly trained with limited biomedical images as training data, or directly used as a black box to extract deep features based on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technical difficulties in traditional image classification methods. Moreover, we do not need large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the use of a simple data augmentation method and fast convergence, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. 
We propose a robust automated end-to-end classifier for biomedical images based on a domain transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Computations in the deep vs superficial layers of the cerebral cortex.
Rolls, Edmund T; Mills, W Patrick C
2017-11-01
A fundamental question is how the cerebral neocortex operates functionally and computationally. The cerebral neocortex, with its superficial and deep layers and highly developed recurrent collateral systems that provide a basis for memory-related processing, might perform somewhat different computations in the superficial and deep layers. Here we take into account the quantitative connectivity within and between laminae. Using integrate-and-fire neuronal network simulations that incorporate this connectivity, we first show that attractor networks implemented in the deep layers that are activated by the superficial layers could be partly independent, in that the deep layers might have a different time course, which, because of adaptation, might be more transient and useful for outputs from the neocortex. In contrast, the superficial layers could implement more prolonged firing, useful for slow learning and for short-term memory. Second, we show that a different type of computation could in principle be performed in the superficial and deep layers: the superficial layers could operate as a discrete attractor network useful for categorisation and for feeding information forward up a cortical hierarchy, whereas the deep layers could operate as a continuous attractor network useful for providing a spatially and temporally smooth output to output systems in the brain. A key advance is that we draw attention to the functions of the recurrent collateral connections between cortical pyramidal cells, often omitted in canonical models of the neocortex, and address principles of operation of the neocortex by which the superficial and deep layers might be specialized for different types of attractor-related memory functions implemented by the recurrent collaterals. Copyright © 2017 Elsevier Inc. All rights reserved.
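A minimal discrete attractor network of the kind discussed can be reduced to a one-pattern Hopfield sketch, far simpler than the integrate-and-fire simulations in the paper but showing the same recall-to-attractor behavior:

```python
import numpy as np

def hopfield_recall(pattern, probe, steps=5):
    """One stored +/-1 pattern with Hebbian weights; synchronous sign
    updates pull a corrupted probe back onto the stored attractor."""
    p = np.asarray(pattern, dtype=float)
    W = np.outer(p, p) / len(p)     # Hebbian recurrent collateral weights
    np.fill_diagonal(W, 0.0)        # no self-connections
    x = np.asarray(probe, dtype=float)
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0             # break ties deterministically
    return x

stored = np.array([1, -1, 1, 1, -1, -1, 1, -1])
noisy = stored.copy()
noisy[0] *= -1
noisy[3] *= -1                      # flip two bits of the stored pattern
recalled = hopfield_recall(stored, noisy)
```

The recurrent-collateral weight matrix is what gives the network its basin of attraction; the discrete (sign) update corresponds to the categorising superficial-layer mode described above.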
Adaptive template generation for amyloid PET using a deep learning approach.
Kang, Seung Kwan; Seo, Seongho; Shin, Seong A; Byun, Min Soo; Lee, Dong Young; Kim, Yu Kyeong; Lee, Dong Soo; Lee, Jae Sung
2018-05-11
Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [a convolutional auto-encoder (CAE) and a generative adversarial network (GAN)] that produce individually adaptive PET templates. More specifically, the networks were trained using 685,100 pieces of augmented data generated by rotating 527 randomly selected datasets and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space and the label was the spatially normalized 3D PET image using the transformation parameters obtained from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between the slices (in 0.02 s). As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research. © 2018 Wiley Periodicals, Inc.
2006-10-19
This image shows the NASA Deep Impact spacecraft being built at Ball Aerospace & Technologies Corporation, Boulder, Colo., on July 2, 2005. The impactor was released from the Deep Impact flyby spacecraft.
Xu, W; LeBeau, J M
2018-05-01
We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼ 0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.
Deep Learning and Its Applications in Biomedicine.
Cao, Chensi; Liu, Feng; Tan, Hai; Song, Deshou; Shu, Wenjie; Li, Weizhong; Zhou, Yiming; Bo, Xiaochen; Xie, Zhi
2018-02-01
Advances in biological and medical technologies have been providing us with explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives for the future directions in the field of deep learning. Copyright © 2018. Production and hosting by Elsevier B.V.
Operability engineering in the Deep Space Network
NASA Technical Reports Server (NTRS)
Wilkinson, Belinda
1993-01-01
Many operability problems exist at the three Deep Space Communications Complexes (DSCCs) of the Deep Space Network (DSN). Four years ago, the position of DSN Operability Engineer was created to provide the opportunity for someone to take a system-level approach to solving these problems. Since that time, a process has been developed for collaboration between operations personnel and development engineers and for enforcing user-interface standards in software designed for the DSCCs. Plans call for the participation of operations personnel in the product life cycle to expand in the future.
Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Han, Jiequn; Wang, Han; Car, Roberto; E, Weinan
2018-04-01
We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.
Using Deep Learning for Gamma Ray Source Detection at the First G-APD Cherenkov Telescope (FACT)
NASA Astrophysics Data System (ADS)
Bieker, Jacob
2018-06-01
Finding gamma-ray sources is of paramount importance for Imaging Air Cherenkov Telescopes (IACT). This study looks at using deep neural networks on data from the First G-APD Cherenkov Telescope (FACT) as a proof-of-concept of finding gamma-ray sources with deep learning for the upcoming Cherenkov Telescope Array (CTA). In this study, FACT’s individual photon level observation data from the last 5 years was used with convolutional neural networks to determine if one or more sources were present. The neural networks used various architectures to determine which architectures were most successful in finding sources. Neural networks offer a promising method for finding faint and extended gamma-ray sources for IACTs. With further improvement and modifications, they offer a compelling method for source detection for the next generation of IACTs.
Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo
2016-01-01
Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantage of automatically learned representations and strong generalization ability. A network structure specified for nodule images is proposed to solve the recognition of three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). Deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 nonnodules, from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and that it consistently outperforms the competing methods.
Deep Gaze Velocity Analysis During Mammographic Reading for Biometric Identification of Radiologists
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Hong-Jun; Alamudun, Folami T.; Hudson, Kathy
2018-01-24
Several studies have confirmed that the gaze velocity of the human eye can be utilized as a behavioral biometric or personalized biomarker. In this study, we leverage the local feature representation capacity of convolutional neural networks (CNNs) for eye gaze velocity analysis as the basis for biometric identification of radiologists performing breast cancer screening. Using gaze data collected from 10 radiologists reading 100 mammograms of various diagnoses, we compared the performance of a CNN-based classification algorithm with two deep learning classifiers, deep neural network and deep belief network, and a previously presented hidden Markov model classifier. The study showed that the CNN classifier is superior compared to alternative classification methods based on macro F1-scores derived from 10-fold cross-validation experiments. Our results further support the efficacy of eye gaze velocity as a biometric identifier of medical imaging experts.
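The raw signal behind this kind of analysis is a gaze-velocity sequence. As a minimal sketch (not the authors' pipeline; the sample format, in degrees of visual angle with timestamps in seconds, is an assumption), velocities can be derived from timestamped gaze positions like this:

```python
import math

def gaze_velocities(samples):
    """Convert raw gaze samples (x_deg, y_deg, t_sec) into a velocity
    sequence in deg/s, the kind of 1-D signal a CNN classifier consumes."""
    velocities = []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip dropped or duplicated samples
        dist = math.hypot(x1 - x0, y1 - y0)  # angular displacement, degrees
        velocities.append(dist / dt)
    return velocities

# Example: three samples 10 ms apart, each moving 1 degree -> ~100 deg/s
samples = [(0.0, 0.0, 0.00), (1.0, 0.0, 0.01), (1.0, 1.0, 0.02)]
print(gaze_velocities(samples))
```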
Enhanced Higgs boson to τ⁺τ⁻ search with deep learning.
Baldi, P; Sadowski, P; Whiteson, D
2015-03-20
The Higgs boson is thought to provide the interaction that imparts mass to the fundamental fermions, but while measurements at the Large Hadron Collider (LHC) are consistent with this hypothesis, current analysis techniques lack the statistical power to cross the traditional 5σ significance barrier without more data. Deep learning techniques have the potential to increase the statistical power of this analysis by automatically learning complex, high-level data representations. In this work, deep neural networks are used to detect the decay of the Higgs boson to a pair of tau leptons. A Bayesian optimization algorithm is used to tune the network architecture and training algorithm hyperparameters, resulting in a deep network of eight nonlinear processing layers that improves upon the performance of shallow classifiers even without the use of features specifically engineered by physicists for this application. The improvement in discovery significance is equivalent to an increase in the accumulated data set of 25%.
Du, Tianchuan; Liao, Li; Wu, Cathy H; Sun, Bilin
2016-11-01
Protein-protein interactions play essential roles in many biological processes. Acquiring knowledge of the residue-residue contact information of two interacting proteins is not only helpful in annotating functions for proteins, but also critical for structure-based drug design. The prediction of the protein residue-residue contact matrix of the interfacial regions is challenging. In this work, we introduced deep learning techniques (specifically, stacked autoencoders) to build deep neural network models to tackle the residue-residue contact prediction problem. In tandem with interaction profile Hidden Markov Models, which were first used to extract Fisher score features from protein sequences, stacked autoencoders were deployed to extract and learn hidden abstract features. The deep learning model showed significant improvement over the traditional machine learning model, Support Vector Machines (SVM), with the overall accuracy increased by 15% from 65.40% to 80.82%. We showed that the stacked autoencoders could extract novel features out of the Fisher score features, which can be utilized by deep neural networks and other classifiers to enhance learning. It is further shown that deep neural networks have significant advantages over SVM in making use of the newly extracted features. Copyright © 2016. Published by Elsevier Inc.
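The building block of a stacked autoencoder is a single layer trained to reconstruct its input through a narrower hidden code. A minimal NumPy sketch of that idea follows (toy random data and dimensions stand in for the paper's Fisher-score features; this is an illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors (hypothetical stand-ins for Fisher score features).
X = rng.normal(size=(64, 10))

# One autoencoder layer: encode 10 -> 4, decode 4 -> 10.
W1 = rng.normal(scale=0.1, size=(10, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.1, size=(4, 10)); b2 = np.zeros(10)

def forward(X):
    H = np.tanh(X @ W1 + b1)  # hidden "abstract" features
    R = H @ W2 + b2           # linear reconstruction of the input
    return H, R

def loss(X):
    _, R = forward(X)
    return float(np.mean((R - X) ** 2))

lr = 0.05
initial = loss(X)
for _ in range(200):          # plain batch gradient descent
    H, R = forward(X)
    dR = 2 * (R - X) / X.size
    dW2 = H.T @ dR; db2 = dR.sum(axis=0)
    dZ = (dR @ W2.T) * (1 - H ** 2)  # tanh derivative
    dW1 = X.T @ dZ; db1 = dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
print(loss(X) < initial)      # reconstruction error has decreased
```

Stacking means training such a layer, then training another autoencoder on its hidden codes H, and so on, before fine-tuning the whole network as a classifier.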
Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.
Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui
2017-01-01
Underwater inherent optical properties (IOPs) are the fundamental clues to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision fields with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image that may even be noisy. The imaging depth information is used as an aided input to help our model make better decisions.
Preliminary Concept of Operations for the Deep Space Array-Based Network
NASA Astrophysics Data System (ADS)
Bagri, D. S.; Statman, J. I.
2004-05-01
The Deep Space Array-Based Network (DSAN) will be an array-based system, part of a greater than 1000 times increase in the downlink/telemetry capability of the Deep Space Network. The key function of the DSAN is provision of cost-effective, robust telemetry, tracking, and command services to the space missions of NASA and its international partners. This article presents an expanded approach to the use of an array-based system. Instead of using the array as an element in the existing Deep Space Network (DSN), relying to a large extent on the DSN infrastructure, we explore a broader departure from the current DSN, using fewer elements of the existing DSN, and establishing a more modern concept of operations. For example, the DSAN will have a single 24 x 7 monitor and control (M&C) facility, while the DSN has four 24 x 7 M&C facilities. The article gives the architecture of the DSAN and its operations philosophy. It also briefly describes the customer's view of operations, operations management, logistics, anomaly analysis, and reporting.
2017-01-01
Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/ PMID:28749969
Deep Recurrent Neural Networks for Human Activity Recognition
Murad, Abdulmajid; Pyun, Jae-Young
2017-11-06
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103
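The LSTM recurrence that frees DRNNs from fixed-length input windows can be sketched in a few lines of NumPy. This toy cell uses random, untrained weights, and the dimensions and fused gate layout are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 8

# One fused weight matrix covering the input (i), forget (f),
# output (o) and candidate-cell (g) gates.
W = rng.normal(scale=0.1, size=(n_in + n_hidden, 4 * n_hidden))
b = np.zeros(4 * n_hidden)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq):
    """Run the cell over a (timesteps, n_in) sequence and return the
    final hidden state, a fixed-size summary of the whole sequence."""
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x_t in x_seq:
        z = np.concatenate([x_t, h]) @ W + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # long-range memory
        h = sigmoid(o) * np.tanh(c)                   # per-step output
    return h

# Sequences of different lengths map to the same fixed-size representation,
# which is what lets recognition avoid fixed-length input windows.
h_short = lstm_forward(rng.normal(size=(20, n_in)))
h_long = lstm_forward(rng.normal(size=(500, n_in)))
print(h_short.shape, h_long.shape)  # (8,) (8,)
```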
LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices
He, Ziyang; Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan
2018-04-17
By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performances than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example of how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models of equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use on mobile devices. PMID:29673171
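The heterogeneous-kernel idea (filters of several sizes applied to the same input within one layer, each capturing features at a different temporal scale) can be illustrated with plain NumPy 1-D convolutions. This is a hypothetical sketch of the concept, not LiteNet's actual layer:

```python
import numpy as np

def hetero_conv_block(signal, kernels):
    """Apply filters of different lengths to one 1-D signal (e.g. an ECG
    window) and stack the feature maps. 'same' padding keeps every map
    the length of the input so the maps can be concatenated."""
    maps = [np.convolve(signal, k, mode="same") for k in kernels]
    return np.stack(maps)  # shape: (n_filters, len(signal))

ecg = np.sin(np.linspace(0, 20, 200))  # stand-in for an ECG window
# Three smoothing filters at different scales (illustrative weights only).
kernels = [np.ones(3) / 3, np.ones(7) / 7, np.ones(15) / 15]
features = hetero_conv_block(ecg, kernels)
print(features.shape)  # (3, 200)
```

In a trained network the kernel weights would be learned rather than fixed, but the layout of the output feature maps is the same.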
An introduction to deep learning on biological sequence data: examples and solutions.
Jurtz, Vanessa Isabell; Johansen, Alexander Rosenberg; Nielsen, Morten; Almagro Armenteros, Jose Juan; Nielsen, Henrik; Sønderby, Casper Kaae; Winther, Ole; Sønderby, Søren Kaae
2017-11-15
Deep neural network architectures such as convolutional and long short-term memory networks have become increasingly popular machine learning tools in recent years. The availability of greater computational resources, more data, new algorithms for training deep models, and easy-to-use libraries for implementing and training neural networks are the drivers of this development. The use of deep learning has been especially successful in image recognition, and the development of tools, applications, and code examples is in most cases centered within this field rather than within biology. Here, we aim to further the development of deep learning methods within biology by providing application examples and ready-to-use, adaptable code templates. Given such examples, we illustrate how architectures consisting of convolutional and long short-term memory neural networks can relatively easily be designed and trained to state-of-the-art performance on three biological sequence problems: prediction of subcellular localization, protein secondary structure, and the binding of peptides to MHC Class II molecules. All implementations and datasets are available online to the scientific community at https://github.com/vanessajurtz/lasagne4bio. skaaesonderby@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
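A typical first step in templates of this kind is encoding a biological sequence as a one-hot matrix, the standard input layout for convolutional and LSTM layers. A minimal sketch, independent of the repository's actual code:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def one_hot(sequence, alphabet=AMINO_ACIDS):
    """Encode a protein sequence as a (length, alphabet_size) binary
    matrix: one row per residue, a single 1 in that residue's column."""
    index = {aa: i for i, aa in enumerate(alphabet)}
    encoded = np.zeros((len(sequence), len(alphabet)), dtype=np.float32)
    for pos, residue in enumerate(sequence):
        encoded[pos, index[residue]] = 1.0
    return encoded

x = one_hot("MKTAYIAK")
print(x.shape, x.sum())  # (8, 20) 8.0
```

A convolutional layer then slides filters over the length axis, while an LSTM consumes the rows one position at a time.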
NASA Technical Reports Server (NTRS)
Kuiper, T. B. H.; Resch, G. M.
2000-01-01
The increasing load on NASA's Deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.
Subnanosecond GPS-based clock synchronization and precision deep-space tracking
NASA Technical Reports Server (NTRS)
Dunn, C. E.; Lichten, S. M.; Jefferson, D. C.; Border, J. S.
1992-01-01
Interferometric spacecraft tracking is accomplished by the Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals at ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3-nsec error in clock synchronization resulting in an 11-nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock offsets and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft tracking without near-simultaneous quasar-based calibrations. Solutions are presented for a worldwide network of Global Positioning System (GPS) receivers in which the formal errors for DSN clock offset parameters are less than 0.5 nsec. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry (VLBI), as well as the examination of clock closure, suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation-error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
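The quoted sensitivity can be checked with the small-angle relation between a differential timing error and the inferred arrival direction across a baseline: a clock-synchronization error Δt over a baseline B shifts the apparent direction by roughly Δθ ≈ cΔt/B.

```python
# Check the abstract's numbers: a 0.3 ns clock error over an 8000 km
# baseline should correspond to roughly an 11 nrad angular error.
c = 299_792_458.0   # speed of light, m/s
delta_t = 0.3e-9    # clock synchronization error, s
B = 8_000e3         # DSN baseline, m

delta_theta = c * delta_t / B        # radians
print(round(delta_theta * 1e9, 1))   # ~11.2 nrad, matching the quoted 11 nrad
```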
Sub-nanosecond clock synchronization and precision deep space tracking
NASA Technical Reports Server (NTRS)
Dunn, Charles; Lichten, Stephen; Jefferson, David; Border, James S.
1992-01-01
Interferometric spacecraft tracking is accomplished at the NASA Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals to ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3 ns error in clock synchronization resulting in an 11 nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock synchronization and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft observations without near-simultaneous quasar-based calibrations. Solutions are presented for a global network of GPS receivers in which the formal errors in clock offset parameters are less than 0.5 ns. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry and the examination of clock closure suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.
Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng
2017-03-01
Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
A Low-Power Sensor Network for Long Duration Monitoring in Deep Caves
NASA Astrophysics Data System (ADS)
Silva, A.; Johnson, I.; Bick, T.; Winclechter, C.; Jorgensen, A. M.; Teare, S. W.; Arechiga, R. O.
2010-12-01
Monitoring deep and inaccessible caves is important and challenging for a variety of reasons. Cave environments are of interest for understanding cave ecosystems and human impact on them, and caves may also hold clues to past climate changes. Cave instrumentation must, however, carry out its job with minimal human intervention and without disturbing the fragile environment, which requires unobtrusive and autonomous instrumentation. Earth-bound caves can also serve as analogs for caves on other planets and act as testbeds for autonomous sensor networks. Here we report on a project to design and implement a low-power, ad hoc, wireless sensor network for monitoring caves and similar environments. The implemented network is composed of individual nodes, each consisting of a sensor, processing unit, memory, transceiver, and power source. Data collected at these nodes are transmitted through a wireless ZigBee network to a central data collection point, from which the researcher may transfer the collected data to a laptop for further analysis. The project produced a node design with a physical footprint of 2 inches by 3 inches, based on the EZMSP430-RF2480, a ZigBee hardware base offered by Texas Instruments. Five functioning nodes have been constructed at very low cost and tested. Thanks to the use of an external analog-to-digital converter, the design achieves 16-bit resolution. The operational time of the prototype was calculated to be approximately 80 days of autonomous operation while sampling once per minute. Each node is able to support and record data from up to four different sensors.
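Operational lifetimes of this kind follow from a duty-cycle average of the node's current draw. The sketch below shows the form of the calculation only; the battery capacity and component currents are hypothetical placeholders, not figures from the paper:

```python
# Back-of-the-envelope node lifetime from a duty-cycled current budget.
# All numbers below are assumed for illustration.
battery_mah = 2500.0   # e.g. a pair of AA cells (assumed)
sleep_ma = 0.05        # deep-sleep current (assumed)
active_ma = 30.0       # radio + ADC while sampling (assumed)
active_s = 2.0         # awake time per sample (assumed)
period_s = 60.0        # one sample per minute

duty = active_s / period_s
avg_ma = active_ma * duty + sleep_ma * (1 - duty)  # time-weighted average
days = battery_mah / avg_ma / 24.0
print(round(days), "days")  # ~99 days under these placeholder numbers
```

With these assumptions the sleep current is almost negligible; the average draw, and hence the lifetime, is dominated by the active duty cycle.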
NASA Astrophysics Data System (ADS)
Moore, E. K.; Jelen, B. I.; Giovannelli, D.; Prabhu, A.; Raanan, H.; Falkowski, P. G.
2017-12-01
Deep-time changes in Earth surface redox conditions, particularly due to global oxygenation, have impacted the availability of different metals and substrates that are central in biology. Oxidoreductase proteins are molecular nanomachines responsible for all biological electron transfer processes across the tree of life. These enzymes largely contain transition metals in their active sites. Microbial metabolic pathways form a global network of electron transfer, which expanded throughout the Archean eon. Older metabolisms (sulfur reduction, methanogenesis, anoxygenic photosynthesis) accessed negative redox potentials, while later-evolving metabolisms (oxygenic photosynthesis, nitrification/denitrification, aerobic respiration) accessed positive redox potentials. The incorporation of different transition metals facilitated biological innovation and the expansion of the network of microbial metabolism. Network analysis was used to examine the connections between microbial taxa, metabolic pathways, crucial metallocofactors, and substrates in deep time by incorporating biosignatures preserved in the geologic record. Nitrogen fixation and aerobic respiration have the highest betweenness among metabolisms in the network, indicating that the oldest metabolisms are not the most central. Fe has by far the highest betweenness among metals. Clustering analysis largely separates High Metal Bacteria (HMB), Low Metal Bacteria (LMB), and Archaea, showing that simple unweighted links between taxa, metabolism, and metals have phylogenetic relevance. On average, HMB have the highest betweenness among taxa, followed by Archaea and LMB. There is a correlation between the number of metallocofactors and metabolic pathways in representative bacterial taxa, but Archaea do not follow this trend. In many cases, older and more recently evolved metabolisms were clustered together, supporting previous findings that the proliferation of metabolic pathways is not necessarily chronological.
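Betweenness centrality of the kind reported above is typically computed with Brandes' algorithm. A minimal pure-Python sketch on a hypothetical toy graph (the node set here is illustrative only: a single "Fe" hub linked to a few metabolisms, mimicking the hub role the abstract describes, not the study's actual network):

```python
from collections import deque

# Brandes' algorithm for (unnormalized) betweenness centrality on an
# undirected graph given as an adjacency dict. Toy data only.

def betweenness(graph):
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in graph}; dist[s] = 0
        queue = deque([s])
        while queue:                                   # BFS from s
            v = queue.popleft(); stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                                   # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}           # undirected: halve

toy = {
    "Fe": ["nitrogen fixation", "aerobic respiration",
           "methanogenesis", "sulfur reduction"],
    "nitrogen fixation": ["Fe"], "aerobic respiration": ["Fe"],
    "methanogenesis": ["Fe"], "sulfur reduction": ["Fe"],
}
print(betweenness(toy)["Fe"])  # 6.0: Fe lies on all 6 metabolism pairs
```

In this star-shaped toy graph every shortest path between metabolisms passes through the Fe hub, which is exactly the structural signature behind a high betweenness score.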
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie; Feng, Gang; Xu, Suhui; Wang, Shiqiang
2017-10-01
Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, for remote scene classification, there are not sufficient images to train a very deep CNN from scratch. From two viewpoints on generalization power, we propose two promising kinds of deep CNNs for remote scenes and investigate whether deep CNNs need to be deep for remote scene classification. First, we transfer successful pretrained deep CNNs to remote scenes, based on the theory that the depth of CNNs brings generalization power by learning an available hypothesis for finite data samples. Second, following the opposite viewpoint that the generalization power of deep CNNs comes from massive memorization and that shallow CNNs with enough neural nodes have perfect finite-sample expressivity, we design a lightweight deep CNN (LDCNN) for remote scene classification. With five well-known pretrained deep CNNs, experimental results on two independent remote-sensing datasets demonstrate that transferred deep CNNs can achieve state-of-the-art results in an unsupervised setting. However, because of its shallow architecture, LDCNN cannot obtain satisfactory performance, regardless of whether it is used in an unsupervised, semisupervised, or supervised setting. CNNs really do need depth to obtain general features for remote scenes. This paper also provides a baseline for applying deep CNNs to other remote sensing tasks.
How Deep Neural Networks Can Improve Emotion Recognition on Video Data
2016-09-25
Khorrami, Pooya; Paine, Tom Le; Brady, Kevin; Dagli, Charlie; Thomas S...
In this work, we present a system that performs emotion recognition on video data using both convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We present our findings on videos from the Audio/Visual+Emotion Challenge (AV+EC2015). In our experiments, we analyze the effects...
Cai, Congbo; Wang, Chao; Zeng, Yiqing; Cai, Shuhui; Liang, Dong; Wu, Yawen; Chen, Zhong; Ding, Xinghao; Zhong, Jianhui
2018-04-24
An end-to-end deep convolutional neural network (CNN) based on a deep residual network (ResNet) was proposed to efficiently reconstruct reliable T2 mapping from single-shot overlapping-echo detachment (OLED) planar imaging. The training dataset was obtained from simulations carried out on SPROM (Simulation with PRoduct Operator Matrix) software developed by our group. The relationship between the original OLED image containing two echo signals and the corresponding T2 mapping was learned by ResNet training. After the ResNet was trained, it was applied to reconstruct the T2 mapping from simulation and in vivo human brain data. Although the ResNet was trained entirely on simulated data, the trained network generalized well to real human brain data. The results from simulation and in vivo human brain experiments show that the proposed method significantly outperforms the echo-detachment-based method. Reliable T2 mapping with higher accuracy is achieved within 30 ms after the network has been trained, whereas the echo-detachment-based OLED reconstruction method takes approximately 2 min. The proposed method will facilitate real-time dynamic and quantitative MR imaging via the OLED sequence, and deep convolutional neural networks have the potential to reconstruct maps from complex MRI sequences efficiently. © 2018 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Aviles, Angelica I.; Alsaleh, Samar; Sobrevilla, Pilar; Casals, Alicia
2016-03-01
The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose additional constraints. Thus, we make use of a neuro-visual approach to estimate the applied forces, in which 3D shape recovery together with the geometry of motion are used as input to a deep network based on the LSTM-RNN architecture. When deep networks are used in real time, pre-processing of data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features to use in model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgeries. According to the results, we demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and overfitting prevention. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches related to force estimation.
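One standard way to realize the dimensionality-reduction step described above is principal component analysis applied before the recurrent model. A minimal stdlib-only sketch on toy correlated 2-D features (the data, and the choice of PCA itself, are illustrative assumptions; the paper's exact reduction method is not specified in this excerpt):

```python
import random

# Toy dimensionality reduction: project correlated 2-D features onto
# their top principal component via power iteration. Illustrative only.

random.seed(0)
data = [(t, t + random.gauss(0, 0.1)) for t in [i / 50 for i in range(100)]]

# Center the data and form the 2x2 covariance matrix.
mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
c = [(x - mx, y - my) for x, y in data]
cxx = sum(a * a for a, _ in c) / len(c)
cyy = sum(b * b for _, b in c) / len(c)
cxy = sum(a * b for a, b in c) / len(c)

# Power iteration for the dominant eigenvector of [[cxx, cxy], [cxy, cyy]].
vx, vy = 1.0, 0.0
for _ in range(100):
    nx, ny = cxx * vx + cxy * vy, cxy * vx + cyy * vy
    norm = (nx * nx + ny * ny) ** 0.5
    vx, vy = nx / norm, ny / norm

# 1-D projection and the fraction of total variance it retains.
proj = [a * vx + b * vy for a, b in c]
var_kept = sum(p * p for p in proj) / len(proj) / (cxx + cyy)
print(f"variance retained by 1 of 2 dims: {var_kept:.3f}")
```

Because the two toy features are strongly correlated, a single projected dimension retains nearly all the variance, which is the property that makes such reduction cheap for a real-time pipeline.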
NASA Astrophysics Data System (ADS)
André, Michel; Favali, Paolo; Piatteli, Paolo; Miranda, Jorge; Waldmann, Christoph; Esonet Lido Demonstration Mission Team
2010-05-01
Understanding the link between natural and anthropogenic processes is essential for predicting the magnitude and impact of future changes to the natural balance of the oceans. Deep-sea observatories have the potential to play a key role in the assessment and monitoring of these changes. ESONET is a European Network of Excellence of deep-sea observatories that includes 55 partners belonging to 14 countries. ESONET NoE is providing data on key parameters from the subsurface down to the seafloor at representative locations and transmitting them to shore. The strategies of deployment, data sampling, technological development, standardisation and data management are being integrated with projects dealing with spatial and near-surface time series. LIDO (Listening to the Deep Ocean environment) is one of these projects and proposes to establish a first nucleus of a regional network of multidisciplinary seafloor observatories, contributing to the coordination of high-quality research in the ESONET NoE by allowing the real-time, long-term monitoring of geohazards and marine ambient noise in the Mediterranean Sea and the adjacent Atlantic waters. Specific activities address the long-term monitoring of earthquakes and tsunamis and the characterisation of ambient noise, marine mammal sounds and anthropogenic sources. The objective of this demonstration mission will be achieved through the extension of the present capabilities of the observatories working at the ESONET key sites of Eastern Sicily (NEMO-SN1) and the Gulf of Cadiz (GEOSTAR configured for the NEAREST pilot experiment) by installing new sensor equipment related to bioacoustics and geohazards, as well as by implementing international standard methods in data acquisition and management.
Ebert, Lars C; Heimer, Jakob; Schweitzer, Wolf; Sieberth, Till; Leipner, Anja; Thali, Michael; Ampanozi, Garyfalia
2017-12-01
Post mortem computed tomography (PMCT) can be used as a triage tool to better identify cases with a possibly non-natural cause of death, especially when high caseloads make it impossible to perform autopsies on all cases. Substantial data can be generated by modern medical scanners, especially in a forensic setting where the entire body is documented at high resolution. A solution for the resulting issues could be the use of deep learning techniques for the automatic analysis of radiological images. In this article, we wanted to test the feasibility of such methods for forensic imaging by hypothesizing that deep learning methods can detect and segment a hemopericardium in PMCT. For deep learning image analysis software, we used the ViDi Suite 2.0. We retrospectively selected 28 cases with, and 24 cases without, hemopericardium. Based on these data, we trained two separate deep learning networks. The first one classified images into hemopericardium/not hemopericardium, and the second one segmented the blood content. We randomly selected 50% of the data for training and 50% for validation. This process was repeated 20 times. The best-performing classification network classified all cases of hemopericardium in the validation images correctly, with only a few false positives. The best-performing segmentation network tended to underestimate the amount of blood in the pericardium, as was the case for most networks. This is the first study to show that deep learning has potential for the automated image analysis of radiological images in forensic medicine.
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
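The recursive construction described above can be sketched in a few lines: combine elementary kernels with nonnegative weights, then apply a PSD-preserving activation. A minimal pure-Python illustration on toy points, using the elementwise exponential (whose Taylor series has nonnegative coefficients, so by the Schur product theorem it keeps the Gram matrix positive semi-definite); the data and weights here are fixed assumptions, not learned as in the paper:

```python
import math
import random

# Two elementary kernels on toy 2-D points, combined into a "deep" kernel:
#   K_deep(p, q) = exp(w1 * K_lin(p, q) + w2 * K_rbf(p, q))
# Weights and points are illustrative; the paper learns them.

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def k_lin(p, q):
    return p[0] * q[0] + p[1] * q[1]

def k_rbf(p, q, gamma=1.0):
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.exp(-gamma * d2)

def k_deep(p, q, w1=0.5, w2=0.5):
    return math.exp(w1 * k_lin(p, q) + w2 * k_rbf(p, q))

gram = [[k_deep(p, q) for q in pts] for p in pts]

# Sanity checks: symmetry and nonnegative quadratic forms (PSD evidence).
random.seed(1)
for _ in range(200):
    v = [random.uniform(-1, 1) for _ in pts]
    quad = sum(v[i] * gram[i][j] * v[j] for i in range(4) for j in range(4))
    assert quad >= -1e-9
print("deep kernel Gram matrix passed PSD spot checks")
```

Stacking further layers amounts to feeding `gram`-style intermediate kernels back through another weighted sum and activation, which is the recursion the abstract describes.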
Deep learning with convolutional neural network in radiology.
Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu
2018-04-01
Deep learning with a convolutional neural network (CNN) is gaining attention recently for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required: important features can be learned automatically. Thanks to developments in hardware and software, in addition to techniques regarding deep learning, the application of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, is beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual course of a project (collecting data, implementing CNNs, and the training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
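The basic CNN building blocks such an article walks through (convolution, nonlinearity, pooling) can be written out directly. A minimal pure-Python sketch of one convolution + ReLU + max-pooling pass over a tiny grayscale image; the image and kernel values are illustrative:

```python
# One pass of the three basic CNN building blocks on a tiny grayscale image.

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(fmap):
    """Elementwise rectified linear unit."""
    return [[max(0.0, v) for v in row] for row in fmap]

def maxpool2(fmap):
    """2x2 max pooling with stride 2."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1.0] * 5 for _ in range(5)]       # flat 5x5 image of ones
edge_kernel = [[1.0, -1.0], [1.0, -1.0]]    # simple vertical-edge detector

fmap = maxpool2(relu(conv2d(image, edge_kernel)))
print(fmap)  # [[0.0, 0.0], [0.0, 0.0]]: a flat image has no edge response
```

In a real CNN the kernel values are learned from data rather than hand-set, which is exactly the "no feature extraction in advance" property the article highlights.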
NASA Astrophysics Data System (ADS)
Amit, Guy; Ben-Ari, Rami; Hadad, Omer; Monovich, Einat; Granot, Noa; Hashoul, Sharbell
2017-03-01
Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
Cough event classification by pretrained deep neural network.
Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin
2015-01-01
Cough is an essential symptom in respiratory diseases. For the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease community. This paper aims to introduce a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture the temporal information of the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step is then a back-propagation pass tuning the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs, respectively. The final decision is made based on the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria (sensitivity, specificity, F1, macro average, and micro average) are shown to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM-based method on F1 and micro average, with maximal error reductions of 14% and 11% in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity, with a maximal 14% error reduction on both PD and PI.
In this paper, we applied a pretrained deep neural network to the cough classification problem. Our results showed that, compared with the conventional GMM-HMM framework, the DNN-HMM achieves better overall performance on the cough classification task.
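The HMM decoding stage at the end of this pipeline is standard Viterbi decoding. A minimal sketch with two states standing in for the cough/noncough HMMs and a made-up per-frame posterior matrix in place of the DNN outputs (every probability below is an illustrative assumption, not a value from the paper):

```python
import math

# Viterbi decoding over a toy 2-state HMM ("cough" vs "noise").
# frame_scores stands in for DNN posteriors; all numbers are illustrative.

STATES = ["noise", "cough"]
LOG_INIT = {"noise": math.log(0.8), "cough": math.log(0.2)}
LOG_TRANS = {
    ("noise", "noise"): math.log(0.9), ("noise", "cough"): math.log(0.1),
    ("cough", "noise"): math.log(0.2), ("cough", "cough"): math.log(0.8),
}

def viterbi(frame_scores):
    """frame_scores: list of {state: P(state | frame)} per audio frame."""
    v = {s: LOG_INIT[s] + math.log(frame_scores[0][s]) for s in STATES}
    back = []
    for obs in frame_scores[1:]:
        nv, bp = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: v[p] + LOG_TRANS[(p, s)])
            nv[s] = v[prev] + LOG_TRANS[(prev, s)] + math.log(obs[s])
            bp[s] = prev
        back.append(bp)
        v = nv
    state = max(STATES, key=lambda s: v[s])    # best final state
    path = [state]
    for bp in reversed(back):                  # trace back the best path
        state = bp[state]
        path.append(state)
    return path[::-1]

frames = [{"noise": 0.9, "cough": 0.1}, {"noise": 0.2, "cough": 0.8},
          {"noise": 0.1, "cough": 0.9}, {"noise": 0.3, "cough": 0.7},
          {"noise": 0.9, "cough": 0.1}]
path = viterbi(frames)
print(path, "-> cough detected:", "cough" in path)
```

The sample-level rule from the abstract then reduces to checking whether a cough state appears anywhere in the decoded sequence.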
Deep learning in color: towards automated quark/gluon jet discrimination
Komiske, Patrick T.; Metodiev, Eric M.; Schwartz, Matthew D.
2017-01-25
Artificial intelligence offers the potential to automate challenging data-processing tasks in collider physics. Here, to establish its prospects, we explore to what extent deep learning with convolutional neural networks can discriminate quark and gluon jets better than observables designed by physicists. Our approach builds upon the paradigm that a jet can be treated as an image, with intensity given by the local calorimeter deposits. We supplement this construction by adding color to the images, with red, green and blue intensities given by the transverse momentum in charged particles, transverse momentum in neutral particles, and pixel-level charged particle counts. Overall, the deep networks match or outperform traditional jet variables. We also find that, while various simulations produce different quark and gluon jets, the neural networks are surprisingly insensitive to these differences, similar to traditional observables. This suggests that the networks can extract robust physical information from imperfect simulations.
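The three-channel jet-image construction described above can be sketched directly: pixelate the (eta, phi) plane around the jet axis and fill the red/green/blue channels with charged pT, neutral pT, and charged counts. The particle list and grid size below are illustrative assumptions, not the paper's configuration:

```python
# Build a 3-channel "jet image" from a toy particle list, following the
# red/green/blue scheme described: charged pT, neutral pT, charged counts.
# Grid size and particles are illustrative only.

NPIX, HALF = 9, 0.4   # 9x9 pixels covering [-0.4, 0.4] in (eta, phi)

def pixel(coord):
    """Map a coordinate relative to the jet axis to a pixel index."""
    idx = int((coord + HALF) / (2 * HALF) * NPIX)
    return min(max(idx, 0), NPIX - 1)

def jet_image(particles):
    """particles: (eta, phi, pT, is_charged) tuples relative to the jet axis."""
    img = [[[0.0, 0.0, 0.0] for _ in range(NPIX)] for _ in range(NPIX)]
    for eta, phi, pt, charged in particles:
        i, j = pixel(eta), pixel(phi)
        if charged:
            img[i][j][0] += pt      # red: charged-particle pT
            img[i][j][2] += 1.0     # blue: charged-particle multiplicity
        else:
            img[i][j][1] += pt      # green: neutral-particle pT
    return img

toy_jet = [(0.0, 0.0, 50.0, True), (0.1, -0.05, 20.0, False),
           (-0.2, 0.1, 10.0, True)]
img = jet_image(toy_jet)
total_charged_pt = sum(px[0] for row in img for px in row)
print(total_charged_pt)  # 60.0: all charged pT lands in the red channel
```

The resulting grid is what a convolutional network would then consume in place of hand-designed jet observables.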
Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi
2018-06-05
Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
The applications of deep neural networks to sdBV classification
NASA Astrophysics Data System (ADS)
Boudreaux, Thomas M.
2017-12-01
With several new large-scale surveys on the horizon, including LSST, TESS, ZTF, and Evryscope, faster and more accurate analysis methods will be required to adequately process the enormous amount of data produced. Deep learning, used in industry for years now, allows for advanced feature detection in minimally prepared datasets at very high speeds; however, despite the advantages of this method, its application to astrophysics has not yet been extensively explored. This dearth may be due to a lack of training data available to researchers. Here we generate synthetic data loosely mimicking the properties of acoustic-mode pulsating stars, and we show that two separate paradigms of deep learning, the artificial neural network and the convolutional neural network, can both be used to classify these synthetic data effectively. Additionally, this classification can be performed at relatively high accuracy with minimal time spent adjusting network hyperparameters.
Deep learning of orthographic representations in baboons.
Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan
2014-01-01
What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.
NASA Technical Reports Server (NTRS)
Helfrich, Cliff; Berry, David S.; Bhat, Ramachandra; Border, James; Graat, Eric; Halsell, Allen; Kruizinga, Gerhard; Lau, Eunice; Mottinger, Neil; Rush, Brian;
2015-01-01
In late 2013, the Indian Space Research Organization (ISRO) launched its "Mars Orbiter Mission" (MOM). ISRO engaged NASA's Jet Propulsion Laboratory (JPL) for navigation services to support ISRO's objectives of MOM achieving and maintaining Mars orbit. The navigation support included planning, documentation, testing, orbit determination, maneuver design/analysis, and tracking data analysis. Several of MOM's attributes had an impact on navigation processes, e.g., S-band telecommunications, Earth Orbit Phase maneuvers, and frequent angular momentum desaturations (AMDs). The primary source of tracking data was NASA/JPL's Deep Space Network (DSN); JPL also conducted a performance assessment of Indian Deep Space Network (IDSN) tracking data. Planning for the Mars Orbit Insertion (MOI) was complicated by a pressure regulator failure that created uncertainty regarding MOM's main engine and raised potential planetary protection issues. A successful main engine test late on approach resolved these issues; it was quickly followed by a successful MOI on 24 September 2014 at 02:00 UTC. Less than a month later, Comet Siding Spring's Mars flyby necessitated plans to minimize potential spacecraft damage. At the time of this writing, MOM's orbital operations continue, and plans to extend JPL's support are in progress. This paper covers JPL's support of MOM through the Comet Siding Spring event.
NASA Technical Reports Server (NTRS)
Lu, Thomas; Pham, Timothy; Liao, Jason
2011-01-01
This paper presents the development of a fuzzy logic function, trained by an artificial neural network, to classify the system noise temperature (SNT) of antennas in the NASA Deep Space Network (DSN). The SNT data were classified into normal, marginal, and abnormal classes. The irregular SNT pattern was further correlated with link margin and weather data. A reasonably good correlation was detected among high SNT, low link margin, and the effect of bad weather; however, we also saw some unexpected non-correlations that merit further study.
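A fuzzy classification stage of the kind described can be sketched with trapezoidal membership functions over SNT. The temperature breakpoints below are hypothetical placeholders (the paper trains its membership functions with a neural network rather than fixing them by hand):

```python
# Fuzzy classification of system noise temperature (SNT) into
# normal / marginal / abnormal classes. The kelvin breakpoints are
# hypothetical stand-ins, not the paper's trained values.

def trapezoid(x, a, b, c, d):
    """Membership rising on [a, b], flat on [b, c], falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def classify_snt(snt_kelvin):
    memberships = {
        "normal":   trapezoid(snt_kelvin, -1.0, 0.0, 25.0, 35.0),
        "marginal": trapezoid(snt_kelvin, 25.0, 35.0, 45.0, 55.0),
        "abnormal": trapezoid(snt_kelvin, 45.0, 55.0, 1e9, 1e9 + 1.0),
    }
    # Defuzzify by taking the class with the highest membership.
    return max(memberships, key=memberships.get), memberships

label, grades = classify_snt(22.0)
print(label)  # normal: a low SNT falls squarely in the "normal" set
```

Training, in this framing, amounts to letting a neural network adjust the breakpoints (or the membership shapes) so the fuzzy labels match operator-annotated examples.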
Temperature control simulation for a microwave transmitter cooling system. [deep space network
NASA Technical Reports Server (NTRS)
Yung, C. S.
1980-01-01
The thermal performance of a temperature control system for the antenna microwave transmitter (klystron tube) of the Deep Space Network antenna tracking system is discussed. In particular the mathematical model is presented along with the details of a computer program which is written for the system simulation and the performance parameterization. Analytical expressions are presented.
Gas Classification Using Deep Convolutional Neural Networks.
Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin
2018-01-08
In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.
Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network
NASA Technical Reports Server (NTRS)
Navarro, Robert
2006-01-01
The Deep Space Network Large Array will replace/augment the 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, at western U.S., Australian, and European longitudes, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: providing a means to evaluate the performance of the Breadboard Array's antenna subsystem; designing and building prototype hardware; demonstrating and evaluating proposed signal processing techniques; and gaining experience with various technologies that may be used in the Large Array. Results are summarized.
NASA Astrophysics Data System (ADS)
Fortier, R.; Lemieux, J.; Molson, J. W.; Therrien, R.; Ouellet, M.; Bart, J.
2013-12-01
During a summer drilling campaign in 2012, a network of nine groundwater monitoring wells was installed in a small catchment basin in a zone of discontinuous permafrost near the Inuit community of Umiujaq in Northern Quebec, Canada. This network, named Immatsiak, is part of a provincial network of groundwater monitoring wells to monitor the impacts of climate change on groundwater resources. It provides a unique opportunity to study cold region groundwater dynamics in permafrost environments and to assess the impacts of permafrost degradation on groundwater quality and availability as a potential source of drinking water. Using the borehole logs from the drilling campaign and other information from previous investigations, an interpretative cryo-hydrogeological cross-section of the catchment basin was produced which identified the Quaternary deposit thickness and extent, the depth to bedrock, the location of permafrost, one superficial aquifer located in a sand deposit, and another deep aquifer in fluvio-glacial sediments and till. In the summer of 2013, data were recovered from water level and barometric loggers which were installed in the wells in August 2012. Although the wells were drilled in unfrozen zones, the groundwater temperature is very low, near 0.4 °C, with an annual variability of a few tenths of a degree Celsius at a depth of 35 m. The hydraulic head in the wells varied as much as 6 m over the last year. Pumping tests performed in the wells showed a very high hydraulic conductivity of the deep aquifer. Groundwater in the wells and surface water in small thermokarst lakes and at the catchment outlet were sampled for geochemical analysis (inorganic parameters, stable isotopes of oxygen (δ18O) and hydrogen (δ2H), and radioactive isotopes of carbon (δ14C), hydrogen (tritium δ3H) and helium (δ3He)) to assess groundwater quality and origin. 
Preliminary results show that the signature of melt water from permafrost thawing is observed in the geochemistry of groundwater and surface water at the catchment outlet. Following synthesis of the available information, including a cryo-hydrogeophysical investigation in progress, a three-dimensional hydrogeological conceptual and numerical model of the catchment basin will be developed. According to different scenarios of climate change, the potential of using groundwater as a sustainable resource in northern regions will be assessed by simulating the present and future impacts of climate change on this groundwater system.
The effects of deep network topology on mortality prediction.
Du, Hao; Ghassemi, Mohammad M; Feng, Mengling
2016-08-01
Deep learning has achieved remarkable results in the areas of computer vision, speech recognition, natural language processing and most recently, even playing Go. The application of deep learning to problems in healthcare, however, has gained attention only in recent years, and its ultimate place at the bedside remains a topic of skeptical discussion. While there is a growing academic interest in the application of Machine Learning (ML) techniques to clinical problems, many in the clinical community see little incentive to upgrade from simpler methods, such as logistic regression, to deep learning. Logistic regression, after all, provides odds ratios, p-values and confidence intervals that allow for ease of interpretation, while deep nets are often seen as "black boxes" which are difficult to understand and, as of yet, have not demonstrated performance levels far exceeding their simpler counterparts. If deep learning is to ever take a place at the bedside, it will require studies which (1) showcase the performance of deep-learning methods relative to other approaches and (2) interpret the relationships between network structure, model performance, features and outcomes. We have chosen these two requirements as the goal of this study. In our investigation, we utilized a publicly available EMR dataset of over 32,000 intensive care unit patients and trained a Deep Belief Network (DBN) to predict patient mortality at discharge. Utilizing an evolutionary algorithm, we demonstrate automated topology selection for DBNs. We demonstrate that with the correct topology selection, DBNs can achieve better prediction performance compared to several benchmark methods.
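The abstract does not publish its evolutionary operators, so the following is only a toy sketch of automated topology selection: candidate hidden-layer size lists are mutated and selected by a caller-supplied fitness (in the paper's setting, the validation performance of a trained DBN; here, an arbitrary stand-in).

```python
import random

def evolve_topology(fitness, n_generations=20, pop_size=8, seed=0):
    """Toy evolutionary search over hidden-layer size lists.

    `fitness` maps a topology (list of layer widths) to a score to
    maximize; a real run would train a DBN per candidate and score it
    on held-out data."""
    rng = random.Random(seed)

    def random_topology():
        return [rng.choice([32, 64, 128, 256])
                for _ in range(rng.randint(1, 4))]

    def mutate(topo):
        topo = list(topo)
        op = rng.random()
        if op < 0.3 and len(topo) < 5:        # grow: insert a new layer
            topo.insert(rng.randrange(len(topo) + 1), rng.choice([32, 64, 128]))
        elif op < 0.5 and len(topo) > 1:      # shrink: drop a layer
            topo.pop(rng.randrange(len(topo)))
        else:                                  # resize an existing layer
            i = rng.randrange(len(topo))
            topo[i] = max(8, int(topo[i] * rng.choice([0.5, 2.0])))
        return topo

    population = [random_topology() for _ in range(pop_size)]
    for _ in range(n_generations):
        elite = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        population = elite + [mutate(rng.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)

# Stand-in fitness: prefer about 3 layers of about 64 units each.
best = evolve_topology(lambda t: -abs(len(t) - 3) - sum(abs(w - 64) for w in t) / 64)
```

All constants (population size, mutation rates, candidate widths) are illustrative assumptions.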
Ranking in evolving complex networks
NASA Astrophysics Data System (ADS)
Liao, Hao; Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng; Zhou, Ming-Yang
2017-05-01
Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and of the possible biases that may impair their effectiveness. Many popular ranking algorithms (such as Google's PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes.
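As a concrete instance of the static ranking algorithms discussed above, PageRank can be computed by power iteration; a minimal dense-matrix sketch (damping factor and graph are illustrative):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Static PageRank by power iteration on a dense adjacency matrix
    (adj[i, j] = 1 if node i links to node j). Dangling nodes are
    treated as linking uniformly to everyone."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling rows become uniform.
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * M @ r
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Tiny 4-node graph in which node 0 is linked to by all other nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
scores = pagerank(A)
```

The time-aware variants surveyed in the review modify this stationary computation, e.g. by discounting old edges before forming the transition matrix.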
A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition.
Fuentes, Alvaro; Yoon, Sook; Kim, Sang Cheol; Park, Dong Sun
2017-09-04
Plant diseases and pests are a major challenge in the agriculture sector. Accurate and fast detection of diseases and pests in plants could help to develop early treatment techniques while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the most suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called "deep learning meta-architectures". We combine each of these meta-architectures with "deep feature extractors" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.
Chen, Jian; Chen, Jie; Ding, Hong-Yan; Pan, Qin-Shi; Hong, Wan-Dong; Xu, Gang; Yu, Fang-You; Wang, Yu-Min
2015-01-01
Several statistical methods have been used to analyze and predict the risk factors for deep fungal infection in lung cancer patients, such as logistic regression analysis, meta-analysis, multivariate Cox proportional hazards model analysis, and retrospective analysis, but the results are inconsistent. A total of 696 patients with lung cancer were enrolled. The factors were compared employing Student's t-test, the Mann-Whitney test or the Chi-square test, and variables that were significantly related to the presence of deep fungal infection were selected as candidates for input into the final artificial neural network (ANN) model. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the performance of the ANN model and a logistic regression (LR) model. The prevalence of deep fungal infection in this entire study population was 32.04% (223/696), with deep fungal infections occurring in 44.05% (200/454) of sputum specimens. Candida albicans accounted for 86.99% (194/223) of the total fungi. Older age (≥65 years), use of antibiotics, low serum albumin concentration (≤37.18 g/L), radiotherapy, surgery, low hemoglobin (≤93.67 g/L), and long hospitalization (≥14 days) were associated with deep fungal infection, and the ANN model consisted of these seven factors. The AUC of the ANN model (0.829±0.019) was higher than that of the LR model (0.756±0.021). The artificial neural network model, with variables consisting of age, use of antibiotics, serum albumin concentration, radiotherapy, surgery, hemoglobin, and time of hospitalization, should be useful for predicting deep fungal infection in lung cancer.
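The AUC values compared above have a direct rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative case. A minimal sketch of that computation (toy labels and scores, not the study's data):

```python
def roc_auc(labels, scores):
    """AUC as the fraction of positive/negative pairs in which the
    positive is ranked higher; ties count one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 4 of the 6 positive/negative pairs are correctly ordered -> AUC = 2/3.
auc = roc_auc([1, 1, 0, 0, 1], [0.9, 0.7, 0.4, 0.6, 0.3])
```

The same function applied to the outputs of the ANN and LR models would reproduce the comparison of 0.829 versus 0.756.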
Generating Seismograms with Deep Neural Networks
NASA Astrophysics Data System (ADS)
Krischer, L.; Fichtner, A.
2017-12-01
The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so-called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three-dimensional Earth structures result in comparatively small waveform differences from similar events or at close receivers and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics.
Last but not least, we attempt to shed some light on the black-box nature of neural networks by estimating the quality and uncertainties of the generated seismograms.
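The adversarial objective from Goodfellow et al. (2014) referenced above reduces to a pair of losses over the discriminator's probabilities; a minimal numpy sketch (the inputs here are toy probabilities, not seismogram-model outputs):

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-12):
    """Standard GAN losses given the discriminator's probability of
    'real' on true samples (d_real) and on generated samples (d_fake).
    Uses the non-saturating generator loss -log D(G(z))."""
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    d_loss = -(np.log(d_real).mean() + np.log(1 - d_fake).mean())
    g_loss = -np.log(d_fake).mean()   # generator wants fakes called real
    return d_loss, g_loss

# A discriminator that easily spots fakes (d_fake low) gives the
# generator a large loss and the discriminator a small one.
d_loss, g_loss = gan_losses(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
```

Training alternates gradient steps on the two losses until the discriminator output approaches 0.5 on both sample types.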
Statistical downscaling of precipitation using long short-term memory recurrent neural networks
NASA Astrophysics Data System (ADS)
Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra
2017-11-01
Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. The previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets—one on precipitation in Mahanadi basin in India and the second on precipitation in Campbell River basin in Canada. Our autoencoder coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both the datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.
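The long short-term memory unit at the core of the model above can be sketched as a single numpy step function; the dimensions and random weights below are illustrative, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell. Gates are stacked as
    [input; forget; output; candidate] along the rows of W, U, b."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g           # memory cell: gated old state + new input
    h_new = o * np.tanh(c_new)      # hidden state exposed to the next layer
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_hid = 5, 8                  # e.g. 5 large-scale predictors, 8 hidden units
W = rng.normal(0, 0.1, (4 * d_hid, d_in))
U = rng.normal(0, 0.1, (4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)
h = c = np.zeros(d_hid)
for x in rng.normal(size=(10, d_in)):   # unroll over 10 time steps
    h, c = lstm_step(x, h, c, W, U, b)
```

The forget and input gates are what let the recurrence carry long-range temporal dependencies in the rainfall series; a downscaling head would regress precipitation from the final `h`.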
A Risk Stratification Model for Lung Cancer Based on Gene Coexpression Network and Deep Learning
2018-01-01
A risk stratification model for lung cancer based on gene expression profiles is of great interest. Instead of previous models based on individual prognostic genes, we aimed to develop a novel system-level risk stratification model for lung adenocarcinoma based on gene coexpression networks. Using multiple microarray datasets, gene coexpression network analysis was performed to identify survival-related networks. A deep learning based risk stratification model was constructed with representative genes of these networks. The model was validated in two test sets. Survival analysis was performed using the output of the model to evaluate whether it could predict patients' survival independent of clinicopathological variables. Five networks were significantly associated with patients' survival. Considering prognostic significance and representativeness, genes of the two survival-related networks were selected for input to the model. The output of the model was significantly associated with patients' survival in the training set and both test sets (p < 0.00001, p < 0.0001 and p = 0.02 for the training set and test sets 1 and 2, respectively). In multivariate analyses, the model was associated with patients' prognosis independent of other clinicopathological features. Our study presents a new perspective on incorporating gene coexpression networks into the gene expression signature and clinical application of deep learning in genomic data science for prognosis prediction. PMID:29581968
Detection of eardrum abnormalities using ensemble deep learning approaches
NASA Astrophysics Data System (ADS)
Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.
2018-02-01
In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network by using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network to take advantage of auto-encoders by using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were calculated as 84.4% (+/- 12.1%) and 82.6% (+/- 11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine the two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (+/- 10.3%).
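Because only 32% of the two networks' errors overlap, combining their outputs can reduce variance. A minimal ensembling sketch; the equal weighting and 0.5 threshold are assumptions, since the abstract does not specify how the two networks' outputs are merged:

```python
import numpy as np

def ensemble_predict(p1, p2, w1=0.5, threshold=0.5):
    """Combine two classifiers' per-image 'abnormal' probabilities by a
    weighted average, then threshold the result."""
    p = w1 * np.asarray(p1) + (1 - w1) * np.asarray(p2)
    return np.where(p >= threshold, "abnormal", "normal"), p

# Hypothetical 'abnormal' probabilities from Network 1 and Network 2
# for three eardrum images.
labels, p = ensemble_predict([0.8, 0.2, 0.55], [0.3, 0.1, 0.65])
```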
Deep neural networks to enable real-time multimessenger astrophysics
NASA Astrophysics Data System (ADS)
George, Daniel; Huerta, E. A.
2018-02-01
Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering—a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, which are designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering using whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering, while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
Deep convolutional neural network for prostate MR segmentation
NASA Astrophysics Data System (ADS)
Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei
2017-03-01
Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%+/-3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
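The Dice similarity coefficient reported above measures overlap between the predicted and manual segmentation masks; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); defined as 1 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy 2x3 masks overlapping in 2 of 3 foreground pixels each -> 2/3.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
d = dice_coefficient(a, b)
```

Applied per patient to the CNN output and the manual ground truth, averaging such values over the 20 patients yields the reported 85.3% +/- 3.2%.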
NASA Astrophysics Data System (ADS)
Waldman, Robin; Herrmann, Marine; Somot, Samuel; Arsouze, Thomas; Benshila, Rachid; Bosse, Anthony; Chanut, Jérôme; Giordani, Hervé; Pennel, Romain; Sevault, Florence; Testor, Pierre
2017-04-01
Ocean deep convection is a major process of interaction between the surface and deep ocean. The Gulf of Lions is a well-documented deep convection area in the Mediterranean Sea, and mesoscale dynamics is a known factor impacting this phenomenon. However, previous modelling studies have not addressed the robustness of this impact with respect to the physical configuration and ocean intrinsic variability. In this study, the impact of mesoscale dynamics on ocean deep convection in the Gulf of Lions is investigated using a multi-resolution ensemble simulation of the northwestern Mediterranean Sea. The eddy-permitting Mediterranean model NEMOMED12 (6 km resolution) is compared to its eddy-resolving counterpart with the 2-way grid refinement AGRIF in the northwestern Mediterranean (2 km resolution). We focus on the well-documented 2012-2013 period and on the multidecadal timescale (1979-2013). The impact of mesoscale dynamics on deep convection is addressed in terms of its mean and variability, its impact on deep water transformations and on associated dynamical structures. Results are interpreted by diagnosing regional mean and eddy circulation and using buoyancy budgets. We find a mean inhibition of deep convection by mesoscale dynamics with large interannual variability. It is associated with a large impact on mean and transient circulation and a large air-sea flux feedback.
Automatic Seismic-Event Classification with Convolutional Neural Networks.
NASA Astrophysics Data System (ADS)
Bueno Rodriguez, A.; Titos Luzón, M.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.
2017-12-01
Active volcanoes exhibit a wide range of seismic signals, providing vast amounts of unlabelled volcano-seismic data that can be analyzed through the lens of artificial intelligence. However, obtaining high-quality labelled data is time-consuming and expensive. Deep neural networks can process data in their raw form, compute high-level features and provide a better representation of the input data distribution. These systems can be deployed to classify seismic data at scale, enhance current early-warning systems and build extensive seismic catalogs. In this research, we aim to classify spectrograms from seven different seismic events registered at "Volcán de Fuego" (Colima, Mexico), during four eruptive periods. Our approach is based on convolutional neural networks (CNNs), a sub-type of deep neural networks that can exploit grid structure in the data. Volcano-seismic signals can be mapped into a grid-like structure using the spectrogram: a representation of the temporal evolution in terms of time and frequency. Spectrograms were computed from the data using Hamming windows of 4-second length, 2.5-second overlap and 128-point FFT resolution. Results are compared to deep neural networks, random forest and SVMs. Experiments show that CNNs can exploit temporal and frequency information, attaining a classification accuracy of 93%, similar to deep networks (91%), but outperforming SVM and random forest. These results empirically show that CNNs are powerful models for classifying a wide range of volcano-seismic signals, and achieve good generalization. Furthermore, volcano-seismic spectrograms contain useful discriminative information for the CNN, as higher layers of the network combine high-level features computed for each frequency band, helping to detect simultaneous events in time.
Being at the intersection of deep learning and geophysics, this research enables future studies of how CNNs can be used in volcano monitoring to accurately determine the detection and location of seismic events.
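The spectrogram parameters stated above (4 s Hamming windows, 2.5 s overlap, 128-point FFT) can be sketched directly with numpy; the 25 Hz sampling rate below is an assumption, since the abstract does not give one:

```python
import numpy as np

fs = 25.0                       # sampling rate assumed, not stated in the abstract
win_len = int(4.0 * fs)         # 4 s Hamming window -> 100 samples
hop = win_len - int(2.5 * fs)   # 2.5 s overlap -> 38-sample hop
nfft = 128                      # 128-point FFT (zero-padded)

# One minute of synthetic stand-in "seismic" noise.
x = np.random.default_rng(0).normal(size=int(60 * fs))
window = np.hamming(win_len)

frames = [x[i:i + win_len] * window
          for i in range(0, len(x) - win_len + 1, hop)]
# One-sided power spectrogram: frequency bins (rows) x time frames (cols),
# the grid-like input the CNN consumes.
spec = (np.abs(np.fft.rfft(frames, n=nfft, axis=1)) ** 2).T
```

Each resulting frequency-time grid would be fed to the CNN as one training image.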
Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines
Neftci, Emre O.; Augustine, Charles; Paul, Somnath; Detorakis, Georgios
2017-01-01
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. PMID:28680387
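The "one addition and two comparisons per weight" structure described above can be sketched as follows. This is one reading of the rule, not the authors' implementation: the two comparisons gate plasticity on the membrane potential, and the added quantity is the error carried by the neuron's second (dendritic) compartment via fixed random feedback weights; all constants are illustrative.

```python
import numpy as np

def erbp_update(w, pre_spikes, dendrite_err, v_mem,
                lr=0.01, v_lo=-1.0, v_hi=1.0):
    """Sketch of an event-driven random-BP weight update: on a
    presynaptic spike, if the postsynaptic membrane potential lies
    inside a plasticity window (two comparisons), add the
    error-modulated term to the weight (one addition per synapse)."""
    gate = (v_mem > v_lo) & (v_mem < v_hi)          # two comparisons per neuron
    dw = lr * np.outer(gate * dendrite_err, pre_spikes)
    return w + dw                                    # the single addition

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, (4, 6))                       # 6 inputs -> 4 neurons
pre = (rng.random(6) < 0.3).astype(float)            # which inputs spiked this event
err = rng.normal(size=4)                             # error via fixed random feedback
v = rng.uniform(-2, 2, 4)                            # membrane potentials
w_new = erbp_update(w, pre, err, v)
```

Synapses whose presynaptic neuron did not spike are untouched, which is what makes the rule event-driven.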
Classify epithelium-stroma in histopathological images based on deep transferable network.
Yu, X; Zheng, H; Liu, C; Huang, Y; Ding, X
2018-04-20
Recently, deep learning methods have received increasing attention in histopathological image analysis. However, traditional deep learning methods assume that training data and test data have the same distribution, which causes certain limitations in real-world histopathological applications. Moreover, it is costly to recollect a large amount of labeled histology data to train a new neural network for each specific image acquisition procedure, even for similar tasks. In this paper, an unsupervised domain adaptation is introduced into a typical deep convolutional neural network (CNN) model to mitigate the need for relabeling. The unsupervised domain adaptation is implemented by adding two regularisation terms, namely feature-based adaptation and entropy minimisation, to the objective function of a widely used CNN model called AlexNet. Three independent public epithelium-stroma datasets were used to verify the proposed method. The experimental results demonstrate that in epithelium-stroma classification, the proposed method achieves better performance than commonly used deep learning methods and some existing deep domain adaptation methods. Therefore, the proposed method can be considered a better option for real-world applications of histopathological image analysis because there is no requirement to recollect large-scale labeled data for every specific domain.
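Of the two regularisation terms above, entropy minimisation is straightforward to sketch: it is the mean Shannon entropy of the network's softmax outputs on unlabeled target-domain images, which, when added to the objective, pushes the network toward confident target-domain predictions. The weighting of the term in the overall objective is not reproduced here.

```python
import numpy as np

def entropy_min_loss(probs, eps=1e-12):
    """Entropy-minimisation regulariser: mean Shannon entropy of each
    row of class probabilities (rows sum to 1)."""
    p = np.clip(probs, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

# Confident epithelium/stroma predictions incur a smaller penalty
# than uncertain ones.
confident = np.array([[0.99, 0.01], [0.02, 0.98]])
uncertain = np.array([[0.50, 0.50], [0.60, 0.40]])
```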
Fabric defect detection based on visual saliency using deep feature and low-rank recovery
NASA Astrophysics Data System (ADS)
Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan
2018-04-01
Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency using deep features and low-rank recovery is proposed. First, unsupervised pre-training initializes the network parameters using the large MNIST dataset; supervised fine-tuning on a fabric image library based on Convolutional Neural Networks (CNNs) is then implemented to generate a more accurate deep neural network model. Second, the fabric images are uniformly divided into image blocks of the same size, and their multi-layer deep features are extracted using the trained deep network. Thereafter, all the extracted features are concatenated into a feature matrix. Third, low-rank matrix recovery is adopted to divide the feature matrix into a low-rank matrix, which indicates the background, and a sparse matrix, which indicates the salient defects. In the end, an iterative optimal-threshold segmentation algorithm is utilized to segment the saliency maps generated from the sparse matrix to locate the fabric defect areas. Experimental results demonstrate that the features extracted by the CNN are more suitable for characterizing fabric texture than traditional LBP, HOG and other hand-crafted features, and that the proposed method can accurately detect the defect regions of various fabric defects, even for images with complex texture.
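The low-rank-plus-sparse split in the third step can be sketched with a standard principal component pursuit iteration (an augmented-Lagrangian scheme with generic default parameters, not the paper's specific solver):

```python
import numpy as np

def rpca(D, lam=None, iters=100):
    """Minimal robust-PCA sketch: split D into a low-rank part L
    (repetitive background texture) and a sparse part S (salient
    defects) by alternating singular-value and elementwise soft
    thresholding."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt     # singular-value thresholding
        S = shrink(D - L + Y / mu, lam / mu)     # elementwise soft threshold
        Y += mu * (D - L - S)
        mu *= 1.5
    return L, S

# Synthetic check: rank-1 "texture" plus a few large sparse "defects".
rng = np.random.default_rng(0)
base = np.outer(rng.normal(size=30), rng.normal(size=30))
spikes = np.zeros((30, 30))
spikes[rng.integers(0, 30, 5), rng.integers(0, 30, 5)] = 10.0
L, S = rpca(base + spikes)
```

In the paper's pipeline, `D` would be the matrix of concatenated deep features of the image blocks, and `S` would be reshaped into the saliency map that is then thresholded.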
Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.
Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou
2017-05-10
Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.
Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks.
Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun
2017-01-01
Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding (TFBS) site classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method is finding a test sequence's saliency map which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them.
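The saliency-map idea described above (first-order derivatives of the prediction score with respect to each nucleotide position) can be sketched numerically. The real toolkit backpropagates analytic gradients through the DNN; here finite differences on a toy linear scorer stand in for that, so the setup is illustrative only:

```python
import numpy as np

def saliency_map(score_fn, x, eps=1e-4):
    """First-order saliency: |d score / d x_i| estimated by central
    finite differences around a one-hot-encoded input sequence x."""
    sal = np.zeros_like(x, dtype=float)
    for idx in np.ndindex(x.shape):
        xp, xm = x.astype(float), x.astype(float)
        xp = xp.copy(); xm = xm.copy()
        xp[idx] += eps; xm[idx] -= eps
        sal[idx] = abs(score_fn(xp) - score_fn(xm)) / (2 * eps)
    return sal

# Toy "model": a linear scorer over a 4-position one-hot DNA window
# (rows = positions, columns = A, C, G, T).
w = np.array([[0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, -3.0]])
x = np.eye(4)      # the sequence A, C, G, T one-hot encoded
sal = saliency_map(lambda v: float((w * v).sum()), x)
```

For the linear scorer, the saliency is exactly |w|; for a trained TFBS model it highlights which nucleotides most move the binding prediction.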
Fusion of shallow and deep features for classification of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang
2018-02-01
Effective spectral and spatial pixel description plays a significant role in the classification of high-resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods, which employ deep neural networks and have greatly improved classification accuracy. However, the former traditional features are insufficient to depict the complex distribution of high-resolution images, while the deep features demand plenty of samples to train the network, and overfitting easily occurs if only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and implement classification for high-resolution remote sensing images. The employment of GLCM is able to represent the original images while eliminating redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the amount of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. A fine-tuning strategy is also used in our study to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as the PaviaU data validate that our proposed method leads to a performance improvement compared to the individual involved approaches.
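A minimal sketch of the shallow descriptor the abstract builds on: a gray-level co-occurrence matrix for a single offset, counting how often quantized gray levels co-occur at neighboring pixels. This is an assumed illustration, not the paper's code; in practice a library routine (e.g. scikit-image's `graycomatrix`) would be used:

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Normalized co-occurrence counts of quantized gray levels
    at the given (dy, dx) offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    dy, dx = offset
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()  # joint probabilities over pixel pairs

img = np.arange(16).reshape(4, 4)  # tiny synthetic "image"
p = glcm(img, levels=4)
print(p.shape)
```

In the paper's pipeline, such GLCM representations (rather than raw pixels) would then be fed to the CNN.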
NASA Astrophysics Data System (ADS)
Fotin, Sergei V.; Yin, Yin; Haldankar, Hrishikesh; Hoffmeister, Jeffrey W.; Periaswamy, Senthil
2016-03-01
Computer-aided detection (CAD) has been used in screening mammography for many years and is likely to be utilized for digital breast tomosynthesis (DBT). Higher detection performance is desirable, as it may have an impact on radiologists' decisions and clinical outcomes. Recently, algorithms based on deep convolutional architectures have been shown to achieve state-of-the-art performance in object classification and detection. Similarly, we trained a deep convolutional neural network directly on patches sampled from two-dimensional mammography and reconstructed DBT volumes, and compared its performance to a conventional CAD algorithm based on the computation and classification of hand-engineered features. Detection performance was evaluated on an independent test set of 344 DBT reconstructions (GE SenoClaire 3D, iterative reconstruction algorithm) containing 328 suspicious and 115 malignant soft tissue densities, including masses and architectural distortions. Detection sensitivity was measured on a region-of-interest (ROI) basis at a rate of five detection marks per volume. Moving from the conventional to the deep learning approach increased ROI sensitivity from 0.832 ± 0.040 to 0.893 ± 0.033 for suspicious ROIs, and from 0.852 ± 0.065 to 0.930 ± 0.046 for malignant ROIs. These results indicate the high utility of deep feature learning in the analysis of DBT data and the high potential of the method for broader medical image analysis tasks.
NASA Astrophysics Data System (ADS)
Rana, Narender; Chien, Chester
2018-03-01
A key sensor element in a hard disk drive (HDD) is the read-write head device. The device has a complex 3D shape, and its fabrication requires over a thousand process steps, many of which are various types of image inspection and critical dimension (CD) metrology steps. In order to achieve high device yield across a wafer, very tight inspection and metrology specifications are implemented. Many images are collected on a wafer and inspected for various types of defects, and in CD metrology the quality of the image impacts the CD measurements. Metrology noise needs to be minimized in CD metrology to obtain a better estimate of process-related variations for implementing robust process controls. Although specialized tools are available for defect inspection and review that allow classification and statistics, such advanced tools are not always available, and images often need to be inspected manually. SEM image inspection and CD-SEM metrology tools are separate tools, differing in software and purpose. There have been cases where a significant number of CD-SEM images were blurred or had some artefact, creating a need for image inspection along with the CD measurement. The tool may not report a practical metric highlighting the quality of the image, and not filtering CDs from these blurred images adds metrology noise to the CD measurement. An image classifier can be helpful here for filtering such data. This paper presents the use of artificial intelligence in classifying SEM images. Deep machine learning is used to train a neural network, which is then used to classify new images as blurred or not blurred. Figure 1 shows the image blur artefact and a contingency table of classification results from the trained deep neural network. A prediction accuracy of 94.9% was achieved with the first model. The paper covers other such applications of deep neural networks in image classification for inspection, review, and metrology.
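The abstract notes that tools may not report a practical image-quality metric. One common sharpness proxy (our illustration, not the paper's method, which uses a trained classifier) is the variance of the image Laplacian, which is low for blurred images:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 5-point discrete Laplacian; blurred images,
    lacking high-frequency content, score low."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

sharp = np.zeros((8, 8))
sharp[::2] = 1.0                    # high-frequency stripes
blurred = np.full((8, 8), 0.5)      # featureless "blurred" image
print(laplacian_variance(sharp) > laplacian_variance(blurred))
```

A threshold on such a metric could serve as a simple baseline against which a learned blur classifier is compared.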
The deep space network, volume 10
NASA Technical Reports Server (NTRS)
1972-01-01
Progress on the Deep Space Network (DSN) supporting research and technology is reported. The objectives, functions and facilities of the DSN are described along with the mission support for the following: interplanetary flight projects, planetary flight projects, and manned space flight projects. Work in advanced engineering and communications systems is reported along with changes in hardware and software configurations in the DSN/MSFN tracking stations.
A Spatiotemporal Prediction Framework for Air Pollution Based on Deep RNN
NASA Astrophysics Data System (ADS)
Fan, J.; Li, Q.; Hou, J.; Feng, X.; Karimian, H.; Lin, S.
2017-10-01
Time series data in practical applications always contain missing values due to sensor malfunction, network failure, outliers, etc. In order to handle missing values in time series, as well as the lack of consideration of temporal properties in machine learning models, we propose a spatiotemporal prediction framework based on missing value processing algorithms and a deep recurrent neural network (DRNN). By using a missing tag and missing interval to represent time series patterns, we implement three different missing value fixing algorithms, which are further incorporated into a deep neural network that consists of LSTM (Long Short-term Memory) layers and fully connected layers. Real-world air quality and meteorological datasets (Jingjinji area, China) are used for model training and testing. Deep feed forward neural networks (DFNN) and gradient boosting decision trees (GBDT) are trained as baseline models against the proposed DRNN. The performances of the three missing value fixing algorithms, as well as of the different machine learning models, are evaluated and analysed. Experiments show that the proposed DRNN framework outperforms both DFNN and GBDT, therefore validating the capacity of the proposed framework. Our results also provide useful insights for a better understanding of the different strategies that handle missing values.
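A hedged sketch of the missing-tag and missing-interval representation the abstract describes (function and tuple layout are our assumptions, not the paper's): each time step carries the (possibly imputed) value, a binary missing tag, and the interval since the last observation, here with simple forward-fill imputation:

```python
def tag_and_fill(series):
    """Convert a series with None gaps into (value, missing_tag, interval)
    triples: forward-fill the value, flag it as missing, and count the
    steps since the last real observation."""
    out, last, gap = [], 0.0, 0.0
    for v in series:
        if v is None:
            gap += 1.0
            out.append((last, 1.0, gap))   # imputed value, tag, interval
        else:
            last, gap = v, 0.0
            out.append((v, 0.0, 0.0))      # observed value, no gap
    return out

print(tag_and_fill([1.0, None, None, 4.0]))
```

The resulting triples form the per-step feature vector a recurrent layer would consume, letting the model distinguish observed from imputed inputs.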
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-18
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some previously reported results by up to 9%. The framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.
DRREP: deep ridge regressed epitope predictor.
Sher, Gene; Zhi, Degui; Zhang, Shaojie
2017-10-03
The ability to predict epitopes plays an enormous role in vaccine development, in terms of our ability to zero in on where to perform a more thorough in-vivo analysis of the protein in question. Though there have been numerous advancements and improvements in epitope prediction over the past decade, on average the best benchmark prediction accuracies are still only around 60%. New machine learning algorithms have arisen within the domains of deep learning, text mining, and convolutional networks. This paper presents a novel analytically trained deep neural network using string kernels, tailored for continuous epitope prediction, called the Deep Ridge Regressed Epitope Predictor (DRREP). DRREP was tested on long protein sequences from the following datasets: SARS, Pellequer, HIV, AntiJen, and SEQ194. DRREP was compared to numerous state-of-the-art epitope predictors, including the most recently published predictors, LBtope and DMNLBE. Using area under the ROC curve (AUC), DRREP achieved a performance improvement over the best performing predictors on SARS (13.7%), HIV (8.9%), Pellequer (1.5%), and SEQ194 (3.1%), with its performance being matched only on the AntiJen dataset, by the LBtope predictor, where both DRREP and LBtope achieved an AUC of 0.702. DRREP is an analytically trained deep neural network, and is thus capable of learning in a single step through regression. By combining the features of deep learning, string kernels, and convolutional networks, the system is able to perform residue-by-residue prediction of continuous epitopes with higher accuracy than the current state-of-the-art predictors.
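The "analytically trained, single-step" learning the abstract mentions amounts to solving a ridge regression in closed form instead of iterating gradient descent. The sketch below (our illustration; the features stand in for DRREP's string-kernel/convolutional features) shows that one-step solve:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge solution: w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))                 # stand-in feature matrix
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true                               # noiseless synthetic targets
w = ridge_fit(X, y, lam=1e-6)                # recovered in a single solve
print(np.round(w, 2))
```

Because training reduces to one linear solve, there are no learning-rate or epoch hyperparameters, which is the practical appeal of the approach.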
Deep hierarchical attention network for video description
NASA Astrophysics Data System (ADS)
Li, Shuohao; Tang, Min; Zhang, Jun
2018-03-01
Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to the encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on standard datasets show that our model outperforms state-of-the-art techniques.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of the network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM) model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.
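A toy finite state machine of the kind the paper abstracts to guide symbolic execution (the protocol states and messages here are invented for illustration, not drawn from the paper's targets): only message sequences the FSM accepts can drive the implementation into deep states, so exploration is restricted to them:

```python
# Transition table: (current state, message) -> next state.
FSM = {
    ("INIT", "HELLO"): "HANDSHAKE",
    ("HANDSHAKE", "AUTH"): "SESSION",
    ("SESSION", "DATA"): "SESSION",
    ("SESSION", "BYE"): "CLOSED",
}

def run(msgs, state="INIT"):
    """Drive the FSM with a message sequence; reject on any
    transition the model does not allow."""
    for m in msgs:
        state = FSM.get((state, m))
        if state is None:
            return "REJECTED"
    return state

print(run(["HELLO", "AUTH", "DATA", "DATA", "BYE"]))  # reaches a deep state
print(run(["DATA"]))  # rejected immediately; not worth exploring
```

In the model-guided approach, sequences that the FSM rejects are pruned before symbolic execution, concentrating the path exploration budget on inputs that can actually reach deep protocol states.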
NASA Technical Reports Server (NTRS)
Anderson, Michael L.; Wright, Nathaniel; Tai, Wallace
2012-01-01
Natural disasters, terrorist attacks, civil unrest, and other events have the potential of disrupting mission-essential operations in any space communications network. NASA's Space Communications and Navigation office (SCaN) is in the process of studying options for integrating the three existing NASA network elements, the Deep Space Network, the Near Earth Network, and the Space Network, into a single integrated network with common services and interfaces. The need to maintain Continuity of Operations (COOP) after a disastrous event has a direct impact on the future network design and operations concepts. The SCaN Integrated Network will provide support to a variety of user missions. The missions have diverse requirements and include anything from earth based platforms to planetary missions and rovers. It is presumed that an integrated network, with common interfaces and processes, provides an inherent advantage to COOP in that multiple elements and networks can provide cross-support in a seamless manner. The results of trade studies support this assumption but also show that centralization as a means of achieving integration can result in single points of failure that must be mitigated. The cost to provide this mitigation can be substantial. In support of this effort, the team evaluated the current approaches to COOP, developed multiple potential approaches to COOP in a future integrated network, evaluated the interdependencies of the various approaches to the various network control and operations options, and did a best value assessment of the options. The paper will describe the trade space, the study methods, and results of the study.
NASA Astrophysics Data System (ADS)
Lähivaara, Timo; Kärkkäinen, Leo; Huttunen, Janne M. J.; Hesthaven, Jan S.
2018-02-01
We study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks are simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiments, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural network-based inversion. Computational examples confirm the feasibility and accuracy of this approach.
NASA Astrophysics Data System (ADS)
Kirst, Christoph
It is astonishing how the sub-parts of a brain co-act to produce coherent behavior. What are the mechanisms that coordinate information processing and communication, and how can these be changed flexibly to cope with variable contexts? Here we show that when information is encoded in the deviations around a collective dynamical reference state of a recurrent network, the propagation of these fluctuations depends strongly on precisely this underlying reference. Information here 'surfs' on top of the collective dynamics, and switching between states enables fast and flexible rerouting of information. This in turn affects local processing and consequently changes the global reference dynamics, which re-regulate the distribution of information. This provides a generic mechanism for self-organized information processing, as we demonstrate with an oscillatory Hopfield network that performs contextual pattern recognition. Deep neural networks have recently proven very successful. Here we show that generating information channels via collective reference dynamics can effectively compress a deep multi-layer architecture into a single layer, making this mechanism a promising candidate for the organization of information processing in biological neuronal networks.
Cascaded deep decision networks for classification of endoscopic images
NASA Astrophysics Data System (ADS)
Murthy, Venkatesh N.; Singh, Vivek; Sun, Shanhui; Bhattacharya, Subhabrata; Chen, Terrence; Comaniciu, Dorin
2017-02-01
Both traditional and wireless capsule endoscopes can generate tens of thousands of images for each patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process, or to have automatic indication of highly suspicious areas during online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature. However, performance on challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard deep neural network-based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples, which are handled by the subsequent expert shallow networks. We validate CDDN using two different types of endoscopic imaging: a polyp classification dataset and a tumor classification dataset. On both datasets we show that CDDN can outperform other methods by about 10%. In addition, CDDN can also be applied to other image classification problems.
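The cascade's inference-time behavior can be sketched in a few lines (a hedged illustration with made-up stage classifiers and threshold, not the CDDN training procedure): a sample is decided by the first stage that is confident, and only hard samples fall through to later, more specialized stages:

```python
def cascade_predict(x, stages, threshold=0.9):
    """Return the label of the first stage whose confidence clears the
    threshold; the final stage decides any sample that falls through."""
    for predict in stages:
        label, conf = predict(x)
        if conf >= threshold:
            return label
    return label  # last stage decides regardless of confidence

# Hypothetical two-stage cascade over a scalar feature x.
stage1 = lambda x: ("benign", 0.95) if x < 0.3 else ("tumor", 0.6)
stage2 = lambda x: ("tumor", 0.8) if x > 0.5 else ("benign", 0.7)

print(cascade_predict(0.1, [stage1, stage2]))  # stage 1 is confident
print(cascade_predict(0.6, [stage1, stage2]))  # falls through to stage 2
```

During training, CDDN builds the later stages only from the samples the earlier stages could not classify confidently, which is what makes them "expert" networks for the hard cases.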
ERIC Educational Resources Information Center
Csermely, Peter
2017-01-01
Our century has unprecedented new challenges, which need creative solutions and deep thinking. Contemplative, deep thinking became an "endangered species" in our rushing world of Tweets, elevator pitches, and fast decisions. Here, we describe how important aspects of both creativity and deep thinking can be understood as network…
A history of the deep space network
NASA Technical Reports Server (NTRS)
Corliss, W. R.
1976-01-01
The Deep Space Network (DSN) has been managed and operated by the Jet Propulsion Laboratory (JPL) under NASA contract ever since NASA was formed in late 1958. The tracking and data acquisition tasks of the DSN are markedly different from those of the other NASA network, STDN. STDN, an amalgamation of the satellite tracking network (STADAN) and the Manned Space Flight Network (MSFN), is primarily concerned with supporting manned and unmanned earth satellites. In contrast, the DSN deals with spacecraft that are thousands to hundreds of millions of miles away. The radio signals from these distant craft are many orders of magnitude weaker than those from nearby satellites. Distance also makes precise radio location more difficult, and accurate trajectory data are vital to deep space navigation in the vicinities of the other planets of the solar system. In addition to tracking spacecraft and acquiring data from them, the DSN is required to transmit many thousands of commands to control the sophisticated planetary probes and interplanetary monitoring stations. To meet these demanding requirements, the DSN has been compelled to stay at the forefront of technology.
Feature to prototype transition in neural networks
NASA Astrophysics Data System (ADS)
Krotov, Dmitry; Hopfield, John
Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to the neural networks used in deep learning, are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory, such models deserve attention because they make it possible to store a much larger number of memories compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation; features and prototypes are the two extreme regimes of pattern recognition known in cognitive psychology.
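The rectified polynomial activation the abstract describes is simply the ReLU raised to a power, so the standard deep learning case is recovered at n = 1 (a minimal numerical illustration, using our own function name):

```python
import numpy as np

def rect_poly(x, n):
    """Rectified polynomial activation: f(x) = max(0, x)^n.
    n = 1 is the ordinary ReLU; larger n drives the model toward
    prototype-like memories."""
    return np.maximum(0.0, x) ** n

x = np.array([-1.0, 0.5, 2.0])
print(rect_poly(x, 1))  # ReLU: negative inputs clipped, rest unchanged
print(rect_poly(x, 3))  # higher degree: small activations suppressed,
                        # large ones amplified
```

The higher degree sharpens the competition among hidden units, which is what produces the feature-to-prototype transition as n grows.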
Robust visual tracking via multiscale deep sparse networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo
2017-04-01
In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features, and has had significant success in solving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without an offline pretraining process, and effectively exploits robust and powerful features through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker adaptively selects the matching tracking network in accordance with the initial target's profile type, preserving inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
Overview of deep learning in medical imaging.
Suzuki, Kenji
2017-09-01
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient to develop, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii
2017-01-01
Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
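The rate-coding idea behind CNN-to-SNN conversion can be shown with a single neuron (a toy sketch of ours, not the paper's converter): an integrate-and-fire unit driven by a constant input fires at a rate that approximates the ReLU of that input, which is why a trained continuous-valued network maps onto a spiking one:

```python
def if_rate(inp, steps=1000, threshold=1.0):
    """Simulate an integrate-and-fire neuron with constant input and
    'reset by subtraction'; return its firing rate in spikes/step."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += inp                 # integrate the input current
        if v >= threshold:
            v -= threshold       # subtractive reset preserves residue
            spikes += 1
    return spikes / steps

print(if_rate(0.3))   # approximates ReLU(0.3) = 0.3
print(if_rate(-0.2))  # negative input never spikes, like ReLU
```

The subtractive (rather than zeroing) reset is what keeps the rate proportional to the input; the paper's contribution is extending such correspondences to operations like max-pooling, softmax, and batch normalization.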
Searching for exoplanets using artificial intelligence
NASA Astrophysics Data System (ADS)
Pearson, Kyle A.; Palafox, Leon; Griffith, Caitlin A.
2018-02-01
In the last decade, over a million stars were monitored to detect transiting planets. Manual interpretation of potential exoplanet candidates is labor intensive and subject to human error, the results of which are difficult to quantify. Here we present a new method of detecting exoplanet candidates in large planetary search projects which, unlike current methods, uses a neural network. Neural networks, also called "deep learning" or "deep nets," are designed to give a computer perception into a specific problem by training it to recognize patterns. Unlike past transit detection algorithms, deep nets learn to recognize planet features instead of relying on hand-coded metrics that humans perceive as the most representative. Our convolutional neural network is capable of detecting Earth-like exoplanets in noisy time-series data with greater accuracy than a least-squares method. Deep nets are highly generalizable, allowing data from different time series to be evaluated after interpolation without compromising performance. As validated by our deep net analysis of Kepler light curves, we detect periodic transits consistent with the true period without any model fitting. Our study indicates that machine learning will facilitate the characterization of exoplanets in future analyses of large astronomy data sets.
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines, offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.
Pubface: Celebrity face identification based on deep learning
NASA Astrophysics Data System (ADS)
Ouanan, H.; Ouanan, M.; Aksasse, B.
2018-05-01
In this paper, we describe a new real-time application called PubFace, which recognizes celebrities in public spaces by employing a new pose-invariant face recognition deep neural network with an extremely low error rate. To build this application, we make the following contributions: first, we build a novel dataset with over five million labelled faces. Second, we fine-tune the deep convolutional neural network (CNN) VGG-16 architecture on this new dataset. Finally, we deploy the model on the Raspberry Pi 3 Model B using the OpenCV dnn module (OpenCV 3.3).
Deep learning on temporal-spectral data for anomaly detection
NASA Astrophysics Data System (ADS)
Ma, King; Leung, Henry; Jalilian, Ehsan; Huang, Daniel
2017-05-01
Detecting anomalies is important for continuous monitoring of sensor systems. One significant challenge is to use sensor data to autonomously detect changes that cause different conditions to occur. Using deep learning methods, we are able to monitor and detect changes resulting from some disturbance in the system. We utilize deep neural networks for sequence analysis of time series, using a multi-step method for anomaly detection. We train the network to learn spectral and temporal features from the acoustic time series, and we test our method using fiber-optic acoustic data from a pipeline.
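As a minimal, library-free illustration of the spectral side of such a pipeline (not the authors' deep network), one can score windows of a time series by the distance of their DFT magnitude spectrum from a baseline spectrum; in the synthetic example below, the last two windows contain an extra frequency component and score high:

```python
import math

def dft_magnitudes(window):
    """Naive DFT magnitude spectrum of one time-series window."""
    n = len(window)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(window))
        im = -sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(window))
        mags.append(math.sqrt(re * re + im * im))
    return mags

def spectral_anomaly_scores(series, window, baseline_windows):
    """Score each non-overlapping window by squared distance of its
    spectrum from the mean spectrum of the first baseline_windows windows."""
    spectra = [dft_magnitudes(series[i:i + window])
               for i in range(0, len(series) - window + 1, window)]
    baseline = [sum(s[k] for s in spectra[:baseline_windows]) / baseline_windows
                for k in range(len(spectra[0]))]
    return [sum((s[k] - baseline[k]) ** 2 for k in range(len(baseline)))
            for s in spectra]

# Synthetic signal: a pure tone, with a disturbance tone in the final quarter.
series = [math.sin(2 * math.pi * 4 * t / 32) for t in range(192)]
series += [math.sin(2 * math.pi * 4 * t / 32) + math.sin(2 * math.pi * 10 * t / 32)
           for t in range(192, 256)]
scores = spectral_anomaly_scores(series, window=32, baseline_windows=6)
```

A deep network replaces the fixed distance-to-baseline score with learned temporal-spectral features, but the windowing-and-spectrum front end is the same idea.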
Deep-water longline fishing has reduced impact on Vulnerable Marine Ecosystems
Pham, Christopher K.; Diogo, Hugo; Menezes, Gui; Porteiro, Filipe; Braga-Henriques, Andreia; Vandeperre, Frederic; Morato, Telmo
2014-01-01
Bottom trawl fishing threatens deep-sea ecosystems, modifying the seafloor morphology and its physical properties, with dramatic consequences on benthic communities. Therefore, the future of deep-sea fishing relies on alternative techniques that maintain the health of deep-sea ecosystems and tolerate appropriate human uses of the marine environment. In this study, we demonstrate that deep-sea bottom longline fishing has little impact on vulnerable marine ecosystems, reducing bycatch of cold-water corals and limiting additional damage to benthic communities. We found that slow-growing vulnerable species are still common in areas subject to more than 20 years of longlining activity and estimate that one deep-sea bottom trawl will have a similar impact to 296–1,719 longlines, depending on the morphological complexity of the impacted species. Given the pronounced differences in the magnitude of disturbances coupled with its selectivity and low fuel consumption, we suggest that regulated deep-sea longlining can be an alternative to deep-sea bottom trawling. PMID:24776718
Trullo, Roger; Petitjean, Caroline; Nie, Dong; Shen, Dinggang; Ruan, Su
2017-09-01
Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, for preventing irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is a challenge. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Different from previous works in OAR segmentation where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including esophagus, heart, aorta and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which will be used in the second network, when each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, for providing an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs by the use of Conditional Random Fields (CRF). Next, the second deep architecture is employed to refine the segmentation of each organ by using the maps obtained on the first deep architecture to learn anatomical constraints for guiding and refining the segmentations. Experimental results show superior performance on 30 CT scans, comparing with other state-of-the-art methods.
Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen
2017-09-05
In recent years, deep learning has become a widely used machine learning technique, delivering state-of-the-art performance in a variety of fields. Utilizing deep learning to enhance performance is therefore an important direction for current bioinformatics research. In this study, we use deep learning via convolutional neural networks and position-specific scoring matrices to identify electron transport proteins, which serve an important molecular function in transmembrane proteins. Our deep learning method yields a precise model for identifying electron transport proteins, achieving a sensitivity of 80.3%, specificity of 94.4%, accuracy of 92.3%, and MCC of 0.71 on an independent dataset. The proposed technique can serve as a powerful tool for identifying electron transport proteins and can help biologists understand their function. Moreover, this study provides a basis for further research applying deep learning in bioinformatics. © 2017 Wiley Periodicals, Inc.
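The reported evaluation metrics all derive from a binary confusion matrix; the counts below are illustrative, not the study's data:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy and Matthews correlation
    coefficient from binary confusion-matrix counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, acc, mcc

# Hypothetical counts for a held-out set of 300 proteins.
sens, spec, acc, mcc = classification_metrics(tp=80, fp=20, tn=180, fn=20)
```

MCC is the most informative single number here because electron transport proteins are a minority class, so accuracy alone can be inflated by the negative class.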
NASA Astrophysics Data System (ADS)
He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang
2017-03-01
Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with complex variations in iris imaging, including illumination, aging, deformation, and device variations. Thereby, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels fitting the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients and that our iris deep representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
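A Gabor kernel of the kind being selected can be written down directly; the 1-D form below (a cosine carrier under a Gaussian envelope, with assumed σ and frequency) is a simplified stand-in for the 2-D filters applied to iris images, where the optimizer's job is to choose these parameters per filtering band:

```python
import math

def gabor_kernel_1d(sigma, freq, half_width):
    """Sample a 1-D Gabor filter: cos(2*pi*freq*x) under a Gaussian envelope."""
    return [math.exp(-x * x / (2 * sigma * sigma)) * math.cos(2 * math.pi * freq * x)
            for x in range(-half_width, half_width + 1)]

# Hypothetical parameters; PSO would search over (sigma, freq) instead.
kernel = gabor_kernel_1d(sigma=3.0, freq=0.1, half_width=9)
```

The particle swarm search in the paper effectively replaces hand-picked `(sigma, freq)` values with data-driven ones before the deep belief network encodes the filtered coefficients.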
pDeep: Predicting MS/MS Spectra of Peptides with Deep Learning.
Zhou, Xie-Xuan; Zeng, Wen-Feng; Chi, Hao; Luo, Chunjie; Liu, Chao; Zhan, Jianfeng; He, Si-Min; Zhang, Zhifei
2017-12-05
In tandem mass spectrometry (MS/MS)-based proteomics, search engines rely on comparison between an experimental MS/MS spectrum and the theoretical spectra of the candidate peptides. Hence, accurate prediction of the theoretical spectra of peptides appears to be particularly important. Here, we present pDeep, a deep neural network-based model for the spectrum prediction of peptides. Using the bidirectional long short-term memory (BiLSTM), pDeep can predict higher-energy collisional dissociation, electron-transfer dissociation, and electron-transfer and higher-energy collision dissociation MS/MS spectra of peptides with >0.9 median Pearson correlation coefficients. Further, we showed that an intermediate layer of the neural network could reveal physicochemical properties of amino acids, for example the similarities in fragmentation behavior between amino acids. We also showed the potential of pDeep to distinguish extremely similar peptides (peptides that contain isobaric amino acids, for example, GG = N, AG = Q, or even I = L), which are very difficult to distinguish using traditional search engines.
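The headline metric, the Pearson correlation between a predicted and an observed fragment-intensity vector, is straightforward to compute; the intensity values below are hypothetical:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

observed  = [0.1, 0.9, 0.4, 0.7, 0.2]    # hypothetical fragment intensities
predicted = [0.15, 0.8, 0.45, 0.65, 0.25]
r = pearson(observed, predicted)
```

Reporting the median of `r` over many peptides, as the abstract does, reduces sensitivity to a few poorly predicted spectra.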
Uncertain Photometric Redshifts with Deep Learning Methods
NASA Astrophysics Data System (ADS)
D'Isanto, A.
2017-06-01
The need for accurate photometric redshifts estimation is a topic that has fundamental importance in Astronomy, due to the necessity of efficiently obtaining redshift information without the need of spectroscopic analysis. We propose a method for determining accurate multi-modal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.
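An MDN's output is a set of mixture weights, means, and widths defining a multi-modal PDF over redshift; a minimal sketch with assumed component parameters (not values fitted by the paper):

```python
import math

def gaussian_mixture_pdf(z, weights, means, sigmas):
    """Evaluate a 1-D Gaussian mixture density, the form of photo-z PDF
    that a Mixture Density Network outputs."""
    return sum(w * math.exp(-(z - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

# A bimodal photo-z PDF with two hypothetical components.
weights, means, sigmas = [0.7, 0.3], [0.5, 1.2], [0.05, 0.1]
peak1 = gaussian_mixture_pdf(0.5, weights, means, sigmas)
```

The multi-modality is the point: a single best-estimate redshift would hide the secondary solution at z ≈ 1.2, whereas the PDF keeps both, with its mass split 0.7/0.3 between them.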
Nonparametric Representations for Integrated Inference, Control, and Sensing
2015-10-01
The report aims to develop a new framework for autonomous operations that will extend the state of the art in distributed learning and modeling from data. Cited work includes DeCAF (Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell, ICML 2013) and the "SuperVision" convolutional neural network (CNN) of "ImageNet Classification with Deep Convolutional Neural Networks."
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns, as well as the cumulative gain probability and probability density functions, of the Deep Space Network antennas are developed. These are needed for studying and evaluating interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone Station of the Deep Space Network.
The Telecommunications and Data Acquisition Report. [Deep Space Network
NASA Technical Reports Server (NTRS)
Posner, E. C. (Editor)
1988-01-01
In space communications, radio navigation, radio science, and ground based radio and radar astronomy, activities of the Deep Space Network and its associated Ground Communications Facility in planning, in supporting research and technology, in implementation, and in operations are reported. Also included is TDA funded activity at JPL on data and information systems and reimbursable DSN work performed for other space agencies through NASA.
Ji, Zexuan; Chen, Qiang; Niu, Sijie; Leng, Theodore; Rubin, Daniel L.
2018-01-01
Purpose To automatically and accurately segment geographic atrophy (GA) in spectral-domain optical coherence tomography (SD-OCT) images by constructing a voting system with deep neural networks without the use of retinal layer segmentation. Methods An automatic GA segmentation method for SD-OCT images based on the deep network was constructed. The structure of the deep network was composed of five layers, including one input layer, three hidden layers, and one output layer. During the training phase, the labeled A-scans with 1024 features were directly fed into the network as the input layer to obtain the deep representations. Then a soft-max classifier was trained to determine the label of each individual pixel. Finally, a voting decision strategy was used to refine the segmentation results among 10 trained models. Results Two image data sets with GA were used to evaluate the model. For the first dataset, our algorithm obtained a mean overlap ratio (OR) of 86.94% ± 8.75%, absolute area difference (AAD) of 11.49% ± 11.50%, and correlation coefficient (CC) of 0.9857; for the second dataset, the mean OR, AAD, and CC of the proposed method were 81.66% ± 10.93%, 8.30% ± 9.09%, and 0.9952, respectively. The proposed algorithm improved segmentation accuracy by over 5% and 10%, respectively, when compared with several state-of-the-art algorithms on the two data sets. Conclusions Without retinal layer segmentation, the proposed algorithm produced higher segmentation accuracy and was more stable when compared with state-of-the-art methods that rely on retinal layer segmentation results. Our model may provide reliable GA segmentations from SD-OCT images and be useful in the clinical diagnosis of advanced nonexudative AMD.
Translational Relevance Based on the deep neural networks, this study presents an accurate GA segmentation method for SD-OCT images without using any retinal layer segmentation results, and may contribute to improved understanding of advanced nonexudative AMD. PMID:29302382
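The voting step among the 10 trained models can be sketched as a per-pixel majority vote; the model outputs below are hypothetical, and ties (possible with an even number of voters) are broken arbitrarily here:

```python
def majority_vote(predictions):
    """Fuse per-pixel labels from several trained models by majority vote.

    predictions: list of per-model label lists, all the same length.
    Ties are resolved arbitrarily (set iteration order).
    """
    n_pixels = len(predictions[0])
    fused = []
    for i in range(n_pixels):
        votes = [p[i] for p in predictions]
        fused.append(max(set(votes), key=votes.count))
    return fused

# Ten hypothetical models labelling five pixels as GA (1) or background (0).
models = [[1, 0, 1, 1, 0]] * 7 + [[0, 0, 1, 0, 1]] * 3
fused = majority_vote(models)
```

Voting across independently trained networks is what gives the method its stability: a single model's mislabelled pixel is outvoted unless most models agree on the error.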
Involving Scientists in the NASA / JPL Solar System Educators Program
NASA Astrophysics Data System (ADS)
Brunsell, E.; Hill, J.
2001-11-01
The NASA / JPL Solar System Educators Program (SSEP) is a professional development program with the goal of inspiring America's students, creating learning opportunities, and enlightening inquisitive minds by engaging them in the Solar System exploration efforts conducted by the Jet Propulsion Laboratory (JPL). SSEP is a Jet Propulsion Laboratory program managed by Space Explorers, Inc. (Green Bay, WI) and the Virginia Space Grant Consortium (Hampton, VA). The heart of the program is a large nationwide network of highly motivated educators. These Solar System Educators, representing more than 40 states, lead workshops around the country that show teachers how to successfully incorporate NASA materials into their teaching. During FY2001, more than 9500 educators were impacted through nearly 300 workshops conducted in 43 states. Solar System Educators attend annual training institutes at the Jet Propulsion Laboratory during their first two years in the program. All Solar System Educators receive additional online training, materials and support. The JPL missions and programs involved in SSEP include: Cassini Mission to Saturn, Galileo Mission to Jupiter, STARDUST Comet Sample Return Mission, Deep Impact Mission to a Comet, Mars Exploration Program, Outer Planets Program, Deep Space Network, JPL Space and Earth Science Directorate, and the NASA Office of Space Science Solar System Exploration Education and Public Outreach Forum. Scientists can get involved with this program by cooperatively presenting at workshops conducted in their area, acting as a content resource or by actively mentoring Solar System Educators. Additionally, SSEP will expand this year to include other missions and programs related to the Solar System and the Sun.
Active semi-supervised learning method with hybrid deep belief networks.
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
2014-01-01
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the following hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We performed several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.
Deep Learning of Orthographic Representations in Baboons
Hannagan, Thomas; Ziegler, Johannes C.; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan
2014-01-01
What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords [1]. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process. PMID:24416300
Detection of bars in galaxies using a deep convolutional neural network
NASA Astrophysics Data System (ADS)
Abraham, Sheelu; Aniyan, A. K.; Kembhavi, Ajit K.; Philip, N. S.; Vaghmare, Kaustubh
2018-06-01
We present an automated method for the detection of bar structure in optical images of galaxies using a deep convolutional neural network that is easy to use and provides good accuracy. In our study, we use a sample of 9346 galaxies in the redshift range of 0.009-0.2 from the Sloan Digital Sky Survey (SDSS), which has 3864 barred galaxies, the rest being unbarred. We reach a top precision of 94 per cent in identifying bars in galaxies using the trained network. This accuracy matches the accuracy reached by human experts on the same data without additional information about the images. Since deep convolutional neural networks can be scaled to handle large volumes of data, the method is expected to have great relevance in an era where astronomy data is rapidly increasing in terms of volume, variety, volatility, and velocity along with other V's that characterize big data. With the trained model, we have constructed a catalogue of barred galaxies from SDSS and made it available online.
NASA Astrophysics Data System (ADS)
Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin
2016-09-01
Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmark and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, and that the average accuracy reaches 96.2%.
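The joint layer on top of the two modality networks can be caricatured as concatenation followed by a single logistic unit; all feature values and weights below are assumed for illustration, and the real model learns many such units jointly with the autoencoders:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def joint_layer(landmark_feats, texture_feats, weights, bias):
    """Fuse two modality representations by concatenation followed by a
    single logistic unit (a toy stand-in for the learned joint layer)."""
    fused = landmark_feats + texture_feats          # concatenation
    z = sum(w * f for w, f in zip(weights, fused)) + bias
    return sigmoid(z)

# Hypothetical 3-D landmark and 2-D texture codes from the two autoencoders.
score = joint_layer([0.2, 0.7, 0.1], [0.9, 0.4],
                    weights=[1.0, -0.5, 0.3, 2.0, -1.0], bias=-0.8)
```

Because the joint weights span both halves of the concatenated vector, the fused unit can exploit correlations between landmark motion and eye texture that neither single-modality network sees alone.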
NASA Astrophysics Data System (ADS)
Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan
2018-07-01
Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster R-CNN framework does not yield suitably high precision. Therefore, after careful analysis, we adopt dense convolutional networks, a multi-scale representation, and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve precision. We also propose an approach to reduce the test (detection) time and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.
DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.
Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh
2017-09-01
Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network that models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.
NASA Astrophysics Data System (ADS)
Liu, Miaofeng
2017-07-01
In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require knowing beforehand the local information for corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with severe corruption, or when inpainting low-resolution images, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the difficulty of training this deep neural network, skip-connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works excellently when performing super-resolution and image inpainting simultaneously.
Deep Space Network-Wide Portal Development: Planning Service Pilot Project
NASA Technical Reports Server (NTRS)
Doneva, Silviya
2011-01-01
The Deep Space Network (DSN) is an international network of antennas that supports interplanetary spacecraft missions and radio and radar astronomy observations for the exploration of the solar system and the universe. DSN provides the vital two-way communications link that guides and controls planetary explorers, and brings back the images and new scientific information they collect. In an attempt to streamline operations and improve overall services provided by the Deep Space Network a DSN-wide portal is under development. The project is one step in a larger effort to centralize the data collected from current missions including user input parameters for spacecraft to be tracked. This information will be placed into a principal repository where all operations related to the DSN are stored. Furthermore, providing statistical characterization of data volumes will help identify technically feasible tracking opportunities and more precise mission planning by providing upfront scheduling proposals. Business intelligence tools are to be incorporated in the output to deliver data visualization.
Video Salient Object Detection via Fully Convolutional Networks.
Wang, Wenguan; Shen, Jianbing; Shao, Ling
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
Convolutional neural networks for event-related potential detection: impact of the architecture.
Cecotti, H
2017-07-01
The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most of the signal and classification techniques for the detection of brain responses are based on linear algebra, different pattern recognition techniques such as the convolutional neural network (CNN), a type of deep learning technique, have attracted interest because they can process the signal after limited pre-processing. In this study, we investigate the performance of CNNs in relation to their architecture and to how they are evaluated: a single system for each subject, or one system for all subjects. In particular, we address the change in performance observed between tailoring a neural network to a single subject and training one network for a group of subjects, taking advantage of the larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can reach an AUC above 0.9 by using an appropriate architecture with spatial filtering and shift-invariant layers.
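The reported AUC can be computed from single-trial detector scores with the rank-sum formulation (the probability that a random positive trial outscores a random negative one, with ties counting half); the labels and scores below are hypothetical:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical single-trial ERP detector scores for six trials.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
value = auc(labels, scores)
```

AUC is threshold-free, which is why it suits comparing subject-specific and group-trained CNNs whose score distributions may sit at different operating points.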
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations under the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM').
Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols. Graphical Abstract .
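One of the standardized metrics named above, the Matthews Correlation Coefficient, is computed directly from the binary confusion matrix. A minimal pure-Python sketch (the function name and the zero-denominator convention are ours):

```python
def matthews_corrcoef(y_true, y_pred):
    """Matthews Correlation Coefficient for binary 0/1 labels.

    Ranges from -1 (total disagreement) through 0 (random) to +1 (perfect),
    and stays informative on imbalanced datasets such as bioactivity data.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    if denom == 0:
        return 0.0  # convention when any row/column of the confusion matrix is empty
    return (tp * tn - fp * fn) / denom
```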
Trans-species learning of cellular signaling systems with bimodal deep belief networks
Chen, Lujia; Cai, Chunhui; Chen, Vicky; Lu, Xinghua
2015-01-01
Motivation: Model organisms play critical roles in biomedical research of human diseases and drug development. An imperative task is to translate information/knowledge acquired from model organisms to humans. In this study, we address a trans-species learning problem: predicting human cell responses to diverse stimuli, based on the responses of rat cells treated with the same stimuli. Results: We hypothesized that rat and human cells share a common signal-encoding mechanism but employ different proteins to transmit signals, and we developed a bimodal deep belief network and a semi-restricted bimodal deep belief network to represent the common encoding mechanism and perform trans-species learning. These ‘deep learning’ models include hierarchically organized latent variables capable of capturing the statistical structures in the observed proteomic data in a distributed fashion. The results show that the models significantly outperform two current state-of-the-art classification algorithms. Our study demonstrated the potential of using deep hierarchical models to simulate cellular signaling systems. Availability and implementation: The software is available at the following URL: http://pubreview.dbmi.pitt.edu/TransSpeciesDeepLearning/. The data are available through SBV IMPROVER website, https://www.sbvimprover.com/challenge-2/overview, upon publication of the report by the organizers. Contact: xinghua@pitt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25995230
Towards Scalable Deep Learning via I/O Analysis and Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pumma, Sarunya; Si, Min; Feng, Wu-Chun
Deep learning systems have been growing in prominence as a way to automatically characterize objects, trends, and anomalies. Given the importance of deep learning systems, researchers have been investigating techniques to optimize such systems. An area of particular interest has been using large supercomputing systems to quickly generate effective deep learning networks: a phase often referred to as “training” of the deep learning neural network. As we scale existing deep learning frameworks—such as Caffe—on these large supercomputing systems, we notice that the parallelism can help improve the computation tremendously, leaving data I/O as the major bottleneck limiting the overall system scalability. In this paper, we first present a detailed analysis of the performance bottlenecks of Caffe on large supercomputing systems. Our analysis shows that the I/O subsystem of Caffe—LMDB—relies on memory-mapped I/O to access its database, which can be highly inefficient on large-scale systems because of its interaction with the process scheduling system and the network-based parallel filesystem. Based on this analysis, we then present LMDBIO, our optimized I/O plugin for Caffe that takes into account the data access pattern of Caffe in order to vastly improve I/O performance. Our experimental results show that LMDBIO can improve the overall execution time of Caffe by nearly 20-fold in some cases.
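The contrast between memory-mapped and explicit buffered I/O at the center of this analysis can be illustrated with Python's standard-library mmap module. This sketches the general mechanism only; it is not Caffe, LMDB, or LMDBIO code, and the file contents are ours:

```python
import mmap
import os
import tempfile

# Create a small stand-in "database" file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"record-" * 1024)

# Memory-mapped access: reads are serviced by page faults in the OS,
# which on a network-based parallel filesystem can serialize badly at scale.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        mapped_bytes = bytes(mm[:7])

# Explicit buffered read: the application controls request size and timing,
# which is the kind of access pattern an optimized I/O plugin can exploit.
with open(path, "rb") as f:
    buffered_bytes = f.read(7)

os.remove(path)
```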
Lee, Hyung-Chul; Ryu, Ho-Geol; Chung, Eun-Jin; Jung, Chul-Woo
2018-03-01
The discrepancy between predicted effect-site concentration and measured bispectral index is problematic during intravenous anesthesia with target-controlled infusion of propofol and remifentanil. We hypothesized that bispectral index during total intravenous anesthesia would be more accurately predicted by a deep learning approach. A long short-term memory network and a feed-forward neural network were sequenced to simulate the pharmacokinetic and pharmacodynamic parts of an empirical model, respectively, to predict intraoperative bispectral index during combined use of propofol and remifentanil. Inputs of the long short-term memory network were infusion histories of propofol and remifentanil, which were retrieved from target-controlled infusion pumps for 1,800 s at 10-s intervals. Inputs of the feed-forward network were the outputs of the long short-term memory network and demographic data such as age, sex, weight, and height. The final output of the feed-forward network was the bispectral index. The performance of bispectral index prediction was compared between the deep learning model and a previously reported response surface model. The model hyperparameters comprised 8 memory cells in the long short-term memory layer and 16 nodes in the hidden layer of the feed-forward network. The model training and testing were performed with separate data sets of 131 and 100 cases. The concordance correlation coefficient (95% CI) was 0.561 (0.560 to 0.562) in the deep learning model, which was significantly larger than that in the response surface model (0.265 [0.263 to 0.266], P < 0.001). The deep learning model predicted the bispectral index during target-controlled infusion of propofol and remifentanil more accurately than the traditional model. The deep learning approach in anesthetic pharmacology seems promising because of its excellent performance and extensibility.
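The concordance correlation coefficient used to compare the two models penalizes both poor correlation and systematic bias. A minimal NumPy sketch of Lin's formula (the function name is ours):

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between measured and
    predicted series: 2*cov(x, y) / (var(x) + var(y) + (mean difference)^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

Unlike the Pearson correlation, a constant offset between predicted and measured values lowers this score, which is why it suits agreement testing between a model and a monitor reading.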
A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.
Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi
2015-12-01
Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the building blocks of deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.
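The compromise sought between reconstruction error and sparsity is a Pareto trade-off, and the selection step of a multiobjective evolutionary algorithm reduces to a non-dominated filter. A minimal sketch with both objectives minimized (function names are ours):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (reconstruction_error, sparsity) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```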
Xu, Kele; Feng, Dawei; Mi, Haibo
2017-11-23
The automatic detection of diabetic retinopathy is of vital importance, as it is the main cause of irreversible vision loss in the working-age population in the developed world. The early detection of diabetic retinopathy occurrence can be very helpful for clinical treatment; although several different feature extraction approaches have been proposed, the classification task for retinal images is still tedious even for trained clinicians. Recently, deep convolutional neural networks have manifested superior performance in image classification compared to previous handcrafted feature-based image classification methods. Thus, in this paper, we explored the use of deep convolutional neural network methodology for the automatic classification of diabetic retinopathy using color fundus images, and obtained an accuracy of 94.5% on our dataset, outperforming the results obtained by using classical approaches.
Future Mission Trends and their Implications for the Deep Space Network
NASA Technical Reports Server (NTRS)
Abraham, Douglas S.
2006-01-01
Planning for the upgrade and/or replacement of Deep Space Network (DSN) assets that typically operate for forty or more years necessitates understanding potential customer needs as far into the future as possible. This paper describes the methodology Deep Space Network (DSN) planners use to develop this understanding, some key future mission trends that have emerged from application of this methodology, and the implications of the trends for the DSN's future evolution. For NASA's current plans out to 2030, these trends suggest the need to accommodate: three times as many communication links, downlink rates two orders of magnitude greater than today's, uplink rates some four orders of magnitude greater, and end-to-end link difficulties two-to-three orders of magnitude greater. To meet these challenges, both DSN capacity and capability will need to increase.
Interplanetary Overlay Network Bundle Protocol Implementation
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2011-01-01
The Interplanetary Overlay Network (ION) system's BP package, an implementation of the Delay-Tolerant Networking (DTN) Bundle Protocol (BP) and supporting services, has been specifically designed to be suitable for use on deep-space robotic vehicles. Although the ION BP implementation is unique in its use of zero-copy objects for high performance, and in its use of resource-sensitive rate control, it is fully interoperable with other implementations of the BP specification (Internet RFC 5050). The ION BP implementation is built using the same software infrastructure that underlies the implementation of the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP) built into the flight software of Deep Impact. It is designed to minimize resource consumption, while maximizing operational robustness. For example, no dynamic allocation of system memory is required. Like all the other ION packages, ION's BP implementation is designed to port readily between Linux and Solaris (for easy development and for ground system operations) and VxWorks (for flight systems operations). The exact same source code is exercised in both environments. Initially included in the ION BP implementation are the following: libraries of functions used in constructing bundle forwarders and convergence-layer (CL) input and output adapters; a simple prototype bundle forwarder and associated CL adapters designed to run over an IP-based local area network; administrative tools for managing a simple DTN infrastructure built from these components; a background daemon process that silently destroys bundles whose time-to-live intervals have expired; a library of functions exposed to applications, enabling them to issue and receive data encapsulated in DTN bundles; and some simple applications that can be used for system checkout and benchmarking.
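The time-to-live bookkeeping performed by the background daemon can be modeled as a simple filter over pending bundles. This is an illustrative sketch only, not ION source code; the tuple layout and names are ours:

```python
def purge_expired(bundles, now):
    """Keep only bundles whose time-to-live has not yet elapsed.

    Each bundle is modeled as (creation_time_s, ttl_s, payload); a daemon
    like ION's silently destroys the ones that fail this test.
    """
    return [b for b in bundles if b[0] + b[1] > now]
```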
Single image super-resolution based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia
2018-03-01
We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses five convolution layers with kernel sizes of 5×5, 3×3, and 1×1. In our proposed network, we use residual learning and combine different sizes of convolution kernels at the same layer. The experimental results show that our proposed method performs better than existing methods in reconstruction quality indices and human visual effect on benchmark images.
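Residual learning, as used above, means the network predicts only the detail that is added back onto its input. A toy single-channel sketch (helper names are ours, and a real layer learns its kernel rather than being handed one):

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2-D cross-correlation, the operation deep
    learning frameworks call 'convolution' (no kernel flip)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def residual_block(img, kernel):
    """Skip connection: the convolution only has to model the residual,
    so the block's output is input plus predicted detail."""
    return img + conv2d_same(img, kernel)
```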
NASA Astrophysics Data System (ADS)
Faccenna, C.; Funiciello, F.
2012-04-01
EC-Marie Curie Initial Training Networks (ITN) projects aim to improve the career perspectives of young generations of researchers. Institutions from both the academic and industry sectors form a collaborative network to recruit research fellows and provide them with opportunities to undertake research in the context of a joint research training program. In this framework, TOPOMOD - one of the training activities of EPOS, the new-born European Research Infrastructure for Geosciences - is a funded ITN project designed to investigate and model how surface processes interact with crustal tectonics and mantle convection to originate and develop the topography of the continents over a wide range of spatial and temporal scales. The multi-disciplinary approach combines geophysics, geochemistry, tectonics and structural geology with advanced geodynamic numerical/analog modelling. TOPOMOD involves 8 European research teams internationally recognized for their excellence in complementary fields of Earth Sciences (Roma TRE, Utrecht, GFZ, ETH, Cambridge, Durham, Rennes, Barcelona), to which are associated 5 research institutions (CNR-Italy, Univ. Parma, Univ. Lausanne, Univ. Montpellier, Univ. Mainz), 3 high-technology enterprises (Malvern Instruments, TNO, G.O. Logical Consulting) and 1 large multinational oil and gas company (ENI). This unique network places emphasis on experience-based training, increasing the impact and international visibility of European research in modeling. Long-term collaboration and synergy are established among the aforementioned research teams through 15 cross-disciplinary research projects that combine case studies in well-chosen target areas from the Mediterranean, the Middle and Far East, west Africa, and South America, with new developments in structural geology, geomorphology, seismology, geochemistry, InSAR, laboratory and numerical modelling of geological processes from the deep mantle to the surface.
Together, these multidisciplinary projects aim to answer a key question in the Earth sciences: how do deep and surface processes interact to shape and control the topographic evolution of our planet?
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
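The JPL model itself is defined by its roughly 50 calibrated questions, but the general shape of such parametric estimators can be illustrated with the classic basic COCOMO formula. The constants below are COCOMO's published organic-mode values, not JPL's, and the driver names are ours:

```python
def software_effort_pm(kloc, difficulty=1.0, environment=1.0):
    """Basic COCOMO effort estimate in person-months: a power law in
    delivered size (thousands of lines of code), scaled by multiplicative
    cost drivers standing in for task difficulty and development environment."""
    a, b = 2.4, 1.05  # basic COCOMO organic-mode constants
    return a * (kloc ** b) * difficulty * environment
```

Calibrating such a model against JPL software life-cycle statistics amounts to re-fitting a, b, and the driver multipliers.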
Xia, Peng; Hu, Jie; Peng, Yinghong
2017-10-25
A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs a deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model, which is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The deep-architecture model is promising for EMG decoding, and optimization of the network structure can increase accuracy and robustness.
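The transformation of a raw EMG channel into time-frequency frames can be sketched with a short-time Fourier transform. A minimal NumPy version; the frame length and hop are illustrative choices, not the paper's parameters:

```python
import numpy as np

def time_frequency_frames(signal, frame_len=64, hop=32):
    """Slice one EMG channel into overlapping Hann-windowed frames and
    return their magnitude spectra (n_frames x n_bins), the image-like
    input a CNN front end expects."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))
```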
NASA Astrophysics Data System (ADS)
Ladevèze, P.; Séjourné, S.; Rivard, C.; Lavoie, D.; Lefebvre, R.; Rouleau, A.
2018-03-01
In the St. Lawrence sedimentary platform (eastern Canada), very little data are available between shallow fresh water aquifers and deep geological hydrocarbon reservoir units (here referred to as the intermediate zone). Characterization of this intermediate zone is crucial, as the latter controls aquifer vulnerability to operations carried out at depth. In this paper, the natural fracture networks in shallow aquifers and in the Utica shale gas reservoir are documented in an attempt to indirectly characterize the intermediate zone. This study used structural data from outcrops, shallow observation well logs and deep shale gas well logs to propose a conceptual model of the natural fracture network. Shallow and deep fractures were categorized into three sets of steeply-dipping fractures and into a set of bedding-parallel fractures. Some lithological and structural controls on fracture distribution were identified. The regional geologic history and similarities between the shallow and deep fracture datasets allowed the extrapolation of the fracture network characterization to the intermediate zone. This study thus highlights the benefits of using both datasets simultaneously, while they are generally interpreted separately. Recommendations are also proposed for future environmental assessment studies in which the existence of preferential flow pathways and potential upward fluid migration toward shallow aquifers need to be identified.
Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition
Ordóñez, Francisco Javier; Roggen, Daniel
2016-01-01
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
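The raw multimodal input to such a framework is usually prepared by sliding-window segmentation. A minimal NumPy sketch; the window and step sizes are illustrative, not the paper's settings:

```python
import numpy as np

def sliding_windows(sensor_data, window=24, step=12):
    """Segment a (time x channels) recording from body-worn sensors into
    overlapping fixed-length windows, yielding the
    (n_windows x window x channels) tensor that convolutional and LSTM
    recurrent layers consume."""
    n = 1 + (sensor_data.shape[0] - window) // step
    return np.stack([sensor_data[i * step:i * step + window]
                     for i in range(n)])
```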
Simulation of noisy dynamical system by Deep Learning
NASA Astrophysics Data System (ADS)
Yeo, Kyongmin
2017-11-01
Deep learning has attracted huge attention due to its powerful representation capability. However, most studies on deep learning have focused on visual analytics or language modeling, and the capability of deep learning in modeling dynamical systems is not well understood. In this study, we use a recurrent neural network to model noisy nonlinear dynamical systems. In particular, we use a long short-term memory (LSTM) network, which constructs an internal nonlinear dynamical system. We propose a cross-entropy loss with spatial ridge regularization to learn a non-stationary conditional probability distribution from a noisy nonlinear dynamical system. A Monte Carlo procedure to perform time-marching simulations by using the LSTM is presented. The behavior of the LSTM is studied by using the noisy, forced Van der Pol oscillator and the Ikeda equation.
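The kind of noisy, forced Van der Pol data such a model is trained on can be generated with a simple Euler-Maruyama scheme. The parameter values below are illustrative, not the paper's:

```python
import numpy as np

def noisy_van_der_pol(mu=2.0, amp=0.5, sigma=0.1, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama integration of a periodically forced Van der Pol
    oscillator with additive white noise; returns the (x, v) trajectory."""
    rng = np.random.default_rng(seed)
    x, v = 1.0, 0.0
    traj = np.empty((steps, 2))
    for k in range(steps):
        drift_x = v
        drift_v = mu * (1.0 - x * x) * v - x + amp * np.cos(1.1 * k * dt)
        x = x + drift_x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        v = v + drift_v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        traj[k] = (x, v)
    return traj
```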
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
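Learning "traffic as images" starts from a two-dimensional time-space matrix. A minimal NumPy sketch of the conversion to a grayscale image (the min-max scaling convention is ours):

```python
import numpy as np

def to_time_space_image(speeds):
    """Map a (time x road segment) speed matrix onto 0-255 grayscale so a
    CNN can treat spatiotemporal traffic dynamics as an ordinary image."""
    s = np.asarray(speeds, dtype=float)
    lo, hi = s.min(), s.max()
    return np.round(255.0 * (s - lo) / (hi - lo)).astype(np.uint8)
```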
NASA Technical Reports Server (NTRS)
Sanders, Felicia A.; Jones, Grailing, Jr.; Levesque, Michael
2006-01-01
The CCSDS File Delivery Protocol (CFDP) Standard could reshape ground support architectures by enabling applications to communicate over the space link using reliable-symmetric transport services. JPL utilized the CFDP standard to support the Deep Impact Mission. The architecture was based on layering the CFDP applications on top of the CCSDS Space Link Extension Services for data transport from the mission control centers to the ground stations. On July 4, 2005 at 1:52 A.M. EDT, the Deep Impact impactor successfully collided with comet Tempel 1. During the final 48 hours prior to impact, over 300 files were uplinked to the spacecraft, while over 6 thousand files were downlinked from the spacecraft using the CFDP. This paper uses the Deep Impact Mission as a case study in a discussion of the CFDP architecture, Deep Impact Mission requirements, and design for integrating the CFDP into the JPL deep space support services. Issues and recommendations for future missions using CFDP are also provided.
Impacts of the transformation of the German energy system on the transmission grid
NASA Astrophysics Data System (ADS)
Pesch, T.; Allelein, H.-J.; Hake, J.-F.
2014-10-01
The German Energiewende, the transformation of the energy system, has deep impacts on all parts of the system. This paper presents an approach that has been developed to simultaneously analyse impacts on the energy system as a whole and on the electricity system in particular. In the analysis, special emphasis is placed on the transmission grid and the efficiency of recommended grid extensions according to the German Network Development Plan. The analysis reveals that the measures in the concept are basically suitable for integrating the assumed high share of renewables in the future electricity system. Whereas a high feed-in from PV will not cause problems in the transmission grid in 2022, congestion may occur in situations with a high proportion of wind feed-in. Moreover, future bottlenecks in the grid are located in the same regions as today.
Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality
NASA Astrophysics Data System (ADS)
Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.
2017-12-01
Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of the main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we have developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
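The overall workflow, running a physically-based model many times and fitting a data-driven surrogate to the resulting database, can be sketched in a few lines. The toy_runoff formula below is a stand-in of ours, not the HYDRUS-1D overland flow module, and a linear surrogate stands in for the neural models:

```python
import numpy as np

def toy_runoff(rain, slope, roughness):
    """Toy stand-in for a physically-based overland-flow model (ours):
    runoff grows with rainfall and slope, drops with surface roughness."""
    return rain * slope / (1.0 + roughness)

# Build a simulation "database", then fit a simple data-driven surrogate to it.
rng = np.random.default_rng(1)
X = rng.uniform(0.1, 1.0, size=(500, 3))        # rain, slope, roughness samples
y = toy_runoff(X[:, 0], X[:, 1], X[:, 2])       # physically-based outputs
design = np.column_stack([X, np.ones(len(X))])  # linear model with intercept
coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
surrogate = design @ coeffs                     # data-driven predictions
```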
ICADx: interpretable computer aided diagnosis of breast masses
NASA Astrophysics Data System (ADS)
Kim, Seong Tae; Lee, Hakmin; Kim, Hak Gu; Ro, Yong Man
2018-02-01
In this study, a novel computer aided diagnosis (CADx) framework is devised to investigate interpretability for classifying breast masses. Recently, deep learning technology has been successfully applied to medical image analysis, including CADx. Existing deep learning based CADx approaches, however, have a limitation in explaining the diagnostic decision. In real clinical practice, clinical decisions should be made with a reasonable explanation, so current deep learning approaches in CADx are limited for real-world deployment. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The proposed framework is devised with a generative adversarial network, which consists of an interpretable diagnosis network and a synthetic lesion generative network, to learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in adversarial learning so that the two networks are improved. The effectiveness of the proposed method was validated on a public mammogram database. Experimental results showed that the proposed ICADx framework could provide interpretability of masses as well as mass classification. This was mainly attributed to the fact that the proposed method was effectively trained to find the relationship between malignancy and interpretations via adversarial learning. These results imply that the proposed ICADx framework could be a promising approach to developing CADx systems.
Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification.
Sladojevic, Srdjan; Arsenovic, Marko; Anderla, Andras; Culibrk, Dubravko; Stefanovic, Darko
2016-01-01
The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification, by the use of deep convolutional networks. The novel way of training and the methodology used facilitate a quick and easy system implementation in practice. The developed model is able to recognize 13 different types of plant diseases and distinguish them from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All essential steps required for implementing this disease recognition model are fully described throughout the paper, starting from gathering images in order to create a database assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved a precision between 91% and 98% for separate class tests, with an average of 96.3%.
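The per-class precision figures reported above follow from a simple count over predictions. A pure-Python sketch (the function name and class labels are ours):

```python
def per_class_precision(y_true, y_pred, classes):
    """Precision for each class c: of the leaves predicted as c, the
    fraction that truly are c (0.0 if c is never predicted)."""
    result = {}
    for c in classes:
        hits = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        predicted = sum(1 for p in y_pred if p == c)
        result[c] = hits / predicted if predicted else 0.0
    return result
```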
Deep greedy learning under thermal variability in full diurnal cycles
NASA Astrophysics Data System (ADS)
Rauss, Patrick; Rosario, Dalton
2017-08-01
We study the generalization and scalability behavior of a deep belief network (DBN) applied to a challenging long-wave infrared hyperspectral dataset, consisting of radiance from several manmade and natural materials within a fixed site located 500 m from an observation tower. The collections cover multiple full diurnal cycles and include different atmospheric conditions. Using complementary priors, a DBN can be trained with a greedy algorithm that learns deep, directed belief networks one layer at a time, with the top two layers forming an undirected associative memory. The greedy algorithm initializes a slower learning procedure, which fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of spectral data and their labels, despite significant data variability between and within classes due to environmental and temperature variation occurring within and between full diurnal cycles. We argue, however, that more questions than answers are raised regarding the generalization capacity of these deep nets through experiments aimed at investigating their training and augmented learning behavior.
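The greedy, one-layer-at-a-time training described above can be sketched as stacking small restricted Boltzmann machines, each trained with one-step contrastive divergence on the previous layer's hidden activities. The sizes, learning rate, and omission of bias terms below are simplifications for illustration, not the paper's settings.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Tiny restricted Boltzmann machine trained with CD-1 (no bias terms)."""
    def __init__(self, n_vis, n_hid, seed=0):
        rng = random.Random(seed)
        self.n_vis, self.n_hid = n_vis, n_hid
        self.w = [[rng.gauss(0.0, 0.1) for _ in range(n_hid)] for _ in range(n_vis)]

    def hidden_probs(self, v):
        return [sigmoid(sum(v[i] * self.w[i][j] for i in range(self.n_vis)))
                for j in range(self.n_hid)]

    def visible_probs(self, h):
        return [sigmoid(sum(h[j] * self.w[i][j] for j in range(self.n_hid)))
                for i in range(self.n_vis)]

    def cd1_step(self, v0, lr=0.1):
        # One step of contrastive divergence: up, down, up again.
        h0 = self.hidden_probs(v0)
        v1 = self.visible_probs(h0)
        h1 = self.hidden_probs(v1)
        for i in range(self.n_vis):
            for j in range(self.n_hid):
                self.w[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])

def greedy_pretrain(data, layer_sizes, epochs=2):
    """Train one RBM per layer; each layer's hidden probabilities feed the next."""
    rbms, layer_input = [], data
    for n_vis, n_hid in zip(layer_sizes, layer_sizes[1:]):
        rbm = RBM(n_vis, n_hid)
        for _ in range(epochs):
            for v in layer_input:
                rbm.cd1_step(v)
        layer_input = [rbm.hidden_probs(v) for v in layer_input]
        rbms.append(rbm)
    return rbms
```

In the full scheme this greedy stack would then be fine-tuned (e.g. with a contrastive wake-sleep procedure), which the sketch omits.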
NASA Astrophysics Data System (ADS)
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBMs), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM) network, to learn soil moisture dynamics from the Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into the LSTM, the network is better able to generalize across regions, and that the LSTM utilizes PBM solutions better than simpler statistical methods do. Our results suggest that PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and achieves superior testing performance compared to simpler methods.
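One simple way to integrate PBM solutions into an LSTM, in the spirit of the abstract, is to append the process-based model's output as an extra input feature at each time step. The feature layout below is an illustrative assumption, not the paper's exact configuration.

```python
def build_inputs(forcings, pbm_solution):
    """Append the PBM's soil-moisture solution to each time step's forcing
    vector before feeding the sequence to the LSTM (layout illustrative)."""
    return [list(f) + [s] for f, s in zip(forcings, pbm_solution)]

# Hypothetical daily forcings [precipitation, temperature] plus a PBM estimate
sequence = build_inputs([[2.0, 15.0], [0.0, 18.0]], [0.31, 0.28])
```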
NASA Astrophysics Data System (ADS)
Zucker, Shay; Giryes, Raja
2018-04-01
Transits of habitable planets around solar-like stars are expected to be shallow and to have long periods, which means low information content. The current bottleneck in the detection of such transits is caused in large part by the presence of red (correlated) noise in the light curves obtained from the dedicated space telescopes. Based on the groundbreaking results deep learning achieves in many signal and image processing applications, we propose to use deep neural networks to solve this problem. We present a feasibility study in which we applied a convolutional neural network to a simulated training set. The training set comprised light curves received from a hypothetical high-cadence space-based telescope. We simulated the red noise by using Gaussian Processes with a wide variety of hyper-parameters. We then tested the network on a completely different test set simulated in the same way. Our study shows that very difficult cases can indeed be detected. Furthermore, we show how detection trends can be studied and detection biases quantified. We have also checked the robustness of the neural-network performance against practical artifacts such as outliers and discontinuities, which are known to affect space-based high-cadence light curves. Future work will allow us to use the neural networks to characterize the transit model and identify individual transits. This new approach will certainly be an indispensable tool for the detection of habitable planets in future planet-detection space missions such as PLATO.
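A minimal stand-in for the kind of simulated data described above: the paper draws its red noise from Gaussian Processes, but a simple AR(1) process shows the same qualitative effect of correlated noise around a box-shaped transit. All parameters below are illustrative.

```python
import random

def red_noise(n, rho=0.9, sigma=1.0, seed=1):
    """AR(1) correlated ('red') noise - a simpler proxy for the paper's
    Gaussian-Process model; rho controls the correlation strength."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def inject_transit(flux, start, duration, depth):
    """Subtract a box-shaped transit of the given depth from a light curve."""
    return [f - depth if start <= i < start + duration else f
            for i, f in enumerate(flux)]
```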
Recurrent neural networks for breast lesion classification based on DCE-MRIs
NASA Astrophysics Data System (ADS)
Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen
2018-02-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated in image-based breast cancer diagnosis and prognosis. However, most of the current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric in conducting that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network, to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
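The multi-level feature extraction described above can be sketched as pooling each chosen network level's feature maps and concatenating the results into one vector per time point; the LSTM then consumes the sequence of such vectors. The pooling choice (averaging) and the tiny maps below are illustrative assumptions.

```python
def multilevel_features(feature_maps):
    """Average-pool each level's channels and concatenate, mimicking the
    low-/mid-/high-level features collected per DCE-MRI time point.
    feature_maps: list of levels; each level is a list of 2-D channels."""
    feats = []
    for level in feature_maps:
        feats.extend(sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                     for ch in level)
    return feats
```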
Computational ghost imaging using deep learning
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi
2018-04-01
Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
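The conventional CGI reconstruction that the network then denoises correlates the known random patterns with the bucket readings, G(x, y) = <B·I(x, y)> - <B><I(x, y)>. The 2x2 demo object below is purely illustrative.

```python
import random

def ghost_reconstruct(patterns, bucket):
    """Standard CGI estimate: correlate each pattern with the (mean-removed)
    bucket-detector reading and average over realizations."""
    n = len(patterns)
    h, w = len(patterns[0]), len(patterns[0][0])
    mean_b = sum(bucket) / n
    img = [[0.0] * w for _ in range(h)]
    for p, b in zip(patterns, bucket):
        for y in range(h):
            for x in range(w):
                img[y][x] += (b - mean_b) * p[y][x] / n
    return img

# Demo: recover a 2x2 'object' with one bright pixel from bucket readings
rng = random.Random(0)
obj = [[1.0, 0.0], [0.0, 0.0]]
patterns = [[[rng.random() for _ in range(2)] for _ in range(2)]
            for _ in range(2000)]
bucket = [sum(p[y][x] * obj[y][x] for y in range(2) for x in range(2))
          for p in patterns]
img = ghost_reconstruct(patterns, bucket)
```

The reconstruction is noisy for a finite number of patterns, which is exactly the noise the paper's deep network learns to remove.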
NASA deep space network operations planning and preparation
NASA Technical Reports Server (NTRS)
Jensen, W. N.
1982-01-01
The responsibilities and structural organization of the Operations Planning Group of NASA Deep Space Network (DSN) Operations are outlined. The Operations Planning group establishes an early interface with a user's planning organization to educate the user on DSN capabilities and limitations for deep space tracking support. A team of one or two individuals works through all phases of the spacecraft launch and also provides planning and preparation for specific events such as planetary encounters. Coordinating interface is also provided for nonflight projects such as radio astronomy and VLBI experiments. The group is divided into a Long Range Support Planning element and a Near Term Operations Coordination element.
The Deep Space Network as an instrument for radio science research
NASA Technical Reports Server (NTRS)
Asmar, S. W.; Renzetti, N. A.
1993-01-01
Radio science experiments use radio links between spacecraft and sensor instrumentation that is implemented in the Deep Space Network. The deep space communication complexes along with the telecommunications subsystem on board the spacecraft constitute the major elements of the radio science instrumentation. Investigators examine small changes in the phase and/or amplitude of the radio signal propagating from a spacecraft to study the atmospheric and ionospheric structure of planets and satellites, planetary gravitational fields, shapes, masses, planetary rings, ephemerides of planets, solar corona, magnetic fields, cometary comae, and such aspects of the theory of general relativity as gravitational waves and gravitational redshift.
Fawzy, Amr S
2010-01-01
The aim was to characterize variations in the structure and surface dehydration of the acid-demineralized intertubular dentin collagen network with variations in dentin depth and air-exposure time (3, 6, 9 and 12 min), and to study the effect of these variations on the tensile bond strength (TBS) to dentin. Phosphoric acid demineralized superficial and deep dentin specimens were prepared. The structure of the dentin collagen network was characterized by AFM. Surface dehydration was characterized by probing the nano-scale adhesion force (F(ad)) between the AFM tip and the intertubular dentin surface as a new experimental approach. The TBS to dentin was evaluated using an alcohol-based dentin self-priming adhesive. AFM images revealed a demineralized open collagen network structure in both superficial and deep dentin at 3 and 6 min of air-exposure. At 9 min, however, superficial dentin showed a more collapsed network structure compared to deep dentin, which partially preserved the open network structure. A totally collapsed structure was found at 12 min for both superficial and deep dentin. The value of F(ad) decreased with increasing air-exposure time and increased with dentin depth at the same air-exposure time. The TBS was higher for superficial dentin at 3 and 6 min; however, no difference was found at 9 and 12 min. The ability of the demineralized dentin collagen network to resist air-dehydration and preserve the integrity of the open network structure with increasing air-exposure time increases with dentin depth. Although superficial dentin achieves higher bond strength values, the difference in bond strength decreases with increasing air-exposure time. The AFM-probed F(ad) proved to be a sensitive approach for characterizing surface dehydration; however, further research is recommended regarding the validity of this approach.
Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists
Testolin, Alberto; Stoianov, Ivilin; De Filippo De Grazia, Michele; Zorzi, Marco
2013-01-01
Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and it can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programing parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low cost graphic cards (graphic processor units) without any specific programing effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphic card can outperform a small high-performance computing cluster in terms of learning time and with no loss of learning quality. We therefore conclude that graphic card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior. PMID:23653617
Artificial intelligence for analyzing orthopedic trauma radiographs.
Olczak, Jakub; Fahlberg, Niklas; Maki, Atsuto; Razavian, Ali Sharif; Jilert, Anthony; Stark, André; Sköldenberg, Olof; Gordon, Max
2017-12-01
Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd's Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with that of 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best performing network. The network performed similarly to the senior orthopedic surgeons when presented with images at the same resolution as the network. Cohen's kappa for the 2 reviewers under these conditions was 0.76. Interpretation - This study supports the use of artificial intelligence for orthopedic radiographs, where it can perform at a human level. While the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.
PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, Zhouping
2017-12-01
Velocity estimation (extracting the displacement vector information) from the particle image pairs is of critical importance for particle image velocimetry. This problem is mostly transformed into finding the sub-pixel peak in a correlation map. To address the original displacement extraction problem, we propose a different evaluation scheme (PIV-DCNN) with four-level regression deep convolutional neural networks. At each level, the networks are trained to predict a vector from two input image patches. The low-level network is skilled at large displacement estimation and the high-level networks are devoted to improving the accuracy. Outlier replacement and symmetric window offset operation glue the well-functioning networks in a cascaded manner. Through comparison with the standard PIV methods (one-pass cross-correlation method, three-pass window deformation), the practicability of the proposed PIV-DCNN is verified by the application to a diversity of synthetic and experimental PIV images.
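The coarse-to-fine behaviour of the four-level cascade can be illustrated with a toy scalar version in which each stage corrects the running estimate at its own resolution; the resolutions and the noise-free setting are illustrative assumptions, not the paper's trained networks.

```python
def cascade_estimate(true_disp, levels=(8, 4, 2, 1)):
    """Toy stand-in for the four-level cascade: each stage estimates the
    remaining residual at its own resolution and refines the running
    estimate, mimicking coarse-to-fine displacement regression."""
    est = 0.0
    for res in levels:
        residual = true_disp - est       # what this stage still has to explain
        est += round(residual / res) * res  # correction quantized to the stage resolution
    return est
```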
Reynolds averaged turbulence modelling using deep neural networks with embedded invariance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy
2016-10-18
There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
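The multiplicative layer described above forms the anisotropy prediction as a learned combination of invariant basis tensors, b = sum_n g_n T(n); because each basis tensor is Galilean invariant, so is the combination. The 2x2 basis below is purely illustrative (the actual model uses a larger invariant tensor basis).

```python
def anisotropy(coeffs, tensor_basis):
    """Multiplicative output layer of a tensor-basis network: combine basis
    tensors T(n) with network-predicted scalar coefficients g_n."""
    dim = len(tensor_basis[0])
    b = [[0.0] * dim for _ in range(dim)]
    for g, t in zip(coeffs, tensor_basis):
        for i in range(dim):
            for j in range(dim):
                b[i][j] += g * t[i][j]
    return b
```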
NASA Technical Reports Server (NTRS)
2005-01-01
Sixty-nine days before it gets up-close-and-personal with a comet, NASA's Deep Impact spacecraft successfully photographed its quarry, comet Tempel 1, at a distance of 39.7 million miles. The image, taken on April 25, 2005, is the first of many comet portraits Deep Impact will take leading up to its historic comet encounter on July 4.
Automating Deep Space Network scheduling and conflict resolution
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Clement, Bradley
2005-01-01
The Deep Space Network (DSN) is a central part of NASA's infrastructure for communicating with active space missions, from earth orbit to beyond the solar system. We describe our recent work in modeling the complexities of user requirements, and then scheduling and resolving conflicts on that basis. We emphasize our innovative use of background 'intelligent assistants' that carry out search asynchronously while the user is focusing on various aspects of the schedule.
NASA Technical Reports Server (NTRS)
1975-01-01
Formalized technical reporting is described and indexed, which resulted from scientific and engineering work performed, or managed, by the Jet Propulsion Laboratory. The five classes of publications included are technical reports, technical memorandums, articles from the bimonthly Deep Space Network Progress Report, special publications, and articles published in the open literature. The publications are indexed by author, subject, and publication type and number.
Summary of DSN (Deep Space Network) reimbursable launch support
NASA Technical Reports Server (NTRS)
Fanelli, N. A.; Wyatt, M. E.
1988-01-01
The Deep Space Network is providing ground support to space agencies of foreign governments as well as to NASA and other agencies of the Federal government which are involved in space activities. DSN funding for support of missions other than NASA are on either a cooperative or a reimbursable basis. Cooperative funding and support are accomplished in the same manner as NASA sponsored missions. Reimbursable launch funding and support methods are described.
Two Stage Data Augmentation for Low Resourced Speech Recognition (Author’s Manuscript)
2016-09-12
Keywords: speech recognition, deep neural networks, data augmentation
Boosting Contextual Information for Deep Neural Network Based Voice Activity Detection
2015-02-01
The method uses multi-resolution stacking (MRS), a stack of ensemble classifiers in which each classifier in a building block takes a concatenation of predictions as input. A base classifier in MRS, named boosted deep neural network (bDNN), first generates multiple base predictions from different contexts of a single frame using only one DNN, and then aggregates the base predictions into a better prediction for the frame.
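The bDNN aggregation step can be sketched as combining the base predictions a frame receives from its neighboring contexts; the simple mean over a symmetric window below is an illustrative stand-in for the paper's aggregation rule.

```python
def bdnn_aggregate(frame_scores, context=2):
    """Aggregate per-frame base predictions over a +/-context window into a
    single smoothed voice-activity score per frame (mean used for illustration)."""
    n = len(frame_scores)
    out = []
    for t in range(n):
        lo, hi = max(0, t - context), min(n, t + context + 1)
        out.append(sum(frame_scores[lo:hi]) / (hi - lo))
    return out
```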
Building on prior knowledge without building it in.
Hansen, Steven S; Lampinen, Andrew K; Suri, Gaurav; McClelland, James L
2017-01-01
Lake et al. propose that people rely on "start-up software," "causal models," and "intuitive theories" built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.
Alexnet Feature Extraction and Multi-Kernel Learning for Objectoriented Classification
NASA Astrophysics Data System (ADS)
Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.
2018-04-01
In view of the fact that deep convolutional neural networks have a strong ability for feature learning and feature expression, exploratory research was done on feature extraction and classification for high resolution remote sensing images. Taking Google imagery with 0.3 meter spatial resolution of the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic unit, and the pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features, and GLCM texture features were then combined using multi-kernel learning with an SVM classifier, and finally the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, and they provide a reference for earthquake disaster investigation and remote sensing disaster evaluation.
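Multi-kernel learning as described above combines one kernel per feature group (spectral, AlexNet, GLCM) into a single kernel for the SVM. A weighted sum with fixed weights is the simplest sketch; in practice the weights are learned, and the names below are illustrative.

```python
def combined_kernel(k_spectral, k_alexnet, k_glcm, weights):
    """Weighted sum of per-feature-group kernel matrices, the basic form of
    multi-kernel learning (weights fixed here for illustration)."""
    w1, w2, w3 = weights
    n = len(k_spectral)
    return [[w1 * k_spectral[i][j] + w2 * k_alexnet[i][j] + w3 * k_glcm[i][j]
             for j in range(n)] for i in range(n)]
```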
Ship detection in optical remote sensing images based on deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Zhao, Danpei; Cai, Bowen
2017-10-01
Automatic ship detection in optical remote sensing images has attracted wide attention for its broad applications. Major challenges for this task include the interference of cloud, wave, wake, and the high computational expenses. We propose a fast and robust ship detection algorithm to solve these issues. The framework for ship detection is designed based on deep convolutional neural networks (CNNs), which provide the accurate locations of ship targets in an efficient way. First, the deep CNN is designed to extract features. Then, a region proposal network (RPN) is applied to discriminate ship targets and regress the detection bounding boxes, in which the anchors are designed by intrinsic shape of ship targets. Experimental results on numerous panchromatic images demonstrate that, in comparison with other state-of-the-art ship detection methods, our method is more efficient and achieves higher detection accuracy and more precise bounding boxes in different complex backgrounds.
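Designing RPN anchors by the intrinsic shape of ship targets means generating elongated boxes rather than near-square ones; the scales and aspect ratios below are illustrative assumptions, not the paper's values.

```python
import math

def gen_anchors(cx, cy, scales=(16, 32), ratios=(1 / 3, 1 / 5)):
    """Anchors elongated to match ship-like targets: each anchor is
    (x1, y1, x2, y2) centered on (cx, cy), with area scale**2 and
    height/width ratio r < 1 (wide, thin boxes)."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s / math.sqrt(r)  # long axis
            h = s * math.sqrt(r)  # short axis
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes
```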
NASA Astrophysics Data System (ADS)
Bellotti, A.; Steffes, P. G.
2016-12-01
The Juno Microwave Radiometer (MWR) has six channels with wavelengths ranging from 1.36 to 50 cm and the ability to peer deep into the Jovian atmosphere. An Artificial Neural Network algorithm has been developed to rapidly perform inversion for the deep abundance of ammonia, the deep abundance of water vapor, and atmospheric "stretch" (a parameter that reflects the deviation from a wet adiabat in the higher atmosphere). This algorithm is "trained" using simulated emissions at the six wavelengths computed with the Juno atmospheric microwave radiative transfer (JAMRT) model presented by Oyafuso et al. (this meeting). By exploiting the emission measurements conducted at six wavelengths and at various incident angles, the neural network can provide preliminary results to a useful precision hundreds of times faster than conventional methods. This can quickly provide important insights into the variability and structure of the Jovian atmosphere.
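A trivial stand-in for the trained inversion network: search a table of forward-model simulations for the parameter set whose simulated emission best matches the observation. The network effectively learns a fast, continuous version of this mapping; all numbers below are illustrative (and only two channels are shown for brevity).

```python
def invert(observed, simulations):
    """Pick the atmospheric parameters whose simulated multi-channel emission
    best matches the observed brightness temperatures (least squares)."""
    best, best_err = None, float("inf")
    for params, emission in simulations:
        err = sum((o - e) ** 2 for o, e in zip(observed, emission))
        if err < best_err:
            best, best_err = params, err
    return best

# Hypothetical table: (deep NH3 abundance in ppm,) -> simulated emission per channel
sims = [((300.0,), [100.0, 110.0]), ((500.0,), [120.0, 130.0])]
```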
Operation's Concept for Array-Based Deep Space Network
NASA Technical Reports Server (NTRS)
Bagri, Durgadas S.; Statman, Joseph I.; Gatti, Mark S.
2005-01-01
The Array-based Deep Space Network (DSN-Array) will provide more than a 10(exp 3) times increase in the downlink/telemetry capability of the Deep Space Network (DSN). The key function of the DSN-Array is to provide cost-effective, robust Telemetry, Tracking and Command (TT&C) services to the space missions of NASA and its international partners. It provides an expanded approach to the use of an array-based system: instead of using the array as an element in the existing DSN, relying to a large extent on the DSN infrastructure, we explore a broader departure from the current DSN, using fewer elements of the existing DSN and establishing a more modern Concept of Operations. This paper gives the architecture of the DSN-Array and its operations philosophy. It also describes the customer's view of operations, operations management, and logistics, including maintenance philosophy, anomaly analysis, and reporting.
Deep Unfolding for Topic Models.
Chien, Jen-Tzung; Lee, Chao-Hsi
2018-02-01
Deep unfolding provides an approach to integrating probabilistic generative models and deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, the unsupervised and supervised topic models are inferred via the variational inference algorithm, where the model parameters are estimated by maximizing the lower bound of the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and the tied model parameters across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to the exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.
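One unfolded inference layer with an exponentiated update can be sketched as a multiplicative step on the topic parameters followed by renormalization; the exact update form in the paper differs, so this is only an illustrative shape of such a layer.

```python
import math

def exponentiated_update(topic_weights, gradients, eta=0.1):
    """One unfolded inference layer: multiplicative (exponentiated) update of
    the topic parameters, then renormalize to keep a valid distribution.
    The step size eta and the gradient source are illustrative."""
    updated = [w * math.exp(eta * g) for w, g in zip(topic_weights, gradients)]
    z = sum(updated)
    return [u / z for u in updated]
```

Stacking several such layers, each with its own untied parameters, and training them end-to-end by back-propagation is the basic deep-unfolding recipe the abstract describes.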
Predicting healthcare trajectories from medical records: A deep learning approach.
Pham, Trang; Tran, Truyen; Phung, Dinh; Venkatesh, Svetha
2017-05-01
Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy.
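Moderating forgetting for irregularly timed records can be sketched as decaying the recurrent memory according to the elapsed time between visits; the half-life form and value below are illustrative assumptions, not DeepCare's fitted parameterization.

```python
def decay_memory(cell_state, dt, half_life=90.0):
    """Scale an LSTM-style cell state by a time-dependent forgetting factor:
    after one half-life (here 90 days, illustrative) the memory halves."""
    factor = 0.5 ** (dt / half_life)
    return [c * factor for c in cell_state]
```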
DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.
Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou
2016-07-07
In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean averaged precision obtained by RCNN [16], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.
Self-Paced Prioritized Curriculum Learning With Coverage Penalty in Deep Reinforcement Learning.
Ren, Zhipeng; Dong, Daoyi; Li, Huaxiong; Chen, Chunlin
2018-06-01
In this paper, a new training paradigm is proposed for deep reinforcement learning using self-paced prioritized curriculum learning with coverage penalty. The proposed deep curriculum reinforcement learning (DCRL) takes full advantage of experience replay by adaptively selecting appropriate transitions from replay memory based on the complexity of each transition. The criteria of complexity in DCRL consist of self-paced priority as well as coverage penalty. The self-paced priority reflects the relationship between the temporal-difference error and the difficulty of the current curriculum, for sample efficiency. The coverage penalty is taken into account for sample diversity. The DCRL algorithm is evaluated on Atari 2600 games in comparison with the deep Q network (DQN) and prioritized experience replay (PER) methods, and the experimental results show that DCRL outperforms DQN and PER on most of these games. Further results show that the proposed curriculum training paradigm of DCRL is also applicable and effective for other memory-based deep reinforcement learning approaches, such as double DQN and dueling network. All the experimental results demonstrate that DCRL can achieve improved training efficiency and robustness for deep reinforcement learning.
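A minimal sketch of how such a sampling distribution might combine the two criteria, assuming illustrative functional forms (a Gaussian self-paced weight around a curriculum level and a hyperbolic coverage discount) rather than the paper's exact ones:

```python
import numpy as np

def dcrl_probs(td_errors, replay_counts, lam=1.0, beta=0.5):
    """Sampling distribution sketch for DCRL-style prioritized replay.
    The self-paced priority favours transitions whose |TD error| is
    close to the current curriculum level `lam`; the coverage penalty
    discounts transitions that have already been replayed often."""
    difficulty = np.abs(td_errors)
    self_paced = np.exp(-(difficulty - lam) ** 2)   # peak at curriculum level
    coverage = 1.0 / (1.0 + beta * replay_counts)   # penalize over-replayed samples
    p = self_paced * coverage
    return p / p.sum()
```

As training progresses, raising `lam` shifts the sampling focus toward harder transitions, which is the curriculum aspect; the coverage term keeps the replayed set diverse.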
NASA Astrophysics Data System (ADS)
Pratt, K.; Fellowes, J.; Giovannelli, D.; Stagno, V.
2016-12-01
Building a network of collaborators and colleagues is a key professional development activity for early career scientists (ECS) dealing with a challenging job market. At large conferences, young scientists often focus on interacting with senior researchers, competing for a small number of positions in leading laboratories. However, building a strong, international network amongst their peers in related disciplines is often as valuable in the long run. The Deep Carbon Observatory (DCO) began funding a series of workshops in 2014 designed to connect early career researchers within its extensive network of multidisciplinary scientists. The workshops, by design, are by and for early career scientists, thus removing any element of competition and focusing on peer-to-peer networking, collaboration, and creativity. The successful workshops, organized by committees of early career deep carbon scientists, have nucleated a lively community of like-minded individuals from around the world. Indeed, the organizers themselves often benefit greatly from the leadership experience of pulling together an international workshop on budget and on deadline. We have found that a combination of presentations from all participants in classroom sessions, professional development training such as communication and data management, and field-based relationship building and networking is a recipe for success. Small groups within the DCO ECS network have formed, publishing papers together, forging new research directions, and planning novel and ambitious field campaigns. Many DCO ECS have also come together to convene sessions at major international conferences, including the AGU Fall Meeting. Most of all, there is a broad sense of camaraderie and accessibility within the DCO ECS Community, providing the foundation for a career in the new, international, and interdisciplinary field of deep carbon science.
The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Barry Y.
The proliferation of inexpensive sensor technologies like the ubiquitous digital image sensors has resulted in the collection and sharing of vast amounts of unsorted and unexploited raw data. Companies and governments who are able to collect and make sense of large datasets to help them make better decisions more rapidly will have a competitive advantage in the information era. Machine Learning technologies play a critical role for automating the data understanding process; however, to be maximally effective, useful intermediate representations of the data are required. These representations or “features” are transformations of the raw data into a form where patterns are more easily recognized. Recent breakthroughs in Deep Learning have made it possible to learn these features from large amounts of labeled data. The focus of this project is to develop and extend Deep Learning algorithms for learning features from vast amounts of unlabeled data and to develop the HPC neural network training platform to support the training of massive network models. This LDRD project succeeded in developing new unsupervised feature learning algorithms for images and video and created a scalable neural network training toolkit for HPC. Additionally, this LDRD helped create the world’s largest freely-available image and video dataset supporting open multimedia research and used this dataset for training our deep neural networks. This research helped LLNL capture several work-for-others (WFO) projects, attract new talent, and establish collaborations with leading academic and commercial partners. Finally, this project demonstrated the successful training of the largest unsupervised image neural network using HPC resources and helped establish LLNL leadership at the intersection of Machine Learning and HPC research.
Avsec, Žiga; Cheng, Jun; Gagneur, Julien
2018-01-01
Abstract Motivation Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact avsec@in.tum.de or gagneur@in.tum.de Supplementary information Supplementary data are available at Bioinformatics online. PMID:29155928
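The core operation described above, projecting a scalar distance onto a B-spline basis and taking a learned linear combination, can be sketched in NumPy; the knot layout and the name `weights` are our illustrative choices, not the CONCISE layer's API:

```python
import numpy as np

def bspline_basis(x, knots, degree=3):
    """Evaluate all B-spline basis functions of `degree` at points `x`
    via the Cox-de Boor recursion, for a clamped knot vector."""
    x = np.asarray(x, dtype=float)
    B = np.array([(knots[i] <= x) & (x < knots[i + 1])
                  for i in range(len(knots) - 1)], dtype=float)
    for d in range(1, degree + 1):
        nxt = []
        for i in range(len(knots) - d - 1):
            left = ((x - knots[i]) / (knots[i + d] - knots[i]) * B[i]
                    if knots[i + d] > knots[i] else 0.0 * B[i])
            right = ((knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[i + 1]
                     if knots[i + d + 1] > knots[i + 1] else 0.0 * B[i + 1])
            nxt.append(left + right)
        B = np.array(nxt)
    return B.T  # shape: (len(x), n_basis)

def spline_transform(dist, knots, weights, degree=3):
    """Spline transformation of a distance feature (sketch of the idea
    behind the CONCISE spline layer): project the scalar distance onto
    a smooth B-spline basis and take a learned linear combination."""
    return bspline_basis(dist, knots, degree) @ weights
```

Within the clamped domain the basis functions form a partition of unity, so the learned transformation varies smoothly with distance, which is the robustness advantage over a piecewise linear (ReLU-composed) transformation.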
2010-08-25
The giant, 70-meter-wide antenna at NASA's Deep Space Network complex in Goldstone, Calif., tracks a spacecraft on Nov. 17, 2009. This antenna, officially known as Deep Space Station 14, is also nicknamed the Mars antenna.
Airplane detection in remote sensing images using convolutional neural networks
NASA Astrophysics Data System (ADS)
Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei
2018-03-01
Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show greater advantages than traditional methods, and we give an explanation why this happens. To improve airplane detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one class, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.
Convolutional neural network for road extraction
NASA Astrophysics Data System (ADS)
Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong
2017-11-01
In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To capture the complex road characteristics in the study area, a deep convolutional neural network based on VGG19 was employed for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes and the resulting extraction quality, the votes of multiple deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. At the same time, this paper gives some advice on the choice of input and output block sizes.
Lee, Jae-Hong; Kim, Do-Hyung; Jeong, Seong-Nyum; Choi, Seong-Ho
2018-04-01
The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Combining pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. We demonstrated that the deep CNN algorithm was useful for assessing the diagnosis and predictability of PCT. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
Yao, Xiaohui; Yan, Jingwen; Ginda, Michael; Börner, Katy; Saykin, Andrew J; Shen, Li
2017-01-01
Alzheimer's disease neuroimaging initiative (ADNI) is a landmark imaging and omics study in AD. ADNI research literature has increased substantially over the past decade, which poses challenges for effectively communicating information about the results and impact of ADNI-related studies. In this work, we employed advanced information visualization techniques to perform a comprehensive and systematic mapping of the ADNI scientific growth and impact over a period of 12 years. Citation information of ADNI-related publications from 01/01/2003 to 05/12/2015 was downloaded from the Scopus database. Five fields, including authors, years, affiliations, sources (journals), and keywords, were extracted and preprocessed. Statistical analyses were performed on basic publication data as well as journal and citation information. Science mapping workflows were conducted using the Science of Science (Sci2) Tool to generate geospatial, topical, and collaboration visualizations at the micro (individual) to macro (global) levels, such as geospatial layouts of institutional collaboration networks, keyword co-occurrence networks, and author collaboration networks evolving over time. During the studied period, 996 ADNI manuscripts were published across 233 journals and conference proceedings. The number of publications grew linearly from 2008 to 2015, as did the number of involved institutions. ADNI publications received many more citations than typical papers from the same set of journals. Collaborations were visualized at multiple levels, including authors, institutions, and research areas. The evolution of key ADNI research topics was also plotted over the studied period. Both statistical and visualization results demonstrate the increasing attention paid to ADNI research, the strong citation impact of ADNI publications, the expanding collaboration networks among researchers, institutions and ADNI core areas, and the dynamic evolution of ADNI research topics.
The visualizations presented here can help improve daily decision making based on a deep understanding of existing patterns and trends using proven and replicable data analysis and visualization methods. They have great potential to provide new insights and actionable knowledge for helping translational research in AD.
Representational Distance Learning for Deep Neural Networks
McClure, Patrick; Kriegeskorte, Nikolaus
2016-01-01
Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains. PMID:28082889
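The RDM and the auxiliary loss that RDL minimizes can be sketched as follows, using correlation distance between activation patterns (one plausible distance choice; the SGD update itself is omitted):

```python
import numpy as np

def rdm(acts):
    """Representational distance matrix: 1 - Pearson correlation between
    the activation patterns of every pair of stimuli (rows of `acts`)."""
    z = acts - acts.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T

def rdl_loss(student_acts, teacher_acts):
    """RDL auxiliary loss (sketch): mean squared difference between the
    off-diagonal entries of the student and teacher RDMs. In training,
    this term is added to the task loss and minimized by SGD, pulling
    the student's representational geometry toward the teacher's."""
    ds, dt = rdm(student_acts), rdm(teacher_acts)
    mask = ~np.eye(len(ds), dtype=bool)
    return np.mean((ds[mask] - dt[mask]) ** 2)
```

Because only pairwise distances are compared, the loss is indifferent to the dimensionality of each model's representational space, which is what allows architectural differences between student and teacher.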
NASA Astrophysics Data System (ADS)
Hon, Marc; Stello, Dennis; Yu, Jie
2018-05-01
Deep learning in the form of 1D convolutional neural networks has previously been shown to be capable of efficiently classifying the evolutionary state of oscillating red giants into red giant branch stars and helium-core burning stars by recognizing visual features in their asteroseismic frequency spectra. We elaborate further on the deep learning method by developing an improved convolutional neural network classifier. To make our method useful for current and future space missions such as K2, TESS, and PLATO, we train classifiers that are able to classify the evolutionary states of the lower frequency resolution spectra expected from these missions. Additionally, we provide new classifications for 8633 Kepler red giants, of which 426 have not previously been classified using asteroseismology. This brings the total to 14983 Kepler red giants classified with our new neural network. We also verify that our classifiers are remarkably robust to suboptimal data, including low signal-to-noise ratios and incorrect training truth labels.
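A toy NumPy forward pass illustrates the flavor of such a 1D-CNN classifier (convolution over the frequency spectrum, ReLU, global max pooling, logistic output); the filter bank and weights below are random placeholders, not the trained network:

```python
import numpy as np

def conv1d_valid(x, kernels):
    """'Valid' 1D correlation of a spectrum x with a bank of kernels,
    followed by a ReLU, yielding one feature map per kernel."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)
    return np.maximum(windows @ kernels.T, 0.0)

def classify_spectrum(x, kernels, w, b):
    """Tiny sketch of a 1D-CNN evolutionary-state classifier: convolve
    the spectrum, global-max-pool each feature map, and apply a logistic
    output (e.g. RGB vs. helium-core burning). All parameters here are
    illustrative stand-ins for trained weights."""
    feats = conv1d_valid(x, kernels).max(axis=0)   # global max pooling
    return 1.0 / (1.0 + np.exp(-(feats @ w + b)))
```

Global max pooling is what makes the classifier sensitive to the presence of local spectral features (e.g. the granulation and oscillation pattern) rather than their exact frequency position.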
Deep graphs—A general framework to represent and analyze heterogeneous complex systems across scales
NASA Astrophysics Data System (ADS)
Traxl, Dominik; Boers, Niklas; Kurths, Jürgen
2016-06-01
Network theory has proven to be a powerful tool in describing and analyzing systems by modelling the relations between their constituent objects. Particularly in recent years, great progress has been made by augmenting "traditional" network theory in order to account for the multiplex nature of many networks, multiple types of connections between objects, the time-evolution of networks, networks of networks and other intricacies. However, existing network representations still lack crucial features in order to serve as a general data analysis tool. These include, most importantly, an explicit association of information with possibly heterogeneous types of objects and relations, and a conclusive representation of the properties of groups of nodes as well as the interactions between such groups on different scales. In this paper, we introduce a collection of definitions resulting in a framework that, on the one hand, entails and unifies existing network representations (e.g., network of networks and multilayer networks), and on the other hand, generalizes and extends them by incorporating the above features. To implement these features, we first specify the nodes and edges of a finite graph as sets of properties (which are permitted to be arbitrary mathematical objects). Second, the mathematical concept of partition lattices is transferred to network theory in order to demonstrate how partitioning the node and edge set of a graph into supernodes and superedges allows us to aggregate, compute, and allocate information on and between arbitrary groups of nodes. The derived partition lattice of a graph, which we denote by deep graph, constitutes a concise, yet comprehensive representation that enables the expression and analysis of heterogeneous properties, relations, and interactions on all scales of a complex system in a self-contained manner.
Furthermore, to be able to utilize existing network-based methods and models, we derive different representations of multilayer networks from our framework and demonstrate the advantages of our representation. On the basis of the formal framework described here, we provide a rich, fully scalable (and self-explanatory) software package that integrates into the PyData ecosystem and offers interfaces to popular network packages, making it a powerful, general-purpose data analysis toolkit. We exemplify an application of deep graphs using a real world dataset, comprising 16 years of satellite-derived global precipitation measurements. We deduce a deep graph representation of these measurements in order to track and investigate local formations of spatio-temporal clusters of extreme precipitation events.
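The central supernode operation, partitioning the node set and aggregating member properties one level up the partition lattice, can be sketched with plain pandas (a toy table of our own; the authors' software package wraps this pattern in a richer, graph-aware API):

```python
import pandas as pd

# Toy node table: point measurements with heterogeneous properties.
nodes = pd.DataFrame({
    "station": ["A", "A", "B", "B", "C"],
    "time":    [0, 1, 0, 1, 0],
    "precip":  [1.0, 3.0, 2.0, 2.0, 5.0],
})

# Partitioning the node set by 'station' moves one level up the
# partition lattice: each group becomes a supernode whose properties
# are aggregates of its members' properties.
supernodes = nodes.groupby("station").agg(
    n_members=("precip", "size"),
    total_precip=("precip", "sum"),
)
```

The same groupby-and-aggregate step applied to an edge table yields superedges, so information can be computed and allocated between arbitrary groups of nodes, which is the framework's route to multi-scale analysis.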
[Severity classification of chronic obstructive pulmonary disease based on deep learning].
Ying, Jun; Yang, Ceyuan; Li, Quanzheng; Xue, Wanguo; Li, Tanshi; Cao, Wenzhe
2017-12-01
In this paper, a deep learning method is proposed to build an automatic classification algorithm for the severity of chronic obstructive pulmonary disease. Large-sample clinical data used as input features were analyzed for their weights in classification. Through feature selection, model training, parameter optimization and model testing, a classification prediction model based on a deep belief network was built to predict the severity classification criteria proposed by the Global Initiative for Chronic Obstructive Lung Disease (GOLD). We achieved over 90% prediction accuracy for the two standardized versions of the severity criteria released in 2007 and 2011, respectively. Moreover, we also obtained the contribution ranking of the input features by analyzing the model coefficient matrix, and confirmed that the most contributive input features agreed to a certain degree with clinical diagnostic knowledge. These results support the validity of the deep belief network model. This study provides an effective solution for applying deep learning to automatic diagnostic decision making.
Survey on deep learning for radiotherapy.
Meyer, Philippe; Noblet, Vincent; Mazzara, Christophe; Lallement, Alex
2018-07-01
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insights of potential future applications. We have attempted to make this paper accessible to both radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
Boosting compound-protein interaction prediction by deep learning.
Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng
2016-11-01
The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming; computational approaches have thus been introduced. Among these, machine-learning based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim at improving the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (the abbreviation of Deep Learning for Compound-Protein Interactions prediction), which employs a deep neural network (DNN) to effectively learn the representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
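A forward pass of such a pairwise DNN can be sketched in NumPy, concatenating a compound fingerprint with a protein feature vector and applying fully connected ReLU layers; the layer sizes and weights below are illustrative, not DL-CPI's actual configuration:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def cpi_dnn_forward(compound_fp, protein_feat, layers):
    """Sketch of a DL-CPI-style forward pass: the compound fingerprint
    and protein feature vector are concatenated and passed through
    fully connected ReLU layers, ending in a logistic unit that scores
    the interaction. `layers` is a list of (weights, bias) pairs; the
    last pair maps the final hidden layer to a single logit."""
    h = np.concatenate([compound_fp, protein_feat])
    for W, b in layers[:-1]:
        h = relu(W @ h + b)            # layerwise abstraction
    W, b = layers[-1]
    return 1.0 / (1.0 + np.exp(-(W @ h + b)))
```

The layerwise abstraction mentioned in the abstract corresponds to the successive hidden layers: each ReLU layer re-represents the joint compound-protein input before the final interaction score is produced.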
Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H
2017-12-01
Deep learning and neural network models have been new research directions and hot issues in the fields of machine learning and artificial intelligence in recent years. Deep learning has made breakthroughs in image and speech recognition, and has also been extensively used in face recognition and information retrieval because of its particular advantages. Bone X-ray images express different variations in black-white-gray gradations, with image features of black-and-white contrast and level differences. Given these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing an automatic forensic system for bone age assessment. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress on image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.
ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.
Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng
2017-08-30
While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance and subset level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.
NASA Technical Reports Server (NTRS)
Wilson, K.; Parvin, B.; Fugate, R.; Kervin, P.; Zingales, S.
2003-01-01
Future NASA deep space missions will fly advanced high-resolution imaging instruments that will require high-bandwidth links to return the huge data volumes they generate. Optical communications is a key technology for returning these large data volumes from deep space probes. Yet to cost-effectively realize the high bandwidth potential of the optical link will require deployment of ground receivers in diverse locations to provide high link availability. A recent analysis of GOES weather satellite data showed that a network of ground stations located in Hawaii and the Southwest continental US can provide an average of 90% availability for the deep space optical link. JPL and AFRL are exploring the use of large telescopes in Hawaii, California, and Albuquerque to support the Mars Telesat laser communications demonstration. Designed to demonstrate multi-Mbps communications from Mars, the mission will investigate key operational strategies of future deep space optical communications networks.
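The 90% figure follows from spatial diversity: if cloud outages at the sites are statistically independent, the link is available whenever at least one station is clear. A minimal sketch of that calculation (the per-site availabilities below are illustrative placeholders, not values from the GOES analysis):

```python
# Combined availability of a ground-station network, assuming each
# site's weather outages are statistically independent of the others.
def network_availability(site_availabilities):
    """Probability that at least one station has a clear line of sight."""
    p_all_down = 1.0
    for p in site_availabilities:
        p_all_down *= (1.0 - p)
    return 1.0 - p_all_down

# Hypothetical single-site clear-sky fractions for three stations:
sites = [0.65, 0.60, 0.55]
print(round(network_availability(sites), 3))  # 0.937
```

Three mediocre sites thus combine into >90% availability, which is the economic argument for a diverse receiver network; correlated weather (e.g. sites in the same storm system) would lower this.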
White-matter functional networks changes in patients with schizophrenia.
Jiang, Yuchao; Luo, Cheng; Li, Xuan; Li, Yingjia; Yang, Hang; Li, Jianfu; Chang, Xin; Li, Hechun; Yang, Huanghao; Wang, Jijun; Duan, Mingjun; Yao, Dezhong
2018-04-13
Resting-state functional MRI (rsfMRI) is a useful technique for investigating the functional organization of human gray matter in neuroscience and neuropsychiatry. Nevertheless, most studies have examined functional connectivity and/or task-related functional activity in gray matter only, and white-matter functional networks have so far been investigated mainly in healthy subjects. Schizophrenia has been hypothesized to be a brain disorder involving insufficient or ineffective communication associated with white-matter abnormalities. However, previous studies have mainly examined the structural architecture of white matter using MRI or diffusion tensor imaging and failed to uncover any dysfunctional connectivity within the white matter on rsfMRI. The current study used rsfMRI to evaluate white-matter functional connectivity in a large cohort of 97 patients with schizophrenia and 126 healthy controls. Ten large-scale white-matter networks were identified by a cluster analysis of voxel-based white-matter functional connectivity and classified into superficial, middle, and deep layers of networks. Evaluation of the spontaneous oscillation of white-matter networks and the functional connectivity between them showed that patients with schizophrenia had decreased amplitudes of low-frequency oscillation and increased functional connectivity in the superficial perception-motor networks. Additionally, we examined the interactions between white-matter and gray-matter networks. The superficial perception-motor white-matter network had decreased functional connectivity with the cortical perception-motor gray-matter networks. In contrast, the middle and deep white-matter networks had increased functional connectivity with the superficial perception-motor white-matter network and the cortical perception-motor gray-matter network.
Thus, we presumed that the disrupted association between the gray-matter and white-matter networks in the perception-motor system may be compensated for through the middle-deep white-matter networks, which may be the foundation of the extensively disrupted connections in schizophrenia. Copyright © 2018 Elsevier Inc. All rights reserved.
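The elementary quantity behind rsfMRI functional connectivity is a Pearson correlation between two BOLD time courses. A minimal sketch on synthetic signals (purely illustrative; the study's actual pipeline of clustering and network labeling is not shown):

```python
import numpy as np

# Functional connectivity as a Pearson correlation between two time
# series. The signals are synthetic sine waves plus noise, standing in
# for preprocessed BOLD time courses from two networks.
def functional_connectivity(ts_a, ts_b):
    """Pearson correlation coefficient between two time series."""
    a = np.asarray(ts_a, dtype=float)
    b = np.asarray(ts_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
shared = np.sin(t)                                 # common slow signal
ts_a = shared + 0.3 * rng.standard_normal(t.size)
ts_b = shared + 0.3 * rng.standard_normal(t.size)
print(functional_connectivity(ts_a, ts_b))         # strongly positive
```

"Increased functional connectivity" in the abstract corresponds to higher values of this correlation between the relevant network time courses in patients than in controls.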
Optimal social-networking strategy is a function of socioeconomic conditions.
Oishi, Shigehiro; Kesebir, Selin
2012-12-01
In the two studies reported here, we examined the relation among residential mobility, economic conditions, and optimal social-networking strategy. In study 1, a computer simulation showed that regardless of economic conditions, having a broad social network with weak friendship ties is advantageous when friends are likely to move away. By contrast, having a small social network with deep friendship ties is advantageous when the economy is unstable but friends are not likely to move away. In study 2, we examined the validity of the computer simulation using a sample of American adults. Results were consistent with the simulation: American adults living in a zip code where people are residentially stable but economically challenged were happier if they had a narrow but deep social network, whereas in other socioeconomic conditions, people were generally happier if they had a broad but shallow networking strategy. Together, our studies demonstrate that the optimal social-networking strategy varies as a function of socioeconomic conditions.
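The qualitative trade-off in Study 1 can be caricatured in a few lines: broad/shallow networks survive mobility in numbers, while narrow/deep ties are assumed to be the only ones that help in an economic crisis. This toy Monte Carlo sketch illustrates only that logic; it is not the authors' simulation, and every parameter and rule below is invented:

```python
import random

# Toy model: each friend moves away with probability p_move; a crisis
# occurs with probability p_crisis, and in a crisis only deep ties
# (depth > 0.5) provide support. All values are hypothetical.
def expected_support(n_friends, tie_depth, p_move, p_crisis,
                     trials=20000, seed=42):
    """Average support per period under the toy rules above."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        surviving = sum(1 for _ in range(n_friends) if rng.random() > p_move)
        crisis = rng.random() < p_crisis
        per_tie = tie_depth if (not crisis or tie_depth > 0.5) else 0.0
        total += surviving * per_tie
    return total / trials

# Low mobility + unstable economy: the narrow-deep strategy wins.
broad  = expected_support(n_friends=12, tie_depth=0.25, p_move=0.5, p_crisis=0.6)
narrow = expected_support(n_friends=3,  tie_depth=1.0,  p_move=0.1, p_crisis=0.6)
print(broad, narrow)
```

Varying `p_move` and `p_crisis` reverses the ordering, mirroring the paper's claim that the optimal strategy depends on socioeconomic conditions.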
NASA Astrophysics Data System (ADS)
Cioaca, Alexandru
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided.
The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
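For context, the constraint referred to above is the standard strong-constraint 4D-Var cost function; the notation below follows common data-assimilation convention and is not reproduced from the dissertation itself:

```latex
J(x_0) = \frac{1}{2}\,(x_0 - x_b)^{\mathsf T}\,\mathbf{B}^{-1}\,(x_0 - x_b)
       + \frac{1}{2}\sum_{k=0}^{N}
         \bigl(\mathcal{H}_k(x_k) - y_k\bigr)^{\mathsf T}\,\mathbf{R}_k^{-1}\,
         \bigl(\mathcal{H}_k(x_k) - y_k\bigr),
\qquad x_k = \mathcal{M}_{0 \to k}(x_0)
```

where $x_b$ is the background state, $\mathbf{B}$ and $\mathbf{R}_k$ are the background- and observation-error covariances, $\mathcal{H}_k$ is the observation operator, and $\mathcal{M}_{0 \to k}$ is the forecast model. Observation impact is then the sensitivity of a verification functional to the observations $y_k$, which is what the second-order adjoint machinery computes efficiently.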
Speech reconstruction using a deep partially supervised neural network.
McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R
2017-08-01
Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.
A Deep Space Network Portable Radio Science Receiver
NASA Technical Reports Server (NTRS)
Jongeling, Andre P.; Sigman, Elliott H.; Chandra, Kumar; Trinh, Joseph T.; Navarro, Robert; Rogstad, Stephen P.; Goodhart, Charles E.; Proctor, Robert C.; Finley, Susan G.; White, Leslie A.
2009-01-01
The Radio Science Receiver (RSR) is an open-loop receiver installed in NASA's Deep Space Network (DSN), which digitally filters and records intermediate-frequency (IF) analog signals. The RSR is an important tool for the Cassini Project, which uses it to measure perturbations of the radio-frequency wave as it travels between the spacecraft and the ground stations, allowing highly detailed study of the composition of the rings, atmosphere, and surface of Saturn and its satellites.
Large-scale transportation network congestion evolution prediction using deep learning theory.
Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai
2015-01-01
Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners seeking to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data are becoming increasingly ubiquitous, triggering a series of data-driven studies of transportation phenomena. Among these, deep learning theory is considered one of the most promising techniques for tackling massive high-dimensional data. This study attempts to extend deep learning theory to large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% in less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.
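The bottom layer of such an architecture is a Restricted Boltzmann Machine, whose basic inference step is just a sigmoid over a linear map of the visible (congested/free) state vector. A minimal sketch with random placeholder weights (a trained model would learn W, e.g. via contrastive divergence, from historical congestion states; all sizes and values below are invented, not from the paper):

```python
import numpy as np

# One RBM inference step: probability of each hidden unit being active,
# given a binary visible vector of link congestion states.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_hidden_probs(v, W, b_hidden):
    """P(h_j = 1 | v) for a binary RBM: sigmoid(v W + b)."""
    return sigmoid(v @ W + b_hidden)

rng = np.random.default_rng(1)
v = rng.integers(0, 2, size=8).astype(float)  # links: congested=1, free=0
W = 0.1 * rng.standard_normal((8, 4))         # visible-to-hidden weights
b = np.zeros(4)                               # hidden biases
print(rbm_hidden_probs(v, W, b))
```

Stacking such layers and feeding the learned hidden representation into a recurrent layer is, in outline, how the paper models congestion evolution over time.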
Accurate segmentation of lung fields on chest radiographs using deep convolutional networks
NASA Astrophysics Data System (ADS)
Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory
2017-02-01
Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape, and texture of the lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis, in which lung field segmentation is a significant first step. Although many methods have been proposed for this problem, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification, and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation. The suggested framework outperforms state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep-learning-based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
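The reported 0.96 is intersection over union, the standard overlap score between a predicted and a manual mask. A minimal sketch on toy binary masks (the masks below are tiny illustrative arrays, not radiograph data):

```python
import numpy as np

# IOU between two binary segmentation masks: shared pixels divided by
# pixels covered by either mask.
def iou(pred, truth):
    """IOU of two binary masks; defined as 1.0 when both are empty."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, truth).sum() / union)

pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 1, 0],
         [0, 0, 1]]
print(iou(pred, truth))  # 2 shared pixels / 4 in the union = 0.5
```

An IOU of 0.96 thus means the predicted lung field and the manual tracing overlap almost completely, with only a thin band of disagreement at the boundary.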
A Heavy-Duty Jack for a Giant Task
2010-11-03
A major refurbishment of the giant Mars antenna at the NASA Deep Space Network's Goldstone Deep Space Communications Complex in California's Mojave Desert required workers to jack up millions of pounds of delicate scientific equipment.
Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan
2016-01-01
In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. First, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Second, we design a simple yet effective method to transfer features learned from CDBNs on generic source tasks to object tracking tasks using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Mankins, J. C.
1982-01-01
A review of the Deep Space Network's (DSN) use of precision Doppler tracking of deep space vehicles is presented. The review emphasizes operational and configurational aspects and considers: the projected configuration of the DSN's frequency and timing system; the environment within the DSN provided by the precision atomic standards within the frequency and timing system, both current and projected; and the general requirements placed on the DSN and the frequency and timing system for both the baseline and the nominal gravitational wave experiments. A comment is made concerning the current probability that such an experiment will be carried out in the foreseeable future.
Range Measurement as Practiced in the Deep Space Network
NASA Technical Reports Server (NTRS)
Berner, Jeff B.; Bryant, Scott H.; Kinman, Peter W.
2007-01-01
Range measurements are used to improve the trajectory models of spacecraft tracked by the Deep Space Network. The unique challenge of deep-space ranging is that the two-way delay is long, typically many minutes, and the signal-to-noise ratio is small. Accurate measurements are made under these circumstances by means of long correlations that incorporate Doppler rate-aiding. This processing is done with commercial digital signal processors, providing a flexibility in signal design that can accommodate both the traditional sequential ranging signal and pseudonoise range codes. Accurate range determination requires the calibration of the delay within the tracking station. Measurements with a standard deviation of 1 m have been made.
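The core arithmetic is simple even though the measurement is hard: range is half the round-trip light time, after subtracting the calibrated delay inside the tracking station. A sketch (the delay value is hypothetical, not an actual DSN calibration figure):

```python
# One-way range from a two-way delay measurement, with a station-delay
# calibration term removed before halving.
C = 299_792_458.0  # speed of light, m/s

def two_way_range(round_trip_s, station_delay_s):
    """One-way range in meters from a measured two-way delay."""
    return 0.5 * C * (round_trip_s - station_delay_s)

# A ~20-minute round trip (deep-space distances), minus a hypothetical
# 2-microsecond station delay:
range_m = two_way_range(1200.0, 2e-6)
print(range_m / 1.496e11, "AU")
```

Meter-level standard deviation over distances of light-minutes, as the abstract reports, requires timing the round trip to a few nanoseconds, which is why station-delay calibration and long, Doppler rate-aided correlations matter.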
The deep space network, volume 14
NASA Technical Reports Server (NTRS)
1973-01-01
DSN progress during Jan. and Feb. 1973 is reported. Areas of accomplishment include: flight project support, TDA research and technology, network engineering, hardware and software implementation, and operations.
Networks consolidation program: Maintenance and Operations (M&O) staffing estimates
NASA Technical Reports Server (NTRS)
Goodwin, J. P.
1981-01-01
The Mark IV-A consolidates deep space and highly elliptical Earth orbiter (HEEO) mission tracking and implements centralized control and monitoring at the deep space communications complexes (DSCCs). One objective of the network design is to reduce maintenance and operations (M&O) costs. To determine whether the system design meets this objective, an M&O staffing model for Goldstone was developed and used to estimate the staffing levels required to support the Mark IV-A configuration. The study was performed for the Goldstone complex, and the program office translated these estimates to the overseas complexes to derive the network estimates.
The Deep Space Network information system in the year 2000
NASA Technical Reports Server (NTRS)
Markley, R. W.; Beswick, C. A.
1992-01-01
The Deep Space Network (DSN), the largest and most sensitive scientific communications and radio navigation network in the world, is considered. The focus is on the telemetry processing, monitor and control, and ground data transport architectures of the DSN ground information system envisioned for the year 2000. The telemetry architecture will be unified from the front-end area to the end user. It will provide highly automated monitor and control of the DSN, automated configuration of support activities, and a vastly improved human interface. Automated decision support systems will be in place for DSN resource management, performance analysis, fault diagnosis, and contingency management.
Van Valen, David A; Kudo, Takamasa; Lane, Keara M; Macklin, Derek N; Quach, Nicolas T; DeFelice, Mialy M; Maayan, Inbal; Tanouchi, Yu; Ashley, Euan A; Covert, Markus W
2016-11-01
Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase-contrast images, without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, is generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell-type systems.
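The building block of all the segmentation networks discussed above is the 2-D convolution. A minimal valid-mode implementation, written out explicitly (frameworks fuse and vectorize this; the filter below is a toy horizontal-gradient kernel, not a learned one):

```python
import numpy as np

# Valid-mode 2-D convolution (cross-correlation, as deep learning
# frameworks implement it): slide the kernel over the image with no
# padding and sum the elementwise products at each position.
def conv2d_valid(image, kernel):
    """Return the (ih-kh+1, iw-kw+1) feature map for image and kernel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

img  = np.array([[0.0, 0.0, 1.0, 1.0]])
edge = np.array([[1.0, -1.0]])  # responds to left-to-right intensity steps
print(conv2d_valid(img, edge))
```

A segmentation network stacks many such learned filters with nonlinearities, so that the final feature map can be read out as a per-pixel cell/background decision.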