50 CFR 229.37 - False Killer Whale Take Reduction Plan.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Hawaii Pelagic and Hawaii Insular stocks of false killer whales in the Hawaii-based deep-set and shallow... section have the following meanings: (1) Deep-set or Deep-setting has the same meaning as the definition... this title. (c) Gear requirements. (1) While deep-setting, the owner and operator of a vessel...
50 CFR 229.37 - False Killer Whale Take Reduction Plan.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Hawaii Pelagic and Hawaii Insular stocks of false killer whales in the Hawaii-based deep-set and shallow... section have the following meanings: (1) Deep-set or Deep-setting has the same meaning as the definition... this title. (c) Gear requirements. (1) While deep-setting, the owner and operator of a vessel...
Lee, Christine K; Hofer, Ira; Gabel, Eilon; Baldi, Pierre; Cannesson, Maxime
2018-04-17
The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality. The data used to train and validate the algorithm consist of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed improvement of the deep neural network by adding American Society of Anesthesiologists (ASA) Physical Status Classification and robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index. In-hospital mortality in the training and test sets was 0.81% and 0.73%, respectively. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve, at 0.97 (95% CI, 0.94 to 0.99). Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
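The training recipe this abstract describes (a feed-forward network with a logistic output, SGD with momentum, an 80/20 train/test split, evaluated by ROC-AUC) can be sketched as follows. This is a minimal illustration on synthetic data: the 87-feature count and 80/20 split mirror the abstract, but the synthetic outcome, network sizes, and learning-rate settings are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 87))  # 87 features, as in the study (synthetic values)
# Rare synthetic outcome driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 2.5).astype(int)

# 80% training, 20% held out for testing, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Feed-forward network with a logistic (sigmoid) output for binary
# classification, trained by stochastic gradient descent with momentum
clf = MLPClassifier(hidden_layer_sizes=(64, 32), solver="sgd",
                    momentum=0.9, learning_rate_init=0.01,
                    max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

With scikit-learn's `MLPClassifier`, the logistic output for binary targets is automatic; the hidden-layer sizes here are a guess for illustration only.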
NASA Astrophysics Data System (ADS)
Shi, Bibo; Hou, Rui; Mazurowski, Maciej A.; Grimm, Lars J.; Ren, Yinhao; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.
2018-02-01
Purpose: To determine whether domain transfer learning can improve the performance of deep features extracted from digital mammograms using a pre-trained deep convolutional neural network (CNN) in the prediction of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. Method: In this study, we collected digital mammography magnification views for 140 patients with DCIS at biopsy, 35 of which were subsequently upstaged to invasive cancer. We utilized a deep CNN model that was pre-trained on two natural image data sets (ImageNet and DTD) and one mammographic data set (INbreast) as the feature extractor, hypothesizing that these data sets are increasingly more similar to our target task and will lead to better representations of deep features to describe DCIS lesions. Through a statistical pooling strategy, three sets of deep features were extracted using the CNNs at different levels of convolutional layers from the lesion areas. A logistic regression classifier was then trained to predict which tumors contain occult invasive disease. The generalization performance was assessed and compared using repeated random sub-sampling validation and receiver operating characteristic (ROC) curve analysis. Result: The best performance of deep features was from the CNN model pre-trained on INbreast, and the proposed classifier using this set of deep features was able to achieve a median classification performance of ROC-AUC equal to 0.75, which is significantly better (p ≤ 0.05) than the performance of deep features extracted using the ImageNet data set (ROC-AUC = 0.68). Conclusion: Transfer learning is helpful for learning a better representation of deep features, and improves the prediction of occult invasive disease in DCIS.
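The pipeline in this abstract (pretrained-CNN activations, statistical pooling over spatial positions, then a logistic regression classifier) can be sketched generically. Everything here is a stand-in: the "feature maps" are random arrays in place of real pretrained-CNN activations, and the labels are hypothetical; only the shapes (140 patients) and the pooling-plus-classifier structure follow the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Stand-in for pretrained-CNN activations at one convolutional layer:
# (lesions, channels, height, width). Real features would come from a
# network pretrained on ImageNet, DTD, or INbreast; these are random.
feature_maps = rng.normal(size=(140, 32, 8, 8))
upstaged = rng.integers(0, 2, size=140)  # hypothetical binary labels

def stat_pool(fmaps):
    # Statistical pooling: per-channel mean and std over spatial positions,
    # turning each variable-size activation map into a fixed-length vector
    return np.concatenate(
        [fmaps.mean(axis=(2, 3)), fmaps.std(axis=(2, 3))], axis=1)

X = stat_pool(feature_maps)              # (140, 64) pooled deep features
clf = LogisticRegression(max_iter=1000)
aucs = cross_val_score(clf, X, upstaged, cv=5, scoring="roc_auc")
```

With random features and labels the cross-validated AUC hovers near chance; the point is the structure of the feature-extraction and classification stages, not the score.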
Improving agar electrospinnability with choline-based deep eutectic solvents
USDA-ARS?s Scientific Manuscript database
One percent agar (% wt) was dissolved in the deep eutectic solvent (DES), (2-hydroxyethyl) trimethylammonium chloride/urea at a 1:2 molar ratio, and successfully electrospun into nanofibers. An existing electrospinning set-up, operated at 50 deg C, was adapted for use with an ethanol bath to collect...
House GOP Presses for Deep Cuts to Education
ERIC Educational Resources Information Center
Klein, Alyson
2011-01-01
Republicans in the U.S. House of Representatives appear determined to make deep cuts to education and related programs in the temporary spending bill that would keep the federal government operating for the rest of the fiscal year, even as President Barack Obama seeks a modest funding boost next year. That sets up a fiscal face-off in the…
Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R
2016-12-13
Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). 
For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4% and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%. In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.
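Selecting the two operating cut points described above (one for high specificity, one for high sensitivity) from a development set's ROC curve can be sketched as follows. The data, score model, and the 2% / 97.5% targets are illustrative assumptions, not the study's actual thresholds.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=2000)                 # synthetic labels
scores = y + rng.normal(scale=0.7, size=2000)     # imperfect classifier scores

# roc_curve returns thresholds in decreasing order; a sample is called
# positive when its score >= threshold
fpr, tpr, thr = roc_curve(y, scores)

# High-specificity operating point: smallest threshold keeping FPR <= 2%
t_spec = thr[fpr <= 0.02][-1]
# High-sensitivity operating point: largest threshold reaching TPR >= 97.5%
t_sens = thr[tpr >= 0.975][0]

# Verify the specificity achieved at the high-specificity cut point
pred = scores >= t_spec
spec = ((~pred) & (y == 0)).sum() / (y == 0).sum()
```

The high-specificity threshold is necessarily the more conservative (larger) of the two whenever the classifier is imperfect, which is why the two points trade sensitivity against specificity as in the reported EyePACS-1 and Messidor-2 numbers.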
NASA Astrophysics Data System (ADS)
Lim, D. S. S.; Abercromby, A.; Beaton, K.; Brady, A. L.; Cardman, Z.; Chappell, S.; Cockell, C. S.; Cohen, B. A.; Cohen, T.; Deans, M.; Deliz, I.; Downs, M.; Elphic, R. C.; Hamilton, J. C.; Heldmann, J.; Hillenius, S.; Hoffman, J.; Hughes, S. S.; Kobs-Nawotniak, S. E.; Lees, D. S.; Marquez, J.; Miller, M.; Milovsoroff, C.; Payler, S.; Sehlke, A.; Squyres, S. W.
2016-12-01
Analogs are destinations on Earth that allow researchers to approximate operational and/or physical conditions on other planetary bodies and within deep space. Over the past decade, our team has been conducting geobiological field science studies under simulated deep space and Mars mission conditions. Each of these missions integrates scientific and operational research with the goal of identifying concepts of operations (ConOps) and capabilities that will enable and enhance scientific return during human and human-robotic missions to the Moon, into deep space and on Mars. Working under these simulated mission conditions presents a number of unique challenges that are not encountered during typical scientific field expeditions. However, there are significant benefits to this working model from the perspective of the human space flight and scientific operations research community. Specifically, by applying human (and human-robotic) mission architectures to real field science endeavors, we create a unique operational litmus test for those ConOps and capabilities that have otherwise been vetted under circumstances that did not necessarily demand scientific data return meeting the rigors of peer-review standards. The presentation will give an overview of our team's recent analog research, with a focus on the scientific operations research. The intent is to encourage collaborative dialog with a broader set of analog research community members with an eye towards future scientific field endeavors that will have a significant impact on how we design human and human-robotic missions to the Moon, into deep space and to Mars.
Deep drilling continues, though records don't show it
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1989-02-01
This article discusses how, although current prices may not appear to merit the expense of drilling for deep gas today, operators are looking beyond the immediate future. Faith in the future of deep gas drives drillers onward. Current prices may not justify it, but there is still a great deal of interest in the really deep plays. Technically, there was only one drilling record in 1988. The E.L. Spence Trust 1, in Missouri's Reelfoot Rift region of the Mississippi embayment, was drilled to the Lamotte formation at 10,089 ft, surpassing the old record of 4,906 ft set back in 1966.
Multiple-Feed Design For DSN/SETI Antenna
NASA Technical Reports Server (NTRS)
Slobin, S. D.; Bathker, D. A.
1988-01-01
Frequency bands changed with little interruption of operation. Modification of feedhorn mounting on existing 34-m-diameter antenna in Deep Space Network (DSN) enables antenna to be shared by Search for Extra-Terrestrial Intelligence (SETI) program with minimal interruption of DSN spacecraft tracking. Modified antenna useful in terrestrial communication systems requiring frequent changes of operating frequencies.
Galli, Giulia
2014-01-01
When we form new memories, their mnestic fate largely depends upon the cognitive operations set in train during encoding. A typical observation in experimental as well as everyday life settings is that if we learn an item using semantic or "deep" operations, such as attending to its meaning, memory will be better than if we learn the same item using more "shallow" operations, such as attending to its structural features. In the psychological literature, this phenomenon has been conceptualized within the "levels of processing" framework and has been consistently replicated since its original proposal by Craik and Lockhart in 1972. However, the exact mechanisms underlying the memory advantage for deeply encoded items are not yet entirely understood. A cognitive neuroscience perspective can add to this field by clarifying the nature of the processes involved in effective deep and shallow encoding and how they are instantiated in the brain, but so far there has been little work to systematically integrate findings from the literature. This work aims to fill this gap by reviewing, first, some of the key neuroimaging findings on the neural correlates of deep and shallow episodic encoding and second, emerging evidence from studies using neuromodulatory approaches such as psychopharmacology and non-invasive brain stimulation. Taken together, these studies help further our understanding of levels of processing. In addition, by showing that deep encoding can be modulated by acting upon specific brain regions or systems, the reviewed studies pave the way for selective enhancements of episodic encoding processes.
Deep Borehole Field Test Requirements and Controlled Assumptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest
2015-07-01
This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.
Deep Space Habitat Concept of Operations for Transit Mission Phases
NASA Technical Reports Server (NTRS)
Hoffman, Stephen J.
2011-01-01
The National Aeronautics and Space Administration (NASA) has begun evaluating various mission and system components of possible implementations of what the U.S. Human Spaceflight Plans Committee (also known as the Augustine Committee) has named the flexible path (Anon., 2009). As human spaceflight missions expand further into deep space, the duration of these missions increases to the point where a dedicated crew habitat element appears necessary. There are several destinations included in this flexible path: a near Earth asteroid (NEA) mission, a Phobos/Deimos (Ph/D) mission, and a Mars surface exploration mission, all of which include at least a portion of the total mission in which the crew spends significant periods of time (measured in months) in the deep space environment and are thus candidates for a dedicated habitat element. As one facet of a number of studies being conducted by the Human Spaceflight Architecture Team (HAT) a workshop was conducted to consider how best to define and quantify habitable volume for these future deep space missions. One conclusion reached during this workshop was the need for a description of the scope and scale of these missions and the intended uses of a habitat element. A group was set up to prepare a concept of operations document to address this need. This document describes a concept of operations for a habitat element used for these deep space missions. Although it may eventually be determined that there is significant overlap with this concept of operations and that of a habitat destined for use on planetary surfaces, such as the Moon and Mars, no such presumption is made in this document.
Deep Space One High-Voltage Bus Management
NASA Technical Reports Server (NTRS)
Rachocki, Ken; Nieraeth, Donald
1999-01-01
The design of the High Voltage Power Converter Unit on DS1 allows both the spacecraft avionics and ion propulsion to operate in a stable manner near the PPP of the solar array. This approach relies on a fairly well-defined solar array model to determine the projected PPP. The solar array voltage set-points have to be updated every week to maintain operation near PPP. Stable operation even to the LEFT of the Peak Power Point is achievable so long as you do not change the operating power level of the ion engine. The next step for this technology is to investigate the use of onboard autonomy to determine the optimum SA voltage regulation set-point (i.e. near the PPP); this is for future missions that have one or more ion propulsion subsystems.
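The abstract's set-point logic targets operation near the solar array's peak power point (PPP), the voltage where delivered power V·I is maximized. As a hypothetical illustration only (DS1's actual array model and converter logic are not given here), the sketch below locates the PPP on a made-up diode-style I-V model:

```python
import numpy as np

def array_current(v, i_sc=3.0, v_oc=100.0, vt=8.0):
    # Illustrative diode-style solar-array I-V model: current falls off
    # exponentially as voltage approaches open-circuit (parameters invented)
    return np.maximum(0.0, i_sc * (1.0 - np.exp((v - v_oc) / vt)))

v = np.linspace(0.0, 100.0, 1001)        # candidate bus voltages
p = v * array_current(v)                 # delivered power at each voltage
v_pp = v[np.argmax(p)]                   # projected peak-power-point voltage
```

On a model like this the PPP sits somewhat below the open-circuit voltage; a voltage regulation set-point would be chosen near `v_pp` and, as the abstract notes, updated as the array model drifts over the mission.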
Flexible-Path Human Exploration
NASA Technical Reports Server (NTRS)
Sherwood, B.; Adler, M.; Alkalai, L.; Burdick, G.; Coulter, D.; Jordan, F.; Naderi, F.; Graham, L.; Landis, R.; Drake, B.;
2010-01-01
In the fourth quarter of 2009 an in-house, multi-center NASA study team briefly examined "Flexible Path" concepts to begin understanding characteristics, content, and roles of potential missions consistent with the strategy proposed by the Augustine Committee. We present an overview of the study findings. Three illustrative human/robotic mission concepts not requiring planet surface operations are described: assembly of very large in-space telescopes in cis-lunar space; exploration of near Earth objects (NEOs); exploration of Mars' moon Phobos. For each, a representative mission is described, technology and science objectives are outlined, and a basic mission operations concept is quantified. A fourth type of mission, using the lunar surface as preparation for Mars, is also described. Each mission's "capability legacy" is summarized. All four illustrative missions could achieve NASA's stated human space exploration objectives and advance human space flight toward Mars surface exploration. Telescope assembly missions would require the fewest new system developments. NEO missions would offer a wide range of deep-space trip times between several months and two years. Phobos exploration would retire several Mars-class risks, leaving another large remainder set (associated with entry, descent, surface operations, and ascent) for retirement by subsequent missions. And extended lunar surface operations would build confidence for Mars surface missions by addressing a complementary set of risks. Six enabling developments (robotic precursors, ISS exploration testbed, heavy-lift launch, deep-space-capable crew capsule, deep-space habitat, and reusable in-space propulsion stage) would apply across multiple program sequence options, and thus could be started even without committing to a specific mission sequence now. Flexible Path appears to be a viable strategy, with meaningful and worthy mission content.
Field performance of self-siphon sediment cleansing set for sediment removal in deep CSO chamber.
Zhou, Yongchao; Zhang, Yiping; Tang, Ping
2013-01-01
This paper presents a study of the self-siphon sediment cleansing set (SSCS), a system designed to remove sediment from the deep combined sewer overflow (CSO) chamber during dry-weather periods. In order to get a better understanding of the sediment removal effectiveness and operational conditions of the SSCS system, we carried out a full-scale field study and comparison analysis on the sediment depth changes in the deep CSO chambers under the conditions with and without the SSCS. The field investigation results demonstrated that the SSCS drains the dry-weather flow that accumulated for 50-57 min from the sewer channel to the intercepting system in about 10 min. It is estimated that the bed shear stress in the CSO chamber and sewer channel is improved almost 25 times on average. The SSCS acts to remove the near bed solids with high pollution load efficiently. Moreover, it cleans up not only the new sediment layer but also part of the previously accumulated sediment.
Genomic region operation kit for flexible processing of deep sequencing data.
Ovaska, Kristian; Lyly, Lauri; Sahu, Biswajyoti; Jänne, Olli A; Hautaniemi, Sampsa
2013-01-01
Computational analysis of data produced in deep sequencing (DS) experiments is challenging due to large data volumes and requirements for flexible analysis approaches. Here, we present a mathematical formalism based on set algebra for frequently performed operations in DS data analysis to facilitate translation of biomedical research questions to language amenable for computational analysis. With the help of this formalism, we implemented the Genomic Region Operation Kit (GROK), which supports various DS-related operations such as preprocessing, filtering, file conversion, and sample comparison. GROK provides high-level interfaces for R, Python, Lua, and command line, as well as an extension C++ API. It supports major genomic file formats and allows storing custom genomic regions in efficient data structures such as red-black trees and SQL databases. To demonstrate the utility of GROK, we have characterized the roles of two major transcription factors (TFs) in prostate cancer using data from 10 DS experiments. GROK is freely available with a user guide from http://csbi.ltdk.helsinki.fi/grok/.
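The set-algebra formalism the abstract describes treats genomic region sets as mathematical sets supporting operations like intersection. The sketch below illustrates the idea in plain Python; it is not GROK's actual API, and the example region names are invented:

```python
def intersect(set_a, set_b):
    """Set-algebra intersection of two genomic region sets.

    Regions are (chrom, start, end) half-open intervals. This is a simple
    O(n*m) scan for illustration; real toolkits use interval trees or
    sorted sweeps for efficiency.
    """
    out = []
    for ca, sa, ea in set_a:
        for cb, sb, eb in set_b:
            if ca == cb and sa < eb and sb < ea:   # same chromosome, overlap
                out.append((ca, max(sa, sb), min(ea, eb)))
    return sorted(out)

# Hypothetical example: ChIP-seq peaks intersected with gene bodies
peaks = [("chr1", 100, 200), ("chr1", 300, 400)]
genes = [("chr1", 150, 350), ("chr2", 0, 500)]
overlaps = intersect(peaks, genes)
# → [("chr1", 150, 200), ("chr1", 300, 350)]
```

Framing questions like "which binding peaks fall inside gene bodies?" as set expressions over region sets is exactly the translation from biomedical question to computable operation that the formalism targets.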
Environmental risk management and preparations for the first deep water well in Nigeria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, F.
Statoil is among the leaders in protecting health, environment and safety in all aspects of the business. The evaluations of business opportunities and development of blocks opened by authorities for petroleum exploration are assessed in accordance with the goals for environmental protection. Progressive improvement of environmental performance is secured through proper environmental risk management. In 1995, Statoil, the technical operator on Block 210 off the Nigerian coast, was the first company to drill in deep waters in this area. An exploration well was drilled in a water depth of about 320 meters. The drilling preparations included environmental assessment, a drillers' Hazop, oil spill drift calculations, oil spill response plans and environmental risk analysis. In the environmental preparations for the well, Statoil adhered to local and national government legislation, as well as to international guidelines and company standards. Special attention was paid to the environmental sensitivity of potentially affected areas. Statoil co-operated with experienced local companies, with the authorities and other international and national oil companies. This being the first deep water well offshore Nigeria, it was a challenge to co-operate with other operators in the area. The preparations that were carried out will set the standard for future environmental work in the area. Initial co-operation difficulties were turned into a positive attitude toward the environmental challenge.
Forecasting Flare Activity Using Deep Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Hernandez, T.
2017-12-01
Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.
2011-04-01
A ‘strategy as process’ approach to developing capabilities that are flexible, adaptable and robust; future structures and the need for agile organisations; models of the future security environment; and planning under deep uncertainty.
Action-Driven Visual Object Tracking With Deep Reinforcement Learning.
Yun, Sangdoo; Choi, Jongwon; Yoo, Youngjoon; Yun, Kimin; Choi, Jin Young
2018-06-01
In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to a change of target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through the evaluation of the object tracking benchmark data set, the proposed tracker is validated to achieve a competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on graphics processing unit, outperforms the state-of-the-art real-time trackers with an accuracy improvement of more than 8%.
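The tracker's core idea, adjusting a bounding box by a sequence of discrete actions until it locks onto the target, can be illustrated with a toy loop. Here a greedy IoU-improving oracle stands in for the learned deep policy (which in the paper is a pretrained, RL-fine-tuned network); box sizes, step size, and action set are invented for the sketch:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Discrete translation actions, as in action-driven tracking
ACTIONS = {"left": (-5, 0), "right": (5, 0), "up": (0, -5), "down": (0, 5)}

def move(box, d):
    return (box[0] + d[0], box[1] + d[1], box[2] + d[0], box[3] + d[1])

def track(box, target, max_steps=50):
    # Greedy oracle in place of the learned action policy: take the action
    # that most improves overlap; no improvement plays the "stop" action
    for _ in range(max_steps):
        best = max(ACTIONS.values(), key=lambda d: iou(move(box, d), target))
        if iou(move(box, best), target) <= iou(box, target):
            break
        box = move(box, best)
    return box

final = track((0, 0, 50, 50), target=(30, 20, 80, 70))
```

In the actual method the network never sees the target box at test time; it predicts each action from image crops, with RL supplying training signal even for partially labeled sequences.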
Ehteshami Bejnordi, Babak; Veta, Mitko; Johannes van Diest, Paul; van Ginneken, Bram; Karssemeijer, Nico; Litjens, Geert; van der Laak, Jeroen A W M; Hermsen, Meyke; Manson, Quirine F; Balkenhol, Maschenka; Geessink, Oscar; Stathonikos, Nikolaos; van Dijk, Marcory Crf; Bult, Peter; Beca, Francisco; Beck, Andrew H; Wang, Dayong; Khosla, Aditya; Gargeya, Rishab; Irshad, Humayun; Zhong, Aoxiao; Dou, Qi; Li, Quanzheng; Chen, Hao; Lin, Huang-Jing; Heng, Pheng-Ann; Haß, Christian; Bruni, Elia; Wong, Quincy; Halici, Ugur; Öner, Mustafa Ümit; Cetin-Atalay, Rengul; Berseth, Matt; Khvatkov, Vitali; Vylegzhanin, Alexei; Kraus, Oren; Shaban, Muhammad; Rajpoot, Nasir; Awan, Ruqayya; Sirinukunwattana, Korsuk; Qaiser, Talha; Tsang, Yee-Wah; Tellez, David; Annuscheit, Jonas; Hufnagl, Peter; Valkonen, Mira; Kartasalo, Kimmo; Latonen, Leena; Ruusuvuori, Pekka; Liimatainen, Kaisa; Albarqouni, Shadi; Mungal, Bharti; George, Ami; Demirci, Stefanie; Navab, Nassir; Watanabe, Seiryo; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo; Ahmady Phoulady, Hady; Kovalev, Vassili; Kalinovsky, Alexander; Liauchuk, Vitali; Bueno, Gloria; Fernandez-Carrobles, M Milagro; Serrano, Ismael; Deniz, Oscar; Racoceanu, Daniel; Venâncio, Rui
2017-12-12
Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. 
For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
NASA Astrophysics Data System (ADS)
Katavouta, Anna; Thompson, Keith
2017-04-01
A high resolution regional model (1/36 degree) of the Gulf of Maine, Scotian Shelf and adjacent deep ocean (GoMSS) is developed to downscale ocean conditions from an existing global operational system. First, predictions from the regional GoMSS model in a one-way nesting setup are evaluated using observations from multiple sources including satellite-borne sensors of surface temperature and sea level, CTDs, Argo floats and moored current meters. It is shown that on the shelf, the regional model predicts more realistic fields than the global system because it has higher resolution and includes tides that are absent from the global system. However, in deep water the regional model misplaces deep ocean eddies and meanders associated with the Gulf Stream. This is because of unrealistic internally generated variability (associated with the one-way nesting setup) that leads to decoupling of the regional model from the global system in the deep water. To overcome this problem, the large scales (length scales > 90 km) of the regional model are spectrally nudged towards the global system fields. This leads to more realistic predictions off the shelf. Wavenumber spectra show that even though spectral nudging constrains the large scales, it does not suppress the variability on small scales; on the contrary, it favours the formation of eddies with length scales below the cut-off wavelength of the spectral nudging.
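The spectral nudging described above can be sketched in one dimension: only the large-scale part of the regional field is relaxed toward the global system, while small scales evolve freely. The sketch below is illustrative only; the moving-average low-pass, the relaxation rate `alpha`, and the function names are assumptions, not the GoMSS implementation (which separates scales at a 90 km cutoff on 2-D fields).

```python
def lowpass(x, window=5):
    """Crude large-scale estimate: centered moving average with clamped edges."""
    n = len(x)
    half = window // 2
    out = []
    for i in range(n):
        seg = x[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def spectral_nudge(regional, global_field, alpha=0.1, window=5):
    """Relax only the large scales of the regional field toward the global field.

    The small-scale residual (regional - lowpass(regional)) is left untouched,
    so internally generated eddies are not suppressed.
    """
    big_r = lowpass(regional, window)
    big_g = lowpass(global_field, window)
    return [r + alpha * (g - b) for r, g, b in zip(regional, big_g, big_r)]
```

In practice this update would be applied each time step inside the model integration, with the low-pass acting in wavenumber space rather than via a moving average.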
1989-05-16
development and is manifested today in the Operational Maneuver Group. As the name implies, the Soviet emphasis is at the operational level. The mission of...high-intensity war? To answer this question I (1) analyze Soviet deep operations theory to determine how their concept developed and what they expect...USA, 32 pages. In Soviet Army doctrine, deep operations has been a long time in development and is manifested today in the Operational Maneuver Group
Ziatdinov, Maxim; Dyck, Ondrej; Maksov, Artem; Li, Xufan; Sang, Xiahan; Xiao, Kai; Unocic, Raymond R; Vasudevan, Rama; Jesse, Stephen; Kalinin, Sergei V
2017-12-26
Recent advances in scanning transmission electron and scanning probe microscopies have opened exciting opportunities in probing materials' structural parameters and various functional properties in real space with angstrom-level precision. This progress has been accompanied by an exponential increase in the size and quality of data sets produced by microscopic and spectroscopic experimental techniques. These developments necessitate adequate methods for extracting relevant physical and chemical information from large data sets for which a priori information on the structures of various atomic configurations and lattice defects is limited or absent. Here we demonstrate an application of deep neural networks to extract information from atomically resolved images, including the location of the atomic species and the type of defects. We develop a "weakly supervised" approach that uses information on the coordinates of all atomic species in the image, extracted via a deep neural network, to identify a rich variety of defects that are not part of the initial training set. We further apply our approach to interpret complex atomic and defect transformations, including switching between different coordinations of silicon dopants in graphene as a function of time, the formation of a peculiar silicon dimer with mixed 3-fold and 4-fold coordination, and the motion of a molecular "rotor". This deep learning-based approach resembles the logic of a human operator, but can be scaled up, leading to a significant shift in the way information is extracted and analyzed from raw experimental data.
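The "weakly supervised" step above (atom coordinates in, defect labels out) can be caricatured with a simple geometric rule: once atom positions are known, atoms whose coordination number deviates from the ideal lattice value are flagged as defect candidates. This is a toy stand-in for the authors' network, with an assumed cutoff distance and expected coordination:

```python
import math

def coordination_numbers(coords, cutoff=1.5):
    """Count neighbors within `cutoff` of each 2-D atom position."""
    counts = []
    for i, (xi, yi) in enumerate(coords):
        n = sum(1 for j, (xj, yj) in enumerate(coords)
                if i != j and math.hypot(xi - xj, yi - yj) <= cutoff)
        counts.append(n)
    return counts

def flag_defects(coords, expected=4, cutoff=1.5):
    """Indices of atoms whose coordination deviates from the ideal value."""
    return [i for i, n in enumerate(coordination_numbers(coords, cutoff))
            if n != expected]
```

On a perfect interior lattice every atom matches `expected`, so only vacancies, edges, and adatoms are flagged; a real analysis would also classify the flagged configurations.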
Veta, Mitko; Johannes van Diest, Paul; van Ginneken, Bram; Karssemeijer, Nico; Litjens, Geert; van der Laak, Jeroen A. W. M.; Hermsen, Meyke; Manson, Quirine F; Balkenhol, Maschenka; Geessink, Oscar; Stathonikos, Nikolaos; van Dijk, Marcory CRF; Bult, Peter; Beca, Francisco; Beck, Andrew H; Wang, Dayong; Khosla, Aditya; Gargeya, Rishab; Irshad, Humayun; Zhong, Aoxiao; Dou, Qi; Li, Quanzheng; Chen, Hao; Lin, Huang-Jing; Heng, Pheng-Ann; Haß, Christian; Bruni, Elia; Wong, Quincy; Halici, Ugur; Öner, Mustafa Ümit; Cetin-Atalay, Rengul; Berseth, Matt; Khvatkov, Vitali; Vylegzhanin, Alexei; Kraus, Oren; Shaban, Muhammad; Rajpoot, Nasir; Awan, Ruqayya; Sirinukunwattana, Korsuk; Qaiser, Talha; Tsang, Yee-Wah; Tellez, David; Annuscheit, Jonas; Hufnagl, Peter; Valkonen, Mira; Kartasalo, Kimmo; Latonen, Leena; Ruusuvuori, Pekka; Liimatainen, Kaisa; Albarqouni, Shadi; Mungal, Bharti; George, Ami; Demirci, Stefanie; Navab, Nassir; Watanabe, Seiryo; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo; Ahmady Phoulady, Hady; Kovalev, Vassili; Kalinovsky, Alexander; Liauchuk, Vitali; Bueno, Gloria; Fernandez-Carrobles, M. Milagro; Serrano, Ismael; Deniz, Oscar; Racoceanu, Daniel; Venâncio, Rui
2017-01-01
Importance Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists’ diagnoses in a diagnostic setting. Design, Setting, and Participants Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. 
The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). Conclusions and Relevance In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting. PMID:29234806
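AUC figures like those above have a direct probabilistic reading: the chance that a randomly chosen positive slide receives a higher score than a randomly chosen negative one (ties counting half). A minimal pure-Python computation of that statistic (function and argument names are illustrative):

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: P(positive scores above negative),
    with ties counted as 1/2, normalized by n_pos * n_neg."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A perfectly separating algorithm scores 1.0; random scoring hovers around 0.5, which frames the reported range of 0.556 to 0.994.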
NASA Astrophysics Data System (ADS)
Ladevèze, P.; Séjourné, S.; Rivard, C.; Lavoie, D.; Lefebvre, R.; Rouleau, A.
2018-03-01
In the St. Lawrence sedimentary platform (eastern Canada), very little data are available between shallow fresh water aquifers and deep geological hydrocarbon reservoir units (here referred to as the intermediate zone). Characterization of this intermediate zone is crucial, as the latter controls aquifer vulnerability to operations carried out at depth. In this paper, the natural fracture networks in shallow aquifers and in the Utica shale gas reservoir are documented in an attempt to indirectly characterize the intermediate zone. This study used structural data from outcrops, shallow observation well logs and deep shale gas well logs to propose a conceptual model of the natural fracture network. Shallow and deep fractures were categorized into three sets of steeply-dipping fractures and into a set of bedding-parallel fractures. Some lithological and structural controls on fracture distribution were identified. The regional geologic history and similarities between the shallow and deep fracture datasets allowed the extrapolation of the fracture network characterization to the intermediate zone. This study thus highlights the benefits of using both datasets simultaneously, while they are generally interpreted separately. Recommendations are also proposed for future environmental assessment studies in which the existence of preferential flow pathways and potential upward fluid migration toward shallow aquifers need to be identified.
Becker, A S; Blüthgen, C; Phi van, V D; Sekaggya-Wiltshire, C; Castelnuovo, B; Kambugu, A; Fehr, J; Frauenfelder, T
2018-03-01
To evaluate the feasibility of Deep Learning-based detection and classification of pathological patterns in a set of digital photographs of chest X-ray (CXR) images of tuberculosis (TB) patients. In this prospective, observational study, patients with previously diagnosed TB were enrolled. Photographs of their CXRs were taken using a consumer-grade digital still camera. The images were stratified by pathological patterns into classes: cavity, consolidation, effusion, interstitial changes, miliary pattern or normal examination. Image analysis was performed with commercially available Deep Learning software in two steps. Pathological areas were first localised; detected areas were then classified. Detection was assessed using receiver operating characteristics (ROC) analysis, and classification using a confusion matrix. The study cohort was 138 patients with human immunodeficiency virus (HIV) and TB co-infection (median age 34 years, IQR 28-40); 54 patients were female. Localisation of pathological areas was excellent (area under the ROC curve 0.82). The software could perfectly distinguish pleural effusions from intraparenchymal changes. The most frequent misclassifications were consolidations as cavitations, and miliary patterns as interstitial patterns (and vice versa). Deep Learning analysis of CXR photographs is a promising tool. Further efforts are needed to build larger, high-quality data sets to achieve better diagnostic performance.
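The two-step evaluation described above scores detection with ROC analysis and classification with a confusion matrix. The confusion-matrix step is a standard construction; a minimal sketch, with hypothetical pattern labels standing in for the study's classes:

```python
def confusion_matrix(true_labels, pred_labels, classes):
    """Build a matrix with rows = true class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(true_labels, pred_labels):
        m[idx[t]][idx[p]] += 1
    return m
```

Off-diagonal cells expose systematic misclassifications, e.g. consolidations predicted as cavitations in the study above.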
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-18
... number of vessels that engaged in both deep-setting and shallow-setting ranged from 17 to 35. The number... shallow-set and deep-set longlining. Based on an average of 127 active vessels during that period, the... analyses prepared for the 2009 rule are: (3) Shallow-set longline fishing for swordfish (for deep-setting...
Yasaka, Koichiro; Akai, Hiroyuki; Abe, Osamu; Kiryu, Shigeru
2018-03-01
Purpose To investigate diagnostic performance by using a deep learning method with a convolutional neural network (CNN) for the differentiation of liver masses at dynamic contrast agent-enhanced computed tomography (CT). Materials and Methods This clinical retrospective study used CT image sets of liver masses over three phases (noncontrast-agent enhanced, arterial, and delayed). Masses were diagnosed according to five categories (category A, classic hepatocellular carcinomas [HCCs]; category B, malignant liver tumors other than classic and early HCCs; category C, indeterminate masses or mass-like lesions [including early HCCs and dysplastic nodules] and rare benign liver masses other than hemangiomas and cysts; category D, hemangiomas; and category E, cysts). Supervised training was performed by using 55 536 image sets obtained in 2013 (from 460 patients, 1068 sets were obtained and they were augmented by a factor of 52 [rotated, parallel-shifted, strongly enlarged, and noise-added images were generated from the original images]). The CNN was composed of six convolutional, three maximum pooling, and three fully connected layers. The CNN was tested with 100 liver mass image sets obtained in 2016 (74 men and 26 women; mean age, 66.4 years ± 10.6 [standard deviation]; mean mass size, 26.9 mm ± 25.9; 21, nine, 35, 20, and 15 liver masses for categories A, B, C, D, and E, respectively). Training and testing were performed five times. Accuracy for categorizing liver masses with the CNN model and the area under the receiver operating characteristic curve for differentiating categories A-B versus categories C-E were calculated. Results Median accuracy of differential diagnosis of liver masses for test data was 0.84. Median area under the receiver operating characteristic curve for differentiating categories A-B from C-E was 0.92. Conclusion Deep learning with CNN showed high diagnostic performance in differentiation of liver masses at dynamic CT.
© RSNA, 2017 Online supplemental material is available for this article.
Brown, James M; Campbell, J Peter; Beers, Andrew; Chang, Ken; Ostmo, Susan; Chan, R V Paul; Dy, Jennifer; Erdogmus, Deniz; Ioannidis, Stratis; Kalpathy-Cramer, Jayashree; Chiang, Michael F
2018-05-02
Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The decision to treat is primarily based on the presence of plus disease, defined as dilation and tortuosity of retinal vessels. However, clinical diagnosis of plus disease is highly subjective and variable. To implement and validate an algorithm based on deep learning to automatically diagnose plus disease from retinal photographs. A deep convolutional neural network was trained using a data set of 5511 retinal photographs. Each image was previously assigned a reference standard diagnosis (RSD) based on consensus of image grading by 3 experts and clinical diagnosis by 1 expert (ie, normal, pre-plus disease, or plus disease). The algorithm was evaluated by 5-fold cross-validation and tested on an independent set of 100 images. Images were collected from 8 academic institutions participating in the Imaging and Informatics in ROP (i-ROP) cohort study. The deep learning algorithm was tested against 8 ROP experts, each of whom had more than 10 years of clinical experience and more than 5 peer-reviewed publications about ROP. Data were collected from July 2011 to December 2016. Data were analyzed from December 2016 to September 2017. A deep learning algorithm trained on retinal photographs. Receiver operating characteristic analysis was performed to evaluate performance of the algorithm against the RSD. Quadratic-weighted κ coefficients were calculated for ternary classification (ie, normal, pre-plus disease, and plus disease) to measure agreement with the RSD and 8 independent experts. Of the 5511 included retinal photographs, 4535 (82.3%) were graded as normal, 805 (14.6%) as pre-plus disease, and 172 (3.1%) as plus disease, based on the RSD. Mean (SD) area under the receiver operating characteristic curve statistics were 0.94 (0.01) for the diagnosis of normal (vs pre-plus disease or plus disease) and 0.98 (0.01) for the diagnosis of plus disease (vs normal or pre-plus disease). 
For diagnosis of plus disease in an independent test set of 100 retinal images, the algorithm achieved a sensitivity of 93% with 94% specificity. For detection of pre-plus disease or worse, the sensitivity and specificity were 100% and 94%, respectively. On the same test set, the algorithm achieved a quadratic-weighted κ coefficient of 0.92 compared with the RSD, outperforming 6 of 8 ROP experts. This fully automated algorithm diagnosed plus disease in ROP with comparable or better accuracy than human experts. This has potential applications in disease detection, monitoring, and prognosis in infants at risk of ROP.
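The quadratic-weighted κ reported above measures agreement on an ordinal scale (normal, pre-plus, plus), penalizing each disagreement by the squared distance between categories. A minimal sketch, assuming labels are coded as integers 0..n_classes-1:

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Cohen's kappa with quadratic weights for ordinal labels.

    1.0 = perfect agreement; 0.0 = chance-level agreement.
    """
    n = len(rater_a)
    # joint distribution of the two raters' labels
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    hist_a = [sum(row) for row in obs]
    hist_b = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic penalty
            num += w * obs[i][j]
            den += w * hist_a[i] * hist_b[j]  # expected under independence
    return 1.0 - num / den
```

Because the penalty grows quadratically, confusing plus with normal costs four times as much as confusing plus with pre-plus.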
Condensation of atmospheric moisture from tropical maritime air masses as a freshwater resource.
Gerard, R D; Worzel, J L
1967-09-15
A method is proposed whereby potable water may be obtained by condensing moisture from the atmosphere in suitable seashore or island areas. Deep, cold, offshore seawater is used as a source of cold and is pumped to condensers set up on shore to intercept the flow of highly humid, tropical, maritime air masses. This air, when cooled, condenses moisture, which is conducted away and stored for use as a water supply. Windmill-driven generators would supply low-cost power for the operation. Side benefits are derived by using the nutritious deep water to support aquiculture in nearby lagoons or to enhance the productivity of the outfall area. Additional benefits are derived from the condenser as an air-conditioning device for nearby residents. The islands of the Caribbean are used as an example of a location in the trade-winds belt where nearly optimum conditions for the operation of this system can be found.
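The yield of such a condenser can be roughed out with standard psychrometrics: the moisture removable per cubic meter of air is the absolute humidity of the incoming maritime air minus that of saturated air at the condenser temperature. The sketch below uses the Magnus approximation for saturation vapor pressure; the constants and function names are textbook values, not figures from the 1967 paper:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Saturation vapor pressure in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def absolute_humidity(temp_c, rh):
    """Water vapor density in g/m^3 at temperature (C) and relative humidity (0-1)."""
    e_pa = rh * saturation_vapor_pressure(temp_c) * 100.0  # hPa -> Pa
    return 1000.0 * e_pa / (461.5 * (temp_c + 273.15))     # R_v = 461.5 J/(kg K)

def condensate_per_m3(air_temp_c, rh, condenser_temp_c):
    """Grams of water removable per m^3 of air cooled to the condenser temperature,
    assuming the outflow leaves saturated at that temperature."""
    before = absolute_humidity(air_temp_c, rh)
    after = absolute_humidity(condenser_temp_c, 1.0)
    return max(0.0, before - after)
```

For warm, humid trade-wind air cooled by deep seawater (e.g. 28 C at 85% RH down to about 10 C), this gives on the order of 10 g of water per cubic meter of air processed.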
Deployment of the Oklahoma borehole seismic experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harben, P.E.; Rock, D.W.
1989-01-20
This paper discusses the Oklahoma borehole seismic experiment, currently in operation, set up by members of the Lawrence Livermore National Laboratory Treaty Verification Program and the Oklahoma Geophysical Observatory to determine deep-borehole seismic characteristics in geology typical of large regions in the Soviet Union. We evaluated and logged an existing 772-m deep borehole on the Observatory site by running caliper, cement bonding, casing inspection, and hole-deviation logs. Two Teledyne Geotech borehole-clamping seismometers were placed at various depths and spacings in the deep borehole. Currently, they are deployed at 727 and 730 m. A Teledyne Geotech shallow-borehole seismometer was mounted in a 4.5-m hole, one meter from the deep borehole. The seismometers' system coherency was tested and found to be excellent to 35 Hz. We have recorded seismic noise, quarry blasts, regional earthquakes and teleseisms in the present configuration. We will begin a study of seismic noise and attenuation as a function of depth in the near future. 7 refs., 18 figs.
Deep Space Network equipment performance, reliability, and operations management information system
NASA Technical Reports Server (NTRS)
Cooper, T.; Lin, J.; Chatillon, M.
2002-01-01
The Deep Space Mission System (DSMS) Operations Program Office and the Deep Space Network (DSN) facilities utilize the Discrepancy Reporting Management System (DRMS) to collect, process, communicate and manage data discrepancies, equipment resets, physical equipment status, and to maintain an internal Station Log. A collaborative development effort between JPL and the Canberra Deep Space Communication Complex delivered a system to support DSN Operations.
Hernandez, Arnaldo José; Almeida, Adriano Marques de; Fávaro, Edmar; Sguizzato, Guilherme Turola
2012-09-01
To evaluate the association between tourniquet and total operative time during total knee arthroplasty and the occurrence of deep vein thrombosis. Seventy-eight consecutive patients from our institution underwent cemented total knee arthroplasty for degenerative knee disorders. The pneumatic tourniquet time and total operative time were recorded in minutes. Four categories were established for total tourniquet time: <60, 61 to 90, 91 to 120, and >120 minutes. Three categories were defined for operative time: <120, 121 to 150, and >150 minutes. Between 7 and 12 days after surgery, the patients underwent ascending venography to evaluate the presence of distal or proximal deep vein thrombosis. We evaluated the association between the tourniquet time and total operative time and the occurrence of deep vein thrombosis after total knee arthroplasty. In total, 33 cases (42.3%) were positive for deep vein thrombosis; 13 (16.7%) cases involved the proximal type. We found no statistically significant difference in tourniquet time or operative time between patients with or without deep vein thrombosis. We did observe a higher frequency of proximal deep vein thrombosis in patients who underwent surgery lasting longer than 120 minutes. The mean total operative time was also higher in patients with proximal deep vein thrombosis. The tourniquet time did not significantly differ in these patients. We concluded that surgery lasting longer than 120 minutes increases the risk of proximal deep vein thrombosis.
Large deep neural networks for MS lesion segmentation
NASA Astrophysics Data System (ADS)
Prieto, Juan C.; Cavallari, Michele; Palotai, Miklos; Morales Pinzon, Alfredo; Egorova, Svetlana; Styner, Martin; Guttmann, Charles R. G.
2017-02-01
Multiple sclerosis (MS) is a multi-factorial autoimmune disorder, characterized by spatial and temporal dissemination of brain lesions that are visible in T2-weighted and Proton Density (PD) MRI. Assessment of lesion burden is useful for monitoring the course of the disease and assessing correlates of clinical outcomes. Although there are established semi-automated methods to measure lesion volume, most of them require human interaction and editing, which are time consuming and limit the ability to analyze large sets of data with high accuracy. The primary objective of this work is to improve existing segmentation algorithms and accelerate the time consuming operation of identifying and validating MS lesions. In this paper, a Deep Neural Network for MS Lesion Segmentation is implemented. The MS lesion samples are extracted from the Partners Comprehensive Longitudinal Investigation of Multiple Sclerosis (CLIMB) study. A set of 900 subjects with T2, PD and manually corrected label map images were used to train a Deep Neural Network to identify MS lesions. Initial tests using this network achieved a 90% accuracy rate. A secondary goal was to enable this data repository for big data analysis by using this algorithm to segment the remaining cases available in the CLIMB repository.
Heat exchanger expert system logic
NASA Technical Reports Server (NTRS)
Cormier, R.
1988-01-01
The reduction is described of the operation and fault diagnostics of a Deep Space Network heat exchanger to a rule base by the application of propositional calculus to a set of logic statements. The value of this approach lies in the ease of converting the logic and subsequently implementing it on a computer as an expert system. The rule base was written in Process Intelligent Control software.
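The reduction of diagnostics to a rule base via propositional logic can be sketched as forward chaining over if-then rules: facts are asserted, and rules fire until no new conclusions appear. The heat-exchanger rules below are hypothetical stand-ins, not the actual DSN rule base (which was written in Process Intelligent Control software):

```python
def forward_chain(facts, rules):
    """Apply (antecedents -> consequent) rules until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

# Hypothetical fragment of a heat-exchanger fault rule base.
RULES = [
    ({"pump_on", "no_flow"}, "blocked_line"),
    ({"blocked_line"}, "raise_alarm"),
]
```

The appeal noted in the abstract is exactly this: once the diagnostics are logic statements, the computer implementation is a direct transcription.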
Matching Matched Filtering with Deep Networks for Gravitational-Wave Astronomy
NASA Astrophysics Data System (ADS)
Gabbard, Hunter; Williams, Michael; Hayes, Fergus; Messenger, Chris
2018-04-01
We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. We use only whitened time series of measured gravitational-wave strain as an input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same data sets when considering the sensitivity defined by receiver-operator characteristics.
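The matched-filtering baseline the network is compared against can be illustrated with a sliding normalized correlation: the known template is slid along the data and the peak correlation serves as the detection statistic. This is a toy version; a real search whitens the data, works in the frequency domain, and maximizes over a template bank:

```python
import math

def matched_filter_peak(data, template):
    """Peak normalized cross-correlation of a template slid over the data.

    Both the template and each data segment are made zero-mean and unit-norm,
    so the statistic lies in [-1, 1]; 1.0 means an exact (scaled) match.
    """
    t_mean = sum(template) / len(template)
    t = [v - t_mean for v in template]
    t_norm = math.sqrt(sum(v * v for v in t)) or 1.0
    t = [v / t_norm for v in t]
    best = 0.0
    for i in range(len(data) - len(template) + 1):
        seg = data[i:i + len(template)]
        s_mean = sum(seg) / len(seg)
        s = [v - s_mean for v in seg]
        s_norm = math.sqrt(sum(v * v for v in s)) or 1.0
        corr = sum(a * b / s_norm for a, b in zip(s, t))
        best = max(best, corr)
    return best
```

Thresholding this peak, and sweeping the threshold, traces out the receiver-operator characteristic the paper uses for its comparison.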
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from the filter scarcity problem: not all available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results unveil that the 2-FFC operation alleviates the filter scarcity problem. The 2-FFC descriptors are also evidenced to be superior to those of PCANet, BSIF, and other face descriptors, in terms of rank-1 identification rate (%).
Eo, Taejoon; Jun, Yohan; Kim, Taeseong; Jang, Jinseong; Lee, Ho-Joon; Hwang, Dosik
2018-04-06
To demonstrate accurate MR image reconstruction from undersampled k-space data using cross-domain convolutional neural networks (CNNs). METHODS: Cross-domain CNNs consist of 3 components: (1) a deep CNN operating on the k-space (KCNN), (2) a deep CNN operating on an image domain (ICNN), and (3) interleaved data consistency operations. These components are alternately applied, and each CNN is trained to minimize the loss between the reconstructed and corresponding fully sampled k-spaces. The final reconstructed image is obtained by forward-propagating the undersampled k-space data through the entire network. Performances of K-net (KCNN with inverse Fourier transform), I-net (ICNN with interleaved data consistency), and various combinations of the 2 different networks were tested. The test results indicated that K-net and I-net have different advantages/disadvantages in terms of tissue-structure restoration. Consequently, the combination of K-net and I-net is superior to single-domain CNNs. Three MR data sets, the T2 fluid-attenuated inversion recovery (T2-FLAIR) set from the Alzheimer's Disease Neuroimaging Initiative and 2 data sets acquired at our local institute (T2-FLAIR and T1-weighted), were used to evaluate the performance of 7 conventional reconstruction algorithms and the proposed cross-domain CNNs, which hereafter is referred to as KIKI-net. KIKI-net outperforms conventional algorithms with mean improvements of 2.29 dB in peak SNR and 0.031 in structure similarity. KIKI-net exhibits superior performance over state-of-the-art conventional algorithms in terms of restoring tissue structures and removing aliasing artifacts. The results demonstrate that KIKI-net is applicable up to a reduction factor of 3 to 4 based on variable-density Cartesian undersampling. © 2018 International Society for Magnetic Resonance in Medicine.
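The peak-SNR metric behind the reported 2.29 dB improvement is a standard image-quality measure: the log ratio of the squared peak intensity to the mean squared error between reference and reconstruction. A minimal sketch over flattened pixel lists (the peak value of 1.0 is an assumed normalization):

```python
import math

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

A gain of 2.29 dB corresponds to roughly a 40% reduction in mean squared error, which is what the claimed improvement over conventional algorithms amounts to.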
NASA Astrophysics Data System (ADS)
Etnoyer, P. J.; Salgado, E.; Stierhoff, K.; Wickes, L.; Nehasil, S.; Kracker, L.; Lauermann, A.; Rosen, D.; Caldow, C.
2015-12-01
Southern California's deep-sea corals are diverse and abundant, but subject to multiple stressors, including corallivory, ocean acidification, and commercial bottom fishing. NOAA has surveyed these habitats using a remotely operated vehicle (ROV) since 2003. The ROV was equipped with high-resolution cameras to document deep-water groundfish and their habitat in a series of research expeditions from 2003 - 2011. Recent surveys 2011-2015 focused on in-situ measures of aragonite saturation and habitat mapping in notable habitats identified in previous years. Surveys mapped abundance and diversity of fishes and corals, as well as commercial fisheries landings and frequency of fishing gear. A novel priority setting algorithm was developed to identify hotspots of diversity and fishing intensity, and to determine where future conservation efforts may be warranted. High density coral aggregations identified in these analyses were also used to guide recent multibeam mapping efforts. The maps suggest a large extent of unexplored and unprotected hard-bottom habitat in the mesophotic zone and deep-sea reaches of Channel Islands National Marine Sanctuary.
Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni; ...
2015-05-13
The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter at nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate imaging data into physically and chemically relevant information.
Robotic autopositioning of the operating microscope.
Oppenlander, Mark E; Chowdhry, Shakeel A; Merkl, Brandon; Hattendorf, Guido M; Nakaji, Peter; Spetzler, Robert F
2014-06-01
Use of the operating microscope has become pervasive since its introduction to the neurosurgical world. Neuronavigation fused with the operating microscope has allowed accurate correlation of the focal point of the microscope and its location on the downloaded imaging study. However, the robotic ability of the Pentero microscope has not been utilized to orient the angle of the microscope or to change its focal length to hone in on a predefined target. To report a novel technology that allows automatic positioning of the operating microscope onto a set target and utilization of a planned trajectory, either determined with the StealthStation S7 by using preoperative imaging or intraoperatively with the microscope. By utilizing the current motorized capabilities of the Zeiss OPMI Pentero microscope, a robotic autopositioning feature was developed in collaboration with Surgical Technologies, Medtronic, Inc. (StealthStation S7). The system is currently being tested at the Barrow Neurological Institute. Three options were developed for automatically positioning the microscope: AutoLock Current Point, Align Parallel to Plan, and Point to Plan Target. These options allow the microscope to pivot around the lesion, hover in a set plane parallel to the determined trajectory, or rotate and point to a set target point, respectively. Integration of automatic microscope positioning into the operative workflow has potential to increase operative efficacy and safety. This technology is best suited for precise trajectories and entry points into deep-seated lesions.
Investigation of Lithium Metal Hydride Materials for Mitigation of Deep Space Radiation
NASA Technical Reports Server (NTRS)
Rojdev, Kristina; Atwell, William
2016-01-01
Radiation exposure to crew, electronics, and non-metallic materials is one of many concerns with long-term, deep space travel. Mitigating this exposure is approached via a multi-faceted methodology focusing on multi-functional materials, vehicle configuration, and operational or mission constraints. In this set of research, we are focusing on new multi-functional materials that may have advantages over traditional shielding materials, such as polyethylene. Metal hydride materials are of particular interest for deep space radiation shielding due to their ability to store hydrogen, a low-Z material known to be an excellent radiation mitigator and a potential fuel source. We have previously investigated 41 different metal hydrides for their radiation mitigation potential. Of these metal hydrides, we found a set of lithium hydrides to be of particular interest due to their excellent shielding of galactic cosmic radiation. Given these results, we will continue our investigation of lithium hydrides by expanding our data set to include dose equivalent and to further understand why these materials outperformed polyethylene in a heavy ion environment. For this study, we used HZETRN 2010, a one-dimensional transport code developed by NASA Langley Research Center, to simulate radiation transport through the lithium hydrides. We focused on the 1977 solar minimum Galactic Cosmic Radiation environment and thicknesses of 1, 5, 10, 20, 30, 50, and 100 g/cm2 to stay consistent with our previous studies. The details of this work and the subsequent results will be discussed in this paper.
NASA Astrophysics Data System (ADS)
Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind
2017-02-01
Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. 
We observed correlation between the RCM videos/video-mosaics and the corresponding histology in 32 lesions. Peri-operative RCM imaging shows promise for improved and faster detection of cancer margins and guiding MMS in the surgical setting.
Autonomous Science Operations Technologies for Deep Space Gateway
NASA Astrophysics Data System (ADS)
Barnes, P. K.; Haddock, A. T.; Cruzen, C. A.
2018-02-01
Autonomous Science Operations Technologies for Deep Space Gateway (DSG) is an overview of how the DSG would benefit from autonomous systems utilizing proven technologies performing telemetry monitoring and science operations.
The Intelligent Technologies of Electronic Information System
NASA Astrophysics Data System (ADS)
Li, Xianyu
2017-08-01
Based upon the synopsis of system intelligence and information services, this paper puts forward the attributes and the logic structure of information service, sets forth intelligent technology framework of electronic information system, and presents a series of measures, such as optimizing business information flow, advancing data decision capability, improving information fusion precision, strengthening deep learning application and enhancing prognostic and health management, and demonstrates system operation effectiveness. This will benefit the enhancement of system intelligence.
Documentation of a deep percolation model for estimating ground-water recharge
Bauer, H.H.; Vaccaro, J.J.
1987-01-01
A deep percolation model, which operates on a daily basis, was developed to estimate long-term average groundwater recharge from precipitation. It has been designed primarily to simulate recharge in large areas with variable weather, soils, and land uses, but it can also be used at any scale. The physical and mathematical concepts of the deep percolation model, its subroutines and data requirements, and input data sequence and formats are documented. The physical processes simulated are soil moisture accumulation, evaporation from bare soil, plant transpiration, surface water runoff, snow accumulation and melt, and accumulation and evaporation of intercepted precipitation. The minimum data sets for the operation of the model are daily values of precipitation and maximum and minimum air temperature, soil thickness and available water capacity, soil texture, and land use. Long-term average annual precipitation, actual daily stream discharge, monthly estimates of base flow, Soil Conservation Service surface runoff curve numbers, land surface altitude-slope-aspect, and temperature lapse rates are optional. The program is written in the FORTRAN 77 language with no enhancements and should run on most computer systems without modifications. Documentation has been prepared so that program modifications may be made for inclusions of additional physical processes or deletion of ones not considered important. (Author's abstract)
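The daily bookkeeping such a model performs can be illustrated with a single-bucket soil-moisture sketch, in which water above the available water capacity becomes deep percolation (recharge). This is a drastically simplified, hypothetical stand-in for the documented model, which also handles snow, interception, and surface runoff:

```python
def daily_water_balance(precip, pet, capacity, sm0=0.0):
    """Toy daily soil-moisture bucket (all quantities in the same depth units).

    Each day: add precipitation, remove actual evapotranspiration
    (limited by available moisture), and treat any water above the
    available water capacity as deep percolation below the root zone.
    """
    sm = sm0
    recharge = []
    for p, e in zip(precip, pet):
        sm += p
        sm -= min(e, sm)               # actual ET limited by stored moisture
        perc = max(0.0, sm - capacity)
        sm -= perc                     # excess percolates to recharge
        recharge.append(perc)
    return recharge, sm
```

Summing the returned daily recharge over a multi-year simulation gives the long-term average recharge the documented model is designed to estimate.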
Theory of Semiconducting Superlattices and Microstructures
1992-03-01
The theory elucidates the various factors affecting deep levels, sets forth the conditions for obtaining shallow-deep transitions, and predicts that Si (a...). The anion vacancy can be thought of as originating from Column O of the Periodic Table.
Impact of Space Transportation System on planetary spacecraft and missions design
NASA Technical Reports Server (NTRS)
Barnett, P. M.
1975-01-01
Results of Jet Propulsion Laboratory (JPL) activities to define and understand alternatives for planetary spacecraft operations with the Space Transportation System (STS) are summarized. The STS presents a set of interfaces, operational alternatives, and constraints in the prelaunch, launch, and near-earth flight phases of a mission. Shuttle-unique features are defined and coupled with JPL's existing program experience to begin development of operationally efficient alternatives, concepts, and methods for STS-launched missions. The time frame considered begins with the arrival of the planetary spacecraft at Kennedy Space Center and includes prelaunch ground operations, Shuttle-powered flight, and near-earth operations, up to acquisition of the spacecraft signal by the Deep Space Network. The areas selected for study within this time frame were generally chosen because they represent the 'driving conditions' on planetary-mission as well as system design and operations.
NASA Technical Reports Server (NTRS)
1979-01-01
Deep Space Network progress in flight project support, tracking and data acquisition, research and technology, network engineering, hardware and software implementation, and operations is cited. Topics covered include: tracking and ground based navigation; spacecraft/ground communication; station control and operations technology; ground communications; and deep space stations.
MOS 2.0: Modeling the Next Revolutionary Mission Operations System
NASA Technical Reports Server (NTRS)
Delp, Christopher L.; Bindschadler, Duane; Wollaeger, Ryan; Carrion, Carlos; McCullar, Michelle; Jackson, Maddalena; Sarrel, Marc; Anderson, Louise; Lam, Doris
2011-01-01
Designed and implemented in the 1980s, the Advanced Multi-Mission Operations System (AMMOS) was a breakthrough for deep-space NASA missions, enabling significant reductions in the cost and risk of implementing ground systems. By designing a framework for use across multiple missions and adaptability to specific mission needs, AMMOS developers created a set of applications that have operated dozens of deep-space robotic missions over the past 30 years. We seek to leverage advances in technology and practice of architecting and systems engineering, using model-based approaches to update the AMMOS. We therefore revisit fundamental aspects of the AMMOS, resulting in a major update to the Mission Operations System (MOS): MOS 2.0. This update will ensure that the MOS can support an increasing range of mission types (such as orbiters, landers, rovers, penetrators, and balloons), and that the operations systems for deep-space robotic missions can reap the benefits of an iterative multi-mission framework. This paper reports on the first phase of this major update. Here we describe the methods and formal semantics used to address MOS 2.0 architecture and some early results. Early benefits of this approach include improved stakeholder input and buy-in, the ability to articulate and focus effort on key, system-wide principles, and efficiency gains obtained by use of well-architected design patterns and the use of models to improve the quality of documentation and decrease the effort required to produce and maintain it. We find that such methods facilitate reasoning, simulation, and analysis of the system design in terms of design impacts, generation of products (e.g., project-review and software-delivery products), and use of formal process descriptions to enable goal-based operations. This initial phase yields a forward-looking and principled MOS 2.0 architectural vision, which considers both the mission-specific context and long-term system sustainability.
Looney, Pádraig; Stevenson, Gordon N; Nicolaides, Kypros H; Plasencia, Walter; Molloholli, Malid; Natsis, Stavros; Collins, Sally L
2018-06-07
We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Image analysis tools to estimate organ volume do exist but are too time-consuming and operator-dependent. Fully automating the segmentation process would potentially allow the use of placental volume to screen for increased risk of pregnancy complications. The placenta was segmented from 2,393 first trimester 3D-US volumes using a semiautomated technique. This was quality controlled by three operators to produce the "ground-truth" data set. A fully convolutional neural network (OxNNet) was trained using this ground-truth data set to automatically segment the placenta. OxNNet delivered state-of-the-art automatic segmentation. The effect of training set size on the performance of OxNNet demonstrated the need for large data sets. The clinical utility of placental volume was tested by looking at predictions of small-for-gestational-age (SGA) babies at term. The receiver-operating characteristics curves demonstrated almost identical results between OxNNet and the ground-truth. Our results demonstrated good similarity to the ground-truth and almost identical clinical results for the prediction of SGA.
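The clinical comparison above rests on receiver-operating characteristic (ROC) curves; the area under such a curve can be computed directly from predicted scores via the rank-sum (Mann-Whitney U) identity. A minimal sketch, illustrative only and not the authors' implementation:

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half) -- the Mann-Whitney U identity."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Two ROC curves being "almost identical", as reported for OxNNet versus the ground-truth, corresponds to near-equal AUC values computed this way from the two sets of scores.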
Operability engineering in the Deep Space Network
NASA Technical Reports Server (NTRS)
Wilkinson, Belinda
1993-01-01
Many operability problems exist at the three Deep Space Communications Complexes (DSCCs) of the Deep Space Network (DSN). Four years ago, the position of DSN Operability Engineer was created to provide the opportunity for someone to take a system-level approach to solving these problems. Since that time, a process has been developed for operations personnel and development engineers and for enforcing user interface standards in software designed for the DSCCs. Plans call for the participation of operations personnel in the product life cycle to expand in the future.
Development of a prototype real-time automated filter for operational deep space navigation
NASA Technical Reports Server (NTRS)
Masters, W. C.; Pollmeier, V. M.
1994-01-01
Operational deep space navigation has been in the past, and is currently, performed using systems whose architecture requires constant human supervision and intervention. A prototype for a system which allows relatively automated processing of radio metric data received in near real-time from NASA's Deep Space Network (DSN) without any redesign of the existing operational data flow has been developed. This system can allow for more rapid response as well as much reduced staffing to support mission navigation operations.
Statistical process control in Deep Space Network operation
NASA Technical Reports Server (NTRS)
Hodder, J. A.
2002-01-01
This report describes how the Deep Space Mission System (DSMS) Operations Program Office at the Jet Propulsion Laboratory (JPL) uses Statistical Process Control (SPC) to monitor performance and evaluate initiatives for improving processes on the National Aeronautics and Space Administration's (NASA) Deep Space Network (DSN).
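At its core, SPC monitoring of a performance metric compares new observations against control limits derived from historical variation, conventionally the mean plus or minus three standard deviations. A minimal individuals-chart sketch (illustrative only; it says nothing about the specific DSN metrics or tooling):

```python
def control_limits(samples, k=3.0):
    """Shewhart-style limits for an individuals chart: mean +/- k*sigma."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = (sum((x - mean) ** 2 for x in samples) / (n - 1)) ** 0.5
    return mean - k * sigma, mean + k * sigma

def out_of_control(history, new_points, k=3.0):
    """Flag new observations that fall outside the historical limits."""
    lo, hi = control_limits(history, k)
    return [x for x in new_points if x < lo or x > hi]
```

A process-improvement initiative would then be judged by whether the monitored metric's mean shifts, or its spread narrows, after the change.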
SPRUCE Whole Ecosystems Warming (WEW) Environmental Data Beginning August 2015
Hanson, P. J. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Riggs, J. S. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Nettles, W. R. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Krassovski, M. B. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.; Hook, L. A. [Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.
2016-01-01
This data set provides the environmental measurements collected during the implementation of operational methods to achieve both deep soil heating (0-3 m) and whole-ecosystem warming (WEW) appropriate to the scale of tall-stature, high-carbon, boreal forest peatlands. The methods were developed to allow scientists to provide a plausible set of ecosystem warming scenarios within which immediate and longer term (one decade) responses of organisms (microbes to trees) and ecosystem functions (carbon, water and nutrient cycles) could be measured. Elevated CO2 was also incorporated to test how temperature responses may be modified by atmospheric CO2 effects on carbon cycle processes.
NASA Technical Reports Server (NTRS)
1975-01-01
The objectives, functions, and organization of the Deep Space Network are summarized along with deep space station, ground communication, and network operations control capabilities. Mission support of ongoing planetary/interplanetary flight projects is discussed with emphasis on Viking orbiter radio frequency compatibility tests, the Pioneer Venus orbiter mission, and Helios-1 mission status and operations. Progress is also reported in tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations.
Iris Transponder-Communications and Navigation for Deep Space
NASA Technical Reports Server (NTRS)
Duncan, Courtney B.; Smith, Amy E.; Aguirre, Fernando H.
2014-01-01
The Jet Propulsion Laboratory has developed the Iris CubeSat compatible deep space transponder for INSPIRE, the first CubeSat to deep space. Iris is 0.4 U, 0.4 kg, consumes 12.8 W, and interoperates with NASA's Deep Space Network (DSN) on X-Band frequencies (7.2 GHz uplink, 8.4 GHz downlink) for command, telemetry, and navigation. This talk discusses the Iris for INSPIRE, its features and requirements; future developments and improvements underway; deep space and proximity operations applications for Iris; high rate earth orbit variants; and ground requirements, such as are implemented in the DSN, for deep space operations.
De novo transcriptome assembly and positive selection analysis of an individual deep-sea fish.
Lan, Yi; Sun, Jin; Xu, Ting; Chen, Chong; Tian, Renmao; Qiu, Jian-Wen; Qian, Pei-Yuan
2018-05-24
High hydrostatic pressure and low temperatures make the deep sea a harsh environment for life forms. Actin organization and microtubules assembly, which are essential for intracellular transport and cell motility, can be disrupted by high hydrostatic pressure. High hydrostatic pressure can also damage DNA. Nucleic acids exposed to low temperatures can form secondary structures that hinder genetic information processing. To study how deep-sea creatures adapt to such a hostile environment, one of the most straightforward ways is to sequence and compare their genes with those of their shallow-water relatives. We captured an individual of the fish species Aldrovandia affinis, which is a typical deep-sea inhabitant, from the Okinawa Trough at a depth of 1550 m using a remotely operated vehicle (ROV). We sequenced its transcriptome and analyzed its molecular adaptation. We obtained 27,633 protein coding sequences using an Illumina platform and compared them with those of several shallow-water fish species. Analysis of 4918 single-copy orthologs identified 138 positively selected genes in A. affinis, including genes involved in microtubule regulation. Particularly, functional domains related to cold shock as well as DNA repair are exposed to positive selection pressure in both deep-sea fish and hadal amphipod. Overall, we have identified a set of positively selected genes related to cytoskeleton structures, DNA repair and genetic information processing, which shed light on molecular adaptation to the deep sea. These results suggest that amino acid substitutions of these positively selected genes may contribute crucially to the adaptation of deep-sea animals. Additionally, we provide a high-quality transcriptome of a deep-sea fish for future deep-sea studies.
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient, and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNNs) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar type plots indicates when models are inferior or perhaps overtrained. 
These results also suggest the need for assessing deep learning further using multiple metrics with much larger scale comparisons, prospective testing as well as assessment of different fingerprints and DNN architectures beyond those used.
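The "ranked normalized scores" comparison can be reproduced in outline: min-max normalize each metric across the competing methods, then order the methods by mean normalized score. A hypothetical sketch with made-up numbers, not the paper's data:

```python
def rank_methods(scores):
    """scores: {method: {metric: value}}. Min-max normalize each metric
    across methods, then rank methods by mean normalized score (best first)."""
    metrics = next(iter(scores.values())).keys()
    norm = {m: {} for m in scores}
    for metric in metrics:
        vals = [scores[m][metric] for m in scores]
        lo, hi = min(vals), max(vals)
        for m in scores:
            # Methods tied on a constant metric all get the midpoint score.
            norm[m][metric] = (scores[m][metric] - lo) / (hi - lo) if hi > lo else 0.5
    mean = {m: sum(norm[m].values()) / len(norm[m]) for m in norm}
    return sorted(mean, key=mean.get, reverse=True)
```

Normalizing first keeps a metric with a wide numeric range (e.g., kappa) from dominating one with a narrow range (e.g., AUC) in the aggregate ranking.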
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny
2018-02-01
We propose a cross-domain, multi-task transfer learning framework to transfer knowledge learned from non-medical images by a deep convolutional neural network (DCNN) to a medical image recognition task while improving the generalization by multi-task learning of auxiliary tasks. A first stage cross-domain transfer learning was initiated from ImageNet trained DCNN to mammography trained DCNN. 19,632 regions-of-interest (ROIs) from 2,454 mass lesions were collected from two imaging modalities: digitized-screen film mammography (SFM) and full-field digital mammography (DM), and split into training and test sets. In the multi-task transfer learning, the DCNN learned the mass classification task simultaneously from the training set of SFM and DM. The best transfer network for mammography was selected from three transfer networks with different numbers of convolutional layers frozen. The performance of single-task and multi-task transfer learning on an independent SFM test set in terms of the area under the receiver operating characteristic curve (AUC) was 0.78+/-0.02 and 0.82+/-0.02, respectively. In the second stage cross-domain transfer learning, a set of 12,680 ROIs from 317 mass lesions on DBT were split into validation and independent test sets. We first studied the data requirements for the first stage mammography trained DCNN by varying the mammography training data from 1% to 100% and evaluated its learning on the DBT validation set in inference mode. We found that the entire available mammography set provided the best generalization. The DBT validation set was then used to train only the last four fully connected layers, resulting in an AUC of 0.90+/-0.04 on the independent DBT test set.
Burlina, Philippe M; Joshi, Neil; Pekala, Michael; Pacheco, Katia D; Freund, David E; Bressler, Neil M
2017-11-01
Age-related macular degeneration (AMD) affects millions of people throughout the world. The intermediate stage may go undetected, as it typically is asymptomatic. However, the preferred practice patterns for AMD recommend identifying individuals with this stage of the disease to educate how to monitor for the early detection of the choroidal neovascular stage before substantial vision loss has occurred and to consider dietary supplements that might reduce the risk of the disease progressing from the intermediate to the advanced stage. Identification, though, can be time-intensive and requires expertly trained individuals. To develop methods for automatically detecting AMD from fundus images using a novel application of deep learning methods to the automated assessment of these images and to leverage artificial intelligence advances. Deep convolutional neural networks that are explicitly trained for performing automated AMD grading were compared with an alternate deep learning method that used transfer learning and universal features and with a trained clinical grader. Age-related macular degeneration automated detection was applied to a 2-class classification problem in which the task was to distinguish the disease-free/early stages from the referable intermediate/advanced stages. Using several experiments that entailed different data partitioning, the performance of the machine algorithms and human graders in evaluating over 130 000 images that were deidentified with respect to age, sex, and race/ethnicity from 4613 patients against a gold standard included in the National Institutes of Health Age-related Eye Disease Study data set was evaluated. Accuracy, receiver operating characteristics and area under the curve, and kappa score. 
The deep convolutional neural network method yielded accuracy (SD) that ranged between 88.4% (0.5%) and 91.6% (0.1%), the area under the receiver operating characteristic curve was between 0.94 and 0.96, and the kappa coefficient (SD) was between 0.764 (0.010) and 0.829 (0.003), which indicated substantial agreement with the gold standard Age-related Eye Disease Study data set. Applying a deep learning-based automated assessment of AMD from fundus images can produce results that are similar to human performance levels. This study demonstrates that automated algorithms could play a role that is independent of expert human graders in the current management of AMD and could address the costs of screening or monitoring, access to health care, and the assessment of novel treatments that address the development or progression of AMD.
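The kappa coefficient reported here measures agreement beyond chance between the algorithm and the gold standard and is straightforward to compute from paired labels. An illustrative sketch, not the study's evaluation code:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences: observed
    agreement corrected for the agreement expected by chance."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (po - pe) / (1 - pe)
```

Values in the 0.61-0.80 band are conventionally read as "substantial" agreement, which is why kappas of 0.764-0.829 support the study's conclusion.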
A mission operations architecture for the 21st century
NASA Technical Reports Server (NTRS)
Tai, W.; Sweetnam, D.
1996-01-01
An operations architecture is proposed for low cost missions beyond the year 2000. The architecture consists of three elements: a service based architecture; a demand access automata; and distributed science hubs. The service based architecture is based on a set of standard multimission services that are defined, packaged and formalized by the deep space network and the advanced multi-mission operations system. The demand access automata is a suite of technologies which reduces the need to be in contact with the spacecraft, and thus reduces operating costs. The beacon signaling, the virtual emergency room, and the high efficiency tracking automata technologies are described. The distributed science hubs provide information system capabilities to the small science oriented flight teams: individual access to all traditional mission functions and services; multimedia intra-team communications, and automated direct transparent communications between the scientists and the instrument.
The Synergistic Engineering Environment
NASA Technical Reports Server (NTRS)
Cruz, Jonathan
2006-01-01
The Synergistic Engineering Environment (SEE) is a system of software dedicated to aiding the understanding of space mission operations. The SEE can integrate disparate sets of data with analytical capabilities, geometric models of spacecraft, and a visualization environment, all contributing to the creation of an interactive simulation of spacecraft. Initially designed to satisfy needs pertaining to the International Space Station, the SEE has been broadened in scope to include spacecraft ranging from those in low orbit around the Earth to those on deep-space missions. The SEE includes analytical capabilities in rigid-body dynamics, kinematics, orbital mechanics, and payload operations. These capabilities enable a user to perform real-time interactive engineering analyses focusing on diverse aspects of operations, including flight attitudes and maneuvers, docking of visiting spacecraft, robotic operations, impingement of spacecraft-engine exhaust plumes, obscuration of instrumentation fields of view, communications, and alternative assembly configurations.
NASA Astrophysics Data System (ADS)
Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.
2018-05-01
In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.
Planning for Crew Exercise for Future Deep Space Mission Scenarios
NASA Technical Reports Server (NTRS)
Moore, Cherice; Ryder, Jeff
2015-01-01
Providing the necessary exercise capability to protect crew health for deep space missions will bring new sets of engineering and research challenges. Exercise has been found to be a necessary mitigation for maintaining crew health on-orbit and preparing the crew for return to earth's gravity. Health and exercise data from Apollo, Space Lab, Shuttle, and International Space Station missions have provided insight into crew deconditioning and the types of activities that can minimize the impacts of microgravity on the physiological systems. The hardware systems required to implement exercise can be challenging to incorporate into spaceflight vehicles. Exercise system design requires encompassing the hardware required to provide mission specific anthropometrical movement ranges, desired loads, and frequencies of desired movements as well as the supporting control and monitoring systems, crew and vehicle interfaces, and vibration isolation and stabilization subsystems. The number of crew and operational constraints also contribute to defining what exercise systems will be needed. All of these features require flight vehicle mass and volume integrated with multiple vehicle systems. The International Space Station exercise hardware requires over 1,800 kg of equipment and over 24 m3 of volume for hardware and crew operational space. Improvements towards providing equivalent or better capabilities with a smaller vehicle impact will facilitate future deep space missions. Deep space missions will require more understanding of the physiological responses to microgravity, understanding appropriate mitigations, designing the exercise systems to provide needed mitigations, and integrating effectively into vehicle design with a focus to support planned mission scenarios. Recognizing and addressing the constraints and challenges can facilitate improved vehicle design and exercise system incorporation.
Deep rock nuclear waste disposal test: design and operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klett, Robert D.
1974-09-01
An electrically heated test of nuclear waste simulants in granitic rock was conducted to demonstrate the feasibility of the concept of deep rock nuclear waste disposal and to obtain design data. This report describes the deep rock disposal systems study and the design and operation of the first concept feasibility test.
NASA deep space network operations planning and preparation
NASA Technical Reports Server (NTRS)
Jensen, W. N.
1982-01-01
The responsibilities and structural organization of the Operations Planning Group of NASA Deep Space Network (DSN) Operations are outlined. The Operations Planning group establishes an early interface with a user's planning organization to educate the user on DSN capabilities and limitations for deep space tracking support. A team of one or two individuals works through all phases of the spacecraft launch and also provides planning and preparation for specific events such as planetary encounters. Coordinating interface is also provided for nonflight projects such as radio astronomy and VLBI experiments. The group is divided into a Long Range Support Planning element and a Near Term Operations Coordination element.
... individuals. Deep brain stimulation uses a surgically implanted, battery-operated medical device called a neurostimulator to deliver ...
2013-08-21
CAPE CANAVERAL, Fla. – A set of maneuvering thrusters is seen prior to their installation into the Orion spacecraft being assembled by Lockheed Martin inside the Operations & Checkout Building's high bay at NASA's Kennedy Space Center. The spacecraft is being prepared for a test flight next year that calls for the Orion to fly without a crew on a mission to evaluate its systems and heat shield. The spacecraft is designed to carry astronauts into deep space and back safely. Photo credit: NASA/Charisse Nahsser
2013-08-21
CAPE CANAVERAL, Fla. – A technician works with a set of tanks prior to their installation into the Orion spacecraft being assembled by Lockheed Martin inside the Operations & Checkout Building's high bay at NASA's Kennedy Space Center. The spacecraft is being prepared for a test flight next year that calls for the Orion to fly without a crew on a mission to evaluate its systems and heat shield. The spacecraft is designed to carry astronauts into deep space and back safely. Photo credit: NASA/Charisse Nahsser
2013-08-21
CAPE CANAVERAL, Fla. – Technicians work with a set of maneuvering thrusters prior to their installation into the Orion spacecraft being assembled by Lockheed Martin inside the Operations & Checkout Building's high bay at NASA's Kennedy Space Center. The spacecraft is being prepared for a test flight next year that calls for the Orion to fly without a crew on a mission to evaluate its systems and heat shield. The spacecraft is designed to carry astronauts into deep space and back safely. Photo credit: NASA/Charisse Nahsser
Deep Throttle Turbopump Technology Testing
NASA Technical Reports Server (NTRS)
Ferguson, T. V.; Guinzburg, A.; McGlynn, R. D.; Williams, M.
2002-01-01
The objectives of this viewgraph presentation were to: (1) enhance and demonstrate critical technologies in support of planned RBCC flight test programs; and (2) obtain knowledge of wide flow range as it applies to liquid rocket engine turbopumps operating over extreme throttle ranges. This program was set up to demonstrate wide-flow-range diffuser technologies. The testing phase of the contract, intended to provide data to anchor initial designs, was partially successful. The data collected suggest that flow phenomena exist at off-design flow rates.
NASA Technical Reports Server (NTRS)
Roberts, Craig; Case, Sara; Reagoso, John; Webster, Cassandra
2015-01-01
The Deep Space Climate Observatory mission launched on February 11, 2015, and inserted onto a transfer trajectory toward a Lissajous orbit around the Sun-Earth L1 libration point. This paper presents an overview of the baseline transfer orbit and early mission maneuver operations leading up to the start of nominal science orbit operations. In particular, the analysis and performance of the spacecraft insertion, mid-course correction maneuvers, and the deep-space Lissajous orbit insertion maneuvers are discussed, comparing the baseline orbit with actual mission results and highlighting mission and operations constraints.
Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis.
van der Burgh, Hannelore K; Schmidt, Ruben; Westeneng, Henk-Jan; de Reus, Marcel A; van den Berg, Leonard H; van den Heuvel, Martijn P
2017-01-01
Amyotrophic lateral sclerosis (ALS) is a progressive neuromuscular disease, with large variation in survival between patients. Currently, it remains rather difficult to predict survival based on clinical parameters alone. Here, we set out to use clinical characteristics in combination with MRI data to predict survival of ALS patients using deep learning, a machine learning technique highly effective in a broad range of big-data analyses. A group of 135 ALS patients was included from whom high-resolution diffusion-weighted and T1-weighted images were acquired at the first visit to the outpatient clinic. Next, each of the patients was monitored carefully and survival time to death was recorded. Patients were labeled as short, medium or long survivors, based on their recorded time to death as measured from the time of disease onset. In the deep learning procedure, the total group of 135 patients was split into a training set for deep learning (n = 83 patients), a validation set (n = 20) and an independent evaluation set (n = 32) to evaluate the performance of the obtained deep learning networks. Deep learning based on clinical characteristics predicted survival category correctly in 68.8% of the cases. Deep learning based on MRI predicted 62.5% correctly using structural connectivity and 62.5% using brain morphology data. Notably, when we combined the three sources of information, deep learning prediction accuracy increased to 84.4%. Taken together, our findings show the added value of MRI with respect to predicting survival in ALS, demonstrating the advantage of deep learning in disease prognostication.
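The split-train-evaluate workflow described above can be sketched with a small feed-forward classifier. The sketch below uses NumPy on synthetic three-class data standing in for the short/medium/long survival labels; the architecture, learning rate, and data are illustrative assumptions, not the paper's actual network or cohort.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the cohort: three survival classes with
# class-dependent feature means (loosely separable, purely illustrative).
n_per, d = 60, 10
X = np.vstack([rng.normal(m, 1.0, (n_per, d)) for m in (-1.0, 0.0, 1.0)])
y = np.repeat([0, 1, 2], n_per)

# Train/validation/test split, mirroring the paper's three-way partition.
idx = rng.permutation(len(y))
tr, va, te = idx[:110], idx[110:140], idx[140:]

# One hidden layer with softmax output, trained by plain full-batch SGD.
W1 = rng.normal(0, 0.1, (d, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 3)); b2 = np.zeros(3)

def forward(Xb):
    h = np.tanh(Xb @ W1 + b1)
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

lr = 0.1
for epoch in range(200):
    h, p = forward(X[tr])
    g = p.copy(); g[np.arange(len(tr)), y[tr]] -= 1; g /= len(tr)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h**2)          # backprop through tanh
    gW1 = X[tr].T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

acc = (forward(X[te])[1].argmax(1) == y[te]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

On real multimodal data, one such network per feature source (clinical, connectivity, morphology) could be combined, echoing the paper's finding that fusing sources improves accuracy.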
Shell appraising deepwater discovery off Philippines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherer, M.; Lambers, E.J.T.; Steffens, G.S.
1993-05-10
Shell International Petroleum Co. Ltd. negotiated a farmout in 1990 from Occidental International Exploration and Production Co. for Block SC-38 in the South China Sea off Palawan, Philippines, following Oxy's discovery of gas in 1989 in a Miocene Nido limestone buildup. Under the terms of the farmout agreement, Shell became operator with a 50% share. Following the disappointing well North Iloc 1, Shell was successful in finding oil and gas in Malampaya 1. Water 700-1,000 m deep, remoteness, and adverse weather conditions have imposed major challenges for offshore operations. The paper describes the tectonic setting; the Nido limestone play; the Malampaya discovery; and Shell's appraisal studies.
The formation of Greenland Sea Deep Water: double diffusion or deep convection?
NASA Astrophysics Data System (ADS)
Clarke, R. Allyn; Swift, James H.; Reid, Joseph L.; Koltermann, K. Peter
1990-09-01
An examination of the extensive hydrographic data sets collected by C.S.S. Hudson and F.S. Meteor in the Norwegian and Greenland Seas during February-June 1982 reveals property distributions and circulation patterns broadly similar to those seen in earlier data sets. These data sets, however, reveal the even stronger role played by topography, with evidence of separate circulation patterns and separate water masses in each of the deep basins. The high-precision temperature, salinity and oxygen data obtained reveal significant differences in the deep and bottom waters found in the various basins of the Norwegian and Greenland Seas. A comparison of the 1982 data set with earlier sets shows that the renewal of Greenland Sea Deep Water must have taken place sometime over the last decade; however, there is no evidence that deep convective renewal of any of the deep and bottom waters in this region was taking place at the time of the observations. The large-scale density fields, however, do suggest that deep convection to the bottom is most likely to occur in the Greenland Basin due to its deep cyclonic circulation. The hypothesis that Greenland Sea Deep Water (GSDW) is formed through diapycnal mixing processes acting on the warm salty core of Atlantic Water entering the Greenland Sea is examined. θ-S correlations and oxygen concentrations suggest that the salinity maxima in the Greenland Sea are the product of at least two separate mixing processes, not the hypothesized single mixing process leading to GSDW. A simple one-dimensional mixed-layer model with ice growth and decay demonstrates that convective renewal of GSDW would have occurred within the Greenland Sea had the winter been a little more severe. The new GSDW produced would have only 0.003 less salt and less than 0.04 ml l⁻¹ greater oxygen concentration than that already in the basin.
Consequently, detection of whether new deep water has been produced following a winter cooling season could be difficult even with the best of modern accuracy.
Merged and corrected 915 MHz Radar Wind Profiler moments
Jonathan Helmus,Virendra Ghate, Frederic Tridon
2014-06-25
The radar wind profiler (RWP) at the SGP central facility operates at 915 MHz and was reconfigured in early 2011 to collect key sets of measurements for precipitation and boundary layer studies. The RWP is configured to run in two main operating modes: a precipitation (PR) mode with frequent vertical observations and a boundary layer (BL) mode similar to what has traditionally been applied to RWPs. To address issues regarding saturation of the radar signal, range resolution and maximum range, the RWP PR mode is set to operate with two different pulse lengths, termed short pulse (SP) and long pulse (LP). Please refer to the RWP handbook (Coulter, 2012) for further information. Data from the RWP PR-SP and PR-LP modes have been used extensively to study deep precipitating clouds, especially their dynamical structure, as the RWP data do not suffer from signal attenuation under these conditions (Giangrande et al., 2013). Tridon et al. (2013) used the data collected during the Mid-latitude Continental Convective Cloud Experiment (MC3E) to improve the estimation of the noise floor of the RWP-recorded Doppler spectra.
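A simple way to merge the two pulse modes is to take the fine-resolution short-pulse moments at low ranges and the long-pulse moments above, interpolated onto one common grid. The sketch below illustrates that idea with toy linear reflectivity profiles; the range grids, values, and transition height are hypothetical, not the actual product algorithm.

```python
import numpy as np

# Hypothetical range grids: short pulse has fine resolution but limited
# maximum range; long pulse is coarser but reaches higher.
sp_range = np.arange(0.2, 5.0, 0.06)   # km
lp_range = np.arange(0.5, 10.0, 0.2)   # km
sp_refl = -20 + 3 * sp_range           # dBZ, toy profile
lp_refl = -20 + 3 * lp_range

def merge_profiles(sp_r, sp_v, lp_r, lp_v, transition=4.0):
    """Use short-pulse moments below `transition` (km) and long-pulse
    moments above, both interpolated onto the union of the two grids."""
    grid = np.union1d(sp_r, lp_r)
    out = np.where(grid < transition,
                   np.interp(grid, sp_r, sp_v),
                   np.interp(grid, lp_r, lp_v))
    return grid, out

grid, merged = merge_profiles(sp_range, sp_refl, lp_range, lp_refl)
```

Because both toy profiles follow the same linear law, the merged profile is seamless across the transition; with real moments the transition height would be chosen from the saturation and sensitivity characteristics of each mode.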
High fungal diversity and abundance recovered in the deep-sea sediments of the Pacific Ocean.
Xu, Wei; Pang, Ka-Lai; Luo, Zhu-Hua
2014-11-01
Knowledge about the presence and ecological significance of bacteria and archaea in deep-sea environments is well established, but eukaryotic microorganisms such as fungi have rarely been reported. The present study investigated the composition and abundance of the fungal community in deep-sea sediments of the Pacific Ocean. A total of 1,947 internal transcribed spacer (ITS) regions of fungal rRNA gene clones were recovered from five sediment samples from the Pacific Ocean (water depths ranging from 5,017 to 6,986 m) using three different PCR primer sets. There were 16, 17, and 15 different operational taxonomic units (OTUs) identified from the fungal-universal, Ascomycota-, and Basidiomycota-specific clone libraries, respectively. The majority of the recovered sequences belonged to diverse phylotypes of Ascomycota (25 phylotypes) and Basidiomycota (18 phylotypes). The multiple-primer approach recovered 27 phylotypes in total that showed low similarities (≤97%) with available fungal sequences in GenBank, suggesting possible new fungal taxa occurring in deep-sea environments or taxa not represented in GenBank. Our results also recovered high fungal LSU rRNA gene copy numbers (3.52 × 10⁶ to 5.23 × 10⁷ copies/g wet sediment) from the Pacific Ocean sediment samples, suggesting that fungi might be involved in important ecological functions in deep-sea environments.
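The OTU definition used above (sequences clustered at a similarity cutoff, conventionally 97% for ITS data) can be sketched as greedy centroid clustering. The identity function below is a crude positional match, a stand-in for the alignment-based scores real OTU pickers use; the sequences are toy examples.

```python
def identity(a, b):
    """Fraction of matching positions; a crude stand-in for an
    alignment-based identity score."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def pick_otus(seqs, threshold=0.97):
    """Greedy centroid clustering: each sequence joins the first OTU whose
    representative it matches at >= threshold, else it founds a new OTU."""
    reps, otus = [], []
    for s in seqs:
        for i, r in enumerate(reps):
            if identity(s, r) >= threshold:
                otus[i].append(s)
                break
        else:
            reps.append(s)
            otus.append([s])
    return otus

base = "ACGT" * 25                 # 100 bp toy reference
seqs = [base,
        base,                      # identical -> same OTU
        base[:-8] + "TTTTTTTT"]    # ~6% divergent -> new OTU
otus = pick_otus(seqs)
print(len(otus))
```

Greedy clustering of this kind is order-dependent, which is one reason published pipelines sort sequences (e.g. by abundance) before clustering.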
Data-driven discovery of Koopman eigenfunctions using deep learning
NASA Astrophysics Data System (ADS)
Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.
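The core idea, finding coordinates in which the nonlinear dynamics evolve linearly, can be illustrated on a standard test system where such coordinates are known in closed form. The sketch below swaps the paper's learned neural-network encoder for a fixed polynomial dictionary and fits the linear Koopman matrix by least squares from trajectory data alone; the system and parameters are the usual textbook example, not the paper's experiments.

```python
import numpy as np

# Test system: x1' = mu*x1, x2' = lam*(x2 - x1^2).  In the lifted
# coordinates y = (x1, x2, x1^2) the (Euler-discretized) dynamics are
# exactly linear, so y_{k+1} = K y_k for some matrix K.
mu, lam, dt = -0.05, -1.0, 0.1

def step(x):
    x1, x2 = x
    return np.array([x1 + dt * mu * x1,
                     x2 + dt * lam * (x2 - x1**2)])

# Collect a trajectory (data-driven: no derivatives or equations used
# by the fit itself).
X = [np.array([1.0, 0.5])]
for _ in range(200):
    X.append(step(X[-1]))
X = np.array(X)

lift = lambda x: np.array([x[0], x[1], x[0]**2])
Y = np.array([lift(x) for x in X])

# Least-squares Koopman matrix on the lifted states.
K, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)

# Advance the initial lifted state purely linearly and compare with the
# true nonlinear trajectory.
y = Y[0]
for _ in range(200):
    y = y @ K
err = np.abs(y[:2] - X[-1]).max()
print(f"max state error after 200 linear steps: {err:.2e}")
```

The paper's contribution is learning the lifting map itself (plus its inverse) when no such dictionary is known a priori; here the dictionary is hand-picked so the linear fit is essentially exact.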
Edge systems in the deep ocean
NASA Astrophysics Data System (ADS)
Coon, Andrew; Earp, Samuel L.
2010-04-01
DARPA has initiated a program to explore persistent presence in the deep ocean. The deep ocean is difficult to access and presents a hostile environment. Persistent operations in the deep ocean will require new technology for energy, communications and autonomous operations. Several fundamental characteristics of the deep ocean shape any potential system architecture. The deep sea presents acoustic sensing opportunities that may provide significantly enhanced sensing footprints relative to sensors deployed at traditional depths. Communication limitations drive solutions towards autonomous operation of the platforms and automation of data collection and processing. Access to the seabed presents an opportunity for fixed infrastructure with no important limitations on size and weight. Difficult access and persistence impose requirements for long-life energy sources and potentially energy harvesting. The ocean is immense, so there is a need to scale the system footprint for presence over tens of thousands and perhaps hundreds of thousands of square nautical miles. This paper focuses on the aspect of distributed sensing, and the engineering of networks of sensors to cover the required footprint.
The JPL roadmap for Deep Space navigation
NASA Technical Reports Server (NTRS)
Martin-Mur, Tomas J.; Abraham, Douglas S.; Berry, David; Bhaskaran, Shyam; Cesarone, Robert J.; Wood, Lincoln
2006-01-01
This paper reviews the tentative set of deep space missions that will be supported by NASA's Deep Space Mission System in the next twenty-five years, and extracts the driving set of navigation capabilities that these missions will require. There will be many challenges including the support of new mission navigation approaches such as formation flying and rendezvous in deep space, low-energy and low-thrust orbit transfers, precise landing and ascent vehicles, and autonomous navigation. Innovative strategies and approaches will be needed to develop and field advanced navigation capabilities.
Area Estimation of Deep-Sea Surfaces from Oblique Still Images
Souto, Miguel; Afonso, Andreia; Calado, António; Madureira, Pedro; Campos, Aldino
2015-01-01
Estimating the area of seabed surfaces from pictures or videos is an important problem in seafloor surveys. This task is difficult with moving platforms such as submersibles, towed vehicles, or remotely operated vehicles (ROVs), where the recording camera is typically not static and provides an oblique view of the seafloor. A new method for obtaining seabed surface area estimates is presented here, using the classical setup of two laser devices fixed to the ROV frame projecting two parallel lines over the seabed. By combining lengths measured directly from the image containing the laser lines, the area of seabed surfaces is estimated, as well as the camera's distance to the seabed and its pan and tilt angles. The only parameters required are the distance between the parallel laser lines and the camera's horizontal and vertical angles of view. The method was validated with a controlled in situ experiment using a deep-sea ROV, yielding an area estimate error of 1.5%. Further applications and generalizations of the method are discussed, with emphasis on deep-sea applications. PMID:26177287
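The underlying scaling principle can be shown in its simplest form: the known physical separation of the two laser lines fixes the image scale, from which the frame's seabed footprint follows. This toy sketch assumes a flat seabed viewed at nadir, whereas the paper's method additionally recovers distance, pan, and tilt for oblique views; all numbers here are hypothetical.

```python
# Two parallel laser lines a known physical distance apart give the
# metres-per-pixel scale at the seabed; the full-frame footprint then
# follows from the image dimensions.  Flat-seabed, nadir-view sketch.
laser_sep_m = 0.10            # physical separation of the laser lines (m)
sep_px = 80.0                 # measured separation in the image (pixels)
img_w_px, img_h_px = 1920, 1080

scale = laser_sep_m / sep_px                       # metres per pixel
area_m2 = (img_w_px * scale) * (img_h_px * scale)  # frame footprint (m^2)
print(f"estimated footprint: {area_m2:.3f} m^2")
```

For an oblique camera the scale varies across the image, which is why the published method works from the measured lengths and convergence of the projected laser lines rather than a single separation value.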
Evidence-based management of deep wound infection after spinal instrumentation.
Lall, Rishi R; Wong, Albert P; Lall, Rohan R; Lawton, Cort D; Smith, Zachary A; Dahdaleh, Nader S
2015-02-01
In this study, evidence-based medicine is used to assess optimal surgical and medical management of patients with post-operative deep wound infection following spinal instrumentation. A computerized literature search of the PubMed database was performed. Twenty pertinent studies were identified. Studies were separated into publications addressing instrumentation retention versus removal and publications addressing antibiotic therapy regimen. The findings were classified based on level of evidence (I-III) and findings were summarized into evidentiary tables. No level I or II evidence was identified. With regards to surgical management, five studies support instrumentation retention in the setting of early deep infection. In contrast, for delayed infection, the evidence favors removal of instrumentation at the time of initial debridement. Surgeons should be aware that for deformity patients, even if solid fusion is observed, removal of instrumentation may be associated with significant loss of correction. A course of intravenous antibiotics followed by long-term oral suppressive therapy should be pursued if instrumentation is retained. A shorter treatment course may be appropriate if hardware is removed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Complications of deep brain stimulation: a collective review.
Chan, Danny T M; Zhu, Xian Lun; Yeung, Jonas H M; Mok, Vincent C T; Wong, Edith; Lau, Clara; Wong, Rosanna; Lau, Christine; Poon, Wai S
2009-10-01
Since the first deep brain stimulation (DBS) performed for a movement disorder more than a decade ago, DBS has become a standard operation for advanced Parkinson's disease. Its indications are expanding to dystonia, psychiatric conditions and refractory epilepsy. With this, a new set of DBS-related complications has arisen, and many teams have reported a slow learning curve for this complication-prone operation. We investigated the complications arising from 100 DBS electrode insertions and their prevention. We performed an audit of all DBS patients in our centre from 1997 to 2008 for operation-related complications. Complications were classified as operation-related, hardware-related and stimulation-related. Operation-related complications included intracranial haemorrhages and electrode malposition. Hardware-related complications included fracture of electrodes, electrode migration, infection and erosion. Stimulation-related complications included sensorimotor conditions, psychiatric conditions and life-threatening conditions. From 1997 to the end of 2008, 100 DBS electrodes were inserted in 55 patients for movement disorders, mostly for Parkinson's disease (50 patients). There was one symptomatic cerebral haemorrhage (1%) and two electrode malpositions (2%). Meticulous surgical planning, use of a microdriver and a reliable electrode anchorage device would minimise this group of complications. There were two electrode fractures, one electrode migration and one pulse-generator infection, giving a hardware-related complication rate of 5%. There were no sensorimotor or life-threatening complications in our group. However, three patients suffered from reversible psychiatric symptoms after DBS. DBS is, on the one hand, an effective surgical treatment for movement disorders. On the other hand, it is a complication-prone operation.
A dedicated "Movement Disorder Team" consisting of neurologists, neurophysiologists, functional neurosurgeons, neuropsychologists and nursing specialists is essential. Liaison among team members in peri-operative periods and postoperative care is the key to avoiding complications and having a successful patient outcome.
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected for these images. Next, a group of autoencoders is applied to obtain an initial distance metric in the different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. Then, we conduct alternating optimization to train the ranking model, which is used for the ranking of new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
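The weighted combination of per-modality distances at the heart of such a method can be sketched in a few lines. The autoencoder initialization and alternating optimization of Deep-MDML are replaced here by fixed illustrative weights, and the feature values are hypothetical.

```python
import math
import numpy as np

def multimodal_distance(a, b, weights):
    """Combine per-modality squared Euclidean distances with nonnegative
    weights (here assumed to sum to one).  a, b map modality name ->
    feature vector."""
    d2 = 0.0
    for modality, w in weights.items():
        diff = np.asarray(a[modality], float) - np.asarray(b[modality], float)
        d2 += w * float(diff @ diff)
    return math.sqrt(d2)

query     = {"visual": [1.0, 0.0], "click": [0.0, 0.0]}
candidate = {"visual": [0.0, 0.0], "click": [1.0, 0.0]}
weights   = {"visual": 0.7, "click": 0.3}   # hypothetical learned weights

d = multimodal_distance(query, candidate, weights)
print(f"combined distance: {d:.3f}")
```

Ranking then amounts to sorting candidate images by this combined distance to the query; in the full method the weights themselves are optimized against observed click-through ranking data.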
Deep Flare Net (DeFN) Model for Solar Flare Prediction
NASA Astrophysics Data System (ADS)
Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Ishii, M.
2018-05-01
We developed a solar flare prediction model using a deep neural network (DNN) named Deep Flare Net (DeFN). This model can calculate the probability of flares occurring in the following 24 hr in each active region, which is used to determine the most likely maximum classes of flares via a binary classification (e.g., ≥M class versus <M class).
Long-term viability of carbon sequestration in deep-sea sediments
NASA Astrophysics Data System (ADS)
Teng, Y.; Zhang, D.
2017-12-01
Sequestration of carbon dioxide in deep-sea sediments has been proposed for the long-term storage of anthropogenic CO2, due to the negative buoyancy effect and hydrate formation under conditions of high pressure and low temperature. However, the multi-physics process of injection and post-injection fate of CO2 and the feasibility of sub-seabed disposal of CO2 under different geological and operational conditions have not been well studied. On the basis of a detailed study of the coupled processes, we investigate whether storing CO2 into deep-sea sediments is viable, efficient, and secure over the long term. Also studied are the evolution of the multiphase and multicomponent flow and the impact of hydrate formation on storage efficiency during the upward migration of the injected CO2. It is shown that low buoyancy and high viscosity slow down the ascending plume and the forming of the hydrate cap effectively reduces the permeability and finally becomes an impermeable seal, thus limiting the movement of CO2 towards the seafloor. Different flow patterns at varied time scales are identified through analyzing the mass distribution of CO2 in different phases over time. Observed is the formation of a fluid inclusion, which mainly consists of liquid CO2 and is encapsulated by an impermeable hydrate film in the diffusion-dominated stage. The trapped liquid CO2 and CO2 hydrate finally dissolve into the pore water through diffusion of the CO2 component. Sensitivity analyses are performed on storage efficiency under variable geological and operational conditions. It is found that under a deep-sea setting, CO2 sequestration in intact marine sediments is generally safe and permanent.
Kostrzewa, Michael; Kara, Kerim; Rathmann, Nils; Tsagogiorgas, Charalambos; Henzler, Thomas; Schoenberg, Stefan O; Hohenberger, Peter; Diehl, Steffen J; Roessner, Eric D
2017-06-01
Minimally invasive resection of small, deep intrapulmonary lesions can be challenging due to the difficulty of localizing them during video-assisted thoracoscopic surgery (VATS). We report our preliminary results evaluating the feasibility of an image-guided, minimally invasive, 1-stop-shop approach for the resection of small, deep intrapulmonary lesions in a hybrid operating room (OR). Fifteen patients (5 men, 10 women; mean age, 63 years) with a total of 16 solitary, deep intrapulmonary nodules of unknown malignant status were identified for intraoperative wire marking. Patients were placed on the operating table for resection by VATS. A marking wire was placed within the lesion under 3D laser and fluoroscopic guidance using a cone beam computed tomography system. Then, wedge resection by VATS was performed in the same setting without repositioning the patient. Complete resection with adequate safety margins was confirmed for all lesions. Marking wire placement facilitated resection in 15 of 16 lesions. Eleven lesions proved to be malignant, either primary or secondary; 5 were benign. Mean lesion size was 7.7 mm; mean distance to the pleural surface was 15.1 mm (mean lesion depth-diameter ratio, 2.2). Mean procedural time for marking wire placement was 35 minutes; mean VATS duration was 36 minutes. Computed tomography-assisted thoracoscopic surgery is a new, safe, and effective procedure for minimally invasive resection of small, deeply localized intrapulmonary lesions. The benefits of computed tomography-assisted thoracoscopic surgery are 1. One-stop-shop procedure, 2. Lower risk for the patient (no patient relocation, no marking wire loss), and 3. No need to coordinate scheduling between the CT room and OR.
2. AERIAL VIEW, SHOWING GLENDALE ROAD BRIDGE WITHIN ITS SETTING ...
2. AERIAL VIEW, SHOWING GLENDALE ROAD BRIDGE WITHIN ITS SETTING AT GLENDALE ROAD CROSSING OF DEEP CREEK LAKE (PHOTOGRAPH BY RUTHVAN MORROW) - Glendale Road Bridge, Spanning Deep Creek Lake on Glendale Road, McHenry, Garrett County, MD
1. AERIAL VIEW, SHOWING GLENDALE ROAD BRIDGE WITHIN ITS SETTING ...
1. AERIAL VIEW, SHOWING GLENDALE ROAD BRIDGE WITHIN ITS SETTING AT GLENDALE ROAD CROSSING OF DEEP CREEK LAKE (PHOTOGRAPH BY RUTHVAN MORROW) - Glendale Road Bridge, Spanning Deep Creek Lake on Glendale Road, McHenry, Garrett County, MD
Deep Eutectic Solvents pretreatment of agro-industrial food waste.
Procentese, Alessandra; Raganati, Francesca; Olivieri, Giuseppe; Russo, Maria Elena; Rehmann, Lars; Marzocchella, Antonio
2018-01-01
Waste biomass from agro-food industries is a reliable and readily exploitable resource. From the circular economy point of view, using residues from these industries directly for the production of fuels and chemicals is a winning strategy, because it reduces environmental impact and cost and improves the eco-sustainability of production. The present paper reports recent results of deep eutectic solvent (DES) pretreatment on a selected group of the agro-industrial food wastes (AFWs) produced in Europe. In particular, apple residues, potato peels, coffee silverskin, and brewer's spent grains (BSGs) were pretreated with two DESs (choline chloride-glycerol and choline chloride-ethylene glycol) for fermentable sugar production. Pretreated biomass was enzymatically digested by commercial enzymes to produce fermentable sugars. Operating conditions of the DES pretreatment were varied over wide intervals: the solid-to-solvent ratio ranged between 1:8 and 1:32, and the temperature between 60 and 150 °C. The DES reaction time was set at 3 h. Optimal operating conditions were 3 h of pretreatment with choline chloride-glycerol at a 1:16 biomass-to-solvent ratio and 115 °C. Moreover, to assess the expected European amount of fermentable sugars from the investigated AFWs, a market analysis was carried out. The overall sugar production was about 217 kt yr⁻¹, the main fraction of which came from the hydrolysis of BSGs pretreated with choline chloride-glycerol DES at the optimal conditions. These results motivate deeper investigation of lignocellulosic biomass pretreatment using DESs. This new class of solvents is easy to prepare, biodegradable, and cheaper than ionic liquids, and it gave good results in terms of sugar release at mild operating conditions (time, temperature and pressure).
DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets
Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas
2016-01-01
Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de. PMID:27084938
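Since DeepBlue is accessed over XML-RPC, a client can be built with Python's standard library alone. Constructing the proxy below is a purely local operation; the commented calls sketch how a session might look, with method names taken from the DeepBlue user manual that should be re-checked there before use. No network request is made in this snippet.

```python
import xmlrpc.client

# XML-RPC endpoint from the paper's stated server address; the /xmlrpc
# path is an assumption to verify against the DeepBlue manual.
server = xmlrpc.client.ServerProxy(
    "http://deepblue.mpi-inf.mpg.de/xmlrpc", allow_none=True)

# Typical flow (network calls, deliberately left commented out):
# user_key = "anonymous_key"
# status, genomes = server.list_genomes(user_key)
# ...then select_experiments(...)/select_regions(...) to build a query,
# and get_regions(...) to download the filtered region set.
```

The XML-RPC protocol means the same calls are available from any language with an XML-RPC client, which is what allows DeepBlue to document a single API across programming environments.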
DSMS science operations concept
NASA Technical Reports Server (NTRS)
Connally, M. J.; Kuiper, T. B.
2001-01-01
The Deep Space Mission System (DSMS) Science Operations Concept describes the vision for enabling the use of the DSMS, particularly the Deep Space Network (DSN) for direct science observations in the areas of radio astronomy, planetary radar, radio science and VLBI.
NASA Astrophysics Data System (ADS)
Cordes, E. E.; Jones, D.; Levin, L. A.
2016-02-01
The oil and gas industry is one of the most active agents of the global industrialization of the deep sea. The wide array of impacts following the Deepwater Horizon oil spill highlighted the need for a systematic review of existing regulations both in US waters and internationally. Within different exclusive economic zones, there are a wide variety of regulations regarding the survey of deep-water areas prior to leasing and the acceptable set-back distances from vulnerable marine ecosystems once they are discovered. There are also varying mitigation strategies for accidental release of oil and gas, including active monitoring systems, temporary closings of oil and gas production, and marine protected areas. The majority of these regulations are based on previous studies of typical impacts from oil and gas drilling, rather than accidental releases. However, the probability of an accident from standard operations increases significantly with depth. The Oil & Gas working group of the Deep Ocean Stewardship Initiative is an international partnership of scientists, managers, non-governmental organizations, and industry professionals whose goal is to review existing regulations for the oil & gas industry and produce a best practices document to advise both developed and developing nations on their regulatory structure as energy development moves into deeper waters.
The deep space network, volume 7
NASA Technical Reports Server (NTRS)
1972-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Space Flight Operations Facility are described.
Singh, Harnarayan; Patir, Rana; Vaishya, Sandeep; Miglani, Rahul; Kaur, Amandeep
2018-06-01
Minimally invasive transportal resection of deep intracranial lesions has become a widely accepted surgical technique. Many disposable, mountable port systems are available for this purpose, such as the ViewSite Brain Access System. The objective of this study was to find a cost-effective substitute for these systems. Deep-seated brain lesions were treated with a port system made from disposable syringes. The syringe port could be inserted through minicraniotomies planned and placed with navigation. All deep-seated lesions, such as ventricular tumours, colloid cysts, deep-seated gliomas, and basal ganglia hemorrhages, were treated with this syringe port system and evaluated for safety, operative site hematomas, and blood loss. Sixty-two patients were operated on during the study period from January 2015 to July 2017, using this innovative syringe port system for deep-seated lesions of the brain. No operative site hematoma or contusions were seen along the port entry site and tract. The syringe port is a cost-effective and safe alternative to the costly disposable brain port systems, especially for neurosurgical setups in developing countries, for minimally invasive transportal resection of deep brain lesions. Copyright © 2018 Elsevier Inc. All rights reserved.
Betancur, Julian; Commandeur, Frederic; Motlagh, Mahsaw; Sharir, Tali; Einstein, Andrew J; Bokhari, Sabahat; Fish, Mathews B; Ruddy, Terrence D; Kaufmann, Philipp; Sinusas, Albert J; Miller, Edward J; Bateman, Timothy M; Dorbala, Sharmila; Di Carli, Marcelo; Germano, Guido; Otaki, Yuka; Tamarappoo, Balaji K; Dey, Damini; Berman, Daniel S; Slomka, Piotr J
2018-03-12
The study evaluated the automatic prediction of obstructive disease from myocardial perfusion imaging (MPI) by deep learning as compared with total perfusion deficit (TPD). Deep convolutional neural networks trained with a large multicenter population may provide improved prediction of per-patient and per-vessel coronary artery disease from single-photon emission computed tomography MPI. A total of 1,638 patients (67% men) without known coronary artery disease, undergoing stress 99mTc-sestamibi or tetrofosmin MPI with new-generation solid-state scanners in 9 different sites, with invasive coronary angiography performed within 6 months of MPI, were studied. Obstructive disease was defined as ≥70% narrowing of coronary arteries (≥50% for left main artery). Left ventricular myocardium was segmented using clinical nuclear cardiology software and verified by an expert reader. Stress TPD was computed using sex- and camera-specific normal limits. Deep learning was trained using raw and quantitative polar maps and evaluated for prediction of obstructive stenosis in a stratified 10-fold cross-validation procedure. A total of 1,018 (62%) patients and 1,797 of 4,914 (37%) arteries had obstructive disease. Area under the receiver-operating characteristic curve for disease prediction by deep learning was higher than for TPD (per patient: 0.80 vs. 0.78; per vessel: 0.76 vs. 0.73; p < 0.01). With the deep learning threshold set to the same specificity as TPD, per-patient sensitivity improved from 79.8% (TPD) to 82.3% (deep learning) (p < 0.05), and per-vessel sensitivity improved from 64.4% (TPD) to 69.8% (deep learning) (p < 0.01). Deep learning has the potential to improve automatic interpretation of MPI as compared with current clinical methods. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
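The sensitivity comparison reported above can be illustrated with a toy sketch (not the study's code; scores, labels, and names are made up): pick the deep-learning decision threshold that matches the reference method's specificity, then compare sensitivities at that operating point.

```python
def sens_spec(scores, labels, thr):
    """Sensitivity and specificity of the rule `score >= thr -> positive`."""
    tp = sum(s >= thr for s, y in zip(scores, labels) if y == 1)
    fn = sum(s < thr for s, y in zip(scores, labels) if y == 1)
    tn = sum(s < thr for s, y in zip(scores, labels) if y == 0)
    fp = sum(s >= thr for s, y in zip(scores, labels) if y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def threshold_at_specificity(scores, labels, target_spec):
    """Lowest observed score whose specificity reaches target_spec."""
    for thr in sorted(set(scores)):
        if sens_spec(scores, labels, thr)[1] >= target_spec:
            return thr
    return max(scores)
```

Matching specificities first makes the subsequent sensitivity comparison between the two methods a fair one, which is the point of the per-patient and per-vessel numbers quoted in the abstract.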
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie; Feng, Gang; Xu, Suhui; Wang, Shiqiang
2017-10-01
Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representation in various computer vision tasks. However, for remote scene classification, there are not sufficient images to train a very deep CNN from scratch. From two viewpoints of generalization power, we propose two promising kinds of deep CNNs for remote scenes and try to find whether deep CNNs need to be deep for remote scene classification. First, we transfer successful pretrained deep CNNs to remote scenes based on the theory that the depth of CNNs brings generalization power by learning an available hypothesis for finite data samples. Second, according to the opposite viewpoint that the generalization power of deep CNNs comes from massive memorization and shallow CNNs with enough neural nodes have perfect finite sample expressivity, we design a lightweight deep CNN (LDCNN) for remote scene classification. With five well-known pretrained deep CNNs, experimental results on two independent remote-sensing datasets demonstrate that transferred deep CNNs can achieve state-of-the-art results in an unsupervised setting. However, because of its shallow architecture, LDCNN cannot obtain satisfactory performance, whether in an unsupervised, semisupervised, or supervised setting. CNNs really do need depth to obtain general features for remote scenes. This paper also provides a baseline for applying deep CNNs to other remote sensing tasks.
High power laser downhole cutting tools and systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zediker, Mark S; Rinzler, Charles C; Faircloth, Brian O
Downhole cutting systems, devices and methods for utilizing 10 kW or more of laser energy transmitted deep into the earth with the suppression of associated nonlinear phenomena. Systems and devices for laser cutting operations within a borehole in the earth. These systems and devices can deliver high power laser energy down a deep borehole, while maintaining the high power to perform cutting operations in such boreholes deep within the earth.
NASA Human Spaceflight Architecture Team Cis-Lunar Analysis
NASA Technical Reports Server (NTRS)
Lupisella, M.; Bobskill, M. R.
2012-01-01
The Cis-Lunar Destination Team of NASA's Human Spaceflight Architecture Team (HAT) has been performing analyses of a number of cis-lunar locations to inform architecture development, transportation and destination elements definition, and operations. The cis-lunar domain is defined as that area of deep space under the gravitational influence of the earth-moon system, including a set of orbital locations (low earth orbit [LEO], geosynchronous earth orbit [GEO], highly elliptical orbits [HEO]); earth-moon libration or "Lagrange" points (EML1 through EML5, and in particular, EML1 and EML2); and low lunar orbit (LLO). We developed a set of cis-lunar mission concepts, defined by mission duration, pre-deployment, type of mission, and location, along with the associated activities, capabilities, and architecture implications. To date, we have produced two destination operations concepts based on present human space exploration architectural considerations. We have recently begun defining mission activities that could be conducted within an EML1 or EML2 facility.
NASA Technical Reports Server (NTRS)
Hall, Justin R.; Hastrup, Rolf C.; Bell, David J.
1992-01-01
The general support requirements of a typical SEI mission set, along with the mission operations objectives and related telecommunications, navigation, and information management (TNIM) support infrastructure options are described. Responsive system architectures and designs are proposed, including a Mars orbiting communications relay satellite system and a Mars-centered navigation capability for servicing all Mars missions. With the TNIM architecture as a basis, key elements of the microwave link design are proposed. The needed new technologies which enable these designs are identified, and current maturity is assessed.
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.; Bell, David J.
1992-06-01
The general support requirements of a typical SEI mission set, along with the mission operations objectives and related telecommunications, navigation, and information management (TNIM) support infrastructure options are described. Responsive system architectures and designs are proposed, including a Mars orbiting communications relay satellite system and a Mars-centered navigation capability for servicing all Mars missions. With the TNIM architecture as a basis, key elements of the microwave link design are proposed. The needed new technologies which enable these designs are identified, and current maturity is assessed.
Study on Revetment-Protected and Non-Bottom-Protected Plunge Pool of High Arch Dam
NASA Astrophysics Data System (ADS)
Yingkui, Wang; Quxiu, Cao; Fanhui, Kong
2018-05-01
Many high arch dams share the characteristics of high head, large discharge, and narrow river valley; the safety of energy dissipation has therefore always been a research focus in these hydro-projects. Statistically, trajectory-type energy dissipation is the most widely used in existing high arch dams, with a plunge pool usually set downstream of the dam body. However, the widely used protected plunge pool requires a large investment and has the disadvantage of complicated operation and maintenance. With the construction of concrete high arch dams in Southwest China, deep river overburden and deep water cushions at the dam sites are becoming a new characteristic of these hydro-projects. Accordingly, the deep water cushion can be exploited in the energy dissipation design, as in the "Revetment-Protected and Non-Bottom-Protected Plunge Pool", which offers a simpler design and a lower investment cost.
NASA Technical Reports Server (NTRS)
Vonroos, O. H.
1982-01-01
A theory of deep point defects imbedded in otherwise perfect semiconductor crystals is developed with the aid of pseudopotentials. The dominant short-range forces engendered by the impurity are sufficiently weakened in all cases where the cancellation theorem of the pseudopotential formalism is operative. Thus, effective-mass-like equations exhibiting local effective potentials derived from nonlocal pseudopotentials are shown to be valid for a large class of defects. A two-band secular determinant for the energy eigenvalues of deep defects is also derived from the set of integral equations which corresponds to the set of differential equations of the effective-mass type. Subsequently, the theory in its simplest form, is applied to the system Al(x)Ga(1-x)As:Se. It is shown that the one-electron donor level of Se within the forbidden gap of Al(x)Ga(1-x)As as a function of the AlAs mole fraction x reaches its maximum of about 300 meV (as measured from the conduction band edge) at the cross-over from the direct to the indirect band-gap at x = 0.44 in agreement with experiments.
Chan, Anne Y Y; Yeung, Jonas H M; Mok, Vincent C T; Ip, Vincent H L; Wong, Adrian; Kuo, S H; Chan, Danny T M; Zhu, X L; Wong, Edith; Lau, Claire K Y; Wong, Rosanna K M; Tang, Venus; Lau, Christine; Poon, W S
2014-12-01
To present the result and experience of subthalamic nucleus deep brain stimulation for Parkinson's disease. Case series. Prince of Wales Hospital, Hong Kong. A cohort of patients with Parkinson's disease received subthalamic nucleus deep brain stimulation from September 1998 to January 2010. Patient assessment data before and after the operation were collected prospectively. Forty-one patients (21 male and 20 female) with Parkinson's disease underwent bilateral subthalamic nucleus deep brain stimulation and were followed up for a median interval of 12 months. For the whole group, the mean improvements of Unified Parkinson's Disease Rating Scale (UPDRS) parts II and III were 32.5% and 31.5%, respectively (P<0.001). Throughout the years, a multidisciplinary team was gradually built. The deep brain stimulation protocol evolved and was substantiated by updated patient selection criteria and outcome assessment, integrated imaging and neurophysiological targeting, refinement of surgical technique as well as the accumulation of experience in deep brain stimulation programming. Most of the structural improvement occurred before mid-2005. Patients receiving the operation before June 2005 (19 cases) and after (22 cases) were compared; the improvements in UPDRS part III were 13.2% and 55.2%, respectively (P<0.001). There were three operative complications (one lead migration, one cerebral haematoma, and one infection) in the group operated on before 2005. There was no operative mortality. The functional state of Parkinson's disease patients with motor disabilities refractory to best medical treatment improved significantly after subthalamic nucleus deep brain stimulation. A dedicated multidisciplinary team building, refined protocol for patient selection and assessment, improvement of targeting methods, meticulous surgical technique, and experience in programming are the key factors contributing to the improved outcome.
Comprehensive Evaluation of Power Supplies at Cryogenic Temperatures for Deep Space Applications
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Gerber, Scott; Hammoud, Ahmad; Elbuluk, Malik E.; Lyons, Valerie (Technical Monitor)
2002-01-01
The operation of power electronic systems at cryogenic temperatures is anticipated in many future space missions such as planetary exploration and deep space probes. In addition to surviving the hostile space environments, electronics capable of low temperature operation would contribute to improving circuit performance, increasing system efficiency, and reducing development and launch costs. DC/DC converters are widely used in space power systems in the areas of power management, conditioning, and control. As part of the on-going Low Temperature Electronics Program at NASA, several commercial-off-the-shelf (COTS) DC/DC converters, with specifications that might fit the requirements of specific future space missions, have been selected for investigation at cryogenic temperatures. The converters have been characterized in terms of their performance as a function of temperature in the range of 20 °C to -180 °C. These converters ranged in electrical power from 8 W to 13 W, with input voltages from 9 V to 72 V and an output voltage of 3.3 V. The experimental set-up and procedures, along with the results obtained on the converters' steady state and dynamic characteristics, are presented and discussed.
An Update on the CCSDS Optical Communications Working Group
NASA Technical Reports Server (NTRS)
Edwards, Bernard L.; Schulz, Klaus-Juergen; Hamkins, Jonathan; Robinson, Bryan; Alliss, Randall; Daddato, Robert; Schmidt, Christopher; Giggebach, Dirk; Braatz, Lena
2017-01-01
International space agencies around the world are currently developing optical communication systems for Near Earth and Deep Space applications for both robotic and human-rated spacecraft. These applications include both links between spacecraft and links between spacecraft and ground. The Interagency Operations Advisory Group (IOAG) has stated that there is a strong business case for international cross support of spacecraft optical links. It further concluded that in order to enable cross support the links must be standardized. This paper will overview the history and structure of the space communications international standards body, the Consultative Committee for Space Data Systems (CCSDS), that will develop the standards and provide an update on the proceedings of the Optical Communications Working Group within CCSDS. This paper will also describe the set of optical communications standards being developed and outline some of the issues that must be addressed in the next few years. The paper will address in particular the ongoing work on application scenarios for deep space to ground called High Photon Efficiency, for LEO to ground called Low Complexity, for inter-satellite and near Earth to ground called High Data Rate, as well as associated atmospheric measurement techniques and link operations concepts.
NASA Astrophysics Data System (ADS)
Barnes, C.; Delaney, J.
2003-04-01
NEPTUNE is an innovative facility, a deep-water cabled observatory that will transform marine science. MARS and VENUS are deep and shallow-water test bed facilities for NEPTUNE located in Monterey Canyon, California and in southern British Columbia, respectively; both were funded in 2002. NEPTUNE will be a network of over 30 subsea observatories covering the 200,000 sq. km Juan de Fuca tectonic plate, Northeast Pacific. It will draw power via two shore stations and receive and exchange data with scientists through 3000 km of submarine fiber-optic cables. Each observatory, and cabled extensions, will host and power many scientific instruments on the surrounding seafloor, in seafloor boreholes and buoyed through the water column. Remotely operated and autonomous vehicles will reside at depth, recharge at observatories, and respond to distant labs. Continuous near-real-time multidisciplinary measurement series will extend over 30 years. Free from the limitations of battery life, ship schedules/accommodations, bad weather and delayed access to data, scientists will remotely monitor their deep-sea experiments in real time on the Internet, and routinely command instruments to respond to storms, plankton blooms, earthquakes, eruptions, slope slides and other events. Scientists will be able to pose entirely new sets of questions and experiments to understand complex, interacting Earth System processes such as the structure and seismic behavior of the ocean crust; dynamics of hot and cold fluids and gas hydrates in the upper ocean crust and overlying sediments; ocean climate change and its effect on the ocean biota at all depths; and the barely known deep-sea ecosystem dynamics and biodiversity. NEPTUNE is a US/Canada (70/30) partnership to design, test, build and operate the network on behalf of a wide scientific community. The total cost of the project is estimated at about U.S. $250 million from concept to operation. Over U.S. $50 million has already been funded for design, development, and the test beds. NEPTUNE will be among the first of many such cabled ocean observatories. Much is to be gained by being among the scientific and industrial pioneers. The multidisciplinary data archive will be an amazing, expanding resource for scientists and students. The public will share in the research discoveries of one of the last unexplored places on earth through an extensive education/outreach program.
Generalization error analysis: deep convolutional neural network in mammography
NASA Astrophysics Data System (ADS)
Richter, Caleb D.; Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Cha, Kenny
2018-02-01
We conducted a study to gain understanding of the generalizability of deep convolutional neural networks (DCNNs) given their inherent capability to memorize data. We examined empirically a specific DCNN trained for classification of masses on mammograms. Using a data set of 2,454 lesions from 2,242 mammographic views, a DCNN was trained to classify masses into malignant and benign classes using transfer learning from ImageNet LSVRC-2010. We performed experiments with varying amounts of label corruption and types of pixel randomization to analyze the generalization error for the DCNN. Performance was evaluated using the area under the receiver operating characteristic curve (AUC) with N-fold cross-validation. Comparisons were made between the convergence times, the inference AUCs for both the training set and the test set of the original image patches without corruption, and the root-mean-squared difference (RMSD) in the layer weights of the DCNN trained with different amounts and methods of corruption. Our experiments revealed trends showing that the DCNN overfitted by memorizing corrupted data. More importantly, this study improved our understanding of DCNN weight updates when learning new patterns or new labels. Although we used a specific classification task with ImageNet as an example, similar methods may be useful for analysis of the DCNN learning processes, especially those that employ transfer learning for medical image analysis where sample size is limited and overfitting risk is high.
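The label-corruption probe described above can be sketched as follows. This is a hypothetical re-implementation, not the authors' code: a chosen fraction of training labels is reassigned at random, and one then observes whether the network still fits (memorizes) the corrupted set.

```python
import random

def corrupt_labels(labels, fraction, classes=(0, 1), seed=0):
    """Return a copy of `labels` with ~fraction of entries reassigned to a
    randomly drawn class (the new label may coincide with the original)."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    out = list(labels)
    n_corrupt = int(round(fraction * len(out)))
    for i in rng.sample(range(len(out)), n_corrupt):
        out[i] = rng.choice(classes)
    return out
```

Training the same architecture on increasingly corrupted label sets and comparing convergence time, training-set AUC, and test-set AUC is the standard way to separate generalization from memorization.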
DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets.
Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas
2016-07-08
Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
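Since DeepBlue's API is exposed over XML-RPC, a client can be built with any language's standard XML-RPC support. A minimal Python sketch follows; the endpoint path, the "anonymous_key" access key, and the `list_genomes` method are assumptions taken from the DeepBlue documentation and should be verified against the user manual before use.

```python
import xmlrpc.client

# Assumed endpoint and anonymous access key (check the DeepBlue manual).
DEEPBLUE_URL = "http://deepblue.mpi-inf.mpg.de/xmlrpc"
USER_KEY = "anonymous_key"

def deepblue_client(url=DEEPBLUE_URL):
    """Build an XML-RPC proxy; no network traffic occurs until a method call."""
    return xmlrpc.client.ServerProxy(url, allow_none=True)

if __name__ == "__main__":
    server = deepblue_client()
    # DeepBlue methods return a [status, payload] pair, e.g. (not executed here):
    # status, genomes = server.list_genomes(USER_KEY)
```

The same pattern covers the finding, selecting, filtering, and downloading commands mentioned in the abstract, since XML-RPC exposes each server command as an ordinary method on the proxy object.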
Improving OBS operations in ultra-deep ocean during the Southern Mariana Trench expeditions
NASA Astrophysics Data System (ADS)
Zeng, X.; Lin, J.; Xu, M.; Zhou, Z.
2017-12-01
The Mariana Trench Research Initiative, led by the South China Sea Institute of Oceanology of the Chinese Academy of Sciences and through international collaboration, focuses on investigating the deep and shallow lithospheric structure, earthquake characteristics, extreme geological environments, and the controlling geodynamic mechanisms for the formation of Earth's deepest basins in the southern Mariana Trench. Two multidisciplinary research expeditions were executed during December 2016 and June 2017, respectively, on board R/V Shiyan 3. A main task of the Mariana Initiative is to conduct the Southern Mariana OBS Experiment (SMOE), the first OBS seismic experiment across the Challenger Deep. The SMOE expeditions include both active and passive source seismic experiments and employed a large number of broadband OBS instruments. Due to the deep water, rough weather, strong winds, and other unfavorable factors, it was challenging to deploy/recover the OBSs. During the two expeditions we developed and experimented with a number of ways to improve the success rate of OBS operations in the harsh ultra-deep ocean environment of the Southern Mariana Trench. All newly acquired OBSs underwent a series of uniquely designed deep-ocean tests to improve the instrument performance and maximize reliability during their deployment under the ultra-high pressure conditions. OBS deployment and recovery followed a unified standard operating procedure, aided by an instrument checklist, both specifically designed and strictly enforced for operations during the expeditions. Furthermore, an advanced ship-based radio positioning system was developed to rapidly and accurately locate the OBS instruments when they reached the sea surface; the system proved its effectiveness even under extreme weather conditions.
Through the development and application of the novel methods for operation in deep oceans, we overcame the rough sea and other unfavorable factors during the first two expeditions to the southern Mariana Trench and achieved a highly successful OBS operation program.
NASA Technical Reports Server (NTRS)
Kuiper, T. B. H.; Resch, G. M.
2000-01-01
The increasing load on NASA's Deep Space Network, the new capabilities for deep space missions inherent in a next-generation radio telescope, and the potential of new telescope technology for reducing construction and operation costs suggest a natural marriage between radio astronomy and deep space telecommunications in developing advanced radio telescope concepts.
Preliminary Concept of Operations for the Deep Space Array-Based Network
NASA Astrophysics Data System (ADS)
Bagri, D. S.; Statman, J. I.
2004-05-01
The Deep Space Array-Based Network (DSAN) will be an array-based system, part of a more than 1000-fold increase in the downlink/telemetry capability of the Deep Space Network. The key function of the DSAN is the provision of cost-effective, robust telemetry, tracking, and command services to the space missions of NASA and its international partners. This article presents an expanded approach to the use of an array-based system. Instead of using the array as an element in the existing Deep Space Network (DSN), relying to a large extent on the DSN infrastructure, we explore a broader departure from the current DSN, using fewer elements of the existing DSN, and establishing a more modern concept of operations. For example, the DSAN will have a single 24 x 7 monitor and control (M&C) facility, while the DSN has four 24 x 7 M&C facilities. The article gives the architecture of the DSAN and its operations philosophy. It also briefly describes the customer's view of operations, operations management, logistics, anomaly analysis, and reporting.
Fernandez-Moure, Joseph S; Kim, Keemberly; Zubair, M Haseeb; Rosenberg, Wade R
2017-01-01
Deep vein thrombosis (DVT) continues to be a significant source of morbidity for surgical patients. Placement of a retrievable inferior vena cava (IVC) filter is used when patients have contraindications to anticoagulation or recurrent pulmonary embolism despite therapeutic anticoagulation. Although retrievable IVC filters are often used, they carry a unique set of risks. A 67-year-old man presents to the Emergency Room (ER) following large volume melena and complaining of syncope. One year prior, the patient had been diagnosed with Glioblastoma multiforme, for which he underwent a craniotomy with near-total resection of the mass. He subsequently developed a deep vein thrombosis and underwent placement of a retrievable inferior vena cava (IVC) filter. Computerized tomography (CT) and esophagogastroduodenoscopy showed duodenal perforation by the retrievable IVC filter. The filter was successfully retrieved through an endovascular approach. Retrievable IVC filter placement is the preferred method of pulmonary embolism prevention in patients with significant risk for bleeding. Duodenal perforation by a retrievable IVC filter is a rare and serious complication. It is usually managed surgically, but can also be managed non-operatively. For patients with significant comorbidities or patients who are poor surgical candidates, non-operative management with close monitoring can serve as an initial approach to the patient with a caval enteric perforation secondary to a retrievable IVC filter. Copyright © 2017. Published by Elsevier Ltd.
Computations in the deep vs superficial layers of the cerebral cortex.
Rolls, Edmund T; Mills, W Patrick C
2017-11-01
A fundamental question is how the cerebral neocortex operates functionally, computationally. The cerebral neocortex, with its superficial and deep layers and highly developed recurrent collateral systems that provide a basis for memory-related processing, might perform somewhat different computations in the superficial and deep layers. Here we take into account the quantitative connectivity within and between laminae. Using integrate-and-fire neuronal network simulations that incorporate this connectivity, we first show that attractor networks implemented in the deep layers that are activated by the superficial layers could be partly independent in that the deep layers might have a different time course, which, because of adaptation, might be more transient and useful for outputs from the neocortex. In contrast the superficial layers could implement more prolonged firing, useful for slow learning and for short-term memory. Second, we show that a different type of computation could in principle be performed in the superficial and deep layers, by showing that the superficial layers could operate as a discrete attractor network useful for categorisation and feeding information forward up a cortical hierarchy, whereas the deep layers could operate as a continuous attractor network useful for providing a spatially and temporally smooth output to output systems in the brain. A key advance is that we draw attention to the functions of the recurrent collateral connections between cortical pyramidal cells, often omitted in canonical models of the neocortex, and address principles of operation of the neocortex by which the superficial and deep layers might be specialized for different types of attractor-related memory functions implemented by the recurrent collaterals. Copyright © 2017 Elsevier Inc. All rights reserved.
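The discrete attractor memory discussed above can be illustrated with a minimal Hopfield-style sketch. This is a stand-in illustration of recurrent-collateral pattern completion, not the authors' integrate-and-fire simulations; states are +/-1 vectors and all names are illustrative.

```python
def train(patterns):
    """Hebbian weights from a list of +/-1 patterns (no self-connections)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=5):
    """Iterate the recurrent dynamics; the state falls into a stored attractor."""
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s
```

Starting the dynamics from a corrupted cue and watching it settle onto the stored pattern is the discrete-attractor behavior that the abstract attributes to the superficial layers; a continuous attractor instead supports a smoothly movable bump of activity.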
Simulator Studies of the Deep Stall
NASA Technical Reports Server (NTRS)
White, Maurice D.; Cooper, George E.
1965-01-01
Simulator studies of the deep-stall problem encountered with modern airplanes are discussed. The results indicate that the basic deep-stall tendencies produced by aerodynamic characteristics are augmented by operational considerations. Because of control difficulties to be anticipated in the deep stall, it is desirable that adequate safeguards be provided against inadvertent penetrations.
The Role of Cis-Lunar Space in Future Global Space Exploration
NASA Technical Reports Server (NTRS)
Bobskill, Marianne R.; Lupisella, Mark L.
2012-01-01
Cis-lunar space offers affordable near-term opportunities to help pave the way for future global human exploration of deep space, acting as a bridge between present missions and future deep space missions. While missions in cis-lunar space have value unto themselves, they can also play an important role in enabling and reducing risk for future human missions to the Moon, Near-Earth Asteroids (NEAs), Mars, and other deep space destinations. The Cis-Lunar Destination Team of NASA's Human Spaceflight Architecture Team (HAT) has been analyzing cis-lunar destination activities and developing notional missions (or "destination Design Reference Missions" [DRMs]) for cis-lunar locations to inform roadmap and architecture development, transportation and destination elements definition, operations, and strategic knowledge gaps. The cis-lunar domain is defined as that area of deep space under the gravitational influence of the earth-moon system. This includes a set of earth-centered orbital locations in low earth orbit (LEO), geosynchronous earth orbit (GEO), highly elliptical and high earth orbits (HEO), earth-moon libration or "Lagrange" points (E-ML1 through E-ML5, and in particular, E-ML1 and E-ML2), and low lunar orbit (LLO). To help explore this large possibility space, we developed a set of high level cis-lunar mission concepts in the form of a large mission tree, defined primarily by mission duration, pre-deployment, type of mission, and location. The mission tree has provided an overall analytical context and has helped in developing more detailed design reference missions that are then intended to inform capabilities, operations, and architectures. 
With the mission tree as context, we will describe two destination DRMs to LEO and GEO, based on present human space exploration architectural considerations, as well as our recent work on defining mission activities that could be conducted with an EML1 or EML2 facility, the latter of which will be an emphasis of this paper, motivated in part by recent interest expressed at the Global Exploration Roadmap Stakeholder meeting. This paper will also explore the links between this HAT Cis-Lunar Destination Team analysis and the recently released ISECG Global Exploration Roadmap and other potential international considerations, such as preventing harmful interference to radio astronomy observations in the shielded zone of the moon.
Methods for enhancing the efficiency of creating a borehole using high power laser systems
Zediker, Mark S.; Rinzler, Charles C.; Faircloth, Brian O.; Koblick, Yeshaya; Moxley, Joel F.
2014-06-24
Methods for utilizing 10 kW or more of laser energy transmitted deep into the earth, with suppression of associated nonlinear phenomena, to enhance the formation of boreholes. Methods for laser operations to reduce the critical path for forming a borehole in the earth. These methods can deliver high power laser energy down a deep borehole while maintaining the high power to perform operations in such boreholes deep within the earth.
Hack, Nawaz; Akbar, Umer; Monari, Erin H; Eilers, Amanda; Thompson-Avila, Amanda; Hwynn, Nelson H; Sriram, Ashok; Haq, Ihtsham; Hardwick, Angela; Malaty, Irene A; Okun, Michael S
2015-01-01
Objective. (1) To evaluate the feasibility of implementing and evaluating a home visit program for persons with Parkinson's disease (PD) in a rural setting. (2) To have movement disorders fellows coordinate and manage health care delivery. Background. The University of Florida, Center for Movement Disorders and Neurorestoration established Operation House Call to serve patients with PD who could not otherwise afford to travel to an expert center or to pay for medical care. PD is known to lead to significant disability, frequent hospitalization, early nursing home placement, and morbidity. Methods. This was designed as a quality improvement project. Movement disorders fellows travelled to the home(s) of underserved PD patients and coordinated their clinical care. The diagnosis of Parkinson's disease was confirmed using standardized criteria, the Unified Parkinson's Disease Rating Scale was administered, and best treatment practices were delivered. Results. All seven patients have been followed up longitudinally every 3 to 6 months in the home setting, and they remain functional and independent. None of the patients have been hospitalized for PD-related complications. Each patient has a new updatable electronic medical record. All Operation House Call cases are presented during video rounds for the interdisciplinary PD team to make recommendations for care (neurology, neurosurgery, neuropsychology, psychiatry, physical therapy, occupational therapy, speech therapy, and social work). One Operation House Call patient has successfully received deep brain stimulation (DBS). Conclusion. This pilot program has demonstrated that it is possible to provide person-centered care in the home setting for PD patients. This program could provide a proof of concept for the construction of a larger visiting physician or nurse program.
Short Answers to Deep Questions: Supporting Teachers in Large-Class Settings
ERIC Educational Resources Information Center
McDonald, J.; Bird, R. J.; Zouaq, A.; Moskal, A. C. M.
2017-01-01
In large class settings, individualized student-teacher interaction is difficult. However, teaching interactions (e.g., formative feedback) are central to encouraging deep approaches to learning. While there has been progress in automatic short-answer grading, analysing student responses to support formative feedback at scale is arguably some way…
Becker, Anton S; Mueller, Michael; Stoffel, Elina; Marcon, Magda; Ghafoor, Soleen; Boss, Andreas
2018-02-01
To train a generic deep learning software (DLS) to classify breast cancer on ultrasound images and to compare its performance to human readers with variable breast imaging experience. In this retrospective study, all breast ultrasound examinations from January 1, 2014 to December 31, 2014 at our institution were reviewed. Patients with post-surgical scars, or with initially indeterminate or malignant lesions with histological diagnoses or 2-year follow-up, were included. The DLS was trained with 70% of the images, and the remaining 30% were used to validate the performance. Three readers with variable expertise also evaluated the validation set (radiologist, resident, medical student). Diagnostic accuracy was assessed with a receiver operating characteristic analysis. 82 patients with malignant and 550 with benign lesions were included. Time needed for training was 7 min (DLS). Evaluation time for the test data set was 3.7 s (DLS) and 28, 22 and 25 min for the human readers (in order of decreasing experience). Receiver operating characteristic analysis revealed non-significant differences (p-values 0.45-0.47) in the area under the curve: 0.84 (DLS), 0.88 (experienced and intermediate readers) and 0.79 (inexperienced reader). DLS may aid diagnosing cancer on breast ultrasound images with an accuracy comparable to radiologists, and learns better and faster than a human reader with no prior experience. Further clinical trials with dedicated algorithms are warranted. Advances in knowledge: DLS can be trained to classify cancer on breast ultrasound images with high accuracy even with comparatively few training cases. The fast evaluation speed makes real-time image analysis feasible.
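The diagnostic-accuracy comparison above rests on the area under the receiver operating characteristic curve. As a minimal illustrative sketch (not the study's software), the AUC can be computed directly from prediction scores and labels via the Mann-Whitney statistic, i.e. the probability that a randomly chosen malignant case scores above a randomly chosen benign one:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one
    (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Pairwise comparison of every positive against every negative.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

auc = roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])  # toy scores and labels
```

With these invented scores, three of the four positive-negative pairs are ranked correctly, giving an AUC of 0.75.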
Deep Space Gateway - Enabling Missions to Mars
NASA Technical Reports Server (NTRS)
Rucker, Michelle; Connolly, John
2017-01-01
There are many opportunities for commonality between Lunar-vicinity and Mars mission hardware and operations. The best approach: identify Mars mission risks that can be bought down with testing in the Lunar vicinity, then explore hardware and operational concepts that work for both missions with minimal compromise. The Deep Space Transport will validate the systems and capabilities required to send humans to Mars orbit and return to Earth. The Deep Space Gateway provides a convenient assembly, checkout, and refurbishment location to enable Mars missions. The current deep space transport concept is to fly missions of increasing complexity: a shakedown cruise, a Mars orbital mission, and a Mars surface mission; the Mars surface mission would require additional elements.
Key Challenges for Life Science Payloads on the Deep Space Gateway
NASA Astrophysics Data System (ADS)
Anthony, J. H.; Niederwieser, T.; Zea, L.; Stodieck, L.
2018-02-01
Compared to ISS, Deep Space Gateway life science payloads will be challenged by deep space radiation and non-continuous habitation. The impacts of these two differences on payload requirements, design, and operations are discussed.
Litter in submarine canyons off the west coast of Portugal
NASA Astrophysics Data System (ADS)
Mordecai, Gideon; Tyler, Paul A.; Masson, Douglas G.; Huvenne, Veerle A. I.
2011-12-01
Marine litter is of global concern and is present in all the world's oceans, including deep benthic habitats where the extent of the problem is still largely unknown. Litter abundance and composition were investigated using video footage and still images from 16 Remotely Operated Vehicle (ROV) dives in the Lisbon, Setúbal, Cascais and Nazaré Canyons located west of Portugal. Litter was most abundant at sites closest to the coastline and population centres, suggesting the majority of the litter was land sourced. Plastic was the dominant type of debris, followed by fishing gear. Standardised mean abundance was 1100 litter items km⁻², but was as high as 6600 litter items km⁻² in canyons close to Lisbon. Although all anthropogenic material may be harmful to biota, debris was also used as a habitat by some macro-invertebrates. Litter composition and abundance observed in the canyons of the Portuguese margin were comparable to those seen in other deep-sea areas around the world. Accumulation of litter in the deep sea is a consequence of human activities both on land and at sea. This needs to be taken into account in future policy decisions regarding marine pollution.
The deep space network, Volume 11
NASA Technical Reports Server (NTRS)
1972-01-01
Deep Space Network progress in flight project support, Tracking and Data Acquisition research and technology, network engineering, hardware and software implementation, and operations are presented. Material is presented in each of the following categories: description of DSN; mission support; radio science; support research and technology; network engineering and implementation; and operations and facilities.
Learning representations for the early detection of sepsis with deep neural networks.
Kam, Hye Jin; Kim, Ha Young
2017-10-01
Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the areas under the ROC curve (AUC) of the models were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.
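The deep feedforward baselines referenced above can be sketched at toy scale. The following is a loose illustration, not the authors' model: a one-hidden-layer network with a logistic output trained by full-batch gradient descent with momentum, on invented two-feature data standing in for the clinical feature sets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: two Gaussian blobs as a binary outcome.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, logistic output.
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
vel = [np.zeros_like(t) for t in (W1, b1, W2, b2)]
lr, momentum = 0.1, 0.9

for _ in range(200):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted probability
    dz2 = (p - y)[:, None] / len(y)     # gradient of the cross-entropy loss
    dh = (dz2 @ W2.T) * (1 - h ** 2)    # backprop through tanh
    grads = [X.T @ dh, dh.sum(0), h.T @ dz2, dz2.sum(0)]
    for i, (prm, g) in enumerate(zip((W1, b1, W2, b2), grads)):
        vel[i] = momentum * vel[i] - lr * g   # momentum update
        prm += vel[i]                          # in-place parameter step

# Accuracy after training (recomputed with the final weights).
h = np.tanh(X @ W1 + b1)
acc = ((sigmoid(h @ W2 + b2).ravel() > 0.5) == y).mean()
```

On this easily separable toy problem the network converges to near-perfect training accuracy; the point is only the mechanics (forward pass, backprop, momentum), not the clinical result.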
Spaceflight Operations Services Grid Prototype
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Mehrotra, Piyush; Lisotta, Anthony
2004-01-01
NASA over the years has developed many types of technologies and conducted various types of science, resulting in numerous variations of operations, data, and applications. For example, operations range from deep space projects managed by JPL, Saturn and Shuttle operations managed from JSC and KSC, and ISS science operations managed from MSFC, to numerous low earth orbit satellites managed from GSFC; these are varied and intrinsically different but require many of the same types of services to fulfill their missions. Also, large data sets (databases) of Shuttle flight data, solar system projects, and earth observing data exist which, because of their varied and sometimes outdated technologies, have not been fully examined for additional information and knowledge. Many of the applications/systems supporting operational services, e.g. voice, video, telemetry, and commanding, are outdated and obsolete. The vast amounts of data are located in various formats, at various locations, and range over many years. The ability to conduct unified space operations, access disparate data sets, and develop systems and services that can provide operational services does not currently exist in any useful form. In addition, adding new services to existing operations is generally expensive and, with the current budget constraints, not feasible on any broad level of implementation. To understand these services, a discussion of each one follows. The Spaceflight User-based Services are those services required to conduct space flight operations. Grid Services are those services that will be used to overcome, through middleware software, some or all of the problems that currently exist. In addition, Network Services will be discussed briefly. Network Services are crucial to any type of remedy and are evolving adequately to support any technology currently in development.
Operation's Concept for Array-Based Deep Space Network
NASA Technical Reports Server (NTRS)
Bagri, Durgadas S.; Statman, Joseph I.; Gatti, Mark S.
2005-01-01
The Array-based Deep Space Network (DSN-Array) will be part of a more than 10^3-fold increase in the downlink/telemetry capability of the Deep Space Network (DSN). The key function of the DSN-Array is to provide cost-effective, robust Telemetry, Tracking and Command (TT&C) services to the space missions of NASA and its international partners. It provides an expanded approach to the use of an array-based system. Instead of using the array as an element in the existing DSN, relying to a large extent on the DSN infrastructure, we explore a broader departure from the current DSN, using fewer elements of the existing DSN and establishing a more modern Concept of Operations. This paper gives the architecture of the DSN-Array and its operations philosophy. It also describes the customer's view of operations, operations management, and logistics, including maintenance philosophy, anomaly analysis, and reporting.
Methods for intraoperative, sterile pose-setting of patient-specific microstereotactic frames
NASA Astrophysics Data System (ADS)
Vollmann, Benjamin; Müller, Samuel; Kundrat, Dennis; Ortmaier, Tobias; Kahrs, Lüder A.
2015-03-01
This work proposes new methods for a microstereotactic frame based on bone cement fixation. Microstereotactic frames are under investigation for minimally invasive temporal bone surgery, e.g. cochlear implantation, and for deep brain stimulation, where products are already on the market. The correct pose of the microstereotactic frame is adjusted either outside or inside the operating room, and the frame is used for, e.g., drill or electrode guidance. We present a patient-specific, disposable frame that allows intraoperative, sterile pose-setting. The key idea of our approach is bone cement between two plates that cures while the plates are positioned in the desired pose with a mechatronic system. This paper includes new designs of microstereotactic frames, a system for alignment, and first measurements to analyze accuracy and applicable load.
Video Salient Object Detection via Fully Convolutional Networks.
Wang, Wenguan; Shen, Jianbing; Shao, Ling
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
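The MAE figures quoted above (.06 and .07) are the mean absolute error between a predicted saliency map and the ground-truth mask. A minimal sketch of the metric, assuming both maps are scaled to [0, 1] (toy arrays, not the paper's data):

```python
import numpy as np

def saliency_mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    binary ground-truth mask, both scaled to [0, 1]."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return np.abs(pred - gt).mean()

pred = np.array([[0.9, 0.1], [0.8, 0.2]])  # invented 2x2 prediction
gt   = np.array([[1.0, 0.0], [1.0, 0.0]])  # invented ground truth
mae = saliency_mae(pred, gt)               # (0.1 + 0.1 + 0.2 + 0.2) / 4
```

Lower is better; a perfect prediction gives an MAE of 0.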
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-11
... Pacific Fishery Management Council (Council) prepared a regulatory amendment, including an environmental... swordfish. This would support the National Standards for fishery management in Magnuson-Stevens Fishery Conservation and Management Act. The predominant hook types used in the deep-set fishery are tuna hooks...
Research on Daily Objects Detection Based on Deep Neural Network
NASA Astrophysics Data System (ADS)
Ding, Sheng; Zhao, Kun
2018-03-01
With the rapid development of deep learning, great breakthroughs have been made in the field of object detection. In this article, deep learning algorithms are applied to the detection of daily objects, and some progress has been made in this direction. Compared with traditional object detection methods, the daily objects detection method based on deep learning is faster and more accurate. The main research work of this article: 1. collect a small data set of daily objects; 2. build different object detection models in the TensorFlow framework and train them on this data set; 3. improve the training process and performance of the models by fine-tuning the model parameters.
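Object detection models of the kind described above are typically evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. As a small illustrative sketch (not from the article), using a common (x1, y1, x2, y2) corner convention:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # intersection 1, union 7
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.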
Deep generative learning for automated EHR diagnosis of traditional Chinese medicine.
Liang, Zhaohui; Liu, Jun; Ou, Aihua; Zhang, Honglai; Li, Ziping; Huang, Jimmy Xiangji
2018-05-04
Computer-aided medical decision-making (CAMDM) is the method to utilize massive EMR data as both empirical and evidence support for the decision procedure of healthcare activities. Well-developed information infrastructure, such as hospital information systems and disease surveillance systems, provides abundant data for CAMDM. However, the complexity of EMR data with abstract medical knowledge makes conventional models incompetent for the analysis. Thus a deep belief network (DBN)-based model is proposed to simulate the information analysis and decision-making procedure in medical practice. The purpose of this paper is to evaluate a deep learning architecture as an effective solution for CAMDM. A two-step model is applied in our study. In the first step, an optimized seven-layer deep belief network (DBN) is applied as an unsupervised learning algorithm to perform model training and acquire feature representations. Then a support vector machine model is stacked on the DBN in the second, supervised learning step. Two data sets are used in the experiments. One is a plain text data set indexed by medical experts. The other is a structured dataset on primary hypertension. The data are randomly divided to generate the training set for the unsupervised learning and the testing set for the supervised learning. The model performance is evaluated by the statistics of mean and variance, and the average precision and coverage on the data sets. Two conventional shallow models (support vector machine / SVM and decision tree / DT) are applied as comparisons to show the superiority of our proposed approach. The deep learning (DBN + SVM) model outperforms simple SVM and DT on both data sets in terms of all the evaluation measures, which confirms our motivation that the deep model is good at capturing key features with less dependence on manually built indices.
Our study shows the two-step deep learning model achieves high performance for medical information retrieval over the conventional shallow models. It is able to capture the features of both plain text and the highly structured database of EMR data. The performance of the deep model is superior to that of conventional shallow learning models such as SVM and DT. It is an appropriate knowledge-learning model for information retrieval in an EMR system. Therefore, deep learning provides a good solution to improve the performance of CAMDM systems. Copyright © 2018. Published by Elsevier B.V.
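The two-step procedure described above (unsupervised DBN pre-training, then a supervised classifier on the learned features) can be sketched at toy scale. This is a loose illustration under invented data, not the paper's system: a single restricted Boltzmann machine layer trained with a mean-field simplification of CD-1, followed by a logistic head standing in for the paper's SVM:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy "records": two binary feature patterns standing in for EMR vectors.
X = np.vstack([np.tile([1, 1, 0, 0, 1, 0], (40, 1)),
               np.tile([0, 0, 1, 1, 0, 1], (40, 1))]).astype(float)
y = np.array([0] * 40 + [1] * 40)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 1: unsupervised feature learning with one RBM layer,
# trained by a mean-field simplification of contrastive divergence (CD-1).
n_vis, n_hid = X.shape[1], 4
W = rng.normal(0, 0.1, (n_vis, n_hid))
a, b = np.zeros(n_vis), np.zeros(n_hid)
for _ in range(100):
    ph = sigmoid(X @ W + b)        # hidden activations (positive phase)
    pv = sigmoid(ph @ W.T + a)     # reconstruction of the visibles
    ph2 = sigmoid(pv @ W + b)      # hidden activations (negative phase)
    W += 0.1 * (X.T @ ph - pv.T @ ph2) / len(X)
    a += 0.1 * (X - pv).mean(0)
    b += 0.1 * (ph - ph2).mean(0)

# Step 2: a supervised head on the learned features
# (a logistic classifier here as a stand-in for the paper's SVM).
F = sigmoid(X @ W + b)
w, c = np.zeros(n_hid), 0.0
for _ in range(500):
    p = sigmoid(F @ w + c)
    w -= 0.5 * F.T @ (p - y) / len(y)
    c -= 0.5 * (p - y).mean()

acc = ((sigmoid(F @ w + c) > 0.5) == y).mean()
```

The design point is the separation of concerns: the RBM learns a feature representation without labels, and only the small head is fit on supervised data.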
Winkler, David A; Le, Tu C
2017-01-01
Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how deep neural networks (DNN) may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
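One concrete difference between the shallow and deep architectures compared above is how a parameter budget is spent: a single wide hidden layer versus several stacked narrower ones. A small sketch (the descriptor size and layer widths are invented for illustration):

```python
def mlp_param_count(layer_sizes):
    """Total weights + biases in a fully connected net given
    [input, hidden..., output] layer widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A hypothetical 1024-bit molecular fingerprint input, one activity output.
shallow = mlp_param_count([1024, 512, 1])          # one wide hidden layer
deep = mlp_param_count([1024, 256, 256, 256, 1])   # several stacked layers
```

For these widths the deep net uses fewer parameters (394,241 vs 525,313) despite having three hidden layers; its extra expressive power comes from composing nonlinearities, not from raw parameter count.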
Intelligent (Autonomous) Power Controller Development for Human Deep Space Exploration
NASA Technical Reports Server (NTRS)
Soeder, James; Raitano, Paul; McNelis, Anne
2016-01-01
As NASA's Evolvable Mars Campaign and other exploration initiatives continue to mature, they have identified the need for more autonomous operation of the power system. For current human space operations such as the International Space Station, the paradigm is to perform the planning, operation and fault diagnosis from the ground. However, the dual problems of communication lag and limited communication bandwidth beyond geosynchronous orbit underscore the need to change the operations methodology for human operations in deep space. To address this need, for the past several years the Glenn Research Center has had an effort to develop an autonomous power controller for human deep space vehicles. This presentation discusses the present roadmap for deep space exploration along with a description of a conceptual power system architecture for exploration modules. It then contrasts the present ground-centric control and management architecture, with limited autonomy on board the spacecraft, with an advanced autonomous power control system that features ground-based monitoring and a spacecraft mission manager with autonomous control of all core systems, including power. It then presents a functional breakdown of the autonomous power control system and examines its operation in both normal and fault modes. Finally, it discusses progress made in the development of a real-time power system model and how it is being used to evaluate the performance of the controller as well as for verification of the overall operation.
Towards deep inclusion for equity-oriented health research priority-setting: A working model.
Pratt, Bridget; Merritt, Maria; Hyder, Adnan A
2016-02-01
Growing consensus that health research funders should align their investments with national research priorities presupposes that such national priorities exist and are just. Arguably, justice requires national health research priority-setting to promote health equity. Such a position is consistent with recommendations made by the World Health Organization and at global ministerial summits that health research should serve to reduce health inequalities between and within countries. Thus far, no specific requirements for equity-oriented research priority-setting have been described to guide policymakers. As a step towards the explication and defence of such requirements, we propose that deep inclusion is a key procedural component of equity-oriented research priority-setting. We offer a model of deep inclusion that was developed by applying concepts from work on deliberative democracy and development ethics. This model consists of three dimensions--breadth, qualitative equality, and high-quality non-elite participation. Deep inclusion is captured not only by who is invited to join a decision-making process but also by how they are involved and by when non-elite stakeholders are involved. To clarify and illustrate the proposed dimensions, we use the sustained example of health systems research. We conclude by reviewing practical challenges to achieving deep inclusion. Despite the existence of barriers to implementation, our model can help policymakers and other stakeholders design more inclusive national health research priority-setting processes and assess these processes' depth of inclusion. Copyright © 2016 Elsevier Ltd. All rights reserved.
Diabetic retinopathy screening using deep neural network.
Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A
2017-09-07
There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
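The sensitivity and specificity figures above follow from placing a referral threshold on the network's output score. A minimal sketch of that computation (toy scores, not the screening data):

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) of a refer/no-refer decision at a given score threshold."""
    tp = fn = tn = fp = 0
    for s, y in zip(scores, labels):
        if y == 1:                      # referable case
            if s >= threshold: tp += 1
            else: fn += 1
        else:                           # non-referable case
            if s >= threshold: fp += 1
            else: tn += 1
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]  # invented network outputs
labels = [1, 1, 1, 0, 0, 0]
sens, spec = sens_spec(scores, labels, 0.5)
```

Raising the threshold trades sensitivity for specificity; sweeping it over all values traces out the ROC curve the abstract summarizes.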
Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation
Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang
2015-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
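The convolution layers described above can be illustrated at toy scale. The sketch below (invented patch values, not the paper's network) shows a single "valid" multi-channel convolution, with the three modalities (T1, T2, FA) as input channels summed into one feature map:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in CNNs) over a
    multi-channel image: image is (C, H, W), kernel is (C, kh, kw),
    and the channels are summed into a single output feature map."""
    C, H, W = image.shape
    _, kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[:, i:i + kh, j:j + kw] * kernel).sum()
    return out

# Three modalities as three channels of an invented 5x5 patch.
img = np.stack([np.ones((5, 5)), 2 * np.ones((5, 5)), 3 * np.ones((5, 5))])
k = np.ones((3, 3, 3)) / 27.0            # an averaging filter for clarity
feat = conv2d_valid(img, k)              # each element averages its window
```

In a trained CNN the kernels are learned rather than fixed, and many such feature maps are stacked per layer; the multi-modality input simply appears as extra channels, as in this sketch.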
Deep Space Station (DSS-13) automation demonstration
NASA Technical Reports Server (NTRS)
Remer, D. S.; Lorden, G.
1980-01-01
The data base collected during a six month demonstration of an automated Deep Space Station (DSS 13) run unattended and remotely controlled is summarized. During this period, DSS 13 received spacecraft telemetry data from Voyager, Pioneers 10 and 11, and Helios projects. Corrective and preventive maintenance are reported by subsystem including the traditional subsystems and those subsystems added for the automation demonstration. Operations and maintenance data for a comparable manned Deep Space Station (DSS 11) are also presented for comparison. The data suggests that unattended operations may reduce maintenance manhours in addition to reducing operator manhours. Corrective maintenance for the unmanned station was about one third of the manned station, and preventive maintenance was about one half.
2008-09-01
[Figure-list excerpt] Figure 1: Deep Ocean Engineering Triggerfish ROV carried by two divers. Figure 2: SeaBotix. The report gives the physical parameters and approximate costs of the systems as tested.
Viking Mars launch set for August 11
NASA Technical Reports Server (NTRS)
Panagakos, N.
1975-01-01
The 1975-1976 Viking Mars Mission is described in detail, from launch phase through landing and communications relay phase. The mission's scientific goals are outlined and the various Martian investigations are discussed. These investigations include: geological photomapping and seismology; high-resolution, stereoscopic horizon scanning; water vapor and thermal mapping; entry science; meteorology; atmospheric composition and atmospheric density; and, search for biological products. The configurations of the Titan 3/Centaur combined launch vehicles, the Viking orbiters, and the Viking landers are described; their subsystems and performance characteristics are discussed. Preflight operations, launch window, mission control, and the deep space tracking network are also presented.
NSTAR Ion Thrusters and Power Processors
NASA Technical Reports Server (NTRS)
Bond, T. A.; Christensen, J. A.
1999-01-01
The purpose of the NASA Solar Electric Propulsion Technology Applications Readiness (NSTAR) project is to validate ion propulsion technology for use on future NASA deep space missions. This program, which was initiated in September 1995, focused on the development of two sets of flight quality ion thrusters, power processors, and controllers that provided the same performance as engineering model hardware and also met the dynamic and environmental requirements of the Deep Space 1 Project. One of the flight sets was used for primary propulsion for the Deep Space 1 spacecraft which was launched in October 1998.
Space Operations Analysis Using the Synergistic Engineering Environment
NASA Technical Reports Server (NTRS)
Angster, Scott; Brewer, Laura
2002-01-01
The Synergistic Engineering Environment has been under development at the NASA Langley Research Center to aid in the understanding of the operations of spacecraft. This is accomplished through the integration of multiple data sets, analysis tools, spacecraft geometric models, and a visualization environment to create an interactive virtual simulation of the spacecraft. Initially designed to support the needs of the International Space Station, the SEE has broadened the scope to include spacecraft ranging from low-earth orbit to deep space missions. Analysis capabilities within the SEE include rigid body dynamics, kinematics, orbital mechanics, and payload operations. This provides the user the ability to perform real-time interactive engineering analyses in areas including flight attitudes and maneuvers, visiting vehicle docking scenarios, robotic operations, plume impingement, field of view obscuration, and alternative assembly configurations. The SEE has been used to aid in the understanding of several operational procedures related to the International Space Station. This paper will address the capabilities of the first build of the SEE, present several use cases of the SEE, and discuss the next build of the SEE.
Mars Reconnaissance Orbiter Ka-band (32 GHz) Demonstration: Cruise Phase Operations
NASA Technical Reports Server (NTRS)
Shambayati, Shervin; Morabito, David; Border, James S.; Davarian, Faramaz; Lee, Dennis; Mendoza, Ricardo; Britcliffe, Michael; Weinreb, Sander
2006-01-01
The X-band (8.41 GHz) frequency currently used for deep space telecommunications offers too little bandwidth (50 MHz) to support future high-rate missions. Because of this, NASA has decided to transition to Ka-band (32 GHz) frequencies. As weather effects cause much larger fluctuations at Ka-band than at X-band, the traditional method of using a few dB of margin to cover these fluctuations is wasteful of power at Ka-band; therefore, a different operations concept is needed for Ka-band links. As part of the development of the operations concept for Ka-band, NASA has implemented a fully functioning Ka-band communications suite on its Mars Reconnaissance Orbiter (MRO). This suite will be used during the primary science phase to develop and refine the Ka-band operations concept for deep space missions. In order to test the functional readiness of the spacecraft and the Deep Space Network's (DSN) readiness to support the demonstration activities, a series of passes over DSN 34-m beam waveguide (BWG) antennas was scheduled during the cruise phase of the mission. MRO was launched on August 12, 2005 from Kennedy Space Center, Cape Canaveral, Florida, USA and went into Mars orbit on March 10, 2006. A total of ten telemetry demonstration passes and one high gain antenna (HGA) calibration pass were allocated to the Ka-band demonstration. Furthermore, a number of "shadow" passes were also scheduled where, during a regular MRO track over a Ka-band capable antenna, Ka-band was configured identically to X-band and tracked by the station. In addition, nine Ka-band delta differential one-way ranging (ΔDOR) passes were scheduled. During these passes, the spacecraft and the ground system were put through their respective paces. Among the highlights were setting a single-day record for data return from a deep space spacecraft (133 Gbits), achieved during one 10-hour pass; achieving the highest data rate ever from a planetary mission (6 Mbps); and successfully demonstrating Ka-band ΔDOR.
In addition, the DSN performed well, though there were concerns with the active pointing of the Ka-band antennas as well as with delivery of the monitor data from the stations. The spacecraft also presented challenges not normally associated with planetary missions, mostly because of its very high equivalent isotropically radiated power (EIRP). This caused problems in accurately evaluating the in-flight EIRP of the spacecraft, which led to difficulties in evaluating the quality of the HGA calibration data. These difficulties led to the development of additional measurement techniques that could be used for future high-power deep space missions.
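The headline figures quoted above can be sanity-checked with simple link arithmetic (a back-of-the-envelope sketch; only the 133 Gbit, 6 Mbps, and 10-hour numbers come from the text):

```python
# A 10-hour pass at the 6 Mbps peak rate bounds the possible return at
# rate x time; the 133 Gbit record therefore implies a lower average rate
# sustained over the pass.
PASS_HOURS = 10
PEAK_RATE_MBPS = 6
RECORD_GBIT = 133

upper_bound_gbit = PEAK_RATE_MBPS * 1e6 * PASS_HOURS * 3600 / 1e9
avg_rate_mbps = RECORD_GBIT * 1e9 / (PASS_HOURS * 3600) / 1e6

print(round(upper_bound_gbit))    # 216 Gbit ceiling at the peak rate
print(round(avg_rate_mbps, 2))    # ~3.69 Mbps average over the pass
```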
1992-05-01
[Report front-matter excerpt] Development, Testing, and Operation of a Large Suspended Ocean Measurement Structure for Deep-Ocean Use. Naval Research Laboratory, Ocean Acoustics and Technology Directorate, Stennis Space Center, MS 39529-5004; report number PR 91:132:253. Describes systems for developing, testing, and operating the structure, including a new, lightweight cable.
Maleti, O; Lugli, M; Perrin, M
2017-02-01
To identify which deep anatomical anomalies can explain variable hemodynamic outcomes in patients with superficial reflux associated with primary deep axial reflux who underwent isolated superficial vein ablation without improvement. This is a retrospective study of deep venous valve anomalies in patients who underwent superficial vein ablation for superficial and associated deep reflux. A group of 21 patients diagnosed with saphenous reflux associated with primary deep axial reflux underwent great saphenous vein ablation. In 17 patients the deep reflux was not abolished. In this subgroup, surgical exploration of the deep valve was carried out using venotomy for possible valve repair. Post-thrombotic lesions were discovered intra-operatively in four of the 17 subgroup patients, who underwent different surgical procedures. In 13 of the subgroup patients, primary valve incompetence was confirmed intra-operatively. In 11 cases the leaflets were asymmetrical and in only two were they symmetrical. After valvuloplasty, deep reflux was abolished in all 13 patients. Clinical improvement was obtained in 12/13 patients (92%). It is noteworthy that abolition of deep reflux was associated with significant improvement in air plethysmography data as well as with improvement in clinical status measured by CEAP class, VCSS, and the SF-36 questionnaire. Failure to correct deep axial reflux by superficial ablation in patients with superficial and associated primary deep axial reflux may be related to asymmetry of the leaflets of the incompetent deep venous valve. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Deep UV Raman spectroscopy for planetary exploration: The search for in situ organics
NASA Astrophysics Data System (ADS)
Abbey, William J.; Bhartia, Rohit; Beegle, Luther W.; DeFlores, Lauren; Paez, Veronica; Sijapati, Kripa; Sijapati, Shakher; Williford, Kenneth; Tuite, Michael; Hug, William; Reid, Ray
2017-07-01
Raman spectroscopy has emerged as a powerful, non-contact, non-destructive technique for detection and characterization of in situ organic compounds. Excitation using deep UV wavelengths (< 250 nm), in particular, offers the benefit of spectra obtained in a largely fluorescence-free region while taking advantage of signal-enhancing resonance Raman effects for key classes of organic compounds, such as the aromatics. In order to demonstrate the utility of this technique for planetary exploration and astrobiological applications, we interrogated three sets of samples using a custom-built Raman instrument equipped with a deep UV (248.6 nm) excitation source. The sample sets included: (1) the Mojave Mars Simulant, a well-characterized basaltic sample used as an analog for Martian regolith, in which we detected ∼0.04 wt% of condensed carbon; (2) a suite of organic (aromatic hydrocarbons, carboxylic acids, and amino acids) and astrobiologically relevant inorganic (sulfates, carbonates, phosphates, nitrates, and perchlorate) standards, many of which have no previously reported solid-phase deep UV Raman spectra in the literature; and (3) Mojave Mars Simulant spiked with a representative selection of these standards, at a concentration of 1 wt%, in order to investigate natural 'real world' matrix effects. We were able to resolve all of the standards tested at this concentration. Some compounds, such as the aromatic hydrocarbons, have especially strong signals due to resonance effects even when present in trace amounts. Phenanthrene, one of the aromatic hydrocarbons, was also examined at a concentration of 0.1 wt% and even at this level was found to have a strong signal-to-noise ratio. It should be noted that the instrument utilized in this study was designed to approximate the operation of a 'fieldable' spectrometer in order to test astrobiological applications both here on Earth and on current and future planetary missions.
It is the foundation of SHERLOC, an arm-mounted instrument recently selected by NASA to fly on the next rover mission to Mars in 2020.
Kim, D H; MacKinnon, T
2018-05-01
To identify the extent to which transfer learning from deep convolutional neural networks (CNNs), pre-trained on non-medical images, can be used for automated fracture detection on plain radiographs. The top layer of the Inception v3 network was re-trained using lateral wrist radiographs to produce a model for the classification of new studies as either "fracture" or "no fracture". The model was trained on a total of 11,112 images, produced by an eightfold data augmentation technique from an initial set of 1,389 radiographs (695 "fracture" and 694 "no fracture"). The training data set was split 80:10:10 into training, validation, and test groups, respectively. An additional 100 wrist radiographs, comprising 50 "fracture" and 50 "no fracture" images, were used for final testing and statistical analysis. The area under the receiver operating characteristic curve (AUC) for this test was 0.954. Setting the diagnostic cut-off at a threshold designed to maximise both sensitivity and specificity resulted in values of 0.9 and 0.88, respectively. The AUC scores for this test were comparable to the state of the art, providing proof of concept for transfer learning from CNNs in fracture detection on plain radiographs. This was achieved using only a moderate sample size. This technique is largely transferable and therefore has many potential applications in medical imaging, which may lead to significant improvements in workflow productivity and in clinical risk reduction. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
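Re-training only the top layer of a frozen, pre-trained network amounts to fitting a simple classifier on fixed feature vectors. The sketch below illustrates that idea with plain logistic regression on synthetic features; the dimensions and data are invented stand-ins for Inception v3 bottleneck activations, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 200, 32                        # stand-in for 2048-d bottleneck features
X = rng.standard_normal((n, d))
true_w = rng.standard_normal(d)
y = (X @ true_w > 0).astype(float)    # synthetic fracture / no-fracture labels

# Train the "top layer": logistic regression by gradient descent on log loss.
w, b, lr = np.zeros(d), 0.0, 0.5
for _ in range(800):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(round(acc, 2))                  # high training accuracy on separable data
```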
NASA Astrophysics Data System (ADS)
Reves-Sohn, R. A.; Singh, H.; Humphris, S.; Shank, T.; Jakuba, M.; Kunz, C.; Murphy, C.; Willis, C.
2007-12-01
Deep-sea hydrothermal fields on the Gakkel Ridge beneath the Arctic ice cap provide perhaps the best terrestrial analogue for volcanically-hosted chemosynthetic biological communities that may exist beneath the ice-covered ocean of Europa. In both cases the key enabling technologies are robotic (untethered) vehicles that can swim freely under the ice and the supporting hardware and software. The development of robotic technology for deep-sea research beneath ice-covered oceans thus has relevance to both polar oceanography and future astrobiological missions to Europa. These considerations motivated a technology development effort under the auspices of NASA's ASTEP program and NSF's Office of Polar Programs that culminated in the AGAVE expedition aboard the icebreaker Oden from July 1 - August 10, 2007. The scientific objective was to study hydrothermal processes on the Gakkel Ridge, which is a key target for global studies of deep-sea vent fields. We developed two new autonomous underwater vehicles (AUVs) for the project, and deployed them to search for vent fields beneath the ice. We conducted eight AUV missions (four to completion) during the 40-day long expedition, which also included ship-based bathymetric surveys, CTD/rosette water column surveys, and wireline photographic and sampling surveys of remote sections of the Gakkel Ridge. The AUV missions, which lasted 16 hours on average and achieved operational depths of 4200 meters, returned sensor data that showed clear evidence of hydrothermal venting, but for a combination of technical reasons and time constraints, the AUVs did not ultimately return images of deep-sea vent fields. Nevertheless we used our wireline system to obtain images and samples of extensive microbial mats that covered fresh volcanic surfaces on a newly discovered set of volcanoes. The microbes appear to be living in regions where reducing and slightly warm fluids are seeping through cracks in the fresh volcanic terrain.
These discoveries shed new light on the nature of volcanic and hydrothermal processes in the Arctic basin, and also demonstrate the importance of new technologies for advancing science beneath ice-covered oceans. Operationally, the AUV missions pushed the envelope of deep-sea technology. The recoveries were particularly difficult as it was necessary to have the vehicle find small pools of open water next to the ship, but in some cases the ice was in a state of regional compression such that no open water could be found or created. In these cases a well-calibrated, ship-based, short-baseline acoustic system was essential for successful vehicle recoveries. In all we were able to achieve a variety of operational and technological advances that provide stepping stones for future under-ice robotic missions, both on Earth and perhaps eventually on Europa.
Salvage of Combat Hindfoot Fractures in 2003-2014 UK Military.
Bennett, Philippa M; Stevenson, Thomas; Sargeant, Ian D; Mountain, Alistair; Penn-Barwell, Jowan G
2017-07-01
Hindfoot fractures pose a considerable challenge to military orthopaedic surgeons, as combat injuries are typically the result of energy transfers not seen in civilian practice. This study aimed to characterize the pattern of hindfoot injuries sustained by UK military casualties in recent conflicts, define the early amputation and infection rate, and identify factors associated with poor early outcomes. The UK Joint Theatre Trauma Registry was searched for British military casualties sustaining a hindfoot fracture in Iraq and Afghanistan between 2003 and 2014. Data on the injury pattern and management were obtained along with 18-month follow-up data. Statistical analysis was performed with the chi-square test and binomial logistic regression analysis. The threshold for significance was set at P < .05. One hundred fourteen patients sustained 134 hindfoot injuries. Eighteen-month follow-up was available for 92 patients (81%) and 114 hindfeet (85%). The calcaneus was fractured in 116 cases (87%): 54 (47%) were managed conservatively, 32 (28%) underwent K-wire fixation, and 30 (26%) underwent internal fixation. Nineteen patients (17%) required transtibial amputation during this time. A deep infection requiring operative treatment occurred in 13 cases (11%), with Staphylococcus aureus the most common infectious organism (46%). A deep infection was strongly associated with operative fracture management (P = .0016). When controlling for multiple variables, the presence of a deep infection was significantly associated with a requirement for amputation at 18 months (P = .023). There was no association between open fractures and a requirement for amputation at 18 months (P = .640), nor was conservative management associated with a requirement for amputation (P = .999). Thirty-six fractures (32%) required unplanned revision surgery within the first 18 months following salvage, of which 19 (53%) involved amputation.
A deep infection was the sole variable significantly associated with a requirement for amputation by 18 months. These results suggest that attempts at salvaging these injuries are at the limits of orthopaedic technical feasibility. Level III, comparative series.
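The association tests reported above are standard contingency-table statistics. As a generic illustration (the 2x2 counts below are hypothetical, not the study's raw data), the Pearson chi-square statistic can be computed directly:

```python
import numpy as np

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (no continuity correction)."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical counts: rows = operative / conservative management,
# columns = deep infection yes / no.
stat = chi_square_2x2([[12, 50], [1, 53]])
print(round(stat, 2))    # compare against 3.84, the P = .05 cutoff at 1 df
```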
Integrated piezoelectric actuators in deep drawing tools to reduce the try-out
NASA Astrophysics Data System (ADS)
Neugebauer, Reimund; Mainda, Patrick; Kerschner, Matthias; Drossel, Welf-Guntram; Roscher, Hans-Jürgen
2011-05-01
Tool making is a very time-consuming and expensive operation because many iteration loops are used to manually adjust tool components during the try-out process; trying out deep drawing tools accounts for about 30% of the total tooling costs. This is the reason why an active deep drawing tool was developed at the Fraunhofer Institute for Machine Tools and Forming Technology IWU, in cooperation with Audi and Volkswagen, to reduce costs and try-out times. The main difference between the active and conventional deep drawing tools is the use of piezoelectric actuators to control the forming process. The active tool idea, which is the main subject of this research, is presented together with the findings of experiments with the custom-built deep drawing tool. This experimental tool was designed according to production requirements and has been equipped with piezoelectric actuators that allow active pressure distribution on the sheet metal flange. The piezoelectric elements used are similar to those in piezo injector systems for modern diesel engines. In order to achieve the required force, the actuators are combined in a cluster that is embedded in the die of the deep drawing tool. One main objective of this work, reducing the time-consuming try-out period, has been achieved with the experimental tool: the actuators were used to set a static pressure distribution between the blankholder and the die. We present the findings of our analysis and the advantages of the active system over a conventional deep drawing tool. In addition to changing the static pressure distribution, the piezoelectric actuators can also be used to generate a dynamic pressure distribution during the forming process. As a result, the active tool has the potential to expand the forming limits, making it possible to manage forming restrictions caused by lightweight materials in the future.
Photometric redshifts for the next generation of deep radio continuum surveys - I. Template fitting
NASA Astrophysics Data System (ADS)
Duncan, Kenneth J.; Brown, Michael J. I.; Williams, Wendy L.; Best, Philip N.; Buat, Veronique; Burgarella, Denis; Jarvis, Matt J.; Małek, Katarzyna; Oliver, S. J.; Röttgering, Huub J. A.; Smith, Daniel J. B.
2018-01-01
We present a study of photometric redshift performance for galaxies and active galactic nuclei detected in deep radio continuum surveys. Using two multiwavelength data sets, over the NOAO Deep Wide Field Survey Boötes and COSMOS fields, we assess photometric redshift (photo-z) performance for a sample of ∼4500 radio continuum sources with spectroscopic redshifts relative to those of ∼63 000 non-radio-detected sources in the same fields. We investigate the performance of three photometric redshift template sets as a function of redshift, radio luminosity and infrared/X-ray properties. We find that no single template library is able to provide the best performance across all subsets of the radio-detected population, with variation in the optimum template set both between subsets and between fields. Through a hierarchical Bayesian combination of the photo-z estimates from all three template sets, we are able to produce a consensus photo-z estimate that equals or improves upon the performance of any individual template set.
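The idea of pooling estimates from several template libraries rather than trusting any single one can be illustrated very roughly. The sketch below just averages normalized redshift probability densities on a grid; the paper's actual method is a hierarchical Bayesian combination, and all numbers here are invented:

```python
import numpy as np

z_grid = np.linspace(0.0, 3.0, 301)          # redshift grid, step 0.01

def gaussian_pdz(z0, sigma):
    """Toy per-template P(z): a Gaussian, normalized to unit mass on the grid."""
    p = np.exp(-0.5 * ((z_grid - z0) / sigma) ** 2)
    return p / p.sum()

estimates = [gaussian_pdz(0.80, 0.10),       # template set A
             gaussian_pdz(0.90, 0.15),       # template set B
             gaussian_pdz(0.85, 0.20)]       # template set C

consensus = np.mean(estimates, axis=0)       # naive pooling of the three P(z)
consensus /= consensus.sum()
z_best = z_grid[np.argmax(consensus)]
print(round(z_best, 2))                      # near 0.8, pulled by all three
```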
A deep learning method for lincRNA detection using auto-encoder algorithm.
Yu, Ning; Yu, Zeng; Pan, Yi
2017-12-06
RNA sequencing (RNA-seq) enables scientists to develop novel data-driven methods for discovering more unidentified lincRNAs. Meanwhile, knowledge-based technologies are experiencing a potential revolution ignited by new deep learning methods. By scanning newly found RNA-seq data sets, scientists have found that: (1) the expression of lincRNAs appears to be regulated, that is, relevance exists along the DNA sequences; (2) lincRNAs contain some conserved patterns/motifs tethered together by non-conserved regions. These two lines of evidence motivate adopting knowledge-based deep learning methods for lincRNA detection. As with coding-region transcription, non-coding regions are split at transcriptional sites; however, regulatory RNAs rather than messenger RNAs are generated. That is, the transcribed RNAs participate in the biological process as regulatory units instead of generating proteins. Identifying these transcriptional regions within non-coding regions is the first step towards lincRNA recognition. The auto-encoder method achieves 100% and 92.4% prediction accuracy on transcription sites over the putative data sets. The experimental results also show the excellent performance of the predictive deep neural network on the lincRNA data sets compared with a support vector machine and a traditional neural network. In addition, the method is validated on the newly discovered lincRNA data set, and one unreported transcription site is found by feeding the whole annotated sequences through the deep learning machine, which indicates that the deep learning method has extensive ability for lincRNA prediction. The transcriptional sequences of lincRNAs are collected from the annotated human DNA genome data. Subsequently, a two-layer deep neural network is developed for lincRNA detection, which adopts the auto-encoder algorithm and utilizes different encoding schemes to obtain the best performance over intergenic DNA sequence data.
Driven by these newly annotated lincRNA data, deep learning methods based on the auto-encoder algorithm can exert their capability for knowledge learning, capturing useful features and the information correlation along DNA genome sequences for lincRNA detection. To our knowledge, this is the first application of deep learning techniques to identifying lincRNA transcription sequences.
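A minimal auto-encoder over one-hot-encoded DNA can be written in a few lines. Everything below (sequence length, hidden size, training schedule) is an invented toy, not the paper's two-layer network:

```python
import numpy as np

def one_hot(seq, alphabet="ACGT"):
    """Flatten a DNA string into a one-hot vector."""
    idx = {c: i for i, c in enumerate(alphabet)}
    v = np.zeros(len(seq) * len(alphabet))
    for i, c in enumerate(seq):
        v[i * len(alphabet) + idx[c]] = 1.0
    return v

rng = np.random.default_rng(1)
seqs = ["".join(rng.choice(list("ACGT"), 12)) for _ in range(40)]
X = np.stack([one_hot(s) for s in seqs])            # (40, 48)

d, h, lr = X.shape[1], 16, 0.5
W1 = rng.standard_normal((d, h)) * 0.1              # encoder weights
W2 = rng.standard_normal((h, d)) * 0.1              # decoder weights

def forward(X):
    H = np.tanh(X @ W1)                             # compressed code
    R = H @ W2                                      # linear reconstruction
    return H, R, np.mean((R - X) ** 2)

_, _, before = forward(X)
for _ in range(300):                                # gradient descent on MSE
    H, R, _ = forward(X)
    G = 2.0 * (R - X) / X.size
    gW2 = H.T @ G
    gW1 = X.T @ ((G @ W2.T) * (1.0 - H ** 2))
    W2 -= lr * gW2
    W1 -= lr * gW1
_, _, after = forward(X)
print(after < before)   # True: reconstruction error drops as the code learns
```

The learned hidden code, rather than the reconstruction itself, is what a downstream classifier would use to decide whether a region is a transcriptional site.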
Deep seafloor arrivals: an unexplained set of arrivals in long-range ocean acoustic propagation.
Stephen, Ralph A; Bolmer, S Thompson; Dzieciuch, Matthew A; Worcester, Peter F; Andrew, Rex K; Buck, Linda J; Mercer, James A; Colosi, John A; Howe, Bruce M
2009-08-01
Receptions from a ship-suspended source (in the band 50-100 Hz) to an ocean bottom seismometer (about 5000 m depth) and to the deepest element of a vertical hydrophone array (about 750 m above the seafloor), acquired on the 2004 Long-Range Ocean Acoustic Propagation Experiment in the North Pacific Ocean, are described. The ranges varied from 50 to 3200 km. In addition to predicted ocean acoustic arrivals and deep shadow zone arrivals (leaking below turning points), "deep seafloor arrivals," which are dominant on the seafloor geophone but absent or very weak on the hydrophone array, are observed. These deep seafloor arrivals are an unexplained set of arrivals in ocean acoustics, possibly associated with seafloor interface waves.
Deep-Space Optical Communications: Visions, Trends, and Prospects
NASA Technical Reports Server (NTRS)
Cesarone, R. J.; Abraham, D. S.; Shambayati, S.; Rush, J.
2011-01-01
Current key initiatives in deep-space optical communications are treated in terms of historical context, contemporary trends, and prospects for the future. An architectural perspective focusing on high-level drivers, systems, and related operations concepts is provided. Detailed subsystem and component topics are not addressed. A brief overview of past ideas and architectural concepts sets the stage for current developments. Current requirements that might drive a transition from radio frequencies to optical communications are examined. These drivers include mission demand for data rates and/or data volumes; spectrum to accommodate such data rates; and desired power, mass, and cost benefits. As is typical, benefits come with associated challenges. For optical communications, these include atmospheric effects, link availability, pointing, and background light. The paper describes how NASA's Space Communication and Navigation Office will respond to the drivers, achieve the benefits, and mitigate the challenges, as documented in its Optical Communications Roadmap. Some nontraditional architectures and operations concepts are advanced in an effort to realize benefits and mitigate challenges as quickly as possible. Radio frequency communications is considered as both a competitor to and a partner with optical communications. The paper concludes with some suggestions for two affordable first steps that can yet evolve into capable architectures that will fulfill the vision inherent in optical communications.
Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.
Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David
2012-01-01
Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.
Jahanshahi, Marjan
2013-01-01
Inhibition of inappropriate, habitual or prepotent responses is an essential component of executive control and a cornerstone of self-control. Via the hyperdirect pathway, the subthalamic nucleus (STN) receives inputs from frontal areas involved in inhibition and executive control. Evidence is reviewed from our own work and the literature suggesting that in Parkinson's disease (PD), deep brain stimulation (DBS) of the STN has an impact on executive control during attention-demanding tasks or in situations of conflict when habitual or prepotent responses have to be inhibited. These results support a role for the STN in an inter-related set of processes: switching from automatic to controlled processing, inhibitory and executive control, adjusting response thresholds and influencing speed-accuracy trade-offs. Such STN DBS-induced deficits in inhibitory and executive control may contribute to some of the psychiatric problems experienced by a proportion of operated cases after STN DBS surgery in PD. However, direct evidence linking STN DBS-induced deficits in inhibitory and executive control to the post-surgical psychiatric complications experienced by operated patients is not yet available and remains to be provided. PMID:24399941
Applying Deep Learning in Medical Images: The Case of Bone Age Estimation.
Lee, Jang Hyung; Kim, Kwang Gi
2018-01-01
A diagnostic need often arises to estimate bone age from X-ray images of the hand of a subject during the growth period. Together with measured physical height, such information may be used as an indicator of the subject's height growth prognosis. We present a way to apply the deep learning technique to medical image analysis, using hand bone age estimation as an example. Age estimation was formulated as a regression problem with hand X-ray images as input and estimated age as output. A set of hand X-ray images was used to form a training set with which a regression model was trained. An image preprocessing procedure is described which reduces image variations across data instances that are unrelated to age-wise variation. The use of Caffe, a deep learning tool, is demonstrated. A rather simple deep learning network was adopted and trained for tutorial purposes. A test set distinct from the training set was formed to assess the validity of the approach. The measured mean absolute difference value was 18.9 months, and the concordance correlation coefficient was 0.78. It is shown that the proposed deep learning-based neural network can be used to estimate a subject's age from hand X-ray images, which eliminates the need for tedious atlas look-ups in clinical environments and should improve the time and cost efficiency of the estimation process.
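The two evaluation metrics quoted above are easy to reproduce. Below, mean absolute difference and Lin's concordance correlation coefficient are computed on small hypothetical prediction data (not the study's results):

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical reference vs. predicted bone ages in months.
ref = np.array([60, 72, 84, 96, 108, 120, 132])
pred = np.array([58, 80, 90, 90, 100, 125, 140])

mae = np.mean(np.abs(pred - ref))
ccc = concordance_cc(ref, pred)
print(round(mae, 1))     # 6.1 months mean absolute difference
print(round(ccc, 2))     # 0.97 concordance
```

Unlike plain Pearson correlation, the CCC penalizes any systematic offset or scale difference between predictions and reference ages, which is why it is a natural choice for agreement studies like this one.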
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.
Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John
2017-02-01
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
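Choosing a binary cutoff with the Youden Index simply maximizes sensitivity + specificity - 1 over candidate thresholds. A small self-contained sketch (scores and labels below are invented, not the study's outputs):

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        sens = np.mean(pred[labels == 1])       # true positive rate
        spec = np.mean(~pred[labels == 0])      # true negative rate
        if sens + spec - 1.0 > best_j:
            best_j, best_t = sens + spec - 1.0, t
    return best_t, best_j

# Hypothetical network scores for frontal (1) vs. lateral (0) radiographs.
scores = [0.1, 0.2, 0.3, 0.35, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
t, j = youden_cutoff(scores, labels)
print(t, round(j, 2))   # 0.35 0.75
```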
NASA Technical Reports Server (NTRS)
1974-01-01
The progress is reported of Deep Space Network (DSN) research in the following areas: (1) flight project support, (2) spacecraft/ground communications, (3) station control and operations technology, (4) network control and processing, and (5) deep space stations. A description of the DSN functions and facilities is included.
30 CFR 203.44 - What administrative steps must I take to use the royalty suspension volume?
Code of Federal Regulations, 2011 CFR
2011-07-01
... REDUCTION IN ROYALTY RATES OCS Oil, Gas, and Sulfur General Royalty Relief for Drilling Deep Gas Wells on... in writing of your intent to begin drilling operations on all deep wells and phase 1 ultra-deep wells...
NASA Technical Reports Server (NTRS)
1977-01-01
Presented is Deep Space Network (DSN) progress in flight project support, tracking and data acquisition (TDA) research and technology, network engineering, hardware and software implementation, and operations.
NASA Technical Reports Server (NTRS)
1975-01-01
Summaries are given of Deep Space Network progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations.
The Deep Space Network in the Common Platform Era: A Prototype Implementation at DSS-13
NASA Technical Reports Server (NTRS)
Davarian, F.
2013-01-01
To enhance NASA's Deep Space Network (DSN), an effort is underway to improve network performance and simplify its operation and maintenance. This endeavor, known as the "Common Platform," has both short- and long-term objectives. The long-term work has not begun yet; however, the activity to realize the short-term goals has started. There are three goals for the long-term objective: 1. Convert the DSN into a digital network where signals are digitized at the output of the down converters at the antennas and are distributed via a digital IF switch to the processing platforms. 2. Employ a set of common hardware for signal processing applications, e.g., telemetry, tracking, radio science and Very Long Baseline Interferometry (VLBI). 3. Minimize in-house developments in favor of purchasing commercial off-the-shelf (COTS) equipment. The short-term goal is to develop a prototype of the above at NASA's experimental station known as DSS-13. This station consists of a 34m beam waveguide antenna with cryogenically cooled amplifiers capable of handling deep space research frequencies at S-, X-, and Ka-bands. Without the effort at DSS-13, the implementation of the long-term goal can potentially be risky because embarking on the modification of an operational network without prior preparations can, among other things, result in unwanted service interruptions. Not only are there technical challenges to address; full network implementation of the Common Platform concept also carries significant cost uncertainties. Therefore, a limited implementation at DSS-13 will contribute to risk reduction. The benefits of employing common platforms for the DSN are lower cost and improved operations resulting from ease of maintenance and a reduced number of spare parts. Increased flexibility for the user is another potential benefit. This paper will present the plans for DSS-13 implementation.
It will discuss key issues such as the Common Platform architecture, choice of COTS equipment, and the standard for radio frequency (RF) to digital interface.
Blom, Ashley W; Whitehouse, Michael R; Gooberman-Hill, Rachael
2015-01-01
Objectives Around 1% of patients who have a hip replacement have deep prosthetic joint infection (PJI) afterwards. PJI is often treated with antibiotics plus a single revision operation (1-stage revision), or antibiotics plus a 2-stage revision process involving more than 1 operation. This study aimed to characterise the impact and experience of PJI and treatment on patients, including comparison of 1-stage with 2-stage revision treatment. Design Qualitative semistructured interviews with patients who had undergone surgical revision treatment for PJI. Patients were interviewed between 2 weeks and 12 months postdischarge. Data were audio-recorded, transcribed, anonymised and analysed using a thematic approach, with 20% of transcripts double-coded. Setting Patients from 5 National Health Service (NHS) orthopaedic departments treating PJI in England and Wales were interviewed in their homes (n=18) or at hospital (n=1). Participants 19 patients participated (12 men, 7 women, age range 56–88 years, mean age 73.2 years). Results Participants reported receiving between 1 and 15 revision operations after their primary joint replacement. Analysis indicated that participants made sense of their experience through reference to 3 key phases: the period of symptom onset, the treatment period and protracted recovery after treatment. By conceptualising their experience in this way, and through themes that emerged in these periods, they conveyed the ordeal that PJI represented. Finally, in light of the challenges of PJI, they described the need for support in all of these phases. 2-stage revision had a greater impact on participants’ mobility and brought further burdens associated with additional complications. Conclusions Deep PJI impacted on all aspects of patients’ lives. 2-stage revision had a greater impact than 1-stage revision on participants’ well-being because the time between revision procedures meant long periods of immobility and related psychological distress.
Participants expressed a need for more psychological and rehabilitative support during treatment and long-term recovery. PMID:26644124
Responsible vendors, intelligent consumers: Silk Road, the online revolution in drug trading.
Van Hout, Marie Claire; Bingham, Tim
2014-03-01
Silk Road is located on the Deep Web and provides an anonymous transacting infrastructure for the retail of drugs and pharmaceuticals. Members are attracted to the site due to protection of identity by screen pseudonyms, variety and quality of product listings, selection of vendors based on reviews, reduced personal risks, stealth of product delivery, development of personal connections with vendors in stealth modes and forum activity. The study aimed to explore vendor accounts of Silk Road as retail infrastructure. A single and holistic case study with embedded units approach (Yin, 2003) was chosen to explore the accounts of vendor subunits situated within the Silk Road marketplace. Vendors (n=10) completed an online interview via the direct message facility and via Tor mail. Vendors described themselves as 'intelligent and responsible' consumers of drugs. Decisions to commence vending operations on the site centred on simplicity in setting up vendor accounts, and opportunity to operate within a low risk, high traffic, high mark-up, secure and anonymous Deep Web infrastructure. The embedded online culture of harm reduction ethos appealed to them in terms of the responsible vending and use of personally tested high quality products. The professional approach to running their Silk Road businesses and dedication to providing a quality service was characterised by professional advertising of quality products, professional communication and visibility on forum pages, speedy dispatch of slightly overweight products, competitive pricing, good stealth techniques and efforts to avoid customer disputes. Vendors appeared content with a fairly constant buyer demand and described a relatively competitive market between small and big time market players. Concerns were evident with regard to Bitcoin instability. The greatest threat to Silk Road and other sites operating on the Deep Web is not law enforcement or market dynamics; it is technology itself.
Copyright © 2013 Elsevier B.V. All rights reserved.
Environmental projects. Volume 7: Environmental resources document
NASA Technical Reports Server (NTRS)
Kushner, Len; Kroll, Glenn
1988-01-01
The Goldstone Deep Space Communications Complex (GDSCC) in Barstow, California, is part of the NASA Deep Space Network, one of the world's largest and most sensitive scientific telecommunications and radio navigation networks. Goldstone is managed, directed and operated by the Jet Propulsion Laboratory of Pasadena, California. The GDSCC includes five distinct operational sites: Echo, Venus, Mars, Apollo, and Mojave Base. Within each site is a Deep Space Station (DSS), consisting of a large dish antenna and its support facilities. As required by NASA directives concerning the implementation of the National Environmental Policy Act, each NASA field installation is to publish an Environmental Resources Document describing the current environment at the installation, including any adverse effects that NASA operations may have on the local environment.
Deep-sea bioluminescence blooms after dense water formation at the ocean surface.
Tamburini, Christian; Canals, Miquel; Durrieu de Madron, Xavier; Houpert, Loïc; Lefèvre, Dominique; Martini, Séverine; D'Ortenzio, Fabrizio; Robert, Anne; Testor, Pierre; Aguilar, Juan Antonio; Samarai, Imen Al; Albert, Arnaud; André, Michel; Anghinolfi, Marco; Anton, Gisela; Anvar, Shebli; Ardid, Miguel; Jesus, Ana Carolina Assis; Astraatmadja, Tri L; Aubert, Jean-Jacques; Baret, Bruny; Basa, Stéphane; Bertin, Vincent; Biagi, Simone; Bigi, Armando; Bigongiari, Ciro; Bogazzi, Claudio; Bou-Cabo, Manuel; Bouhou, Boutayeb; Bouwhuis, Mieke C; Brunner, Jurgen; Busto, José; Camarena, Francisco; Capone, Antonio; Cârloganu, Christina; Carminati, Giada; Carr, John; Cecchini, Stefano; Charif, Ziad; Charvis, Philippe; Chiarusi, Tommaso; Circella, Marco; Coniglione, Rosa; Costantini, Heide; Coyle, Paschal; Curtil, Christian; Decowski, Patrick; Dekeyser, Ivan; Deschamps, Anne; Donzaud, Corinne; Dornic, Damien; Dorosti, Hasankiadeh Q; Drouhin, Doriane; Eberl, Thomas; Emanuele, Umberto; Ernenwein, Jean-Pierre; Escoffier, Stéphanie; Fermani, Paolo; Ferri, Marcelino; Flaminio, Vincenzo; Folger, Florian; Fritsch, Ulf; Fuda, Jean-Luc; Galatà, Salvatore; Gay, Pascal; Giacomelli, Giorgio; Giordano, Valentina; Gómez-González, Juan-Pablo; Graf, Kay; Guillard, Goulven; Halladjian, Garadeb; Hallewell, Gregory; van Haren, Hans; Hartman, Joris; Heijboer, Aart J; Hello, Yann; Hernández-Rey, Juan Jose; Herold, Bjoern; Hößl, Jurgen; Hsu, Ching-Cheng; de Jong, Marteen; Kadler, Matthias; Kalekin, Oleg; Kappes, Alexander; Katz, Uli; Kavatsyuk, Oksana; Kooijman, Paul; Kopper, Claudio; Kouchner, Antoine; Kreykenbohm, Ingo; Kulikovskiy, Vladimir; Lahmann, Robert; Lamare, Patrick; Larosa, Giuseppina; Lattuada, Dario; Lim, Gordon; Presti, Domenico Lo; Loehner, Herbert; Loucatos, Sotiris; Mangano, Salvatore; Marcelin, Michel; Margiotta, Annarita; Martinez-Mora, Juan Antonio; Meli, Athina; Montaruli, Teresa; Moscoso, Luciano; Motz, Holger; Neff, Max; Nezri, Emma Nuel; Palioselitis, Dimitris; Păvălaş, 
Gabriela E; Payet, Kevin; Payre, Patrice; Petrovic, Jelena; Piattelli, Paolo; Picot-Clemente, Nicolas; Popa, Vlad; Pradier, Thierry; Presani, Eleonora; Racca, Chantal; Reed, Corey; Riccobene, Giorgio; Richardt, Carsten; Richter, Roland; Rivière, Colas; Roensch, Kathrin; Rostovtsev, Andrei; Ruiz-Rivas, Joaquin; Rujoiu, Marius; Russo, Valerio G; Salesa, Francisco; Sánchez-Losa, Augustin; Sapienza, Piera; Schöck, Friederike; Schuller, Jean-Pierre; Schussler, Fabian; Shanidze, Rezo; Simeone, Francesco; Spies, Andreas; Spurio, Maurizio; Steijger, Jos J M; Stolarczyk, Thierry; Taiuti, Mauro G F; Toscano, Simona; Vallage, Bertrand; Van Elewyck, Véronique; Vannoni, Giulia; Vecchi, Manuela; Vernin, Pascal; Wijnker, Guus; Wilms, Jorn; de Wolf, Els; Yepes, Harold; Zaborov, Dmitry; De Dios Zornoza, Juan; Zúñiga, Juan
2013-01-01
The deep ocean is the largest and least known ecosystem on Earth. It hosts numerous pelagic organisms, most of which are able to emit light. Here we present a unique data set consisting of a 2.5-year long record of light emission by deep-sea pelagic organisms, measured from December 2007 to June 2010 at the ANTARES underwater neutrino telescope in the deep NW Mediterranean Sea, jointly with synchronous hydrological records. This is the longest continuous time-series of deep-sea bioluminescence ever recorded. Our record reveals several weeks long, seasonal bioluminescence blooms with light intensity up to two orders of magnitude higher than background values, which correlate to changes in the properties of deep waters. Such changes are triggered by the winter cooling and evaporation experienced by the upper ocean layer in the Gulf of Lion that leads to the formation and subsequent sinking of dense water through a process known as "open-sea convection". It episodically renews the deep water of the study area and conveys fresh organic matter that fuels the deep ecosystems. Luminous bacteria most likely are the main contributors to the observed deep-sea bioluminescence blooms. Our observations demonstrate a consistent and rapid connection between deep open-sea convection and bathypelagic biological activity, as expressed by bioluminescence. In a setting where dense water formation events are likely to decline under global warming scenarios enhancing ocean stratification, in situ observatories become essential as environmental sentinels for the monitoring and understanding of deep-sea ecosystem shifts.
Orion Underway Recovery Test 5 (URT-5)
2016-10-29
NASA, contractor and U.S. Navy personnel are on the deck of the USS San Diego as the sun sets on the fourth day of Underway Recovery Test 5 in the Pacific Ocean off the coast of California. NASA's Ground Systems Development and Operations Program and the U.S. Navy practiced retrieving and securing a test version of the Orion crew module in the well deck of the ship using tethers and a winch system to prepare for recovery of Orion on its return from deep space missions. The testing will allow the team to demonstrate and evaluate recovery processes, procedures, hardware and personnel in open waters. Orion is the exploration spacecraft designed to carry astronauts to destinations not yet explored by humans, including an asteroid and NASA's Journey to Mars. It will have emergency abort capability, sustain the crew during space travel and provide safe re-entry from deep space return velocities. Orion is scheduled to launch on NASA's Space Launch System in late 2018. For more information, visit http://www.nasa.gov/orion.
Development of an Ion Thruster and Power Processor for New Millennium's Deep Space 1 Mission
NASA Technical Reports Server (NTRS)
Sovey, James S.; Hamley, John A.; Haag, Thomas W.; Patterson, Michael J.; Pencil, Eric J.; Peterson, Todd T.; Pinero, Luis R.; Power, John L.; Rawlin, Vincent K.; Sarmiento, Charles J.;
1997-01-01
The NASA Solar Electric Propulsion Technology Applications Readiness Program (NSTAR) will provide a single-string primary propulsion system to NASA's New Millennium Deep Space 1 Mission which will perform comet and asteroid flybys in the years 1999 and 2000. The propulsion system includes a 30-cm diameter ion thruster, a xenon feed system, a power processing unit, and a digital control and interface unit. A total of four engineering model ion thrusters, three breadboard power processors, and a controller have been built, integrated, and tested. An extensive set of development tests has been completed along with thruster design verification tests of 2000 h and 1000 h. An 8000 h Life Demonstration Test is ongoing and has successfully demonstrated more than 6000 h of operation. In situ measurements of accelerator grid wear are consistent with grid lifetimes well in excess of the 12,000 h qualification test requirement. Flight hardware is now being assembled in preparation for integration, functional, and acceptance tests.
NASA Astrophysics Data System (ADS)
Tynan, M. C.; Russell, G. P.; Perry, F.; Kelley, R.; Champenois, S. T.
2017-12-01
This global survey presents a synthesis of notable geotechnical and engineering information, reflected in four interactive map layers covering: 1) selected deep mines and shafts; 2) existing, considered, or planned deep underground radioactive waste management studies, sites, or disposal facilities; 3) deep large-diameter boreholes; and 4) underground physics laboratories and facilities from around the world. These data are intended to facilitate user access to basic information and references regarding deep underground "facilities", history, activities, and plans. In general, the interactive maps and database [http://gis.inl.gov/globalsites/] provide each facility's approximate site location, geology, and engineered features (e.g.: access, geometry, depth, diameter, year of operations, groundwater, lithology, host unit name and age, basin; operator, management organization, geographic data, nearby cultural features, other). Although the survey is not all-encompassing, it is a comprehensive review of many of the significant existing and historical underground facilities discussed in the literature addressing radioactive waste management and deep mined geologic disposal safety systems. The global survey is intended to support and inform: 1) interested parties and decision makers; 2) radioactive waste disposal and siting option evaluations; and 3) safety case development, as a communication tool applicable to any mined geologic disposal facility and as a demonstration of the historical and current engineering and geotechnical capabilities available for use in deep underground facility siting, planning, construction, operations, and monitoring.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Workers at Astrotech Space Operations in Titusville, Fla., begin fueling operations of the Deep Impact spacecraft, seen wrapped in a protective cover in the background. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
Deep-UV Based Acousto-Optic Tunable Filter for Spectral Sensing Applications
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.
2006-01-01
In this paper, recent progress made in the development of quartz and KDP crystal based acousto-optic tunable filters (AOTFs) is presented. These AOTFs are developed for operation over deep-UV to near-UV wavelengths of 190 nm to 400 nm. Preliminary output performance measurements of the quartz AOTF and design specifications of the KDP AOTF are presented. At 355 nm, the quartz AOTF device offered approx. 15% diffraction efficiency with a passband full-width-half-maximum (FWHM) of less than 0.0625 nm. Further characterization of quartz AOTF devices at deep-UV wavelengths is progressing. The hermetic packaging of the KDP AOTF is nearing completion. The solid-state optical sources being used for excitation include nonlinear optics based high-energy tunable UV transmitters that operate around 320 nm and 308 nm wavelengths, and a tunable deep-UV laser operating over 193 nm to 210 nm. These AOTF devices have been developed as turn-key devices primarily for space-based chemical and biological sensing applications using laser-induced fluorescence and resonance Raman techniques.
NASA Technical Reports Server (NTRS)
1977-01-01
A Deep Space Network progress report is presented dealing with in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations.
NASA Astrophysics Data System (ADS)
Merritt, Donald R.; Cardesin Moinelo, Alejandro; Marin Yaseli de la Parra, Julia; Breitfellner, Michel; Blake, Rick; Castillo Fraile, Manuel; Grotheer, Emmanuel; Martin, Patrick; Titov, Dmitri
2018-05-01
This paper summarizes the changes required to the science planning of the Mars Express spacecraft during the second half of 2017, a very restrictive period that combined low power, low data rates, and deep eclipses, imposing severe constraints on science operations. Under these constraints, the ESAC Mars Express science planning team worked closely with the ESOC flight control team and all science experiment teams to maintain a minimal level of science operations throughout this difficult period. This preserved the integrity and continuity of the long-term science observations, which are a hallmark and highlight of such long-lived missions.
Ziatdinov, Maxim; Dyck, Ondrej; Maksov, Artem; ...
2017-12-07
Recent advances in scanning transmission electron and scanning probe microscopies have opened unprecedented opportunities for probing materials' structural parameters and various functional properties in real space with angstrom-level precision. This progress has been accompanied by an exponential increase in the size and quality of datasets produced by microscopic and spectroscopic experimental techniques. These developments necessitate adequate methods for extracting relevant physical and chemical information from large datasets for which a priori information on the structures of various atomic configurations and lattice defects is limited or absent. Here we demonstrate an application of deep neural networks to extracting information from atomically resolved images, including the location of atomic species and the type of defects. We develop a “weakly-supervised” approach that uses information on the coordinates of all atomic species in the image, extracted via a deep neural network, to identify a rich variety of defects that are not part of the initial training set. We further apply our approach to interpret complex atomic and defect transformations, including switching between different coordinations of silicon dopants in graphene as a function of time, formation of a peculiar silicon dimer with mixed 3-fold and 4-fold coordination, and the motion of a molecular “rotor”. In conclusion, this deep learning based approach resembles the logic of a human operator, but can be scaled up, leading to a significant shift in the way information is extracted and analyzed from raw experimental data.
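The localization step described above, turning a network's per-pixel output into atom coordinates, is commonly done by taking local maxima of the probability map. This is a generic NumPy sketch of that post-processing, not the authors' code:

```python
import numpy as np

def atom_coordinates(prob_map, threshold=0.5):
    # A pixel counts as an atom-centre detection when it exceeds
    # `threshold` and is a strict local maximum of its 3x3 neighbourhood.
    p = np.pad(prob_map, 1, constant_values=-np.inf)
    centre = p[1:-1, 1:-1]
    is_max = np.ones_like(centre, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = p[1 + di:p.shape[0] - 1 + di, 1 + dj:p.shape[1] - 1 + dj]
            is_max &= centre > shifted
    return np.argwhere(is_max & (centre > threshold))  # (row, col) pairs
```

In practice the returned pixel coordinates would then feed the defect-identification stage; a real pipeline would also merge detections closer than one atomic spacing.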
Lam, D L; Mitsumori, L M; Neligan, P C; Warren, B H; Shuman, W P; Dubinsky, T J
2012-12-01
Autologous breast reconstruction with deep inferior epigastric artery (DIEA) perforator flaps has become the mainstay of breast reconstructive surgery. CT angiography and three-dimensional image post-processing can depict the number, size, course, and location of the DIEA perforating arteries for pre-operative selection of the best artery to use for the tissue flap. Knowledge of the location and selection of the optimal perforating artery shortens operative times and decreases patient morbidity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlanger, S.O.
Prior to 1968, ooids had not been described from shallow-water carbonate complexes deposited in atoll, seamount, or guyot settings in the Pacific basin. This apparent lack of an oolite facies in the Pacific was puzzling, considering the abundance of ooids in modern Bahamian settings and in the Phanerozoic record in general. Since 1968, Deep Sea Drilling Project operations, marine seismic stratigraphic studies, dredging on drowned atolls, and field studies of an emergent atoll have revealed the presence of a Cretaceous oolite limestone atop Ita Maitai Guyot, Paleocene ooids on Koko Seamount, late Paleocene to middle Eocene ooids on Ojin Seamount, Eocene ooids on Harrie Guyot, and Holocene oolite limestone on Malden Island. At Ita Maitai Guyot the oolite limestone overlies normal lagoon sediments and is overlain by deep-water pelagic carbonate. At Malden Island, which is an emergent atoll, 3550-year-old oolite limestone overlies a 125,000-year-old reef complex. At Harrie Guyot and at Koko and Ojin Seamounts, ooids are associated with drowned atoll reef and lagoon complexes. The paleolatitude of deposition of the oolite facies lay between 5°S and 18°N. In these settings the formation of the oolite facies was apparently related to a rapid rise in sea level that caused flooding of an antecedent reef complex which failed to keep up with the rise in sea level. In Pacific basin environments the oolite facies is a minor and temporally ephemeral one, which accounts for its scarcity in the stratigraphic record from this region.
Strong motion from surface waves in deep sedimentary basins
Joyner, W.B.
2000-01-01
It is widely recognized that long-period surface waves generated by conversion of body waves at the boundaries of deep sedimentary basins make an important contribution to strong ground motion. The factors controlling the amplitude of such motion, however, are not widely understood. A study of pseudovelocity response spectra of strong-motion records from the Los Angeles Basin shows that late-arriving surface waves with group velocities of about 1 km/sec dominate the ground motion for periods of 3 sec and longer. The rate of amplitude decay for these waves is less than for the body waves and depends significantly on period, with smaller decay for longer periods. The amplitude can be modeled by the equation log y = f(M, R_E) + c + b*R_B, where y is the pseudovelocity response, f(M, R_E) is an attenuation relation based on a general strong-motion data set, M is moment magnitude, R_E is the distance from the source to the edge of the basin, R_B is the distance from the edge of the basin to the recording site, and b and c are parameters fit to the data. The equation gives values larger by as much as a factor of 3 than given by the attenuation relationships based on general strong-motion data sets for the same source-site distance. It is clear that surface waves need to be taken into account in the design of long-period structures in deep sedimentary basins. The ground-motion levels specified by the earthquake provisions of current building codes, in California at least, accommodate the long-period ground motions from basin-edge-generated surface waves for periods of 5 sec and less and earthquakes with moment magnitudes of 7.5 or less located more than 20 km outside the basin. There may be problems at longer periods and for earthquakes located closer to the basin edge. The results of this study suggest that anelastic attenuation may need to be included in attempts to model long-period motion in deep sedimentary basins.
To obtain better data on surface waves in the future, operators of strong-motion networks should take special care for the faithful recording of the long-period components of ground motion. It will also be necessary to insure that at least some selected recorders, once triggered, continue to operate for a time sufficient for the surface waves to traverse the basin. With velocities of about 1 km/sec, that time will be as long as 100 sec for a basin the size of the Los Angeles Basin.
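The two-term distance model and the recording-duration argument above can be sketched numerically. The base relation f and the constants b and c below are made-up placeholders for illustration, not values fitted in the study:

```python
import math

def log_pseudovelocity(M, R_E, R_B, b=-0.002, c=0.3):
    # log y = f(M, R_E) + c + b * R_B: attenuation to the basin edge
    # (distance R_E), then a separate, slower decay across the basin
    # (distance R_B).  f here is a hypothetical stand-in relation.
    f = 0.5 * M - math.log10(R_E + 10.0)
    return f + c + b * R_B

# Surface waves cross the basin at ~1 km/sec, so a basin on the scale
# of the Los Angeles Basin (~100 km) takes about 100 sec to traverse,
# which is why triggered recorders must keep running that long.
traverse_time_s = 100.0 / 1.0
```

Because b multiplies only the within-basin distance R_B, the in-basin decay rate is decoupled from the conventional source-to-site attenuation, which is the structural point of the model.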
NASA Technical Reports Server (NTRS)
1977-01-01
The various systems and subsystems are discussed for the Deep Space Network (DSN). A description of the DSN is presented along with mission support, program planning, facility engineering, implementation and operations.
Automating Mid- and Long-Range Scheduling for NASA's Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Sorensen, Sugi; Tay, Peter; Carruth, Butch; Coffman, Adam; Wallace, Mike
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S³. This system is architected as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users who utilize the DSN (representing 37 projects including international partners and ground-based science and calibration users). The initial implementation of S³ is complete and the system has been operational since July 2011. S³ has been used for negotiating schedules since April 2011, including the baseline schedules for three launching missions in late 2011. S³ supports a distributed scheduling model, in which changes can potentially be made by multiple users based on multiple schedule "workspaces" or versions of the schedule. This has led to several challenges in the design of the scheduling database, and of a change proposal workflow that allows users to concur with or to reject proposed schedule changes, and then counter-propose with alternative or additional suggested changes. This paper describes some key aspects of the S³ system and lessons learned from its operational deployment to date, focusing on the challenges of multi-user collaborative scheduling in a practical and mission-critical setting. We will also describe the ongoing project to extend S³ to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
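The concur/reject/counter-propose workflow described above can be caricatured as a small state machine. All class and field names here are illustrative assumptions, not the real S³ data model or API:

```python
from enum import Enum

class ProposalState(Enum):
    PROPOSED = "proposed"
    CONCURRED = "concurred"
    REJECTED = "rejected"
    COUNTERED = "countered"

class ChangeProposal:
    """Toy model of one schedule-change proposal circulating among
    DSN users; every affected mission must weigh in before it sticks."""

    def __init__(self, proposer, description):
        self.proposer = proposer
        self.description = description
        self.state = ProposalState.PROPOSED
        self.responses = {}

    def respond(self, user, concur):
        # Record one user's response; the change is accepted only when
        # every responding user has concurred, else it is rejected.
        self.responses[user] = concur
        if all(self.responses.values()):
            self.state = ProposalState.CONCURRED
        else:
            self.state = ProposalState.REJECTED

    def counter(self, user, description):
        # A dissenting user may counter-propose, spawning a fresh
        # proposal that restarts the negotiation.
        self.state = ProposalState.COUNTERED
        return ChangeProposal(user, description)
```

A real multi-user system would additionally version the schedule per workspace and resolve concurrent edits, which is exactly the database-design challenge the abstract highlights.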
Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.
Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P
2017-10-01
In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.
Deep SOMs for automated feature extraction and classification from big data streaming
NASA Astrophysics Data System (ADS)
Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad
2017-03-01
In this paper, we propose a deep self-organizing map model (Deep-SOMs) for automated feature extraction and learning from streaming big data, leveraging the Spark framework for real-time stream handling and highly parallel data processing. The deep SOM architecture is based on the notion of abstraction (patterns are automatically extracted from the raw data, from less to more abstract). The proposed model consists of three hidden self-organizing layers, an input layer, and an output layer. Each layer is made up of a multitude of SOMs, with each map focusing only on a local sub-region of the input image. Each layer then aggregates this local information into more global information in the next higher layer. The proposed Deep-SOMs model is unique in terms of its layer architecture and its SOM sampling and learning methods. During the learning stage, we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large data sets such as the Leukemia and SRBCT datasets. Comparative results show that the Deep-SOMs model performs better than many existing algorithms for image classification.
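A single self-organizing layer of the kind stacked above can be sketched with plain NumPy. This is a generic SOM training loop for illustration only, with assumed grid size and learning schedule; the paper's actual sampling and Spark-based parallelization are not reproduced.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Train one SOM layer; a Deep-SOM stacks such layers, each map
    seeing only a local sub-region of the previous layer's output."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    # Grid coordinates, used by the Gaussian neighbourhood function
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3  # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            nb = np.exp(-dist2 / (2 * sigma ** 2))
            weights += lr * nb[:, None] * (x - weights)
    return weights

def bmu_index(weights, x):
    """Feature passed to the next layer: index of the best-matching unit."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

Stacking then amounts to mapping each local patch through `bmu_index` (or BMU coordinates) and training the next layer's maps on those abstracted features.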
DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.
Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou
2016-07-07
In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures and training strategies, and by adding and removing key components in the detection pipeline, a set of models with large diversity is obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state of the art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.
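The core idea of def-pooling, scoring a part at its best displaced position while penalizing the displacement, can be sketched in one dimension. The quadratic penalty form and parameter names here are illustrative assumptions; the paper's layer operates on 2-D feature maps with learned penalty terms.

```python
import numpy as np

def def_pool_1d(scores, anchor, radius, penalty):
    """Deformation-constrained pooling (1-D sketch): take the best part
    score within `radius` of the anchor position, minus a quadratic
    displacement penalty. Returns (pooled_score, displacement)."""
    lo = max(0, anchor - radius)
    hi = min(len(scores), anchor + radius + 1)
    best, best_d = -np.inf, 0
    for p in range(lo, hi):
        d = p - anchor
        s = scores[p] - penalty * d * d  # score minus deformation cost
        if s > best:
            best, best_d = s, d
    return best, best_d
```

A small penalty lets the part drift toward a strong response nearby; a large penalty pins it to the anchor, recovering ordinary local pooling.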
Deep-sea tsunami deposits in the Miocene Nishizaki Formation of Boso Peninsula, Central Japan
NASA Astrophysics Data System (ADS)
Lee, I. T.; Ogawa, Y.
2003-12-01
Many sets of deep-sea deposits considered to have been formed by the return flow of tsunami were found in the middle Miocene Nishizaki Formation of Boso Peninsula, Central Japan, which is located near a convergent plate boundary at present as well as in the past, and has frequently been struck by tsunami. The characteristics of the tsunami deposits in the Nishizaki Formation are as follows. Each set consists of 10-20 beds with parallel laminations formed under the upper plane-bed flow regime, composed of alternating pumiceous beds of white and black color. The white beds comprise coarse sands and pebbles with thicknesses of 5-10 cm. In contrast, the black beds are made of silts with thicknesses of less than 1 cm. Among the 10-20 beds, the grain size is generally coarsest in the middle part of the set. The uppermost bed of each set shows cross-lamination formed under the lower flow regime, gradually changing into a finer graded bed on top. In places, the lower part of the parallel-laminated interval is associated with an underlying debrite or turbidite bed. Each set of these parallel-laminated beds is lenticular in shape, thinning to the east, consistent with the generally eastward paleocurrent indicated by the cross-lamination at the top. Such sedimentary characteristics differ from any event deposits reported from the deep sea, but are similar to the deep-sea K/T boundary deposits of the Caribbean region. Statistically, tsunami waves occur a total of 12-13 times, among which the 5th-6th waves are known to be the strongest. The interval between successive return flows is known to be 30-40 minutes, enough time to settle the finer clastics at the top of each bed. The parallel-laminated intervals commonly show dish structure and lack trace fossils, indicating rather rapid deposition of the whole set. Consequently, the sedimentary characteristics of the parallel-laminated beds of the Nishizaki Formation are attributed to the return flow of tsunami into the deep sea.
We conclude that such deep-sea parallel-laminated deposits of pumiceous clastics occur just after a large earthquake, which forms the debrite or turbidite at the lowermost part of each set.
NASA Integrated Space Communications Network
NASA Technical Reports Server (NTRS)
Tai, Wallace; Wright, Nate; Prior, Mike; Bhasin, Kul
2012-01-01
The NASA Integrated Network for Space Communications and Navigation (SCaN) has been in the definition phase since 2010. It is intended to integrate NASA's three existing network elements, i.e., the Space Network, Near Earth Network, and Deep Space Network, into a single network. In addition to the technical merits, the primary purpose of the Integrated Network is to achieve a level of operating cost efficiency significantly higher than today's. Salient features of the Integrated Network include (a) a central system element that performs service management functions and user mission interfaces for service requests; (b) a set of common service execution equipment deployed at all stations that provides return, forward, and radiometric data processing and delivery capabilities; (c) network monitor and control operations for the entire integrated network conducted remotely and centrally at a prime-shift site, rotating among three sites globally (a follow-the-sun approach); and (d) common network monitor and control software deployed at all three network elements that supports the follow-the-sun operations.
Jones, David T; Kandathil, Shaun M
2018-04-26
In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov. d.t.jones@ucl.ac.uk.
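The pairwise covariance input described above can be computed directly from an alignment. This is a minimal sketch under stated assumptions: a 21-letter alphabet (20 amino acids plus gap) and no sequence weighting, which the actual DeepCov pipeline may apply.

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"  # 20 amino acids + gap (assumed encoding)

def covariance_features(msa):
    """Pair covariance from a multiple sequence alignment:
    cov_ij(a, b) = f_ij(a, b) - f_i(a) * f_j(b).
    Returns an (L, L, 21, 21) array, i.e. the 441 channels per residue
    pair that a DeepCov-style network consumes. No sequence weighting."""
    idx = {c: k for k, c in enumerate(ALPHABET)}
    X = np.array([[idx[c] for c in seq] for seq in msa])   # (N, L)
    N = X.shape[0]
    onehot = np.eye(len(ALPHABET))[X]                      # (N, L, 21)
    f_i = onehot.mean(axis=0)                              # single-site freqs
    # f_ij(a, b): joint frequency of a at column i and b at column j
    f_ij = np.einsum('nia,njb->ijab', onehot, onehot) / N
    return f_ij - f_i[:, None, :, None] * f_i[None, :, None, :]

cov = covariance_features(["AC-A", "ACGA", "GC-A"])
print(cov.shape)  # (4, 4, 21, 21)
```

A fully convolutional network then slides over the (L, L) plane of these 441-channel "pixels" to emit a contact probability per residue pair.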
The Great Observatories Origins Deep Survey (GOODS): Overview and Status
NASA Astrophysics Data System (ADS)
Hook, R. N.; GOODS Team
2002-12-01
GOODS is a very large project to gather deep imaging data and spectroscopic followup of two fields, the Hubble Deep Field North (HDF-N) and the Chandra Deep Field South (CDF-S), with both space and ground-based instruments to create an extensive multiwavelength public data set for community research on the distant Universe. GOODS includes a SIRTF Legacy Program (PI: Mark Dickinson) and a Hubble Treasury Program of ACS imaging (PI: Mauro Giavalisco). The ACS imaging was also optimized for the detection of high-z supernovae which are being followed up by a further target of opportunity Hubble GO Program (PI: Adam Riess). The bulk of the CDF-S ground-based data presently available comes from an ESO Large Programme (PI: Catherine Cesarsky) which includes both deep imaging and multi-object followup spectroscopy. This is currently complemented in the South by additional CTIO imaging. Currently available HDF-N ground-based data forming part of GOODS includes NOAO imaging. Although the SIRTF part of the survey will not begin until later in the year, the ACS imaging is well advanced and there is also a huge body of complementary ground-based imaging and some follow-up spectroscopy which is already publicly available. We summarize the current status of GOODS, give an overview of the data products currently available, and present the timescales for the future. Many early science results from the survey are presented in other GOODS papers at this meeting. Support for the HST GOODS program presented here and in companion abstracts was provided by NASA through grant number GO-9425 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny
2018-02-01
Deep-learning models are highly parameterized, causing difficulty in inference and transfer learning. We propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT) while maintaining the classification accuracy. Two-stage transfer learning was used to adapt the ImageNet-trained DCNN to mammography and then to DBT. In the first stage, transfer learning from the ImageNet-trained DCNN was performed using mammography data. In the second stage, the mammography-trained DCNN was trained on the DBT data using feature extraction from the fully connected layer, recursive feature elimination, and random forest classification. The layered pathway evolution encapsulates the stages from feature extraction through classification to compress the DCNN. A genetic algorithm was used in an iterative approach with tournament selection driven by count-preserving crossover and mutation to identify the necessary nodes in each convolution layer while eliminating the redundant nodes. The DCNN was reduced by 99% in the number of parameters and 95% in mathematical operations in the convolutional layers. The lesion-based area under the receiver operating characteristic curve on an independent DBT test set from the original and the compressed networks was 0.88±0.05 and 0.90±0.04, respectively. The difference did not reach statistical significance. We demonstrated a DCNN compression approach without additional fine-tuning or loss of performance for classification of masses in DBT. The approach can be extended to other DCNNs and transfer learning tasks. An ensemble of these smaller and focused DCNNs has the potential to be used in multi-target transfer learning.
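The genetic-algorithm operators named above (tournament selection with count-preserving crossover and mutation over per-layer node masks) can be sketched generically. The operator details below are illustrative assumptions; the fitness function, which in the paper scores classification accuracy of the pruned network, is left as a caller-supplied callable.

```python
import random

def tournament(pop, fitness, k=3):
    """Tournament selection: return the fittest of k random candidates."""
    return max(random.sample(pop, k), key=fitness)

def count_preserving_crossover(a, b):
    """Child keeps exactly as many nodes as each parent: copy positions
    where the parents agree, then fill the remaining 1s by sampling
    from the positions where they disagree."""
    assert sum(a) == sum(b)
    child = [x if x == y else 0 for x, y in zip(a, b)]
    need = sum(a) - sum(child)
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    for i in random.sample(diff, need):
        child[i] = 1
    return child

def count_preserving_mutation(mask):
    """Swap one kept node with one pruned node; the kept-count is unchanged."""
    ones = [i for i, v in enumerate(mask) if v == 1]
    zeros = [i for i, v in enumerate(mask) if v == 0]
    if ones and zeros:
        mask = mask[:]
        mask[random.choice(ones)], mask[random.choice(zeros)] = 0, 1
    return mask
```

Because both operators conserve the number of 1s, every individual in the population represents a network pruned to the same target size, so evolution searches over which nodes to keep rather than how many.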
Deep-Sea Bioluminescence Blooms after Dense Water Formation at the Ocean Surface
Tamburini, Christian; Canals, Miquel; Durrieu de Madron, Xavier; Houpert, Loïc; Lefèvre, Dominique; Martini, Séverine; D'Ortenzio, Fabrizio; Robert, Anne; Testor, Pierre; Aguilar, Juan Antonio; Samarai, Imen Al; Albert, Arnaud; André, Michel; Anghinolfi, Marco; Anton, Gisela; Anvar, Shebli; Ardid, Miguel; Jesus, Ana Carolina Assis; Astraatmadja, Tri L.; Aubert, Jean-Jacques; Baret, Bruny; Basa, Stéphane; Bertin, Vincent; Biagi, Simone; Bigi, Armando; Bigongiari, Ciro; Bogazzi, Claudio; Bou-Cabo, Manuel; Bouhou, Boutayeb; Bouwhuis, Mieke C.; Brunner, Jurgen; Busto, José; Camarena, Francisco; Capone, Antonio; Cârloganu, Christina; Carminati, Giada; Carr, John; Cecchini, Stefano; Charif, Ziad; Charvis, Philippe; Chiarusi, Tommaso; Circella, Marco; Coniglione, Rosa; Costantini, Heide; Coyle, Paschal; Curtil, Christian; Decowski, Patrick; Dekeyser, Ivan; Deschamps, Anne; Donzaud, Corinne; Dornic, Damien; Dorosti, Hasankiadeh Q.; Drouhin, Doriane; Eberl, Thomas; Emanuele, Umberto; Ernenwein, Jean-Pierre; Escoffier, Stéphanie; Fermani, Paolo; Ferri, Marcelino; Flaminio, Vincenzo; Folger, Florian; Fritsch, Ulf; Fuda, Jean-Luc; Galatà, Salvatore; Gay, Pascal; Giacomelli, Giorgio; Giordano, Valentina; Gómez-González, Juan-Pablo; Graf, Kay; Guillard, Goulven; Halladjian, Garadeb; Hallewell, Gregory; van Haren, Hans; Hartman, Joris; Heijboer, Aart J.; Hello, Yann; Hernández-Rey, Juan Jose; Herold, Bjoern; Hößl, Jurgen; Hsu, Ching-Cheng; de Jong, Marteen; Kadler, Matthias; Kalekin, Oleg; Kappes, Alexander; Katz, Uli; Kavatsyuk, Oksana; Kooijman, Paul; Kopper, Claudio; Kouchner, Antoine; Kreykenbohm, Ingo; Kulikovskiy, Vladimir; Lahmann, Robert; Lamare, Patrick; Larosa, Giuseppina; Lattuada, Dario; Lim, Gordon; Presti, Domenico Lo; Loehner, Herbert; Loucatos, Sotiris; Mangano, Salvatore; Marcelin, Michel; Margiotta, Annarita; Martinez-Mora, Juan Antonio; Meli, Athina; Montaruli, Teresa; Motz, Holger; Neff, Max; Nezri, Emma nuel; Palioselitis, Dimitris; Păvălaş, Gabriela E.; 
Payet, Kevin; Payre, Patrice; Petrovic, Jelena; Piattelli, Paolo; Picot-Clemente, Nicolas; Popa, Vlad; Pradier, Thierry; Presani, Eleonora; Racca, Chantal; Reed, Corey; Riccobene, Giorgio; Richardt, Carsten; Richter, Roland; Rivière, Colas; Roensch, Kathrin; Rostovtsev, Andrei; Ruiz-Rivas, Joaquin; Rujoiu, Marius; Russo, Valerio G.; Salesa, Francisco; Sánchez-Losa, Augustin; Sapienza, Piera; Schöck, Friederike; Schuller, Jean-Pierre; Schussler, Fabian; Shanidze, Rezo; Simeone, Francesco; Spies, Andreas; Spurio, Maurizio; Steijger, Jos J. M.; Stolarczyk, Thierry; Taiuti, Mauro G. F.; Toscano, Simona; Vallage, Bertrand; Van Elewyck, Véronique; Vannoni, Giulia; Vecchi, Manuela; Vernin, Pascal; Wijnker, Guus; Wilms, Jorn; de Wolf, Els; Yepes, Harold; Zaborov, Dmitry; De Dios Zornoza, Juan; Zúñiga, Juan
2013-01-01
The deep ocean is the largest and least known ecosystem on Earth. It hosts numerous pelagic organisms, most of which are able to emit light. Here we present a unique data set consisting of a 2.5-year long record of light emission by deep-sea pelagic organisms, measured from December 2007 to June 2010 at the ANTARES underwater neutrino telescope in the deep NW Mediterranean Sea, jointly with synchronous hydrological records. This is the longest continuous time-series of deep-sea bioluminescence ever recorded. Our record reveals several weeks long, seasonal bioluminescence blooms with light intensity up to two orders of magnitude higher than background values, which correlate to changes in the properties of deep waters. Such changes are triggered by the winter cooling and evaporation experienced by the upper ocean layer in the Gulf of Lion that leads to the formation and subsequent sinking of dense water through a process known as “open-sea convection”. It episodically renews the deep water of the study area and conveys fresh organic matter that fuels the deep ecosystems. Luminous bacteria most likely are the main contributors to the observed deep-sea bioluminescence blooms. Our observations demonstrate a consistent and rapid connection between deep open-sea convection and bathypelagic biological activity, as expressed by bioluminescence. In a setting where dense water formation events are likely to decline under global warming scenarios enhancing ocean stratification, in situ observatories become essential as environmental sentinels for the monitoring and understanding of deep-sea ecosystem shifts. PMID:23874425
Subsurface Hybrid Power Options for Oil & Gas Production at Deep Ocean Sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, J C; Haut, R; Jahn, G
2010-02-19
An investment in deep-sea (deep-ocean) hybrid power systems may enable certain off-shore oil and gas exploration and production. Advanced deep-ocean drilling and production operations, locally powered, may provide commercial access to oil and gas reserves otherwise inaccessible. Further, subsea generation of electrical power has the potential of featuring a low carbon output, resulting in improved environmental conditions. Such technology therefore enhances the energy security of the United States in a green and environmentally friendly manner. The objective of this study is to evaluate alternatives and recommend equipment to develop into hybrid energy conversion and storage systems for deep-ocean operations. Such power systems will be located on the ocean floor and will be used to power offshore oil and gas exploration and production operations, including the drilling required to harvest petroleum reserves. The following conceptual hybrid systems have been identified as candidates for powering sub-surface oil and gas production operations: (1) PWR = Pressurized-Water Nuclear Reactor + Lead-Acid Battery; (2) FC1 = Line for Surface O2 + Well Head Gas + Reformer + PEMFC + Lead-Acid & Li-Ion Batteries; (3) FC2 = Stored O2 + Well Head Gas + Reformer + Fuel Cell + Lead-Acid & Li-Ion Batteries; (4) SV1 = Submersible Vehicle + Stored O2 + Fuel Cell + Lead-Acid & Li-Ion Batteries; (5) SV2 = Submersible Vehicle + Stored O2 + Engine or Turbine + Lead-Acid & Li-Ion Batteries; (6) SV3 = Submersible Vehicle + Charge at Docking Station + ZEBRA & Li-Ion Batteries; (7) PWR TEG = PWR + Thermoelectric Generator + Lead-Acid Battery; (8) WELL TEG = Thermoelectric Generator + Well Head Waste Heat + Lead-Acid Battery; (9) GRID = Ocean Floor Electrical Grid + Lead-Acid Battery; and (10) DOC = Deep Ocean Current + Lead-Acid Battery.
Sedimentary, tectonic, and sea-level controls on submarine fan and slope-apron turbidite systems
Stow, D.A.V.; Howell, D.G.; Nelson, C.H.
1984-01-01
To help understand factors that influence submarine fan deposition, we outline some of the principal sedimentary, tectonic, and sea-level controls involved in deep-water sedimentation, give some data on the rates at which they operate, and evaluate their probable effects. Three depositional end-member systems, two submarine fan types (elongate and radial), and a third nonfan, slope-apron system result primarily from variations in sediment type and supply. Tectonic setting and local and global sea-level changes further modify the nature of fan growth, the distribution of facies, and the resulting vertical stratigraphic sequences. © 1984 Springer-Verlag New York Inc.
Operant learning (R-S) principles applied to nail-biting.
McClanahan, T M
1995-10-01
The principles of R-S learning were applied to a 32-yr.-old Caucasian woman to reduce the frequency and duration of fingernail-biting activity in a reversal-replication (ABAB) research design. The undesirable behavior, fingernail-biting, including its frequency and duration, antecedents, and setting events, was recorded during a 28-day study. Self-monitoring recordings indicated that anxiety was the most prevalent antecedent. Through the use of a preliminary questionnaire and interview, increase in self-awareness was judged to be most effective in the extinction of the undesired behavior. The systematic desensitization techniques of deep muscle relaxation and Transcendental Meditation were used during the treatment phase.
Wills, B W; Sheppard, E D; Smith, W R; Staggers, J R; Li, P; Shah, A; Lee, S R; Naranje, S M
2018-03-22
Infections and deep vein thrombosis (DVT) after total hip arthroplasty (THA) are challenging problems for both the patient and surgeon. Previous studies have identified numerous risk factors for infections and DVT after THA but have often been limited by sample size. We aimed to evaluate the effect of operative time on early postoperative infection as well as DVT rates following THA. We hypothesized that an increase in operative time would result in increased odds of acquiring an infection as well as a DVT. We conducted a retrospective analysis of prospectively collected data using the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database from 2006 to 2015 for all patients undergoing primary THA. Associations between operative time and infection or DVT were evaluated with multivariable logistic regressions controlling for demographics and several known risks factors for infection. Three different types of infections were evaluated: (1) superficial surgical site infection (SSI), an infection involving the skin or subcutaneous tissue, (2) deep SSI, an infection involving the muscle or fascial layers beneath the subcutaneous tissue, and (3) organ/space infection, an infection involving any part of the anatomy manipulated during surgery other than the incisional components. In total, 103,044 patients who underwent THA were included in our study. Our results suggested a significant association between superficial SSIs and operative time. Specifically, the adjusted odds of suffering a superficial SSI increased by 6% (CI=1.04-1.08, p<0.0001) for every 10-minute increase of operative time. When using dichotomized operative time (<90minutes or >90minutes), the adjusted odds of suffering a superficial SSI was 56% higher for patients with prolonged operative time (CI=1.05-2.32, p=0.0277). The adjusted odds of suffering a deep SSI increased by 7% for every 10-minute increase in operative time (CI=1.01-1.14, p=0.0335). 
No significant associations were detected between operative time, either as a continuous or a dichotomized variable, and organ/space infection, wound dehiscence, or DVT. Prolonged operative times (>90 min) are associated with increased rates of superficial SSIs, but not deep SSIs, organ/space infections, wound dehiscence, or DVT. Level of evidence: III. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
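The per-10-minute odds increases reported above follow directly from exponentiating the logistic regression coefficient. A minimal numerical sketch, using the study's 6%-per-10-minutes figure to back-calculate an illustrative per-minute coefficient:

```python
import math

# In logistic regression, the odds ratio for a c-unit increase of a
# predictor with coefficient beta is OR(c) = exp(c * beta).
# An OR of 1.06 per 10 minutes implies beta = ln(1.06) / 10 per minute
# (an illustrative back-calculation, not a figure from the paper).
beta_per_min = math.log(1.06) / 10

or_per_10min = math.exp(10 * beta_per_min)  # recovers 1.06
or_per_hour = math.exp(60 * beta_per_min)   # odds ratio compounds: 1.06**6
print(round(or_per_10min, 2), round(or_per_hour, 2))  # 1.06 1.42
```

This compounding is why a modest per-10-minute effect translates into a substantially elevated risk for long procedures, consistent with the dichotomized >90-minute result.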
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tynan, Mark C.; Russell, Glenn P.; Perry, Frank V.
These associated tables, references, notes, and report present a synthesis of some notable geotechnical and engineering information used to create four interactive layer maps for selected: 1) deep mines and shafts; 2) existing, considered, or planned radioactive waste management deep underground studies or disposal facilities; 3) deep large-diameter boreholes; and 4) physics underground laboratories and facilities from around the world. These data are intended to facilitate user access to basic information and references regarding "deep underground" facilities, history, activities, and plans. In general, the interactive maps and database provide each facility's approximate site location, geology, and engineered features (e.g.: access, geometry, depth, diameter, year of operations, groundwater, lithology, host unit name and age, basin; operator, management organization, geographic data, nearby cultural features, other). Although the survey is not comprehensive, it is representative of many of the significant existing and historical underground facilities discussed in the literature addressing radioactive waste management and deep mined geologic disposal safety systems. The global survey is intended to support and to inform: 1) interested parties and decision makers; 2) radioactive waste disposal and siting option evaluations; and 3) safety case development applicable to any mined geologic disposal facility, as a demonstration of historical and current engineering and geotechnical capabilities available for use in deep underground facility siting, planning, construction, operations, and monitoring.
NASA Technical Reports Server (NTRS)
1980-01-01
The functions and facilities of the Deep Space Network are considered. Progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported.
NASA Technical Reports Server (NTRS)
1979-01-01
Progress is reported in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations. The functions and facilities of the Deep Space Network are emphasized.
Traum, Avram Z; Wells, Meghan P; Aivado, Manuel; Libermann, Towia A; Ramoni, Marco F; Schachter, Asher D
2006-03-01
Proteomic profiling with SELDI-TOF MS has facilitated the discovery of disease-specific protein profiles. However, multicenter studies are often hindered by the logistics required for prompt deep-freezing of samples in liquid nitrogen or dry ice within the clinic setting prior to shipping. We report high concordance between MS profiles within sets of quadruplicate split urine and serum samples deep-frozen at 0, 2, 6, and 24 h after sample collection. Gage R&R results confirm that deep-freezing times are not a statistically significant source of SELDI-TOF MS variability for either blood or urine.
NASA Technical Reports Server (NTRS)
Lee, L. F.; Cooper, L. P.
1993-01-01
This article describes the approach, results, and lessons learned from an applied research project demonstrating how artificial intelligence (AI) technology can be used to improve Deep Space Network operations. Configuring antenna and associated equipment necessary to support a communications link is a time-consuming process. The time spent configuring the equipment is essentially overhead and results in reduced time for actual mission support operations. The NASA Office of Space Communications (Code O) and the NASA Office of Advanced Concepts and Technology (Code C) jointly funded an applied research project to investigate technologies which can be used to reduce configuration time. This resulted in the development and application of AI-based automated operations technology in a prototype system, the Link Monitor and Control Operator Assistant (LMC OA). The LMC OA was tested over the course of three months in a parallel experimental mode on very long baseline interferometry (VLBI) operations at the Goldstone Deep Space Communications Center. The tests demonstrated a 44 percent reduction in pre-calibration time for a VLBI pass on the 70-m antenna. Currently, this technology is being developed further under Research and Technology Operating Plan (RTOP)-72 to demonstrate the applicability of the technology to operations in the entire Deep Space Network.
The Deep Impact Network Experiment Operations Center
NASA Technical Reports Server (NTRS)
Torgerson, J. Leigh; Clare, Loren; Wang, Shin-Ywan
2009-01-01
Delay/Disruption Tolerant Networking (DTN) promises solutions in solving space communications challenges arising from disconnections as orbiters lose line-of-sight with landers, long propagation delays over interplanetary links, and other phenomena. DTN has been identified as the basis for the future NASA space communications network backbone, and international standardization is progressing through both the Consultative Committee for Space Data Systems (CCSDS) and the Internet Engineering Task Force (IETF). JPL has developed an implementation of the DTN architecture, called the Interplanetary Overlay Network (ION). ION is specifically implemented for space use, including design for use in a real-time operating system environment and high processing efficiency. In order to raise the Technology Readiness Level of ION, the first deep space flight demonstration of DTN is underway, using the Deep Impact (DI) spacecraft. Called the Deep Impact Network (DINET), operations are planned for Fall 2008. An essential component of the DINET project is the Experiment Operations Center (EOC), which will generate and receive the test communications traffic as well as "out-of-DTN band" command and control of the DTN experiment, store DTN flight test information in a database, provide display systems for monitoring DTN operations status and statistics (e.g., bundle throughput), and support query and analyses of the data collected. This paper describes the DINET EOC and its value in the DTN flight experiment and potential for further DTN testing.
Operations Concepts for Deep-Space Missions: Challenges and Opportunities
NASA Technical Reports Server (NTRS)
McCann, Robert S.
2010-01-01
Historically, manned spacecraft missions have relied heavily on real-time communication links between crewmembers and ground control for generating crew activity schedules and working time-critical off-nominal situations. On crewed missions beyond the Earth-Moon system, speed-of-light limitations will render this ground-centered concept of operations obsolete. A new, more distributed concept of operations will have to be developed in which the crew takes on more responsibility for real-time anomaly diagnosis and resolution, activity planning and replanning, and flight operations. I will discuss the innovative information technologies, human-machine interfaces, and simulation capabilities that must be created in order to develop, test, and validate deep-space mission operations concepts.
NASA Technical Reports Server (NTRS)
1979-01-01
A report is given of the Deep Space Network's progress in (1) flight project support, (2) tracking and data acquisition research and technology, (3) network engineering, (4) hardware and software implementation, and (5) operations.
NASA Astrophysics Data System (ADS)
Stewart, R. A.; Reimold, W. U.; Charlesworth, E. G.; Ortlepp, W. D.
2001-07-01
In August 1998, a major deformation zone was exposed over several metres during mining operations on 87 Level (2463 m below surface) at Western Deep Levels Gold Mine, southwest of Johannesburg, providing a unique opportunity to study the products of a recent rockburst. This zone consists of three shear zones, with dip-slip displacements of up to 15 cm, that are oriented near-parallel to the advancing stope face. Jogs and a highly pulverised, cataclastic 'rock-flour' are developed on the displacement surfaces, and several sets of secondary extensional fractures occur on either side of the shear zones. A set of pinnate (feather) joints intersects the fault surfaces perpendicular to the slip vector. Microscopically, the shear zones consist of two pinnate joint sets that exhibit cataclastic joint fillings; quartz grains display intense intragranular fracturing. Secondary, intergranular extension fractures are associated with the pinnate joints. Extensional deformation is also the cause of the breccia fill of the pinnate joints. The initial deformation experienced by this zone is brittle and tensile, and is related to stresses induced by mining. This deformation has been masked by later changes in the stress field, which resulted in shearing. This deformation zone does not appear to be controlled by pre-existing geological features and, thus, represents a 'burst fracture', which is believed to be related to a seismic event of magnitude ML=2.1 recorded in July 1998, the epicentre of which was located to within 50 m of the study locality.
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Ohtomo, Kuni
2016-03-01
The purpose of this study is to evaluate the feasibility of a novel feature generation method, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). Optimizing the hyperparameters for DNNs such as the stacked denoising autoencoder (SdA) is hard and time-consuming. The proposed method allows SdA-based features to be used without the burden of hyperparameter setting. It was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract the optimal features for candidate classification, and required only setting the range of the hyperparameters for the SdA. The optimal feature set was selected from a large quantity of SdA-based features produced by multiple SdAs, each trained with a different hyperparameter set. The feature selection was performed with the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation used 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features given only the range of some SdA hyperparameters. The CADe process using both previously used voxel features and SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results show that the proposed method is effective in the application of detecting cerebral aneurysms on MRA.
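The boosting-based selection step described above can be sketched in miniature: run AdaBoost with decision stumps over a pool of candidate features, and credit each round's classifier weight to the feature its stump uses. This is a generic illustration of AdaBoost feature ranking under stated assumptions, not the authors' implementation; `adaboost_feature_ranking` and all parameter names are hypothetical.

```python
import numpy as np

def adaboost_feature_ranking(X, y, n_rounds=20):
    """Rank candidate features (e.g. outputs of many SdAs) by the
    cumulative AdaBoost weight of the decision stumps that use them.

    X : (n_samples, n_features) feature matrix
    y : labels in {-1, +1}
    Returns a per-feature importance array.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # sample weights
    importance = np.zeros(d)
    for _ in range(n_rounds):
        best = None
        # exhaustive search over (feature, threshold, polarity) stumps
        for j in range(d):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, pred)
        err, j, pred = best
        if err >= 0.5:
            break                       # no stump better than chance
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        importance[j] += alpha          # credit the chosen feature
        w *= np.exp(-alpha * y * pred)  # re-weight samples
        w /= w.sum()
    return importance
```

On toy data where only one column carries the label, that column accumulates essentially all of the stump weight.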
Chezar, H.; Lee, J.
1985-01-01
A deep-towed photographic system with completely self-contained recording instrumentation and power can obtain color-video and still-photographic transects along rough terrane without need for a long electrically conducting cable. Both the video- and still-camera systems utilize relatively inexpensive and proven off-the-shelf hardware adapted for deep-water environments. The small instrument frame makes the towed sled an ideal photographic tool for use in ship or small-boat operations. The system includes a temperature probe and altimeter that relay data acoustically from the sled to the surface ship. This relay enables the operator to simultaneously monitor water temperature and the precise height off the bottom.
NASA Astrophysics Data System (ADS)
Houpert, L.; Durrieu de Madron, X.; Testor, P.; Bosse, A.; D'Ortenzio, F.; Bouin, M. N.; Dausse, D.; Le Goff, H.; Kunesch, S.; Labaste, M.; Coppola, L.; Mortier, L.; Raimbault, P.
2016-11-01
We present here a unique oceanographic and meteorological data set focused on deep convection processes. Our results are essentially based on in situ data (mooring, research vessel, glider, and profiling float) collected from a multiplatform, integrated monitoring system (MOOSE: Mediterranean Ocean Observing System on Environment), which has continuously monitored the northwestern Mediterranean Sea since 2007, and in particular on high-frequency potential temperature, salinity, and current measurements from the mooring LION located within the convection region. From 2009 to 2013, the mixed layer depth reached the seabed, at a depth of 2330 m, in February. The violent vertical mixing of the whole water column then lasts between 9 and 12 days, setting up the characteristics of the newly formed deep water. Each deep convection winter formed a new, warmer and saltier "vintage" of deep water. These sudden inputs of salt and heat into the deep ocean are responsible for the trends in salinity (3.3 ± 0.2 × 10⁻³ yr⁻¹) and potential temperature (3.2 ± 0.5 × 10⁻³ °C yr⁻¹) observed from 2009 to 2013 for the 600-2300 m layer. For the first time, the overlapping of the three "phases" of deep convection can be observed, with secondary vertical mixing events (2-4 days) after the beginning of the restratification phase, and the restratification/spreading phase still active at the beginning of the following deep convection event.
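Trend figures of the kind quoted above (slope ± standard error per year) are typically obtained by ordinary least squares over annual means. A minimal sketch, with illustrative names only (not the MOOSE processing chain):

```python
import numpy as np

def linear_trend(years, values):
    """OLS trend (per year) and its standard error for a short
    time series, e.g. annual-mean deep-layer salinity."""
    t = np.asarray(years, float)
    v = np.asarray(values, float)
    n = t.size
    t_c = t - t.mean()                    # center time axis
    slope = (t_c @ v) / (t_c @ t_c)
    intercept = v.mean() - slope * t.mean()
    resid = v - (intercept + slope * t)
    s2 = (resid @ resid) / (n - 2)        # residual variance, n-2 dof
    stderr = np.sqrt(s2 / (t_c @ t_c))
    return slope, stderr
```

Applied to a synthetic 2009-2013 series rising at 3.3 × 10⁻³ per year, the estimator recovers that slope.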
Park, Bo-Yong; Lee, Mi Ji; Lee, Seung-Hak; Cha, Jihoon; Chung, Chin-Sang; Kim, Sung Tae; Park, Hyunjin
2018-01-01
Migraineurs show an increased load of white matter hyperintensities (WMHs) and more rapid deep WMH progression. Previous methods for WMH segmentation have limited efficacy to detect small deep WMHs. We developed a new fully automated detection pipeline, DEWS (DEep White matter hyperintensity Segmentation framework), for small and superficially-located deep WMHs. A total of 148 non-elderly subjects with migraine were included in this study. The pipeline consists of three components: 1) white matter (WM) extraction, 2) WMH detection, and 3) false positive reduction. In WM extraction, we adjusted the WM mask to re-assign misclassified WMHs back to WM using many sequential low-level image processing steps. In WMH detection, the potential WMH clusters were detected using an intensity based threshold and region growing approach. For false positive reduction, the detected WMH clusters were classified into final WMHs and non-WMHs using the random forest (RF) classifier. Size, texture, and multi-scale deep features were used to train the RF classifier. DEWS successfully detected small deep WMHs with a high positive predictive value (PPV) of 0.98 and true positive rate (TPR) of 0.70 in the training and test sets. Similar performance of PPV (0.96) and TPR (0.68) was attained in the validation set. DEWS showed a superior performance in comparison with other methods. Our proposed pipeline is freely available online to help the research community in quantifying deep WMHs in non-elderly adults.
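The candidate-detection stage described above combines an intensity threshold with region growing. A minimal 2-D sketch of that idea (the real pipeline is 3-D and adds a random-forest false-positive stage; `region_grow` and its signature are illustrative, not the DEWS API):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, threshold):
    """Grow a candidate hyperintensity cluster from a seed pixel:
    accept 4-connected neighbors whose intensity meets the threshold."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    if img[seed] < threshold:
        return mask                      # seed itself is not bright enough
    q = deque([seed])
    mask[seed] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and not mask[ny, nx] and img[ny, nx] >= threshold):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

On a toy image with one bright 2 × 2 blob, growing from inside the blob recovers exactly those four pixels.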
DEEP: a general computational framework for predicting enhancers
Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B.
2015-01-01
Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located at gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell-lines, class imbalance within the learning sets, and ad hoc rules for selecting enhancer candidates for supervised learning are some key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancer properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data, where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved a GM of 80.1% and accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/. PMID:25378307
Automatic detection of the inner ears in head CT images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Noble, Jack H.; Dawant, Benoit M.
2018-03-01
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings to replace the natural electro-mechanical transduction mechanism and restore hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed what we call IGCIP for image-guided cochlear implant programming. In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adults and pediatric populations. We are now aiming at deploying these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly and visual inspection and labelling is currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect if a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set that contains over 2,000 CT volumes from 153 patients and we achieve an overall 95.97% classification accuracy.
Layered virus protection for the operations and administrative messaging system
NASA Technical Reports Server (NTRS)
Cortez, R. H.
2002-01-01
NASA's Deep Space Network (DSN) is critical in supporting a wide variety of operating and planned unmanned flight projects. For day-to-day operations it relies on email communication between the three Deep Space Communication Complexes (Canberra, Goldstone, Madrid) and NASA's Jet Propulsion Laboratory. The Operations & Administrative Messaging system, based on the Microsoft Windows NT and Exchange platform, provides the infrastructure that is required for reliable, mission-critical messaging. The reliability of this system, however, is threatened by the proliferation of email viruses that continue to spread at alarming rates. A layered approach to email security has been implemented across the DSN to protect against this threat.
Architectures for Human Exploration of Near Earth Asteroids
NASA Technical Reports Server (NTRS)
Drake, Bret G.
2011-01-01
The presentation explores key factors in human exploration of Near Earth Asteroids (NEAs), including the challenges of supporting humans for long durations in deep space, incorporation of advanced technologies, mission design constraints, and how many launches are required to conduct a round-trip human mission to a NEA. Topics include the applied methodology, all-chemical NEA mission operations, all-nuclear-thermal-propulsion NEA mission operations, SEP-only deep space mission operations, and SEP/chemical hybrid mission operations. Examples of mass trends between datasets are provided, as well as example sensitivities of delta-v and trip home, sensitivities of the number of launches and trip home, and expected targets for various transportation architectures.
Determination of the atmospheric neutrino flux and searches for new physics with AMANDA-II
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Burgess, T.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Heise, J.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Lennarz, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. 
P.; Meagher, K.; Merck, M.; Mészáros, P.; Middell, E.; Milke, N.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de Los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Yoshida, S.
2009-05-01
The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance or quantum decoherence. Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on violation of Lorentz invariance and quantum decoherence parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.
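Setting an upper limit "using a maximum likelihood method," as in the abstract above, can be illustrated with the simplest possible case: a single-bin Poisson count with known background, where the 90% CL limit is the signal strength at which the log-likelihood ratio reaches 2.71. This is a generic textbook sketch, not the AMANDA-II analysis; `poisson_upper_limit` and its inputs are hypothetical.

```python
import math

def poisson_upper_limit(n_obs, b, cl_delta=2.71):
    """90% CL upper limit on a non-negative Poisson signal s,
    given n_obs observed events over known background b, via the
    likelihood-ratio method: smallest s with -2*dlnL = cl_delta."""
    def nll(s):
        mu = s + b
        return (mu - n_obs * math.log(mu)) if mu > 0 else float("inf")
    s_hat = max(0.0, n_obs - b)     # MLE, constrained to s >= 0
    base = nll(s_hat)
    s = s_hat
    while 2.0 * (nll(s) - base) < cl_delta:
        s += 1e-3                   # coarse scan upward in s
    return s
```

For example, 5 observed events over an expected background of 3 give an upper limit between 6 and 7 signal events.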
Analysis of the operation of the SCD Response intermittent compression system.
Morris, Rh J; Griffiths, H; Woodcock, J P
2002-01-01
This work assessed the performance of the Kendall SCD Response intermittent pneumatic compression system for deep vein thrombosis prophylaxis, which is claimed to set its cycle according to the blood flow characteristics of individual patient limbs. A series of tests measured the system response in various situations, including application to the limbs of healthy volunteers and to false limbs. Practical experimentation and theoretical analysis were used to investigate influences on the system's functioning other than blood flow. The system tested did not seem to perform as claimed, being unable to distinguish between real and fake limbs. The intervals between compressions were set to times unrealistic for venous refill, with temperature changes in the cuff being the greatest influence on performance. Combining the functions of compression and the measurement of the effects of compression in the same air bladder makes temperature artefacts unavoidable and can cause significant errors in the inter-compression interval.
Determination of the Atmospheric Neutrino Flux and Searches for New Physics with AMANDA-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
IceCube Collaboration; Klein, Spencer; Collaboration, IceCube
2009-06-02
The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance (VLI) or quantum decoherence (QD). Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on VLI and QD parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.
Determination of the atmospheric neutrino flux and searches for new physics with AMANDA-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbasi, R.; Andeen, K.; Baker, M.
2009-05-15
The AMANDA-II detector, operating since 2000 in the deep ice at the geographic South Pole, has accumulated a large sample of atmospheric muon neutrinos in the 100 GeV to 10 TeV energy range. The zenith angle and energy distribution of these events can be used to search for various phenomenological signatures of quantum gravity in the neutrino sector, such as violation of Lorentz invariance or quantum decoherence. Analyzing a set of 5511 candidate neutrino events collected during 1387 days of livetime from 2000 to 2006, we find no evidence for such effects and set upper limits on violation of Lorentz invariance and quantum decoherence parameters using a maximum likelihood method. Given the absence of evidence for new flavor-changing physics, we use the same methodology to determine the conventional atmospheric muon neutrino flux above 100 GeV.
25 CFR 215.25 - Other minerals and deep-lying lead and zinc minerals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Other minerals and deep-lying lead and zinc minerals. 215.25 Section 215.25 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEAD AND ZINC MINING OPERATIONS AND LEASES, QUAPAW AGENCY § 215.25 Other minerals and deep-lying lead...
25 CFR 215.25 - Other minerals and deep-lying lead and zinc minerals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Other minerals and deep-lying lead and zinc minerals. 215.25 Section 215.25 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEAD AND ZINC MINING OPERATIONS AND LEASES, QUAPAW AGENCY § 215.25 Other minerals and deep-lying lead...
25 CFR 215.25 - Other minerals and deep-lying lead and zinc minerals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Other minerals and deep-lying lead and zinc minerals. 215.25 Section 215.25 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS LEAD AND ZINC MINING OPERATIONS AND LEASES, QUAPAW AGENCY § 215.25 Other minerals and deep-lying lead...
43 CFR 3252.12 - How deep may I drill a temperature gradient well?
Code of Federal Regulations, 2012 CFR
2012-10-01
... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false How deep may I drill a temperature... RESOURCE LEASING Conducting Exploration Operations § 3252.12 How deep may I drill a temperature gradient well? (a) You may drill a temperature gradient well to any depth that we approve in your exploration...
High Speed Trimaran (HST) Seatrain Experiments, Model 5714
2013-12-01
Contents: Marine Highway; Historical Seatrains; Objectives; Hull & Model Description; Data Acquisition and Instrumentation; Carriage II - Deep ... Operational Demonstration Measurement System; Experimental Procedures; Carriage II - Deep Water Basin Test; Calm Water Resistance; ... Deep Water Basin Analysis; Calm Water Resistance; Longitudinal Flow Through the Propeller Plane; Body Forces & Moments
43 CFR 3252.12 - How deep may I drill a temperature gradient well?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false How deep may I drill a temperature... RESOURCE LEASING Conducting Exploration Operations § 3252.12 How deep may I drill a temperature gradient well? (a) You may drill a temperature gradient well to any depth that we approve in your exploration...
43 CFR 3252.12 - How deep may I drill a temperature gradient well?
Code of Federal Regulations, 2014 CFR
2014-10-01
... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false How deep may I drill a temperature... RESOURCE LEASING Conducting Exploration Operations § 3252.12 How deep may I drill a temperature gradient well? (a) You may drill a temperature gradient well to any depth that we approve in your exploration...
43 CFR 3252.12 - How deep may I drill a temperature gradient well?
Code of Federal Regulations, 2011 CFR
2011-10-01
... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false How deep may I drill a temperature... RESOURCE LEASING Conducting Exploration Operations § 3252.12 How deep may I drill a temperature gradient well? (a) You may drill a temperature gradient well to any depth that we approve in your exploration...
Spaceport operations for deep space missions
NASA Technical Reports Server (NTRS)
Holt, Alan C.
1990-01-01
Space Station Freedom is designed with the capability to cost-effectively evolve into a transportation node which can support manned lunar and Mars missions. To extend a permanent human presence to the outer planets (moon outposts) and to nearby star systems, additional orbiting space infrastructure and great advances in propulsion system and other technologies will be required. To identify primary operations and management requirements for these deep space missions, an interstellar design concept was developed and analyzed. The assembly, test, servicing, logistics resupply, and increment management techniques anticipated for lunar and Mars missions appear to provide a pattern which can be extended in an analogous manner to deep space missions. A long range, space infrastructure development plan (encompassing deep space missions) coupled with energetic, breakthrough level propulsion research should be initiated now to assist in making the best budget and schedule decisions.
In Brief: Deep-sea observatory
NASA Astrophysics Data System (ADS)
Showstack, Randy
2008-11-01
The first deep-sea ocean observatory offshore of the continental United States has begun operating in the waters off central California. The remotely operated Monterey Accelerated Research System (MARS) will allow scientists to monitor the deep sea continuously. Among the first devices to be hooked up to the observatory are instruments to monitor earthquakes, videotape deep-sea animals, and study the effects of acidification on seafloor animals. "Some day we may look back at the first packets of data streaming in from the MARS observatory as the equivalent of those first words spoken by Alexander Graham Bell: 'Watson, come here, I need you!'," commented Marcia McNutt, president and CEO of the Monterey Bay Aquarium Research Institute, which coordinated construction of the observatory. For more information, see http://www.mbari.org/news/news_releases/2008/mars-live/mars-live.html.
Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S
2017-01-01
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.
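The "derivative-free optimization of neural dynamics based on quadratic modeling" mentioned above can be illustrated in one dimension: sample the objective at three points, fit a parabola, and jump to its vertex. This is a toy sketch of quadratic-model (interpolation-based) derivative-free search, far simpler than the authors' strategy; `parabola_step` and its parameters are hypothetical.

```python
def parabola_step(f, x0, h=0.5):
    """One derivative-free step: fit a quadratic through three samples
    of f around x0 and move to the model's minimizer."""
    ys = (f(x0 - h), f(x0), f(x0 + h))
    # finite-difference curvature and slope of the interpolating parabola
    curv = (ys[0] - 2 * ys[1] + ys[2]) / h**2
    slope = (ys[2] - ys[0]) / (2 * h)
    if curv <= 0:
        return x0            # model not convex; keep the current point
    return x0 - slope / curv  # Newton-like step on the quadratic model
```

Because the interpolation is exact for a quadratic objective, a single step lands on the minimum of (x - 3)² from any starting point.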
Energy consumption analysis for the Mars deep space station
NASA Technical Reports Server (NTRS)
Hayes, N. V.
1982-01-01
Results of the energy consumption analysis for the Mars deep space station are presented. It is shown that the major energy consumers are the 64-meter antenna building and the operations support building. Verification of the antenna's energy consumption is highly dependent on an accurate knowledge of the tracking operations. The importance of a regular maintenance schedule for the watt-hour meters installed at the station is indicated.
Submerged electricity generation plane with marine current-driven motors
Dehlsen, James G.P.; Dehlsen, James B.; Fleming, Alexander
2014-07-01
An underwater apparatus for generating electric power from ocean currents and deep water tides. A submersible platform including two or more power pods, each having a rotor with fixed-pitch blades, with drivetrains housed in pressure vessels that are connected by a transverse structure providing buoyancy, which can be a wing depressor, hydrofoil, truss, or faired tube. The platform is connected to anchors on the seafloor by forward mooring lines and a vertical mooring line that restricts the depth of the device in the water column. The platform operates using passive, rather than active, depth control. The wing depressor, along with rotor drag loads, ensures the platform seeks the desired operational current velocity. The rotors are directly coupled to a hydraulic pump that drives at least one constant-speed hydraulic-motor generator set and enables hydraulic braking. A fluidic bearing decouples non-torque rotor loads to the main shaft driving the hydraulic pumps.
ESONET , a milestone towards sustained multidisciplinary ocean observation.
NASA Astrophysics Data System (ADS)
Rolin, J.-F.
2012-04-01
At the end of a 4-year project dedicated to the constitution of a Network of Excellence (NoE) on subsea observatories in Europe, large expectations are still on the agenda. The economic crisis has changed infrastructure construction planning in many ways, but the objectives are quite clear and may be reached at the European scale. The overall objective of the ESONET NoE was to create an organisation able to implement, operate and maintain a sustainable underwater observation network, extending into deep water, capable of monitoring biological, geo-chemical, geological, geophysical and physical processes occurring throughout the water column, the sea floor interface and the solid earth below. This main objective of ESONET has been met by creating a network of 11 permanent underwater observation sites, together with the "ESONET Vi" Virtual Institute organising the exchange of staff and joint experiments on EMSO large research infrastructure observatories. Recommendations on best practices, standardization and interoperability concepts concerning underwater observatory equipment have been developed, as synthesized in the so-called ESONET Label document. The ESONET Label is a set of criteria to be met by deep-sea observatory equipment, as well as recommended solutions and options to guarantee their optimal operation in the ocean over long time periods. ESONET contributes to the fixed-point sustained observatory community, which extends worldwide, is fully multidisciplinary, and in its way may open a new page in ocean sciences history.
NASA Technical Reports Server (NTRS)
Beaton, Kara H.; Chappell, Steven P.; Bekdash, Omar S.; Gernhardt, Michael L.
2018-01-01
The NASA Next Space Technologies for Exploration Partnerships (NextSTEP) program is a public-private partnership model that seeks commercial development of deep space exploration capabilities to support extensive human spaceflight missions around and beyond cislunar space. NASA first issued the Phase 1 NextSTEP Broad Agency Announcement to U.S. industries in 2014, which called for innovative cislunar habitation concepts that leveraged commercialization plans for low Earth orbit. These habitats will be part of the Deep Space Gateway (DSG), the cislunar space station planned by NASA for construction in the 2020s. In 2016, Phase 2 of the NextSTEP program selected five commercial partners to develop ground prototypes. A team of NASA research engineers and subject matter experts have been tasked with developing the ground test protocol that will serve as the primary means by which these Phase 2 prototype habitats will be evaluated. Since 2008, this core test team has successfully conducted multiple spaceflight analog mission evaluations utilizing a consistent set of operational products, tools, methods, and metrics to enable the iterative development, testing, analysis, and validation of evolving exploration architectures, operations concepts, and vehicle designs. The purpose of implementing a similar evaluation process for the NextSTEP Phase 2 Habitation Concepts is to consistently evaluate the different commercial partner ground prototypes to provide data-driven, actionable recommendations for Phase 3.
Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun
2015-12-01
Deep learning algorithms are widely used for various pattern recognition applications such as text recognition, object recognition and action recognition because of their best-in-class recognition accuracy compared to hand-crafted algorithms and shallow learning based algorithms. The long learning time caused by their complex structure, however, has so far limited their use to high-cost servers or many-core GPU platforms. On the other hand, the demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents a SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Different from conventional works which have adopted massively-parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, one of the popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated using 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power, at 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.
Interplanetary Mission Design Handbook: Earth-to-Mars Mission Opportunities 2026 to 2045
NASA Technical Reports Server (NTRS)
Burke, Laura M.; Falck, Robert D.; McGuire, Melissa L.
2010-01-01
The purpose of this Mission Design Handbook is to provide trajectory designers and mission planners with graphical information about Earth to Mars ballistic trajectory opportunities for the years of 2026 through 2045. The plots, displayed on a departure date/arrival date mission space, show departure energy, right ascension and declination of the launch asymptote, and target planet hyperbolic arrival excess speed, V(sub infinity), for each launch opportunity. Provided in this study are two sets of contour plots for each launch opportunity. The first set of plots shows Earth to Mars ballistic trajectories without the addition of any deep space maneuvers. The second set of plots shows Earth to Mars transfer trajectories with the addition of deep space maneuvers, which further optimize the determined trajectories. The accompanying text explains the trajectory characteristics, transfers using deep space maneuvers, and mission assumptions, and summarizes the minimum departure energy for each opportunity.
Punjabi, Amol; Wu, Xiang; Tokatli-Apollon, Amira; ...
2014-09-25
A class of biocompatible upconverting nanoparticles (UCNPs) with largely amplified red-emissions was developed. The optimal UCNP shows a high absolute upconversion quantum yield of 3.2% in red-emission, which is 15-fold stronger than the known optimal β-phase core/shell UCNPs. When conjugated to aminolevulinic acid, a clinically used photodynamic therapy (PDT) prodrug, significant PDT effect in tumor was demonstrated in a deep-tissue (>1.2 cm) setting in vivo at a biocompatible laser power density. Furthermore, we show that our UCNP–PDT system with NIR irradiation outperforms clinically used red light irradiation in a deep tumor setting in vivo. This study marks a major step forward in photodynamic therapy utilizing UCNPs to effectively access deep-set tumors. Lastly, it also provides an opportunity for the wide application of upconverting red radiation in photonics and biophotonics.
Punjabi, Amol; Wu, Xiang; Tokatli-Apollon, Amira; El-Rifai, Mahmoud; Lee, Hyungseok; Zhang, Yuanwei; Wang, Chao; Liu, Zhuang; Chan, Emory M; Duan, Chunying; Han, Gang
2014-10-28
A class of biocompatible upconverting nanoparticles (UCNPs) with largely amplified red-emissions was developed. The optimal UCNP shows a high absolute upconversion quantum yield of 3.2% in red-emission, which is 15-fold stronger than the known optimal β-phase core/shell UCNPs. When conjugated to aminolevulinic acid, a clinically used photodynamic therapy (PDT) prodrug, significant PDT effect in tumor was demonstrated in a deep-tissue (>1.2 cm) setting in vivo at a biocompatible laser power density. Furthermore, we show that our UCNP-PDT system with NIR irradiation outperforms clinically used red light irradiation in a deep tumor setting in vivo. This study marks a major step forward in photodynamic therapy utilizing UCNPs to effectively access deep-set tumors. It also provides an opportunity for the wide application of upconverting red radiation in photonics and biophotonics.
Automated Diagnosis and Control of Complex Systems
NASA Technical Reports Server (NTRS)
Kurien, James; Plaunt, Christian; Cannon, Howard; Shirley, Mark; Taylor, Will; Nayak, P.; Hudson, Benoit; Bachmann, Andrew; Brownston, Lee; Hayden, Sandra;
2007-01-01
Livingstone2 is a reusable, artificial intelligence (AI) software system designed to assist spacecraft, life support systems, chemical plants, or other complex systems by operating with minimal human supervision, even in the face of hardware failures or unexpected events. The software diagnoses the current state of the spacecraft or other system, and recommends commands or repair actions that will allow the system to continue operation. Livingstone2 is an enhancement of the Livingstone diagnosis system that was flight-tested onboard the Deep Space One spacecraft in 1999. This version tracks multiple diagnostic hypotheses, rather than just a single hypothesis as in the previous version. It is also able to revise diagnostic decisions made in the past when additional observations become available; without these capabilities, the original Livingstone might arrive at an incorrect hypothesis. Re-architecting and re-implementing the system in C++ has increased performance. Usability has been improved by creating a set of development tools that is closely integrated with the Livingstone2 engine. In addition to the core diagnosis engine, Livingstone2 includes a compiler that translates diagnostic models written in a Java-like language into Livingstone2's language, and a broad set of graphical tools for model development.
Stochastic availability analysis of operational data systems in the Deep Space Network
NASA Technical Reports Server (NTRS)
Issa, T. N.
1991-01-01
Existing availability models of standby redundant systems consider only an operator's performance and its interaction with the hardware performance. In the case of operational data systems in the Deep Space Network (DSN), in addition to an operator-system interface, a controller reconfigures the system and links a standby unit into the network data path upon failure of the operating unit. A stochastic (Markovian) process technique is used to model and analyze the availability performance, and the occurrence of degradation due to partial failures is quantitatively incorporated into the model. Exact expressions for the steady-state availability and proportion-of-degraded-performance measures are derived for the systems under study. The interaction among the hardware, operator, and controller performance parameters, and that interaction's effect on data availability, are evaluated and illustrated for an operational data processing system.
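The steady-state analysis described above can be illustrated with a small continuous-time Markov chain. The following is a generic three-state sketch (fully up, degraded by a partial failure, down awaiting repair) with made-up transition rates, not the paper's DSN model:

```python
import numpy as np

# Hypothetical rates (per hour); placeholders, not measured DSN values.
lam_partial = 0.02   # partial-failure rate from fully up
lam_full = 0.01      # full-failure rate
mu_restore = 0.2     # restoration rate from degraded back to fully up
mu_repair = 0.5      # repair rate from down

# Generator matrix Q over states (0: up, 1: degraded, 2: down); rows sum to 0.
Q = np.array([
    [-(lam_partial + lam_full), lam_partial,              lam_full],
    [mu_restore,                -(mu_restore + lam_full), lam_full],
    [mu_repair,                 0.0,                      -mu_repair],
])

# Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]    # up or degraded still delivers data
degraded_fraction = pi[1]       # proportion of degraded performance
print(availability, degraded_fraction)
```

With fast repairs and slow failures, as here, the availability is close to 1 and the degraded fraction is small; the paper's exact expressions play the role of this numerical solve.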
Deep learning improves prediction of CRISPR-Cpf1 guide RNA activity.
Kim, Hui Kwon; Min, Seonwoo; Song, Myungjae; Jung, Soobin; Choi, Jae Woo; Kim, Younggwang; Lee, Sangeun; Yoon, Sungroh; Kim, Hyongbum Henry
2018-03-01
We present two algorithms to predict the activity of AsCpf1 guide RNAs. Indel frequencies for 15,000 target sequences were used in a deep-learning framework based on a convolutional neural network to train Seq-deepCpf1. We then incorporated chromatin accessibility information to create the better-performing DeepCpf1 algorithm for cell lines for which such information is available and show that both algorithms outperform previous machine learning algorithms on our own and published data sets.
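As a concrete illustration of the input side of such sequence models, here is a hedged numpy sketch of one-hot encoding a DNA target site and applying a single convolutional filter with ReLU. The sequence and filter weights are made up; nothing here reproduces Seq-deepCpf1's trained architecture:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (len, 4) one-hot matrix."""
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0
    return m

def conv1d(x, w):
    """Valid-mode 1D convolution of one filter over the sequence axis."""
    L, k = x.shape[0], w.shape[0]
    return np.array([np.sum(x[i:i + k] * w) for i in range(L - k + 1)])

rng = np.random.default_rng(0)
target = "TTTGACTGACTGACTGACTGACTGAGGT"   # made-up target site
x = one_hot(target)
w = rng.normal(size=(5, 4))               # one untrained 5-wide filter
feature_map = np.maximum(conv1d(x, w), 0) # ReLU activation
print(feature_map.shape)
```

A real model would learn many such filters and feed the resulting feature maps into pooling and dense layers to regress indel frequency.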
DL-ADR: a novel deep learning model for classifying genomic variants into adverse drug reactions.
Liang, Zhaohui; Huang, Jimmy Xiangji; Zeng, Xing; Zhang, Gang
2016-08-10
Genomic variations are associated with the metabolism and the occurrence of adverse reactions of many therapeutic agents. Polymorphisms at over 2000 locations of cytochrome P450 enzymes (CYP), arising from factors such as ethnicity, mutation, and inheritance, contribute to the diversity of response and side effects of various drugs. The associations among single nucleotide polymorphisms (SNPs), internal pharmacokinetic patterns, and vulnerability to specific adverse reactions have become a research interest of pharmacogenomics. Conventional genome-wide association studies (GWAS) mainly focus on the relation of single or multiple SNPs to specific risk factors, a one-to-many relation. However, there are no robust methods to establish a many-to-many network that can combine the direct and indirect associations between multiple SNPs and a series of events (e.g. adverse reactions, metabolic patterns, prognostic factors etc.). In this paper, we present a novel deep learning model based on generative stochastic networks and a hidden Markov chain to classify observed samples, genotyped at five loci of two genes (CYP2D6 and CYP1A2), into populations vulnerable to 14 types of adverse reactions. A supervised deep learning model is proposed in this study: a revised generative stochastic networks (GSN) model with transitions governed by a hidden Markov chain. The data of the training set are collected from clinical observation. The training set is composed of 83 observations of blood samples genotyped at CYP2D6*2, *10, *14 and CYP1A2*1C, *1F. The samples are genotyped by the polymerase chain reaction (PCR) method. A hidden Markov chain is used as the transition operator to simulate the probabilistic distribution. The model can perform learning at lower cost compared to the conventional maximum likelihood method because the transition distribution is conditional on the previous state of the hidden Markov chain.
A LASSO (least absolute shrinkage and selection operator) algorithm and a k-Nearest Neighbors (kNN) algorithm are used as baselines for comparison and to evaluate the performance of our proposed deep learning model. There were 53 adverse reactions reported during the observation, assigned to 14 categories. In the comparison of classification accuracy, the deep learning model shows superiority over the LASSO and kNN models, with an accuracy over 80%. In the comparison of reliability, the deep learning model shows the best stability among the three models. Machine learning provides a new method to explore the complex associations among genomic variations and multiple events in pharmacogenomics studies. The new deep learning algorithm is capable of classifying various SNPs to the corresponding adverse reactions. We expect that as more genomic variations are added as features and more observations are made, the deep learning model can improve its performance and act as a black-box but reliable verifier for other GWAS studies.
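The kNN baseline mentioned above is simple enough to sketch directly: feature vectors are classified by majority vote among the k closest training samples. The data below are synthetic stand-ins, not the 83 clinical observations:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]          # labels of k closest points
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]              # most frequent label

# Two toy clusters standing in for genotype feature vectors.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.3])))   # near cluster 0
print(knn_predict(X, y, np.array([5.5, 5.2])))   # near cluster 1
```

In the study's setting the feature vectors would encode the five genotyped loci and the labels the 14 adverse-reaction categories.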
Sista, Akhilesh K; Vedantham, Suresh; Kaufman, John A; Madoff, David C
2015-07-01
The societal and individual burden caused by acute and chronic lower extremity venous disease is considerable. In the past several decades, minimally invasive endovascular interventions have been developed to reduce thrombus burden in the setting of acute deep venous thrombosis to prevent both short- and long-term morbidity and to recanalize chronically occluded or stenosed postthrombotic or nonthrombotic veins in symptomatic patients. This state-of-the-art review provides an overview of the techniques and challenges, rationale, patient selection criteria, complications, postinterventional care, and outcomes data for endovascular intervention in the setting of acute and chronic lower extremity deep venous disease. Online supplemental material is available for this article.
Forecasting Space Weather Hazards for Astronauts in Deep Space
NASA Astrophysics Data System (ADS)
Martens, P. C.
2018-02-01
Deep Space Gateway provides a unique platform to develop, calibrate, and test a space weather forecasting system for interplanetary travel in a real-life setting. We will discuss the requirements and design of such a system.
Informal science education: lifelong, life-wide, life-deep.
Sacco, Kalie; Falk, John H; Bell, James
2014-11-01
Informal science education cultivates diverse opportunities for lifelong learning outside of formal K-16 classroom settings, from museums to online media, often with the help of practicing scientists.
Diverse, rare microbial taxa responded to the Deepwater Horizon deep-sea hydrocarbon plume.
Kleindienst, Sara; Grim, Sharon; Sogin, Mitchell; Bracco, Annalisa; Crespo-Medina, Melitza; Joye, Samantha B
2016-02-01
The Deepwater Horizon (DWH) oil well blowout generated an enormous plume of dispersed hydrocarbons that substantially altered the Gulf of Mexico's deep-sea microbial community. A significant enrichment of distinct microbial populations was observed; yet little is known about the abundance and richness of specific microbial ecotypes involved in gas, oil and dispersant biodegradation in the wake of oil spills. Here, we document a previously unrecognized diversity of closely related taxa affiliating with Cycloclasticus, Colwellia and Oceanospirillaceae and describe their spatio-temporal distribution in the Gulf's deepwater, in close proximity to the discharge site and at increasing distance from it, before, during and after the discharge. A highly sensitive computational method (oligotyping), applied to a data set generated from 454-tag pyrosequencing of bacterial 16S ribosomal RNA gene V4-V6 regions, enabled the detection of population dynamics at the sub-operational taxonomic unit level (0.2% sequence similarity). The biogeochemical signature of the deep-sea samples was assessed via total cell counts, concentrations of short-chain alkanes (C1-C5), nutrients, (colored) dissolved organic and inorganic carbon, as well as methane oxidation rates. Statistical analysis elucidated environmental factors that shaped ecologically relevant dynamics of oligotypes, which likely represent distinct ecotypes. Major hydrocarbon degraders, adapted to the slow-diffusive natural hydrocarbon seepage in the Gulf of Mexico, appeared unable to cope with the conditions encountered during the DWH spill or were outcompeted. In contrast, diverse, rare taxa increased rapidly in abundance, underscoring the importance of specialized sub-populations and potential ecotypes during massive deep-sea oil discharges and perhaps other large-scale perturbations.
Deep learning for staging liver fibrosis on CT: a pilot study.
Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Abe, Osamu; Kiryu, Shigeru
2018-05-14
To investigate whether liver fibrosis can be staged by deep learning techniques based on CT images. This clinical retrospective study, approved by our institutional review board, included 496 CT examinations of 286 patients who underwent dynamic contrast-enhanced CT for evaluations of the liver and for whom histopathological information regarding liver fibrosis stage was available. The 396 portal phase images with age and sex data of patients (F0/F1/F2/F3/F4 = 113/36/56/66/125) were used for training a deep convolutional neural network (DCNN); the data for the other 100 (F0/F1/F2/F3/F4 = 29/9/14/16/32) were utilised for testing the trained network, with the histopathological fibrosis stage used as reference. To improve robustness, additional images for training data were generated by rotating or parallel shifting the images, or adding Gaussian noise. Supervised training was used to minimise the difference between the liver fibrosis stage and the fibrosis score obtained from deep learning based on CT images (F_DLCT score) output by the model. Testing data were input into the trained DCNNs to evaluate their performance. The F_DLCT scores showed a significant correlation with liver fibrosis stage (Spearman's correlation coefficient = 0.48, p < 0.001). The areas under the receiver operating characteristic curves (with 95% confidence intervals) for diagnosing significant fibrosis (≥ F2), advanced fibrosis (≥ F3) and cirrhosis (F4) by using F_DLCT scores were 0.74 (0.64-0.85), 0.76 (0.66-0.85) and 0.73 (0.62-0.84), respectively. Liver fibrosis can be staged by using a deep learning model based on CT images, with moderate performance. • Liver fibrosis can be staged by a deep learning model based on magnified CT images including the liver surface, with moderate performance. • Scores from a trained deep learning model showed moderate correlation with histopathological liver fibrosis staging. • Further improvements are necessary before utilisation in clinical settings.
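The AUC figures quoted above come from treating the continuous fibrosis score as a ranking against a binary reference (e.g. ≥ F2). A minimal sketch of that rank-based AUC computation, on made-up toy scores rather than the study's data:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U statistic), counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy continuous scores and binary reference (1 = significant fibrosis).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
print(roc_auc(scores, labels))   # 0.75
```

A perfect ranking gives 1.0, a random one about 0.5, which is why the study's 0.73 to 0.76 values are reported as moderate.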
Assessing Deep Sea Communities Through Seabed Imagery
NASA Astrophysics Data System (ADS)
Matkin, A. G.; Cross, K.; Milititsky, M.
2016-02-01
The deep sea still remains virtually unexplored. Human activity, such as oil and gas exploration and deep sea mining, is expanding further into the deep sea, increasing the need to survey and map extensive areas of this habitat in order to assess ecosystem health and value. The technology needed to explore this remote environment has been advancing. Seabed imagery can cover extensive areas of the seafloor and investigate areas where sampling with traditional coring methodologies is just not possible (e.g. cold water coral reefs). Remotely operated vehicles (ROVs) are an expensive option, so drop or towed camera systems can provide a more viable and affordable alternative, while still allowing for real-time control. Assessment of seabed imagery in terms of presence, abundance and density of particular species can be conducted by bringing together a variety of analytical tools for a holistic approach. Sixteen deep sea transects located offshore West Africa were investigated with a towed digital video telemetry system (DTS). Both digital stills and video footage were acquired. An extensive data set was obtained from over 13,000 usable photographs, allowing for characterisation of the different habitats present in terms of community composition and abundance. All observed fauna were identified to the lowest taxonomic level and enumerated when possible, with densities derived after the seabed area was calculated for each suitable photograph. This methodology allowed for consistent assessment of the different habitat types present, overcoming constraints such as taxa that cannot be enumerated individually (e.g. sponges, corals or bryozoans), the mixture of mobile and sessile species, and the varying level of taxonomic detail.
Although this methodology will not enable a full characterisation of a deep sea community, in terms of species composition for instance, it will allow a robust assessment of large areas of the deep sea in terms of the sensitive habitats present and the community characteristics of each habitat. Such data can be readily utilised for planning and licensing purposes and potentially revisited in the future, when taxonomic resolution increases, for a more detailed characterisation or monitoring of this poorly described environment.
25 CFR 215.25 - Other minerals and deep-lying lead and zinc minerals.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Other minerals and deep-lying lead and zinc minerals. 215... LEAD AND ZINC MINING OPERATIONS AND LEASES, QUAPAW AGENCY § 215.25 Other minerals and deep-lying lead and zinc minerals. Except as provided in § 215.6(b), leases on Quapaw Indian lands, for mining...
25 CFR 215.25 - Other minerals and deep-lying lead and zinc minerals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Other minerals and deep-lying lead and zinc minerals. 215... LEAD AND ZINC MINING OPERATIONS AND LEASES, QUAPAW AGENCY § 215.25 Other minerals and deep-lying lead and zinc minerals. Except as provided in § 215.6(b), leases on Quapaw Indian lands, for mining...
Mursch, K; Gotthardt, T; Kröger, R; Bublat, M; Behnke-Mursch, J
2005-08-01
We evaluated an advanced concept for patient-based navigation during minimally invasive neurosurgical procedures. An infrared-based, off-line neuro-navigation system (LOCALITE, Bonn, Germany) was applied during operations within a 0.5 T intraoperative MRI scanner (iMRI) (Signa SF, GE Medical Systems, Milwaukee, WI, USA) in addition to the conventional real-time system. The three-dimensional (3D) data set was acquired intraoperatively and updated when brain shift was suspected. Twenty-three patients with subcortical lesions were operated upon with the aim of minimising operative trauma. Small craniotomies (median diameter 30 mm, mean diameter 27 mm) could be placed exactly. In all cases, the primary goal of the operation (total resection or biopsy) was achieved in a straightforward procedure without permanent morbidity. The navigation system could be easily used without technical problems. In contrast to the real-time navigation mode of the MR system, the higher quality as well as the real-time display of the MR images reconstructed from the 3D reference data provided sufficient visual-manual coordination. The system combines the advantages of conventional neuro-navigation with the ability to adapt intraoperatively to the continuously changing anatomy. Thus, small and/or deep lesions can be operated upon in straightforward minimally invasive operations.
Electronics for Deep Space Cryogenic Applications
NASA Technical Reports Server (NTRS)
Patterson, R. L.; Hammond, A.; Dickman, J. E.; Gerber, S. S.; Elbuluk, M. E.; Overton, E.
2002-01-01
Deep space probes and planetary exploration missions require electrical power management and control systems that are capable of efficient and reliable operation in very cold temperature environments. Typically, in deep space probes, heating elements are used to keep the spacecraft electronics near room temperature. The utilization of power electronics designed for and operated at low temperature will contribute to increasing efficiency and improving reliability of space power systems. At NASA Glenn Research Center, commercial-off-the-shelf devices as well as developed components are being investigated for potential use at low temperatures. These devices include semiconductor switching devices, magnetics, and capacitors. Integrated circuits such as digital-to-analog and analog-to-digital converters, DC/DC converters, operational amplifiers, and oscillators are also being evaluated. In this paper, results will be presented for selected analog-to-digital converters, oscillators, DC/DC converters, and pulse width modulation (PWM) controllers.
Going Deeper With Contextual CNN for Hyperspectral Image Classification.
Lee, Hyungtae; Kwon, Heesung
2017-10-01
In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
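The multi-scale filter bank described above can be sketched in a few lines: filters of several spatial sizes are run over the same hyperspectral patch and their outputs are stacked into one joint feature map. Shapes and weights below are illustrative stand-ins, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, B = 9, 9, 20                    # small spatial patch, 20 spectral bands
patch = rng.normal(size=(H, W, B))    # stand-in for a hyperspectral patch

def conv2d_same(x, w):
    """'Same'-padded 2D convolution summing across all spectral bands."""
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    h, wd = x.shape[:2]
    out = np.zeros((h, wd))
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

# Multi-scale bank: 1x1, 3x3 and 5x5 filters over the same patch.
filters = [rng.normal(size=(k, k, B)) for k in (1, 3, 5)]
maps = [conv2d_same(patch, w) for w in filters]
joint = np.stack(maps, axis=-1)       # joint spatio-spectral feature map
print(joint.shape)
```

The 1×1 filter captures purely spectral structure while the wider filters mix in spatial context; concatenating them is what lets the subsequent fully convolutional layers exploit both jointly.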
NASA Technical Reports Server (NTRS)
Thorman, H. C.
1975-01-01
Key characteristics of the Deep Space Network Test and Training System were presented. Completion of the Mark III-75 system implementation is reported. Plans are summarized for upgrading the system to a Mark III-77 configuration to support Deep Space Network preparations for the Mariner Jupiter/Saturn 1977 and Pioneer Venus 1978 missions. A general description of the Deep Space Station, Ground Communications Facility, and Network Operations Control Center functions that comprise the Deep Space Network Test and Training System is also presented.
NASA Astrophysics Data System (ADS)
Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin
2018-02-01
Joint inversion of data sets collected by several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, magnetotelluric and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, it is not possible to increase data quality and the resolution of model parameters at will. For this reason, deep structures cannot be fully resolved by either method alone. In this paper, we first focus on the effects of both magnetotelluric and local earthquake data sets on the solution of deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures. Moreover, the conductivity distribution of relatively shallow structures can be solved with high resolution by using the MT algorithm. Therefore, we developed a new joint inversion algorithm based on the cross-gradient function in order to jointly invert magnetotelluric and local earthquake data sets. In the study, we added a new regularization parameter into the second term of the parameter correction vector of Gallardo and Meju (2003). The new regularization parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term in the solution. The results show that even in cases where resistivity and velocity boundaries are different, both methods influence each other positively. In addition, regions of common structural boundaries are clearly mapped compared with the original models. Furthermore, deep structures are identified satisfactorily even with the minimum number of seismic sources. In this paper, as a basis for future studies, we discuss joint inversion of magnetotelluric and local earthquake data sets only in two-dimensional space.
In light of these results, and as three-dimensional modelling and inversion algorithms become faster, it should become easier to identify underground structures with high resolution.
Deep brain stimulation of the internal pallidum in multiple system atrophy.
Santens, Patrick; Vonck, Kristl; De Letter, Miet; Van Driessche, Katya; Sieben, Anne; De Reuck, Jacques; Van Roost, Dirk; Boon, Paul
2006-04-01
We describe the outcome of deep brain stimulation of the internal pallidum in a 57-year-old patient with multiple system atrophy. Although the patient's prominent dystonic features were markedly attenuated post-operatively, the outcome must be considered unfavourable. There was a severe increase in akinesia, resulting in an overall decrease of mobility in the limbs as well as in the face. As a result, the patient was anarthric and displayed dysphagia. A laterality effect of stimulation on oro-facial movements was demonstrated. The patient died 7 months post-operatively. This report adds to the growing consensus that multiple system atrophy patients are unsuitable candidates for deep brain stimulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Jimmy
2014-05-31
In 2000 Chevron began a project to learn how to characterize the natural gas hydrate deposits in the deep water portion of the Gulf of Mexico (GOM). Chevron is an active explorer and operator in the Gulf of Mexico and is aware that natural gas hydrates need to be understood to operate safely in deep water. In August 2000 Chevron worked closely with the National Energy Technology Laboratory (NETL) of the United States Department of Energy (DOE) and held a workshop in Houston, Texas to define issues concerning the characterization of natural gas hydrate deposits. Specifically, the workshop was meant to clearly show where research, the development of new technologies, and new information sources would be of benefit to the DOE and to the oil and gas industry in defining issues and solving gas hydrate problems in deep water.
The fluid dynamics of deep-sea mining
NASA Astrophysics Data System (ADS)
Peacock, Thomas; Rzeznik, Andrew
2017-11-01
With vast mineral deposits on the ocean floor, deep-sea nodule mining operations are expected to commence in the next decade. Among the several fundamental fluid dynamics problems this raises are the plans for dewatering plumes to be released into the water column by surface processing vessels. To study this scenario, we consider the effects of non-uniform, realistic stratifications on forced compressible plumes with finite initial size. The classical plume model is developed to take into account the influence of thermal conduction through the dewatering pipe and also compressibility effects, for which a dimensionless number is introduced to determine their importance relative to the background stratification. Among other things, our results show that small-scale features of a realistic stratification can have a large effect on plume dynamics compared to smoothed profiles, and that for any given set of environmental parameters there is a discharge flow rate that minimizes the plume vertical extent. Our findings are put in the context of nodule mining plumes, for which the rapid and efficient re-sedimentation of waste material has important environmental consequences.
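The "classical plume model" referred to above can be sketched with the standard Morton-Taylor-Turner top-hat equations in uniform stratification, integrated until the momentum flux vanishes. All parameter values here are illustrative, the integration is a crude forward-Euler step, and none of the paper's compressibility or realistic-stratification extensions are included (for a descending dense dewatering plume the same equations apply with signs measured along the plume axis):

```python
import numpy as np

# Top-hat plume fluxes: Q (volume), M (momentum), F (buoyancy).
# alpha: entrainment coefficient; N: ambient buoyancy frequency.
alpha, N = 0.1, 0.01
Q, M, F = 0.05, 0.05, 0.01      # forced plume with finite initial fluxes
dz, z = 0.05, 0.0

while M > 0 and z < 1000.0:
    dQ = 2 * alpha * np.sqrt(M)  # entrainment of ambient fluid
    dM = F * Q / M               # buoyancy drives/retards momentum
    dF = -N**2 * Q               # stratification erodes buoyancy flux
    Q, M, F = Q + dQ * dz, M + dM * dz, F + dF * dz
    z += dz

print(f"plume momentum vanishes near z = {z:.1f} m")
```

The stratification term makes F change sign as the plume entrains ambient fluid, after which M decays to zero at a finite terminal height; the paper's observation that some discharge rate minimizes this vertical extent comes from varying the initial fluxes in the extended version of these equations.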
Yousefi, Mina; Krzyżak, Adam; Suen, Ching Y
2018-05-01
Digital breast tomosynthesis (DBT) was developed in the field of breast cancer screening as a new tomographic technique to minimize the limitations of conventional digital mammography breast screening methods. A computer-aided detection (CAD) framework for mass detection in DBT has been developed and is described in this paper. The proposed framework operates on a set of two-dimensional (2D) slices. With plane-to-plane analysis on corresponding 2D slices from each DBT, it automatically learns complex patterns of 2D slices through a deep convolutional neural network (DCNN). It then applies multiple instance learning (MIL) with a randomized trees approach to classify DBT images based on extracted information from 2D slices. This CAD framework was developed and evaluated using 5040 2D image slices derived from 87 DBT volumes. The empirical results demonstrate that this proposed CAD framework achieves much better performance than CAD systems that use hand-crafted features and deep cardinality-restricted Boltzmann machines to detect masses in DBTs. Copyright © 2018 Elsevier Ltd. All rights reserved.
Deep neural networks to enable real-time multimessenger astrophysics
NASA Astrophysics Data System (ADS)
George, Daniel; Huerta, E. A.
2018-02-01
Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering—a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and to estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering to whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches of gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
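The matched-filtering baseline the article compares against (the optimal linear filter in Gaussian noise) is a well-defined operation and can be sketched in a few lines. This toy example uses an assumed chirp-like template injected into white noise, not real detector data or the LIGO pipeline:

```python
import numpy as np

# Matched-filter sketch: for whitened data d and a unit-normalized
# template h, the filter output is the sliding inner product <d, h>,
# maximized over time shifts.
rng = np.random.default_rng(1)

def matched_filter_peak(data, template):
    """Return the peak correlation of data against a unit-norm template."""
    h = template / np.linalg.norm(template)
    corr = np.correlate(data, h, mode="valid")
    return np.max(np.abs(corr))

# Toy example: a chirp-like template buried in white Gaussian noise.
t = np.linspace(0.0, 1.0, 512)
template = np.sin(2 * np.pi * (20 + 30 * t) * t) * np.exp(-3 * t)
noise = rng.normal(scale=0.5, size=4096)
data = noise.copy()
data[1000:1512] += template  # inject the signal
peak_signal = matched_filter_peak(data, template)
peak_noise = matched_filter_peak(noise, template)
```

The injected signal produces a clearly larger filter peak than noise alone; Deep Filtering aims to approach this sensitivity while replacing the template bank with learned convolutional filters.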
Tracking and data system support for the Viking 1975 mission to Mars. Volume 3: Planetary operations
NASA Technical Reports Server (NTRS)
Mudgway, D. J.
1977-01-01
The support provided by the Deep Space Network to the 1975 Viking Mission, from the first landing on Mars in July 1976 to the end of the Prime Mission on November 15, 1976, is described and evaluated. Tracking and data acquisition support required the continuous operation of a worldwide network of tracking stations with 64-meter and 26-meter diameter antennas, together with a global communications system for the transfer of commands, telemetry, and radio metric data between the stations and the Network Operations Control Center in Pasadena, California. Performance of the deep-space communications links between Earth and Mars, and innovative new management techniques for operations and data handling, are included.
Li, Songfeng; Wei, Jun; Chan, Heang-Ping; Helvie, Mark A; Roubidoux, Marilyn A; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir M; Samala, Ravi K
2018-01-09
Breast density is one of the most significant factors associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act (MQSA) radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.
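The final PD computation and the Dice coefficient used for evaluation are straightforward to sketch. The following toy example assumes a simple thresholded probability map; it stands in for, and is not, the paper's trained DCNN output:

```python
import numpy as np

# Sketch of the reported quantities: PD is the dense-to-breast area
# ratio from a thresholded probability map of density (PMD), and
# Dice's coefficient compares two binary segmentations.
def percent_density(pmd, breast_mask, threshold=0.5):
    """PD = dense pixels / breast pixels, from a thresholded PMD."""
    dense = (pmd >= threshold) & breast_mask
    return dense.sum() / breast_mask.sum()

def dice(a, b):
    """Dice's coefficient between two binary masks."""
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

# Toy 4x4 probability map with a fully-breast mask.
pmd = np.array([[0.9, 0.8, 0.1, 0.0],
                [0.7, 0.6, 0.2, 0.1],
                [0.1, 0.2, 0.1, 0.0],
                [0.0, 0.1, 0.0, 0.0]])
mask = np.ones((4, 4), dtype=bool)
pd = percent_density(pmd, mask)         # 4 dense pixels of 16 -> 0.25
overlap = dice(pmd >= 0.5, pmd >= 0.5)  # identical masks -> 1.0
```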
Symptomatic iliofemoral deep venous thrombosis treated with hybrid operative thrombectomy.
Rodríguez, Limael E; Aponte-Rivera, Francisco; Figueroa-Vicente, Ricardo; Bolanos-Avila, Guillermo E; Martínez-Trabal, Jorge L
2015-10-01
During the past 15 years, strategies that promote immediate and complete thrombus removal have gained popularity for the treatment of acute-onset iliofemoral deep venous thrombosis. In this case report, we describe a novel operative approach to venous thrombus removal known as hybrid operative thrombectomy. The technique employs a direct inguinal approach with concomitant retrograde advancement of a balloon catheter by femoral venotomy. Moreover, it provides effective thrombus removal through a single incision, with or without stent placement, and has the advantage of a completion venogram. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Halperin, A.; Stelzmuller, P.
1986-01-01
The key heating, ventilation, and air-conditioning (HVAC) modifications implemented at the Mars Deep Space Station's Operation Support Building at the Jet Propulsion Laboratory (JPL) in order to reduce energy consumption and decrease operating costs are described. An energy analysis comparison between the computer-simulated model of the building and the actual meter data is presented. The measured performance data showed that the cumulative energy savings were about 21% for the period 1979 to 1981. The deviation between the simulated data and the measured performance data was only about 3%.
Deep X-ray lithography for the fabrication of microstructures at ELSA
NASA Astrophysics Data System (ADS)
Pantenburg, F. J.; Mohr, J.
2001-07-01
Two beamlines at the Electron Stretcher Accelerator (ELSA) of Bonn University are dedicated to the production of microstructures by deep X-ray lithography with synchrotron radiation. They are equipped with state-of-the-art X-ray scanners, maintained and used by Forschungszentrum Karlsruhe. Polymer microstructure heights between 30 and 3000 μm are manufactured regularly for research and industrial projects. This requires different characteristic energies; therefore, ELSA operates routinely at 1.6, 2.3 and 2.7 GeV for high-resolution X-ray mask fabrication, deep X-ray lithography, and ultra-deep X-ray lithography, respectively. The experimental setup, as well as the structure quality of deep and ultra-deep X-ray lithographic microstructures, are described.
Shot noise limited characterization of ultraweak femtosecond pulse trains.
Schwartz, Osip; Raz, Oren; Katz, Ori; Dudovich, Nirit; Oron, Dan
2011-01-17
Because detectors and electronics are not fast enough, ultrafast science is inherently based on nonlinear interactions. Typically, however, nonlinear measurements require significant powers and often operate in a limited spectral range. Here we overcome the difficulties of ultraweak ultrafast measurements by precision time-domain localization of spectral components. We utilize this for linear self-referenced characterization of pulse trains having ∼1 photon per pulse, a regime in which nonlinear techniques are impractical, at a temporal resolution of ∼10 fs. This technique not only sets a new scale of sensitivity in ultrashort pulse characterization, but is also applicable in any spectral range from the near-infrared to the deep UV.
Reverse osmosis water purification system
NASA Technical Reports Server (NTRS)
Ahlstrom, H. G.; Hames, P. S.; Menninger, F. J.
1986-01-01
A reverse osmosis water purification system, which uses a programmable controller (PC) as the control system, was designed and built to maintain the cleanliness and level of water for various systems of a 64-m antenna. The installation operates with other equipment of the antenna at the Goldstone Deep Space Communication Complex. The reverse osmosis system was designed to be fully automatic; with the PC, many complex sequential and timed logic networks were easily implemented and can be readily modified. The PC monitors water levels, pressures, flows, control panel requests, and set points on analog meters; with this information, various processes are initiated, monitored, modified, halted, or eliminated as required by the equipment being supplied pure water.
Ehteshami Bejnordi, Babak; Mullooly, Maeve; Pfeiffer, Ruth M; Fan, Shaoqi; Vacek, Pamela M; Weaver, Donald L; Herschorn, Sally; Brinton, Louise A; van Ginneken, Bram; Karssemeijer, Nico; Beck, Andrew H; Gierach, Gretchen L; van der Laak, Jeroen A W M; Sherman, Mark E
2018-06-13
The breast stromal microenvironment is a pivotal factor in breast cancer development, growth and metastases. Although pathologists often detect morphologic changes in stroma by light microscopy, visual classification of such changes is subjective and non-quantitative, limiting its diagnostic utility. To gain insights into stromal changes associated with breast cancer, we applied automated machine learning techniques to digital images of 2387 hematoxylin and eosin stained tissue sections of benign and malignant image-guided breast biopsies performed to investigate mammographic abnormalities among 882 patients, ages 40-65 years, who were enrolled in the Breast Radiology Evaluation and Study of Tissues (BREAST) Stamp Project. Using deep convolutional neural networks, we trained an algorithm to discriminate between stroma surrounding invasive cancer and stroma from benign biopsies. In test sets (928 whole-slide images from 330 patients), this algorithm could distinguish biopsies diagnosed as invasive cancer from benign biopsies solely on the basis of stromal characteristics (area under the receiver operating characteristic curve = 0.962). Furthermore, without being trained specifically using ductal carcinoma in situ as an outcome, the algorithm detected tumor-associated stroma in greater amounts and at larger distances from grade 3 versus grade 1 ductal carcinoma in situ. Collectively, these results suggest that algorithms based on deep convolutional neural networks that evaluate only stroma may prove useful to classify breast biopsies and aid in understanding and evaluating the biology of breast lesions.
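The figure of merit reported above, area under the receiver operating characteristic curve, can be computed from per-slide scores and labels via its Mann-Whitney rank formulation. A minimal sketch using the standard definition (not the authors' code):

```python
import numpy as np

# AUC = P(score of a random positive > score of a random negative),
# with ties counted as 1/2. Computed directly over all pos/neg pairs.
def roc_auc(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy slide scores: three cancers, two benign, one misordered pair.
auc = roc_auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0])  # 5/6
```

The pairwise form is O(n²) and fine for illustration; rank-based implementations are used in practice for large test sets.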
The DEEP-South: Preliminary Photometric Results from the KMTNet-CTIO
NASA Astrophysics Data System (ADS)
Kim, Myung-Jin; Moon, Hong-Kyu; Choi, Young-Jun; Yim, Hong-Suh; Bae, Youngho; Roh, Dong-Goo; the DEEP-South Team
2015-08-01
The DEep Ecliptic Patrol of the Southern sky (DEEP-South) will not only conduct characterization of targeted asteroids and a blind survey at the sweet spots, but will also carry out data mining of small Solar System bodies in the whole KMTNet archive. Round-the-clock observation with the KMTNet is optimal for spin characterization of tumbling and slow-rotating bodies, as it facilitates debiasing of previously reported lightcurve observations. It is also well suited for detection and rapid follow-up of Atens and Atiras, the "difficult objects" that are being discovered at lower solar elongations. For the sake of efficiency, we implemented an observation scheduler, SMART (Scheduler for Measuring Asteroids RoTation), designed to conduct follow-up observations in a timely manner. It automatically updates catalogs, generates ephemerides, checks priorities, prepares target lists, and sends a suite of scripts to site operators. We also developed photometric analysis software called ASAP (Asteroid Spin Analysis Package) that helps find a set of appropriate comparison stars in an image, derive spin parameters, and reconstruct lightcurves simultaneously in a semi-automatic manner. In this presentation, we will show our preliminary results of time series analyses of a number of km-sized Potentially Hazardous Asteroids (PHAs), 5189 (1990 UQ), 12923 (1999 GK4), 53426 (1999 SL5), 136614 (1993 VA6), 385186 (1994 AW1), and 2000 OH, from test runs in February and March 2015 at the KMTNet-CTIO.
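One core step of asteroid spin analysis of the kind ASAP performs, deriving a rotation period from a sparse lightcurve, can be sketched with phase-dispersion minimization. This is a hypothetical illustration on synthetic data; the actual ASAP algorithms are not described in the abstract:

```python
import numpy as np

# Phase-dispersion minimization sketch: the best trial period is the
# one whose folded lightcurve has the smallest phase-binned scatter.
def pdm_period(times, mags, trial_periods, n_bins=10):
    """Return the trial period minimizing total phase-binned variance."""
    best_p, best_s = None, np.inf
    for p in trial_periods:
        phase = (times / p) % 1.0
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        s = sum(mags[bins == b].var() for b in range(n_bins)
                if np.any(bins == b))
        if s < best_s:
            best_p, best_s = p, s
    return best_p

# Synthetic lightcurve with an assumed 5.2 h period (toy data,
# not DEEP-South photometry).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 48.0, 300))  # observation times, hours
m = 0.3 * np.sin(2 * np.pi * t / 5.2) + rng.normal(scale=0.02, size=300)
trials = np.linspace(3.0, 8.0, 501)
p_est = pdm_period(t, m, trials)
```

PDM makes no assumption of sinusoidal shape, which is why related dispersion-based methods suit the irregular lightcurves of tumbling and slow-rotating bodies.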
[Rectovaginal endometriosis--analysis of 160 cases].
Wilczyński, Miłosz; Wiecka-Płusa, Monika; Antosiak, Beata; Maciołek-Blewniewska, Grazyna; Majchrzak-Baczmańska, Dominika; Malinowski, Andrzej
2015-12-01
The aim of the study was a retrospective analysis of the medical records of patients who underwent surgery due to deep infiltrating rectovaginal endometriosis (mainly with the use of the 'shaving' technique). We analysed 160 cases of patients who underwent surgery for deep infiltrating rectovaginal endometriosis in our ward between 2003 and 2014. Depending on lesion localization, disease severity and clinical characteristics, three operative approaches were proposed: laparoscopic, vaginal, or a combined vagino-laparoscopic approach. A total of 120 patients underwent laparoscopic removal of the endometrial lesions, whereas 17 were operated on vaginally and 23 with the use of the combined approach. Nodule resection was successfully performed in all cases. The combined vagino-laparoscopic operations were characterized by the longest operating time. The rate of perioperative complications was low in the group of patients who underwent laparoscopic or combined operations. The necessity of bowel wall suturing occurred in 15 cases. This procedure was performed in order to strengthen the bowel wall (in cases when no perforation occurred) or due to bowel resection during surgery. Unexpected bowel perforation occurred in only 5 cases. Conclusions: Vaginal, laparoscopic and combined vagino-laparoscopic surgeries can be safely performed in cases of deep rectovaginal endometriosis.
NASA Technical Reports Server (NTRS)
1975-01-01
Work accomplished on the Deep Space Network (DSN) was described, including the following topics: supporting research and technology, advanced development and engineering, system implementation, and DSN operations pertaining to mission-independent or multiple-mission development as well as to support of flight projects.
Tang, Dang; Wang, Cheng; Gao, Yongjun; Pu, Jun; Long, Jiang; Xu, Wei
2016-10-06
Deep hypothermia is known for its organ-preservation properties and has been introduced into surgical operations on the brain and heart, providing both safety in stopping circulation and an attractive bloodless operative field. However, the molecular mechanisms have not been clearly identified. This study was undertaken to determine the influence of deep hypothermia on neural apoptosis, and the potential mechanism of these effects, in PC12 cells following oxygen-glucose deprivation. Deep hypothermia (18°C) was applied to PC12 cells during oxygen-glucose deprivation (OGD) for 1 h. After 24 h of reperfusion, the results showed that deep hypothermia decreased neural apoptosis, significantly suppressed overexpression of Bax, CytC, Caspase 3, Caspase 9 and cleaved PARP-1, and inhibited the reduction of Bcl-2 expression. Deep hypothermia also increased LC3II/LC3I and Beclin 1, markers of autophagy, an effect that could be inhibited by 3-methyladenine (3-MA), indicating that deep hypothermia-enhanced autophagy ameliorated apoptotic cell death in PC12 cells subjected to OGD. Based on these findings, we propose that deep hypothermia protects against neural apoptosis after the induction of OGD by attenuating the mitochondrial apoptosis pathway; moreover, the mechanism of these antiapoptotic effects is related to the enhancement of autophagy, which might provide a means of neuroprotection against OGD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Foster, R.; Schlutsmeyer, A.
1997-01-01
A new technology that can lower the cost of mission operations on future spacecraft will be tested on the NASA New Millennium Deep Space 1 (DS-1) Mission. This technology, the Beacon Monitor Experiment (BMOX), can be used to reduce the Deep Space Network (DSN) tracking time and its associated costs on future missions.
Dynamic stresses in a Francis model turbine at deep part load
NASA Astrophysics Data System (ADS)
Weber, Wilhelm; von Locquenghien, Florian; Conrad, Philipp; Koutnik, Jiri
2017-04-01
A comparison of numerically obtained dynamic stresses in a Francis model turbine at deep part load with experimental ones is presented. Due to the shift of the electrical power mix toward renewable energy sources, Francis turbines are forced to operate at deep part load in order to compensate for the stochastic nature of wind and solar power and to ensure grid stability. Extending the operating range towards deep part load requires improved understanding of the harsh flow conditions and their impact on material fatigue of hydraulic components, in order to ensure a long lifetime of the power unit. In this paper, pressure loads on a model turbine runner from an unsteady two-phase computational fluid dynamics simulation at deep part load are used for the calculation of mechanical stresses by finite element analysis. From these, the stress distribution over time is determined. Since only a few runner rotations are simulated due to the enormous numerical cost, extra care must be taken in the evaluation procedure in order to obtain objective results. By comparing the numerical results with measured strains, the accuracy of the whole simulation procedure is verified.
NASA Astrophysics Data System (ADS)
Heinz, W. F.
1988-12-01
Pre-cementation or pre-grouting of deep shafts in South Africa is an established technique to improve safety and reduce water ingress during shaft sinking. The recent completion of several pre-cementation projects for shafts deeper than 1000 m has once again highlighted the effectiveness of pre-grouting of shafts utilizing deep slimline boreholes, incorporating wireline technique for drilling and conventional deep borehole grouting techniques for pre-cementation. Pre-cementation of a deep shaft will: (i) increase the safety of the shaft sinking operation; (ii) minimize water and gas inflow during shaft sinking; (iii) minimize the time lost due to additional grouting operations during sinking of the shaft, and hence minimize costly delays and standing time of shaft sinking crews and equipment; (iv) provide detailed information on the geology of the proposed shaft site (information on anomalies, dykes, faults, as well as reef (gold bearing conglomerates) intersections can be obtained from the evaluation of cores of the pre-cementation boreholes); (v) provide improved rock strength for excavations in the immediate vicinity of the shaft area. The paper describes pre-cementation techniques recently applied successfully from surface, and draws some conclusions for further consideration.
NASA Astrophysics Data System (ADS)
Schultz, A.; Bedrosian, P.; Evans, R.; Egbert, G.; Kelbert, A.; Mickus, K.; Livelybrooks, D.; Park, S.; Patro, P.; Peery, T.; Wannamaker, P.; Unsworth, M.; Weiss, C.; Woodward, B.
2008-12-01
EMScope, the MT component of the EarthScope project, has completed its final year of infrastructure construction and its third annual campaign of regional magnetotelluric array operations in the western USA. Seven semi-permanent "backbone" MT observatories have been installed in California, Oregon, Montana, New Mexico, Minnesota, Missouri and Virginia. Installed in 2 m deep, insulated underground vaults with long, buried electric dipole detectors using stable electrodes, they provide extremely long-period magnetotelluric data intended to serve as a set of regional, deep structural "anchor points" penetrating into the mid-mantle, to which a series of denser and more uniform regional, transportable MT networks can be tied. A total of 160 "transportable array" MT stations have been occupied in Oregon, Washington, Idaho, northernmost California, and Montana. These were located on a 70 km quasi-regular grid, with coverage of Cascadia, parts of the Basin and Range, the Rockies and the Snake River Plain, the zone above a putative mantle plume that is hypothesized to serve as the magma source for both the Yellowstone supervolcano and a chain of volcanic features extending westward into Oregon. It is anticipated that in 2009 the transportable array will sweep eastward through the Yellowstone region, following which a set of regional transects at sites of special geodynamic interest will be staged. The transportable array stations are typically occupied for three weeks, providing MT response functions extending from 2-10,000 s or in cases as great as 20,000 s period. These stations are anchored at longer periods (extending as close to 100,000 s periods as possible) by the network of 7 backbone stations, to be operated continuously for up to five years.
We present an initial set of 3-d inverse models from the EMScope data sets. There is substantial coherence between the resulting 3-d conductivity model and the known boundaries of major physiographic provinces, as well as seismically delineated mid-to-lower crustal and upper mantle features. A combination of telemetry from backbone stations and frequent batch transmission of data from the transportable array field sites, followed by rapid data quality control procedures and generation of MT response functions, provides a data set of use to all interested researchers. All EMScope data are made available freely through the IRIS Data Management Center or via the EMScope data portal. For transportable array sites these data are typically available within two weeks of acquisition.
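An MT response function of the kind generated above is a frequency-domain transfer function relating horizontal electric and magnetic fields, E(f) = Z(f) H(f). As a simplified illustration (not EMScope's processing code, and reduced to a single frequency and component), Z can be estimated by least squares over repeated spectral estimates:

```python
import numpy as np

# Least-squares single-component impedance estimate over many
# spectral segments: Z = sum(E * conj(H)) / sum(|H|^2).
rng = np.random.default_rng(3)

def estimate_z(e_spectra, h_spectra):
    num = np.sum(e_spectra * np.conj(h_spectra))
    den = np.sum(np.abs(h_spectra) ** 2)
    return num / den

# Synthetic segments with an assumed transfer function and weak noise.
z_true = 2.0 - 1.0j
h = rng.normal(size=200) + 1j * rng.normal(size=200)
e = z_true * h + 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
z_hat = estimate_z(e, h)
```

Production MT processing uses robust, remote-reference variants of this estimator across the full tensor and period range; the least-squares form above only conveys the underlying idea.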
Deep Impact comet encounter: design, development, and operations of the Big Event at Tempel 1
NASA Technical Reports Server (NTRS)
Wissler, Steven
2005-01-01
Deep Impact is NASA's eighth Discovery mission. This low-cost, focused planetary science investigation gathered the data necessary to help scientists unlock early secrets of our solar system. The comet encounter with Tempel 1 was a complex event - requiring extremely accurate timing, robustness to an unknown environment, and flight team adaptability. The mission operations and flight systems performance were spectacular for approach, impact, and lookback imaging on July 4, 2005.
Deep Impact comet encounter: design, development, and operations of the big event at Tempel 1
NASA Technical Reports Server (NTRS)
Wissler, Steven; Rocca, Jennifer; Kubitschek, Daniel
2005-01-01
Deep Impact is NASA's eighth Discovery mission. This low-cost, focused planetary science investigation gathered the data necessary to help scientists unlock early secrets of our solar system. The comet encounter with Tempel 1 was a complex event - requiring extremely accurate timing, robustness to an unknown environment, and flight team adaptability. The mission operations and flight systems performance were spectacular for approach, impact, and lookback imaging on July 4, 2005.
A new one-man submarine is tested as vehicle for solid rocket booster retrieval
NASA Technical Reports Server (NTRS)
2000-01-01
A Diver Operator Plug (DOP) is being pulled down into the ocean by a newly designed one-man submarine known as DeepWorker 2000. The activity is part of an operation to attach the plug to a mockup of a solid rocket booster nozzle. DeepWorker 2000 is being tested on its ability to duplicate the sometimes hazardous job United Space Alliance (USA) divers perform to recover the expended boosters in the ocean after a launch. The boosters splash down in an impact area about 140 miles east of Jacksonville and after recovery are towed back to KSC for refurbishment by the specially rigged recovery ships. DeepWorker 2000 will be used in a demonstration during retrieval operations after the upcoming STS-101 launch. The submarine pilot will demonstrate capabilities to cut tangled parachute riser lines using a manipulator arm and attach the DOP to extract water and provide flotation for the booster. DeepWorker 2000 was built by Nuytco Research Ltd., North Vancouver, British Columbia. It is 8.25 feet long, 5.75 feet high, and weighs 3,800 pounds. USA is a prime contractor to NASA for the Space Shuttle program.
Wimmer, Matthias D; Ploeger, Milena M; Friedrich, Max J; Hügle, Thomas; Gravius, Sascha; Randau, Thomas M
2017-07-01
Histopathological tissue analysis is a key parameter within the diagnostic algorithm for suspected periprosthetic joint infections (PJIs), conventionally acquired in open surgery. In 2014, Hügle and co-workers introduced novel retrograde forceps for retrograde synovial biopsy with simultaneous fluid aspiration of the knee joint. We hypothesised that tissue samples acquired by retrograde synovial biopsy are equal to intra-operatively acquired deep representative tissue samples regarding bacterial detection and differentiation of periprosthetic infectious membranes. Thirty patients (male n = 15, 50%; female n = 15, 50%) with 30 suspected PJIs in painful total hip arthroplasties (THAs) were included in this prospective, controlled, non-blinded trial. The results were compared with intra-operatively obtained representative deep tissue samples. In summary, 27 out of 30 patients were diagnosed correctly as infected (17/17) or non-infected (10/13). The sensitivity to predict a PJI using the Retroforce® sampling forceps in addition to standard diagnostics was 85%, the specificity 100%. Retrograde synovial biopsy is a new and rapid diagnostic procedure under local anaesthesia in patients with painful THAs with similar histological results compared to deep tissue sampling.
2000-04-22
KENNEDY SPACE CENTER, FLA. -- A Diver Operator Plug (DOP) is being pulled down into the ocean by a newly designed one-man submarine known as DeepWorker 2000. The activity is part of an operation to attach the plug to a mockup of a solid rocket booster nozzle. DeepWorker 2000 is being tested on its ability to duplicate the sometimes hazardous job United Space Alliance (USA) divers perform to recover the expended boosters in the ocean after a launch. The boosters splash down in an impact area about 140 miles east of Jacksonville and after recovery are towed back to KSC for refurbishment by the specially rigged recovery ships. DeepWorker 2000 will be used in a demonstration during retrieval operations after the upcoming STS-101 launch. The submarine pilot will demonstrate capabilities to cut tangled parachute riser lines using a manipulator arm and attach the DOP to extract water and provide flotation for the booster. DeepWorker 2000 was built by Nuytco Research Ltd., North Vancouver, British Columbia. It is 8.25 feet long, 5.75 feet high, and weighs 3,800 pounds. USA is a prime contractor to NASA for the Space Shuttle program.
Sista, Akhilesh K.; Vedantham, Suresh; Kaufman, John A.
2015-01-01
The societal and individual burden caused by acute and chronic lower extremity venous disease is considerable. In the past several decades, minimally invasive endovascular interventions have been developed to reduce thrombus burden in the setting of acute deep venous thrombosis to prevent both short- and long-term morbidity and to recanalize chronically occluded or stenosed postthrombotic or nonthrombotic veins in symptomatic patients. This state-of-the-art review provides an overview of the techniques and challenges, rationale, patient selection criteria, complications, postinterventional care, and outcomes data for endovascular intervention in the setting of acute and chronic lower extremity deep venous disease. Online supplemental material is available for this article. © RSNA, 2015 PMID:26101920
Czekaj, Jaroslaw; Fary, Camdon; Gaillard, Thierry; Lustig, Sebastien
2017-07-01
Severe varus and valgus knee deformities traditionally are treated with constrained implants, which have a number of disadvantages. We present our results in this challenging group using a low constraint deep-dish mobile bearing implant design. One hundred fifty-four patients (170 arthroplasties) who underwent primary TKA using a deep-dish, mobile bearing posterior-stabilized implant for severe varus (HKA < 170°) or valgus (HKA > 190°) deformity between 2004 and 2009 were evaluated at a mean of 6.6 years post-operatively (minimum of 5 years). Alignment improved from a pre-operative mean (±SD) varus deformity of 167.4° (±2.6°) and a mean (±SD) valgus deformity of 194.1° (±4.0°) to an overall mean (±SD) post-operative mechanical alignment of 178.6° (±3.2°). Twenty-three patients had post-operative varus alignment, five patients had post-operative valgus alignment and 134 knees were in neutral alignment (within a 3° spread). Clinical scores at final follow-up were excellent (IKS score 93.8 (±7.4) and function score 82.4 (±20.2)). Three patients were re-operated upon: one deep infection, one periprosthetic fracture and one revision at 144 months for aseptic loosening of the femoral component. No patient was revised for instability or implant failure. The survival rate at five years was 99.4% and at ten years 98.6%. Satisfactory outcomes can be achieved in patients with substantial varus or valgus deformities using a low constraint deep-dish mobile bearing implant, a standard approach and appropriate soft tissue releases.
A diagnosis system using object-oriented fault tree models
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, F. A.
1990-01-01
Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.
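The backward-chaining step over a fault tree can be sketched as follows; the node structure and gate types are a hypothetical simplification for illustration, not the original object-oriented Flavors implementation:

```python
# Hypothetical sketch of backward chaining over a fault tree: given events
# known NOT to have occurred, prune ruled-out branches and return the basic
# failure events that could still explain the top event. Gate types and
# names are illustrative assumptions.

class Node:
    def __init__(self, name, gate=None, children=()):
        self.name = name            # failure event name
        self.gate = gate            # "AND", "OR", or None for a basic event
        self.children = list(children)

def candidate_causes(node, absent):
    if node.name in absent:
        return set()                # ruled out by observation
    if node.gate is None:
        return {node.name}          # a basic event is itself a candidate cause
    causes = [candidate_causes(c, absent) for c in node.children]
    if node.gate == "OR":           # any child can explain the event
        return set().union(*causes)
    if any(not c for c in causes):  # AND: every child must be explainable
        return set()
    return set().union(*causes)
```

For example, with a top event OR(AND(a, b), c) and c observed absent, the surviving candidate causes are a and b.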
Shafiee, Mohammad Javad; Chung, Audrey G; Khalvati, Farzad; Haider, Masoom A; Wong, Alexander
2017-10-01
While lung cancer is the second most diagnosed form of cancer in men and women, a sufficiently early diagnosis can be pivotal in patient survival rates. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose an evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the evolutionary deep radiomic sequencer discovery approach organically evolves increasingly more efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, this framework improves operational efficiency and enables diagnosis to be run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed evolutionary deep radiomic sequencer discovery framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically proven diagnostic data from the LIDC-IDRI dataset. The EDRS shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.
A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data.
Zheng, Yin; Zhang, Yu-Jin; Larochelle, Hugo
2016-06-01
Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks. Another popular approach to model the multimodal data is through deep neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type of topic model called the Document Neural Autoregressive Distribution Estimator (DocNADE) was proposed and demonstrated state-of-the-art performance for text document modeling. In this work, we show how to successfully apply and extend this model to multimodal data, such as simultaneous image classification and annotation. First, we propose SupDocNADE, a supervised extension of DocNADE, that increases the discriminative power of the learned hidden topic features and show how to employ it to learn a joint representation from image visual words, annotation words and class label information. We test our model on the LabelMe and UIUC-Sports data sets and show that it compares favorably to other topic models. Second, we propose a deep extension of our model and provide an efficient way of training the deep model. Experimental results show that our deep model outperforms its shallow version and reaches state-of-the-art performance on the Multimedia Information Retrieval (MIR) Flickr data set.
Deep Space Spaceflight: The Challenge of Crew Performance in Autonomous Operations
NASA Astrophysics Data System (ADS)
Thaxton, S. S.; Williams, T. J.; Norsk, P.; Zwart, S.; Crucian, B.; Antonsen, E. L.
2018-02-01
Distance from Earth and limited communications in future missions will increase the demands for crew autonomy and dependence on automation, and Deep Space Gateway presents an opportunity to study the impacts of these increased demands on human performance.
Laser-Assisted Wire Additive Manufacturing System for the Deep Space Gateway
NASA Astrophysics Data System (ADS)
Foster, B. D.; Matthews, B.
2018-02-01
Investigation on the Deep Space Gateway will involve experiments/operations inside pressurized modules. Support for those experiments may necessitate a means to fabricate and repair required articles. This capability can be provided through an additive manufacturing (AM) system.
Pick- and waveform-based techniques for real-time detection of induced seismicity
NASA Astrophysics Data System (ADS)
Grigoli, Francesco; Scarabello, Luca; Böse, Maren; Weber, Bernd; Wiemer, Stefan; Clinton, John F.
2018-05-01
The monitoring of induced seismicity is a common operation in many industrial activities, such as conventional and non-conventional hydrocarbon production or mining and geothermal energy exploitation, to cite a few. During such operations, we generally collect very large and strongly noise-contaminated data sets that require robust and automated analysis procedures. Induced seismicity data sets are often characterized by sequences of multiple events with short interevent times or overlapping events; in these cases, pick-based location methods may struggle to correctly assign picks to phases and events, and errors can lead to missed detections and/or reduced location resolution and incorrect magnitudes, which can have significant consequences if real-time seismicity information is used in risk assessment frameworks. To overcome these issues, different waveform-based methods for the detection and location of microseismicity have been proposed. The main advantage of waveform-based methods is that they appear to perform better and can simultaneously detect and locate seismic events, providing high-quality locations in a single step, while the main disadvantage is that they are computationally expensive. Although these methods have been applied to different induced seismicity data sets, an extensive comparison with sophisticated pick-based detection methods is still missing. In this work, we introduce our improved waveform-based detector and compare its performance with two pick-based detectors implemented within the SeisComP3 software suite. We test the performance of these three approaches on both synthetic and real data sets related to the induced seismicity sequence at the deep geothermal project in the vicinity of the city of St. Gallen, Switzerland.
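For context, the archetypal pick-based trigger is a short-term/long-term average (STA/LTA) ratio on the signal envelope; the sketch below is a generic illustration of that idea, not the SeisComP3 detectors evaluated in the paper:

```python
# Generic STA/LTA trigger sketch (window lengths and threshold are
# illustrative assumptions, not a real monitoring configuration).

def sta_lta(signal, n_sta, n_lta, threshold):
    """Return sample indices where the STA/LTA ratio exceeds threshold."""
    triggers = []
    for i in range(n_lta, len(signal)):
        sta = sum(abs(s) for s in signal[i - n_sta:i]) / n_sta  # short window
        lta = sum(abs(s) for s in signal[i - n_lta:i]) / n_lta  # long window
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers
```

A sudden amplitude step in an otherwise steady trace raises the short-term average well before the long-term average catches up, which is exactly when the ratio trips the trigger.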
NASA Technical Reports Server (NTRS)
Killian, D. A.; Menninger, F. J.; Gorman, T.; Glenn, P.
1988-01-01
The Technical Facilities Controller is a microprocessor-based energy management system that is to be implemented in the Deep Space Network facilities. This system is used in conjunction with facilities equipment at each of the complexes in the operation and maintenance of air-conditioning equipment, power generation equipment, power distribution equipment, and other primary facilities equipment. The implementation of the Technical Facilities Controller was completed at the Goldstone Deep Space Communications Complex and is now operational. The installation completed at the Goldstone Complex is described and the utilization of the Technical Facilities Controller is evaluated. The findings will be used in the decision to implement a similar system at the overseas complexes at Canberra, Australia, and Madrid, Spain.
Abràmoff, Michael David; Lou, Yiyue; Erginay, Ali; Clarida, Warren; Amelon, Ryan; Folk, James C; Niemeijer, Meindert
2016-10-01
To compare performance of a deep-learning enhanced algorithm for automated detection of diabetic retinopathy (DR) to the previously published performance of that algorithm, the Iowa Detection Program (IDP), without deep learning components, on the same publicly available set of fundus images and the previously reported consensus reference standard set by three US Board-certified retinal specialists. We used the previously reported consensus reference standard of referable DR (rDR), defined as International Clinical Classification of Diabetic Retinopathy moderate, severe nonproliferative (NPDR), proliferative DR, and/or macular edema (ME). Neither Messidor-2 images, nor the three retinal specialists setting the Messidor-2 reference standard, were used for training IDx-DR version X2.1. Sensitivity, specificity, negative predictive value, area under the curve (AUC), and their confidence intervals (CIs) were calculated. Sensitivity was 96.8% (95% CI: 93.3%-98.8%), specificity was 87.0% (95% CI: 84.2%-89.4%), with 6/874 false negatives, resulting in a negative predictive value of 99.0% (95% CI: 97.8%-99.6%). No cases of severe NPDR, PDR, or ME were missed. The AUC was 0.980 (95% CI: 0.968-0.992). Sensitivity was not statistically different from published IDP sensitivity, which had a CI of 94.4% to 99.3%, but specificity was significantly better than the published IDP specificity CI of 55.7% to 63.0%. A deep-learning enhanced algorithm for the automated detection of DR achieves significantly better performance than a previously reported, otherwise essentially identical, algorithm that does not employ deep learning. Deep learning enhanced algorithms have the potential to improve the efficiency of DR screening, and thereby to prevent visual loss and blindness from this devastating disease.
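The reported sensitivity, specificity, and negative predictive value all follow directly from confusion-matrix counts; a minimal sketch (the counts in the usage below are illustrative placeholders, not the Messidor-2 results):

```python
# Screening metrics from confusion-matrix counts: tp/fp/tn/fn are counts of
# true positives, false positives, true negatives, and false negatives.

def screening_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # referable cases correctly flagged
        "specificity": tn / (tn + fp),  # non-referable cases correctly passed
        "npv": tn / (tn + fn),          # confidence in a negative screen
    }
```

With the study's very low false-negative count, the NPV term tn / (tn + fn) sits close to 1, which is what drives the reported 99.0% value.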
Kark, Salit; Brokovich, Eran; Mazor, Tessa; Levin, Noam
2015-12-01
Globally, extensive marine areas important for biodiversity conservation and ecosystem functioning are undergoing exploration and extraction of oil and natural gas resources. Such operations are expanding to previously inaccessible deep waters and other frontier regions, while conservation-related legislation and planning is often lacking. Conservation challenges arising from offshore hydrocarbon development are wide-ranging. These challenges include threats to ecosystems and marine species from oil spills, negative impacts on native biodiversity from invasive species colonizing drilling infrastructure, and increased political conflicts that can delay conservation actions. With mounting offshore operations, conservationists need to urgently consider some possible opportunities that could be leveraged for conservation. Leveraging options, as part of multi-billion dollar marine hydrocarbon operations, include the use of facilities and costly equipment of the deep and ultra-deep hydrocarbon industry for deep-sea conservation research and monitoring and establishing new conservation research, practice, and monitoring funds and environmental offsetting schemes. The conservation community, including conservation scientists, should become more involved in the earliest planning and exploration phases and remain involved throughout the operations so as to influence decision making and promote continuous monitoring of biodiversity and ecosystems. A prompt response by conservation professionals to offshore oil and gas developments can mitigate impacts of future decisions and actions of the industry and governments. New environmental decision support tools can be used to explicitly incorporate the impacts of hydrocarbon operations on biodiversity into marine spatial and conservation plans and thus allow for optimum trade-offs among multiple objectives, costs, and risks. © 2015 Society for Conservation Biology.
Code of Federal Regulations, 2011 CFR
2011-07-01
... MINERALS REVENUE MANAGEMENT RELIEF OR REDUCTION IN ROYALTY RATES OCS Oil, Gas, and Sulfur General Royalty Relief for Drilling Ultra-Deep Wells on Leases Not Subject to Deep Water Royalty Relief § 203.35 What... Development in writing of your intent to begin drilling operations on all your ultra-deep wells. (b) Before...
Code of Federal Regulations, 2011 CFR
2011-07-01
... INTERIOR MINERALS REVENUE MANAGEMENT RELIEF OR REDUCTION IN ROYALTY RATES OCS Oil, Gas, and Sulfur General Royalty Relief for Drilling Ultra-Deep Wells on Leases Not Subject to Deep Water Royalty Relief § 203.33... from qualified wells on or after May 18, 2007, reported on the Oil and Gas Operations Report, Part A...
NASA Technical Reports Server (NTRS)
1974-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. Deep Space stations, ground communications, and network operations control capabilities are described. The network is designed for two-way communications with unmanned spacecraft traveling from approximately 1600 km from Earth to the farthest planets in the solar system. It has provided tracking and data acquisition support for the following projects: Ranger, Surveyor, Mariner, Pioneer, Apollo, Helios, Viking, and the Lunar Orbiter.
Sabokrou, Mohammad; Fayyaz, Mohsen; Fathy, Mahmood; Klette, Reinhard
2017-02-17
This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and the subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of "many" normal cubic patches. This deep network operates on small cubic patches in the first stage, before carefully resizing the remaining candidates of interest and evaluating those at the second stage using a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect "simple" normal patches, such as background patches, and more complex normal patches are detected at deeper layers. It is shown that the proposed novel technique (a cascade of two cascaded classifiers) performs comparably to current top-performing detection and localization methods on standard benchmarks, but outperforms those in general with respect to required computation time.
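The two-stage idea, a cheap model rejecting clearly normal patches so the expensive model only sees ambiguous ones, can be sketched as below; the thresholds and scoring functions are assumptions for illustration, not the paper's trained networks:

```python
# Illustrative two-stage cascade: a cheap score filters confidently normal
# patches early; only the remainder reaches the costly deep score.
# t_cheap and t_deep are hypothetical thresholds.

def cascade_detect(patches, cheap_score, deep_score, t_cheap=0.2, t_deep=0.5):
    anomalies = []
    for p in patches:
        s1 = cheap_score(p)          # fast, shallow stage-1 score
        if s1 < t_cheap:             # confidently normal: stop early
            continue
        if deep_score(p) > t_deep:   # expensive stage-2 decision
            anomalies.append(p)
    return anomalies
```

The time savings come from the early `continue`: in crowded-scene video most patches are background, so the expensive second stage runs on only a small fraction of the input.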
Light Infantry: A Tactical Deep Battle Asset for Central Europe.
1985-12-02
that overview significant lessons will be extracted with a view toward modern day applicability. A brief review of current capabilities for U.S. o...General Boldin wrote about a forty-five day operation in the enemy's rear in which he established communications with friendly forces and was able to...between the lessons extracted above and the benefits of deep battle operations identified by LT Holder. He contends that the principal benefits of deep
Light Armor in Deep Operational Maneuver: The New Excalibur?
1994-05-04
organizations. Operation Bagration: The Belorussian Campaign During The Russo-German War, 1944. Operation Bagration took place...McLean, VA: Brassey's Defence Publishers, 1984), p. 141. 35. James J. Schneider, "V.K. Triandafillov, Military Theorist," The Journal of Soviet...Doctrine," The RUSI Journal (March 1976): 38-46. Barbara, James C. and Brown, R.F. "Deep Thrust on the Extended Battlefield," Military Review (October 1982
Process control and recovery in the Link Monitor and Control Operator Assistant
NASA Technical Reports Server (NTRS)
Lee, Lorrine; Hill, Randall W., Jr.
1993-01-01
This paper describes our approach to providing process control and recovery functions in the Link Monitor and Control Operator Assistant (LMCOA). The focus of the LMCOA is to provide semi-automated monitor and control to support station operations in the Deep Space Network. The LMCOA will be demonstrated with precalibration operations for Very Long Baseline Interferometry on a 70-meter antenna. Precalibration, the task of setting up the equipment to support a communications link with a spacecraft, is a manual, time-consuming and error-prone process. One problem with the current system is that it does not provide explicit feedback about the effects of control actions. The LMCOA uses a Temporal Dependency Network (TDN) to represent an end-to-end sequence of operational procedures and a Situation Manager (SM) module to provide process control, diagnosis, and recovery functions. The TDN is a directed network representing precedence, parallelism, precondition, and postcondition constraints. The SM maintains an internal model of the expected and actual states of the subsystems in order to determine if each control action executed successfully and to provide feedback to the user. The LMCOA is implemented on a NeXT workstation using Objective-C, Interface Builder and the C Language Integrated Production System.
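A TDN-style execution loop, run each step after its predecessors, then check its postcondition so failures can be flagged for diagnosis and recovery, can be sketched with Python's standard topological sorter; the step names, actions, and postconditions below are hypothetical, not the LMCOA's actual procedures:

```python
# Minimal sketch of executing a Temporal Dependency Network: steps run only
# after their predecessors, and each step's postcondition is checked so
# failed control actions can be reported. Illustrative only.

from graphlib import TopologicalSorter

def run_tdn(precedence, actions, postconditions, state):
    """precedence maps each step to the set of steps it depends on."""
    failed = []
    for step in TopologicalSorter(precedence).static_order():
        actions[step](state)                  # issue the control action
        if not postconditions[step](state):   # did it actually take effect?
            failed.append(step)               # flag for diagnosis/recovery
    return failed
```

The postcondition check mirrors the Situation Manager's comparison of expected versus actual subsystem state, giving the operator the explicit feedback that the paper notes the then-current system lacked.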
Analysis of the environmental issues concerning the deployment of an OTEC power plant in Martinique.
Devault, Damien A; Péné-Annette, Anne
2017-11-01
Ocean thermal energy conversion (OTEC) is a form of power generation, which exploits the temperature difference between warm surface seawater and cold deep seawater. Suitable conditions for OTEC occur in deep warm seas, especially the Caribbean, the Red Sea and parts of the Indo-Pacific Ocean. The continuous power provided by this renewable power source makes a useful contribution to a renewable energy mix because of the intermittence of the other major renewable power sources, i.e. solar or wind power. Industrial-scale OTEC power plants have simply not been built. However, recent innovations and greater political awareness of power transition to renewable energy sources have strengthened the support for such power plants and, after preliminary studies in the Reunion Island (Indian Ocean), the Martinique Island (West Indies) has been selected for the development of the first full-size OTEC power plant in the world, to be a showcase for testing and demonstration. An OTEC plant, even if the energy produced is cheap, calls for high initial capital investment. However, this technology is of interest mainly in tropical areas where funding is limited. The cost of innovations to create an operational OTEC plant has to be amortized, and this technology remains expensive. This paper will discuss the heuristic, technical and socio-economic limits and consequences of deploying an OTEC plant in Martinique to highlight respectively the impact of the OTEC plant on the environment and the impact of the environment on the OTEC plant. After defining OTEC, we will describe the different constraints relating to the setting up of the first operational-scale plant worldwide. This includes the investigations performed (reporting declassified data), the political context and the local acceptance of the project. We will then provide an overview of the processes involved in the OTEC plant and discuss the feasibility of future OTEC installations.
We will also list the extensive marine investigations required prior to installation and the dangers of setting up OTEC plants in inappropriate locations.
Is Multitask Deep Learning Practical for Pharma?
Ramsundar, Bharath; Liu, Bowen; Wu, Zhenqin; Verras, Andreas; Tudor, Matthew; Sheridan, Robert P; Pande, Vijay
2017-08-28
Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple Python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
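Schematically, a multitask network shares one learned representation across several per-task outputs; the NumPy forward pass below illustrates that structure only, with made-up layer sizes and task names, and is not DeepChem's actual API:

```python
# Schematic multitask forward pass: one shared ReLU layer feeds several
# task-specific logistic heads. Shapes and task names are illustrative.

import numpy as np

def multitask_forward(x, W_shared, task_heads):
    h = np.maximum(0.0, x @ W_shared)  # shared representation (ReLU)
    # one sigmoid output per task, all reading the same hidden features
    return {t: 1.0 / (1.0 + np.exp(-(h @ w))) for t, w in task_heads.items()}
```

The sharing is the point: gradients from every task's head update the same `W_shared`, which is why multitask training can help sparsely labeled assays borrow strength from related ones.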
Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo
2017-01-01
We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
FPGA wavelet processor design using language for instruction-set architectures (LISA)
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios
2007-04-01
The design of a microprocessor is a long, tedious, and error-prone task consisting typically of four design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC), and verification. The Language for Instruction-Set Architectures (LISA) allows one to model a microprocessor not only from the instruction set but also from the architecture description, including pipelining behavior, which provides design and development tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor that is typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines, and many general-purpose registers, it turns out that DSP operations consume considerable processing time in a RISC processor. In a second step we have used design principles from programmable digital signal processors (PDSP) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the essential improvement that a TVP has compared with traditional RISC or PDSP designs.
Deep Space Network Antenna Monitoring Using Adaptive Time Series Methods and Hidden Markov Models
NASA Technical Reports Server (NTRS)
Smyth, Padhraic; Mellstrom, Jeff
1993-01-01
The Deep Space Network (DSN), designed and operated by the Jet Propulsion Laboratory for the National Aeronautics and Space Administration (NASA), provides end-to-end telecommunication capabilities between Earth and various interplanetary spacecraft throughout the solar system.
15 CFR 971.602 - Significant adverse environmental effects.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS... testing of recovery equipment, the recovery of manganese nodules in commercial quantities from the deep seabed, and the construction and operation of commercial-scale processing facilities as activities which...
The deep space network, volume 19
NASA Technical Reports Server (NTRS)
1974-01-01
Progress in the DSN is reported for November and December 1973. Research is described in the following areas: functions and facilities, mission support for flight projects, tracking and ground-based navigation, spacecraft/ground communication, network control and operations technology, and deep space stations.
Applications of Deep Learning and Reinforcement Learning to Biological Data.
Mahmud, Mufti; Kaiser, Mohammed Shamim; Hussain, Amir; Vassanelli, Stefano
2018-06-01
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
50 CFR 665.803 - Notifications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... notification of the trip type (either deep-setting or shallow-setting). (b) The permit holder, or designated... in processing an application, permit holders failing to receive important notifications, or sanctions...
Karamintziou, Sofia D.; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G.; Tagaris, George A.; Sakas, Damianos E.; Polychronaki, Georgia E.; Tsirogiannis, George L.; David, Olivier; Nikita, Konstantina S.
2017-01-01
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson’s disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications. PMID:28222198
Picking Deep Filter Responses for Fine-Grained Image Recognition (Open Access Author’s Manuscript)
2016-12-16
stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive... filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new...positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher
NASA Astrophysics Data System (ADS)
De Marchi, G.; Paresce, F.; Straniero, O.; Prada Moroni, P. G.
2004-03-01
Very deep images of the Galactic globular cluster M 4 (NGC 6121) through the F606W and F814W filters were taken in 2001 with the WFPC2 on board the HST. A first published analysis of this data set (Richer et al. 2002) produced the result that the age of M 4 is 12.7 ± 0.7 Gyr (Hansen et al. 2002), thus setting a robust lower limit to the age of the universe. In view of the great astronomical importance of getting this number right, we have subjected the same data set to the simplest possible photometric analysis that completely avoids uncertain assumptions about the origin of the detected sources. This analysis clearly reveals both a thin main sequence, from which can be deduced the deepest statistically complete mass function yet determined for a globular cluster, and a white dwarf (WD) sequence extending all the way down to the 5σ detection limit at I ≃ 27. The WD sequence is abruptly terminated at exactly this limit, as expected from detection statistics. Using our most recent theoretical WD models (Prada Moroni & Straniero 2002) to obtain the expected WD sequence for different ages in the observed bandpasses, we find that the data so far obtained do not reach the peak of the WD luminosity function, thus only allowing one to set a lower limit to the age of M 4 of ~9 Gyr. Thus, the problem of determining the absolute age of a globular cluster and, therefore, the onset of GC formation with cosmologically significant accuracy remains completely open. Only observations several magnitudes deeper than the limit obtained so far would allow one to approach this objective. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA for NASA under contract NAS5-26555.
Lessons Learned from Daily Uplink Operations during the Deep Impact Mission
NASA Technical Reports Server (NTRS)
Stehly, Joseph S.
2006-01-01
The daily preparation of uplink products (commands and files) for Deep Impact was as problematic as the final encounter images were spectacular. The operations team was faced with many challenges during the six-month mission to comet Tempel 1. One of the biggest difficulties was that the Deep Impact Flyby and Impactor vehicles necessitated a high volume of uplink products while also utilizing a new uplink file transfer capability. The Jet Propulsion Laboratory (JPL) Multi-Mission Ground Systems and Services (MGSS) Mission Planning and Sequence Team (MPST) had the responsibility of preparing the uplink products for use on the two spacecraft. These responsibilities included processing nearly 15,000 flight products, modeling the states of the spacecraft during all activities for subsystem review, and ensuring that the proper commands and files were uplinked to the spacecraft. To guarantee this transpired, and that the health and safety of the two spacecraft were not jeopardized, several new ground scripts and procedures were developed while the Deep Impact Flyby and Impactor spacecraft were en route to their encounter with Tempel 1. These scripts underwent several adaptations throughout the mission, up until three days before the separation of the Flyby and Impactor vehicles. The problems presented by Deep Impact's daily operations, and the development of scripts and procedures to ease those challenges, resulted in several valuable lessons learned. These lessons are now being integrated into the design of current and future MGSS missions at JPL.
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multilayer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth.
We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the results obtained with our layer-wise training paradigm.
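The layer-wise narrowing idea described above can be illustrated with a toy cascade (a hypothetical sketch, not the authors' implementation): each stage inspects only the window passed on by the previous stage and returns a tighter sub-window believed to contain the organ.

```python
import numpy as np

def narrow_roi(image, stage_predictors):
    """Toy stacked cascade: each stage returns fractional bounds of a
    tighter sub-window of the current region, so a very small region of
    interest can be localized within a very large image."""
    r0, r1, c0, c1 = 0, image.shape[0], 0, image.shape[1]
    for predict in stage_predictors:
        # each predictor returns fractional bounds within the current window
        fr0, fr1, fc0, fc1 = predict(image[r0:r1, c0:c1])
        h, w = r1 - r0, c1 - c0
        r0, r1 = r0 + int(fr0 * h), r0 + int(fr1 * h)
        c0, c1 = c0 + int(fc0 * w), c0 + int(fc1 * w)
    return r0, r1, c0, c1

# stand-in "trained" stages that always keep the central half of the window
center_half = lambda window: (0.25, 0.75, 0.25, 0.75)
roi = narrow_roi(np.zeros((64, 64)), [center_half, center_half])
```

Two such stages shrink a 64×64 field of view to a 16×16 window, showing how depth in the stack translates into progressively finer localization.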
Lessons Learned for Planning and Estimating Operations Support Requirements
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn
2011-01-01
Operations (phase E) costs are typically small compared to spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead projects to focus on hardware development schedules and costs, de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Even moderate yearly underestimates of the operations costs can present significant life-cycle cost (LCC) impacts for deep space missions with long operational durations, and any LCC growth can directly impact the programs' ability to fund new missions. The D&NF Program Office at Marshall Space Flight Center recently studied cost overruns for 7 D&NF missions related to phase C/D development of operational capabilities and phase E mission operations. The goal was to identify the underlying causes for the overruns and develop practical mitigations to assist the D&NF projects in identifying potential operations risks and controlling the associated impacts to operations development and execution costs. The study found that the drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long-duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This presentation summarizes the study and its results, providing a set of lessons NASA can use to improve early estimation and validation of operations costs.
Algorithm research on infrared imaging target extraction based on GAC model
NASA Astrophysics Data System (ADS)
Li, Yingchun; Fan, Youchen; Wang, Yanqing
2016-10-01
Good target detection and tracking techniques are significantly meaningful for increasing infrared target detection distance and enhancing resolution capacity. For the target detection problem in infrared imaging, firstly, the basic principles of the level set method and the GAC model are analyzed in detail. Secondly, a "convergent force" is added to address the defect that the GAC model stagnates outside deep concave regions and cannot reach deep concave edges, yielding an improved GAC model. Lastly, a self-adaptive detection method combining the Sobel operator and the GAC model is put forward, exploiting the advantages that the rough position of the target can be detected with the Sobel operator while the continuous edge of the target can be obtained through the GAC model. In order to verify the effectiveness of the model, two groups of experiments were carried out on images under different noise conditions, with comparative analysis against the LBF and LIF models. The experimental results show that under slight noise the target can be locked well by the LIF and LBF algorithms, with segmentation accuracy above 0.8. However, under strong noise, the GAC, LIF and LBF algorithms cannot distinguish the target from the noise, so many non-target parts are extracted during the iterative process and segmentation accuracy falls below 0.8. The algorithm proposed in this paper extracts the accurate target position while maintaining segmentation accuracy above 0.8.
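The Sobel step used to find the rough target position can be sketched with a minimal gradient-magnitude computation (a generic textbook Sobel, not the authors' code):

```python
import numpy as np

# standard 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(SOBEL_X * patch)
            gy = np.sum(SOBEL_Y * patch)
            out[i, j] = np.hypot(gx, gy)
    return out

# a vertical step edge: the response is concentrated at the boundary
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

Thresholding such a magnitude map gives the coarse target location that the GAC evolution then refines into a continuous edge.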
NASA Astrophysics Data System (ADS)
Hashimoto, Noriaki; Suzuki, Kenji; Liu, Junchi; Hirano, Yasushi; MacMahon, Heber; Kido, Shoji
2018-02-01
Consolidation and ground-glass opacity (GGO) are two major types of opacities associated with diffuse lung diseases. Accurate detection and classification of such opacities are crucially important in the diagnosis of lung diseases, but the process is subjective and suffers from interobserver variability. Our study purpose was to develop a deep neural network convolution (NNC) system for distinguishing among consolidation, GGO, and normal lung tissue in high-resolution CT (HRCT). We developed an ensemble of two deep NNC models, each of which was composed of neural network regression (NNR) with an input layer, a convolution layer, a fully connected hidden layer, and a fully connected output layer followed by a thresholding layer. The output layer of each NNC provided a map of the likelihood of the corresponding lung opacity of interest. The two NNC models in the ensemble were connected in a class-selection layer. We trained our NNC ensemble with pairs of input 2D axial slices and "teaching" probability maps for the corresponding lung opacity, which were obtained by combining three radiologists' annotations. We randomly selected 10 and 40 slices from HRCT scans of 172 patients for each class as a training and test set, respectively. Our NNC ensemble achieved areas under the receiver-operating-characteristic (ROC) curve (AUC) of 0.981 and 0.958 in distinguishing consolidation and GGO, respectively, from normal lung tissue, yielding a classification accuracy of 93.3% among the 3 classes. Thus, our deep-NNC-based system for classifying diffuse lung diseases achieved high accuracy for the classification of consolidation, GGO, and normal opacity.
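The AUC figures quoted here have a simple probabilistic reading; a minimal sketch (independent of the authors' system) computes the AUC as the probability that a randomly chosen positive case outscores a randomly chosen negative one:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive outscores a random negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfect separation of the two classes gives an AUC of 1.0; overlapping score distributions pull it toward 0.5, which is why values like 0.981 indicate near-perfect discrimination.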
NASA Astrophysics Data System (ADS)
Skinner, L. C.
2009-09-01
So far, the exploration of possible mechanisms for glacial atmospheric CO2 drawdown and marine carbon sequestration has tended to focus on dynamic or kinetic processes (i.e. variable mixing, equilibration or export rates). Here an attempt is made to underline instead the possible importance of changes in the standing volumes of intra-oceanic carbon reservoirs (i.e. different water masses) in influencing the total marine carbon inventory. By way of illustration, a simple mechanism is proposed for enhancing the marine carbon inventory via an increase in the volume of relatively cold and carbon-enriched deep water, analogous to modern Lower Circumpolar Deep Water (LCDW), filling the ocean basins. A set of simple box-model experiments confirms the expectation that a deep sea dominated by an expanded LCDW-like water mass holds more CO2, without any pre-imposed changes in ocean overturning rate, biological export or ocean-atmosphere exchange. The magnitude of this "standing volume effect" (which operates by boosting the solubility and biological pumps) might be as large as the contributions that have previously been attributed to carbonate compensation, terrestrial biosphere reduction or ocean fertilisation, for example. By providing a means of not only enhancing but also driving changes in the efficiency of the biological and solubility pumps, this standing volume mechanism may help to reduce the amount of glacial-interglacial CO2 change that remains to be explained by other mechanisms that are difficult to assess in the geological archive, such as reduced mass transport or mixing rates. This in turn could help narrow the search for forcing conditions capable of pushing the global carbon cycle between glacial and interglacial modes.
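At its core the standing volume effect is arithmetic: for a fixed total ocean volume, shifting the partition toward the carbon-enriched water mass raises the inventory with no change in fluxes. A toy sketch, with purely illustrative concentrations (not values from the paper):

```python
def inventory(frac_lcdw, c_lcdw=2.35, c_upper=2.20, total_vol=1.0):
    """Carbon inventory for a fixed total volume partitioned between an
    LCDW-like carbon-enriched water mass and a less enriched one.
    Concentrations and volume are in arbitrary illustrative units."""
    return (frac_lcdw * total_vol * c_lcdw
            + (1 - frac_lcdw) * total_vol * c_upper)
```

Raising the LCDW-like fraction from 0.4 to 0.7 increases the total inventory even though overturning, export, and air-sea exchange rates are untouched, which is exactly the point the box-model experiments make.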
Diverse, rare microbial taxa responded to the Deepwater Horizon deep-sea hydrocarbon plume
Kleindienst, Sara; Grim, Sharon; Sogin, Mitchell; Bracco, Annalisa; Crespo-Medina, Melitza; Joye, Samantha B
2016-01-01
The Deepwater Horizon (DWH) oil well blowout generated an enormous plume of dispersed hydrocarbons that substantially altered the Gulf of Mexico's deep-sea microbial community. A significant enrichment of distinct microbial populations was observed, yet, little is known about the abundance and richness of specific microbial ecotypes involved in gas, oil and dispersant biodegradation in the wake of oil spills. Here, we document a previously unrecognized diversity of closely related taxa affiliating with Cycloclasticus, Colwellia and Oceanospirillaceae and describe their spatio-temporal distribution in the Gulf's deepwater, in close proximity to the discharge site and at increasing distance from it, before, during and after the discharge. A highly sensitive, computational method (oligotyping) applied to a data set generated from 454-tag pyrosequencing of bacterial 16S ribosomal RNA gene V4–V6 regions, enabled the detection of population dynamics at the sub-operational taxonomic unit level (0.2% sequence similarity). The biogeochemical signature of the deep-sea samples was assessed via total cell counts, concentrations of short-chain alkanes (C1–C5), nutrients, (colored) dissolved organic and inorganic carbon, as well as methane oxidation rates. Statistical analysis elucidated environmental factors that shaped ecologically relevant dynamics of oligotypes, which likely represent distinct ecotypes. Major hydrocarbon degraders, adapted to the slow-diffusive natural hydrocarbon seepage in the Gulf of Mexico, appeared unable to cope with the conditions encountered during the DWH spill or were outcompeted. In contrast, diverse, rare taxa increased rapidly in abundance, underscoring the importance of specialized sub-populations and potential ecotypes during massive deep-sea oil discharges and perhaps other large-scale perturbations. PMID:26230048
Vector-based navigation using grid-like representations in artificial agents.
Banino, Andrea; Barry, Caswell; Uria, Benigno; Blundell, Charles; Lillicrap, Timothy; Mirowski, Piotr; Pritzel, Alexander; Chadwick, Martin J; Degris, Thomas; Modayil, Joseph; Wayne, Greg; Soyer, Hubert; Viola, Fabio; Zhang, Brian; Goroshin, Ross; Rabinowitz, Neil; Pascanu, Razvan; Beattie, Charlie; Petersen, Stig; Sadik, Amir; Gaffney, Stephen; King, Helen; Kavukcuoglu, Koray; Hassabis, Demis; Hadsell, Raia; Kumaran, Dharshan
2018-05-01
Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
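Path integration itself, the computation the recurrent network was trained to perform, is just the running sum of self-motion vectors. A minimal sketch (a generic dead-reckoning illustration, not the paper's network):

```python
import math

def path_integrate(start, steps):
    """Dead-reckon position by accumulating self-motion vectors, each
    given as (heading in radians, speed per step)."""
    x, y = start
    for heading, speed in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

# move 3 units east, then 4 units north: net displacement is 5 (3-4-5)
x, y = path_integrate((0.0, 0.0), [(0.0, 3.0), (math.pi / 2, 4.0)])
```

An agent carrying this running estimate can compute the direct vector from its current position to a remembered goal, which is the essence of vector-based navigation.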
Developing Deep Learning Applications for Life Science and Pharma Industry.
Siegismund, Daniel; Tolkachev, Vasily; Heyse, Stephan; Sick, Beate; Duerr, Oliver; Steigele, Stephan
2018-06-01
Deep learning has boosted artificial intelligence over the past 5 years and is now seen as one of the major areas of technological innovation, predicted to replace many repetitive but complex human labor tasks within the next decade. It is also expected to be 'game changing' for research activities in pharma and life sciences, where large sets of similar yet complex data samples are systematically analyzed. Deep learning is currently conquering formerly expert domains, especially in areas requiring perception that were previously not amenable to standard machine learning. A typical example is the automated analysis of images, which are produced en masse in many domains, e.g., in high-content screening or digital pathology. Deep learning makes it possible to create competitive applications in what have so far been regarded as core domains of 'human intelligence'. Applications of artificial intelligence have been enabled in recent years by (i) the massive availability of data samples collected in pharma-driven drug programs ('big data'), (ii) algorithmic advancements in deep learning, and (iii) increases in compute power. Such applications are based on software frameworks with specific strengths and weaknesses. Here, we introduce typical applications and underlying frameworks for deep learning, together with a set of practical criteria for developing production-ready solutions in life science and pharma research. Based on our own experience in successfully developing deep learning applications, we provide suggestions and a baseline for selecting the most suitable frameworks for future-proof and cost-effective development. © Georg Thieme Verlag KG Stuttgart · New York.
Visual Saliency Detection Based on Multiscale Deep CNN Features.
Guanbin Li; Yizhou Yu
2016-11-01
Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. The penultimate layer of our neural network has been confirmed to be a discriminative high-level feature vector for saliency detection, which we call deep contrast feature. To generate a more robust feature, we integrate handcrafted low-level features with our deep contrast feature. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving the state-of-the-art performance on all public benchmarks, improving the F-measure by 6.12% and 10%, respectively, on the DUT-OMRON data set and our new data set (HKU-IS), and lowering the mean absolute error by 9% and 35.3%, respectively, on these two data sets.
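The two evaluation metrics quoted above can be stated exactly; a small sketch (the β² = 0.3 weighting is the convention common in saliency benchmarks, assumed here rather than taken from the paper):

```python
def f_measure(tp, fp, fn, beta2=0.3):
    """Weighted F-measure as commonly used in saliency benchmarks;
    beta^2 = 0.3 emphasizes precision over recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    pixelwise ground-truth annotation (both flattened to lists)."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)
```

Reported gains of a few percentage points in F-measure, paired with a lower MAE, mean the predicted maps both rank salient pixels better and match the ground-truth values more closely.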
Genetic homogeneity in the deep-sea grenadier Macrourus berglax across the North Atlantic Ocean
NASA Astrophysics Data System (ADS)
Coscia, Ilaria; Castilho, Rita; Massa-Gallucci, Alexia; Sacchi, Carlotta; Cunha, Regina L.; Stefanni, Sergio; Helyar, Sarah J.; Knutsen, Halvor; Mariani, Stefano
2018-02-01
Paucity of data on population structure and connectivity in deep-sea species remains a major obstacle to their sustainable management and conservation in the face of ever-increasing fisheries pressure and other forms of impact on deep-sea ecosystems. The roughhead grenadier Macrourus berglax presents all the classical characteristics of a deep-sea species, such as slow growth and low fecundity, which make it particularly vulnerable to anthropogenic impact due to its low resilience to change. In this study, the population structure of the roughhead grenadier is investigated throughout its geographic distribution using two sets of molecular markers: a partial sequence of the control region of mitochondrial DNA and species-specific microsatellites. No evidence of significant structure was found throughout the North Atlantic, with both sets of molecular markers yielding the same result of overall homogeneity. We posit two non-mutually exclusive scenarios that can explain such an outcome: i) substantial gene flow among locations, possibly maintained by larval stages; ii) very large effective size of post-glacially expanded populations. The results can inform management strategies for this by-caught species and contribute to the broader issue of biological connectivity in the deep ocean.
Aziz, Faisal; Lehman, Erik; Blebea, John; Lurie, Fedor
2017-01-01
Background: Deep venous thrombosis after any surgical operation is considered a preventable complication. Lower extremity bypass surgery is a commonly performed operation to improve blood flow to the lower extremities in patients with severe peripheral arterial disease. Despite advances in endovascular surgery, lower extremity arterial bypass remains the gold standard treatment for severe, symptomatic peripheral arterial disease. The purpose of this study is to identify the clinical risk factors associated with development of deep venous thrombosis after lower extremity bypass surgery. Methods: The American College of Surgeons' NSQIP database was utilized, and all lower extremity bypass procedures performed in 2013 were examined. Patient and procedural characteristics were evaluated. Univariate and multivariate logistic regression analysis was used to determine independent risk factors for the development of postoperative deep venous thrombosis. Results: A total of 2646 patients (65% males and 35% females) underwent lower extremity open revascularization during the year 2013. The following factors were found to be significantly associated with postoperative deep venous thrombosis: transfusion of >4 units of packed red blood cells (odds ratio (OR) = 5.21, confidence interval (CI) = 1.29-22.81, p = 0.03), postoperative urinary tract infection (OR = 12.59, CI = 4.12-38.48, p < 0.01), length of hospital stay >28 days (OR = 9.30, CI = 2.79-30.92, p < 0.01), bleeding (OR = 2.93, CI = 1.27-6.73, p = 0.01), deep wound infection (OR = 3.21, CI = 1.37-7.56, p < 0.01), and unplanned reoperation (OR = 4.57, CI = 2.03-10.26, p < 0.01). Of these, multivariable analysis identified the factors independently associated with development of deep venous thrombosis after lower extremity bypass surgery to be unplanned reoperation (OR = 3.57, CI = 1.54-8.30, p < 0.01), reintubation (OR = 8.93, CI = 2.66-29.97, p < 0.01), and urinary tract infection (OR = 7.64, CI = 2.27-25.73, p < 0.01). Presence of all three factors was associated with a 54% incidence of deep venous thrombosis. Conclusions: Development of deep venous thrombosis after lower extremity bypass is a serious but infrequent complication. Patients who require an unplanned return to the operating room, require reintubation, or develop a postoperative urinary tract infection are at high risk for developing postoperative deep venous thrombosis. Increased monitoring and adequate deep venous thrombosis prophylaxis are suggested for such patients.
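Odds ratios and confidence intervals of the kind reported above come from 2×2 exposure-outcome tables; a generic sketch of the Wald construction (the counts below are illustrative, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with DVT,   b = exposed without,
    c = unexposed with DVT, d = unexposed without."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR) is sqrt of the sum of reciprocal counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 10/100 exposed vs 5/400 unexposed develop DVT
or_, lo, hi = odds_ratio_ci(10, 90, 5, 395)
```

When, as in the table above, the lower CI bound stays above 1, the association is statistically significant at the chosen level.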
Hybrid AlGaN-SiC Avalanche Photodiode for Deep-UV Photon Detection
NASA Technical Reports Server (NTRS)
Aslam, Shahid; Herrero, Federico A.; Sigwarth, John; Goldsman, Neil; Akturk, Akin
2010-01-01
The proposed device is capable of counting ultraviolet (UV) photons, is compatible for inclusion in space instruments, and has applications as a deep-UV detector for calibration systems, curing systems, and crack detection. The device is based on a Separate Absorption and Charge Multiplication (SACM) structure, with an aluminum gallium nitride (AlGaN) absorber on a silicon carbide (SiC) avalanche photodiode (APD). The AlGaN layer absorbs incident UV photons and injects photogenerated carriers into the underlying SiC APD, which is operated in Geiger mode and provides current multiplication via avalanche breakdown. The solid-state detector is capable of sensing 100-to-365-nanometer wavelength radiation at a flux level as low as 6 photons/pixel/s. Advantages include visible-light blindness, operation in harsh environments (e.g., high temperatures), deep-UV detection response, high gain, and Geiger-mode operation at low voltage. Furthermore, the device can also be designed in array formats, e.g., linear arrays or 2D arrays (micropixels inside a superpixel).
NASA Astrophysics Data System (ADS)
Arumugam, Vinodiran
2013-08-01
Breast cancer remains a significant cause of morbidity and mortality. Assessment of the axillary lymph nodes is part of the staging of the disease. Advances in surgical management of breast cancer have seen a move towards intra-operative lymph node assessment that facilitates an immediate axillary clearance if it is indicated. Raman spectroscopy, a technique based on the inelastic scattering of light, has previously been shown to be capable of differentiating between normal and malignant tissue. These results, based on the biochemical composition of the tissue, potentially allow for this technique to be utilised in this clinical context. The aim of this study was to evaluate the facility of Raman spectroscopy both to assess axillary lymph node tissue within the theatre setting and to achieve results comparable to other intra-operative techniques within a clinically relevant time frame. Initial experiments demonstrated that these aims were feasible within the context of both the theatre environment and current surgical techniques. A laboratory-based feasibility study involving 17 patients and 38 lymph node samples achieved sensitivities and specificities of >90% in unsupervised testing. A total of 339 lymph node samples from 66 patients were subsequently assessed within the theatre environment. Chemometric analysis of these data demonstrated sensitivities of up to 94% and specificities of up to 99% in unsupervised testing. The best results were achieved when comparing negative nodes from N0 patients and nodes containing macrometastases. Spectral analysis revealed increased levels of lipid in the negative nodes and increased DNA and protein levels in the positive nodes. Further studies highlighted the reproducibility of these results using different equipment, users and times from excision. This study uses Raman spectroscopy for the first time in an operating theatre and demonstrates that the results obtained, in real time, are comparable, if not superior, to current intra-operative techniques of lymph node assessment.
The deep space network, volume 15
NASA Technical Reports Server (NTRS)
1973-01-01
The DSN progress is reported in flight project support, TDA research and technology, network engineering, hardware and software implementation, and operations. Topics discussed include: DSN functions and facilities, planetary flight projects, tracking and ground-based navigation, communications, data processing, network control system, and deep space stations.
The Deep Space Network, volume 39
NASA Technical Reports Server (NTRS)
1977-01-01
The functions, facilities, and capabilities of the Deep Space Network and its support of the Pioneer, Helios, and Viking missions are described. Progress in tracking and data acquisition research and technology, network engineering and modifications, as well as hardware and software implementation and operations are reported.
Deep space network Mark 4A description
NASA Technical Reports Server (NTRS)
Wallace, R. J.; Burt, R. W.
1986-01-01
The general system configuration for the Mark 4A Deep Space Network is described. The arrangement and complement of antennas at the communications complexes and subsystem equipment at the signal processing centers are described. A description of the Network Operations Control Center is also presented.
RD860 and RD860L Engines with Deep Thrust Throttling and a High Technology Readiness Level (TRL)
NASA Astrophysics Data System (ADS)
Prokopchuk, O. O.; Shul'ga, V. A.; Dibrivnyi, O. V.; Kukhta, A. S.
2018-04-01
To solve the problems of delivering payloads to the Martian surface and returning them to orbit, liquid rocket engines are needed that operate on storable propellants, offer deep throttling capability, and have high energy-mass characteristics.
On the Shallow Processing (Dis)Advantage: Grammar and Economy.
Koornneef, Arnout; Reuland, Eric
2016-01-01
In the psycholinguistic literature it has been proposed that readers and listeners often adopt a "good-enough" processing strategy in which a "shallow" representation of an utterance, driven by (top-down) extra-grammatical processes, has a processing advantage over a "deep" (bottom-up) grammatically driven representation of that same utterance. In the current contribution we claim, on both theoretical and experimental grounds, that this proposal is overly simplistic. Most importantly, in the domain of anaphora there is now an accumulating body of evidence showing that the anaphoric dependencies between (reflexive) pronominals and their antecedents are subject to an economy hierarchy. In this economy hierarchy, deriving anaphoric dependencies by deep, grammatical operations incurs lower processing costs than doing so by shallow, extra-grammatical operations. In addition, in cases of ambiguity, when both a shallow and a deep derivation are available to the parser, the latter is actually preferred. This, we argue, contradicts the basic assumptions of the shallow-deep dichotomy and, hence, a rethinking of the good-enough processing framework is warranted.
[Automated Assessment for Bone Age of Left Wrist Joint in Uyghur Teenagers by Deep Learning].
Hu, T H; Huo, Z; Liu, T A; Wang, F; Wan, L; Wang, M W; Chen, T; Wang, Y H
2018-02-01
To realize automated bone age assessment by applying deep learning to digital radiography (DR) image recognition of the left wrist joint in Uyghur teenagers, and to explore its practical application value in forensic bone age assessment. Pretreated X-ray films of the left wrist joint, taken from 245 male and 227 female Uyghur teenagers in the Uygur Autonomous Region aged from 13.0 to 19.0 years old, were chosen as subjects, and AlexNet was used as the regression model for image recognition. From the total samples above, 60% of the male and female DR images of the left wrist joint were selected as the training set, and 10% of the samples were selected as the validation set. The remaining 30% were used as the test set to obtain the image recognition accuracy within error ranges of ±1.0 and ±0.7 years of the real age. The modelling results of the deep learning algorithm showed that for error ranges of ±1.0 and ±0.7 years, the accuracy on the training set was 81.4% and 75.6% in males, and 80.5% and 74.8% in females, respectively. For error ranges of ±1.0 and ±0.7 years, the accuracy on the test set was 79.5% and 71.2% in males, and 79.4% and 66.2% in females, respectively. The combination of bone age research on teenagers' left wrist joints and deep learning, which has high accuracy and good feasibility, can serve as the research basis of an automated bone age assessment system for the remaining joints of the body. Copyright© by the Editorial Department of Journal of Forensic Medicine.
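The accuracy criterion used here, the share of predictions falling within ±1.0 or ±0.7 years of the true age, can be sketched directly (the ages below are toy numbers, not the study data):

```python
def accuracy_within(pred_ages, true_ages, tol):
    """Fraction of predictions whose absolute error is within +/- tol
    years of the reference age -- the bone-age accuracy criterion."""
    hits = sum(abs(p - t) <= tol for p, t in zip(pred_ages, true_ages))
    return hits / len(pred_ages)

# hypothetical predictions vs. chronological ages
preds = [14.2, 15.9, 17.5, 13.1]
truth = [14.0, 15.0, 18.8, 13.0]
```

Tightening the tolerance from ±1.0 to ±0.7 years necessarily lowers the accuracy, which is the pattern seen in the reported figures.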
Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii
2017-01-01
Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
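The core of the CNN-to-SNN conversion described above can be seen in a single neuron: a non-leaky integrate-and-fire unit with reset-by-subtraction fires at a rate proportional to the rectified input. A minimal sketch (a standard rate-coding construction, not the authors' toolchain):

```python
def if_spike_rate(input_current, threshold=1.0, steps=1000):
    """Firing rate of a non-leaky integrate-and-fire neuron driven by a
    constant input; with reset-by-subtraction the rate approximates
    max(0, input) / threshold, the basis of ReLU-to-spiking conversion."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current
        if v >= threshold:
            v -= threshold  # reset by subtraction preserves residual charge
            spikes += 1
    return spikes / steps
```

Because each spike is one addition rather than a multiply-accumulate, lowering firing rates directly trades classification error against operation count, which is the trade-off the paper quantifies.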
Advanced development of double-injection, deep-impurity semiconductor switches
NASA Technical Reports Server (NTRS)
Hanes, M. H.
1987-01-01
Deep-impurity, double-injection devices, commonly referred to as (DI) squared devices, represent a class of semiconductor switches possessing a very high degree of tolerance to electron and neutron irradiation and to elevated temperature operation. These properties have caused them to be considered as attractive candidates for space power applications. The design, fabrication, and testing of several varieties of (DI) squared devices intended for power switching are described. All of these designs were based upon gold-doped silicon material. Test results, along with results of computer simulations of device operation, other calculations based upon the assumed mode of operation of (DI) squared devices, and empirical information regarding power semiconductor device operation and limitations, have led to the conclusion that these devices are not well suited to high-power applications. When operated in power circuitry configurations, they exhibit high power losses in both the off-state and on-state modes. These losses are caused by phenomena inherent to the physics and material of the devices and cannot be much reduced by device design optimizations. The (DI) squared technology may, however, find application in low-power functions such as sensing, logic, and memory, where tolerance to radiation and temperature is desirable (especially if device performance is improved by incorporation of deep-level impurities other than gold).
NASA Technical Reports Server (NTRS)
Tikidjian, Raffi; Mackey, Ryan
2008-01-01
The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software (see figure) is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the spacecraft tracked and changes in communication demand for up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes), and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling--for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.
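A Monte Carlo component of the kind described (drawing link counts, data rates, and pass durations per mission, then accumulating data-volume statistics) might be sketched as follows; the mission parameters below are invented placeholders, not actual DSN mission data:

```python
import random
import statistics

def simulate_day(missions, rng):
    """Total downlink volume (Gb) for one simulated day across a mission set."""
    total = 0.0
    for m in missions:
        links = rng.randint(*m["links_per_day"])     # number of passes today
        for _ in range(links):
            rate = rng.uniform(*m["rate_mbps"])      # achieved data rate (Mbps)
            hours = rng.uniform(*m["pass_hours"])    # pass duration (h)
            total += rate * hours * 3600 / 1000.0    # Mb -> Gb
    return total

rng = random.Random(42)
missions = [  # hypothetical mission set
    {"links_per_day": (1, 3), "rate_mbps": (0.5, 2.0), "pass_hours": (2, 8)},
    {"links_per_day": (0, 1), "rate_mbps": (0.1, 0.5), "pass_hours": (4, 10)},
]
# Monte Carlo: repeat many simulated days to build the metric's distribution
volumes = [simulate_day(missions, rng) for _ in range(1000)]
mean_volume = statistics.mean(volumes)
```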
New ideas for shallow gas well control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourgoyne, A.T.; Kelly, O.A.; Sandoz, C.L.
1996-06-01
Flow from an unexpected shallow gas sand is one of the most difficult well control problems faced by oil and gas well operators during drilling operations. Current well control practice for bottom-supported marine rigs usually calls for shutting in the well when a kick is detected, if sufficient casing has been set to keep any flow underground. However, when shallow gas is encountered, casing may not be set deep enough to keep the underground flow from broaching to surface near the platform foundations. Once the flow reaches surface, craters are sometimes formed which can lead to loss of the rig and associated marine structures. This short article overviews an ongoing study by Louisiana State University of the breakdown resistance of shallow marine sediments, using leak-off test data and geotechnical reports provided by Unocal. The authors conclude that such study is important for improving the characterization of shallow marine sediments to allow more reliable shallow casing designs. This study has already shown that the sediment failure mechanisms that lead to cratering have been poorly understood. In addition, there has been considerable uncertainty as to the best choices of well design parameters and well control contingency plans that will minimize the risks associated with a shallow gas flow.
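The underlying well-control arithmetic — whether a shut-in kick pushes the pressure at a shallow casing shoe past the formation breakdown pressure, using the standard 0.052 psi/ft-per-ppg hydrostatic gradient — can be sketched as follows. All numbers are hypothetical, not values from the LSU study:

```python
PSI_PER_FT_PER_PPG = 0.052  # hydrostatic gradient constant (psi/ft per ppg)

def shoe_pressure_psi(sicp_psi, mud_ppg, shoe_tvd_ft):
    """Pressure at the casing shoe with the well shut in on a kick:
    shut-in casing pressure plus the mud hydrostatic column above the shoe."""
    return sicp_psi + PSI_PER_FT_PER_PPG * mud_ppg * shoe_tvd_ft

def breakdown_pressure_psi(lot_emw_ppg, shoe_tvd_ft):
    """Formation breakdown pressure at the shoe from a leak-off test,
    expressed as an equivalent mud weight (EMW)."""
    return PSI_PER_FT_PER_PPG * lot_emw_ppg * shoe_tvd_ft

# Hypothetical shallow casing: shoe at 1,500 ft TVD, 9.0 ppg mud,
# 12.0 ppg leak-off EMW, 300 psi shut-in casing pressure.
p_shoe = shoe_pressure_psi(300.0, 9.0, 1500.0)
p_frac = breakdown_pressure_psi(12.0, 1500.0)
shut_in_safe = p_shoe <= p_frac  # False -> the flow may broach underground
```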
NASA Astrophysics Data System (ADS)
De Lange, Gert J.; Krijgsman, Wout
2014-05-01
The Messinian Salinity Crisis (MSC) is a dramatic event that took place ~ 5.9 Ma ago and resulted in the deposition of 0.3-3 km thick evaporites at the Mediterranean seafloor. A considerable and long-lasting controversy has existed over the modes of their formation. During the CIESM Almeria Workshop a consensus was reached on several aspects. In addition, remaining issues to be solved were identified, such as the observed shallow gypsum versus deep dolostone deposits of the early phase of the MSC. The onset of the MSC is marked by deposition of gypsum/sapropel-like alternations, thought to relate to arid/humid climate conditions. Gypsum precipitation only occurred at marginal settings, while dolomite-containing rocks have been reported from deeper settings. A range of potential explanations has been reported, most of which cannot satisfactorily explain all observations. Biogeochemical processes during the MSC are poorly understood and commonly neglected. These may, however, explain how different deposits formed in shallow versus deep environments without needing exceptional physical boundary conditions for each. We present here a unifying mechanism in which gypsum formation occurs at all shallow water depths but its preservation is mostly limited to shallow sedimentary settings. In contrast, ongoing anoxic organic matter (OM) degradation processes in the deep basin result in the formation of dolomite. Gypsum precipitation in evaporating seawater takes place at 3-7 times seawater concentration; seawater is always largely oversaturated relative to dolomite, but dolomite formation is thought to be inhibited by the presence of dissolved sulphate. Thus the conditions for formation of gypsum exclude those for the formation of dolomite and vice versa. Another process that links the saturation states of gypsum and dolomite is that of OM degradation by sulphate reduction.
In stagnant deep water, oxygen is rapidly depleted through OM degradation; sulphate then becomes the main oxidant for OM mineralization, thus reducing the deep-water sulphate content. In addition, considerable amounts of dissolved carbonate are formed. This means that low-sulphate conditions, as in MSC deep water, i.e. conditions unfavorable for gypsum formation, always coincide with anoxic, i.e. oxygen-free, conditions. Thus one would expect a bath-tub rim of gypsum at all shallow depths, yet gypsum appears mainly in silled marginal basins. However, a thick package of heavy gypsum on top of more liquid mud in a marginal/slope setting is highly unstable, so any physical disturbance such as tectonic activity or sea-level change would easily lead to downslope transport of such marginal gypsum deposits. The absence of gypsum and the presence of erosional unconformities at the sill-less Mediterranean passive margins accord with such a removal mechanism. In addition, large-scale re-sedimentation of gypsum has also been found for deep Messinian settings in the Northern Apennines and Sicily. Only at those marginal settings that were silled have the marginal gypsum deposits been preserved. Including the dynamic biogeochemical processes in the thus far static interpretations of evaporite formation mechanisms can thus account for the paradoxical, isochronous formation of shallow gypsum and deep dolomite during the early MSC (1). (1) De Lange G.J. and Krijgsman W. (2010) Mar. Geol. 275, 273-277.
Clean subglacial access: prospects for future deep hot-water drilling
Pearce, David; Hodgson, Dominic A.; Smith, Andrew M.; Rose, Mike; Ross, Neil; Mowlem, Matt; Parnell, John
2016-01-01
Accessing and sampling subglacial environments deep beneath the Antarctic Ice Sheet presents several challenges to existing drilling technologies. With over half of the ice sheet believed to be resting on a wet bed, drilling down to this environment must conform to international agreements on environmental stewardship and protection, making clean hot-water drilling the most viable option. Such a drill, and its water recovery system, must be capable of accessing significantly greater ice depths than previous hot-water drills, and remain fully operational after connecting with the basal hydrological system. The Subglacial Lake Ellsworth (SLE) project developed a comprehensive plan for deep (greater than 3000 m) subglacial lake research, involving the design and development of a clean deep-ice hot-water drill. However, during fieldwork in December 2012 drilling was halted after a succession of equipment issues culminated in a failure to link with a subsurface cavity and abandonment of the access holes. The lessons learned from this experience are presented here. Combining knowledge gained from these lessons with experience from other hot-water drilling programmes, and recent field testing, we describe the most viable technical options and operational procedures for future clean entry into SLE and other deep subglacial access targets. PMID:26667913
iLab 20M: A Large-scale Controlled Object Dataset to Investigate Deep Learning
2016-07-01
…and train) and annotate them with rotation labels. AlexNet is fine-tuned on the training set. We set the learning rate for all the layers to 0.001…
Dynamic Sampling of Trace Contaminants During the Mission Operations Test of the Deep Space Habitat
NASA Technical Reports Server (NTRS)
Monje, Oscar; Valling, Simo; Cornish, Jim
2013-01-01
The atmospheric composition inside spacecraft during long duration space missions is dynamic due to changes in the living and working environment of crew members, crew metabolism and payload operations. A portable FTIR gas analyzer was used to monitor the atmospheric composition within the Deep Space Habitat (DSH) during the Mission Operations Test (MOT) conducted at the Johnson Space Center (JSC). The FTIR monitored up to 20 gases in near-real time. The procedures developed for operating the FTIR were successful, and data were collected with the FTIR at 5-minute intervals. Not all of the 20 gases sampled were detected in all the modules, but it was possible to measure dynamic changes in trace contaminant concentrations that were related to crew activities involving exercise and meal preparation.
NASA Astrophysics Data System (ADS)
Koike, Hiroki; Ohsawa, Takashi; Miura, Sadahiko; Honjo, Hiroaki; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo
2015-04-01
A spintronic-based power-gated micro-processing unit (MPU) is proposed. It includes a power control circuit activated by a newly supported power-off instruction for the deep-sleep mode. These features enable the MPU's power-off procedure to be executed appropriately. A test chip was designed and fabricated using a 90 nm CMOS and an additional 100 nm MTJ process; it was successfully operated. A guideline for the energy reduction achievable with this MPU is presented, based on estimates derived from measurements of the test chip. The results show that operation energy can be reduced to 1/28 when the operation duty is 10%, under the condition of a sufficient number of idle clock cycles.
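The kind of energy-reduction estimate described can be sketched with a simple power-gating model: active energy, residual leakage during idle, and a per-transition overhead. All figures below are invented placeholders, not the paper's measured values:

```python
def gated_energy(duty, p_active, p_sleep, e_transition, n_transitions, t_total):
    """Energy (J) with power gating: active work + deep-sleep retention
    leakage + wake/sleep transition overhead."""
    t_active = duty * t_total
    t_sleep = (1.0 - duty) * t_total
    return p_active * t_active + p_sleep * t_sleep + e_transition * n_transitions

def ungated_energy(duty, p_active, p_idle, t_total):
    """Energy (J) without gating: idle periods still leak at p_idle."""
    return p_active * duty * t_total + p_idle * (1.0 - duty) * t_total

# Hypothetical figures: 10 mW active, 5 mW idle leakage when not gated,
# near-zero retention power with MTJ-backed deep sleep, 1 uJ per transition.
t = 1.0  # seconds of operation, 10% duty
e_on = ungated_energy(0.10, 10e-3, 5e-3, t)
e_off = gated_energy(0.10, 10e-3, 1e-6, 1e-6, 100, t)
reduction = e_on / e_off  # >1 means gating saves energy at this duty
```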
How to improve healthcare? Identify, nurture and embed individuals and teams with "deep smarts".
Eljiz, Kathy; Greenfield, David; Molineux, John; Sloan, Terry
2018-03-19
Purpose Unlocking and transferring skills and capabilities from individuals to the teams they work within, and across, is the key to positive organisational development and improved patient care. Using the "deep smarts" model, the purpose of this paper is to examine these issues. Design/methodology/approach The "deep smarts" model is described, reviewed and proposed as a way of transferring knowledge and capabilities within healthcare organisations. Findings Effective healthcare delivery is achieved through, and continues to require, integrative care involving numerous, dispersed service providers. In the space of overlapping organisational boundaries, there is a need for "deep smarts" people who act as "boundary spanners". These are critical integrative, networking roles employing clinical, organisational and people skills across multiple settings. Research limitations/implications Studies evaluating the barriers and enablers to applying the deep smarts model, and the 13 knowledge development strategies proposed, are required. Such future research will empirically ground our understanding of organisational development in modern, complex healthcare settings. Practical implications An organisation with "deep smarts" people - in managerial, auxiliary and clinical positions - has a greater capacity for integration and achieving improved patient-centred care. Originality/value In total, 13 developmental strategies, to transfer individual capabilities into organisational capability, are proposed. These strategies are applicable to the different contexts and challenges faced by individuals and teams in complex healthcare organisations.
Aliper, Alexander; Plis, Sergey; Artemov, Artem; Ulloa, Alvaro; Mamoshina, Polina; Zhavoronkov, Alex
2016-07-05
Deep learning is rapidly advancing many areas of science and technology with multiple success stories in image, text, voice and video recognition, robotics, and autonomous driving. In this paper we demonstrate how deep neural networks (DNN) trained on large transcriptional response data sets can classify various drugs to therapeutic categories solely based on their transcriptional profiles. We used the perturbation samples of 678 drugs across A549, MCF-7, and PC-3 cell lines from the LINCS Project and linked those to 12 therapeutic use categories derived from MeSH. To train the DNN, we utilized both gene-level transcriptomic data and transcriptomic data processed using a pathway activation scoring algorithm, for a pooled data set of samples perturbed with different concentrations of the drug for 6 and 24 hours. In both pathway- and gene-level classification, DNN achieved high classification accuracy and convincingly outperformed the support vector machine (SVM) model on every multiclass classification problem; however, models based on pathway-level data performed significantly better. For the first time we demonstrate a deep learning neural net trained on transcriptomic data to recognize pharmacological properties of multiple drugs across different biological systems and conditions. We also propose using deep neural net confusion matrices for drug repositioning. This work is a proof of principle for applying deep learning to drug discovery and development.
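The confusion-matrix idea behind the repositioning proposal can be illustrated with a minimal sketch; the three classes and labels below are made-up (the study used 12 MeSH-derived therapeutic categories):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] = count of samples with true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical 3-category example (e.g. therapeutic-use classes 0, 1, 2)
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]
cm = confusion_matrix(y_true, y_pred, 3)
# Off-diagonal entries flag drugs consistently assigned to another category --
# the pattern the authors propose mining for repositioning candidates.
```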
Balloon Exoplanet Nulling Interferometer (BENI)
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Clampin, Mark; Woodruff, Robert A.; Vasudevan, Gopal; Ford, Holland; Petro, Larry; Herman, Jay; Rinehart, Stephen; Carpenter, Kenneth; Marzouk, Joe
2009-01-01
We evaluate the feasibility of using a balloon-borne nulling interferometer to detect and characterize exosolar planets and debris disks. The existing instrument consists of a 3-telescope Fizeau imaging interferometer with 3 fast steering mirrors and 3 delay lines operating at 800 Hz for closed-loop control of wavefront errors and fine pointing. A compact visible nulling interferometer is under development which, when coupled to the imaging interferometer, would in principle allow deep suppression of starlight. We have conducted atmospheric simulations of the environment above 100,000 feet and believe balloons are a feasible path forward towards the detection and characterization of a limited set of exoplanets and their debris disks. Herein we discuss the BENI instrument, the balloon environment and the feasibility of such a mission.
Integrated Cryogenic Propulsion Test Article Thermal Vacuum Hotfire Testing
NASA Technical Reports Server (NTRS)
Morehead, Robert L.; Melcher, J. C.; Atwell, Matthew J.; Hurlbert, Eric A.
2017-01-01
In support of a facility characterization test, the Integrated Cryogenic Propulsion Test Article (ICPTA) was hotfire tested at a variety of simulated altitude and thermal conditions in the NASA Glenn Research Center Plum Brook Station In-Space Propulsion Thermal Vacuum Chamber (formerly B2). The ICPTA utilizes liquid oxygen and liquid methane propellants for its main engine and four reaction control engines, and uses a cold helium system for tank pressurization. The hotfire test series included high altitude, high vacuum, ambient temperature, and deep cryogenic environments, and several hundred sensors on the vehicle collected a range of system level data useful to characterize the operation of an integrated LOX/Methane spacecraft in the space environment - a unique data set for this propellant combination.
Deep turbulence effects mitigation with coherent combining of 21 laser beams over 7 km.
Weyrauch, Thomas; Vorontsov, Mikhail; Mangano, Joseph; Ovchinnikov, Vladimir; Bricker, David; Polnau, Ernst; Rostov, Andrey
2016-02-15
We demonstrate coherent beam combining and adaptive mitigation of atmospheric turbulence effects over 7 km under strong scintillation conditions using a coherent fiber-array laser transmitter operating in a target-in-the-loop setting. The transmitter system is composed of a densely packed array of 21 fiber collimators with integrated capabilities for piston, tip, and tilt control of the outgoing beams' wavefront phases. A small cat's-eye retroreflector was used for evaluation of beam combining and turbulence compensation performance at the target plane, and to provide the feedback signal for control of the piston and tip/tilt phases of the transmitted beams using stochastic parallel gradient descent maximization of the power-in-the-bucket metric.
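Stochastic parallel gradient descent (SPGD) on a power-in-the-bucket metric can be sketched for an idealized 21-beam phase-only combining problem. This toy model ignores turbulence, tip/tilt, and propagation, and the dither amplitude, gain, and iteration count are arbitrary choices, not the system's parameters:

```python
import numpy as np

def power_in_bucket(phases):
    """Normalized combined on-axis intensity of N phase-controlled beams;
    equals 1.0 when all beams are in phase."""
    return np.abs(np.exp(1j * phases).sum()) ** 2 / len(phases) ** 2

def spgd(n_beams=21, iters=10000, sigma=0.05, gain=50.0, seed=0):
    """Two-sided SPGD: dither all phases in parallel, measure the metric
    change, and step each phase in proportion to (metric change) x (dither)."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0, 2 * np.pi, n_beams)   # random initial pistons
    for _ in range(iters):
        delta = sigma * rng.choice([-1.0, 1.0], n_beams)
        dj = power_in_bucket(phases + delta) - power_in_bucket(phases - delta)
        phases += gain * dj * delta               # gradient-ascent update
    return power_in_bucket(phases)

j_final = spgd()  # approaches 1.0 as the beams phase up
```

Only the scalar metric is measured, never the individual phases, which is why SPGD suits target-in-the-loop control where a single feedback signal is returned from the target.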
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, college team members work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
The status of MUSIC: the multiwavelength sub-millimeter inductance camera
NASA Astrophysics Data System (ADS)
Sayers, Jack; Bockstiegel, Clint; Brugger, Spencer; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Gill, Amandeep K.; Glenn, Jason; Golwala, Sunil R.; Hollister, Matthew I.; Lam, Albert; LeDuc, Henry G.; Maloney, Philip R.; Mazin, Benjamin A.; McHugh, Sean G.; Miller, David A.; Mroczkowski, Anthony K.; Noroozian, Omid; Nguyen, Hien Trong; Schlaerth, James A.; Siegel, Seth R.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas
2014-08-01
The Multiwavelength Sub/millimeter Inductance Camera (MUSIC) is a four-band photometric imaging camera operating from the Caltech Submillimeter Observatory (CSO). MUSIC is designed to utilize 2304 microwave kinetic inductance detectors (MKIDs), with 576 MKIDs for each observing band centered on 150, 230, 290, and 350 GHz. MUSIC's field of view (FOV) is 14' square, and the point-spread functions (PSFs) in the four observing bands have 45'', 31'', 25'', and 22'' full-widths at half maximum (FWHM). The camera was installed in April 2012 with 25% of its nominal detector count in each band, and has subsequently completed three short sets of engineering observations and one longer duration set of early science observations. Recent results from on-sky characterization of the instrument during these observing runs are presented, including achieved map-based sensitivities from deep integrations, along with results from lab-based measurements made during the same period. In addition, recent upgrades to MUSIC, which are expected to significantly improve the sensitivity of the camera, are described.
Parenchymal-sparing hepatectomy for deep-placed colorectal liver metastases.
Matsuki, Ryota; Mise, Yoshihiro; Saiura, Akio; Inoue, Yosuke; Ishizawa, Takeaki; Takahashi, Yu
2016-11-01
The feasibility of parenchymal-sparing hepatectomy has yet to be assessed based on the tumor location, which affects the choice of treatment in patients with colorectal liver metastases. Sixty-three patients underwent first curative hepatectomy for deep-placed colorectal liver metastases whose center was located >30 mm from the liver surface. Operative outcomes were compared among patients who underwent parenchymal-sparing hepatectomy or major hepatectomy (≥3 segments). Parenchymal-sparing hepatectomy and major hepatectomy were performed for deep-placed colorectal liver metastases in 40 (63%) and 23 (37%) patients, respectively. Resection time was longer in the parenchymal-sparing hepatectomy than in the major hepatectomy group (57 vs 39 minutes) (P = .02) and cut-surface area was wider (120 vs 86 cm²) (P < .01). Resected volume was smaller in the parenchymal-sparing hepatectomy than in the major hepatectomy group (251 vs 560 g) (P < .01). No differences were found between the 2 groups for total operation time (306 vs 328 minutes), amount of blood loss (516 vs 400 mL), rate of major complications (10% vs 13%), and positive operative margins (5% vs 4%). Overall, recurrence-free, and liver recurrence-free survivals did not differ between the 2 groups. Direct major hepatectomy without portal venous embolization could not have been performed in 40% of the parenchymal-sparing hepatectomy group (16/40) because of the small liver remnant volume. Parenchymal-sparing hepatectomy for deep-placed colorectal liver metastases was performed safely without compromising oncologic radicality. Parenchymal-sparing hepatectomy can increase the number of patients eligible for an operation by halving the resection volume and by increasing the chance of direct operative treatment in patients with ill-located colorectal liver metastases. Copyright © 2016 Elsevier Inc. All rights reserved.
Veelo, Denise P; Gisbertz, Suzanne S; Hannivoort, Rebekka A; van Dieren, Susan; Geerts, Bart F; van Berge Henegouwen, Mark I; Hollmann, Markus W
2015-08-05
Deep muscle relaxation has been shown to facilitate operating conditions during laparoscopic surgery. Minimally invasive esophageal surgery is a high-risk procedure in which the use of deep neuromuscular block (NMB) may improve conditions in the thoracic phase as well. Neuromuscular antagonists can be given on demand or by continuous infusion (deep NMB). However, the positioning of the patient often hampers train-of-four (TOF) monitoring. A continuous infusion thus may result in a deep NMB at the end of surgery. The use of neostigmine not only is insufficient for reversing deep NMB but also may be contraindicated for this procedure because of its cholinergic effects. Sugammadex is an effective alternative but is rather expensive. This study aims to evaluate the use of deep versus on-demand NMB on operating, anaesthesiologic conditions, and costs in patients undergoing a two- or three-phase thoracolaparoscopic esophageal resection. We will conduct a single-center randomized controlled double-blinded intervention study. Sixty-six patients undergoing a thoracolaparoscopic esophageal resection will be included. Patients will receive either continuous infusion of rocuronium 0.6 mg/kg per hour (group 1) or continuous infusion of NaCl 0.9 % 0.06 ml/kg per hour (group 2). In both groups, on-demand boluses of rocuronium can be given (open-label design). The primary aim of this study is to compare the surgical rating scale (SRS) during the abdominal phase. Main secondary aims are to evaluate SRS during the thoracic phase, to evaluate anesthesiologic conditions, and to compare costs (in euros) associated with use of rocuronium, sugammadex, and duration of surgery. This study is the first to evaluate the benefits of deep neuromuscular relaxation on surgical and anaesthesiologic conditions during thoracolaparoscopic esophageal surgery. 
This surgical procedure is unique because it consists of both an abdominal phase and a thoracic phase taking place in different order depending on the subtype of surgery (a two- or three-stage transthoracic esophagectomy). In addition, possible benefits associated with deep NMB, such as decrease in operating time, will be weighed against costs. European Clinical Trials Database (EudraCT) number: 2014-002147-18 (obtained 19 May 2014) ClinicalTrials.gov: NCT02320734 (obtained 18 Dec. 2014).
NASA Astrophysics Data System (ADS)
Dreier, Norman; Fröhle, Peter
2017-12-01
The knowledge of the wave-induced hydrodynamic loads on coastal dikes, including their temporal and spatial resolution on the dike in combination with actual water levels, is of crucial importance for any risk-based early warning system. As a basis for the assessment of the wave-induced hydrodynamic loads, an operational wave now- and forecast system is set up that consists of i) available field measurements from the federal and local authorities and ii) data from numerical simulation of waves in the German Bight using the SWAN wave model. In this study, results of the hindcast of deep water wave conditions during the winter storm of 5-6 December 2013 (German name `Xaver') are shown and compared with available measurements. Moreover, field measurements of wave run-up from the local authorities at a sea dike on the German North Sea island of Pellworm are presented and compared against wave run-up calculated using the EurOtop (2016) approach.
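The EurOtop (2016) mean-value run-up estimate for sloped dikes can be sketched as follows. This is a simplified form (deep-water wave length, influence factors γ passed through unchanged, no deterministic safety margin), and the storm condition used below is hypothetical, not the Pellworm data:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wave_runup_2pct(hm0, tm10, tan_alpha,
                    gamma_b=1.0, gamma_f=1.0, gamma_beta=1.0):
    """2%-exceedance wave run-up height (m) on a sloped dike, simplified
    mean-value form of the EurOtop (2016) approach: Ru2%/Hm0 = 1.65*g_b*g_f*
    g_beta*xi, capped at g_f*g_beta*(4 - 1.5/sqrt(g_b*xi))."""
    l_m10 = G * tm10 ** 2 / (2 * math.pi)       # deep-water wave length (m)
    xi = tan_alpha / math.sqrt(hm0 / l_m10)     # breaker parameter xi_m-1,0
    ru = 1.65 * gamma_b * gamma_f * gamma_beta * xi * hm0
    ru_max = gamma_f * gamma_beta * (4.0 - 1.5 / math.sqrt(gamma_b * xi)) * hm0
    return min(ru, ru_max)

# Hypothetical storm condition: Hm0 = 2 m, Tm-1,0 = 6 s, 1:4 smooth slope
ru = wave_runup_2pct(2.0, 6.0, 0.25)
```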
NASA Technical Reports Server (NTRS)
Vaughan, William W.; Anderson, B. Jeffrey
2005-01-01
In modern government and aerospace industry institutions the necessity of controlling current year costs often leads to high mobility in the technical workforce, "one-deep" technical capabilities, and minimal mentoring for young engineers. Thus, formal recording, use, and teaching of lessons learned are especially important in the maintenance and improvement of current knowledge and development of new technologies, regardless of the discipline area. Within the NASA Technical Standards Program Website http://standards.nasa.gov there is a menu item entitled "Lessons Learned/Best Practices". It contains links to a large number of engineering and technical disciplines related data sets that contain a wealth of lessons learned information based on past experiences. This paper has provided a small sample of lessons learned relative to the atmospheric and space environment. There are many more whose subsequent applications have improved our knowledge of the atmosphere and space environment, and the application of this knowledge to the engineering and operations for a variety of aerospace programs.
Experiment Comparison between Engineering Acid Dew Point and Thermodynamic Acid Dew Point
NASA Astrophysics Data System (ADS)
Song, Jinghui; Yuan, Hui; Deng, Jianhua
2018-06-01
In order to realize accurate prediction of the acid dew point, a measurement system for the acid dew point of flue gas in the tail of the boiler was designed and built, and measurements were taken at the outlet of an air preheater of a 1,000 MW power plant. The results show that, under the same conditions, as the test temperature decreased, fouling of the heat transfer tubes (reflected in their Nu) and corrosion of the pipe wall and corrosion pieces gradually deepened. The measured acid dew point was then compared with the acid dew point obtained from existing empirical formulas for the same coal type. The engineering acid dew point is usually about 40 °C lower than the thermodynamic acid dew point because of the coupling effect of fouling on the acid liquid; it better reflects the actual operation of flue gas in engineering and offers theoretical guidance for the design and operation of deep waste-heat utilization systems.
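For context, a thermodynamic acid dew point is often estimated from an empirical correlation. The sketch below uses the Verhoff-Banchero correlation as an assumption; the paper does not state which formula it compared against, and the operating condition is a typical made-up example:

```python
import math

def acid_dew_point_c(p_h2o_mmhg, p_so3_mmhg):
    """Sulfuric acid dew point (deg C) from the Verhoff-Banchero correlation:
    1000/Tdp[K] = 2.276 - 0.02943*ln(pH2O) - 0.0858*ln(pSO3)
                  + 0.0062*ln(pH2O)*ln(pSO3), partial pressures in mmHg."""
    ln_w = math.log(p_h2o_mmhg)
    ln_s = math.log(p_so3_mmhg)
    inv_t = 2.276 - 0.02943 * ln_w - 0.0858 * ln_s + 0.0062 * ln_w * ln_s
    return 1000.0 / inv_t - 273.15

# Typical coal flue gas assumption: ~10 vol% H2O and ~10 ppm SO3 at 760 mmHg
t_dew = acid_dew_point_c(0.10 * 760.0, 10e-6 * 760.0)
```

The engineering dew point discussed in the abstract would then sit roughly 40 °C below such a thermodynamic estimate.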
Poon, Shi Sum; Estrera, Anthony; Oo, Aung; Field, Mark
2016-09-01
A best evidence topic in cardiac surgery was written according to a structured protocol. The question addressed was whether moderate hypothermic circulatory arrest with selective antegrade cerebral perfusion (SACP) is more beneficial than deep hypothermic circulatory arrest in elective aortic arch surgery. Altogether, 1028 papers were found using the reported search, of which 6 represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. There were four retrospective observational studies, one prospective randomized controlled trial and one meta-analysis study. There were no local or neuromuscular complications related to axillary arterial cannulation reported. In the elective setting, four studies showed that the in-hospital mortality for moderate hypothermia is consistently low, ranging from 1.0 to 4.3%. In a large series of hemiarch replacements comparing 682 cases of deep hypothermia with 94 cases of moderate hypothermia with SACP, 20 cases (2.8%) of permanent neurological deficit were reported with deep hypothermia, compared with 3 cases (3.2%) with moderate hypothermia. Three observational studies and a meta-analysis study did not identify an increased risk of postoperative renal failure and dialysis following either deep or moderate hypothermia, although a higher incidence of stroke was reported in the meta-analysis study with deep hypothermia (12.7 vs 7.3%). Longer cardiopulmonary bypass time and circulatory arrest time were reported in four studies for deep hypothermia, suggesting an increased time required for systemic cooling and rewarming in that group. Overall, these findings suggested that in elective aortic arch surgery, moderate hypothermia with selective antegrade cerebral perfusion adapted to the duration of circulatory arrest can be performed safely with acceptable mortality and morbidity outcomes.
The risk of spinal cord and visceral organ complications is low with the use of this cerebral adjunct. Current studies did not identify an advantage in terms of postoperative bleeding when compared with deep hypothermia. The moderate hypothermia strategy reduced operative time without increasing the mortality and morbidity of surgery. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Maier, Katherine L.; Brothers, Daniel; Paull, Charles K.; McGann, Mary; Caress, David W.; Conrad, James E.
2016-01-01
Variations in seabed gradient are widely acknowledged to influence deep-water deposition, but are often difficult to measure in sufficient detail from both modern and ancient examples. On the continental slope offshore Los Angeles, California, autonomous underwater vehicle, remotely operated vehicle, and shipboard methods were used to collect a dense grid of high-resolution multibeam bathymetry, chirp sub-bottom profiles, and targeted sediment core samples that demonstrate the influence of seafloor gradient on sediment accumulation, depositional environment, grain size of deposits, and seafloor morphology. In this setting, restraining and releasing bends along the active right-lateral Palos Verdes Fault create and maintain variations in seafloor gradient. Holocene down-slope flows appear to have been generated by slope failure, primarily on the uppermost slope (~ 100–200 m water depth). Turbidity currents created a low relief (< 10 m) channel, up-slope migrating sediment waves (λ = ~ 100 m, h ≤ 2 m), and a series of depocenters that have accumulated up to 4 m of Holocene sediment. Sediment waves increase in wavelength and decrease in wave height with decreasing gradient. Integrated analysis of high-resolution datasets provides quantification of morphodynamic sensitivity to seafloor gradients acting throughout deep-water depositional systems. These results help to bridge gaps in scale between existing deep-sea and experimental datasets and may provide constraints for future numerical modeling studies.
Deep 3D convolution neural network for CT brain hemorrhage classification
NASA Astrophysics Data System (ADS)
Jnawali, Kamal; Arbabshirani, Mohammad R.; Rao, Navalgund; Patel, Alpen A.
2018-02-01
Intracranial hemorrhage is a critical condition with a high mortality rate that is typically diagnosed based on head computed tomography (CT) images. Deep learning algorithms, in particular convolutional neural networks (CNNs), are becoming the methodology of choice in medical image analysis for a variety of applications such as computer-aided diagnosis and segmentation. In this study, we propose a fully automated deep learning framework which learns to detect brain hemorrhage based on cross-sectional CT images. The dataset for this work consists of 40,367 3D head CT studies (over 1.5 million 2D images) acquired retrospectively over a decade from multiple radiology facilities at Geisinger Health System. The proposed algorithm first extracts features using a 3D CNN and then detects brain hemorrhage using the logistic function as the last layer of the network. Finally, we created an ensemble of three different 3D CNN architectures to improve the classification accuracy. The area under the curve (AUC) of the receiver operator characteristic (ROC) curve of the ensemble of three architectures was 0.87. These results are very promising considering the fact that the head CT studies were not controlled for slice thickness, scanner type, study protocol or any other settings. Moreover, the proposed algorithm reliably detected various types of hemorrhage within the skull. This work is one of the first applications of 3D CNNs trained on a large dataset of cross-sectional medical images for detection of a critical radiological condition.
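The final steps described above, averaging the probabilities of several classifiers and scoring with ROC AUC, can be sketched in a few lines of numpy. The model outputs and labels below are made-up illustrative values, not data from the paper.

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity.
    Assumes untied scores; ties would need average ranks."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Probabilities from three hypothetical 3D-CNN architectures on 4 studies.
p1 = np.array([0.90, 0.20, 0.70, 0.10])
p2 = np.array([0.80, 0.30, 0.60, 0.20])
p3 = np.array([0.95, 0.10, 0.80, 0.05])
ensemble = (p1 + p2 + p3) / 3.0   # simple probability averaging
y = np.array([1, 0, 1, 0])        # ground-truth hemorrhage labels
print(roc_auc(y, ensemble))       # → 1.0 (every positive outranks every negative)
```

Probability averaging is only one way to combine an ensemble; the paper does not specify the exact fusion rule, so treat this as a plausible sketch.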
Bathymetric and oceanic controls on Abbot Ice Shelf thickness and stability
NASA Astrophysics Data System (ADS)
Cochran, J. R.; Jacobs, S. S.; Tinto, K. J.; Bell, R. E.
2014-05-01
Ice shelves play key roles in stabilizing Antarctica's ice sheets, maintaining its high albedo and returning freshwater to the Southern Ocean. Improved data sets of ice shelf draft and underlying bathymetry are important for assessing ocean-ice interactions and modeling ice response to climate change. The long, narrow Abbot Ice Shelf south of Thurston Island produces a large volume of meltwater, but is close to being in overall mass balance. Here we invert NASA Operation IceBridge (OIB) airborne gravity data over the Abbot region to obtain sub-ice bathymetry, and combine OIB elevation and ice thickness measurements to estimate ice draft. A series of asymmetric fault-bounded basins formed during rifting of Zealandia from Antarctica underlie the Abbot Ice Shelf west of 94° W and the Cosgrove Ice Shelf to the south. Sub-ice water column depths along OIB flight lines are sufficiently deep to allow warm deep and thermocline waters observed near the western Abbot ice front to circulate through much of the ice shelf cavity. An average ice shelf draft of ~200 m, 15% less than the Bedmap2 compilation, coincides with the summer transition between the ocean surface mixed layer and upper thermocline. Thick ice streams feeding the Abbot cross relatively stable grounding lines and are rapidly thinned by the warmest inflow. While the ice shelf is presently in equilibrium, the overall correspondence between draft distribution and thermocline depth indicates sensitivity to changes in characteristics of the ocean surface and deep waters.
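The draft estimate mentioned above combines surface elevation and ice thickness; for freely floating ice the relation is simply draft = thickness minus freeboard (the surface height above sea level). A minimal sketch, with illustrative numbers rather than OIB measurements:

```python
def ice_draft(thickness_m, surface_elevation_m):
    """Draft in meters below sea level for floating ice:
    thickness minus freeboard (surface elevation above sea level)."""
    return thickness_m - surface_elevation_m

# Illustrative values on the scale of the reported ~200 m average draft.
print(ice_draft(230.0, 30.0))  # → 200.0
```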
Deep Recurrent Neural Networks for Supernovae Classification
NASA Astrophysics Data System (ADS)
Charnock, Tom; Moss, Adam
2017-03-01
We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10⁴ supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
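The SPCC figure-of-merit quoted above combines classification efficiency with a purity term that penalizes false positives. The sketch below uses the commonly cited form with a false-positive penalty W = 3; the constant and the counts are our assumptions for illustration, not values from the paper.

```python
def spcc_figure_of_merit(n_true_ia, n_false_ia, n_total_ia, w_false=3.0):
    """Efficiency times pseudo-purity, with false positives weighted by w_false."""
    efficiency = n_true_ia / n_total_ia
    pseudo_purity = n_true_ia / (n_true_ia + w_false * n_false_ia)
    return efficiency * pseudo_purity

# Hypothetical counts: 900 of 1000 type-Ia correctly tagged, 150 false positives.
print(spcc_figure_of_merit(900, 150, 1000))  # → 0.6
```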
Nondestructive Evaluation Methods for Characterization of Corrosion: State of the Art Review
1988-12-01
form molecules of hydrogen gas and leave the surface. Under some circum-...damage is characterized by surface discoloration and deep gouges or pits...large electromagnet and low operating frequencies resulted in deep penetration of...granular corrosion without stress-related cracking can produce a...focus, and then the spray and the focus were moved together down the...al. (11) showed that thermography was able to detect 3-mm deep, 50-mm diameter
High power laser workover and completion tools and systems
Zediker, Mark S; Rinzler, Charles C; Faircloth, Brian O; Koblick, Yeshaya; Moxley, Joel F
2014-10-28
Workover and completion systems, devices and methods for utilizing 10 kW or more laser energy transmitted deep into the earth with the suppression of associated nonlinear phenomena. Systems and devices for the laser workover and completion of a borehole in the earth. These systems and devices can deliver high power laser energy down a deep borehole, while maintaining the high power to perform laser workover and completion operations in such boreholes deep within the earth.
The deep space network, volume 6
NASA Technical Reports Server (NTRS)
1971-01-01
Progress on Deep Space Network (DSN) supporting research and technology is presented, together with advanced development and engineering, implementation, and DSN operations of flight projects. The DSN is described. Interplanetary and planetary flight projects and radio science experiments are discussed. Tracking and navigational accuracy analysis, communications systems and elements research, and supporting research are considered. Development of the ground communications and deep space instrumentation facilities is also presented. Network allocation schedules and angle tracking and test development are included.
1994-09-01
50 years ago as an imperative for a simple fighter-bomber escort team has since produced a highly sophisticated web of relationships between multip...encyclopedia of mission-specific objectives that are neither defined nor conceived at the operational level. The ATO cannot possibly cut this deep, nor...but they were nervous). [Meanwhile] F-15s are at their orbit point 150 miles deep in Iraq, waiting! We should do better than that. Responsiveness: What
2008-10-01
Agents in the DEEP architecture extend and use the Java Agent Development (JADE) framework. DEEP requires a distributed multi-agent system and a...framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java and supports these requirements.
NASA Astrophysics Data System (ADS)
Kring, D. A.
2018-02-01
The Deep Space Gateway can support astronauts on the lunar surface, providing them a departure and returning rendezvous point, a communication relay from the lunar farside to Earth, and a transfer point to Orion for return to Earth.
The telecommunications and data acquisition report
NASA Technical Reports Server (NTRS)
1980-01-01
Progress in the development and operations of the Deep Space Network is reported. Also reported are developments in Earth-based radio technology as applied to geodynamics and astrophysics, and radio astronomy's use of the deep space stations for a radio search for extraterrestrial intelligence in the microwave region of the electromagnetic spectrum.
NASA Astrophysics Data System (ADS)
Head, J. W.; Pieters, C. M.; Scott, D. R.
2018-02-01
We outline an Orientale Basin Human/Robotic Architecture that can be facilitated by a Deep Space Gateway International Science Operations Center (DSG-ISOC) (like McMurdo/Antarctica) to address fundamental scientific problems about the Moon and Mars.
New scientific ocean drilling depth record extends study of subseafloor life
NASA Astrophysics Data System (ADS)
Showstack, Randy
2012-09-01
The Japanese deep-sea drilling vessel Chikyu set a new depth record for scientific ocean drilling and core retrieval by reaching a depth of 2119.5 meters below the seafloor (mbsf) on 6 September. This is 8.5 meters deeper than the prior record, set 19 years ago. Three days later, on 9 September, Chikyu set another record by reaching a drilling depth of 2466 mbsf, the maximum depth that will be attempted during the current expedition. The 6 September record was set on day 44 of the Deep Coalbed Biosphere off Shimokita expedition, which is expedition 337 of the Integrated Ocean Drilling Program (IODP). It occurred at drilling site C0020 in the northwestern Pacific Ocean, approximately 80 kilometers northeast from Hachinohe, Japan. The expedition is scheduled to conclude on 30 September.
Study of CT image texture using deep learning techniques
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Fan, Jiahua; Chevalier, David
2018-03-01
For CT imaging, reduction of radiation dose while improving or maintaining image quality (IQ) is currently a very active research and development topic. Iterative Reconstruction (IR) approaches have been suggested to offer a better IQ-to-dose ratio compared to the conventional Filtered Back Projection (FBP) reconstruction. However, it has been widely reported that CT image texture from IR often differs from that of FBP. Researchers have proposed different figures of merit to quantitate the texture from different reconstruction methods, but the field still lacks a practical and robust method for texture description. This work applied deep learning methods to CT image texture study. Multiple dose scans of a 20 cm diameter cylindrical water phantom were performed on a Revolution CT scanner (GE Healthcare, Waukesha) and the images were reconstructed with FBP and four different IR reconstruction settings. The training images generated were randomly allotted (80:20) to a training and validation set. An independent test set of 256-512 images/class was collected with the same scan and reconstruction settings. Multiple deep learning (DL) networks with convolution, ReLU activation, max-pooling, fully-connected, global average pooling and softmax activation layers were investigated. The impact of different image patch sizes for training was investigated. Original pixel data as well as normalized image data were evaluated. DL models were reliably able to classify CT image texture with accuracy up to 99%. These results suggest that CT IR techniques may help lower the radiation dose compared to FBP.
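The patch-based training setup described above can be sketched as tiling a reconstructed slice into square patches and optionally normalizing each one. The patch size and the zero-mean/unit-variance normalization are illustrative assumptions, not details from the paper.

```python
import numpy as np

def extract_patches(image, patch=32, normalize=True):
    """Tile a 2D image into non-overlapping patch x patch tiles."""
    h, w = image.shape
    patches = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            p = image[r:r + patch, c:c + patch].astype(float)
            if normalize:
                # zero-mean, unit-variance per patch
                p = (p - p.mean()) / (p.std() + 1e-8)
            patches.append(p)
    return np.stack(patches)

# A synthetic 128x128 "slice" standing in for reconstructed phantom data.
slice_img = np.random.default_rng(0).normal(0.0, 10.0, size=(128, 128))
patches = extract_patches(slice_img)
print(patches.shape)  # → (16, 32, 32): a 128x128 slice yields a 4x4 grid of patches
```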
Excess cost and inpatient stay of treating deep spinal surgical site infections.
Barnacle, James; Wilson, Dianne; Little, Christopher; Hoffman, Christopher; Raymond, Nigel
2018-05-18
To determine the excess cost and hospitalisation associated with surgical site infections (SSI) following spinal operations in a New Zealand setting. We identified inpatients treated for deep SSI following primary or revision spinal surgery at a regional tertiary spinal centre between 2009 and 2016. Excess cost and excess length of stay (LOS) were calculated via a clinical costing system using procedure-matched controls. Twenty-eight patients were identified. Twenty-five had metalware following spinal fusion surgery, while three had non-instrumented decompression and/or discectomy. Five were diagnosed during their index hospitalisation and 23 (82%) were re-admitted. The average excess SSI cost was NZ$51,434 (range $1,398-$262,206.16) and LOS 37.1 days (range 7-275 days). Infections following metalware procedures had a greater excess cost (average $56,258.90 vs. $11,228.61) and LOS (average 40.4 days vs. 9.7 days) than procedures without metalware. The costs associated with spinal SSI are significant and comparable to a previous New Zealand study of hip and knee prosthesis SSI. More awareness of the high costs involved should encourage research and implementation of infection prevention strategies.
Orion Underway Recovery Test 5 (URT-5)
2016-10-26
A test version of the Orion crew module is secured in the well deck of the USS San Diego for Underway Recovery Test 5 in the Pacific Ocean off the coast of California. In view is the winch system that will be used to help retrieve the crew module during a series of tests in open waters. NASA's Ground Systems Development and Operations Program and the U.S. Navy will practice retrieving and securing the crew module in the well deck of the ship using a set of tethers and the winch system to prepare for recovery of Orion on its return from deep space missions. The testing will allow the team to demonstrate and evaluate recovery processes, procedures, hardware and personnel in open waters. Orion is the exploration spacecraft designed to carry astronauts to destinations not yet explored by humans, including an asteroid and NASA's Journey to Mars. It will have emergency abort capability, sustain the crew during space travel and provide safe re-entry from deep space return velocities. Orion is scheduled to launch on NASA's Space Launch System in late 2018. For more information, visit http://www.nasa.gov/orion.
Object-Location-Aware Hashing for Multi-Label Image Retrieval via Automatic Mask Learning.
Huang, Chang-Qin; Yang, Shang-Ming; Pan, Yan; Lai, Han-Jiang
2018-09-01
Learning-based hashing is a leading approach of approximate nearest neighbor search for large-scale image retrieval. In this paper, we develop a deep supervised hashing method for multi-label image retrieval, in which we propose to learn a binary "mask" map that can identify the approximate locations of objects in an image, so that we use this binary "mask" map to obtain length-limited hash codes which mainly focus on an image's objects but ignore the background. The proposed deep architecture consists of four parts: 1) a convolutional sub-network to generate effective image features; 2) a binary "mask" sub-network to identify image objects' approximate locations; 3) a weighted average pooling operation based on the binary "mask" to obtain feature representations and hash codes that pay most attention to foreground objects but ignore the background; and 4) the combination of a triplet ranking loss designed to preserve relative similarities among images and a cross entropy loss defined on image labels. We conduct comprehensive evaluations on four multi-label image data sets. The results indicate that the proposed hashing method achieves superior performance gains over the state-of-the-art supervised or unsupervised hashing baselines.
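Part 3 of the architecture above, weighted average pooling driven by the binary "mask", can be sketched directly: normalize the mask into weights and take a weighted sum over spatial locations. The shapes and values are illustrative; in the method itself the mask is learned by a sub-network.

```python
import numpy as np

def masked_average_pool(features, mask, eps=1e-8):
    """features: (C, H, W) feature maps; mask: (H, W) binary map.
    Returns a (C,) descriptor that averages only over masked (foreground) cells."""
    weights = mask / (mask.sum() + eps)            # normalize mask to sum to 1
    return (features * weights).sum(axis=(1, 2))   # weighted spatial average

feats = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0   # the "object" occupies the 2x2 center; background is ignored
pooled = masked_average_pool(feats, mask)
print(pooled)  # mean of each channel over the masked center: [7.5, 23.5]
```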
Audit of the Douglas Hocking Research Institute bone bank: ten years of non-irradiated bone graft.
Love, David; Pritchard, Michael; Burgess, Tanya; Van Der Meer, Gavin; Page, Richard; Williams, Simon
2009-01-01
An audit performed in the use of non-irradiated femoral head bone graft at the Geelong Hospital over a 10-year period. While it is thought the non-irradiated bone graft provides a better structural construct there is theoretical increased risk of infection transmission. We performed a retrospective review of prospectively collected data in the use of non-irradiated bone allograft used from the Geelong Hospital Douglas Hocking Research Institute bone bank over a 10-year period. The review was performed using data collected from the bone bank and correlating it with the patient's medical record. All complications, including infections, related to the use of the allograft were recorded. We found that over the 10 years to 2004 that 811 femoral heads were donated, with 555 being used over 362 procedures in 316 patients. We identified a total of nine deep infections, of which seven were in joint replacements. Overall this was a 2.5% deep infection rate, which was lowered to 1.4% if the previously infected joints that were operated on were excluded. The use of non-irradiated femoral head bone graft was safe in a regional setting.
Güçlü, Umut; van Gerven, Marcel A J
2017-01-15
Recently, deep neural networks (DNNs) have been shown to provide accurate predictions of neural responses across the ventral visual pathway. We here explore whether they also provide accurate predictions of neural responses across the dorsal visual pathway, which is thought to be devoted to motion processing and action recognition. This is achieved by training deep neural networks to recognize actions in videos and subsequently using them to predict neural responses while subjects are watching natural movies. Moreover, we explore whether dorsal stream representations are shared between subjects. In order to address this question, we examine if individual subject predictions can be made in a common representational space estimated via hyperalignment. Results show that a DNN trained for action recognition can be used to accurately predict how dorsal stream responds to natural movies, revealing a correspondence in representations of DNN layers and dorsal stream areas. It is also demonstrated that models operating in a common representational space can generalize to responses of multiple or even unseen individual subjects to novel spatio-temporal stimuli in both encoding and decoding settings, suggesting that a common representational space underlies dorsal stream responses across multiple subjects. Copyright © 2015 Elsevier Inc. All rights reserved.
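The hyperalignment step mentioned above maps each subject's responses into a common representational space; its core computation resembles the orthogonal Procrustes problem sketched here. This is a simplification under our own assumptions, not the full iterative procedure.

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Orthogonal R minimizing ||X R - Y||_F, via SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
R_true = np.linalg.qr(rng.normal(size=(5, 5)))[0]   # a random orthogonal map
X = rng.normal(size=(100, 5))                       # "subject A" response patterns
Y = X @ R_true                                      # "subject B" = rotated copy of A
R = procrustes_rotation(X, Y)
print(np.allclose(X @ R, Y))  # → True: the shared-space mapping is recovered
```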
Batra, Vinita; Guerin, Glenn F.; Goeders, Nicholas E.; Wilden, Jessica A.
2016-01-01
Substance use disorders, particularly to methamphetamine, are devastating, relapsing diseases that disproportionally affect young people. There is a need for novel, effective and practical treatment strategies that are validated in animal models. Neuromodulation, including deep brain stimulation (DBS) therapy, refers to the use of electricity to influence pathological neuronal activity and has shown promise for psychiatric disorders, including drug dependence. DBS in clinical practice involves the continuous delivery of stimulation into brain structures using an implantable pacemaker-like system that is programmed externally by a physician to alleviate symptoms. This treatment will be limited in methamphetamine users due to challenging psychosocial situations. Electrical treatments that can be delivered intermittently, non-invasively and remotely from the drug-use setting will be more realistic. This article describes the delivery of intracranial electrical stimulation that is temporally and spatially separate from the drug-use environment for the treatment of IV methamphetamine dependence. Methamphetamine dependence is rapidly developed in rodents using an operant paradigm of intravenous (IV) self-administration that incorporates a period of extended access to drug and demonstrates both escalation of use and high motivation to obtain drug. PMID:26863392
Deep learning based classification of breast tumors with shear-wave elastography.
Zhang, Qi; Xiao, Yang; Dai, Wei; Suo, Jingfeng; Wang, Congzhi; Shi, Jun; Zheng, Hairong
2016-12-01
This study aims to build a deep learning (DL) architecture for automated extraction of learned-from-data image features from the shear-wave elastography (SWE), and to evaluate the DL architecture in differentiation between benign and malignant breast tumors. We construct a two-layer DL architecture for SWE feature extraction, comprised of the point-wise gated Boltzmann machine (PGBM) and the restricted Boltzmann machine (RBM). The PGBM contains task-relevant and task-irrelevant hidden units, and the task-relevant units are connected to the RBM. Experimental evaluation was performed with five-fold cross validation on a set of 227 SWE images, 135 of benign tumors and 92 of malignant tumors, from 121 patients. The features learned with our DL architecture were compared with the statistical features quantifying image intensity and texture. Results showed that the DL features achieved better classification performance with an accuracy of 93.4%, a sensitivity of 88.6%, a specificity of 97.1%, and an area under the receiver operating characteristic curve of 0.947. The DL-based method integrates feature learning with feature selection on SWE. It may be potentially used in clinical computer-aided diagnosis of breast cancer. Copyright © 2016 Elsevier B.V. All rights reserved.
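The RBM building block named above updates binary hidden and visible units with logistic activations. Below is a minimal sketch of one block-Gibbs step (visible to hidden to reconstruction) with random weights; the PGBM's task-relevant gating is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 4
W = rng.normal(0, 0.1, size=(n_vis, n_hid))  # visible-hidden weight matrix
b_v = np.zeros(n_vis)                        # visible biases
b_h = np.zeros(n_hid)                        # hidden biases

v0 = rng.integers(0, 2, size=n_vis).astype(float)  # a binary input vector
p_h = sigmoid(v0 @ W + b_h)                        # hidden activation probabilities
h = (rng.random(n_hid) < p_h).astype(float)        # sample binary hidden states
p_v = sigmoid(h @ W.T + b_v)                       # reconstruction probabilities
print(p_h.shape, p_v.shape)  # → (4,) (6,)
```

Training would adjust W by contrastive divergence using these sampled states; only the sampling step is shown.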
NASA Technical Reports Server (NTRS)
Lindqwister, Ulf J.; Lichten, Stephen M.; Davis, Edgar S.; Theiss, Harold L.
1993-01-01
Topex/Poseidon, a cooperative satellite mission between United States and France, aims to determine global ocean circulation patterns and to study their influence on world climate through precise measurements of sea surface height above the geoid with an on-board altimeter. To achieve the mission science aims, a goal of 13-cm orbit altitude accuracy was set. Topex/Poseidon includes a Global Positioning System (GPS) precise orbit determination (POD) system that has now demonstrated altitude accuracy better than 5 cm. The GPS POD system includes an on-board GPS receiver and a 6-station GPS global tracking network. This paper reviews early GPS results and discusses multi-mission capabilities available from a future enhanced global GPS network, which would provide ground-based geodetic and atmospheric calibrations needed for NASA deep space missions while also supplying tracking data for future low Earth orbiters. Benefits of the enhanced global GPS network include lower operations costs for deep space tracking and many scientific and societal benefits from the low Earth orbiter missions, including improved understanding of ocean circulation, ocean-weather interactions, the El Nino effect, the Earth thermal balance, and weather forecasting.
Discerning autotrophy, mixotrophy and heterotrophy in marine TACK archaea from the North Atlantic.
Seyler, Lauren M; McGuinness, Lora R; Gilbert, Jack A; Biddle, Jennifer F; Gong, Donglai; Kerkhof, Lee J
2018-03-01
DNA stable isotope probing (SIP) was used to track the uptake of organic and inorganic carbon sources for TACK archaea (Thaumarchaeota/Aigarchaeota/Crenarchaeota/Korarchaeota) on a cruise of opportunity in the North Atlantic. Due to water limitations, duplicate samples from the deep photic (60-115 m), the mesopelagic zones (local oxygen minimum; 215-835 m) and the bathypelagic zone (2085-2835 m) were amended with various combinations of 12C- or 13C-acetate/urea/bicarbonate to assess cellular carbon acquisition. The SIP results indicated the majority of TACK archaeal operational taxonomic units (OTUs) incorporated 13C from acetate and/or urea into newly synthesized DNA within 48 h. A small fraction (16%) of the OTUs, often representing the most dominant members of the archaeal community, were able to incorporate bicarbonate in addition to organic substrates. Only two TACK archaeal OTUs were found to incorporate bicarbonate but not urea or acetate. These results further demonstrate the utility of SIP to elucidate the metabolic capability of mesothermal archaea in distinct oceanic settings and suggest that TACK archaea play a role in organic carbon recycling in the mid-depth to deep ocean.
Deep Space Habitat Team: HEFT Phase 2 Effects
NASA Technical Reports Server (NTRS)
Toups, Larry D.; Smitherman, David; Shyface, Hilary; Simon, Matt; Bobkill, Marianne; Komar, D. R.; Guirgis, Peggy; Bagdigian, Bob; Spexarth, Gary
2011-01-01
HEFT was a NASA-wide team that performed analyses of architectures for human exploration beyond LEO, evaluating technical, programmatic, and budgetary issues to support decisions at the highest level of the agency in HSF planning. HEFT Phase I (April - September, 2010) and Phase II (September - December, 2010) examined a broad set of Human Exploration of Near Earth Objects (NEOs) Design Reference Missions (DRMs), evaluating such factors as elements, performance, technologies, schedule, and cost. At the end of HEFT Phase 1, an architecture concept known as DRM 4a represented the best available option for a full-capability NEO mission. Within DRM 4a, the habitation system was provided by the Deep Space Habitat (DSH), Multi-Mission Space Exploration Vehicle (MMSEV), and Crew Transfer Vehicle (CTV) pressurized elements. HEFT Phase 2 extended DRM 4a, resulting in DRM 4b, which scrubbed element-level functionality assumptions and mission Concepts of Operations. The Habitation Team developed more detailed concepts of the DSH and of the DSH/MMSEV/CTV ConOps, including functionality and accommodations, mass and volume estimates, technology requirements, and DDT&E costs. DRM 5 represented an effort to reduce cost by scaling back on technologies and eliminating the need for the development of an MMSEV.
2007-07-02
Final Report (26-Sep-01 to 26-Jun-07): OBTAINING UNIQUE, COMPREHENSIVE DEEP SEISMIC...seismic records from 12 major Deep Seismic Sounding (DSS) projects acquired in the 1970s-1980s in the former Soviet Union. The data include 3-component...records from 22 Peaceful Nuclear Explosions (PNEs) and over 500 chemical explosions recorded by a grid of linear, reversed seismic profiles covering a
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains a convolution operation, which is well suited to processing images. Using a deep convolutional neural network is better than direct extraction of visual image features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it easily over-fits, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
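The two ingredients named above are simple to state in code: PReLU applies a learned slope to negative inputs, and the L1 penalty adds the scaled sum of absolute weights to the loss. The slope and lambda values below are illustrative assumptions.

```python
import numpy as np

def prelu(x, a=0.25):
    """PReLU: identity for x > 0, slope a (learned in practice) for x <= 0."""
    return np.where(x > 0, x, a * x)

def l1_penalty(weights, lam=1e-4):
    """L1 regularization term lam * sum(|w|), added to the loss to curb over-fitting."""
    return lam * np.abs(weights).sum()

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x))                # negative inputs scaled by 0.25: [-0.5, -0.125, 0., 1.5]
W = np.array([[1.0, -2.0], [0.5, 0.0]])
print(l1_penalty(W, lam=0.1))  # → 0.35
```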
Wang, Zhuheng; Shi, Chunzhi; Sun, Liping; Guo, Qinghua; Qiao, Wei; Zhou, Guanhua
2017-11-01
To evaluate the efficacy and safety of a short-term deep sedation strategy in patients with spontaneous intracerebral hemorrhage (ICH) after surgery. A prospective, randomized, parallel-group study was conducted. Adult patients with spontaneous ICH undergoing craniotomy admitted to Daxing Teaching Hospital of Capital Medical University from December 2015 to November 2016 were enrolled. The patients who received surgery were randomly divided into a short-term deep sedation group and a slight and middle sedation group. Sufentanil was used as an analgesic drug in all patients and midazolam was used as a sedative after the operation. The patients in the slight and middle sedation group received midazolam 0.05-0.10 mg/kg with a goal of mild sedation [Richmond agitation and sedation scale (RASS) score of -2 to 1]. The patients in the short-term deep sedation group received midazolam 0.1-0.2 mg/kg with a goal of deep sedation (RASS score of -4 to -3) and a duration of no more than 12 hours. Postoperative sedation, blood pressure changes, laboratory indexes, residual hematoma and clinical outcomes were recorded in the two groups. During the study, a total of 183 patients with spontaneous ICH were collected, excluding those older than 65 years, those with shock, and those with a preoperative Glasgow coma score (GCS) of 3. 106 patients were enrolled in this study, with 53 patients assigned to the short-term deep sedation group and 53 to the slight and middle sedation group. In the slight and middle sedation group, 4 patients underwent reoperation because of recurrent hemorrhage, while no patient in the short-term deep sedation group required reoperation; the difference between the two groups was significant (χ² = 4.000, P = 0.045). The number of patients undergoing tracheotomy in the short-term deep sedation group was significantly lower than that in the slight and middle sedation group (9 cases vs. 21 cases, P < 0.05).
RASS score within 12 hours after operation in the short-term deep sedation group was lower than that in the slight and middle sedation group [-4 (-4, -2) vs. -2 (-3, -1) at 4 hours, -4 (-4, -2) vs. -1 (-2, 0) at 8 hours, -3 (-4, -2) vs. 0 (-2, 1) at 12 hours, all P < 0.01], sudden restlessness was significantly reduced [times: 1 (0, 1) vs. 3 (2, 3), P < 0.01], and postoperative sedation duration was significantly prolonged [hours: 14.0 (8.3, 20.8) vs. 8.9 (3.4, 15.3), P < 0.05]. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) within 12 hours after operation in the short-term deep sedation group were significantly lower than those in the slight and middle sedation group [SBP (mmHg, 1 mmHg = 0.133 kPa): 136.8±30.5 vs. 149.1±33.5, DBP (mmHg): 85.0 (70.8, 102.3) vs. 89.0 (69.2, 116.7), both P < 0.05]. There were no significant differences in arterial blood gas, routine blood test or coagulation function between the two groups at 24 hours after operation. The volume of residual hematoma at 2, 7 and 14 days after operation in the short-term deep sedation group was significantly decreased compared with the slight and middle sedation group (mL: 16.4±15.6 vs. 38.2±22.2 at 2 days, 9.6±8.7 vs. 20.6±18.6 at 7 days, 1.2±1.0 vs. 4.4±3.6 at 14 days, all P < 0.05), the number of deaths within 3 months was significantly lower (5 cases vs. 13 cases), and the number of patients with a favorable prognosis increased significantly (39 cases vs. 12 cases, both P < 0.05). These results show that a short-term deep sedation strategy after surgery can reduce the incidence of adverse events and improve the prognosis of patients with spontaneous ICH, and that it is safe and effective.
Predictive factors for surgical site infection in general surgery.
Haridas, Manjunath; Malangoni, Mark A
2008-10-01
Global parameters, such as wound class, the American Society of Anesthesiologists (ASA) physical status classification score, and prolonged operative time, have been associated with the risk of surgical site infection (SSI). We hypothesized that additional risk factors for SSI would be identified by controlling for these parameters and that deep and organ/space SSI may have different risk factors for occurrence. A retrospective review was performed on general and vascular surgical patients who underwent an operation between June 2000 and June 2006 at a single institution. Patients with SSI were matched with a case-control cohort of patients without infection (no SSI) according to age, sex, ASA score, wound class, and type of operative procedure. Data were analyzed using bivariate and regression analyses. Overall, 10,253 general surgical procedures were performed during the 6-year period; 316 patients (3.1%) developed SSI. In all, 300 patients with 251 superficial (83.6%), 22 deep (7.3%), and 27 organ/space (9%) SSIs were matched for comparison. Multivariate logistic regression analysis identified previous operation (odds ratio [OR], 2.4; 95% confidence interval [CI] = 1.6-3.7), duration of operation ≥ 75th percentile (OR, 1.8; 95% CI = 1.2-2.8), hypoalbuminemia (OR, 1.8; 95% CI = 1.1-2.8), and a history of chronic obstructive pulmonary disease (OR, 1.7; 95% CI = 1.0-2.8) as independent risk factors for SSI. Only hypoalbuminemia (OR, 2.9; 95% CI = 1.4-6.3) and a previous operation (OR, 2.0; 95% CI = 1.0-4.4) were significantly associated with deep or organ/space infections. These results demonstrate additional factors that increase the risk of developing SSI. Deep and organ/space infections have a different risk profile. This information should guide clinicians in their assessment of SSI risk and should identify targets for intervention to decrease the incidence of SSI.
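The odds ratios above come from multivariate logistic regression, where a fitted coefficient and its standard error are exponentiated to obtain an OR with a Wald 95% confidence interval. A minimal sketch of that conversion (the coefficient and standard error below are hypothetical, not taken from the study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    interval endpoints to get an odds ratio with a 95% CI."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient/standard error for a binary risk factor:
or_, lo, hi = odds_ratio_ci(beta=0.875, se=0.21)
```

With beta = 0.875 this yields an OR of about 2.4, the same scale as the values reported above; the study's actual fitted coefficients are not given in the abstract.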
NASA Technical Reports Server (NTRS)
1976-01-01
Various phases of planetary operations related to the Viking mission to Mars are described. Topics discussed include: approach phase, Mars orbit insertion, prelanding orbital activities, separation, descent and landing, surface operations, surface sampling and operations starting, orbiter science and radio science, Viking 2, Deep Space Network and data handling.
Deep Borehole Disposal Remediation Costs for Off-Normal Outcomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finger, John T.; Cochran, John R.; Hardin, Ernest
2015-08-17
This memo describes rough-order-of-magnitude (ROM) cost estimates for a set of off-normal (accident) scenarios, as defined for two waste package emplacement method options for deep borehole disposal: drill-string and wireline. It summarizes the different scenarios and the assumptions made for each, with respect to fishing, decontamination, remediation, etc.
Addressing Human System Risks to Future Space Exploration
NASA Technical Reports Server (NTRS)
Paloski, W. H.; Francisco, D. R.; Davis, J. R.
2015-01-01
NASA is contemplating future human exploration missions to destinations beyond low Earth orbit, including the Moon, deep-space asteroids, and Mars. While we have learned much about protecting crew health and performance during orbital space flight over the past half-century, the challenges of these future missions far exceed those within our current experience base. To ensure success in these missions, we have developed a Human System Risk Board (HSRB) to identify, quantify, and develop mitigation plans for the extraordinary risks associated with each potential mission scenario. The HSRB comprises research, technology, and operations experts in medicine, physiology, psychology, human factors, radiation, toxicology, microbiology, pharmacology, and food sciences. Methods: Owing to the wide range of potential mission characteristics, we first identified the hazards to human health and performance common to all exploration missions: altered gravity, isolation/confinement, increased radiation, distance from Earth, and hostile/closed environment. Each hazard leads to a set of risks to crew health and/or performance. For example the radiation hazard leads to risks of acute radiation syndrome, central nervous system dysfunction, soft tissue degeneration, and carcinogenesis. Some of these risks (e.g., acute radiation syndrome) could affect crew health or performance during the mission, while others (e.g., carcinogenesis) would more likely affect the crewmember well after the mission ends. We next defined a set of design reference missions (DRM) that would span the range of exploration missions currently under consideration. In addition to standard (6-month) and long-duration (1-year) missions in low Earth orbit (LEO), these DRM include deep space sortie missions of 1 month duration, lunar orbital and landing missions of 1 year duration, deep space journey and asteroid landing missions of 1 year duration, and Mars orbital and landing missions of 3 years duration. 
We then assessed the likelihood and consequences of each risk against each DRM, using three levels of likelihood (Low: less than or equal to 0.1%; Medium: 0.1%–1.0%; High: greater than or equal to 1.0%) and four levels of consequence ranging from Very Low (temporary or insignificant) to High (death, loss of mission, or significant reduction to length or quality of life). Quantitative evidence from clinical, operational, and research sources was used whenever available. Qualitative evidence was used when quantitative evidence was unavailable. Expert opinion was used whenever insufficient evidence was available. Results: A set of 30 risks emerged that will require further mitigation efforts before being accepted by the Agency. The likelihood by consequence risk assessment process provided a means of prioritizing among the risks identified. For each of the high priority risks, a plan was developed to perform research, technology, or standards development thought necessary to provide suitable reduction of likelihood or consequence to allow agency acceptance. Conclusion: The HSRB process has successfully identified a complete set of risks to human space travelers on planned exploration missions based on the best evidence available today. Risk mitigation plans have been established for the highest priority risks. Each risk will be reassessed annually to track the progress of our risk mitigation efforts.
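The likelihood bands described above map directly to probability thresholds, so the screening step can be sketched in a few lines. The banding follows the abstract; the flagging rule at the end is our assumption, not the HSRB's published criterion:

```python
# Likelihood bands from the abstract: Low <= 0.1%, Medium 0.1%-1.0%, High >= 1.0%.
def likelihood_level(p):
    """Map an event probability to the HSRB likelihood bands."""
    if p <= 0.001:
        return "Low"
    if p < 0.01:
        return "Medium"
    return "High"

CONSEQUENCE_LEVELS = ["Very Low", "Low", "Medium", "High"]

def needs_mitigation(p, consequence):
    """Assumed prioritization rule: flag any risk that is High in
    either the likelihood or the consequence dimension."""
    return likelihood_level(p) == "High" or consequence == "High"
```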
2017-01-01
Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/ PMID:28749969
Does the ocean-atmosphere system have more than one stable mode of operation?
NASA Technical Reports Server (NTRS)
Broecker, W. S.; Peteet, D. M.; Rind, D.
1985-01-01
The climate record obtained from two long Greenland ice cores reveals several brief climate oscillations during glacial time. The most recent of these oscillations, also found in continental pollen records, has greatest impact in the area under the meteorological influence of the northern Atlantic, but none in the United States. This suggests that these oscillations are caused by fluctuations in the formation rate of deep water in the northern Atlantic. As the present production of deep water in this area is driven by an excess of evaporation over precipitation and continental runoff, atmospheric water transport may be an important element in climate change. Changes in the production rate of deep water in this sector of the ocean may push the climate system from one quasi-stable mode of operation to another.
Deep reconditioning of batteries during DSCS 3 flight operations
NASA Technical Reports Server (NTRS)
Thierfelder, H. E.; Stearns, R. J.; Jones, P. W.
1985-01-01
Deep reconditioning of batteries is defined as discharge below the 1.0 volt/cell level to a value of about 1.0 volt/battery. This type of reconditioning was investigated for use on the Defense Satellite Communications System (DSCS) spacecraft, and has been used during the first year of orbital operation. Prior to launch of the spacecraft, deep reconditioning was used during the battery life test, which has now completed fourteen eclipse periods. Reconditioning was performed prior to each eclipse period of the life test, and is scheduled to be used prior to each eclipse period in orbit. Battery discharge and recharge data are presented for one of the life-test reconditioning cycles, and for each of the three batteries during the reconditioning cycles between eclipse periods No. 1 and No. 2 in Earth orbit.
Hsu, Guoo-Shyng Wang; Hsu, Shun-Yao
2018-04-01
Electrolyzed water is a sustainable disinfectant that can comply with food safety regulations and is environmentally friendly. A two-factor central composite design was adopted to study the effects of electrode gap and electric current on the chlorine generation efficiency of electrolyzed deep ocean water. Deep ocean water was electrolyzed in a glass electrolyzing cell equipped with a platinum-plated titanium anode and cathode in constant-current operation mode. Results showed that current density, chlorine concentration, and electrolyte temperature increased with electric current, while electric efficiency decreased with electric current and electrode gap. An electrode gap of less than 11.7 mm and a low electric current appeared to be the more energy-efficient design and operating condition for the electrolysis system. Copyright © 2017. Published by Elsevier B.V.
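The current efficiency studied above is judged against the theoretical chlorine yield given by Faraday's law (2 electrons per Cl2 molecule at the anode). A minimal sketch of that baseline; the function name and the efficiency parameter are ours, for illustration only:

```python
FARADAY = 96485.0   # C/mol
M_CL2 = 70.9        # g/mol, molar mass of Cl2

def chlorine_yield_g(current_a, seconds, efficiency=1.0):
    """Theoretical mass of Cl2 (grams) produced by electrolysis,
    assuming 2 electrons per Cl2 molecule (2 Cl- -> Cl2 + 2 e-)."""
    moles = efficiency * current_a * seconds / (2 * FARADAY)
    return moles * M_CL2
```

Measured chlorine concentration divided by this theoretical yield gives the current efficiency the study reports as decreasing with electric current and electrode gap.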
Deep learning with non-medical training used for chest pathology identification
NASA Astrophysics Data System (ADS)
Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit
2015-03-01
In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment showing that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
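The AUC values reported above are equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal pairwise sketch on toy scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of ROC AUC: fraction of positive/negative
    score pairs where the positive wins (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))
```

The pairwise loop is O(n*m) and fine for illustration; production code typically uses a rank-based formulation instead.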
Student beats the teacher: deep neural networks for lateral ventricles segmentation in brain MR
NASA Astrophysics Data System (ADS)
Ghafoorian, Mohsen; Teuwen, Jonas; Manniesing, Rashindra; de Leeuw, Frank-Erik; van Ginneken, Bram; Karssemeijer, Nico; Platel, Bram
2018-03-01
Ventricular volume and its progression are known to be linked to several brain diseases such as dementia and schizophrenia. Accurate measurement of ventricle volume is therefore vital for longitudinal studies of these disorders, making automated ventricle segmentation algorithms desirable. In the past few years, deep neural networks have been shown to outperform classical models in many imaging domains. However, the success of deep networks depends on manually labeled data sets, which are expensive to acquire, especially for higher-dimensional data in the medical domain. In this work, we show that deep neural networks can be trained on much cheaper-to-acquire pseudo-labels (e.g., generated by other, less accurate automated methods) and still produce more accurate segmentations than the quality of the labels. To show this, we use noisy segmentation labels generated by a conventional region growing algorithm to train a deep network for lateral ventricle segmentation. Then, on a large manually annotated test set, we show that the network significantly outperforms the conventional region growing algorithm that was used to produce its training labels. Our experiments report a Dice Similarity Coefficient (DSC) of 0.874 for the trained network compared to 0.754 for the conventional region growing algorithm (p < 0.001).
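The DSC used to score the segmentations above is the standard overlap measure 2|A ∩ B| / (|A| + |B|). A minimal sketch on toy flat binary masks (not the paper's MR data):

```python
def dice(a, b):
    """Dice Similarity Coefficient for two flat binary masks:
    2 * |intersection| / (|a| + |b|); defined as 1.0 when both empty."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
score = dice(pred, truth)  # 2*2 / (3+3) = 0.666...
```

For 3D MR volumes the same formula is applied to the flattened voxel masks.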
How Do Lessons Learned on the International Space Station (ISS) Help Plan Life Support for Mars?
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Hodgson, Edward W.; Gentry, Gregory J.; Kliss, Mark H.
2016-01-01
How can our experience in developing and operating the International Space Station (ISS) guide the design, development, and operation of life support for the journey to Mars? The Mars deep space Environmental Control and Life Support System (ECLSS) must incorporate the knowledge and experience gained in developing ECLSS for low Earth orbit, but it must also meet the challenging new requirements of operation in deep space where there is no possibility of emergency resupply or quick crew return. The understanding gained by developing ISS flight hardware and successfully supporting a crew in orbit for many years is uniquely instructive. Different requirements for Mars life support suggest that different decisions may be made in design, testing, and operations planning, but the lessons learned developing the ECLSS for ISS provide valuable guidance.
Deep Borehole Disposal Concept: Development of Universal Canister Concept of Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rigali, Mark J.; Price, Laura L.
This report documents key elements of the conceptual design for deep borehole disposal of radioactive waste to support the development of a universal canister concept of operations. A universal canister is a canister designed to store, transport, and dispose of radioactive waste without having to be reopened to treat or repackage the waste. This report focuses on the conceptual design for disposal of radioactive waste contained in a universal canister in a deep borehole. The general deep borehole disposal concept consists of drilling a borehole into crystalline basement rock to a depth of about 5 km, emplacing waste packages (WPs) in the lower 2 km of the borehole, and sealing and plugging the upper 3 km. Research and development programs for deep borehole disposal have been ongoing for several years in the United States and the United Kingdom; these studies have shown that deep borehole disposal of radioactive waste could be safe, cost effective, and technically feasible. The design concepts described in this report are workable solutions based on expert judgment, and are intended to guide follow-on design activities. Both preclosure and postclosure safety were considered in the development of the reference design concept. The requirements and assumptions that form the basis for the deep borehole disposal concept include WP performance requirements, radiological protection requirements, surface handling and transport requirements, and emplacement requirements. The key features of the reference disposal concept include borehole drilling and construction concepts, WP designs, and waste handling and emplacement concepts. These features are supported by engineering analyses.
NASA Capabilities That Could Impact Terrestrial Smart Grids of the Future
NASA Technical Reports Server (NTRS)
Beach, Raymond F.
2015-01-01
Incremental steps to steadily build, test, refine, and qualify capabilities that lead to affordable flight elements and a deep space capability. Potential deep space vehicle power system characteristics: power, 10 kilowatts average; two independent power channels with multi-level cross-strapping; solar array power, 24+ kilowatts, multi-junction arrays; lithium-ion battery storage, 200+ ampere-hours; sized for deep space or low lunar orbit operation; distribution, 120 volts secondary (SAE AS 5698); 2 kilowatt power transfer between vehicles.
NASA Astrophysics Data System (ADS)
Kirkwood, William J.; Walz, Peter M.; Peltzer, Edward T.; Barry, James P.; Herlien, Robert A.; Headley, Kent L.; Kecy, Chad; Matsumoto, George I.; Maughan, Thom; O'Reilly, Thomas C.; Salamy, Karen A.; Shane, Farley; Brewer, Peter G.
2015-03-01
We describe the design, testing, and performance of an actively controlled deep-sea Free Ocean CO2 Enrichment (dp-FOCE) system for the execution of seafloor experiments relating to the impacts of ocean acidification on natural ecosystems. We used the 880 m deep MARS (Monterey Accelerated Research System) cable site offshore Monterey Bay, California for this work, but the Free Ocean CO2 Enrichment (FOCE) system concept is designed to be scalable and can be modified to be used in a wide variety of ocean depths and locations. The main frame is based on a flume design with active thruster control of flow and a central experimental chamber. The unit was allowed to free fall to the seafloor and connected to the cable node by remotely operated vehicle (ROV) manipulation. For operation at depth we designed a liquid CO2 containment reservoir which provided the CO2 enriched working fluid as ambient seawater was drawn through the reservoir beneath the more buoyant liquid CO2. Our design allowed for the significant lag time associated with the hydration of the dissolved CO2 molecule, resulting in an e-folding time, τ, of 97 s between fluid injection and pH sensing at the mean local T=4.31±0.14 °C and pHT of 7.625±0.011. The system maintained a pH offset of 0.4 pH units compared to the surrounding ocean for a period of 1 month. The unit allows for the emplacement of deep-sea animals for testing. We describe the components and software used for system operation and show examples of each. The demonstrated ability for active control of experimental systems opens new possibilities for deep-sea biogeochemical perturbation experiments of several kinds and our developments in open source control systems software and hardware described here are applicable to this end.
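The lag between CO2 injection and the downstream pH reading above is characterized by an e-folding time, i.e., it behaves like a first-order response. A sketch under that assumption, using the reported tau = 97 s (the function and its step-response framing are ours, for illustration):

```python
import math

TAU = 97.0  # seconds; e-folding time reported for the dp-FOCE system

def response_fraction(t, tau=TAU):
    """Fraction of a step change in injected CO2 that a first-order
    system shows at the pH sensor after t seconds: 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / tau)

# After one e-folding time the sensor sees ~63% of the step:
frac = response_fraction(TAU)
```

This lag is why the control loop must anticipate hydration chemistry rather than react instantaneously; holding the 0.4-unit pH offset for a month requires integrating over this response.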
Urbanowicz, Tomasz K; Budniak, Wiktor; Buczkowski, Piotr; Perek, Bartłomiej; Walczak, Maciej; Tomczyk, Jadwiga; Katarzyński, Sławomir; Jemielity, Marek
2014-12-01
Monitoring the central nervous system during aortic dissection repair may improve the understanding of the intraoperative changes related to its bioactivity. The aim of the study was to evaluate the influence of deep hypothermia on intraoperative brain bioactivity measured by the compressed spectral array (CSA) method and to assess the influence of the operations on postoperative cognitive function. The study enrolled 40 patients (31 men and 9 women) with a mean age of 60.2 ± 8.6 years, diagnosed with acute aortic dissection. They underwent emergency operations under deep hypothermic circulatory arrest (DHCA). During the operations, brain bioactivity was monitored with the compressed spectral array method. There were no intraoperative deaths. Electrocerebral silence during DHCA was observed in 31 patients (74%). The lowest activity was observed during DHCA: 0.01 ± 0.05 nW in the left hemisphere and 0.01 ± 0.03 nW in the right hemisphere. Postoperative neurological test results deteriorated significantly (26.9 ± 1.7 points vs. 22.0 ± 1.7 points; p < 0.001), especially among patients who exhibited brain activity during DHCA. The compressed spectral array method is clinically useful in monitoring brain bioactivity during emergency operations for acute aortic dissection. Electrocerebral silence occurs in about 75% of patients during DHCA. The cognitive function of patients deteriorates significantly after operations with DHCA.
NASA Astrophysics Data System (ADS)
Alapaty, K.; Zhang, G. J.; Song, X.; Kain, J. S.; Herwehe, J. A.
2012-12-01
Short lived pollutants such as aerosols play an important role in modulating not only the radiative balance but also cloud microphysical properties and precipitation rates. In the past, to understand the interactions of aerosols with clouds, several cloud-resolving modeling studies were conducted. These studies indicated that in the presence of anthropogenic aerosols, single-phase deep convection precipitation is reduced or suppressed. On the other hand, anthropogenic aerosol pollution led to enhanced precipitation for mixed-phase deep convective clouds. To date, there have not been many efforts to incorporate such aerosol indirect effects (AIE) in mesoscale models or global models that use parameterization schemes for deep convection. Thus, the objective of this work is to implement a diagnostic cloud microphysical scheme directly into a deep convection parameterization facilitating aerosol indirect effects in the WRF-CMAQ integrated modeling systems. Major research issues addressed in this study are: What is the sensitivity of a deep convection scheme to cloud microphysical processes represented by a bulk double-moment scheme? How close are the simulated cloud water paths as compared to observations? Does increased aerosol pollution lead to increased precipitation for mixed-phase clouds? These research questions are addressed by performing several WRF simulations using the Kain-Fritsch convection parameterization and a diagnostic cloud microphysical scheme. In the first set of simulations (control simulations) the WRF model is used to simulate two scenarios of deep convection over the continental U.S. during two summer periods at 36 km grid resolution. In the second set, these simulations are repeated after incorporating a diagnostic cloud microphysical scheme to study the impacts of inclusion of cloud microphysical processes. 
Finally, in the third set, aerosol concentrations simulated by the CMAQ modeling system are supplied to the embedded cloud microphysical scheme to study impacts of aerosol concentrations on precipitation and radiation fields. Observations available from the ARM microbase data, the SURFRAD network, GOES imagery, and other reanalysis and measurements will be used to analyze the impacts of a cloud microphysical scheme and aerosol concentrations on parameterized convection.
Instructional Set, Deep Relaxation and Growth Enhancement: A Pilot Study
ERIC Educational Resources Information Center
Leeb, Charles; And Others
1976-01-01
This study provides experimental evidence that instructional set can influence access to altered states of consciousness. Fifteen male subjects were randomly assigned to three groups, each of which received the same autogenic biofeedback training in hand temperature control, but each group received a different attitudinal set. (Editor)
A Risk Stratification Model for Lung Cancer Based on Gene Coexpression Network and Deep Learning
2018-01-01
Risk stratification models for lung cancer based on gene expression profiles are of great interest. Instead of previous models based on individual prognostic genes, we aimed to develop a novel system-level risk stratification model for lung adenocarcinooma based on gene coexpression networks. Using multiple microarray datasets, gene coexpression network analysis was performed to identify survival-related networks. A deep learning based risk stratification model was constructed with representative genes of these networks. The model was validated in two test sets. Survival analysis was performed using the output of the model to evaluate whether it could predict patients' survival independent of clinicopathological variables. Five networks were significantly associated with patients' survival. Considering prognostic significance and representativeness, genes of the two survival-related networks were selected as input to the model. The output of the model was significantly associated with patients' survival in the training set and both test sets (p < 0.00001, p < 0.0001, and p = 0.02 for the training set and test sets 1 and 2, respectively). In multivariate analyses, the model was associated with patients' prognosis independent of other clinicopathological features. Our study presents a new perspective on incorporating gene coexpression networks into the gene expression signature and on the clinical application of deep learning in genomic data science for prognosis prediction. PMID:29581968
Automating Mid- and Long-Range Scheduling for the NASA Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S³. This system was designed and deployed as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users of the DSN. These users represent not only NASA's deep space missions, but also international partners and ground-based science and calibration users. The initial implementation of S³ is complete and the system has been operational since July 2011. This paper describes key aspects of the S³ system, the challenges of modeling complex scheduling requirements, and the ongoing extension of S³ to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
International Docking System Standard (IDSS) Interface Definition Document (IDD), Revision E
NASA Technical Reports Server (NTRS)
Kelly, Sean M.; Cryan, Scott P.
2016-01-01
This International Docking System Standard (IDSS) Interface Definition Document (IDD) is the result of a collaboration by the International Space Station membership to establish a standard docking interface to enable on-orbit crew rescue operations and joint collaborative endeavors utilizing different spacecraft. This IDSS IDD details the physical geometric mating interface and design loads requirements. The physical geometric interface requirements must be strictly followed to ensure physical spacecraft mating compatibility. This includes both defined components and areas that are void of components. The IDD also identifies common design parameters as identified in section 3.0, e.g., docking initial conditions and vehicle mass properties. This information represents a recommended set of design values enveloping a broad set of design reference missions and conditions, which if accommodated in the docking system design, increases the probability of successful docking between different spacecraft. This IDD does not address operational procedures or off-nominal situations, nor does it dictate implementation or design features behind the mating interface. It is the responsibility of the spacecraft developer to perform all hardware verification and validation, and to perform final docking analyses to ensure the needed docking performance and to develop the final certification loads for their application. While there are many other critical requirements needed in the development of a docking system such as fault tolerance, reliability, and environments (e.g. vibration, etc.), it is not the intent of the IDSS IDD to mandate all of these requirements; these requirements must be addressed as part of the specific developer's unique program, spacecraft and mission needs. This approach allows designers the flexibility to design and build docking mechanisms to their unique program needs and requirements. 
The purpose of the IDSS IDD is to provide basic common design parameters to allow developers to independently design compatible docking systems. The IDSS is intended for uses ranging from crewed to autonomous space vehicles, and from Low Earth Orbit (LEO) to deep-space exploration missions.
ERIC Educational Resources Information Center
Davies, T. A.
1976-01-01
Described are the background, operation, and findings of the work of the deep sea drilling vessel Glomar Challenger, which has taken 8,638 core samples from 573 holes at 392 sites on the floor of the Earth's oceans. (SL)
Visualization experiences and issues in Deep Space Exploration
NASA Technical Reports Server (NTRS)
Wright, John; Burleigh, Scott; Maruya, Makoto; Maxwell, Scott; Pischel, Rene
2003-01-01
The panelists will discuss their experiences in collecting data in deep space, transmitting it to Earth, processing and visualizing it here, and using the visualization to drive the continued mission. This closes the loop, making missions more responsive to their environment, particularly in-situ operations on planetary surfaces and within planetary atmospheres.
A Critical Comparison of Transformation and Deep Approach Theories of Learning
ERIC Educational Resources Information Center
Howie, Peter; Bagnall, Richard
2015-01-01
This paper reports a critical comparative analysis of two popular and significant theories of adult learning: the transformation and the deep approach theories of learning. These theories are operative in different educational sectors, are significant, respectively, in each, and they may be seen as both touching on similar concerns with learning…
The deep space network. [tracking and communication support for space probes
NASA Technical Reports Server (NTRS)
1974-01-01
The objectives, functions, and organization of the deep space network are summarized. Progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported. Interface support for the Mariner Venus Mercury 1973 flight and Pioneer 10 and 11 missions is included.
NASA Astrophysics Data System (ADS)
Ciarletti, V.; Le Gall, A.; Berthelier, J. J.; Corbel, Ch.; Dolon, F.; Ney, R.; Reineix, A.; Guiffaud, Ch.; Clifford, S.; Heggy, E.
2007-03-01
A bi-static version of the HF GPR TAPIR developed for Martian deep soundings has been operated in the Egyptian Western Desert. The study presented focuses on the retrieval of the direction of arrival of the observed echoes on both simulated and measured data.
The Status of Ka-Band Communications for Future Deep Space Missions
NASA Technical Reports Server (NTRS)
Edwards, C.; Deutsch, L.; Gatti, M.; Layland, J.; Perret, J.; Stelzried, C.
1997-01-01
Over the past decade, the Jet Propulsion Laboratory's Telecommunications and Mission Operations Directorate has invested in a variety of technologies, targeted at both the flight and ground sides of the communications link, with the goal of developing a Ka-band (32 GHz) communications capability for future deep space missions.
Automated analysis of high-content microscopy data with deep learning.
Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J
2017-04-18
Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data. © 2017 The Authors. Published under the terms of the CC BY 4.0 license.
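The convolutional layers at the core of networks such as DeepLoc reduce an image to feature maps by sliding small learned kernels over it. A minimal pure-Python sketch of that building block follows; the kernel here is hand-set rather than learned, the "image" is a toy array, and the real network stacks many such layers with pooling and fully connected layers on top.

```python
# Illustrative sketch of a convolution + ReLU layer (hypothetical values).

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D image with a 2D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(fmap):
    """Element-wise rectified linear activation."""
    return [[max(0.0, v) for v in row] for row in fmap]

# A vertical-edge kernel applied to a toy 4x4 "image" with a left/right step.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
feature_map = relu(conv2d(image, kernel))  # responds strongly at the edge
```

Stacking such layers and learning the kernels from labelled examples is what lets a network like DeepLoc discover localization-specific image features instead of relying on hand-engineered ones.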
Calculating massive 3-loop graphs for operator matrix elements by the method of hyperlogarithms
NASA Astrophysics Data System (ADS)
Ablinger, Jakob; Blümlein, Johannes; Raab, Clemens; Schneider, Carsten; Wißbrock, Fabian
2014-08-01
We calculate convergent 3-loop Feynman diagrams containing a single massive loop equipped with twist τ=2 local operator insertions corresponding to spin N. They contribute to the massive operator matrix elements in QCD describing the massive Wilson coefficients for deep-inelastic scattering at large virtualities. Diagrams of this kind can be computed using an extended version of the method of hyperlogarithms, originally being designed for massless Feynman diagrams without operators. The method is applied to Benz- and V-type graphs, belonging to the genuine 3-loop topologies. In case of the V-type graphs with five massive propagators, new types of nested sums and iterated integrals emerge. The sums are given in terms of finite binomially and inverse binomially weighted generalized cyclotomic sums, while the 1-dimensionally iterated integrals are based on a set of ∼30 square-root valued letters. We also derive the asymptotic representations of the nested sums and present the solution for N∈C. Integrals with a power-like divergence in N-space ∝aN,a∈R,a>1, for large values of N emerge. They still possess a representation in x-space, which is given in terms of root-valued iterated integrals in the present case. The method of hyperlogarithms is also used to calculate higher moments for crossed box graphs with different operator insertions.
Kumar, Sandeep; Kumar, Sugam; Katharria, Y S; Safvan, C P; Kanjilal, D
2008-05-01
A computerized system for in situ deep level characterization during irradiation in semiconductors has been set up and tested in the beam line for materials science studies of the 15 MV Pelletron accelerator at the Inter-University Accelerator Centre, New Delhi. This is a new facility for in situ irradiation-induced deep level studies, available in the beam line of an accelerator laboratory. It is based on the well-known deep level transient spectroscopy (DLTS) technique. High versatility for data manipulation is achieved through multifunction data acquisition card and LABVIEW. In situ DLTS studies of deep levels produced by impact of 100 MeV Si ions on Aun-Si(100) Schottky barrier diode are presented to illustrate performance of the automated DLTS facility in the beam line.
Research on wind power grid-connected operation and dispatching strategies of Liaoning power grid
NASA Astrophysics Data System (ADS)
Han, Qiu; Qu, Zhi; Zhou, Zhi; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Li, Jinze; Ling, Zhaowei
2018-02-01
As a kind of clean energy, wind power has developed rapidly in recent years. Liaoning Province has abundant wind resources, and its total installed wind power capacity ranks among the highest. With large-scale grid-connected operation of wind power, the contradiction between wind power utilization and peak load regulation of the power grid has become more prominent. Starting from the power structure and installed capacity of the Liaoning power grid, this paper analyzes the distribution and spatio-temporal output characteristics of wind farms, the prediction accuracy of wind power, and its curtailment and off-grid behaviour. Based on an in-depth analysis of the seasonal characteristics of the network load, the composition and distribution of the main loads are presented. To balance the acceptance of wind power against grid regulation, dispatching strategies are given, covering unit maintenance scheduling, spinning reserve, and energy storage settings, derived from the operating characteristics and response times of thermal and hydroelectric units. These strategies can meet the demand for wind power acceptance and offer a way to improve the level of power grid dispatching.
O'Reilly, Andrew M.
2004-01-01
A relatively simple method is needed that provides estimates of transient ground-water recharge in deep water-table settings that can be incorporated into other hydrologic models. Deep water-table settings are areas where the water table is below the reach of plant roots and virtually all water that is not lost to surface runoff, evaporation at land surface, or evapotranspiration in the root zone eventually becomes ground-water recharge. Areas in central Florida with a deep water table generally are high recharge areas; consequently, simulation of recharge in these areas is of particular interest to water-resource managers. Yet the complexities of meteorological variations and unsaturated flow processes make it difficult to estimate short-term recharge rates, thereby confounding calibration and predictive use of transient hydrologic models. A simple water-balance/transfer-function (WBTF) model was developed for simulating transient ground-water recharge in deep water-table settings. The WBTF model represents a one-dimensional column from the top of the vegetative canopy to the water table and consists of two components: (1) a water-balance module that simulates the water storage capacity of the vegetative canopy and root zone; and (2) a transfer-function module that simulates the traveltime of water as it percolates from the bottom of the root zone to the water table. Data requirements include two time series for the period of interest, precipitation (or precipitation minus surface runoff, if surface runoff is not negligible) and evapotranspiration, and values for five parameters that represent water storage capacity or soil-drainage characteristics. A limiting assumption of the WBTF model is that the percolation of water below the root zone is a linear process. That is, percolating water is assumed to have the same traveltime characteristics, experiencing the same delay and attenuation, as it moves through the unsaturated zone.
This assumption is more accurate if the moisture content, and consequently the unsaturated hydraulic conductivity, below the root zone does not vary substantially with time. Results of the WBTF model were compared to those of the U.S. Geological Survey variably saturated flow model, VS2DT, and to field-based estimates of recharge to demonstrate the applicability of the WBTF model for a range of conditions relevant to deep water-table settings in central Florida. The WBTF model reproduced independently obtained estimates of recharge reasonably well for different soil types and water-table depths.
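The WBTF structure, a water balance feeding a linear transfer function, can be made concrete in a small sketch. The code below is an illustrative reduction, not the published model: all parameter values are hypothetical, canopy storage is omitted, and the five-parameter formulation is collapsed into a single root-zone capacity plus a fixed travel-time distribution.

```python
def wbtf_recharge(precip, et, root_zone_capacity, transfer_fn):
    """Toy WBTF sketch: a root-zone water balance produces drainage,
    which a linear transfer function (a convolution) delays and
    attenuates on its way to the water table."""
    storage, drainage = 0.0, []
    for p, e in zip(precip, et):
        storage = max(0.0, storage + p - e)      # add rain, remove ET
        excess = max(0.0, storage - root_zone_capacity)
        storage -= excess                        # spill above capacity
        drainage.append(excess)
    # Linearity assumption: every drainage pulse has the same travel-time
    # distribution, so recharge is drainage convolved with transfer_fn.
    recharge = [0.0] * len(drainage)
    for t, d in enumerate(drainage):
        for k, frac in enumerate(transfer_fn):
            if t + k < len(recharge):
                recharge[t + k] += d * frac
    return recharge

precip = [50, 0, 0, 0, 0, 0, 0, 0]   # daily precipitation, mm (made up)
et     = [2] * 8                     # daily evapotranspiration, mm
tf     = [0.0, 0.5, 0.3, 0.2]        # travel-time distribution (sums to 1)
recharge = wbtf_recharge(precip, et, root_zone_capacity=10.0, transfer_fn=tf)
```

Because the transfer function sums to one, total recharge equals total drainage; only its timing is spread out, which is exactly the delay-and-attenuation behaviour the abstract describes.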
Deep epistasis in human metabolism
NASA Astrophysics Data System (ADS)
Imielinski, Marcin; Belta, Calin
2010-06-01
We extend and apply a method that we have developed for deriving high-order epistatic relationships in large biochemical networks to a published genome-scale model of human metabolism. In our analysis we compute 33 328 reaction sets whose knockout synergistically disables one or more of 43 important metabolic functions. We also design minimal knockouts that remove flux through fumarase, an enzyme that has previously been shown to play an important role in human cancer. Most of these knockout sets employ more than eight mutually buffering reactions, spanning multiple cellular compartments and metabolic subsystems. These reaction sets suggest that human metabolic pathways possess a striking degree of parallelism, inducing "deep" epistasis between diversely annotated genes. Our results prompt specific chemical and genetic perturbation follow-up experiments that could be used to query in vivo pathway redundancy. They also suggest directions for future statistical studies of epistasis in genetic variation data sets.
NASA Astrophysics Data System (ADS)
Puebla, Ricardo; Casanova, Jorge; Plenio, Martin B.
2018-03-01
The dynamics of the quantum Rabi model (QRM) in the deep strong coupling regime is theoretically analyzed in a trapped-ion set-up. Recognizably, the main hallmark of this regime is the emergence of collapses and revivals, whose faithful observation is hindered under realistic magnetic dephasing noise. Here, we discuss how to attain a faithful implementation of the QRM in the deep strong coupling regime which is robust against magnetic field fluctuations and at the same time provides a large tunability of the simulated parameters. This is achieved by combining standing wave laser configuration with continuous dynamical decoupling. In addition, we study the role that amplitude fluctuations play to correctly attain the QRM using the proposed method. In this manner, the present work further supports the suitability of continuous dynamical decoupling techniques in trapped-ion settings to faithfully realize different interacting dynamics.
Deep learning for computational biology.
Angermueller, Christof; Pärnamaa, Tanel; Parts, Leopold; Stegle, Oliver
2016-07-29
Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background of what deep learning is, and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists when and how to make the most use of this new technology. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.
Temperature based Restricted Boltzmann Machines
NASA Astrophysics Data System (ADS)
Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping
2016-01-01
Restricted Boltzmann machines (RBMs), which apply graphical models to learning probability distributions over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution that RBMs originate from. However, none of the existing schemes have considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
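The temperature mechanism described here reduces to a single change in the hidden-unit activation: the logistic input is divided by T, so T controls the sharpness of the firing probability. A minimal sketch with hypothetical values:

```python
import math

def firing_prob(activation, temperature):
    """Hidden-unit firing probability with a temperature-scaled logistic:
    small T gives a sharp, selective response; large T flattens it."""
    return 1.0 / (1.0 + math.exp(-activation / temperature))

# Same pre-activation, different temperatures:
cold = firing_prob(2.0, temperature=0.5)   # near-deterministic firing
hot  = firing_prob(2.0, temperature=10.0)  # close to chance (0.5)
```

At low temperature only strongly driven units fire, which is the "selectivity" the abstract attributes to the temperature parameter; in a sampling step one would fire the unit when a uniform random draw falls below this probability.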
NASA Astrophysics Data System (ADS)
Amit, Guy; Ben-Ari, Rami; Hadad, Omer; Monovich, Einat; Granot, Noa; Hashoul, Sharbell
2017-03-01
Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
A Fully Automated Pipeline for Classification Tasks with an Application to Remote Sensing
NASA Astrophysics Data System (ADS)
Suzuki, K.; Claesen, M.; Takeda, H.; De Moor, B.
2016-06-01
Deep learning has recently been in the spotlight owing to its victories at major competitions, which has pushed "shallow" machine learning methods, the relatively simple and convenient algorithms commonly used by industrial engineers, into the background despite their advantages, such as the small amount of training time and data they require. Taking a practical point of view, we used shallow learning algorithms to construct a learning pipeline that operators can use without specialized knowledge, an expensive computing environment, or a large amount of labelled data. The proposed pipeline automates the whole classification process, namely feature selection, feature weighting, and the selection of the most suitable classifier with optimized hyperparameters. The configuration uses particle swarm optimization, a well-known metaheuristic algorithm that is generally fast and precise, which enables us not only to optimize (hyper)parameters but also to determine appropriate features and classifiers for the problem, a choice that has conventionally been made a priori from domain knowledge or handled with naive algorithms such as grid search. Through experiments with the MNIST and CIFAR-10 datasets, common computer vision datasets for character recognition and object recognition respectively, our automated learning approach provides high performance considering its simple setting (i.e., no dataset-specific specialization), small amount of training data, and practical learning time. Moreover, compared to deep learning, the performance remains robust, almost without modification, even on a remote sensing object recognition problem, which indicates that our approach is likely to contribute to general classification problems.
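The role particle swarm optimization plays in such a pipeline, searching a hyperparameter space without gradients, can be illustrated with a toy example. The "validation error" curve below is made up (a quadratic standing in for a real cross-validation score), and the swarm constants are conventional defaults, not the paper's settings.

```python
import random

def pso(objective, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise objective over a 1D interval with particle swarm
    optimization: inertia w, cognitive pull c1, social pull c2."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pbest_f = xs[:], [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # keep inside bounds
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i], f
    return gbest, gbest_f

# Hypothetical "validation error as a function of one hyperparameter".
err = lambda x: (x - 3.0) ** 2 + 1.0
best_x, best_err = pso(err, bounds=(0.0, 10.0))
```

In the pipeline setting the objective would be a cross-validated score over a mixed space of feature weights and classifier choices rather than this smooth curve, which is precisely where a derivative-free metaheuristic earns its keep over grid search.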
Geothermal energy from deep sedimentary basins: The Valley of Mexico (Central Mexico)
NASA Astrophysics Data System (ADS)
Lenhardt, Nils; Götz, Annette E.
2015-04-01
The geothermal potential of the Valley of Mexico has not been addressed in the past, although volcaniclastic settings in other parts of the world contain promising target reservoir formations. A first assessment of the geothermal potential of the Valley of Mexico is based on thermophysical data gained from outcrop analogues, covering all lithofacies types, and evaluation of groundwater temperature and heat flow values from literature. Furthermore, the volumetric approach of Muffler and Cataldi (1978) leads to a first estimation of ca. 4000 TWh (14.4 EJ) of power generation from Neogene volcanic rocks within the Valley of Mexico. Comparison with data from other sedimentary basins where deep geothermal reservoirs are identified shows the high potential of the Valley of Mexico for future geothermal reservoir utilization. The mainly low permeable lithotypes may be operated as stimulated systems, depending on the fracture porosity in the deeper subsurface. In some areas, auto-convective thermal water circulation might also be expected, and direct heat use without artificial stimulation becomes reasonable. Thermophysical properties of tuffs and siliciclastic rocks qualify them as promising target horizons (Lenhardt and Götz, 2015). The data presented here serve to identify exploration areas and provide valuable input for reservoir modelling, contributing to (1) a reliable reservoir prognosis, (2) the decision of potential reservoir stimulation, and (3) the planning of long-term efficient reservoir utilization. References Lenhardt, N., Götz, A.E., 2015. Geothermal reservoir potential of volcaniclastic settings: The Valley of Mexico, Central Mexico. Renewable Energy. [in press] Muffler, P., Cataldi, R., 1978. Methods for regional assessment of geothermal resources. Geothermics, 7, 53-89.
The changing paradigm for integrated simulation in support of Command and Control (C2)
NASA Astrophysics Data System (ADS)
Riecken, Mark; Hieb, Michael
2016-05-01
Modern software and network technologies are on the verge of enabling what has eluded the simulation and operational communities for more than two decades, truly integrating simulation functionality into operational Command and Control (C2) capabilities. This deep integration will benefit multiple stakeholder communities from experimentation and test to training by providing predictive and advanced analytics. There is a new opportunity to support operations with simulation once a deep integration is achieved. While it is true that doctrinal and acquisition issues remain to be addressed, nonetheless it is increasingly obvious that few technical barriers persist. How will this change the way in which common simulation and operational data is stored and accessed? As the Services move towards single networks, will there be technical and policy issues associated with sharing those operational networks with simulation data, even if the simulation data is operational in nature (e.g., associated with planning)? How will data models that have traditionally been simulation only be merged in with operational data models? How will the issues of trust be addressed?
Spitzer Operations: Scheduling the Out Years
NASA Technical Reports Server (NTRS)
Mahoney, William A.; Effertz, Mark J.; Fisher, Mark E.; Garcia, Lisa J.; Hunt, Joseph C. Jr.; Mannings, Vincent; McElroy, Douglas B.; Scire, Elena
2012-01-01
Spitzer Warm Mission operations have remained robust and exceptionally efficient since the cryogenic mission ended in mid-2009. The distance to the spacecraft now exceeds 1 AU, making telecommunications increasingly difficult; however, analysis has shown that two-way communication could be maintained through at least 2017 with minimal loss in observing efficiency. The science program continues to emphasize the characterization of exoplanets, time domain studies, and deep surveys, all of which can impose interesting scheduling constraints. Recent changes have significantly improved on-board data compression, which both enables certain high volume observations and reduces Spitzer's demand for competitive Deep Space Network resources.
Development of deep drawn aluminum piston tanks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, J.C.; Bronder, R.L.; Kilgard, L.W.
1990-06-08
An aluminum piston tank has been developed for applications requiring lightweight, low cost, low pressure, positive-expulsion liquid storage. The 3 liter (183 in³) vessel is made primarily from aluminum sheet, using production forming and joining operations. The development process relied mainly on pressurizing prototype parts and assemblies to failure, as the primary source of decision making information for driving the tank design toward its optimum minimum-mass configuration. Critical issues addressed by development testing included piston operation, strength of thin-walled formed shells, alloy choice, and joining the end cap to the seamless deep drawn can. 9 refs., 8 figs.
Potential sound production by a deep-sea fish
NASA Astrophysics Data System (ADS)
Mann, David A.; Jarvis, Susan M.
2004-05-01
Swimbladder sonic muscles of deep-sea fishes were described over 35 years ago. Until now, no recordings of probable deep-sea fish sounds have been published. A sound likely produced by a deep-sea fish has been isolated and localized from an analysis of acoustic recordings made at the AUTEC test range in the Tongue of the Ocean, Bahamas, from four deep-sea hydrophones. This sound is typical of a fish sound in that it is pulsed and relatively low frequency (800-1000 Hz). Using time-of-arrival differences, the sound was localized to 548-696-m depth, where the bottom was 1620 m. The ability to localize this sound in real-time on the hydrophone range provides a great advantage for being able to identify the sound-producer using a remotely operated vehicle.
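The time-of-arrival-difference localization described here can be sketched in a few lines: given the arrival-time differences at several receivers, search for the source position whose predicted differences best match the measurements. The geometry below is a hypothetical 2D (range, depth) layout, not the AUTEC array, and a coarse grid search stands in for the range's real-time solver.

```python
import math

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s (assumed)

def toa_diffs(source, hydrophones):
    """Arrival-time differences at each hydrophone relative to the first."""
    t = [math.dist(source, h) / SOUND_SPEED for h in hydrophones]
    return [ti - t[0] for ti in t]

def localize(measured, hydrophones, xmax=2000, zmax=1600, step=10):
    """Grid search for the (range, depth) minimising the TDOA residual."""
    best, best_err = None, float("inf")
    for x in range(0, xmax + 1, step):
        for z in range(0, zmax + 1, step):
            cand = (float(x), float(z))
            err = sum((a - b) ** 2
                      for a, b in zip(toa_diffs(cand, hydrophones), measured))
            if err < best_err:
                best, best_err = cand, err
    return best

# Made-up hydrophone positions (range, depth) in metres.
hydros = [(0.0, 100.0), (1500.0, 200.0), (300.0, 1500.0), (1800.0, 1400.0)]
true_source = (600.0, 620.0)
estimate = localize(toa_diffs(true_source, hydros), hydros)
```

With four receivers the three independent time differences generically pin down a source position in this plane, which is why the hydrophone range can place the caller in the water column (here mid-water, well above the bottom) rather than merely detecting it.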
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. A worker at Astrotech Space Operations in Titusville, Fla., begins fueling the Deep Impact spacecraft. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. Workers at Astrotech Space Operations in Titusville, Fla., suit up before fueling the Deep Impact spacecraft. Scheduled for liftoff Jan. 12, Deep Impact will probe beneath the surface of Comet Tempel 1 on July 4, 2005, when the comet is 83 million miles from Earth, and reveal the secrets of its interior. After releasing a 3- by 3-foot projectile to crash onto the surface, Deep Impact's flyby spacecraft will collect pictures and data of how the crater forms, measuring the crater's depth and diameter, as well as the composition of the interior of the crater and any material thrown out, and determining the changes in natural outgassing produced by the impact. It will send the data back to Earth through the antennas of the Deep Space Network. Deep Impact is a NASA Discovery mission.
Training Deep Spiking Neural Networks Using Backpropagation.
Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael
2016-01-01
Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
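The core trick, treating the membrane potential as the differentiable signal and the spike discontinuity as noise, can be sketched for a single leaky integrate-and-fire neuron. Everything below is a simplified illustration with made-up constants, not the paper's full backpropagation-through-time formulation: the forward pass uses a hard threshold, while the gradient substitutes a steep-sigmoid derivative at each step and treats the reset path as non-differentiable.

```python
import math

def surrogate_deriv(v, theta=1.0, k=10.0):
    """Steep-sigmoid derivative standing in for the spike discontinuity."""
    s = 1.0 / (1.0 + math.exp(-k * (v - theta)))
    return k * s * (1.0 - s)

def run_and_grad(w, xs, decay=0.9, theta=1.0):
    """Forward pass of one LIF neuron, plus a surrogate gradient of its
    spike count with respect to the single input weight w."""
    v = dv_dw = grad = 0.0
    spikes = 0
    for x in xs:
        v = decay * v + w * x                       # membrane update
        dv_dw = decay * dv_dw + x                   # exact sensitivity to w
        grad += surrogate_deriv(v, theta) * dv_dw   # d(spike_t)/dw, surrogate
        if v >= theta:                              # forward: hard threshold
            spikes += 1
            v = dv_dw = 0.0                         # reset; not differentiated
    return spikes, grad

silent_spikes, g = run_and_grad(0.02, [1.0] * 20)   # too weak to fire
active_spikes, _ = run_and_grad(0.5, [1.0] * 20)    # fires periodically
```

Even when the neuron never crosses threshold and the true gradient of the spike count is zero everywhere, the surrogate gradient stays positive, so a step like `w -= lr * 2 * (spikes - target) * grad` can still pull the firing rate toward a target; this is what makes end-to-end training of deep SNNs possible.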
Hybrid Warfare: How to Shape Special Operations Forces
2016-06-10
irregular tactics, information operations, and deliberate terrorism as they waged war against Russia in the territory of Chechnya and operated deep in Russian territory. Russia initially had to withdraw its forces from Chechnya, but later, led by a former KGB operative, Vladimir Putin, it was able to defeat the Chechen rebels
Beacon Spacecraft Operations: Lessons in Automation
NASA Technical Reports Server (NTRS)
Sherwood, R.; Schlutsmeyer, A.; Sue, M.; Szijjarto, J.; Wyatt, E. J.
2000-01-01
A new approach to mission operations has been flight validated on NASA's Deep Space One (DS1) mission that launched in October 1998. The beacon monitor operations technology is aimed at decreasing the total volume of downlinked engineering telemetry by reducing the frequency of downlink and the volume of data received per pass.
Reducing cost with autonomous operations of the Deep Space Network radio science receiver
NASA Technical Reports Server (NTRS)
Asmar, S.; Anabtawi, A.; Connally, M.; Jongeling, A.
2003-01-01
This paper describes the Radio Science Receiver system and the savings it has brought to mission operations. The design and implementation of remote and autonomous operations will be discussed along with the process of including user feedback along the way and lessons learned and procedures avoided.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Washington, DC.
This lesson guide accompanies the Hubble Deep Field set of 10 lithographs and introduces 4 astronomy lesson plans for middle school students. Lessons include: (1) "How Many Objects Are There?"; (2) "Classifying and Identifying"; (3) "Estimating Distances in Space"; and (4) "Review and Assessment." Appendices…
Roles and Relations in Language Deep Structure. Studies in Language Education, Report No. 9.
ERIC Educational Resources Information Center
O'Donnell, Roy C.
This essay discusses a theory of grammar which incorporates Chomsky's distinction between deep and surface structure and accepts Fillmore's proposal to exclude such concepts as subject and direct object from the base structure. While recognizing the need for specifying an underlying set of caselike relations, it is proposed that this need can best…
Badaoui, Rachid; Cabaret, Aurélie; Alami, Youssef; Zogheib, Elie; Popov, Ivan; Lorne, Emmanuel; Dupont, Hervé
2016-02-01
Sugammadex is the first molecule able to antagonize steroidal muscle relaxants with few adverse effects. Doses are adjusted to body weight and the level of neuromuscular blockade. Sleeve gastrectomy is becoming a very popular form of bariatric surgery. It requires deep muscle relaxation followed by complete and rapid reversal to decrease postoperative and especially post-anaesthetic morbidity. Sugammadex is therefore particularly indicated in this setting. The objective of this study was to evaluate the deep neuromuscular blockade reversal time after administration of various doses of sugammadex (based on real weight or at lower doses). Secondary endpoints were the interval between the sugammadex injection and extubation and transfer from the operating room to the recovery room. We then investigated any complications observed in the recovery room. This pilot, prospective, observational, clinical practice evaluation study was conducted in the Amiens University Hospital. Neuromuscular blockade was induced by rocuronium. At the end of the operation, deep neuromuscular blockade was reversed by sugammadex at the dose of 4mg/kg. Sixty-four patients were included: 31 patients received sugammadex at a dosage based on their real weight (RW) and 33 patients received a lower dose (based on ideal weight [IW]). For identical rocuronium doses calculated based on IBW, sugammadex doses were significantly lower in the IW group: 349 (± 65) mg versus 508 (± 75) mg (P<0.0001). Despite this dose reduction, neuromuscular blockade reversal took 115 (± 69) s in the IW group versus 87 (± 40) s in the RW group, but with no significant difference between the two groups (P=0.08). The intervals between injection of sugammadex and extubation (P=0.07) and transfer from the operating room to the recovery room (P=0.68) were also non-significantly longer in the IW group. 
The mean dose of sugammadex used by anaesthetists in the IW group was 4 mg/kg of ideal weight increased by 35% to 50% (n=20; 351 ± 34 mg). No sugammadex adverse effects and no residual neuromuscular blockades were observed. Postoperative nausea and vomiting (PONV) was observed in 19.4% of patients in the real weight group versus 27.3% in the ideal weight group (P=NS). Reversal of deep neuromuscular blockade by sugammadex in obese subjects can be performed at doses of 4 mg/kg of ideal weight plus 35-50%, with no clinical consequences and no accentuation of adverse effects. Copyright © 2015 Société française d’anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.
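The dosing rule the study converges on, 4 mg/kg of ideal weight plus 35-50%, is simple arithmetic; a minimal sketch follows (the 70 kg example ideal weight is invented, and ideal weight itself must come from whatever formula the clinician uses, which the abstract does not specify):

```python
def sugammadex_dose_range(ideal_weight_kg, base_mg_per_kg=4.0,
                          uplift=(0.35, 0.50)):
    """Dose range per the study's conclusion: 4 mg/kg IW plus 35-50%."""
    base = base_mg_per_kg * ideal_weight_kg
    return tuple(round(base * (1 + u), 1) for u in uplift)

# Hypothetical patient with a 70 kg ideal weight.
print(sugammadex_dose_range(70.0))  # (378.0, 420.0)
```

This brackets the mean observed IW-group dose of 351 ± 34 mg for slightly lighter ideal weights.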
SEMANTIC3D.NET: a New Large-Scale Point Cloud Classification Benchmark
NASA Astrophysics Data System (ADS)
Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.
2017-05-01
This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse, which already show remarkable performance improvements over the state of the art. CNNs have become the de facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to the lack of training data. With the massive data set presented in this paper, we aim to close this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides denser and more complete point clouds, with a much higher overall number of labelled points, than those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.
NASA Astrophysics Data System (ADS)
Lecun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
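The core mechanism described here, backpropagation adjusting each layer's parameters from the representation error passed down from the layer above, can be sketched with a minimal two-layer network in pure NumPy (toy XOR data; all sizes and learning-rate values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, which no single linear layer can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two processing layers: a hidden representation, then a logistic output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # layer-1 representation
    p = sigmoid(h @ W2 + b2)          # layer-2 output
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagation: the chain rule pushes the error signal from the
    # output layer back through the hidden layer.
    dp = (p - y) * p * (1 - p)        # dL/dz2 (MSE through sigmoid)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)     # dL/dz1 through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(round(losses[0], 3), round(losses[-1], 4))
```

Each update changes a layer's internal parameters using only the gradient arriving from the layer above it, which is the property that lets the same recipe scale to the deep convolutional and recurrent nets the abstract mentions.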
Deep convolutional neural network for prostate MR segmentation
NASA Astrophysics Data System (ADS)
Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei
2017-03-01
Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%+/-3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
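The Dice similarity coefficient used above to score segmentations against the manual ground truth can be computed directly from two binary masks; a minimal sketch (the example arrays are made up):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2/(3+3) ≈ 0.667
```

A reported 85.3% Dice thus means the predicted and manual prostate masks overlap in roughly this proportion of their combined area.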
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-28
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Deep learning based state recognition of substation switches
NASA Astrophysics Data System (ADS)
Wang, Jin
2018-06-01
Unlike the traditional method, which recognizes the state of substation switches from the running rules of the electrical power system, this work proposes a novel convolutional neural network-based state recognition approach for substation switches. Inspired by the theory of transfer learning, we first establish a convolutional neural network model trained on the large-scale image set ILSVRC2012; then a restricted Boltzmann machine is employed to replace the fully connected layer of the convolutional neural network and is trained on our small image dataset of 110 kV substation switches to obtain a stronger model. Experiments conducted on our image dataset of 110 kV substation switches show that the proposed approach is applicable in substations to reduce running costs and implement truly unattended operation.
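The transfer-learning idea, keeping a large-dataset-pretrained feature extractor fixed and retraining only the final classifier on the small target dataset, can be sketched in pure NumPy. Everything here is a stand-in: the frozen random projection plays the role of the ILSVRC2012-pretrained layers, and the data and labels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Pretrained" feature extractor: frozen weights stand in for the
# convolutional layers trained on a large image set (illustrative only).
W_frozen = rng.normal(0, 1, (64, 16))

def extract_features(images):
    return np.maximum(images @ W_frozen, 0.0)   # frozen ReLU features

# Small synthetic target dataset (stand-in for the switch images).
X = rng.normal(0, 1, (200, 64))
F = extract_features(X)

# Toy ground truth ("open"/"closed") that is learnable from the features.
w_hidden = rng.normal(0, 1, 16)
labels = (F @ w_hidden > np.median(F @ w_hidden)).astype(float)

# Transfer learning: train ONLY the new classifier head on the target data.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    z = np.clip(F @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    grad = p - labels
    w -= lr * (F.T @ grad) / len(X)
    b -= lr * grad.mean()

acc = float(np.mean((p > 0.5) == (labels > 0.5)))
print(acc)
```

The paper replaces the final layer with a restricted Boltzmann machine rather than the logistic head used here; the shared point is that only the small head is trained on the small dataset, while the expensively pretrained features stay fixed.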
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the University of Minnesota-Twin Cities work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the South Dakota School of Mines & Technology work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from Montana Tech of the University of Montana work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the Illinois Institute of Technology work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the University of North Carolina at Charlotte work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from Temple University work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
First deep space operational experience with simultaneous X- and Ka-bands coherent tracking
NASA Technical Reports Server (NTRS)
Asmar, S.; Herrera, R.; Armstrong, J.; Barbinis, E.; Fleischman, D.; Gatti, M.; Goltz, G.
2002-01-01
This paper describes the new DSN science capability and highlights of the engineering work that led to its development. It will also discuss experience with operations, along with statistics and data quality.
NASA Technical Reports Server (NTRS)
Fayyad, Kristina E.; Hill, Randall W., Jr.; Wyatt, E. J.
1993-01-01
This paper presents a case study of the knowledge engineering process employed to support the Link Monitor and Control Operator Assistant (LMCOA). The LMCOA is a prototype system which automates the configuration, calibration, test, and operation (referred to as precalibration) of the communications, data processing, metric data, antenna, and other equipment used to support space-ground communications with deep space spacecraft in NASA's Deep Space Network (DSN). The primary knowledge base in the LMCOA is the Temporal Dependency Network (TDN), a directed graph which provides a procedural representation of the precalibration operation. The TDN incorporates precedence, temporal, and state constraints and uses several supporting knowledge bases and data bases. The paper provides a brief background on the DSN, and describes the evolution of the TDN and supporting knowledge bases, the process used for knowledge engineering, and an analysis of the successes and problems of the knowledge engineering effort.
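A Temporal Dependency Network's precedence constraints form a directed acyclic graph, so a legal execution order for the precalibration steps can be obtained by topological sorting. A minimal sketch using Python's standard-library graphlib (the step names are invented, not the actual TDN contents):

```python
from graphlib import TopologicalSorter

# Hypothetical precalibration steps; each key must run only after
# every step in its value set has completed (precedence constraints).
tdn = {
    "configure_antenna":  set(),
    "configure_receiver": set(),
    "calibrate_receiver": {"configure_receiver"},
    "point_antenna":      {"configure_antenna"},
    "end_to_end_test":    {"calibrate_receiver", "point_antenna"},
}

order = list(TopologicalSorter(tdn).static_order())
print(order)

# Sanity check: every dependency appears before the step that needs it.
pos = {step: i for i, step in enumerate(order)}
assert all(pos[dep] < pos[step] for step, deps in tdn.items() for dep in deps)
```

The real TDN also carries temporal and state constraints, which would sit on top of this ordering skeleton rather than replace it.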
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Baldwin, John
2007-01-01
TIGRAS is client-side software that provides tracking-station equipment planning, allocation, and scheduling services to the DSMS (Deep Space Mission System). TIGRAS provides functions for schedulers to coordinate DSN (Deep Space Network) antenna usage time and to resolve resource usage conflicts among tracking passes, antenna calibrations, maintenance, and system testing activities. TIGRAS provides a fully integrated multi-pane graphical user interface for all scheduling operations, a great improvement over the legacy VAX VMS command-line user interface. TIGRAS has the capability to handle all DSN resource scheduling aspects, from long-range to real time. TIGRAS assists NASA mission operations with DSN tracking-station equipment resource request processes, from long-range load forecasts (ten years or longer) to midrange, short-range, and real-time (less than one week) emergency tracking plan changes. TIGRAS can be operated by NASA mission operations worldwide to make schedule requests for DSN station equipment.
NASA Astrophysics Data System (ADS)
Schoepp, Juergen
The internal transition of the deep center Ni2+ in the II-VI semiconductor cadmium sulfide is examined with reference to crystal field theory. An algorithm was developed to calculate, in a basis adapted to trigonal symmetry, the fine-structure operator matrix, which is the sum of operators from spin-orbit coupling, the trigonal field, and electron-phonon coupling. The dependence of the energy levels on mass was calculated in order to examine the isotope effect of the Ni2+ transition. The mass dependence of the phonon energy was estimated in an atomic cluster using the Keating valence force model for the elastic energy. The Zeeman behavior of the Ni2+ transition was examined in magnetic fields; the Zeeman operator was added to the fine-structure operator and the resulting matrix was diagonalized. The calculations are in quantitative and qualitative agreement with experiment.
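The final computational step, adding a Zeeman operator to the fine-structure matrix and diagonalizing the total, can be sketched for a toy spin-1 system in NumPy (the axial splitting D, g-factor, and field values are illustrative, not the actual Ni2+-in-CdS parameters):

```python
import numpy as np

# Spin-1 operator S_z in the |m = +1, 0, -1> basis.
Sz = np.diag([1.0, 0.0, -1.0])

D = 1.0            # illustrative axial fine-structure splitting (arb. units)
g, muB = 2.0, 1.0  # illustrative g-factor and Bohr magneton

H_fs = D * (Sz @ Sz)                    # fine-structure part

def zeeman_levels(B):
    H = H_fs + g * muB * B * Sz         # add Zeeman operator, field along z
    return np.linalg.eigvalsh(H)        # diagonalize the total matrix

for B in (0.0, 0.5, 1.0):
    print(B, np.round(zeeman_levels(B), 3))
```

At zero field the m = ±1 doublet sits D above m = 0; switching on the field splits the doublet linearly, which is the pattern one fits against the measured Zeeman behavior.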
Dynamic Sampling of Cabin VOCs during the Mission Operations Test of the Deep Space Habitat
NASA Technical Reports Server (NTRS)
Monje, Oscar; Rojdev, Kristina
2013-01-01
The atmospheric composition inside spacecraft is dynamic due to changes in crew metabolism and payload operations. A portable FTIR gas analyzer was used to monitor the atmospheric composition of four modules (Core lab, Veggie Plant Atrium, Hygiene module, and Xhab loft) within the Deep Space Habitat (DSH) during the Mission Operations Test (MOT) conducted at the Johnson Space Center. The FTIR was either physically relocated to a new location or its plumbing was changed so that a different location was monitored. An FTIR application covering 20 gases was used, and the FTIR was zeroed with N2 gas every time it was relocated. The procedures developed for operating the FTIR were successful: all data were collected and the FTIR worked during the entire MOT mission. Not all 20 gases in the application were detected, but it was possible to measure dynamic VOC concentrations in each DSH location.
Deep-reasoning fault diagnosis - An aid and a model
NASA Technical Reports Server (NTRS)
Yoon, Wan Chul; Hammer, John M.
1988-01-01
The design and evaluation are presented for the knowledge-based assistance of a human operator who must diagnose a novel fault in a dynamic, physical system. A computer aid based on a qualitative model of the system was built to help the operators overcome some of their cognitive limitations. This aid differs from most expert systems in that it operates at several levels of interaction that are believed to be more suitable for deep reasoning. Four aiding approaches, each of which provided unique information to the operator, were evaluated. The aiding features were designed to help the human's causal reasoning about the system in predicting normal system behavior (N aiding), integrating observations into actual system behavior (O aiding), finding discrepancies between the two (O-N aiding), or finding discrepancies between observed behavior and hypothetical behavior (O-HN aiding). Human diagnostic performance was found to improve by almost a factor of two with O aiding and O-N aiding.
The Telecommunications and Data Acquisition Report. [Deep Space Network
NASA Technical Reports Server (NTRS)
Posner, E. C. (Editor)
1986-01-01
This publication, one of a series formerly titled The Deep Space Network Progress Report, documents DSN progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations. In addition, developments in Earth-based radio technology as applied to geodynamics, astrophysics and the radio search for extraterrestrial intelligence are reported.
Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang
2017-11-13
Prostate cancer (PCa) is a major cause of death since ancient time documented in Egyptian Ptolemaic mummy imaging. PCa detection is critical to personalized medicine and varies considerably under an MRI scan. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning with deep convolutional neural network (DCNN) and a non-deep learning with SIFT image feature and bag-of-word (BoW), a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from prostate benign conditions (BCs) patients with prostatitis or prostate benign hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristics curve (AUC) than non-deep learning (P = 0.0007 < 0.001). The AUCs were 0.84 (95% CI 0.78-0.89) for deep learning method and 0.70 (95% CI 0.63-0.77) for non-deep learning method, respectively. Our results suggest that deep learning with DCNN is superior to non-deep learning with SIFT image feature and BoW model for fully automated PCa patients differentiation from prostate BCs patients. Our deep learning method is extensible to image modalities such as MR imaging, CT and PET of other organs.
Prediction of Occult Invasive Disease in Ductal Carcinoma in Situ Using Deep Learning Features.
Shi, Bibo; Grimm, Lars J; Mazurowski, Maciej A; Baker, Jay A; Marks, Jeffrey R; King, Lorraine M; Maley, Carlo C; Hwang, E Shelley; Lo, Joseph Y
2018-03-01
The aim of this study was to determine whether deep features extracted from digital mammograms using a pretrained deep convolutional neural network are prognostic of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. In this retrospective study, digital mammographic magnification views were collected for 99 subjects with DCIS at biopsy, 25 of which were subsequently upstaged to invasive cancer. A deep convolutional neural network model that was pretrained on nonmedical images (eg, animals, plants, instruments) was used as the feature extractor. Through a statistical pooling strategy, deep features were extracted at different levels of convolutional layers from the lesion areas, without sacrificing the original resolution or distorting the underlying topology. A multivariate classifier was then trained to predict which tumors contain occult invasive disease. This was compared with the performance of traditional "handcrafted" computer vision (CV) features previously developed specifically to assess mammographic calcifications. The generalization performance was assessed using Monte Carlo cross-validation and receiver operating characteristic curve analysis. Deep features were able to distinguish DCIS with occult invasion from pure DCIS, with an area under the receiver operating characteristic curve of 0.70 (95% confidence interval, 0.68-0.73). This performance was comparable with the handcrafted CV features (area under the curve = 0.68; 95% confidence interval, 0.66-0.71) that were designed with prior domain knowledge. Despite being pretrained on only nonmedical images, the deep features extracted from digital mammograms demonstrated comparable performance with handcrafted CV features for the challenging task of predicting DCIS upstaging. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.
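The least-squares SVM regression level mentioned above reduces, in its dual form, to solving a single linear system rather than a quadratic program. A minimal NumPy sketch with an RBF kernel (the kernel width and regularization values are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Dual LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]]
    [b; alpha] = [0; y], then predict f(x) = sum_i alpha_i k(x, x_i) + b."""
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b

X = np.linspace(0, 2 * np.pi, 30)[:, None]
y = np.sin(X[:, 0])
f = lssvm_fit(X, y)
print(float(np.max(np.abs(f(X) - y))))  # small training residual
```

In the letter's framework the dual variables alpha correspond to hidden features, and deep RKMs arise from coupling several such levels (e.g. kernel PCA levels feeding a regression level like this one).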
Deep level transient spectroscopy (DLTS) on colloidal-synthesized nanocrystal solids.
Bozyigit, Deniz; Jakob, Michael; Yarema, Olesya; Wood, Vanessa
2013-04-24
We demonstrate current-based, deep level transient spectroscopy (DLTS) on semiconductor nanocrystal solids to obtain quantitative information on deep-lying trap states, which play an important role in the electronic transport properties of these novel solids and impact optoelectronic device performance. Here, we apply this purely electrical measurement to an ethanedithiol-treated, PbS nanocrystal solid and find a deep trap with an activation energy of 0.40 eV and a density of NT = 1.7 × 10(17) cm(-3). We use these findings to draw and interpret band structure models to gain insight into charge transport in PbS nanocrystal solids and the operation of PbS nanocrystal-based solar cells.
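The trap activation energy in a DLTS analysis comes from the Arrhenius form of the thermal emission rate, e(T) ∝ T² · exp(−E_a / kT); extracting E_a from measured emission rates at two temperatures can be sketched as follows (the rates here are synthetic, generated for a 0.40 eV trap, not the PbS measurement data):

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(T1, e1, T2, e2):
    """Slope of ln(e/T^2) vs 1/T gives -Ea/k (standard DLTS analysis)."""
    y1, y2 = np.log(e1 / T1**2), np.log(e2 / T2**2)
    return -k_B * (y1 - y2) / (1.0 / T1 - 1.0 / T2)

# Synthetic check: generate emission rates for a 0.40 eV trap,
# then recover the energy from two temperatures.
Ea_true, A = 0.40, 1e7
def emission(T):
    return A * T**2 * np.exp(-Ea_true / (k_B * T))

T1, T2 = 200.0, 250.0
print(activation_energy(T1, emission(T1), T2, emission(T2)))  # ≈ 0.40
```

Real DLTS data would fit the ln(e/T²) vs 1/T line over many temperatures; the two-point version shows the algebra the fit rests on.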
Chen, Chun-Yen; Chang, Hsin-Yueh
2016-03-01
Microalgae-based biodiesel has been recognized as a sustainable and promising alternative to fossil diesel. High lipid productivity of microalgae is required for economic production of biodiesel from microalgae. This study was undertaken to enhance the growth and oil accumulation of an indigenous microalga Chlorella sorokiniana CY1 by applying engineering strategies using deep-sea water as the medium. First, the microalga was cultivated using LED as the immersed light source, and the results showed that the immersed LED could effectively enhance the oil/lipid content and final microalgal biomass concentration to 53.8% and 2.5 g/L, respectively. Next, the semi-batch photobioreactor operation with deep-sea water was shown to improve lipid content and microalgal growth over those from using batch and continuous cultures under similar operating conditions. The optimal replacement ratio was 50%, resulting in an oil/lipid content and final biomass concentration of 61.5% and 2.8 g/L, respectively. A long-term semi-batch culture utilizing 50%-replaced medium was carried out for four runs. The final biomass concentration and lipid productivity were 2.5 g/L and 112.2 mg/L/d, respectively. The fatty acid composition of the microalgal lipids was dominated by palmitic acid, stearic acid, oleic acid and linoleic acid, and this lipid quality is suitable for biodiesel production. This demonstrates that optimizing light source arrangement, bioreactor operation and deep-sea water supplements could effectively promote the lipid production of C. sorokiniana CY1 for applications in the microalgae-based biodiesel industry. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Zucker, Shay; Giryes, Raja
2018-04-01
Transits of habitable planets around solar-like stars are expected to be shallow, and to have long periods, which means low information content. The current bottleneck in the detection of such transits is caused in large part by the presence of red (correlated) noise in the light curves obtained from the dedicated space telescopes. Based on the groundbreaking results deep learning achieves in many signal and image processing applications, we propose to use deep neural networks to solve this problem. We present a feasibility study, in which we applied a convolutional neural network on a simulated training set. The training set comprised light curves received from a hypothetical high-cadence space-based telescope. We simulated the red noise by using Gaussian Processes with a wide variety of hyper-parameters. We then tested the network on a completely different test set simulated in the same way. Our study proves that very difficult cases can indeed be detected. Furthermore, we show how detection trends can be studied and detection biases quantified. We have also checked the robustness of the neural-network performance against practical artifacts such as outliers and discontinuities, which are known to affect space-based high-cadence light curves. Future work will allow us to use the neural networks to characterize the transit model and identify individual transits. This new approach will certainly be an indispensable tool for the detection of habitable planets in the future planet-detection space missions such as PLATO.
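Red noise of the kind described can be simulated by drawing a sample from a Gaussian Process with a squared-exponential covariance and injecting a shallow box-shaped transit into the correlated light curve. A minimal sketch (all hyper-parameters, cadence, and depths are illustrative, not the paper's training-set values):

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.linspace(0.0, 10.0, 500)          # days, hypothetical cadence

# Squared-exponential GP covariance -> correlated ("red") noise.
amp, length = 300e-6, 0.5                # relative-flux amplitude, days
cov = amp**2 * np.exp(-0.5 * ((t[:, None] - t[None, :]) / length) ** 2)
red = rng.multivariate_normal(np.zeros_like(t),
                              cov + 1e-12 * np.eye(len(t)))  # jitter for PSD

white = rng.normal(0.0, 100e-6, t.shape)  # uncorrelated photon noise
flux = 1.0 + red + white

# Inject a shallow box transit (depth in the habitable-planet regime).
depth, t0, dur = 200e-6, 5.0, 0.2         # relative depth, center, duration
in_transit = np.abs(t - t0) < dur / 2
flux[in_transit] -= depth

print(flux.shape, int(in_transit.sum()))
```

Because the transit depth is below the red-noise amplitude, a simple threshold on the folded light curve fails here, which is exactly the regime where the paper argues a convolutional network trained on such simulations can still detect the signal.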
NASA Technical Reports Server (NTRS)
Estefan, Jeff A.; Giovannoni, Brian J.
2014-01-01
The Advanced Multi-Mission Operations Systems (AMMOS) is NASA's premier space mission operations product line offering for use in deep-space robotic and astrophysics missions. The general approach to AMMOS modernization over the course of its 29-year history exemplifies a continual, evolutionary approach with periods of sponsor investment peaks and valleys in between. Today, the Multimission Ground Systems and Services (MGSS) office-the program office that manages the AMMOS for NASA-actively pursues modernization initiatives and continues to evolve the AMMOS by incorporating enhanced capabilities and newer technologies into its end-user tool and service offerings. Despite the myriad of modernization investments that have been made over the evolutionary course of the AMMOS, pain points remain. These pain points, based on interviews with numerous flight project mission operations personnel, can be classified principally into two major categories: 1) information-related issues, and 2) process-related issues. By information-related issues, we mean pain points associated with the management and flow of MOS data across the various system interfaces. By process-related issues, we mean pain points associated with the MOS activities performed by mission operators (i.e., humans) and supporting software infrastructure used in support of those activities. In this paper, three foundational concepts-Timeline, Closed Loop Control, and Separation of Concerns-collectively form the basis for expressing a set of core architectural tenets that provides a multifaceted approach to AMMOS system architecture modernization intended to address the information- and process-related issues. Each of these architectural tenets will be further explored in this paper. 
Ultimately, we envision the application of these core tenets resulting in a unified vision of a future-state architecture for the AMMOS-one that is intended to result in a highly adaptable, highly efficient, and highly cost-effective set of multimission MOS products and services.
MOS 2.0: The Next Generation in Mission Operations Systems
NASA Technical Reports Server (NTRS)
Bindschadler, Duane L.; Boyles, Carole A.; Carrion, Carlos; Delp, Chris L.
2010-01-01
A Mission Operations System (MOS) or Ground System constitutes that portion of an overall space mission Enterprise that resides here on Earth. Over the past two decades, technological innovations in computing and software technologies have allowed an MOS to support ever more complex missions while consuming a decreasing fraction of Project development budgets. Despite (or perhaps, because of) such successes, it is routine to hear concerns about the cost of MOS development. At the same time, demand continues for Ground Systems which will plan more spacecraft activities with fewer commanding errors, provide scientists and engineers with more autonomous functionality, process and manage larger and more complex data more quickly, all while requiring fewer people to develop, deploy, operate and maintain them. One successful approach to such concerns over this period is a multimission approach, based on the reuse of portions (most often software) developed and used in previous missions. The Advanced Multi-Mission Operations System (AMMOS), developed for deep-space science missions, is one successful example of such an approach. Like many computing-intensive systems, it has grown up in a near-organic fashion from a relatively simple set of tools into a complexly interrelated set of capabilities. Such systems, like a city lacking any concept of urban planning, can and will grow in ways that are neither efficient nor particularly easy to sustain. To meet the growing demands and unyielding constraints placed on ground systems, a new approach is necessary. Under the aegis of a multi-year effort to revitalize the AMMOS's multimission operations capabilities, we are utilizing modern practices in systems architecting and model-based engineering to create the next step in Ground Systems: MOS 2.0. 
In this paper we outline our work (ongoing and planned) to architect and design a multimission MOS 2.0, describe our goals and measurable objectives, and discuss some of the benefits that this top-down, architectural approach holds for creating a more flexible and capable MOS for missions while holding the line on cost.
A Poor Relationship Between Sea Level and Deep-Water Sand Delivery
NASA Astrophysics Data System (ADS)
Harris, Ashley D.; Baumgardner, Sarah E.; Sun, Tao; Granjeon, Didier
2018-08-01
The most commonly cited control on delivery of sand to deep water is the rate of relative sea-level fall. The rapid rate of accommodation loss on the shelf causes sedimentation to shift basinward. Field and experimental numerical modeling studies have shown that deep-water sand delivery can occur during any stage of relative sea level position and across a large range of values of rate of relative sea-level change. However, these studies did not investigate the impact of sediment transport efficiency on the relationship between rate of relative sea-level change and deep-water sand delivery rate. We explore this relationship using a deterministic nonlinear diffusion-based numerical stratigraphic forward model. We vary across three orders of magnitude the diffusion coefficient value for marine settings, which controls sediment transport efficiency. We find that the rate of relative sea-level change can explain no more than 1% of the variability in deep-water sand delivery rates, regardless of sediment transport efficiency. Model results show a better correlation with relative sea level, with up to 55% of the variability in deep water sand delivery rates explained. The results presented here are consistent with studies of natural settings which suggest stochastic processes such as avulsion and slope failure, and interactions among such processes, may explain the remaining variance. Relative sea level is a better predictor of deep-water sand delivery than rate of relative sea-level change because it is the sea-level fall itself which promotes sand delivery, not the rate of the fall. We conclude that the poor relationship between sea level and sand delivery is not an artifact of the modeling parameters but is instead due to the inadequacy of relative sea level and the rate of relative sea-level change to fully describe the dimensional space in which depositional systems reside. 
Consequently, sea level by itself cannot account for the interaction of multiple processes that contribute to sand delivery to deep water.
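The diffusion-based stratigraphic forward modeling the abstract describes can be illustrated with a minimal 1-D sketch: explicit finite-difference diffusion of an elevation profile, in which the marine diffusion coefficient K stands in for sediment transport efficiency. All function names, grid values, and coefficients below are invented for illustration and are not taken from the study's model.

```python
# Minimal 1-D sketch of a diffusion-based stratigraphic forward model.
# Elevation h evolves by dh/dt = d/dx (K * dh/dx); the diffusion
# coefficient K stands in for sediment transport efficiency.
# Boundary cells are held fixed (Dirichlet boundaries).

def diffuse(h, K, dt, dx, steps):
    """Explicit finite-difference diffusion of an elevation profile."""
    h = list(h)
    for _ in range(steps):
        # flux between adjacent cells
        flux = [K * (h[i + 1] - h[i]) / dx for i in range(len(h) - 1)]
        new = h[:]
        for i in range(1, len(h) - 1):
            new[i] = h[i] + dt * (flux[i] - flux[i - 1]) / dx
        h = new
    return h

profile = [100.0] * 5 + [0.0] * 5   # a shelf edge: high shelf, deep basin
low_eff = diffuse(profile, K=0.1, dt=0.1, dx=1.0, steps=200)
high_eff = diffuse(profile, K=1.0, dt=0.1, dx=1.0, steps=200)
# Higher transport efficiency moves more sediment basinward, raising
# the deep-water (last interior) cell faster.
print(round(low_eff[-2], 3), round(high_eff[-2], 3))
```

Varying K across orders of magnitude, as the study does, changes how quickly sediment reaches the basin for the same forcing history.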
Design of Omni Directional Remotely Operated Vehicle (ROV)
NASA Astrophysics Data System (ADS)
Rahimuddin; Hasan, Hasnawiya; Rivai, Haryanti A.; Iskandar, Yanu; Claudio, P.
2018-02-01
Nowadays, underwater activities are increasing along with the growing search for oil resources. The gap between demand and supply of oil and gas drives engineers to seek oil and gas resources in deep water. On the other hand, the high risk of working in the deep underwater environment can create dangerous situations for humans. Therefore, many research activities are developing underwater vehicles, such as ROVs (Remotely Operated Vehicles), to replace human work. The vehicle is operated using a tether that carries signals and electric power from the surface vessel. The arrangement of weight, buoyancy, and propeller placement is a significant aspect in designing the vehicle's performance. This paper presents a design concept of an ROV for survey and observation of underwater objects, with interacting vectored propellers used for the vehicle's motions.
Maintenance and operations cost model for DSN subsystems
NASA Technical Reports Server (NTRS)
Burt, R. W.; Lesh, J. R.
1977-01-01
A procedure is described which partitions the recurring costs of the Deep Space Network (DSN) over the individual DSN subsystems. The procedure results in a table showing the maintenance, operations, sustaining engineering, and supportive costs for each subsystem.
NASA Astrophysics Data System (ADS)
Tsehay, Yohannes K.; Lay, Nathan S.; Roth, Holger R.; Wang, Xiaosong; Kwak, Jin Tae; Turkbey, Baris I.; Pinto, Peter A.; Wood, Brad J.; Summers, Ronald M.
2017-03-01
Prostate cancer (PCa) is the second most common cause of cancer-related deaths in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists, leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that can automatically detect lesions on mpMRI that readers can use as a reference. We investigated a deep convolutional neural network (DCNN) architecture to find an improved solution for PCa detection on mpMRI. We adopted a network architecture from a state-of-the-art edge detector that takes an image as an input and produces an image probability map. Two-fold cross validation along with receiver operating characteristic (ROC) analysis and free-response ROC (FROC) analysis were used to determine our deep-learning-based prostate CAD's (CADDL) performance. The efficacy was compared to an existing prostate CAD system that is based on hand-crafted features and was evaluated on the same test set. CADDL had an 86% detection rate at a 20% false-positive rate, while the top-down learning CAD had an 80% detection rate at the same false-positive rate, which translated to 94% and 85% detection rates at 10 false positives per patient on the FROC. A CNN-based CAD is able to detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate CAD, showing potential for further development.
Alabama Ground Operations during the Deep Convective Clouds and Chemistry Experiment
NASA Technical Reports Server (NTRS)
Carey, Lawrence; Blakeslee, Richard; Koshak, William; Bain, Lamont; Rogers, Ryan; Kozlowski, Danielle; Sherrer, Adam; Saari, Matt; Bigelbach, Brandon; Scott, Mariana;
2013-01-01
The Deep Convective Clouds and Chemistry (DC3) field campaign investigates the impact of deep, midlatitude convective clouds, including their dynamical, physical and lightning processes, on upper tropospheric composition and chemistry. DC3 science operations took place from 14 May to 30 June 2012. The DC3 field campaign utilized instrumented aircraft and ground-based observations. The NCAR Gulfstream-V (GV) observed a variety of gas-phase species, radiation and cloud particle characteristics in the high-altitude outflow of storms while the NASA DC-8 characterized the convective inflow. Ground-based radar networks were used to document the kinematic and microphysical characteristics of storms. In order to study the impact of lightning on convective outflow composition, VHF-based lightning mapping arrays (LMAs) provided detailed three-dimensional measurements of flashes. Mobile soundings were utilized to characterize the meteorological environment of the convection. Radar, sounding and lightning observations were also used in real time to provide forecasting and mission guidance to the aircraft operations. Combined aircraft and ground-based observations were conducted at three locations, 1) northeastern Colorado, 2) Oklahoma/Texas and 3) northern Alabama, to study different modes of deep convection in a variety of meteorological and chemical environments. The objective of this paper is to summarize the Alabama ground operations and provide a preliminary assessment of the ground-based observations collected over northern Alabama during DC3. The multi-Doppler, dual-polarization radar network consisted of the UAHuntsville Advanced Radar for Meteorological and Operational Research (ARMOR), the UAHuntsville Mobile Alabama X-band (MAX) radar and the Hytop (KHTX) Weather Surveillance Radar 88 Doppler (WSR-88D). Lightning frequency and structure were observed in near real time by the NASA MSFC Northern Alabama LMA (NALMA).
Pre-storm and inflow proximity soundings were obtained with the UAHuntsville mobile sounding unit and the Redstone Arsenal (QAG) morning sounding.
Electronic Components and Circuits for Extreme Temperature Environments
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad; Dickman, John E.; Gerber, Scott
2003-01-01
Planetary exploration missions and deep space probes require electrical power management and control systems that are capable of efficient and reliable operation in very low temperature environments. Presently, spacecraft operating in the cold environment of deep space carry a large number of radioisotope heating units in order to maintain the surrounding temperature of the on-board electronics at approximately 20 C. Electronics capable of operation at cryogenic temperatures will not only tolerate the hostile environment of deep space but also reduce system size and weight by eliminating or reducing the radioisotope heating units and their associated structures, thereby reducing system development as well as launch costs. In addition, power electronic circuits designed for operation at low temperatures are expected to result in more efficient systems than those at room temperature. This improvement results from better behavior and tolerance in the electrical and thermal properties of semiconductor and dielectric materials at low temperatures. The Low Temperature Electronics Program at the NASA Glenn Research Center focuses on research and development of electrical components, circuits, and systems suitable for applications in the aerospace environment and deep space exploration missions. Research is being conducted on devices and systems for reliable use down to cryogenic temperatures. Some of the commercial-off-the-shelf as well as developed components that are being characterized include switching devices, resistors, magnetics, and capacitors. Semiconductor devices and integrated circuits including digital-to-analog and analog-to-digital converters, DC/DC converters, operational amplifiers, and oscillators are also being investigated for potential use in low temperature applications. An overview of the NASA Glenn Research Center Low Temperature Electronic Program will be presented in this paper.
A description of the low temperature test facilities along with selected data obtained through in-house component and circuit testing will also be discussed. Ongoing research activities that are being performed in collaboration with various organizations will also be presented.
The Great Observatories Origins Deep Survey (GOODS) Spitzer Legacy Science Program
NASA Astrophysics Data System (ADS)
Dickinson, M.; GOODS Team
2004-12-01
The Great Observatories Origins Deep Survey (GOODS) is an anthology of observing programs that are creating a rich, public, multiwavelength data set for studying galaxy formation and evolution. GOODS is observing two fields, one in each hemisphere, with extremely deep imaging and spectroscopy using the most powerful telescopes in space and on the ground. The GOODS Spitzer Legacy Science Program completes the trio of observations from NASA's Great Observatories, joining already-completed GOODS data from Chandra and Hubble. Barring unforeseen difficulties, the GOODS Spitzer observing program will have been completed by the end of 2004, and the first data products will have been released to the astronomical community. In this Special Oral Session, and in an accompanying poster session, the GOODS team presents early scientific results from this Spitzer Legacy program, as well as new research based on other GOODS data sets. I will introduce the session with a brief description of the Legacy observations and data set. Support for this work, part of the Spitzer Space Telescope Legacy Science Program, was provided by NASA through Contract Number 1224666 issued by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407.
NASA Astrophysics Data System (ADS)
Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo
2018-02-01
Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image of a sharp kernel to that of a standard kernel, and evaluates its impact on variability reduction of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and 2 reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting the CT images of the sharp kernel to the standard kernel, with a criterion of measuring the mean squared error between the input and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification.
The deep learning model has a potential to improve the reliability of imaging biomarker, especially in evaluating the longitudinal changes of EI even when the patient CT scans were performed with different kernels.
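The two emphysema indices compared in the study have standard definitions: RA950 is the percentage of lung voxels with attenuation below -950 HU, and Perc15 is the 15th percentile of the lung attenuation histogram. A minimal sketch with invented voxel values:

```python
import math

def ra950(hu_values, threshold=-950):
    """Relative area below threshold, as a percentage of voxels."""
    below = sum(1 for v in hu_values if v < threshold)
    return 100.0 * below / len(hu_values)

def perc15(hu_values):
    """15th-percentile HU (nearest-rank definition)."""
    ranked = sorted(hu_values)
    k = max(0, math.ceil(0.15 * len(ranked)) - 1)
    return ranked[k]

# toy lung voxel attenuations in HU, invented for illustration
voxels = [-980, -960, -940, -920, -900, -880, -860, -840, -820, -800]
print(ra950(voxels))   # → 20.0  (2 of 10 voxels below -950 HU)
print(perc15(voxels))  # → -960
```

Because kernel sharpness shifts the attenuation histogram, the same lung yields different RA950/Perc15 values under B30f and B50f, which is the variability the kernel conversion aims to remove.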
Al-Samadani, Khalid H; Gazal, Giath
2015-11-01
To investigate the effectiveness of topical anesthetic, 20% benzocaine, in relieving pain and stress in patients following deep cavity restoration and extraction of teeth under local anesthesia (LA). A prospective clinical trial was conducted from October 2014 until April 2015 at Taibah University, Al Madinah Al Munawarah, Kingdom of Saudi Arabia. Forty-five patients were included in the 20% benzocaine group, and 46 in the normal saline group. Evaluation of the dental stress was made pre-operatively and immediately after operative treatment using the visual analogue scale (VAS). Furthermore, discomfort of the injections was recorded by the patients after each treatment on a standard 100 mm VAS, tagged at the endpoints with "no pain" (0 mm) and "unbearable pain" (100 mm). There were statistically significant differences between the mean stress scores for patients in the benzocaine and normal saline groups post-operatively (p=0.002). There were significant differences between the mean pain scores for patients in the post buccal injection (p=0.001), post palatal injection (p=0.01), and the post inferior alveolar nerve block groups (p=0.02). Buccal, palatal, and inferior alveolar nerve block injections were more painful for patients in the normal saline group than the benzocaine group. This investigation has demonstrated that post-operative stress associated with deep cavity restoration and dental extractions under LA can be reduced by the application of topical anesthetic (20% benzocaine) at the operative site for intra-oral injections.
Al-Samadani, Khalid H.; Gazal, Giath
2015-01-01
Objectives: To investigate the effectiveness of topical anesthetic, 20% benzocaine, in relieving pain and stress in patients following deep cavity restoration and extraction of teeth under local anesthesia (LA). Methods: A prospective clinical trial was conducted from October 2014 until April 2015 at Taibah University, Al Madinah Al Munawarah, Kingdom of Saudi Arabia. Forty-five patients were included in the 20% benzocaine group, and 46 in the normal saline group. Evaluation of the dental stress was made pre-operatively and immediately after operative treatment using the visual analogue scale (VAS). Furthermore, discomfort of the injections was recorded by the patients after each treatment on a standard 100 mm VAS, tagged at the endpoints with “no pain” (0 mm) and “unbearable pain” (100 mm). Results: There were statistically significant differences between the mean stress scores for patients in the benzocaine and normal saline groups post-operatively (p=0.002). There were significant differences between the mean pain scores for patients in the post buccal injection (p=0.001), post palatal injection (p=0.01), and the post inferior alveolar nerve block groups (p=0.02). Buccal, palatal, and inferior alveolar nerve block injections were more painful for patients in the normal saline group than the benzocaine group. Conclusion: This investigation has demonstrated that post-operative stress associated with deep cavity restoration and dental extractions under LA can be reduced by the application of topical anesthetic (20% benzocaine) at the operative site for intra-oral injections. PMID:26593169
Deep Impact: 19 gigajoules can make quite an impression
NASA Technical Reports Server (NTRS)
Kubitschek, D.; Bank, T.; Frazier, W.; Blume, W.; Null, G.; Mastrodemos, N.; Synnott, S.
2001-01-01
Deep Impact will impact the comet Tempel-1 on July 4, 2005. The impact event will be clearly visible from small telescopes on Earth, especially in the IR bands. When combined with observations taken from the Flyby spacecraft, this science data set will provide unique insight into the materials and structure within the comet, and the strength of the surface.
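The "19 gigajoules" of the title is the impactor's kinetic energy, E = mv²/2. A quick check using approximate public figures for the Deep Impact impactor (about 370 kg at roughly 10.2 km/s; these numbers are approximations, not taken from this abstract) recovers the same order of magnitude:

```python
# Kinetic energy of the Deep Impact impactor, E = 1/2 m v^2.
# Mass and closing speed are approximate published figures.
mass_kg = 370.0
speed_m_s = 10_200.0

energy_j = 0.5 * mass_kg * speed_m_s ** 2
print(round(energy_j / 1e9, 1))   # → 19.2 (gigajoules)
```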
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary code learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term in the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes under the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.
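The three top-layer constraints described above can be evaluated numerically for a toy batch of real-valued codes h with binarization b = sign(h): a quantization loss between b and h, a balance penalty pushing each bit's batch mean toward zero, and an independence penalty on off-diagonal bit correlations. This is an illustrative pure-Python sketch with made-up code vectors, not the paper's implementation:

```python
def sign(x):
    return 1.0 if x >= 0 else -1.0

def dh_penalties(codes):
    """codes: list of real-valued code vectors (one per sample)."""
    n, d = len(codes), len(codes[0])
    binary = [[sign(v) for v in row] for row in codes]
    # 1) quantization loss between binary vector and real-valued code
    quant = sum((b - h) ** 2 for br, hr in zip(binary, codes)
                for b, h in zip(br, hr)) / n
    # 2) balance: mean of each bit over the batch should be ~0
    balance = sum((sum(row[j] for row in binary) / n) ** 2
                  for j in range(d))
    # 3) independence: off-diagonal entries of (1/n) B^T B should be ~0
    indep = sum((sum(row[i] * row[j] for row in binary) / n) ** 2
                for i in range(d) for j in range(d) if i != j)
    return quant, balance, indep

# toy batch: bits are perfectly balanced but fully anti-correlated,
# so the balance penalty is zero while the independence penalty is high
codes = [[0.9, -0.8], [-0.7, 0.6], [1.1, -1.2], [-0.5, 0.4]]
q, bal, ind = dh_penalties(codes)
print(round(q, 3), bal, ind)   # → 0.24 0.0 2.0
```

In training, these penalties would be added to the network's objective so the learned real-valued codes quantize cleanly into informative, non-redundant bits.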
Evaluation of a deep learning architecture for MR imaging prediction of ATRX in glioma patients
NASA Astrophysics Data System (ADS)
Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J.
2018-02-01
Predicting mutation/loss of the alpha-thalassemia/mental retardation syndrome X-linked (ATRX) gene using MR imaging is of high importance since it is a predictor of response and prognosis in brain tumors. In this study, we compare a deep neural network approach based on a residual deep neural network (ResNet) architecture with one based on a classical machine learning approach, and evaluate their ability to predict ATRX mutation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture, pre-trained on ImageNet data, was the best performing model, achieving 0.91 in terms of F1 score on a test set of 35 cases for classifying each slice as no tumor, ATRX mutated, or ATRX non-mutated. The SVM classifier achieved 0.63 for differentiating the FLAIR signal abnormality regions of the test patients based on their mutation status. We report a method that alleviates the need for extensive preprocessing and acts as a proof of concept that deep neural network architectures can be used to predict molecular biomarkers from routine medical images.
LeMoyne, Robert; Tomycz, Nestor; Mastroianni, Timothy; McCandless, Cyrus; Cozza, Michael; Peduto, David
2015-01-01
Essential tremor (ET) is a highly prevalent movement disorder. Patients with ET exhibit a complex progressive and disabling tremor, and medical management often fails. Deep brain stimulation (DBS) has been successfully applied to this disorder, however there has been no quantifiable way to measure tremor severity or treatment efficacy in this patient population. The quantified amelioration of kinetic tremor via DBS is herein demonstrated through the application of a smartphone (iPhone) as a wireless accelerometer platform. The recorded acceleration signal can be obtained at a setting of the subject's convenience and conveyed by wireless transmission through the Internet for post-processing anywhere in the world. Further post-processing of the acceleration signal can be classified through a machine learning application, such as the support vector machine. Preliminary application of deep brain stimulation with a smartphone for acquisition of a feature set and machine learning for classification has been successfully applied. The support vector machine achieved 100% classification between deep brain stimulation in `on' and `off' mode based on the recording of an accelerometer signal through a smartphone as a wireless accelerometer platform.
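The pipeline described above reduces a recorded acceleration signal to a feature that a classifier can separate into DBS "on" versus "off". A minimal sketch of that idea, using RMS amplitude as the feature and a simple threshold in place of the support vector machine; the synthetic traces and the threshold rule are illustrative assumptions, not the study's actual feature set or classifier:

```python
import math

def rms(signal):
    """Root-mean-square amplitude of an acceleration trace."""
    return math.sqrt(sum(v * v for v in signal) / len(signal))

# synthetic tremor traces: larger oscillation with stimulation off
dbs_off = [2.0 * math.sin(0.5 * i) for i in range(100)]
dbs_on = [0.3 * math.sin(0.5 * i) for i in range(100)]

feature_off, feature_on = rms(dbs_off), rms(dbs_on)
threshold = (feature_off + feature_on) / 2  # midpoint decision rule
print(feature_off > threshold, feature_on < threshold)   # → True True
```

With features this well separated, a trained SVM achieves the perfect on/off classification the abstract reports.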
Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks.
Tran, Son N; d'Avila Garcez, Artur S
2018-02-01
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for the insertion of background knowledge into deep networks, and whether it can improve learning performance when such knowledge is available, as well as for the extraction of knowledge from trained deep networks, and whether extraction can offer a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language, a set of logical rules that we call confidence rules, and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and the training of deep networks. With the use of this method, a deep neural-symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
Does the Deep Layer of the Deep Temporalis Fascia Really Exist?
Li, Hui; Li, Kaide; Jia, Wenhao; Han, Chaoying; Chen, Jinlong; Liu, Lei
2018-04-14
It has been widely accepted that a split of the deep temporal fascia occurs approximately 2 to 3 cm above the zygomatic arch, dividing it into superficial and deep layers. The deep layer of the deep temporal fascia is said to lie between the superficial temporal fat pad and the temporal muscle. However, during procedures, the authors noted the absence of the deep layer of the deep temporal fascia between the superficial temporal fat pad and the temporal muscle. This prospective study was conducted to clarify the presence or absence of a deep layer of the deep temporal fascia. Anatomic layers of the soft tissues of the temporal region, with reference to the deep temporal fascia, were investigated in 130 cases operated on for zygomaticofacial fractures using the supratemporal approach from June 2013 to June 2017. In the 130 surgeries, the authors found no thick, obviously identifiable fascial layer between the superficial temporal fat pad and the temporal muscle. In fact, the authors found nothing above the temporal muscle in most cases. In a few cases, the authors observed only a small amount of scattered loose connective tissue between the superficial temporal fat pad and the temporal muscle. This clinical study demonstrated the absence of a thick, obviously identifiable fascial layer between the superficial temporal fat pad and the temporal muscle, suggesting that a "deep layer of the deep temporal fascia" might not exist. Copyright © 2018. Published by Elsevier Inc.
Cicero, Mark; Bilbily, Alexander; Colak, Errol; Dowdell, Tim; Gray, Bruce; Perampaladas, Kuhan; Barfett, Joseph
2017-05-01
Convolutional neural networks (CNNs) are a subtype of artificial neural network that have shown strong performance in computer vision tasks including image classification. To date, there has been limited application of CNNs to chest radiographs, the most frequently performed medical imaging study. We hypothesize CNNs can learn to classify frontal chest radiographs according to common findings from a sufficiently large data set. Our institution's research ethics board approved a single-center retrospective review of 35,038 adult posterior-anterior chest radiographs and final reports performed between 2005 and 2015 (56% men, average age of 56, patient type: 24% inpatient, 39% outpatient, 37% emergency department) with a waiver for informed consent. The GoogLeNet CNN was trained using 3 graphics processing units to automatically classify radiographs as normal (n = 11,702) or into 1 or more of cardiomegaly (n = 9240), consolidation (n = 6788), pleural effusion (n = 7786), pulmonary edema (n = 1286), or pneumothorax (n = 1299). The network's performance was evaluated using receiver operating curve analysis on a test set of 2443 radiographs with the criterion standard being board-certified radiologist interpretation. Using 256 × 256-pixel images as input, the network achieved an overall sensitivity and specificity of 91% with an area under the curve of 0.964 for classifying a study as normal (n = 1203). For the abnormal categories, the sensitivity, specificity, and area under the curve, respectively, were 91%, 91%, and 0.962 for pleural effusion (n = 782), 82%, 82%, and 0.868 for pulmonary edema (n = 356), 74%, 75%, and 0.850 for consolidation (n = 214), 81%, 80%, and 0.875 for cardiomegaly (n = 482), and 78%, 78%, and 0.861 for pneumothorax (n = 167). Current deep CNN architectures can be trained with modest-sized medical data sets to achieve clinically useful performance at detecting and excluding common pathology on chest radiographs.
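The per-finding figures reported above are sensitivity/specificity pairs obtained by thresholding a per-study probability and counting outcomes against the radiologist reference. A minimal sketch of that computation; the labels, scores, and threshold below are invented for illustration:

```python
def sens_spec(labels, scores, threshold=0.5):
    """labels: 1 = finding present, 0 = absent; scores: model outputs."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn)   # fraction of true findings detected
    specificity = tn / (tn + fp)   # fraction of normals correctly cleared
    return sensitivity, specificity

labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.3, 0.7, 0.4, 0.2, 0.1]
sens, spec = sens_spec(labels, scores)
print(sens, spec)   # → 0.75 0.75
```

Sweeping the threshold over all score values traces out the ROC curve whose area the study reports for each finding.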
NASA Astrophysics Data System (ADS)
Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi
2018-02-01
Conventional arrival-picking algorithms cannot avoid manual modification of their parameters when simultaneously identifying multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, in this study an algorithm was proposed to pick the arrivals of microseismic or acoustic emission events based on deep recurrent neural networks. The arrival identification was performed in two important steps, a training phase and a testing phase. The training process was mathematically modelled by deep recurrent neural networks using a Long Short-Term Memory architecture. During the testing phase, the learned weights were utilized to identify the arrivals in the microseismic/acoustic emission data sets. The data sets were obtained from rock physics experiments on acoustic emission. In order to obtain data sets under different SNRs, this study added random noise to the raw experimental data sets. The results showed that the proposed method was able to attain a hit rate above 80 per cent at an SNR of 0 dB, and approximately 70 per cent at an SNR of -5 dB, with an absolute error within 10 sampling points. These results indicated that the proposed method had high picking precision and robustness.
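The hit-rate metric used to score the picker counts a predicted arrival as a hit when it falls within an absolute error of 10 sampling points of the true arrival. A minimal sketch; the sample picks below are invented:

```python
def hit_rate(true_picks, predicted_picks, tolerance=10):
    """Fraction of predicted arrivals within `tolerance` samples of truth."""
    hits = sum(1 for t, p in zip(true_picks, predicted_picks)
               if abs(t - p) <= tolerance)
    return hits / len(true_picks)

true_picks = [120, 340, 505, 780, 990]   # arrival sample indices (truth)
predicted = [125, 332, 560, 771, 989]    # third pick misses by 55 samples
print(hit_rate(true_picks, predicted))   # → 0.8
```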
NASA Astrophysics Data System (ADS)
Stolper, Daniel A.; Eiler, John M.; Higgins, John A.
2018-04-01
The measurement of multiply isotopically substituted ('clumped isotope') carbonate groups provides a way to reconstruct past mineral formation temperatures. However, dissolution-reprecipitation (i.e., recrystallization) reactions, which commonly occur during sedimentary burial, can alter a sample's clumped-isotope composition such that it partially or wholly reflects deeper burial temperatures. Here we derive a quantitative model of diagenesis to explore how diagenesis alters carbonate clumped-isotope values. We apply the model to a new dataset from deep-sea sediments taken from Ocean Drilling Project site 807 in the equatorial Pacific. This dataset is used to ground truth the model. We demonstrate that the use of the model with accompanying carbonate clumped-isotope and carbonate δ18O values provides new constraints on both the diagenetic history of deep-sea settings as well as past equatorial sea-surface temperatures. Specifically, the combination of the diagenetic model and data support previous work that indicates equatorial sea-surface temperatures were warmer in the Paleogene as compared to today. We then explore whether the model is applicable to shallow-water settings commonly preserved in the rock record. Using a previously published dataset from the Bahamas, we demonstrate that the model captures the main trends of the data as a function of burial depth and thus appears applicable to a range of depositional settings.
Auditory processing during deep propofol sedation and recovery from unconsciousness.
Koelsch, Stefan; Heinke, Wolfgang; Sammler, Daniela; Olthoff, Derk
2006-08-01
Using evoked potentials, this study investigated effects of deep propofol sedation, and effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN, and a (music-syntactic) ERAN. Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG-measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35); EEG measurements of recovery period were performed after subjects regained consciousness. During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed in this level. A clear P3a was elicited during deep sedation by those deviants, which were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. Results indicate that the auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating during this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, the auditory sensory memory appears to operate normal as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. Results inform about effects of sedative drugs on auditory and attention-related mechanisms. The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.
Developing a Crew Time Model for Human Exploration Missions to Mars
NASA Technical Reports Server (NTRS)
Battfeld, Bryan; Stromgren, Chel; Shyface, Hilary; Cirillo, William; Goodliff, Kandyce
2015-01-01
Candidate human missions to Mars require mission lengths that could extend beyond those that have previously been demonstrated during crewed Lunar (Apollo) and International Space Station (ISS) missions. The nature of the architectures required for deep space human exploration will likely necessitate major changes in how crews operate and maintain the spacecraft. The uncertainties associated with these shifts in mission constructs - including changes to habitation systems, transit durations, and system operations - raise concerns as to the ability of the crew to complete required overhead activities while still having time to conduct a set of robust exploration activities. This paper will present an initial assessment of crew operational requirements for human missions to the Mars surface. The presented results integrate assessments of crew habitation, system maintenance, and utilization to present a comprehensive analysis of potential crew time usage. Destination operations were assessed for a short (approx. 50 day) and long duration (approx. 500 day) surface habitation case. Crew time allocations are broken out by mission segment, and the availability of utilization opportunities was evaluated throughout the entire mission progression. To support this assessment, the integrated crew operations model (ICOM) was developed. ICOM was used to parse overhead, maintenance and system repair, and destination operations requirements within each mission segment - outbound transit, Mars surface duration, and return transit - to develop a comprehensive estimation of exploration crew time allocations. Overhead operational requirements included daily crew operations, health maintenance activities, and down time. Maintenance and repair operational allocations are derived using the Exploration Maintainability and Analysis Tool (EMAT) to develop a probabilistic estimation of crew repair time necessary to maintain systems functionality throughout the mission.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-20
... retention limits for swordfish, Xiphias gladius, harvested in the U.S. West Coast-based deep-set tuna..., there would be no limit on swordfish retained. Regulations prohibiting the use of shallow-set..., those regulations prohibit vessels based on the West Coast from using longline gear to make shallow sets...
deepTools2: a next generation web server for deep-sequencing data analysis.
Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas
2016-07-08
We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
deepTools: a flexible platform for exploring deep-sequencing data.
Ramírez, Fidel; Dündar, Friederike; Diehl, Sarah; Grüning, Björn A; Manke, Thomas
2014-07-01
We present a Galaxy-based web server for processing and visualizing deeply sequenced data. The web server's core functionality consists of a suite of newly developed tools, called deepTools, that enable users with little bioinformatic background to explore the results of their sequencing experiments in a standardized setting. Users can upload pre-processed files with continuous data in standard formats and generate heatmaps and summary plots in a straightforward, yet highly customizable manner. In addition, we offer several tools for the analysis of files containing aligned reads and enable efficient and reproducible generation of normalized coverage files. As a modular and open-source platform, deepTools can easily be expanded and customized to future demands and developments. The deepTools web server is freely available at http://deeptools.ie-freiburg.mpg.de and is accompanied by extensive documentation and tutorials aimed at conveying the principles of deep-sequencing data analysis. The web server can be used without registration. deepTools can be installed locally either stand-alone or as part of Galaxy. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
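The normalized coverage files mentioned above rest on simple per-bin arithmetic. The sketch below illustrates one such normalization, RPKM (reads per kilobase per million mapped reads), as a standalone calculation; it is not deepTools code, and the counts are made up:

```python
import numpy as np

def rpkm_normalize(bin_counts, bin_size_bp, total_mapped_reads):
    """Normalize per-bin read counts to RPKM: reads per kilobase of
    bin length per million mapped reads (illustrative arithmetic only)."""
    kb = bin_size_bp / 1000.0
    millions = total_mapped_reads / 1e6
    return np.asarray(bin_counts, dtype=float) / (kb * millions)

# Hypothetical counts in three 50 bp bins from a 2M-read library:
counts = [120, 0, 35]
coverage = rpkm_normalize(counts, bin_size_bp=50, total_mapped_reads=2_000_000)
```

Normalizing this way makes coverage tracks from libraries of different sequencing depths directly comparable.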
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
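The core idea, estimating a grid-mean process rate by Monte Carlo sampling an assumed subgrid PDF rather than evaluating the process at the grid mean, can be sketched as follows. The lognormal PDF and the quadratic "autoconversion" rate below are illustrative stand-ins, not the paper's actual equation set:

```python
import numpy as np

rng = np.random.default_rng(0)

def autoconversion(q):
    # Toy nonlinear process rate proportional to q^2 (illustrative only)
    return q ** 2

# Assumed subgrid PDF of cloud water q: lognormal with given mean and std.
mean_q, sigma = 1.0, 0.5
mu = np.log(mean_q**2 / np.sqrt(mean_q**2 + sigma**2))
s = np.sqrt(np.log(1.0 + (sigma / mean_q) ** 2))

samples = rng.lognormal(mu, s, size=100_000)
grid_mean_rate = autoconversion(samples).mean()   # PDF-aware estimate
naive_rate = autoconversion(samples.mean())       # ignores subgrid variability
```

Because the process rate is convex, the PDF-aware estimate exceeds the naive one (here E[q^2] = mean^2 + sigma^2 = 1.25 vs. about 1.0), which is exactly why subgrid variability must be sampled rather than averaged away.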
Parameterizing deep convection using the assumed probability density function method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storer, R. L.; Griffin, B. M.; Höft, J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Extracting Databases from Dark Data with DeepDive.
Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng
2016-01-01
DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.
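DeepDive's actual inference is far richer (factor graphs with learned weights), but the underlying idea of fusing several noisy extraction signals into one probability can be illustrated with a toy noisy-OR model; the rule accuracies below are made-up assumptions, not DeepDive output:

```python
def noisy_or(rule_accuracies):
    """Probability that a candidate fact is true given that several
    independent noisy extraction rules all fired, under a noisy-OR
    model (illustrative toy, not DeepDive's factor-graph inference)."""
    p_all_wrong = 1.0
    for acc in rule_accuracies:
        p_all_wrong *= (1.0 - acc)
    return 1.0 - p_all_wrong

# Three hypothetical extraction rules matched the same (entity, relation)
# candidate, with assumed individual accuracies of 0.6, 0.5, and 0.7:
p = noisy_or([0.6, 0.5, 0.7])
```

Even individually weak rules combine into high confidence (here 0.94), which is why aggregating many cheap signals can approach human-annotator accuracy.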
ESONET LIDO Demonstration Mission: the East Sicily node
NASA Astrophysics Data System (ADS)
Riccobene, Giorgio; Favali, Paolo; Andrè, Michel; Chierici, Francesco; Pavan, Gianni; Esonet Lido Demonstration Mission Team
2010-05-01
Off East Sicily (at 2100 m depth, 25 km off the harbour of Catania) a prototype of a cabled deep-sea observatory (NEMO-SN1) was set up and has been operational in real time since 2005 (the cabled deep-sea multi-parameter station SN1, equipped with geophysical and environmental sensors, and the cabled NEMO-OνDE, equipped with 4 broadband hydrophones). The Western Ionian Sea is one of the node sites for the upcoming European permanent underwater network (EMSO). Within the activities of the EC project ESONET-NoE, some demonstration missions have been funded. The LIDO-DM (Listening to the Deep Ocean-Demonstration Mission) is one of these and is related to two sites, East Sicily and the Iberian Margin (Gulf of Cadiz), the main aims being geo-hazard monitoring and warning (seismic, tsunami, and volcanic) and bio-acoustics. The LIDO-DM East Sicily installation represents a further major step within ESONET-NoE, resulting in a fully integrated system for multidisciplinary deep-sea science, capable of transmitting and distributing data in real time to the scientific community and to the general public. LIDO-DM East Sicily hosts a large number of sensors aimed at monitoring and studying oceanographic and environmental parameters (by means of CTD, ADCP, 3-C single point current meter, turbidity meter), geophysical phenomena (low frequency hydrophones, accelerometer, gravity meter, vector and scalar magnetometers, seismometer, absolute and differential pressure gauges), ocean noise, and the identification and tracking of biological acoustic sources in the deep sea. The latter will be performed using two tetrahedral arrays of 4 hydrophones, located at a relative distance of about 5 km, and at about 25 km from the shore. The whole system will be connected and powered from shore, by means of the electro-optical cable net installed at the East Sicily Site Infrastructure, and synchronised with GPS.
Sensor data sampling is performed underwater and transmitted via optical fibre link, with optimal S/N ratio for all signals. This also permits real-time data acquisition, analysis, and distribution on shore. Innovative electronics for the off-shore data acquisition and transmission systems have been designed, built, and tested. A dedicated computing and networking infrastructure for data acquisition, storage, and distribution through the internet has also been created. The deployment and connection of the deep sea structures will be performed using the dedicated ROV and Deep Sea Shuttle handling facilities (PEGASO, owned by INGV and INFN). LIDO-DM constitutes the enhancement of the Western Ionian site in view of the EMSO Research Infrastructure.
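The acoustic tracking the hydrophone arrays perform rests, at bottom, on time-difference-of-arrival between sensors. A minimal sketch of delay estimation by cross-correlation follows; the signals are synthetic and the 1 kHz sampling rate is a hypothetical placeholder (the real GPS-synchronised arrays are far more elaborate):

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the delay (in samples) of sig_b relative to sig_a
    from the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

rng = np.random.default_rng(1)
fs = 1000  # Hz, hypothetical sampling rate

# Same random "click" arriving at two hydrophones, 40 samples apart:
pulse = rng.standard_normal(200)
a = np.concatenate([pulse, np.zeros(300)])
b = np.concatenate([np.zeros(40), pulse, np.zeros(260)])

lag = tdoa_samples(a, b)      # -> 40 samples
delay_s = lag / fs            # delay in seconds
```

Given such delays from several hydrophone pairs and the local sound speed, the source position can be triangulated; this is the principle behind tracking vocalizing animals with the two tetrahedral arrays.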
Sensitivity of the deep-sea amphipod Eurythenes gryllus to chemically dispersed oil.
Olsen, Gro Harlaug; Coquillé, Nathalie; Le Floch, Stephane; Geraudie, Perrine; Dussauze, Matthieu; Lemaire, Philippe; Camus, Lionel
2016-04-01
In the context of an oil spill accident and the following oil spill response, much attention is given to the use of dispersants. Dispersants are used to disperse an oil slick from the sea surface into the water column, generating a cloud of dispersed oil droplets. The main consequence is an increase of the seawater-oil interface area, which enhances oil biodegradation. Hence, the use of dispersants can be effective in preventing the oiling of sensitive coastal environments. Also, in the case of an oil blowout from the seabed, subsea injection of dispersants may offer some benefits compared to containment and recovery of the oil or in situ burning operations at the sea surface. However, biological effects of dispersed oil are poorly understood for deep-sea species: most effect studies on dispersed oil and other oil-related compounds have focused on shallower-water species. This is the first approach to assess the sensitivity of a macro-benthic deep-sea organism to dispersed oil. This paper describes a toxicity test performed on the macro-benthic deep-sea amphipod Eurythenes gryllus to determine the concentration causing lethality in 50% of test individuals (LC50) after exposure to dispersed Brut Arabian Light (BAL) oil. The LC50 was 101 mg L(-1) at 24 h, 24 mg L(-1) at 72 h, and 12 mg L(-1) at 96 h. Based on the EPA scale of toxicity categories for aquatic organisms, an LC50 (96 h) of 12 mg L(-1) indicates that the dispersed oil was slightly to moderately toxic to E. gryllus. To compare our results with others, a literature study was performed. Due to the limited amount of data available for dispersed oil and amphipods, information on other crustacean species and other oil-related compounds was also collected. Only one study on dispersed oil and amphipods was found; the LC50 value in that study was similar to the LC50 value of E. gryllus in our study.
Since toxicity data are important inputs to risk assessment and net environmental benefit analyses, and since such data are generally lacking for deep-sea species, the data set produced in this study is of interest to industry, stakeholders, environmental management, and ecotoxicologists. However, studies including more deep-sea species covering different functional groups are needed to evaluate the sensitivity of the deep-sea compartments to dispersed oil relative to other environmental compartments.
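An LC50 such as those reported above is estimated from a dose-response curve. A minimal sketch using log-linear interpolation follows; the dose-response numbers are hypothetical, and real analyses typically use probit or logistic regression rather than interpolation:

```python
import math

def lc50(concs, mortality):
    """Estimate the LC50 by linear interpolation of mortality fraction
    versus log10 concentration, between the two test concentrations
    that bracket 50% mortality (illustrative sketch only)."""
    points = list(zip(concs, mortality))
    for (c1, m1), (c2, m2) in zip(points, points[1:]):
        if m1 <= 0.5 <= m2:
            x1, x2 = math.log10(c1), math.log10(c2)
            x = x1 + (0.5 - m1) * (x2 - x1) / (m2 - m1)
            return 10 ** x
    raise ValueError("50% mortality not bracketed by the data")

# Hypothetical dose-response data: concentrations in mg/L, fraction dead.
est = lc50([1.0, 10.0, 100.0], [0.1, 0.4, 0.9])
```

Interpolating on a log scale reflects the roughly log-linear shape of most dose-response curves over the tested range.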
Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.
Imamverdiyev, Yadigar; Abdullayeva, Fargana
2018-06-01
In this article, the application of a deep learning method based on a Gaussian-Bernoulli restricted Boltzmann machine (RBM) to the detection of denial-of-service (DoS) attacks is considered. To increase DoS attack detection accuracy, seven additional layers are added between the visible and hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimizing the hyperparameters of the proposed deep RBM model. A form of the RBM that can handle continuous data is used; in this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method against Bernoulli-Bernoulli RBM, Gaussian-Bernoulli RBM, and deep belief network deep learning methods on DoS attack detection is provided. The detection accuracy of the methods is verified on the NSL-KDD data set. The proposed multilayer deep Gaussian-Bernoulli RBM obtains the highest accuracy.
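A minimal Gaussian-Bernoulli RBM trained with one-step contrastive divergence (CD-1) can be sketched as follows. This is a generic textbook formulation, not the authors' seven-layer model; the dimensions, learning rate, and synthetic data are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianBernoulliRBM:
    """Minimal Gaussian-Bernoulli RBM trained with CD-1.
    Visible units are real-valued with assumed unit variance;
    hidden units are binary (illustrative sketch only)."""

    def __init__(self, n_vis, n_hid, lr=0.01):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b_v = np.zeros(n_vis)   # visible biases (Gaussian means)
        self.b_h = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        h0 = self.hidden_probs(v0)                       # positive phase
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = h_sample @ self.W.T + self.b_v              # Gaussian mean recon
        h1 = self.hidden_probs(v1)                       # negative phase
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))            # reconstruction error

rbm = GaussianBernoulliRBM(n_vis=8, n_hid=4)
data = rng.standard_normal((64, 8))                      # synthetic features
errors = [rbm.cd1_step(data) for _ in range(20)]
```

In an intrusion-detection setting, the visible layer would receive standardized continuous traffic features (e.g. from NSL-KDD), and stacked layers of such RBMs would be fine-tuned for classification.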
NASA Astrophysics Data System (ADS)
He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang
2017-03-01
Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. Therefore, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels that fit the most informative filtering bands, and then capture complex patterns from the optimal Gabor-filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
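A candidate Gabor bank of the kind such a PSO-style search could select from can be sketched as follows. The kernel parameterization and the candidate wavelengths and orientations are illustrative assumptions, not the paper's optimized filters:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) 2-D Gabor kernel: a cosine carrier at orientation
    theta under an isotropic Gaussian envelope, made zero-mean so the
    filter ignores constant image brightness."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))     # Gaussian envelope
    k = env * np.cos(2.0 * np.pi * xr / wavelength)     # modulated carrier
    return k - k.mean()

# Hypothetical candidate bank: 2 wavelengths x 3 orientations.
bank = [gabor_kernel(15, wl, th, sigma=3.0)
        for wl in (4.0, 8.0) for th in (0.0, np.pi / 4, np.pi / 2)]
```

A search procedure (such as binary PSO) would then keep the subset of these kernels whose filtered responses are most discriminative for iris matching.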
Networks consolidation program: Maintenance and Operations (M&O) staffing estimates
NASA Technical Reports Server (NTRS)
Goodwin, J. P.
1981-01-01
The Mark IV-A consolidates deep space and highly elliptical Earth orbiter (HEEO) mission tracking and implements centralized control and monitoring at the deep space communications complexes (DSCC). One of the objectives of the network design is to reduce maintenance and operations (M&O) costs. To determine whether the system design meets this objective, an M&O staffing model for Goldstone was developed and used to estimate the staffing levels required to support the Mark IV-A configuration. The study was performed for the Goldstone complex, and the program office translated these estimates to the overseas complexes to derive the network estimates.