ICESat Science Investigator led Processing System (I-SIPS)
NASA Astrophysics Data System (ADS)
Bhardwaj, S.; Bay, J.; Brenner, A.; Dimarzio, J.; Hancock, D.; Sherman, M.
2003-12-01
The ICESat Science Investigator-led Processing System (I-SIPS) generates the GLAS standard data products. It consists of two main parts: the Scheduling and Data Management System (SDMS) and the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software. The system has been operational since the successful launch of ICESat. It ingests data from the GLAS instrument, generates GLAS data products, and distributes them to the GLAS Science Computing Facility (SCF), the Instrument Support Facility (ISF), and the National Snow and Ice Data Center (NSIDC) ECS DAAC. The SDMS is the Planning, Scheduling and Data Management System that runs the GLAS Science Algorithm Software (GSAS). GSAS is based on the Algorithm Theoretical Basis Documents provided by the Science Team and is developed independently of SDMS. The SDMS provides the processing environment to plan jobs based on existing data, control job flow, and manage data distribution and archiving. The SDMS design is based on a mission-independent architecture that imposes few constraints on the science code, thereby facilitating I-SIPS integration. I-SIPS currently works in an autonomous manner to ingest GLAS instrument data, distribute these data to the ISF, run the science processing algorithms to produce the GLAS standard products, reprocess data when new versions of the science algorithms are released, and distribute the products to the SCF, ISF, and NSIDC. I-SIPS has a proven performance record, delivering data to the SCF within hours of initial instrument activation. The I-SIPS design philosophy gives this system a high potential for reuse in other science missions.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Applications and Research (STAR) provides technical support for the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning the atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. The AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
The Kepler Science Operations Center Pipeline Framework Extensions
NASA Technical Reports Server (NTRS)
Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.;
2010-01-01
The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
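The abstract's notion of a self-contained unit-of-work file, holding everything a science algorithm needs so the same file can later be replayed offline, can be illustrated with a short sketch. The sketch below is in Python rather than the mission's Java/MATLAB stack, and all names (`UnitOfWork`, `partition_targets`, the pickle container) are hypothetical simplifications, not the Kepler framework's API.

```python
import pickle
from dataclasses import dataclass, field

@dataclass
class UnitOfWork:
    """Hypothetical self-contained work package: inputs plus metadata."""
    cadence_range: tuple        # (first_cadence, last_cadence)
    target_ids: list            # targets assigned to this worker
    pixel_data: dict = field(default_factory=dict)   # target_id -> pixel time series
    metadata: dict = field(default_factory=dict)     # calibration, pointing, etc.

def partition_targets(target_ids, n_workers):
    """Split the target list into roughly equal groups, one per cluster node."""
    return [target_ids[i::n_workers] for i in range(n_workers)]

def write_unit_of_work(uow, path):
    """Package everything the algorithm needs into a single file so it can
    also be used to debug and evolve the algorithm offline."""
    with open(path, "wb") as f:
        pickle.dump(uow, f)

if __name__ == "__main__":
    targets = list(range(100, 160))
    for i, group in enumerate(partition_targets(targets, n_workers=4)):
        uow = UnitOfWork(cadence_range=(0, 1439), target_ids=group)
        write_unit_of_work(uow, f"uow_{i:02d}.pkl")
```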
SeaWiFS Science Algorithm Flow Chart
NASA Technical Reports Server (NTRS)
Darzi, Michael
1998-01-01
This flow chart describes the baseline science algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Data Processing System (SDPS). As such, it includes only processing steps used in the generation of the operational products that are archived by NASA's Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC). It is meant to provide the reader with a basic understanding of the scientific algorithm steps applied to SeaWiFS data. It does not include non-science steps, such as format conversions, and places the greatest emphasis on the geophysical calculations of the level-2 processing. Finally, the flow chart reflects the logic sequences and the conditional tests of the software so that it may be used to evaluate the fidelity of the implementation of the scientific algorithm. In many cases, however, the chart may deviate from the details of the software implementation so as to simplify the presentation.
Data Mining Citizen Science Results
NASA Astrophysics Data System (ADS)
Borne, K. D.
2012-12-01
Scientific discovery from big data is enabled through multiple channels, including data mining (through the application of machine learning algorithms) and human computation (commonly implemented through citizen science tasks). We will describe the results of new data mining experiments on the results from citizen science activities. Discovering patterns, trends, and anomalies in data are among the powerful contributions of citizen science. Establishing scientific algorithms that can subsequently re-discover the same types of patterns, trends, and anomalies in automatic data processing pipelines will ultimately result from the transformation of those human algorithms into computer algorithms, which can then be applied to much larger data collections. Scientific discovery from big data is thus greatly amplified through the marriage of data mining with citizen science.
Enabling Earth Science Through Cloud Computing
NASA Technical Reports Server (NTRS)
Hardman, Sean; Riofrio, Andres; Shams, Khawaja; Freeborn, Dana; Springer, Paul; Chafin, Brian
2012-01-01
Cloud Computing holds tremendous potential for missions across the National Aeronautics and Space Administration. Several flight missions are already benefiting from an investment in cloud computing for mission critical pipelines and services through faster processing time, higher availability, and drastically lower costs available on cloud systems. However, these processes do not currently extend to general scientific algorithms relevant to earth science missions. The members of the Airborne Cloud Computing Environment task at the Jet Propulsion Laboratory have worked closely with the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to integrate cloud computing into their science data processing pipeline. This paper details the efforts involved in deploying a science data system for the CARVE mission, evaluating and integrating cloud computing solutions with the system and porting their science algorithms for execution in a cloud environment.
A relevancy algorithm for curating earth science data around phenomenon
NASA Astrophysics Data System (ADS)
Maskey, Manil; Ramachandran, Rahul; Li, Xiang; Weigel, Amanda; Bugbee, Kaylin; Gatlin, Patrick; Miller, J. J.
2017-09-01
Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship, causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate the search and selection of data around specific phenomena. The different components of the methodology, including the assumptions, the process, and the relevancy ranking algorithm, are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a stand-alone web service that is utilized to augment search and usability of data in a variety of tools.
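As a rough illustration of ranking metadata records against a phenomenon, the sketch below scores each record by weighted keyword overlap. It is a minimal stand-in, not the paper's relevancy algorithm; the field names, keyword weights, and threshold are invented for the example.

```python
def relevancy_score(record, phenomenon_keywords):
    """Score a metadata record by weighted keyword matches.

    record: dict with free-text fields such as 'title', 'summary', 'keywords'.
    phenomenon_keywords: dict mapping a keyword to its weight for the phenomenon.
    """
    text = " ".join(str(record.get(f, "")) for f in ("title", "summary", "keywords")).lower()
    return sum(w for kw, w in phenomenon_keywords.items() if kw.lower() in text)

def curate(records, phenomenon_keywords, threshold=1.0):
    """Return records ranked by relevancy, dropping those below the threshold."""
    scored = [(relevancy_score(r, phenomenon_keywords), r) for r in records]
    return [r for s, r in sorted(scored, key=lambda x: -x[0]) if s >= threshold]

# Example: curating records around the 'hurricane' phenomenon
hurricane = {"hurricane": 2.0, "tropical cyclone": 2.0,
             "precipitation": 0.5, "sea surface temperature": 0.5}
records = [
    {"title": "GPM precipitation L3", "summary": "global precipitation"},
    {"title": "Hurricane best track", "summary": "tropical cyclone positions"},
]
print(curate(records, hurricane))
```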
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Viirs Land Science Investigator-Led Processing System
NASA Astrophysics Data System (ADS)
Devadiga, S.; Mauoka, E.; Roman, M. O.; Wolfe, R. E.; Kalb, V.; Davidson, C. C.; Ye, G.
2015-12-01
The objective of NASA's Suomi National Polar-orbiting Partnership (S-NPP) Land Science Investigator-led Processing System (Land SIPS), housed at the NASA Goddard Space Flight Center (GSFC), is to produce high quality land products from the Visible Infrared Imaging Radiometer Suite (VIIRS) to extend the Earth System Data Records (ESDRs) developed from NASA's heritage Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the EOS Terra and Aqua satellites. In this paper we present the functional description and capabilities of the S-NPP Land SIPS, including system development phases and production schedules, the timeline for processing, and the delivery of land science products based on coordination with the S-NPP Land science team members. The Land SIPS processing stream is expected to be operational by December 2016, generating land products using either the NASA science team-delivered algorithms or the "best-of" science algorithms currently in operation at NASA's Land Product Evaluation and Algorithm Testing Element (PEATE). In addition to generating the standard land science products by processing NASA's VIIRS Level 0 data record, the Land SIPS processing system is also used to produce a suite of near-real-time products for NASA's application community. Land SIPS will also deliver the standard products, ancillary data sets, software, and supporting documentation (ATBDs) to the assigned Distributed Active Archive Centers (DAACs) for archival and distribution. Quality assessment and validation will be an integral part of the Land SIPS processing system; the former is performed at the Land Data Operational Product Evaluation (LDOPE) facility, while the latter is conducted under the auspices of the CEOS Working Group on Calibration & Validation (WGCV) Land Product Validation (LPV) Subgroup, adopting the best practices and tools used to assess the quality of heritage EOS-MODIS products generated at the MODIS Adaptive Processing System (MODAPS).
A Relevancy Algorithm for Curating Earth Science Data Around Phenomenon
NASA Technical Reports Server (NTRS)
Maskey, Manil; Ramachandran, Rahul; Li, Xiang; Weigel, Amanda; Bugbee, Kaylin; Gatlin, Patrick; Miller, J. J.
2017-01-01
Earth science data are being collected for various science needs and applications, processed using different algorithms at multiple resolutions and coverages, and then archived at different archiving centers for distribution and stewardship, causing difficulty in data discovery. Curation, which typically occurs in museums, art galleries, and libraries, is traditionally defined as the process of collecting and organizing information around a common subject matter or a topic of interest. Curating data sets around topics or areas of interest addresses some of the data discovery needs in the field of Earth science, especially for unanticipated users of data. This paper describes a methodology to automate the search and selection of data around specific phenomena. The different components of the methodology, including the assumptions, the process, and the relevancy ranking algorithm, are described. The paper makes two unique contributions to improving data search and discovery capabilities. First, the paper describes a novel methodology developed for automatically curating data around a topic using Earth science metadata records. Second, the methodology has been implemented as a stand-alone web service that is utilized to augment search and usability of data in a variety of tools.
Metaheuristic Optimization and its Applications in Earth Sciences
NASA Astrophysics Data System (ADS)
Yang, Xin-She
2010-05-01
A common but challenging task in modelling geophysical and geological processes is to handle massive data and to minimize certain objectives. This can essentially be considered an optimization problem, and thus many new efficient metaheuristic optimization algorithms can be used. In this paper, we will introduce some modern metaheuristic optimization algorithms such as genetic algorithms, harmony search, the firefly algorithm, particle swarm optimization and simulated annealing. We will also discuss how these algorithms can be applied to various applications in the earth sciences, including nonlinear least-squares, support vector machines, Kriging, inverse finite element analysis, and data mining. We will present a few examples to show how different problems can be reformulated as optimization problems. Finally, we will make some recommendations for choosing algorithms to suit various problems. References: 1) D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evolutionary Computation, Vol. 1, 67-82 (1997). 2) X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press (2008). 3) X. S. Yang, Mathematical Modelling for Earth Sciences, Dunedin Academic Press (2008).
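To make one of the listed metaheuristics concrete, here is a minimal simulated annealing sketch for a generic continuous objective. The cooling schedule, step size, and test function are arbitrary choices for illustration, not settings from the paper.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=5000):
    """Minimize f starting from x0 with a simple geometric cooling schedule."""
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = f(cand)
        # Always accept better moves; accept worse moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Example: a simple nonlinear least-squares style objective (Rosenbrock function)
rosenbrock = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
print(simulated_annealing(rosenbrock, [0.0, 0.0]))
```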
An Overview of the JPSS Ground Project Algorithm Integration Process
NASA Astrophysics Data System (ADS)
Vicente, G. A.; Williams, R.; Dorman, T. J.; Williamson, R. C.; Shaw, F. J.; Thomas, W. M.; Hung, L.; Griffin, A.; Meade, P.; Steadley, R. S.; Cember, R. P.
2015-12-01
The smooth transition, implementation, and operationalization of scientific software from the National Oceanic and Atmospheric Administration (NOAA) development teams to the Joint Polar Satellite System (JPSS) Ground Segment requires a variety of experience and expertise. This task has been accomplished by a dedicated group of scientists and engineers working in close collaboration with the NOAA Satellite and Information Service (NESDIS) Center for Satellite Applications and Research (STAR) science teams for the JPSS/Suomi National Polar-orbiting Partnership (S-NPP) Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Ozone Mapping and Profiler Suite (OMPS) instruments. The purpose of this presentation is to describe the JPSS project process for algorithm implementation, from the early delivery stages by the science teams to full operationalization in the Interface Data Processing Segment (IDPS), the processing system that provides Environmental Data Records (EDRs) to NOAA. Special focus is given to the NASA Data Products Engineering and Services (DPES) Algorithm Integration Team (AIT) functional and regression test activities. In the functional testing phase, the AIT uses one or a few specific chunks of data (granules) selected by the NOAA STAR Calibration and Validation (Cal/Val) teams to demonstrate that a small change in the code performs properly and does not disrupt the rest of the algorithm chain. In the regression testing phase, the modified code is placed into the Government Resources for Algorithm Verification, Integration, Test and Evaluation (GRAVITE) Algorithm Development Area (ADA), a simulated and smaller version of the operational IDPS. Baseline files are swapped out, not edited, and the whole code package runs on one full orbit of Sensor Data Records (SDRs) using calibration look-up tables (Cal LUTs) for the time of the orbit. The purpose of the regression test is to identify unintended outcomes. Overall, the presentation provides a general and easy-to-follow overview of the JPSS Algorithm Change Process (ACP) and is intended to facilitate the audience's understanding of a very extensive and complex process.
Atmospheric Science Data Center
2013-04-16
... using data from multiple MISR cameras within automated computer processing algorithms. The stereoscopic algorithms used to generate ... NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, D.C. The Terra spacecraft is managed ...
The Suomi National Polar-Orbiting Partnership (SNPP): Continuing NASA Research and Applications
NASA Technical Reports Server (NTRS)
Butler, James; Gleason, James; Jedlovec, Gary; Coronado, Patrick
2015-01-01
The Suomi National Polar-orbiting Partnership (SNPP) satellite was successfully launched into a polar orbit on October 28, 2011 carrying 5 remote sensing instruments designed to provide data to improve weather forecasts and to increase understanding of long-term climate change. SNPP provides operational continuity of satellite-based observations for NOAA's Polar-orbiting Operational Environmental Satellites (POES) and continues the long-term record of climate quality observations established by NASA's Earth Observing System (EOS) satellites. In the 2003 to 2011 pre-launch timeframe, NASA's SNPP Science Team assessed the adequacy of the operational Raw Data Records (RDRs), Sensor Data Records (SDRs), and Environmental Data Records (EDRs) from the SNPP instruments for use in NASA Earth Science research, examined the operational algorithms used to produce those data records, and proposed a path forward for the production of climate quality products from SNPP. In order to perform these tasks, a distributed data system, the NASA Science Data Segment (SDS), ingested RDRs, SDRs, and EDRs from the NOAA Archive and Distribution and Interface Data Processing Segments, ADS and IDPS, respectively. The SDS also obtained operational algorithms for evaluation purposes from the NOAA Government Resource for Algorithm Verification, Independent Testing and Evaluation (GRAVITE). Within the NASA SDS, five Product Evaluation and Test Elements (PEATEs) received, ingested, and stored data and performed NASA's data processing, evaluation, and analysis activities. The distributed nature of this data system was established by physically housing each PEATE within one of five Climate Analysis Research Systems (CARS) located either at a NASA or a university institution. The CARS were organized around 5 key EDRs directly in support of the following NASA Earth Science focus areas: atmospheric sounding, ocean, land, ozone, and atmospheric composition products. The PEATEs provided the system-level interface with members of the NASA SNPP Science Team and other science investigators within each CARS. A sixth Earth Radiation Budget CARS was established at NASA Langley Research Center (NASA LaRC) to support instrument performance, data evaluation, and analysis for the SNPP Clouds and the Earth's Radiant Energy System (CERES) instrument. Following the 2011 launch of SNPP, spacecraft commissioning, and instrument activation, the NASA SNPP Science Team evaluated the operational RDRs, SDRs, and EDRs produced by the NOAA ADS and IDPS. A key part of that evaluation was the NASA Science Team's independent processing of operational RDRs and SDRs to EDRs using the latest NASA science algorithms. The NASA science evaluation was completed in the December 2012 to April 2014 timeframe with the release of a series of NASA Science Team Discipline Reports. In summary, these reports indicated that the RDRs produced by the SNPP instruments were of sufficiently high quality to be used to create data products suitable for NASA Earth System science and applications. However, the quality of the SDRs and EDRs was found to vary greatly when considering suitability for NASA science. The need for improvements in operational algorithms, adoption of different algorithmic approaches, greater monitoring of on-orbit instrument calibration, greater attention to data product validation, and data reprocessing were prominent findings in the reports.
In response to these findings, NASA, in late 2013, directed the NASA SNPP Science Team to use SNPP instrument data to develop data products of sufficiently high quality to enable the continuation of EOS time series data records and to develop innovative, practical applications of SNPP data. This direction necessitated a transition of the SDS data system from its pre-launch assessment mode to one of full data processing and production. To do this, the PEATEs, which served as NASA's data product testing environment during the prelaunch and early on-orbit periods, were transitioned to Science Investigator-led Processing Systems (SIPS). The distributed data architecture was maintained in this new system by locating the SIPS at the same institutions at which the CARS and PEATEs were located. The SIPS acquire raw SNPP instrument Level 0 (i.e., RDR) data over the full SNPP mission from the NOAA ADS and IDPS through the NASA SDS Data Distribution and Depository Element (SD3E). The SIPS process those data into NASA Level 1, Level 2, and global, gridded Level 3 standard products using peer-reviewed algorithms provided by members of the NASA Science Team. The SIPS work with the NASA SNPP Science Team to obtain enhanced, refined, or alternate real-time algorithms to support the capabilities of the Direct Readout Laboratory (DRL). All data products, algorithm source codes, coefficients, and auxiliary data used in product generation are archived in an assigned NASA Distributed Active Archive Center (DAAC).
Simple, Scalable, Script-based, Science Processor for Measurements - Data Mining Edition (S4PM-DME)
NASA Astrophysics Data System (ADS)
Pham, L. B.; Eng, E. K.; Lynnes, C. S.; Berrick, S. W.; Vollmer, B. E.
2005-12-01
The S4PM-DME is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web-based data mining environment. The S4PM-DME replaces the Near-line Archive Data Mining (NADM) system with a better web environment and a richer set of production rules. S4PM-DME enables registered users to submit and execute custom data mining algorithms. The S4PM-DME system uses the GES DAAC-developed Simple Scalable Script-based Science Processor for Measurements (S4PM) to automate tasks and perform the actual data processing. A web interface allows the user to access the S4PM-DME system. The user first develops personalized data mining algorithms on his/her home platform and then uploads them to the S4PM-DME system. Algorithms in the C and FORTRAN languages are currently supported. The user-developed algorithm is automatically audited for any potential security problems before it is installed within the S4PM-DME system and made available to the user. Once the algorithm has been installed, the user can promote the algorithm to the "operational" environment. From here the user can search and order the data available in the GES DAAC archive for his/her science algorithm. The user can also set up a processing subscription. The subscription will automatically process new data as they become available in the GES DAAC archive. The generated mined data products are then made available for FTP pickup. The benefits of using S4PM-DME are 1) to decrease the time it typically takes a user to transfer GES DAAC data to his/her system, thus off-loading heavy network traffic, 2) to reduce the load on the user's own system, and 3) to utilize the rich and abundant ocean and atmosphere data from the MODIS and AIRS instruments available from the GES DAAC.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena
2016-01-01
The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow-on to AIRS/AMSU. The objective of this research is to prepare for the generation of long-term CrIS/ATMS level-3 data using a finalized retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.
NASA Astrophysics Data System (ADS)
Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin
2017-08-01
Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2 series: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of the FY-4 science algorithms, two science product algorithm testbeds, for imagers and sounders, have been developed by scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in the FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager, the Himawari-8 Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near-real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.
The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms
NASA Astrophysics Data System (ADS)
Raghavan, Prabhakar
By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.
NASA Astrophysics Data System (ADS)
Kalluri, S. N.; Haman, B.; Vititoe, D.
2014-12-01
The ground system under development for the Geostationary Operational Environmental Satellite-R (GOES-R) series of weather satellites has completed a key milestone in implementing the science algorithms that process raw sensor data into higher level products in preparation for launch. Real-time observations from GOES-R are expected to make significant contributions to Earth and space weather prediction, and there are stringent requirements to produce weather products at very low latency to meet NOAA's operational needs. Simulated test data from all six GOES-R sensors are being processed by the system to test and verify the performance of the fielded system. Early results show that the system development is on track to meet functional and performance requirements to process science data. Comparison of science products generated by the ground system from simulated data with those generated by the algorithm developers shows close agreement among data sets, which demonstrates that the algorithms are implemented correctly. Successful delivery of products to AWIPS and the Product Distribution and Access (PDA) system from the core system demonstrates that the external interfaces are working.
The ALMA Science Pipeline: Current Status
NASA Astrophysics Data System (ADS)
Humphreys, Elizabeth; Miura, Rie; Brogan, Crystal L.; Hibbard, John; Hunter, Todd R.; Indebetouw, Remy
2016-09-01
The ALMA Science Pipeline is being developed for the automated calibration and imaging of ALMA interferometric and single-dish data. The calibration Pipeline for interferometric data was accepted for use by ALMA Science Operations in 2014, and for single-dish data end-to-end processing in 2015. However, work is ongoing to expand the use cases for which the Pipeline can be used, e.g. for higher frequency and lower signal-to-noise datasets, and for new observing modes. A current focus includes the commissioning of science target imaging for interferometric data. For the Single Dish Pipeline, the line finding algorithm used in baseline subtraction and the baseline flagging heuristics have been greatly improved since the prototype used for data from the previous cycle. These algorithms, unique to the Pipeline, produce better results than standard manual processing in many cases. In this poster, we report on the current status of the Pipeline capabilities, present initial results from the Imaging Pipeline, and describe the smart line finding and flagging algorithm used in the Single Dish Pipeline. The Pipeline is released as part of CASA (the Common Astronomy Software Applications package).
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory G.
2005-01-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing data, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessarily downloading all the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options, from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles the data management and data processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.
MTI science, data products, and ground-data processing overview
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Atkins, William H.; Balick, Lee K.; Borel, Christoph C.; Clodius, William B.; Christensen, R. Wynn; Davis, Anthony B.; Echohawk, J. C.; Galbraith, Amy E.; Hirsch, Karen L.; Krone, James B.; Little, Cynthia K.; McLachlan, Peter M.; Morrison, Aaron; Pollock, Kimberly A.; Pope, Paul A.; Novak, Curtis; Ramsey, Keri A.; Riddle, Emily E.; Rohde, Charles A.; Roussel-Dupre, Diane C.; Smith, Barham W.; Smith, Kathy; Starkovich, Kim; Theiler, James P.; Weber, Paul G.
2001-08-01
The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis; Iredell, Lena
2016-01-01
The AIRS Science Team Version-6 retrieval algorithm is currently producing high quality level-3 Climate Data Records (CDRs) from AIRS/AMSU which are critical for understanding climate processes. The AIRS Science Team is finalizing an improved Version-7 retrieval algorithm to reprocess all old and future AIRS data. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow on to AIRS/AMSU. The objective of this research is to prepare for generation of long term CrIS/ATMS CDRs using a retrieval algorithm that is scientifically equivalent to AIRS/AMSU Version-7.
NASA-HBCU Space Science and Engineering Research Forum Proceedings
NASA Technical Reports Server (NTRS)
Sanders, Yvonne D. (Editor); Freeman, Yvonne B. (Editor); George, M. C. (Editor)
1989-01-01
The proceedings of the Historically Black Colleges and Universities (HBCU) forum are presented. A wide range of research topics from plant science to space science and related academic areas was covered. The sessions were divided into the following subject areas: Life science; Mathematical modeling, image processing, pattern recognition, and algorithms; Microgravity processing, space utilization and application; Physical science and chemistry; Research and training programs; Space science (astronomy, planetary science, asteroids, moon); Space technology (engineering, structures and systems for application in space); Space technology (physics of materials and systems for space applications); and Technology (materials, techniques, measurements).
The Role of Visualization in Computer Science Education
ERIC Educational Resources Information Center
Fouh, Eric; Akbar, Monika; Shaffer, Clifford A.
2012-01-01
Computer science core instruction attempts to provide a detailed understanding of dynamic processes such as the working of an algorithm or the flow of information between computing entities. Such dynamic processes are not well explained by static media such as text and images, and are difficult to convey in lecture. The authors survey the history…
Supporting Abstraction Processes in Problem Solving through Pattern-Oriented Instruction
ERIC Educational Resources Information Center
Muller, Orna; Haberman, Bruria
2008-01-01
Abstraction is a major concept in computer science and serves as a powerful tool in software development. Pattern-oriented instruction (POI) is a pedagogical approach that incorporates patterns in an introductory computer science course in order to structure the learning of algorithmic problem solving. This paper examines abstraction processes in…
NASA Astrophysics Data System (ADS)
Moses, J. F.; Jain, P.; Johnson, J.; Doiron, J. A.
2017-12-01
New Earth observation instruments are planned to enable advancements in Earth science research over the next decade. The diversity of Earth observing instruments and their observing platforms will continue to increase as new instrument technologies emerge and are deployed as part of national programs such as the Joint Polar Satellite System (JPSS), the Geostationary Operational Environmental Satellite (GOES) system, and Landsat, as well as the potential for many CubeSat and aircraft missions. The practical use and value of these observational data often extend well beyond their original purpose. The practicing community needs intuitive and standardized tools to enable quick, unfettered development of tailored products for specific applications and decision support systems. However, the associated data processing system can take years to develop and requires inherent knowledge and the ability to integrate increasingly diverse data types from multiple sources. This paper describes the adaptation of a large-scale data processing system built to support the JPSS algorithm calibration and validation (Cal/Val) node into a simplified science data system for rapid application. The new configurable data system reuses scalable Java technologies built for the JPSS Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) system to run within a laptop environment and to support product generation and data processing of AURA Ozone Monitoring Instrument (OMI) science products. Of particular interest are the root requirements necessary for integrating experimental algorithms and Hierarchical Data Format (HDF) data access libraries into a science data production system. This study demonstrates the ability to reuse existing ground system technologies to support future missions with minimal changes.
Pre-Launch Evaluation of the NPP VIIRS Land and Cryosphere EDRs to Meet NASA's Science Requirements
NASA Technical Reports Server (NTRS)
Roman, Miguel O.; Justice, Chris; Csiszar, Ivan; Key, Jeffrey R.; Devadiga, Sadashiva; Davidson, carol; Wolfe, Robert; Privette, Jeff
2011-01-01
This paper summarizes the NASA Visible Infrared Imaging Radiometer Suite (VIIRS) Land Science team's findings to date with respect to the utility of the VIIRS Land and Cryosphere EDRs to meet NASA's science requirements. Based on previous assessments and results from a recent 51-day global test performed by the Land Product Evaluation and Analysis Tool Element (Land PEATE), the NASA VIIRS Land Science team has determined that, if all the Land and Cryosphere EDRs are to serve the needs of the science community, a number of changes to several products and the Interface Data Processing Segment (IDPS) algorithm processing chain will be needed. In addition, other products will also need to be added to the VIIRS Land product suite to provide continuity for all of the MODIS land data record. As the NASA research program explores new global change research areas, the VIIRS instrument should also provide the polar-orbiting imager data from which new algorithms could be developed, produced, and validated.
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and Mapping Science (MSI), for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently in the process of evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.
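A simple way to quantify the spectral-fidelity trade-off described above is to compare each reconstructed pixel spectrum with the original, for example via reconstruction signal-to-noise ratio and mean spectral angle. The metric functions below are a generic sketch for illustration, not ASIT's or MSI's evaluation code.

```python
import numpy as np

def snr_db(original, reconstructed):
    """Reconstruction signal-to-noise ratio in dB over a hyperspectral cube."""
    noise = np.sum((original - reconstructed) ** 2)
    return 10.0 * np.log10(np.sum(original ** 2) / max(noise, 1e-30))

def mean_spectral_angle(original, reconstructed):
    """Mean spectral angle (radians) between original and reconstructed spectra.
    Both arrays have shape (n_pixels, n_bands)."""
    dot = np.sum(original * reconstructed, axis=1)
    norms = np.linalg.norm(original, axis=1) * np.linalg.norm(reconstructed, axis=1)
    cosines = np.clip(dot / np.maximum(norms, 1e-30), -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))

# Example with a synthetic 100-pixel, 220-band cube and a lossy "reconstruction"
cube = np.random.rand(100, 220)
recon = cube + 0.01 * np.random.randn(100, 220)
print(snr_db(cube, recon), mean_spectral_angle(cube, recon))
```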
Framework for Integrating Science Data Processing Algorithms Into Process Control Systems
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.
2011-01-01
A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.
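The lifecycle described above (stage inputs, build a config, run the PGE, then crawl and ingest outputs) can be summarized in a small wrapper sketch. This is a hypothetical simplification in Python; the class and method names are invented for the example and do not reflect the real PCS Task Wrapper interfaces.

```python
import json
import shutil
import subprocess
from pathlib import Path

class TaskWrapper:
    """Hypothetical sketch of a PGE task wrapper: stage inputs, write a config,
    execute the science executable, then hand outputs to file management."""

    def __init__(self, workdir):
        self.workdir = Path(workdir)
        self.workdir.mkdir(parents=True, exist_ok=True)

    def stage_inputs(self, input_files):
        for f in input_files:
            shutil.copy(f, self.workdir)

    def write_config(self, params):
        cfg = self.workdir / "pge_config.json"
        cfg.write_text(json.dumps(params, indent=2))
        return cfg

    def run_pge(self, executable, config_path):
        # The PGE sees only its working directory and the config file it was given.
        subprocess.run([executable, str(config_path)], cwd=self.workdir, check=True)

    def ingest_outputs(self, pattern, archive_dir):
        # Stand-in for the crawler step that registers products with file management.
        archive = Path(archive_dir)
        archive.mkdir(parents=True, exist_ok=True)
        for product in self.workdir.glob(pattern):
            shutil.move(str(product), archive / product.name)
```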
Heuristic and algorithmic processing in English, mathematics, and science education.
Sharps, Matthew J; Hess, Adam B; Price-Sharps, Jana L; Teh, Jane
2008-01-01
Many college students experience difficulties in basic academic skills. Recent research suggests that much of this difficulty may lie in heuristic competency--the ability to use and successfully manage general cognitive strategies. In the present study, the authors evaluated this possibility. They compared participants' performance on a practice California Basic Educational Skills Test and on a series of questions in the natural sciences with heuristic and algorithmic performance on a series of mathematics and reading comprehension exercises. Heuristic competency in mathematics was associated with better scores in science and mathematics. Verbal and algorithmic skills were associated with better reading comprehension. These results indicate the importance of including heuristic training in educational contexts and highlight the importance of a relatively domain-specific approach to questions of cognition in higher education.
NASA Astrophysics Data System (ADS)
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
The Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
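One common way to make the quantitative part of such an evaluation concrete is to difference the products from the two processing runs and summarize bias and RMS error. The helper below is a generic sketch (field names, shapes, and tolerances are illustrative), not the CrIMSS evaluation code.

```python
import numpy as np

def compare_products(baseline, candidate, rtol=1e-5):
    """Summarize differences between two versions of a retrieved field
    (e.g., temperature profiles), ignoring fill values marked as NaN."""
    valid = np.isfinite(baseline) & np.isfinite(candidate)
    diff = candidate[valid] - baseline[valid]
    return {
        "n_compared": int(valid.sum()),
        "bias": float(diff.mean()),
        "rmse": float(np.sqrt((diff ** 2).mean())),
        "max_abs": float(np.abs(diff).max()),
        "within_tolerance": bool(np.allclose(candidate[valid], baseline[valid], rtol=rtol)),
    }

# Example with synthetic temperature profiles (K): 1000 profiles, 22 levels
base = 250.0 + 5.0 * np.random.randn(1000, 22)
cand = base + 0.05 * np.random.randn(1000, 22)
print(compare_products(base, cand))
```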
ERIC Educational Resources Information Center
Lamb, Richard L.; Firestone, Jonah B.
2017-01-01
Conflicting explanations and unrelated information in science classrooms increase cognitive load and decrease efficiency in learning. This reduced efficiency ultimately limits one's ability to solve reasoning problems in the sciences. In reasoning, it is the ability of students to sift through and identify critical pieces of information that is of…
ERIC Educational Resources Information Center
Tataw, Oben Moses
2013-01-01
Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
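For a reader unfamiliar with the "probability that a new mutant takes over" quantity, the standard Moran-process fixation probability for a well-mixed (non-spatial) population gives a useful baseline: rho = (1 - 1/r) / (1 - 1/r^N) for relative fitness r and population size N. The snippet below simply evaluates that closed form as background; it is not the paper's spatial complexity analysis.

```python
def moran_fixation_probability(r, n):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed Moran process of population size n."""
    if abs(r - 1.0) < 1e-12:       # neutral mutant
        return 1.0 / n
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** n)

# A slightly advantageous mutant (r = 1.1) in a population of 100
print(moran_fixation_probability(1.1, 100))   # roughly 0.091
```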
Quality Assessment of Collection 6 MODIS Atmospheric Science Products
NASA Astrophysics Data System (ADS)
Manoharan, V. S.; Ridgway, B.; Platnick, S. E.; Devadiga, S.; Mauoka, E.
2015-12-01
Since the launch of the NASA Terra and Aqua satellites in December 1999 and May 2002, respectively, atmosphere and land data acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on board these satellites have been reprocessed five times at MODAPS (the MODIS Adaptive Processing System) located at NASA GSFC. The global land and atmosphere products use science algorithms developed by the NASA MODIS science team investigators. MODAPS completed Collection 6 reprocessing of the MODIS Atmosphere science data products in April 2015 and is currently generating the Collection 6 products using the latest version of the science algorithms. This reprocessing has generated one of the longest time series of consistent data records for understanding clouds, aerosols, and other constituents in the earth's atmosphere. It is important to carefully evaluate and assess the quality of these data and remove any artifacts to maintain a useful climate data record. Quality Assessment (QA) is an integral part of the processing chain at MODAPS. This presentation will describe the QA approaches and tools adopted by the MODIS Land/Atmosphere Operational Product Evaluation (LDOPE) team to assess the quality of MODIS operational Atmosphere products produced at MODAPS. Some of the tools include global high resolution images, time series analysis, and statistical QA metrics. The new high resolution global browse images with pan and zoom have provided the ability to perform QA of products in real time through synoptic QA on the web. This global browse generation has been useful in identifying production errors, data loss, and data quality issues arising from calibration errors, geolocation errors, and algorithm performance. A time series analysis for various science datasets in the Level-3 monthly product was recently developed for assessing any long-term drifts in the data arising from instrument errors or other artifacts. This presentation will describe and discuss some test cases from the recently processed C6 products. We will also describe the various tools and approaches developed to verify and assess the algorithm changes implemented by the science team to address known issues in the products and improve the quality of the products.
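A minimal version of the time-series drift check described for the Level-3 monthly products is a least-squares trend fit with a simple flag on the slope. The sketch below is illustrative only, with an invented threshold, and is not the LDOPE tooling.

```python
import numpy as np

def detect_drift(values, times=None, slope_threshold=0.0):
    """Fit a linear trend to a Level-3 monthly time series and report the slope.
    values: 1-D array of a monthly global statistic (e.g., mean cloud fraction)."""
    values = np.asarray(values, dtype=float)
    t = np.arange(values.size) if times is None else np.asarray(times, dtype=float)
    slope, intercept = np.polyfit(t, values, 1)
    fitted = slope * t + intercept
    ss_res = np.sum((values - fitted) ** 2)
    ss_tot = np.sum((values - values.mean()) ** 2)
    r_squared = 1.0 - ss_res / max(ss_tot, 1e-30)
    return {"slope": float(slope), "r_squared": float(r_squared),
            "flagged": bool(abs(slope) > slope_threshold)}

# Example: 120 months of a stable statistic with a small artificial drift
series = 0.66 + 0.0002 * np.arange(120) + 0.01 * np.random.randn(120)
print(detect_drift(series, slope_threshold=1e-4))
```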
The GLAS Science Algorithm Software (GSAS) User's Guide Version 7
NASA Technical Reports Server (NTRS)
Lee, Jeffrey E.
2013-01-01
The Geoscience Laser Altimeter System (GLAS) is the primary instrument for the ICESat (Ice, Cloud and Land Elevation Satellite) laser altimetry mission. ICESat was the benchmark Earth Observing System (EOS) mission for measuring ice sheet mass balance and cloud and aerosol heights, as well as land topography and vegetation characteristics. From 2003 to 2009, the ICESat mission provided multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It also provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. This document is the final version of the GLAS Science Algorithm Software User's Guide. It contains the instructions to install the GLAS Science Algorithm Software (GSAS) in the production environment that was used to create the standard data products. It also describes the usage of each GSAS program in that environment, with their required inputs and outputs. Included are a number of utility programs that are used to create ancillary data files that are used in processing but generally are not distributed to the public as data products. The values of the large number of constants used by the GSAS algorithms during processing are provided in an appendix.
Convergence of Proximal Iteratively Reweighted Nuclear Norm Algorithm for Image Processing.
Sun, Tao; Jiang, Hao; Cheng, Lizhi
2017-08-25
Nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. Using the Kurdyka-Łojasiewicz property, we prove that the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings.
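For context, a proximal iteratively reweighted nuclear norm step of the kind studied in such work is often written as a linearized proximal update with weights taken from the previous iterate. The notation below is chosen for this note (it is a generic form, not copied from the paper): the objective is min_X sum_i g(sigma_i(X)) + f(X) with f smooth and g concave on the nonnegative reals.

```latex
\begin{aligned}
  w_i^{k}  &= g'\!\bigl(\sigma_i(X^{k})\bigr), \\
  X^{k+1}  &= \operatorname*{arg\,min}_{X}\;
              \sum_i w_i^{k}\,\sigma_i(X)
              + \bigl\langle \nabla f(X^{k}),\, X - X^{k}\bigr\rangle
              + \tfrac{\mu}{2}\,\bigl\lVert X - X^{k}\bigr\rVert_F^{2}.
\end{aligned}
```

Under the usual ordering condition on the weights (which holds when g is concave, since larger singular values then receive smaller weights), the subproblem has a closed-form solution by weighted singular value thresholding of Y^k = X^k - (1/mu) grad f(X^k), namely sigma_i(X^{k+1}) = max(sigma_i(Y^k) - w_i^k/mu, 0).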
Photometer Performance Assessment in Kepler Science Data Processing
NASA Technical Reports Server (NTRS)
Li, Jie; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Clarke, Bruce D.; Gunter, Jay P.; Jenkins, Jon M.; Klaus, Todd C.; Quintana, Elisa V.;
2010-01-01
This paper describes the algorithms of the Photometer Performance Assessment (PPA) software component in the science data processing pipeline of the Kepler mission. The PPA performs two tasks: One is to analyze the health and performance of the Kepler photometer based on the long cadence science data down-linked via Ka band approximately every 30 days. The second is to determine the attitude of the Kepler spacecraft with high precision at each long cadence. The PPA component is demonstrated to work effectively with the Kepler flight data.
Photometric analysis in the Kepler Science Operations Center pipeline
NASA Astrophysics Data System (ADS)
Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.
2010-07-01
We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
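The two core PA measurements named here, simple aperture photometry and flux-weighted centroiding, reduce to short array operations. The sketch below is a schematic illustration (background estimation, uncertainty propagation, and cosmic-ray cleaning are omitted, and the constant background value is invented), not the SOC pipeline code.

```python
import numpy as np

def aperture_photometry(pixels, in_aperture, background_per_pixel=0.0):
    """Sum the background-subtracted flux of the pixels in the aperture.
    pixels: 2-D array of calibrated pixel values for one cadence.
    in_aperture: boolean mask of the same shape."""
    return float(np.sum(pixels[in_aperture] - background_per_pixel))

def flux_weighted_centroid(pixels, in_aperture):
    """Flux-weighted mean row/column position over the aperture pixels."""
    rows, cols = np.nonzero(in_aperture)
    flux = pixels[rows, cols]
    total = flux.sum()
    return float((rows * flux).sum() / total), float((cols * flux).sum() / total)

# Example: a synthetic star on a 7x7 postage stamp with a 3x3 aperture
img = np.random.poisson(10.0, (7, 7)).astype(float)
img[3, 3] += 500.0
mask = np.zeros_like(img, dtype=bool)
mask[2:5, 2:5] = True
print(aperture_photometry(img, mask, background_per_pixel=10.0))
print(flux_weighted_centroid(img, mask))
```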
The Nasa-Isro SAR Mission Science Data Products and Processing Workflows
NASA Astrophysics Data System (ADS)
Rosen, P. A.; Agram, P. S.; Lavalle, M.; Cohen, J.; Buckley, S.; Kumar, R.; Misra-Ray, A.; Ramanujam, V.; Agarwal, K. M.
2017-12-01
The NASA-ISRO SAR (NISAR) Mission is currently in the development phase and in the process of specifying its suite of data products and algorithmic workflows, responding to inputs from the NISAR Science and Applications Team. NISAR will provide raw data (Level 0), full-resolution complex imagery (Level 1), and interferometric and polarimetric image products (Level 2) for the entire data set, in both natural radar and geocoded coordinates. NASA and ISRO are coordinating the formats, metadata layers, and algorithms for these products, for both the NASA-provided L-band radar and the ISRO-provided S-band radar. Higher level products will also be generated for the purpose of calibration and validation over large areas of Earth, including tectonic plate boundaries, ice sheets and sea ice, and areas of ecosystem disturbance and change. This level of comprehensive product generation has been unprecedented for SAR missions in the past, and it leads to storage and processing challenges for the production system and the archive center. Further, recognizing the potential to support applications that require low-latency product generation and delivery, the NISAR team is optimizing the entire end-to-end ground data system for such response, including exploring the advantages of cloud-based processing, algorithmic acceleration using GPUs, and on-demand processing schemes that minimize computational and transport costs but allow rapid delivery to science and applications users. This paper will review the current products and workflows and discuss the scientific and operational trade space of mission capabilities.
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model, and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, several shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic algorithm based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We will showcase how these algorithms can be used to process multivariate data from astronomical observations.
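A minimal sketch of information-score-based model selection (here an ordinary AIC comparison of least-squares polynomial fits, used as a stand-in for the dissertation's specific scoring rule):

```python
import numpy as np

def aic_least_squares(y, y_fit, k):
    """AIC for a Gaussian least-squares fit with k free parameters."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x - 1.5 * x**2 + rng.normal(0, 0.05, x.size)   # synthetic data

scores = {}
for degree in range(1, 6):                      # candidate models of increasing complexity
    coeffs = np.polyfit(x, y, degree)
    scores[degree] = aic_least_squares(y, np.polyval(coeffs, x), degree + 1)

best = min(scores, key=scores.get)              # lowest score balances fit and parsimony
print(best, scores)
```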
NASA GPM GV Science Implementation
NASA Technical Reports Server (NTRS)
Petersen, W. A.
2009-01-01
Pre-launch algorithm development & post-launch product evaluation: The GPM GV paradigm moves beyond traditional direct validation/comparison activities by incorporating improved algorithm physics & model applications (end-to-end validation) in the validation process. Three approaches: 1) National Network (surface): Operational networks to identify and resolve first order discrepancies (e.g., bias) between satellite and ground-based precipitation estimates. 2) Physical Process (vertical column): Cloud system and microphysical studies geared toward testing and refinement of physically-based retrieval algorithms. 3) Integrated (4-dimensional): Integration of satellite precipitation products into coupled prediction models to evaluate strengths/limitations of satellite precipitation products.
Is Self-organization a Rational Expectation?
NASA Astrophysics Data System (ADS)
Luediger, Heinz
Over decades and under varying names the study of biology-inspired algorithms applied to non-living systems has been the subject of a small and somewhat exotic research community. Only the recent coincidence of a growing inability to master the design, development and operation of increasingly intertwined systems and processes, and an accelerated trend towards a naïve if not romanticizing view of nature in the sciences, has led to the adoption of biology-inspired algorithmic research by a wider range of sciences. Adaptive systems, as we apparently observe in nature, are meanwhile viewed as a promising way out of the complexity trap and, propelled by a long list of ‘self’ catchwords, complexity research has become an influential stream in the science community. This paper presents four provocative theses that cast doubt on the strategic potential of complexity research and the viability of large scale deployment of biology-inspired algorithms in an expectation driven world.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
The Kepler Science Data Processing Pipeline Source Code Road Map
NASA Technical Reports Server (NTRS)
Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima;
2016-01-01
We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.
Jaton, Florian
2017-01-01
This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802
NASA Astrophysics Data System (ADS)
Houchin, J. S.
2014-09-01
A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required by each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.
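The abstract does not describe the scripts' interfaces, but the LUT-management idea (staging the correct set of static LUT versions for an algorithm while overriding selected tables with experimental files) could be sketched as follows; all paths, names, and file formats are hypothetical.

```python
from pathlib import Path
import shutil

# Hypothetical on-disk library of static LUTs, organized by algorithm and version.
LUT_LIBRARY = Path("/data/lut_library")

def stage_luts(algorithm, versions, overrides=None, workdir=Path("run/luts")):
    """Copy the LUT versions an algorithm needs into its working directory,
    replacing selected tables with experimental versions when provided."""
    workdir.mkdir(parents=True, exist_ok=True)
    overrides = overrides or {}
    for name, version in versions.items():
        src = overrides.get(name, LUT_LIBRARY / algorithm / version / f"{name}.h5")
        shutil.copy(src, workdir / f"{name}.h5")
    return workdir

# Illustrative call: stage the baseline gain table but override the dark-current
# table with an experimental file under test (paths are invented).
# stage_luts("VIIRS_SDR",
#            {"gain": "v2.3", "dark_current": "v2.3"},
#            overrides={"dark_current": Path("experiments/dark_current_trial7.h5")})
```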
Riding the Hype Wave: Evaluating new AI Techniques for their Applicability in Earth Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Zhang, J.; Maskey, M.; Lee, T. J.
2016-12-01
Every few years a new technology rides the hype wave generated by the computer science community. Converts to this new technology who surface from both the science community and the informatics community promulgate that it can radically improve or even change the existing scientific process. Recent examples of new technology following in the footsteps of "big data" now include deep learning algorithms and knowledge graphs. Deep learning algorithms mimic the human brain and process information through multiple stages of transformation and representation. These algorithms are able to learn complex functions that map pixels directly to outputs without relying on human-crafted features and solve some of the complex classification problems that exist in science. Similarly, knowledge graphs aggregate information around defined topics that enable users to resolve their query without having to navigate and assemble information manually. Knowledge graphs could potentially be used in scientific research to assist in hypothesis formulation, testing, and review. The challenge for the Earth science research community is to evaluate these new technologies by asking the right questions and considering what-if scenarios. What is this new technology enabling/providing that is innovative and different? Can one justify the adoption costs with respect to the research returns? Since nothing comes for free, utilizing a new technology entails adoption costs that may outweigh the benefits. Furthermore, these technologies may require significant computing infrastructure in order to be utilized effectively. Results from two different projects will be presented along with lessons learned from testing these technologies. The first project primarily evaluates deep learning techniques for different applications of image retrieval within Earth science while the second project builds a prototype knowledge graph constructed for Hurricane science.
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.
2016-01-01
Richard O. Duda and Peter E. Hart of Stanford Research Institute in [1] described the recurring problem in computer image processing of detecting straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an image of n = N x M points is approximately proportional to n^2, i.e., O(n^2), becoming prohibitive for large images or when the data processing cadence time is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods of solving a similar problem, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are slow for large images that require a processing cadence of a few dozen milliseconds (~50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma ray measurements contaminated by an overwhelming number of cosmic ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma ray surveys employ large detector readout planes registering multitudes of cosmic ray interference events and sparse projections of science gamma ray event traces. The AdEPT science of interest is in the gamma ray events, and the problem is to detect and reject the much more voluminous cosmic ray projections so that the remaining science data can be telemetered to the ground over the constrained communication link. The state of the art in cosmic ray detection and rejection does not provide an adequate computational solution. This paper presents a novel approach to AdEPT on-board data processing burdened by the CR-detection bottleneck. The paper introduces the data processing object, demonstrates object segmentation and distribution for processing among many processing elements (PEs), and presents a solution algorithm for the processing bottleneck, the CR-Algorithm. The algorithm is based on the a priori knowledge that a CR pierces the entire instrument pressure vessel. This phenomenon is also the basis for a straightforward CR simulator, allowing performance testing of the CR-Algorithm. Parallel processing of the readout image's (2(N+M) - 4) peripheral voxels detects all CRs, resulting in O(n) computational complexity. The algorithm's near real-time performance makes AdEPT-class spaceflight instruments feasible.
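A minimal sketch of the peripheral-scan idea: because a cosmic ray pierces the entire pressure vessel, its track must cross the readout boundary, so scanning only the 2(N+M) - 4 border pixels finds CR entry/exit points in O(n). The array names and threshold are illustrative, not the flight algorithm.

```python
import numpy as np

def peripheral_cr_seeds(readout, threshold):
    """Return (row, col) positions of boundary pixels above threshold.
    These seed points mark where cosmic-ray tracks enter or exit the readout plane."""
    border = np.zeros_like(readout, dtype=bool)
    border[0, :] = border[-1, :] = True           # top and bottom rows
    border[:, 0] = border[:, -1] = True           # left and right columns: 2(N+M)-4 pixels total
    hits = border & (readout > threshold)
    return np.argwhere(hits)

# Toy readout with a simulated straight cosmic-ray track crossing the full plane.
img = np.zeros((64, 64))
rr = np.arange(64)
img[rr, rr] = 10.0                                # diagonal track from corner to corner
print(peripheral_cr_seeds(img, threshold=5.0))    # -> entry/exit pixels on the border
```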
The JPSS Ground Project Algorithm Verification, Test and Evaluation System
NASA Astrophysics Data System (ADS)
Vicente, G. A.; Jain, P.; Chander, G.; Nguyen, V. T.; Dixon, V.
2016-12-01
The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) is an operational system that provides services to the Suomi National Polar-orbiting Partnership (S-NPP) Mission. It is also a unique environment for Calibration/Validation (Cal/Val) and Data Quality Assessment (DQA) of the Joint Polar Satellite System (JPSS) mission data products. GRAVITE provides fast and direct access to the data and products created by the Interface Data Processing Segment (IDPS), the NASA/NOAA operational system that converts Raw Data Records (RDR's) generated by sensors on the S-NPP into calibrated, geo-located Sensor Data Records (SDR's) and generates Mission Unique Products (MUPS). It also facilitates algorithm investigation, integration, checkouts and tuning, instrument and product calibration and data quality support, monitoring and data/products distribution. GRAVITE is the portal for the latest S-NPP and JPSS baselined Processing Coefficient Tables (PCT's) and Look-Up-Tables (LUT's) and hosts a number of DQA offline tools that take advantage of the proximity to the near-real-time data flows. It also contains a set of automated and ad-hoc Cal/Val tools used for algorithm analysis and updates, including an instance of the IDPS called the GRAVITE Algorithm Development Area (G-ADA), which has the latest installation of the IDPS algorithms running on identical software and hardware platforms. Two other important GRAVITE components are the Investigator-led Processing System (IPS) and the Investigator Computing Facility (ICF). The IPS is a dedicated environment where authorized users run automated scripts called Product Generation Executables (PGE's) to support Cal/Val and data quality assurance offline. This data-rich and data-driven service holds its own distribution system and allows operators to retrieve science data products. The ICF is a workspace where users can share computing applications and resources and have full access to libraries and science and sensor quality analysis tools. In this presentation we will describe the GRAVITE systems and subsystems, architecture, technical specifications, capabilities and resources, distributed data and products, and the latest advances to support JPSS science algorithm implementation, validation and testing.
Arnulf, Jan Ketil; Larsen, Kai Rune; Martinsen, Øyvind Lund; Bong, Chih How
2014-01-01
Some disciplines in the social sciences rely heavily on collecting survey responses to detect empirical relationships among variables. We explored whether these relationships were a priori predictable from the semantic properties of the survey items, using language processing algorithms which are now available as new research methods. Language processing algorithms were used to calculate the semantic similarity among all items in state-of-the-art surveys from Organisational Behaviour research. These surveys covered areas such as transformational leadership, work motivation and work outcomes. This information was used to explain and predict the response patterns from real subjects. Semantic algorithms explained 60–86% of the variance in the response patterns and allowed remarkably precise prediction of survey responses from humans, except in a personality test. Even the relationships between independent and their purported dependent variables were accurately predicted. This raises concern about the empirical nature of data collected through some surveys if results are already given a priori through the way subjects are being asked. Survey response patterns seem heavily determined by semantics. Language algorithms may suggest these prior to administering a survey. This study suggests that semantic algorithms are becoming new tools for the social sciences, opening perspectives on survey responses that prevalent psychometric theory cannot explain. PMID:25184672
The Algorithm Theoretical Basis Document for Level 1A Processing
NASA Technical Reports Server (NTRS)
Jester, Peggy L.; Hancock, David W., III
2012-01-01
The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.
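The actual scale factors, biases, and coefficients are instrument-specific and defined in the document; the generic form of the conversion is a linear calibration, engineering value = scale x counts + bias, as in this sketch with made-up coefficients.

```python
def counts_to_engineering_units(counts, scale, bias):
    """Generic Level 0 -> Level 1A conversion: apply a scale factor and bias."""
    return scale * counts + bias

# Hypothetical example: raw telemetry counts converted to a temperature in kelvin.
raw_counts = [1023, 1187, 1342]
temperatures_k = [counts_to_engineering_units(c, scale=0.25, bias=150.0) for c in raw_counts]
print(temperatures_k)
```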
The CCSDS Lossless Data Compression Algorithm for Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Day, John H. (Technical Monitor)
2001-01-01
In the late 80's, when the author started working at the Goddard Space Flight Center (GSFC) for the National Aeronautics and Space Administration (NASA), several scientists there were in the process of formulating the next generation of Earth viewing science instruments, the Moderate Resolution Imaging Spectroradiometer (MODIS). The instrument would have over thirty spectral bands and would transmit enormous data through the communications channel. This was when the author was assigned the task of investigating lossless compression algorithms for space implementation to compress science data in order to reduce the requirement on bandwidth and storage.
Data Mining Web Services for Science Data Repositories
NASA Astrophysics Data System (ADS)
Graves, S.; Ramachandran, R.; Keiser, K.; Maskey, M.; Lynnes, C.; Pham, L.
2006-12-01
The maturation of web services standards and technologies sets the stage for a distributed "Service-Oriented Architecture" (SOA) for NASA's next generation science data processing. This architecture will allow members of the scientific community to create and combine persistent distributed data processing services and make them available to other users over the Internet. NASA has initiated a project to create a suite of specialized data mining web services designed specifically for science data. The project leverages the Algorithm Development and Mining (ADaM) toolkit as its basis. The ADaM toolkit is a robust, mature and freely available science data mining toolkit that is being used by several research organizations and educational institutions worldwide. These mining services will give the scientific community a powerful and versatile data mining capability that can be used to create higher order products such as thematic maps from current and future NASA satellite data records with methods that are not currently available. The package of mining and related services are being developed using Web Services standards so that community-based measurement processing systems can access and interoperate with them. These standards-based services allow users different options for utilizing them, from direct remote invocation by a client application to deployment of a Business Process Execution Language (BPEL) solutions package where a complex data mining workflow is exposed to others as a single service. The ability to deploy and operate these services at a data archive allows the data mining algorithms to be run where the data are stored, a more efficient scenario than moving large amounts of data over the network. This will be demonstrated in a scenario in which a user uses a remote Web-Service-enabled clustering algorithm to create cloud masks from satellite imagery at the Goddard Earth Sciences Data and Information Services Center (GES DISC).
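As a purely hypothetical sketch of direct remote invocation by a client application (the endpoint URL, parameter names, and response format are invented for illustration and are not the actual ADaM service interface):

```python
import json
import urllib.request

# Hypothetical endpoint for a remote clustering (cloud-mask) mining service.
SERVICE_URL = "https://example.org/mining/kmeans"

def invoke_clustering(granule_url, n_clusters=2):
    """POST a mining request describing the input granule and desired parameters."""
    payload = json.dumps({"input": granule_url, "clusters": n_clusters}).encode()
    request = urllib.request.Request(SERVICE_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)       # e.g., a URL to the resulting cloud-mask product

# invoke_clustering("https://example.org/data/modis_granule.hdf")   # illustrative call
```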
An overview of the CATS level 1 processing algorithms and data products
NASA Astrophysics Data System (ADS)
Yorks, J. E.; McGill, M. J.; Palm, S. P.; Hlavka, D. L.; Selmer, P. A.; Nowottnick, E. P.; Vaughan, M. A.; Rodier, S. D.; Hart, W. D.
2016-05-01
The Cloud-Aerosol Transport System (CATS) is an elastic backscatter lidar that was launched on 10 January 2015 to the International Space Station (ISS). CATS provides both space-based technology demonstrations for future Earth Science missions and operational science measurements. This paper outlines the CATS Level 1 data products and processing algorithms. Initial results and validation data demonstrate the ability to accurately detect optically thin atmospheric layers with 1064 nm nighttime backscatter as low as 5.0E-5 km-1 sr-1. This sensitivity, along with the orbital characteristics of the ISS, enables the use of CATS data for cloud and aerosol climate studies. The near-real-time downlinking and processing of CATS data are unprecedented capabilities and provide data that have applications such as forecasting of volcanic plume transport for aviation safety and aerosol vertical structure that will improve air quality health alerts globally.
NASA Technical Reports Server (NTRS)
Tilmes, Curt A.; Fleig, Albert J.
2008-01-01
NASA's traditional science data processing systems have focused on specific missions, providing data access, processing and services to the funded science teams of those specific missions. Recently NASA has been modifying this stance, changing the focus from Missions to Measurements. Where a specific Mission has a discrete beginning and end, the Measurement considers long term data continuity across multiple missions. Total Column Ozone, a critical measurement of atmospheric composition, has been monitored for decades on a series of Total Ozone Mapping Spectrometer (TOMS) instruments. Some important European missions also monitor ozone, including the Global Ozone Monitoring Experiment (GOME) and SCIAMACHY. With the U.S./European cooperative launch of the Dutch Ozone Monitoring Instrument (OMI) on the NASA Aura satellite, and the GOME-2 instrument on MetOp, the ozone monitoring record has been further extended. In conjunction with the U.S. Department of Defense (DoD) and the National Oceanic and Atmospheric Administration (NOAA), NASA is now preparing to evaluate data and algorithms for the next generation Ozone Mapping and Profiler Suite (OMPS), which will launch on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) in 2010. NASA is constructing the Science Data Segment (SDS), which is comprised of several elements to evaluate the various NPP data products and algorithms. The NPP SDS Ozone Product Evaluation and Test Element (PEATE) will build on the heritage of the TOMS and OMI mission-based processing systems. The overall measurement-based system that will encompass these efforts is the Atmospheric Composition Processing System (ACPS). We have extended the system to include access to publicly available data sets from other instruments where feasible, including non-NASA missions as appropriate. The heritage system was largely monolithic, providing a very controlled processing flow from data ingest of satellite data to the ultimate archive of specific operational data products. The ACPS allows more open access with standard protocols including HTTP, SOAP/XML, RSS and various REST incarnations. External entities can be granted access to various modules within the system, including an extended data archive, metadata searching, production planning and processing. Data access is provided with very fine grained access control. It is possible to easily designate certain datasets as being available to the public, or restricted to groups of researchers, or limited strictly to the originator. This can be used, for example, to release one's best validated data to the public, but restrict the "new version" of data processed with a new, unproven algorithm until it is ready. Similarly, the system can provide access to algorithms, both as modifiable source code (where possible) and fully integrated executable Algorithm Plugin Packages (APPs). This enables researchers to download publicly released versions of the processing algorithms and easily reproduce the processing remotely, while interacting with the ACPS. The algorithms can be modified allowing better experimentation and rapid improvement. The modified algorithms can be easily integrated back into the production system for large scale bulk processing to evaluate improvements. The system includes complete provenance tracking of algorithms, data and the entire processing environment.
The origin of any data or algorithms is recorded, and the entire history of the processing chains is stored such that a researcher can understand the entire data flow. Provenance is captured in a form suitable for the system to guarantee scientific reproducibility of any data product it distributes, even in cases where the physical data products themselves have been deleted due to space constraints. We are currently working on Semantic Web ontologies for representing the various provenance information. A new web site focusing on consolidating information about the measurement, processing system, and data access has been established to encourage interaction with the overall scientific community. We will describe the system, its data processing capabilities, and the methods the community can use to interact with the standard interfaces of the system.
NASA Astrophysics Data System (ADS)
Rachmawati, D.; Budiman, M. A.; Siburian, W. S. E.
2018-05-01
In the process of exchanging files, security is indispensable to avoid the theft of data. Cryptography is one of the sciences used to secure data by encoding it. The Fast Data Encipherment Algorithm (FEAL) is a symmetric block-cipher cryptographic algorithm. The file to be protected is therefore encrypted and decrypted using the FEAL algorithm. To optimize the security of the data, the session key used by FEAL is encoded with the Goldwasser-Micali algorithm, an asymmetric cryptographic algorithm based on a probabilistic concept. In the encryption process, the key is converted into binary form. The random selection of the values of x causes the resulting cipher of the key to differ for each binary value. The combination of symmetric and asymmetric algorithms is called a hybrid cryptosystem. The use of the FEAL and Goldwasser-Micali algorithms can restore the message to its original form, and for FEAL the time required for encryption and decryption is directly proportional to the length of the message. For the Goldwasser-Micali algorithm, however, the length of the message is not directly proportional to the time of encryption and decryption.
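A minimal sketch of Goldwasser-Micali encryption of a binary session key, assuming toy parameters (real use requires large primes and a properly chosen quadratic non-residue); this is illustrative, not the paper's implementation.

```python
# Toy sketch of Goldwasser-Micali bit-wise encryption of a session key.
# The parameters below are illustrative only; a real system needs large primes
# p, q and a quadratic non-residue x modulo both p and q.
import math
import random

def gm_encrypt_bits(bits, n, x):
    """Encrypt each bit b as c = y^2 * x^b mod n with fresh randomness y."""
    cipher = []
    for b in bits:
        while True:
            y = random.randrange(2, n - 1)
            if math.gcd(y, n) == 1:          # y must be coprime to n
                break
        cipher.append((y * y * pow(x, b, n)) % n)
    return cipher

def gm_decrypt_bits(cipher, p):
    """Decrypt with a prime factor p of n: the bit is 0 iff c is a quadratic residue mod p."""
    bits = []
    for c in cipher:
        legendre = pow(c % p, (p - 1) // 2, p)   # Euler's criterion
        bits.append(0 if legendre == 1 else 1)
    return bits

# Toy example: p = 7, q = 11, n = 77; x = 6 is a non-residue modulo both 7 and 11.
session_key_bits = [1, 0, 1, 1, 0, 0, 1, 0]
ciphertext = gm_encrypt_bits(session_key_bits, n=77, x=6)
assert gm_decrypt_bits(ciphertext, p=7) == session_key_bits
```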
Meta-heuristic algorithms as tools for hydrological science
NASA Astrophysics Data System (ADS)
Yoo, Do Guen; Kim, Joong Hoon
2014-12-01
In this paper, meta-heuristic optimization techniques and their applications to water resources engineering, particularly hydrological science, are introduced. In recent years, meta-heuristic optimization techniques have been introduced that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory use without requiring complex derivatives. Simulation-based meta-heuristic methods such as Genetic Algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome the several drawbacks of traditional mathematical methods. For example, HS algorithms can be conceptualized from a musical performance process and used to achieve better harmony; such optimization algorithms seek a near global optimum determined by the value of an objective function, providing a more robust determination of musical performance than can be achieved through typical aesthetic estimation. In this paper, meta-heuristic algorithms and their applications (with a focus on GAs and HS) in hydrological science are discussed by subject, including a review of existing literature in the field. Then, recent trends in optimization are presented, and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
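A minimal Harmony Search sketch for a toy minimization problem; the parameter values and objective function are illustrative only and are not tied to any particular hydrological application.

```python
import random

def harmony_search(objective, bounds, memory_size=10, iterations=500,
                   hmcr=0.9, par=0.3, bandwidth=0.05):
    """Basic Harmony Search: improvise new solutions from a memory of good ones."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(memory_size)]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                       # pick from harmony memory
                value = random.choice(memory)[d]
                if random.random() < par:                    # pitch adjustment
                    value += random.uniform(-bandwidth, bandwidth) * (hi - lo)
            else:                                            # random improvisation
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):                # replace the worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=objective)

# Toy objective: sphere function in two dimensions.
best = harmony_search(lambda x: sum(v * v for v in x), bounds=[(-5, 5), (-5, 5)])
print(best)
```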
Representations of Complexity: How Nature Appears in Our Theories
2013-01-01
In science we study processes in the material world. The way these processes operate can be discovered by conducting experiments that activate them, and findings from such experiments can lead to functional complexity theories of how the material processes work. The results of a good functional theory will agree with experimental measurements, but the theory may not incorporate in its algorithmic workings a representation of the material processes themselves. Nevertheless, the algorithmic operation of a good functional theory may be said to make contact with material reality by incorporating the emergent computations the material processes carry out. These points are illustrated in the experimental analysis of behavior by considering an evolutionary theory of behavior dynamics, the algorithmic operation of which does not correspond to material features of the physical world, but the functional output of which agrees quantitatively and qualitatively with findings from a large body of research with live organisms. PMID:28018044
High-Speed On-Board Data Processing for Science Instruments
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace
2014-01-01
A new development of an on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented in this paper. The project is called High-Speed On-Board Data Processing for Science Instruments (HOPS) and focuses on a high-speed, scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS utilizes advanced general purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is to enable high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than a 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on the data available when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them on an FPGA-based data processing board. A general introduction of these facets is also a purpose of this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela M.; Bianchi, Andrea G. C.; DeBianchi, Christina
We introduce a computational analysis workflow to access properties of solid objects using nondestructive imaging techniques that rely on X-ray imaging. The goal is to process and quantify structures from material science sample cross sections. The algorithms can differentiate the porous media (high density material) from the void (background, low density media) using a Boolean classifier, so that we can extract features such as volume, surface area, granularity spectrum, and porosity, among others. Our workflow, Quant-CT, leverages several algorithms from ImageJ, such as statistical region merging and 3D object counter. It also includes schemes for bilateral filtering that use a 3D kernel, for parallel processing of sub-stacks, and for handling over-segmentation using histogram similarities. Quant-CT supports fast user interaction, providing the ability for the user to train the algorithm via subsamples to feed its core algorithms with automated parameterization. The Quant-CT plugin is currently available for testing by personnel at the Advanced Light Source and Earth Sciences Divisions and the Energy Frontier Research Center (EFRC), LBNL, as part of their research on porous materials. The goal is to understand the processes in fluid-rock systems for the geologic sequestration of CO2, and to develop technology for the safe storage of CO2 in deep subsurface rock formations. We describe our implementation, and demonstrate our plugin on porous material images. This paper targets end-users, with relevant information for developers to extend its current capabilities.
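As a minimal illustration of the Boolean classification step, porosity can be estimated as the void fraction of a thresholded volume; the threshold and synthetic data below are invented and do not reflect Quant-CT's actual algorithms.

```python
import numpy as np

def porosity(volume, density_threshold):
    """Classify voxels as solid (above threshold) or void and return the void fraction."""
    solid = volume > density_threshold
    return 1.0 - solid.mean()

# Synthetic micro-CT-like 3D volume: mostly solid with ~20% random low-density voids.
rng = np.random.default_rng(42)
sample = rng.normal(loc=0.8, scale=0.1, size=(64, 64, 64))
sample[rng.random(sample.shape) < 0.2] = 0.1
print(porosity(sample, density_threshold=0.5))   # ~0.2 expected
```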
Value-added Data Services at the Goddard Earth Sciences Data and Information Services Center
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory G.; Alcott, Gary T.; Kempler, Steven J.; Lynnes, Christopher S.; Vollmer, Bruce E.
2004-01-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), in addition to serving the Earth Science community as one of the major Distributed Active Archive Centers (DAACs), provides much more than just data. Among the value-added services available to general users are subsetting data spatially and/or by parameter, online analysis (to avoid unnecessarily downloading all the data), and assistance in obtaining data from other centers. Services available to data producers and high-volume users include consulting on building new products with standard formats and metadata and construction of data management systems. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the user's algorithm. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools. Partnerships between the GES DISC and scientists, both producers and users, allow the scientists to concentrate on science, while the GES DISC handles the data management, e.g., formats, integration, and data processing. The existing data management infrastructure at the GES DISC supports a wide spectrum of options: from simple data support to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. At the same time, such partnerships allow the GES DISC to serve the user community more efficiently and to better prioritize on-line holdings. Several examples of successful partnerships are described in the presentation.
Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications
NASA Astrophysics Data System (ADS)
Maskey, M.; Ramachandran, R.; Miller, J.
2017-12-01
Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.
High-Speed On-Board Data Processing for Science Instruments: HOPS
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey
2015-01-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program during April 2012 - April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort of HOPS, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of interest in particular. As for ASCENDS, HOPS replaces time domain data processing with frequency domain processing while making the real-time on-board data processing possible. As for 3-D Winds, HOPS offers real-time high-resolution wind profiling with 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec while the typical maximum processor-to-SDRAM bandwidth of the commercial radiation tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec while the effective processor-to-cPCI bandwidth of commercial radiation tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS and 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.
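A minimal sketch of the frequency-domain idea for wind profiling: take a 4,096-point FFT of the lidar return and locate the dominant spectral peak. The sampling rate and signal parameters are invented; this is not the HOPS FPGA implementation.

```python
import numpy as np

N_FFT = 4096
SAMPLE_RATE = 1.0e9                      # 1 GS/s, illustrative only

def doppler_peak_hz(samples):
    """Estimate the dominant beat frequency of a lidar return via a 4,096-point FFT."""
    spectrum = np.abs(np.fft.rfft(samples, n=N_FFT))
    freqs = np.fft.rfftfreq(N_FFT, d=1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

# Synthetic return: a 50 MHz beat tone buried in noise.
t = np.arange(N_FFT) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 50e6 * t) + np.random.default_rng(0).normal(0, 1, N_FFT)
print(doppler_peak_hz(signal))           # ~5.0e7 Hz
```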
Analysis of CrIS-ATMS Data Using an AIRS Science Team Version 6 - Like Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis C.
2013-01-01
CrIS/ATMS is flying on NPP and is scheduled to fly on JPSS-1. CrIS/ATMS has roughly equivalent capabilities to AIRS/AMSU. The AIRS Science Team Version 6 retrieval algorithm is currently producing very high quality level-3 Climate Data Records (CDR's) that will be critical for understanding climate processes. AIRS CDRs should eventually cover the period September 2002 through at least 2020. CrIS/ATMS is the only scheduled follow-on to AIRS/AMSU. I have been asked by Ramesh Kakar whether CrIS/ATMS can be counted on to adequately continue the AIRS/AMSU CDRs beyond 2020, or whether something better is needed. This research is being done to answer that question. A minimum requirement to obtain a yes answer is that CrIS/ATMS be analyzed using an AIRS Version 6-like algorithm. NOAA is currently generating CrIS/ATMS products using two algorithms: IDPS and NUCAPS.
Investigation of Dynamic Algorithms for Pattern Recognition Identified in Cerebral Cortex
1991-12-02
Oscillatory and possibly chaotic activity forms in the actual cortical substrate of the diverse sensory, motor, and cognitive operations now studied. Related references include: Neural Information Processing Systems - Natural and Synthetic, Denver, Colo., November 1989; U.C. San Diego, Cognitive Science Dept.; Baird, "Biologically applied neural networks may foster the co-evolution of neurobiology and cognitive psychology," Brain and Behavioral Sciences, 37.
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Hu, Liang; Liu, Gang; Zhou, Jin
2013-01-01
This study proposes an intelligent algorithm that can realize information fusion with reference to research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as the core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on algorithm performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699
Halftoning and Image Processing Algorithms
1999-02-01
screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ... image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to ... image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our ...
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for the non-ambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
Childs, Jeff
1998-01-01
1. Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation. 2. Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data. 3. Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products. 4. Process the TOMS data into Level 1, Level 2, and Level 3 data products. 5. Provide analysis of the science data products in support of NASA GSFC Code 916's research.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
1998-01-01
Projected goals include the following: (1) Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation; (2) Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data; (3) Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiance and irradiances, and produce scientifically valid products; (4) Process the TOMS data into Level 1, Level 2, and Level 3 data products; (5) Provide analysis of the science data products in support of NASA GSFC Code 916's research.
Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo
NASA Astrophysics Data System (ADS)
Khosravi, Ebrahim
1998-12-01
This dissertation solves a fundamental problem of isolating the real roots of nonlinear systems of equations by a Monte Carlo method published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that the system is limited to a very small set of variables, making it unfeasible for large systems of equations. A computational technique was also needed for investigating a methodology for preventing the algorithm structure from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message passing model was used for this parallel processing, and it is presented and implemented on the Intel/860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high energy physics experiment: this algorithm has been used to track neutrinos in a Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
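The dissertation's specific algorithm is not reproduced here; as a generic, hedged illustration of function-value-only Monte Carlo root isolation, one can sample random points, keep those with a small residual norm, and merge nearby survivors into isolated candidate root regions.

```python
import math
import random

def monte_carlo_root_candidates(f, bounds, samples=200000, tol=0.05, merge_dist=0.2):
    """Sample the search box, keep points with a small residual norm, and merge
    nearby survivors into isolated candidate root locations."""
    candidates = []
    for _ in range(samples):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        if math.hypot(*f(x)) < tol:
            for c in candidates:
                if math.dist(c, x) < merge_dist:
                    break                       # already covered by an existing candidate
            else:
                candidates.append(x)
    return candidates

# Toy nonlinear system: x^2 + y^2 = 1 and y = x (roots near (+/-0.707, +/-0.707)).
system = lambda v: (v[0] ** 2 + v[1] ** 2 - 1.0, v[1] - v[0])
print(monte_carlo_root_candidates(system, bounds=[(-2, 2), (-2, 2)]))
```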
A Simple, Scalable, Script-based Science Processor
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2004-01-01
The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems occurring from anomalies in data or algorithms. S4P has been reused in several systems ranging from routine processing of MODIS data to data mining and is publicly available from NASA.
Kepler Science Operations Center Architecture
NASA Technical Reports Server (NTRS)
Middour, Christopher; Klaus, Todd; Jenkins, Jon; Pletcher, David; Cote, Miles; Chandrasekaran, Hema; Wohler, Bill; Girouard, Forrest; Gunter, Jay P.; Uddin, Kamal;
2010-01-01
We give an overview of the operational concepts and architecture of the Kepler Science Data Pipeline. Designed, developed, operated, and maintained by the Science Operations Center (SOC) at NASA Ames Research Center, the Kepler Science Data Pipeline is a central element of the Kepler Ground Data System. The SOC charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Data Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center that hosts the computers required to perform data analysis. We discuss the high-performance, parallel computing software modules of the Kepler Science Data Pipeline that perform transit photometry, pixel-level calibration, systematic error-correction, attitude determination, stellar target management, and instrument characterization. We explain how data processing environments are divided to support operational processing and test needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Data Pipeline.
A Method for Identifying Contours in Processing Digital Images from Computer Tomograph
NASA Astrophysics Data System (ADS)
Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela
2011-09-01
The first step in digital processing of two-dimensional computed tomography images is to identify the contour of component elements. This paper describes the collective work of specialists in medicine, applied mathematics, and computer science on developing new algorithms and methods for medical 2D and 3D imaging.
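As a minimal illustration of the contour-identification step (a simple gradient-magnitude threshold on a synthetic slice, not the authors' method):

```python
import numpy as np

def contour_mask(image, threshold):
    """Mark pixels whose intensity gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(image.astype(float))     # gradients along rows and columns
    return np.hypot(gx, gy) > threshold

# Synthetic CT-like slice: a bright disk on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
slice_img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2) * 200.0
edges = contour_mask(slice_img, threshold=50.0)
print(edges.sum())      # number of contour pixels around the disk boundary
```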
The Kepler Data Processing Handbook: A Field Guide to Prospecting for Habitable Worlds
NASA Technical Reports Server (NTRS)
Jenkins, Jon M.
2017-01-01
The Kepler telescope hurtled into orbit in March 2009, initiating NASA's first mission to discover Earth-size planets orbiting Sun-like stars. Kepler simultaneously collected data for approximately 165,000 target stars at a time over its four-year mission, identifying over 4700 planet candidates, over 2300 confirmed or validated planets, and over 2100 eclipsing binaries. While Kepler was designed to discover exoplanets, the long-term, ultrahigh photometric precision measurements it achieved made it a premier observational facility for stellar astrophysics, especially in the field of asteroseismology, and for variable stars, such as RR Lyrae. The Kepler Science Operations Center (SOC) was developed at NASA Ames Research Center to process the data acquired by Kepler from pixel-level calibrations all the way to identifying transiting planet signatures and subjecting them to a suite of diagnostic tests to establish or break confidence in their planetary nature. Detecting small, rocky planets transiting Sun-like stars presents a variety of daunting challenges, including achieving an unprecedented photometric precision of 20 ppm on 6.5-hour timescales, and supporting the science operations, management, processing, and repeated reprocessing of the accumulating data stream. A newly revised and expanded version of the Kepler Data Processing Handbook (KDPH) has been released to support the legacy archival products. The KDPH details the theory, design and performance of the algorithms supporting each data processing step. This paper presents an overview of the KDPH and features illustrations of several key algorithms in the Kepler Science Data Processing Pipeline. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
ERIC Educational Resources Information Center
Fofonoff, N. P.; Millard, R. C., Jr.
Algorithms for computation of fundamental properties of seawater, based on the practical salinity scale (PSS-78) and the international equation of state for seawater (EOS-80), are compiled in the present report for implementing and standardizing computer programs for oceanographic data processing. Sample FORTRAN subprograms and tables are given…
In Pursuit of LSST Science Requirements: A Comparison of Photometry Algorithms
NASA Astrophysics Data System (ADS)
Becker, Andrew C.; Silvestri, Nicole M.; Owen, Russell E.; Ivezić, Željko; Lupton, Robert H.
2007-12-01
We have developed an end-to-end photometric data-processing pipeline to compare current photometric algorithms commonly used on ground-based imaging data. This test bed is exceedingly adaptable and enables us to perform many research and development tasks, including image subtraction and co-addition, object detection and measurements, the production of photometric catalogs, and the creation and stocking of database tables with time-series information. This testing has been undertaken to evaluate existing photometry algorithms for consideration by a next-generation image-processing pipeline for the Large Synoptic Survey Telescope (LSST). We outline the results of our tests for four packages: the Sloan Digital Sky Survey's Photo package, DAOPHOT and ALLFRAME, DOPHOT, and two versions of Source Extractor (SExtractor). The ability of these algorithms to perform point-source photometry, astrometry, shape measurements, and star-galaxy separation and to measure objects at low signal-to-noise ratio is quantified. We also perform a detailed crowded-field comparison of DAOPHOT and ALLFRAME, and profile the speed and memory requirements in detail for SExtractor. We find that both DAOPHOT and Photo are able to perform aperture photometry to high enough precision to meet LSST's science requirements, but they perform less adequately at PSF-fitting photometry. Photo performs the best at simultaneous point- and extended-source shape and brightness measurements. SExtractor is the fastest algorithm, and recent upgrades in the software yield high-quality centroid and shape measurements with little bias toward faint magnitudes. ALLFRAME yields the best photometric results in crowded fields.
Algorithmic, LOCS and HOCS (chemistry) exam questions: performance and attitudes of college students
NASA Astrophysics Data System (ADS)
Zoller, Uri
2002-02-01
The performance of freshman biology and physics-mathematics majors and chemistry majors, as well as pre- and in-service chemistry teachers, in two Israeli universities on algorithmic (ALG), lower-order cognitive skills (LOCS), and higher-order cognitive skills (HOCS) chemistry exam questions was studied. The driving force for the study was an interest in moving science and chemistry instruction from an algorithmic and factual recall orientation dominated by LOCS to a decision-making, problem-solving and critical system thinking approach dominated by HOCS. College students' responses to the specially designed ALG, LOCS and HOCS chemistry exam questions were scored and analysed for differences and correlations between the performance means within and across universities by question category. This was followed by a combined student interview and 'speaking aloud' problem-solving session for assessing the thinking processes involved in solving these types of questions and the students' attitudes towards them. The main findings were: (1) students in both universities performed consistently in each of the three categories in the order ALG > LOCS > HOCS; their 'ideological' preference was HOCS > algorithmic/LOCS (referred to as 'computational questions'), but their pragmatic preference was the reverse; (2) success on algorithmic/LOCS questions does not imply success on HOCS questions; algorithmic questions constitute a category of their own as far as students' success in solving them is concerned. Our study and its results support the effort being made, worldwide, to integrate HOCS-fostering teaching and assessment strategies and to develop HOCS-oriented science-technology-environment-society (STES)-type curricula within science and chemistry education.
Improvements to the MODIS Land Products in Collection Version 6
NASA Astrophysics Data System (ADS)
Wolfe, R. E.; Devadiga, S.; Masuoka, E. J.; Running, S. W.; Vermote, E.; Giglio, L.; Wan, Z.; Riggs, G. A.; Schaaf, C.; Myneni, R. B.; Friedl, M. A.; Wang, Z.; Sulla-menashe, D. J.; Zhao, M.
2013-12-01
The MODIS (Moderate Resolution Imaging Spectroradiometer) Adaptive Processing System (MODAPS), housed at the NASA Goddard Space Flight Center (GSFC), has been processing the earth view data acquired by the MODIS instrument aboard the Terra (EOS AM) and Aqua (EOS PM) satellites to generate a suite of land and atmosphere data products using the science algorithms developed by the MODIS Science Team. These data products are used by a diverse set of users in research and other applications from both government and non-government agencies around the world. These validated global products are also being used in interactive Earth system models able to predict global change accurately enough to assist policy makers in making sound decisions concerning the protection of our environment. Hence an increased emphasis is being placed on generation of high quality, consistent data records from the MODIS data through reprocessing of the records using improved science algorithms. Since the launch of Terra in December 1999, MODIS land data records have been reprocessed four times. The Collection Version 6 (C6) reprocessing of MODIS Land and Atmosphere products is scheduled to start in Fall 2013 and is expected to complete in Spring 2014. This presentation will describe changes made to the C6 science algorithms to correct issues in the C5 products, additional improvements made to the products as deemed necessary by the data users and science teams, and new products introduced in this reprocessing. In addition to the improvements from product-specific changes to algorithms, the C6 products will also see significant improvement in the calibration by the MODIS Calibration Science Team (MCST) of the C6 L1B Top of the Atmosphere (TOA) reflectance and radiance product, more accurate geolocation, and an improved Land Water mask. For the a priori land cover input, this reprocessing will use the multi-year land cover product generated with three years of MODIS data as input, as opposed to the single land cover product used for the entire mission in the C5 reprocessing. The C6 products are expected to be released from the Distributed Active Archive Center (DAAC) soon after the reprocessing begins. To facilitate user acquaintance with products from the new version and independent evaluation of C6 by comparison of the two versions, MODAPS plans to continue generation of products from both versions for at least a year after completion of the C6 reprocessing, after which C5 processing will be discontinued.
Random bits, true and unbiased, from atmospheric turbulence
Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo
2014-01-01
Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and, in general, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests, qualifying them as genuine random numbers. The extraction algorithm can easily be generalized to random images generated by different physical processes. PMID:24976499
Towards operational multisensor registration
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.
1991-01-01
To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.
Improving the interoperability of biomedical ontologies with compound alignments.
Oliveira, Daniela; Pesquita, Catia
2018-01-09
Ontologies are commonly used to annotate and help process life sciences data. Although their original goal is to facilitate integration and interoperability among heterogeneous data sources, when these sources are annotated with distinct ontologies, bridging this gap can be challenging. In the last decade, ontology matching systems have been evolving and are now capable of producing high-quality mappings for life sciences ontologies, usually limited to the equivalence between two ontologies. However, life sciences research is becoming increasingly transdisciplinary and integrative, fostering the need to develop matching strategies that are able to handle multiple ontologies and more complex relations between their concepts. We have developed ontology matching algorithms that are able to find compound mappings between multiple biomedical ontologies, in the form of ternary mappings, finding for instance that "aortic valve stenosis"(HP:0001650) is equivalent to the intersection between "aortic valve"(FMA:7236) and "constricted" (PATO:0001847). The algorithms take advantage of search space filtering based on partial mappings between ontology pairs, to be able to handle the increased computational demands. The evaluation of the algorithms has shown that they are able to produce meaningful results, with precision in the range of 60-92% for new mappings. The algorithms were also applied to the potential extension of logical definitions of the OBO and the matching of several plant-related ontologies. This work is a first step towards finding more complex relations between multiple ontologies. The evaluation shows that the results produced are significant and that the algorithms could satisfy specific integration needs.
Faust, Oliver; Yu, Wenwei; Rajendra Acharya, U
2015-03-01
The concept of real-time is very important, as it deals with the realizability of computer based health care systems. In this paper we review biomedical real-time systems with a meta-analysis on computational complexity (CC), delay (Δ) and speedup (Sp). During the review we found that, in the majority of papers, the term real-time is part of the thesis indicating that a proposed system or algorithm is practical. However, these papers were not considered for detailed scrutiny. Our detailed analysis focused on papers which support their claim of achieving real-time, with a discussion on CC or Sp. These papers were analyzed in terms of processing system used, application area (AA), CC, Δ, Sp, implementation/algorithm (I/A) and competition. The results show that the ideas of parallel processing and algorithm delay were only recently introduced and journal papers focus more on Algorithm (A) development than on implementation (I). Most authors compete on big O notation (O) and processing time (PT). Based on these results, we adopt the position that the concept of real-time will continue to play an important role in biomedical systems design. We predict that parallel processing considerations, such as Sp and algorithm scaling, will become more important.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
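As a rough illustration of the sequential-approximation idea described above, the following Python sketch implements a generic bootstrap particle filter for a simple latent-state problem. It does not reproduce the specifics of Anderson's RMC or the single-particle variant discussed in the paper; the dynamics, noise levels and data are hypothetical.

    # Minimal bootstrap particle filter sketch: a small set of particles
    # approximates the posterior over a latent state and is propagated,
    # reweighted and resampled as each new observation arrives.
    import random
    import math

    def particle_filter(observations, n_particles=100, obs_sd=1.0, drift_sd=0.5):
        particles = [0.0] * n_particles              # initial state hypotheses
        for y in observations:
            # Propagate each particle through (assumed) random-walk dynamics.
            particles = [p + random.gauss(0.0, drift_sd) for p in particles]
            # Weight particles by the Gaussian likelihood of the new observation.
            weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
            total = sum(weights)
            if total == 0.0:                         # guard against underflow
                weights = [1.0 / n_particles] * n_particles
            else:
                weights = [w / total for w in weights]
            # Resample particles in proportion to their weights.
            particles = random.choices(particles, weights=weights, k=n_particles)
        return sum(particles) / n_particles          # posterior mean estimate

    if __name__ == "__main__":
        data = [0.9, 1.1, 1.4, 1.8, 2.1]
        print("estimated latent state:", round(particle_filter(data), 2))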
Lunar BRDF Correction of Suomi-NPP VIIRS Day/Night Band Time Series Product
NASA Astrophysics Data System (ADS)
Wang, Z.; Roman, M. O.; Kalb, V.; Stokes, E.; Miller, S. D.
2015-12-01
Since the first-light images from the Suomi-NPP VIIRS low-light visible Day/Night Band (DNB) sensor were received in November 2011, the NASA Suomi-NPP Land Science Investigator Processing System (SIPS) has focused on evaluating this new capability for quantitative science applications, as well as developing and testing refined algorithms to meet operational and Land science research needs. While many promising DNB applications have been developed since the Suomi-NPP launch, most studies to-date have been limited by the traditional qualitative image display and spatial-temporal aggregated statistical analysis methods inherent in current heritage algorithms. This has resulted in strong interest for a new generation of science-quality products that can be used to monitor both the magnitude and signature of nighttime phenomena and anthropogenic sources of light emissions. In one particular case study, Román and Stokes (2015) demonstrated that tracking daily dynamic DNB radiances can provide valuable information about the character of the human activities and behaviors that influence energy, consumption, and vulnerability. Here we develop and evaluate a new suite of DNB science-quality algorithms that can exclude a primary source of background noise: i.e., the Lunar BRDF (Bidirectional Reflectance Distribution Function) effect. Every day, the operational NASA Land SIPS DNB algorithm makes use of 16 days worth of DNB-derived surface reflectances (SR) (based on the heritage MODIS SR algorithm) and a semiempirical kernel-driven bidirectional reflectance model to determine a global set of parameters describing the BRDF of the land surface. The nighttime period of interest is heavily weighted as a function of observation coverage. These gridded parameters, combined with Miller and Turner's [2009] top-of-atmosphere spectral irradiance model, are then used to determine the DNB's lunar radiance contribution at any point in time and under specific illumination conditions.
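The operational SIPS code is not shown in the abstract; the following Python sketch only illustrates the general kernel-driven BRDF idea, recovering the three kernel weights by linear least squares from a set of observed reflectances. The volumetric and geometric kernel values are assumed to be precomputed elsewhere, and all numbers are synthetic.

    # Sketch of the kernel-driven BRDF model R = f_iso + f_vol*K_vol + f_geo*K_geo:
    # given per-observation kernel values, the weights are fitted by least squares.
    import numpy as np

    def fit_brdf_weights(reflectance, k_vol, k_geo):
        """Solve R = f_iso + f_vol*K_vol + f_geo*K_geo for (f_iso, f_vol, f_geo)."""
        design = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
        weights, *_ = np.linalg.lstsq(design, reflectance, rcond=None)
        return weights

    def predict_reflectance(weights, k_vol, k_geo):
        """Reflectance predicted by the fitted kernel model for a new geometry."""
        f_iso, f_vol, f_geo = weights
        return f_iso + f_vol * k_vol + f_geo * k_geo

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        k_vol, k_geo = rng.uniform(-0.5, 0.5, 16), rng.uniform(-1.0, 0.0, 16)
        # Synthetic "16 days" of observations with known weights plus noise.
        obs = 0.30 + 0.10 * k_vol + 0.05 * k_geo + rng.normal(0, 0.005, 16)
        w = fit_brdf_weights(obs, k_vol, k_geo)
        print("fitted weights (f_iso, f_vol, f_geo):", np.round(w, 3))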
Reducing the Volume of NASA Earth-Science Data
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre
2010-01-01
A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
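The program itself is not included in the abstract; the following Python sketch only illustrates the entropy-constrained assignment rule at the heart of ECVQ and how sweeping the Lagrange multiplier traces out a quality-versus-reduction tradeoff. The DE-based autonomous parameter optimization described above is not reproduced, and all values are synthetic.

    # Minimal ECVQ sketch: points are assigned to codewords by distortion plus a
    # lambda-weighted code-length term; centroids and probabilities are updated.
    import numpy as np

    def ecvq(data, k=8, lam=0.1, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        codebook = data[rng.choice(len(data), k, replace=False)].copy()
        probs = np.full(k, 1.0 / k)
        for _ in range(iters):
            # Modified nearest-neighbor rule: distortion + lambda * code length.
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            cost = d2 - lam * np.log2(probs)[None, :]
            labels = cost.argmin(axis=1)
            # Update centroids and codeword probabilities from the assignments.
            for j in range(k):
                members = data[labels == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
                probs[j] = max(len(members) / len(data), 1e-12)
        rate = -(probs * np.log2(probs)).sum()           # entropy of the code
        distortion = ((data - codebook[labels]) ** 2).sum(axis=1).mean()
        return codebook, rate, distortion

    if __name__ == "__main__":
        pts = np.random.default_rng(1).normal(size=(2000, 2))
        for lam in (0.0, 0.5, 2.0):                      # sweep lambda -> tradeoff
            _, r, d = ecvq(pts, lam=lam)
            print(f"lambda={lam:4.1f}  rate={r:4.2f} bits  distortion={d:5.3f}")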
Overview of implementation of DARPA GPU program in SAIC
NASA Astrophysics Data System (ADS)
Braunreiter, Dennis; Furtek, Jeremy; Chen, Hai-Wen; Healy, Dennis
2008-04-01
This paper reviews the implementation of the DARPA MTO STAP-BOY program for both Phase I and II conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on Graphics Processing Unit (GPU) architectures for embedded systems. The first part of our presentation on the DARPA STAP-BOY program will focus on GPU implementation and algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm will be implemented on the GPU, using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA) and traditional graphics APIs. This algorithm will include fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.
Onboard Science and Applications Algorithm for Hyperspectral Data Reduction
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel
2012-01-01
An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid response product leveraging VSWIR spectral information.
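As a rough illustration of the thermal-trigger concept described above, the following Python sketch thresholds a synthetic TIR band and returns the full spectra of the hot pixels for downlink along with a coarse context thumbnail. The threshold value, band index and data layout are hypothetical and do not come from the HyspIRI design.

    # Simplified onboard trigger sketch: pixels whose TIR value exceeds a
    # threshold are treated as fire/volcano candidates, and only their full
    # spectra (plus a small thumbnail) are queued for downlink.
    import numpy as np

    def select_hot_pixel_spectra(cube, tir_band, threshold_k=400.0):
        """cube: (rows, cols, bands) array of radiance/brightness values."""
        tir = cube[:, :, tir_band]
        hot_rows, hot_cols = np.where(tir > threshold_k)   # hot-pixel locations
        spectra = cube[hot_rows, hot_cols, :]               # full spectrum per hot pixel
        thumbnail = tir[::8, ::8]                            # coarse context image
        return list(zip(hot_rows, hot_cols)), spectra, thumbnail

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        scene = rng.uniform(280.0, 320.0, size=(64, 64, 10))
        scene[10, 20, :] = 650.0                             # synthetic hot spot
        pixels, spectra, thumb = select_hot_pixel_spectra(scene, tir_band=7)
        print("hot pixels:", pixels, "spectra shape:", spectra.shape)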
GPU-Based High-performance Imaging for Mingantu Spectral RadioHeliograph
NASA Astrophysics Data System (ADS)
Mei, Ying; Wang, Feng; Wang, Wei; Chen, Linjie; Liu, Yingbo; Deng, Hui; Dai, Wei; Liu, Cuiyin; Yan, Yihua
2018-01-01
As a dedicated solar radio interferometer, the MingantU SpEctral RadioHeliograph (MUSER) generates massive observational data in the frequency range of 400 MHz-15 GHz. High-performance imaging is a critically important aspect of MUSER's massive data processing requirements. In this study, we implement a practical high-performance imaging pipeline for MUSER data processing. First, the specifications of the MUSER are introduced and its imaging requirements are analyzed. Referring to the most commonly used radio astronomy software, such as CASA and MIRIAD, we then implement a high-performance imaging pipeline based on Graphics Processing Unit technology with respect to the current operational status of the MUSER. A series of critical algorithms and their pseudocodes, i.e., detection of the solar disk and sky brightness, automatic centering of the solar disk, and estimation of the number of iterations for the CLEAN algorithm, are presented in detail. The preliminary experimental results indicate that the proposed imaging approach significantly increases the processing performance of MUSER and generates high-quality images that can meet the requirements of MUSER data processing. Supported by the National Key Research and Development Program of China (2016YFE0100300), the Joint Research Fund in Astronomy (No. U1531132, U1631129, U1231205) under the cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), and the National Natural Science Foundation of China (Nos. 11403009 and 11463003).
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline:
1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications
2. Data products and latencies
3. Algorithm highlights
4. SMAP Algorithm Testbed
5. SMAP Working Groups and community engagement
NASA Astrophysics Data System (ADS)
Nalli, N. R.; Gambacorta, A.; Tan, C.; Iturbide, F.; Barnet, C. D.; Reale, A.; Sun, B.; Liu, Q.
2017-12-01
This presentation overviews the performance of the operational SNPP NOAA Unique Combined Atmospheric Processing System (NUCAPS) environmental data record (EDR) products. The SNPP Cross-track Infrared Sounder and Advanced Technology Microwave Sounder (CrIS/ATMS) suite, the first of the Joint Polar Satellite System (JPSS) Program, is one of NOAA's major investments in our nation's future operational environmental observation capability. The NUCAPS algorithm is a world-class NOAA-operational IR/MW retrieval algorithm based upon the well-established AIRS science team algorithm for deriving temperature, moisture, ozone and carbon trace gases to provide users with state-of-the-art EDR products. Operational use of the products includes the NOAA National Weather Service (NWS) Advanced Weather Interactive Processing System (AWIPS), along with numerous science-user applications. NUCAPS EDR product assessments are made with reference to JPSS Level 1 global requirements, which provide the definitive metrics for assessing that the products have minimally met predefined global performance specifications. The NESDIS/STAR NUCAPS development and validation team recently delivered the Phase 4 algorithm, which incorporated critical updates necessary for compatibility with full spectral-resolution (FSR) CrIS sensor data records (SDRs). Based on comprehensive analyses, the NUCAPS Phase 4 CrIS-FSR temperature, moisture and ozone profile EDRs, as well as the carbon trace gas EDRs (CO, CH4 and CO2), are shown to be meeting or close to meeting the JPSS program global requirements. Regional and temporal assessments of interest to EDR users (e.g., AWIPS) will also be presented.
Exemplar Models as a Mechanism for Performing Bayesian Inference
2010-01-01
Feldman (Department of Cognitive and Linguistic Sciences, Brown University); Sanborn, Adam N. (Gatsby Computational Neuroscience Unit, University College London). Only a fragment of the text was recovered, noting that particle filters are another instance of a rational process model among a great diversity of efficient approximation algorithms.
Swarm intelligence metaheuristics for enhanced data analysis and optimization.
Hanrahan, Grady
2011-09-21
The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.
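To make the swarm-intelligence search idea concrete, the following Python sketch implements a minimal particle swarm optimization loop on a toy objective. The parameter values and test function are illustrative and are not taken from the review.

    # Minimal particle swarm optimization (PSO) sketch on a toy objective.
    import random

    def pso(objective, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                     # personal best positions
        pbest_val = [objective(p) for p in pos]
        gbest = min(zip(pbest_val, pbest))[1][:]        # global best position
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = objective(pos[i])
                if val < pbest_val[i]:                  # update personal best
                    pbest_val[i], pbest[i] = val, pos[i][:]
                    if val < objective(gbest):          # update global best
                        gbest = pos[i][:]
        return gbest, objective(gbest)

    if __name__ == "__main__":
        sphere = lambda x: sum(v * v for v in x)        # simple test function
        best, val = pso(sphere)
        print("best found:", [round(v, 4) for v in best], "value:", round(val, 6))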
Kepler: A Search for Terrestrial Planets - SOC 9.3 DR25 Pipeline Parameter Configuration Reports
NASA Technical Reports Server (NTRS)
Campbell, Jennifer R.
2017-01-01
This document describes the manner in which the pipeline and algorithm parameters for the Kepler Science Operations Center (SOC) science data processing pipeline were managed. This document is intended for scientists and software developers who wish to better understand the software design for the final Kepler codebase (SOC 9.3) and the effect of the software parameters on the Data Release (DR) 25 archival products.
NASA Astrophysics Data System (ADS)
Hayrapetyan, David B.; Hovhannisyan, Levon; Mantashyan, Paytsar A.
2013-04-01
The analysis of complex spectra is a pressing problem for modern science. This work is devoted to the creation of a software package that analyzes spectra in different formats, possesses a dynamic knowledge database and a self-learning mechanism, and performs automated analysis of spectral composition based on the knowledge database by applying certain algorithms. Hyper-spherical random search algorithms, gradient algorithms, and genetic search algorithms were used as the search methods in the software package. Raman and IR spectra of diamond-like carbon (DLC) samples were analyzed with the developed program. After processing the data, the program immediately displays all the calculated parameters of the DLC.
A sample implementation for parallelizing Divide-and-Conquer algorithms on the GPU.
Mei, Gang; Zhang, Jiayin; Xu, Nengxiong; Zhao, Kunyang
2018-01-01
The strategy of Divide-and-Conquer (D&C) is one of the most frequently used programming patterns for designing efficient algorithms in computer science, and it has been parallelized on both shared-memory and distributed-memory systems. Tzeng and Owens specifically developed a generic paradigm for parallelizing D&C algorithms on modern Graphics Processing Units (GPUs). In this paper, by following the generic paradigm proposed by Tzeng and Owens, we provide a new and publicly available GPU implementation of the famous D&C algorithm QuickHull, to serve as a sample and guide for parallelizing D&C algorithms on the GPU. The experimental results demonstrate the practicality of our sample GPU implementation. Our research objective in this paper is to present a sample GPU implementation of a classical D&C algorithm to help interested readers develop their own efficient GPU implementations with less effort.
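The paper's GPU implementation is not reproduced here; the following sequential Python sketch only shows the divide-and-conquer structure of QuickHull that such a generic paradigm maps onto the GPU.

    # Compact sequential QuickHull sketch (2D), shown only to illustrate the
    # divide-and-conquer structure; the GPU paradigm itself is not reproduced.
    def cross(o, a, b):
        """Signed area: > 0 if b lies to the left of the directed line o -> a."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def hull_side(points, p, q):
        """Recursively collect hull points strictly left of the segment p -> q."""
        left = [pt for pt in points if cross(p, q, pt) > 0]
        if not left:
            return []
        farthest = max(left, key=lambda pt: cross(p, q, pt))   # divide step
        return (hull_side(left, p, farthest)                    # conquer both halves
                + [farthest]
                + hull_side(left, farthest, q))

    def quickhull(points):
        pts = sorted(set(points))
        if len(pts) < 3:
            return pts
        leftmost, rightmost = pts[0], pts[-1]
        return ([leftmost] + hull_side(pts, leftmost, rightmost)
                + [rightmost] + hull_side(pts, rightmost, leftmost))

    if __name__ == "__main__":
        cloud = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3), (3, 1)]
        print(quickhull(cloud))   # corners of the square, in hull order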
Formal logic rewrite system bachelor in teaching mathematical informatics
NASA Astrophysics Data System (ADS)
Habiballa, Hashim; Jendryscik, Radek
2017-07-01
The article presents the capabilities of the formal rewrite logic system Bachelor for teaching theoretical computer science (mathematical informatics). The Bachelor system enables a constructivist approach to teaching and may therefore enhance the learning process in essential disciplines of theoretical informatics. It provides not only a detailed description of the formal rewrite process but can also demonstrate algorithmic principles for manipulating logic formulae.
High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center
NASA Astrophysics Data System (ADS)
Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.
2015-12-01
The High-Speed On-Board Data Processing for Science Instruments (HOPS) project was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with a quick turn-around time. Since HOPS offers reusable, user-friendly computational elements, its FPGA IP core can be modified over a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
Fiji: an open-source platform for biological-image analysis.
Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert
2012-06-28
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
[Algorithms of artificial neural networks--practical application in medical science].
Stefaniak, Bogusław; Cholewiński, Witold; Tarkowska, Anna
2005-12-01
Artificial Neural Networks (ANN) may be an alternative and complementary tool to typical statistical analysis. However, in spite of many ready-to-use computer applications of various ANN algorithms, artificial intelligence is relatively rarely applied to data processing. This paper presents practical aspects of the scientific application of ANN in medicine using widely available algorithms. Several main steps of analysis with ANN are discussed, starting from material selection and its division into groups and ending with the quality assessment of the obtained results. The most frequent, typical sources of error, as well as a comparison of the ANN method with modeling by regression analysis, are also described.
Research Issues in Image Registration for Remote Sensing
NASA Technical Reports Server (NTRS)
Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.
Moore, J H
1995-06-01
A genetic algorithm for instrumentation control and optimization was developed using the LabVIEW graphical programming environment. The usefulness of this methodology for the optimization of a closed loop control instrument is demonstrated with minimal complexity, and the programming is presented in detail to facilitate its adaptation to other LabVIEW applications. Closed loop control instruments have a variety of applications in the biomedical sciences, including the regulation of physiological processes such as blood pressure. The program presented here should provide a useful starting point for those wishing to incorporate genetic algorithm approaches into LabVIEW-mediated optimization of closed loop control instruments.
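The LabVIEW diagrams themselves cannot be shown in text; the following Python sketch is only a minimal illustration of the genetic-algorithm loop (selection, crossover, mutation) applied to a hypothetical parameter-tuning objective, not the authors' program.

    # Minimal genetic algorithm sketch: evolve a real-valued parameter vector
    # toward a hypothetical setpoint, the kind of task a controller tuner faces.
    import random

    def genetic_algorithm(fitness, dim=3, pop_size=20, generations=100,
                          mutation_sd=0.1, elite=2):
        pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)                       # lower fitness = better
            survivors = pop[:pop_size // 2]
            children = survivors[:elite]                # elitism: keep best as-is
            while len(children) < pop_size:
                mom, dad = random.sample(survivors, 2)
                cut = random.randrange(1, dim)          # one-point crossover
                child = mom[:cut] + dad[cut:]
                child = [g + random.gauss(0, mutation_sd) for g in child]  # mutation
                children.append(child)
            pop = children
        return min(pop, key=fitness)

    if __name__ == "__main__":
        target = [0.3, -0.7, 0.5]                       # hypothetical setpoint gains
        error = lambda genes: sum((g - t) ** 2 for g, t in zip(genes, target))
        print([round(g, 3) for g in genetic_algorithm(error)])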
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
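As a rough illustration of the block-matching idea in the second paper, the following sequential Python sketch finds, for each block of the current frame, the displacement in the previous frame that minimizes the sum of absolute differences (SAD). The parallel architecture described in the report is not reproduced, and the block and search sizes are illustrative.

    # Exhaustive block-matching sketch for motion estimation (sequential).
    import numpy as np

    def block_match(prev, curr, block=8, search=4):
        rows, cols = curr.shape
        motion = {}
        for r in range(0, rows - block + 1, block):
            for c in range(0, cols - block + 1, block):
                target = curr[r:r + block, c:c + block]
                best, best_sad = (0, 0), np.inf
                for dr in range(-search, search + 1):
                    for dc in range(-search, search + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr <= rows - block and 0 <= cc <= cols - block:
                            cand = prev[rr:rr + block, cc:cc + block]
                            sad = np.abs(target.astype(int) - cand.astype(int)).sum()
                            if sad < best_sad:
                                best_sad, best = sad, (dr, dc)
                motion[(r, c)] = best                    # displacement with minimum error
        return motion

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        prev = rng.integers(0, 256, size=(32, 32))
        curr = np.roll(prev, shift=(2, 1), axis=(0, 1))  # global (2, 1) shift
        vectors = block_match(prev, curr)
        print(vectors[(8, 8)])                           # expect (-2, -1)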
NASA Technical Reports Server (NTRS)
Koster, Randal D. (Editor); Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima (Editor); Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas
2015-01-01
During the post-launch Cal/Val Phase of SMAP there are two objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements according to the Cal/Val timeline. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product specifically for the beta release. The beta-release version of the SMAP L4_C algorithms utilizes a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily NEE and component carbon fluxes, particularly vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (<10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape FT controls on GPP and Reco (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying freeze/thaw and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems.
Algorithms, complexity, and the sciences
Papadimitriou, Christos
2014-01-01
Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382
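To make the multiplicative weights update rule concrete, the following Python sketch applies MWU in the standard prediction-with-expert-advice setting. Its interpretation as allele-frequency dynamics under sex and weak selection is a separate reading discussed in the paper and is not modeled here; the learning rate and loss values are illustrative.

    # Minimal multiplicative weights update (MWU) sketch for the experts setting.
    def mwu(expert_losses, eta=0.5):
        """expert_losses: one list of per-expert losses in [0, 1] for each round."""
        n = len(expert_losses[0])
        weights = [1.0] * n
        for losses in expert_losses:
            # Each expert's weight shrinks multiplicatively with its loss.
            weights = [w * (1.0 - eta * l) for w, l in zip(weights, losses)]
        total = sum(weights)
        return [w / total for w in weights]              # final mixture over experts

    if __name__ == "__main__":
        # Three experts; the second incurs the least loss and gains weight.
        rounds = [[0.9, 0.1, 0.5], [0.8, 0.0, 0.6], [1.0, 0.2, 0.4]]
        print([round(p, 3) for p in mwu(rounds)])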
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology.
Koch, Julian; Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which bring benefits such as speed and the possibility to automate processes. Yet human vision can be harnessed to evaluate the reliability of algorithms that are tailored to quantify similarity in spatial patterns. We established a citizen science project that employs human perception to rate similarity and dissimilarity between simulated spatial patterns from several scenarios of a hydrological catchment model. In total, more than 2500 volunteers provided over 43,000 classifications of 1095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human perception of similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics differ clearly in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark for the community to test novel spatial metrics.
Phase-Retrieval Uncertainty Estimation and Algorithm Comparison for the JWST-ISIM Test Campaign
NASA Technical Reports Server (NTRS)
Aronstein, David L.; Smith, J. Scott
2016-01-01
Phase retrieval, the process of determining the exit-pupil wavefront of an optical instrument from image-plane intensity measurements, is the baseline methodology for characterizing the wavefront for the suite of science instruments (SIs) in the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST). JWST is a large, infrared space telescope with a 6.5-meter diameter primary mirror. JWST is currently NASA's flagship mission and will be the premier space observatory of the next decade. ISIM contains four optical benches with nine unique instruments, including redundancies. ISIM was characterized at the Goddard Space Flight Center (GSFC) in Greenbelt, MD in a series of cryogenic vacuum tests using a telescope simulator. During these tests, phase-retrieval algorithms were used to characterize the instruments. The objective of this paper is to describe the Monte-Carlo simulations that were used to establish uncertainties (i.e., error bars) for the wavefronts of the various instruments in ISIM. Multiple retrieval algorithms were used in the analysis of ISIM phase-retrieval focus-sweep data, including an iterative-transform algorithm and a nonlinear optimization algorithm. These algorithms emphasize the recovery of numerous optical parameters, including low-order wavefront composition described by Zernike polynomial terms and high-order wavefront described by a point-by-point map, location of instrument best focus, focal ratio, exit-pupil amplitude, the morphology of any extended object, and optical jitter. The secondary objective of this paper is to report on the relative accuracies of these algorithms for the ISIM instrument tests, and a comparison of their computational complexity and their performance on central and graphical processing unit clusters. From a phase-retrieval perspective, the ISIM test campaign includes a variety of source illumination bandwidths, various image-plane sampling criteria above and below the Nyquist-Shannon critical sampling value, various extended object sizes, and several other impactful effects.
Modeling biological problems in computer science: a case study in genome assembly.
Medvedev, Paul
2018-01-30
As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded.
SMV⊥: Simplex of maximal volume based upon the Gram-Schmidt process
NASA Astrophysics Data System (ADS)
Salazar-Vazquez, Jairo; Mendez-Vazquez, Andres
2015-10-01
In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images allows the development of different algorithms for target detection, material mapping, and material identification, with applications in agriculture, security and defense, industry, etc. Therefore, from the computer science point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of an HI can be classified using laboratory spectral signatures. Nevertheless, for many others, there is not enough available prior information or spectral signatures, making any analysis a difficult task. One of the most popular algorithms for HI analysis is the N-FINDR, because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. The N-FINDR is computationally expensive, however, and its performance depends on a random initialization process. This paper proposes a novel idea to reduce the complexity of the N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. Thus, the Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ calculates a bigger volume and has lower computational time complexity than other popular algorithms in synthetic and real scenarios.
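The authors' exact SMV⊥ procedure is not reproduced in the abstract; the following Python sketch only illustrates the underlying idea of growing an endmember set bottom-up by repeatedly selecting the pixel with the largest component orthogonal (via Gram-Schmidt) to the span of the endmembers chosen so far, on synthetic data.

    # Simplified orthogonalization-based endmember selection (illustrative only).
    import numpy as np

    def select_endmembers(pixels, n_endmembers):
        """pixels: (n_pixels, n_bands) spectral matrix."""
        # Start from the pixel with the largest norm.
        selected = [int(np.argmax(np.linalg.norm(pixels, axis=1)))]
        basis = []                                        # orthonormal basis of the span
        for _ in range(n_endmembers - 1):
            v = pixels[selected[-1]].astype(float).copy()
            for b in basis:                               # Gram-Schmidt step
                v -= (v @ b) * b
            basis.append(v / np.linalg.norm(v))
            # Perpendicular component of every pixel w.r.t. the current span.
            proj = np.zeros_like(pixels, dtype=float)
            for b in basis:
                proj += np.outer(pixels @ b, b)
            residual = np.linalg.norm(pixels - proj, axis=1)
            selected.append(int(np.argmax(residual)))     # farthest from the span
        return selected

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        ends = rng.uniform(0, 1, size=(3, 20))            # three synthetic endmembers
        abund = rng.dirichlet(np.ones(3), size=500)       # mixing coefficients
        cube = abund @ ends + rng.normal(0, 0.001, size=(500, 20))
        print("selected pixel indices:", select_endmembers(cube, 3))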
Neurale Netwerken en Radarsystemen (Neural Networks and Radar Systems)
1989-08-01
Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network
Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun
2017-01-01
Data acquisition is the foundation of soft sensor and data fusion. Distributed data acquisition and its synchronization are the important technologies to ensure the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming at reducing the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between the network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of this algorithm. Compared with the Miroll and Steven model and Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low quality links. PMID:28282899
NASA Astrophysics Data System (ADS)
Dang, Nguyen Tuan; Akai-Kasada, Megumi; Asai, Tetsuya; Saito, Akira; Kuwahara, Yuji; Hokkaido University Collaboration
2015-03-01
Machine learning using artificial neural networks is supposed to be the best way to understand how the human brain trains itself to process information. In this study, we have successfully developed programs using a supervised machine learning algorithm. However, these supervised learning processes for the neural network required a very powerful computing configuration. Driven by the need for increased computing ability and reduced power consumption, accelerator circuits become critical. To develop such accelerator circuits using a supervised machine learning algorithm, a conducting polymer micro/nanowire growing process was realized and applied as a synaptic weight controller. In this work, high-conductivity polypyrrole (PPy) and poly(3,4-ethylenedioxythiophene) (PEDOT) wires were potentiostatically grown crosslinking the designated electrodes, which were prefabricated by lithography, when a square-wave AC voltage of appropriate amplitude and frequency was applied. The micro/nanowire growing process emulated the neurotransmitter release process of synapses inside a biological neuron, and the wire's resistance variation during growth was regarded as the variation of a synaptic weight in the machine learning algorithm. This work was carried out in cooperation with the Graduate School of Information Science and Technology, Hokkaido University.
Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures
NASA Astrophysics Data System (ADS)
Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.
2014-12-01
In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 Peta-bytes, while the upcoming next generation of NASA decadal Earth Observing instruments are expected to collect tens of Giga-bytes/day. In radio-astronomy, the Square Kilometre Array (SKA) will collect data in the Exa-bytes/day range, of which (after reduction and processing) around 1.5 Exa-bytes/year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that will allow system architects to model their expected data processing workflow, and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrary complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data intensive science.
Autonomous Onboard Science Image Analysis for Future Mars Rover Missions
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Roush, T. L.
1999-01-01
To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as, pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites as identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to transmit only "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high resolution image of those rocks in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit both rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent on-board decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. Output from these algorithms could be used to autonomously obtain rock spectra, determine which images should be transmitted to the ground, or to aid in image compression. We will discuss these and other algorithms and demonstrate their performance during a recent rover field test.
Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control
NASA Technical Reports Server (NTRS)
Bernstein, Daniel S.; Zilberstein, Shlomo
2003-01-01
Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.
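For contrast with the hierarchical method, the baseline in the comparison above is plain tabular Q-learning, sketched below on a toy five-state chain; the bottleneck-state decomposition itself is not shown. The chain, rewards, and learning parameters are illustrative assumptions, not the paper's rover domain.

```python
import random

# Tabular Q-learning on a toy five-state chain (states 0..4, goal at 4).
# Illustrative only: the paper's hierarchical algorithm additionally
# decomposes the MDP at bottleneck states, which is not shown here.
n_states, actions = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.1
random.seed(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < eps:                       # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else -0.01        # small step cost
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print(max(actions, key=lambda act: Q[(0, act)]))        # learned first move: 1
```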
The science of computing - Parallel computation
NASA Technical Reports Server (NTRS)
Denning, P. J.
1985-01-01
Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic component technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A recently introduced computer features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.
Raster Data Partitioning for Supporting Distributed GIS Processing
NASA Astrophysics Data System (ADS)
Nguyen Thai, B.; Olasz, A.
2015-08-01
The big data concept has already had an impact on the geospatial sector. Several studies apply techniques originating in computer science to GIS processing of huge amounts of geospatial data, while other studies treat geospatial data as if it had always been big data (Lee and Kang, 2015). Data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial, and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, by at least 20% every year (Dasgupta, 2013). The growing volume of raw data arrives in different formats and representations and serves different purposes, and the wealth of information derived from these data sets becomes valuable only through processing. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Recently, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing likewise requires processing algorithms that can be distributed to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capability to process non-GIS big data, but it is often inconvenient or inefficient to rewrite existing algorithms for the Map-Reduce programming model, and GIS data cannot be partitioned the way text-based data can, by lines or by bytes. Hence, we look for an alternative solution for data partitioning, data distribution, and execution of existing algorithms without rewriting them, or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as GIS (raster) data partitioning, distribution, and distributed processing of GIS algorithms. A proof-of-concept implementation has been made for raster data partitioning, distribution, and processing, and the first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on the application area, so data partitioning can be considered a preprocessing step before applying processing services to the data. As a proof of concept we have implemented a simple tile-based partitioning method that splits an image into smaller grids (NxM tiles) and compared the processing time of an NDVI calculation to existing methods. The concept is demonstrated using an open-source processing framework of our own development.
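A minimal sketch of the tile-based partitioning and NDVI step described above, assuming in-memory numpy rasters rather than the paper's framework: each (red, nir) tile pair is independent, so the per-tile work could be shipped to separate workers.

```python
import numpy as np

def tile(raster, n, m):
    """Split a raster into an n x m grid of tiles (edge tiles may be smaller)."""
    rows = np.array_split(raster, n, axis=0)
    return [t for row in rows for t in np.array_split(row, m, axis=1)]

def ndvi(red, nir):
    """Per-pixel NDVI; the small epsilon guards against division by zero."""
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(0)
red = rng.random((1024, 1024), dtype=np.float32)    # synthetic bands
nir = rng.random((1024, 1024), dtype=np.float32)

# Process each tile independently (this loop is where distributed workers
# would run the unmodified per-pixel algorithm).
per_tile = [ndvi(r, n) for r, n in zip(tile(red, 4, 4), tile(nir, 4, 4))]

# Tiling commutes with any pixel-wise algorithm: tile k of the full-image
# NDVI equals the NDVI of tile k.
print(np.allclose(per_tile[5], tile(ndvi(red, nir), 4, 4)[5]))   # True
```

The final check is the property that makes this partitioning safe for pixel-wise algorithms; neighborhood operations would additionally need halo pixels at tile borders.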
A parallel-processing approach to computing for the geographic sciences
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.
The graph neural network model.
Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele
2009-01-01
Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G, n) ∈ R^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm and to demonstrate its generalization capabilities.
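The core of the model is a fixed-point state iteration followed by an output map. Below is a toy numpy version with random weights scaled to be contractive (so the iteration provably converges); the graph, dimensions, and linear update are illustrative stand-ins for the learned transition and output functions of the actual GNN.

```python
import numpy as np

# Toy GNN state iteration: every node's state is updated from its label
# and its neighbours' states until a fixed point, then an output function
# maps (state, label) into R^m.
rng = np.random.default_rng(1)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)      # undirected 4-node graph
labels = rng.normal(size=(4, 3))                 # one 3-d label per node
d, m = 5, 2                                      # state and output dimensions

A = rng.normal(size=(d, d))
A *= 0.3 / np.linalg.norm(A, 2)                  # scale to enforce a contraction
B = rng.normal(size=(d, 3))
W_out = rng.normal(size=(m, d + 3))

x = np.zeros((4, d))
for _ in range(100):                             # Jacobi-style fixed-point iteration
    x = (adj @ x) @ A.T + labels @ B.T           # neighbour states + own label

tau = np.hstack([x, labels]) @ W_out.T           # tau(G, n) in R^m for every node n
print(tau.shape)                                 # (4, 2)
```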
Realistic Covariance Prediction for the Earth Science Constellation
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
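The Monte Carlo component mentioned above is straightforward to sketch: sample relative positions at closest approach from the combined covariance and count samples falling inside the combined hard-body radius. All states, covariances, and radii below are invented for illustration, not drawn from ESC operations.

```python
import numpy as np

# Hedged sketch of the Monte Carlo collision-probability computation.
rng = np.random.default_rng(42)

r1 = np.array([0.0, 0.0, 0.0])       # object 1 position at closest approach, km
r2 = np.array([0.05, 0.02, 0.0])     # object 2 position, km
P1 = np.diag([1e-4, 4e-4, 1e-4])     # position covariances, km^2
P2 = np.diag([2e-4, 1e-4, 1e-4])
hard_body_radius = 0.02              # combined hard-body radius, km

rel_mean = r2 - r1
rel_cov = P1 + P2                    # covariances add for independent errors

samples = rng.multivariate_normal(rel_mean, rel_cov, size=1_000_000)
miss = np.linalg.norm(samples, axis=1)
print("P(collision) ~", np.mean(miss < hard_body_radius))
```

The point the abstract makes holds visibly here: shrink or inflate P1 and P2 and the estimated probability moves with them, which is why realistic covariances are essential inputs.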
The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0
NASA Technical Reports Server (NTRS)
Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.
2001-01-01
An efficient algorithm was developed during the late 1980s and early 1990s by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in weak acceptance by the science community. The primary purpose of this report is to provide detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data as inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed at a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that, with no training or prior knowledge of the data, for a given fidelity the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
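The second paper's technique can be illustrated with a serial stand-in: exhaustive block matching that finds the displacement minimizing the sum of absolute differences (the paper maps this search onto a parallel architecture; a plain loop is shown here). The frames and search radius are synthetic.

```python
import numpy as np

def best_match(block, ref, top, left, search=4):
    """Exhaustive block matching: the displacement within +/-search pixels
    that minimizes the sum of absolute differences (SAD) against ref."""
    h, w = block.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + h <= ref.shape[0] and x + w <= ref.shape[1]:
                err = np.abs(ref[y:y + h, x:x + w] - block).sum()
                if err < best_err:
                    best, best_err = (dy, dx), err
    return best, best_err

rng = np.random.default_rng(7)
prev = rng.random((64, 64))
curr = np.roll(prev, (2, -1), axis=(0, 1))          # simulate frame motion
print(best_match(curr[16:24, 16:24], prev, 16, 16)) # ((-2, 1), ~0.0)
```

In the parallel formulation, each candidate displacement (or each block) is an independent SAD evaluation, which is exactly what makes the search map cleanly onto simple parallel hardware.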
Event processing in X-IFU detector onboard Athena.
NASA Astrophysics Data System (ADS)
Ceballos, M. T.; Cobos, B.; van der Kuurs, J.; Fraga-Encinas, R.
2015-05-01
The X-ray Observatory ATHENA was proposed in April 2014 as the mission to implement the science theme "The Hot and Energetic Universe" selected by ESA for L2 (the second Large-class mission in ESA's Cosmic Vision science programme). One of the two X-ray detectors designed to be onboard ATHENA is X-IFU, a cryogenic microcalorimeter based on Transition Edge Sensor (TES) technology that will provide spatially resolved high-resolution spectroscopy. X-IFU will be developed by a consortium of European research institutions, currently from France (leadership), Italy, The Netherlands, Belgium, the UK, Germany and Spain. From Spain, IFCA (CSIC-UC) is involved in the Digital Readout Electronics (DRE) unit of the X-IFU detector, in particular in the Event Processor Subsystem. We at IFCA are in charge of the development and implementation in the DRE unit of the Event Processing algorithms, designed to recognize, from a noisy signal, the intensity pulses generated by the absorption of the X-ray photons, and then to extract their main parameters (coordinates, energy, arrival time, grade, etc.). Here we present the design and performance of the algorithms developed for event recognition (adjusted derivative) and pulse grading/qualification, as well as progress on the algorithms designed to extract the energy content of the pulses (pulse optimal filtering). IFCA will also be responsible for the on-board implementation, in the (TBD) FPGAs or micro-processors of the DRE unit, of this Event Processing stage, which must fit within the limited telemetry of the instrument.
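A toy version of the derivative-based trigger is easy to sketch: differentiate the noisy record and flag upward threshold crossings. The pulse shape, noise level, and threshold rule below are illustrative; the flight algorithm's "adjusted derivative" and grading logic are more involved.

```python
import numpy as np

# Toy event recognition: flag pulses in a noisy record by thresholding
# the first difference of the signal. Everything here is synthetic.
rng = np.random.default_rng(3)
n = 2000
t = np.arange(n)
signal = rng.normal(0, 0.05, n)                   # baseline noise
for t0 in (400, 1100, 1600):                      # three simulated X-ray pulses
    signal += np.exp(-(t - t0) / 50.0) * (t >= t0)

deriv = np.diff(signal)
threshold = 5 * np.std(deriv[:200])               # estimated from a quiet stretch
crossings = np.flatnonzero((deriv[1:] > threshold) & (deriv[:-1] <= threshold))
print(crossings + 1)                              # ~ [400, 1100, 1600]
```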
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on-board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2, N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capability for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of the technologies required for the design and construction of SVIP and EASI, and to advance the spatial-spectral imaging and large-scale space interferometry science and engineering.
Problem-Oriented Specification of Concurrent Algorithms
Cremers, Armin B.; Hibbard, Thomas N.
1983-11-15
Preliminary version, September 1982. Armin B. Cremers, Computer Science Department, University of Dortmund, Dortmund, Fed. Rep. Germany; Thomas N. Hibbard, JPL, Pasadena, CA.
Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors
NASA Technical Reports Server (NTRS)
Flatley, Thomas P.
2015-01-01
SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.
Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets
NASA Technical Reports Server (NTRS)
Braverman, Amy; Fetzer, Eric
2007-01-01
Empirical probability distributions derived from the data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data that preserve information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.
Interactive visualization of Earth and Space Science computations
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise
1994-01-01
Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.
Phxtelemproc
NASA Technical Reports Server (NTRS)
Stanboli, Alice
2013-01-01
Phxtelemproc is a C/C++-based telemetry processing program that processes SFDU telemetry packets from the Telemetry Data System (TDS). It generates Experiment Data Records (EDRs) for several instruments, including the surface stereo imager (SSI); the robotic arm camera (RAC); the robotic arm (RA); the microscopy, electrochemistry, and conductivity analyzer (MECA); and the optical microscope (OM). It processes both uncompressed and compressed telemetry, and incorporates unique subroutines for the following compression algorithms: JPEG Arithmetic, JPEG Huffman, Rice, LUT3, RA, and SX4. This program was in the critical path for the daily command cycle of the Phoenix mission. The products generated by this program were part of the RA commanding process, as well as the SSI, RAC, OM, and MECA image and science analysis process. Its output products were used to advance the science of the near-polar regions of Mars, and helped prove that water is found in abundance there. Phxtelemproc is part of the MIPL (Multi-mission Image Processing Laboratory) system. This software produced Level 1 products used to analyze images returned by in situ spacecraft. It ultimately assisted in operations, planning, commanding, science, and outreach.
Citizen science: A new perspective to advance spatial pattern evaluation in hydrology
Stisen, Simon
2017-01-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning often make humans more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered more reliable than computer algorithms. In practice, however, science still largely depends on computer-based solutions, which offer benefits such as speed and the possibility of automating processes. The human vision can nevertheless be harnessed to evaluate the reliability of algorithms which are tailored to quantify similarity in spatial patterns. We established a citizen science project to employ human perception to rate similarity and dissimilarity between simulated spatial patterns of several scenarios of a hydrological catchment model. In total, more than 2,500 volunteers provided over 43,000 classifications of 1,095 individual subjects. We investigate the capability of a set of advanced statistical performance metrics to mimic the human perception in distinguishing between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating the human perception, but they clearly provide auxiliary information that is valuable for model diagnostics. The metrics differ markedly in their ability to unambiguously distinguish between similar and dissimilar patterns, which is regarded as a key feature of a reliable metric. The obtained dataset can provide an insightful benchmark to the community for testing novel spatial metrics. PMID:28558050
EOS Data Products Latency and Reprocessing Evaluation
NASA Astrophysics Data System (ADS)
Ramapriyan, H. K.; Wanchoo, L.
2012-12-01
NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) program has been processing, archiving, and distributing EOS data since the launch of the Terra platform in 1999. The EOSDIS Distributed Active Archive Centers (DAACs) and Science-Investigator-led Processing Systems (SIPSs) are generating over 5000 unique products with a daily average volume of 1.7 Petabytes. Initially, EOSDIS was required to process data products within 24 hours of receiving all inputs needed for generating them. Thus, latencies would generally be slightly over 24 and 48 hours after satellite data acquisition for Level 1 and Level 2 products, respectively. Due to budgetary constraints these requirements were relaxed, the requirement now being to avoid a growing backlog of unprocessed data. Nevertheless, the data providers have been generating these products in as timely a manner as possible, helped considerably by the reduction in costs of computing hardware. It is of interest to analyze the actual latencies achieved over the past several years in processing and inserting the data products into the EOSDIS archives for users, in support of scientific studies of land processes, oceanography, hydrology, atmospheric science, cryospheric science, etc. The instrument science teams have continuously evaluated the data products since the launches of the EOS satellites and improved the science algorithms to provide high quality products. Data providers have periodically reprocessed the previously acquired data with these improved algorithms. Reprocessing campaigns run for an extended time period in parallel with forward processing, since all data from the beginning of the mission need to be reprocessed, and each reprocessing activity involves more data than the previous one. The historical record of reprocessing times would be of interest to future missions, especially those involving large data volumes and/or heavy computational loads due to the complexity of algorithms. Evaluation of latency and reprocessing times requires some of the product metadata, such as the beginning and ending times of data acquisition, the processing date, and the version number. This information for each product is made available by data providers to the ESDIS Metrics System (EMS). The EMS replaced the earlier ESDIS Data Gathering and Reporting System (EDGRS) in FY2005, and since then has collected information about data product ingest, archive, and distribution. The analysis of latencies and reprocessing times will provide insight into the data provider process and identify potential areas of weakness in providing timely data to the user community. Delays may be caused by events such as system unavailability, disk failures, delays in Level 0 data delivery, availability of input data, network problems, and power failures. Analysis of these metrics will highlight areas for focused examination of the root causes of delays. The purposes of this study are to: 1) perform a detailed analysis of the latency of selected instrument products for the last 6 years; 2) analyze the reprocessed data from various data providers to determine the times taken for reprocessing campaigns; and 3) identify potential reasons for any anomalies in these metrics.
Incorporation of quality updates for JPSS CGS Products
NASA Astrophysics Data System (ADS)
Cochran, S.; Grant, K. D.; Ibrahim, W.; Brueske, K. F.; Smit, P.
2016-12-01
NOAA's next-generation environmental satellite, the Joint Polar Satellite System (JPSS), replaces the current Polar-orbiting Operational Environmental Satellites (POES). JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite was launched in 2011 and is currently NOAA's primary operational polar satellite. The JPSS ground system is the Common Ground System (CGS), and provides command, control, and communications (C3) and data processing (DP). A multi-mission system, CGS provides combinations of C3/DP for numerous NASA, NOAA, DoD, and international missions. In preparation for the next JPSS satellite, CGS improved its multi-mission capabilities to enhance mission operations for larger constellations of earth observing satellites, with the added benefit of streamlining mission operations for other NOAA missions. This paper will discuss both the theoretical basis and the actual practices used to date to identify, test and incorporate algorithm updates into the CGS processing baseline. To provide a basis for this support, Raytheon developed a theoretical analysis framework, and the application of derived engineering processes, for the maintenance of consistency and integrity of remote sensing operational algorithm outputs. The framework is an abstraction of the operationalization of the science-grade algorithm (Sci2Ops) process used throughout the JPSS program. By combining software and systems engineering controls, manufacturing disciplines to detect and reduce defects, and a standard process to control analysis, an environment to maintain operational algorithm maturity is achieved. Results of using this approach to implement algorithm changes in operations will also be detailed.
NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.
2012-01-01
A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new image processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy-to-read image data for teaching or validating new image processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging the development of new and enhanced image processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountain, urban, coastal water, and agricultural areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.
Production and Distribution of NASA MODIS Remote Sensing Products
NASA Technical Reports Server (NTRS)
Wolfe, Robert
2007-01-01
The two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on-board NASA's Earth Observing System (EOS) Terra and Aqua satellites make key measurements for understanding the Earth's terrestrial ecosystems. Global time-series of terrestrial geophysical parameters have been produced from MODIS/Terra for over 7 years and from MODIS/Aqua for more than 4 1/2 years. These well calibrated instruments, a team of scientists, and large data production, archive and distribution systems have allowed for the development of a new suite of high quality product variables at spatial resolutions as fine as 250 m in support of global change research and natural resource applications. This talk describes the MODIS science team's products, with a focus on the terrestrial (land) products, the data processing approach, and the process for monitoring and improving product quality. The original MODIS science team was formed in 1989. The team's primary role is the development and implementation of the geophysical algorithms. In addition, the team provided feedback on the design and pre-launch testing of the instrument and helped guide the development of the data processing system. The key challenges the science team dealt with before launch were developing algorithms for a new instrument and guiding the development of the large and complex multi-discipline processing system. The Land, Ocean and Atmosphere discipline teams drove the processing system requirements, particularly in the area of the processing loads and volumes needed to produce daily geophysical maps of the Earth at resolutions as fine as 250 m. The processing system had to handle a large number of data products, large data volumes and processing loads, and complex processing requirements. Prior to MODIS, daily global maps from heritage instruments, such as the Advanced Very High Resolution Radiometer (AVHRR), were not produced at resolutions finer than 5 km. The processing solution evolved into a combination of processing the lower level (Level 1) products and the higher level discipline-specific Land and Atmosphere products in the MODIS Science Investigator-led Processing System (SIPS), the MODIS Adaptive Processing System (MODAPS), with archive and distribution of the Land products to the user community by two of NASA's EOS Distributed Active Archive Centers (DAACs). Recently, a part of MODAPS, the Level 1 and Atmosphere Archive and Distribution System (LAADS), took over the role of archiving and distributing the Level 1 and Atmosphere products to the user community.
Development of a Two-Wheel Contingency Mode for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
Starin, Scott R.; ODonnell, James R., Jr.; Bauer, Frank H. (Technical Monitor)
2002-01-01
In the event of a failure of one of MAP's three reaction wheel assemblies (RWAs), it is not possible to achieve three-axis, full-state attitude control using the remaining two wheels. Hence, two of the attitude control algorithms implemented on the MAP spacecraft would no longer be usable in their current forms: Inertial Mode, used for slewing to and holding inertial attitudes, and Observing Mode, which implements the nominal dual-spin science mode. This paper describes the effort to create a complete strategy for using software algorithms to cope with a RWA failure. The discussion of the design process is divided into three main subtopics: performing orbit maneuvers to reach and maintain an orbit about the second Earth-Sun libration point in the event of a RWA failure, completing the mission using a momentum-bias two-wheel science mode, and developing a new thruster-based mode for adjusting the inertially fixed momentum bias. In this summary, the philosophies used in designing these changes are shown; the full paper will supplement these with algorithm descriptions and testing results.
What does semantic tiling of the cortex tell us about semantics?
Barsalou, Lawrence W
2017-10-01
Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, central for understanding the mechanisms that implement cognition, in general, and conceptual processing, in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) feature and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations to how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.
EPA's Models-3 CMAQ system is intended to provide a community modeling paradigm that allows continuous improvement of the one-atmosphere modeling capability in a unified fashion. CMAQ's modular design promotes incorporation of several sets of science process modules representing ...
NASA Astrophysics Data System (ADS)
Chioran, Doina; Nicoarǎ, Adrian; Roşu, Şerban; Cǎrligeriu, Virgil; Ianeş, Emilia
2013-10-01
Digital processing of two-dimensional cone beam computer tomography slices starts with identification of the contours of the elements within. This paper presents the collective work of specialists in medicine and in applied mathematics in computer science on the elaboration and implementation of algorithms for dental 2D imagery.
Area collapse algorithm computing new curve of 2D geometric objects
NASA Astrophysics Data System (ADS)
Buczek, Michał Mateusz
2017-06-01
The processing of cartographic data demands human involvement, and up-to-date algorithms try to automate a part of this process. The goal is to obtain a digital model, or additional information about the shape and topology of the input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents the topological and geometrical characteristics of the input data, and is computed using algorithms such as medial axis, skeletonization, erosion, thinning, area collapse and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects, for example a polygon with a polygonal chain, or a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon, and it has a linear plot without branches. The computational process is automatic and repeatable. The requirements on input data are discussed. The author analyzes results according to the method used to compute the ends of the output polygonal chains, and explores additional methods to improve the results. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing the hatching of embankments is described.
NASA Astrophysics Data System (ADS)
Masek, J.; Rao, A.; Gao, F.; Davis, P.; Jackson, G.; Huang, C.; Weinstein, B.
2008-12-01
The Land Cover Change Community-based Processing and Analysis System (LC-ComPS) combines grid technology, existing science modules, and dynamic workflows to enable users to complete advanced land data processing on data available from local and distributed archives. Changes in land cover represent a direct link between human activities and the global environment, and in turn affect Earth's climate. Thus characterizing land cover change has become a major goal for Earth observation science. Many science algorithms exist to generate new products (e.g., surface reflectance, change detection) used to study land cover change. The overall objective of LC-ComPS is to release a set of tools and services to the land science community that can be implemented as a flexible system to produce surface reflectance and land-cover change information with ground resolution on the order of Landsat-class instruments. This package includes software modules for pre-processing Landsat-type satellite imagery (calibration, atmospheric correction, orthorectification, precision registration, BRDF correction) and for performing land-cover change analysis, and includes pre-built workflow chains to automatically generate surface reflectance and land-cover change products based on user input. In order to meet the project objectives, the team created the infrastructure (i.e., a client-server system with graphical and machine interfaces) to expand the use of these existing science algorithm capabilities in a community with distributed, large data archives and processing centers. Because of the distributed nature of the user community, grid technology was chosen to unite the dispersed community resources. At that time, grid computing was not used consistently and operationally within the Earth science research community. Therefore, there was a learning curve to configure and implement the underlying public key infrastructure (PKI) interfaces required for user authentication, secure file transfer and remote job execution on the grid network of machines. In addition, science support was needed to verify that the grid technology did not have any adverse effects on the science module outputs. Other open source, unproven technologies, such as a workflow package to manage jobs submitted by the user, were infused into the overall system with successful results. This presentation will discuss the basic capabilities of LC-ComPS, explain how the technology was infused, provide lessons learned for using and integrating the various technologies while developing and operating the system, and finally outline plans moving forward (maintenance and operations decisions) based on the experience to date.
Zombie algorithms: a timesaving remote sensing systems engineering tool
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.; Powell, Dylan C.; Marley, Stephen
2008-08-01
In modern horror fiction, zombies are generally undead corpses brought back from the dead by supernatural or scientific means, and are rarely under anyone's direct control. They typically have very limited intelligence, and hunger for the flesh of the living [1]. A typical spectroradiometric or hyperspectral instrument provides calibrated radiances for a number of remote sensing algorithms. The algorithms typically must meet specified latency and availability requirements while yielding products at the required quality. These systems, whether research, operational, or a hybrid, are typically cost constrained. Complexity of the algorithms can be high, and may evolve and mature over time as sensor characterization changes, product validation occurs, and areas of scientific basis improvement are identified and completed. This suggests the need for a systems engineering process for algorithm maintenance that is agile, cost efficient, repeatable, and predictable. Experience with remote sensing science data systems suggests the benefits of "plug-n-play" concepts of operation. The concept, while intuitively simple, can be challenging to implement in practice. The use of zombie algorithms (empty shells that outwardly resemble the form, fit, and function of a "complete" algorithm, without the implemented theoretical basis) gives ground systems advantages equivalent to those obtained by integrating sensor engineering models onto the spacecraft bus. Combining this with a mature, repeatable process for incorporating the theoretical basis, or scientific core, into the "head" of the zombie algorithm, along with associated scripting and registration, provides an easy "on ramp" for the rapid and low-risk integration of scientific applications into operational systems.
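A minimal sketch of the pattern in present-day terms, assuming a simple object interface (all names below are invented, not from any actual ground-system framework): the zombie satisfies the algorithm's form, fit, and function so the system can be integrated end-to-end, and the scientific core is dropped into its "head" later.

```python
from abc import ABC, abstractmethod

class ProductAlgorithm(ABC):
    """Interface every product algorithm, zombie or real, must satisfy."""
    @abstractmethod
    def run(self, radiances: dict) -> dict: ...

class ZombieSST(ProductAlgorithm):
    """Empty shell for a hypothetical sea-surface-temperature product:
    correct inputs, outputs, and registration, placeholder science."""
    def run(self, radiances):
        return {"sst_kelvin": None, "quality_flag": "ZOMBIE"}

class RealSST(ZombieSST):
    """The scientific core later dropped into the zombie's 'head';
    the retrieval below is a stand-in, not a real SST algorithm."""
    def run(self, radiances):
        sst = 0.5 * (radiances["band_11um"] + radiances["band_12um"])
        return {"sst_kelvin": sst, "quality_flag": "GOOD"}

# The ground system can schedule and test the zombie end-to-end before
# the theoretical basis is ever implemented.
for alg in (ZombieSST(), RealSST()):
    print(alg.run({"band_11um": 288.0, "band_12um": 287.0}))
```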
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori; Vergara, John Paul
2012-01-01
This paper presents the online cognitive and algorithm tests, which were developed in order to determine if certain cognitive factors and fundamental algorithms correlate with the performance of students in their introductory computer science course. The tests were implemented among Management Information Systems majors from the Philippines and…
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
Science with High Spatial Resolution Far-Infrared Data
NASA Technical Reports Server (NTRS)
Terebey, Susan (Editor); Mazzarella, Joseph M. (Editor)
1994-01-01
The goal of this workshop was to discuss new science and techniques relevant to high spatial resolution processing of far-infrared data, with particular focus on high resolution processing of IRAS data. Users of the maximum correlation method, maximum entropy, and other resolution enhancement algorithms applicable to far-infrared data gathered at the Infrared Processing and Analysis Center (IPAC) for two days in June 1993 to compare techniques and discuss new results. During a special session on the third day, interested astronomers were introduced to IRAS HIRES processing, which is IPAC's implementation of the maximum correlation method to the IRAS data. Topics discussed during the workshop included: (1) image reconstruction; (2) random noise; (3) imagery; (4) interacting galaxies; (5) spiral galaxies; (6) galactic dust and elliptical galaxies; (7) star formation in Seyfert galaxies; (8) wavelet analysis; and (9) supernova remnants.
A Science Data System Approach for the SMAP Mission
NASA Technical Reports Server (NTRS)
Woollard, David; Kwoun, Oh-ig; Bicknell, Tom; West, Richard; Leung, Kon
2009-01-01
Though Science Data System (SDS) development has not traditionally been part of the mission concept phase, lessons learned and study of past Earth science missions indicate that SDS functionality can greatly benefit algorithm developers in all mission phases. We have proposed a SDS approach for the SMAP Mission that incorporates early support for an algorithm testbed, allowing scientists to develop codes and seamlessly integrate them into the operational SDS. This approach will greatly reduce both the costs and risks involved in algorithm transitioning and SDS development.
The GOES-R Product Generation Architecture - Post CDR Update
NASA Astrophysics Data System (ADS)
Dittberner, G.; Kalluri, S.; Weiner, A.
2012-12-01
The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution, and much shorter relook times. Considerably greater compute and memory resources are necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
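A minimal sketch of the Executive/Dispatcher/Strategy division of labor described above, with the event-driven trigger that the Data Fabric would provide; all class and method names are illustrative, not the GOES-R GS implementation.

```python
class Strategy:
    """Decides when an algorithm has the data it needs to execute."""
    def __init__(self, required):
        self.required = set(required)
    def ready(self, available):
        return self.required <= set(available)

class Dispatcher:
    """Collects and provides data items to the algorithm."""
    def __init__(self):
        self.available = {}
    def offer(self, name, value):
        self.available[name] = value

class Executive:
    """Wraps a science algorithm as a managed service."""
    def __init__(self, algorithm, strategy, dispatcher):
        self.algorithm, self.strategy, self.dispatcher = algorithm, strategy, dispatcher
    def on_data_event(self, name, value):
        # A data-availability event (as the Data Fabric would deliver).
        self.dispatcher.offer(name, value)
        if self.strategy.ready(self.dispatcher.available):
            return self.algorithm(self.dispatcher.available)

# Stand-in "science algorithm" needing two bands before it can run.
cloud_mask = lambda d: {"cloud_mask": d["band_2"] > d["band_14"]}
svc = Executive(cloud_mask, Strategy(["band_2", "band_14"]), Dispatcher())
print(svc.on_data_event("band_2", 0.8))      # None: still waiting for data
print(svc.on_data_event("band_14", 0.3))     # product produced
```

Because each algorithm is wrapped the same way, adding, removing, or updating one is a matter of registering or replacing a service, which is the plug-and-play property the architecture is after.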
The GOES-R Product Generation Architecture
NASA Astrophysics Data System (ADS)
Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.
2011-12-01
The GOES-R system will substantially improve users' ability to succeed in their work by providing data with significantly enhanced instruments, higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation architecture is designed to provide the computer and memory resources necessary to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages science algorithms that generate products. It is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
Information dynamics algorithm for detecting communities in networks
NASA Astrophysics Data System (ADS)
Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro
2012-11-01
The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the domain application fields, i.e. making them domain-inspired. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster (MCL) algorithm [4] by considering network nodes as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behavior, the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual point of view, and fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
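For reference, the MCL baseline that the proposed algorithm modifies alternates expansion (random-walk spreading) with inflation (strengthening intra-cluster flow). A plain numpy sketch on a two-community toy graph is shown below; the agent decisions and memory/oblivion factor of the paper are not included.

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=50):
    """Plain Markov Cluster iteration: expansion + inflation on a
    column-stochastic transition matrix."""
    M = adj + np.eye(len(adj))            # add self-loops
    M = M / M.sum(axis=0)                 # column-normalize
    for _ in range(iters):
        M = M @ M                         # expansion (random-walk step)
        M = M ** inflation                # inflation (elementwise power)
        M = M / M.sum(axis=0)             # re-normalize
    return M

# Two triangles joined by a single edge -> two communities.
adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 1, 1, 0]], dtype=float)
M = mcl(adj)
print(np.round(M, 2))   # rows with nonzero entries mark cluster attractors
```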
GOES-R GS Product Generation Infrastructure Operations
NASA Astrophysics Data System (ADS)
Blanton, M.; Gundy, J.
2012-12-01
GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms could be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, extendable, modular and reliable. The primary parts of the PG infrastructure are the Service Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages science algorithms that generate products. The SBA is divided into three parts, the Executive, which manages and configures the algorithm as a service, the Dispatcher, which provides data to the algorithm, and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory based caching above disk-based storage) and an event system that allows algorithm services to know when data is available and to get the data that they need to begin processing when they need it. Together, the SBA and the DDF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
Simple, Scalable, Script-Based Science Processor (S4P)
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Vollmer, Bruce; Berrick, Stephen; Mack, Robert; Pham, Long; Zhou, Bryan; Wharton, Stephen W. (Technical Monitor)
2001-01-01
The development and deployment of data processing systems to process Earth Observing System (EOS) data has proven to be costly and prone to technical and schedule risk. Integration of science algorithms into a robust operational system has been difficult. The core processing system, based on commercial tools, has demonstrated limitations at the rates needed to produce the several terabytes per day for EOS, primarily due to job management overhead. This has motivated an evolution in the EOS Data Information System toward a more distributed one incorporating Science Investigator-led Processing Systems (SIPS). As part of this evolution, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has developed a simplified processing system to accommodate the increased load expected with the advent of reprocessing and launch of a second satellite. This system, the Simple, Scalable, Script-based Science Processor (S4P), may also serve as a resource for future SIPS. The current EOSDIS Core System was designed to be general, resulting in a large, complex mix of commercial and custom software. In contrast, many simpler systems, such as the EROS Data Center AVHRR IKM system, rely on a simple directory structure to drive processing, with directories representing different stages of production. The system passes input data to a directory, and the output data is placed in a "downstream" directory. The GES DAAC's Simple Scalable Script-based Science Processing System is based on the latter concept, but with modifications to allow varied science algorithms and improve portability. It uses a factory assembly-line paradigm: when work orders arrive at a station, an executable is run, and output work orders are sent to downstream stations. The stations are implemented as UNIX directories, while work orders are simple ASCII files. The core S4P infrastructure consists of a Perl program called stationmaster, which detects newly arrived work orders and forks a job to run the appropriate executable (registered in a configuration file for that station). Although S4P is written in Perl, the executables associated with a station can be any program that can be run from the command line, i.e., non-interactively. An S4P instance is typically monitored using a simple Graphical User Interface. However, the reliance of S4P on UNIX files and directories also allows visibility into the state of stations and jobs using standard operating system commands, permitting remote monitoring and control over low-bandwidth connections. S4P is being used as the foundation for several small- to medium-size systems for data mining, on-demand subsetting, processing of direct broadcast Moderate Resolution Imaging Spectroradiometer (MODIS) data, and Quick-Response MODIS processing. It has also been used to implement a large-scale system to process MODIS Level 1 and Level 2 Standard Products, which will ultimately process close to 2 TB/day.
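The station/work-order pattern described above (directories as stations, ASCII work orders, a watcher that forks jobs) can be sketched in a few lines; the original stationmaster is Perl, so the following Python analogue, including the hypothetical DO.*.wo file-naming convention, is an illustration rather than the S4P code itself.

import time
from pathlib import Path

def run_station(station_dir, downstream_dir, handler, poll_seconds=5, max_polls=None):
    """Minimal station loop: watch a directory for work-order files, run the
    registered handler on each, and emit an output work order downstream."""
    station, downstream = Path(station_dir), Path(downstream_dir)
    downstream.mkdir(parents=True, exist_ok=True)
    polls = 0
    while max_polls is None or polls < max_polls:
        for work_order in sorted(station.glob("DO.*.wo")):    # hypothetical naming convention
            result = handler(work_order.read_text())           # the station's "executable"
            (downstream / work_order.name).write_text(result)  # pass work downstream
            work_order.unlink()                                 # job done, remove the order
        time.sleep(poll_seconds)
        polls += 1

# usage sketch: a station whose job is simply to upper-case the order's text
# run_station("stations/ingest", "stations/process", handler=str.upper, max_polls=3)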
The Challenge of Promoting Algorithmic Thinking of Both Sciences- and Humanities-Oriented Learners
ERIC Educational Resources Information Center
Katai, Z.
2015-01-01
The research results we present in this paper reveal that properly calibrated e-learning tools have potential to effectively promote the algorithmic thinking of both science-oriented and humanities-oriented students. After students had watched an illustration (by a folk dance choreography) and an animation of the studied sorting algorithm (bubble…
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.
2002-01-01
We are developing science analysis algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors. Additional information is contained in the original extended abstract.
A Time Series of Sea Surface Nitrate and Nitrate based New Production in the Global Oceans
NASA Astrophysics Data System (ADS)
Goes, J. I.; Fargion, G. S.; Gomes, H. R.; Franz, B. A.
2014-12-01
With support from NASA's MEaSUREs program, we are developing algorithms for two innovative satellite-based Earth Science Data Records (ESDRs), one for Sea Surface Nitrate (SSN) and the other for Nitrate-based New Production (NnP). Newly developed algorithms will be applied to mature ESDRs of Chlorophyll a and SST available from NASA, to generate maps of SSN and NnP. Our proposed ESDRs offer the potential of greatly improving our understanding of the role of the oceans in global carbon cycling, earth system processes and climate change, especially for regions and seasons which are inaccessible to traditional shipboard studies. They also provide an innovative means for validating and improving coupled ecosystem models that currently rely on global maps of nitrate generated from multi-year data sets. To aid in our algorithm development efforts and to ensure that our ESDRs are truly global in nature, we are currently in the process of assembling a large database of nutrients from oceanographic institutions all over the world. Once our products are developed and our algorithms are fine-tuned, large-scale data production will be undertaken in collaboration with NASA's Ocean Biology Processing Group (OBPG), which will make the data publicly available first as evaluation products and then as mature ESDRs.
The art and science of switching antipsychotic medications, part 2.
Weiden, Peter J; Miller, Alexander L; Lambert, Tim J; Buckley, Peter F
2007-01-01
In the presentation "Switching and Metabolic Syndrome," Weiden summarizes reasons to switch antipsychotics, highlighting weight gain and other metabolic adverse events as recent treatment targets. In "Texas Medication Algorithm Project (TMAP)," Miller reviews the TMAP study design, discusses results related to the algorithm versus treatment as usual, and concludes with the implications of the study. Lambert's presentation, "Dosing and Titration Strategies to Optimize Patient Outcome When Switching Antipsychotic Therapy," reviews the decision-making process when switching patients' medication, addresses dosing and titration strategies to effectively transition between medications, and examines other factors to consider when switching pharmacotherapy.
Results from CrIS-ATMS Obtained Using the AIRS Science Team Retrieval Methodology
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis C.; Iredell, Lena
2013-01-01
AIRS was launched on EOS Aqua in May 2002, together with AMSU-A and HSB (which subsequently failed early in the mission), to form a next-generation polar orbiting infrared and microwave atmospheric sounding system. AIRS/AMSU had two primary objectives. The first objective was to provide real-time data products available for use by the operational Numerical Weather Prediction Centers in a data assimilation mode to improve the skill of their subsequent forecasts. The second objective was to provide accurate unbiased sounding products with good spatial coverage that are used to generate stable multi-year climate data sets to study the earth's interannual variability, climate processes, and possibly long-term trends. AIRS/AMSU data for all time periods are now being processed using the state-of-the-art AIRS Science Team Version-6 retrieval methodology. The Suomi-NPP mission was launched in October 2011 as part of a sequence of Low Earth Orbiting satellite missions under the "Joint Polar Satellite System" (JPSS). NPP carries CrIS and ATMS, which are advanced infrared and microwave atmospheric sounders that were designed as follow-ons to the AIRS and AMSU instruments. The main objective of this work is to assess whether CrIS/ATMS will be an adequate replacement for AIRS/AMSU from the perspective of the generation of accurate and consistent long term climate data records, or if improved instruments should be developed for future flight. It is critical for CrIS/ATMS to be processed using an algorithm similar to, or at least comparable to, AIRS Version-6 before such an assessment can be made. We have been conducting research to optimize products derived from CrIS/ATMS observations using a scientific approach analogous to the AIRS Version-6 retrieval algorithm. Our latest research uses Version-5.70 of the CrIS/ATMS retrieval algorithm, which is otherwise analogous to AIRS Version-6, but does not yet benefit from the Neural-Net first-guess start-up system that significantly improved the results of AIRS Version-6. Version-5.70 CrIS/ATMS temperature profile and surface skin temperature retrievals are of very good quality, and are better than AIRS Version-5 retrievals, but are still significantly poorer than those of AIRS Version-6. CrIS/ATMS retrievals should improve when a Neural-Net start-up system is ready for use. We also examined CrIS/ATMS retrievals generated by NOAA using their NUCAPS retrieval algorithm, which is based on earlier versions of the AIRS Science Team retrieval algorithms. We show that the NUCAPS algorithm as currently configured is not well suited for climate monitoring purposes.
NASA Science Data Processing for SNPP
NASA Astrophysics Data System (ADS)
Hall, A.; Behnke, J.; Lowe, D. R.; Ho, E. L.
2014-12-01
NASA's ESDIS Project has been operating the Suomi National Polar-Orbiting Partnership (SNPP) Science Data Segment (SDS) since the launch in October 2011. The science data processing system includes a Science Data Depository and Distribution Element (SD3E) and five Product Evaluation and Analysis Tool Elements (PEATEs): Land, Ocean, Atmosphere, Ozone, and Sounder. The SDS has been responsible for assessing Environmental Data Records (EDRs) for climate quality, providing and demonstrating algorithm improvements/enhancements and supporting the calibration/validation activities as well as instrument calibration and sensor table uploads for mission planning. The SNPP also flies two NASA instruments: OMPS Limb and CERES. The SNPP SDS has been responsible for producing, archiving and distributing the standard products for those instruments in close association with their NASA science teams. The PEATEs leveraged existing science data processing techniques developed under the EOSDIS Program. This enabled the PEATEs to do an excellent job of supporting Science Team analysis for SNPP. The SDS acquires data from three sources: NESDIS IDPS (Raw Data Records (RDRs)), GRAVITE (Retained Intermediate Products (RIPs)), and NOAA/CLASS (higher level products). The SD3E component aggregates the RDRs, and distributes them to each of the PEATEs for further analysis and processing. It provides a ~32 day rolling storage of data, available for pickup by the PEATEs. The current system used by NASA will be presented along with plans for streamlining the system in support of continuing NASA's EOS measurements.
NASA Technical Reports Server (NTRS)
Duda, James L.; Barth, Suzanna C
2005-01-01
The VIIRS sensor provides measurements for 22 Environmental Data Records (EDRs) addressing the atmosphere, ocean surface temperature, ocean color, land parameters, aerosols, imaging for clouds and ice, and more. That is, the VIIRS collects visible and infrared radiometric data of the Earth's atmosphere, ocean, and land surfaces. Data types include atmospheric, clouds, Earth radiation budget, land/water and sea surface temperature, ocean color, and low light imagery. This wide scope of measurements calls for the preparation of a multiplicity of Algorithm Theoretical Basis Documents (ATBDs), and, additionally, for intermediate products such as the cloud mask. Furthermore, the VIIRS interacts with three or more other sensors. This paper addresses selected and crucial elements of the process being used to convert and test an immense volume of maturing and changing science code into the initial operational source code in preparation for launch of NPP. The integrity of the original science code is maintained and enhanced via baseline comparisons when re-hosted, in addition to multiple planned code performance reviews.
Global Climate Monitoring with the EOS PM-Platform's Advanced Microwave Scanning Radiometer (AMSR-E)
NASA Technical Reports Server (NTRS)
Spencer, Roy W.
2002-01-01
The Advanced Microwave Scanning Radiometer (AMSR-E) is being built by NASDA to fly on NASA's PM Platform (now called Aqua) in December 2000. This is in addition to a copy of AMSR that will be launched on Japan's ADEOS-II satellite in 2001. The AMSRs improve upon the window frequency radiometer heritage of the SSM/I and SMMR instruments. Major improvements over those instruments include channels spanning the 6.9 GHz to 89 GHz frequency range, and higher spatial resolution from a 1.6 m reflector (AMSR-E) and 2.0 m reflector (ADEOS-II AMSR). The ADEOS-II AMSR also will have 50.3 and 52.8 GHz channels, providing sensitivity to lower tropospheric temperature. NASA funds an AMSR-E Science Team to provide algorithms for the routine production of a number of standard geophysical products. These products will be generated by the AMSR-E Science Investigator-led Processing System (SIPS) at the Global Hydrology Resource Center (GHRC) in Huntsville, Alabama. While there is a separate NASDA-sponsored activity to develop algorithms and produce products from AMSR, as well as a Joint (NASDA-NASA) AMSR Science Team activity, here I will review only the AMSR-E Team's algorithms and how they benefit from the new capabilities that AMSR-E will provide. The US Team's products will be archived at the National Snow and Ice Data Center (NSIDC).
Approximate Computing Techniques for Iterative Graph Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh
Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also to the availability of large scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
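As a hedged illustration of one of the named heuristics, the sketch below applies loop perforation to a plain PageRank power iteration by randomly skipping a fraction of node updates each sweep; the skip rate, graph, and data structures are illustrative and not the paper's implementation.

import random

def perforated_pagerank(adjacency, damping=0.85, iterations=30, skip_fraction=0.3):
    """PageRank power iteration with loop perforation: each sweep, a random
    fraction of node updates is skipped, trading accuracy for speed."""
    nodes = list(adjacency)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    out_degree = {v: len(adjacency[v]) or 1 for v in nodes}
    for _ in range(iterations):
        new_rank = dict(rank)
        for v in nodes:
            if random.random() < skip_fraction:       # perforation: skip this update
                continue
            incoming = sum(rank[u] / out_degree[u] for u in nodes if v in adjacency[u])
            new_rank[v] = (1 - damping) / len(nodes) + damping * incoming
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(perforated_pagerank(graph))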
D'Onofrio, David J; Abel, David L; Johnson, Donald E
2012-03-14
The fields of molecular biology and computer science have cooperated over recent years to create a synergy between the cybernetic and biosemiotic relationships found in cellular genomics and the information and language found in computational systems. Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called prescriptive information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition, suggesting that both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed both in the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.
Implementation of Multispectral Image Classification on a Remote Adaptive Computer
NASA Technical Reports Server (NTRS)
Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna
1999-01-01
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
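For readers unfamiliar with the classifier, here is a minimal software sketch of a probabilistic neural network of the kind described, applied to synthetic pixel vectors; the smoothing parameter, band count, and data are placeholders, and the FPGA mapping is of course not shown.

import numpy as np

def pnn_classify(train_pixels, train_labels, test_pixels, sigma=0.1):
    """Probabilistic neural network: each class's density is a sum of Gaussian
    kernels centred on its training pixels; a pixel gets the densest class."""
    classes = np.unique(train_labels)
    predictions = []
    for x in test_pixels:
        densities = []
        for c in classes:
            members = train_pixels[train_labels == c]
            sq_dist = np.sum((members - x) ** 2, axis=1)
            densities.append(np.mean(np.exp(-sq_dist / (2 * sigma ** 2))))
        predictions.append(classes[int(np.argmax(densities))])
    return np.array(predictions)

# toy multispectral pixels: 3 bands, 2 classes
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0.2, 0.05, (20, 3)), rng.normal(0.7, 0.05, (20, 3))])
labels = np.array([0] * 20 + [1] * 20)
print(pnn_classify(train, labels, rng.normal(0.7, 0.05, (5, 3))))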
Agile science: creating useful products for behavior change in the real world.
Hekler, Eric B; Klasnja, Predrag; Riley, William T; Buman, Matthew P; Huberty, Jennifer; Rivera, Daniel E; Martin, Cesar A
2016-06-01
Evidence-based practice is important for behavioral interventions but there is debate on how best to support real-world behavior change. The purpose of this paper is to define products and a preliminary process for efficiently and adaptively creating and curating a knowledge base for behavior change for real-world implementation. We look to evidence-based practice suggestions and draw parallels to software development. We argue for targeting three products: (1) the smallest, meaningful, self-contained, and repurposable behavior change modules of an intervention; (2) "computational models" that define the interaction between modules, individuals, and context; and (3) "personalization" algorithms, which are decision rules for intervention adaptation. The "agile science" process includes a generation phase, whereby contender operational definitions and constructs of the three products are created and assessed for feasibility, and an evaluation phase, whereby effect size estimates/causal inferences are created. The process emphasizes early-and-often sharing. If correct, agile science could enable a more robust knowledge base for behavior change.
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
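To make the greedy expansion step concrete, the following is a hedged sketch of growing a seed community under a local fitness function; an LFM-style internal-degree fitness is used purely for illustration, and the paper's analytic computation of resolution levels is not reproduced here.

def community_fitness(community, adjacency, alpha=1.0):
    """LFM-style fitness: internal degree over total degree of the community."""
    k_in = sum(1 for v in community for u in adjacency[v] if u in community)
    k_out = sum(1 for v in community for u in adjacency[v] if u not in community)
    return k_in / float(k_in + k_out) ** alpha if (k_in + k_out) else 0.0

def grow_community(seed, adjacency, alpha=1.0):
    """Greedily add the neighbour that most improves fitness, until none does."""
    community = {seed}
    while True:
        frontier = {u for v in community for u in adjacency[v]} - community
        best, best_gain = None, 0.0
        current = community_fitness(community, adjacency, alpha)
        for u in frontier:
            gain = community_fitness(community | {u}, adjacency, alpha) - current
            if gain > best_gain:
                best, best_gain = u, gain
        if best is None:
            return community
        community.add(best)

# toy graph: two triangles joined by a single edge
graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
print(grow_community(1, graph))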
NASA Astrophysics Data System (ADS)
Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.
2003-08-01
In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics are extensively used for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum Computing's unique algorithmic features offer the possibility of speeding up some of the known rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance between the classical and quantum versions of the algorithms.
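As background for the quadratic speedup invoked above, the following is a tiny state-vector simulation of Grover's search; it illustrates only the search primitive the paper proposes to apply to Z-buffering and ray-tracing, not a rendering implementation, and the problem size is illustrative.

import numpy as np

def grover_search(n_qubits, marked_index):
    """Simulate Grover's algorithm on a state vector of size 2**n_qubits and
    return the measurement probabilities after the optimal number of iterations."""
    size = 2 ** n_qubits
    state = np.full(size, 1.0 / np.sqrt(size))           # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(size)))
    for _ in range(iterations):
        state[marked_index] *= -1.0                       # oracle: flip the marked amplitude
        state = 2 * state.mean() - state                  # diffusion: inversion about the mean
    return np.abs(state) ** 2

probabilities = grover_search(n_qubits=5, marked_index=7)
print(probabilities.argmax(), probabilities.max())        # the marked item dominates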
Spectral unmixing of agents on surfaces for the Joint Contaminated Surface Detector (JCSD)
NASA Astrophysics Data System (ADS)
Slamani, Mohamed-Adel; Chyba, Thomas H.; LaValley, Howard; Emge, Darren
2007-09-01
ITT Corporation, Advanced Engineering and Sciences Division, is currently developing the Joint Contaminated Surface Detector (JCSD) technology under an Advanced Concept Technology Demonstration (ACTD) managed jointly by the U.S. Army Research, Development, and Engineering Command (RDECOM) and the Joint Project Manager for Nuclear, Biological, and Chemical Contamination Avoidance for incorporation on the Army's future reconnaissance vehicles. This paper describes the design of the chemical agent identification (ID) algorithm associated with JCSD. The algorithm detects target chemicals mixed with surface and interferent signatures. Simulated data sets were generated from real instrument measurements to support a matrix of parameters based on a Design Of Experiments approach (DOE). Decisions based on receiver operating characteristics (ROC) curves and area-under-the-curve (AUC) measures were used to down-select between several ID algorithms. Results from top performing algorithms were then combined via a fusion approach to converge towards optimum rates of detections and false alarms. This paper describes the process associated with the algorithm design and provides an illustrating example.
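The AUC-based down-selection and score fusion described above can be sketched generically as follows; the rank-based AUC, the simple averaging fusion, and the synthetic data are stand-ins for the JCSD team's actual metrics and algorithms.

import numpy as np

def auc(scores, labels):
    """Area under the ROC curve from ranks (the Mann-Whitney U formulation)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    positives = labels == 1
    n_pos, n_neg = positives.sum(), (~positives).sum()
    return (ranks[positives].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def down_select_and_fuse(algorithm_scores, labels, keep=2):
    """Keep the 'keep' algorithms with the highest AUC, then fuse by averaging scores."""
    aucs = {name: auc(s, labels) for name, s in algorithm_scores.items()}
    best = sorted(aucs, key=aucs.get, reverse=True)[:keep]
    fused = np.mean([algorithm_scores[name] for name in best], axis=0)
    return best, aucs, fused

rng = np.random.default_rng(1)
labels = np.array([1] * 50 + [0] * 50)
scores = {"alg_a": labels + rng.normal(0, 0.6, 100),
          "alg_b": labels + rng.normal(0, 1.5, 100),
          "alg_c": rng.normal(0, 1.0, 100)}
print(down_select_and_fuse(scores, labels))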
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization has evolved to solve difficult optimization problems in diverse areas of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), the fireflies are ranked using a sorting algorithm. The original FA was proposed with bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The test set consists of unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performs better at lower dimensions than at higher ones.
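A minimal FA sketch makes the ranking step discussed above concrete; here Python's built-in sort stands in for the bubble-sort/quick-sort comparison studied in the paper, and the sphere test function and parameter values are illustrative assumptions.

import numpy as np

def firefly_minimize(objective, dim=5, n_fireflies=20, iterations=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Basic Firefly Algorithm for minimisation; fireflies are ranked by
    brightness each iteration (the sorting step discussed in the text)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_fireflies, dim))
    intensity = np.array([objective(x) for x in X])      # lower value = brighter
    for _ in range(iterations):
        order = np.argsort(intensity)                     # ranking (built-in sort here)
        X, intensity = X[order], intensity[order]
        for i in range(n_fireflies):
            for j in range(i):                            # every j before i is brighter
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(0, 1, dim)
                intensity[i] = objective(X[i])
    best = int(np.argmin(intensity))
    return X[best], intensity[best]

print(firefly_minimize(lambda x: float(np.sum(x ** 2))))  # sphere benchmark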
Adams, Bradley J; Aschheim, Kenneth W
2016-01-01
Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3 which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
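A toy illustration of ranking antemortem records by the percentage of matching tooth codes follows; the record structure and condition codes are made up for the example and do not reproduce WinID3's coding scheme or the study's simplified seven-code format.

def match_percentage(postmortem, antemortem):
    """Fraction of shared teeth whose (hypothetical) condition codes agree."""
    shared = set(postmortem) & set(antemortem)
    if not shared:
        return 0.0
    return sum(postmortem[t] == antemortem[t] for t in shared) / len(shared)

def rank_candidates(postmortem, antemortem_db):
    """Return antemortem record IDs sorted by descending match percentage."""
    scores = {rec_id: match_percentage(postmortem, rec) for rec_id, rec in antemortem_db.items()}
    return sorted(scores, key=scores.get, reverse=True), scores

# toy records: tooth number -> simplified code (V=virgin, R=restored, M=missing, C=crown)
pm = {8: "R", 9: "V", 14: "M", 19: "C", 30: "R"}
db = {"AM-001": {8: "R", 9: "V", 14: "M", 19: "C", 30: "V"},
      "AM-002": {8: "V", 9: "V", 14: "V", 19: "V", 30: "V"}}
print(rank_candidates(pm, db))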
Optimization in optical systems revisited: Beyond genetic algorithms
NASA Astrophysics Data System (ADS)
Gagnon, Denis; Dumont, Joey; Dubé, Louis
2013-05-01
Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
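The following is a hedged sketch of tabu search started from several diversified initial solutions (run sequentially here, whereas PTS runs them in parallel); the binary design vector and toy objective stand in for the waveguide-based beam-shaping problem and are not the authors' formulation.

import random

def tabu_search(objective, n_bits, starts=4, iterations=200, tabu_tenure=7, seed=0):
    """Tabu search over bit-strings: the neighbourhood is single bit flips, and
    recently flipped bits are tabu. Several diversified starts stand in for the
    parallel search processes of PTS."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(starts):
        x = [rng.randint(0, 1) for _ in range(n_bits)]    # diversified initial solution
        tabu = {}
        for it in range(iterations):
            candidates = []
            for i in range(n_bits):
                if tabu.get(i, -1) >= it:                  # move is tabu
                    continue
                y = x[:]
                y[i] ^= 1
                candidates.append((objective(y), i, y))
            if not candidates:
                break
            val, i, y = min(candidates)                    # accept best non-tabu move
            x, tabu[i] = y, it + tabu_tenure
            if val < best_val:
                best_x, best_val = y, val
    return best_x, best_val

# toy objective: penalise adjacent equal bits (a stand-in for a real design cost)
print(tabu_search(lambda b: sum(b[i] == b[i + 1] for i in range(len(b) - 1)), n_bits=12))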
NASA Technical Reports Server (NTRS)
Lawson, Charles L.; Krogh, Fred; Van Snyder, W.; Oken, Carol A.; Mccreary, Faith A.; Lieske, Jay H.; Perrine, Jack; Coffin, Ralph S.; Wayne, Warren J.
1994-01-01
MATH77 is a high-quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for basic computational processes of science and engineering. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. The MATH77 release 4.0 subroutine library is designed to be usable on any computer system supporting the full ANSI standard FORTRAN 77 language.
Science of Land Target Spectral Signatures
2013-04-03
Keywords: signal processing, detection algorithms, sensor fusion, spectral signature modeling. Principal investigator: Dr. J. Michael Cathcart, Georgia Tech Research Corporation. The phenomenology research continued to focus on spectroscopic soil measurements, optical property analyses, field measurements, target detection, and sensor fusion.
NASA Technical Reports Server (NTRS)
Hlavka, Dennis L.; Palm, S. P.; Welton, E. J.; Hart, W. D.; Spinhirne, J. D.; McGill, M.; Mahesh, A.; Starr, David OC. (Technical Monitor)
2001-01-01
The Geoscience Laser Altimeter System (GLAS) is scheduled for launch on the ICESat satellite as part of the NASA EOS mission in 2002. GLAS will be used to perform high resolution surface altimetry and will also provide a continuously operating atmospheric lidar to profile clouds, aerosols, and the planetary boundary layer with horizontal and vertical resolution of 175 and 76.8 m, respectively. GLAS is the first active satellite atmospheric profiler to provide global coverage. Data products include direct measurements of the heights of aerosol and cloud layers, and the optical depth of transmissive layers. In this poster we provide an overview of the GLAS atmospheric data products, present a simulated GLAS data set, and show results from the simulated data set using the GLAS data processing algorithm. Optical results from the ER-2 Cloud Physics Lidar (CPL), which uses many of the same processing algorithms as GLAS, show algorithm performance with real atmospheric conditions during the Southern African Regional Science Initiative (SAFARI 2000).
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated into an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters) and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
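For reference, a common modified Hausdorff distance between two extracted crater (point) sets can be computed as below; this is the Dubuisson-Jain style variant and is offered only as an illustration, since the exact fitness function defined by the authors may differ.

import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance: the larger of the two mean nearest-neighbour
    distances between point sets A and B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))  # pairwise distances
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

craters_image1 = [(10.0, 12.0), (40.0, 42.0), (80.0, 15.0)]
craters_image2 = [(11.0, 13.0), (41.5, 40.0), (79.0, 16.0), (5.0, 90.0)]
print(modified_hausdorff(craters_image1, craters_image2))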
Herrera, Lara Maria; Fernandes, Clemente Maia da Silva; Serra, Mônica da Costa
2018-01-01
This study aimed to develop and to assess an algorithm to facilitate lip print visualization, and to digitally analyze lip prints on different supports, by superimposition. It also aimed to classify lip prints according to sex. A batch image processing algorithm was developed, which facilitated the identification and extraction of information about lip grooves. However, it performed better for lip print images with a uniform background. Paper and glass slab allowed more correct identifications than glass and both sides of compact disks. There was no significant difference among support types in the number of matching structures located in the middle area of the lower lip. There was no evidence of association between types of lip grooves and sex. Lip groove patterns of type III and type I were the most common for both sexes. The development of systems for lip print analysis is necessary, mainly concerning digital methods. © 2017 American Academy of Forensic Sciences.
Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.
Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf
2014-01-01
Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
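A minimal skeleton of the thresholding-and-labeling stage described above is sketched below on a synthetic image; it shows only seed detection, global Otsu thresholding, and connected-component labeling, not the paper's gradient/normal-direction transform or its parallelization, and the synthetic "nuclei" are an assumption for illustration.

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_nuclei(image, smoothing_sigma=2.0):
    """Smooth, threshold globally with Otsu's method, and label connected
    components; seed points are taken as local maxima of the smoothed image."""
    smoothed = ndimage.gaussian_filter(image.astype(float), smoothing_sigma)
    mask = smoothed > threshold_otsu(smoothed)
    labels, n_objects = ndimage.label(mask)
    seeds = (smoothed == ndimage.maximum_filter(smoothed, size=9)) & mask
    return labels, n_objects, np.argwhere(seeds)

# synthetic "stained nuclei": two bright Gaussian blobs on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (128, 128))
yy, xx = np.mgrid[0:128, 0:128]
for cy, cx in [(40, 40), (90, 80)]:
    img += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 6.0 ** 2))
labels, n, seeds = segment_nuclei(img)
print(n, seeds[:4])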
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.
2016-05-01
Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space. This three-term decomposition brings a detectability boost compared to the full-frame standard PCA approach, especially in the small inner working angle region where complex speckle noise prevents PCA from discerning true companions from noise.
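A crude, global (not localized, patch-wise) illustration of splitting an image sequence into low-rank, sparse, and noise terms is given below, using a randomized low-rank approximation and entry-wise thresholding; the synthetic cube, rank, and threshold are assumptions, and this sketch is not the authors' LLSG implementation.

import numpy as np

def low_rank_sparse_split(cube, rank=3, sparse_threshold=2.5, seed=0):
    """Split a (frames x pixels) matrix into low-rank (L), sparse (S) and
    residual noise (G) terms: a simplified, global stand-in for LLSG."""
    rng = np.random.default_rng(seed)
    M = cube.reshape(cube.shape[0], -1)
    # randomized low-rank approximation: project onto a small random subspace
    Q, _ = np.linalg.qr(M @ rng.normal(size=(M.shape[1], rank)))
    L = Q @ (Q.T @ M)
    residual = M - L
    sigma = residual.std()
    S = np.where(np.abs(residual) > sparse_threshold * sigma, residual, 0.0)  # entry-wise thresholding
    G = residual - S                                   # what is left is treated as noise
    return L.reshape(cube.shape), S.reshape(cube.shape), G.reshape(cube.shape)

# toy ADI-like cube: 30 frames of 20x20 pixels, a fixed speckle pattern plus a moving "planet"
rng = np.random.default_rng(1)
speckles = rng.normal(0, 1, (20, 20))
cube = np.stack([5.0 * speckles + rng.normal(0, 1, (20, 20)) for _ in range(30)])
for i in range(30):
    cube[i, 5, 4 + i % 12] += 6.0                      # companion moves frame to frame
L, S, G = low_rank_sparse_split(cube)
print(S.shape, float(np.count_nonzero(S)) / S.size)    # planetary signal lands mostly in S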
The Apache OODT Project: An Introduction
NASA Astrophysics Data System (ADS)
Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.
2012-12-01
Apache OODT is a science data system framework, borne over the past decade, with 100s of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers everyday to their success. At its core, Apache OODT carries with it two fundamental classes of software services and components: those that deal with information integration from existing science data repositories and archives, that themselves have already-in-use business processes and models for populating those archives. Information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access, and retrieval. The other suite of services and components within Apache OODT handle population and processing of those data repositories and archives. Workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration all are part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.
Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms
NASA Astrophysics Data System (ADS)
Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel
2016-04-01
Advances in graphics processing units' technology towards encompassing parallel architectures [1], comprised of thousands of cores and multiples of parallel threads, provide the foundation in terms of hardware for the rapid processing of various parallel applications regarding seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are being established and decades of data are being compiled together [2]. Yet, many processes regarding seismic data analysis are performed on each seismic event independently or as distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another can be performed in parallel narrowing down processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using Cuda C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel comparatively, are the: fuzzy k-means clustering with expert knowledge [7] in assigning overall clusters' number; density-based clustering [8]; and a selves-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, Cuda C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering References [1] Kirk, D. and Hwu, W.: 'Programming massively parallel processors - A hands-on approach', 2nd Edition, Morgan Kaufman Publisher, 2013 [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [3] Papadakis, S. and Diamantaras, K.: 'Programming and architecture of parallel processing systems', 1st Edition, Eds. Kleidarithmos, 2011 [4] NVIDIA.: 'NVidia CUDA C Programming Guide', version 5.0, NVidia (reference book) [5] Konstantaras, A.: 'Classification of Distinct Seismic Regions and Regional Temporal Modelling of Seismicity in the Vicinity of the Hellenic Seismic Arc', IEEE Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6 (4), pp. 1857-1863, 2013 [6] Konstantaras, A. Varley, M.R.,. Valianatos, F., Collins, G. and Holifield, P.: 'Recognition of electric earthquake precursors using neuro-fuzzy models: methodology and simulation results', Proc. IASTED International Conference on Signal Processing Pattern Recognition and Applications (SPPRA 2002), Crete, Greece, 2002, pp 303-308, 2002 [7] Konstantaras, A., Katsifarakis, E., Maravelakis, E., Skounakis, E., Kokkinos, E. and Karapidakis, E.: 'Intelligent Spatial-Clustering of Seismicity in the Vicinity of the Hellenic Seismic Arc', Earth Science Research, vol. 1 (2), pp. 1-10, 2012 [8] Georgoulas, G., Konstantaras, A., Katsifarakis, E., Stylios, C.D., Maravelakis, E. and Vachtsevanos, G.: '"Seismic-Mass" Density-based Algorithm for Spatio-Temporal Clustering', Expert Systems with Applications, vol. 40 (10), pp. 4183-4189, 2013 [9] Konstantaras, A. J.: 'Expert knowledge-based algorithm for the dynamic discrimination of interactive natural clusters', Earth Science Informatics, 2015 (In Press, see: www.scopus.com) [10] Drakatos, G. 
and Latoussakis, J.: 'A catalog of aftershock sequences in Greece (1971-1997): Their spatial and temporal characteristics', Journal of Seismology, vol. 5, pp. 137-145, 2001
2016-12-01
Evaluated Genetic Algorithm prepared by Justin L Paul Academy of Applied Science 24 Warren Street Concord, NH 03301 under contract W911SR...Supersonic Bending Body Projectile by a Vector-Evaluated Genetic Algorithm prepared by Justin L Paul Academy of Applied Science 24 Warren Street... Genetic Algorithm 5a. CONTRACT NUMBER W199SR-15-2-001 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Justin L Paul 5d. PROJECT
Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School
ERIC Educational Resources Information Center
Avancena, Aimee Theresa; Nishihara, Akinori
2014-01-01
Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…
Systems aspects of COBE science data compression
NASA Technical Reports Server (NTRS)
Freedman, I.; Boggess, E.; Seiler, E.
1993-01-01
A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor and the retrieval of data significantly affects the processing, delaying the availability of data for scientific usage and software test. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.
Suomi NPP VIIRS active fire product status
NASA Astrophysics Data System (ADS)
Ellicott, E. A.; Csiszar, I. A.; Schroeder, W.; Giglio, L.; Wind, B.; Justice, C. O.
2012-12-01
We provide an overview of the evaluation and development of the Active Fires product derived from the Visible Infrared Imager Radiometer Suite (VIIRS) sensor on the Suomi National Polar-orbiting Partnership (SNPP) satellite during the first year of on-orbit data. Results from the initial evaluation of the standard SNPP Active Fires product, generated by the SNPP Interface Data Processing System (IDPS), supported the stabilization of the VIIRS Sensor Data Record (SDR) product. This activity focused in particular on the processing of the dual-gain 4 micron VIIRS M13 radiometric measurements into 750m aggregated data, which are fundamental for active fire detection. Following the VIIRS SDR product's Beta maturity status in April 2012, correlative analysis between VIIRS and near-simultaneous fire detections from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the NASA Earth Observing System Aqua satellite confirmed the expected relative detection rates driven primarily by sensor differences. The VIIRS Active Fires Product Development and Validation Team also developed a science code that is based on the latest MODIS Collection 6 algorithm and provides a full spatially explicit fire mask to replace the sparse array output of fire locations from a MODIS Collection 4 equivalent algorithm in the current IDPS product. The Algorithm Development Library (ADL) was used to support the planning for the transition of the science code into IDPS operations in the future. Product evaluation and user outreach were facilitated by a product website that provided end user access to fire data in a user-friendly format over North America as well as examples of VIIRS-MODIS comparisons. The VIIRS fire team also developed an experimental product based on 375m VIIRS Imagery band measurements and provided high quality imagery of major fire events in the US. By August 2012 the IDPS product achieved Beta maturity, with some known and documented shortfalls related to the processing of incorrect SDR input data and to apparent algorithm deficiencies in select observing and environmental conditions.
Finding Intervals of Abrupt Change in Earth Science Data
NASA Astrophysics Data System (ADS)
Zhou, X.; Shekhar, S.; Liess, S.
2011-12-01
In earth science data (e.g., climate data), it is often observed that a persistently abrupt change in value occurs in a certain time-period or spatial interval. For example, abrupt climate change is defined as an unusually large shift of precipitation, temperature, etc, that occurs during a relatively short time period. A similar pattern can also be found in geographical space, representing a sharp transition of the environment (e.g., vegetation between different ecological zones). Identifying such intervals of change from earth science datasets is a crucial step for understanding and attributing the underlying phenomenon. However, inconsistencies in these noisy datasets can obstruct the major change trend, and more importantly can complicate the search of the beginning and end points of the interval of change. Also, the large volume of data makes it challenging to process the dataset reasonably fast. In this work, we analyze earth science data using a novel, automated data mining approach to identify spatial/temporal intervals of persistent, abrupt change. We first propose a statistical model to quantitatively evaluate the change abruptness and persistence in an interval. Then we design an algorithm to exhaustively examine all the intervals using this model. Intervals passing a threshold test will be kept as final results. We evaluate the proposed method with the Climatic Research Unit (CRU) precipitation data, whereby we focus on the Sahel rainfall index. Results show that this method can find periods of persistent and abrupt value changes with different temporal scales. We also further optimize the algorithm using a smart strategy, which always examines longer intervals before their subsets. By doing this, we reduce the computational cost to only one third of that of the original algorithm for the above test case. More significantly, the optimized algorithm is also proven to scale up well with data volume and number of changes. Particularly, it achieves better performance when dealing with longer change intervals.
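A hedged sketch of the general idea, exhaustively scoring candidate change intervals with a simple mean-shift statistic and keeping those that pass a threshold, is shown below; the statistic, thresholds, and synthetic "Sahel-like" series are illustrative and the authors' statistical model and longer-interval-first optimization are only noted in comments.

import numpy as np

def abrupt_change_intervals(series, min_len=3, max_len=15, threshold=3.0):
    """Score every candidate interval by how strongly the mean shifts from the
    data before it to the data after it, normalised by the pooled standard
    deviation; keep intervals passing the threshold."""
    x = np.asarray(series, float)
    found = []
    # (the paper's optimisation examines longer intervals before their subsets;
    #  here we simply scan all lengths)
    for length in range(max_len, min_len - 1, -1):
        for i in range(1, len(x) - length):
            before, after = x[:i], x[i + length:]
            pooled = np.sqrt((before.var() + after.var()) / 2) + 1e-9
            score = abs(after.mean() - before.mean()) / pooled
            if score > threshold:
                found.append((i, i + length, round(float(score), 2)))
    return found

# synthetic rainfall-like index: wet regime, short transition, dry regime
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(1.0, 0.3, 40), np.linspace(1.0, -1.0, 6), rng.normal(-1.0, 0.3, 40)])
print(abrupt_change_intervals(series)[:5])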
BioImageXD: an open, general-purpose and high-throughput image-processing platform.
Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J
2012-06-28
BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2004-01-01
In 2001, NASA Goddard Space Flight Center's Laboratory for Terrestrial Physics started the construction of a Science Investigator-led Processing System (SIPS) for processing data from the Ozone Monitoring Instrument (OMI) which will launch on the Aura platform in mid 2004. The Ozone Monitoring Instrument (OMI) is a contribution of the Netherlands Agency for Aerospace Programs (NIVR) in collaboration with the Finnish Meteorological Institute (FMI) to the Earth Observing System (EOS) Aura mission. It will continue the Total Ozone Mapping Spectrometer (TOMS) record for total ozone and other atmospheric parameters related to ozone chemistry and climate. OMI measurements will be highly synergistic with the other instruments on the EOS Aura platform. The LTP previously developed the Moderate Resolution Imaging Spectroradiometer (MODIS) Data Processing System (MODAPS), which has been in full operations since the launches of the Terra and Aqua spacecraft in December, 1999 and May, 2002 respectively. During that time, it has continually evolved to better support the needs of the MODIS team. We now run multiple instances of the system managing faster than real time reprocessings of the data as well as continuing forward processing. The new OMI Data Processing System (OMIDAPS) was adapted from the MODAPS. It will ingest raw data from the satellite ground station and process it to produce calibrated, geolocated higher level data products. These data products will be transmitted to the Goddard Distributed Active Archive Center (GDAAC) instance of the Earth Observing System (EOS) Data and Information System (EOSDIS) for long term archive and distribution to the public. The OMIDAPS will also provide data distribution to the OMI Science Team for quality assessment, algorithm improvement, calibration, etc. We have taken advantage of lessons learned from the MODIS experience and software already developed for MODIS. We made some changes in the hardware system organization, database and software to adapt the system for OMI. We replaced the fundamental database system, Sybase, with an Open Source RDBMS called PostgreSQL, and based the entire OMIDAPS on a cluster of Linux based commodity computers rather than the large SGI servers that MODAPS uses. Rather than relying on a central I/O server host, the new system distributes its data archive among multiple server hosts in the cluster. OMI is also customizing the graphical user interfaces and reporting structure to more closely meet the needs of the OMI Science Team. Prior to 2003, simulated OMI data and the science algorithms were not ready for production testing. We initially constructed a prototype system and tested using a 25 year dataset of Total Ozone Mapping Spectrometer (TOMS) and Solar Backscatter Ultraviolet Instrument (SBUV) data. This prototype system provided a platform to support the adaptation of the algorithms for OMI, and provided reprocessing of the historical data aiding in its analysis. In a recent reanalysis of the TOMS data, the OMIDAPS processed 108,000 full orbits of data through 4 processing steps per orbit, producing about 800,000 files (400 GiB) of level 2 and greater data files. More recently we have installed two instances of the OMIDAPS for integration and testing of OMI science processes as they get delivered from the Science Team. A Test instance of the OMIDAPS has also supported a series of "Interface Confidence Tests" (ICTs) and End-to-End Ground System tests to ensure the launch readiness of the system.
This paper will discuss the high-level hardware, software, and database organization of the OMIDAPS and how it builds on the MODAPS heritage system. It will also provide an overview of the testing and implementation of the production OMIDAPS.
Accessing eSDO Solar Image Processing and Visualization through AstroGrid
NASA Astrophysics Data System (ADS)
Auden, E.; Dalla, S.
2008-08-01
The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.
NASA Technical Reports Server (NTRS)
Chien, Steve A.; McLaren, David A.; Rabideau, Gregg R.; Mandl, Daniel; Hengemihle, Jerry
2013-01-01
A set of automated planning algorithms is the current operations baseline approach for the Intelligent Payload Module (IPM) of the proposed Hyperspectral Infrared Imager (HyspIRI) mission. For this operations concept, there are only local (e.g. non-depletable) operations constraints, such as real-time downlink and onboard memory, and the forward sweeping algorithm is optimal for determining which science products should be generated onboard and on ground based on geographical overflights, science priorities, alerts, requests, and onboard and ground processing constraints. This automated planning approach was developed for the HyspIRI IPM concept. The HyspIRI IPM is proposed to use an X-band Direct Broadcast (DB) capability that would enable data to be delivered to ground stations virtually as it is acquired. However, the HyspIRI VSWIR and TIR instruments will produce approximately 1 Gbps of data, while the DB capability is 15 Mbps, for an approximately 60X oversubscription. In order to address this mismatch, this innovation determines which data to downlink based on both the type of surface the spacecraft is overflying, and the onboard processing of data to detect events. For example, when the spacecraft is overflying Polar Regions, it might downlink a snow/ice product. Additionally, the onboard software will search for thermal signatures indicative of a volcanic event or wildfire and downlink summary information (extent, spectra) when detected, thereby reducing data volume. The planning system described above automatically generated the IPM mission plan based on requested products, the overflight regions, and available resources.
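A toy illustration of prioritized product selection under a direct-broadcast bandwidth budget (the roughly 60X oversubscription noted above) is sketched below; the greedy priority-density rule, product names, and sizes are assumptions and the forward-sweeping planner itself is not reproduced.

def select_products(candidates, downlink_budget_bits):
    """Greedy selection: sort candidate products by priority density
    (priority per bit) and keep them while the downlink budget allows."""
    chosen, used = [], 0
    for name, priority, size_bits in sorted(candidates,
                                            key=lambda c: c[1] / c[2], reverse=True):
        if used + size_bits <= downlink_budget_bits:
            chosen.append(name)
            used += size_bits
    return chosen, used

# hypothetical products for one overflight: (name, priority, size in bits)
candidates = [("volcanic_thermal_summary", 10, 2e6),
              ("wildfire_alert", 9, 1e6),
              ("snow_ice_product", 5, 50e6),
              ("full_vswir_scene", 3, 900e6)]
print(select_products(candidates, downlink_budget_bits=60e6))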
Fast Algorithms for Estimating Mixture Parameters
1989-08-30
The investigation is a two-year project with the first year sponsored by the Army Research Office and the second year by the National Science Foundation (Grant …). Numerical testing of the accelerated fixed-point method was completed. The work on relaxation methods will be done under the sponsorship of the National Science Foundation during the coming year. Keywords: fast algorithms; mixture distributions; random variables.
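Since the report's accelerated fixed-point and relaxation schemes are not described in the abstract, the sketch below shows only the standard EM fixed-point iteration for a two-component Gaussian mixture that such accelerations build on; the data and component count are illustrative.

import numpy as np

def em_gaussian_mixture(x, iterations=200):
    """Standard EM fixed-point iteration for a two-component 1-D Gaussian mixture."""
    x = np.asarray(x, float)
    w, mu, sigma = np.array([0.5, 0.5]), np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
    for _ in range(iterations):
        # E-step: responsibilities of each component for each point
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: the fixed-point update of weights, means and variances
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sigma

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(em_gaussian_mixture(data))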
Onboard Radar Processing Development for Rapid Response Applications
NASA Technical Reports Server (NTRS)
Lou, Yunling; Chien, Steve; Clark, Duane; Doubleday, Josh; Muellerschoen, Ron; Wang, Charles C.
2011-01-01
We are developing onboard processor (OBP) technology to streamline data acquisition on-demand and explore the potential of the L-band SAR instrument onboard the proposed DESDynI mission and UAVSAR for rapid response applications. The technology would enable the observation and use of surface change data over rapidly evolving natural hazards, both as an aid to scientific understanding and to provide timely data to agencies responsible for the management and mitigation of natural disasters. We are adapting complex science algorithms for surface water extent to detect flooding, snow/water/ice classification to assist in transportation/ shipping forecasts, and repeat-pass change detection to detect disturbances. We are near completion of the development of a custom FPGA board to meet the specific memory and processing needs of L-band SAR processor algorithms and high speed interfaces to reformat and route raw radar data to/from the FPGA processor board. We have also developed a high fidelity Matlab model of the SAR processor that is modularized and parameterized for ease to prototype various SAR processor algorithms targeted for the FPGA. We will be testing the OBP and rapid response algorithms with UAVSAR data to determine the fidelity of the products.
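As a hedged illustration of the change-detection class of products mentioned above, here is a toy log-ratio repeat-pass change detector on synthetic amplitude images; the threshold, data, and flooding scenario are assumptions, and the project's FPGA implementation and UAVSAR-specific processing are not shown.

import numpy as np

def repeat_pass_change_mask(amplitude_pass1, amplitude_pass2, threshold_db=3.0):
    """Flag pixels whose backscatter changed by more than threshold_db between
    two radar passes, using the log-ratio of the amplitude images."""
    ratio_db = 20.0 * np.log10((amplitude_pass2 + 1e-6) / (amplitude_pass1 + 1e-6))
    return np.abs(ratio_db) > threshold_db

# synthetic passes: a patch of new flooding lowers backscatter in pass 2
rng = np.random.default_rng(0)
pass1 = rng.gamma(4.0, 0.25, (100, 100))
pass2 = pass1 * rng.normal(1.0, 0.05, (100, 100))
pass2[40:60, 40:60] *= 0.3                       # flooded area appears dark
print(repeat_pass_change_mask(pass1, pass2).sum())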
Near-line Archive Data Mining at the Goddard Distributed Active Archive Center
NASA Astrophysics Data System (ADS)
Pham, L.; Mack, R.; Eng, E.; Lynnes, C.
2002-12-01
NASA's Earth Observing System (EOS) is generating immense volumes of data, in some cases too much to provide to users with data-intensive needs. As an alternative to moving the data to the user and his/her research algorithms, we are providing a means to move the algorithms to the data. The Near-line Archive Data Mining (NADM) system is the Goddard Earth Sciences Distributed Active Archive Center's (GES DAAC) web data mining portal to the EOS Data and Information System (EOSDIS) data pool, a 50-TB online disk cache. The NADM web portal enables registered users to submit and execute data mining algorithm codes on the data in the EOSDIS data pool. A web interface allows the user to access the NADM system. Users first develop personalized data mining code on their home platforms and then upload it to the NADM system. The C, FORTRAN and IDL languages are currently supported. The user-developed code is automatically audited for any potential security problems before it is installed within the NADM system and made available to the user. Once the code has been installed, the user is provided with a test environment where he/she can test the execution of the software against data sets of the user's choosing. When the user is satisfied with the results, he/she can promote the code to the "operational" environment. From there the user can interactively run his/her code on the data available in the EOSDIS data pool. The user can also set up a processing subscription, which will automatically process new data as they become available in the EOSDIS data pool. The generated mined data products are then made available for FTP pickup. The NADM system uses the GES DAAC-developed Simple Scalable Script-based Science Processor (S4P) to automate tasks and perform the actual data processing. Users will also have the option of selecting a DAAC-provided data mining algorithm and using it to process the data of their choice.
ERIC Educational Resources Information Center
Sayre, Scott Alan
The ultimate goal of the science of artificial intelligence (AI) is to establish programs that will use algorithmic computer techniques to imitate the heuristic thought processes of humans. Most AI programs, especially expert systems, organize their knowledge into three specific areas: data storage, a rule set, and a control structure. Limitations…
NASA Technical Reports Server (NTRS)
Palm, Stephen P.; Hlavka, Dennis; Hart, Bill; Welton, E. Judd; Spinhirne, James
2000-01-01
The Geoscience Laser Altimeter System (GLAS) will be placed into orbit in 2001 aboard the Ice, Cloud and Land Elevation Satellite (ICESat). From its nearly polar orbit (94 degree inclination), GLAS will provide continuous global measurements of the vertical distribution of clouds and aerosols while simultaneously providing high accuracy topographic profiling of surface features. During the mission, which is slated to last 3 to 5 years, the data collected by GLAS will be processed in near-real time to produce level 1 and 2 data products at the NASA GLAS Science Computing Facility (SCF) at Goddard Space Flight Center in Greenbelt, Maryland. The atmospheric products include cloud and aerosol layer heights, planetary boundary layer depth, polar stratospheric clouds, and thin cloud and aerosol optical depth. These products will be made available to the science community within days of their creation. The processing algorithms must be robust, adaptive, efficient, and clever enough to run autonomously for the widely varying atmospheric conditions that will be encountered. This paper presents an overview of the GLAS atmospheric data products and briefly discusses the design of the processing algorithms.
The DataBridge: A System For Optimizing The Use Of Dark Data From The Long Tail Of Science
NASA Astrophysics Data System (ADS)
Lander, H.; Rajasekar, A.
2015-12-01
The DataBridge is a National Science Foundation funded collaborative project (OCI-1247652, OCI-1247602, OCI-1247663) designed to assist in the discovery of dark data sets from the long tail of science. The DataBridge aims to build queryable communities of datasets using sociometric network analysis. This approach is being tested to evaluate the ability to leverage various forms of metadata to facilitate discovery of new knowledge. Each dataset in the DataBridge has an associated name space used as a first level partitioning. In addition to testing known algorithms for SNA community building, the DataBridge project has built a message-based platform that allows users to provide their own algorithms for each of the stages in the community building process. The stages are: Signature Generation (SG): An SG algorithm creates a metadata signature for a dataset. Signature algorithms might use text metadata provided by the dataset creator or derive metadata. Relevance Algorithm (RA): An RA compares a pair of datasets and produces a similarity value between 0 and 1 for the two datasets. Sociometric Network Analysis (SNA): The SNA will operate on a similarity matrix produced by an RA to partition all of the datasets in the name space into a set of clusters. These clusters represent communities of closely related datasets. The DataBridge also includes a web application that produces a visual representation of the clustering. Future work includes a more complete application that will allow different types of searching of the network of datasets. The DataBridge approach is relevant to geoscience research and informatics. In this presentation we will outline the project, illustrate the deployment of the approach, and discuss other potential applications and next steps for the research, such as applying this approach to models. In addition we will explore the relevance of DataBridge to other geoscience projects such as various EarthCube Building Blocks and DIBBS projects.
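As a rough illustration of the three pluggable stages (SG, RA, SNA), here is a minimal sketch assuming TF-IDF text signatures, cosine similarity as the relevance algorithm, and spectral clustering as the community-building step; the dataset descriptions and cluster count are invented, and the project's actual pluggable algorithms may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

datasets = {                       # hypothetical dataset metadata
    "ds1": "arctic sea ice extent daily grids",
    "ds2": "sea ice concentration passive microwave",
    "ds3": "soil moisture field campaign measurements",
    "ds4": "in situ soil temperature and moisture probes",
}

# Signature Generation (SG): one metadata signature per dataset.
names = list(datasets)
signatures = TfidfVectorizer().fit_transform(datasets[n] for n in names)

# Relevance Algorithm (RA): pairwise similarity values in [0, 1].
similarity = cosine_similarity(signatures)

# Sociometric Network Analysis (SNA): partition into communities.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(similarity)
print(dict(zip(names, labels)))   # e.g. {'ds1': 0, 'ds2': 0, 'ds3': 1, 'ds4': 1}
```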
Hong, Felix T
2013-09-01
Rosen classified sciences into two categories: formalizable and unformalizable. Whereas formalizable sciences expressed in terms of mathematical theories were highly valued by Rutherford, Hutchins pointed out that unformalizable parts of soft sciences are of genuine interest and importance. Attempts to build mathematical theories for biology in the past century were met with modest and sporadic successes, and only in simple systems. In this article, a qualitative model of humans' high creativity is presented as a starting point to consider whether the gap between soft and hard sciences is bridgeable. Simonton's chance-configuration theory, which mimics the process of evolution, was modified and improved. By treating problem solving as a process of pattern recognition, the known dichotomy of visual thinking vs. verbal thinking can be recast in terms of analog pattern recognition (a non-algorithmic process) and digital pattern recognition (an algorithmic process), respectively. Additional concepts commonly encountered in computer science, operations research, and artificial intelligence were also invoked: heuristic searching, parallel and sequential processing. The refurbished chance-configuration model is now capable of explaining several long-standing puzzles in human cognition: a) why novel discoveries often came without prior warning, b) why some creators had no idea about the source of inspiration even after the fact, c) why some creators were consistently luckier than others, and, last but not least, d) why it was so difficult to explain what intuition, inspiration, insight, hunch, serendipity, etc. are all about. The predictive power of the present model was tested by means of resolving Zeno's paradox of Achilles and the Tortoise after one deliberately invoked visual thinking. Additional evidence of its predictive power must await future large-scale field studies. The analysis was further generalized to the construction of scientific theories in general. This approach is in line with Campbell's evolutionary epistemology. Instead of treating science as immutable Natural Laws, which already existed and were just waiting to be discovered, scientific theories are regarded as humans' mental constructs, which must be invented to reconcile with observed natural phenomena. In this way, the pursuit of science shifts from diligent and systematic (or random) searching for existing Natural Laws to firing up humans' imagination to comprehend Nature's behavioral patterns. The insights gained in understanding human creativity indicate that new mathematics capable of effectively handling parallel processing and human subjectivity is sorely needed. The past classification of formalizability vs. non-formalizability was made in reference to contemporary mathematics. Rosen's conclusion did not preclude future inventions of new, biology-friendly mathematics.
NASA Technical Reports Server (NTRS)
Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor);
1998-01-01
Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE), are presented in this volume. Chapter 1 describes the Ocean Primary Productivity Working Group's (OPPWG) development of a science plan for deriving primary production for the world ocean using satellite measurements. Chapter 2 presents discussions by the same group of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.
Autonomous Image Analysis for Future Mars Missions
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.
1999-01-01
To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks, in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent onboard decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images acquired by a robotic arm camera. This would then allow the return of a single, completely in-focus image constructed only from those portions of individual images that lie within the camera's depth of field. Output from these algorithms could be used to autonomously obtain rock spectra, determine which images should be transmitted to the ground, or to aid in image compression. We will discuss these algorithms and their performance during a recent rover field test.
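To make the "preferentially transmit interesting images" idea concrete, here is a toy sketch that scores images by gradient (edge) density as a crude proxy for science content; this scoring rule is an assumption for illustration, not the rock/layer/horizon detectors the abstract describes.

```python
import numpy as np

def interestingness(image):
    """Toy stand-in for onboard image prioritization: score a grayscale
    image by its gradient (edge) density, on the idea that rocks and
    layering produce more structure than bland terrain."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def select_for_downlink(images, budget):
    """Return indices of the top-scoring images that fit the downlink budget."""
    order = sorted(range(len(images)),
                   key=lambda i: interestingness(images[i]), reverse=True)
    return order[:budget]

rng = np.random.default_rng(0)
flat = rng.normal(100, 1, (64, 64))                 # bland patch
rocky = flat + 40 * (rng.random((64, 64)) > 0.97)   # patch with bright clasts
print(select_for_downlink([flat, rocky], budget=1))  # -> [1]
```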
New Trends in E-Science: Machine Learning and Knowledge Discovery in Databases
NASA Astrophysics Data System (ADS)
Brescia, Massimo
2012-11-01
Data mining, or Knowledge Discovery in Databases (KDD), while being the main methodology to extract the scientific information contained in Massive Data Sets (MDS), needs to tackle crucial problems, since it has to orchestrate complex challenges posed by transparent access to different computing environments, scalability of algorithms, and reusability of resources. To achieve a leap forward for the progress of e-science in the data avalanche era, the community needs to implement an infrastructure capable of performing data access, processing and mining in a distributed but integrated context. The increasing complexity of modern technologies has led to a huge production of data; the related warehouse management and the need to optimize analysis and mining procedures have led to a change in how modern science is conceived. Classical data exploration, based on a user's own local data storage and limited computing infrastructure, is no longer efficient in the case of MDS spread worldwide over inhomogeneous data centres and requiring teraflop processing power. In this context, modern experimental and observational science requires a good understanding of computer science, network infrastructures, Data Mining, etc., i.e. of all those techniques which fall into the domain of the so-called e-science (recently assessed also by the Fourth Paradigm of Science). Such understanding is almost completely absent in the older generations of scientists, and this is reflected in the inadequacy of most academic and research programs. A paradigm shift is needed: statistical pattern recognition, object oriented programming, distributed computing, and parallel programming need to become an essential part of the scientific background. A possible practical solution is to provide the research community with easy-to-understand, easy-to-use tools, based on Web 2.0 technologies and Machine Learning methodology; tools where almost all the complexity is hidden from the final user, but which are still flexible and able to produce efficient and reliable scientific results. All these considerations are described in detail in the chapter. Moreover, examples of modern applications offering a wide variety of e-science communities a large spectrum of computational facilities to exploit the wealth of available massive data sets and powerful machine learning and statistical algorithms will also be introduced.
Research on application of intelligent computation based LUCC model in urbanization process
NASA Astrophysics Data System (ADS)
Chen, Zemin
2007-06-01
Global change study is an interdisciplinary and comprehensive research activity, arising in the 1980s with broad international cooperation. The interaction between land use and cover change (LUCC), a research field at the crossing of natural and social science, has become one of the core subjects of global change study as well as its front edge and focus. It is necessary to study land use and cover change in the urbanization process and to build an analog model of urbanization that can describe, simulate, and analyze the dynamic behaviors of urban development and reveal the basic characteristics and rules of the urbanization process. This has practical and theoretical significance for formulating urban and regional sustainable development strategies. The effect of urbanization on land use and cover change is mainly embodied in changes in the quantity structure and spatial structure of urban space, and the LUCC model of the urbanization process has become an important research subject of urban geography and urban planning. In this paper, building on previous research, the author systematically analyzes research on land use/cover change in the urbanization process using the theories of complexity science and intelligent computation; builds a model for simulating and forecasting the dynamic evolution of urban land use and cover change on the basis of the cellular automata model and multi-agent theory; and extends the Markov model, the traditional CA model, and the Agent model, introducing complexity science and intelligent computation theory into LUCC modeling, with case studies. The concrete contents are as follows: 1. Complexity of LUCC research in the urbanization process. The urbanization process is analyzed with the concepts of complexity science to reveal the complexity features of LUCC research. The urban space system is a complex economic and cultural phenomenon as well as a social process; it is the comprehensive characterization of urban society, economy, and culture, and a complex spatial system formed by society, economy, and nature. It has dissipative-structure characteristics such as openness, dynamics, self-organization, and non-equilibrium. Traditional models cannot simulate the social, economic, and natural driving forces of LUCC, including the main feedback from LUCC to those driving forces. 2. Establishment of an extended Markov model for LUCC simulation in the urbanization process. Traditional LUCC models are first used to compute the rate of regional land use change through the dynamic degree, exploitation degree, and consumption degree of land use; fuzzy set theory is then used to rewrite the traditional Markov model, establish the structure transfer matrix of land use, forecast and analyze the dynamic change and development trend of land use, and identify notable problems and corresponding measures in the urbanization process. 3. Application of intelligent computation and complexity science methods to LUCC simulation in the urbanization process.
On the basis of a detailed elaboration of LUCC theory and models in the urbanization process, the problems of existing LUCC models are analyzed (namely, the difficulty of resolving the many complexity phenomena of a complex urban space system), and possible structures for LUCC simulation are discussed in combination with intelligent computation and complexity science. BP artificial neural networks and genetic algorithms from intelligent computation, and the CA model and MAS technology from complexity science, are analyzed in detail: their theoretical origins and characteristics are discussed, their feasibility for LUCC simulation is elaborated, and improvements are proposed for the existing problems of such models. 4. Establishment of an LUCC simulation model of the urbanization process based on intelligent computation and complexity science. Building on the above work, improvements and geographic extensions of BP neural networks, genetic algorithms, the CA model, and multi-agent technology are put forward; an LUCC simulation model based on the CA and Agent models is built; the learning mechanism of BP neural networks is combined with fuzzy logic reasoning, so that rules are expressed with explicit formulas and the initial rules are amended through self-learning; and the network structure of the LUCC model and its parameters are optimized with genetic algorithms. In this paper, complexity science theory and methods are introduced into LUCC simulation, an LUCC simulation model based on the CA model and MAS theory is presented, the traditional Markov model is correspondingly extended, and fuzzy set theory is introduced into data screening and parameter amendment of the improved model to improve the accuracy and feasibility of the Markov model for research on land use/cover change.
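As a concrete anchor for the CA component discussed above, the following is a minimal cellular-automaton land-use sketch; the neighborhood rule, threshold, and conversion probability are hypothetical, and the paper's model additionally couples agents, neural-network rule learning, and a fuzzy Markov transfer matrix, none of which is shown.

```python
import numpy as np

URBAN, RURAL = 1, 0

def step(grid, rng, threshold=3, p_convert=0.5):
    """One CA update: a rural cell converts to urban with probability
    p_convert when at least `threshold` Moore neighbors are urban."""
    new = grid.copy()
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))          # urban-neighbor count
    convert = (grid == RURAL) & (n >= threshold) \
              & (rng.random(grid.shape) < p_convert)
    new[convert] = URBAN
    return new

rng = np.random.default_rng(1)
grid = (rng.random((50, 50)) < 0.05).astype(int)   # sparse initial urban seeds
for _ in range(20):
    grid = step(grid, rng)
print("urban fraction:", grid.mean())
```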
Supervised detection of exoplanets in high-contrast imaging sequences
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, C. A.; Absil, O.; Van Droogenbroeck, M.
2018-06-01
Context. Post-processing algorithms play a key role in pushing the detection limits of high-contrast imaging (HCI) instruments. State-of-the-art image processing approaches for HCI enable the production of science-ready images relying on unsupervised learning techniques, such as low-rank approximations, for generating a model point spread function (PSF) and subtracting the residual starlight and speckle noise. Aims: In order to maximize the detection rate of HCI instruments and survey campaigns, advanced algorithms with higher sensitivities to faint companions are needed, especially for the speckle-dominated innermost region of the images. Methods: We propose a reformulation of the exoplanet detection task (for ADI sequences) that builds on well-established machine learning techniques to take HCI post-processing from an unsupervised to a supervised learning context. In this new framework, we present algorithmic solutions using two different discriminative models: SODIRF (random forests) and SODINN (neural networks). We test these algorithms on real ADI datasets from VLT/NACO and VLT/SPHERE HCI instruments. We then assess their performances by injecting fake companions and using receiver operating characteristic analysis. This is done in comparison with state-of-the-art ADI algorithms, such as ADI principal component analysis (ADI-PCA). Results: This study shows the improved sensitivity versus specificity trade-off of the proposed supervised detection approach. At the diffraction limit, SODINN improves the true positive rate by a factor ranging from 2 to 10 (depending on the dataset and angular separation) with respect to ADI-PCA when working at the same false-positive level. Conclusions: The proposed supervised detection framework outperforms state-of-the-art techniques in the task of discriminating planet signal from speckles. In addition, it offers the possibility of re-processing existing HCI databases to maximize their scientific return and potentially improve the demographics of directly imaged exoplanets.
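The supervised reformulation can be illustrated schematically: build labeled samples (speckle-only vs. companion-injected) and train a discriminative model. The sketch below uses synthetic feature vectors and a random forest in the spirit of SODIRF; it is not the published implementation, and the patch construction is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 2000, 64                      # samples and flattened-patch dimension
speckle = rng.normal(0, 1.0, (n, d))             # residual speckle noise
companion = rng.normal(0, 1.0, (n, d))
companion[:, 28:36] += 0.8                        # faint point-source bump

X = np.vstack([speckle, companion])
y = np.r_[np.zeros(n), np.ones(n)]
idx = rng.permutation(len(y))
train, test = idx[:3000], idx[3000:]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train], y[train])
score = clf.predict_proba(X[test])[:, 1]
print("ROC AUC:", round(roc_auc_score(y[test], score), 3))
```

Scoring with `predict_proba` rather than hard labels is what enables the ROC (true-positive vs. false-positive) analysis the abstract describes.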
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
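One simple way to make the "test the prior, then use it" step concrete is inverse-variance combination guarded by a null-hypothesis test, sketched below; this is an assumed illustrative estimator, not one of the six examined in the work, and the numbers are invented.

```python
import numpy as np
from scipy.stats import chi2

def combine_with_prior(x_hat, var_x, x_prior, var_prior, alpha=0.05):
    """Combine a sample-only estimate with prior information, but only
    after the prior passes a compatibility (null-hypothesis) test."""
    t = (x_hat - x_prior) ** 2 / (var_x + var_prior)
    if t > chi2.ppf(1 - alpha, df=1):
        return x_hat, var_x           # incompatible prior: fall back to data
    w = var_prior / (var_x + var_prior)
    x = w * x_hat + (1 - w) * x_prior # inverse-variance weighted estimate
    var = var_x * var_prior / (var_x + var_prior)
    return x, var                     # improved (smaller-variance) estimate

# Deformation-rate example (mm/yr), purely illustrative numbers:
print(combine_with_prior(x_hat=3.2, var_x=0.25, x_prior=2.9, var_prior=0.16))
```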
Global Climate Monitoring with the Eos Pm-Platform's Advanced Microwave Scanning Radiometer (AMSR-E)
NASA Technical Reports Server (NTRS)
Spencer, Roy W.
2000-01-01
The Advanced Microwave Scanning Radiometer (AMSR-E) is being built by NASDA to fly on NASA's PM Platform (now called "Aqua") in December 2000. This is in addition to a copy of AMSR that will be launched on Japan's ADEOS-II satellite in 2001. The AMSRs improve upon the window frequency radiometer heritage of the SSM/I and SMMR instruments. Major improvements over those instruments include channels spanning the 6.9 GHz to 89 GHz frequency range, and higher spatial resolution from a 1.6 m reflector (AMSR-E) and 2.0 m reflector (ADEOS-II AMSR). The ADEOS-II AMSR also will have 50.3 and 52.8 GHz channels, providing sensitivity to lower tropospheric temperature. NASA funds an AMSR-E Science Team to provide algorithms for the routine production of a number of standard geophysical products. These products will be generated by the AMSR-E Science Investigator-led Processing System (SIPS) at the Global Hydrology Resource Center (GHRC) in Huntsville, Alabama. While there is a separate NASDA-sponsored activity to develop algorithms and produce products from AMSR, as well as a Joint (NASDA-NASA) AMSR Science Team activity, here I will review only the AMSR-E Team's algorithms and how they benefit from the new capabilities that AMSR-E will provide. The U.S. Team's products will be archived at the National Snow and Ice Data Center (NSIDC). Further information about AMSR-E can be obtained at http://wwwghcc.msfc.nasa.gov/AMSR.
RUSHMAPS: Real-Time Uploadable Spherical Harmonic Moment Analysis for Particle Spectrometers
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo
2013-01-01
RUSHMAPS is a new onboard data reduction scheme that gives real-time access to key science parameters (e.g. moments) of a class of heliophysics science and/or solar system exploration investigations that includes plasma particle spectrometers (PPS), but requires moments reporting (density, bulk velocity, temperature, pressure, etc.) of higher-level quality, and tolerates a lowpass (variable quality) spectral representation of the corresponding particle velocity distributions, such that telemetry use is minimized. The proposed methodology trades access to the full-resolution velocity distribution data, saving on telemetry, for real-time access to both the moments and an adjustable-quality (increasing quality increases volume) spectral representation of distribution functions. Traditional onboard data storage and downlink bandwidth constraints severely limit PPS system functionality and drive cost, which, as a consequence, drives limited data collection and lower angular, energy, and time resolution. This prototypical system, exploiting high-performance processing technology at GSFC (Goddard Space Flight Center), uses a SpaceCube and/or Maestro-type platform for processing. These processing platforms are currently being used on the International Space Station as a technology demonstration, and work is currently ongoing in a new onboard computation system for Earth Science missions, but they have never been implemented in heliospheric science or solar system exploration missions. Preliminary analysis confirms that the targeted processor platforms possess the processing resources required for real-time application of these algorithms to the spectrometer data. SpaceCube platforms demonstrate that the target architecture possesses the sort of compact, low-mass/power, radiation-tolerant characteristics needed for flight. These high-performing hybrid systems embed unprecedented amounts of onboard processing power in the CPU (central processing unit), FPGAs (field programmable gate arrays), and DSP (digital signal processing) elements. The fundamental computational algorithm deconstructs 3D velocity distributions in terms of spherical harmonic spectral coefficients (analogous to a Fourier sine-cosine decomposition), but instead uses spherical harmonic (associated Legendre) orthogonal functions as a basis for the expansion, portraying each 2D angular distribution at every energy or, geometrically, each spherical speed-shell swept by the particle spectrometer. Optionally, these spherical harmonic spectral coefficients may be telemetered to the ground. These will provide a smoothed description of the velocity distribution function whose quality will depend on the number of coefficients determined. Successfully implemented on the GSFC-developed processor, the capability to integrate the proposed methodology with both heritage and anticipated future plasma particle spectrometer designs is demonstrated (with sufficiently detailed design analysis to advance TRL) to show specific science relevancy with future HSD (Heliophysics Science Division) solar-interplanetary and planetary missions, sounding rockets, and/or CubeSat missions.
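For readers unfamiliar with the decomposition, here is a compact sketch of expanding one speed shell's angular distribution in spherical harmonics and reconstructing from a few coefficients; grid sizes, maximum degree, and the test distribution are illustrative assumptions, not RUSHMAPS parameters.

```python
import numpy as np
from scipy.special import sph_harm

n_theta, n_phi, l_max = 32, 64, 4
theta = np.linspace(0, np.pi, n_theta)          # polar angle
phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
PH, TH = np.meshgrid(phi, theta)

f = 1.0 + 0.5 * np.cos(TH)                      # toy beam-like distribution
dA = np.sin(TH) * (np.pi / n_theta) * (2 * np.pi / n_phi)

coeffs = {}
for l in range(l_max + 1):
    for m in range(-l, l + 1):
        # scipy's sph_harm argument order is (m, l, azimuth, polar)
        Y = sph_harm(m, l, PH, TH)
        coeffs[(l, m)] = np.sum(f * np.conj(Y) * dA)   # projection integral

# A smoothed, telemetry-friendly reconstruction from the few coefficients:
f_rec = sum(c * sph_harm(m, l, PH, TH) for (l, m), c in coeffs.items()).real
print("max reconstruction error:", float(np.abs(f - f_rec).max()))
```

Keeping only coefficients up to a chosen degree is what trades reconstruction quality against telemetry volume, as the abstract describes.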
Research progress on quantum informatics and quantum computation
NASA Astrophysics Data System (ADS)
Zhao, Yusheng
2018-03-01
Quantum informatics is an emerging interdisciplinary subject developed by the combination of quantum mechanics, information science, and computer science in the 1980s. The birth and development of quantum information science has far-reaching significance in science and technology. At present, applying quantum information technology has become a major direction of research effort. The preparation, storage, purification, regulation, transmission, and coding/decoding of quantum states have become hot topics for scientists and engineers, with a profound impact on the national economy, people's livelihood, and defense technology. This paper first summarizes the background of quantum information science and quantum computers and the current state of research at home and abroad, and then introduces the basic knowledge and basic concepts of quantum computing. Finally, several quantum algorithms are introduced in detail, including the quantum Fourier transform, the Deutsch-Jozsa algorithm, Shor's algorithm, and quantum phase estimation.
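As a taste of the algorithms the paper introduces, below is a toy statevector simulation of the Deutsch-Jozsa algorithm in its phase-oracle form, where a single oracle query distinguishes a constant Boolean function from a balanced one; it is an illustrative sketch, not production quantum software.

```python
import numpy as np

def hadamard_n(n):
    """n-fold tensor power of the 2x2 Hadamard gate."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, H)
    return out

def deutsch_jozsa(f_values):
    """f_values: list of f(x) in {0,1} for x = 0..2^n - 1."""
    n = int(np.log2(len(f_values)))
    state = np.zeros(2 ** n); state[0] = 1.0      # |0...0>
    state = hadamard_n(n) @ state                 # uniform superposition
    state = (-1.0) ** np.array(f_values) * state  # phase oracle |x> -> (-1)^f(x)|x>
    state = hadamard_n(n) @ state
    p_zero = abs(state[0]) ** 2                   # P(measure |0...0>)
    return "constant" if p_zero > 0.5 else "balanced"

print(deutsch_jozsa([0, 0, 0, 0]))   # constant -> "constant"
print(deutsch_jozsa([0, 1, 1, 0]))   # balanced -> "balanced"
```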
TRMM Version 7 Near-Realtime Data Products
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, Owen
2012-01-01
The TRMM data system has been providing near-realtime data products to the community since late 1999. While the TRMM project never had near-realtime production requirements, the science and applications communities had a great interest in receiving TRMM data as quickly as possible. As a result, these NRT data are provided under a best-effort scenario, but with the objective of having the swath data products available within three hours of data collection 90% of the time. In July of 2011 the Joint Precipitation Measurement Missions Science Team (JPST) authorized the reprocessing of TRMM mission data using the new Version 7 algorithms. The reprocessing of the 14+ years of the mission was concluded within 30 days. Version 7 algorithms had substantial changes in the data product file formats, both for data and metadata. In addition, the algorithms themselves had major modifications and improvements. The general approach to versioning up the NRT is to wait for the regular production algorithms to have run for a while and shake out any issues that might arise from the new version before updating the NRT products. Because of the substantial changes in data/metadata formats as well as the algorithm improvements themselves, the update of NRT to V7 followed an even more conservative path than usual. This was done to ensure that applications agencies and other users of the TRMM NRT would not be faced with short timeframes for conversion to the new format. This paper will describe the process by which the TRMM NRT was updated to V7 and the V7 data products themselves.
NASA Astrophysics Data System (ADS)
Harris, A. T.; Ramachandran, R.; Maskey, M.
2013-12-01
The Exelis-developed IDL and ENVI software are ubiquitous tools in Earth science research environments. The IDL Workbench is used by the Earth science community for programming custom data analysis and visualization modules. ENVI is a software solution for processing and analyzing geospatial imagery that combines support for multiple Earth observation scientific data types (optical, thermal, multi-spectral, hyperspectral, SAR, LiDAR) with advanced image processing and analysis algorithms. The ENVI & IDL Services Engine (ESE) is an Earth science data processing engine that allows researchers to use open standards to rapidly create, publish and deploy advanced Earth science data analytics within any existing enterprise infrastructure. Although powerful in many ways, the tools lack collaborative features out of the box. Thus, as part of the NASA funded project, Collaborative Workbench to Accelerate Science Algorithm Development, researchers at the University of Alabama in Huntsville and Exelis have developed plugins that allow seamless research collaboration from within the IDL workbench. Such additional features within the IDL workbench are possible because it is built using the Eclipse Rich Client Platform (RCP). RCP applications allow custom plugins to be dropped in for extended functionality. Specific functionalities of the plugins include creating complex workflows based on IDL application source code, submitting workflows to be executed by ESE in the cloud, and sharing and cloning of workflows among collaborators. All these functionalities are available to scientists without leaving their IDL workbench. Because ESE can interoperate with any middleware, scientific programmers can readily string together IDL processing tasks (or tasks written in other languages like C++, Java or Python) to create complex workflows for deployment within their current enterprise architecture (e.g. ArcGIS Server, GeoServer, Apache ODE or SciFlo from JPL). Using the collaborative IDL Workbench, coupled with ESE for execution in the cloud, asynchronous workflows could be executed in batch mode on large data in the cloud. We envision that a scientist will initially develop a scientific workflow locally on a small set of data. Once tested, the scientist will deploy the workflow to the cloud for execution. Depending on the results, the scientist may share the workflow and results, allowing them to be stored in a community catalog and instantly loaded into the IDL Workbench of other scientists. Thereupon, scientists can clone and modify or execute the workflow with different input parameters. The Collaborative Workbench will provide a platform for collaboration in the cloud, helping Earth scientists solve big-data problems in the Earth and planetary sciences.
NASA Astrophysics Data System (ADS)
Knackmuß, Jenny; Creutzburg, Reiner
2014-02-01
The aim of this paper is to describe the benefit and support of virtual tutorials, Wikipedia books and multimedia-based teaching in a course on Algorithms and Data Structures. We describe our work and experiences gained from using virtual tutorials held in Netucate iLinc sessions and the use of various multimedia and animation elements to support deeper understanding of the ordinary lectures held in the standard classroom on Algorithms and Data Structures for undergraduate computer science students. We describe the benefits, form, style and contents of those virtual tutorials. Furthermore, we mention the advantage of Wikipedia books to support the blended learning process using modern mobile devices. Finally, we present first statistical measures of improved student scores after introducing this new form of teaching support.
Lopetegui, Marcelo A; Lara, Barbara A; Yen, Po-Yin; Çatalyürek, Ümit V; Payne, Philip R O
2015-01-01
Multiple choice questions play an important role in training and evaluating biomedical science students. However, the resource-intensive nature of question generation limits their open availability, largely restricting their use to evaluation purposes. Although applied-knowledge questions require a complex formulation process, the creation of concrete-knowledge questions (i.e., definitions, associations) could be assisted by the use of informatics methods. We envisioned a novel and simple algorithm that exploits validated knowledge repositories and generates concrete-knowledge questions by leveraging concepts' relationships. In this manuscript we present the development and validation of a prototype which successfully produced meaningful concrete-knowledge questions, opening new applications for existing knowledge repositories and potentially benefiting students of all biomedical sciences disciplines.
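The envisioned generation step can be sketched simply: treat a validated repository as concept-relation-value triples, use the true value as the answer key and same-relation values as distractors. The triples below are hypothetical stand-ins for a real knowledge repository, and this is only one plausible reading of the approach.

```python
import random

triples = [  # (concept, relation, value) - invented examples
    ("insulin", "is produced by", "beta cells of the pancreas"),
    ("glucagon", "is produced by", "alpha cells of the pancreas"),
    ("cortisol", "is produced by", "the adrenal cortex"),
    ("thyroxine", "is produced by", "the thyroid gland"),
]

def make_question(i, n_options=4, seed=0):
    """Build one concrete-knowledge MCQ from triple i, drawing distractors
    from other values that share the same relation type."""
    concept, relation, answer = triples[i]
    rng = random.Random(seed)
    distractors = [v for c, r, v in triples if r == relation and v != answer]
    options = rng.sample(distractors, n_options - 1) + [answer]
    rng.shuffle(options)
    return f"{concept.capitalize()} {relation}:", options, options.index(answer)

stem, options, correct = make_question(0)
print(stem)
for j, opt in enumerate(options):
    print(f"  {chr(65 + j)}. {opt}")
print("answer:", chr(65 + correct))
```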
A space-based classification system for RF transients
NASA Astrophysics Data System (ADS)
Moore, K. R.; Call, D.; Johnson, S.; Payne, T.; Ford, W.; Spencer, K.; Wilkerson, J. F.; Baumgart, C.
The FORTE (Fast On-Orbit Recording of Transient Events) small satellite is scheduled for launch in mid 1995. The mission is to measure and classify VHF (30-300 MHz) electromagnetic pulses, primarily due to lightning, within a high noise environment dominated by continuous wave carriers such as TV and FM stations. The FORTE Event Classifier will use specialized hardware to implement signal processing and neural network algorithms that perform onboard classification of RF transients and carriers. Lightning events will also be characterized with optical data telemetered to the ground. A primary mission science goal is to develop a comprehensive understanding of the correlation between the optical flash and the VHF emissions from lightning. By combining FORTE measurements with ground measurements and/or active transmitters, other science issues can be addressed. Examples include the correlation of global precipitation rates with lightning flash rates and location, the effects of large scale structures within the ionosphere (such as traveling ionospheric disturbances and horizontal gradients in the total electron content) on the propagation of broad bandwidth RF signals, and various areas of lightning physics. Event classification is a key feature of the FORTE mission. Neural networks are promising candidates for this application. The authors describe the proposed FORTE Event Classifier flight system, which consists of a commercially available digital signal processing board and a custom board, and discuss work on signal processing and neural network algorithms.
ICESat-2 / ATLAS Flight Science Receiver Algorithms
NASA Astrophysics Data System (ADS)
Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.
2013-12-01
NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be the single instrument on the ICESat-2 spacecraft, which is expected to launch in 2016 with a 3-year mission lifetime. The ICESat-2 orbital altitude will be 500 km with a 92 degree inclination and 91-day repeat tracks. ATLAS is a single photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a 6-spot pattern on the Earth's surface. Without some method of eliminating solar background noise in near real-time, the volume of ATLAS telemetry would far exceed the normal X-band downlink capability. To reduce the data volume to an acceptable level, a set of onboard Receiver Algorithms has been developed. These Algorithms limit the daily data volume by distinguishing surface echoes from the background noise and allow the instrument to telemeter only a small vertical region about the signal. This is accomplished through the use of an onboard Digital Elevation Model (DEM), signal processing techniques, and an onboard relief map. Similar to what was flown on the ATLAS predecessor GLAS (Geoscience Laser Altimeter System), the DEM provides minimum and maximum heights for each 1 degree x 1 degree tile on the Earth. This information allows the onboard algorithm to limit its signal search to the region between minimum and maximum heights (plus some margin for errors). The understanding that the surface echoes will tend to clump while noise will be randomly distributed led us to histogram the received event times. The selection of the signal locations is based on those histogram bins with statistically significant counts. Once the signal location has been established, the onboard Digital Relief Map (DRM) is used to determine the vertical width of the telemetry band about the signal. The ATLAS Receiver Algorithms are nearing completion of the development phase and are currently being tested using a Monte Carlo Software Simulator that models the instrument, the orbit and the environment. This Simulator makes it possible to check all logic paths that could be encountered by the Algorithms on orbit. In addition, the NASA airborne instrument MABEL is collecting data with characteristics similar to what ATLAS will see. MABEL data is being used to test the ATLAS Receiver Algorithms. Further verification will be performed during Integration and Testing of the ATLAS instrument and during Environmental Testing on the full ATLAS instrument. Results from testing to date show the Receiver Algorithms have the ability to handle a wide range of signal and noise levels with very good sensitivity at relatively low signal to noise ratios. In addition, preliminary tests have demonstrated, using the ICESat-2 Science Team's selected land ice and sea ice test cases, the capability of the Algorithms to successfully find and telemeter the surface echoes. In this presentation we will describe the ATLAS Flight Science Receiver Algorithms and the Software Simulator, and will present results of the testing to date. The onboard databases (DEM, DRM and the Surface Reference Mask) are being developed at the University of Texas at Austin as part of the ATLAS Flight Science Receiver Algorithms. Verification of the onboard databases is being performed by ATLAS Receiver Algorithms team members Claudia Carabajal and Jack Saba.
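A stripped-down version of the histogram-based signal finding might look like the following; the DEM bounds, bin width, and 4-sigma threshold are illustrative assumptions, not the flight algorithm's tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
dem_min, dem_max = 1200.0, 1500.0            # DEM height bounds + margin (m)
noise = rng.uniform(dem_min, dem_max, 2000)  # solar background events
surface = rng.normal(1350.0, 1.5, 300)       # clumped surface echoes
heights = np.concatenate([noise, surface])

# Histogram event heights within the DEM-bounded search window.
bins = np.arange(dem_min, dem_max + 5.0, 5.0)
counts, edges = np.histogram(heights, bins)

# Keep only bins whose counts are statistically significant against the
# roughly flat background; telemeter a band around those bins.
mu, sigma = counts.mean(), counts.std()
signif = counts > mu + 4.0 * sigma
for lo, hi in zip(edges[:-1][signif], edges[1:][signif]):
    print(f"telemeter band around {lo:.0f}-{hi:.0f} m")
```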
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.
2014-10-01
Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example WorldView-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
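As a loose schematic of the labeling flow (features from a normalized band difference index, then unsupervised clustering), consider the sketch below; it substitutes simple mean/std patch features and k-means for the paper's Hebbian-learned dictionaries and sparse-approximation clustering, and the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
nir = rng.random((120, 120)); red = rng.random((120, 120))
ndvi = (nir - red) / (nir + red + 1e-9)   # normalized band difference index

def patch_features(img, size=5):
    """Crude spectral + textural proxy: per-patch mean and std."""
    feats = []
    for i in range(0, img.shape[0] - size, size):
        for j in range(0, img.shape[1] - size, size):
            p = img[i:i + size, j:j + size]
            feats.append([p.mean(), p.std()])
    return np.array(feats)

X = patch_features(ndvi)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("patches per land-cover cluster:", np.bincount(labels))
```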
Optimization of knowledge-based systems and expert system building tools
NASA Technical Reports Server (NTRS)
Yasuda, Phyllis; Mckellar, Donald
1993-01-01
The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the grant were submitted to the co-investigators for the grant. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.
Computational provenance in hydrologic science: a snow mapping example.
Dozier, Jeff; Frew, James
2009-03-13
Computational provenance--a record of the antecedents and processing history of digital information--is key to properly documenting computer-based scientific research. To support investigations in hydrologic science, we produce the daily fractional snow-covered area from NASA's moderate-resolution imaging spectroradiometer (MODIS). From the MODIS reflectance data in seven wavelengths, we estimate the fraction of each 500 m pixel that snow covers. The daily products have data gaps and errors because of cloud cover and sensor viewing geometry, so we interpolate and smooth to produce our best estimate of the daily snow cover. To manage the data, we have developed the Earth System Science Server (ES3), a software environment for data-intensive Earth science, with unique capabilities for automatically and transparently capturing and managing the provenance of arbitrary computations. Transparent acquisition avoids the scientists having to express their computations in specific languages or schemas in order for provenance to be acquired and maintained. ES3 models provenance as relationships between processes and their input and output files. It is particularly suited to capturing the provenance of an evolving algorithm whose components span multiple languages and execution environments.
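The process-file relationship model ES3 uses can be illustrated with an explicit (non-transparent) wrapper; the file names and step names below are hypothetical, and ES3's actual capture is automatic rather than decorator-based.

```python
provenance = []   # (process, inputs, outputs) relationship records

def tracked(process_name):
    """Record each run as a process linked to its input and output files."""
    def wrap(fn):
        def run(inputs, outputs, **kw):
            fn(inputs, outputs, **kw)
            provenance.append((process_name, tuple(inputs), tuple(outputs)))
        return run
    return wrap

@tracked("interpolate_snow_cover")
def interpolate(inputs, outputs, **kw):
    pass   # stand-in for gap-filling the daily snow product

@tracked("smooth_time_series")
def smooth(inputs, outputs, **kw):
    pass   # stand-in for temporal smoothing

interpolate(["mod09ga_2008_061.hdf"], ["fsca_gapfilled_061.nc"])
smooth(["fsca_gapfilled_061.nc"], ["fsca_best_estimate_061.nc"])

def lineage(product, records):
    """Walk provenance backwards from a product to its antecedents."""
    for proc, ins, outs in reversed(records):
        if product in outs:
            return {"process": proc,
                    "inputs": [lineage(i, records) or i for i in ins]}

print(lineage("fsca_best_estimate_061.nc", provenance))
```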
NASA Astrophysics Data System (ADS)
Neher, Peter F.; Stieltjes, Bram; Reisert, Marco; Reicht, Ignaz; Meinzer, Hans-Peter; Fritzsche, Klaus H.
2012-02-01
Fiber tracking algorithms yield valuable information for neurosurgery as well as automated diagnostic approaches. However, they have not yet arrived in daily clinical practice. In this paper we present an open source integration of the global tractography algorithm proposed by Reisert et al. [1] into the open source Medical Imaging Interaction Toolkit (MITK) developed and maintained by the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ). The integration of this algorithm into a standardized and open development environment like MITK improves the accessibility of tractography algorithms for the science community and is an important step towards bringing neuronal tractography closer to clinical application. The MITK diffusion imaging application, downloadable from www.mitk.org, combines all the steps necessary for a successful tractography: preprocessing, reconstruction of the images, the actual tracking, live monitoring of intermediate results, postprocessing and visualization of the final tracking results. This paper presents typical tracking results and demonstrates the steps for pre- and post-processing of the images.
Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms
NASA Astrophysics Data System (ADS)
Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei
2016-01-01
In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation which reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
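To fix ideas, the construction can be stated schematically; the notation follows the general Birkhoffian mechanics literature, not necessarily the paper itself.

```latex
\[
  \left( \frac{\partial R_\nu}{\partial a^\mu}
       - \frac{\partial R_\mu}{\partial a^\nu} \right) \dot{a}^{\nu}
  = \frac{\partial B}{\partial a^\mu} + \frac{\partial R_\mu}{\partial t},
  \qquad \mu,\nu = 1,\dots,2n,
\]
where $B(t,a)$ is the Birkhoffian and $R_\mu(t,a)$ are Birkhoff's functions.
If an invertible change of variables $a = \Phi(z)$ carries this system to
canonical Hamiltonian form $\dot{z} = J\,\nabla_z H(z)$, one applies a
symplectic one-step map $z_{k+1} = \Psi_h(z_k)$ (e.g.\ the midpoint rule) and
transports it back, $a_{k+1} = \Phi\!\left(\Psi_h(\Phi^{-1}(a_k))\right)$,
giving a Birkhoffian symplectic scheme for the original equations.
```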
Collaborative workbench for cyberinfrastructure to accelerate science algorithm development
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.
2013-12-01
There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.
NASA Technical Reports Server (NTRS)
Cleve, Jeffrey E.
2010-01-01
This document describes the collection of Kepler data and the processing applied to it, so that when researchers around the world receive the Kepler data sets (each a set of pixels from the telescope for a particular target - star, galaxy, or other object - over a 3-month period) they can adjust their algorithms for the processing that was done (for example, subtracting all of one particular wavelength). This lets users calibrate their own algorithms so that they know what they are starting with. It is posted so that anyone accessing the publicly available data can understand it (most Kepler data is under restriction for 1-4 years and is not available, but the handbook is for everyone, public and restricted). The Data Analysis Working Group has released long and short cadence materials, including FFIs and Dropped Targets, for the public. The Kepler Science Office considers Data Release 3 to provide "browse quality" data. These notes have been prepared to give Kepler users of the Multimission Archive at STScI (MAST) a summary of how the data were collected and prepared, and how well the data processing pipeline is functioning on flight data. They will be updated for each release of data to the public archive and placed on MAST along with other Kepler documentation, at http://archive.stsci.edu/kepler/documents.html. Data Release 3 is meant to give users the opportunity to examine the data for possibly interesting science and to involve users in improving the pipeline for future data releases. To support the latter, users are encouraged to notice and document artifacts, either in the raw or processed data, and report them to the Science Office.
Augmented Topological Descriptors of Pore Networks for Material Science.
Ushizima, D; Morozov, D; Weber, G H; Bianchi, A G C; Sethian, J A; Bethel, E W
2012-12-01
One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity and estimates of permeability, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating the permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect the physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction, making novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.
Algorithms in nature: the convergence of systems biology and computational thinking
Navlakha, Saket; Bar-Joseph, Ziv
2011-01-01
Computer science and biology have enjoyed a long and fruitful relationship for decades. Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. Recently, these two directions have been converging. In this review, we argue that thinking computationally about biological processes may lead to more accurate models, which in turn can be used to improve the design of algorithms. We discuss the similar mechanisms and requirements shared by computational and biological processes and then present several recent studies that apply this joint analysis strategy to problems related to coordination, network analysis, and tracking and vision. We also discuss additional biological processes that can be studied in a similar manner and link them to potential computational problems. With the rapid accumulation of data detailing the inner workings of biological systems, we expect this direction of coupling biological and computational studies to greatly expand in the future. PMID:22068329
A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.
2014-12-01
Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.
NASA Astrophysics Data System (ADS)
Liu, Shixing; Liu, Chang; Hua, Wei; Guo, Yongxin
2016-11-01
By using the discrete variational method, we study the numerical method of the general nonholonomic system in the generalized Birkhoffian framework, and construct a numerical method of generalized Birkhoffian equations called a self-adjoint-preserving algorithm. Numerical results show that it is reasonable to study the nonholonomic system by the structure-preserving algorithm in the generalized Birkhoffian framework. Project supported by the National Natural Science Foundation of China (Grant Nos. 11472124, 11572145, 11202090, and 11301350), the Doctor Research Start-up Fund of Liaoning Province, China (Grant No. 20141050), the China Postdoctoral Science Foundation (Grant No. 2014M560203), and the General Science and Technology Research Plans of Liaoning Educational Bureau, China (Grant No. L2013005).
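For orientation only, the sketch below shows a standard structure-preserving integrator of the discrete variational (Störmer-Verlet) type applied to a simple pendulum; the self-adjoint-preserving scheme for generalized Birkhoffian nonholonomic systems described above is far more general, and all parameter values here are illustrative assumptions.

import numpy as np

def verlet_pendulum(q0=0.5, p0=0.0, dt=0.01, steps=10000, g=9.81, L=1.0):
    # Störmer-Verlet (kick-drift-kick) step, a classic variational integrator
    q, p = q0, p0
    traj = np.empty((steps, 2))
    for k in range(steps):
        p_half = p - 0.5 * dt * (g / L) * np.sin(q)   # half kick
        q = q + dt * p_half                           # drift
        p = p_half - 0.5 * dt * (g / L) * np.sin(q)   # half kick
        traj[k] = q, p
    return traj   # energy error stays bounded rather than drifting

traj = verlet_pendulum()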
The Human is the Loop: New Directions for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren
2014-01-28
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
1987-08-25
Department of Computer Science, University of Utah, Salt Lake City, UT 84112, August 25, 1987. ...generated for each instance of an alternative operation. One procedure merits special attention. CheckAndCommit(AltListr, g): INTEGER is called by process P ...in figure 2. CheckAndCommit uses a procedure CheckGuard(AltListr, g): INTEGER that scans the remote alternative list AltListr looking for a matching
Contemplating Synergistic Algorithms for the NASA ACE Mission
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Starr, David O.; Marchand, Roger; Ackerman, Steven A.; Platnick, Steven E.; Fridlind, Ann; Cooper, Steven; Vane, Deborah G.; Stephens, Graeme L.
2013-01-01
ACE is a proposed Tier 2 NASA Decadal Survey mission that will focus on clouds, aerosols, and precipitation as well as ocean ecosystems. The primary objective of the clouds component of this mission is to advance our ability to predict changes to the Earth's hydrological cycle and energy balance in response to climate forcings by generating observational constraints on future science questions, especially those associated with the effects of aerosol on clouds and precipitation. ACE will continue and extend the measurement heritage that began with the A-Train and that will continue through Earthcare. ACE planning efforts have identified several data streams that can contribute significantly to characterizing the properties of clouds and precipitation and the physical processes that force these properties. These include dual frequency Doppler radar, high spectral resolution lidar, polarimetric visible imagers, passive microwave and submillimeter wave radiometry. While all these data streams are technologically feasible, their total cost is substantial and likely prohibitive. It is, therefore, necessary to critically evaluate their contributions to the ACE science goals. We have begun developing algorithms to explore this trade space. Specifically, we will describe our early exploratory algorithms that take as input the set of potential ACE-like data streams and evaluate critically to what extent each data stream influences the error in a specific cloud quantity retrieval.
Applying the Karma Provenance tool to NASA's AMSR-E Data Production Stream
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Conover, H.; Regner, K.; Movva, S.; Goodman, H. M.; Pale, B.; Purohit, P.; Sun, Y.
2010-12-01
Current procedures for capturing and disseminating provenance, or data product lineage, are limited in both what is captured and how it is disseminated to the science community. For example, the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) Science Investigator-led Processing System (SIPS) generates Level 2 and Level 3 data products for a variety of geophysical parameters. Data provenance and quality information for these data sets is either very general (e.g., user guides, a list of anomalous data receipt and processing conditions over the life of the missions) or difficult to access or interpret (e.g., quality flags embedded in the data, production history files not easily available to users). Karma is a provenance collection and representation tool designed and developed for data-driven workflows such as the production streams used to produce EOS standard products. Karma records uniform and usable provenance metadata independent of the processing system while minimizing both the modification burden on the processing system and the overall performance overhead. Karma collects both the process and data provenance. The process provenance contains information about the workflow execution and the associated algorithm invocations. The data provenance captures metadata about the derivation history of the data product, including algorithms used and input data sources transformed to generate it. As part of an ongoing NASA-funded project, Karma is being integrated into the AMSR-E SIPS data production streams. Metadata gathered by the tool will be presented to the data consumers as provenance graphs, which are useful in validating the workflows and determining the quality of the data product. This presentation will discuss design and implementation issues faced while incorporating a provenance tool into a structured data production flow. Prototype results will also be presented in this talk.
Modelling and simulating reaction-diffusion systems using coloured Petri nets.
Liu, Fei; Blätke, Mary-Ann; Heiner, Monika; Yang, Ming
2014-10-01
Reaction-diffusion systems often play an important role in systems biology when developmental processes are involved. Traditional methods of modelling and simulating such systems require substantial prior knowledge of mathematics and/or simulation algorithms. Such requirements can pose a challenge for biologists who are not equally well trained in mathematics and computer science. Coloured Petri nets as a high-level and graphical language offer an attractive alternative, which is easily approachable. In this paper, we investigate a coloured Petri net framework integrating deterministic, stochastic and hybrid modelling formalisms and corresponding simulation algorithms for the modelling and simulation of reaction-diffusion processes that may be closely coupled with signalling pathways, metabolic reactions and/or gene expression. Such systems often manifest multiscale behaviour in time, space and/or concentration. We introduce our approach by means of some basic diffusion scenarios, and test it against an established case study, the Brusselator model. Copyright © 2014 Elsevier Ltd. All rights reserved.
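A minimal deterministic sketch of the Brusselator test case mentioned above, assuming explicit Euler time stepping on a one-dimensional periodic grid rather than a coloured Petri net encoding; the parameter values (A, B, Du, Dv, dt, dx) are illustrative only.

import numpy as np

def brusselator_1d(n=200, steps=5000, dt=0.005, dx=1.0,
                   A=1.0, B=3.0, Du=1.0, Dv=8.0, seed=0):
    rng = np.random.default_rng(seed)
    u = A + 0.1 * rng.standard_normal(n)        # activator, perturbed steady state
    v = B / A + 0.1 * rng.standard_normal(n)    # inhibitor
    for _ in range(steps):
        # discrete Laplacian with periodic boundaries
        lap_u = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        lap_v = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
        du = A + u**2 * v - (B + 1.0) * u + Du * lap_u
        dv = B * u - u**2 * v + Dv * lap_v
        u, v = u + dt * du, v + dt * dv
    return u, v

u, v = brusselator_1d()
print(u.min(), u.max())   # spatial structure emerges for suitable B and Du/Dv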
Global satellite composites - 20 years of evolution
NASA Astrophysics Data System (ADS)
Kohrs, Richard A.; Lazzara, Matthew A.; Robaidek, Jerrold O.; Santek, David A.; Knuth, Shelley L.
2014-01-01
For two decades, the University of Wisconsin Space Science and Engineering Center (SSEC) and the Antarctic Meteorological Research Center (AMRC) have been creating global, regional and hemispheric satellite composites. These composites have proven useful in research, operational forecasting, commercial applications and educational outreach. Using the Man computer Interactive Data Access System (McIDAS) software developed at SSEC, infrared window composites were created by combining Geostationary Operational Environmental Satellite (GOES) and polar-orbiting data from the SSEC Data Center with polar data acquired at McMurdo and Palmer stations, Antarctica. Increased computer processing speed has allowed for more advanced algorithms to address the decision-making process for co-located pixels. The algorithms have evolved from a simple maximum-brightness-temperature rule to those that account for distance from the sub-satellite point, parallax displacement, pixel time and resolution. The composites are the state-of-the-art means for merging/mosaicking satellite imagery.
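A hedged sketch of the kind of pixel-selection rule described, assuming each input grid supplies a brightness temperature and a per-pixel distance from its sub-satellite point; the operational McIDAS composites also weigh parallax displacement, pixel time and resolution, which are omitted here.

import numpy as np

def composite(bt_stack, dist_stack):
    """bt_stack, dist_stack: (n_sat, ny, nx) arrays; NaN marks missing data."""
    # prefer the satellite whose sub-satellite point is closest to the pixel
    dist = np.where(np.isnan(bt_stack), np.inf, dist_stack)
    best = np.argmin(dist, axis=0)
    out = np.take_along_axis(bt_stack, best[None], axis=0)[0]
    # fall back to the simple maximum-brightness-temperature rule where the
    # preferred satellite has no valid data
    out = np.where(np.isnan(out), np.nanmax(bt_stack, axis=0), out)
    return out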
Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cramer, Christopher J.
Charge transfer and charge transport in photoactivated systems are fundamental processes that underlie solar energy capture, solar energy conversion, and photoactivated catalysis, both organometallic and enzymatic. We developed methods, algorithms, and software tools needed for reliable treatment of the underlying physics for charge transfer and charge transport, an undertaking with broad applicability to the goals of the fundamental-interaction component of the Department of Energy Office of Basic Energy Sciences and the exascale initiative of the Office of Advanced Scientific Computing Research.
Thirty Years After Marr's Vision: Levels of Analysis in Cognitive Science.
Peebles, David; Cooper, Richard P
2015-04-01
Thirty years after the publication of Marr's seminal book Vision (Marr, 1982) the papers in this topic consider the contemporary status of his influential conception of three distinct levels of analysis for information-processing systems, and in particular the role of the algorithmic and representational level with its cognitive-level concepts. This level has (either implicitly or explicitly) been downplayed or eliminated both by reductionist neuroscience approaches from below that seek to account for behavior from the implementation level and by Bayesian approaches from above that seek to account for behavior in purely computational-level terms. Copyright © 2015 Cognitive Science Society, Inc.
Who Owns Educational Theory? Big Data, Algorithms and the Expert Power of Education Data Science
ERIC Educational Resources Information Center
Williamson, Ben
2017-01-01
"Education data science" is an emerging methodological field which possesses the algorithm-driven technologies required to generate insights and knowledge from educational big data. This article consists of an analysis of the Lytics Lab, Stanford University's laboratory for research and development in learning analytics, and the Center…
ERIC Educational Resources Information Center
Castillo, Antonio S.; Berenguer, Isabel A.; Sánchez, Alexander G.; Álvarez, Tomás R. R.
2017-01-01
This paper analyzes the results of a diagnostic study carried out with second-year students in the computational sciences majors at the University of Oriente, Cuba, to determine the limitations they exhibit in computational algorithmization. An exploratory study was developed using quantitative and qualitative methods. The results allowed…
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-05-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
Toward a computational psycholinguistics of reference production.
van Deemter, Kees; Gatt, Albert; van Gompel, Roger P G; Krahmer, Emiel
2012-04-01
This article introduces the topic ''Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference'' of the journal Topics in Cognitive Science. We argue that computational and psycholinguistic approaches to reference production can benefit from closer interaction, and that this is likely to result in the construction of algorithms that differ markedly from the ones currently known in the computational literature. We focus particularly on determinism, the feature of existing algorithms that is perhaps most clearly at odds with psycholinguistic results, discussing how future algorithms might include non-determinism, and how new psycholinguistic experiments could inform the development of such algorithms. Copyright © 2012 Cognitive Science Society, Inc.
Phase retrieval by coherent modulation imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fucai; Chen, Bo; Morrison, Graeme R.
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free electron laser.
Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette
NASA Astrophysics Data System (ADS)
Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi
2017-01-01
In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First, iris images are processed with Canny edge detection to detect the pupil edge, and the center and radius of the pupil are found with the Hough circle transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a fixed-dimension (normalized) iris by transforming Cartesian into polar coordinates, and a thresholding technique is used to remove the eyelid and eyelashes. The experiment is conducted with grayscale eye images taken from the iris database of the Chinese Academy of Sciences Institute of Automation (CASIA); this data set is reliable and widely used in iris biometrics research. The results show that a threshold level of 0.3 gives better accuracy than other levels, so the present algorithm can be used for zigzag collarette segmentation and normalization with an accuracy of 98.88%.
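A rough OpenCV sketch of the described pipeline (Canny edges, a Hough circle for the pupil, Daugman-style rubber-sheet unwrapping, and thresholding), assuming an 8-bit grayscale eye image such as those in CASIA; the sampled annulus stands in for the zigzag collarette region, and all radii, Hough parameters and the 0.3 threshold are illustrative.

import cv2
import numpy as np

def normalize_iris(gray, radial=64, angular=360, band=40, thresh=0.3):
    # pupil edge map (kept for inspection; HoughCircles runs its own Canny internally)
    edges = cv2.Canny(gray, 50, 150)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=150, param2=30, minRadius=20, maxRadius=80)
    cx, cy, r = circles[0, 0]                             # pupil centre and radius
    # rubber-sheet model: sample the annulus just outside the pupil into polar form
    theta = np.linspace(0, 2 * np.pi, angular, endpoint=False)
    rad = np.linspace(r, r + band, radial)
    xs = cx + np.outer(rad, np.cos(theta))
    ys = cy + np.outer(rad, np.sin(theta))
    polar = cv2.remap(gray, xs.astype(np.float32), ys.astype(np.float32),
                      cv2.INTER_LINEAR)
    mask = polar.astype(np.float32) / 255.0 > thresh       # drop dark lashes/lid
    return polar, mask, edges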
Phase retrieval by coherent modulation imaging.
Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K
2016-11-18
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
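For contrast with CMI, the sketch below shows the classic CDI-style error-reduction loop that alternates between the measured Fourier modulus and a real-space support constraint; the three-plane CMI update with a known modulator is not reproduced here, and the function and variable names are illustrative.

import numpy as np

def error_reduction(measured_amplitude, support, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, measured_amplitude.shape)
    field = measured_amplitude * np.exp(1j * phase)       # detector-plane guess
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)                         # back to sample plane
        obj = np.where(support, obj, 0.0)                 # enforce support constraint
        field = np.fft.fft2(obj)
        field = measured_amplitude * np.exp(1j * np.angle(field))  # enforce modulus
    return obj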
Improving Learning of Markov Logic Networks using Transfer and Bottom-Up Induction
2007-05-01
University of Texas at Austin, Austin, TX 78712. lilyanam@cs.utexas.edu. Doctoral Dissertation Proposal; Supervising Professor: Raymond J. Mooney. Abstract: Statistical...maxima and plateaus. It is therefore an important research problem to develop learning algorithms that improve the speed and accuracy of this process.
Processing NASA Earth Science Data on Nebula Cloud
NASA Technical Reports Server (NTRS)
Chen, Aijun; Pham, Long; Kempler, Steven
2012-01-01
Three applications were successfully migrated to Nebula, including S4PM, AIRS L1/L2 algorithms, and Giovanni MAPSS. Nebula has some advantages compared with local machines (e.g. performance, cost, scalability, bundling, etc.). Nebula still faces some challenges (e.g. stability, object storage, networking, etc.). Migrating applications to Nebula is feasible but time consuming. Lessons learned from our Nebula experience will benefit future Cloud Computing efforts at GES DISC.
Investigation of cloud properties and atmospheric stability with MODIS
NASA Technical Reports Server (NTRS)
Menzel, Paul
1995-01-01
In the past six months several milestones were accomplished. The MODIS Airborne Simulator (MAS) was flown in a 50 channel configuration for the first time in January 1995 and the data were calibrated and validated; in the same field campaign the approach for validating MODIS radiances using the MAS and High resolution Interferometer Sounder (HIS) instruments was successfully tested on GOES-8. Cloud masks for two scenes (one winter and the other summer) of AVHRR local area coverage from the Gulf of Mexico to Canada were processed and forwarded to the SDST for MODIS Science Team investigation; a variety of surface and cloud scenes were evident. Beta software preparations continued with incorporation of the EOS SDP Toolkit. SCAR-C data was processed and presented at the biomass burning conference. Preparations for SCAR-B accelerated with generation of a home page for access to real time satellite data related to biomass burning; this will be available to the scientists in Brazil via internet on the World Wide Web. The CO2 cloud algorithm was compared to other algorithms that differ in their construction of clear radiance fields. The HIRS global cloud climatology was completed for six years. The MODIS science team meeting was attended by five of the UW scientists.
NASA Technical Reports Server (NTRS)
Lee, Hyun H.
2012-01-01
MERTELEMPROC processes telemetered data in data product format and generates Experiment Data Records (EDRs) for many instruments (HAZCAM, NAVCAM, PANCAM, microscopic imager, Moessbauer spectrometer, APXS, RAT, and EDLCAM) on the Mars Exploration Rover (MER). If the data is compressed, then MERTELEMPROC decompresses the data with an appropriate decompression algorithm. There are two compression algorithms (ICER and LOCO) used in MER. This program fulfills a MER-specific need to generate Level 1 products within a 60-second time requirement. EDRs generated by this program are used by merinverter, marscahv, marsrad, and marsjplstereo to generate higher-level products for the mission operations. MERTELEMPROC was the first GDS program to process the data product. Metadata of the data product is in XML format. The software allows user-configurable input parameters and per-product processing (not stream-based processing), and fail-over is supported if the leading image header is corrupted. It is used within the MER automated pipeline. MERTELEMPROC is part of the OPGS (Operational Product Generation Subsystem) automated pipeline, which analyzes images returned by in situ spacecraft and creates Level 1 products to assist in operations, science, and outreach.
NASA Technical Reports Server (NTRS)
Equils, Douglas J.
2008-01-01
Launched on October 15, 1997, the Cassini-Huygens spacecraft began its ambitious journey to the Saturnian system with a complex suite of 12 scientific instruments, and another 6 instruments aboard the European Space Agency's Huygens Probe. Over the next 6 1/2 years, Cassini would continue its relatively simple cruise-phase operations, flying past Venus, Earth, and Jupiter. However, following Saturn Orbit Insertion (SOI), Cassini would become involved in a complex series of tasks that required detailed resource management, distributed operations collaboration, and a database for capturing science objectives. Collectively, these needs were met through a web-based software tool designed to help with the Cassini uplink process and ultimately used to generate more robust sequences for spacecraft operations. In 2001, in conjunction with the Southwest Research Institute (SwRI) and later Venustar Software and Engineering Inc., the Cassini Information Management System (CIMS) was released, which enabled the Cassini spacecraft and science planning teams to perform complex information management and team collaboration between scientists and engineers in 17 countries. Originally tailored to help manage the science planning uplink process, CIMS has been actively evolving since its inception to meet the changing and growing needs of the Cassini uplink team and effectively reduce mission risk through a series of resource management validation algorithms. These algorithms have been implemented in the web-based software tool to identify potential sequence conflicts early in the science planning process. CIMS mitigates these sequence conflicts through identification of timing incongruities, pointing inconsistencies, flight rule violations, data volume issues, and by assisting in Deep Space Network (DSN) coverage analysis. In preparation for extended mission operations, CIMS has also evolved further to assist in the planning and coordination of the dual playback redundancy of high-value data from targets such as Titan and Enceladus. This paper will outline the critical role that CIMS has played for Cassini in the distributed ops paradigm throughout operations. This paper will also examine the evolution that CIMS has undergone in the face of new science discoveries and fluctuating operational needs. And finally, this paper will conclude with a theoretical adaptation of CIMS for other projects and the potential savings in cost and risk reduction that could be tapped into by future missions.
Framework for Processing Citizens Science Data for Applications to NASA Earth Science Missions
NASA Technical Reports Server (NTRS)
Teng, William; Albayrak, Arif
2017-01-01
Citizen science (or crowdsourcing) has drawn much high-level recent and ongoing interest and support. It is poised to be applied, beyond the by-now fairly familiar use of, e.g., Twitter for natural hazards monitoring, to science research, such as augmenting the validation of NASA earth science mission data. This interest and support is seen in the 2014 National Plan for Civil Earth Observations, the 2015 White House forum on citizen science and crowdsourcing, the ongoing Senate Bill 2013 (Crowdsourcing and Citizen Science Act of 2015), the recent (August 2016) Open Geospatial Consortium (OGC) call for public participation in its newly-established Citizen Science Domain Working Group, and NASA's initiation of a new Citizen Science for Earth Systems Program (along with its first citizen science-focused solicitation for proposals). Over the past several years, we have been exploring the feasibility of extracting from the Twitter data stream useful information for application to NASA precipitation research, with both "passive" and "active" participation by the twitterers. The Twitter database, which recently passed its tenth anniversary, is potentially a rich source of real-time and historical global information for science applications. The time-varying set of "precipitation" tweets can be thought of as an organic network of rain gauges, potentially providing a widespread view of precipitation occurrence. The validation of satellite precipitation estimates is challenging, because many regions lack data or access to data, especially outside of the U.S. and in remote and developing areas. Mining the Twitter stream could augment these validation programs and, potentially, help tune existing algorithms. Our ongoing work, though exploratory, has resulted in key components for processing and managing tweets, including the capabilities to filter the Twitter stream in real time, to extract location information, to filter for exact phrases, and to plot tweet distributions. The key step is to process the "precipitation" tweets to be compatible with satellite-retrieved precipitation data. These key components for processing and managing "precipitation" tweets (and additional ones to be developed) are not limited to precipitation, nor are they limited to the Twitter social medium. Indeed, to maximize the value of our work for NASA earth science programs, these components should be generalized and be part of an overall framework for processing citizen science data for science research. In this paper, we outline such a framework.
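A minimal sketch of the kind of processing components described, assuming tweet records are already available as dictionaries with 'text', 'lat' and 'lon' fields (the real system filters the live Twitter stream); the phrase list and grid edges are illustrative assumptions.

import numpy as np

PHRASES = ("it is raining", "heavy rain", "it's snowing")   # illustrative exact phrases

def grid_precip_tweets(tweets, lat_edges, lon_edges):
    # keep only geolocated tweets containing one of the exact phrases
    hits = [(t["lat"], t["lon"]) for t in tweets
            if "lat" in t and "lon" in t
            and any(p in t["text"].lower() for p in PHRASES)]
    if not hits:
        return np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    lats, lons = zip(*hits)
    counts, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    return counts   # tweet-density map to compare against gridded precipitation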
NASA Astrophysics Data System (ADS)
Dean, David S.; Majumdar, Satya N.
2002-08-01
We study a fragmentation problem where an initial object of size x is broken into m random pieces provided x > x0, where x0 is an atomic cut-off. Subsequently, the fragmentation process continues for each of those daughter pieces whose sizes are bigger than x0. The process stops when all the fragments have sizes smaller than x0. We show that the fluctuation of the total number of splitting events, characterized by the variance, generically undergoes a nontrivial phase transition as one tunes the branching number m through a critical value m = mc. For m < mc, the fluctuations are Gaussian, whereas for m > mc they are anomalously large and non-Gaussian. We apply this general result to analyse two different search algorithms in computer science.
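A Monte Carlo sketch of the fragmentation process described above, assuming pieces are produced by uniformly random break points (the general result does not depend on this particular breaking rule); x0 and the sample sizes are illustrative.

import numpy as np

def count_splits(x, m, x0, rng):
    # count splitting events until every fragment is below the atomic cut-off x0
    events, stack = 0, [x]
    while stack:
        size = stack.pop()
        if size > x0:
            events += 1
            cuts = np.sort(rng.uniform(0, size, m - 1))
            pieces = np.diff(np.concatenate(([0.0], cuts, [size])))
            stack.extend(pieces)
    return events

rng = np.random.default_rng(1)
for m in (2, 5, 20):
    n = [count_splits(1.0, m, 1e-3, rng) for _ in range(2000)]
    print(m, np.mean(n), np.var(n))   # fluctuations grow markedly at large m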
Atmospheric Correction at AERONET Locations: A New Science and Validation Data Set
NASA Technical Reports Server (NTRS)
Wang, Yujie; Lyapustin, Alexei; Privette, Jeffery L.; Morisette, Jeffery T.; Holben, Brent
2008-01-01
This paper describes an AERONET-based Surface Reflectance Validation Network (ASRVN) and its dataset of spectral surface bidirectional reflectance and albedo based on MODIS TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 x 50 km subsets of MODIS L1B data from MODAPS and AERONET aerosol and water vapor information. Then it performs an accurate atmospheric correction for about 100 AERONET sites based on accurate radiative transfer theory with high quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud mask algorithm based on a time series analysis, and an atmospheric correction algorithm. The atmospheric correction is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated over a 16-day interval, with theoretical reflectance parameterized in terms of coefficients of the LSRT BRF model. The ASRVN takes several steps to ensure high quality of results: 1) the cloud mask algorithm filters opaque clouds; 2) an aerosol filter has been developed to filter residual semi-transparent and sub-pixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing a requirement of consistency of the new solution with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to surface changes using the last day of measurements; and 5) development of a seasonal back-up spectral BRF database to increase data coverage. The ASRVN provides a gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (k(sup L), k(sup G), k(sup V)), surface albedo, NBRF (a normalized BRF computed for a standard viewing geometry, VZA=0 deg., SZA=45 deg.), and IBRF (instantaneous, or one-angle, BRF value derived from the last day of MODIS measurement for specific viewing geometry) for MODIS 500m bands 1-7. The results are produced daily at a resolution of 1 km in gridded format. We also provide a cloud mask, quality flag and a browse bitmap image. The new dataset can be used for a wide range of applications including validation analysis and science research.
Pre-Launch Performance Testing of the ICESat-2/ATLAS Flight Science Receiver Algorithms
NASA Astrophysics Data System (ADS)
Mcgarry, J.; Carabajal, C. C.; Saba, J. L.; Rackley, A.; Holland, S.
2016-12-01
NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be the single instrument on the ICESat-2 spacecraft which is expected to launch in late 2017 with a 3 year mission lifetime. The ICESat-2 planned orbital altitude is 500 km with a 92 degree inclination and 91-day repeat tracks. ATLAS is a single-photon detection system transmitting at 532nm with a laser repetition rate of 10 kHz and a 6 spot pattern on the Earth's surface. Without some method of reducing the received data, the volume of ATLAS telemetry would far exceed the normal X-band downlink capability. To reduce the data volume to an acceptable level a set of onboard Receiver Algorithms has been developed. These Algorithms limit the daily data volume by distinguishing surface echoes from the background noise and allowing the instrument to telemeter data from only a small vertical region about the signal. This is accomplished through the use of an onboard Digital Elevation Model (DEM), signal processing techniques, and onboard relief and surface reference maps. The ATLAS Receiver Algorithms have been completed and have been verified during Instrument testing in the spacecraft assembly area at the Goddard Space Flight Center in late 2015 and early 2016. Testing has been performed at ambient temperature with a pressure of one atmosphere as well as at the expected hot and cold temperatures in a vacuum. Results from testing to date show the Receiver Algorithms have the ability to handle a wide range of signal and noise levels with a very good sensitivity at relatively low signal to noise ratios. Testing with the ATLAS instrument and flight software shows very good agreement with previous Simulator testing and all of the requirements for ATLAS Receiver Algorithms were successfully verified during Run for the Record Testing in December 2015. This poster will describe the performance of the ATLAS Flight Science Receiver Algorithms during the Run for Record and Comprehensive Performance Testing performed at Goddard, which will give insight into the future on-orbit performance of the Algorithms. See the companion poster (Carabajal, et al) in this session.
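A schematic sketch of the flavor of onboard data reduction described above: histogram received photon heights within a coarse window centred on the onboard DEM value, take the histogram peak as the surface-echo candidate, and keep only a narrow band around it for telemetry. The bin size and band widths are illustrative assumptions, not the flight parameters.

import numpy as np

def select_telemetry_band(photon_heights, dem_height,
                          search_halfwidth=250.0, bin_m=5.0, band_halfwidth=30.0):
    # restrict to a coarse window about the DEM value, then histogram heights
    lo, hi = dem_height - search_halfwidth, dem_height + search_halfwidth
    in_window = photon_heights[(photon_heights >= lo) & (photon_heights <= hi)]
    edges = np.arange(lo, hi + bin_m, bin_m)
    counts, _ = np.histogram(in_window, bins=edges)
    peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    # telemeter only photons inside a narrow band about the candidate surface echo
    keep = np.abs(photon_heights - peak) <= band_halfwidth
    return photon_heights[keep], peak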
Gpm Level 1 Science Requirements: Science and Performance Viewed from the Ground
NASA Technical Reports Server (NTRS)
Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.
2016-01-01
GPM meets Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to L1 requirements. L1 FOV snow detection is largely verified, but at an unknown SWE rate threshold (likely < 0.5-1 mm/hr liquid equivalent). Work is ongoing to improve SWE rate estimation for both satellite and GV remote sensing.
ERIC Educational Resources Information Center
Sunal, Cynthia Szymanski; Karr, Charles L.; Sunal, Dennis W.
2003-01-01
Students' conceptions of three major artificial intelligence concepts used in the modeling of systems in science, fuzzy logic, neural networks, and genetic algorithms were investigated before and after a higher education science course. Students initially explored their prior ideas related to the three concepts through active tasks. Then,…
Computer vision applications for coronagraphic optical alignment and image processing.
Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A
2013-05-10
Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
The Hyper Suprime-Cam software pipeline
NASA Astrophysics Data System (ADS)
Bosch, James; Armstrong, Robert; Bickerton, Steven; Furusawa, Hisanori; Ikeda, Hiroyuki; Koike, Michitaro; Lupton, Robert; Mineo, Sogo; Price, Paul; Takata, Tadafumi; Tanaka, Masayuki; Yasuda, Naoki; AlSayyad, Yusra; Becker, Andrew C.; Coulton, William; Coupon, Jean; Garmilla, Jose; Huang, Song; Krughoff, K. Simon; Lang, Dustin; Leauthaud, Alexie; Lim, Kian-Tat; Lust, Nate B.; MacArthur, Lauren A.; Mandelbaum, Rachel; Miyatake, Hironao; Miyazaki, Satoshi; Murata, Ryoma; More, Surhud; Okura, Yuki; Owen, Russell; Swinbank, John D.; Strauss, Michael A.; Yamada, Yoshihiko; Yamanoi, Hitomi
2018-01-01
In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.
Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D
2015-04-01
Marr's levels of analysis-computational, algorithmic, and implementation-have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.
A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm
Chen, Jui-Le; Yang, Chu-Sing
2013-01-01
Predicting druggability for a particular disease by integrating biological and computer science technologies has seen success in recent years. Although computer science technologies can reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction remains unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate docking prediction. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specifically for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864
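A generic island-model migration sketch, intended only to illustrate the kind of information-exchange operator the abstract refers to; the actual FCPLDPA migration and filtering operators are not specified here, and the ring topology and migrant count are assumptions.

import numpy as np

def migrate(islands, fitness, n_migrants=2):
    """islands: list of (pop_size, n_genes) arrays; fitness: matching list of
    per-individual scores (lower is better). Copies each island's best
    individuals over the worst individuals of its ring neighbour."""
    k = len(islands)
    emigrants = [isl[np.argsort(f)[:n_migrants]].copy()
                 for isl, f in zip(islands, fitness)]
    for i in range(k):
        j = (i + 1) % k                               # ring topology
        worst = np.argsort(fitness[j])[-n_migrants:]  # slots to overwrite
        islands[j][worst] = emigrants[i]
    # fitness of the replaced individuals must be re-evaluated by the caller
    return islands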
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
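A compact sketch of the LMS update at the core of both algorithms: a transversal (FIR) filter whose weights are nudged along the negative gradient of the instantaneous squared error. In the Filtered-X variant the reference signal is first passed through a model of the secondary (servo) path; that path model is omitted in this illustrative version, and the tap count and step size are assumptions.

import numpy as np

def lms(x, d, n_taps=32, mu=0.01):
    """x: reference signal, d: desired (error-sensor) signal; both 1-D float arrays."""
    w = np.zeros(n_taps)
    y = np.zeros_like(d)
    e = np.zeros_like(d)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]         # most recent samples first
        y[n] = w @ u                      # transversal filter output
        e[n] = d[n] - y[n]                # instantaneous error
        w += 2 * mu * e[n] * u            # LMS weight update
    return w, y, e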
Valdés, Julio J; Bonham-Carter, Graeme
2006-03-01
A computational intelligence approach is used to explore the problem of detecting internal state changes in time-dependent processes described by heterogeneous, multivariate time series with imprecise data and missing values. Such processes are approximated by collections of time-dependent non-linear autoregressive models represented by a special kind of neuro-fuzzy neural network. Grid and high-throughput computing model-mining procedures based on neuro-fuzzy networks and genetic algorithms generate: (i) collections of models composed of sets of time lag terms from the time series, and (ii) prediction functions represented by neuro-fuzzy networks. The composition of the models and their prediction capabilities allows the identification of changes in the internal structure of the process. These changes are associated with the alternation of steady and transient states, zones with abnormal behavior, instability, and other situations. This approach is general, and its sensitivity for detecting subtle changes of state is revealed by simulation experiments. Its potential in the study of complex processes in earth sciences and astrophysics is illustrated with applications using paleoclimate and solar data.
Tracking Provenance of Earth Science Data
NASA Technical Reports Server (NTRS)
Tilmes, Curt; Yesha, Yelena; Halem, Milton
2010-01-01
Tremendous volumes of data have been captured, archived and analyzed. Sensors, algorithms and processing systems for transforming and analyzing the data are evolving over time. Web Portals and Services can create transient data sets on-demand. Data are transferred from organization to organization with additional transformations at every stage. Provenance in this context refers to the source of data and a record of the process that led to its current state. It encompasses the documentation of a variety of artifacts related to particular data. Provenance is important for understanding and using scientific datasets, and critical for independent confirmation of scientific results. Managing provenance throughout scientific data processing has gained interest lately and there are a variety of approaches. Large scale scientific datasets consisting of thousands to millions of individual data files and processes offer particular challenges. This paper uses the analogy of art history provenance to explore some of the concerns of applying provenance tracking to earth science data. It also illustrates some of the provenance issues with examples drawn from the Ozone Monitoring Instrument (OMI) Data Processing System (OMIDAPS) run at NASA's Goddard Space Flight Center by the first author.
Soil Moisture Active Passive Mission L4_C Data Product Assessment (Version 2 Validated Release)
NASA Technical Reports Server (NTRS)
Kimball, John S.; Jones, Lucas A.; Glassy, Joseph; Stavros, E. Natasha; Madani, Nima; Reichle, Rolf H.; Jackson, Thomas; Colliander, Andreas
2016-01-01
The SMAP satellite was successfully launched January 31st 2015, and began acquiring Earth observation data following in-orbit sensor calibration. Global data products derived from the SMAP L-band microwave measurements include Level 1 calibrated and geolocated radiometric brightness temperatures, Level 23 surface soil moisture and freezethaw geophysical retrievals mapped to a fixed Earth grid, and model enhanced Level 4 data products for surface to root zone soil moisture and terrestrial carbon (CO2) fluxes. The post-launch SMAP mission CalVal Phase had two primary objectives for each science product team: 1) calibrate, verify, and improve the performance of the science algorithms, and 2) validate accuracies of the science data products as specified in the L1 science requirements. This report provides analysis and assessment of the SMAP Level 4 Carbon (L4_C) product pertaining to the validated release. The L4_C validated product release effectively replaces an earlier L4_C beta-product release (Kimball et al. 2015). The validated release described in this report incorporates a longer data record and benefits from algorithm and CalVal refinements acquired during the SMAP post-launch CalVal intensive period. The SMAP L4_C algorithms utilize a terrestrial carbon flux model informed by SMAP soil moisture inputs along with optical remote sensing (e.g. MODIS) vegetation indices and other ancillary biophysical data to estimate global daily net ecosystem CO2 exchange (NEE) and component carbon fluxes for vegetation gross primary production (GPP) and ecosystem respiration (Reco). Other L4_C product elements include surface (10 cm depth) soil organic carbon (SOC) stocks and associated environmental constraints to these processes, including soil moisture and landscape freeze/thaw (FT) controls on GPP and respiration (Kimball et al. 2012). The L4_C product encapsulates SMAP carbon cycle science objectives by: 1) providing a direct link between terrestrial carbon fluxes and underlying FT and soil moisture constraints to these processes, 2) documenting primary connections between terrestrial water, energy and carbon cycles, and 3) improving understanding of terrestrial carbon sink activity in northern ecosystems. There are no L1 science requirements for the L4_C product; however self-imposed requirements have been established focusing on NEE as the primary product field for validation, and on demonstrating L4_C accuracy and success in meeting product science requirements (Jackson et al. 2012). The other L4_C product fields also have strong utility for carbon science applications; however, analysis of these other fields is considered secondary relative to primary validation activities focusing on NEE. The L4_C targeted accuracy requirements are to meet or exceed a mean unbiased accuracy (ubRMSE) for NEE of 1.6 g C/sq m/d or 30 g C/sq m/yr, emphasizing northern (45N) boreal and arctic ecosystems; this is similar to the estimated accuracy level of in situ tower eddy covariance measurement-based observations (Baldocchi 2008).
D Data Acquisition Based on Opencv for Close-Range Photogrammetry Applications
NASA Astrophysics Data System (ADS)
Jurjević, L.; Gašparović, M.
2017-05-01
Development of the technology in the area of the cameras, computers and algorithms for 3D the reconstruction of the objects from the images resulted in the increased popularity of the photogrammetry. Algorithms for the 3D model reconstruction are so advanced that almost anyone can make a 3D model of photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for the purposes of the close-range photogrammetry applications, based on the open source technologies. All steps of obtaining 3D point cloud are covered in this paper. Special attention is given to the camera calibration, for which two-step process of calibration is used. Both, presented algorithm and accuracy of the point cloud are tested by calculating the spatial difference between referent and produced point clouds. During algorithm testing, robustness and swiftness of obtaining 3D data is noted, and certainly usage of this and similar algorithms has a lot of potential in the real-time application. That is the reason why this research can find its application in the architecture, spatial planning, protection of cultural heritage, forensic, mechanical engineering, traffic management, medicine and other sciences.
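A hedged sketch of the camera-calibration step using standard OpenCV calls on chessboard images, of the kind the two-step calibration above builds on; the board geometry (9 x 6 inner corners, 25 mm squares) and file paths are illustrative assumptions.

import glob
import cv2
import numpy as np

pattern = (9, 6)                       # inner corners per row/column (assumed board)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0  # mm

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    size = gray.shape[::-1]

# rms reprojection error, intrinsic matrix K, and distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error:", rms)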
Using human brain activity to guide machine learning.
Fong, Ruth C; Scheirer, Walter J; Cox, David D
2018-03-29
Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.
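A highly simplified caricature of the neural-weighting mechanism: per-example weights derived from brain data scale each example's contribution to a classification loss, and no neural data is needed at test time. The mapping from fMRI scores to weights and the use of logistic regression are illustrative assumptions, not the paper's formulation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_neurally_weighted(features, labels, neural_scores, alpha=1.0):
    # map neural scores to positive sample weights; alpha controls their influence
    span = np.ptp(neural_scores) + 1e-12
    w = 1.0 + alpha * (neural_scores - neural_scores.min()) / span
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels, sample_weight=w)
    return clf          # at test time the classifier uses image features only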
Citizen science: A new perspective to evaluate spatial patterns in hydrology.
NASA Astrophysics Data System (ADS)
Koch, J.; Stisen, S.
2016-12-01
Citizen science opens new pathways that can complement traditional scientific practice. Intuition and reasoning make humans often more effective than computer algorithms in various realms of problem solving. In particular, a simple visual comparison of spatial patterns is a task where humans are often considered to be more reliable than computer algorithms. However, in practice, science still largely depends on computer based solutions, which is inevitable giving benefits such as speed and the possibility to automatize processes. This study highlights the integration of the generally underused human resource into hydrology. We established a citizen science project on the zooniverse platform entitled Pattern Perception. The aim is to employ the human perception to rate similarity and dissimilarity between simulated spatial patterns of a hydrological catchment model. In total, the turnout counts more than 2,800 users that provided over 46,000 classifications of 1,095 individual subjects within 64 days after the launch. Each subject displays simulated spatial patterns of land-surface variables of a baseline model and six modelling scenarios. The citizen science data discloses a numeric pattern similarity score for each of the scenarios with respect to the reference. We investigate the capability of a set of innovative statistical performance metrics to mimic the human perception to distinguish between similarity and dissimilarity. Results suggest that more complex metrics are not necessarily better at emulating the human perception, but clearly provide flexibility and auxiliary information that is valuable for model diagnostics. The metrics clearly differ in their ability to unambiguously distinguish between similar and dissimilar patterns which is regarded a key feature of a reliable metric.
Heat-bath algorithmic cooling with correlated qubit-environment interactions
NASA Astrophysics Data System (ADS)
Rodríguez-Briones, Nayeli A.; Li, Jun; Peng, Xinhua; Mor, Tal; Weinstein, Yossi; Laflamme, Raymond
2017-11-01
Cooling techniques are essential to understand fundamental thermodynamic questions of the low-energy states of physical systems, furthermore they are at the core of practical applications of quantum information science. In quantum computing, this controlled preparation of highly pure quantum states is required from the state initialization of most quantum algorithms to a reliable supply of ancilla qubits that satisfy the fault-tolerance threshold for quantum error correction. Heat-bath algorithmic cooling has been shown to purify qubits by controlled redistribution of entropy and multiple contact with a bath, not only for ensemble implementations but also for technologies with strong but imperfect measurements. In this paper, we show that correlated relaxation processes between the system and environment during rethermalization when we reset hot ancilla qubits, can be exploited to enhance purification. We show that a long standing upper bound on the limits of algorithmic cooling Schulman et al (2005 Phys. Rev. Lett. 94, 120501) can be broken by exploiting these correlations. We introduce a new tool for cooling algorithms, which we call ‘state-reset’, obtained when the coupling to the environment is generalized from individual-qubits relaxation to correlated-qubit relaxation. Furthermore, we present explicit improved cooling algorithms which lead to an increase of purity beyond all the previous work, and relate our results to the Nuclear Overhauser Effect.
Bayesian Methods and Universal Darwinism
NASA Astrophysics Data System (ADS)
Campbell, John
2009-12-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent Champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the Operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that Systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the Operations of a succession of Darwinian processes.
The Chandra Source Catalog: Processing and Infrastructure
NASA Astrophysics Data System (ADS)
Evans, Janet; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Hall, Diane M.; Miller, Joseph B.; Plummer, David A.; Zografou, Panagoula; Primini, Francis A.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.
2009-09-01
Chandra Source Catalog processing recalibrates each observation using the latest available calibration data, and employs a wavelet-based source detection algorithm to identify all the X-ray sources in the field of view. Source properties are then extracted for each detected source that is a candidate for inclusion in the catalog. Catalog processing is completed by matching sources across multiple observations, merging common detections, and applying quality assurance checks. The Chandra Source Catalog processing system shares a common processing infrastructure and utilizes much of the functionality that is built into the Standard Data Processing (SDP) pipeline system that provides calibrated Chandra data to end-users. Other key components of the catalog processing system have been assembled from the portable CIAO data analysis package. Minimal new software tool development has been required to support the science algorithms needed for catalog production. Since processing pipelines must be instantiated for each detected source, the number of pipelines that are run during catalog construction is a factor of order 100 times larger than for SDP. The increased computational load and the inherently parallel nature of the processing are handled by distributing the workload across a multi-node Beowulf cluster. Modifications to the SDP automated processing application to support catalog processing, and extensions to Chandra Data Archive software to ingest and retrieve catalog products, complete the upgrades to the infrastructure to support catalog processing.
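The fan-out of many independent per-source pipelines can be pictured with the minimal Python sketch below, which uses a single-node process pool as a stand-in for the multi-node Beowulf cluster; the run_source_pipeline function and the fabricated source list are hypothetical placeholders, not the actual SDP or CIAO tools.

```python
from multiprocessing import Pool

def run_source_pipeline(source):
    """Hypothetical stand-in for one per-source catalog pipeline:
    compute properties for a single detected source."""
    # A real pipeline would call calibration and extraction tools here;
    # this toy version just sums the per-band counts.
    return {"source_id": source["id"], "net_counts": sum(source["counts"])}

if __name__ == "__main__":
    # A detection step would normally produce this list; it is fabricated here.
    detected_sources = [{"id": i, "counts": [i, i + 1, i + 2]} for i in range(1000)]
    with Pool(processes=8) as pool:
        results = pool.map(run_source_pipeline, detected_sources)
    print(f"ran {len(results)} per-source pipelines")
```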
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.
2015-01-01
The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign was in July 2014, based at NASA Langley Research Center (LaRC) in Hampton, VA, on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES), funded by NASA's Instrument Incubator Program (IIP). The second campaign was in August 2014, based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA, on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm in real time on-board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.
JPRS Report, Science & Technology, USSR: Electronics & Electrical Engineering.
1988-02-05
Sirena-1 Self-propelled Flaw Detector [PRIBORY I SISTEMY UPRAVLENIYA, Jan 87] 14 Crane Strain-measurement Scales With Data Processing by a Microprocessor...was 3-5 m. 06415/06662 UDC 620.179.1:620.165.29 Algorithmization of Control of the Electric Motor Drive of the Sirena-1 Self-propelled Flaw Detector...The article describes one of the optimal algorithms for control of the electric motor drive of the Sirena-1 self-propelled flaw detector
Review of Methods and Algorithms for Dynamic Management of CBRNE Collection Assets
2013-07-01
where they should be looking. An example sensor is a satellite with a limited energy budget, which may have power to operate, say, only 10 percent of...calculations by incorporating sensor data with initial dispersion estimates.1 DTRA and the Joint Science and Technology Office for Chem- Bio Defense (JSTO-CBD...detection performance through remote processing and fusion of sensor data and modeling of the operational environment. DTRA is actively developing
Science of Decision Making: A Data-Modeling Approach
2013-10-01
were separated on a capillary column using the Dionex UltiMate 3000 (Sunnyvale, CA). The resolved peptides were then sprayed into a linear ion trap...database (3–5). These algorithms assign a peptide sequence, along with a matching score of the experimental ion product mass spectrum, to a theoretical ion ...Bacterial Sample Processing Samples were prepared for liquid chromatography (LC) tandem MS (LC– MS/MS) in a similar manner to that previously reported
The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update
NASA Astrophysics Data System (ADS)
Flatley, T.
2012-12-01
SpaceCube is an FPGA-based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset-mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.
Computational thinking in life science education.
Rubinstein, Amir; Chor, Benny
2014-11-01
We join the increasing call to take computational education of life science students a step further, beyond teaching mere programming and employing existing software tools. We describe a new course, focusing on enriching the curriculum of life science students with abstract, algorithmic, and logical thinking, and exposing them to the computational "culture." The design, structure, and content of our course are influenced by recent efforts in this area, collaborations with life scientists, and our own instructional experience. Specifically, we suggest that an effective course of this nature should: (1) devote time to explicitly reflect upon computational thinking processes, resisting the temptation to drift to purely practical instruction, (2) focus on discrete notions, rather than on continuous ones, and (3) have basic programming as a prerequisite, so students need not be preoccupied with elementary programming issues. We strongly recommend that the mere use of existing bioinformatics tools and packages should not replace hands-on programming. Yet, we suggest that programming will mostly serve as a means to practice computational thinking processes. This paper deals with the challenges and considerations of such computational education for life science students. It also describes a concrete implementation of the course and encourages its use by others.
An intelligent algorithm for autonomous scientific sampling with the VALKYRIE cryobot
NASA Astrophysics Data System (ADS)
Clark, Evan B.; Bramall, Nathan E.; Christner, Brent; Flesher, Chris; Harman, John; Hogan, Bart; Lavender, Heather; Lelievre, Scott; Moor, Joshua; Siegel, Vickie
2018-07-01
The development of algorithms for agile science and autonomous exploration has been pursued in contexts ranging from spacecraft to planetary rovers to unmanned aerial vehicles to autonomous underwater vehicles. In situations where time, mission resources and communications are limited and the future state of the operating environment is unknown, the capability of a vehicle to dynamically respond to changing circumstances without human guidance can substantially improve science return. Such capabilities are difficult to achieve in practice, however, because they require intelligent reasoning to utilize limited resources in an inherently uncertain environment. Here we discuss the development, characterization and field performance of two algorithms for autonomously collecting water samples on VALKYRIE (Very deep Autonomous Laser-powered Kilowatt-class Yo-yoing Robotic Ice Explorer), a glacier-penetrating cryobot deployed to the Matanuska Glacier, Alaska (Mission Control location: 61°42'09.3''N 147°37'23.2''W). We show performance on par with human performance across a wide range of mission morphologies using simulated mission data, and demonstrate the effectiveness of the algorithms at autonomously collecting samples with high relative cell concentration during field operation. The development of such algorithms will help enable autonomous science operations in environments where constant real-time human supervision is impractical, such as penetration of ice sheets on Earth and high-priority planetary science targets like Europa.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements; this was made possible by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies, larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also includes various test equipment used to reproduce typical signals that a radiometer may see, both with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as showing that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis detector is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis detectors is compared. Performance evaluations and comparisons, in both simulation and experimental hardware implementations, were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate through onboard processing in addition to improving RFI detection performance.
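As a rough illustration of the detection principle (not the SMAP or ROACH implementation), the Python sketch below computes a normalized fourth moment of complex samples per block and flags blocks that deviate from the Gaussian expectation of 2; the block size, tolerance and injected continuous-wave interferer are illustrative assumptions.

```python
import numpy as np

def complex_kurtosis(z):
    """Normalized fourth moment of complex samples.
    For circular complex Gaussian noise the expected value is 2."""
    m2 = np.mean(np.abs(z) ** 2)
    m4 = np.mean(np.abs(z) ** 4)
    return m4 / m2 ** 2

def detect_rfi(z, block=1024, tol=0.15):
    """Flag blocks whose kurtosis deviates from the Gaussian value by more than tol."""
    flags = []
    for i in range(len(z) // block):
        k = complex_kurtosis(z[i * block:(i + 1) * block])
        flags.append(abs(k - 2.0) > tol)
    return np.array(flags)

rng = np.random.default_rng(0)
n = 1 << 16
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
cw_rfi = np.exp(2j * np.pi * 0.05 * np.arange(n))     # strong CW interferer (~0 dB INR)
print(f"flagged blocks, clean data:  {detect_rfi(noise).mean():.1%}")
print(f"flagged blocks, noise + RFI: {detect_rfi(noise + cw_rfi).mean():.1%}")
```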
Study of cache performance in distributed environment for data processing
NASA Astrophysics Data System (ADS)
Makatun, Dzmitry; Lauret, Jérôme; Šumbera, Michal
2014-06-01
Processing data in a distributed environment has found application in many fields of science (Nuclear and Particle Physics (NPP), astronomy, and biology, to name only a few). Efficiently transferring data between sites is an essential part of such processing. The implementation of caching strategies in data transfer software and tools, such as the Reasoner for Intelligent File Transfer (RIFT) being developed in the STAR collaboration, can significantly decrease network load and waiting time by reusing knowledge of data provenance as well as data placed in the transfer cache, further expanding the availability of sources for files and data-sets. Though a great variety of caching algorithms is known, a study is needed to evaluate which one delivers the best data-access performance under realistic demand patterns. Records of access to the complete data-sets of NPP experiments were analyzed and used as input for computer simulations. A series of simulations was done to estimate the possible cache hits and cache hits per byte for known caching algorithms. The simulations covered cache sizes in the interval 0.001-90% of the complete data-set and low-watermarks within 0-90%. Records of data access were taken from several experiments and different time intervals in order to validate the results. In this paper, we discuss different data caching strategies, from canonical algorithms to hybrid cache strategies, present the results of our simulations for the diverse algorithms, and identify the best algorithm in the context of physics data analysis in NPP. While the results of these studies have been implemented in RIFT, they can also be used when setting up a cache in any other computational workflow (Cloud processing, for example) or when managing data storage with partial replicas of the entire data-set.
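A minimal Python sketch of the kind of trace-driven simulation described above is given below: it replays a fabricated access trace through an LRU cache of a given byte capacity and reports the hit ratio and the bytes served from cache per byte of capacity. The trace, file sizes and the choice of LRU are illustrative assumptions, not the experiment records or the hybrid strategies studied in the paper.

```python
import random
from collections import OrderedDict

def simulate_lru(trace, sizes, capacity_bytes):
    """Replay an access trace (sequence of file names) through an LRU cache of
    the given byte capacity; return hit ratio and bytes-hit per byte of cache."""
    cache = OrderedDict()                 # file -> size, most recently used last
    used = hits = bytes_hit = 0
    for f in trace:
        if f in cache:
            hits += 1
            bytes_hit += sizes[f]
            cache.move_to_end(f)
        else:
            cache[f] = sizes[f]
            used += sizes[f]
            while used > capacity_bytes:  # evict least recently used entries
                _, evicted_size = cache.popitem(last=False)
                used -= evicted_size
    return hits / len(trace), bytes_hit / capacity_bytes

# Fabricated trace and file sizes standing in for real experiment access logs.
random.seed(1)
sizes = {f"file{i}": random.randint(1, 100) * 2**20 for i in range(500)}
trace = [f"file{random.randint(0, 499)}" for _ in range(20000)]
for frac in (0.01, 0.1, 0.5):
    cap = int(frac * sum(sizes.values()))
    hit_ratio, bytes_per_cache_byte = simulate_lru(trace, sizes, cap)
    print(f"cache = {frac:.0%} of data-set: hit ratio {hit_ratio:.2f}, "
          f"bytes served per cache byte {bytes_per_cache_byte:.1f}")
```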
NASA Astrophysics Data System (ADS)
Dell'Acqua, Fabio; Iannelli, Gianni Cristian; Kerekes, John; Lisini, Gianni; Moser, Gabriele; Ricardi, Niccolo; Pierce, Leland
2016-08-01
The lack of homogeneity in the performance assessment of proposed information-extraction algorithms is a recognized issue in the Earth Observation (EO) domain as well. Different authors test their algorithms on different datasets, and it is frequently difficult for the reader to judge which algorithm is better for a specific application, because the wide variability in test sets makes a direct comparison of, e.g., accuracy values less meaningful than one would desire. With our work, we make a modest contribution toward easing the problem by making it possible to automatically distribute a limited set of "standard" open datasets, together with ground-truth information, and to automatically assess the processing results submitted by users.
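Automatic assessment of user-submitted results against ground truth can be as simple as a confusion matrix and overall accuracy, as in the hedged Python sketch below; the class labels and sample values are fabricated for illustration and do not correspond to any of the distributed datasets.

```python
import numpy as np

def assessment_report(reference, predicted, labels):
    """Confusion matrix and overall accuracy of a classification result
    compared against reference (ground-truth) labels."""
    index = {c: i for i, c in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for r, p in zip(reference, predicted):
        cm[index[r], index[p]] += 1
    overall = np.trace(cm) / cm.sum()
    return cm, overall

labels = ["water", "vegetation", "urban"]
reference = ["water", "urban", "urban", "vegetation", "water", "urban"]
predicted = ["water", "urban", "vegetation", "vegetation", "water", "urban"]
cm, accuracy = assessment_report(reference, predicted, labels)
print(cm)
print(f"overall accuracy: {accuracy:.2f}")
```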
Chandra mission scheduling on-orbit experience
NASA Astrophysics Data System (ADS)
Bucher, Sabina; Williams, Brent; Pendexter, Misty; Balke, David
2008-07-01
Scheduling observatory time to maximize both day-to-day science target integration time and the lifetime of the observatory is a formidable challenge. Furthermore, it is not a static problem. Of course, every schedule brings a new set of observations, but the boundaries of the problem change as well. As a spacecraft ages, its capabilities may degrade. As in-flight experience grows, capabilities may expand. As observing programs are completed, the needs and expectations of the science community may evolve. Changes such as these impact the rules by which a mission is scheduled. In eight years on orbit, the Chandra X-Ray Observatory Mission Planning process has adapted to meet the challenge of maximizing day-to-day and mission lifetime science return, despite a consistently evolving set of scheduling constraints. The success of the planning team has been achieved, not through the use of complex algorithms and optimization routines, but through processes and home-grown tools that help individuals make smart short-term and long-term Mission Planning decisions. This paper walks through the processes and tools used to plan and produce mission schedules for the Chandra X-Ray Observatory. Nominal planning and scheduling, target of opportunity response, and recovery from on-board autonomous safing actions are all addressed. Evolution of tools and processes, best practices, and lessons learned are highlighted along the way.
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
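One common way to recover such quantities, sketched below under simplifying assumptions, is to fit a modeled surface-pressure distribution to the measured port pressures by nonlinear least squares. The port layout, the modified-Newtonian pressure model and the synthetic measurements in this Python example are hypothetical stand-ins and do not reproduce the actual MEADS geometry or flight algorithms.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical port layout: outward unit normals of seven forebody pressure
# ports (cone and clock angles); this is NOT the real MEADS geometry.
cone = np.deg2rad([0, 10, 10, 20, 20, 30, 30])
clock = np.deg2rad([0, 0, 180, 90, 270, 0, 180])
normals = np.stack([np.cos(cone),
                    np.sin(cone) * np.sin(clock),
                    np.sin(cone) * np.cos(clock)], axis=1)

def flow_direction(alpha, beta):
    """Unit vector of the relative wind in body axes."""
    return np.array([np.cos(alpha) * np.cos(beta),
                     np.sin(beta),
                     np.sin(alpha) * np.cos(beta)])

def modeled_pressures(params):
    """Modified-Newtonian surface pressure at each port."""
    alpha, beta, qc, p_inf = params
    cos_inc = normals @ flow_direction(alpha, beta)
    return p_inf + qc * np.clip(cos_inc, 0.0, None) ** 2

# Synthesize 'measured' pressures for a known state, then recover it by fitting.
truth = np.array([np.deg2rad(11.0), np.deg2rad(-1.5), 6000.0, 500.0])
rng = np.random.default_rng(2)
measured = modeled_pressures(truth) + rng.normal(0.0, 5.0, size=7)

fit = least_squares(lambda x: modeled_pressures(x) - measured,
                    x0=[0.0, 0.0, 4000.0, 300.0])
alpha, beta, qc, p_inf = fit.x
print(f"alpha = {np.degrees(alpha):.2f} deg, beta = {np.degrees(beta):.2f} deg, "
      f"q*Cp_max = {qc:.0f} Pa, p_inf = {p_inf:.0f} Pa")
```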
Essential Autonomous Science Inference on Rovers (EASIR)
NASA Technical Reports Server (NTRS)
Roush, Ted L.; Shipman, Mark; Morris, Robert; Gazis, Paul; Pedersen, Liam
2003-01-01
Existing constraints on time, computational, and communication resources associated with Mars rover missions suggest that on-board science evaluation of sensor data can contribute to decreasing human-directed operational planning, optimizing returned science data volumes, and recognizing unique or novel data, all of which increase the scientific return from a mission. Many different levels of science autonomy exist, and each impacts the data collected and returned by, and the activities of, rovers. Several computational algorithms, designed to recognize objects of interest to geologists and biologists, are discussed. The algorithms represent various functions that produce scientific opinions, and several scenarios illustrate how these opinions can be used.
ICASE Computer Science Program
NASA Technical Reports Server (NTRS)
1985-01-01
The Institute for Computer Applications in Science and Engineering computer science program is discussed in outline form. Information is given on such topics as problem decomposition, algorithm development, programming languages, and parallel architectures.
A cross-disciplinary introduction to quantum annealing-based algorithms
NASA Astrophysics Data System (ADS)
Venegas-Andraca, Salvador E.; Cruz-Santos, William; McGeoch, Catherine; Lanzagorta, Marco
2018-04-01
A central goal in quantum computing is the development of quantum hardware and quantum algorithms in order to analyse challenging scientific and engineering problems. Research in quantum computation involves contributions from both physics and computer science; hence this article presents a concise introduction to basic concepts from both fields that are used in annealing-based quantum computation, an alternative to the more familiar quantum gate model. We introduce some concepts from computer science required to define difficult computational problems and to realise the potential relevance of quantum algorithms to find novel solutions to those problems. We introduce the structure of quantum annealing-based algorithms as well as two examples of this kind of algorithm for solving instances of the max-SAT and Minimum Multicut problems. An overview of the quantum annealing systems manufactured by D-Wave Systems is also presented.
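To give a concrete, if classical, flavor of annealing-based optimization, the Python sketch below encodes a small max-cut instance as an Ising energy and minimizes it with simulated annealing. This is a classical stand-in used for illustration only; it does not use D-Wave hardware or its programming interface, and the graph and cooling schedule are arbitrary assumptions.

```python
import math
import random

# A small max-cut instance encoded as an Ising problem: spins s_i in {-1, +1},
# energy E(s) = sum over edges (i, j) of s_i * s_j; minimizing E maximizes the cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 4), (3, 4)]
n_spins = 5

def energy(s):
    return sum(s[i] * s[j] for i, j in edges)

def simulated_annealing(steps=5000, t_start=3.0, t_end=0.01, seed=0):
    """Classical simulated annealing used here as a stand-in for a quantum annealer."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n_spins)]
    e = energy(s)
    best_s, best_e = list(s), e
    for k in range(steps):
        t = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling schedule
        i = rng.randrange(n_spins)
        s[i] = -s[i]                                     # propose a single spin flip
        e_new = energy(s)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new
            if e < best_e:
                best_s, best_e = list(s), e
        else:
            s[i] = -s[i]                                 # reject the flip
    return best_s, best_e

spins, e_min = simulated_annealing()
cut = sum(1 for i, j in edges if spins[i] != spins[j])
print(f"spins: {spins}, energy: {e_min}, edges cut: {cut} of {len(edges)}")
```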
NASA Technical Reports Server (NTRS)
Stone, M. S.; Mcadam, P. L.; Saunders, O. W.
1977-01-01
The results of a four-month study to design a hybrid analog/digital receiver for outer planet mission probe communication links are presented. The scope of this study includes functional design of the receiver; comparisons between analog and digital processing; hardware tradeoffs for key components including frequency generators, A/D converters, and digital processors; development and simulation of the processing algorithms for acquisition, tracking, and demodulation; and detailed design of the receiver in order to determine its size, weight, power, reliability, and radiation hardness. In addition, an evaluation was made of the receiver's capability to accurately measure signal strength and frequency for radio science missions.
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be addressed with increasingly smart problem-specific knowledge, possibly embedded in general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz
2004-01-01
TRMM has been an eminently successful mission from an engineering standpoint, but even more so from a science standpoint. An important part of this science success has been the careful quality control of the TRMM standard products. This paper will present the quality monitoring efforts that the TRMM Science Data and Information System (TSDIS) conducts on a routine basis. The paper will detail parameter trending, geolocation quality control, and the procedures that support the preparation of the next versions of the algorithms used for reprocessing.
High-speed Particle Image Velocimetry Near Surfaces
Lu, Louise; Sick, Volker
2013-01-01
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included. PMID:23851899
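The core displacement estimate in planar PIV is a cross-correlation of interrogation windows; a minimal Python/NumPy sketch using FFT-based circular correlation on synthetic frames is shown below. Real PIV processing adds window weighting, sub-pixel peak fitting and vector validation, which are omitted here, and the synthetic images stand in for actual particle images.

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Integer-pixel displacement between two interrogation windows, from the
    peak of their FFT-based circular cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above N/2 correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic 'particle images': frame B is frame A shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))
print("estimated displacement (dy, dx):", piv_displacement(frame_a, frame_b))
```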
On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment
NASA Astrophysics Data System (ADS)
Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.
2004-12-01
The Autonomous Sciencecraft Experiment (ASE) is operating on-board Earth Observing-1 (EO-1) with the Hyperion hyper-spectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This has application to the study of cryospheres on Earth, Mars and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], has been tested on-board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data could be used, with dark image subtraction and gain factors applied but not full radiometric calibration. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on-board. The cryosphere algorithm was developed to classify snow, water, ice and land, using six Hyperion bands at 427, 559, 661, 864, 1245 and 1649 nm. Of these, only the 427 nm band overlaps with those used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of the SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5 data, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm For EO-1 Hyperion Imagery, SPIE 17, 2003.
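The band-threshold style of such a classifier can be illustrated with the toy Python sketch below, which uses an NDSI-like snow index from the 559 and 1649 nm bands and a simple NIR water test; the thresholds and reflectance values are invented for illustration and are not the actual ASE on-board classifier.

```python
def classify_pixel(r559, r864, r1649):
    """Toy per-pixel classifier using three of the six bands: low NIR
    reflectance flags open water, and an NDSI-like index separates
    snow/ice from land. Thresholds are illustrative only."""
    if r864 < 0.05:
        return "water"
    ndsi = (r559 - r1649) / (r559 + r1649 + 1e-6)
    return "snow/ice" if ndsi > 0.4 else "land"

# Fabricated reflectance triplets for the 559, 864 and 1649 nm bands.
samples = {"bright snow": (0.80, 0.70, 0.10),
           "lake":        (0.06, 0.02, 0.01),
           "bare ground": (0.15, 0.25, 0.30)}
for name, bands in samples.items():
    print(f"{name:12s} -> {classify_pixel(*bands)}")
```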
Data Compression for Maskless Lithography Systems: Architecture, Algorithms and Implementation
2008-05-19
Data Compression for Maskless Lithography Systems: Architecture, Algorithms and Implementation. Vito Dai, Electrical Engineering and Computer Sciences. [Dissertation front matter: title page, permission notice, copyright 2008 by Vito Dai, and abstract heading fragments.]
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows (Invited)
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-12-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues’ expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable “software appliance” to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish “talkoot” (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a “science story” in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using “service casts” and “interest casts” (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH’s Mining Workflow Composer and the open-source Active BPEL engine, and JPL’s SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the “sociological” problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
Passive coherent location system simulation and evaluation
NASA Astrophysics Data System (ADS)
Slezák, Libor; Kvasnička, Michael; Pelant, Martin; Vávra, Jiří; Plšek, Radek
2006-02-01
Passive Coherent Location (PCL) is becoming an important and promising approach to the passive location of non-cooperative and stealth targets. It exploits transmitters of opportunity as sources of illumination. PCL is intended to be part of the mobile Air Command and Control System (ACCS) as a Deployable ACCS Component (DAC). The company ERA has been working on a PCL system parameter verification program, based on the development of a complete PCL simulator, since 2003; the Czech DoD participates financially in this program. The moving-target scenario, RCS calculation by the method of moments, ground clutter scattering and the signal processing method (the bottleneck of PCL) are currently available in the simulator tool. The digital signal processing (DSP) algorithms are run both on simulated data and on real data measured by the NATO C3 Agency in its The Hague experiment. The Institute of Information Theory and Automation of the Academy of Sciences of the Czech Republic takes part in the implementation of the DSP algorithms in FPGA. The paper describes the simulator and the signal processing structure, and presents results on both simulated and measured data.
DES Science Portal: Computing Photometric Redshifts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gschwend, Julia
An important challenge facing photometric surveys for cosmological purposes, such as the Dark Energy Survey (DES), is the need to produce reliable photometric redshifts (photo-z). The choice of adequate algorithms and configurations and the maintenance of an up-to-date spectroscopic database to build training sets, for example, are challenging tasks when dealing with large amounts of data that are regularly updated and constantly growing. In this paper, we present the first of a series of tools developed by DES, provided as part of the DES Science Portal, an integrated web-based data portal developed to facilitate the scientific analysis of the data, while ensuring the reproducibility of the analysis. We present the DES Science Portal photometric redshift tools, starting from the creation of a spectroscopic sample to training the neural network photo-z codes, to the final estimation of photo-zs for a large photometric catalog. We illustrate this operation by calculating well-calibrated photo-zs for a galaxy sample extracted from the DES first year (Y1A1) data. The series of processes mentioned above is run entirely within the Portal environment, which automatically produces validation metrics, and maintains the provenance between the different steps. This system allows us to fine-tune the many steps involved in the process of calculating photo-zs, making sure that we do not lose the information on the configurations and inputs of the previous processes. By matching the DES Y1A1 photometry to a spectroscopic sample, we define different training sets that we use to feed the photo-z algorithms already installed at the Portal. Finally, we validate the results under several conditions, including the case of a sample limited to i<22.5 with the color properties close to the full DES Y1A1 photometric data. In this way we compare the performance of multiple methods and training configurations. The infrastructure presented here is an efficient way to test several methods of calculating photo-zs and use them to create different catalogs for portal science workflows.
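The train-then-estimate pattern described above can be sketched in a few lines of Python with scikit-learn; the fabricated magnitudes, the synthetic redshift relation and the use of MLPRegressor are assumptions for illustration and are not the neural-network photo-z codes installed in the DES Portal.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fabricated training data: griz magnitudes with an invented mapping to
# redshift, standing in for DES photometry matched to spectroscopic redshifts.
rng = np.random.default_rng(0)
n = 5000
mags = rng.uniform(18.0, 24.0, size=(n, 4))        # g, r, i, z magnitudes
colors = np.diff(mags, axis=1)                      # adjacent-band colors
spec_z = 0.1 + 0.3 * np.abs(colors).sum(axis=1) + rng.normal(0.0, 0.02, n)

X = np.hstack([mags, colors])
X_train, X_test, z_train, z_test = train_test_split(X, spec_z, random_state=1)

# Train a neural-network regressor on the 'spectroscopic' sample ...
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=1))
model.fit(X_train, z_train)
# ... then estimate photo-zs for the held-out 'photometric' sample.
photo_z = model.predict(X_test)
print(f"photo-z scatter on held-out galaxies: {np.std(photo_z - z_test):.3f}")
```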
Translational medicine: science or wishful thinking?
Wehling, Martin
2008-01-01
"Translational medicine" as a fashionable term is being increasingly used to describe the wish of biomedical researchers to ultimately help patients. Despite increased efforts and investments into R&D, the output of novel medicines has been declining dramatically over the past years. Improvement of translation is thought to become a remedy as one of the reasons for this widening gap between input and output is the difficult transition between preclinical ("basic") and clinical stages in the R&D process. Animal experiments, test tube analyses and early human trials do simply not reflect the patient situation well enough to reliably predict efficacy and safety of a novel compound or device. This goal, however, can only be achieved if the translational processes are scientifically backed up by robust methods some of which still need to be developed. This mainly relates to biomarker development and predictivity assessment, biostatistical methods, smart and accelerated early human study designs and decision algorithms among other features. It is therefore claimed that a new science needs to be developed called 'translational science in medicine'. PMID:18559092
Current Status of Japanese Global Precipitation Measurement (GPM) Research Project
NASA Astrophysics Data System (ADS)
Kachi, Misako; Oki, Riko; Kubota, Takuji; Masaki, Takeshi; Kida, Satoshi; Iguchi, Toshio; Nakamura, Kenji; Takayabu, Yukari N.
2013-04-01
The Global Precipitation Measurement (GPM) mission is a mission led by the Japan Aerospace Exploration Agency (JAXA) and the National Aeronautics and Space Administration (NASA) in collaboration with many international partners, who will provide a constellation of satellites carrying microwave radiometer instruments. The GPM Core Observatory carries the Dual-frequency Precipitation Radar (DPR), developed by JAXA and the National Institute of Information and Communications Technology (NICT), and the GPM Microwave Imager (GMI), developed by NASA. The GPM Core Observatory is scheduled to be launched in early 2014. JAXA also provides the Global Change Observation Mission (GCOM) 1st - Water (GCOM-W1) satellite, named "SHIZUKU," as one of the constellation satellites. The SHIZUKU satellite was launched on 18 May 2012 from JAXA's Tanegashima Space Center, and public data release of the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the SHIZUKU satellite was planned for Level 1 products in January 2013 and for Level 2 products, including precipitation, in May 2013. The Japanese GPM research project conducts scientific activities on algorithm development, ground validation, and application research, including the production of research products. In addition, we promote collaborative studies in Japan and other Asian countries, as well as public relations activities to extend the base of potential users of satellite precipitation products. In the pre-launch phase, most of our activities are focused on algorithm development and the ground validation related to it. As GPM standard products, JAXA develops the DPR Level 1 algorithm, and the NASA-JAXA Joint Algorithm Team develops the DPR Level 2 and the DPR-GMI combined Level 2 algorithms. JAXA also develops the Global Rainfall Map product as a national product to distribute an hourly, 0.1-degree horizontal resolution rainfall map. All standard algorithms, including the Japan-US joint algorithms, will be reviewed by the Japan-US Joint Precipitation Measuring Mission (PMM) Science Team (JPST) before release. The DPR Level 2 algorithm has been developed by the DPR Algorithm Team led by Japan, which works under the NASA-JAXA Joint Algorithm Team. The Level 2 algorithms will provide KuPR-only products, KaPR-only products, and dual-frequency precipitation products, with estimated precipitation rate, radar reflectivity, and precipitation information such as drop size distribution and bright-band height. At-launch code was developed in December 2012. In addition, JAXA and NASA have provided synthetic DPR L1 data, and tests have been performed using them. The Japanese Global Rainfall Map algorithm for the GPM mission has been developed by the Global Rainfall Map Algorithm Development Team in Japan. The algorithm builds on the heritage of the Global Satellite Mapping for Precipitation (GSMaP) project, which was sponsored by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) framework between 2002 and 2007. The GSMaP near-real-time version and reanalysis version have been in operation at JAXA, with browse images and binary data available at the GSMaP web site (http://sharaku.eorc.jaxa.jp/GSMaP/). The GSMaP algorithm for GPM is developed in collaboration with the AMSR2 standard algorithm for the precipitation product, and their validation studies are closely related. As a JAXA GPM product, we will provide a 0.1-degree grid, hourly product for standard and near-real-time processing.
Outputs will include hourly rainfall, gauge-calibrated hourly rainfall, and several quality information (satellite information flag, time information flag, and gauge quality information) over global areas from 60°S to 60°N. At-launch code of GSMaP for GPM is under development, and will be delivered to JAXA GPM Mission Operation System by April 2013. At-launch code will include several updates of microwave imager and sounder algorithms and databases, and introduction of rain-gauge correction.
NASA Technical Reports Server (NTRS)
Berman, A. L.
1977-01-01
An algorithm was developed for the continuous and automatic computation of Doppler noise concurrently at four sample rate intervals, evenly spanning three orders of magnitude. Average temporal Doppler phase fluctuation spectra will be routinely available in the DSN tracking system Mark III-77 and require little additional processing. The basic (noise) data will be extracted from the archival tracking data file (ATDF) of the tracking data management system.
NIMBUS-7 ERB MATGEN Science Document
NASA Technical Reports Server (NTRS)
Soule, H. V.
1983-01-01
The ERB algorithms and computer software data flow used to convert sensor data into equivalent radiometric data are described in detail. The NIMBUS satellite location, orientation and sensor orientation algorithms are given. The computer housekeeping and data flow and sensor/data status algorithms are also given.
Planning Readings: A Comparative Exploration of Basic Algorithms
ERIC Educational Resources Information Center
Piater, Justus H.
2009-01-01
Conventional introduction to computer science presents individual algorithmic paradigms in the context of specific, prototypical problems. To complement this algorithm-centric instruction, this study additionally advocates problem-centric instruction. I present an original problem drawn from students' life that is simply stated but provides rich…
Theoretical computer science and the natural sciences
NASA Astrophysics Data System (ADS)
Marchal, Bruno
2005-12-01
I present some fundamental theorems in computer science and illustrate their relevance in biology and physics. I do not assume prerequisites in mathematics or computer science beyond the set N of natural numbers, functions from N to N, the use of some notational conveniences to describe functions, and, at some point, a minimal amount of linear algebra and logic. I start with Cantor's transcendental proof by diagonalization of the non-enumerability of the collection of functions from the natural numbers to the natural numbers. I explain why this proof is not entirely convincing and show how, by restricting the notion of function in terms of discrete well-defined processes, we are led to the non-algorithmic enumerability of the computable functions, but also, through Church's thesis, to the algorithmic enumerability of partial computable functions. Such a notion of function constitutes, with respect to our purpose, a crucial generalization of that concept. This will make it easy to justify deep and astonishing (counter-intuitive) incompleteness results about computers and similar machines. The modified Cantor diagonalization will provide a theory of concrete self-reference, and I illustrate it by pointing toward an elementary theory of self-reproduction (in the Amoeba's way) and cellular self-regeneration (in the flatworm Planaria's way). To make this easier, I introduce a very simple and powerful formal system known as the Schoenfinkel-Curry combinators. I will use the combinators to illustrate in a more concrete way the notions introduced above. The combinators, thanks to their low-level fine-grained design, will also make it possible to give a rough but hopefully illuminating description of the main lessons gained by the careful observation of nature, and to describe some new relations which should exist between computer science, the science of life and the science of inert matter, once some philosophical, if not theological, hypotheses are made in the cognitive sciences. In the last section, I come back to self-reference and give an exposition of its modal logics. This is used to show that theoretical computer science makes those “philosophical hypotheses” in theoretical cognitive science experimentally and mathematically testable.
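The Schoenfinkel-Curry combinators mentioned above are easy to play with in any language with first-class functions; the short Python sketch below defines S and K as curried functions and derives the identity and composition combinators from them.

```python
# Schoenfinkel-Curry combinators as curried Python functions.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

# The identity combinator can be derived as I = S K K:
I = S(K)(K)
print(I(42))                 # -> 42

# Composition B = S (K S) K, so that B f g x = f(g(x)):
B = S(K(S))(K)
double = lambda n: 2 * n
succ = lambda n: n + 1
print(B(double)(succ)(10))   # -> 22
```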
Processing techniques for digital sonar images from GLORIA.
Chavez, P.S.
1986-01-01
Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author
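The slant-range correction mentioned above can be sketched with a simple flat-seafloor geometry: the ground range is obtained by removing the water-column depth below the vehicle from the slant range, as in the Python example below. The depth value and sample spacing are illustrative assumptions; the production GLORIA processing also handles the other distortions listed in the abstract.

```python
import numpy as np

def slant_to_ground_range(slant_range_m, water_depth_m):
    """Flat-seafloor slant-range correction: remove the water-column offset
    below the towfish and return the horizontal (ground) range."""
    slant = np.asarray(slant_range_m, dtype=float)
    return np.sqrt(np.maximum(slant ** 2 - water_depth_m ** 2, 0.0))

# Example: equally spaced slant-range samples are compressed near nadir.
slant = np.arange(500.0, 3001.0, 500.0)
print(slant_to_ground_range(slant, water_depth_m=450.0).round(1))
```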
End-to-end commissioning demonstration of the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Acton, D. Scott; Towell, Timothy; Schwenker, John; Shields, Duncan; Sabatke, Erin; Contos, Adam R.; Hansen, Karl; Shi, Fang; Dean, Bruce; Smith, Scott
2007-09-01
The one-meter Testbed Telescope (TBT) has been developed at Ball Aerospace to facilitate the design and implementation of the wavefront sensing and control (WFSC) capabilities of the James Webb Space Telescope (JWST). We have recently conducted an "end-to-end" demonstration of the flight commissioning process on the TBT. This demonstration started with the Primary Mirror (PM) segments and the Secondary Mirror (SM) in random positions, traceable to the worst-case flight deployment conditions. The commissioning process detected and corrected the deployment errors, resulting in diffraction-limited performance across the entire science FOV. This paper will describe the commissioning demonstration and the WFSC algorithms used at each step in the process.
The Hyper Suprime-Cam software pipeline
Bosch, James; Armstrong, Robert; Bickerton, Steven; ...
2017-10-12
Here in this article, we describe the optical imaging data processing pipeline developed for the Subaru Telescope’s Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope’s Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.
Using quantum theory to simplify input-output processes
NASA Astrophysics Data System (ADS)
Thompson, Jayne; Garner, Andrew J. P.; Vedral, Vlatko; Gu, Mile
2017-02-01
All natural things process and transform information. They receive environmental information as input, and transform it into appropriate output responses. Much of science is dedicated to building models of such systems-algorithmic abstractions of their input-output behavior that allow us to simulate how such systems can behave in the future, conditioned on what has transpired in the past. Here, we show that classical models cannot avoid inefficiency-storing past information that is unnecessary for correct future simulation. We construct quantum models that mitigate this waste, whenever it is physically possible to do so. This suggests that the complexity of general input-output processes depends fundamentally on what sort of information theory we use to describe them.
A new physical unclonable function architecture
NASA Astrophysics Data System (ADS)
Chuang, Bai; Xuecheng, Zou; Kui, Dai
2015-03-01
This paper describes a new silicon physical unclonable function (PUF) architecture that can be fabricated in a standard CMOS process. Our proposed architecture is built using process sensors, a difference amplifier, a comparator, a voting mechanism and a diffusion algorithm circuit. Multiple identical process sensors are fabricated on the same chip. Due to manufacturing process variations, each sensor produces slightly different physical characteristic values that can be compared in order to create a digital identification for the chip. The diffusion algorithm circuit further ensures that a PUF based on the proposed architecture is able to effectively identify a population of ICs. We also improve the stability of the PUF design with respect to temporary environmental variations, such as temperature and supply voltage, with the introduction of the difference amplifier and the voting mechanism. A PUF built on the proposed architecture was fabricated in 0.18 μm CMOS technology. Experimental results show that the PUF has a good output statistical characteristic of uniform distribution and a high stability of 98.1% with respect to temperature variation from -40 to 100 °C and supply voltage variation from 1.7 to 1.9 V. Project supported by the National Natural Science Foundation of China (No. 61376031).
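The stabilizing effect of voting can be illustrated with the toy Python sketch below, which models each PUF cell as a biased comparator read with occasional noise-induced flips and takes a majority over repeated reads; the noise level, read count and Gaussian bias model are assumptions for illustration, not measurements of the fabricated chip.

```python
import random

def noisy_comparator(bias, noise=0.08, rng=random):
    """One comparator read of a PUF cell: returns the cell's intrinsic bit
    (set by process variation) but occasionally flips due to noise."""
    bit = 1 if bias > 0 else 0
    return bit ^ (1 if rng.random() < noise else 0)

def voted_bit(bias, reads=9):
    """Majority voting over repeated reads to stabilize the response bit."""
    ones = sum(noisy_comparator(bias) for _ in range(reads))
    return 1 if ones * 2 > reads else 0

random.seed(3)
biases = [random.gauss(0, 1) for _ in range(64)]      # per-cell process variation
reference = [1 if b > 0 else 0 for b in biases]
single = [noisy_comparator(b) for b in biases]
voted = [voted_bit(b) for b in biases]
err_single = sum(s != r for s, r in zip(single, reference)) / len(biases)
err_voted = sum(v != r for v, r in zip(voted, reference)) / len(biases)
print(f"bit error rate: single read {err_single:.2%}, with voting {err_voted:.2%}")
```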
Extending MODIS Cloud Top and Infrared Phase Climate Records with VIIRS and CrIS
NASA Astrophysics Data System (ADS)
Heidinger, A. K.; Platnick, S. E.; Ackerman, S. A.; Holz, R.; Meyer, K.; Frey, R.; Wind, G.; Li, Y.; Botambekov, D.
2015-12-01
The MODIS imagers on the NASA EOS Terra and Aqua satellites have generated accurate and well-used cloud climate data records for 15 years. Both missions are expected to continue until the end of this decade and perhaps beyond. The Visible and Infrared Imaging Radiometer Suite (VIIRS) imagers on the Suomi-NPP (SNPP) mission (launched in October 2011) and future NOAA Joint Polar Satellite System (JPSS) platforms are the successors for imager-based cloud climate records from polar orbiting satellites after MODIS. To ensure product continuity across a broad suite of EOS products, NASA has funded a SNPP science team to develop EOS-like algorithms that can be use with SNPP and JPSS observations, including two teams to work on cloud products. Cloud data record continuity between MODIS and VIIRS is particularly challenging due to the lack of VIIRS CO2-slicing channels, which reduces information content for cloud detection and cloud-top property products, as well as down-stream cloud optical products that rely on both. Here we report on our approach to providing continuity specifically for the MODIS/VIIRS cloud-top and infrared-derived thermodynamic phase products by combining elements of the NASA MODIS science team (MOD) and the NOAA Algorithm Working Group (AWG) algorithms. The combined approach is referred to as the MODAWG processing package. In collaboration with the NASA Atmospheric SIPS located at the University of Wisconsin Space Science and Engineering Center, the MODAWG code has been exercised on one year of SNPP VIIRS data. In addition to cloud-top and phase, MODAWG provides a full suite of cloud products that are physically consistent with MODIS and have a similar data format. Further, the SIPS has developed tools to allow use of Cross-track Infrared Sounder (CrIS) observations in the MODAWG processing that can ameliorate the loss of the CO2 absorption channels on VIIRS. Examples will be given that demonstrate the positive impact that the CrIS data can provide when combined with VIIRS for cloud height and IR-phase retrievals.
Automated Big Data Analysis in Bottom-up and Targeted Proteomics
van der Plas-Duivesteijn, Suzanne; Domański, Dominik; Smith, Derek; Borchers, Christoph; Palmblad, Magnus; Mohamme, Yassene
2014-01-01
As in other data-intensive sciences, analyzing mass spectrometry-based proteomics data involves multiple steps and diverse software using different algorithms, data formats and sizes. Besides the distributed and evolving nature of the data in online repositories, another challenge is that scientists have to deal with many steps of analysis pipelines. Documented data processing is also becoming essential for the overall reproducibility of results. Thanks to various e-Science initiatives, scientific workflow engines have become a means for automated, sharable and reproducible data processing. While these are designed as general tools, they can be employed to solve different challenges that we face in handling our Big Data. Here we present three use cases: improving the performance of different spectral search engines by decomposing input data and recomposing the resulting files, building spectral libraries from more than 20 million spectra, and integrating information from multiple resources to select the most appropriate peptides for targeted proteomics analyses. The three use cases demonstrate different challenges in proteomics data analysis. In the first, we integrate local and cloud processing resources to obtain better performance, resulting in a more than 30-fold speed improvement. By treating the search engines as legacy software, our solution is applicable to multiple search algorithms. The second use case is an example of automated processing of many data files of different sizes and locations, starting with raw data and ending with the final, ready-to-use library. It demonstrates robustness and fault tolerance when dealing with huge amounts of data stored in multiple files. The third use case demonstrates the retrieval and integration of information and data from multiple online repositories. In addition to the diversity of data formats and Web interfaces, this use case also illustrates how to deal with incomplete data.
Infrastructure Systems for Advanced Computing in E-science applications
NASA Astrophysics Data System (ADS)
Terzo, Olivier
2013-04-01
In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacity. The integration of grid and cloud infrastructure solutions allows us to offer services whose availability can be adapted by scaling resources up and down. The main challenge for e-science domains will be to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing resource use to reduce investment costs. Instrumentation, data volumes, algorithms and analysis all increase the complexity of applications that require high processing power and storage for a limited time and often exceed the computational resources available to the majority of laboratories and research units in an organization. Very often it is necessary to adapt, rethink or consolidate existing tools, algorithms and applications through a phase of reverse engineering in order to adapt them for deployment on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics, next-generation sequencing, computational electromagnetics and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, storage of results, and a multi-user environment. For these reasons, it is necessary to rethink how e-science applications are written so that they are ready to exploit the potential of cloud computing services through the IaaS, PaaS and SaaS layers. Another important focus is the creation and use of hybrid infrastructure, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to follow the needs in terms of computational and storage capacity, and to release them when the processes are finished. Following the hybrid model, the scheduling approach is important for managing both cloud models. Thanks to this infrastructure model, additional IT capacity is always available on demand for a limited time, without having to purchase additional servers.
A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG
NASA Astrophysics Data System (ADS)
Griffiths, M. K.; Fedun, V.; Erdélyi, R.
2015-03-01
Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.
An efficient multi-resolution GA approach to dental image alignment
NASA Astrophysics Data System (ADS)
Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany
2006-02-01
Automating the process of postmortem identification of individuals using dental records is receiving an increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth, we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions, and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
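The sketch below illustrates the core idea of the abstract in simplified form: a genetic search over the six affine parameters, scored by the Hausdorff distance between reference and transformed query edge points. It uses Gaussian mutation only (no crossover) and a single resolution level, so it is a toy analogue of the authors' MR-GA, not their implementation.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two edge-point sets of shape (N, 2) and (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def apply_affine(points, p):
    """p = (a11, a12, a21, a22, tx, ty): 2x2 linear part plus a translation."""
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    return points @ A.T + p[4:6]

def ga_align(ref_pts, query_pts, pop_size=60, generations=80, sigma=0.05, seed=0):
    """Search the 6D affine space for the transform that best maps the query onto the reference."""
    rng = np.random.default_rng(seed)
    identity = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
    population = identity + sigma * rng.standard_normal((pop_size, 6))
    best = identity
    for _ in range(generations):
        fitness = np.array([-hausdorff(ref_pts, apply_affine(query_pts, ind))
                            for ind in population])
        order = np.argsort(fitness)[::-1]
        best = population[order[0]]
        parents = population[order[: pop_size // 2]]          # truncation selection
        children = parents + sigma * rng.standard_normal(parents.shape)  # Gaussian mutation
        population = np.vstack([parents, children])
    return best
```

A multi-resolution wrapper in the spirit of the paper would run this search first on coarse, subsampled edge maps and seed each finer level with the previous level's best transform.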
Post-processing images from the WFIRST-AFTA coronagraph testbed
NASA Astrophysics Data System (ADS)
Zimmerman, Neil T.; Ygouf, Marie; Pueyo, Laurent; Soummer, Remi; Perrin, Marshall D.; Mennesson, Bertrand; Cady, Eric; Mejia Prada, Camilo
2016-01-01
The concept for the exoplanet imaging instrument on WFIRST-AFTA relies on the development of mission-specific data processing tools to reduce the speckle noise floor. No instruments have yet functioned on the sky in the planet-to-star contrast regime of the proposed coronagraph (1E-8). Therefore, starlight subtraction algorithms must be tested on a combination of simulated and laboratory data sets to give confidence that the scientific goals can be reached. The High Contrast Imaging Testbed (HCIT) at Jet Propulsion Lab has carried out several technology demonstrations for the instrument concept, demonstrating 1E-8 raw (absolute) contrast. Here, we have applied a mock reference differential imaging strategy to HCIT data sets, treating one subset of images as a reference star observation and another subset as a science target observation. We show that algorithms like KLIP (Karhunen-Loève Image Projection), by suppressing residual speckles, enable the recovery of exoplanet signals at contrast of order 2E-9.
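For readers unfamiliar with KLIP-style starlight subtraction, the following minimal sketch shows the underlying operation: build Karhunen-Loève modes from a reference image set via SVD, project the science frame onto the leading modes, and subtract that projection. It is a generic textbook version under simple assumptions (mean-subtracted, full-frame processing), not the testbed's processing pipeline.

```python
import numpy as np

def klip_subtract(science_img, ref_cube, n_modes=10):
    """
    science_img: (H, W) science frame; ref_cube: (N, H, W) reference frames.
    Removes the projection of the science frame onto the first n_modes
    Karhunen-Loeve modes of the reference set.
    """
    n, h, w = ref_cube.shape
    refs = ref_cube.reshape(n, -1).astype(float)
    refs -= refs.mean(axis=1, keepdims=True)           # remove per-frame mean
    sci = science_img.ravel().astype(float)
    sci = sci - sci.mean()
    # KL modes are the right singular vectors of the reference matrix.
    _, _, vt = np.linalg.svd(refs, full_matrices=False)
    modes = vt[:n_modes]                                # (n_modes, H*W), orthonormal rows
    speckle_model = modes.T @ (modes @ sci)             # projection onto the KL subspace
    return (sci - speckle_model).reshape(h, w)
```

Increasing `n_modes` removes more speckle structure but also more planet signal, which is why mode counts are tuned against injected-companion recovery tests.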
Artificial Intelligence in Medical Practice: The Question to the Answer?
Miller, D Douglas; Brown, Eric W
2018-02-01
Computer science advances and ultra-fast computing speeds find artificial intelligence (AI) broadly benefitting modern society-forecasting weather, recognizing faces, detecting fraud, and deciphering genomics. AI's future role in medical practice remains an unanswered question. Machines (computers) learn to detect patterns not decipherable using biostatistics by processing massive datasets (big data) through layered mathematical models (algorithms). Correcting algorithm mistakes (training) adds to AI predictive model confidence. AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts. While diagnostic confidence never reaches 100%, combining machines plus physicians reliably enhances system performance. Cognitive programs are impacting medical practice by applying natural language processing to read the rapidly expanding scientific literature and collate years of diverse electronic medical records. In this and other ways, AI may optimize the care trajectory of chronic disease patients, suggest precision therapies for complex illnesses, reduce medical errors, and improve subject enrollment into clinical trials. Copyright © 2018 Elsevier Inc. All rights reserved.
Algorithmics - Is There Hope for a Unified Theory?
NASA Astrophysics Data System (ADS)
Hromkovič, Juraj
Computer science was born with the formal definition of the notion of an algorithm. This definition provides clear limits of automatization, separating problems into algorithmically solvable problems and algorithmically unsolvable ones. The second big bang of computer science was the development of the concept of computational complexity. People recognized that problems that do not admit efficient algorithms are not solvable in practice. The search for a reasonable, clear and robust definition of the class of practically solvable algorithmic tasks started with the notion of the class {P} and of {NP}-completeness. In spite of the fact that this robust concept is still fundamental for judging the hardness of computational problems, a variety of approaches was developed for solving instances of {NP}-hard problems in many applications. Our 40-years short attempt to fix the fuzzy border between the practically solvable problems and the practically unsolvable ones partially reminds of the never-ending search for the definition of "life" in biology or for the definitions of matter and energy in physics. Can the search for the formal notion of "practical solvability" also become a never-ending story or is there hope for getting a well-accepted, robust definition of it? Hopefully, it is not surprising that we are not able to answer this question in this invited talk. But to deal with this question is of crucial importance, because only due to enormous effort scientists get a better and better feeling of what the fundamental notions of science like life and energy mean. In the flow of numerous technical results, we must not forget the fact that most of the essential revolutionary contributions to science were done by defining new concepts and notions.
MODIS Science Algorithms and Data Systems Lessons Learned
NASA Technical Reports Server (NTRS)
Wolfe, Robert E.; Ridgway, Bill L.; Patt, Fred S.; Masuoka, Edward J.
2009-01-01
For almost 10 years, standard global products from NASA's Earth Observing System's (EOS) two Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have been used worldwide for Earth science research and applications. This paper discusses the lessons learned in developing the science algorithms and the data systems needed to produce these high quality data products for the Earth sciences community. Strong science team leadership and communication, an evolvable and scalable data system, and central coordination of QA and validation activities enabled the data system to grow by two orders of magnitude from the initial at-launch system to the current system, which is able to reprocess data from both the Terra and Aqua missions in less than a year. Many of the lessons learned from MODIS are already being applied to follow-on missions.
NASA Astrophysics Data System (ADS)
Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.
2016-12-01
With millions of receivers leading to terabyte data cubes, the story of the giant SKA telescope is also that of collaborative efforts from radioastronomy, signal processing, optimization, and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem, and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (of course partially) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and ventilation of the computational load over the spectral bands. This work will allow us to implement and compare various 3D reconstruction approaches in a large-scale framework.
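As a heavily simplified illustration of "ventilating" the load over spectral bands, the skeleton below farms a per-band processing step out to worker processes and reassembles the cube in order. The per-band step here is just a placeholder FFT threshold, standing in for whatever regularized per-band computation a real 3D solver would distribute; it does not reproduce the paper's coupled 3D reconstruction.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_band(args):
    """Placeholder per-band step: a crude sparsity proxy via thresholded 2D FFT."""
    band_index, band = args
    spec = np.fft.rfft2(band)
    spec[np.abs(spec) < 0.1 * np.abs(spec).max()] = 0
    return band_index, np.fft.irfft2(spec, s=band.shape)

def process_cube(cube, n_workers=4):
    """cube: (n_bands, H, W). Distribute the per-band work and reassemble in band order."""
    out = np.empty_like(cube, dtype=float)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for idx, result in pool.map(process_band, enumerate(cube)):
            out[idx] = result
    return out
```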
On a methodology for robust segmentation of nonideal iris images.
Schmid, Natalia A; Zuo, Jinyu
2010-06-01
Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from remaining background. In this paper, a segmentation methodology that aims at compensating various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected with such factors as specular reflection, blur, lighting variation, occlusion, and off-angle images. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the formerly mentioned algorithms.
[Application of microelectronics CAD tools to synthetic biology].
Madec, Morgan; Haiech, Jacques; Rosati, Élise; Rezgui, Abir; Gendrault, Yves; Lallement, Christophe
2017-02-01
Synthetic biology is an emerging science that aims to create new biological functions that do not exist in nature, based on the knowledge acquired in the life sciences over the last century. Since the beginning of this century, several projects in synthetic biology have emerged. The complexity of the artificial bio-functions developed so far is relatively low, so that empirical design methods could be used for the design process. Nevertheless, with the increasing complexity of biological circuits, this is no longer the case, and a large number of computer-aided design software tools have been developed in the past few years. These tools include languages for the behavioral description and mathematical modelling of biological systems, simulators at different levels of abstraction, libraries of biological devices, and circuit design automation algorithms. All of these tools already exist in other fields of engineering sciences, particularly in microelectronics. This is the approach that is put forward in this paper. © 2017 médecine/sciences – Inserm.
NASA Astrophysics Data System (ADS)
Stavros, E. N.; Owen, S. E.
2016-12-01
Information products are assimilated and used to: a) conduct scientific research and b) provide decision support for management and policy. For example, aboveground biomass (i.e., an information product) can be integrated into Earth system models to test hypotheses about the changing world, or used to inform decision-making with respect to natural resource management and policy. Production and dissemination of an information product is referred to as the data product life cycle, which includes: 1) identifying needed information from decision-makers and researchers, 2) engineering an instrument and collecting the raw physical measurements (e.g., number of photons returned), 3) the scientific algorithm(s) for processing the data into an observable (e.g., number of dying trees), and 4) the integration and utilization of those observables by researchers and decision-makers. In this talk, I will discuss the data product life cycle in detail and provide examples from the pre-Hyperspectral Infrared Imager (HyspIRI) airborne campaign and the upcoming NASA-ISRO Synthetic Aperture Radar (NISAR) mission. Examples will focus on information products related to terrestrial ecosystems and natural resource management and will demonstrate that the key to providing information products for advancing scientific understanding and informing decision-makers is the interdisciplinary integration of science, engineering, and applied science - noting that applied science defines the wider impact and adoption of scientific principles by a wider community. As pre-HyspIRI airborne data are for research and development and NISAR has not yet launched, examples will include current plans for developing exemplar data products (from pre-HyspIRI) and the mission Applications Plan (for NISAR). Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
Using Analytics to Support Petabyte-Scale Science on the NASA Earth Exchange (NEX)
NASA Astrophysics Data System (ADS)
Votava, P.; Michaelis, A.; Ganguly, S.; Nemani, R. R.
2014-12-01
NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. Analytics within NEX occurs at several levels - data, workflows, science and knowledge. At the data level, we are focusing on collecting and analyzing any information that is relevant to efficient acquisition, processing and management of data at the smallest granularity, such as files or collections. This includes processing and analyzing all local and many external metadata that are relevant to data quality, size, provenance, usage and other attributes. This then helps us better understand usage patterns and improve efficiency of data handling within NEX. When large-scale workflows are executed on NEX, we capture information that is relevant to processing and that can be analyzed in order to improve efficiencies in job scheduling, resource optimization, or data partitioning that would improve processing throughput. At this point we also collect data provenance as well as basic statistics of intermediate and final products created during the workflow execution. These statistics and metrics form basic process and data QA that, when combined with analytics algorithms, helps us identify issues early in the production process. We have already seen impact in some petabyte-scale projects, such as global Landsat processing, where we were able to reduce processing times from days to hours and enhance process monitoring and QA. While the focus so far has been mostly on support of NEX operations, we are also building a web-based infrastructure that enables users to perform direct analytics on science data - such as climate predictions or satellite data. Finally, as one of the main goals of NEX is knowledge acquisition and sharing, we began gathering and organizing information that associates users and projects with data, publications, locations and other attributes that can then be analyzed as a part of the NEX knowledge graph and used to greatly improve advanced search capabilities. Overall, we see data analytics at all levels as an important part of NEX as we are continuously seeking improvements in data management, workflow processing, use of resources, usability and science acceleration.
DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation.
Kalsi, Shruti; Kaur, Harleen; Chang, Victor
2017-12-05
Cryptography is not only the science of applying complex mathematics and logic to design strong methods for hiding data, called encryption, but also for retrieving the original data back, called decryption. The purpose of cryptography is to transmit a message between a sender and receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm, but also a strong key and a strong concept for the encryption and decryption process. We introduce the concept of DNA Deep Learning Cryptography, defined as a technique of concealing data in terms of a DNA sequence and deep learning. In the cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely Adenine (A), Cytosine (C), Guanine (G), and Thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not go beyond the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, first, a method and its implementation for key generation based on the theory of natural selection, using a Genetic Algorithm with the Needleman-Wunsch (NW) algorithm, and second, a method for the implementation of encryption and decryption based on DNA computing using the biological operations of transcription, translation, DNA sequencing, and deep learning.
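The snippet below illustrates only the base-mapping layer described in the abstract, encoding each byte as four DNA bases (two bits per base) and decoding it back. The GA/Needleman-Wunsch key generation and the transcription/translation stages of the proposed scheme are not shown; the 2-bits-per-base mapping is one common convention, not necessarily the authors' exact encoding.

```python
BASES = "ACGT"

def text_to_dna(text):
    """Encode each byte of the text as four DNA bases (2 bits per base)."""
    dna = []
    for byte in text.encode("utf-8"):
        dna.append("".join(BASES[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0)))
    return "".join(dna)

def dna_to_text(dna):
    """Invert text_to_dna: every group of four bases becomes one byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return out.decode("utf-8")

assert dna_to_text(text_to_dna("secret key")) == "secret key"
```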
Prediction in complex systems: The case of the international trade network
NASA Astrophysics Data System (ADS)
Vidmer, Alexandre; Zeng, An; Medo, Matúš; Zhang, Yi-Cheng
2015-10-01
Predicting the future evolution of complex systems is one of the main challenges in complexity science. Based on a current snapshot of a network, link prediction algorithms aim to predict its future evolution. We apply here link prediction algorithms to data on the international trade between countries. This data can be represented as a complex network where links connect countries with the products that they export. Link prediction techniques based on heat and mass diffusion processes are employed to obtain predictions for products exported in the future. These baseline predictions are improved using a recent metric of country fitness and product similarity. The overall best results are achieved with a newly developed metric of product similarity which takes advantage of causality in the network evolution.
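A compact way to see the baseline method is the standard mass-diffusion (ProbS) scoring on a binary country-by-product matrix, sketched below on a toy example; the fitness- and similarity-based improvements described in the abstract are not included.

```python
import numpy as np

def mass_diffusion_scores(A):
    """
    A: binary country-by-product export matrix, shape (n_countries, n_products).
    Returns a score matrix of the same shape (ProbS / mass diffusion); high scores
    on currently-zero entries are the predicted future export links.
    """
    A = A.astype(float)
    k_c = A.sum(axis=1)                   # country degrees
    k_p = A.sum(axis=0)                   # product degrees
    k_c[k_c == 0] = 1.0
    k_p[k_p == 0] = 1.0
    # Product-to-product transfer: W[a, b] = sum_i A[i, a] * A[i, b] / (k_c[i] * k_p[b])
    W = (A.T / k_c).dot(A) / k_p
    return A.dot(W.T)

# Toy example: 3 countries x 4 products.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
scores = mass_diffusion_scores(A)
print(np.round(scores, 2))
```

Ranking the zero entries of `A` by their score gives the predicted new links; evaluation then compares these predictions against the network observed at a later time.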
Telemetry and Science Data Software System
NASA Technical Reports Server (NTRS)
Bates, Lakesha; Hong, Liang
2011-01-01
The Telemetry and Science Data Software System (TSDSS) was designed to validate the operational health of a spacecraft, ease test verification, assist in debugging system anomalies, and provide trending data and advanced science analysis. In doing so, the system parses, processes, and organizes raw data from the Aquarius instrument both on the ground and while in space. In addition, it provides a user-friendly telemetry viewer and an instant pushbutton test report generator. Existing ground data systems can parse and provide simple data processing, but have limitations in advanced science analysis and instant report generation. The TSDSS functions as an offline data analysis system during the I&T (integration and test) and mission operations phases. After raw data are downloaded from an instrument, TSDSS ingests the data files, parses them, converts telemetry to engineering units, and applies advanced algorithms to produce science level 0, 1, and 2 data products. Meanwhile, it automatically schedules upload of the raw data to a remote server and archives all intermediate and final values in a MySQL database in time order. All data saved in the system can be straightforwardly retrieved, exported, and migrated. Using TSDSS's interactive data visualization tool, a user can conveniently choose any combination and mathematical computation of interesting telemetry points from a large range of time periods (the life cycle of mission ground data and mission operations testing) and display a graphical and statistical view of the data. With this graphical user interface (GUI), graphs of the queried data can be exported and saved in multiple formats. This GUI is especially useful in trending data analysis, debugging anomalies, and advanced data analysis. At the request of the user, mission-specific instrument performance assessment reports can be generated with a simple click of a button on the GUI. From instrument level to observatory level, the TSDSS has been operating in sync with the Aquarius/SAC-D spacecraft, supporting functional and performance tests and refining system calibration algorithms and coefficients. At the time of this reporting, it was prepared and set up to perform anomaly investigation for mission operations preceding the Aquarius/SAC-D spacecraft launch on June 10, 2011.
The mGA1.0: A common LISP implementation of a messy genetic algorithm
NASA Technical Reports Server (NTRS)
Goldberg, David E.; Kerzic, Travis
1990-01-01
Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
Presearch data conditioning in the Kepler Science Operations Center pipeline
NASA Astrophysics Data System (ADS)
Twicken, Joseph D.; Chandrasekaran, Hema; Jenkins, Jon M.; Gunter, Jay P.; Girouard, Forrest; Klaus, Todd C.
2010-07-01
We describe the Presearch Data Conditioning (PDC) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this component are to correct systematic and other errors, remove excess flux due to aperture crowding, and condition the raw flux light curves for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets. Long cadence corrected flux light curves are subjected to a transiting planet search in a subsequent pipeline module. We discuss science algorithms for long and short cadence PDC: identification and correction of unexplained (i.e., unrelated to known anomalies) discontinuities; systematic error correction; and removal of excess flux due to aperture crowding. We discuss the propagation of uncertainties from raw to corrected flux. Finally, we present examples from Kepler flight data to illustrate PDC performance. Corrected flux light curves produced by PDC are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
Optimization of multi-color laser waveform for high-order harmonic generation
NASA Astrophysics Data System (ADS)
Jin, Cheng; Lin, C. D.
2016-09-01
With the development of laser technologies, multi-color light-field synthesis with complete amplitude and phase control would make it possible to generate arbitrary optical waveforms. A practical optimization algorithm is needed to generate such a waveform in order to control strong-field processes. We review some recent theoretical works of the optimization of amplitudes and phases of multi-color lasers to modify the single-atom high-order harmonic generation based on genetic algorithm. By choosing different fitness criteria, we demonstrate that: (i) harmonic yields can be enhanced by 10 to 100 times, (ii) harmonic cutoff energy can be substantially extended, (iii) specific harmonic orders can be selectively enhanced, and (iv) single attosecond pulses can be efficiently generated. The possibility of optimizing macroscopic conditions for the improved phase matching and low divergence of high harmonics is also discussed. The waveform control and optimization are expected to be new drivers for the next wave of breakthrough in the strong-field physics in the coming years. Project supported by the Fundamental Research Funds for the Central Universities of China (Grant No. 30916011207), Chemical Sciences, Geosciences and Biosciences Division, Office of Basic Energy Sciences, Office of Science, U. S. Department of Energy (Grant No. DE-FG02-86ER13491), and Air Force Office of Scientific Research, USA (Grant No. FA9550-14-1-0255).
Suborbital Science Program: Dryden Flight Research Center
NASA Technical Reports Server (NTRS)
DelFrate, John
2008-01-01
This viewgraph presentation reviews the suborbital science program at NASA Dryden Flight Research Center. The Program Objectives are given in various areas: (1) Satellite Calibration and Validation (Cal/val)--Provide methods to perform the cal/val requirements for Earth Observing System satellites; (2) New Sensor Development -- Provide methods to reduce risk for new sensor concepts and algorithm development prior to committing sensors to operations; (3) Process Studies -- Facilitate the acquisition of high spatial/temporal resolution focused measurements that are required to understand small atmospheric and surface structures which generate powerful Earth system effects; and (4) Airborne Networking -- Develop disruption-tolerant networking to enable integrated multiple scale measurements of critical environmental features. Dryden supports the NASA Airborne Science Program and the nation in several elements: ER-2, G-3, DC-8, Ikhana (Predator B) & Global Hawk and Reveal. These are reviewed in detail in the presentation.
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Moving code - Sharing geoprocessing logic on the Web
NASA Astrophysics Data System (ADS)
Müller, Matthias; Bernard, Lars; Kadner, Daniel
2013-09-01
Efficient data processing is a long-standing challenge in remote sensing. Effective and efficient algorithms are required for product generation in ground processing systems, event-based or on-demand analysis, environmental monitoring, and data mining. Furthermore, the increasing number of survey missions and the exponentially growing data volume in recent years have created demand for better software reuse as well as an efficient use of scalable processing infrastructures. Solutions that address both demands simultaneously have begun to slowly appear, but they seldom consider the possibility to coordinate development and maintenance efforts across different institutions, community projects, and software vendors. This paper presents a new approach to share, reuse, and possibly standardise geoprocessing logic in the field of remote sensing. Drawing from the principles of service-oriented design and distributed processing, this paper introduces moving-code packages as self-describing software components that contain algorithmic code and machine-readable descriptions of the provided functionality, platform, and infrastructure, as well as basic information about exploitation rights. Furthermore, the paper presents a lean publishing mechanism by which to distribute these packages on the Web and to integrate them in different processing environments ranging from monolithic workstations to elastic computational environments or "clouds". The paper concludes with an outlook toward community repositories for reusable geoprocessing logic and their possible impact on data-driven science in general.
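To make the idea of a self-describing package concrete, the sketch below shows a hypothetical, minimal descriptor for a moving-code package as a plain Python dictionary. The field names and the example NDVI process are invented for illustration; the actual packages described in the paper use standardized, machine-readable metadata formats rather than this ad-hoc structure.

```python
# Hypothetical moving-code package descriptor (illustrative field names only).
package = {
    "id": "example.org/packages/ndvi-calculator",          # placeholder identifier
    "functionality": {
        "process": "NDVI computation from red and near-infrared bands",
        "inputs": [{"name": "red", "type": "GeoTIFF"},
                   {"name": "nir", "type": "GeoTIFF"}],
        "outputs": [{"name": "ndvi", "type": "GeoTIFF"}],
    },
    "platform": {"runtime": "python", "version": ">=3.8", "depends": ["numpy", "rasterio"]},
    "infrastructure": {"min_memory_mb": 512, "parallelizable": True},
    "exploitation_rights": {"license": "Apache-2.0"},
    "code": {"entry_point": "ndvi.compute", "archive": "ndvi-1.0.zip"},
}
```

A processing environment, whether a single workstation or an elastic cloud back end, would read such a descriptor to decide whether it can host the package and how to invoke it.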
Magnetospheric Multiscale Instrument Suite Operations and Data System
NASA Technical Reports Server (NTRS)
Baker, D. N.; Riesberg, L.; Pankratz, C. K.; Panneton, R. S.; Giles, B. L.; Wilder, F. D.; Ergun, R. E.
2015-01-01
The four Magnetospheric Multiscale (MMS) spacecraft will collect a combined volume of approximately 100 gigabits per day of particle and field data. On average, only 4 gigabits of that volume can be transmitted to the ground. To maximize the scientific value of each transmitted data segment, MMS has developed the Science Operations Center (SOC) to manage science operations, instrument operations, and selection, downlink, distribution, and archiving of MMS science data sets. The SOC is managed by the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado and serves as the primary point of contact for community participation in the mission. MMS instrument teams conduct their operations through the SOC, and utilize the SOC's Science Data Center (SDC) for data management and distribution. The SOC provides a single mission data archive for the housekeeping and science data, calibration data, ephemerides, attitude and other ancillary data needed to support the scientific use and interpretation. All levels of data products will reside at and be publicly disseminated from the SDC. Documentation and metadata describing data products, algorithms, instrument calibrations, validation, and data quality will be provided. Arguably, the most important innovation developed by the SOC is the MMS burst data management and selection system. With nested automation and 'Scientist-in-the-Loop' (SITL) processes, these systems are designed to maximize the value of the burst data by prioritizing the data segments selected for transmission to the ground. This paper describes the MMS science operations approach, processes and data systems, including the burst system and the SITL concept.
Magnetospheric Multiscale Instrument Suite Operations and Data System
NASA Astrophysics Data System (ADS)
Baker, D. N.; Riesberg, L.; Pankratz, C. K.; Panneton, R. S.; Giles, B. L.; Wilder, F. D.; Ergun, R. E.
2016-03-01
The four Magnetospheric Multiscale (MMS) spacecraft will collect a combined volume of ˜100 gigabits per day of particle and field data. On average, only 4 gigabits of that volume can be transmitted to the ground. To maximize the scientific value of each transmitted data segment, MMS has developed the Science Operations Center (SOC) to manage science operations, instrument operations, and selection, downlink, distribution, and archiving of MMS science data sets. The SOC is managed by the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado and serves as the primary point of contact for community participation in the mission. MMS instrument teams conduct their operations through the SOC, and utilize the SOC's Science Data Center (SDC) for data management and distribution. The SOC provides a single mission data archive for the housekeeping and science data, calibration data, ephemerides, attitude and other ancillary data needed to support the scientific use and interpretation. All levels of data products will reside at and be publicly disseminated from the SDC. Documentation and metadata describing data products, algorithms, instrument calibrations, validation, and data quality will be provided. Arguably, the most important innovation developed by the SOC is the MMS burst data management and selection system. With nested automation and "Scientist-in-the-Loop" (SITL) processes, these systems are designed to maximize the value of the burst data by prioritizing the data segments selected for transmission to the ground. This paper describes the MMS science operations approach, processes and data systems, including the burst system and the SITL concept.
Workflows and Provenance: Toward Information Science Solutions for the Natural Sciences.
Gryk, Michael R; Ludäscher, Bertram
2017-01-01
The era of big data and ubiquitous computation has brought with it concerns about ensuring reproducibility in this new research environment. It is easy to assume computational methods self-document by their very nature of being exact, deterministic processes. However, similar to laboratory experiments, ensuring reproducibility in the computational realm requires the documentation of both the protocols used (workflows) as well as a detailed description of the computational environment: algorithms, implementations, software environments as well as the data ingested and execution logs of the computation. These two aspects of computational reproducibility (workflows and execution details) are discussed in the context of biomolecular Nuclear Magnetic Resonance spectroscopy (bioNMR) as well as the PRIMAD model for computational reproducibility.
Quantum-Like Models for Decision Making in Psychology and Cognitive Science
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei.
2009-02-01
We show that (in contrast to rather common opinion) the domain of applications of the mathematical formalism of quantum mechanics is not restricted to physics. This formalism can be applied to the description of various quantum-like (QL) information processing. In particular, the calculus of quantum (and more general QL) probabilities can be used to explain some paradoxical statistical data which was collected in psychology and cognitive science. The main lesson of our study is that one should sharply distinguish the mathematical apparatus of QM from QM as a physical theory. The domain of application of the mathematical apparatus is essentially wider than quantum physics. Keywords: quantum-like representation algorithm, formula of total probability, interference of probabilities, psychology, cognition, decision making.
UAVSAR: Airborne L-band Radar for Repeat Pass Interferometry
NASA Technical Reports Server (NTRS)
Moes, Timothy R.
2009-01-01
The primary objectives of the UAVSAR Project were to: a) develop a miniaturized polarimetric L-band synthetic aperture radar (SAR) for use on an unmanned aerial vehicle (UAV) or piloted vehicle. b) develop the associated processing algorithms for repeat-pass differential interferometric measurements using a single antenna. c) conduct measurements of geophysical interest, particularly changes of rapidly deforming surfaces such as volcanoes or earthquakes. Two complete systems were developed. Operational Science Missions began on February 18, 2009 ... concurrent development and testing of the radar system continues.
Cognitive Algorithms for Signal Processing
2011-03-18
Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine
2008-01-01
Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for SVM classifiers will lead to significant improvement in on-board classification capability and accuracy.
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method of chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide and conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms the traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and insusceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Advanced Methodologies for NASA Science Missions
NASA Astrophysics Data System (ADS)
Hurlburt, N. E.; Feigelson, E.; Mentzel, C.
2017-12-01
Most of NASA's commitment to computational space science involves the organization and processing of Big Data from space-based satellites, and the calculations of advanced physical models based on these datasets. But considerable thought is also needed on what computations are needed. The science questions addressed by space data are so diverse and complex that traditional analysis procedures are often inadequate. The knowledge and skills of the statistician, applied mathematician, and algorithmic computer scientist must be incorporated into programs that currently emphasize engineering and physical science. NASA's culture and administrative mechanisms take full cognizance that major advances in space science are driven by improvements in instrumentation. But it is less well recognized that new instruments and science questions give rise to new challenges in the treatment of satellite data after it is telemetered to the ground. These issues might be divided into two stages: data reduction through software pipelines developed within NASA mission centers; and science analysis that is performed by hundreds of space scientists dispersed through NASA, U.S. universities, and abroad. Both stages benefit from the latest statistical and computational methods; in some cases, the science result is completely inaccessible using traditional procedures. This paper will review the current state of NASA and present example applications using modern methodologies.
Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Cox, Cary M.
This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused peer-reviewed journal articles. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. The Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm, and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses, and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise, and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications. This work also explores the concept of an edge within hyperspectral space, the relative importance of spatial and spectral resolutions as they pertain to HSI edge detection, and how effectively compressed HSI data improves edge detection results. The HSI edge detection experiments yielded valuable insights into the algorithms' strengths, weaknesses, and optimal alignment to remote sensing applications. The gradient-based edge operator produced strong edge planes across a range of evaluation measures and applications, particularly with respect to false negatives, unbroken edges, urban mapping, vegetation mapping, and oil spill mapping applications. False positives and uncompressed HSI data presented occasional challenges to the algorithm. The HySPADE edge operator produced satisfactory results with respect to localization, single-point response, oil spill mapping, and trace chemical detection, and was challenged by false positives, declining spectral resolution, and vegetation mapping applications. The level set edge detector produced high-quality edge planes for most tests and demonstrated strong performance with respect to false positives, single-point response, oil spill mapping, and mineral mapping. False negatives were a regular challenge for the level set edge detection algorithm.
Finally, HSI data optimized for spectral information compression and noise was shown to improve edge detection performance across all three algorithms, while the gradient-based algorithm and HySPADE demonstrated significant robustness to declining spectral and spatial resolutions.
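For orientation, the sketch below computes the generic Di Zenzo multiband edge strength, the largest eigenvalue of the first fundamental form summed over bands, which is the standard construction behind gradient-based HSI edge operators; it is not necessarily the dissertation's exact implementation, and thresholding choices are left to the reader.

```python
import numpy as np

def di_zenzo_edge_strength(cube):
    """
    cube: hyperspectral image, shape (bands, H, W).
    Returns a per-pixel edge-strength map from the Di Zenzo structure tensor
    (square root of the largest eigenvalue of the multiband first fundamental form).
    """
    dy, dx = np.gradient(cube.astype(float), axis=(1, 2))   # per-band spatial gradients
    gxx = (dx * dx).sum(axis=0)
    gyy = (dy * dy).sum(axis=0)
    gxy = (dx * dy).sum(axis=0)
    lam_max = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return np.sqrt(lam_max)

# Thresholding the returned map (e.g., at a high percentile) yields a binary edge plane.
```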
Formalizing Neurath's ship: Approximate algorithms for online causal learning.
Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A
2017-04-01
Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Validation of On-board Cloud Cover Assessment Using EO-1
NASA Technical Reports Server (NTRS)
Mandl, Dan; Miller, Jerry; Griffin, Michael; Burke, Hsiao-hua
2003-01-01
The purpose of this NASA Earth Science Technology Office funded effort was to flight validate an on-board cloud detection algorithm and to determine the performance that can be achieved with a Mongoose V flight computer. This validation was performed on the EO-1 satellite, which is operational, by uploading new flight code to perform the cloud detection. The algorithm was developed by MIT/Lincoln Lab and is based on the use of the Hyperion hyperspectral instrument using selected spectral bands from 0.4 to 2.5 microns. The Technology Readiness Level (TRL) of this technology at the beginning of the task was level 5 and was TRL 6 upon completion. In the final validation, an 8 second (0.75 Gbytes) Hyperion image was processed on-board and assessed for percentage cloud cover within 30 minutes. It was expected to take many hours and perhaps a day considering that the Mongoose V is only a 6-8 MIPS machine in performance. To accomplish this test, the image taken had to have level 0 and level 1 processing performed on-board before the cloud algorithm was applied. For almost all of the ground test cases and all of the flight cases, the cloud assessment was within 5% of the correct value and in most cases within 1-2%.
NASA Technical Reports Server (NTRS)
Biswas, Sayak K.; Jones, Linwood; Roberts, Jason; Ruf, Christopher; Ulhorn, Eric; Miller, Timothy
2012-01-01
The Hurricane Imaging Radiometer (HIRAD) is a new airborne synthetic aperture passive microwave radiometer capable of wide swath imaging of the ocean surface wind speed under heavy precipitation, e.g., in tropical cyclones. It uses interferometric signal processing to produce upwelling brightness temperature (Tb) images at its four operating frequencies 4, 5, 6 and 6.6 GHz [1,2]. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during 2010 as its first science field campaign. It produced Tb images with 70 km swath width and 3 km resolution from a 20 km altitude. From this, ocean surface wind speed and column averaged atmospheric liquid water content can be retrieved across the swath. The column averaged liquid water could then be related to an average rain rate. The retrieval algorithm (and the HIRAD instrument itself) is a direct descendant of the nadir-only Stepped Frequency Microwave Radiometer that is used operationally by the NOAA Hurricane Research Division to monitor tropical cyclones [3,4]. However, due to HIRAD's slant viewing geometry (compared to the nadir-viewing SFMR), a major modification is required in the algorithm. Results based on the modified algorithm from the GRIP campaign will be presented in the paper.
Computational data sciences for assessment and prediction of climate extremes
NASA Astrophysics Data System (ADS)
Ganguly, A. R.
2011-12-01
Climate extremes may be defined inclusively as severe weather events or large shifts in global or regional weather patterns which may be caused or exacerbated by natural climate variability or climate change. This area of research arguably represents one of the largest knowledge-gaps in climate science which is relevant for informing resource managers and policy makers. While physics-based climate models are essential in view of non-stationary and nonlinear dynamical processes, their current pace of uncertainty reduction may not be adequate for urgent stakeholder needs. The structure of the models may in some cases preclude reduction of uncertainty for critical processes at scales or for the extremes of interest. On the other hand, methods based on complex networks, extreme value statistics, machine learning, and space-time data mining, have demonstrated significant promise to improve scientific understanding and generate enhanced predictions. When combined with conceptual process understanding at multiple spatiotemporal scales and designed to handle massive data, interdisciplinary data science methods and algorithms may complement or supplement physics-based models. Specific examples from the prior literature and our ongoing work suggest how data-guided improvements may be possible, for example, in the context of ocean meteorology, climate oscillators, teleconnections, and atmospheric process understanding, which in turn can improve projections of regional climate, precipitation extremes and tropical cyclones in a useful and interpretable fashion. A community-wide effort is motivated to develop and adapt computational data science tools for translating climate model simulations to information relevant for adaptation and policy, as well as for improving our scientific understanding of climate extremes from both observed and model-simulated data.
NASA Astrophysics Data System (ADS)
Hogenson, K.; Arko, S. A.; Buechler, B.; Hogenson, R.; Herrmann, J.; Geiger, A.
2016-12-01
A problem often faced by Earth science researchers is how to scale algorithms that were developed against a few datasets and take them to regional or global scales. One significant hurdle can be the processing and storage resources available for such a task, not to mention the administration of those resources. As a processing environment, the cloud offers nearly unlimited potential for compute and storage, with limited administration required. The goal of the Hybrid Pluggable Processing Pipeline (HyP3) project was to demonstrate the utility of the Amazon cloud to process large amounts of data quickly and cost effectively, while remaining generic enough to incorporate new algorithms with limited administration time or expense. Principally built by three undergraduate students at the ASF DAAC, the HyP3 system relies on core Amazon services such as Lambda, the Simple Notification Service (SNS), Relational Database Service (RDS), Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Beanstalk. The HyP3 user interface was written using Elastic Beanstalk, and the system uses SNS and Lambda to handle creating, instantiating, executing, and terminating EC2 instances automatically. Data are sent to S3 for delivery to customers and removed using standard data lifecycle management rules. In HyP3 all data processing is ephemeral; there are no persistent processes taking compute and storage resources or generating added cost. When complete, HyP3 will leverage the automatic scaling up and down of EC2 compute power to respond to event-driven demand surges correlated with natural disasters or reprocessing efforts. Massive simultaneous processing within EC2 will be able to match the demand spike in ways conventional physical computing power never could, and then tail off, incurring no costs when not needed. This presentation will focus on the development techniques and technologies that were used in developing the HyP3 system. Data and process flow will be shown, highlighting the benefits of the cloud for each step. Finally, the steps for integrating a new processing algorithm will be demonstrated. This is the true power of HyP3: allowing people to upload their own algorithms and execute them at archive level scales.
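To illustrate the ephemeral, event-driven pattern in a much-simplified form (not the actual HyP3 code), the sketch below is a hypothetical Lambda-style handler: an SNS message names a granule, an EC2 worker is launched to process it and terminate itself, and results land in S3. The AMI ID, bucket name, and processing script path are placeholders.

```python
import boto3

def handle_new_granule(event, context):
    """Hypothetical Lambda handler: launch an ephemeral EC2 worker per SNS message."""
    granule = event["Records"][0]["Sns"]["Message"]          # granule name from SNS
    user_data = f"""#!/bin/bash
    /opt/processor/run.sh {granule} s3://example-products-bucket/
    shutdown -h now
    """
    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",                      # placeholder processing AMI
        InstanceType="c5.xlarge",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
        InstanceInitiatedShutdownBehavior="terminate",        # no idle instances, no idle cost
    )
    return {"granule": granule, "status": "submitted"}
```

Because each worker terminates itself after uploading its product, compute cost tracks demand spikes and falls to zero when no jobs arrive, which is the behavior the abstract describes.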
Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial.
Kulin, Merima; Fortuna, Carolina; De Poorter, Eli; Deschrijver, Dirk; Moerman, Ingrid
2016-06-01
Data science or "data-driven research" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.
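As a small companion to the device-identification example in the tutorial, the sketch below trains a classifier on synthetic per-flow traffic features (mean packet size, mean inter-arrival time, burstiness). The feature values, device classes, and generator are invented for illustration; the tutorial's real datasets and feature extraction differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

def synth(n, size_mu, iat_mu, burst_mu, label):
    """Generate n synthetic flows for one device type (illustrative only)."""
    X = np.column_stack([rng.normal(size_mu, 20, n),      # mean packet size (bytes)
                         rng.normal(iat_mu, 0.05, n),     # mean inter-arrival time (s)
                         rng.normal(burst_mu, 0.1, n)])   # burstiness index
    return X, np.full(n, label)

X_sensor, y_sensor = synth(300, 80, 1.0, 0.2, "sensor")
X_camera, y_camera = synth(300, 900, 0.02, 0.8, "camera")
X = np.vstack([X_sensor, X_camera])
y = np.concatenate([y_sensor, y_camera])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```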
Data-Driven Design of Intelligent Wireless Networks: An Overview and Tutorial
Kulin, Merima; Fortuna, Carolina; De Poorter, Eli; Deschrijver, Dirk; Moerman, Ingrid
2016-01-01
Data science or “data-driven research” is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves. PMID:27258286
The Algorithms of Euclid and Jacobi
ERIC Educational Resources Information Center
Johnson, R. W.; Waterman, M. S.
1976-01-01
In a thesis written for the Doctor of Arts in Mathematics, the connection between Euclid's algorithm and continued fractions is developed and extended to n dimensions. Applications to computer sciences are noted. (SD)
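A minimal sketch of the connection the thesis develops: the successive quotients produced by Euclid's algorithm for a/b are exactly the partial quotients of the continued fraction expansion of a/b. The example numbers are arbitrary.

```python
# Euclid's algorithm, recording quotients as continued fraction terms.
def euclid_continued_fraction(a, b):
    """Return gcd(a, b) and the continued fraction [q0; q1, q2, ...] of a/b."""
    quotients = []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    return a, quotients

gcd, cf = euclid_continued_fraction(649, 200)
print(gcd, cf)   # 1 [3, 4, 12, 4], i.e. 649/200 = 3 + 1/(4 + 1/(12 + 1/4))
```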
Inverse Problems in Geodynamics Using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.
2018-01-01
During the past few decades, numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, these numerical models depend on many properties from mineral physics, geochemistry, and petrology that are not well constrained. Machine learning, as a technique related to computational statistics and a subfield of artificial intelligence, has recently emerged rapidly in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms in predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques in solving an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at midmantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms for putting constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
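A minimal sketch of the supervised setup described above, under heavy simplifying assumptions: random vectors stand in for flattened convection-model snapshots, and a toy linear coupling ties them to the anomaly magnitude being estimated.

```python
# Toy inverse problem: regress an anomaly magnitude from "snapshots" (sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_models, n_cells = 300, 64                # 300 snapshots on an 8x8 grid
anomaly = rng.uniform(0.0, 2.0, n_models)  # density anomaly magnitude (%)
snapshots = rng.normal(size=(n_models, n_cells)) + anomaly[:, None]  # toy coupling

X_tr, X_te, y_tr, y_te = train_test_split(snapshots, anomaly, random_state=1)
svr = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("R^2 on held-out snapshots:", round(svr.score(X_te, y_te), 3))
```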
A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data.
Goldstein, Markus; Uchida, Seiichi
2016-01-01
Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection, as well as in the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks.
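A sketch of the evaluation protocol on synthetic data: score unlabeled points with two of the algorithm families covered by such studies (a local-density method and an isolation-based method) and compare them by ROC AUC against held-back labels.

```python
# Compare two unsupervised anomaly detectors by ROC AUC (sketch).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, size=(500, 2))
outliers = rng.uniform(-6, 6, size=(25, 2))
X = np.vstack([normal, outliers])
y = np.array([0] * 500 + [1] * 25)   # labels used only for evaluation

lof = LocalOutlierFactor(n_neighbors=20).fit(X)
scores_lof = -lof.negative_outlier_factor_   # higher = more anomalous
iso = IsolationForest(random_state=2).fit(X)
scores_iso = -iso.score_samples(X)

for name, s in [("LOF", scores_lof), ("IsolationForest", scores_iso)]:
    print(name, "ROC AUC:", round(roc_auc_score(y, s), 3))
```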
Data Mining and Machine Learning in Astronomy
NASA Astrophysics Data System (ADS)
Ball, Nicholas M.; Brunner, Robert J.
We review the current state of data mining and machine learning in astronomy. Data Mining can have a somewhat mixed connotation from the point of view of a researcher in this field. If used correctly, it can be a powerful approach, holding the potential to fully exploit the exponentially increasing amount of available data, promising great scientific advance. However, if misused, it can be little more than the black box application of complex computing algorithms that may give little physical insight, and provide questionable results. Here, we give an overview of the entire data mining process, from data collection through to the interpretation of results. We cover common machine learning algorithms, such as artificial neural networks and support vector machines, applications from a broad range of astronomy, emphasizing those in which data mining techniques directly contributed to improving science, and important current and future directions, including probability density functions, parallel algorithms, Peta-Scale computing, and the time domain. We conclude that, so long as one carefully selects an appropriate algorithm and is guided by the astronomical problem at hand, data mining can be very much the powerful tool, and not the questionable black box.
Integration of Landsat-based disturbance maps in the Landscape Change Monitoring System (LCMS)
NASA Astrophysics Data System (ADS)
Healey, S. P.; Cohen, W. B.; Eidenshink, J. C.; Hernandez, A. J.; Huang, C.; Kennedy, R. E.; Moisen, G. G.; Schroeder, T. A.; Stehman, S.; Steinwand, D.; Vogelmann, J. E.; Woodcock, C.; Yang, L.; Yang, Z.; Zhu, Z.
2013-12-01
Land cover change can have a profound effect upon an area's natural resources and its role in biogeochemical and hydrological cycles. Many land cover change processes are sensitive to climate, including fire, storm damage, and insect activity. Monitoring of both past and ongoing land cover change is critical, particularly as we try to understand the impact of a changing climate on the natural systems we manage. The Landsat series of satellites, first launched in 1972, has allowed land observation at spatial and spectral resolutions appropriate for identification of many types of land cover change. Over the years, and particularly since the opening of the Landsat archive in 2008, many approaches have been developed to meet individual monitoring needs. Algorithms vary by the cover type targeted, the rate of change sought, and the period between observations. The Landscape Change Monitoring System (LCMS) is envisioned as a sustained, inter-agency monitoring program that brings together and operationally provides the best available land cover change maps over the United States. Expanding upon the successful USGS/Forest Service Monitoring Trends in Burn Severity project, LCMS is designed to serve a variety of research and management communities. The LCMS Science Team is currently assessing the relative strengths of a variety of leading change detection approaches, primarily emphasizing Landsat observations. Using standardized image pre-processing methods, maps produced by these algorithms have been compared at intensive validation sites across the country. Additionally, LCMS has taken steps toward a data-mining framework, in which ensembles of algorithm outputs are used with non-parametric models to create integrated predictions of change across a variety of scenarios and change dynamics. We present initial findings from the LCMS Science Team, including validation results from individual algorithms and assessment of initial 'integrated' products from the data-mining framework. It is anticipated that these results will directly impact land change information that will in the future be routinely available across the country through LCMS. With a baseline observation period of more than 40 years and a national scope, these data should shed light upon how trends in disturbance may be linked to climatic changes.
NASA Astrophysics Data System (ADS)
Chai, Xiu-Li; Gan, Zhi-Hua; Lu, Yang; Zhang, Miao-Hui; Chen, Yi-Ran
2016-10-01
Recently, many image encryption algorithms based on chaos have been proposed. Most of the previous algorithms encrypt the R, G, and B components of color images independently and neglect the high correlation between them. In this paper, a novel color image encryption algorithm is introduced. The 24 bit planes of the R, G, and B components of the color plain image are obtained and recombined into 4 compound bit planes, which makes the three components affect each other. A four-dimensional (4D) memristive hyperchaotic system generates the pseudorandom key streams, and its initial values come from the SHA-256 hash value of the color plain image. The compound bit planes and key streams are confused according to the principles of genetic recombination; then confusion and diffusion as a union are applied to the bit planes, and the color cipher image is obtained. Experimental results and security analyses demonstrate that the proposed algorithm is secure and effective, so it may be adopted for secure communication. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203094 and 61305042), the National Science Foundation of the United States (Grant Nos. CNS-1253424 and ECCS-1202225), the Science and Technology Foundation of Henan Province, China (Grant No. 152102210048), the Foundation and Frontier Project of Henan Province, China (Grant No. 162300410196), the Natural Science Foundation of the Educational Committee of Henan Province, China (Grant No. 14A413015), and the Research Foundation of Henan University, China (Grant No. xxjc20140006).
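A minimal sketch of the pipeline's outer structure, with loud caveats: a 1-D logistic map stands in for the paper's 4-D memristive hyperchaotic system, and the genetic-recombination confusion stage is omitted; only the SHA-256 seeding and a bit-plane-wide XOR diffusion are illustrated.

```python
# SHA-256-seeded chaotic key stream XORed over an image's bit planes (sketch).
import hashlib

import numpy as np

def keystream(image_bytes, n):
    digest = hashlib.sha256(image_bytes).digest()
    # Map the hash to an initial condition safely inside (0, 1).
    x = (int.from_bytes(digest[:8], "big") % 10**8) / 10**8 * 0.98 + 0.01
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = 3.99 * x * (1.0 - x)          # logistic map in its chaotic regime
        out[i] = int(x * 256) & 0xFF
    return out

rng = np.random.default_rng(3)
plain = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)  # toy RGB image
ks = keystream(plain.tobytes(), plain.size).reshape(plain.shape)
cipher = plain ^ ks                        # diffusion across all 24 bit planes
assert np.array_equal(cipher ^ ks, plain)  # decryption recovers the image
```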
Designing highly flexible and usable cyberinfrastructures for convergence.
Herr, Bruce W; Huang, Weixia; Penumarthy, Shashikant; Börner, Katy
2006-12-01
This article presents the results of a 7-year-long quest into the development of a "dream tool" for our research in information science and scientometrics and more recently, network science. The results are two cyberinfrastructures (CI): The Cyberinfrastructure for Information Visualization and the Network Workbench that enjoy a growing national and interdisciplinary user community. Both CIs use the cyberinfrastructure shell (CIShell) software specification, which defines interfaces between data sets and algorithms/services and provides a means to bundle them into powerful tools and (Web) services. In fact, CIShell might be our major contribution to progress in convergence. Just as Wikipedia is an "empty shell" that empowers lay persons to share text, a CIShell implementation is an "empty shell" that empowers user communities to plug-and-play, share, compare and combine data sets, algorithms, and compute resources across national and disciplinary boundaries. It is argued here that CIs will not only transform the way science is conducted but also will play a major role in the diffusion of expertise, data sets, algorithms, and technologies across multiple disciplines and business sectors leading to a more integrative science.
Algorithmic Puzzles: History, Taxonomies, and Applications in Human Problem Solving
ERIC Educational Resources Information Center
Levitin, Anany
2017-01-01
The paper concerns an important but underappreciated genre of algorithmic puzzles, explaining what these puzzles are, reviewing milestones in their long history, and giving two different ways to classify them. Also covered are major applications of algorithmic puzzles in cognitive science research, with an emphasis on insight problem solving, and…
A Functional Programming Approach to AI Search Algorithms
ERIC Educational Resources Information Center
Panovics, Janos
2012-01-01
The theory and practice of search algorithms related to state-space represented problems form the major part of the introductory course of Artificial Intelligence at most of the universities and colleges offering a degree in the area of computer science. Students usually meet these algorithms only in some imperative or object-oriented language…
Barua, Shaibal; Begum, Shahina; Ahmed, Mobyen Uddin
2015-01-01
Machine learning algorithms play an important role in computer science research. Recent advancements in sensor data collection in the clinical sciences lead to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data are difficult and time consuming. Therefore, the development of knowledge-based systems to support clinicians in decision-making is important. However, it is necessary to perform experimental work to compare the performance of different machine learning methods to help select an appropriate method for a specific characteristic of a data set. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks, and support vector machines, in diagnosing the stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability, respectively. In contrast, both the neural network and the support vector machine achieved less than 80% accuracy using both physiological signals.
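A sketch of the comparison on simulated physiological features; k-nearest neighbors stands in for the retrieval step of case-based reasoning, and the finger-temperature and heart-rate-variability values here are synthetic assumptions, not the study's driver data.

```python
# Cross-validated comparison of three classifier families (sketch).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
calm = rng.normal([33.0, 60.0], [1.0, 8.0], size=(100, 2))    # [finger temp C, HRV ms]
stress = rng.normal([31.0, 45.0], [1.0, 8.0], size=(100, 2))
X, y = np.vstack([calm, stress]), np.array([0] * 100 + [1] * 100)

for name, clf in [("CBR-like kNN", KNeighborsClassifier(5)),
                  ("neural network", MLPClassifier(max_iter=2000, random_state=4)),
                  ("SVM", SVC())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} mean CV accuracy")
```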
NASA Astrophysics Data System (ADS)
Wan, Bo; Zhang, Xue-Ying; Chen, Liang; Ge, Hong-Lin; Ma, Fei; Zhang, Hong-Bin; Ju, Yong-Qin; Zhang, Yan-Bin; Li, Yan-Yan; Xu, Xiao-Wei
2015-11-01
A digital pulse shape discrimination system based on a programmable module NI-5772 has been established and tested with an EJ-301 liquid scintillation detector. The module was operated by running programs developed in LabVIEW, with a sampling frequency up to 1.6 GS/s. Standard gamma sources 22Na, 137Cs and 60Co were used to calibrate the EJ-301 liquid scintillation detector, and the gamma response function was obtained. Digital algorithms for the charge comparison method and the zero-crossing method have been developed. The experimental results show that both digital signal processing (DSP) algorithms can discriminate neutrons from γ-rays. Moreover, the zero-crossing method shows better n-γ discrimination at 80 keVee and lower, whereas the charge comparison method gives better results at higher thresholds. In addition, the figures-of-merit (FOM) for detectors of two different dimensions were extracted at 9 energy thresholds, and it was found that the smaller detector presented better n-γ separation for fission neutrons. Supported by National Natural Science Foundation of China (91226107, 11305229) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03030300)
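A sketch of the charge comparison method on synthetic pulses, with illustrative (not EJ-301-calibrated) decay constants: the ratio of tail charge to total charge separates neutron-like pulses, which carry a larger slow scintillation component, and the figure-of-merit quantifies the separation of the two populations.

```python
# Charge-comparison PSD on synthetic scintillation pulses (sketch).
import numpy as np

t = np.arange(0, 200, 0.625)  # ns; 0.625 ns steps, i.e. 1.6 GS/s sampling

def pulse(slow_fraction, rng):
    fast = np.exp(-t / 4.0)    # fast scintillation component
    slow = np.exp(-t / 50.0)   # slow component (larger for neutrons)
    p = (1 - slow_fraction) * fast + slow_fraction * slow
    return p + rng.normal(0, 0.01, t.size)

def psd_ratio(p, gate=30.0):
    return p[t > gate].sum() / p.sum()   # tail charge over total charge

rng = np.random.default_rng(5)
gammas = np.array([psd_ratio(pulse(0.05, rng)) for _ in range(500)])
neutrons = np.array([psd_ratio(pulse(0.20, rng)) for _ in range(500)])

# FOM: peak separation divided by the sum of the two peaks' FWHMs.
fwhm = lambda s: 2.355 * s.std()
fom = abs(neutrons.mean() - gammas.mean()) / (fwhm(neutrons) + fwhm(gammas))
print("FOM:", round(fom, 2))
```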
Evaluation of the Jonker-Volgenant-Castanon (JVC) assignment algorithm for track association
NASA Astrophysics Data System (ADS)
Malkoff, Donald B.
1997-07-01
The Jonker-Volgenant-Castanon (JVC) assignment algorithm was used by Lockheed Martin Advanced Technology Laboratories (ATL) for track association in the Rotorcraft Pilot's Associate (RPA) program. RPA is Army Aviation's largest science and technology program, involving an integrated hardware/software system approach for a next-generation helicopter containing advanced sensor equipment and applying artificial intelligence 'associate' technologies. ATL is responsible for the multisensor, multitarget, onboard/offboard track fusion. McDonnell Douglas Helicopter Systems is the prime contractor and Lockheed Martin Federal Systems is responsible for developing much of the cognitive decision aiding and controls-and-displays subsystems. RPA is scheduled for flight testing beginning in 1997. RPA is unique in requiring real-time tracking and fusion for large numbers of highly maneuverable ground (and air) targets in a target-dense environment. It uses diverse sensors and is concerned with a large area of interest. Target class and identification data are tightly integrated with spatial and kinematic data throughout the processing. Because of platform constraints, processing hardware for track fusion was quite limited. No previous experience using JVC in this type of environment had been reported. ATL performed extensive testing of the JVC, concentrating on error rates and runtimes under a variety of conditions. These included wide-ranging numbers and types of targets, sensor uncertainties, target attributes, differing degrees of target maneuverability, and diverse combinations of sensors. Testing utilized Monte Carlo approaches, as well as many kinds of challenging scenarios. Comparisons were made with a nearest-neighbor algorithm and a new, proprietary algorithm (the 'Competition' algorithm). The JVC proved to be an excellent choice for the RPA environment, providing a good balance between speed of operation and accuracy of results.
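A hedged sketch of assignment-based track association using SciPy's linear_sum_assignment, a Jonker-Volgenant-style solver (not ATL's exact JVC implementation); gating replaces implausible pairings with a prohibitively large cost before the optimal assignment is computed.

```python
# Track-to-measurement association via the linear assignment problem (sketch).
import numpy as np
from scipy.optimize import linear_sum_assignment

tracks = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 5.0]])      # predicted positions
measurements = np.array([[10.5, 9.2], [0.3, -0.4], [19.0, 5.5]])

cost = np.linalg.norm(tracks[:, None, :] - measurements[None, :, :], axis=2)
GATE = 5.0
cost[cost > GATE] = 1e6      # forbid implausible pairings

rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    if cost[r, c] < 1e6:
        print(f"track {r} <- measurement {c} (distance {cost[r, c]:.2f})")
```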
2013-01-01
Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies, larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see, including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms and demonstrates that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis detector is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis detectors are compared. Performance evaluations and comparisons in both simulation and experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves.
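A sketch of the complex-signal kurtosis idea on simulated samples: thermal noise is circular complex Gaussian, for which the normalized kurtosis E|x|^4/(E|x|^2)^2 equals 2, so blocks containing a constant-envelope interferer deviate from 2 and can be flagged. The block size and threshold are illustrative, not the SMAP or ROACH configuration values.

```python
# Complex-signal kurtosis RFI detector on simulated radiometer samples (sketch).
import numpy as np

rng = np.random.default_rng(6)

def complex_kurtosis(x):
    p = np.abs(x) ** 2
    return (p ** 2).mean() / p.mean() ** 2   # = 2 for circular Gaussian noise

block = 65536
noise = (rng.normal(size=block) + 1j * rng.normal(size=block)) / np.sqrt(2)
tone = 1.0 * np.exp(2j * np.pi * 0.1 * np.arange(block))   # CW interferer

for label, x in [("noise only", noise), ("noise + RFI", noise + tone)]:
    k = complex_kurtosis(x)
    flag = "-> flagged" if abs(k - 2.0) > 0.1 else "-> clean"
    print(f"{label}: kurtosis = {k:.3f} {flag}")
```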
Autonomous Vision-Based Tethered-Assisted Rover Docking
NASA Technical Reports Server (NTRS)
Tsai, Dorian; Nesnas, Issa A.D.; Zarzhitsky, Dimitri
2013-01-01
Many intriguing science discoveries on planetary surfaces, such as the seasonal flows on crater walls and skylight entrances to lava tubes, are at sites that are currently inaccessible to state-of-the-art rovers. The in situ exploration of such sites is likely to require a tethered platform both for mechanical support and for providing power and communication. Mother/daughter architectures have been investigated where a mother deploys a tethered daughter into extreme terrains. Deploying and retracting a tethered daughter requires undocking and re-docking of the daughter to the mother, with the latter being the challenging part. In this paper, we describe a vision-based tether-assisted algorithm for the autonomous re-docking of a daughter to its mother following an extreme terrain excursion. The algorithm uses fiducials mounted on the mother to improve the reliability and accuracy of estimating the pose of the mother relative to the daughter. The tether that is anchored by the mother helps the docking process and increases the system's tolerance to pose uncertainties by mechanically aligning the mating parts in the final docking phase. A preliminary version of the algorithm was developed and field-tested on the Axel rover in the JPL Mars Yard. The algorithm achieved an 80% success rate in 40 experiments in both firm and loose soils and starting from up to 6 m away at up to 40 deg radial angle and 20 deg relative heading. The algorithm does not rely on an initial estimate of the relative pose. The preliminary results are promising and help retire the risk associated with the autonomous docking process enabling consideration in future martian and lunar missions.
Atmospheric correction at AERONET locations: A new science and validation data set
Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.
2009-01-01
This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system and AERONET aerosol and water-vapor information. Then, it performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on a time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated for a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) the filtering of opaque clouds by a CM algorithm; 2) the development of an aerosol filter to filter residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) imposing the requirement of the consistency of the new solution with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to the surface changes using the last day of measurements; and 5) development of a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides a gapless or near-gapless coverage for the processing area. The gaps, caused by clouds, are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo, normalized BRF (computed for a standard viewing geometry, VZA = 0°, SZA = 45°), and instantaneous BRF (or one-angle BRF value derived from the last day of MODIS measurement for specific viewing geometry) for the MODIS 500-m bands 1-7. The results are produced daily at a resolution of 1 km in gridded format. We also provide a cloud mask, a quality flag, and a browse bitmap image. The ASRVN data set, including 6 years of MODIS TERRA and 1.5 years of MODIS AQUA data, is available now as a standard MODIS product (MODASRVN) which can be accessed through the Level 1 and Atmosphere Archive and Distribution System website ( http://ladsweb.nascom.nasa.gov/data/search.html). It can be used for a wide range of applications including validation analysis and science research. © 2006 IEEE.
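The kernel fit at the core of the AC step is linear in the three LSRT coefficients, BRF = kL + kG·fG + kV·fV, so the 16-day accumulation can be fit by ordinary least squares. In the sketch below the kernel values fG and fV are random placeholders, not values computed from real sun-view geometry.

```python
# Linear least-squares fit of LSRT kernel coefficients (sketch).
import numpy as np

rng = np.random.default_rng(7)
n_obs = 16                                 # one cloud-free observation per day
fG = rng.uniform(-1.0, 0.0, n_obs)         # geometric (Li Sparse) kernel values
fV = rng.uniform(-0.1, 0.6, n_obs)         # volumetric (Ross Thick) kernel values
k_true = np.array([0.35, 0.05, 0.10])      # [kL, kG, kV]

A = np.column_stack([np.ones(n_obs), fG, fV])
brf = A @ k_true + rng.normal(0, 0.005, n_obs)   # noisy "measurements"

k_fit, *_ = np.linalg.lstsq(A, brf, rcond=None)
print("recovered kL, kG, kV:", np.round(k_fit, 3))
```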
Real-Time On-Board Processing Validation of MSPI Ground Camera Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.
2010-01-01
The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, including PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
Goddard high resolution spectrograph science verification and data analysis
NASA Technical Reports Server (NTRS)
1992-01-01
The data analysis performed in support of the Orbital Verification (OV) and Science Verification (SV) of the GHRS was in the areas of the Digicon detector's performance and stability, wavelength calibration, and geomagnetically induced image motion. The results of the analyses are briefly described. Detailed results are given in the form of attachments. Specialized software was developed for the analyses. Calibration files were formatted according to the specifications in a Space Telescope Science report. IRAS images of the Large Magellanic Cloud were restored using a blocked iterative algorithm. The algorithm works with the raw data scans without regridding or interpolating the data on an equally spaced image grid.
Robot Science Autonomy in the Atacama Desert and Beyond
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wettergreen, David S.
2013-01-01
Science-guided autonomy augments rovers with reasoning to make observations and take actions related to the objectives of scientific exploration. When rovers can directly interpret instrument measurements then scientific goals can inform and adapt ongoing navigation decisions. These autonomous explorers will make better scientific observations and collect massive, accurate datasets. In current astrobiology studies in the Atacama Desert we are applying algorithms for science autonomy to choose effective observations and measurements. Rovers are able to decide when and where to take follow-up actions that deepen scientific understanding. These techniques apply to planetary rovers, which we can illustrate with algorithms now used by Mars rovers and by discussing future missions.
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
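A sketch of the classical Gerchberg-Saxton iteration that MGS builds on (MGS itself adds phase diversity and other refinements not shown here): alternate between pupil and focal planes via FFTs, imposing the known amplitude in each plane while keeping the evolving phase estimate. The GPU version runs exactly these FFT-heavy steps on graphics hardware.

```python
# Classical Gerchberg-Saxton phase retrieval with NumPy FFTs (sketch).
import numpy as np

rng = np.random.default_rng(8)
N = 64
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil_amp = (xx ** 2 + yy ** 2 < (N // 4) ** 2).astype(float)   # circular aperture

true_phase = 0.3 * np.sin(2 * np.pi * xx / N)                   # unknown aberration
focal_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))  # measured image

phase = rng.uniform(-0.1, 0.1, (N, N))                          # initial guess
for _ in range(200):
    field = np.fft.fft2(pupil_amp * np.exp(1j * phase))
    field = focal_amp * np.exp(1j * np.angle(field))   # impose measured amplitude
    phase = np.angle(np.fft.ifft2(field))              # keep phase, re-impose pupil

model = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * phase)))
resid = np.linalg.norm(model - focal_amp) / np.linalg.norm(focal_amp)
print("relative focal-plane amplitude residual:", round(float(resid), 4))
```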
Creating Science Simulations through Computational Thinking Patterns
ERIC Educational Resources Information Center
Basawapatna, Ashok Ram
2012-01-01
Computational thinking aims to outline fundamental skills from computer science that everyone should learn. As currently defined, with help from the National Science Foundation (NSF), these skills include problem formulation, logically organizing data, automating solutions through algorithmic thinking, and representing data through abstraction.…
A visual analytic framework for data fusion in investigative intelligence
NASA Astrophysics Data System (ADS)
Cai, Guoray; Gross, Geoff; Llinas, James; Hall, David
2014-05-01
Intelligence analysis depends on data fusion systems to provide capabilities for detecting and tracking important objects, events, and their relationships in connection with an analytical situation. However, automated data fusion technologies are not mature enough to offer reliable and trustworthy information for situation awareness. Given the trend of increasing sophistication of data fusion algorithms and the loss of transparency in the data fusion process, analysts are left out of the data fusion process cycle with little to no control over, or confidence in, the data fusion outcome. Following the recent rethinking of data fusion as a human-centered process, this paper proposes a conceptual framework towards developing an alternative data fusion architecture. This idea is inspired by the recent advances in our understanding of human cognitive systems, the science of visual analytics, and the latest thinking about human-centered data fusion. Our conceptual framework is supported by an analysis of the limitations of existing fully automated data fusion systems, where the effectiveness of important algorithmic decisions depends on the availability of expert knowledge or knowledge of the analyst's mental state in an investigation. The success of this effort will result in next-generation data fusion systems that can be better trusted while maintaining high throughput.
Simulation of finite-strain inelastic phenomena governed by creep and plasticity
NASA Astrophysics Data System (ADS)
Li, Zhen; Bloomfield, Max O.; Oberai, Assad A.
2017-11-01
Inelastic mechanical behavior plays an important role in many applications in science and engineering. Phenomenologically, this behavior is often modeled as plasticity or creep. Plasticity is used to represent the rate-independent component of inelastic deformation and creep is used to represent the rate-dependent component. In several applications, especially those at elevated temperatures and stresses, these processes occur simultaneously. In order to model these processes, we develop a rate-objective, finite-deformation constitutive model for plasticity and creep. The plastic component of this model is based on rate-independent J2 plasticity, and the creep component is based on a thermally activated Norton model. We describe the implementation of this model within a finite element formulation, and present a radial return mapping algorithm for it. This approach reduces the additional complexity of modeling plasticity and creep, over thermoelasticity, to just solving one nonlinear scalar equation at each quadrature point. We implement this algorithm within a multiphysics finite element code and evaluate the consistent tangent through automatic differentiation. We verify and validate the implementation, apply it to modeling the evolution of stresses in the flip chip manufacturing process, and test its parallel strong-scaling performance.
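A sketch of the radial return idea for the rate-independent part of the model, reduced to linear isotropic hardening so the scalar consistency equation has a closed form; with creep or nonlinear hardening, the same single scalar equation per quadrature point is solved by a Newton iteration instead. The material constants are illustrative.

```python
# Radial return mapping for small-strain J2 plasticity (sketch).
import numpy as np

G, sigma_y, H = 80e3, 250.0, 1e3   # shear modulus, yield stress, hardening (MPa)

def radial_return(s_trial, eps_p_bar):
    """s_trial: deviatoric trial stress (3x3); eps_p_bar: accumulated plastic strain."""
    q = np.sqrt(1.5) * np.linalg.norm(s_trial)     # von Mises equivalent stress
    f = q - (sigma_y + H * eps_p_bar)              # yield function
    if f <= 0.0:
        return s_trial, eps_p_bar                  # elastic step, no correction
    dgamma = f / (3.0 * G + H)                     # scalar consistency equation
    s_new = s_trial * (1.0 - 3.0 * G * dgamma / q) # return along the radial direction
    return s_new, eps_p_bar + dgamma

s_trial = np.diag([200.0, -100.0, -100.0])         # trace-free (deviatoric) trial state
s, ep = radial_return(s_trial, 0.0)
print("updated equivalent stress:", round(np.sqrt(1.5) * np.linalg.norm(s), 1), "MPa")
```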
JPSS Science Data Services for the Direct Readout Community
NASA Technical Reports Server (NTRS)
Chander, Gyanesh; Lutz, Bob
2014-01-01
The Suomi National Polar-orbiting Partnership (S-NPP) and Joint Polar Satellite System (JPSS) High Rate Data (HRD) link provides Direct Broadcast data to users in real-time, utilizing their own remote field terminals. The Field Terminal Support (FTS) provides the resources needed to support the Direct Readout communities by providing software, documentation, and periodic updates to enable them to produce data products from S-NPP and JPSS. The FTS distribution server will also provide the necessary ancillary and auxiliary data needed for processing the broadcasts, as well as making orbital data available to assist in locating the satellites of interest. In addition, the FTS provides development support for the algorithms and software through the GSFC Direct Readout Laboratory (DRL) International Polar Orbiter Processing Package (IPOPP) and the University of Wisconsin (UWISC) Community Satellite Processing Package (CSPP), to enable users to integrate the algorithms into their remote terminals. The support the JPSS Program provides to the institutions developing and maintaining these two software packages will demonstrate the ability to produce ready-to-use products from the HRD link and provide a risk reduction effort at minimal cost. This paper discusses the key functions and system architecture of FTS.
Spatiotemporal stochastic models for earth science and engineering applications
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1998-12-01
Spatiotemporal processes occur in many areas of earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time data processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. The space-time continuity characterization is one of the most important aspects of S/TRF modelling, where the space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is proposed with the development of the sequential group Gaussian simulation (SGGS) techniques, which is actually a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed with different covariance models and simulation grids. The simulated annealing technique honoring experimental variograms is also proposed, providing a way of conditional simulation without the covariance model fitting that is a prerequisite for most simulation algorithms. The proposed techniques were first applied for modelling of the pressure system in a carbonate reservoir, and then applied for modelling of springwater contents in the Dyle watershed. The results of these case studies as well as the theory suggest that these techniques are realistic and feasible.
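A minimal ordinary kriging sketch in the spirit of the spatiotemporal kriging developed in the thesis, restricted to a purely spatial exponential covariance for brevity; the bordered system enforces the unbiasedness constraint via a Lagrange multiplier. Points, values, and model parameters are arbitrary.

```python
# Ordinary kriging: solve the bordered system for weights and estimate (sketch).
import numpy as np

def cov(h, sill=1.0, corr_range=10.0):
    return sill * np.exp(-h / corr_range)   # exponential covariance model

pts = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 8.0], [9.0, 9.0]])
vals = np.array([1.2, 2.3, 0.7, 1.9])
target = np.array([4.0, 4.0])

n = len(pts)
H = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(H); A[n, n] = 0.0
b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(pts - target, axis=1))

w = np.linalg.solve(A, b)                   # kriging weights + Lagrange multiplier
estimate = w[:n] @ vals
variance = cov(0.0) - w @ b                 # kriging (estimation) variance
print(f"kriging estimate {estimate:.3f}, kriging variance {variance:.3f}")
```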
Invariant patterns in crystal lattices: Implications for protein folding algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
HART,WILLIAM E.; ISTRAIL,SORIN
2000-06-01
Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.
Photometric Analysis in the Kepler Science Operations Center Pipeline
NASA Technical Reports Server (NTRS)
Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.
2010-01-01
We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (thirty minute) and 512 short cadence (one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss the science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
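A sketch of the two primary PA computations on a toy calibrated pixel stamp (a synthetic star plus background, not Kepler flight data): background-subtracted aperture photometry and the flux-weighted centroid.

```python
# Aperture photometry and flux-weighted centroiding on a toy stamp (sketch).
import numpy as np

rng = np.random.default_rng(9)
yy, xx = np.mgrid[0:11, 0:11]
star = 1000.0 * np.exp(-((xx - 5.3) ** 2 + (yy - 4.8) ** 2) / 2.0)
pixels = star + 50.0 + rng.normal(0, 3.0, star.shape)   # background + noise

aperture = (xx - 5) ** 2 + (yy - 5) ** 2 <= 3 ** 2      # pixels in the aperture
background = np.median(pixels[~aperture])               # crude background estimate
flux_map = pixels - background

flux = flux_map[aperture].sum()                         # aperture photometry
cx = (xx[aperture] * flux_map[aperture]).sum() / flux   # flux-weighted centroid
cy = (yy[aperture] * flux_map[aperture]).sum() / flux
print(f"flux = {flux:.0f} e-, centroid = ({cx:.2f}, {cy:.2f})")
```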
Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data
NASA Astrophysics Data System (ADS)
Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.
2014-09-01
The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduces the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the baseline implementation running on the same machine. We will present an overview of the algorithms and results that demonstrate the scalability of our concepts.
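A sketch of the k-d tree screening idea using SciPy's cKDTree: index one epoch of propagated positions and retrieve all pairs within a screening radius, replacing the all-pairs O(N^2) test with a spatial query. The positions here are uniform random stand-ins for a propagated catalog, and a full system would repeat the query across the analysis window.

```python
# Spatial screening of candidate conjunctions with a k-d tree (sketch).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(10)
n_objects = 100_000
positions = rng.uniform(-7000.0, 7000.0, size=(n_objects, 3))   # km, toy catalog

tree = cKDTree(positions)
pairs = tree.query_pairs(r=10.0)   # candidate conjunctions within 10 km
print(len(pairs), "object pairs require high-fidelity conjunction analysis")
```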
Algorithm Animations for Teaching and Learning the Main Ideas of Basic Sortings
ERIC Educational Resources Information Center
Végh, Ladislav; Stoffová, Veronika
2017-01-01
Algorithms are hard to understand for novice computer science students because they dynamically modify values of elements of abstract data structures. Animations can help to understand algorithms, since they connect abstract concepts to real life objects and situations. In the past 30-35 years, there have been conducted many experiments in the…
Creating Engaging Online Learning Material with the JSAV JavaScript Algorithm Visualization Library
ERIC Educational Resources Information Center
Karavirta, Ville; Shaffer, Clifford A.
2016-01-01
Data Structures and Algorithms are a central part of Computer Science. Due to their abstract and dynamic nature, they are a difficult topic to learn for many students. To alleviate these learning difficulties, instructors have turned to algorithm visualizations (AV) and AV systems. Research has shown that especially engaging AVs can have an impact…
The Pan-STARRS PS1 Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Magnier, E.
The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It also is responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.
An Online Algorithm for Maximizing Submodular Functions
2007-12-20
Streeter, Matthew; Golovin, Daniel. CMU-CS-07-171, School of Computer Science, Carnegie Mellon University. In theory, the online algorithms could be used to adapt a marketing campaign to unknown or time-varying social network dynamics.
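For context, a sketch of the offline greedy baseline that online submodular maximization is measured against: repeatedly pick the element with the largest marginal gain, which achieves a (1 - 1/e) approximation for monotone submodular objectives such as set cover. The example universe is arbitrary.

```python
# Greedy maximization of a monotone submodular function (set cover sketch).
def greedy_max_cover(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break                    # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
chosen, covered = greedy_max_cover(sets, k=2)
print("chosen:", chosen, "covers", len(covered), "items")   # ['c', 'a'] covers 7
```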
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
Universal Darwinism As a Process of Bayesian Inference
Campbell, John O.
2016-01-01
Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an “experiment” in the external world environment, and the results of that “experiment” or the “surprise” entailed by predicted and actual outcomes of the “experiment.” Minimization of free energy implies that the implicit measure of “surprise” experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature. PMID:27375438
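A worked sketch of the correspondence the paper describes: a discrete replicator update, in which fitness reweights population frequencies, has exactly the form of a Bayesian update with fitness playing the role of the likelihood. The numbers are arbitrary.

```python
# Replicator dynamics as a Bayesian update (sketch).
import numpy as np

prior = np.array([0.5, 0.3, 0.2])     # population frequencies of three variants
fitness = np.array([1.0, 2.0, 4.0])   # plays the role of the likelihood

posterior = prior * fitness / (prior * fitness).sum()   # Bayes / replicator update
print("next-generation frequencies:", np.round(posterior, 3))  # [0.263 0.316 0.421]
```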
Cox process representation and inference for stochastic reaction-diffusion processes
NASA Astrophysics Data System (ADS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2016-05-01
Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
Autonomy and Privacy in Clinical Laboratory Science Policy and Practice.
Leibach, Elizabeth Kenimer
2014-01-01
Rapid advancements in diagnostic technologies coupled with growth in testing options and choices mandate the development of evidence-based testing algorithms linked to the care paths of the major chronic diseases and health challenges encountered most frequently. As care paths are evaluated, patient/consumers become partners in healthcare delivery. Clinical laboratory scientists find themselves firmly embedded in both quality improvement and clinical research with an urgent need to translate clinical laboratory information into knowledge required by practitioners and patient/consumers alike. To implement this patient-centered care approach in clinical laboratory science, practitioners must understand their roles in (1) protecting patient/consumer autonomy in the healthcare informed consent process and (2) assuring patient/consumer privacy and confidentiality while blending quality improvement study findings with protected health information. A literature review, describing the current ethical environment, supports a consultative role for clinical laboratory scientists in the clinical decision-making process and suggests guidance for policy and practice regarding the principle of autonomy and its associated operational characteristics: informed consent and privacy.
NASA Astrophysics Data System (ADS)
Gong, Weiwei; Zhou, Xu
2017-06-01
In computer science, the Boolean Satisfiability Problem (SAT) is the problem of determining whether there exists an interpretation that satisfies a given Boolean formula. SAT was the first problem proven to be NP-complete, and it is fundamental to artificial intelligence, algorithm design, and hardware design. This paper reviews the main SAT solver algorithms of recent years, including serial SAT algorithms, parallel SAT algorithms, SAT algorithms based on GPUs, and SAT algorithms based on FPGAs. The development of SAT solving is analyzed comprehensively in this paper. Finally, several possible directions for the development of the SAT problem are proposed.
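A compact DPLL sketch, the recursive backbone that the serial solvers surveyed here extend with clause learning, watched literals, and restarts; clauses are lists of nonzero integers, with a negative sign denoting negation.

```python
# Minimal DPLL SAT solver: simplification, unit propagation, branching (sketch).
def dpll(clauses, assignment=frozenset()):
    # Simplify under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(l in assignment for l in clause):
            continue                          # clause already satisfied
        reduced = [l for l in clause if -l not in assignment]
        if not reduced:
            return None                       # empty clause: conflict
        simplified.append(reduced)
    if not simplified:
        return assignment                     # all clauses satisfied
    # Unit propagation; otherwise branch on the first literal of a clause.
    unit = next((c[0] for c in simplified if len(c) == 1), None)
    if unit is not None:
        return dpll(simplified, assignment | {unit})
    lit = simplified[0][0]
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
print("satisfying assignment:", dpll(cnf))    # e.g. frozenset({1, 3, -2})
```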
The Hubble Catalog of Variables
NASA Astrophysics Data System (ADS)
Gavras, P.; Bonanos, A. Z.; Bellas-Velidis, I.; Charmandaris, V.; Georgantopoulos, I.; Hatzidimitriou, D.; Kakaletris, G.; Karampelas, A.; Laskaris, N.; Lennon, D. J.; Moretti, M. I.; Pouliasis, E.; Sokolovsky, K.; Spetsieri, Z. T.; Tsinganos, K.; Whitmore, B. C.; Yang, M.
2017-06-01
The Hubble Catalog of Variables (HCV) is a 3-year ESA-funded project that aims to develop a set of algorithms to identify variables among the sources included in the Hubble Source Catalog (HSC) and produce the HCV. We will process all HSC sources with more than a predefined number of measurements in a single filter/instrument combination and compute a range of lightcurve features to determine the variability status of each source. At the end of the project, the first release of the Hubble Catalog of Variables will be made available at the Mikulski Archive for Space Telescopes (MAST) and the ESA Science Archives. The variability detection pipeline will be implemented at the Space Telescope Science Institute (STScI) so that updated versions of the HCV may be created following future releases of the HSC.
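A sketch of one simple variability statistic of the kind such a pipeline can compute per source (the HCV's actual feature set is not specified here): the reduced chi-squared of a lightcurve about its mean, given per-point photometric errors; constant sources cluster near 1.

```python
# Flagging variable sources with a reduced chi-squared statistic (sketch).
import numpy as np

rng = np.random.default_rng(11)
n_epochs, err = 40, 0.02   # 40 measurements with 0.02 mag errors (illustrative)

def reduced_chi2(mags, sigma):
    return np.sum((mags - mags.mean()) ** 2 / sigma ** 2) / (mags.size - 1)

constant = 18.0 + rng.normal(0, err, n_epochs)
variable = (18.0 + 0.1 * np.sin(np.linspace(0, 6 * np.pi, n_epochs))
            + rng.normal(0, err, n_epochs))

for name, lc in [("constant source", constant), ("variable source", variable)]:
    chi2 = reduced_chi2(lc, err)
    print(f"{name}: reduced chi^2 = {chi2:.1f}", "-> variable" if chi2 > 3 else "")
```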
The information science of microbial ecology.
Hahn, Aria S; Konwar, Kishori M; Louca, Stilianos; Hanson, Niels W; Hallam, Steven J
2016-06-01
A revolution is unfolding in microbial ecology where petabytes of 'multi-omics' data are produced using next generation sequencing and mass spectrometry platforms. This cornucopia of biological information has enormous potential to reveal the hidden metabolic powers of microbial communities in natural and engineered ecosystems. However, to realize this potential, the development of new technologies and interpretative frameworks grounded in ecological design principles is needed to overcome computational and analytical bottlenecks. Here we explore the relationship between microbial ecology and information science in the era of cloud-based computation. We consider microorganisms as individual information processing units implementing a distributed metabolic algorithm and describe developments in ecoinformatics and ubiquitous computing with the potential to eliminate bottlenecks and empower knowledge creation and translation. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sjoberg, W.; McWilliams, G.
2017-12-01
This presentation will focus on the continuity of the NOAA Joint Polar Satellite System (JPSS) Program's Proving Ground and Risk Reduction (PGRR) and key activities of the PGRR Initiatives. The PGRR Program was established in 2012, following the launch of the Suomi National Polar-orbiting Partnership (SNPP) satellite. The JPSS Program Office has used two PGRR Project Proposals to establish an effective approach to managing its science and algorithm teams in order to focus on key NOAA missions. The presenter will provide details of the Initiatives and the processes they use that have proven so successful. Details of the new 2017 PGRR Call-for-Proposals and the status of project selections will be discussed.
Meneses, Anderson Alvarenga de Moura; Palheta, Dayara Bastos; Pinheiro, Christiano Jorge Gomes; Barroso, Regina Cely Rodrigues
2018-03-01
X-ray Synchrotron Radiation Micro-Computed Tomography (SR-µCT) allows better three-dimensional visualization at higher spatial resolution, contributing to the discovery of features that are not observable with conventional radiography. The automatic segmentation of SR-µCT scans is highly valuable due to its numerous applications in the geological sciences, especially for the morphology, typology, and characterization of rocks. For a great number of µCT scan slices, a manual segmentation process would be impractical, both for the time required and for the accuracy of the results. Aiming at the automatic segmentation of SR-µCT geological sample images, we applied and compared Energy Minimization via Graph Cuts (GC) algorithms and Artificial Neural Networks (ANNs), as well as the well-known K-means and Fuzzy C-Means algorithms. The Dice Similarity Coefficient (DSC), Sensitivity, and Precision were the metrics used for comparison. Kruskal-Wallis and Dunn's tests were applied, and the best methods were the GC algorithms and ANNs (with Levenberg-Marquardt and Bayesian Regularization). For those algorithms, an approximate Dice Similarity Coefficient of 95% was achieved. Our results confirm that these algorithms can be used for segmentation and subsequent quantification of the porosity of an igneous rock sample SR-µCT scan. Copyright © 2017 Elsevier Ltd. All rights reserved.
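For readers unfamiliar with the comparison metrics named above, a small sketch of how the Dice Similarity Coefficient, sensitivity, and precision are typically computed from binary masks follows; the definitions are standard, and the masks and names are hypothetical.

```python
import numpy as np

def dice(seg, ref):
    """Dice Similarity Coefficient between two binary masks (True = foreground)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def sensitivity_precision(seg, ref):
    """Sensitivity (recall) = TP/(TP+FN); precision = TP/(TP+FP)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    tp = np.logical_and(seg, ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical 2x3 masks: automatic segmentation vs. manual reference.
auto = np.array([[1, 1, 0], [0, 1, 0]], bool)
manual = np.array([[1, 1, 0], [0, 0, 1]], bool)
print(dice(auto, manual), sensitivity_precision(auto, manual))
```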
NASA/Ames Research Center's science and applications aircraft program
NASA Technical Reports Server (NTRS)
Hall, G. Warren
1991-01-01
NASA-Ames Research Center operates a fleet of seven Science and Applications Aircraft, namely the C-141/Kuiper Airborne Observatory (KAO), DC-8, C-130, Lear Jet, and three ER-2s. These aircraft are used to satisfy two major objectives, each of equal importance. The first is to acquire remote and in-situ scientific data in astronomy, astrophysics, earth sciences, ocean processes, atmospheric physics, meteorology, materials processing and life sciences. The second major objective is to expedite the development of sensors and their attendant algorithms for ultimate use in space and to simulate, from an aircraft, the data to be acquired from spaceborne sensors. NASA-Ames Science and Applications Aircraft are recognized as national and international facilities. They have performed, and will continue to perform, operational missions from bases in the United States and worldwide. Historically, twice as many investigators have requested flight time as could be accommodated. This situation remains true today and is expected to intensify in the years ahead. A major advantage of the existing fleet of aircraft is their ability to cover a large expanse of the earth's ecosystem from the surface to the lower stratosphere over large distances and long times aloft. Their large payload capability allows a number of scientists to use multi-investigator sensor suites for simultaneous and complementary data gathering. In-flight changes to the sensors or data systems have greatly reduced the time required to optimize the development of new instruments. It is doubtful that spaceborne systems will ever totally replace the need for airborne science aircraft. The operations philosophy and capabilities exist at NASA-Ames Research Center.
A review on quantum search algorithms
NASA Astrophysics Data System (ADS)
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
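The quadratic speed-up of Grover's search can be illustrated with a small state-vector simulation: the oracle flips the phase of the marked item and the diffusion step inverts all amplitudes about their mean, repeated roughly (pi/4)*sqrt(N) times. The register size and marked index below are arbitrary examples.

```python
import numpy as np

def grover_search(n_qubits, marked, iterations=None):
    """State-vector simulation of Grover's search for a single marked item."""
    N = 2 ** n_qubits
    if iterations is None:
        iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~optimal iteration count
    state = np.full(N, 1.0 / np.sqrt(N))                  # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1.0                             # oracle: phase flip on target
        state = 2.0 * state.mean() - state                # diffusion: inversion about mean
    return int(np.argmax(np.abs(state) ** 2)), float(np.abs(state[marked]) ** 2)

index, prob = grover_search(n_qubits=8, marked=42)        # N = 256 database entries
print(index, round(prob, 4))                              # marked item found with high probability
```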
NASA Astrophysics Data System (ADS)
Salloum, Sara; BouJaoude, Saouma
2017-08-01
The purpose of this research is to better understand the uses and potential of triadic dialogue (initiation-response-feedback) as a dominant discourse pattern in test-driven environments. We used a Bakhtinian dialogic perspective to analyze interactions among high-stakes tests and triadic dialogue. Specifically, the study investigated (a) the global influence of high-stakes tests on knowledge types and cognitive processes presented and elicited by the science teacher in triadic dialogue and (b) the teacher's meaning making of her discourse patterns. The classroom talk occurred in a classroom where the teacher tried to balance conceptual learning with helping low-income public school students pass the national tests. Videos and transcripts of 20 grade 8 and 9 physical science sessions were analyzed qualitatively. Teacher utterances were categorized in terms of science knowledge types and cognitive processes. Explicitness and directionality of shifts among different knowledge types were analyzed. It was found that shifts between factual/conceptual/procedural-algorithmic and procedural inquiry were mostly dialectical and implicit, and dominated the body of concept development lessons. These shifts called for medium-level cognitive processes. Shifts between the different knowledge types and procedural-testing were more explicit and occurred mostly at the end of lessons. Moreover, the science teacher's focus on success and high expectations, her explicitness in dealing with high-stakes tests, and the relaxed atmosphere she created built a constructive partnership with the students toward a common goal of cracking the test. We discuss findings from a Bakhtinian dialogic perspective and the potential of triadic dialogue for teachers negotiating multiple goals and commitments.
Autonomous Surface Sample Acquisition for Planetary and Lunar Exploration
NASA Astrophysics Data System (ADS)
Barnes, D. P.
2007-08-01
Surface science sample acquisition is a critical activity within any planetary and lunar exploration mission, and our research is focused upon the design, implementation, experimentation and demonstration of an onboard autonomous surface sample acquisition capability for a rover equipped with a robotic arm upon which are mounted appropriate science instruments. Images captured by a rover stereo camera system can be processed using shape-from-stereo methods and a digital elevation model (DEM) generated. We have developed a terrain feature identification algorithm that can determine autonomously from DEM data suitable regions for instrument placement and/or surface sample acquisition. Once identified, surface normal data can be generated autonomously, which are then used to calculate an arm trajectory for instrument placement and sample acquisition. Once an instrument placement and sample acquisition trajectory has been calculated, a collision detection algorithm is required to ensure the safe operation of the arm during sample acquisition. We have developed a novel adaptive 'bounding spheres' approach to this problem. Once potential science targets have been identified, and these are within the reach of the arm and will not cause any undesired collision, the 'cost' of executing the sample acquisition activity must be determined. Such information, which includes power expenditure and duration, can be used to select the 'best' target from a set of potential targets. We have developed a science sample acquisition resource requirements calculation that utilises differential inverse kinematics methods to yield a high fidelity result, thus improving upon simple first-order approximations. To test our algorithms a new Planetary Analogue Terrain (PAT) Laboratory has been created that has a terrain region composed of Mars Soil Simulant-D from DLR Germany, and rocks that have been fully characterised in the laboratory. These have been donated by the UK Planetary Analogue Field Study network, and constitute the science targets for our autonomous sample acquisition work. Our PAT Lab terrain has been designed to support our new rover chassis, which is based upon the ExoMars rover Concept-E mechanics investigated during the ESA ExoMars Phase A study. The rover has six-wheel drive, six-wheel steering, and a six-wheel walking capability. Mounted on the rover chassis is the UWA robotic arm and mast. We have designed and built a PanCam system complete with a computer-controlled pan-and-tilt mechanism. The UWA PanCam is based upon the ExoMars PanCam (Phase A study) and hence supports two Wide Angle Cameras (WAC - 64 degree FOV) and a High Resolution Camera (HRC - 5 degree FOV). WAC separation is 500 mm. Software has been developed to capture images which form the data input into our on-board autonomous surface sample acquisition algorithms.
SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data
NASA Astrophysics Data System (ADS)
Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.
2015-12-01
Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework, that extends Apache™ Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache™ Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show the usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate the performance of the various matrix libraries in distributed pipelines, such as Nd4j™ and Breeze™. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.
Enhanced round robin CPU scheduling with burst time based time quantum
NASA Astrophysics Data System (ADS)
Indusree, J. R.; Prabadevi, B.
2017-11-01
Process scheduling is a very important function of an operating system. The main well-known process-scheduling algorithms are the First Come First Serve (FCFS) algorithm, the Round Robin (RR) algorithm, the Priority scheduling algorithm, and the Shortest Job First (SJF) algorithm. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes already in the ready queue. The effectiveness of the RR algorithm greatly depends on the chosen time quantum value. In this paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), a process-scheduling algorithm which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
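The idea of deriving the quantum from the burst times of the queued processes can be sketched as follows; the quantum rule used here (the mean remaining burst of the ready queue) is an assumption for illustration and is not necessarily the exact ERRBTQ formula from the paper.

```python
from collections import deque
from statistics import mean

def round_robin_dynamic_quantum(burst_times):
    """Round Robin with a quantum recomputed each round from the remaining bursts
    of the ready queue (here: their mean, a stand-in for the ERRBTQ rule).
    All processes are assumed to arrive at time zero; returns waiting times."""
    remaining = dict(enumerate(burst_times))
    ready = deque(remaining)
    waiting = {p: 0.0 for p in remaining}
    while ready:
        quantum = mean(remaining[p] for p in ready)        # burst-time based quantum
        for _ in range(len(ready)):
            p = ready.popleft()
            run = min(quantum, remaining[p])
            for q in ready:                                # every other queued process waits
                waiting[q] += run
            remaining[p] -= run
            if remaining[p] > 1e-9:                        # not finished: requeue
                ready.append(p)
    return waiting

print(round_robin_dynamic_quantum([24, 3, 3]))             # e.g. {0: 6.0, 1: 10.0, 2: 13.0}
```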
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. Presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation Illustrates theoretical concepts with reference to practical applications Provides insights into the design parameters of real systems
An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balman, Mehmet; Kosar, Tevfik
Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long term storage. In order to support increasingly data-intensive science, next generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher level meta-schedulers to use data placement as a service where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
Real-time multi-channel stimulus artifact suppression by local curve fitting.
Wagenaar, Daniel A; Potter, Steve M
2002-10-30
We describe an algorithm for suppression of stimulation artifacts in extracellular micro-electrode array (MEA) recordings. A model of the artifact based on locally fitted cubic polynomials is subtracted from the recording, yielding a flat baseline amenable to spike detection by voltage thresholding. The algorithm, SALPA, reduces the period after stimulation during which action potentials cannot be detected by an order of magnitude, to less than 2 ms. Our implementation is fast enough to process 60-channel data sampled at 25 kHz in real-time on an inexpensive desktop PC. It performs well on a wide range of artifact shapes without re-tuning any parameters, because it accounts for amplifier saturation explicitly and uses a statistic to verify successful artifact suppression immediately after the amplifiers become operational. We demonstrate the algorithm's effectiveness on recordings from dense monolayer cultures of cortical neurons obtained from rat embryos. SALPA opens up a previously inaccessible window for studying transient neural oscillations and precisely timed dynamics in short-latency responses to electric stimulation. Copyright 2002 Elsevier Science B.V.
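A simplified, non-causal version of the local-curve-fitting idea behind SALPA can be sketched with NumPy: fit a cubic polynomial in a window around each sample and subtract it. The real algorithm additionally uses a causal recursive fit, handles amplifier saturation, and verifies successful suppression, none of which is reproduced here; the window length is a placeholder.

```python
import numpy as np

def subtract_local_cubic(trace, window=75):
    """Fit a cubic polynomial in a sliding window around each sample and subtract
    it, flattening slow artifact transients while leaving fast spikes intact.
    (Simplified sketch; not the published SALPA implementation.)"""
    trace = np.asarray(trace, float)
    half = window // 2
    cleaned = np.empty_like(trace)
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        x = np.arange(lo, hi)
        coeffs = np.polyfit(x, trace[lo:hi], deg=3)        # local cubic artifact model
        cleaned[i] = trace[i] - np.polyval(coeffs, i)
    return cleaned
```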
HALOE Algorithm Improvements for Upper Tropospheric Sounding
NASA Technical Reports Server (NTRS)
McHugh, Martin J.; Gordley, Larry L.; Russell, James M., III; Hervig, Mark E.
1999-01-01
This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Soundings." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first-year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multi-channel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.
HALOE Algorithm Improvements for Upper Tropospheric Sounding
NASA Technical Reports Server (NTRS)
Thompson, Robert Earl; McHugh, Martin J.; Gordley, Larry L.; Hervig, Mark E.; Russell, James M., III; Douglass, Anne (Technical Monitor)
2001-01-01
This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth Upper Atmospheric Research Satellite (UARS) Science Investigator Program entitled 'HALOE Algorithm Improvements for Upper Tropospheric Sounding.' The goal of this effort is to develop and implement major inversion and processing improvements that will extend Halogen Occultation Experiment (HALOE) measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
Directed Field Ionization: A Genetic Algorithm for Evolving Electric Field Pulses
NASA Astrophysics Data System (ADS)
Kang, Xinyue; Rowley, Zoe A.; Carroll, Thomas J.; Noel, Michael W.
2017-04-01
When an ionizing electric field pulse is applied to a Rydberg atom, the electron's amplitude traverses many avoided crossings among the Stark levels as the field increases. The resulting superposition determines the shape of the time resolved field ionization spectrum at a detector. An engineered electric field pulse that sweeps back and forth through avoided crossings can control the phase evolution so as to determine the electron's path through the Stark map. In the region of n = 35 in rubidium there are hundreds of potential avoided crossings; this yields a large space of possible pulses. We use a genetic algorithm to search this space and evolve electric field pulses to direct the ionization of the Rydberg electron in rubidium. We present the algorithm along with a comparison of simulated and experimental results. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377 and used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant Number OCI-1053575.
Algorithmic psychometrics and the scalable subject.
Stark, Luke
2018-04-01
Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.
Building a Snow Data Management System using Open Source Software (and IDL)
NASA Astrophysics Data System (ADS)
Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.
2012-12-01
At NASA's Jet Propulsion Laboratory free and open source software is used every day to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software.
Main points:
- The design of the Snow Data System (illustrating how the collection of sub-systems is combined to create a complete data processing pipeline)
- The challenges of moving from a single algorithm on a laptop to running hundreds of parallel algorithms on a cluster of servers (lessons learned): code changes, software-license-related challenges, storage requirements
- System evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps)
- Road map for the next 6 months (including how easily we re-used the snowDS code base to support the Airborne Snow Observatory Mission)
Software in use and their software licenses:
- IDL - used for pre- and post-processing of data. Licensed under a proprietary software license held by Exelis.
- Apache OODT - used for data management and workflow processing. Licensed under the Apache License Version 2.
- GDAL - geospatial data processing library, currently used for data re-projection. Licensed under the X/MIT license.
- GeoServer - WMS server. Licensed under the General Public License Version 2.0.
- Leaflet.js - JavaScript web mapping library. Licensed under the Berkeley Software Distribution License.
- Python - glue code and miscellaneous data processing support. Licensed under the Python Software Foundation License.
- Perl - script wrapper for running the SCAG algorithm. Licensed under the General Public License Version 3.
- PHP - front-end web application programming. Licensed under the PHP License Version 3.01.
Segmenting overlapping nano-objects in atomic force microscopy image
NASA Astrophysics Data System (ADS)
Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko
2018-01-01
Recently, techniques for nanoparticles have been developed rapidly in various fields, such as materials science, medicine, and biology. In particular, image processing methods have been widely used to analyze nanoparticles automatically. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, possible split lines are obtained by connecting high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
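One stage of the pipeline described above, detecting regions of interest with DBSCAN, could look roughly like the following sketch, which clusters foreground pixel coordinates into particle regions; the eps and min_samples values are placeholders, and the curvature-based split-line generation and classification steps are not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def particle_regions(binary_mask, eps=3.0, min_samples=20):
    """Cluster foreground pixels of a segmented AFM image into particle regions
    with DBSCAN; regions containing several particles are then candidates for
    the split-line stage.  (eps and min_samples are placeholder values.)"""
    ys, xs = np.nonzero(binary_mask)
    coords = np.column_stack([ys, xs])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    # One pixel-coordinate array per detected region, ignoring DBSCAN noise (-1).
    return [coords[labels == k] for k in sorted(set(labels)) if k != -1]
```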
NASA Astrophysics Data System (ADS)
Chen, Zhou; Tong, Qiu-Nan; Zhang, Cong-Cong; Hu, Zhan
2015-04-01
Identification of acetone and its two isomers, and the control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two sets of time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under the irradiation of identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under the irradiation of the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is effective and useful in studies of two molecules having common mass peaks, for which a traditional single mass spectrometer is unfeasible. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant No. 11374124).
Computational analysis of the roles of biochemical reactions in anomalous diffusion dynamics
NASA Astrophysics Data System (ADS)
Naruemon, Rueangkham; Charin, Modchang
2016-04-01
Most biochemical processes in cells are usually modeled by reaction-diffusion (RD) equations. In these RD models, the diffusive process is assumed to be Gaussian. However, a growing number of studies have noted that intracellular diffusion is anomalous at some or all times, which may result from a crowded environment and chemical kinetics. This work aims to computationally study the effects of chemical reactions on the diffusive dynamics of RD systems by using both stochastic and deterministic algorithms. A numerical method to estimate the mean-square displacement (MSD) from a deterministic algorithm is also investigated. Our computational results show that anomalous diffusion can be due solely to chemical reactions. The chemical reactions alone can cause anomalous sub-diffusion in the RD system at some or all times. The time-dependent anomalous diffusion exponent is found to depend on many parameters, including chemical reaction rates, reaction orders, and chemical concentrations. Project supported by the Thailand Research Fund and Mahidol University (Grant No. TRG5880157), the Thailand Center of Excellence in Physics (ThEP), CHE, Thailand, and the Development Promotion of Science and Technology.
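The central quantity in such analyses, the mean-square displacement and its anomalous exponent alpha (MSD proportional to t^alpha, with alpha < 1 for sub-diffusion), can be estimated from simulated trajectories roughly as follows; the trajectory format and time step are assumptions for illustration.

```python
import numpy as np

def msd(trajectory):
    """Time-averaged mean-square displacement of one trajectory, shape (T, dims)."""
    traj = np.asarray(trajectory, float)
    lags = np.arange(1, len(traj))
    values = [np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags]
    return lags, np.array(values)

def anomalous_exponent(lags, msd_values, dt=1.0):
    """Slope of log(MSD) vs log(t); alpha < 1 indicates sub-diffusion."""
    alpha, _ = np.polyfit(np.log(lags * dt), np.log(msd_values), 1)
    return alpha

# Hypothetical 2-D random walk as a sanity check (expected alpha close to 1).
steps = np.random.default_rng(0).normal(size=(5000, 2))
lags, values = msd(np.cumsum(steps, axis=0))
print(round(anomalous_exponent(lags[:500], values[:500]), 2))
```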
Calibration and Image Reconstruction for the Hurricane Imaging Radiometer (HIRAD)
NASA Technical Reports Server (NTRS)
Ruf, Christopher; Roberts, J. Brent; Biswas, Sayak; James, Mark W.; Miller, Timothy
2012-01-01
The Hurricane Imaging Radiometer (HIRAD) is a new airborne passive microwave synthetic aperture radiometer designed to provide wide-swath images of ocean surface wind speed under heavy precipitation and, in particular, in tropical cyclones. It operates at 4, 5, 6 and 6.6 GHz and uses interferometric signal processing to synthesize a pushbroom imager in software from a low-profile planar antenna with no mechanical scanning. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during Fall 2010 as its first science field campaign. HIRAD produced images of upwelling brightness temperature over an approximately 70 km swath width with approximately 3 km spatial resolution. From this, ocean surface wind speed and column-averaged atmospheric liquid water content can be retrieved across the swath. The calibration and image reconstruction algorithms that were used to verify HIRAD functional performance during and immediately after GRIP were only preliminary and used a number of simplifying assumptions and approximations about the instrument design and performance. The development and performance of a more detailed and complete set of algorithms are reported here.
Fast Segmentation of Stained Nuclei in Terabyte-Scale, Time Resolved 3D Microscopy Image Stacks
Stegmaier, Johannes; Otte, Jens C.; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G. Ulrich; Strähle, Uwe; Mikut, Ralf
2014-01-01
Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu’s method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm’s superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results. PMID:24587204
Python in the NERSC Exascale Science Applications Program for Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack
We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of "Python purity" from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
Data Science: History repeated? - The heritage of the Free and Open Source GIS community
NASA Astrophysics Data System (ADS)
Löwe, Peter; Neteler, Markus
2014-05-01
Data Science is described as the process of knowledge extraction from large data sets by means of scientific methods. The discipline draws heavily on techniques and theories from many fields, which are jointly used to further develop information retrieval on structured or unstructured very large datasets. While the term Data Science was already coined in 1960, the current perception places this field still in the first section of the hype cycle according to Gartner, well en route from the technology trigger stage to the peak of inflated expectations. In our view the future development of Data Science could benefit from the analysis of experiences from related evolutionary processes. One predecessor is the area of Geographic Information Systems (GIS). The intrinsic scope of GIS is the integration and storage of spatial information from often heterogeneous sources, data analysis, and the sharing of reconstructed or aggregated results in visual form or via data transfer. GIS is successfully applied to process and analyse spatially referenced content in a wide and still expanding range of science areas, spanning from human and social sciences like archeology, politics and architecture to environmental and geoscientific applications, even including planetology. This paper presents proven patterns for innovation and organisation derived from the evolution of GIS, which can be ported to Data Science. Within the GIS landscape, three strategic interacting tiers can be denoted: i) standardisation, ii) applications based on closed-source software, without the option of access to and analysis of the implemented algorithms, and iii) Free and Open Source Software (FOSS) based on freely accessible program code enabling analysis, education, and improvement by everyone. This paper focuses on patterns gained from the synthesis of three decades of FOSS development. We identify best practices that evolved from long-term FOSS projects, describe the role of community-driven global umbrella organisations such as OSGeo, as well as the standardization of innovative services. The main driver is the acknowledgement of a meritocratic attitude. These patterns follow evolutionary processes of establishing and maintaining a web-based democratic culture, spawning new kinds of communication and projects. This culture transcends the established compartmentation and stratification of science by creating mutual benefits for the participants, irrespective of their respective research interests and standing. Adopting these best practices will enable the emerging Data Science communities to avoid pitfalls and to accelerate the progress to stages of productivity.
Numerical computation of linear instability of detonations
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry; Kasimov, Aslan
2017-11-01
We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
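The abstract mentions processing the computed results with Dynamic Mode Decomposition to generate dispersion relations; a minimal sketch of the usual SVD-based (exact DMD) construction, which extracts growth rates and oscillation frequencies from a snapshot matrix, is given below. The snapshot layout and truncation rank are assumptions, and the shock-attached solver itself is not reproduced.

```python
import numpy as np

def dmd_eigenvalues(snapshots, dt, rank=None):
    """Exact-DMD sketch: continuous-time eigenvalues (growth rate + i*frequency)
    from a matrix whose columns are successive solution snapshots."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:                                   # optional truncation
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Low-rank approximation of the linear operator mapping X to Y.
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    mu = np.linalg.eigvals(A_tilde)                        # discrete-time eigenvalues
    return np.log(mu.astype(complex)) / dt                 # Re: growth rate, Im: frequency
```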
A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System
NASA Astrophysics Data System (ADS)
Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji
The capabilities and complexity of manufacturing systems are increasing, driving the move toward an integrated manufacturing environment. The availability of alternative process plans is a key factor for the integration of design, process planning and scheduling. This paper describes an algorithm for the generation of alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search algorithm starts. The algorithm is applicable to very large process plan networks and can also search wide areas of the network based on user requirements. The algorithm can generate alternative process plans and select a suitable one based on the objective functions.
NASA Astrophysics Data System (ADS)
Hwong, Y. L.; Oliver, C.; Van Kranendonk, M. J.
2016-12-01
The rise of social media has transformed the way the public engages with scientists and science organisations. 'Retweet', 'Like', 'Share' and 'Comment' are a few ways users engage with messages on Twitter and Facebook, two of the most popular social media platforms. Despite the availability of big data from these digital footprints, research into social media science communication is scant. This paper presents the results of an empirical study into the processes and outcomes of space-science-related social media communications using machine learning. The study is divided into two main parts. The first part is dedicated to the use of supervised learning methods to investigate the features of highly engaging messages, e.g. highly retweeted tweets and shared Facebook posts. It is hypothesised that these messages contain certain psycholinguistic features that are unique to the field of space science. We built a predictive model to forecast the engagement levels of social media posts. By using four feature sets (n-grams, psycholinguistics, grammar and social media), we were able to achieve prediction accuracies in the vicinity of 90% using three supervised learning algorithms (Naive Bayes, a linear classifier and a decision tree). We conducted the same experiments on social media messages from three other fields (politics, business and non-profit) and discovered several features that are exclusive to space science communications: anger, authenticity, hashtags, visual descriptions and a tentative tone. The second part of the study focuses on the extraction of topics from a corpus of texts using topic modelling. This part of the study is exploratory in nature and uses an unsupervised method called Latent Dirichlet Allocation (LDA) to uncover previously unknown topics within a large body of documents. Preliminary results indicate a strong potential of topic model algorithms to automatically uncover themes hidden within social media chatter on space-related issues, with keywords such as 'exoplanet', 'water' and 'life' being clustered together to form a topic (i.e. 'Astrobiology'). Results also demonstrate the freewheeling nature of social media conversations, while providing evidence for the role of these platforms in facilitating meaningful exchanges among the science audience.
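The supervised part of such a study can be approximated with a standard scikit-learn pipeline: n-gram features feeding a Naive Bayes classifier. The toy messages and engagement labels below are invented for illustration, and the psycholinguistic, grammar, and social-media feature sets used in the paper are not included.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy messages and engagement labels, for illustration only.
texts = ["New exoplanet discovered in the habitable zone #space",
         "Quarterly earnings call scheduled for next week",
         "Liquid water evidence strengthens the case for life on Mars",
         "Board meeting minutes are now available online"]
labels = ["high", "low", "high", "low"]

# Word unigrams and bigrams -> TF-IDF weights -> multinomial Naive Bayes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["Telescope spots water vapour on a distant exoplanet"]))
```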
Landsat surface reflectance quality assurance extraction (version 1.7)
Jones, J.W.; Starbuck, M.J.; Jenkerson, Calli B.
2013-01-01
The U.S. Geological Survey (USGS) Land Remote Sensing Program is developing an operational capability to produce Climate Data Records (CDRs) and Essential Climate Variables (ECVs) from the Landsat Archive to support a wide variety of science and resource management activities from regional to global scale. The USGS Earth Resources Observation and Science (EROS) Center is charged with prototyping systems and software to generate these high-level data products. Various USGS Geographic Science Centers are charged with particular ECV algorithm development and (or) selection as well as the evaluation and application demonstration of various USGS CDRs and ECVs. Because it is a foundation for many other ECVs, the first CDR in development is the Landsat Surface Reflectance Product (LSRP). The LSRP incorporates data quality information in a bit-packed structure that is not readily accessible without postprocessing services performed by the user. This document describes two general methods of LSRP quality-data extraction for use in image processing systems. Helpful hints for the installation and use of software originally developed for manipulation of Hierarchical Data Format (HDF) produced through the National Aeronautics and Space Administration (NASA) Earth Observing System are first provided for users who wish to extract quality data into separate HDF files. Next, steps follow to incorporate these extracted data into an image processing system. Finally, an alternative example is illustrated in which the data are extracted within a particular image processing system.
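Extracting a single flag from a bit-packed QA band amounts to shifting and masking the relevant bit; a NumPy sketch is shown below. The bit position for any particular flag (cloud, shadow, and so on) must be taken from the LSRP product documentation, so the value used here is a placeholder.

```python
import numpy as np

def qa_flag_mask(qa_band, bit):
    """Boolean mask for one flag in a bit-packed QA band; the bit position for a
    given flag (cloud, shadow, ...) must come from the product documentation."""
    qa = np.asarray(qa_band, dtype=np.uint16)
    return ((qa >> bit) & 1) == 1

# Hypothetical 2x3 QA array; bit 1 is used here as a stand-in for a cloud flag.
qa = np.array([[0, 2, 3], [4, 6, 0]], dtype=np.uint16)
print(qa_flag_mask(qa, bit=1))
```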
Soldier Decision-Making for Allocation of Intelligence, Surveillance, and Reconnaissance Assets
2014-06-01
Keywords: actuarial or algorithmic (statistical) judgments; computer science, psychology, and statistics. Cited reference: R. M. Dawes, D. Faust, and P. E. Meehl, "Clinical versus Actuarial Judgment," Science, vol. 243, no. 4899, pp. 1668-1674, 1989.
Utilization of curve offsets in additive manufacturing
NASA Astrophysics Data System (ADS)
Haseltalab, Vahid; Yaman, Ulas; Dolen, Melik
2018-05-01
Curve offsets are utilized in different fields of engineering and science. Additive manufacturing, which has lately become an explicit requirement in the manufacturing industry, uses curve offsets widely. One use of offsetting is scaling, which is required if there is shrinkage after fabrication or if the surface quality of the resulting part is unacceptable; some post-processing is then indispensable. The major application of curve offsets in additive manufacturing processes, however, is generating head trajectories. In a point-wise AM process, a correct tool path in each layer can substantially reduce cost and improve the surface quality of the fabricated parts. In this study, different curve offset generation algorithms are analyzed to show their capabilities and disadvantages through test cases, and improvements on their drawbacks are suggested.
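As a concrete illustration of contour offsetting for head-trajectory generation, successive inward offsets of a layer outline can be produced with negative polygon buffering; here Shapely is used as one possible library, the outline and bead width are made up, and this is not one of the algorithms analyzed in the paper.

```python
from shapely.geometry import Polygon

# Hypothetical rectangular layer outline and bead width; successive negative
# buffers generate inward offset contours that could serve as head trajectories.
layer_outline = Polygon([(0, 0), (40, 0), (40, 30), (0, 30)])
bead_width = 2.0

paths, current = [], layer_outline
while not current.is_empty:
    paths.append(list(current.exterior.coords))             # one closed tool path
    current = current.buffer(-bead_width)                    # inward offset
    if current.geom_type == "MultiPolygon":                  # offsetting may split the region
        current = max(current.geoms, key=lambda g: g.area)   # keep the largest piece
print(len(paths), "offset contours generated")
```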
ERIC Educational Resources Information Center
Grandell, Linda
2005-01-01
Computer science is becoming increasingly important in our society. Meta skills, such as problem solving and logical and algorithmic thinking, are emphasized in every field, not only in the natural sciences. Still, largely due to gaps in tuition, common misunderstandings exist about the true nature of computer science. These are especially…
Data Management for a Climate Data Record in an Evolving Technical Landscape
NASA Astrophysics Data System (ADS)
Moore, K. D.; Walter, J.; Gleason, J. L.
2017-12-01
For nearly twenty years, NASA Langley Research Center's Clouds and the Earth's Radiant Energy System (CERES) Science Team has been producing a suite of data products that forms a persistent climate data record of the Earth's radiant energy budget. Many of the team's physical scientists and key research contributors have been with the team since the launch of the first CERES instrument in 1997. This institutional knowledge is irreplaceable and its longevity and continuity are among the reasons that the team has been so productive. Such legacy involvement, however, can also be a limiting factor. Some CERES scientists-cum-coders might possess skills that were state-of-the-field when they were emerging scientists but may now be outdated with respect to developments in software development best practices and supporting technologies. Both programming languages and processing frameworks have evolved significantly in the past twenty years, and updating one of these factors warrants consideration of updating the other. With the imminent launch of a final CERES instrument and the good health of those in flight, the CERES data record stands to continue far into the future. The CERES Science Team is, therefore, undergoing a re-architecture of its codebase to maintain compatibility with newer data processing platforms and technologies and to leverage modern software development best practices. This necessitates training our staff and consequently presents several challenges, including: Development continues immediately on the next "edition" of research algorithms upon release of the previous edition. How can code be rewritten at the same time that the science algorithms are being updated and integrated? With limited time to devote to training, how can we update the staff's existing skillset without slowing progress or introducing new errors? The CERES Science Team is large and complex, much like the current state of its codebase. How can we identify, in a breadth-wise manner, areas for code improvement across multiple research groups that maintain code with varying semantics but common concepts? In this work, we discuss the successes and pitfalls of this major re-architecture effort and share how we will sustain improvement into the future.
PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah
2009-12-01
In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption portends to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken.
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.
A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data
Goldstein, Markus; Uchida, Seiichi
2016-01-01
Anomaly detection is the process of identifying unexpected items or events in datasets which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection, as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601
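A much reduced version of such a comparison, scoring a single synthetic dataset with two common unsupervised detectors and ranking them by ROC AUC, might look like the sketch below; the data, detectors, and parameters are illustrative and far smaller than the paper's 19-algorithm, 10-dataset evaluation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import roc_auc_score

# Synthetic 2-D data: a Gaussian cluster plus scattered anomalies (label 1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
               rng.uniform(-6.0, 6.0, size=(25, 2))])
y = np.r_[np.zeros(500), np.ones(25)]

# Higher score = more anomalous, so the detectors' "normality" scores are negated.
iso_scores = -IsolationForest(random_state=0).fit(X).score_samples(X)
lof_scores = -LocalOutlierFactor(n_neighbors=20).fit(X).negative_outlier_factor_
for name, scores in [("IsolationForest", iso_scores), ("LOF", lof_scores)]:
    print(name, round(roc_auc_score(y, scores), 3))
```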
NASA Astrophysics Data System (ADS)
Gregoric, Vincent C.; Kang, Xinyue; Liu, Zhimin Cheryl; Rowley, Zoe A.; Carroll, Thomas J.; Noel, Michael W.
2017-04-01
Selective field ionization is an important experimental technique used to study the state distribution of Rydberg atoms. This is achieved by applying a steadily increasing electric field, which successively ionizes more tightly bound states. An atom prepared in an energy eigenstate encounters many avoided Stark level crossings on the way to ionization. As it traverses these avoided crossings, its amplitude is split among multiple different states, spreading out the time resolved electron ionization signal. By perturbing the electric field ramp, we can change how the atoms traverse the avoided crossings, and thus alter the shape of the ionization signal. We have used a genetic algorithm to evolve these perturbations in real time in order to arrive at a target ionization signal shape. This process is robust to large fluctuations in experimental conditions. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377 and used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant Number OCI-1053575.
Collective credit allocation in science
Shen, Hua-Wei; Barabási, Albert-László
2014-01-01
Collaboration among researchers is an essential component of the modern scientific enterprise, playing a particularly important role in multidisciplinary research. However, we continue to wrestle with allocating credit to the coauthors of publications with multiple authors, because the relative contribution of each author is difficult to determine. At the same time, the scientific community runs an informal field-dependent credit allocation process that assigns credit in a collective fashion to each work. Here we develop a credit allocation algorithm that captures the coauthors’ contribution to a publication as perceived by the scientific community, reproducing the informal collective credit allocation of science. We validate the method by identifying the authors of Nobel-winning papers that are credited for the discovery, independent of their positions in the author list. The method can also compare the relative impact of researchers working in the same field, even if they did not publish together. The ability to accurately measure the relative credit of researchers could affect many aspects of credit allocation in science, potentially impacting hiring, funding, and promotion decisions. PMID:25114238
NASA Astrophysics Data System (ADS)
Stough, T.; Green, D. S.
2017-12-01
This collaborative research to operations demonstration brings together the data and algorithms from NASA research, technology, and applications-funded projects to deliver relevant data streams, algorithms, predictive models, and visualization tools to the NOAA National Tsunami Warning Center (NTWC) and Pacific Tsunami Warning Center (PTWC). Using real-time GNSS data and models in an operational environment, we will test and evaluate an augmented capability for tsunami early warning. Each of three research groups collect data from a selected network of real-time GNSS stations, exchange data consisting of independently processed 1 Hz station displacements, and merge the output into a single, more accurate and reliable set. The resulting merged data stream is delivered from three redundant locations to the TWCs with a latency of 5-10 seconds. Data from a number of seismogeodetic stations with collocated GPS and accelerometer instruments are processed for displacements and seismic velocities and also delivered. Algorithms for locating and determining the magnitude of earthquakes as well as algorithms that compute the source function of a potential tsunami using this new data stream are included in the demonstration. The delivered data, algorithms, models and tools are hosted on NOAA-operated machines at both warning centers, and, once tested, the results will be evaluated for utility in improving the speed and accuracy of tsunami warnings. This collaboration has the potential to dramatically improve the speed and accuracy of the TWCs local tsunami information over the current seismometer-only based methods. In our first year of this work, we have established and deployed an architecture for data movement and algorithm installation at the TWC's. We are addressing data quality issues and porting algorithms into the TWCs operating environment. Our initial module deliveries will focus on estimating moment magnitude (Mw) from Peak Ground Displacement (PGD), within 2-3 minutes of the event, and coseismic displacements converging to static offsets. We will also develop visualizations of module outputs tailored to the operational environment. In the context of this work, we will also discuss this research to operations approach and other opportunities within the NASA Applied Science Disaster Program.
Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.
2010-01-01
We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
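A minimal sketch of the kind of hybrid approach this abstract describes: a genetic algorithm over task-to-processor mappings, seeded with one greedy heuristic (a min-min-style assignment stands in for the paper's eight heuristics). The `etc` expected-time-to-compute matrix, the makespan fitness, and all parameter values are illustrative assumptions, not the authors' configuration.

```python
# Sketch: heuristic-seeded genetic algorithm for mapping tasks to
# heterogeneous processors. etc[i][j] = hypothetical time of task i on
# processor j; fitness is makespan (maximum processor finish time).
import random

def makespan(mapping, etc, n_procs):
    loads = [0.0] * n_procs
    for task, proc in enumerate(mapping):
        loads[proc] += etc[task][proc]
    return max(loads)

def greedy_seed(etc, n_procs):
    """Greedy seed: give each task to the processor that finishes it first."""
    loads, mapping = [0.0] * n_procs, []
    for row in etc:
        proc = min(range(n_procs), key=lambda j: loads[j] + row[j])
        mapping.append(proc)
        loads[proc] += row[proc]
    return mapping

def evolve(etc, n_procs, pop_size=50, gens=200, p_mut=0.05):
    n_tasks = len(etc)
    pop = [greedy_seed(etc, n_procs)] + [
        [random.randrange(n_procs) for _ in range(n_tasks)]
        for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=lambda m: makespan(m, etc, n_procs))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_tasks):              # point mutation
                if random.random() < p_mut:
                    child[i] = random.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda m: makespan(m, etc, n_procs))

# Example: 20 tasks on 4 heterogeneous processors.
etc = [[random.uniform(1, 10) for _ in range(4)] for _ in range(20)]
print(makespan(evolve(etc, 4), etc, 4))
```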
Intelligent Use of CFAR Algorithms
1993-05-01
…the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that… [Remainder of record is report-form residue; recoverable metadata: AD-A267 755; RL-TR-93-75; "Intelligent Use of CFAR Algorithms," Kaman Sciences Corporation, P. Antonik et al.; Interim Report, Jan 1992 - Sep 1992, dated May 1993; contract F30602-91-C-0017.]
Resolving the Milky Way and Nearby Galaxies with WFIRST
NASA Astrophysics Data System (ADS)
Kalirai, Jasonjot
High-resolution studies of nearby stellar populations have served as a foundation for our quest to understand the nature of galaxies. Today, studies of resolved stellar populations constrain fundamental relations -- such as the initial mass function of stars, the time scales of stellar evolution, the timing of mass loss and amount of energetic feedback, the color-magnitude relation and its dependency on age and metallicity, the stellar-dark matter connection in galaxy halos, and the build-up of stellar populations over cosmic time -- that represent key ingredients in our prescription to interpret light from the Universe and to measure the physical state of galaxies. More than in any other area of astrophysics, WFIRST will yield a transformative impact in measuring and characterizing resolved stellar populations in the Milky Way and nearby galaxies. The proximity of these populations, and the level of detail at which they must be studied, map directly onto all three pillars of WFIRST's capabilities - sensitivity from a 2.4-meter space-based telescope, resolution from 0.1" pixels, and a large 0.3-degree field of view from multiple detectors. Our WFIRST GO Science Investigation Team (F) will develop three WFIRST (notional) GO programs related to resolved stellar populations to fully stress WFIRST's Wide Field Instrument. The programs will include a Survey of the Milky Way, a Survey of Nearby Galaxy Halos, and a Survey of Star-Forming Galaxies. Specific science goals for each program will be validated through a wide range of observational data sets, simulations, and new algorithms. As an output of this study, our team will deliver optimized strategies and tools to maximize stellar population science with WFIRST. This will include: new grids of IR-optimized stellar evolution and synthetic spectroscopic models; pipelines and algorithms for optimal data reduction at the WFIRST sensitivity and pixel scale; wide-field simulations of MW environments and galaxy halos; cosmological simulations of nearby galaxy halos matched to WFIRST observations; strategies and automated algorithms to find substructure and dwarf galaxies in WFIRST IR data sets; and documentation. Our team will work closely with the WFIRST Science Center to translate our notional programs into inputs that can help achieve readiness for WFIRST science operations. This includes building full observing programs with target definitions, observing sequences, scheduling constraints, data processing needs, and calibration requirements. Our team has been chosen carefully. Team members are leading scientists in stellar population work, which will be a core science theme for WFIRST, and are also involved in all large future astronomy projects that will operate in the WFIRST era. The team is intentionally small, and each member will "own" significant science projects. The team will aggressively advocate for WFIRST through innovative initiatives. The team is also diverse in geographical location, in observers and theorists, and in gender.
The Caltech Concurrent Computation Program - Project description
NASA Technical Reports Server (NTRS)
Fox, G.; Otto, S.; Lyzenga, G.; Rogstad, D.
1985-01-01
The Caltech Concurrent Computation Program, which studies basic issues in computational science, is described. The research builds on initial work in which novel concurrent hardware, the necessary systems software to use it, and twenty significant scientific implementations running on the initial 32-, 64-, and 128-node hypercube machines were constructed. A major goal of the program will be to extend this work into new disciplines and more complex algorithms, including general packages that decompose arbitrary problems in major application areas. New high-performance concurrent processors with up to 1024 nodes, over a gigabyte of memory, and multigigaflop performance are being constructed. The implementations cover a wide range of problems in areas such as high-energy physics and astrophysics, condensed matter, chemical reactions, plasma physics, applied mathematics, geophysics, simulation, CAD for VLSI, graphics, and image processing. The products of the research program include the concurrent algorithms, hardware, systems software, and complete program implementations.
Advanced Architectures for Astrophysical Supercomputing
NASA Astrophysics Data System (ADS)
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2010-12-01
Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
An Approach to Data Center-Based KDD of Remote Sensing Datasets
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Mack, Robert; Wharton, Stephen W. (Technical Monitor)
2001-01-01
The data explosion in remote sensing is straining the ability of data centers to deliver the data to the user community, yet many large-volume users actually seek a relatively small information component within the data, which they extract at their sites using Knowledge Discovery in Databases (KDD) techniques. To improve the efficiency of this process, the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC) has implemented a KDD subsystem that supports execution of the user's KDD algorithm at the data center, dramatically reducing the volume that is sent to the user. The data are extracted from the archive in a planned, organized "campaign"; the algorithms are executed, and the output products are sent to the users over the network. The first campaign, now complete, reduced the overall shipped volume from 3.3 TB to 0.4 TB.
A New Strategy to Land Precisely on the Northern Plains of Mars
NASA Technical Reports Server (NTRS)
Cheng, Yang; Huertas, Andres
2010-01-01
During the Phoenix mission landing site selection process, the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) images revealed widely spread and dense rock fields in the northern plains. Automatic rock mapping and subsequent statistical analyses showed 30-90% CFA (cumulative fractional area) covered by rocks larger than 1 meter in dense rock fields around craters. Less dense rock fields had 5-30% rock coverage in terrain away from craters. Detectable meter-scale boulders were found nearly everywhere. These rocks present a risk to spacecraft safety during landing. However, they are the most salient topographic features in this region and can be good landmarks for spacecraft localization during landing. In this paper we present a novel strategy that uses the abundance of rocks in the northern plains for spacecraft localization. The paper discusses this approach in three sections: a rock-based landmark terrain relative navigation (TRN) algorithm; the feasibility of the TRN algorithm; and conclusions.
Machine learning methods in chemoinformatics
Mitchell, John B O
2014-01-01
Machine learning algorithms are generally developed in computer science or adjacent disciplines and find their way into chemical modeling by a process of diffusion. Though particular machine learning methods are popular in chemoinformatics and quantitative structure–activity relationships (QSAR), many others exist in the technical literature. This discussion is methods-based and focused on some algorithms that chemoinformatics researchers frequently use. It makes no claim to be exhaustive. We concentrate on methods for supervised learning, predicting the unknown property values of a test set of instances, usually molecules, based on the known values for a training set. Particularly relevant approaches include Artificial Neural Networks, Random Forest, Support Vector Machine, k-Nearest Neighbors and naïve Bayes classifiers. WIREs Comput Mol Sci 2014, 4:468–481. How to cite this article: WIREs Comput Mol Sci 2014, 4:468–481. doi:10.1002/wcms.1183 PMID:25285160
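For readers unfamiliar with the supervised-learning setup described above, here is a minimal scikit-learn sketch. The random feature matrix stands in for molecular descriptors (which would normally come from a chemoinformatics toolkit such as RDKit), the toy property is synthetic, and Random Forest is just one of the methods the review covers.

```python
# Sketch: supervised learning on descriptor vectors, QSAR-style.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))            # 500 "molecules", 32 descriptors
y = X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=500)  # toy property

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)               # learn from the training set
print("R^2 on held-out molecules:", model.score(X_test, y_test))
```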
The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)
NASA Astrophysics Data System (ADS)
2017-09-01
The ICCSCM 2017 (The 6th International Conference on Computer Science and Computational Mathematics) aimed to provide a platform to discuss computer science and mathematics related issues, including Algebraic Geometry, Algebraic Topology, Approximation Theory, Calculus of Variations, Category Theory; Homological Algebra, Coding Theory, Combinatorics, Control Theory, Cryptology, Geometry, Difference and Functional Equations, Discrete Mathematics, Dynamical Systems and Ergodic Theory, Field Theory and Polynomials, Fluid Mechanics and Solid Mechanics, Fourier Analysis, Functional Analysis, Functions of a Complex Variable, Fuzzy Mathematics, Game Theory, General Algebraic Systems, Graph Theory, Group Theory and Generalizations, Image Processing, Signal Processing and Tomography, Information Fusion, Integral Equations, Lattices, Algebraic Structures, Linear and Multilinear Algebra; Matrix Theory, Mathematical Biology and Other Natural Sciences, Mathematical Economics and Financial Mathematics, Mathematical Physics, Measure Theory and Integration, Neutrosophic Mathematics, Number Theory, Numerical Analysis, Operations Research, Optimization, Operator Theory, Ordinary and Partial Differential Equations, Potential Theory, Real Functions, Rings and Algebras, Statistical Mechanics, Structure of Matter, Topological Groups, Wavelets and Wavelet Transforms, 3G/4G Network Evolutions, Ad-Hoc, Mobile, Wireless Networks and Mobile Computing, Agent Computing & Multi-Agent Systems, all topics related to Image/Signal Processing, any topics related to Computer Networks, any topics related to the ISO SC-27 and SC-17 standards, any topics related to PKI (Public Key Infrastructures), Artificial Intelligence (A.I.) & Pattern/Image Recognition, Authentication/Authorization Issues, Biometric Authentication and Algorithms, CDMA/GSM Communication Protocols, Combinatorics, Graph Theory, and Analysis of Algorithms, Cryptography and Foundations of Computer Security, Data Base (D.B.) Management & Information Retrieval, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Commerce, Ubiquitous, RFID Applications, Fingerprint/Hand/Biometrics Recognition and Technologies, Foundations of High-Performance Computing, IC-Card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam Mail, and Anti-Virus Issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Service and Communication Protocols, Quantum Computing, Coding, and Error Controls, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrieval, & Virtual Simulations, and Wireless Access Security. The success of ICCSCM 2017 is reflected in the papers received from authors in several countries around the world, enabling a highly multinational and multicultural exchange of ideas and experience. The accepted papers of ICCSCM 2017 are published in this book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only succeed through a team effort, so we thank the International Technical Committee and the Reviewers for their efforts in the review process and for their valuable advice. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary
A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.
2010-12-01
High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public domain lidar data collected by federal, state, and local agencies are a valuable resource to the scientific community; however, the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Services Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community. OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application-level functionality to our end-users (such as generation of custom DEMs via various gridding algorithms, and hydrological modeling algorithms). In the future, the SOA will enable direct authenticated access to back-end functionality through simple Web service Application Programming Interfaces (APIs), so that users may access our data and compute resources via clients other than Web browsers. In addition to an overview of the OpenTopography SOA, this presentation will discuss our recently developed lidar data ingestion and management system for point cloud data delivered in the binary LAS standard. This system complements our existing partitioned database approach for data delivered in ASCII format, and permits rapid ingestion of data. The system has significantly reduced data ingestion times and has implications for data distribution in emergency response situations. We will also address ongoing work to develop a community lidar metadata catalog based on the OGC Catalogue Service for Web (CSW) standard, which will help to centralize discovery of public domain lidar data.
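As a concrete illustration of the simplest kind of gridding algorithm mentioned above (not OpenTopography's actual implementation), the following sketch bins synthetic lidar returns into a regular grid and averages the elevations in each cell to form a DEM. The cell size and the synthetic point cloud are assumptions.

```python
# Sketch: grid a point cloud into a mean-elevation DEM.
import numpy as np

def grid_mean_dem(x, y, z, cell=1.0):
    """Average z of all points falling in each cell; NaN where empty."""
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (iy, ix), z)      # accumulate elevations per cell
    np.add.at(count, (iy, ix), 1)      # count returns per cell
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(count > 0, total / count, np.nan)

# 100k synthetic lidar returns over a 100 m x 100 m tile.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, (2, 100_000))
z = np.sin(x / 10) * 5 + rng.normal(scale=0.1, size=100_000)
dem = grid_mean_dem(x, y, z, cell=1.0)
print(dem.shape)
```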
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
FPGA Coprocessor for Accelerated Classification of Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.
2008-01-01
An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, offering greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power, extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
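Part of the appeal of the linear-kernel SVM for hardware implementation is that classifying a pixel reduces to one dot product (a multiply-accumulate chain) per class. The sketch below is not the EO-1 flight code: it trains one-vs-rest linear SVMs on hypothetical pixel features and checks that a hand-rolled decision rule, the part that would map onto FPGA logic, matches the library's prediction.

```python
# Sketch: linear-kernel SVM pixel classification as per-class dot products.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
classes = ["snow", "water", "ice", "land", "cloud"]
X = rng.normal(size=(1000, 8))                 # toy per-pixel features
y = rng.integers(0, len(classes), size=1000)   # toy labels

clf = LinearSVC(dual=False, max_iter=10000).fit(X, y)

# Equivalent hand-rolled decision rule, as it might run in hardware:
pixel = X[0]
scores = clf.coef_ @ pixel + clf.intercept_    # one MAC chain per class
assert classes[int(np.argmax(scores))] == classes[clf.predict([pixel])[0]]
print(classes[int(np.argmax(scores))])
```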
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
Earth Science Mining Web Services
NASA Technical Reports Server (NTRS)
Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken
2008-01-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
Optimization of High-Dimensional Functions through Hypercube Evaluation
Abiyev, Rahib H.; Tunay, Mustafa
2015-01-01
A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on the evaluation and optimization of a hypercube, and is called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes a solution and evaluates the solutions in a given hypercube. The displacement-shrink process determines displacement and evaluates objective functions using new points, and the search area process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes have been designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for optimization of functions of 1000, 5000, or even 10000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low- and high-dimensional functions. PMID:26339237
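The following is a simplified reading of the three HO processes described above, not the authors' exact algorithm: sample the current hypercube (initialization/evaluation), recenter on the best point found (displacement), and contract the box (shrink). The shrink factor, sample counts, and test function are assumptions.

```python
# Sketch: hypercube-style stochastic search on a benchmark function.
import numpy as np

def hypercube_opt(f, dim, lo=-5.0, hi=5.0, n_samples=200, iters=100,
                  shrink=0.9):
    center = np.full(dim, (lo + hi) / 2.0)
    radius = (hi - lo) / 2.0
    best_x, best_f = center, f(center)
    for _ in range(iters):
        # Initialization/evaluation: random solutions in the current box.
        pts = center + np.random.uniform(-radius, radius, (n_samples, dim))
        vals = np.apply_along_axis(f, 1, pts)
        i = np.argmin(vals)
        if vals[i] < best_f:
            best_x, best_f = pts[i], vals[i]
        # Displacement-shrink: recenter on the best point, contract the box.
        center = best_x
        radius *= shrink
    return best_x, best_f

sphere = lambda x: float(np.sum(x * x))   # classic benchmark, minimum at 0
x, fx = hypercube_opt(sphere, dim=50)
print(fx)                                  # should approach 0
```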
Ichikawa, Kazuki; Morishita, Shinichi
2014-01-01
K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
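The equivalence of the two distances noted above can be checked numerically: after z-scoring each vector (mean 0, population standard deviation 1), the squared Euclidean distance is an affine function of the Pearson correlation distance, ||z_a - z_b||^2 = 2m(1 - r) for dimension m. Monotone transforms preserve nearest-centroid choices, which is why k-means assignments from identical initial centroids agree. A minimal check, with a random pair of vectors standing in for expression profiles:

```python
# Numerical check: standardized Euclidean vs. Pearson correlation distance.
import numpy as np

rng = np.random.default_rng(0)
m = 100
a, b = rng.normal(size=(2, m))

def zscore(v):
    return (v - v.mean()) / v.std()   # population std (ddof=0)

r = np.corrcoef(a, b)[0, 1]           # Pearson correlation
lhs = np.sum((zscore(a) - zscore(b)) ** 2)
rhs = 2 * m * (1 - r)
print(np.isclose(lhs, rhs))           # True
```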
Minutes of TOPEX/POSEIDON Science Working Team Meeting and Ocean Tides Workshop
NASA Technical Reports Server (NTRS)
Fu, Lee-Lueng (Editor)
1995-01-01
This third TOPEX/POSEIDON Science Working Team meeting was held on December 4, 1994 to review progress in defining ocean tide models, precision Earth orbits, and various science algorithms. A related workshop on ocean tides convened to select the best models to be used by scientists in the Geophysical Data Records.
ERIC Educational Resources Information Center
Corlu, M. Sencer; Capraro, Robert M.; Corlu, M. Ali
2011-01-01
Students need to achieve automaticity in learning mathematics without sacrificing conceptual understanding of the algorithms that are essential in being successful in algebra and problem solving, as well as in science. This research investigated the relationship between science-contextualized problems and computational fluency by testing an…
Global Precipitation Measurement (GPM) Ground Validation (GV) Science Implementation Plan
NASA Technical Reports Server (NTRS)
Petersen, Walter A.; Hou, Arthur Y.
2008-01-01
For pre-launch algorithm development and post-launch product evaluation, Global Precipitation Measurement (GPM) Ground Validation (GV) goes beyond direct comparisons of surface rain rates between ground and satellite measurements to provide the means for improving retrieval algorithms and model applications. Three approaches to GPM GV include direct statistical validation (at the surface), precipitation physics validation (in a vertical column), and integrated science validation (4-dimensional). These three approaches support five themes: core satellite error characterization; constellation satellites validation; development of physical models of snow, cloud water, and mixed phase; development of cloud-resolving models (CRM) and land-surface models to bridge observations and algorithms; and development of coupled CRM-land surface modeling for basin-scale water budget studies and natural hazard prediction. This presentation describes the implementation of these approaches.
Majumdar, Satya N
2003-08-01
We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.
NASA Astrophysics Data System (ADS)
Majumdar, Satya N.
2003-08-01
We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.
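A small sketch of the LZ78-style incremental parsing both of these records refer to: each new phrase is the shortest substring not seen before, the phrases form a digital search tree, and the length of the longest phrase corresponds to the tree height whose statistics the paper relates to the aggregation model. The input string here is arbitrary.

```python
# Sketch: LZ78-style incremental parsing into "shortest new" phrases.
def lz78_phrases(s):
    phrases, current, seen = [], "", {""}
    for ch in s:
        current += ch
        if current not in seen:       # shortest word not seen before
            seen.add(current)
            phrases.append(current)
            current = ""
    if current:
        phrases.append(current)       # possibly repeated final fragment
    return phrases

words = lz78_phrases("0101101001000101101")
print(words, max(len(w) for w in words))   # longest word = tree height
```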
InSAR Scientific Computing Environment
NASA Technical Reports Server (NTRS)
Rosen, Paul A.; Sacco, Gian Franco; Gurrola, Eric M.; Zebker, Howard A.
2011-01-01
This computing environment is the next generation of geodetic image processing technology for repeat-pass Interferometric Synthetic Aperture Radar (InSAR) sensors, identified by the community as a needed capability to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products. This software allows users of interferometric radar data the flexibility to process from Level 0 to Level 4 products using a variety of algorithms and for a range of available sensors. There are many radar satellites in orbit today delivering to the science community data of unprecedented quantity and quality, making possible large-scale studies in climate research, natural hazards, and the Earth's ecosystem. The proposed DESDynI mission, now under consideration by NASA for launch later in this decade, would provide time series and multi-image measurements that permit 4D models of Earth surface processes so that, for example, climate-induced changes over time would become apparent and quantifiable. This advanced data processing technology, applied to a global data set such as from the proposed DESDynI mission, enables a new class of analyses at time and spatial scales unavailable using current approaches. This software implements an accurate, extensible, and modular processing system designed to realize the full potential of InSAR data from future missions such as the proposed DESDynI, existing radar satellite data, as well as data from the NASA UAVSAR (Uninhabited Aerial Vehicle Synthetic Aperture Radar) and other airborne platforms. The processing approach has been re-thought in order to enable multi-scene analysis by adding new algorithms and data interfaces, to permit user-reconfigurable operation and extensibility, and to capitalize on codes already developed by NASA and the science community. The framework incorporates modern programming methods based on recent research, including object-oriented scripts controlling legacy and new codes, abstraction and generalization of the data model for efficient manipulation of objects among modules, and well-designed module interfaces suitable for command-line execution or GUI programming. The framework is designed to allow user contributions to promote maximum utility and sophistication of the code, creating an open-source community that could extend the framework into the indefinite future.
Landsat Science: 40 Years of Innovation and Opportunity
NASA Technical Reports Server (NTRS)
Cook, Bruce D.; Irons, James R.; Masek, Jeffrey G.; Loveland, Thomas R.
2012-01-01
Landsat satellites have provided unparalleled Earth-observing data for nearly 40 years, allowing scientists to describe, monitor and model the global environment during a period of time that has seen dramatic changes in population growth, land use, and climate. The success of the Landsat program can be attributed to well-designed instrument specifications, astute engineering, comprehensive global acquisition and calibration strategies, and innovative scientists who have developed analytical techniques and applications to address a wide range of needs at local to global scales (e.g., crop production, water resource management, human health and environmental quality, urbanization, deforestation and biodiversity). Early Landsat contributions included inventories of natural resources and land cover classification maps, which were initially prepared by a visual interpretation of Landsat imagery. Over time, advances in computer technology facilitated the development of sophisticated image processing algorithms and complex ecosystem modeling, enabling scientists to create accurate, reproducible, and more realistic simulations of biogeochemical processes (e.g., plant production and ecosystem dynamics). Today, the Landsat data archive is freely available for download through the USGS, creating new opportunities for scientists to generate global image datasets, develop new change detection algorithms, and provide products in support of operational programs such as Reducing Emissions from Deforestation and Forest Degradation in Developing Countries (REDD). In particular, the use of dense (approximately annual) time series to characterize both rapid and progressive landscape change has yielded new insights into how the land environment is responding to anthropogenic and natural pressures. The launch of the Landsat Data Continuity Mission (LDCM) satellite in 2012 will continue to propel innovative Landsat science.
The Gemini NICI Planet-Finding Campaign: The Companion Detection Pipeline
NASA Astrophysics Data System (ADS)
Wahhaj, Zahed; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Close, Laird M.; Hayward, Thomas L.; Hartung, Markus; Chun, Mark; Ftaclas, Christ; Toomey, Douglas W.
2013-12-01
We present high-contrast image processing techniques used by the Gemini NICI Planet-Finding Campaign to detect faint companions to bright stars. The Near-Infrared Coronagraphic Imager (NICI) is an adaptive optics instrument installed on the 8 m Gemini South telescope, capable of angular and spectral difference imaging and specifically designed to image exoplanets. The Campaign data pipeline achieves median contrasts of 12.6 mag at 0.''5 and 14.4 mag at 1'' separation, for a sample of 45 stars (V = 4.3-13.9 mag) from the early phase of the campaign. We also present a novel approach to calculating contrast curves for companion detection based on 95% completeness in the recovery of artificial companions injected into the raw data, while accounting for the false-positive rate. We use this technique to select the image processing algorithms that are more successful at recovering faint simulated point sources. We compare our pipeline to the performance of the Locally Optimized Combination of Images (LOCI) algorithm for NICI data and do not find significant improvement with LOCI. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).
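A toy version of the injection-recovery idea described above, with a deliberately simplistic single-pixel detection criterion rather than the Campaign pipeline: inject artificial companions of decreasing flux into noise frames, measure the recovered fraction, and stop at the faintest flux still recovered in at least 95% of trials. All values (frame size, noise level, threshold) are illustrative.

```python
# Sketch: 95%-completeness contrast estimation by injection-recovery.
import numpy as np

rng = np.random.default_rng(0)
noise_sigma, n_trials, threshold = 1.0, 500, 5.0   # 5-sigma detection

def recovery_fraction(flux):
    hits = 0
    for _ in range(n_trials):
        frame = rng.normal(scale=noise_sigma, size=(64, 64))
        frame[32, 40] += flux                       # inject companion
        hits += frame[32, 40] > threshold * noise_sigma
    return hits / n_trials

for flux in np.arange(8.0, 4.0, -0.5):              # brightest to faintest
    frac = recovery_fraction(flux)
    if frac < 0.95:
        break                                       # completeness limit hit
    print(f"flux {flux:.1f}: completeness {frac:.2f}")
```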
Progress with lossy compression of data from the Community Earth System Model
NASA Astrophysics Data System (ADS)
Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.
2017-12-01
Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
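A minimal example of the compress-then-verify workflow sketched above. Uniform quantization stands in for a real lossy compressor, and the checks on maximum error and on the field's extreme value are simple stand-ins for the paper's metric suite; the field itself is synthetic.

```python
# Sketch: lossy-compress a toy climate field, then verify error bounds.
import numpy as np

def quantize(field, step):
    return np.round(field / step) * step   # lossy: keeps multiples of step

rng = np.random.default_rng(0)
field = rng.normal(size=(192, 288)) * 10    # toy lat-lon variable
step = 0.1
recon = quantize(field, step)

max_err = np.max(np.abs(field - recon))            # pointwise error bound
extreme_shift = abs(field.max() - recon.max())     # is the extreme preserved?
print(f"max abs error {max_err:.3f} (bound {step / 2})")
print(f"shift in field maximum {extreme_shift:.3f}")
assert max_err <= step / 2 + 1e-12
```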
Product Quality of ESA's Atmospheric-Chemistry Missions
NASA Astrophysics Data System (ADS)
Dehn, Angelika; Bojkov, Bojan; Fehr, Thorsten
2012-11-01
ESA's atmospheric-chemistry missions are providing fundamental information for the understanding of atmospheric chemistry processes. The global datasets support climate research, air quality assessments, stratospheric ozone monitoring, and many other science areas and operational services. ENVISAT, with GOMOS, MIPAS and SCIAMACHY, contributed a unique data set over a period of 10 years before its major anomaly in April 2012, which led to the end of the operational part of the mission. GOME, on board ERS-2, acquired data for 16 years before its decommissioning in July 2011. The quality of the corresponding data sets continues to be improved, also beyond the termination of the satellites' operational phases. This is realised with the support of numerous teams: science experts evolving the algorithm and calibration baseline, and validation teams assessing the resulting upgraded data sets.
NASA Astrophysics Data System (ADS)
Pavlis, Nikolaos K.
Geomatics is a trendy term that has been used in recent years to describe academic departments that teach and research theories, methods, algorithms, and practices used in processing and analyzing data related to the Earth and other planets. Naming trends aside, geomatics could be considered as the mathematical and statistical “toolbox” that allows Earth scientists to extract information about physically relevant parameters from the available data and accompany such information with some measure of its reliability. This book is an attempt to present the mathematical-statistical methods used in data analysis within various disciplines—geodesy, geophysics, photogrammetry and remote sensing—from a unifying perspective that inverse problem formalism permits. At the same time, it allows us to stretch the relevance of statistical methods in achieving an optimal solution.
Diverse Applications of Electronic-Nose Technologies in Agriculture and Forestry
Wilson, Alphus D.
2013-01-01
Electronic-nose (e-nose) instruments, derived from numerous types of aroma-sensor technologies, have been developed for a diversity of applications in the broad fields of agriculture and forestry. Recent advances in e-nose technologies within the plant sciences, including improvements in gas-sensor designs, innovations in data analysis and pattern-recognition algorithms, and progress in material science and systems integration methods, have led to significant benefits to both industries. Electronic noses have been used in a variety of commercial agricultural-related industries, including the agricultural sectors of agronomy, biochemical processing, botany, cell culture, plant cultivar selections, environmental monitoring, horticulture, pesticide detection, plant physiology and pathology. Applications in forestry include uses in chemotaxonomy, log tracking, wood and paper processing, forest management, forest health protection, and waste management. These aroma-detection applications have improved plant-based product attributes, quality, uniformity, and consistency in ways that have increased the efficiency and effectiveness of production and manufacturing processes. This paper provides a comprehensive review and summary of a broad range of electronic-nose technologies and applications, developed specifically for the agriculture and forestry industries over the past thirty years, which have offered solutions that have greatly improved worldwide agricultural and agroforestry production systems. PMID:23396191
Diverse applications of electronic-nose technologies in agriculture and forestry.
Wilson, Alphus D
2013-02-08
Electronic-nose (e-nose) instruments, derived from numerous types of aroma-sensor technologies, have been developed for a diversity of applications in the broad fields of agriculture and forestry. Recent advances in e-nose technologies within the plant sciences, including improvements in gas-sensor designs, innovations in data analysis and pattern-recognition algorithms, and progress in material science and systems integration methods, have led to significant benefits to both industries. Electronic noses have been used in a variety of commercial agricultural-related industries, including the agricultural sectors of agronomy, biochemical processing, botany, cell culture, plant cultivar selections, environmental monitoring, horticulture, pesticide detection, plant physiology and pathology. Applications in forestry include uses in chemotaxonomy, log tracking, wood and paper processing, forest management, forest health protection, and waste management. These aroma-detection applications have improved plant-based product attributes, quality, uniformity, and consistency in ways that have increased the efficiency and effectiveness of production and manufacturing processes. This paper provides a comprehensive review and summary of a broad range of electronic-nose technologies and applications, developed specifically for the agriculture and forestry industries over the past thirty years, which have offered solutions that have greatly improved worldwide agricultural and agroforestry production systems.
An Expert System toward Building an Earth Science Knowledge Graph
NASA Astrophysics Data System (ADS)
Zhang, J.; Duan, X.; Ramachandran, R.; Lee, T. J.; Bao, Q.; Gatlin, P. N.; Maskey, M.
2017-12-01
In this ongoing work, we aim to build the foundations of cognitive computing for Earth science research. The goal of our project is to develop an end-to-end automated methodology for incrementally constructing Knowledge Graphs for Earth Science (KG4ES). These knowledge graphs can then serve as the foundational components for building cognitive systems in Earth science, enabling researchers to uncover new patterns and hypotheses that are virtually impossible to identify today. In addition, this research focuses on developing the mining algorithms needed to exploit these constructed knowledge graphs. As such, these graphs will free knowledge from publications that are generated in a very linear, deterministic manner, and structure knowledge in a way that users can both interact and connect with relevant pieces of information. Our major contributions are two-fold. First, we have developed an end-to-end methodology for constructing KG4ES using an existing corpus of journal papers and reports. One of the key challenges in any machine learning application, especially deep learning, is the need for robust and large training datasets. We have developed techniques capable of automatically retraining models and incrementally building and updating KG4ES based on ever-evolving training data. We also adopt an evaluation instrument based on common research methodologies used in Earth science research, especially in atmospheric science. Second, we have developed an algorithm to infer new knowledge by exploiting the constructed KG4ES. In more detail, we have developed a network prediction algorithm aiming to explore and predict possible new connections in the KG4ES and aid in new knowledge discovery.
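As an illustration of the network-prediction step (the paper's own algorithm is not specified here), the following sketch scores unconnected concept pairs in a hypothetical Earth-science concept graph with the Adamic-Adar index, a standard link-prediction heuristic. The graph contents are invented for the example.

```python
# Sketch: Adamic-Adar link prediction on a toy concept graph.
import math
from itertools import combinations

graph = {                                    # hypothetical concept graph
    "aerosol": {"cloud", "radiation"},
    "cloud": {"aerosol", "precipitation", "radiation"},
    "precipitation": {"cloud", "soil moisture"},
    "radiation": {"aerosol", "cloud"},
    "soil moisture": {"precipitation"},
}

def adamic_adar(u, v):
    # Common neighbors weighted by inverse log-degree; a shared low-degree
    # neighbor is stronger evidence than a shared hub.
    common = graph[u] & graph[v]             # any common neighbor has degree >= 2
    return sum(1.0 / math.log(len(graph[w])) for w in common)

candidates = [(u, v) for u, v in combinations(graph, 2)
              if v not in graph[u]]          # only currently missing edges
for u, v in sorted(candidates, key=lambda p: -adamic_adar(*p)):
    print(f"{u} -- {v}: {adamic_adar(u, v):.2f}")
```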
COALA-System for Visual Representation of Cryptography Algorithms
ERIC Educational Resources Information Center
Stanisavljevic, Zarko; Stanisavljevic, Jelena; Vuletic, Pavle; Jovanovic, Zoran
2014-01-01
Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper…
Fractal Landscape Algorithms for Environmental Simulations
NASA Astrophysics Data System (ADS)
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and diamond-square algorithms, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only contribute to generating the terrains themselves, but are also capable of simulating weather patterns.
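A compact diamond-square implementation matching the description above: seed corner values shape the terrain, and the noise amplitude shrinks by a roughness factor at each level. Grid size, roughness, and seed are illustrative choices, not values from the study.

```python
# Sketch: diamond-square fractal terrain generation.
import random

def diamond_square(n, roughness=0.5, seed=1):
    size = 2 ** n + 1
    rnd = random.Random(seed)
    g = [[0.0] * size for _ in range(size)]
    for r, c in [(0, 0), (0, size - 1), (size - 1, 0), (size - 1, size - 1)]:
        g[r][c] = rnd.uniform(-1, 1)          # seed values shape the terrain
    step, amp = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: center of each square = mean of its 4 corners + noise.
        for r in range(half, size, step):
            for c in range(half, size, step):
                avg = (g[r - half][c - half] + g[r - half][c + half] +
                       g[r + half][c - half] + g[r + half][c + half]) / 4
                g[r][c] = avg + rnd.uniform(-amp, amp)
        # Square step: midpoint of each edge = mean of its neighbors + noise.
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                nbrs = [g[r + dr][c + dc] for dr, dc in
                        ((-half, 0), (half, 0), (0, -half), (0, half))
                        if 0 <= r + dr < size and 0 <= c + dc < size]
                g[r][c] = sum(nbrs) / len(nbrs) + rnd.uniform(-amp, amp)
        step, amp = half, amp * roughness     # amplitude decays per level
    return g

terrain = diamond_square(5)                   # 33 x 33 height map
print(len(terrain), min(map(min, terrain)), max(map(max, terrain)))
```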
Reuse Requirements for Generating Long Term Climate Data Sets
NASA Astrophysics Data System (ADS)
Fleig, A. J.
2007-12-01
Creating long term climate data sets from remotely sensed data requires a specialized form of code reuse. To detect long term trends in a geophysical parameter, such as global ozone amount or mean sea surface temperature, it is essential to be able to differentiate between real changes in the measurement and artifacts related to changes in processing algorithms or instrument characteristics. The ability to rerun the exact algorithm used to produce a given data set many years after the data was originally made is essential to create consistent long term data sets. It is possible to quickly develop a basic algorithm that will convert a perfect instrument measurement into a geophysical parameter value for a well specified set of conditions. However, the devil is in the details, and it takes a massive effort to develop and verify a processing system to generate high quality global climate data over all necessary conditions. As an example, from 1976 until now, over a hundred man-years and eight complete reprocessings have been spent on deriving thirty years of total ozone data from multiple backscattered ultraviolet instruments. To obtain a global data set it is necessary to make numerous assumptions and to handle many special conditions (e.g., "What happens at high solar zenith angles with scattered clouds over snow-covered terrain at high altitudes?"). It is easier to determine the precision of a remotely sensed data set than to determine its absolute accuracy. Fortunately, if the entire data set is made with a single instrument and a constant algorithm, the ability to detect long term trends is primarily determined by the precision of the measurement system rather than its absolute accuracy. However, no instrument runs forever, and new processing algorithms are developed over time. Introducing the resulting changes can impact the estimate of product precision and reduce the ability to estimate long term trends. Given an extended period of time when both the initial measurement system and the new one provide simultaneous measurements, it may be possible to identify differences between the two systems and produce a consistent merged long term data set. Unfortunately this is often not the case. Instead it is necessary to understand the exact details of all the assumptions built into the initial processing system and to evaluate the impact of changes in each of these assumptions and of new features introduced into the next generation processing system. This is not possible without complete understanding of exactly how the original data was produced. While scientific papers and algorithm theoretical basis documents provide substantial details about the concepts, they do not provide the necessary detail. Only exact processing codes, with all the necessary ancillary data to run them, provide the needed information. Since it will be necessary to modify the code for the new instrument, it is also necessary to provide all of the tools, such as table generation routines and input parameters, used to generate the code. This has not been a problem for the people that make the first set of measurements of a given parameter: there was no similar predecessor global data set to match, and they knew what they assumed in making their measurements. But we are entering an era when it is necessary to consider the next generation. For instance, the entire 30-year global ozone data set that started with the Total Ozone Mapping Spectrometer instrument launched in 1978 on the Nimbus 7 spacecraft was produced by a single science team.
Similar measurements will be made well into the middle of the coming century with instruments to be flown on the National Polar Orbiting Environmental Satellite System, but the original science team (unfortunately) will not be there to explain what they did over that period.
Hybrid ontology for semantic information retrieval model using keyword matching indexing system.
Uthayan, K R; Mala, G S Anandha
2015-01-01
Ontology development is the process of growing and elucidating the concepts of an information domain shared by a group of users. Incorporating ontology into information retrieval is a natural way to improve the retrieval of the relevant information users require. Matching keywords against a historical or domain-specific corpus is central to current approaches for finding the best match for a given input query. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic form, which is used to route the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to use a semantic model and to query on conditions of semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries against the information field. Queries and the information domain are matched semantically to discover the best match and to speed up the retrieval process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the documents when compared to standard ontology.
Bio-inspired nano-sensor-enhanced CNN visual computer.
Porod, Wolfgang; Werblin, Frank; Chua, Leon O; Roska, Tamas; Rodriguez-Vazquez, Angel; Roska, Botond; Fay, Patrick; Bernstein, Gary H; Huang, Yih-Fang; Csurgay, Arpad I
2004-05-01
Nanotechnology opens new ways to utilize recent discoveries in biological image processing by translating the underlying functional concepts into the design of CNN (cellular neural/nonlinear network)-based systems incorporating nanoelectronic devices. There is a natural intersection joining studies of retinal processing, spatio-temporal nonlinear dynamics embodied in CNN, and the possibility of miniaturizing the technology through nanotechnology. This intersection serves as the springboard for our multidisciplinary project. Biological feature and motion detectors map directly into the spatio-temporal dynamics of CNN for target recognition, image stabilization, and tracking. The neural interactions underlying color processing will drive the development of nanoscale multispectral sensor arrays for image fusion. Implementing such nanoscale sensors on a CNN platform will allow the implementation of device feedback control, a hallmark of biological sensory systems. These biologically inspired CNN subroutines are incorporated into the new world of analog-and-logic algorithms and software, containing also many other active-wave computing mechanisms, including nature-inspired (physics and chemistry) as well as PDE-based sophisticated spatio-temporal algorithms. Our goal is to design and develop several miniature prototype devices for target detection, navigation, tracking, and robotics. This paper presents an example illustrating the synergies emerging from the convergence of nanotechnology, biotechnology, and information and cognitive science.
Wideband Agile Digital Microwave Radiometer
NASA Technical Reports Server (NTRS)
Gaier, Todd C.; Brown, Shannon T.; Ruf, Christopher; Gross, Steven
2012-01-01
The objectives of this work were to take the initial steps needed to develop a field programmable gate array (FPGA)-based wideband digital radiometer back end (>500 MHz bandwidth) that will enable passive microwave observations with minimal performance degradation in a radio-frequency-interference (RFI)-rich environment. As manmade RF emissions increase over time and fill more of the microwave spectrum, microwave radiometer science applications will be increasingly impacted in a negative way, and the current generation of spaceborne microwave radiometers that use broadband analog back ends will become severely compromised or unusable over an increasing fraction of time on orbit. There is a need to develop a digital radiometer back end that, for each observation period, uses digital signal processing (DSP) algorithms to identify the maximum amount of RFI-free spectrum across the radiometer band, preserving bandwidth to minimize radiometer noise (which is inversely related to the bandwidth). Ultimately, the objective is to incorporate all processing necessary in the back end to take contaminated input spectra and produce a single output value free of manmade signals, to minimize data rates for spaceborne radiometer missions. But to meet these objectives, several intermediate processing algorithms had to be developed, and their performance characterized relative to typical brightness temperature accuracy requirements for current and future microwave radiometer missions, including those for measuring salinity, soil moisture, and snow pack.
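One common way to realize the RFI-excision idea described above is kurtosis-based flagging; the sketch below illustrates that general technique and is not the back end developed in this work. Channel counts, tone amplitudes, and the threshold are arbitrary: Gaussian (thermal) noise has zero excess kurtosis, while sinusoidal or keyed manmade signals pull it away from zero, so non-Gaussian channels are dropped before averaging.

```python
# Sketch: per-channel kurtosis flagging of RFI-contaminated sub-bands.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n_chan, n_samp = 64, 4096
# Gaussian noise per channel (natural thermal emission)...
data = rng.normal(size=(n_chan, n_samp))
# ...plus manmade tones contaminating a few channels.
t = np.arange(n_samp)
data[10] += 2.0 * np.sin(0.3 * t)                # CW-like interferer
data[41] += 1.2 * np.sign(np.sin(0.05 * t))      # keyed (square) interferer

k = kurtosis(data, axis=1)           # excess kurtosis: ~0 for Gaussian
clean = np.abs(k) < 0.2              # flag non-Gaussian channels
power = np.mean(data[clean] ** 2)    # radiometric estimate from clean band
print(f"kept {clean.sum()}/{n_chan} channels, power {power:.3f}")
```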
Hybrid Ontology for Semantic Information Retrieval Model Using Keyword Matching Indexing System
Uthayan, K. R.; Anandha Mala, G. S.
2015-01-01
Ontology is the process of growth and elucidation of concepts of an information domain that are common to a group of users. Introducing ontology into information retrieval is a common way to improve the retrieval of the relevant information users require. The process of matching keywords against a historical or information domain is significant in recent approaches to finding the best match for a specific input query. This research presents an improved querying mechanism for information retrieval that integrates ontology queries with keyword search. The ontology-based query is converted into first-order predicate logic, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to study the semantics of the model and the query when performing semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. The queries and the information domain are focused on semantic matching, to discover the best match and to improve the execution process. In conclusion, the hybrid ontology in the semantic web is more effective at retrieving documents than standard ontology. PMID:25922851
Backup Attitude Control Algorithms for the MAP Spacecraft
NASA Technical Reports Server (NTRS)
O'Donnell, James R., Jr.; Andrews, Stephen F.; Ericsson-Jackson, Aprille J.; Flatley, Thomas W.; Ward, David K.; Bay, P. Michael
1999-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The MAP spacecraft will perform its mission, studying the early origins of the universe, in a Lissajous orbit around the Earth-Sun L(sub 2) Lagrange point. Due to limited mass, power, and financial resources, a traditional reliability concept involving fully redundant components was not feasible. This paper will discuss the redundancy philosophy used on MAP, describe the hardware redundancy selected (and why), and present backup modes and algorithms that were designed in lieu of additional attitude control hardware redundancy to improve the odds of mission success. Three of these modes have been implemented in the spacecraft flight software. The first onboard mode allows the MAP Kalman filter to be used with digital sun sensor (DSS) derived rates, in case of the failure of one of MAP's two two-axis inertial reference units. Similarly, the second onboard mode allows a star tracker only mode, using attitude and derived rate from one or both of MAP's star trackers for onboard attitude determination and control. The last backup mode onboard allows a sun-line angle offset to be commanded that will allow solar radiation pressure to be used for momentum management and orbit stationkeeping. In addition to the backup modes implemented on the spacecraft, two backup algorithms have been developed in the event of less likely contingencies. One of these is an algorithm for implementing an alternative scan pattern to MAP's nominal dual-spin science mode using only one or two reaction wheels and thrusters. Finally, an algorithm has been developed that uses thruster one shots while in science mode for momentum management. This algorithm has been developed in case system momentum builds up faster than anticipated, to allow adequate momentum management while minimizing interruptions to science. In this paper, each mode and algorithm will be discussed, and simulation results presented.
Calibrated Noise Measurements with Induced Receiver Gain Fluctuations
NASA Technical Reports Server (NTRS)
Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly
2011-01-01
The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of nonstationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series. In a radiometer that samples a set of noise references, by contrast, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of nonstationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's nonstationary properties. Data sets comprising calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (100s of seconds); however, the receiver exhibits local nonstationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess their performance in the presence of the gain fluctuation signals. This presentation will describe the RGMC, the experiment design, and a comparative analysis of calibration algorithms.
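For readers unfamiliar with reference-based calibration, the following is a minimal sketch (not taken from the paper) of standard two-reference radiometer calibration, which converts raw receiver counts to scene temperature; because the references are sampled alongside the scene, a common gain fluctuation largely cancels. The numbers, the 2% sinusoidal gain drift, and the linear receiver model are illustrative assumptions.

```python
import numpy as np

def two_point_calibration(counts_scene, counts_hot, counts_cold, t_hot, t_cold):
    """Convert raw radiometer counts to temperature using two calibrated noise
    references (a minimal linear-gain sketch): gain in counts per kelvin is
    estimated from the two references, and the offset follows from either one."""
    gain = (counts_hot - counts_cold) / (t_hot - t_cold)
    offset = counts_cold - gain * t_cold
    return (counts_scene - offset) / gain

# Illustrative numbers only: a slow sinusoidal gain drift modulates all counts,
# loosely mimicking an externally modulated receiver gain.
rng = np.random.default_rng(0)
t_hot, t_cold, t_scene = 370.0, 77.0, 150.0          # reference and scene temps [K]
n = 1000
drift = 1.0 + 0.02 * np.sin(np.linspace(0, 20, n))   # 2% gain modulation
noise = lambda: rng.normal(0, 0.5, n)
c_hot = drift * (10.0 * t_hot) + noise()
c_cold = drift * (10.0 * t_cold) + noise()
c_scene = drift * (10.0 * t_scene) + noise()

t_est = two_point_calibration(c_scene, c_hot, c_cold, t_hot, t_cold)
print(f"mean retrieved scene temperature: {t_est.mean():.2f} K")
```

Because the gain drift multiplies the scene and both references identically here, it cancels in the calibration ratio, which is the basic reason calibrated noise references can track receiver gain fluctuations.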
Portable laser speckle perfusion imaging system based on digital signal processor.
Tang, Xuejun; Feng, Nengyun; Sun, Xiaoli; Li, Pengcheng; Luo, Qingming
2010-12-01
The ability to monitor blood flow in vivo is of major importance in clinical diagnosis and in basic life science research. As a noninvasive full-field technique without the need for scanning, laser speckle contrast imaging (LSCI) is widely used to study blood flow with high spatial and temporal resolution. Current LSCI systems rely on personal computers for image processing and are consequently large, which potentially limits their widespread clinical utility. A portable laser speckle contrast imaging system that does not compromise processing efficiency is therefore crucial for clinical diagnosis. However, the processing of laser speckle contrast images is time-consuming due to the heavy computation required for enormous amounts of high-resolution image data. To address this problem, a portable laser speckle perfusion imaging system based on a digital signal processor (DSP), together with an algorithm suited to the DSP, is described. With a highly integrated DSP and this algorithm, we have markedly reduced the size and weight of the system as well as its energy consumption while preserving high processing speed. In vivo experiments demonstrate that our portable laser speckle perfusion imaging system can obtain blood flow images at 25 frames per second at a resolution of 640 × 480 pixels. The portable and lightweight features make it capable of being adapted to a wide variety of application areas such as the research laboratory, operating room, ambulance, and even disaster site.
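As an illustration of the core computation such a processor must perform, here is a minimal sketch of spatial speckle contrast (K = standard deviation / mean over a sliding window) using NumPy and SciPy; the 7-pixel window and the synthetic gamma-distributed frame are assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Spatial speckle contrast K = std/mean over a sliding window.
    Lower K generally corresponds to faster flow (more speckle blurring)."""
    raw = raw.astype(float)
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw ** 2, size=window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)   # guard against negative rounding
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# Synthetic 640 x 480 frame for illustration only.
frame = np.random.default_rng(0).gamma(shape=4.0, scale=50.0, size=(480, 640))
K = speckle_contrast(frame)
print(K.shape, K.mean())
```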
The Java Image Science Toolkit (JIST) for rapid prototyping and publishing of neuroimaging software.
Lucas, Blake C; Bogovic, John A; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L; Pham, Dzung L; Landman, Bennett A
2010-03-01
Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC).
The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software
Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung
2010-01-01
Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162
Estimating the beam attenuation coefficient in coastal waters from AVHRR imagery
NASA Astrophysics Data System (ADS)
Gould, Richard W.; Arnone, Robert A.
1997-09-01
This paper presents an algorithm to estimate particle beam attenuation at 660 nm (c_p(660)) in coastal areas using the red and near-infrared channels of the NOAA AVHRR satellite sensor. In situ reflectance spectra and c_p(660) measurements were collected at 23 stations in Case I and II waters during an April 1993 cruise in the northern Gulf of Mexico. The reflectance spectra were weighted by the spectral response of the AVHRR sensor and integrated over the channel 1 waveband to estimate the atmospherically corrected signal recorded by the satellite. An empirical relationship between integrated reflectance and c_p(660) values was derived with a linear correlation coefficient of 0.88. Because the AVHRR sensor requires a strong channel 1 signal, the algorithm is applicable in highly turbid areas (c_p(660) > 1.5 m^-1) where scattering from suspended sediment strongly controls the shape and magnitude of the red (550-650 nm) reflectance spectrum. The algorithm was tested on a data set collected 2 years later in different coastal waters in the northern Gulf of Mexico, and satellite estimates of c_p(660) averaged within 37% of measured values. Application of the algorithm provides daily images of nearshore regions at 1 km resolution for evaluating processes affecting ocean color distribution patterns (tides, winds, currents, river discharge). Further validation and refinement of the algorithm are in progress to permit quantitative application in other coastal areas.
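A minimal sketch of the two steps described above: weighting in situ reflectance spectra by an assumed AVHRR channel-1 spectral response and fitting a linear relationship between the band-integrated reflectance and measured c_p(660). The response function, station spectra, and c_p(660) values are random placeholders, so the fitted coefficients are meaningless; only the procedure is illustrated.

```python
import numpy as np

# Hypothetical inputs: wavelengths [nm], in situ reflectance spectra (stations x bands),
# an assumed AVHRR channel-1 relative spectral response, and measured c_p(660) [1/m].
wavelengths = np.arange(550.0, 751.0, 5.0)
n_stations = 23
rng = np.random.default_rng(1)
spectra = 0.02 + 0.01 * rng.random((n_stations, wavelengths.size))   # placeholder spectra
response = np.exp(-0.5 * ((wavelengths - 630.0) / 60.0) ** 2)        # placeholder response
cp660 = 0.5 + 3.0 * rng.random(n_stations)                           # placeholder c_p(660)

# Band-integrated (response-weighted) reflectance per station.
weights = response / response.sum()
r_int = spectra @ weights

# Empirical linear fit c_p(660) ~ a * R_int + b, as in the algorithm described above.
a, b = np.polyfit(r_int, cp660, 1)
r = np.corrcoef(r_int, cp660)[0, 1]
print(f"c_p(660) ~= {a:.2f} * R_int + {b:.2f}   (r = {r:.2f})")
```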
NASA Astrophysics Data System (ADS)
Kreinovich, Vladik; Longpre, Luc; Starks, Scott A.; Xiang, Gang; Beck, Jan; Kandathi, Raj; Nayak, Asis; Ferson, Scott; Hajagos, Janos
2007-02-01
In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t)--e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit (DL). We must, therefore, modify the existing statistical algorithms to process such interval data. Such a modification is also necessary to process data from statistical databases, where, in order to maintain privacy, we only keep interval ranges instead of the actual numeric data (e.g., a salary range instead of the actual salary). Most resulting computational problems are NP-hard--which means, crudely speaking, that in general, no computationally efficient algorithm can solve all particular cases of the corresponding problem. In this paper, we overview practical situations in which computationally efficient algorithms exist: e.g., situations when measurements are very accurate, or when all the measurements are done with one (or few) instruments. As a case study, we consider a practical problem from bioinformatics: to discover the genetic difference between the cancer cells and the healthy cells, we must process the measurement results and find the concentrations c and h of a given gene in cancer and in healthy cells. This is a particular case of a general situation in which, to estimate states or parameters which are not directly accessible by measurements, we must solve a system of equations in which coefficients are only known with interval uncertainty. We show that in general, this problem is NP-hard, and we describe new efficient algorithms for solving this problem in practically important situations.
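A minimal sketch of the easy case mentioned above: when each measurement is only known to lie in an interval (for example, a non-detect between 0 and the detection limit), the range of possible sample means follows directly from the interval endpoints, because the mean is monotone in every value. Bounds on the variance are NP-hard in general and are not sketched here.

```python
def interval_mean(intervals):
    """Range of the sample mean when each x_i is only known to lie in [lo_i, hi_i].
    The mean is monotone in every x_i, so its range is simply
    [mean of lower endpoints, mean of upper endpoints]."""
    lows = [lo for lo, hi in intervals]
    highs = [hi for lo, hi in intervals]
    n = len(intervals)
    return sum(lows) / n, sum(highs) / n

# Example: three detected pollution values and two non-detects in [0, DL] with DL = 0.1.
data = [(2.3, 2.3), (1.7, 1.7), (0.0, 0.1), (3.1, 3.1), (0.0, 0.1)]
print(interval_mean(data))   # -> (1.42, 1.46)
```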
Data Assimilation Experiments Using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2009-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains a number of significant improvements over Version 4. Two very significant improvements are described briefly below. 1) The AIRS Science Team Radiative Transfer Algorithm (RTA) has now been upgraded to accurately account for effects of non-local thermodynamic equilibrium on the AIRS observations. This allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval algorithm during both day and night. Following theoretical considerations, tropospheric temperature profile information is obtained almost exclusively from clear column radiances in the 4.3 micron CO2 band in the AIRS Version 5 temperature profile retrieval step. These clear column radiances are a derived product indicative of the radiances AIRS channels would have seen if the field of view were completely clear. Clear column radiances for all channels are determined using tropospheric sounding 15 micron CO2 observations. This approach allows for the generation of accurate values of clear column radiances and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as for channel-by-channel clear column radiances. These error estimates are used for quality control of the retrieved products. Based on error estimate thresholds, each temperature profile is assigned a characteristic pressure, pg, down to which the profile is characterized as good for use in data assimilation. We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM, at a spatial resolution of 0.5 deg by 0.5 deg. Assimilation of Quality Controlled AIRS temperature profiles down to pg resulted in significantly improved forecast skill compared to that obtained from experiments when all data used operationally by NCEP, except for AIRS data, is assimilated. These forecasts were also significantly better than those obtained when AIRS radiances (rather than temperature profiles) are assimilated, which is the way AIRS data is used operationally by NCEP and ECMWF.
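A highly simplified stand-in for the quality control step described above: assign each profile a characteristic pressure pg as the deepest level down to which its error estimate stays below a threshold. The threshold, pressure grid, and error values are invented for illustration and are not the Version 5 values.

```python
import numpy as np

def characteristic_pressure(pressure_levels, error_estimate, threshold=1.0):
    """Deepest pressure down to which the error estimate stays below the threshold
    (a simplified stand-in for case-by-case quality control of a retrieved profile).
    Levels are ordered top of atmosphere downward; returns None if even the top fails."""
    good = error_estimate <= threshold
    if not good[0]:
        return None
    last_good = np.argmax(~good) - 1 if (~good).any() else len(good) - 1
    return pressure_levels[last_good]

levels = np.array([100., 200., 300., 500., 700., 850., 1000.])   # hPa, top down
errors = np.array([0.4, 0.5, 0.7, 0.9, 1.2, 1.5, 2.0])           # K, illustrative
print(characteristic_pressure(levels, errors))                   # -> 500.0
```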
Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science
NASA Astrophysics Data System (ADS)
Riedel, Morris; Ramachandran, Rahul; Baumann, Peter
2014-05-01
The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution will outline initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with detailed insights into a few of them.
Research Data Alliance: Understanding Big Data Analytics Applications in Earth Science
NASA Technical Reports Server (NTRS)
Riedel, Morris; Ramachandran, Rahul; Baumann, Peter
2014-01-01
The Research Data Alliance (RDA) enables data to be shared across barriers through focused working groups and interest groups, formed of experts from around the world - from academia, industry and government. Its Big Data Analytics (BDA) interest group seeks to develop community-based recommendations on feasible data analytics approaches to address the scientific community's needs for utilizing large quantities of data. BDA seeks to analyze different scientific domain applications (e.g. earth science use cases) and their potential use of various big data analytics techniques. These techniques range from hardware deployment models to various algorithms (e.g. machine learning algorithms such as support vector machines for classification). A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. This contribution will outline initial parts of such a classification and recommendations in the specific context of the field of Earth Sciences. The lessons learned and experiences presented are based on a survey of use cases, with detailed insights into a few of them.
Key Provenance of Earth Science Observational Data Products
NASA Astrophysics Data System (ADS)
Conover, H.; Plale, B.; Aktas, M.; Ramachandran, R.; Purohit, P.; Jensen, S.; Graves, S. J.
2011-12-01
As the sheer volume of data increases, particularly evidenced in the earth and environmental sciences, local arrangements for sharing data need to be replaced with reliable records about the what, who, how, and where of a data set or collection. This is frequently called the provenance of a data set. While observational data processing systems in the earth sciences have a long history of capturing metadata about the processing pipeline, current processes are limited in both what is captured and how it is disseminated to the science community. Provenance capture plays a role in scientific data preservation and stewardship precisely because it can automatically capture and represent a coherent picture of the what, how and who of a particular scientific collection. It reflects the transformations that a data collection underwent prior to its current form and the sequence of tasks that were executed and data products applied to generate a new product. In the NASA-funded Instant Karma project, we examine provenance capture in earth science applications, specifically the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) Science Investigator-led Processing system (SIPS). The project is integrating the Karma provenance collection and representation tool into the AMSR-E SIPS production environment, with an initial focus on Sea Ice. This presentation will describe capture and representation of provenance that is guided by the Open Provenance Model (OPM). Several things have become clear during the course of the project to date. One is that core OPM entities and relationships are not adequate for expressing the kinds of provenance that are of interest in the science domain. OPM supports name-value pair annotations that can be used to augment what is known about the provenance entities and relationships, but in Karma, annotations cannot be added during capture, only after the fact. This limits the capture system's ability to record something it learned about an entity after the event of its creation in the provenance record. We will discuss extensions to the OPM and modifications to the Karma tool suite to address this issue, more efficient representations of the kinds of provenance found in earth science, and the definition of metadata structures for capturing related knowledge about the data products and science algorithms used to generate them. Use scenarios for provenance information are an active topic of investigation. It has additionally become clear through the project that not all provenance is created equal. In processing pipelines, some provenance is repetitive and uninteresting. Because of the volume of provenance, this obscures the interesting pieces. Methodologies to reveal science-relevant provenance will be presented, along with a preview of the AMSR-E Provenance Browser.
Directly data processing algorithm for multi-wavelength pyrometer (MWP).
Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong
2017-11-27
Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because of the unknown emissivity. Solutions developed so far generally assume particular mathematical relations for emissivity versus wavelength or emissivity versus temperature. Because of the deviation between such hypotheses and the actual situation, the inversion results can be seriously affected. A direct data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, a Gradient Projection (GP) algorithm and an Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core novel idea is that the data processing problem of MWP is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. By comparing simulation results for some typical spectral emissivity models, it is found that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiment results show that true temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to be a direct data processing algorithm that clears the unknown-emissivity obstacle for MWP.
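The following is an illustrative reformulation in the spirit of the abstract, not the paper's GP or IPF algorithm: treat the retrieval as an optimization over temperature in which the implied emissivities L_i / B(lambda_i, T) are penalized for leaving [0, 1], with their spread acting as a mild regularizer. A grey-body target is simulated so the answer is known; SciPy's bounded scalar minimizer is assumed to be available, and the objective, constants, and wavelengths are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

C1 = 1.191043e-16   # 2*h*c^2  [W m^2 / sr]
C2 = 1.438777e-2    # h*c/k_B  [m K]

def planck(lam_m, T):
    """Blackbody spectral radiance at wavelength lam_m [m] and temperature T [K]."""
    return C1 / (lam_m ** 5 * (np.exp(C2 / (lam_m * T)) - 1.0))

def penalty(T, lam_m, radiance, weight=1e3):
    """Illustrative penalty: implied emissivities eps_i = L_i / B(lam_i, T) must lie
    in [0, 1]; violations are penalized and their spread acts as a mild regularizer
    (this is not the paper's GP/IPF objective)."""
    eps = radiance / planck(lam_m, T)
    violation = np.sum(np.maximum(eps - 1.0, 0.0) ** 2 + np.maximum(-eps, 0.0) ** 2)
    return weight * violation + np.var(eps)

# Synthetic grey-body measurement: true T = 1800 K, emissivity 0.65 (unknown to the solver).
lam = np.array([0.9, 1.1, 1.3, 1.5, 1.7]) * 1e-6     # wavelengths [m]
measured = 0.65 * planck(lam, 1800.0)

res = minimize_scalar(penalty, bounds=(1000.0, 3000.0), args=(lam, measured), method="bounded")
print(f"retrieved temperature: {res.x:.0f} K")        # should be close to 1800 K
```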
Aural mapping of STEM concepts using literature mining
NASA Astrophysics Data System (ADS)
Bharadwaj, Venkatesh
Recent technological applications have made people's lives heavily dependent on Science, Technology, Engineering, and Mathematics (STEM) and its applications. Understanding basic science is a must in order to use and contribute to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations and presentations. This leaves visually impaired students with very few options to learn science and secure a career in STEM-related areas. Recent experiments have shown that small aural cues called audemes are helpful for understanding and memorization of science concepts among visually impaired students. Audemes are non-verbal sound translations of a science concept. In order to present science concepts as audemes for visually impaired students, this thesis presents an automatic system for audeme generation from STEM textbooks. This thesis describes the systematic application of multiple Natural Language Processing tools and techniques, such as dependency parsing, POS tagging, information retrieval algorithms, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an audeme. We present a rule-based classification method for all STEM-related concepts. This work also presents a novel way of mapping and extracting the most related sounds for the words used in a textbook. Additionally, machine learning methods are used in the system to guarantee the customization of output according to a user's perception. The system presented is robust, scalable, fully automatic and dynamically adaptable for audeme generation.
NASA Astrophysics Data System (ADS)
Gorelick, Noel
2013-04-01
The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geo-spatial data. All computation is performed lazily; nothing is computed until it's required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
NASA Astrophysics Data System (ADS)
Gorelick, N.
2012-12-01
The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geo-spatial data. All computation is performed lazily; nothing is computed until it's required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
Level 1 Processing of MODIS Direct Broadcast Data at the GSFC DAAC
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Kempler, Steven J. (Technical Monitor)
2001-01-01
The GSFC DAAC is working to test and package the MODIS Level 1 Processing software for Aqua Direct Broadcast data. This entails the same code base, but different lookup tables for Aqua and Terra. However, the most significant change is the use of ancillary attitude and ephemeris files instead of orbit/attitude information within the science data stream (as with Terra). In addition, we are working on Linux ports of the algorithms, which could eventually enable processing on PC clusters. Finally, the GSFC DAAC is also working with the GSFC Direct Readout laboratory to ingest Level 0 data from the GSFC DB antenna into the main DAAC, enabling Level 1 production in near real time in support of applications users, such as the Synergy project. The mechanism developed for this could conceivably be extended to other participating stations.
Automated Data Processing as an AI Planning Problem
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr
2003-01-01
NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and the approach we have adopted.
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations, and their dependence on observation time, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
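As a generic illustration of multi-resolution wavelet processing (not the specific filter banks designed in the paper), the sketch below decomposes a synthetic speckled image with the PyWavelets package, soft-thresholds the detail bands, and reconstructs; the wavelet, decomposition level, and threshold are arbitrary assumptions.

```python
import numpy as np
import pywt

# Synthetic "SAR-like" image: a smooth scene corrupted by unit-mean multiplicative speckle.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
scene = 1.0 + 0.5 * np.sin(6 * np.pi * x) * np.cos(4 * np.pi * y)
image = scene * rng.gamma(shape=4.0, scale=0.25, size=scene.shape)

# Multi-resolution wavelet decomposition, soft-threshold the detail bands, reconstruct.
coeffs = pywt.wavedec2(image, wavelet="db4", level=3)
threshold = 0.1
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(band, threshold, mode="soft") for band in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, wavelet="db4")
print(image.shape, denoised.shape)
```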
NASA's Soil Moisture Active and Passive (SMAP) Mission
NASA Technical Reports Server (NTRS)
Kellogg, Kent; Njoku, Eni; Thurman, Sam; Edelstein, Wendy; Jai, Ben; Spencer, Mike; Chen, Gun-Shing; Entekhabi, Dara; O'Neill, Peggy; Piepmeier, Jeffrey;
2010-01-01
The Soil Moisture Active-Passive (SMAP) Mission is one of the first Earth observation satellites being formulated by NASA in response to the 2007 National Research Council's Decadal Survey. SMAP will make global measurements of soil moisture at the Earth's land surface and its freeze-thaw state. These measurements will allow significantly improved estimates of water, energy and carbon transfers between the land and atmosphere. Soil moisture measurements are also of great importance in assessing flooding and monitoring drought. Knowledge gained from SMAP observations can help mitigate these natural hazards, resulting in potentially great economic and social benefits. SMAP observations of soil moisture and freeze/thaw timing over the boreal latitudes will also reduce a major uncertainty in quantifying the global carbon balance and help to resolve an apparent missing carbon sink over land. The SMAP mission concept will utilize an L-band radar and radiometer sharing a rotating 6-meter mesh reflector antenna flying in a 680 km polar orbit with an 8-day exact ground track repeat aboard a 3-axis stabilized spacecraft to provide high-resolution and high-accuracy global maps of soil moisture and freeze/thaw state every two to three days. In addition, the SMAP project will use these surface observations with advanced modeling and data assimilation to provide estimates of deeper root-zone soil moisture and net ecosystem exchange of carbon. SMAP recently completed its Phase A Mission Concept Study Phase for NASA and transitioned into Phase B (Formulation and Detailed Design). A number of significant accomplishments occurred during this initial phase of mission development. The SMAP project held several open meetings to solicit community feedback on possible science algorithms, prepared preliminary draft Algorithm Theoretical Basis Documents (ATBDs) for each mission science product, and established a prototype algorithm testbed to enable testing and evaluation of the performance of candidate algorithms. SMAP conducted an Applications Workshop in September 2009 to coordinate with potential application users interested in the mission data. A draft Applications Plan describing the Project's planned outreach to potential applications users has been prepared and will be updated during Phase B. SMAP made a significant evaluation of the potential terrestrial radio frequency interference (RFI) source environment and established radiometer and radar flight hardware and ground processing mitigation approaches. SMAP finalized its science orbit and orbit injection approach to optimize launch mass and prepared launch and commissioning scenarios and timeline. A science data communications approach was developed to maximize available science data volume to improve science margins while maintaining moderately short data product latencies to support many potential applications using existing ground assets and with minimum impact to the flight system. SMAP developed rigid multi-body and flexible body dynamics and control models and system designs for the 6-meter rotating instrument reflector-boom assembly (RBA) and flight system to confirm pointing and control performance, and devised strategies to efficiently implement on-orbit balancing if needed. Industry partners were selected for the spin mechanism assembly (SMA) and RBA. Preliminary designs for the radar and radiometer were initiated, including constructing breadboards of key assemblies.
NASA GPM GV Science Requirements
NASA Technical Reports Server (NTRS)
Smith, E.
2003-01-01
An important scientific objective of the NASA portion of the GPM Mission is to generate quantitatively-based error characterization information along with the rainrate retrievals emanating from the GPM constellation of satellites. These data must serve four main purposes: (1) they must be of sufficient quality, uniformity, and timeliness to govern the observation weighting schemes used in the data assimilation modules of numerical weather prediction models; (2) they must extend over that portion of the globe accessible by the GPM core satellite to which the NASA GV program is focused (approx. 65-degree inclination); (3) they must have sufficient specificity to enable detection of physically-formulated microphysical and meteorological weaknesses in the standard physical level 2 rainrate algorithms to be used in the GPM Precipitation Processing System (PPS), i.e., algorithms which will have evolved from the TRMM standard physical level 2 algorithms; and (4) they must support the use of physical error modeling as a primary validation tool and as the eventual replacement of the conventional GV approach of statistically intercomparing surface rainrates from ground and satellite measurements. This approach to ground validation research represents a paradigm shift vis-à-vis the program developed for the TRMM mission, which conducted ground validation largely as a statistical intercomparison process between raingauge-derived or radar-derived rainrates and the TRMM satellite rainrate retrievals -- long after the original satellite retrievals were archived. This approach has been able to quantify averaged rainrate differences between the satellite algorithms and the ground instruments, but has not been able to explain causes of algorithm failures or produce error information directly compatible with the cost functions of data assimilation schemes. These schemes require periodic and near-realtime bias uncertainty (i.e., global space-time distributed conditional accuracy of the retrieved rainrates) and local error covariance structure (i.e., global space-time distributed error correlation information for the local 4-dimensional space-time domain -- or in simpler terms, the matrix form of precision error). This can only be accomplished by establishing a network of high-quality, heavily instrumented supersites selectively distributed at a few oceanic, continental, and coastal sites. Economics and pragmatics dictate that the network must be made up of a relatively small number of sites (6-8) created through international cooperation. This presentation will address some of the details of the methodology behind the error characterization approach, some proposed solutions for expanding site-developed error properties to regional scales, a data processing and communications concept that would enable rapid implementation of algorithm improvement by the algorithm developers, and the likely available options for developing the supersite network.
The Swiss Data Science Center on a mission to empower reproducible, traceable and reusable science
NASA Astrophysics Data System (ADS)
Schymanski, Stanislaus; Bouillet, Eric; Verscheure, Olivier
2017-04-01
Our abilities to collect, store and analyse scientific data have sky-rocketed in the past decades, but at the same time, a disconnect between data scientists, domain experts and data providers has begun to emerge. Data scientists are developing more and more powerful algorithms for data mining and analysis, while data providers are making more and more data publicly available, and yet many, if not most, discoveries are based on specific data and/or algorithms that "are available from the authors upon request". In the strong belief that scientific progress would be much faster if reproduction and re-use of such data and algorithms was made easier, the Swiss Data Science Center (SDSC) has committed to provide an open framework for the handling and tracking of scientific data and algorithms, from raw data and first principle equations to final data products and visualisations, modular simulation models and benchmark evaluation algorithms. Led jointly by EPFL and ETH Zurich, the SDSC is composed of a distributed multi-disciplinary team of data scientists and experts in select domains. The center aims to federate data providers, data and computer scientists, and subject-matter experts around a cutting-edge analytics platform offering user-friendly tooling and services to help with the adoption of Open Science, fostering research productivity and excellence. In this presentation, we will discuss our vision of a highly scalable, open yet secure, community-based platform for sharing, accessing, exploring, and analyzing scientific data in easily reproducible workflows, augmented by automated provenance and impact tracking, knowledge graphs, fine-grained access rights and digital rights management, and a variety of domain-specific software tools. For maximum interoperability, transparency and ease of use, we plan to utilize notebook interfaces wherever possible, such as Apache Zeppelin and Jupyter. Feedback and suggestions from the audience will be gratefully considered.
EDITORIAL: Industrial Process Tomography
NASA Astrophysics Data System (ADS)
Anton Johansen, Geir; Wang, Mi
2008-09-01
There has been tremendous development within measurement science and technology over the past couple of decades. New sensor technologies and compact versatile signal recovery electronics are continuously expanding the limits of what can be measured and the accuracy with which this can be done. Miniaturization of sensors and the use of nanotechnology push these limits further. Also, thanks to powerful and cost-effective computer systems, sophisticated measurement and reconstruction algorithms previously only accessible in advanced laboratories are now available for in situ online measurement systems. The process industries increasingly require more process-related information, motivated by key issues such as improved process control, process utilization and process yields, ultimately driven by cost-effectiveness, quality assurance, environmental and safety demands. Industrial process tomography methods have taken advantage of the general progress in measurement science, and aim at providing more information, both quantitatively and qualitatively, on multiphase systems and their dynamics. The typical approach for such systems has been to carry out one local or bulk measurement and assume that this is representative of the whole system. In some cases, this is sufficient. However, there are many complex systems where the component distribution varies continuously and often unpredictably in space and time. The foundation of industrial tomography is to conduct several measurements around the periphery of a multiphase process, and use these measurements to unravel the cross-sectional distribution of the process components in time and space. This information is used in the design and optimization of industrial processes and process equipment, and also to improve the accuracy of multiphase system measurements in general. In this issue we are proud to present a selection of the 145 papers presented at the 5th World Congress on Industrial Process Tomography in Bergen, September 2007. Interestingly, x-ray technologies, one of the first imaging modalities available, keep pushing the limits of both spatial and temporal measurement resolution; experimental results of less than 100 nm and several thousand frames/s are reported, respectively. Important progress is demonstrated in research and development on sensor technologies and algorithms for data processing and image reconstruction, including unconventional sensor design and adaptation of the sensors to the application in question. The number of applications to which tomographic methods are applied is steadily increasing, and results obtained in a representative selection of applications are included. As guest editors we would like to express our appreciation and thanks to all authors who have contributed and to the IOP staff for excellent collaboration in the process of finalizing this special feature.
Probabilistic numerics and uncertainty in computations
Hennig, Philipp; Osborne, Michael A.; Girolami, Mark
2015-01-01
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321
Probabilistic numerics and uncertainty in computations.
Hennig, Philipp; Osborne, Michael A; Girolami, Mark
2015-07-08
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
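A toy illustration of the interface such methods share, namely a numerical routine that returns an uncertainty alongside its result. Plain Monte Carlo integration with its standard error is used here purely for concreteness; it is not one of the probabilistic numerical methods advocated in the paper.

```python
import numpy as np

def mc_integrate(f, a, b, n=10_000, rng=None):
    """Monte Carlo estimate of the integral of f over [a, b], returned together
    with a standard error that quantifies the numerical uncertainty."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(a, b, n)
    vals = (b - a) * f(x)
    estimate = vals.mean()
    std_err = vals.std(ddof=1) / np.sqrt(n)
    return estimate, std_err

est, err = mc_integrate(np.sin, 0.0, np.pi)   # exact value is 2
print(f"integral ~ {est:.4f} +/- {err:.4f}")
```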
A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models
Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung
2015-01-01
Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection-based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed in [1]. PMID:26091237
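For orientation, here is a minimal generic PSO loop; it is not the ProjPSO variant above, which adds a projection step to keep designs feasible under mixture constraints. The inertia and acceleration coefficients are textbook defaults, and the quadratic test objective is an arbitrary example.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization: each velocity blends inertia, attraction
    to the particle's own best position, and attraction to the swarm's best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda p: np.sum((p - 1.0) ** 2), dim=3)
print(best_x, best_f)   # should be close to [1, 1, 1] and 0
```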
Telecommunications end-to-end systems monitoring on TOPEX/Poseidon: Tools and techniques
NASA Technical Reports Server (NTRS)
Calanche, Bruno J.
1994-01-01
The TOPEX/Poseidon Project Satellite Performance Analysis Team's (SPAT) roles and responsibilities have grown to include functions that are typically performed by other teams on JPL Flight Projects. In particular, SPAT Telecommunication's role has expanded beyond the nominal function of monitoring, assessing, characterizing, and trending the spacecraft (S/C) RF/Telecom subsystem to one of End-to-End Information Systems (EEIS) monitoring. This has been accomplished by taking advantage of the spacecraft and ground data system structures and protocols. By processing both the ground-generated CRC flags on received spacecraft telemetry minor frames and the NASCOM block poly error flags, bit error rates (BER) for each link segment can be determined. This provides the capability to characterize the separate link segments, determine science data recovery, and perform fault/anomaly detection and isolation. By monitoring and managing the links, TOPEX has successfully recovered approximately 99.9 percent of the science data with an integrity (BER) of better than 1 x 10(exp -8). This paper presents the algorithms used to process the above flags and the techniques used for EEIS monitoring.
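A rough sketch of how a per-segment bit error rate can be approximated from frame-level error flags, assuming each flagged frame contains at least one bit error, bit errors are independent, and frames carry a fixed number of bits; the frame size and flag counts are placeholders, not TOPEX values.

```python
import numpy as np

def ber_from_frame_flags(error_flags, bits_per_frame):
    """Approximate bit error rate from per-frame error flags (e.g. CRC or NASCOM
    block poly error flags). Assumes independent bit errors, so the frame error
    probability p satisfies p = 1 - (1 - BER)**bits_per_frame."""
    error_flags = np.asarray(error_flags, dtype=bool)
    p_frame = error_flags.mean()
    return 1.0 - (1.0 - p_frame) ** (1.0 / bits_per_frame)

# Placeholder numbers: one million frames of 8192 bits, three flagged on this segment.
flags_space_ground = np.zeros(1_000_000, dtype=bool)
flags_space_ground[[10, 20_000, 750_000]] = True
print(f"space-ground BER ~ {ber_from_frame_flags(flags_space_ground, 8192):.2e}")
```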
Artificial Muscles Based on Electroactive Polymers as an Enabling Tool in Biomimetics
NASA Technical Reports Server (NTRS)
Bar-Cohen, Y.
2007-01-01
Evolution has resolved many of nature's challenges leading to working and lasting solutions that employ principles of physics, chemistry, mechanical engineering, materials science, and many other fields of science and engineering. Nature's inventions have always inspired human achievements leading to effective materials, structures, tools, mechanisms, processes, algorithms, methods, systems, and many other benefits. Some of the technologies that have emerged include artificial intelligence, artificial vision, and artificial muscles, where the latter is the moniker for electroactive polymers (EAPs). To take advantage of these materials and make them practical actuators, efforts are made worldwide to develop capabilities that are critical to the field infrastructure. Researchers are developing analytical models and a comprehensive understanding of EAP materials' response mechanisms, as well as effective processing and characterization techniques. The field is still in its emerging state and robust materials are still not readily available; however, in recent years, significant progress has been made and commercial products have already started to appear. In the current paper, the state-of-the-art and challenges to artificial muscles as well as their potential application to biomimetic mechanisms and devices are described and discussed.
SRT Status and Plans for Version-7
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Kouvaris, Louis
2015-01-01
The AIRS Science Team Version-6 retrieval algorithm is currently producing level-3 Climate Data Records (CDRs) from AIRS that have been proven useful to scientists in understanding climate processes. CDRs are gridded level-3 products which include all cases passing AIRS Climate QC. SRT has made significant further improvements to AIRS Version-6. Research is continuing at SRT toward the development of AIRS Version-7. At the last Science Team Meeting, we described results using SRT AIRS Version-6.19. SRT Version-6.19 is now an official build at JPL called 6.2. SRT's latest version is AIRS Version-6.22. We have also adapted AIRS Version-6.22 to run with CrIS/ATMS. AIRS Version-6.22 and CrIS Version-6.22 both now run on JPL computers, but are not yet official builds. The main reason for finalizing Version-7, and using it in the relatively near future for future processing and the reprocessing of old AIRS data, is to produce even better CDRs for use by climate scientists. For this reason all results shown in this talk use only AIRS Climate QC.
NASA Astrophysics Data System (ADS)
Voter, Arthur
Many important materials processes take place on time scales that far exceed the roughly one microsecond accessible to molecular dynamics simulation. Typically, this long-time evolution is characterized by a succession of thermally activated infrequent events involving defects in the material. In the accelerated molecular dynamics (AMD) methodology, known characteristics of infrequent-event systems are exploited to make reactive events take place more frequently, in a dynamically correct way. For certain processes, this approach has been remarkably successful, offering a view of complex dynamical evolution on time scales of microseconds, milliseconds, and sometimes beyond. We have recently made advances in all three of the basic AMD methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics (TAD)), exploiting both algorithmic advances and novel parallelization approaches. I will describe these advances, present some examples of our latest results, and discuss what should be possible when exascale computing arrives in roughly five years. Funded by the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, and by the Los Alamos Laboratory Directed Research and Development program.
The LSST Data Mining Research Agenda
NASA Astrophysics Data System (ADS)
Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.
2008-12-01
We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.
NASA Astrophysics Data System (ADS)
Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur
2017-12-01
Symmetric-key cryptographic algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. A symmetric stream cipher is an algorithm that XORs the plaintext with a key. To improve the security of the symmetric stream cipher, we introduce an improvement using a Triple Transposition Key, developed from the Transposition Cipher, and apply the Base64 algorithm as the final step of the encryption process. Experiments show that the resulting ciphertext is sufficiently random.
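As a rough illustration of the scheme's moving parts, here is a minimal sketch: a keystream derived by applying three successive transpositions to the key material (a stand-in for the paper's Triple Transposition Key, whose exact construction is not given in the abstract), XORed with the plaintext, with Base64 as the finishing step. The permutations and key are placeholder assumptions, and a repeating-key XOR is of course not secure on its own.

```python
import base64
from itertools import cycle

def transpose(data: bytes, perm: list) -> bytes:
    # Block transposition: reorder each perm-sized block of bytes.
    n = len(perm)
    padded = data + b"\x00" * (-len(data) % n)
    return bytes(padded[i + perm[j]] for i in range(0, len(padded), n)
                 for j in range(n))

def triple_transposition_key(key: bytes) -> bytes:
    # Three successive transpositions of the key material (illustrative
    # stand-in for the paper's Triple Transposition Key construction).
    for perm in ([2, 0, 3, 1], [1, 3, 0, 2], [3, 1, 2, 0]):
        key = transpose(key, perm)
    return key

def encrypt(plaintext: bytes, key: bytes) -> str:
    stream = cycle(triple_transposition_key(key))
    xored = bytes(p ^ k for p, k in zip(plaintext, stream))
    return base64.b64encode(xored).decode("ascii")   # Base64 finishing step

def decrypt(ciphertext: str, key: bytes) -> bytes:
    stream = cycle(triple_transposition_key(key))
    return bytes(c ^ k for c, k in zip(base64.b64decode(ciphertext), stream))

token = encrypt(b"attack at dawn", b"s3cr3tk3y0ab")
assert decrypt(token, b"s3cr3tk3y0ab") == b"attack at dawn"
```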
2009-03-01
policy, elliptic curve public key cryptography using the 256-bit prime modulus elliptic curve as specified in FIPS-186-2 and SHA-256 are appropriate for... publications/fips/fips186-2/fips186-2-change1.pdf ...Hashing via the Secure Hash Algorithm (using SHA-256 and... lithography and processing techniques. Field programmable gate arrays (FPGAs) are a chip design of interest. These devices are extensively used in
NASA Astrophysics Data System (ADS)
Rojali, Siahaan, Ida Sri Rejeki; Soewito, Benfano
2017-08-01
Steganography is the art and science of hiding secret messages so that their existence cannot be detected by the human senses. Data concealment uses the Multi Pixel Value Differencing (MPVD) algorithm, which exploits the difference between adjacent pixels. The development used six interval tables. The objective of this algorithm is to increase message capacity while maintaining data security.
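The pixel-value-differencing principle underneath MPVD can be sketched briefly: larger differences between adjacent pixels (edges) tolerate larger hidden payloads. The code below is a simplified single-table PVD sketch, not the paper's six-table MPVD; the interval table is the classic Wu-Chen layout and boundary fall-off checks are omitted.

```python
from math import log2

# Classic interval table; MPVD rotates among six such tables, but one
# suffices for a sketch.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bitstream):
    """Embed leading bits of `bitstream` into one pixel pair by moving the
    pair's difference within its interval. Returns the stego pair and the
    unused bits."""
    d = p2 - p1
    lo, hi = next((l, h) for l, h in RANGES if l <= abs(d) <= h)
    n_bits = int(log2(hi - lo + 1))              # capacity of this pair
    value = lo + int(bitstream[:n_bits], 2)      # hide n_bits in the difference
    new_d = value if d >= 0 else -value
    delta = new_d - d                            # split the change across both
    return p1 - delta // 2, p2 + (delta - delta // 2), bitstream[n_bits:]

def extract_pair(p1, p2):
    """Recover the bits hidden in one stego pixel pair."""
    d = abs(p2 - p1)
    lo, hi = next((l, h) for l, h in RANGES if l <= d <= h)
    n_bits = int(log2(hi - lo + 1))
    return format(d - lo, f"0{n_bits}b")

q1, q2, rest = embed_pair(100, 120, "101101110")
assert extract_pair(q1, q2) == "1011"            # 4 bits fit in the (16, 31) bin
```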
NASA Technical Reports Server (NTRS)
Orlov, I. G.
1979-01-01
The BASIC algorithmic language is described, and a guide is presented for the programmer using the language interpreter. BASIC is a high-level, problem-oriented programming language intended for the solution of computational and engineering problems.
A Low-Tech, Hands-On Approach To Teaching Sorting Algorithms to Working Students.
ERIC Educational Resources Information Center
Dios, R.; Geller, J.
1998-01-01
Focuses on identifying the educational effects of "activity oriented" instructional techniques. Examines which instructional methods produce enhanced learning and comprehension. Discusses the problem of learning "sorting algorithms," a major topic in every Computer Science curriculum. Presents a low-tech, hands-on teaching method for sorting…
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
Overview of the Joint NASA ISRO Imaging Spectroscopy Science Campaign in India
NASA Astrophysics Data System (ADS)
Green, R. O.; Bhattacharya, B. K.; Eastwood, M. L.; Saxena, M.; Thompson, D. R.; Sadasivarao, B.
2016-12-01
In the period from December 2015 to March 2016, the Airborne Visible-Infrared Imaging Spectrometer Next Generation (AVIRIS-NG) was deployed to India for a joint NASA ISRO science campaign. This campaign was conceived to provide first-of-their-kind, high-fidelity imaging spectroscopy measurements of a diverse set of Asian environments for science and applications research. During this campaign, measurements were acquired for 57 high-priority sites with objectives spanning: snow/ice of the Himalaya; coastal habitats and water quality; mangrove forests; soils; dry and humid forests; hydrocarbon alteration; mineralogy; agriculture; urban materials; atmospheric properties; and calibration/validation. Measurements from the campaign have been processed to at-instrument spectral radiance and atmospherically corrected surface reflectance. New AVIRIS-NG algorithms for retrieval of vegetation canopy water and for estimation of the fractions of photosynthetic and non-photosynthetic vegetation have been tested and evaluated on these measurements. An in-flight calibration validation experiment was performed on the 11th of December 2015 in Hyderabad to assess the spectral and radiometric calibration of AVIRIS-NG in the flight environment. We present an overview of the campaign, calibration and validation results, and an initial science analysis of a subset of these unique and diverse data sets.
Application of a multiscale maximum entropy image restoration algorithm to HXMT observations
NASA Astrophysics Data System (ADS)
Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi
2016-08-01
This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), a collimated scanning X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the mode-mixing problem existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is suited to the deconvolution task of HXMT for diffuse source detection, and that the improved method suppresses noise and improves the correlation and signal-to-noise ratio, proving itself the better algorithm for image restoration. Through one all-sky survey, HXMT could reach the capacity to detect a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014).
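The multiresolution-support idea at the heart of MSME can be sketched separately from the maximum-entropy iteration itself: decompose the image into wavelet scales, keep only coefficients that are significant against the per-scale noise level, and reconstruct. Below is a minimal illustration using PyWavelets; the wavelet choice, threshold, and MAD noise estimate are illustrative assumptions, and MSME proper folds this support into an entropy-maximizing deconvolution loop rather than applying it directly.

```python
import numpy as np
import pywt

def multiresolution_support(image, wavelet="haar", levels=3, k=3.0):
    """Keep only wavelet coefficients significant at each scale
    (|w| > k * sigma_scale); this is the noise-control step behind MSME,
    stripped of the maximum-entropy iteration."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]                    # coarsest approximation kept as-is
    for detail in coeffs[1:]:            # finer scales: (horiz, vert, diag)
        cleaned = []
        for band in detail:
            # Crude per-band robust noise estimate via the MAD.
            sigma = np.median(np.abs(band)) / 0.6745
            cleaned.append(np.where(np.abs(band) > k * sigma, band, 0.0))
        out.append(tuple(cleaned))
    return pywt.waverec2(out, wavelet)

noisy = np.random.default_rng(1).normal(0.0, 1.0, (64, 64))
noisy[28:36, 28:36] += 8.0               # a faint extended "diffuse source"
restored = multiresolution_support(noisy)
```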
Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm
NASA Astrophysics Data System (ADS)
Foroutan, M.; Zimbelman, J. R.
2017-09-01
The increased application of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images of Earth as well as High Resolution Imaging Science Experiment (HiRISE) images of Mars, makes it necessary to develop automation techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated imaging in environmental management studies, such as studies of climate-related change, along with increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), for the semi-automatic extraction of linear features with small footprints in satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (QuickBird, WorldView and HiRISE) to facilitate and speed up image analysis and improve the accuracy of the results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
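A SOM is compact enough to sketch in full. The toy implementation below shows the two ingredients the abstract names, competitive learning (each sample updates its best-matching grid node) and a shrinking neighborhood; the grid size, decay schedules, and random feature vectors are illustrative choices, not the parameters of this study.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Self-Organizing Map: competitive learning on a 2-D grid.
    `data` is (n_samples, n_features); returns the trained weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        # Best-matching unit: grid node whose weights are closest to x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        # Learning rate and neighborhood radius both decay over training.
        frac = step / n_steps
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        theta = np.exp(-dist2 / (2.0 * sigma**2))[..., None]
        weights += lr * theta * (x - weights)   # pull neighborhood toward x
    return weights

# e.g. per-pixel feature vectors (brightness, texture, ...) from an image
pixels = np.random.default_rng(1).random((500, 3))
som = train_som(pixels)
```

After training, mapping each pixel to its best-matching unit yields an unsupervised segmentation from which linear-trending features can be picked out, which is the semi-automatic step the study describes.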
NASA Astrophysics Data System (ADS)
Erickson, T. A.; Granger, B.; Grout, J.; Corlay, S.
2017-12-01
The volume of Earth science data gathered from satellites, aircraft, drones, and field instruments continues to increase. For many scientific questions in the Earth sciences, managing this large volume of data is a barrier to progress, as it is difficult to explore and analyze large volumes of data using the traditional paradigm of downloading datasets to a local computer for analysis. Furthermore, methods are needed for communicating Earth science algorithms that operate on large datasets in an easily understandable and reproducible way. Here we describe a system for developing, interacting with, and sharing well-documented Earth science algorithms that combines existing software components. Jupyter Notebook: an open-source, web-based environment that supports documents combining code and computational results with narrative text, mathematics, images, and other media; these notebooks provide an environment for interactive exploration of data and development of well-documented algorithms. Jupyter Widgets / ipyleaflet: an architecture for creating interactive user-interface controls (such as sliders, text boxes, etc.) in Jupyter Notebooks that communicate with Python code; this architecture includes a default set of UI controls as well as APIs for building custom controls, and the ipyleaflet project is one example, offering a custom interactive map control that allows a user to display and manipulate geographic data within the Jupyter Notebook. Google Earth Engine: a cloud-based geospatial analysis platform that provides access to petabytes of Earth science data via a Python API. The combination of Jupyter Notebooks, Jupyter Widgets, ipyleaflet, and Google Earth Engine makes it possible to explore and analyze massive Earth science datasets via a web browser, in an environment suitable for interactive exploration, teaching, and sharing. Using these environments can make Earth science analyses easier to understand and reproduce, which may increase the rate of scientific discoveries and the transition of discoveries into real-world impacts.
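A minimal notebook cell showing how these pieces connect is sketched below. The widget-to-map link uses only standard ipyleaflet/ipywidgets calls; the Earth Engine portion is left as a comment because it requires credentials, and the exact tile-URL call varies with the earthengine-api version.

```python
# Run inside a Jupyter Notebook cell.
from ipyleaflet import Map, basemaps
import ipywidgets as widgets

m = Map(center=(20.0, 0.0), zoom=3, basemap=basemaps.OpenStreetMap.Mapnik)

# A standard Jupyter Widgets slider, linked browser-side to the map's zoom
# trait so dragging the slider re-zooms the map with no server round trip.
zoom = widgets.IntSlider(description="Zoom", min=1, max=12, value=3)
widgets.jslink((zoom, "value"), (m, "zoom"))

# An Earth Engine layer would be added as a tile layer whose URL comes from
# the Earth Engine Python API (requires EE credentials), e.g.:
#   import ee; ee.Initialize()
#   from ipyleaflet import TileLayer
#   mapid = ee.Image("CGIAR/SRTM90_V4").getMapId({"min": 0, "max": 3000})
#   m.add_layer(TileLayer(url=mapid["tile_fetcher"].url_format))

widgets.VBox([zoom, m])
```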
NASA Technical Reports Server (NTRS)
Brownston, Lee; Jenkins, Jon M.
2015-01-01
The Kepler Mission was launched in 2009 as NASA's first mission capable of finding Earth-size planets in the habitable zones of Sun-like stars. Its telescope consists of a 1.4-m primary mirror and a 0.95-m aperture. The 42 charge-coupled devices in its focal plane are read out every half hour, compressed, and then downlinked monthly. After four years, the second of four reaction wheels failed, ending the original mission. Back on Earth, the Science Operations Center developed the Science Pipeline to analyze the approximately 200,000 target stars in Kepler's field of view, looking for evidence of periodic dimming suggesting that one or more planets had crossed the face of the host star. The Pipeline comprises several steps, from pixel-level calibration, through noise and artifact removal, to detection of transit-like signals and the construction of a suite of diagnostic tests to guard against false positives. The Kepler Science Pipeline consists of a pipeline infrastructure written in the Java programming language, which marshals data input to and output from MATLAB applications that are executed as external processes. The pipeline modules, which underwent continuous development and refinement even after data started arriving, employ several analytic techniques, many developed for the Kepler Project. Because of the large number of targets, the large amount of data per target, and the complexity of the pipeline algorithms, the processing demands are daunting. Some pipeline modules require days to weeks to process all of their targets, even when run on 128 nodes of NASA's Pleiades supercomputer. The software developers are still seeking ways to increase the throughput. To date, the Kepler project has discovered more than 4000 planetary candidates, of which more than 1000 have been independently confirmed or validated as exoplanets. Funding for this mission is provided by NASA's Science Mission Directorate.
Radar Array Processing of Experimental Data Via the Scan-MUSIC Algorithm
2004-06-01
Canh Ly, Sensors and Electron Devices Directorate, ARL. Report ARL-TR-3135, June 2004.