Sample records for shared commodity hardware

  1. Hardware Testing and System Evaluation: Procedures to Evaluate Commodity Hardware for Production Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goebel, J

    2004-02-27

    Without stable hardware any program will fail. The frustration and expense of supporting bad hardware can drain an organization, delay progress, and frustrate everyone involved. At the Stanford Linear Accelerator Center (SLAC), we have created a testing method that helps our group, SLAC Computer Services (SCS), weed out potentially bad hardware and purchase the best hardware at the best possible cost. Commodity hardware changes often, so new evaluations happen each time we purchase systems, and minor re-evaluations happen for revised systems for our clusters, about twice a year. This general framework helps SCS perform correct, efficient evaluations. This article outlines SCS's computer testing methods and our system acceptance criteria. We have expanded the basic ideas to other evaluations such as storage, and we think the methods outlined in this article have helped us choose hardware that is much more stable and supportable than our previous purchases. We have found that commodity hardware ranges widely in quality, so a systematic method and tools for hardware evaluation were necessary. This article is based on one instance of a hardware purchase, but the guidelines apply to the general problem of purchasing commodity computer systems for production computational work.
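
    The article describes its testing framework in prose only. As a hedged illustration of the general shape of such an acceptance harness (all hostnames, check commands, and thresholds below are invented, not SCS's actual procedure), a minimal sketch in Python:

    ```python
    # Hypothetical acceptance harness in the spirit of the SCS procedure:
    # run a battery of stress checks on each candidate node and reject any
    # unit that fails. Check commands and hosts are illustrative only.
    import subprocess

    ACCEPTANCE_CHECKS = [
        # assumes a memtester binary is installed on the node under test
        ("memory stress", ["memtester", "1024M", "1"]),
        ("disk write throughput", ["dd", "if=/dev/zero", "of=/tmp/burnin.dat",
                                   "bs=1M", "count=1024", "oflag=direct"]),
    ]

    def evaluate_node(hostname: str) -> bool:
        """Return True only if every check exits cleanly on the node."""
        for name, cmd in ACCEPTANCE_CHECKS:
            result = subprocess.run(["ssh", hostname] + cmd, capture_output=True)
            if result.returncode != 0:
                print(f"{hostname}: FAILED {name}")
                return False
        print(f"{hostname}: accepted")
        return True

    if __name__ == "__main__":
        candidates = ["node01", "node02"]   # hypothetical hostnames
        accepted = [h for h in candidates if evaluate_node(h)]
    ```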

  2. 78 FR 53509 - Self-Regulatory Organizations; BATS Exchange, Inc.; Order Approving a Proposed Rule Change, as...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... for the following securities: Index-Linked Exchangeable Notes; Equity Gold Shares; Trust Certificates; Commodity-Based Trust Shares; Currency Trust Shares; Commodity Index Trust Shares; Commodity Futures Trust Shares; Partnership Units; Trust Units; Managed Trust Securities; and Currency Warrants (together with...

  3. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
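
    NRM itself is a Java framework built on JPPF and its source is not reproduced here. The sketch below is only a minimal Python analogue of the paper's core idea: a parallelizable job subdivided into independent tasks and farmed out to all cores of a machine (the task function is an invented stand-in):

    ```python
    # Minimal analogue (not NRM) of the parallelizable-task model: a job is
    # subdivided into independent tasks and distributed to worker processes.
    from multiprocessing import Pool

    def independent_task(task_id: int) -> float:
        """Invented stand-in for one unit of work, e.g. tracing one ray."""
        return sum(i * i for i in range(task_id * 1000)) ** 0.5

    if __name__ == "__main__":
        tasks = range(1, 65)              # 64 independent tasks
        with Pool() as pool:              # one worker per available core
            results = pool.map(independent_task, tasks)
        print(f"completed {len(results)} tasks")
    ```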

  4. 78 FR 41462 - Self-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing of a Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... Asset. The term ``Currency,'' as used in the proposed rule, means one or more currencies, or currency...; Commodity-Based Trust Shares; Currency Trust Shares; Commodity Index Trust Shares; Commodity Futures Trust Shares; Partnership Units; Trust Units; Managed Trust Securities; and Currency Warrants. Specifically...

  5. Disk storage at CERN

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Cano, E.; Chan, B.; Espinal, X.; Fiorot, A.; González Labrador, H.; Iven, J.; Lamanna, M.; Lo Presti, G.; Mościcki, JT; Peters, AJ; Ponce, S.; Rousseau, H.; van der Ster, D.

    2015-12-01

    CERN IT DSS operates the main storage resources for data taking and physics analysis, mainly via three systems: AFS, CASTOR and EOS. The total usable space available on disk for users is about 100 PB (with relative ratios 1:20:120). EOS actively uses the two CERN Tier-0 centres (Meyrin and Wigner) with a 50:50 ratio. IT DSS also provides sizeable on-demand resources for IT services, most notably OpenStack and NFS-based clients: these are provided by a Ceph infrastructure (3 PB) and a few proprietary servers (NetApp). We describe our operational experience and recent changes to these systems, with special emphasis on the present usage for LHC data taking and the convergence to commodity hardware (nodes with 200 TB each, with optional SSDs) shared across all services. We also describe our experience in coupling commodity and home-grown solutions (e.g. CERNBox integration in EOS; Ceph disk pools for AFS, CASTOR and NFS) and, finally, the future evolution of these systems for WLCG and beyond.
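
    As a back-of-the-envelope reading of the quoted figures (our arithmetic, not official CERN numbers), splitting the ~100 PB of usable disk across the stated 1:20:120 ratio gives:

    ```python
    # Rough split of ~100 PB across AFS, CASTOR and EOS from the stated
    # 1:20:120 relative ratios; an illustration of the abstract's figures.
    TOTAL_PB = 100
    ratios = {"AFS": 1, "CASTOR": 20, "EOS": 120}
    parts = sum(ratios.values())
    for system, share in ratios.items():
        print(f"{system}: {TOTAL_PB * share / parts:5.1f} PB")
    # AFS: 0.7 PB, CASTOR: 14.2 PB, EOS: 85.1 PB
    ```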

  6. RabbitQR: fast and flexible big data processing at LSST data rates using existing, shared-use hardware

    NASA Astrophysics Data System (ADS)

    Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi

    2016-08-01

    Processing astronomical data to science readiness was and remains a challenge, in particular for multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users, as in a typical university setup. Our approach to this challenge is a flexible framework combining the best of both high-performance (large number of nodes, internal communication) and high-throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool for the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests showing that, today and using existing, commodity shared-use hardware, we can process data with throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
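
    The framework is built on AMQP, the protocol behind RabbitMQ (which the "RabbitQR" name nods to). A minimal sketch of what one worker's consume loop might look like in Python with the pika AMQP client follows; the queue name and task payload are invented, not RabbitQR's actual interface:

    ```python
    # Hypothetical AMQP worker loop using the pika client. The manager/server
    # side would publish task messages to the same queue.
    import pika

    def handle_task(channel, method, properties, body):
        print(f"processing task: {body.decode()}")     # stand-in for real work
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="reduce_tasks", durable=True)
    channel.basic_qos(prefetch_count=1)    # hand each worker one task at a time
    channel.basic_consume(queue="reduce_tasks", on_message_callback=handle_task)
    channel.start_consuming()
    ```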

  7. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data-processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.
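
    The data-level parallelism referred to here can be illustrated with a toy comparison (our example, not the paper's benchmarks): the same arithmetic applied element-at-a-time versus across a whole array at once, which is the access pattern SIMD hardware exploits:

    ```python
    # Toy illustration of data-level parallelism: scalar loop vs. whole-array
    # (SIMD-friendly) evaluation of the same expression. Timings will vary.
    import time
    import numpy as np

    data = np.random.rand(10_000_000).astype(np.float32)

    t0 = time.perf_counter()
    out_scalar = [x * x + 1.0 for x in data[:100_000]]   # element at a time
    t1 = time.perf_counter()
    out_vector = data * data + 1.0                        # whole array at once
    t2 = time.perf_counter()

    print(f"scalar loop, 100k elements: {t1 - t0:.3f} s")
    print(f"vectorized, 10M elements:   {t2 - t1:.3f} s")
    ```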

  8. 77 FR 6833 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-09

    ... Reference Asset. The term ``Currency,'' as used in the proposed rule, means one or more currencies, or.... Description Proposed Rule 5711(e)(iii) provides that the term ``Currency Trust Shares'' as used in these...-Based Trust Shares; Currency Trust Shares; Commodity Index Trust Shares; Commodity Futures Trust Shares...

  9. 15 CFR Supplement No. 6 to Part 742 - Guidelines for Submitting Review Requests for Encryption Items

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... brochures or other documentation or specifications related to the technology, commodity or software... commodity or software, provide the following information: (1) Description of all the symmetric and... is provided by third-party hardware or software encryption components (if any). Identify the...

  10. 75 FR 47652 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of a Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-06

    ... Listing and Trading of WisdomTree Dreyfus Commodity Currency Fund under NYSE Arca Equities Rule 8.600... and trade the shares (``Shares'') of the WisdomTree Dreyfus Commodity Currency Fund (``Fund'') under... exchange traded fund. The Shares will...

  11. 17 CFR 160.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ....12 Limits on sharing account number information for marketing purposes. (a) General prohibition on... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Limits on sharing account number information for marketing purposes. 160.12 Section 160.12 Commodity and Securities Exchanges...

  12. 17 CFR 248.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... account number information for marketing purposes. (a) General prohibition on disclosure of account... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Limits on sharing account number information for marketing purposes. 248.12 Section 248.12 Commodity and Securities Exchanges...

  13. Millisecond precision psychological research in a world of commodity computers: new hardware, new problems?

    PubMed

    Plant, Richard R; Turner, Garry

    2009-08-01

    Since the publication of Plant, Hammond, and Turner (2004), which highlighted a pressing need for researchers to pay more attention to sources of error in computer-based experiments, the landscape has undoubtedly changed, but not necessarily for the better. Readily available hardware has improved in terms of raw speed; multi-core processors abound; graphics cards now have hundreds of megabytes of RAM; main memory is measured in gigabytes; drive space is measured in terabytes; ever larger thin-film-transistor displays capable of single-digit response times, together with newer Digital Light Processing multimedia projectors, enable much greater graphic complexity; and new 64-bit operating systems, such as Microsoft Vista, are now commonplace. However, have millisecond-accurate presentation and response timing improved, and will they ever be available in commodity computers and peripherals? In the present article, we used a Black Box ToolKit to measure the variability in the timing characteristics of hardware commonly used in psychological research.
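
    The study's measurements rely on external Black Box ToolKit hardware, which software alone cannot replace. Purely as a software-side sketch of the kind of variability at issue (our example, not the study's method), one can at least observe scheduler-induced jitter around a nominal frame interval:

    ```python
    # Software-only look at timing jitter around a nominal 60 Hz frame
    # interval; this exposes OS scheduler variability but cannot verify
    # actual display or response timing the way external hardware can.
    import time
    import statistics

    target_ms = 1000.0 / 60.0           # nominal one-frame interval, ~16.7 ms
    samples = []
    for _ in range(200):
        t0 = time.perf_counter()
        time.sleep(target_ms / 1000.0)
        samples.append((time.perf_counter() - t0) * 1000.0)

    print(f"mean {statistics.mean(samples):.2f} ms, "
          f"sd {statistics.stdev(samples):.2f} ms, "
          f"max {max(samples):.2f} ms")
    ```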

  14. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HSU, CHUNG-HSING; FENG, WU-CHUN; CHING, AVERY

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and the cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, we present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than our reference SMP platform.
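
    The quoted figures imply the following performance-per-watt arithmetic (our calculation from the abstract's numbers):

    ```python
    # Performance-power ratio implied by the quoted 14 Gflops at 185 W.
    gflops, watts = 14.0, 185.0
    ratio = gflops * 1000.0 / watts
    print(f"{ratio:.1f} Mflops/W")        # ~75.7 Mflops/W
    # "Over 300% better" than the reference SMP platform would put that
    # reference at roughly a quarter of this, i.e. around 19 Mflops/W.
    ```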

  15. A dataset on tail risk of commodities markets.

    PubMed

    Powell, Robert J; Vo, Duc H; Pham, Thach N; Singh, Abhay K

    2017-12-01

    This article contains the datasets related to the research article "The long and short of commodity tails and their relationship to Asian equity markets" (Powell et al., 2017) [1]. The datasets contain the daily prices (and price movements) of 24 different commodities decomposed from the S&P GSCI index and the daily prices (and price movements) of three share market indices covering the World, Asia, and South East Asia for the period 2004-2015. The dataset is then divided into annual periods, showing the worst 5% of price movements for each year. The datasets are convenient for examining the tail risk of different commodities, as measured by Conditional Value at Risk (CVaR), as well as its changes across periods. The datasets can also be used to investigate the association between commodity markets and share markets.
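
    The tail-risk measure used with such a dataset can be sketched as follows (random stand-in data, not the article's dataset): CVaR at the 5% level is the mean of the worst 5% of daily price movements.

    ```python
    # CVaR sketch: the mean of the worst 5% of daily returns. The returns
    # below are random stand-ins for a commodity's daily price movements.
    import numpy as np

    rng = np.random.default_rng(42)
    returns = rng.normal(0.0, 0.02, size=250)         # ~one year of daily returns
    alpha = 0.05
    var_threshold = np.quantile(returns, alpha)       # Value at Risk (5% quantile)
    cvar = returns[returns <= var_threshold].mean()   # mean of the worst 5%
    print(f"VaR(5%) = {var_threshold:.4f}, CVaR(5%) = {cvar:.4f}")
    ```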

  16. CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment

    PubMed Central

    Manavski, Svetlin A; Valle, Giorgio

    2008-01-01

    Background Searching for similarities in protein and DNA databases has become a routine procedure in Molecular Biology. The Smith-Waterman algorithm has been available for more than 25 years. It is based on a dynamic programming approach that explores all the possible alignments between two sequences; as a result it returns the optimal local alignment. Unfortunately, the computational cost is very high, requiring a number of operations proportional to the product of the length of two sequences. Furthermore, the exponential growth of protein and DNA databases makes the Smith-Waterman algorithm unrealistic for searching similarities in large sets of sequences. For these reasons heuristic approaches such as those implemented in FASTA and BLAST tend to be preferred, allowing faster execution times at the cost of reduced sensitivity. The main motivation of our work is to exploit the huge computational power of commonly available graphic cards, to develop high performance solutions for sequence alignment. Results In this paper we present what we believe is the fastest solution of the exact Smith-Waterman algorithm running on commodity hardware. It is implemented in the recently released CUDA programming environment by NVidia. CUDA allows direct access to the hardware primitives of the last-generation Graphics Processing Units (GPU) G80. Speeds of more than 3.5 GCUPS (Giga Cell Updates Per Second) are achieved on a workstation running two GeForce 8800 GTX. Exhaustive tests have been done to compare our implementation to SSEARCH and BLAST, running on a 3 GHz Intel Pentium IV processor. Our solution was also compared to a recently published GPU implementation and to a Single Instruction Multiple Data (SIMD) solution. These tests show that our implementation performs from 2 to 30 times faster than any other previous attempt available on commodity hardware. Conclusions The results show that graphic cards are now sufficiently advanced to be used as efficient hardware accelerators for sequence alignment. Their performance is better than any alternative available on commodity hardware platforms. The solution presented in this paper allows large scale alignments to be performed at low cost, using the exact Smith-Waterman algorithm instead of the largely adopted heuristic approaches. PMID:18387198
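
    For reference, the dynamic program that the GPU implementation accelerates can be written in a few lines (a plain CPU sketch of the standard algorithm with a linear gap penalty, not the paper's CUDA code); the quadratic loop nest is what makes commodity-GPU acceleration attractive:

    ```python
    # Plain-Python Smith-Waterman local alignment score (linear gap penalty).
    # The GPU version evaluates the same recurrence, many cells in parallel.
    def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman("ACACACTA", "AGCACACA"))   # expected score: 12
    ```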

  17. A Data Driven Network Approach to Rank Countries Production Diversity and Food Specialization

    PubMed Central

    Tu, Chengyi; Carr, Joel

    2016-01-01

    The easy access to large data sets has allowed for leveraging methodology from network physics and complexity science to disentangle patterns and processes directly from the data, leading to key insights into the behavior of systems. Here we use country-specific food production data to study binary and weighted topological properties of the bipartite country-food production matrix. This country-food production matrix can be: 1) transformed into overlap matrices which embed information regarding shared production of products among countries, and/or shared countries for individual products; 2) used to identify subsets of countries which produce similar commodities, or subsets of commodities shared by a given country, allowing for visualization of correlations in large networks; and 3) used to rank country fitness (the ability to produce a diverse array of products, weighted on the type of food commodities) and food specialization (quantified by the number of countries producing a specific food product, weighted on their fitness). Our results show that, on average, countries with high fitness produce both low- and high-specialization food commodities, whereas nations with low fitness tend to produce a small basket of diverse food products, typically comprised of low-specialization food commodities. PMID:27832118
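
    The fitness/specialization ranking described belongs to the family of fitness-complexity iterations on a binary country-product matrix. A minimal sketch of that style of iteration follows, assuming the standard synchronous update with per-step normalization; the paper's exact weighting may differ:

    ```python
    # Fitness-complexity style iteration on a binary country-product matrix
    # M (countries x products). A generic sketch on a toy 3-country,
    # 4-product matrix; not the paper's exact weighting.
    import numpy as np

    M = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 1]], dtype=float)

    fitness = np.ones(M.shape[0])
    complexity = np.ones(M.shape[1])
    for _ in range(50):
        f = M @ complexity                 # countries sum the complexity they produce
        q = 1.0 / (M.T @ (1.0 / fitness))  # products penalized by low-fitness producers
        fitness, complexity = f / f.mean(), q / q.mean()

    print("country fitness:   ", np.round(fitness, 3))
    print("product complexity:", np.round(complexity, 3))
    ```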

  18. A Data Driven Network Approach to Rank Countries Production Diversity and Food Specialization.

    PubMed

    Tu, Chengyi; Carr, Joel; Suweis, Samir

    2016-01-01

    The easy access to large data sets has allowed for leveraging methodology from network physics and complexity science to disentangle patterns and processes directly from the data, leading to key insights into the behavior of systems. Here we use country-specific food production data to study binary and weighted topological properties of the bipartite country-food production matrix. This country-food production matrix can be: 1) transformed into overlap matrices which embed information regarding shared production of products among countries, and/or shared countries for individual products; 2) used to identify subsets of countries which produce similar commodities, or subsets of commodities shared by a given country, allowing for visualization of correlations in large networks; and 3) used to rank country fitness (the ability to produce a diverse array of products, weighted on the type of food commodities) and food specialization (quantified by the number of countries producing a specific food product, weighted on their fitness). Our results show that, on average, countries with high fitness produce both low- and high-specialization food commodities, whereas nations with low fitness tend to produce a small basket of diverse food products, typically comprised of low-specialization food commodities.

  19. 75 FR 69058 - Request for Comment on a Proposal to Exempt, Pursuant to the Authority in Section 4(c) of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-10

    ... categorical Section 4(c) exemption to permit options and futures on shares of all or some precious metal commodity-based ETFs to be traded and cleared as options on securities and security futures, respectively... options and futures on shares of precious metal commodity- based ETFs. The Commission believes that...

  20. Multi-core processing and scheduling performance in CMS

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to utilize the multi-core architecture effectively. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation per job. The experiment job management system needs control over a larger quantum of resources, since multi-core-aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.
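
    The memory saving from shared data can be illustrated with simple arithmetic (all sizes below are invented for illustration): shared data is loaded once per job rather than once per core.

    ```python
    # Back-of-the-envelope memory comparison between N independent
    # single-core jobs and one N-core job sharing common data.
    CORES = 8
    SHARED_GB = 1.5     # code libraries, detector geometry, conditions data
    PRIVATE_GB = 0.5    # per-core event-processing state

    single_core_jobs = CORES * (SHARED_GB + PRIVATE_GB)
    whole_node_job = SHARED_GB + CORES * PRIVATE_GB
    print(f"8 single-core jobs: {single_core_jobs:.1f} GB")   # 16.0 GB
    print(f"1 whole-node job:   {whole_node_job:.1f} GB")     # 5.5 GB
    ```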

  21. Basic Requirements for Systems Software Research and Development

    NASA Technical Reports Server (NTRS)

    Kuszmaul, Chris; Nitzberg, Bill

    1996-01-01

    Our success over the past ten years evaluating and developing advanced computing technologies has been due to a simple research and development (R/D) model. Our model has three phases: (a) evaluating the state-of-the-art, (b) identifying problems and creating innovations, and (c) developing solutions, improving the state-of-the-art. This cycle has four basic requirements: a large production testbed with real users, a diverse collection of state-of-the-art hardware, facilities for evaluation of emerging technologies and development of innovations, and control over system management on these testbeds. Future research will be irrelevant and future products will not work if any of these requirements is eliminated. In order to retain our effectiveness, the numerical aerospace simulator (NAS) must replace out-of-date production testbeds in as timely a fashion as possible, and cannot afford to ignore innovative designs such as new distributed shared memory machines, clustered commodity-based computers, and multi-threaded architectures.

  22. The IQ-wall and IQ-station -- harnessing our collective intelligence to realize the potential of ultra-resolution and immersive visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eric A. Wernert; William R. Sherman; Chris Eller

    2012-03-01

    We present a pair of open-recipe, affordably-priced, easy-to-integrate, and easy-to-use visualization systems. The IQ-wall is an ultra-resolution tiled display wall that scales up to 24 screens with a single PC. The IQ-station is a semi-immersive display system that utilizes commodity stereoscopic displays, lower-cost tracking systems, and touch overlays. These systems have been designed to support a wide range of research, education, creative activities, and information presentations. They were designed to work equally well as stand-alone installations or as part of a larger distributed visualization ecosystem. We detail the hardware and software components of these systems, describe our deployments and experiences in a variety of research lab and university environments, and share our insights for effective support and community development.

  23. Multi-core processing and scheduling performance in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J. M.; Evans, D.; Foulkes, S.

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to utilize the multi-core architecture effectively. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation per job. The experiment job management system needs control over a larger quantum of resources, since multi-core-aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.

  24. 49 CFR 1248.101 - Commodity codes required.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Hardware. 343 Plumbing Fixtures and Heating Apparatus, Except Electric. 3433 Heating equipment, except electric. 344 Fabricated structural metal products. 3441 Fabricated structural metal products. 345 Bolts... fabricated pipe fittings. 35 Machinery, Except Electrical. 351 Engines and Turbines. 352 Farm Machinery and...

  25. 49 CFR 1248.101 - Commodity codes required.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Hardware. 343 Plumbing Fixtures and Heating Apparatus, Except Electric. 3433 Heating equipment, except electric. 344 Fabricated structural metal products. 3441 Fabricated structural metal products. 345 Bolts... fabricated pipe fittings. 35 Machinery, Except Electrical. 351 Engines and Turbines. 352 Farm Machinery and...

  26. 49 CFR 1248.101 - Commodity codes required.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Hardware. 343 Plumbing Fixtures and Heating Apparatus, Except Electric. 3433 Heating equipment, except electric. 344 Fabricated structural metal products. 3441 Fabricated structural metal products. 345 Bolts... fabricated pipe fittings. 35 Machinery, Except Electrical. 351 Engines and Turbines. 352 Farm Machinery and...

  27. 49 CFR 1248.101 - Commodity codes required.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Hardware. 343 Plumbing Fixtures and Heating Apparatus, Except Electric. 3433 Heating equipment, except electric. 344 Fabricated structural metal products. 3441 Fabricated structural metal products. 345 Bolts... fabricated pipe fittings. 35 Machinery, Except Electrical. 351 Engines and Turbines. 352 Farm Machinery and...

  28. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  29. Live HDR video streaming on commodity hardware

    NASA Astrophysics Data System (ADS)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
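
    The paper's point that tone mapping should be confined to the display can be illustrated with a generic display-side operator (a Reinhard-style global operator, given as a sketch; this is not the paper's pipeline code):

    ```python
    # Generic display-side tone-mapping sketch (Reinhard-style global
    # operator): compress linear HDR values into [0, 1] for a standard
    # display. Not the paper's implementation.
    import numpy as np

    def tonemap(hdr: np.ndarray, middle_grey: float = 0.18) -> np.ndarray:
        luminance = hdr.mean(axis=-1, keepdims=True) + 1e-6
        log_avg = np.exp(np.mean(np.log(luminance)))   # scene's log-average luminance
        scaled = hdr * (middle_grey / log_avg)         # expose for middle grey
        return scaled / (1.0 + scaled)                 # compress highlights smoothly

    frame = np.random.rand(720, 1280, 3).astype(np.float32) * 100.0  # fake HDR frame
    ldr = tonemap(frame)                               # ready for an SDR display
    ```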

  30. Simple techniques for improving deep neural network outcomes on commodity hardware

    NASA Astrophysics Data System (ADS)

    Colina, Nicholas Christopher A.; Perez, Carlos E.; Paraan, Francis N. C.

    2017-08-01

    We benchmark improvements in the performance of deep neural networks (DNN) on the MNIST data set upon implementing two simple modifications to the algorithm that have little computational overhead. First is GPU parallelization on a commodity graphics card, and second is initializing the DNN with random orthogonal weight matrices prior to optimization. Eigenspectra analysis of the weight matrices reveals that the initially orthogonal matrices remain nearly orthogonal after training. The probability distributions from which these orthogonal matrices are drawn are also shown to significantly affect the performance of these deep neural networks.
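
    The orthogonal initialization benchmarked here can be obtained from a QR decomposition of a random matrix (a standard construction; the paper additionally varies the distribution the random matrix is drawn from):

    ```python
    # Random orthogonal weight initialization via QR decomposition. The
    # ensemble (here Gaussian) is the knob the paper reports as mattering.
    import numpy as np

    def orthogonal_init(n_in: int, n_out: int, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        a = rng.normal(size=(n_in, n_out))     # try other distributions here
        q, r = np.linalg.qr(a)
        return q * np.sign(np.diag(r))         # sign fix makes Q unique

    W = orthogonal_init(784, 256)
    print(np.allclose(W.T @ W, np.eye(256)))   # columns are orthonormal: True
    ```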

  31. 7 CFR 1435.317 - Revisions of allocations and proportionate shares.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.317 Revisions of allocations and proportionate shares. The...

  32. 7 CFR 1435.317 - Revisions of allocations and proportionate shares.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.317 Revisions of allocations and proportionate shares. The...

  33. 7 CFR 1435.317 - Revisions of allocations and proportionate shares.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.317 Revisions of allocations and proportionate shares. The...

  34. 7 CFR 1435.317 - Revisions of allocations and proportionate shares.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.317 Revisions of allocations and proportionate shares. The...

  35. 7 CFR 1435.317 - Revisions of allocations and proportionate shares.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.317 Revisions of allocations and proportionate shares. The...

  36. Real-Time GPS-Alternative Navigation Using Commodity Hardware

    DTIC Science & Technology

    2007-06-01

    ... Of the improvements planned, the most influential for navigation are additional signals, frequencies, and improved signal strength. These improvements will... planned and implemented to provide maximum extensibility for additional sensors and functionality without disturbing the core GPU-accelerated...

  37. 7 CFR 1435.314 - Temporary transfer of proportionate share due to disasters.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.314 Temporary transfer of proportionate share due to...

  38. 7 CFR 1435.314 - Temporary transfer of proportionate share due to disasters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.314 Temporary transfer of proportionate share due to...

  39. 7 CFR 1435.314 - Temporary transfer of proportionate share due to disasters.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.314 Temporary transfer of proportionate share due to...

  40. 7 CFR 1435.314 - Temporary transfer of proportionate share due to disasters.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.314 Temporary transfer of proportionate share due to...

  41. 7 CFR 1435.314 - Temporary transfer of proportionate share due to disasters.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.314 Temporary transfer of proportionate share due to...

  42. 17 CFR 230.480 - Title of securities.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 2 2013-04-01 2013-04-01 false Title of securities. 230.480 Section 230.480 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION GENERAL RULES AND... shares, the par or stated value, if any; the rate of dividends, if fixed, and whether cumulative or non...

  43. 17 CFR 230.480 - Title of securities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Title of securities. 230.480 Section 230.480 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION GENERAL RULES AND... shares, the par or stated value, if any; the rate of dividends, if fixed, and whether cumulative or non...

  44. 17 CFR 230.480 - Title of securities.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 2 2012-04-01 2012-04-01 false Title of securities. 230.480 Section 230.480 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION GENERAL RULES AND... shares, the par or stated value, if any; the rate of dividends, if fixed, and whether cumulative or non...

  45. 7 CFR 1484.50 - What cost share contributions are eligible?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... section, eligible contributions are: (1) Cash; (2) Compensation paid to personnel; (3) The cost of... 7 Agriculture 10 2010-01-01 2010-01-01 false What cost share contributions are eligible? 1484.50... MARKETS FOR AGRICULTURAL COMMODITIES Contributions and Reimbursements § 1484.50 What cost share...

  46. A Scalable Software Architecture Booting and Configuring Nodes in the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.

  47. DNA Assembly in 3D Printed Fluidics

    DTIC Science & Technology

    2015-12-30

    ... advances in commodity digital fabrication tools, it is now possible to directly print fluidic devices and supporting hardware. 3D-printed micro- and millifluidic devices are inexpensive, easy to make and quick to produce. We demonstrate Golden Gate DNA assembly in 3D-printed fluidics with reaction vol...

  48. DYNER: A DYNamic ClustER for Education and Research

    ERIC Educational Resources Information Center

    Kehagias, Dimitris; Grivas, Michael; Mamalis, Basilis; Pantziou, Grammati

    2006-01-01

    Purpose: The purpose of this paper is to evaluate the use of a non-expensive dynamic computing resource, consisting of a Beowulf class cluster and a NoW, as an educational and research infrastructure. Design/methodology/approach: Clusters, built using commodity-off-the-shelf (COTS) hardware components and free, or commonly used, software, provide…

  49. Evaluation of Low-Pressure Cold Plasma for Disinfection of ISS Grown Produce and Metal Instruments

    NASA Technical Reports Server (NTRS)

    Hummerick, Mary E.; Hintze, Paul E.; Maloney, Philip R.; Spencer, Lashelle E.; Coutts, Janelle L.; Franco, Carolina

    2016-01-01

    Low pressure cold plasma, using breathing air as the plasma gas, has been shown to be effective at precision cleaning aerospace hardware at Kennedy Space Center.Both atmospheric and low pressure plasmas are relatively new technologies being investigated for disinfecting agricultural commodities and medical instruments.

  50. Mobile-IT Education (MIT.EDU): M-Learning Applications for Classroom Settings

    ERIC Educational Resources Information Center

    Sung, M.; Gips, J.; Eagle, N.; Madan, A.; Caneel, R.; DeVaul, R.; Bonsen, J.; Pentland, A.

    2005-01-01

    In this paper, we describe the Mobile-IT Education (MIT.EDU) system, which demonstrates the potential of using a distributed mobile device architecture for rapid prototyping of wireless mobile multi-user applications for use in classroom settings. MIT.EDU is a stable, accessible system that combines inexpensive, commodity hardware, a flexible…

  51. Enforcing Hardware-Assisted Integrity for Secure Transactions from Commodity Operating Systems

    DTIC Science & Technology

    2015-08-17

    ... OS. First, we dedicate one hard disk to each OS. A System Management Mode (SMM)-based monitoring module monitors if an OS is accessing another hard... hypervisor-based systems. An adversary can only target the BIOS-anchored SMM code, which is tiny, and without any need for foreign code (i.e., third...

  52. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, to extend the IPJR paradigm to building in 3D structures at micron precision are also summarized.

  53. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    NASA Astrophysics Data System (ADS)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
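
    The lockstep, neighbourhood-based update that a cellular processor array executes can be emulated on a commodity CPU with whole-array operations (a toy emulation of the computational model, not APRON's API):

    ```python
    # Toy emulation of one synchronous cellular-processor-array step: every
    # cell updates from its four neighbours in lockstep. Not APRON's API.
    import numpy as np

    def cpa_step(g: np.ndarray) -> np.ndarray:
        """Average each cell with its 4 neighbours (toroidal boundaries)."""
        return 0.2 * (g
                      + np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)
                      + np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1))

    grid = np.random.rand(128, 128).astype(np.float32)
    for _ in range(10):
        grid = cpa_step(grid)            # ten diffusion-like update steps
    ```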

  54. 75 FR 81977 - Order Exempting the Trading and Clearing of Certain Products Related to the CBOE Gold ETF...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-29

    ... derivatives transaction execution facility for transactions for future delivery in any commodity under section... categorical Section 4(c) exemption to permit options and futures on shares of all or some precious metal commodity-based ETFs to be traded and cleared as options on securities and security futures, respectively...

  55. Measuring whole-plant transpiration gravimetrically: a scalable automated system built from components

    Treesearch

    Damian Cirelli; Victor J. Lieffers; Melvin T. Tyree

    2012-01-01

    Measuring whole-plant transpiration is highly relevant considering the increasing interest in understanding and improving plant water use at the whole-plant level. We present an original software package (Amalthea) and a design for a system that measures transpiration using laboratory balances, based on readily available commodity hardware. The system is...

  56. A real-time biomimetic acoustic localizing system using time-shared architecture

    NASA Astrophysics Data System (ADS)

    Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn

    2008-04-01

    In this paper a real-time sound source localizing system is proposed, based on previously developed mammalian auditory models. Traditionally, following the models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore a new approach using a time-shared architecture is introduced. The proposed architecture is a purely sample-driven digital system, and it follows closely the continuous-time approach described in the models. Rather than having dedicated hardware on a per-frequency-channel basis, a specialized core channel, shared across all frequency bands, is used. Having an optimized execution time much shorter than the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as there are dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important for the efficient hardware implementation of a real-time biomimetic sound source localization system.
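
    The ITD estimate at the heart of such models reduces, in its simplest form, to finding the lag of the peak cross-correlation between the two ear signals (a generic sketch with invented signals, not the proposed hardware architecture):

    ```python
    # Generic ITD estimation sketch: the lag of the peak cross-correlation
    # between left and right ear signals. Signals are invented.
    import numpy as np

    fs = 48_000                                  # sample rate (samples/s)
    true_delay = 12                              # interaural delay in samples
    left = np.random.randn(4096)
    right = np.roll(left, true_delay)            # right ear hears it later

    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)      # signed lag in samples
    print(f"estimated ITD: {abs(lag) / fs * 1e6:.0f} microseconds")
    ```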

  57. 75 FR 37510 - Self-Regulatory Organizations; Notice of Filing of Proposed Rule Change by NYSE Arca, Inc...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-29

    ... Trading of WisdomTree Dreyfus Commodity Currency Fund Under NYSE Arca Equities Rule 8.600 June 22, 2010... proposes to list and trade shares of the following fund of the WisdomTree Trust (``Trust'') under NYSE Arca Equities Rule 8.600: WisdomTree Dreyfus Commodity Currency Fund (``Fund''). The text of the proposed rule...

  58. 77 FR 43620 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Instituting Proceedings to Determine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-25

    ..., United States Senator Carl Levin submitted a comment letter on the proposed rule change.\\8\\ \\1\\ 15 U.S.C... governs the listing and trading of commodity-based trust shares. J.P. Morgan Commodity ETF Services LLC is the sponsor of the Trust (``Sponsor'').\\9\\ J.P. Morgan Treasury Securities Services, a division of...

  59. 78 FR 75406 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... Shares of the WisdomTree Bloomberg U.S. Dollar Bullish Fund, WisdomTree Bloomberg U.S. Dollar Bearish Fund, and the WisdomTree Commodity Currency Bearish Fund Under NYSE Arca Equities Rule 8.600 December 5... and trade shares (``Shares'') of WisdomTree Bloomberg U.S. Dollar Bullish Fund, WisdomTree Bloomberg U...

  60. Information Technology Strategic Plan 2009-2013

    DTIC Science & Technology

    2009-01-01

    ... and the absence of Enterprise funding models for shared services. Also, though progress has been made within the DHS IT community regarding... security access regulations for shared services; and difficulties associated with... Office of the Chief Information Officer... infrastructure and shared services is the vision for the Infrastructure Transformation Program at DHS and is the means by which to reduce IT commodity...

  61. Will Commodity Properties Affect Seller's Creditworthy: Evidence in C2C E-commerce Market in China

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Ling, Min

    This paper finds that credit rating levels show significant differences among different sub-commodity markets in E-commerce, which leaves room for sellers to obtain a higher credit rating by entering businesses with a higher average credit level before committing fraud. In order to study the influence of commodity properties on credit rating, this paper analyzes how commodity properties affect average credit rating through the degree of information asymmetry, the returns and costs of fraud, credibility perception, and fraud tolerance. The empirical study shows that Delivery, average trading volume, average price, and complaint possibility have decisive impacts on credit performance; brand market share, the degree of standardization, and the degree of imitation also have a relatively less significant effect on credit rating. Finally, this paper suggests that important commodity properties should be introduced to modify the reputation system, to prevent credit-rating arbitrage behavior in which sellers move into low-rating commodities after being assigned a high credit rating.

  62. 7 CFR 1484.50 - What cost share contributions are eligible?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false What cost share contributions are eligible? 1484.50 Section 1484.50 Agriculture Regulations of the Department of Agriculture (Continued) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS PROGRAMS TO HELP DEVELOP FOREIGN...

  63. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
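
    For reference, the integer-pixel full-search block-matching step that precedes any sub-pixel refinement looks as follows (a plain CPU sketch on synthetic frames; the paper's contribution is moving the interpolation-heavy sub-pixel stage to graphics hardware):

    ```python
    # Integer-pixel full-search block matching with a SAD criterion; the
    # sub-pixel refinement stage is what requires the image interpolation
    # discussed in the paper. Plain CPU sketch, synthetic frames.
    import numpy as np

    def full_search(ref, cur, y, x, block=16, radius=8):
        """Best (dy, dx) motion vector for the block at (y, x) of cur."""
        target = cur[y:y+block, x:x+block]
        best_sad, best_mv = np.inf, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = ref[y+dy:y+dy+block, x+dx:x+dx+block]
                if cand.shape != target.shape:
                    continue                        # candidate leaves the frame
                sad = np.abs(cand - target).sum()   # sum of absolute differences
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv

    ref = np.random.rand(64, 64)
    cur = np.roll(ref, (2, 3), axis=(0, 1))         # synthetic global motion
    print(full_search(ref, cur, 24, 24))            # expect (-2, -3)
    ```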

  64. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware.

    PubMed

    Kole, J S; Beekman, F J

    2006-02-21

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward- and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of graphics hardware acceleration on reconstructed image accuracy in statistical reconstruction and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that, at almost preserved reconstructed image accuracy, speed-ups of a factor of 40 to 222 can be achieved compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
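
    The projection operators in such reconstructions lean on interpolated sampling; for reference, the bilinear interpolation that graphics hardware performs natively looks like this on the CPU (a generic sketch, not the paper's projector):

    ```python
    # Plain CPU bilinear interpolation, the sampling operation graphics
    # hardware performs natively inside the forward/back projectors.
    import numpy as np

    def bilinear(img: np.ndarray, y: float, x: float) -> float:
        """Bilinearly interpolated sample of img at continuous (y, x)."""
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        dy, dx = y - y0, x - x0
        return ((1 - dy) * (1 - dx) * img[y0, x0]
                + (1 - dy) * dx * img[y0, x0 + 1]
                + dy * (1 - dx) * img[y0 + 1, x0]
                + dy * dx * img[y0 + 1, x0 + 1])

    img = np.arange(16.0).reshape(4, 4)
    print(bilinear(img, 1.5, 1.5))   # midpoint of 5, 6, 9, 10 -> 7.5
    ```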

  65. 7 CFR 1435.316 - Acreage reports for purposes of proportionate shares.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.316 Acreage reports for purposes of proportionate shares. (a) A report of planted and failed acreage shall be required on farms that produce sugarcane for sugar...

  66. 7 CFR 1435.312 - Establishment of acreage bases under proportionate shares.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.312 Establishment of acreage bases under proportionate... shares as the simple average of the acreage planted and considered planted for harvest for sugar or seed...

  67. 7 CFR 1435.316 - Acreage reports for purposes of proportionate shares.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.316 Acreage reports for purposes of proportionate shares. (a) A report of planted and failed acreage shall be required on farms that produce sugarcane for sugar...

  68. 7 CFR 1435.312 - Establishment of acreage bases under proportionate shares.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.312 Establishment of acreage bases under proportionate... shares as the simple average of the acreage planted and considered planted for harvest for sugar or seed...

  9. 7 CFR 1435.312 - Establishment of acreage bases under proportionate shares.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.312 Establishment of acreage bases under proportionate... shares as the simple average of the acreage planted and considered planted for harvest for sugar or seed...

  10. 7 CFR 1435.316 - Acreage reports for purposes of proportionate shares.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.316 Acreage reports for purposes of proportionate shares. (a) A report of planted and failed acreage shall be required on farms that produce sugarcane for sugar...

  11. 7 CFR 1435.316 - Acreage reports for purposes of proportionate shares.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.316 Acreage reports for purposes of proportionate shares. (a) A report of planted and failed acreage shall be required on farms that produce sugarcane for sugar...

  12. 7 CFR 1435.316 - Acreage reports for purposes of proportionate shares.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.316 Acreage reports for purposes of proportionate shares. (a) A report of planted and failed acreage shall be required on farms that produce sugarcane for sugar...

  13. 7 CFR 1435.312 - Establishment of acreage bases under proportionate shares.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.312 Establishment of acreage bases under proportionate... shares as the simple average of the acreage planted and considered planted for harvest for sugar or seed...

  14. 7 CFR 1435.312 - Establishment of acreage bases under proportionate shares.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) COMMODITY CREDIT CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing Allotments For Sugar § 1435.312 Establishment of acreage bases under proportionate... shares as the simple average of the acreage planted and considered planted for harvest for sugar or seed...

  15. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing brokers in commodities. 1026.520 Section 1026.520 Money and Finance: Treasury Regulations Relating to...

  16. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing brokers in commodities. 1026.520 Section 1026.520 Money and Finance: Treasury Regulations Relating to...

  17. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing brokers in commodities. 1026.520 Section 1026.520 Money and Finance: Treasury Regulations Relating to...

  18. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing brokers in commodities. 1026.520 Section 1026.520 Money and Finance: Treasury Regulations Relating to...

  19. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    NASA Astrophysics Data System (ADS)

    Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2016-04-01

    High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.

  20. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.
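
    Calibration accuracy of this kind is conventionally summarized with normalized error metrics such as CV(RMSE) and NMBE (the style of metric used in ASHRAE Guideline 14). The abstract does not spell out its formulas, so the Python sketch below is an assumed illustration of the idea, not the Autotune project's actual code.

        import numpy as np

        def cvrmse(measured, simulated):
            # Coefficient of variation of RMSE, in percent of the mean load.
            measured, simulated = np.asarray(measured), np.asarray(simulated)
            rmse = np.sqrt(np.mean((measured - simulated) ** 2))
            return 100.0 * rmse / measured.mean()

        def nmbe(measured, simulated):
            # Normalized mean bias error, in percent (sign convention varies).
            measured, simulated = np.asarray(measured), np.asarray(simulated)
            return 100.0 * (measured - simulated).sum() / (measured.size * measured.mean())

        # invented example: one day of hourly electricity use (kWh)
        meas = np.array([1.2, 1.1, 1.0, 1.3, 2.0, 2.4, 2.2, 1.8] * 3)
        sim = meas * 0.95 + 0.05
        print(f"CV(RMSE) = {cvrmse(meas, sim):.2f}%  NMBE = {nmbe(meas, sim):.2f}%")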

  1. CephFS: a new generation storage platform for Australian high energy physics

    NASA Astrophysics Data System (ADS)

    Borges, G.; Crosby, S.; Boland, L.

    2017-10-01

    This paper presents an implementation of a Ceph file system (CephFS) use case at the ARC Centre of Excellence for Particle Physics at the Terascale (CoEPP). CoEPP's CephFS provides a POSIX-like file system on top of a Ceph RADOS object store, deployed on commodity hardware and without single points of failure. By delivering a unique file system namespace at different CoEPP centres spread across Australia, local HEP researchers can store, process and share data independently of their geographical locations. CephFS is also used as the back-end file system for a WLCG ATLAS user area at the Australian Tier-2. Dedicated SRM and XROOTD services, deployed on top of CoEPP's CephFS, integrate it into ATLAS distributed data operations. This setup, while allowing Australian HEP researchers to trigger data movement via ATLAS grid tools, also enables local POSIX-like read access, giving scientists greater control over their data flows. In this article we present details of CoEPP's Ceph/CephFS implementation and report performance I/O metrics collected during the testing/tuning phase of the system.

  2. CFTC-EPA Memorandum of Understanding

    EPA Pesticide Factsheets

    Memorandum of Understanding Between the Environmental Protection Agency and the Commodity Futures Trading Commission on the Sharing of Information Available to EPA Related to the Functioning of Renewable Fuel and Related Markets

  3. Information Security Considerations for Applications Using Apache Accumulo

    DTIC Science & Technology

    2014-09-01

    Acronym-glossary fragment: Distributed File System; INSCOM, United States Army Intelligence and Security Command; JPA, Java Persistence API; JSON, JavaScript Object Notation; MAC, Mandatory... MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly... manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower-level procedural language (e.g., Java

  4. Guaranteeing Spoof-Resilient Multi-Robot Networks

    DTIC Science & Technology

    2016-02-12

    key-distribution. Our core contribution is a novel algorithm implemented on commercial Wi-Fi radios that can "sense" spoofers using the physics of... encrypted key exchange, but rather a commercial Wi-Fi card and software to implement our solution. Our virtual sensor leverages the rich physical... cheap commodity Wi-Fi radios, unlike hardware-based solutions [46, 48]. (3) It is robust to client mobility and power-scaling attacks. Finally, our

  5. Fifty Years of Observing Hardware and Human Behavior

    NASA Technical Reports Server (NTRS)

    McMann, Joe

    2011-01-01

    During this half-day workshop, Joe McMann presented the lessons learned during his 50 years of experience in both industry and government, which included all U.S. manned space programs, from Mercury to the ISS. He shared his thoughts about hardware and people and what he has learned from first-hand experience. Included were such topics as design, testing, design changes, development, failures, crew expectations, hardware, requirements, and meetings.

  6. Improving the energy efficiency of sparse linear system solvers on multicore and manycore systems.

    PubMed

    Anzt, H; Quintana-Ortí, E S

    2014-06-28

    While most recent breakthroughs in scientific research rely on complex simulations carried out in large-scale supercomputers, the power draw and energy spent for this purpose are increasingly becoming limiting factors to this trend. In this paper, we provide an overview of the current status of energy-efficient scientific computing by reviewing different technologies used to monitor power draw as well as power- and energy-saving mechanisms available in commodity hardware. For the particular domain of sparse linear algebra, we analyse the energy efficiency of a broad collection of hardware architectures and investigate how algorithmic and implementation modifications can improve the energy performance of sparse linear system solvers, without negatively impacting their performance. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
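
    One commodity mechanism for the kind of power monitoring surveyed here is the RAPL energy counter that recent Intel CPUs expose on Linux through the powercap sysfs tree. The sketch below is a hedged illustration (the sysfs path, read permissions, and counter wraparound behavior vary by CPU and kernel), not the instrumentation used in the paper.

        import time

        RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0; path varies

        def read_uj(path=RAPL):
            # Cumulative package energy in microjoules.
            with open(path) as f:
                return int(f.read())

        def measure(fn, *args):
            # Return (result, joules, seconds) for one call; ignores wraparound.
            e0, t0 = read_uj(), time.time()
            result = fn(*args)
            e1, t1 = read_uj(), time.time()
            return result, (e1 - e0) / 1e6, t1 - t0

        # usage (requires read permission on the sysfs file):
        # _, joules, secs = measure(sum, range(10_000_000))
        # print(f"{joules:.2f} J over {secs:.2f} s -> {joules / secs:.1f} W average")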

  7. Force sharing in high-power parallel servo-actuators

    NASA Technical Reports Server (NTRS)

    Neal, T. P.

    1974-01-01

    The various existing force-sharing schemes were examined by conducting a literature survey. A list of potentially applicable concepts was compiled from this survey, and a brief analysis was then made of each concept, which resulted in two competing schemes being selected for in-depth evaluation. A functional design of the equalization logic for the two schemes was undertaken, and a specific space shuttle application was chosen for experimental evaluation. The application was scaled down so that existing hardware could be utilized. Next, an analog computer study was conducted to evaluate the more important characteristics of the two competing force-sharing schemes. On the basis of the computer study, a final configuration was selected. A load simulator was then designed to evaluate this configuration on actual hardware.

  8. 7 CFR 1220.610 - Producer.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE SOYBEAN PROMOTION, RESEARCH, AND... person engaged in the growing of soybeans in the United States who owns or who shares the ownership and risk of loss of such soybeans. ...

  9. 7 CFR 1220.610 - Producer.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE SOYBEAN PROMOTION, RESEARCH, AND... person engaged in the growing of soybeans in the United States who owns or who shares the ownership and risk of loss of such soybeans. ...

  10. An experimental distributed microprocessor implementation with a shared memory communications and control medium

    NASA Technical Reports Server (NTRS)

    Mejzak, R. S.

    1980-01-01

    The distributed processing concept is defined in terms of control primitives, variables, and structures and their use in performing a decomposed discrete Fourier transform (DFT) application function. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common database by employing control primitives. Access to selected areas within the common database is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors to be varied in the configuration without any modifications to the control structure. Decompositional elements of the DFT application function in terms of tasks and subtasks are also described. The experimental hardware configuration consists of IMSAI 8080 chassis, which are independent 8-bit microcomputer units. These chassis are linked together to form a multiple-processing system by means of a shared memory facility. This facility consists of hardware which provides a bus structure to enable up to six microcomputers to be interconnected. It provides polling and arbitration logic so that only one processor has access to shared memory at any one time.
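
    The control scheme described (a common database guarded by a lock, with work selected through shared task pointers) maps directly onto modern shared-memory primitives. A minimal Python multiprocessing analogue follows; the names and the toy workload are illustrative, not the IMSAI-era implementation, but it preserves the property that the processor count can vary without changing the control structure.

        from multiprocessing import Process, Lock, Value, Array

        N_TASKS = 16

        def worker(next_task, lock, results):
            while True:
                with lock:                      # analogue of the hardware lock
                    i = next_task.value         # shared task pointer
                    if i >= N_TASKS:
                        return
                    next_task.value = i + 1
                results[i] = i * i              # stand-in for a DFT subtask

        if __name__ == "__main__":
            next_task = Value("i", 0)           # shared task pointer
            lock = Lock()
            results = Array("d", N_TASKS)       # shared "common database"
            procs = [Process(target=worker, args=(next_task, lock, results))
                     for _ in range(4)]         # processor count freely variable
            for p in procs: p.start()
            for p in procs: p.join()
            print(list(results))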

  11. Defense AT&L (Volume 35, Number 5, September-October 2006)

    DTIC Science & Technology

    2006-10-01

    percent of production. The critical path elements driving the IOT&E schedule are not production hardware... reduced costs, and successful completion of work in the scheduled time. The Commodity Approach to Aircraft Protection Systems, Capt. Bill Chubb, USN. The... piece of Littoral Combat Ship Two during the ship's keel laying ceremony. The Navy's second Littoral Combat Ship is scheduled for commissioning in

  12. PageRank as a method to rank biomedical literature by importance.

    PubMed

    Yates, Elliot J; Dixon, Louise C

    2015-01-01

    Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine, Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available, PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905, P < 0.01) and we thus validate the former as a surrogate of literature importance. Furthermore, the algorithm can be run in trivial time on cheap, commodity cluster hardware, lowering the barrier of entry for resource-limited open access organisations. PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
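
    The underlying computation is compact: PageRank is the stationary distribution of a damped random walk over the citation graph, obtainable by power iteration. A self-contained Python sketch on a toy graph (not the authors' cloud pipeline) follows.

        import numpy as np

        def pagerank(adj, d=0.85, tol=1e-10):
            # adj[i] = list of papers that paper i cites (outbound links).
            n = len(adj)
            r = np.full(n, 1.0 / n)
            while True:
                r_new = np.full(n, (1.0 - d) / n)
                for i, outs in enumerate(adj):
                    if outs:                       # distribute rank along citations
                        r_new[list(outs)] += d * r[i] / len(outs)
                    else:                          # dangling node: spread uniformly
                        r_new += d * r[i] / n
                if np.abs(r_new - r).sum() < tol:
                    return r_new
                r = r_new

        # toy citation network: 0 cites 1 and 2, 1 cites 2, 2 cites 0, 3 cites 2
        print(pagerank([[1, 2], [2], [0], [2]]))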

  13. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against approximately 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
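
    The MapReduce decomposition is straightforward: each molecule is docked independently, and only the score ranking requires a reduction. The PySpark skeleton below is a hypothetical sketch under assumed names; dock() and the library path are placeholders, and the real pipeline is the authors' open-source Spark-VS.

        from pyspark import SparkContext

        def dock(smiles):
            # Placeholder scoring function (hypothetical); a real pipeline would
            # invoke docking software for each molecule, e.g. via subprocess.
            return (smiles, float(len(smiles)))       # dummy score

        if __name__ == "__main__":
            sc = SparkContext(appName="vs-sketch")
            ligands = sc.textFile("ligands.smi")      # hypothetical SMILES library
            top = (ligands.map(dock)                  # embarrassingly parallel map
                          .takeOrdered(10, key=lambda kv: -kv[1]))  # best 10 scores
            for smiles, score in top:
                print(score, smiles)
            sc.stop()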

  14. Human Exploration Spacecraft Testbed for Integration and Advancement (HESTIA)

    NASA Technical Reports Server (NTRS)

    Banker, Brian F.; Robinson, Travis

    2016-01-01

    The proposed paper will cover an ongoing effort named HESTIA (Human Exploration Spacecraft Testbed for Integration and Advancement), led at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) to promote a cross-subsystem approach to developing Mars-enabling technologies with the ultimate goal of integrated system optimization. HESTIA also aims to develop the infrastructure required to rapidly test these highly integrated systems at a low cost. The initial focus is on the common fluids architecture required to enable human exploration of Mars, specifically between life support and in-situ resource utilization (ISRU) subsystems. An overview of the advancements in integrated technologies, infrastructure, simulation, and modeling capabilities will be presented, as well as the results and findings of integrated testing. Due to the enormous mass gear-ratio required for human exploration beyond low-earth orbit (for every 1 kg of payload landed on Mars, 226 kg will be required on Earth), minimization of surface hardware and commodities is paramount. Hardware requirements can be minimized by reducing equipment that performs similar functions, though for different subsystems. If hardware could be developed which meets the requirements of both life support and ISRU, it could result in the reduction of primary hardware and/or reduction in spares. Minimization of commodities to the surface of Mars can be achieved through the creation of higher-efficiency systems producing little to no undesired waste, such as a closed-loop life support subsystem. Where complete efficiency is impossible or impractical, makeup commodities could be manufactured via ISRU. Although utilization of ISRU products (oxygen and water) for crew consumption holds great promise of reducing demands on life support hardware, there exist concerns as to the purity and transportation of commodities. To date, ISRU has been focused on production rates and purities for propulsion needs. The meshing of requirements between all potential users, producers, and cleaners of oxygen and water is crucial to guiding the development of technologies which will be used to perform these functions. Various new capabilities are being developed as part of HESTIA, which will enable the integrated testing of these technologies. This includes the upgrading of a 20' diameter habitat chamber to eventually support long-duration (90+ day) human-in-the-loop testing of advanced life support systems. Additionally, a 20' diameter vacuum chamber is being modified to create Mars atmospheric pressures and compositions. This chamber, designated the Mars Environment Chamber (MEC), will eventually be upgraded to include a dusty environment and thermal shroud to simulate conditions on the surface of Mars. Given that individual technologies will be in geographically diverse locations across NASA facilities and elsewhere in the world, schedule and funding constraints will likely limit the frequency of physical integration. When this is the case, absent subsystems can be either digitally or physically simulated. Using the Integrated Power Avionics and Software (iPAS) environment, HESTIA is able to bring together data from various subsystems in simulated surroundings, insert faults, errors, time delays, etc., and feed data into computer models or physical systems capable of reproducing the output of the absent subsystems for consumption by local subsystems. Although imperfect, this capability provides opportunities to test subsystem integration and interactions at a fraction of the cost. When a subsystem technology is too immature for integrated testing, models can be produced using the General-Use Nodal Network Solver (GUNNS) capability to simulate the overall system performance. In doing so, even technologies not yet on the drawing board can be integrated and overall system performance estimated. Through the integrated development of technologies, and of the infrastructure to rapidly and inexpensively model, simulate, and test them early in their development, HESTIA is pioneering a new way of developing the future of human space exploration.

  15. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware

    PubMed Central

    Zheng, Da; Burns, Randal; Szalay, Alexander S.

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32-core NUMA machine with four eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads. PMID:24402052

  16. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware.

    PubMed

    Zheng, Da; Burns, Randal; Szalay, Alexander S

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32-core NUMA machine with four eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads.
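
    A set-associative page cache differs from a global one in that each page hashes to a small fixed-size set, so locking and LRU eviction are per-set rather than global, which is what eliminates the lock contention described above. The Python sketch below illustrates the data structure in miniature; it is a hedged analogue, not the paper's implementation.

        from collections import OrderedDict
        from threading import Lock

        class SetAssociativeCache:
            def __init__(self, n_sets=1024, ways=8):
                self.ways = ways
                self.sets = [OrderedDict() for _ in range(n_sets)]
                self.locks = [Lock() for _ in range(n_sets)]  # per-set, not global

            def get(self, page, load):
                # Return cached page data, loading and caching on a miss.
                idx = hash(page) % len(self.sets)
                with self.locks[idx]:              # contention limited to one set
                    s = self.sets[idx]
                    if page in s:
                        s.move_to_end(page)        # refresh LRU position
                        return s[page]
                    if len(s) >= self.ways:
                        s.popitem(last=False)      # evict least-recently-used way
                    s[page] = data = load(page)
                    return data

        cache = SetAssociativeCache()
        print(cache.get(42, lambda p: f"contents of page {p}"))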

  17. Pointblank: Acts on the Eve of War, 1938-1939

    DTIC Science & Technology

    2012-06-01

    Russians. This horsepower took the form of concentrated US industry supported by a robust transportation network, both of which were critical... ground to a halt and civilians endured hardship. The steel industry shared some of the same robustness as the oil industry in terms of distribution... these two industries shared the consumption of another, more decisive commodity. The commercial power industry was the most vulnerable target set

  18. Best bang for your buck: GPU nodes for GROMACS biomolecular simulations

    PubMed Central

    Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut

    2015-01-01

    The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484

  19. Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.

    PubMed

    Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut

    2015-10-05

    The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
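
    The lifetime-cost argument reduces to simple arithmetic: total cost is the hardware price plus energy (power draw times hours times electricity rate, inflated by a cooling overhead), and the figure of merit is trajectory produced per unit cost. The Python sketch below works through the calculation; all prices and performance figures are invented for illustration, not taken from the paper.

        def ns_per_euro(hw_price_eur, perf_ns_day, power_w,
                        years=5.0, eur_per_kwh=0.25, cooling_overhead=0.5):
            # Nanoseconds of trajectory per euro over the node's lifetime.
            hours = years * 365 * 24
            energy_cost = (power_w / 1000) * hours * eur_per_kwh * (1 + cooling_overhead)
            total_ns = perf_ns_day * 365 * years
            return total_ns / (hw_price_eur + energy_cost)

        # invented example: CPU-only node vs. the same node plus a consumer GPU
        print(ns_per_euro(2000, 20, 250))   # CPU-only
        print(ns_per_euro(2400, 60, 450))   # CPU + consumer-class GPU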

  20. birgHPC: creating instant computing clusters for bioinformatics and molecular dynamics.

    PubMed

    Chew, Teong Han; Joyce-Tan, Kwee Hong; Akma, Farizuwana; Shamsir, Mohd Shahir

    2011-05-01

    birgHPC, a bootable Linux Live CD, has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies using any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slot detection as well as a simple job submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive from the head node, which immediately positions the rest of the PCs on the network as computing nodes. Thus, a temporary, affordable, scalable and high-performance computing environment can be built by non-computing-based researchers using low-cost commodity hardware. The birgHPC Live CD and relevant user guide are available for free at http://birg1.fbb.utm.my/birghpc.

  1. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large-format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates, for the first time, a unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high-resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high-quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Millar, A. P.; Baranova, T.; Behrmann, G.

    For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using commodity hardware that supports seamless data migration to and from tape. In this paper we provide some recent news of changes within dCache and the community surrounding it. We describe the flexible nature of dCache that allows both externally developed enhancements to dCache facilities and the adoption of new technologies. Finally, we present information about avenues the dCache team is exploring for possible future improvements in dCache.

  3. Autonomic Recovery: HyperCheck: A Hardware-Assisted Integrity Monitor

    DTIC Science & Technology

    2013-08-01

    system (OS). HyperCheck leverages the CPU System Management Mode (SMM), present in x86 systems, to securely generate and transmit the full state of the... HyperCheck harnesses the CPU System Management Mode (SMM), which is present in all x86 commodity systems, to create a snapshot view of the current state of the... protect the software above it. Our assumptions are that the attacker does not have physical access to the machine and that the SMM BIOS is locked and

  4. Real-time lens distortion correction: speed, accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
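
    The mesh-based approach amounts to tessellating the image, pushing the mesh vertices through the radial distortion model, and letting the texture hardware interpolate between them. The NumPy sketch below shows the vertex-warping step for a polar mesh under a simple polynomial radial model; the coefficients and mesh dimensions are illustrative assumptions, not the paper's calibration.

        import numpy as np

        def warp_polar_mesh(cx, cy, r_max, k1=-0.18, k2=0.04,
                            n_rings=16, n_spokes=32):
            # Vertices of a polar mesh pushed through a radial distortion model.
            # Illustrative model: r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4),
            # with radii normalized by r_max. Texture hardware would interpolate
            # the correction between these vertices.
            r_u = np.linspace(0.0, 1.0, n_rings)[:, None]       # normalized radii
            theta = np.linspace(0.0, 2 * np.pi, n_spokes)[None, :]
            r_d = r_u * (1 + k1 * r_u**2 + k2 * r_u**4)         # distorted radius
            x = cx + r_max * r_d * np.cos(theta)                # source texture coords
            y = cy + r_max * r_d * np.sin(theta)
            return np.stack([x, y], axis=-1)                    # (rings, spokes, 2)

        verts = warp_polar_mesh(cx=320, cy=240, r_max=400)
        print(verts.shape)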

  5. The evolution of the Trigger and Data Acquisition System in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Krasznahorkay, A.; Atlas Collaboration

    2014-06-01

    The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of the upgrade is to add robustness and flexibility to the selection and conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. The TDAQ system used to date is organised in a three-level selection scheme, including a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed on separate, commodity hardware nodes. While this architecture was successfully operated well beyond the original design goals, the accumulated experience stimulated interest in exploring possible evolutions. We will also be upgrading the hardware of the TDAQ system by introducing new elements to it. For the high-level trigger, the current plan is to deploy a single homogeneous system, which merges the execution of the second and third trigger levels, still separated, on a unique hardware node. Prototyping efforts have already demonstrated many benefits of the simplified design. In this paper we report on the design and development status of this new system.

  6. Acceleration of fluoro-CT reconstruction for a mobile C-Arm on GPU and FPGA hardware: a simulation study

    NASA Astrophysics Data System (ADS)

    Xue, Xinwei; Cheryauka, Arvi; Tubbs, David

    2006-03-01

    CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating-room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of the Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using sample 2D and 3D CT problems, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and a solution based on FPGA technology) have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of feature when compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.

  7. 7 CFR 1209.15 - Producer.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE MUSHROOM PROMOTION, RESEARCH, AND CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209.15 Producer. Producer means any person engaged in the production of mushrooms who owns or shares the...

  8. 7 CFR 1209.15 - Producer.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE MUSHROOM PROMOTION, RESEARCH, AND CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209.15 Producer. Producer means any person engaged in the production of mushrooms who owns or shares the...

  9. 7 CFR 1209.15 - Producer.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE MUSHROOM PROMOTION, RESEARCH, AND CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209.15 Producer. Producer means any person engaged in the production of mushrooms who owns or shares the...

  10. 7 CFR 1209.15 - Producer.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE MUSHROOM PROMOTION, RESEARCH, AND CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209.15 Producer. Producer means any person engaged in the production of mushrooms who owns or shares the...

  11. 7 CFR 1209.15 - Producer.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE MUSHROOM PROMOTION, RESEARCH, AND CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209.15 Producer. Producer means any person engaged in the production of mushrooms who owns or shares the...

  12. Liquid Oxygen/Liquid Methane Integrated Power and Propulsion

    NASA Technical Reports Server (NTRS)

    Banker, Brian; Ryan, Abigail

    2016-01-01

    The proposed paper will cover ongoing work at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) on integrated power and propulsion for advanced human exploration. Specifically, it will present findings of the integrated design, testing, and operational challenges of a liquid oxygen / liquid methane (LOx/LCH4) propulsion brassboard and Solid Oxide Fuel Cell (SOFC) system. Human-Mars architectures point to an oxygen-methane economy utilizing common commodities scavenged from the planetary atmosphere and soil via In-Situ Resource Utilization (ISRU) and shared across sub-systems. Due to the enormous mass gear-ratio required for human exploration beyond low-earth orbit (for every 1 kg of payload landed on Mars, 226 kg will be required on Earth), increasing commonality between spacecraft subsystems such as power and propulsion can result in tremendous launch mass and volume savings. Historically, propulsion and fuel cell power subsystems have had little interaction outside of the generation (fuel cell) and consumption (propulsion) of electrical power. This was largely due to a mismatch in preferred commodities (hypergolics for propulsion; oxygen & hydrogen for fuel cells). Although this stove-piped approach benefits from simplicity in the design process, it means each subsystem has its own tanks, pressurization system, fluid feed system, etc., increasing overall spacecraft mass and volume. A liquid oxygen / liquid methane commodities architecture across propulsion and power subsystems would enable the use of common tankage and associated pressurization and commodity delivery hardware for both. Furthermore, a spacecraft utilizing integrated power and propulsion could use propellant residuals - propellant which could not be expelled from the tank near depletion due to hydrodynamic considerations caused by large flow demands of a rocket engine - to generate power after all propulsive maneuvers are complete, thus utilizing previously wasted mass. Such is the case for human and robotic planetary landers. Although many potential benefits through integrated power & propulsion exist, integrated operations have yet to be successfully demonstrated, and many challenges have already been identified, the most obvious of which is the large temperature gradient. SOFC chemistry is exothermic with operating temperatures in excess of 1,000 K; however, any shared commodities will undoubtedly be stored at cryogenic temperatures (90-112 K) for mass efficiency reasons. Spacecraft packaging will drive these two subsystems into close proximity; thus heat leak into the commodity tankage must be minimized and/or mitigated. Furthermore, commodities must be gasified prior to consumption by the SOFC. Excess heat generated by the SOFC could be used to perform this phase change; however, this has yet to be demonstrated. A further identified challenge is the ability of the SOFC to handle the sudden power spikes created by the propulsion system. A power accumulator (battery) will likely be necessary to handle these sudden demands while the SOFC thermally adjusts. JSC's current SOFC test system consists of a 1 kW fuel cell designed by Delphi. The fuel cell is currently undergoing characterization testing at the NASA JSC Energy Systems Test Area (ESTA), after which a Steam Methane Reformer (SMR) will be integrated and the combined system tested in closed-loop. The propulsion brassboard is approximately the size of what could be flown on a sounding rocket. It consists of one 100 lbf thrust "main" engine developed for NASA by Aerojet and two 10 lbf thrusters, developed at NASA JSC, to simulate a reaction control system. This system is also under development and initial testing at ESTA. After initial testing, combined testing will occur, which will provide data on the fuel cell's ability to sufficiently handle the power spikes created by the propulsion system. These two systems will also be modeled using General-Use Nodal Network Solver (GUNNS) software. Once anchored with test data, this model will be used to extrapolate onto other firing profiles and used to size the power accumulator.

  13. MONTAGE: A Methodology for Designing Composable End-to-End Secure Distributed Systems

    DTIC Science & Technology

    2012-08-01

    Contents fragment: 7.6 Formal Model of Loc Separation; 7.6.1 Static Partitions... Next, we derive five requirements (called Loc Separation, Implicit Parameter Separation, Error Signaling Separation, Conf Separation, and Next Call... hypervisors and hardware) and a real cloud (with shared hypervisors and hardware) that satisfies these requirements. Finally, we study Loc Separation

  14. Multiple pathways of commodity crop expansion in tropical forest landscapes

    NASA Astrophysics Data System (ADS)

    Meyfroidt, Patrick; Carlson, Kimberly M.; Fagan, Matthew E.; Gutiérrez-Vélez, Victor H.; Macedo, Marcia N.; Curran, Lisa M.; DeFries, Ruth S.; Dyer, George A.; Gibbs, Holly K.; Lambin, Eric F.; Morton, Douglas C.; Robiglio, Valentina

    2014-07-01

    Commodity crop expansion, for both global and domestic urban markets, follows multiple land change pathways entailing direct and indirect deforestation, and results in various social and environmental impacts. Here we compare six published case studies of rapid commodity crop expansion within forested tropical regions. Across cases, between 1.7% and 89.5% of new commodity cropland was sourced from forestlands. Four main factors controlled pathways of commodity crop expansion: (i) the availability of suitable forestland, which is determined by forest area, agroecological or accessibility constraints, and land use policies, (ii) economic and technical characteristics of agricultural systems, (iii) differences in constraints and strategies between small-scale and large-scale actors, and (iv) variable costs and benefits of forest clearing. When remaining forests were unsuitable for agriculture and/or policies restricted forest encroachment, a larger share of commodity crop expansion occurred by conversion of existing agricultural lands, and land use displacement was smaller. Expansion strategies of large-scale actors emerge from context-specific balances between the search for suitable lands; transaction costs or conflicts associated with expanding into forests or other state-owned lands versus smallholder lands; net benefits of forest clearing; and greater access to infrastructure in already-cleared lands. We propose five hypotheses to be tested in further studies: (i) land availability mediates expansion pathways and the likelihood that land use is displaced to distant, rather than to local places; (ii) use of already-cleared lands is favored when commodity crops require access to infrastructure; (iii) in proportion to total agricultural expansion, large-scale actors generate more clearing of mature forests than smallholders; (iv) property rights and land tenure security influence the actors participating in commodity crop expansion, the form of land use displacement, and livelihood outcomes; (v) intensive commodity crops may fail to spare land when inducing displacement. We conclude that understanding pathways of commodity crop expansion is essential to improve land use governance.

  15. 7 CFR 1220.119 - Producer.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE SOYBEAN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Soybean Promotion and Research Order Definitions § 1220.119 Producer. The term producer means any person engaged in the growing of soybeans in the United States who owns, or who shares...

  16. 7 CFR 1220.119 - Producer.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE SOYBEAN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Soybean Promotion and Research Order Definitions § 1220.119 Producer. The term producer means any person engaged in the growing of soybeans in the United States who owns, or who shares...

  17. 7 CFR 1216.22 - Producer.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE PEANUT PROMOTION, RESEARCH, AND INFORMATION ORDER Peanut Promotion, Research, and Information Order Definitions § 1216.22 Producer. Producer means any person engaged in the production and sale of peanuts and who owns, or shares the ownership and...

  18. 7 CFR 1216.22 - Producer.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE PEANUT PROMOTION, RESEARCH, AND INFORMATION ORDER Peanut Promotion, Research, and Information Order Definitions § 1216.22 Producer. Producer means any person engaged in the production and sale of peanuts and who owns, or shares the ownership and...

  19. Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application-program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited comparable performance with traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.

  20. 7 CFR 1214.101 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE CHRISTMAS TREE PROMOTION, RESEARCH... act in the Administrator's stead. (b) Customs means the United States Customs and Border Protection or... Christmas trees annually in the United States, and who: (1) Owns, or shares the ownership and risk of loss...

  1. 7 CFR 1214.101 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE CHRISTMAS TREE PROMOTION, RESEARCH... act in the Administrator's stead. (b) Customs means the United States Customs and Border Protection or... Christmas trees annually in the United States, and who: (1) Owns, or shares the ownership and risk of loss...

  2. 7 CFR 1214.101 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE CHRISTMAS TREE PROMOTION, RESEARCH... act in the Administrator's stead. (b) Customs means the United States Customs and Border Protection or... Christmas trees annually in the United States, and who: (1) Owns, or shares the ownership and risk of loss...

  3. 7 CFR 1280.402 - Assessments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE LAMB PROMOTION, RESEARCH, AND... producer, feeder, or seedstock producer shares the proceeds received for the lamb or lamb products sold..., or livestock market in the business of receiving lambs or lamb products for sale on commission for or...

  4. 7 CFR 1245.102 - Voting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE U.S. HONEY PRODUCER RESEARCH... arrangement involving totally independent entities cooperating only to produce U.S. honey or honey products... referendum covering only that producer's share of the ownership of U.S. honey or honey products. (b) Proxy...

  5. 7 CFR 1212.19 - Producer.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE HONEY PACKERS AND IMPORTERS RESEARCH, PROMOTION, CONSUMER EDUCATION AND INDUSTRY INFORMATION ORDER Honey Packers and Importers Research, Promotion... person who is engaged in the production and sale of honey in any State and who owns, or shares the...

  6. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel-processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared- and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single- or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep large numbers of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared-memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  7. A multiarchitecture parallel-processing development environment

    NASA Technical Reports Server (NTRS)

    Townsend, Scott; Blech, Richard; Cole, Gary

    1993-01-01

    A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.

  8. FELIX: The new detector readout system for the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Ryu, Soo; ATLAS TDAQ Collaboration

    2017-10-01

    After the Phase-I upgrades (2019) of the ATLAS experiment, the Front-End Link eXchange (FELIX) system will be the interface between the data acquisition system and the detector front-end and trigger electronics. FELIX will function as a router between custom serial links and a commodity switch network using standard technologies (Ethernet or Infiniband) to communicate with commercial data collecting and processing components. The system architecture of FELIX will be described and the status of the firmware implementation and hardware development currently in progress will be presented.

  9. The Design and Evolution of Jefferson Lab's Jasmine Mass Storage System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan Hess; M. Andrew Kowalski; Michael Haddox-Schatz

    We describe the Jasmine mass storage system, in operation since 2001. Jasmine has scaled to meet the challenges of grid applications, petabyte-class storage, and hundreds of MB/sec of throughput using commodity hardware, Java technologies, and a small but focused development team. The evolution of the integrated disk cache system, which provides a managed online subset of the tape contents, is examined in detail. We describe how the storage system has grown to meet the special needs of the batch farm, grid clients, and new performance demands.
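
    The managed online subset of tape contents can be pictured as a least-recently-used staging cache. The sketch below is a minimal illustration of that idea, assuming a byte-budget cache and a read-through staging call; it is not Jasmine's actual cache code.

        # Illustrative sketch of a disk cache holding an online subset of
        # tape contents: least-recently-used files are evicted when a new
        # tape file must be staged in. Sizes and paths are assumed.
        from collections import OrderedDict

        class DiskCache:
            """Tiny read-through staging cache over a tape library."""
            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.files = OrderedDict()            # path -> size, oldest first

            def stage(self, path, size, fetch_from_tape):
                if path in self.files:                # hit: refresh LRU position
                    self.files.move_to_end(path)
                    return
                while self.files and sum(self.files.values()) + size > self.capacity:
                    self.files.popitem(last=False)    # evict least recently used
                fetch_from_tape(path)                 # miss: stage in from tape
                self.files[path] = size

        cache = DiskCache(100)
        cache.stage("/mss/run1.dat", 60, lambda p: print("staging", p))
        cache.stage("/mss/run2.dat", 60, lambda p: print("staging", p))  # evicts run1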

  10. LAPACKrc: Fast linear algebra kernels/solvers for FPGA accelerators

    NASA Astrophysics Data System (ADS)

    Gonzalez, Juan; Núñez, Rafael C.

    2009-07-01

    We present LAPACKrc, a family of FPGA-based linear algebra solvers able to achieve more than 100x speedup per commodity processor on certain problems. LAPACKrc subsumes some of the LAPACK and ScaLAPACK functionalities, and it also incorporates sparse direct and iterative matrix solvers. Current LAPACKrc prototypes demonstrate between 40x and 150x speedup compared with top-of-the-line hardware/software systems. A technology roadmap is in place to validate the current performance of LAPACKrc in HPC applications and to increase the computational throughput by factors of hundreds within the next few years.

  11. Shape and texture fused recognition of flying targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás

    2011-06-01

    This paper presents visual detection and recognition of flying targets (e.g., planes, missiles) based on automatically extracted shape and object texture information, for application areas such as alerting, recognition, and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparison to shape- and texture-based query results on a previously gathered real-life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera, and GPS).

  12. An evolutionary solution to anesthesia automated record keeping.

    PubMed

    Bicker, A A; Gage, J S; Poppers, P J

    1998-08-01

    Over the course of five years, an automated anesthesia record keeper has evolved through nearly a dozen development stages, each marked by new features and greater sophistication. Commodity PC hardware and software minimized development costs. Object-oriented analysis, design, and programming supported the process of change. In addition, we developed an evolutionary strategy that optimized motivation and risk management and maximized return on investment. Besides providing record keeping services, the system supports educational and research activities and, through a flexible plotting paradigm, supports each anesthesiologist's focus on physiological data during and after anesthesia.

  13. Solutions and debugging for data consistency in multiprocessors with noncoherent caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.

    1995-02-01

    We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.
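
    The false-sharing pattern described above can be detected from a write trace by checking accesses at cache-line granularity. The following is a minimal sketch in that spirit, assuming a 64-byte line and a (processor, address, size) trace format; it is not the paper's debugging environment.

        # Illustrative sketch of false-sharing detection on a write trace:
        # flag cache lines written by two or more processors at disjoint
        # byte offsets (trace format and line size are assumptions).
        from collections import defaultdict

        LINE_SIZE = 64  # assumed cache-line size in bytes

        def find_false_sharing(writes):
            """writes: iterable of (processor_id, address, size) tuples.
            Returns line numbers written by 2+ CPUs at non-overlapping
            offsets -- the classic false-sharing pattern."""
            by_line = defaultdict(lambda: defaultdict(set))  # line -> cpu -> offsets
            for cpu, addr, size in writes:
                for byte in range(addr, addr + size):
                    by_line[byte // LINE_SIZE][cpu].add(byte % LINE_SIZE)
            suspects = []
            for line, cpus in by_line.items():
                if len(cpus) < 2:
                    continue
                offs = list(cpus.values())
                # disjoint per-CPU offset sets => false sharing, not a true race
                if all(a.isdisjoint(b) for i, a in enumerate(offs)
                       for b in offs[i + 1:]):
                    suspects.append(line)
            return suspects

        trace = [(0, 0x1000, 4), (1, 0x1004, 4)]   # two CPUs, one 64-byte line
        print(find_false_sharing(trace))           # -> [64] (the line holding 0x1000)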

  14. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.

  15. Operating System Support for Shared Hardware Data Structures

    DTIC Science & Technology

    2013-01-31

    Carbon [73] uses hardware queues to improve fine-grained multitasking for Recognition, Mining, and Synthesis. Compared to software approaches...web transaction processing, data mining, and multimedia. Early work in database processors [114, 96, 79, 111] reduces the costs of relational database...assignment can be solved statically or dynamically. Static assignment determines offline which data structures are assigned to use HWDS resources and at

  16. An Analysis of Scalable GPU-Based Ray-Guided Volume Rendering

    PubMed Central

    Fogal, Thomas; Schiewe, Alexander; Krüger, Jens

    2014-01-01

    Volume rendering continues to be a critical method for analyzing large-scale scalar fields, in disciplines as diverse as biomedical engineering and computational fluid dynamics. Commodity desktop hardware has struggled to keep pace with data size increases, challenging modern visualization software to deliver responsive interactions for O(N³) algorithms such as volume rendering. We target the data type common in these domains: regularly structured data. In this work, we demonstrate that the major limitation of most volume rendering approaches is their inability to switch the data sampling rate (and thus data size) quickly. Using a volume renderer inspired by recent work, we demonstrate that the actual amount of visualizable data for a scene is typically bounded considerably below the memory available on a commodity GPU. Our instrumented renderer is used to investigate design decisions typically swept under the rug in the volume rendering literature. The renderer is freely available, with binaries for all major platforms as well as full source code, to encourage reproduction and comparison with future research. PMID:25506079
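
    The "switch the sampling rate quickly" observation suggests a simple policy: render the finest multiresolution level whose working set fits in GPU memory. The sketch below illustrates that selection under assumed sizes; it is not the instrumented renderer's code.

        # Illustrative sketch: pick the finest level of a multiresolution
        # volume whose working set still fits on the GPU. Each level halves
        # every axis, so level L costs 1/8**L of level 0. Sizes are assumed.
        def finest_fitting_level(dims, bytes_per_voxel, gpu_budget_bytes):
            """dims: full-resolution volume dimensions (x, y, z)."""
            level = 0
            while True:
                voxels = 1
                for d in dims:
                    voxels *= max(1, d >> level)
                if voxels * bytes_per_voxel <= gpu_budget_bytes:
                    return level
                level += 1

        # a 2048^3 16-bit volume against a 4 GiB budget needs a coarser level
        print(finest_fitting_level((2048, 2048, 2048), 2, 4 * 2**30))   # -> 1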

  17. Virtual suturing simulation based on commodity physics engine for medical learning.

    PubMed

    Choi, Kup-Sze; Chan, Sze-Ho; Pang, Wai-Man

    2012-06-01

    Development of virtual-reality medical applications is usually a complicated and labour intensive task. This paper explores the feasibility of using commodity physics engine to develop a suturing simulator prototype for manual skills training in the fields of nursing and medicine, so as to enjoy the benefits of rapid development and hardware-accelerated computation. In the prototype, spring-connected boxes of finite dimension are used to simulate soft tissues, whereas needle and thread are modelled with chained segments. Spherical joints are used to simulate suture's flexibility and to facilitate thread cutting. An algorithm is developed to simulate needle insertion and thread advancement through the tissue. Two-handed manipulations and force feedback are enabled with two haptic devices. Experiments on the closure of a wound show that the prototype is able to simulate suturing procedures at interactive rates. The simulator is also used to study a curvature-adaptive suture modelling technique. Issues and limitations of the proposed approach and future development are discussed.
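
    The spring-connected-boxes tissue model amounts to a mass-spring system advanced by explicit integration. Below is a minimal sketch of one such step; the stiffness, damping, and time-step values are assumptions, not the simulator's.

        # Illustrative sketch of one explicit Euler step for point masses
        # joined by springs, in the spirit of the spring-connected-boxes
        # tissue model (constants are assumed, not the prototype's).
        def spring_step(pos, vel, springs, k=50.0, damping=0.98, dt=0.005):
            """pos, vel: lists of [x, y, z] per mass; springs: (i, j, rest_length)."""
            forces = [[0.0, 0.0, 0.0] for _ in pos]
            for i, j, rest in springs:
                d = [pos[j][a] - pos[i][a] for a in range(3)]
                length = sum(c * c for c in d) ** 0.5 or 1e-9   # avoid divide by zero
                f = k * (length - rest)              # Hooke's law along the spring
                for a in range(3):
                    forces[i][a] += f * d[a] / length
                    forces[j][a] -= f * d[a] / length
            for n in range(len(pos)):
                for a in range(3):
                    vel[n][a] = (vel[n][a] + forces[n][a] * dt) * damping
                    pos[n][a] += vel[n][a] * dt
            return pos, vel

        pos = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]    # two boxes, spring stretched
        vel = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
        pos, vel = spring_step(pos, vel, [(0, 1, 1.0)])  # boxes pull together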

  18. 7 CFR 1207.305 - Producer.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan Definitions § 1207.305 Producer. Producer means any person engaged in the growing of 5 or more acres of potatoes who owns or shares the ownership and risk of loss of such...

  19. 7 CFR 1207.305 - Producer.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan Definitions § 1207.305 Producer. Producer means any person engaged in the growing of 5 or more acres of potatoes who owns or shares the ownership and risk of loss of such...

  20. 7 CFR 1207.305 - Producer.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan Definitions § 1207.305 Producer. Producer means any person engaged in the growing of 5 or more acres of potatoes who owns or shares the ownership and risk of loss of such...

  1. 7 CFR 1207.305 - Producer.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan Definitions § 1207.305 Producer. Producer means any person engaged in the growing of 5 or more acres of potatoes who owns or shares the ownership and risk of loss of such...

  2. 7 CFR 1207.305 - Producer.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN Potato Research and Promotion Plan Definitions § 1207.305 Producer. Producer means any person engaged in the growing of 5 or more acres of potatoes who owns or shares the ownership and risk of loss of such...

  3. Hawaiian Electric Advanced Inverter Test Plan - Result Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson; Nelson, Austin; Prabakar, Kumaraguru

    This presentation is intended to share the results of lab testing of five PV inverters with the Hawaiian Electric Companies and other stakeholders and interested parties. The tests included baseline testing of advanced inverter grid support functions, as well as distribution circuit-level tests to examine the impact of the PV inverters on simulated distribution feeders using power hardware-in-the-loop (PHIL) techniques.

  4. The design of flight hardware: Organizational and technical ideas from the MITRE/WPI Shuttle Program

    NASA Technical Reports Server (NTRS)

    Looft, F. J.

    1986-01-01

    The MITRE Corporation of Bedford, Mass., and Worcester Polytechnic Institute are developing several experiments for a future Shuttle flight. Several design practices for the development of the electrical equipment for the flight hardware have been standardized. Some of the ideas are presented, not as hard-and-fast rules, but rather in the interest of stimulating discussion and the sharing of such ideas.

  5. 7 CFR 1486.209 - How are program applications evaluated and approved?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... or new uses. Examples include food service development, market research on potential for consumer... MARKETS PROGRAM Eligibility, Applications, and Funding § 1486.209 How are program applications evaluated... affecting the level of U.S. exports and market share for the agricultural commodity/product; (4) The degree...

  6. Implementation of a parallel unstructured Euler solver on shared and distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Das, Raja; Saltz, Joel; Vermeland, R. E.

    1992-01-01

    An efficient three-dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared-memory computer and on an Intel Touchstone Delta distributed-memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between the two differing architectures are made.

  7. Shared Freight Transportation and Energy Commodities Phase One: Coal, Crude Petroleum, & Natural Gas Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, Shih-Miao; Hwang, Ho-Ling; Davidson, Diane

    2016-07-01

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive picture of nationwide freight movements among states and major metropolitan areas for all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns selected flows to the transportation network, and projects freight flow patterns into the future. The latest release of FAF is known as FAF4, with a base year of 2012. The FAF4 origin-destination-commodity-mode (ODCM) matrix is provided at national, state, and major metropolitan levels and for major gateways with significant freight activities (e.g., El Paso, Texas). The U.S. Department of Energy (DOE) is interested in using the FAF4 database for its strategic planning and policy analysis, particularly in association with the transportation of energy commodities. However, the geographic specification that DOE requires is a county-level ODCM matrix. Unfortunately, the geographic regions in the FAF4 database were not available at the level of detail DOE desired. Due to this limitation, DOE tasked Oak Ridge National Laboratory (ORNL) to assist in generating estimates of county-level flows for selected energy commodities by mode of transportation.

  8. Estimates of immediate effects on world markets of a hypothetical disruption to Russia’s supply of six mineral commodities

    USGS Publications Warehouse

    Safirova, Elena; Barry, James J.; Hastorun, Sinan; Matos, Grecia R.; Perez, Alberto Alexander; Bedinger, George M.; Bray, E. Lee; Jasinski, Stephen M.; Kuck, Peter H.; Loferski, Patricia J.

    2017-05-18

    The potential immediate effects of a hypothetical shock to Russia’s supply of selected mineral commodities on the world market and on individual countries were determined and monetized (in 2014 U.S. dollars). The mineral commodities considered were aluminum (refined primary), nickel (refined primary), palladium (refined) and platinum (refined), potash, and titanium (mill products), and the regions and countries of primary interest were the United States, the European Union (EU–28), and China. The shock is assumed to have infinite duration, but only the immediate effects, those limited by a 1-year period, are considered. A methodology for computing and monetizing the potential impacts was developed. Then the data pertaining to all six mineral commodities were collected and the most likely effects were computed. Because of the uncertainties associated with some of the data, sensitivity analyses were conducted to confirm the validity of the results. Results indicate that the impact on the United States arising from a shock to Russia’s supply, in terms of the value of net exports, would range from a gain of $336 million for titanium mill products to a loss of $237 million for potash; thus, the overall effect of a supply shock is likely to be quite modest. The study also demonstrates that, taken alone, Russia’s share in the world production of a particular commodity is not necessarily indicative of the size of potential impacts resulting from a supply shock; other factors, such as prices, domestic production, and the structure of international commodity flows were found to be important as well.

  9. Centralized Planning for Multiple Exploratory Robots

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Rabideau, Gregg; Chien, Steve; Barrett, Anthony

    2005-01-01

    A computer program automatically generates plans for a group of robotic vehicles (rovers) engaged in geological exploration of terrain. The program rapidly generates multiple command sequences that can be executed simultaneously by the rovers. Starting from a set of high-level goals, the program creates a sequence of commands for each rover while respecting hardware constraints and limitations on resources of each rover and of hardware (e.g., a radio communication terminal) shared by all the rovers. First, a separate model of each rover is loaded into a centralized planning subprogram. The centralized planning software uses the models of the rovers plus an iterative repair algorithm to resolve conflicts posed by demands for resources and by constraints associated with all the rovers and the shared hardware. During repair, heuristics are used to make planning decisions that will result in solutions that are better and are found faster than would otherwise be possible. In particular, techniques from prior solutions of the multiple-traveling-salesmen problem are used as heuristics to generate plans in which the paths taken by the rovers to assigned scientific targets are shorter than they would otherwise be.
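
    The multiple-traveling-salesmen heuristics mentioned above can be pictured with a simple greedy rule: each new target is assigned to the rover whose current position is nearest. The sketch below is an illustration under assumed 2D coordinates; it is not the planner's actual algorithm.

        # Illustrative sketch of a multiple-traveling-salesmen style
        # heuristic for assigning science targets to rovers (coordinates
        # and the greedy insertion rule are assumptions).
        import math

        def assign_targets(rover_starts, targets):
            """Greedy insertion: each target goes to the rover whose current
            position is nearest, which tends to shorten total traverse length."""
            positions = list(rover_starts)           # current position per rover
            routes = [[] for _ in rover_starts]
            for t in targets:
                dists = [math.dist(p, t) for p in positions]
                best = dists.index(min(dists))       # closest rover takes the target
                routes[best].append(t)
                positions[best] = t                  # rover moves to the target
            return routes

        print(assign_targets([(0, 0), (10, 10)],
                             [(1, 1), (9, 9), (2, 0), (10, 8)]))
        # -> [[(1, 1), (2, 0)], [(9, 9), (10, 8)]]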

  10. Reciprocal Exchange Patterned by Market Forces Helps Explain Cooperation in a Small-Scale Society.

    PubMed

    Jaeggi, Adrian V; Hooper, Paul L; Beheim, Bret A; Kaplan, Hillard; Gurven, Michael

    2016-08-22

    Social organisms sometimes depend on help from reciprocating partners to solve adaptive problems [1], and individual cooperation strategies should aim to offer high supply commodities at low cost to the donor in exchange for high-demand commodities with large return benefits [2, 3]. Although such market dynamics have been documented in some animals [4-7], naturalistic studies of human cooperation are often limited by focusing on single commodities [8]. We analyzed cooperation in five domains (meat sharing, produce sharing, field labor, childcare, and sick care) among 2,161 household dyads of Tsimane' horticulturalists, using Bayesian multilevel models and information-theoretic model comparison. Across domains, the best-fit models included kinship and residential proximity, exchanges in kind and across domains, measures of supply and demand and their interactions with exchange, and household-specific exchange slopes. In these best models, giving, receiving, and reciprocating were to some extent shaped by market forces, and reciprocal exchange across domains had a strong partial effect on cooperation independent of more exogenous factors like kinship and proximity. Our results support the view that reciprocal exchange can provide a reliable solution to adaptive problems [8-11]. Although individual strategies patterned by market forces may generate gains from trade in any species [3], humans' slow life history and skill-intensive foraging niche favor specialization and create interdependence [12, 13], thus stabilizing cooperation and fostering divisions of labor even in informal economies [14, 15]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Nebula: reconstruction and visualization of scattering data in reciprocal space.

    PubMed

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H

    2015-04-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.

  12. Nebula: reconstruction and visualization of scattering data in reciprocal space

    PubMed Central

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H.

    2015-01-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware. PMID:25844083

  13. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

    The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full-motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IaaS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications for higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.

  14. DNA Assembly in 3D Printed Fluidics

    PubMed Central

    Patrick, William G.; Nielsen, Alec A. K.; Keating, Steven J.; Levy, Taylor J.; Wang, Che-Wei; Rivera, Jaime J.; Mondragón-Palomino, Octavio; Carr, Peter A.; Voigt, Christopher A.; Oxman, Neri; Kong, David S.

    2015-01-01

    The process of connecting genetic parts—DNA assembly—is a foundational technology for synthetic biology. Microfluidics present an attractive solution for minimizing use of costly reagents, enabling multiplexed reactions, and automating protocols by integrating multiple protocol steps. However, microfluidics fabrication and operation can be expensive and requires expertise, limiting access to the technology. With advances in commodity digital fabrication tools, it is now possible to directly print fluidic devices and supporting hardware. 3D printed micro- and millifluidic devices are inexpensive, easy to make and quick to produce. We demonstrate Golden Gate DNA assembly in 3D-printed fluidics with reaction volumes as small as 490 nL, channel widths as fine as 220 microns, and per unit part costs ranging from $0.61 to $5.71. A 3D-printed syringe pump with an accompanying programmable software interface was designed and fabricated to operate the devices. Quick turnaround and inexpensive materials allowed for rapid exploration of device parameters, demonstrating a manufacturing paradigm for designing and fabricating hardware for synthetic biology. PMID:26716448

  15. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
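
    The packing idea can be pictured by evaluating ȳ = Ax̄ + b̄ four components at a time, the way a shader consumes vec4 multiply-add instructions. The sketch below uses numpy as a stand-in for GPU SIMD and is illustrative only; it is not the paper's shader-code generator.

        # Illustrative sketch of four-wide packing: evaluate y = A x + b in
        # groups of 4 components, as a pixel shader would with vec4
        # multiply-add instructions (numpy stands in for the GPU here).
        import numpy as np

        def linear_expr_simd(A, x, b):
            n = A.shape[0]
            y = np.empty(n, dtype=A.dtype)
            for i in range(0, n, 4):                 # one "vec4" per iteration
                rows = A[i:i + 4]                    # up to 4 rows of A
                y[i:i + 4] = rows @ x + b[i:i + 4]   # fused multiply-add, 4-wide
            return y

        A = np.arange(64, dtype=np.float32).reshape(8, 8)
        x = np.ones(8, dtype=np.float32)
        b = np.zeros(8, dtype=np.float32)
        assert np.allclose(linear_expr_simd(A, x, b), A @ x + b)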

  16. Eye gaze correction with stereovision for video-teleconferencing.

    PubMed

    Yang, Ruigang; Zhang, Zhengyou

    2004-07-01

    The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: Based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.

  17. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  18. Space Shuttle Program (SSP) Shock Test and Specification Experience for Reusable Flight Hardware Equipment

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    2012-01-01

    As commercial companies are nearing a preliminary design review level of design maturity, several companies are identifying the process for qualifying their multi-use electrical and mechanical components for various shock environments, including pyrotechnic, mortar firing, and water impact. The experience in quantifying the environments consists primarily of recommendations from Military Standard-1540, Product Verification Requirements for Launch, Upper Stage, and Space Vehicles. Therefore, the NASA Engineering and Safety Center (NESC) formed a team of NASA shock experts to share the NASA experience with qualifying hardware for the Space Shuttle Program (SSP) and other applicable programs and projects. Several team teleconferences were held to discuss past experience and to share ideas of possible methods for qualifying components for multiple missions. This document contains the information compiled from those discussions.

  19. 78 FR 785 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-04

    ...; securities; options on securities and indices; futures contracts; options on futures contracts; forward... options, futures, or options on futures on, Shares through ETP Holders, in connection with such ETP... pool operator with the Commodity Futures Trading Commission (``CFTC'') and is a member of the National...

  20. 5 CFR 2634.303 - Purchases, sales, and exchanges.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... defined in § 2634.105(l) of this part; and (2) Of stocks, bonds, commodity futures, mutual fund shares...) Transactions involving Treasury bills, notes, and bonds; money market mutual funds or accounts; and personal... involving portfolio holdings of trusts and investment funds described in § 2634.310 (b) and (c) of this...

  1. Resource Management and Risk Mitigation in Online Storage Grids

    ERIC Educational Resources Information Center

    Du, Ye

    2010-01-01

    This dissertation examines the economic value of online storage resources that could be traded and shared as potential commodities and the consequential investments and deployment of such resources. The value proposition of emergent business models such as Akamai and Amazon S3 in online storage grids is capacity provision and content delivery at…

  2. Ship and Shoot

    NASA Technical Reports Server (NTRS)

    Woods, Ron

    2012-01-01

    Ron Woods shared incredibly valuable insights gained during his 28 years at the Kennedy Space Center (KSC) packaging Flight Crew Equipment for shuttle and ISS missions. In particular, Woods shared anecdotes and photos from various processing events. The moral of these stories and the main focus of this discussion were the additional processing efforts and effects related to a "ship-and-shoot" philosophy toward flight hardware.

  3. Implementing the concurrent operation of sub-arrays in the ALMA correlator

    NASA Astrophysics Data System (ADS)

    Amestica, Rodrigo; Perez, Jesus; Lacasse, Richard; Saez, Alejandro

    2016-07-01

    The ALMA correlator processes the digitized signals from 64 individual antennas to produce a grand total of 2016 correlated baselines, with run-time selectable lag resolution and integration time. The online software system can process a maximum of 125M visibilities per second, producing an archiving data rate close to one sixteenth of that figure (7.8M visibilities per second, with a network transfer limit of 60 MB/sec). Mechanisms in the correlator hardware design make it possible to split the total number of antennas in the array into smaller subsets, or sub-arrays, such that they can share correlator resources while executing independent observations. The software part of the sub-system is responsible for configuring and scheduling correlator resources in such a way that observations among independent sub-arrays occur simultaneously while internally sharing correlator resources under a cooperative arrangement. Configuration of correlator modes through the CAN-bus interface and periodic geometric delay updates are the most relevant activities to schedule concurrently while observations proceed at the same time among a number of sub-arrays. For that to work correctly, the software interface to sub-arrays schedules shared correlator resources sequentially before observations actually start on each sub-array. Start times for specific observations are optimized and reported back to the higher-level observing software. Once that initial sequential phase has taken place, simultaneous executions and recording of correlated data across different sub-arrays move forward concurrently, sharing the local network to broadcast results to other software sub-systems. This paper presents an overview of the different hardware and software actors within the correlator sub-system that implement the concurrency and synchronization needed for seamless and simultaneous operation of multiple sub-arrays, the limitations stemming from the resource-sharing nature of the correlator and from the digital technology available in the correlator hardware, and the milestones so far reached by this new ALMA feature.
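
    The two-phase pattern described above (sequential configuration of shared resources, then concurrent observing) can be pictured with a lock around the configuration step. The sketch below is a minimal illustration with assumed names and timings; it is not the ALMA control software.

        # Illustrative sketch: correlator resources are configured
        # sequentially under a lock, then the sub-array observations run
        # concurrently. Names and sleep times are stand-in assumptions.
        import threading, time

        config_lock = threading.Lock()     # serializes shared-resource setup

        def run_subarray(name, antennas):
            with config_lock:              # sequential phase: configure hardware
                print(f"{name}: configuring correlator for {len(antennas)} antennas")
                time.sleep(0.1)            # stand-in for CAN-bus configuration
            print(f"{name}: observing")    # concurrent phase: independent observing
            time.sleep(0.5)                # stand-in for integration time
            print(f"{name}: done")

        subarrays = [("subarray-A", range(40)), ("subarray-B", range(24))]
        threads = [threading.Thread(target=run_subarray, args=s) for s in subarrays]
        for t in threads: t.start()
        for t in threads: t.join()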

  4. Scalable isosurface visualization of massive datasets on commodity off-the-shelf clusters

    PubMed Central

    Bajaj, Chandrajit

    2009-01-01

    Tomographic imaging and computer simulations are increasingly yielding massive datasets. Interactive and exploratory visualizations have rapidly become indispensable tools for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering in conjunction with a new specialized piece of image compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks. It statically partitions the volume data to parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces. PMID:19756231
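
    The interval query at the heart of the out-of-core algorithm reduces to a range test: only bricks whose minimum-to-maximum value range straddles the isovalue can contain isosurface cells. A minimal sketch with assumed brick metadata, not the paper's external interval tree:

        # Illustrative sketch of the interval test behind I/O-optimal
        # isosurface extraction: only bricks whose [min, max] range contains
        # the isovalue need to be read from disk.
        def bricks_to_load(brick_ranges, isovalue):
            """brick_ranges: dict of brick_id -> (min_value, max_value).
            Returns bricks that can contain isosurface cells; an external
            interval tree answers the same query in O(log n + k) I/Os."""
            return [bid for bid, (lo, hi) in brick_ranges.items()
                    if lo <= isovalue <= hi]

        ranges = {"b0": (0.0, 0.4), "b1": (0.3, 0.9), "b2": (0.8, 1.0)}
        print(bricks_to_load(ranges, 0.5))   # -> ['b1']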

  5. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.

  6. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    PubMed Central

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers, 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex-inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with discussion of their differences, pros, and cons. PMID:22518097
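
    Both architectures ultimately evaluate the same 2D convolution; the frame-based style time-multiplexes its multipliers and adders over exactly the loop nest below. A minimal reference sketch, not either processor's implementation:

        # Illustrative sketch of the 2D convolution at the heart of both
        # ConvNet styles discussed above (valid-region, no padding).
        import numpy as np

        def conv2d(image, kernel):
            kh, kw = kernel.shape
            oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
            out = np.zeros((oh, ow), dtype=image.dtype)
            for y in range(oh):
                for x in range(ow):        # template match at every position
                    out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
            return out

        img = np.random.rand(8, 8).astype(np.float32)
        edge = np.array([[1, 0, -1]] * 3, dtype=np.float32)  # simple edge kernel
        print(conv2d(img, edge).shape)     # -> (6, 6)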

  7. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing.

    PubMed

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; Lecun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers, 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex-inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process video information frame by frame in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGAs have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with discussion of their differences, pros, and cons.

  8. Shared-resource computing for small research labs.

    PubMed

    Ackerman, M J

    1982-04-01

    A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared-resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  9. A distributed, graphical user interface based, computer control system for atomic physics experiments

    NASA Astrophysics Data System (ADS)

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
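
    The buffer-shortening argument can be made concrete: a fixed 100 ns clock needs one buffer word per tick, while a variable-frequency clock needs one word per output change. The sketch below illustrates the comparison with an assumed event list; it is not the control system's code.

        # Illustrative sketch of why a variable-frequency clock shortens
        # buffers: one word per fixed tick versus one (value, hold) word per
        # output change. Event list and units are assumptions.
        def fixed_rate_buffer(events, total_ns, dt_ns=100):
            """events: sorted (time_ns, value). One buffer word per tick."""
            buf, val, i = [], 0, 0
            for t in range(0, total_ns, dt_ns):
                while i < len(events) and events[i][0] <= t:
                    val = events[i][1]
                    i += 1
                buf.append(val)
            return buf

        def variable_clock_buffer(events, total_ns):
            """One (value, hold_duration_ns) word per output change."""
            words = []
            for (t, v), nxt in zip(events, events[1:] + [(total_ns, None)]):
                words.append((v, nxt[0] - t))
            return words

        events = [(0, 0), (250, 1), (350, 0)]       # a 100 ns pulse at t = 250 ns
        print(len(fixed_rate_buffer(events, 1_000_000)))   # -> 10000 words
        print(variable_clock_buffer(events, 1_000_000))
        # -> [(0, 250), (1, 100), (0, 999650)], just 3 words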

  10. A distributed, graphical user interface based, computer control system for atomic physics experiments.

    PubMed

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.

  11. Hardware/software codesign for embedded RISC core

    NASA Astrophysics Data System (ADS)

    Liu, Peng

    2001-12-01

    This paper describes the hardware/software codesign method for the extendible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has a small controller that manages the stage's status and the cooperation among pipeline phases. Because the description uses a high-level language and the control structure is distributed, the VIRGO core is highly extensible and can be adapted to application requirements. Using the high-definition television MPEG2 MPHL decoder chip as a case study, we constructed a hardware/software codesign virtual prototyping machine for investigating the VIRGO core instruction set architecture, system-on-chip memory size requirements, and system-on-chip software. The virtual prototyping platform also allows evaluation of the system-on-chip design and the RISC instruction set.

  12. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.
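
    The MapReduce template can be pictured in plain Python: a map task per genome and a reduce step that intersects the results. The sketch below uses toy genome strings and length-k seeds as a stand-in for the suffix analysis; it is not QM's JavaScript client library.

        # Illustrative sketch of a MapReduce-style shared-subsequence
        # analysis: map tasks run per genome, the reduce step intersects.
        # The genome strings and k-mer seeds are toy assumptions.
        from functools import reduce

        def mapper(genome, k=3):
            """Map task: emit the set of length-k seeds found in one genome."""
            return {genome[i:i + k] for i in range(len(genome) - k + 1)}

        def reducer(a, b):
            return a & b                     # keep only seeds shared so far

        genomes = ["ACGTACGT", "TTACGTAA", "GGACGTCC"]
        shared = reduce(reducer, (mapper(g) for g in genomes))
        print(shared)                        # -> {'ACG', 'CGT'}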

  13. Stream computing for biomedical signal processing: A QRS complex detection case-study.

    PubMed

    Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P

    2015-01-01

    Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of streams processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
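
    A stream-computing operator for QRS detection consumes one ECG sample at a time and keeps only a small running state. The sketch below is a deliberately simplified detector (a smoothed slope-energy threshold with a refractory period); the constants are assumptions, far cruder than a production Pan-Tompkins pipeline.

        # Illustrative sketch of a streaming QRS detector: process samples
        # one at a time, as a stream operator would, flagging R-peaks when
        # a smoothed slope-energy measure crosses an assumed threshold.
        def qrs_stream(samples, fs=250, threshold=0.05, refractory_s=0.2):
            """Yield sample indices of detected beats."""
            refractory = int(refractory_s * fs)    # min samples between beats
            prev, energy, last_beat = 0.0, 0.0, -refractory
            for n, x in enumerate(samples):
                slope = x - prev                   # crude derivative filter
                prev = x
                energy = 0.9 * energy + 0.1 * slope * slope   # smoothed energy
                if energy > threshold and n - last_beat >= refractory:
                    last_beat = n
                    yield n

        ecg = [1.0 if n % 250 == 0 else 0.0 for n in range(1000)]  # toy impulses
        print(list(qrs_stream(ecg)))           # -> [0, 250, 500, 750]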

  14. Copilot: Monitoring Embedded Systems

    NASA Technical Reports Server (NTRS)

    Pike, Lee; Wegmann, Nis; Niller, Sebastian; Goodloe, Alwyn

    2012-01-01

    Runtime verification (RV) is a natural fit for ultra-critical systems, where correctness is imperative. In ultra-critical systems, even if the software is fault-free, because of the inherent unreliability of commodity hardware and the adversity of operational environments, processing units (and their hosted software) are replicated, and fault-tolerant algorithms are used to compare the outputs. We investigate both software monitoring in distributed fault-tolerant systems, as well as implementing fault-tolerance mechanisms using RV techniques. We describe the Copilot language and compiler, specifically designed for generating monitors for distributed, hard real-time systems. We also describe two case-studies in which we generated Copilot monitors in avionics systems.
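
    The fault-tolerance pattern the monitors implement can be pictured as majority voting over replicated outputs. A minimal sketch in plain Python, not the Copilot language itself:

        # Illustrative sketch of comparing replicated outputs and voting
        # out a faulty unit (tolerance value is an assumption).
        from collections import Counter

        def majority_vote(replicas, tolerance=1e-6):
            """replicas: one output per replicated processing unit.
            Returns (voted value, indices of units that disagree)."""
            rounded = [round(v / tolerance) * tolerance for v in replicas]
            voted, _ = Counter(rounded).most_common(1)[0]
            bad = [i for i, v in enumerate(rounded) if v != voted]
            return voted, bad

        # two healthy units agree; the third is voted out
        print(majority_vote([101.2, 101.2, 250.0]))   # -> (~101.2, [2])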

  15. Experiences with Transitioning Science Data Production from a Symmetric Multiprocessor Platform to a Linux Cluster Environment

    NASA Astrophysics Data System (ADS)

    Walter, R. J.; Protack, S. P.; Harris, C. J.; Caruthers, C.; Kusterer, J. M.

    2008-12-01

    NASA's Atmospheric Science Data Center at the NASA Langley Research Center performs all of the science data processing for the Multi-angle Imaging SpectroRadiometer (MISR) instrument. MISR is one of the five remote sensing instruments flying aboard NASA's Terra spacecraft. From the time of Terra launch in December 1999 until February 2008, all MISR science data processing was performed on a Silicon Graphics, Inc. (SGI) platform. However, dramatic improvements in commodity computing technology coupled with steadily declining project budgets during that period eventually made transitioning MISR processing to a commodity computing environment both feasible and necessary. The Atmospheric Science Data Center has successfully ported the MISR science data processing environment from the SGI platform to a Linux cluster environment. There were a multitude of technical challenges associated with this transition. Even though the core architecture of the production system did not change, the manner in which it interacted with underlying hardware was fundamentally different. In addition, there are more potential throughput bottlenecks in a cluster environment than there are in a symmetric multiprocessor environment like the SGI platform and each of these had to be addressed. Once all the technical issues associated with the transition were resolved, the Atmospheric Science Data Center had a MISR science data processing system with significantly higher throughput than the SGI platform at a fraction of the cost. In addition to the commodity hardware, free and open source software such as S4PM, Sun Grid Engine, PostgreSQL and Ganglia play a significant role in the new system. Details of the technical challenges and resolutions, software systems, performance improvements, and cost savings associated with the transition will be discussed. The Atmospheric Science Data Center in Langley's Science Directorate leads NASA's program for the processing, archival and distribution of Earth science data in the areas of radiation budget, clouds, aerosols, and tropospheric chemistry. The Data Center was established in 1991 to support NASA's Earth Observing System and the U.S. Global Change Research Program. It is unique among NASA data centers in the size of its archive, cutting edge computing technology, and full range of data services. For more information regarding ASDC data holdings, documentation, tools and services, visit http://eosweb.larc.nasa.gov

  16. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE PAGES

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...

    2017-08-29

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
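
    For reference, the generic computation these algorithms accelerate is classical progressive filling: repeatedly find the bottleneck link, freeze the flows through it at the link's fair share, and release their capacity elsewhere. A minimal sketch of that baseline, with an assumed toy topology; the cost per iteration is what becomes infeasible at scale and motivates the fat-tree-specific algorithms.

        # Illustrative sketch of classical progressive filling for max-min
        # fair rates over multi-commodity flows (topology is a toy example).
        def max_min_fair(link_capacity, flows):
            """link_capacity: dict link -> capacity;
            flows: dict flow -> set of links it traverses.
            Returns dict flow -> max-min fair rate."""
            rate, active, cap = {}, set(flows), dict(link_capacity)
            while active:
                # fair share each link could still give its remaining flows
                share = {l: cap[l] / sum(1 for f in active if l in flows[f])
                         for l in cap if any(l in flows[f] for f in active)}
                bottleneck = min(share, key=share.get)
                r = share[bottleneck]
                for f in [f for f in active if bottleneck in flows[f]]:
                    rate[f] = r               # flows through the bottleneck freeze
                    active.remove(f)
                    for l in flows[f]:
                        cap[l] -= r           # release their capacity elsewhere
            return rate

        caps = {"up": 10.0, "down": 10.0, "x": 4.0}
        fl = {"f1": {"up", "x"}, "f2": {"x", "down"}, "f3": {"up", "down"}}
        print(max_min_fair(caps, fl))   # -> {'f1': 2.0, 'f2': 2.0, 'f3': 8.0}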

  17. Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott

    Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.

  18. 78 FR 62751 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... of the WisdomTree Bloomberg U.S. Dollar Bullish Fund, WisdomTree Bloomberg U.S. Dollar Bearish Fund, and the WisdomTree Commodity Currency Bearish Fund Under NYSE Arca Equities Rule 8.600 October 8, 2013... of the WisdomTree Trust (``Trust'') under NYSE Arca Equities Rule 8.600 (``Managed Fund Shares...

  19. 17 CFR 230.152a - Offer or sale of certain fractional interests.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... interests for the purpose of combining such interests into whole shares, and for the sale of such number of... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Offer or sale of certain... COMMISSION GENERAL RULES AND REGULATIONS, SECURITIES ACT OF 1933 General § 230.152a Offer or sale of certain...

  20. 17 CFR 230.152a - Offer or sale of certain fractional interests.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... interests for the purpose of combining such interests into whole shares, and for the sale of such number of... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Offer or sale of certain... COMMISSION GENERAL RULES AND REGULATIONS, SECURITIES ACT OF 1933 General § 230.152a Offer or sale of certain...

  1. 17 CFR 230.152a - Offer or sale of certain fractional interests.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... interests for the purpose of combining such interests into whole shares, and for the sale of such number of... 17 Commodity and Securities Exchanges 2 2013-04-01 2013-04-01 false Offer or sale of certain... COMMISSION GENERAL RULES AND REGULATIONS, SECURITIES ACT OF 1933 General § 230.152a Offer or sale of certain...

  2. 17 CFR 230.152a - Offer or sale of certain fractional interests.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... interests for the purpose of combining such interests into whole shares, and for the sale of such number of... 17 Commodity and Securities Exchanges 3 2014-04-01 2014-04-01 false Offer or sale of certain... COMMISSION GENERAL RULES AND REGULATIONS, SECURITIES ACT OF 1933 General § 230.152a Offer or sale of certain...

  3. 17 CFR 230.152a - Offer or sale of certain fractional interests.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... interests for the purpose of combining such interests into whole shares, and for the sale of such number of... 17 Commodity and Securities Exchanges 2 2012-04-01 2012-04-01 false Offer or sale of certain... COMMISSION GENERAL RULES AND REGULATIONS, SECURITIES ACT OF 1933 General § 230.152a Offer or sale of certain...

  4. 17 CFR 270.12d1-1 - Exemptions for investments in money market funds.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... money market funds. 270.12d1-1 Section 270.12d1-1 Commodity and Securities Exchanges SECURITIES AND... Exemptions for investments in money market funds. (a) Exemptions for acquisition of money market fund shares... issued by a money market fund; and (2) A money market fund, any principal underwriter thereof, and a...

  5. 17 CFR 270.12d1-1 - Exemptions for investments in money market funds.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... money market funds. 270.12d1-1 Section 270.12d1-1 Commodity and Securities Exchanges SECURITIES AND... Exemptions for investments in money market funds. (a) Exemptions for acquisition of money market fund shares... issued by a money market fund; and (2) A money market fund, any principal underwriter thereof, and a...

  6. 17 CFR 270.12d1-1 - Exemptions for investments in money market funds.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... money market funds. 270.12d1-1 Section 270.12d1-1 Commodity and Securities Exchanges SECURITIES AND... Exemptions for investments in money market funds. (a) Exemptions for acquisition of money market fund shares... issued by a money market fund; and (2) A money market fund, any principal underwriter thereof, and a...

  7. 17 CFR 270.12d1-1 - Exemptions for investments in money market funds.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... money market funds. 270.12d1-1 Section 270.12d1-1 Commodity and Securities Exchanges SECURITIES AND... Exemptions for investments in money market funds. (a) Exemptions for acquisition of money market fund shares... issued by a money market fund; and (2) A money market fund, any principal underwriter thereof, and a...

  8. Space Costing: Who Should Pay for the Use of College Space? A Report.

    ERIC Educational Resources Information Center

    Zacher, Sy

    The costs of owning and operating physical facilities are consuming an increasing share of the budgets of colleges and universities. In the past, academic and operating units of colleges have viewed their space as a free commodity and often used it extravagantly. Space costing is a method of cost accounting the space and operating and maintenance…

  9. The Jet Propulsion Laboratory shared control architecture and implementation

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad

    1990-01-01

    A hardware and software environment for shared control of telerobot task execution has been implemented. Modes of task execution range from fully teleoperated to fully autonomous, as well as shared, where hand controller inputs from the human operator are mixed with autonomous system inputs in real time. The objective of the shared control environment is to aid the telerobot operator during task execution by merging real-time operator control from hand controllers with autonomous control to simplify task execution for the operator. The operator is the principal command source and can assign as much autonomy for a task as desired. The shared control hardware environment consists of two PUMA 560 robots, two 6-axis force reflecting hand controllers, Universal Motor Controllers for each of the robots and hand controllers, a SUN4 computer, and a VME chassis containing 68020 processors and input/output boards. The operator interface for shared control, the User Macro Interface (UMI), is a menu-driven interface used to design a task and assign the levels of teleoperated and autonomous control. The operator also sets up the system monitor, which checks safety limits during task execution. Cartesian-space degrees of freedom for teleoperated and/or autonomous control inputs are selected within UMI, as well as the weightings for the teleoperation and autonomous inputs. These are then used during task execution to determine the mix of teleoperation and autonomous inputs. Some of the autonomous control primitives available to the user are Joint-Guarded-Move, Cartesian-Guarded-Move, Move-To-Touch, Pin-Insertion/Removal, Door/Crank-Turn, Bolt-Turn, and Slide. The operator can execute a task using pure teleoperation or mix control execution from the autonomous primitives with teleoperated inputs. Presently the shared control environment supports single-arm task execution; work is underway to extend it to dual-arm control. Teleoperation during shared control is Cartesian-space control only, and no force reflection is provided. Force-reflecting teleoperation and joint-space operator inputs are planned extensions to the environment.
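
    The per-axis mixing of operator and autonomous inputs described above can be pictured with a short sketch. This is a hypothetical illustration of weighted Cartesian blending, not the actual UMI code; the weight vectors and commands are made up.

    ```python
    import numpy as np

    def blend(teleop, auto, w_teleop, w_auto):
        """Per-axis weighted mix of 6-DOF Cartesian commands
        (vx, vy, vz, wx, wy, wz)."""
        return w_teleop * np.asarray(teleop) + w_auto * np.asarray(auto)

    # Example: the operator keeps translation, autonomy handles rotation.
    w_t = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
    w_a = 1.0 - w_t
    hand_controller = np.array([0.05, 0.0, -0.02, 0.0, 0.0, 0.0])  # operator
    primitive_out = np.array([0.0, 0.0, 0.0, 0.01, 0.0, 0.0])      # autonomous
    print(blend(hand_controller, primitive_out, w_t, w_a))
    ```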

  10. Orthos, an alarm system for the ALICE DAQ operations

    NASA Astrophysics Data System (ADS)

    Chapeland, Sylvain; Carena, Franco; Carena, Wisla; Chibante Barroso, Vasco; Costa, Filippo; Denes, Ervin; Divia, Roberto; Fuchs, Ulrich; Grigore, Alexandru; Simonetti, Giuseppe; Soos, Csaba; Telesca, Adriana; Vande Vyvre, Pierre; von Haller, Barthelemy

    2012-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector studying the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). The DAQ (Data Acquisition System) facilities handle the data flow from the detector electronics up to the mass storage. The DAQ system is based on a large farm of commodity hardware consisting of more than 600 devices (Linux PCs, storage, network switches), and controls hundreds of distributed hardware and software components interacting together. This paper presents Orthos, the alarm system used to detect, log, report, and follow up on abnormal situations on the DAQ machines at the experimental area. The main objective of this package is to integrate alarm detection and notification mechanisms with a full-featured issue tracker, in order to prioritize, assign, and fix system failures optimally. This tool relies on a database repository with a logic engine, SQL interfaces to inject or query metrics, and dynamic web pages for user interaction. We describe the system architecture, the technologies used for the implementation, and the integration with existing monitoring tools.

  11. The Evolution of Software and Its Impact on Complex System Design in Robotic Spacecraft Embedded Systems

    NASA Technical Reports Server (NTRS)

    Butler, Roy

    2013-01-01

    The growth in computer hardware performance, coupled with reduced energy requirements, has led to a rapid expansion of the resources available to software systems, driving them towards greater logical abstraction, flexibility, and complexity. This shift in focus from compacting functionality into a limited field towards developing layered, multi-state architectures in a grand field has both driven and been driven by the history of embedded processor design in the robotic spacecraft industry. The combinatorial growth of interprocess conditions is accompanied by benefits (concurrent development, situational autonomy, and evolution of goals) and drawbacks (late integration, non-deterministic interactions, and multifaceted anomalies) in achieving mission success, as illustrated by the case of the Mars Reconnaissance Orbiter. Approaches to optimizing the benefits while mitigating the drawbacks have taken the form of the formalization of requirements, modular design practices, extensive system simulation, and spacecraft data trend analysis. The growth of hardware capability and software complexity can be expected to continue, with future directions including stackable commodity subsystems, computer-generated algorithms, runtime reconfigurable processors, and greater autonomy.

  12. Multi-terabyte EIDE disk arrays running Linux RAID5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.
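
    As a toy illustration of the parity idea that lets RAID-5 survive a single disk failure, the sketch below computes XOR parity for one stripe and rebuilds a lost block; block contents and disk counts are arbitrary.

    ```python
    def xor_blocks(blocks):
        """XOR equal-length byte strings; this is the RAID-5 parity."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    stripe = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
    parity = xor_blocks(stripe)            # stored on a fourth disk

    # Lose disk 1: XOR of the survivors and the parity rebuilds its block.
    assert xor_blocks([stripe[0], stripe[2], parity]) == stripe[1]
    ```

    Two simultaneous failures leave the XOR underdetermined, which is exactly the single-failure limitation the abstract notes.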

  13. PANDA: A distributed multiprocessor operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chubb, P.

    1989-01-01

    PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.

  14. Innovations in dynamic test restraint systems

    NASA Technical Reports Server (NTRS)

    Fuld, Christopher J.

    1990-01-01

    Recent launch system development programs have led to a new generation of large-scale dynamic tests. The test scenarios share one common requirement: restrain and capture massive high-velocity flight hardware with no structural damage. The Space Systems Lab of McDonnell Douglas developed a remarkably simple and cost-effective approach to such testing using ripstitch energy absorbers adapted from the sport of technical rock climbing. The proven reliability of the capture system concept has led to a wide variety of applications in test system design and in aerospace hardware design.

  15. Parallel Computing for Probabilistic Response Analysis of High Temperature Composites

    NASA Technical Reports Server (NTRS)

    Sues, R. H.; Lua, Y. J.; Smith, M. D.

    1994-01-01

    The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.

  16. Extravehicular Activity (EVA) Power, Avionics, and Software (PAS) 101

    NASA Technical Reports Server (NTRS)

    Irimies, David

    2011-01-01

    EVA systems consist of a spacesuit or garment, a PLSS, a PAS system, and spacesuit interface hardware. The PAS system is responsible for providing power for the suit, communication of several types of data between the suit and other mission assets, avionics hardware to perform numerous data display and processing functions, and information systems that provide crewmembers data to perform their tasks with more autonomy and efficiency. Irimies discussed how technology development efforts have advanced the state-of-the-art in these areas and shared technology development challenges.

  17. Design and implementation of a robot control system with traded and shared control capability

    NASA Technical Reports Server (NTRS)

    Hayati, S.; Venkataraman, S. T.

    1989-01-01

    Preliminary results are reported from efforts to design and develop a robotic system that will accept and execute commands from either a six-axis teleoperator device or an autonomous planner, or combine the two. Such a system should have both traded as well as shared control capability. A sharing strategy is presented whereby the overall system, while retaining positive features of teleoperated and autonomous operation, loses its individual negative features. A two-tiered shared control architecture is considered here, consisting of a task level and a servo level. Also presented is a computer architecture for the implementation of this system, including a description of the hardware and software.

  18. A High-Throughput Processor for Flight Control Research Using Small UAVs

    NASA Technical Reports Server (NTRS)

    Klenke, Robert H.; Sleeman, W. C., IV; Motter, Mark A.

    2006-01-01

    There are numerous autopilot systems that are commercially available for small (<100 lbs) UAVs. However, they all share several key disadvantages for conducting aerodynamic research, chief amongst which is the fact that most utilize older, slower, 8- or 16-bit microcontroller technologies. This paper describes the development and testing of a flight control system (FCS) for small UAVs based on a modern, high-throughput, embedded processor. In addition, this FCS platform contains user-configurable hardware resources in the form of a Field Programmable Gate Array (FPGA) that can be used to implement custom, application-specific hardware. This hardware can be used to off-load routine tasks, such as sensor data collection, from the FCS processor, thereby further increasing the computational throughput of the system.

  19. VLSI 'smart' I/O module development

    NASA Astrophysics Data System (ADS)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  20. Method for prefetching non-contiguous data structures

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-05-05

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, and the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.
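
    The pointer-per-line mechanism can be mimicked in software. The sketch below is a hypothetical simulation of the claimed behavior (made-up addresses and a software "prefetched" set), not the patented hardware.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MemoryLine:
        data: bytes
        next_line: Optional[int] = None  # prefetch pointer stored in the line

    # A non-contiguous but repetitive access chain.
    memory = {
        0x000: MemoryLine(b"head", next_line=0x4C0),
        0x4C0: MemoryLine(b"mid", next_line=0x120),
        0x120: MemoryLine(b"tail"),
    }

    prefetched = set()
    for addr in (0x000, 0x4C0, 0x120):
        status = "hit (prefetched)" if addr in prefetched else "miss"
        line = memory[addr]
        if line.next_line is not None:   # "hardware" follows the stored pointer
            prefetched.add(line.next_line)
        print(hex(addr), line.data.decode(), status)
    ```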

  1. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, and the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.

  2. Low latency memory access and synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, and the pointers are used to determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive.

  3. In the Field Feasibility of a Simple Method to Check for Radioactivity in Commodities and in the Environment

    PubMed Central

    Alessandri, Stefano

    2017-01-01

    Introduction: Some release of radionuclides into the environment can be expected from the growing number of nuclear plants, either in or out of service. Citizens and large organizations alike could be interested in simple and innovative methods for checking the radiological safety of their environment and of commodities, starting with foods. Methods: In this work three methods to detect radioactivity are briefly compared, focusing on the most recent, which converts a smartphone into a radiation counter. Results: The results of a simple sensitivity test are presented, showing the measured activity of reference sources placed at different distances from each sensor. Discussion: The three methods are discussed in terms of availability, technology, sensitivity, resolution and usefulness. The reported results can be usefully transferred to a radiological emergency scenario and also offer some interesting implications for everyday life, but they show that the hardware of the tested smartphone can detect only high levels of radioactivity. However, the technology could be interesting for building a working detection and measurement chain, starting from a diffuse, networked first screening before a final high-resolution analysis. PMID:28744409

  4. A Ground Testbed to Advance US Capability in Autonomous Rendezvous and Docking Project

    NASA Technical Reports Server (NTRS)

    D'Souza, Chris

    2014-01-01

    This project will advance the Autonomous Rendezvous and Docking (AR&D) GNC system by testing it on hardware, particularly in a flight processor, with a goal of testing it in IPAS with the Waypoint L2 AR&D scenario. The entire Agency supports development of a Commodity for Autonomous Rendezvous and Docking (CARD) as outlined in the Agency-wide Community of Practice whitepaper entitled: "A Strategy for the U.S. to Develop and Maintain a Mainstream Capability for Automated/Autonomous Rendezvous and Docking in Low Earth Orbit and Beyond". The whitepaper establishes that 1) the US is in a continual state of AR&D point-designs and therefore there is no US "off-the-shelf" AR&D capability in existence today, 2) the US has fallen behind our foreign counterparts particularly in the autonomy of AR&D systems, 3) development of an AR&D commodity is a national need that would benefit NASA, our commercial partners, and DoD, and 4) an initial estimate indicates that the development of a standardized AR&D capability could save the US approximately $60M for each AR&D project and cut each project's AR&D flight system implementation time in half.

  5. Should Secondary Schools Buy Local Area Networks?

    ERIC Educational Resources Information Center

    Hyde, Hartley

    1986-01-01

    The advantages of microcomputer networks include resource sharing, multiple user communications, and integrating data processing and office automation. This article nonetheless favors stand-alone computers for Australian secondary school classrooms because of unreliable hardware, software design, and copyright problems, and individual progress…

  6. 7 CFR 4280.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND RURAL..., fish, or birds, either for fiber, food for human consumption, or livestock feed. Business Incubator. A facility in which small businesses can share premises, support staff, computers, software or hardware...

  7. Walheim during EVA 3

    NASA Image and Video Library

    2008-02-15

    ISS016-E-029500 (15 Feb. 2008) --- Astronaut Rex Walheim, mission specialist, holds onto a handrail on the Columbus laboratory, the newest piece of hardware on the International Space Station. Astronaut Stanley Love (out of frame), mission specialist, shared this extravehicular activity with Walheim.

  8. Walheim during EVA 3

    NASA Image and Video Library

    2008-02-15

    S122-E-008781 (15 Feb. 2008) --- Astronaut Rex Walheim, mission specialist, holds onto a handrail on the Columbus laboratory, the newest piece of hardware on the International Space Station. Astronaut Stanley Love (out of frame), mission specialist, shared this extravehicular activity with Walheim.

  9. Walheim during EVA 3

    NASA Image and Video Library

    2008-02-15

    S122-E-008764 (15 Feb. 2008) --- Astronaut Rex Walheim, mission specialist, holds onto a handrail on the Columbus laboratory, the newest piece of hardware on the International Space Station. Astronaut Stanley Love (out of frame), mission specialist, shared this extravehicular activity with Walheim.

  10. IPCS user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGoldrick, P.R.

    1980-12-11

    The Interprocess Communications System (IPCS) was written to provide a virtual machine upon which the Supervisory Control and Diagnostic System (SCDS) for the Mirror Fusion Test Facility (MFTF) could be built. The hardware upon which the IPCS runs consists of nine minicomputers sharing some common memory.

  11. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    DTIC Science & Technology

    2015-05-01

    of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC

  12. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
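
    As a minimal sketch of one spike-exchange step, the following assumes each MPI rank owns a block of 1000 neurons and gathers every rank's fired ids with a collective; the paper benchmarks several MPI mechanisms, and the neuron counts and firing rate here are illustrative. Run it with, e.g., mpiexec -n 4 python spikes.py.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Hypothetical: global ids of local neurons that fired this timestep.
    rng = np.random.default_rng(rank)
    local_spikes = np.flatnonzero(rng.random(1000) < 0.01) + rank * 1000

    # Every rank receives every rank's spike list for synaptic delivery.
    incoming = np.concatenate(comm.allgather(local_spikes))
    print(f"rank {rank}: delivered {incoming.size} spikes")
    ```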

  13. Terabyte IDE RAID-5 Disk Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David A. Sanders et al.

    2003-09-30

    High energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. We examine some techniques that exploit recent developments in commodity hardware. We report on tests of redundant arrays of integrated drive electronics (IDE) disk drives for use in offline high energy physics data analysis. IDE redundant array of inexpensive disks (RAID) prices now are less than the cost per terabyte of million-dollar tape robots! The arrays can be scaled to sizes affordable to institutions without robots and used when fast random access at low cost is important.

  14. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained by using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times less than using data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.

  15. A speculative look at the future of the American Petroleum Industry based on a full-cycle analysis of the American Whale Oil Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, J.L. Jr.

    1995-09-01

    A full-cycle, industry-scale look at the American whaling industry of the 19th century suggests a number of comparisons with the American petroleum industry of the 20th century. Using the King Hubbert production profile for extraction industries as a guide, both industries show a similar business life span. An understanding of the history of American whaling will, perhaps, give us a more complete understanding of the history of the American petroleum industry. The rise of the American whaling industry to the premier investment opportunity of its day is little known to most in today's oil and gas industry. Yet, we all know that abundant and inexpensive crude oil was a key factor in its demise. From a careful study of the history of the American whaling industry, a set of factors (or stages of transition), common to similar extraction industries, can be developed, which may help investors and workers determine the state of health of our industry: (1) defection of highly skilled personnel to other, comparable, technical industries; (2) discovery and initial development of a replacement commodity; (3) major calamity which adversely affects the industry in terms of significant loss of working capital and/or resources; (4) loss of sufficient investment capital to continue resource addition; (5) rapid development of a replacement commodity with an attendant decrease in per unit price to a position lower than the primary commodity; (6) significant loss of market share by the primary commodity; and (7) end of the primary commodity as a major economic force.

  16. A Distributed Operating System for BMD Applications.

    DTIC Science & Technology

    1982-01-01

    Defense) applications executing on distributed hardware with local and shared memories. The objective was to develop real-time operating system functions...make the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms

  17. 77 FR 20820 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-06

    ... Trust, and Gloria Foley, all of Lovington, Illinois, and Paul Michael Hrvol, Jr. and Paul Michael Hrvol... Bancorp, Inc. and thereby indirectly control Hardware State Bank, both of Lovington, Illinois. Board of...

  18. STS-64 Extravehicular activity (EVA) training view in WETF

    NASA Image and Video Library

    1994-08-10

    S94-39775 (August 1994) --- Astronaut Carl J. Meade, STS-64 mission specialist, listens to ground monitors during a simulation of a spacewalk scheduled for his September mission. Meade, who shared the rehearsal in the Johnson Space Center's (JSC) Weightless Environment Training Facility (WET-F) pool with crewmate astronaut Mark C. Lee, is equipped with a training version of new extravehicular activity (EVA) hardware called the Simplified Aid for EVA Rescue (SAFER) system. The hardware includes a mobility-aiding back harness and a chest-mounted hand control module. Photo credit: NASA or National Aeronautics and Space Administration

  19. STS-64 Extravehicular activity (EVA) training view in WETF

    NASA Image and Video Library

    1994-08-10

    S94-39762 (August 1994) --- Astronaut Carl J. Meade, STS-64 mission specialist, listens to ground monitors prior to a simulation of a spacewalk scheduled for his September mission. Meade, who shared the rehearsal in Johnson Space Center's (JSC) Weightless Environment Training Facility (WET-F) pool with crewmate astronaut Mark C. Lee (out of frame), is equipped with a training version of new extravehicular activity (EVA) hardware called the Simplified Aid for EVA Rescue (SAFER) system. The hardware includes a mobility-aiding back harness and a chest-mounted hand control module. Photo credit: NASA or National Aeronautics and Space Administration

  20. Efficient architecture for spike sorting in reconfigurable hardware.

    PubMed

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-11-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both the feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near optimal clustering for spike sorting. Its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computations of different weight vectors share the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroid are merged into one single updating process to evade the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field programmable gate array (FPGA). It is embedded in a System-on-Chip (SOC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design attaining a high classification correct rate and high speed computation.
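
    The GHA step is compact enough to sketch. Below is a generic NumPy version of Sanger's update rule on synthetic spike windows; the dimensions, learning rate, and random data are illustrative stand-ins, not the paper's fixed-point FPGA design.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_pc, lr = 32, 3, 1e-3                   # window length, #components
    W = rng.normal(scale=0.01, size=(n_pc, n_in))  # rows converge to PCs

    for _ in range(5000):
        x = rng.normal(size=n_in)                  # stand-in for a spike window
        y = W @ x
        # Sanger's rule: dW = lr * (y x^T - lower_tri(y y^T) W)
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

    features = W @ x  # low-dimensional features handed to FCM clustering
    ```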

  1. Determination of performance characteristics of scientific applications on IBM Blue Gene/Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evangelinos, C.; Walkup, R. E.; Sachdeva, V.

    The IBM Blue Gene®/Q platform presents scientists and engineers with a rich set of hardware features such as 16 cores per chip sharing a Level 2 cache, a wide SIMD (single-instruction, multiple-data) unit, a five-dimensional torus network, and hardware support for collective operations. Especially important is the feature related to cores that have four “hardware threads,” which makes it possible to hide latencies and obtain a high fraction of the peak issue rate from each core. All of these hardware resources present unique performance-tuning opportunities on Blue Gene/Q. We provide an overview of several important applications and solvers and study them on Blue Gene/Q using performance counters and Message Passing Interface profiles. We also discuss how Blue Gene/Q tools help us understand the interaction of the application with the hardware and software layers and provide guidance for optimization. Furthermore, on the basis of our analysis, we discuss code improvement strategies targeting Blue Gene/Q. Information about how these algorithms map to the Blue Gene® architecture is expected to have an impact on future system design as we move to the exascale era.

  2. Rapid recovery from transient faults in the fault-tolerant processor with fault-tolerant shared memory

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Butler, Bryan P.

    1990-01-01

    The Draper fault-tolerant processor with fault-tolerant shared memory (FTP/FTSM), which is designed to allow application tasks to continue execution during the memory alignment process, is described. Processor performance is not affected by memory alignment. In addition, the FTP/FTSM incorporates a hardware scrubber device to perform the memory alignment quickly during unused memory access cycles. The FTP/FTSM architecture is described, followed by an estimate of the time required for channel reintegration.

  3. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  4. High Level Synthesis in ASP

    DTIC Science & Technology

    1986-08-19

    Thus in and g (X, Y), A and X share one element, and B and Y share another. Assigning a value to A (via its storage element) also assigns that value to X...functionality as well as generate it.

  5. A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models

    DTIC Science & Technology

    2013-10-01

    which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high throughput commodity storage...Environment is configured as a self-service Enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of...technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of

  6. JPRS Report. Soviet Union: World Economy & International Relations, No. 2, February 1989

    DTIC Science & Technology

    1989-05-13

    a small group of the biggest companies of a number of highly concentrated base sectors of industry—steel casting, automotive, chemical, industrial...forth. As of the present the TNC practically share among themselves the capitalist markets for automobiles and steel and a number of chemical...automobile, textile, chemical and agricultural commodity, electronics and transport services markets. Competition is now heating up not only in the sphere of

  7. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  8. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605
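
    The post-task / poll / return-result pattern the abstract describes can be caricatured with the Python standard library alone. The endpoint names and JSON shapes below are hypothetical, not QMachine's actual API.

    ```python
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    tasks, results = [], {}

    class Broker(BaseHTTPRequestHandler):
        def do_POST(self):
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            if self.path == "/task":          # a submitter posts work
                tasks.append(body)
            elif self.path == "/result":      # a volunteer returns an answer
                results[body["task_id"]] = body["value"]
            self.send_response(204)
            self.end_headers()

        def do_GET(self):
            if self.path == "/task":          # a volunteer polls for work
                payload = tasks.pop(0) if tasks else None
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(json.dumps(payload).encode())

    if __name__ == "__main__":
        HTTPServer(("", 8000), Broker).serve_forever()
    ```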

  9. What Do Stroke Patients Look for in Game-Based Rehabilitation

    PubMed Central

    Hung, Ya-Xuan; Huang, Pei-Chen; Chen, Kuan-Ta; Chu, Woei-Chyn

    2016-01-01

    Stroke is one of the most common causes of physical disability, and early, intensive, and repetitive rehabilitation exercises are crucial to the recovery of stroke survivors. Unfortunately, research shows that only one third of stroke patients actually perform recommended exercises at home, because of the repetitive and mundane nature of conventional rehabilitation exercises. Thus, to motivate stroke survivors to engage in monotonous rehabilitation is a significant issue in the therapy process. Game-based rehabilitation systems have the potential to encourage patients continuing rehabilitation exercises at home. However, these systems are still rarely adopted at patients’ places. Discovering and eliminating the obstacles in promoting game-based rehabilitation at home is therefore essential. For this purpose, we conducted a study to collect and analyze the opinions and expectations of stroke patients and clinical therapists. The study is composed of 2 parts: Rehab-preference survey – interviews to both patients and therapists to understand the current practices, challenges, and expectations on game-based rehabilitation systems; and Rehab-compatibility survey – a gaming experiment with therapists to elaborate what commercial games are compatible with rehabilitation. The study is conducted with 30 outpatients with stroke and 19 occupational therapists from 2 rehabilitation centers in Taiwan. Our surveys show that game-based rehabilitation systems can turn the rehabilitation exercises more appealing and provide personalized motivation for various stroke patients. Patients prefer to perform rehabilitation exercises with more diverse and fun games, and need cost-effective rehabilitation systems, which are often built on commodity hardware. Our study also sheds light on incorporating the existing design-for-fun games into rehabilitation system. We envision the results are helpful in developing a platform which enables rehab-compatible (i.e., existing, appropriately selected) games to be operated on commodity hardware and brings cost-effective rehabilitation systems to more and more patients’ home for long-term recovery. PMID:26986120

  10. What Do Stroke Patients Look for in Game-Based Rehabilitation: A Survey Study.

    PubMed

    Hung, Ya-Xuan; Huang, Pei-Chen; Chen, Kuan-Ta; Chu, Woei-Chyn

    2016-03-01

    Stroke is one of the most common causes of physical disability, and early, intensive, and repetitive rehabilitation exercises are crucial to the recovery of stroke survivors. Unfortunately, research shows that only one third of stroke patients actually perform recommended exercises at home, because of the repetitive and mundane nature of conventional rehabilitation exercises. Thus, to motivate stroke survivors to engage in monotonous rehabilitation is a significant issue in the therapy process. Game-based rehabilitation systems have the potential to encourage patients continuing rehabilitation exercises at home. However, these systems are still rarely adopted at patients' places. Discovering and eliminating the obstacles in promoting game-based rehabilitation at home is therefore essential. For this purpose, we conducted a study to collect and analyze the opinions and expectations of stroke patients and clinical therapists. The study is composed of 2 parts: Rehab-preference survey - interviews to both patients and therapists to understand the current practices, challenges, and expectations on game-based rehabilitation systems; and Rehab-compatibility survey - a gaming experiment with therapists to elaborate what commercial games are compatible with rehabilitation. The study is conducted with 30 outpatients with stroke and 19 occupational therapists from 2 rehabilitation centers in Taiwan. Our surveys show that game-based rehabilitation systems can turn the rehabilitation exercises more appealing and provide personalized motivation for various stroke patients. Patients prefer to perform rehabilitation exercises with more diverse and fun games, and need cost-effective rehabilitation systems, which are often built on commodity hardware. Our study also sheds light on incorporating the existing design-for-fun games into rehabilitation system. We envision the results are helpful in developing a platform which enables rehab-compatible (i.e., existing, appropriately selected) games to be operated on commodity hardware and brings cost-effective rehabilitation systems to more and more patients' home for long-term recovery.

  11. Global flows of critical metals necessary for low-carbon technologies: the case of neodymium, cobalt, and platinum.

    PubMed

    Nansai, Keisuke; Nakajima, Kenichi; Kagawa, Shigemi; Kondo, Yasushi; Suh, Sangwon; Shigetomi, Yosuke; Oshita, Yuko

    2014-01-01

    This study, encompassing 231 countries and regions, quantifies the global transfer of three critical metals (neodymium, cobalt, and platinum) considered vital for low-carbon technologies by means of material flow analysis (MFA), using trade data (BACI) and the metal contents of trade commodities, resolving the optimization problem to ensure the material balance of the metals within each country and region. The study shows that in 2005 international trade led to global flows of 18.6 kt of neodymium, 154 kt of cobalt, and 402 t of platinum and identifies the main commodities and top 50 bilateral trade links embodying these metals. To explore the issue of consumption efficiency, the flows were characterized according to the technological level of each country or region and divided into three types: green ("efficient use"), yellow ("moderately efficient use"), and red ("inefficient use"). On this basis, the shares of green, yellow, and red flows in the aggregate global flow of Nd were found to be 1.2%, 98%, and 1.2%, respectively. For Co, the respective figures are 53%, 28%, and 19%, and for Pt 15%, 84%, and 0.87%. Furthermore, a simple indicator focusing on the composition of the three colored flows for each commodity was developed to identify trade commodities that should be prioritized for urgent technical improvement to reduce wasteful use of the metals. Based on the indicator, we discuss logical, strategic identification of the responsibilities and roles of the countries involved in the global flows.

  12. Supporting Non-Standard Micro Hardware and Software at Bowling Green.

    ERIC Educational Resources Information Center

    Whitmire, Duane E.

    1990-01-01

    Bowling Green State University (Ohio) produces a microcomputer resource handbook that contains a comprehensive listing of campus employees with unique microcomputer expertise they are willing to share. The project's current success suggests expansion and follow-up activities in the future. (Author/MSE)

  13. Networking CD-ROMs: A Tutorial Introduction.

    ERIC Educational Resources Information Center

    Perone, Karen

    1996-01-01

    Provides an introduction to CD-ROM networking. Highlights include LAN (local area network) architectures for CD-ROM networks, peer-to-peer networks, shared file and dedicated file servers, commercial software/vendor solutions, problems, multiple hardware platforms, and multimedia. Six figures illustrate network architectures and a sidebar contains…

  14. Walheim during EVA 3

    NASA Image and Video Library

    2008-02-15

    S122-E-008796 (15 Feb. 2008) --- Astronaut Rex Walheim, mission specialist, uses a power tool while installing a handrail on the Columbus laboratory, the newest piece of hardware on the International Space Station. Astronaut Stanley Love (out of frame), mission specialist, shared this extravehicular activity with Walheim.

  15. Team Production of Learner-Controlled Courseware: A Progress Report.

    ERIC Educational Resources Information Center

    Bunderson, C. Victor

    A project being conducted by the MITRE Corporation and Brigham Young University (BYU) is developing hardware, software, and courseware for the TICCIT (Time Shared, Interactive, Computer Controlled Information Television) computer-assisted instructional system. Four instructional teams at BYU, each having an instructional psychologist, subject…

  16. A LAN Primer.

    ERIC Educational Resources Information Center

    Hazari, Sunil I.

    1991-01-01

    Local area networks (LANs) are systems of computers and peripherals connected together for the purposes of electronic mail and the convenience of sharing information and expensive resources. In planning the design of such a system, the components to consider are hardware, software, transmission media, topology, operating systems, and protocols.…

  17. NASA Ames Participates in Two Major Bay Area Events (Reporter Package)

    NASA Image and Video Library

    2014-05-28

    NASA Ames Research Center participated in two important outreach events: Maker Faire and a gathering of hardware and software industry professionals called the Solid Conference. The conference was an opportunity for the Intelligent Robotics Group from NASA Ames to publicly unveil their latest version of the free flying robot used on the International Space Station. NASA also participated at the Bay Area Maker Faire, a gathering of more than 120,000 innovators, enthusiasts, crafters, hobbyists and tinkerers to share what they have invented and made.

  18. Automated System Marketplace 1988: Focused on Fulfilling Commitments.

    ERIC Educational Resources Information Center

    Walton, Robert A.; Bridge, Frank R.

    1989-01-01

    Analyzes trends in the library automation marketplace. Market shares for online vendors are examined in terms of total installations, academic libraries, public libraries, revenues, differently sized systems, and foreign installations. Hardware availability, operating systems, and interfaces with MARC are also discussed for each vendor. A source…

  19. Walheim during EVA 3

    NASA Image and Video Library

    2008-02-15

    S122-E-008922 (15 Feb. 2008) --- Astronaut Rex Walheim, mission specialist, performs work on the outside of the Columbus laboratory, the newest piece of hardware on the International Space Station. Astronaut Stanley Love (out of frame), mission specialist, shared this final period of STS-122 extravehicular activity with Walheim.

  20. Walheim during EVA 3

    NASA Image and Video Library

    2008-02-15

    S122-E-008923 (15 Feb. 2008) --- Astronaut Rex Walheim, mission specialist, performs work on the outside of the Columbus laboratory, the newest piece of hardware on the International Space Station. Astronaut Stanley Love (out of frame), mission specialist, shared this final period of STS-122 extravehicular activity with Walheim.

  1. Walheim during EVA 3

    NASA Image and Video Library

    2008-02-15

    S122-E-008916 (15 Feb. 2008) --- Astronaut Rex Walheim, mission specialist, performs work on the outside of the Columbus laboratory, the newest piece of hardware on the International Space Station. Astronaut Stanley Love (out of frame), mission specialist, shared this final period of STS-122 extravehicular activity with Walheim.

  2. The Political Economy of Biofuels and Farming: The Case of Smallholders in Tanzania

    NASA Astrophysics Data System (ADS)

    Winters, Kristen

    Following decades of neoliberal policies promoting commodity-driven export production, the small-scale farming sector in many developing countries has suffered from declining market share, lessening productivity and deepening poverty. In recent years, biofuels have been promoted within developing countries to foster rural development and provide new markets for smallholders. Using Tanzania as a case study, this thesis evaluates the extent to which the emerging biofuel sector provides opportunities for smallholders to gain beneficial access to markets -- or whether the sector is following the trajectory of other export-oriented commodity projects of the past and resulting in the marginalisation of smallholders. This thesis asserts that the biofuel sector in Tanzania presents more threats than benefits for smallholders; a pattern can be witnessed that favours foreign investors and dispossesses farmers of existing land, while providing few opportunities at a local level for income generation and employment.

  3. Creating a Rackspace and NASA Nebula compatible cloud using the OpenStack project (Invited)

    NASA Astrophysics Data System (ADS)

    Clark, R.

    2010-12-01

    NASA and Rackspace have both provided technology to the OpenStack project that allows anyone to create a private Infrastructure as a Service (IaaS) cloud using open source software and commodity hardware. OpenStack is designed and developed completely in the open and with an open governance process. NASA donated Nova, which powers the compute portion of the NASA Nebula Cloud Computing Platform, and Rackspace donated Swift, which powers Rackspace Cloud Files. The project is now in continuous development by NASA, Rackspace, and hundreds of other participants. When you create a private cloud using OpenStack, you will have the ability to easily interact with your private cloud, a government cloud, and an ecosystem of public cloud providers, using the same API.
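
    The record above describes OpenStack's compute (Nova) and object storage (Swift) components and the promise of a single API across private and public clouds. As a concrete illustration, the sketch below uses the present-day openstacksdk Python client (pip install openstacksdk) to drive any OpenStack-compatible endpoint; the cloud name, image, and flavor values are hypothetical examples, not part of the original record.

    ```python
    # Minimal sketch using the openstacksdk client library.
    # Credentials and endpoints are read from clouds.yaml or OS_* variables;
    # the cloud entry "my-private-cloud" is a hypothetical example.
    import openstack

    conn = openstack.connect(cloud="my-private-cloud")

    # The same calls work against a private cloud or a public provider.
    for server in conn.compute.servers():
        print(server.name, server.status)

    image = conn.compute.find_image("cirros")      # hypothetical image name
    flavor = conn.compute.find_flavor("m1.small")  # hypothetical flavor name
    server = conn.compute.create_server(
        name="demo-instance", image_id=image.id, flavor_id=flavor.id
    )  # depending on the deployment, a network may also need to be given
    server = conn.compute.wait_for_server(server)
    print("booted", server.name)
    ```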

  4. Data sharing in neuroimaging research

    PubMed Central

    Poline, Jean-Baptiste; Breeze, Janis L.; Ghosh, Satrajit; Gorgolewski, Krzysztof; Halchenko, Yaroslav O.; Hanke, Michael; Haselgrove, Christian; Helmer, Karl G.; Keator, David B.; Marcus, Daniel S.; Poldrack, Russell A.; Schwartz, Yannick; Ashburner, John; Kennedy, David N.

    2012-01-01

    Significant resources around the world have been invested in neuroimaging studies of brain function and disease. Easier access to this large body of work should have a profound impact on research in cognitive neuroscience and psychiatry, leading to advances in the diagnosis and treatment of psychiatric and neurological disease. A trend toward increased sharing of neuroimaging data has emerged in recent years. Nevertheless, a number of barriers continue to impede momentum. Many researchers and institutions remain uncertain about how to share data or lack the tools and expertise to participate in data sharing. The use of electronic data capture (EDC) methods for neuroimaging greatly simplifies the task of data collection and has the potential to help standardize many aspects of data sharing. We review here the motivations for sharing neuroimaging data, the current data sharing landscape, and the sociological or technical barriers that still need to be addressed. The INCF Task Force on Neuroimaging Datasharing, in conjunction with several collaborative groups around the world, has started work on several tools to ease and eventually automate the practice of data sharing. It is hoped that such tools will allow researchers to easily share raw, processed, and derived neuroimaging data, with appropriate metadata and provenance records, and will improve the reproducibility of neuroimaging studies. By providing seamless integration of data sharing and analysis tools within a commodity research environment, the Task Force seeks to identify and minimize barriers to data sharing in the field of neuroimaging. PMID:22493576

  5. Low-level rf control of Spallation Neutron Source: System and characterization

    NASA Astrophysics Data System (ADS)

    Ma, Hengjie; Champion, Mark; Crofford, Mark; Kasemir, Kay-Uwe; Piller, Maurice; Doolittle, Lawrence; Ratti, Alex

    2006-03-01

    The low-level rf control system currently commissioned throughout the Spallation Neutron Source (SNS) LINAC evolved from three design iterations over one year of intensive research and development. Its digital hardware implementation is efficient, and has succeeded in achieving a minimum latency of less than 150 ns, which is key to accomplishing all-digital feedback control over the full bandwidth. The control bandwidth is analyzed in the frequency domain and characterized by testing its transient response. The hardware implementation also includes the provision of a time-shared input channel for a superior phase differential measurement between the cavity field and the reference. A companion cosimulation system for the digital hardware was developed to ensure reliable long-term supportability. A large effort has also been made in the operation software development for practical issues such as process automation, cavity filling, beam loading compensation, and cavity mechanical resonance suppression.

  6. Building a Library Network from Scratch: Eric & Veronica's Excellent Adventure.

    ERIC Educational Resources Information Center

    Sisler, Eric; Smith, Veronica

    2000-01-01

    Describes library automation issues during the planning and construction of College Hill Library (Colorado), a joint-use facility shared by a community college and a public library. Discusses computer networks; hardware selection; public access to catalogs and electronic resources; classification schemes and bibliographic data; children's…

  7. Automated System Marketplace 1987: Maturity and Competition.

    ERIC Educational Resources Information Center

    Walton, Robert A.; Bridge, Frank R.

    1988-01-01

    This annual review of the library automation marketplace presents profiles of 15 major library automation firms and looks at emerging trends. Seventeen charts and tables provide data on market shares, number and size of installations, hardware availability, operating systems, and interfaces. A directory of 49 automation sources is included. (MES)

  8. Software Hardware Asset Reuse Enterprise (SHARE) Repository Framework Final Report: Component Specification and Ontology

    DTIC Science & Technology

    2009-08-19

    SSDS Ship Self Defense System TSTS Total Ship Training System UDDI Universal Description, Discovery, and Integration UML Unified Modeling... <xs:element name="ContractorOrganization" type="ContractorOrganizationType"> <xs:annotation> <xs:documentation>Identifies a contractor organization responsible for the

  9. New Directions in Statewide Computer Planning and Cooperation.

    ERIC Educational Resources Information Center

    Norris, Donald M.; St. John, Edward P.

    1981-01-01

    In the 1960s and early 1970s, statewide planning efforts usually resulted in plans for centralized hardware networks. The focus of statewide planning has shifted to the issues of improved computer financing, information sharing, and enhanced utilization in instruction and administration. A "facilitating network" concept and Missouri efforts…

  10. A hardware implementation of the discrete Pascal transform for image processing

    NASA Astrophysics Data System (ADS)

    Goodman, Thomas J.; Aburdene, Maurice F.

    2006-02-01

    The discrete Pascal transform is a polynomial transform with applications in pattern recognition, digital filtering, and digital image processing. It already has been shown that the Pascal transform matrix can be decomposed into a product of binary matrices. Such a factorization leads to a fast and efficient hardware implementation without the use of multipliers, which consume large amounts of hardware. We recently developed a field-programmable gate array (FPGA) implementation to compute the Pascal transform. Our goal was to demonstrate the computational efficiency of the transform while keeping hardware requirements at a minimum. Images are uploaded into memory from a remote computer prior to processing, and the transform coefficients can be offloaded from the FPGA board for analysis. Design techniques like as-soon-as-possible scheduling and adder sharing allowed us to develop a fast and efficient system. An eight-point, one-dimensional transform completes in 13 clock cycles and requires only four adders. An 8x8 two-dimensional transform completes in 240 cycles and requires only a top-level controller in addition to the one-dimensional transform hardware. Finally, through minor modifications to the controller, the transform operations can be pipelined to achieve 100% utilization of the four adders, allowing one eight-point transform to complete every seven clock cycles.
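
    The binary-matrix factorization mentioned in this abstract has a direct software analogue: multiplication by the lower-triangular binomial (Pascal) matrix decomposes into repeated passes of adjacent additions, each pass corresponding to one binary bidiagonal factor, so no multipliers are needed. A minimal Python sketch of that additions-only evaluation follows; the alternating-sign convention on the inputs is an assumption for illustration, and the paper's exact transform definition may differ.

    ```python
    # Additions-only evaluation of a Pascal-type transform. Each inner pass
    # is one binary bidiagonal factor, mirroring the multiplier-free
    # factorization exploited by the FPGA design described above.

    def pascal_transform(x):
        y = [(-1) ** j * v for j, v in enumerate(x)]  # assumed sign convention
        n = len(y)
        for k in range(n - 1):             # n-1 passes of adjacent additions
            for i in range(n - 1, k, -1):  # in place, no multiplications
                y[i] += y[i - 1]
        return y

    # Eight-point example, matching the eight-point hardware transform.
    print(pascal_transform([1, 0, 0, 0, 0, 0, 0, 0]))
    ```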

  11. Contraception supply chain challenges: a review of evidence from low- and middle-income countries.

    PubMed

    Mukasa, Bakali; Ali, Moazzam; Farron, Madeline; Van de Weerdt, Renee

    2017-10-01

    To identify and assess factors determining the functioning of supply chain systems for modern contraception in low- and middle-income countries (LMICs), and to identify challenges contributing to contraception stockouts that may lead to unmet need. Scientific databases and grey literature were searched including Database of Abstracts of Reviews of Effectiveness (DARE), PubMed, MEDLINE, POPLINE, CINAHL, Academic Search Complete, Science Direct, Web of Science, Cochrane Central, Google Scholar, WHO databases and websites of key international organisations. Studies indicated that supply chain system inefficiencies significantly affect availability of modern FP and contraception commodities in LMICs, especially in rural public facilities where distribution barriers may be acute. Supply chain failures or bottlenecks may be attributed to: weak and poorly institutionalized logistic management information systems (LMIS), poor physical infrastructures in LMICs, lack of trained and dedicated staff for supply chain management, inadequate funding, and rigid government policies on task sharing. However, there is evidence that implementing effective LMISs and involving public and private providers in distribution channels resulted in reductions in medical commodities' stockout rates. Supply chain bottlenecks contribute significantly to persistent high stockout rates for modern contraceptives in LMICs. Interventions aimed at enhancing uptake of contraceptives to reduce the problem of unmet need in LMICs should make strong commitments towards strengthening these countries' health commodities supply chain management systems. Current evidence is limited, and additional well-designed implementation research on contraception supply chain systems is warranted to gain further understanding and insights on the determinants of supply chain bottlenecks and their impact on stockouts of contraception commodities.

  12. Shared Mind: Communication, Decision Making, and Autonomy in Serious Illness

    PubMed Central

    Epstein, Ronald M.; Street, Richard L.

    2011-01-01

    In the context of serious illness, individuals usually rely on others to help them think and feel their way through difficult decisions. To help us to understand why, when, and how individuals involve trusted others in sharing information, deliberation, and decision making, we offer the concept of shared mind—ways in which new ideas and perspectives can emerge through the sharing of thoughts, feelings, perceptions, meanings, and intentions among 2 or more people. We consider how shared mind manifests in relationships and organizations in general, building on studies of collaborative cognition, attunement, and sensemaking. Then, we explore how shared mind might be promoted through communication, when appropriate, and the implications of shared mind for decision making and patient autonomy. Next, we consider a continuum of patient-centered approaches to patient-clinician interactions. At one end of the continuum, an interactional approach promotes knowing the patient as a person, tailoring information, constructing preferences, achieving consensus, and promoting relational autonomy. At the other end, a transactional approach focuses on knowledge about the patient, information-as-commodity, negotiation, consent, and individual autonomy. Finally, we propose that autonomy and decision making should consider not only the individual perspectives of patients, their families, and members of the health care team, but also the perspectives that emerge from the interactions among them. By drawing attention to shared mind, clinicians can observe in what ways they can promote it through bidirectional sharing of information and engaging in shared deliberation. PMID:21911765

  13. Shared mind: communication, decision making, and autonomy in serious illness.

    PubMed

    Epstein, Ronald M; Street, Richard L

    2011-01-01

    In the context of serious illness, individuals usually rely on others to help them think and feel their way through difficult decisions. To help us to understand why, when, and how individuals involve trusted others in sharing information, deliberation, and decision making, we offer the concept of shared mind-ways in which new ideas and perspectives can emerge through the sharing of thoughts, feelings, perceptions, meanings, and intentions among 2 or more people. We consider how shared mind manifests in relationships and organizations in general, building on studies of collaborative cognition, attunement, and sensemaking. Then, we explore how shared mind might be promoted through communication, when appropriate, and the implications of shared mind for decision making and patient autonomy. Next, we consider a continuum of patient-centered approaches to patient-clinician interactions. At one end of the continuum, an interactional approach promotes knowing the patient as a person, tailoring information, constructing preferences, achieving consensus, and promoting relational autonomy. At the other end, a transactional approach focuses on knowledge about the patient, information-as-commodity, negotiation, consent, and individual autonomy. Finally, we propose that autonomy and decision making should consider not only the individual perspectives of patients, their families, and members of the health care team, but also the perspectives that emerge from the interactions among them. By drawing attention to shared mind, clinicians can observe in what ways they can promote it through bidirectional sharing of information and engaging in shared deliberation.

  14. Experiences using OpenMP based on Compiler Directed Software DSM on a PC Cluster

    NASA Technical Reports Server (NTRS)

    Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland

    2003-01-01

    In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.

  15. Balancing detail and scale in assessing transparency to improve the governance of agricultural commodity supply chains

    NASA Astrophysics Data System (ADS)

    Godar, Javier; Suavet, Clément; Gardner, Toby A.; Dawkins, Elena; Meyfroidt, Patrick

    2016-03-01

    To date, assessments of the sustainability of agricultural commodity supply chains have largely relied on some combination of macro-scale footprint accounts, detailed life-cycle analyses and fine-scale traceability systems. Yet these approaches are limited in their ability to support the sustainability governance of agricultural supply chains, whether because they are intended for coarser-grained analyses, do not identify individual actors, or are too costly to be implemented in a consistent manner for an entire region of production. Here we illustrate some of the advantages of a complementary middle-ground approach that balances detail and scale of supply chain transparency information by combining consistent country-wide data on commodity production at the sub-national (e.g. municipal) level with per shipment customs data to describe trade flows of a given commodity covering all companies and production regions within that country. This approach can support supply chain governance in two key ways. First, enhanced spatial resolution of the production regions that connect to individual supply chains allows for a more accurate consideration of geographic variability in measures of risk and performance that are associated with different production practices. Second, identification of the key actors that operate within a specific supply chain, including producers, traders, shippers and consumers, can help discriminate coalitions of actors that have a shared stake in a particular region and that together are capable of delivering more cost-effective and coordinated interventions. We illustrate the potential of this approach with examples from Brazil, Indonesia and Colombia. We discuss how transparency information can deepen understanding of the environmental and social impacts of commodity production systems, how benefits are distributed among actors, and some of the trade-offs involved in efforts to improve supply chain sustainability. We then discuss the challenges and opportunities of our approach to strengthen supply chain governance and leverage more effective and fair accountability systems.
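
    A schematic sketch of the linkage the authors describe: consistent sub-national production data joined to per-shipment customs records, so that region-level risk measures attach to individual shipments and trading companies. All column names and figures below are hypothetical illustrations, not data from the study.

    ```python
    # Schematic sketch of the middle-ground transparency approach: join
    # municipal production/risk data to per-shipment customs records and
    # aggregate exposure per exporter. All values are hypothetical.
    import pandas as pd

    production = pd.DataFrame({
        "municipality": ["Sorriso", "Sinop"],
        "soy_tonnes": [2_100_000, 850_000],
        "deforestation_risk": ["high", "medium"],
    })

    shipments = pd.DataFrame({
        "municipality": ["Sorriso", "Sorriso", "Sinop"],
        "exporter": ["TraderA", "TraderB", "TraderA"],
        "tonnes_shipped": [400_000, 250_000, 300_000],
        "destination": ["CN", "EU", "EU"],
    })

    # Attach production-region risk to each shipment, then total the
    # tonnage each trading company sources from each risk class.
    linked = shipments.merge(production, on="municipality")
    exposure = linked.groupby(["exporter", "deforestation_risk"])["tonnes_shipped"].sum()
    print(exposure)
    ```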

  16. Multi-Level Secure Information Sharing Between Smart Cloud Systems of Systems

    DTIC Science & Technology

    2014-03-01

    implementation of virtual hardware (VMWare), along with a commercial implementation of virtual networking (VPN), such as OpenVPN. 1. VMWare Virtualization...en.wikipedia.org/wiki/MongoDB. Wikipedia. 2014b. Accessed February 26. s.v. "Open VPN," http://en.wikipedia.org/wiki/OpenVPN. Wikipedia. 2014c. Accessed

  17. An Integrated Bibliographic Information System: Concept and Application for Resource Sharing in Special Libraries

    DTIC Science & Technology

    1987-05-01

    workload (beyond that of say an equivalent academic or corporate technical library) for the Defense Department libraries. Figure 9 illustrates the range...summer. The hardware configuration for the system is as follows: Digital Equipment Corporation VAX 11/750 central processor with 6 megabytes of real

  18. Resource Sharing of Micro-Software, or, What Ever Happened to All That CP/M Compatibility?

    ERIC Educational Resources Information Center

    DeYoung, Barbara

    1984-01-01

    Explores incompatible operating systems as the basic reason why software packages will not work on different microcomputers; defines operating system; explores compatibility issues surrounding the IBM MS-DOS; and presents two future trends in hardware and software developments which indicate a return to true compatibility. (Author/MBR)

  19. Networked Microcomputers--The Next Generation in College Computing.

    ERIC Educational Resources Information Center

    Harris, Albert L.

    The evolution of computer hardware for college computing has mirrored the industry's growth. When computers were introduced into the educational environment, they had limited capacity and served one user at a time. Then came large mainframes with many terminals sharing the resource. Next, the use of computers in office automation emerged. As…

  20. Software Hardware Asset Reuse Enterprise (SHARE) Repository Framework Final Report: Component Specification and Ontology

    DTIC Science & Technology

    2008-09-30

    89 Integrated Surface Ship ASW Combat System (AN/SQQ-89) SSDS Ship Self Defense System TSTS Total Ship Training System UDDI Universal Description... <xs:element name="ContractorOrganization" type="ContractorOrganizationType"> <xs:annotation> <xs:documentation>Identifies a contractor organization responsible for the

  1. Documentary with Ephemeral Media: Curation Practices in Online Social Spaces

    ERIC Educational Resources Information Center

    Erickson, Ingrid

    2010-01-01

    New hardware such as mobile handheld devices and digital cameras; new online social venues such as social networking, microblogging, and online photo sharing sites; and new infrastructures such as the global positioning system are beginning to establish new practices--what the author refers to as "sociolocative"--that combine data about a physical…

  2. Dare to Share

    ERIC Educational Resources Information Center

    Briggs, Linda L.

    2007-01-01

    Today, as difficult as it is for large institutions to keep software and hardware up-to-date, the challenge and expense of keeping up is only amplified for smaller colleges and universities. In the area of data-driven decision-making (DDD), the challenge can be even greater. Because smaller schools are pressed for time and resources on nearly all…

  3. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    ERIC Educational Resources Information Center

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely under studied, and represents an area where further research is needed to…

  4. 77 FR 1759 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-11

    ..., which Items have been prepared by the Exchange. The Commission is publishing this notice to solicit... Customer Gateway (``CCG'') that accesses the equity trading systems that it shares with its affiliates... increasing connectivity costs, including additional costs based on gateway software and hardware enhancements...

  5. Computing at DESY — current setup, trends and strategic directions

    NASA Astrophysics Data System (ADS)

    Ernst, Michael

    1998-05-01

    Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. After running mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever-increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has multidecade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we already face today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially those addressing the PC management/support issues, are expected to come to market in the next 2-3 years, we need to provide suitable solutions now. Buying PCs at DESY currently at a rate of about 30/month will otherwise absorb any available manpower in central computing and still leave hundreds of unhappy people alone. Though certainly not the only region, the desktop issue is one of the most important ones where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.

  6. The Open Data Repository's Data Publisher

    NASA Technical Reports Server (NTRS)

    Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.

    2015-01-01

    Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power have greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.
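
    Since the abstract notes that each database will provide a REST application programming interface, a hypothetical client call is sketched below. The base URL, endpoint path, query parameter, and response fields are assumptions for illustration; the project's actual API may differ.

    ```python
    # Hypothetical sketch of programmatic access to a Data Publisher-style
    # REST interface; paths and field names are illustrative assumptions.
    import requests

    BASE = "https://example.org/odr/api/v1"   # hypothetical deployment URL

    # Search published records matching a free-text query.
    resp = requests.get(f"{BASE}/datasets", params={"q": "basalt"}, timeout=30)
    resp.raise_for_status()
    for record in resp.json().get("results", []):
        print(record.get("id"), record.get("title"))
    ```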

  7. GPU accelerated fuzzy connected image segmentation by using CUDA.

    PubMed

    Zhuge, Ying; Cao, Yong; Miller, Robert W

    2009-01-01

    Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments based on three data sets with small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential implementation of the fuzzy connected image segmentation algorithm on CPU.
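
    For readers unfamiliar with the underlying algorithm, the sketch below shows a sequential (CPU) version of fuzzy connectedness on a 1-D signal: each element's connectedness to the seed is the strength of its best path, and a path is only as strong as its weakest affinity link. The affinity function here is a simplified assumption; this propagation is what the CUDA implementation parallelizes across the image.

    ```python
    # Sequential fuzzy connectedness on a 1-D "image" (Dijkstra-like
    # max-min propagation from a seed). The intensity-similarity affinity
    # is a simplified stand-in for the affinities used in practice.
    import heapq

    def fuzzy_connectedness(intensity, seed):
        n = len(intensity)
        def affinity(a, b):
            return 1.0 / (1.0 + abs(intensity[a] - intensity[b]))
        conn = [0.0] * n
        conn[seed] = 1.0
        heap = [(-1.0, seed)]                     # max-heap via negation
        while heap:
            strength, u = heapq.heappop(heap)
            strength = -strength
            if strength < conn[u]:
                continue                          # stale queue entry
            for v in (u - 1, u + 1):              # 1-D grid neighbors
                if 0 <= v < n:
                    s = min(strength, affinity(u, v))
                    if s > conn[v]:
                        conn[v] = s
                        heapq.heappush(heap, (-s, v))
        return conn

    # Connectedness drops sharply across the intensity boundary at index 3.
    print(fuzzy_connectedness([10, 11, 12, 40, 41], seed=0))
    ```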

  8. High-efficiency space-based software radio architectures & algorithms (a minimum size, weight, and power TeraOps processor)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunham, Mark Edward; Baker, Zachary K; Stettler, Matthew W

    2009-01-01

    Los Alamos has recently completed the latest in a series of Reconfigurable Software Radios, which incorporates several key innovations in both hardware design and algorithms. Due to our focus on satellite applications, each design must extract the best size, weight, and power performance possible from the ensemble of Commodity Off-the-Shelf (COTS) parts available at the time of design. In this case we have achieved 1 TeraOps/second signal processing on a 1920 Megabit/second datastream, while using only 53 Watts mains power, 5.5 kg, and 3 liters. This processing capability enables very advanced algorithms such as our wideband RF compression scheme to operate remotely, allowing network bandwidth constrained applications to deliver previously unattainable performance.

  9. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study to characterize the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and Network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM have been chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of retrieved distributions, our results provide a very good estimation of timing behavior. It is essential for real-time and performance-critical applications such as image processing or real-time control.
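
    A minimal sketch in the spirit of the study's methodology, using only the Python standard library: request a fixed sleep repeatedly, record how far the operating system overshoots, and summarize the distribution. Running the same script on bare metal and inside a VM gives a rough view of the virtualization effects the paper characterizes.

    ```python
    # Measure the overshoot of a nominal 1 ms sleep and summarize it.
    import statistics
    import time

    REQUESTED_NS = 1_000_000  # 1 ms
    overshoots = []
    for _ in range(1000):
        t0 = time.monotonic_ns()
        time.sleep(REQUESTED_NS / 1e9)
        overshoots.append(time.monotonic_ns() - t0 - REQUESTED_NS)

    print("median overshoot (ns):", statistics.median(overshoots))
    print("p99 overshoot (ns):", sorted(overshoots)[int(0.99 * len(overshoots))])
    ```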

  10. An all digital low data rate communication system

    NASA Technical Reports Server (NTRS)

    Chen, C.-H.; Fan, M.

    1973-01-01

    The advent of digital hardware has made it feasible to implement many communication system components digitally. With the exception of frequency down conversion, the proposed low data rate communication system uses digital hardware completely. Although the system is designed primarily for deep space communications with large frequency uncertainty and low signal-to-noise ratio, it is also suitable for other low data rate applications with time-shared operation among a number of channels. Emphasis is placed on the fast Fourier transform receiver and the automatic frequency control via digital filtering. The speed available from the digital system allows sophisticated signal processing to reduce frequency uncertainty and to increase the signal-to-noise ratio.
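
    The core idea of the fast Fourier transform receiver is compact enough to sketch: under large frequency uncertainty and low signal-to-noise ratio, take a long FFT and pick the strongest bin to estimate the carrier offset. The sample rate, carrier, and noise level below are illustrative values, not parameters from the paper.

    ```python
    # FFT-based carrier acquisition: locate the strongest spectral bin.
    import numpy as np

    fs = 8_000.0                    # sample rate (Hz), illustrative
    n = 4096
    t = np.arange(n) / fs
    f_carrier = 1234.5              # unknown offset to be recovered
    rng = np.random.default_rng(0)
    x = np.cos(2 * np.pi * f_carrier * t) + 3.0 * rng.standard_normal(n)

    spectrum = np.abs(np.fft.rfft(x))
    f_est = np.fft.rfftfreq(n, d=1 / fs)[np.argmax(spectrum)]
    print(f"estimated carrier: {f_est:.1f} Hz")  # close to 1234.5, bin-limited
    ```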

  11. Protective interior wall and attaching means for a fusion reactor vacuum vessel

    DOEpatents

    Phelps, Richard D.; Upham, Gerald A.; Anderson, Paul M.

    1988-01-01

    An array of connected plates is mounted on the inside wall of the vacuum vessel of a magnetic confinement reactor to provide a protective surface for energy deposition inside the vessel. All fasteners are concealed and protected beneath the plates, while the plates themselves share common mounting points. The entire array is installed with torqued nuts on threaded studs; provision also exists for thermal expansion by mounting each plate with two of its four mounts captured in an oversize grooved spool. Spool-washer mounting hardware allows one edge of a protective plate to be torqued while the other side remains loose, by simply inverting the spool-washer hardware.

  12. MIDAS - A microcomputer-based image display and analysis system with full Landsat frame processing capabilities

    NASA Technical Reports Server (NTRS)

    Hofman, L. B.; Erickson, W. K.; Donovan, W. E.

    1984-01-01

    The Microcomputer-based Image Display and Analysis System (MIDAS), developed at NASA/Ames for the analysis of Landsat MSS images, is described. The MIDAS computer power and memory, graphics, resource-sharing, expansion and upgrade, environment and maintenance, and software/user-interface requirements are outlined; the implementation hardware (including a 32-bit microprocessor, 512K of error-correcting RAM, a 70- or 140-Mbyte formatted disk drive, a 512 x 512 x 24 color frame buffer, and a local-area-network transceiver) and applications software (ELAS, CIE, and P-EDITOR) are characterized; and implementation problems, performance data, and costs are examined. Planned improvements in MIDAS hardware, and design goals and areas of exploration for MIDAS software, are discussed.

  13. Future Challenges in Managing Human Health and Performance Risks for Space Flight

    NASA Technical Reports Server (NTRS)

    Corbin, Barbara J.; Barratt, Michael

    2013-01-01

    The global economy forces many nations to consider their national investments and make difficult decisions regarding their investment in future exploration. To enable safe, reliable, and productive human space exploration, we must pool global resources to understand and mitigate human health & performance risks prior to embarking on human exploration of deep space destinations. Consensus on the largest risks to humans during exploration is required to develop an integrated approach to mitigating risks. International collaboration in human space flight research will focus research on characterizing the effects of spaceflight on humans and the development of countermeasures or systems. Sharing existing data internationally will facilitate high quality research and sufficient power to make sound recommendations. Efficient utilization of ISS and unique ground-based analog facilities allows greater progress. Finally, a means to share results of human research in time to influence decisions for follow-on research, system design, new countermeasures and medical practices should be developed. Although the barriers are formidable, international working groups are working to define the risks, establish international research opportunities, share data among partners, share flight hardware and unique analog facilities, and establish forums for timely exchange of results. Representatives from the ISS partnership research and medical communities developed a list of the top ten human health & performance risks and their impact on exploration missions. They also drafted a multilateral data sharing plan to establish guidelines and principles for sharing human spaceflight data. Other working groups are also developing methods to promote international research solicitations. Collaborative use of analog facilities and shared development of space flight research and medical hardware continues. Establishing a forum for exchange of results between researchers, aerospace physicians and program managers takes careful consideration of researcher concerns and decision maker needs. Active participation by researchers in the development of this forum is essential, and the benefit can be tremendous. The ability to rapidly respond to research results without compromising publication rights and intellectual property will facilitate timely reduction in human health and performance risks in support of international exploration missions.

  14. If We Share Data, Will Anyone Use Them? Data Sharing and Reuse in the Long Tail of Science and Technology

    PubMed Central

    Wallis, Jillian C.; Rolando, Elizabeth; Borgman, Christine L.

    2013-01-01

    Research on practices to share and reuse data will inform the design of infrastructure to support data collection, management, and discovery in the long tail of science and technology. These are research domains in which data tend to be local in character, minimally structured, and minimally documented. We report on a ten-year study of the Center for Embedded Network Sensing (CENS), a National Science Foundation Science and Technology Center. We found that CENS researchers are willing to share their data, but few are asked to do so, and in only a few domain areas do their funders or journals require them to deposit data. Few repositories exist to accept data in CENS research areas. Data sharing tends to occur only through interpersonal exchanges. CENS researchers obtain data from repositories, and occasionally from registries and individuals, to provide context, calibration, or other forms of background for their studies. Neither CENS researchers nor those who request access to CENS data appear to use external data for primary research questions or for replication of studies. CENS researchers are willing to share data if they receive credit and retain first rights to publish their results. Practices of releasing, sharing, and reusing of data in CENS reaffirm the gift culture of scholarship, in which goods are bartered between trusted colleagues rather than treated as commodities. PMID:23935830

  15. If we share data, will anyone use them? Data sharing and reuse in the long tail of science and technology.

    PubMed

    Wallis, Jillian C; Rolando, Elizabeth; Borgman, Christine L

    2013-01-01

    Research on practices to share and reuse data will inform the design of infrastructure to support data collection, management, and discovery in the long tail of science and technology. These are research domains in which data tend to be local in character, minimally structured, and minimally documented. We report on a ten-year study of the Center for Embedded Network Sensing (CENS), a National Science Foundation Science and Technology Center. We found that CENS researchers are willing to share their data, but few are asked to do so, and in only a few domain areas do their funders or journals require them to deposit data. Few repositories exist to accept data in CENS research areas. Data sharing tends to occur only through interpersonal exchanges. CENS researchers obtain data from repositories, and occasionally from registries and individuals, to provide context, calibration, or other forms of background for their studies. Neither CENS researchers nor those who request access to CENS data appear to use external data for primary research questions or for replication of studies. CENS researchers are willing to share data if they receive credit and retain first rights to publish their results. Practices of releasing, sharing, and reusing of data in CENS reaffirm the gift culture of scholarship, in which goods are bartered between trusted colleagues rather than treated as commodities.

  16. Stopping Illicit Procurement: Lessons from Global Finance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hund, Gretchen; Kurzrok, Andrew J.

    Government regulators and the financial sector cooperate to combat money laundering and terrorist financing. This information-sharing relationship is built upon a strong legislative foundation and effective operational procedures. As with money-laundering and terrorist financing, halting the illicit procurement of dual-use commodities requires close coordination between government and industry. However, many of the legal and operational features present in financial threat cooperation do not exist in the export control realm. This article analyzes the applicability of financial industry cooperative measures to nonproliferation.

  17. OpenMP Performance on the Columbia Supercomputer

    NASA Technical Reports Server (NTRS)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (10/20/04). Conceived, designed, built, and deployed in just 120 days, it is a 20-node system built on proven 512-processor nodes. The largest SGI system in the world, with over 10,000 Intel Itanium 2 processors, Columbia provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and at 88% efficiency it tops the scalar systems on the Top500 list.

  18. Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster

    NASA Technical Reports Server (NTRS)

    Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.

  19. Power: a changing commodity.

    PubMed

    Hage, S J

    1991-01-01

    "Rapid and tumultuous change in health care as well as business has precipitated a power shift," declares Mr. Hage in this candid discussion of a quality that is both abstract and concrete. Centralized power is no longer the order of the day; in fact, the new stance supports pushing power down into organizations where it can be better used by those closer to the action. The author maintains that effective participants in this new model will learn to share power and respect knowledge as the only tool that wields it.

  20. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs written with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  1. Viewpoints: A High-Performance High-Dimensional Exploratory Data Analysis Tool

    NASA Astrophysics Data System (ADS)

    Gazis, P. R.; Levit, C.; Way, M. J.

    2010-12-01

    Scientific data sets continue to increase in both size and complexity. In the past, dedicated graphics systems at supercomputing centers were required to visualize large data sets, but as the price of commodity graphics hardware has dropped and its capability has increased, it is now possible, in principle, to view large complex data sets on a single workstation. To do this in practice, an investigator will need software that is written to take advantage of the relevant graphics hardware. The Viewpoints visualization package described herein is an example of such software. Viewpoints is an interactive tool for exploratory visual analysis of large high-dimensional (multivariate) data. It leverages the capabilities of modern graphics boards (GPUs) to run on a single workstation or laptop. Viewpoints is minimalist: it attempts to do a small set of useful things very well (or at least very quickly) in comparison with similar packages today. Its basic feature set includes linked scatter plots with brushing, dynamic histograms, normalization, and outlier detection/removal. Viewpoints was originally designed for astrophysicists, but it has since been used in a variety of fields that range from astronomy, quantum chemistry, fluid dynamics, machine learning, bioinformatics, and finance to information technology server log mining. In this article, we describe the Viewpoints package and show examples of its usage.

  2. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  3. E-Books Mediator: Nicholas Bogaty--Open Ebook Forum, New York

    ERIC Educational Resources Information Center

    Library Journal, 2004

    2004-01-01

    This article is about the work of Nick Bogaty, executive director of the Open eBook Forum. Nick Bogaty is not a librarian, but he plays nicely with them, along with publishers, hardware manufacturers, software producers, database vendors, and disability rights advocates. All are groups that share an interest in making e-books work for their…

  4. Sequoia: A fault-tolerant tightly coupled multiprocessor for transaction processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, P.A.

    1988-02-01

    The Sequoia computer is a tightly coupled multiprocessor, and thus attains the performance advantages of this style of architecture. It avoids most of the fault-tolerance disadvantages of tight coupling by using a new fault-tolerance design. The Sequoia architecture is similar to other multimicroprocessor architectures, such as those of Encore and Sequent, in that it gives dozens of microprocessors shared access to a large main memory. It resembles the Stratus architecture in its extensive use of hardware fault-detection techniques. It resembles Stratus and Auragen in its ability to quickly recover all processes after a single point failure, transparently to the user. However, Sequoia is unique in its combination of a large-scale tightly coupled architecture with a hardware approach to fault tolerance. This article gives an overview of how the hardware architecture and operating systems (OS) work together to provide a high degree of fault tolerance with good system performance.

  5. An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks.

    PubMed

    Chen, Huan-Yuan; Chen, Chih-Chang; Hwang, Wen-Jyi

    2017-09-28

    This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for feature extraction. The resulting feature vectors are classified by a circuit for competitive learning (CL) neural networks. The CL circuit supports both online training and classification. In the proposed architecture, all the channels share the same detection, feature extraction, learning and classification circuits for a low area cost hardware implementation. The clock-gating technique is also employed for reducing the power dissipation. To evaluate the performance of the architecture, an application-specific integrated circuit (ASIC) implementation is presented. Experimental results demonstrate that the proposed circuit exhibits the advantages of a low chip area, a low power dissipation and a high classification success rate for spike sorting.

  6. An Efficient Hardware Circuit for Spike Sorting Based on Competitive Learning Networks

    PubMed Central

    Chen, Huan-Yuan; Chen, Chih-Chang

    2017-01-01

    This study aims to present an effective VLSI circuit for multi-channel spike sorting. The circuit supports the spike detection, feature extraction and classification operations. The detection circuit is implemented in accordance with the nonlinear energy operator algorithm. Both the peak detection and area computation operations are adopted for the realization of the hardware architecture for feature extraction. The resulting feature vectors are classified by a circuit for competitive learning (CL) neural networks. The CL circuit supports both online training and classification. In the proposed architecture, all the channels share the same detection, feature extraction, learning and classification circuits for a low area cost hardware implementation. The clock-gating technique is also employed for reducing the power dissipation. To evaluate the performance of the architecture, an application-specific integrated circuit (ASIC) implementation is presented. Experimental results demonstrate that the proposed circuit exhibits the advantages of a low chip area, a low power dissipation and a high classification success rate for spike sorting. PMID:28956859
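
    The detection stage named in both records, the nonlinear energy operator (NEO), is compact enough to sketch directly: psi[n] = x[n]^2 - x[n-1]*x[n+1], with samples flagged where psi exceeds a threshold. The threshold rule below (a scaled mean of psi) is a common convention assumed for illustration, not necessarily the one used on the chip.

    ```python
    # NEO-based spike detection on a synthetic trace.
    import numpy as np

    def neo_detect(x, c=8.0):
        psi = x[1:-1] ** 2 - x[:-2] * x[2:]         # nonlinear energy operator
        threshold = c * psi.mean()                  # assumed scaling rule
        return np.flatnonzero(psi > threshold) + 1  # indices into x

    rng = np.random.default_rng(1)
    trace = 0.2 * rng.standard_normal(2000)
    trace[500], trace[1500] = 3.0, -2.5             # two synthetic spikes
    print(neo_detect(trace))                        # indices near 500 and 1500
    ```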

  7. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carns, Philip; Harms, Kevin; Jenkins, John

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
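
    As background for the placement discussion above, the sketch below uses rendezvous (highest-random-weight) hashing, a simplified stand-in that shares CRUSH's key property: replica locations are computed deterministically from the object name alone, with no central lookup table. It illustrates the idea only and is not the CRUSH algorithm itself.

    ```python
    # Deterministic pseudo-random replica placement via rendezvous hashing.
    import hashlib

    def place(object_id, servers, replicas=3):
        def score(server):
            digest = hashlib.sha256(f"{object_id}:{server}".encode()).digest()
            return int.from_bytes(digest[:8], "big")
        # The highest-scoring servers for this object hold its replicas.
        return sorted(servers, key=score, reverse=True)[:replicas]

    servers = [f"osd{i}" for i in range(16)]
    print(place("object-42", servers))   # the same 3 servers every run
    ```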

  8. Software and hardware infrastructure for research in electrophysiology

    PubMed Central

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Řondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Štěbeták, Jan

    2014-01-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the whole architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing of electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storing and a hardware stimulator communicating with an EEG amplifier and recording software. PMID:24639646

  9. Software and hardware infrastructure for research in electrophysiology.

    PubMed

    Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Rondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Stěbeták, Jan

    2014-01-01

    As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been primarily developed for the needs of the neuroinformatics laboratory at the University of West Bohemia, the Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the whole architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing of electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storing and a hardware stimulator communicating with an EEG amplifier and recording software.

  10. Development of a Universal Waste Management System

    NASA Technical Reports Server (NTRS)

    Stapleton, Thomas J.; Baccus, Shelley; Broyan, James L., Jr.

    2013-01-01

    NASA is working with a number of commercial companies to develop the next low Earth orbit spacecraft. The hardware volume and weight constraints are similar to or greater than those of the Apollo era. This, coupled with the equally demanding cost challenge of the proposed commercial vehicles, causes much of the Environmental Control and Life Support System (ECLSS) design to be reconsidered. The Waste Collection System (WCS) is within this group of ECLSS hardware. The development to support this new initiative is discussed within. A WCS concept - intended to be common for all the vehicle platforms currently on the drawing board - is being developed. The new concept, referred to as the Universal Waste Management System (UWMS), retains favorable features from previous designs while improving, as needed, on weak areas of the previous Space Shuttle and existing International Space Station (ISS) WCS hardware. The intent is to build a commode that requires less crew time, offers improved cleanliness, and achieves a 75% reduction in volume and weight compared to the previous US ISS/Extended Duration Orbiter WCS developed in the 1990s. The UWMS is most similar to the ISS Development Test Objective (DTO) WCS design. It is understood that the most dramatic cost reduction opportunity occurs at the beginning of the design process. To realize this opportunity, the cost of each similar component between the UWMS and the DTO WCS was determined. The comparison outlined the design changes that would have the greatest impact. The changes resulted in simplifying the approach or eliminating components completely. This initial UWMS paper describes the system layout approach and a few key features of major components. Future papers will describe the UWMS functionality, test results, and components as they are developed.

  11. Demonstration Advanced Avionics System (DAAS) function description

    NASA Technical Reports Server (NTRS)

    Bailey, A. J.; Bailey, D. G.; Gaabo, R. J.; Lahn, T. G.; Larson, J. C.; Peterson, E. M.; Schuck, J. W.; Rodgers, D. L.; Wroblewski, K. A.

    1982-01-01

    The Demonstration Advanced Avionics System, DAAS, is an integrated avionics system utilizing microprocessor technologies, data busing, and shared displays to demonstrate the potential of these technologies for improving the safety and utility of general aviation operations in the late 1980's and beyond. Major hardware elements of the DAAS include a functionally distributed microcomputer complex, an integrated data control center, an electronic horizontal situation indicator, and a radio adaptor unit. All processing and display resources are interconnected by an IEEE-488 bus in order to enhance the overall system effectiveness, reliability, modularity and maintainability. A detailed description of the DAAS architecture, the DAAS hardware, and the DAAS functions is presented. The system is designed for installation and flight test in a NASA Cessna 402-B aircraft.

  12. Secure management of biomedical data with cryptographic hardware.

    PubMed

    Canim, Mustafa; Kantarcioglu, Murat; Malin, Bradley

    2012-01-01

    The biomedical community is increasingly migrating toward research endeavors that are dependent on large quantities of genomic and clinical data. At the same time, various regulations require that such data be shared beyond the initial collecting organization (e.g., an academic medical center). It is of critical importance to ensure that when such data are shared, as well as managed, it is done so in a manner that upholds the privacy of the corresponding individuals and the overall security of the system. In general, organizations have attempted to achieve these goals through deidentification methods that remove explicit and potentially identifying features (e.g., names, dates, and geocodes). However, a growing number of studies demonstrate that deidentified data can be reidentified to named individuals using simple automated methods. As an alternative, it was shown that biomedical data could be shared, managed, and analyzed through practical cryptographic protocols without revealing the contents of any particular record. Yet, such protocols required the inclusion of multiple third parties, which may not always be feasible in the context of trust or bandwidth constraints. Thus, in this paper, we introduce a framework that removes the need for multiple third parties by collocating services to store and to process sensitive biomedical data through the integration of cryptographic hardware. Within this framework, we define a secure protocol to process genomic data and perform a series of experiments to demonstrate that such an approach can be run in an efficient manner for typical biomedical investigations.
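
    The paper's protocol executes inside dedicated cryptographic hardware and is not reproduced in the abstract; the generic Java sketch below only illustrates the underlying idea that records are encrypted before being stored on an untrusted host, so the plaintext is available only where the key is held (in the described framework, inside the secure coprocessor). It uses the standard javax.crypto API; the record format is invented.

      import javax.crypto.Cipher;
      import javax.crypto.KeyGenerator;
      import javax.crypto.SecretKey;
      import javax.crypto.spec.GCMParameterSpec;
      import java.nio.charset.StandardCharsets;
      import java.security.SecureRandom;

      public class RecordEncryption {
          public static void main(String[] args) throws Exception {
              // In the framework described above, the key would live only inside
              // the cryptographic hardware, never in general server memory.
              KeyGenerator keyGen = KeyGenerator.getInstance("AES");
              keyGen.init(256);
              SecretKey key = keyGen.generateKey();

              byte[] iv = new byte[12];              // fresh nonce per record
              new SecureRandom().nextBytes(iv);

              // Encrypt one (invented) genomic record before outsourcing it.
              Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
              cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
              byte[] ciphertext = cipher.doFinal(
                  "patient-42,rs12345,AG".getBytes(StandardCharsets.UTF_8));

              // Decryption would happen only inside the trusted component.
              cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
              System.out.println(new String(cipher.doFinal(ciphertext),
                                            StandardCharsets.UTF_8));
          }
      }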

  13. Secure Management of Biomedical Data With Cryptographic Hardware

    PubMed Central

    Canim, Mustafa; Kantarcioglu, Murat; Malin, Bradley

    2014-01-01

    The biomedical community is increasingly migrating toward research endeavors that are dependent on large quantities of genomic and clinical data. At the same time, various regulations require that such data be shared beyond the initial collecting organization (e.g., an academic medical center). It is of critical importance to ensure that when such data are shared, as well as managed, it is done so in a manner that upholds the privacy of the corresponding individuals and the overall security of the system. In general, organizations have attempted to achieve these goals through deidentification methods that remove explicit and potentially identifying features (e.g., names, dates, and geocodes). However, a growing number of studies demonstrate that deidentified data can be reidentified to named individuals using simple automated methods. As an alternative, it was shown that biomedical data could be shared, managed, and analyzed through practical cryptographic protocols without revealing the contents of any particular record. Yet, such protocols required the inclusion of multiple third parties, which may not always be feasible in the context of trust or bandwidth constraints. Thus, in this paper, we introduce a framework that removes the need for multiple third parties by collocating services to store and to process sensitive biomedical data through the integration of cryptographic hardware. Within this framework, we define a secure protocol to process genomic data and perform a series of experiments to demonstrate that such an approach can be run in an efficient manner for typical biomedical investigations. PMID:22010157

  14. Wake Sensor Evaluation Program and Results of JFK-1 Wake Vortex Sensor Intercomparisons

    NASA Technical Reports Server (NTRS)

    Barker, Ben C., Jr.; Burnham, David C.; Rudis, Robert P.

    1997-01-01

    The overall approach should be to: (1) seek the simplest, sufficiently robust, integrated ground-based sensor systems (wakes and weather) for AVOSS; (2) expand all sensor performance cross-comparisons and data merging in ongoing field deployments; and (3) achieve maximal cost effectiveness through hardware/info sharing. An effective team is in place to accomplish the above tasks.

  15. Parallel Implementation of Triangular Cellular Automata for Computing Two-Dimensional Elastodynamic Response on Arbitrary Domains

    NASA Astrophysics Data System (ADS)

    Leamy, Michael J.; Springer, Adam C.

    In this research we report parallel implementation of a Cellular Automata-based simulation tool for computing elastodynamic response on complex, two-dimensional domains. Elastodynamic simulation using Cellular Automata (CA) has recently been presented as an alternative, inherently object-oriented technique for accurately and efficiently computing linear and nonlinear wave propagation in arbitrarily-shaped geometries. The local, autonomous nature of the method should lead to straightforward and efficient parallelization. We address this notion on symmetric multiprocessor (SMP) hardware using a Java-based object-oriented CA code implementing triangular state machines (i.e., automata) and the MPI bindings written in Java (MPJ Express). We use MPJ Express to reconfigure our existing CA code to distribute a domain's automata to cores present on a dual quad-core shared-memory system (eight total processors). We note that this message passing parallelization strategy is directly applicable to clustered computing, which will be the focus of follow-on research. Results on the shared memory platform indicate nearly-ideal, linear speed-up. We conclude that the CA-based elastodynamic simulator is easily configured to run in parallel, and yields excellent speed-up on SMP hardware.
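
    A minimal sketch of the message-passing pattern such a parallelization uses, written against the MPJ Express (mpiJava-style) API: each rank owns a strip of automata and exchanges ghost cells with its neighbours every step. The one-dimensional ring decomposition and the smoothing rule are placeholders for illustration, not the paper's triangular automata.

      import mpi.*;

      public class HaloExchange {
          public static void main(String[] args) throws Exception {
              MPI.Init(args);
              int rank = MPI.COMM_WORLD.Rank();
              int size = MPI.COMM_WORLD.Size();

              int n = 1024;                         // automata owned by this rank
              double[] state = new double[n + 2];   // plus two ghost cells

              int left  = (rank - 1 + size) % size; // ring topology for illustration
              int right = (rank + 1) % size;

              for (int step = 0; step < 100; step++) {
                  // Exchange boundary states with neighbours (ghost-cell update).
                  MPI.COMM_WORLD.Sendrecv(state, n, 1, MPI.DOUBLE, right, 0,
                                          state, 0, 1, MPI.DOUBLE, left,  0);
                  MPI.COMM_WORLD.Sendrecv(state, 1, 1, MPI.DOUBLE, left,  1,
                                          state, n + 1, 1, MPI.DOUBLE, right, 1);

                  // Update each automaton from its own and its neighbours' states.
                  double[] next = state.clone();
                  for (int i = 1; i <= n; i++) {
                      next[i] = 0.5 * (state[i - 1] + state[i + 1]); // placeholder rule
                  }
                  state = next;
              }
              MPI.Finalize();
          }
      }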

  16. Storage and Retrieval of Large RDF Graph Using Hadoop and MapReduce

    NASA Astrophysics Data System (ADS)

    Farhan Husain, Mohammad; Doshi, Pankil; Khan, Latifur; Thuraisingham, Bhavani

    Handling huge amounts of data scalably has long been a matter of concern, and the same is true for semantic web data. Current semantic web frameworks lack this ability. In this paper, we describe a framework that we built using Hadoop to store and retrieve large numbers of RDF triples. We describe our schema for storing RDF data in the Hadoop Distributed File System. We also present our algorithms to answer a SPARQL query. We make use of Hadoop's MapReduce framework to actually answer the queries. Our results reveal that we can store huge amounts of semantic web data in Hadoop clusters built mostly from cheap commodity-class hardware and still answer queries fast enough. We conclude that ours is a scalable framework, able to handle large amounts of RDF data efficiently.
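
    As a rough illustration of MapReduce-based query answering of this kind (the paper's actual storage schema and query algorithms differ in detail), the job below matches a single SPARQL-style triple pattern against tab-separated triples stored in HDFS; the predicate URI is an invented example.

      import java.io.IOException;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapreduce.Job;
      import org.apache.hadoop.mapreduce.Mapper;
      import org.apache.hadoop.mapreduce.Reducer;
      import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
      import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

      public class TriplePatternJob {

          // Emits (subject, object) for every triple whose predicate matches.
          public static class PatternMapper extends Mapper<Object, Text, Text, Text> {
              @Override
              protected void map(Object key, Text line, Context ctx)
                      throws IOException, InterruptedException {
                  String[] spo = line.toString().split("\t"); // subject \t predicate \t object
                  if (spo.length == 3
                          && spo[1].equals(ctx.getConfiguration().get("pattern.predicate"))) {
                      ctx.write(new Text(spo[0]), new Text(spo[2]));
                  }
              }
          }

          // Collects all object bindings for each matching subject.
          public static class BindingReducer extends Reducer<Text, Text, Text, Text> {
              @Override
              protected void reduce(Text subject, Iterable<Text> objects, Context ctx)
                      throws IOException, InterruptedException {
                  for (Text o : objects) ctx.write(subject, o);
              }
          }

          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              conf.set("pattern.predicate", "<http://xmlns.com/foaf/0.1/name>"); // example
              Job job = Job.getInstance(conf, "triple-pattern");
              job.setJarByClass(TriplePatternJob.class);
              job.setMapperClass(PatternMapper.class);
              job.setReducerClass(BindingReducer.class);
              job.setOutputKeyClass(Text.class);
              job.setOutputValueClass(Text.class);
              FileInputFormat.addInputPath(job, new Path(args[0]));
              FileOutputFormat.setOutputPath(job, new Path(args[1]));
              System.exit(job.waitForCompletion(true) ? 0 : 1);
          }
      }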

  17. Reimagining Building Sensing and Control (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polese, L.

    2014-06-01

    Buildings are responsible for 40% of US energy consumption, and sensing and control technologies are an important element in creating a truly sustainable built environment. Motion-based occupancy sensors are often part of these control systems, but are usually altered or disabled in response to occupants' complaints, at the expense of energy savings. Can we leverage commodity hardware developed for other sectors and embedded software to produce more capable sensors for robust building controls? The National Renewable Energy Laboratory's (NREL) 'Image Processing Occupancy Sensor (IPOS)' is one example of leveraging embedded systems to create smarter, more reliable, multi-function sensors that open the door to new control strategies for building heating, cooling, ventilation, and lighting control. In this keynote, we will discuss how cost-effective embedded systems are changing the state-of-the-art of building sensing and control.

  18. Real-time orthorectification by FPGA-based hardware acceleration

    NASA Astrophysics Data System (ADS)

    Kuo, David; Gordon, Don

    2010-10-01

    Orthorectification, which corrects the perspective distortion of remote sensing imagery and provides accurate geolocation and easy correlation to other images, is a valuable first step in image processing for information extraction. However, the large amount of metadata and the floating-point matrix transformations required to operate on each pixel make this a computation- and I/O (Input/Output)-intensive process. As a result, much imagery is either left unprocessed or loses time-sensitive value in the long processing cycle. However, the computation for each pixel can be reduced substantially by reusing the computational results of neighboring pixels, and can be accelerated by one to two orders of magnitude with a special pipelined hardware architecture. A specialized coprocessor, implemented inside an FPGA (Field Programmable Gate Array) chip and surrounded by vendor-supported hardware IP (Intellectual Property), shares the computation workload with the CPU through a PCI-Express interface. The ultimate speed of one pixel per clock (125 MHz) is achieved by a pipelined systolic array architecture. The optimal partition between software and hardware, the timing profile between image I/O and computation, and the highly automated GUI (Graphical User Interface) that fully exploits this speed increase to maximize overall image production throughput will also be discussed. The software, running on a workstation with the acceleration hardware, orthorectifies 16 megapixels per second, which is 16 times faster than without the hardware. It reduces production time from months to days. A real-life success story of an imaging satellite company that adopted such workstations for its orthorectified imagery production will be presented. Other image processing computations that could be accelerated efficiently by the same approach will also be analyzed.
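
    The abstract does not give the exact recurrence; one standard way to reuse the computational results of neighboring pixels in a projective (perspective) mapping is forward differencing along a scanline, since the numerator and denominator of the transform are linear in the column index. The sketch below illustrates this reduction; coefficient layout and names are invented for the example.

      // For a projective transform, the source coordinates of output pixel (x, y) are
      //   u = (a*x + b*y + c) / (g*x + h*y + 1),  v = (d*x + e*y + f) / (g*x + h*y + 1).
      // Along a scanline (fixed y) all three polynomials are linear in x, so each can
      // be updated with one addition per pixel instead of a full matrix evaluation.
      public final class IncrementalWarp {
          public static void warpRow(double[] coeff, int y, int width,
                                     double[] u, double[] v) {
              double a = coeff[0], b = coeff[1], c = coeff[2];
              double d = coeff[3], e = coeff[4], f = coeff[5];
              double g = coeff[6], h = coeff[7];

              double num1 = b * y + c; // numerator of u at x = 0
              double num2 = e * y + f; // numerator of v at x = 0
              double den  = h * y + 1; // shared denominator at x = 0

              for (int x = 0; x < width; x++) {
                  u[x] = num1 / den;   // one divide (or reciprocal) per pixel remains
                  v[x] = num2 / den;
                  num1 += a;           // incremental update: reuse the neighbour's value
                  num2 += d;
                  den  += g;
              }
          }
      }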

  19. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs while achieving good performance that exceeds that of some commercial tools.

  20. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing.

    PubMed

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-07-17

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the shared sensor information, a formation guidance law for multiple UAVs, covering both circular and close formations, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system.

  1. Formation Flight of Multiple UAVs via Onboard Sensor Information Sharing

    PubMed Central

    Park, Chulwoo; Cho, Namhoon; Lee, Kyunghyun; Kim, Youdan

    2015-01-01

    To monitor large areas or simultaneously measure multiple points, multiple unmanned aerial vehicles (UAVs) must be flown in formation. To perform such flights, sensor information generated by each UAV should be shared via communications. Although a variety of studies have focused on algorithms for formation flight, these studies have mainly demonstrated the performance of formation flight using numerical simulations or ground robots, which do not reflect the dynamic characteristics of UAVs. In this study, an onboard sensor information sharing system and formation flight algorithms for multiple UAVs are proposed. The communication delays of radiofrequency (RF) telemetry are analyzed to enable the implementation of the onboard sensor information sharing system. Using the shared sensor information, a formation guidance law for multiple UAVs, covering both circular and close formations, is designed. The hardware system, which includes avionics and an airframe, is constructed for the proposed multi-UAV platform. A numerical simulation is performed to demonstrate the performance of the formation flight guidance and control system for multiple UAVs. Finally, a flight test is conducted to verify the proposed algorithm for the multi-UAV system. PMID:26193281

  2. 3D in the Fast Lane: Render as You Go with the Latest OpenGL Boards.

    ERIC Educational Resources Information Center

    Sauer, Jeff; Murphy, Sam

    1997-01-01

    NT OpenGL hardware allows modelers and animators to work at relatively inexpensive NT workstations in their own offices or homes rather than sharing space and workstation time in expensive studios. Rates seven OpenGL boards and two QuickDraw 3D accelerator boards for Mac users on overall value, wireframe and texture rendering, 2D acceleration, and…

  3. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared-memory-based parallel computational models, and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to widely differing applications.

  4. STS-43 crewmembers perform various tasks on OV-104's aft flight deck

    NASA Image and Video Library

    1991-08-11

    STS043-37-012 (2-11 Aug 1991) --- Three STS-43 astronauts are busy at work onboard the Earth-orbiting space shuttle Atlantis. Astronaut Shannon W. Lucid is pictured performing one of several tests on computer hardware with space station applications in mind. Sharing the aft flight deck with Lucid are Michael A. Baker (left), pilot, and John E. Blaha, mission commander.

  5. Distributed Systems Technology Survey.

    DTIC Science & Technology

    1987-03-01

    and protocols. 2. Hardware Technology. Economic factors were a major reason for the proliferation of distributed systems. Processors, memory, and magnetic and optical...destined messages and perform the appropriate forwarding. There ... agreement that a lightweight process mechanism is essential to support commonly used...Xerox PARC environment [31]. Shared file servers, discussed below, are essential to the success of such a scheme. 11. Security. A distributed

  6. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software enabling large scale without sacrificing fidelity.

  7. A physical layer perspective on access network sharing

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Thomas

    2015-12-01

    Unlike in copper or wireless networks, there is no sharing of resources in fiber access networks yet, other than bit stream access or cable sharing, in which the fibers of a cable are leased to one or multiple operators. Sharing optical resources on a single fiber among multiple operators or different services has not yet been applied. While this would allow for better exploitation of installed infrastructures, there are operational issues that still need to be resolved before this sharing model can be implemented in networks. Operating multiple optical systems and services over a common fiber plant, autonomously and independently from each other, can result in mutual distortions on the physical layer. These distortions will degrade the performance of the involved systems unless precautions are taken in the infrastructure hardware to eliminate them or to reduce them to an acceptable level. Moreover, the infrastructure needs to be designed to support different system technologies and to ensure a guaranteed quality of the end-to-end connections. In this paper, suitable means are proposed for introduction into fiber access infrastructures that will allow shared utilization of the fibers while safeguarding the operational needs and business interests of the involved parties.

  8. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    PubMed

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies using experimental data would benefit from a fast large-scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking the Hopf bifurcation phenomenon and various nonlinear responses of biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic biological calcium behaviors with remarkably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.

  9. 17 CFR 4.41 - Advertising by commodity pool operators, commodity trading advisors, and the principals thereof.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... operators, commodity trading advisors, and the principals thereof. 4.41 Section 4.41 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Advertising § 4.41 Advertising by commodity pool operators, commodity trading advisors, and the...

  10. Accelerating epistasis analysis in human genetics with consumer graphics hardware.

    PubMed

    Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H

    2009-07-24

    Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers, Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price-to-performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature-rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore, this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
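
    Taken at face value, the cost figures quoted above imply a price-performance advantage of roughly forty to one at equal throughput:

      $$ \frac{\$82{,}500}{\$2{,}000} \;\approx\; 41 $$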

  11. Fast 2D flood modelling using GPU technology - recent applications and new developments

    NASA Astrophysics Data System (ADS)

    Crossley, Amanda; Lamb, Rob; Waller, Simon; Dunning, Paul

    2010-05-01

    In recent years there has been considerable interest amongst scientists and engineers in exploiting the potential of commodity graphics hardware for desktop parallel computing. The Graphics Processing Units (GPUs) that are used in PC graphics cards have now evolved into powerful parallel co-processors that can be used to accelerate the numerical codes used for floodplain inundation modelling. We report in this paper on experience over the past two years in developing and applying two dimensional (2D) flood inundation models using GPUs to achieve significant practical performance benefits. Starting with a solution scheme for the 2D diffusion wave approximation to the 2D Shallow Water Equations (SWEs), we have demonstrated the capability to reduce model run times in 'real-world' applications using GPU hardware and programming techniques. We then present results from a GPU-based 2D finite volume SWE solver. A series of numerical test cases demonstrate that the model produces outputs that are accurate and consistent with reference results published elsewhere. In comparisons conducted for a real world test case, the GPU-based SWE model was over 100 times faster than the CPU version. We conclude with some discussion of practical experience in using the GPU technology for flood mapping applications, and for research projects investigating use of Monte Carlo simulation methods for the analysis of uncertainty in 2D flood modelling.

  12. GPU-Accelerated Molecular Modeling Coming Of Age

    PubMed Central

    Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.

    2010-01-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161

  13. Dense and Sparse Matrix Operations on the Cell Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid

    2005-05-01

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. Therefore, the high performance computing community is examining alternative architectures that address the limitations of modern superscalar designs. In this work, we examine STI's forthcoming Cell processor: a novel, low-power architecture that combines a PowerPC core with eight independent SIMD processing units coupled with a software-controlled memory to offer high FLOP/s/Watt. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop an analytic framework to predict Cell performance on dense and sparse matrix operations, using a variety of algorithmic approaches. Results demonstrate Cell's potential to deliver more than an order of magnitude better GFLOP/s per watt performance, when compared with the Intel Itanium2 and Cray X1 processors.
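
    The authors' analytic framework itself is not reproduced in the abstract; analytic prediction of this kind typically bounds execution time by the slower of computation and memory traffic, in the spirit of the generic model below (an illustration, not necessarily the authors' exact formulation), where F is the floating-point work, Q the bytes moved through the software-controlled memory, and P_peak and beta_mem the peak compute rate and sustained memory bandwidth:

      $$ T \;\gtrsim\; \max\!\left( \frac{F}{P_{\mathrm{peak}}},\; \frac{Q}{\beta_{\mathrm{mem}}} \right) $$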

  14. Approximating the Generalized Voronoi Diagram of Closely Spaced Objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, John; Daniel, Eric; Pascucci, Valerio

    2015-06-22

    We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.

  15. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.

  16. Rare Earth Metals: Resourcefulness and Recovery

    NASA Astrophysics Data System (ADS)

    Wang, Shijie

    2013-10-01

    When we appreciate the digital revolution carried over from the twentieth century with mobile communication and the Internet, and when we enjoy our high-tech lifestyle filled with iDevices, hybrid cars, wind turbines, and solar cells in this new century, we should also appreciate that all of these advanced products depend on rare earth metals to function. Although annual worldwide demand is only 136,000 tons (Cho, Rare Earth Metals, Will We Have Enough?)1, rare earth metals are becoming hot commodities on international markets, due not only to their increasing uses, including in most critical military hardware, but also to the position of China, which accounts for 95% of global rare earth metal production. Hence, the 2013 technical calendar topic, planned by the TMS/Hydrometallurgy and Electrometallurgy Committee, is particularly relevant, with four articles (including this commentary) contributed to the JOM October issue discussing rare earth metals' resourcefulness and recovery.

  17. GPU-accelerated molecular modeling coming of age.

    PubMed

    Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus

    2010-09-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.

  18. Two-party secret key distribution via a modified quantum secret sharing protocol.

    PubMed

    Grice, W P; Evans, P G; Lawrie, B; Legré, M; Lougovski, P; Ray, W; Williams, B P; Qi, B; Smith, A M

    2015-03-23

    We present and demonstrate a novel protocol for distributing secret keys between two and only two parties based on N-party single-qubit Quantum Secret Sharing (QSS). We demonstrate our new protocol with N = 3 parties using phase-encoded photons. We show that any two out of N parties can build a secret key based on partial information from each other and with collaboration from the remaining N - 2 parties. Our implementation allows for an accessible transition between N-party QSS and arbitrary two-party QKD without modification of hardware. In addition, our approach significantly reduces the number of resources such as single photon detectors, lasers and dark fiber connections needed to implement QKD.
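
    For context, a textbook sketch of the N-party single-qubit QSS on which such protocols build (not the specifics of the modified protocol above): a qubit prepared in |+> is passed through all parties, each applying a private phase, and is finally measured in the X basis,

      $$ |+\rangle \;\longrightarrow\; \tfrac{1}{\sqrt{2}}\left( |0\rangle + e^{\,i\sum_j \varphi_j}\,|1\rangle \right), \qquad P(+) \;=\; \cos^2\!\left( \tfrac{1}{2}\sum_j \varphi_j \right) $$

    Rounds in which the phase sum is a multiple of pi give deterministic, correlated outcomes, from which the collaborating parties distil the shared key.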

  19. Optimizing CMS build infrastructure via Apache Mesos

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad

    2015-12-01

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient in the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos-enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  20. Software Hardware Asset Reuse Enterprise (SHARE) Repository Framework: Related Work and Development Plan

    DTIC Science & Technology

    2009-08-19

    designed to collect the data and assist the analyst in drawing relationships between the data. Palantir Technologies has created one such software...application to support the DoD intelligence community by providing robust capabilities for managing data from various sources10. The Palantir tool...www.palantirtech.com/ [Figure 17: Palantir Graphical Interface (Gordon-Schlosberg, 2008)] Similar examples of the use of ontologies to support data

  1. Multi-National Information Sharing -- Cross Domain Collaborative Information Environment (CDCIE) Solution. Revision 4

    DTIC Science & Technology

    2005-04-12

    Hardware, Database, and Operating System independence using Java • Enterprise-class Architecture using Java2 Enterprise Edition 1.4 • Standards based...portal applications. Compliance with the Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote Portals...authentication and authorization • Portal Standards using Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote

  2. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  3. Information Sharing for Computing Trust Metrics on COTS Electronic Components

    DTIC Science & Technology

    2008-09-01

    a. Standard SDLCs; b. The Waterfall Model; c. V-shaped Model...development of a system. There are many well-known SDLC models, the most popular of which are: • Waterfall • V-shaped • Spiral • Agile. a. Standard...the SDLC or applied to the software and hardware distribution chain. A. Jøsang's Model Defined. Jøsang expresses "opinions" mathematically as:

  4. Ethanol for a sustainable energy future.

    PubMed

    Goldemberg, José

    2007-02-09

    Renewable energy is one of the most efficient ways to achieve sustainable development. Increasing its share in the world matrix will help prolong the existence of fossil fuel reserves, address the threats posed by climate change, and enable better security of the energy supply on a global scale. Most of the "new renewable energy sources" are still undergoing large-scale commercial development, but some technologies are already well established. These include Brazilian sugarcane ethanol, which, after 30 years of production, is a global energy commodity that is fully competitive with motor gasoline and appropriate for replication in many countries.

  5. 17 CFR 5.4 - Applicability of part 4 of this chapter to commodity pool operators and commodity trading advisors.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... this chapter to commodity pool operators and commodity trading advisors. 5.4 Section 5.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION OFF-EXCHANGE FOREIGN CURRENCY TRANSACTIONS § 5.4 Applicability of part 4 of this chapter to commodity pool operators and commodity trading advisors. Part 4 of...

  6. 17 CFR 5.4 - Applicability of part 4 of this chapter to commodity pool operators and commodity trading advisors.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... this chapter to commodity pool operators and commodity trading advisors. 5.4 Section 5.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION OFF-EXCHANGE FOREIGN CURRENCY TRANSACTIONS § 5.4 Applicability of part 4 of this chapter to commodity pool operators and commodity trading advisors. Part 4 of...

  7. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high-performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted at distributed-memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared-memory architectures as well.
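
    The reported communication-limited scaling is captured by a simple, illustrative speedup model: with single-processor time T_1 and per-frame communication cost T_comm(p),

      $$ S(p) \;=\; \frac{T_1}{\,T_1/p \;+\; T_{\mathrm{comm}}(p)\,} $$

    Once the per-processor work T_1/p falls below the communication term, adding processors yields diminishing returns, consistent with the behaviour observed at large processor counts.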

  8. Design and specification of a centralized manufacturing data management and scheduling system

    NASA Technical Reports Server (NTRS)

    Farrington, Phillip A.

    1993-01-01

    As was revealed in a previous study, the Materials and Processes Laboratory's Productivity Enhancement Complex (PEC) has a number of automated production areas/cells that are not effectively integrated, limiting the ability of users to readily share data. The recent decision to utilize the PEC for the fabrication of flight hardware has focused new attention on the problem and brought to light the need for an integrated data management and scheduling system. This report addresses this need by developing preliminary design specifications for a centralized manufacturing data management and scheduling system for managing flight hardware fabrication in the PEC. This prototype system will be developed under the auspices of the Integrated Engineering Environment (IEE) Oversight Team and the IEE Committee. At their recommendation, the system specifications were based on the fabrication requirements of the AXAF-S Optical Bench.

  9. The JPL telerobotic Manipulator Control and Mechanization (MCM) subsystem

    NASA Technical Reports Server (NTRS)

    Hayati, Samad; Lee, Thomas S.; Tso, Kam; Backes, Paul; Kan, Edwin; Lloyd, J.

    1989-01-01

    The Manipulator Control and Mechanization (MCM) subsystem of the telerobot system provides the real-time control of the robot manipulators in autonomous and teleoperated modes and real-time input/output for a variety of sensors and actuators. This subsystem includes substantial hardware and software and interfaces with the other subsystems in the hierarchy of the telerobot system. The other subsystems are: run time control, task planning and reasoning, sensing and perception, and the operator control subsystem. The architecture of the MCM subsystem, its capabilities, and details of various hardware and software elements are described. Important improvements in the MCM subsystem over the first version are: dual-arm coordinated trajectory generation and control, the addition of integrated teleoperation, shared control capability, replacement of the ultimate controllers with motor controllers, and a substantial increase in real-time processing capability.

  10. 17 CFR 4.13 - Exemption from registration as a commodity pool operator.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... a commodity pool operator. 4.13 Section 4.13 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS General Provisions, Definitions and Exemptions § 4.13 Exemption from registration as a commodity pool operator. This section is...

  11. 17 CFR 32.9 - Fraud in connection with commodity option transactions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Fraud in connection with commodity option transactions. 32.9 Section 32.9 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF COMMODITY OPTION TRANSACTIONS § 32.9 Fraud in connection with commodity...

  12. The Principles and the Specifics of Trading in Commodities

    NASA Astrophysics Data System (ADS)

    Baran, Dušan; Herbacsková, Anita

    2012-12-01

    In the present period of instability on financial markets, investment in commodities is a way to offset the consequences of inflation and to secure a yield. Investing in commodities exploits characteristics that are specific to commodities in comparison with other assets. Commodities can be grouped into agricultural commodities, energy commodities, precious and other metals, and weather. For these reasons commodities have their place in the investment portfolio, and they have become one of the most popular types of investment assets. Trading in particular commodities usually means trading commodity futures, because the spot market for commodities is constrained by limited storage facilities. The growing popularity of trading across a wide range of commodities has meant that, in addition to institutional investors and speculators, even small investors may become involved. The discussion is supplemented by charts and figures, which are commented on and used to generalize the findings. Finally, the article interprets the further development of the commodity market on the basis of the results of the research.

  13. Can your software engineer program your PLC?

    NASA Astrophysics Data System (ADS)

    Borrowman, Alastair J.; Taylor, Philip

    2016-07-01

    The use of Programmable Logic Controllers (PLCs) in the control of large physics experiments is ubiquitous1, 2, 3. The programming of these controllers is normally the domain of engineers with a background in electronics; this paper introduces PLC program development from the software engineer's perspective. PLC programs provide the link between control software running on PC-architecture systems and physical hardware controlled and monitored by digital and analog signals. The higher-level software running on the PC is typically responsible for accepting operator input and deciding from it when and how hardware connected to the PLC is controlled. The PLC accepts demands from the PC, considers the current state of its connected hardware and, if correct to do so (based upon interlocks or other constraints), adjusts its hardware output signals appropriately for the PC's demands. A published ICD (Interface Control Document) defines the PLC memory locations available to be written and read by the PC to control and monitor the hardware. Historically, the method of programming PLCs has been ladder diagrams that closely resemble circuit diagrams; however, PLC manufacturers nowadays also provide, and promote, the use of higher-level programming languages4. Based on techniques used in the development of high-level PC software to control PLCs for multiple telescopes, this paper examines the development of PLC programs to operate the hardware of a medical cyclotron beamline controlled from a PC using the Experimental Physics and Industrial Control System (EPICS), which is also widely used in telescope control5, 6, 7. The PLC used is the new-generation Siemens S7-1200, programmed using Siemens' Pascal-based Structured Control Language (SCL), their implementation of Structured Text (ST). The approach described is from a software engineer's perspective, utilising the Siemens Totally Integrated Automation (TIA) Portal integrated development environment (IDE) to create modular PLC programs based upon reusable functions capable of being unit tested without the PLC connected to hardware. Emphasis has been placed on designing an interface between EPICS and SCL that enforces correct operation of hardware through stringent separation of PC-accessible PLC memory from the hardware I/O addresses used only by the PLC. The paper also introduces the method used to automate the creation, from the same source document, of the PLC memory structure (tag) definitions (defining the memory used to access hardware I/O and that accessed by the PC) and of the PC program data structures (EPICS database records) used to access the permitted PLC addresses. From direct experience, this paper demonstrates the advantages of PLC program development being shared between electronics and software engineers, enabling use of the most appropriate processes from the perspectives of both the hardware and the higher-level software used to control it.
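
    As a loose illustration of that separation (all names, addresses and the PlcClient transport below are hypothetical, not taken from the paper's ICD), the PC-side code can wrap the published ICD so that only the permitted data-block addresses are ever read or written, while hardware I/O addresses remain the exclusive business of the PLC program:

      // Hypothetical transport, e.g. backed by an S7 communication driver.
      interface PlcClient {
          void writeBit(int address, boolean value);
          boolean readBit(int address);
      }

      // PC-side view of the ICD: only published data-block addresses are exposed.
      public final class BeamlineIcd {
          // Writable demand word (PC -> PLC), as published in the ICD (illustrative).
          private static final int DB_DEMAND_SHUTTER_OPEN = 100;
          // Read-only status word (PLC -> PC), also illustrative.
          private static final int DB_STATUS_SHUTTER_OPEN = 200;

          private final PlcClient plc;

          public BeamlineIcd(PlcClient plc) { this.plc = plc; }

          public void demandShutterOpen(boolean open) {
              plc.writeBit(DB_DEMAND_SHUTTER_OPEN, open); // PC requests...
          }

          public boolean shutterIsOpen() {
              return plc.readBit(DB_STATUS_SHUTTER_OPEN); // ...PLC decides, PC observes
          }
      }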

  14. Data Telemetry and Acquisition System for Acoustic Signal Processing Investigations.

    DTIC Science & Technology

    1996-02-20

    were VME-based computer systems operating under the VxWorks real-time operating system. Each system shared a common hardware and software...real-time operating system. It interfaces to the Berg PCM Decommutator board, which searches for the embedded synchronization word in the data and re...software were built on top of this architecture. The multi-tasking, message queue and memory management facilities of the VxWorks real-time operating system are

  15. Engineering computer graphics in gas turbine engine design, analysis and manufacture

    NASA Technical Reports Server (NTRS)

    Lopatka, R. S.

    1975-01-01

    A time-sharing and computer graphics facility designed to provide effective interactive tools to a large number of engineering users with varied requirements is described. The application of computer graphics displays at several levels of hardware complexity and capability is discussed, with examples of graphics systems tracing gas turbine product development from preliminary design through manufacture. Highlights of an operating system tailored for interactive engineering graphics are described.

  16. Disgust and Shame Based Safe Water and Handwashing Promotion

    ClinicalTrials.gov

    2015-06-11

    Develop a New Group Version of the Becker-DeGroot-Marschak (BDM) Auction to Measure Willingness to Pay of Compound Members for Shared Hardware.; Develop a New Survey Instrument to Measure Behavioural Determinants of Hand Washing and Water Treatment Like Disgust and Shame or Social Pressure.; Identify New Methods for Measuring Hand Washing and Water Treatment Behaviour.; Compare the Effectiveness of the Disgust and Shame Based Interventions With Standard Public Health Interventions.

  17. Hierarchical Process Composition: Dynamic Maintenance of Structure in a Distributed Environment

    DTIC Science & Technology

    1988-01-01

    One prominent line of research stresses the independence of address space and thread of control, and the resulting efficiencies due to shared memory...cooperating processes. StarOS focuses on ease of use and a general capability mechanism, while Medusa stresses the effect of distributed hardware on system...process structure and the asynchrony among agents and between agents and sources of failure. By stressing dynamic structure, we are led to adopt an

  18. Commercial Digital/ADP Equipment in the Ocean Environment. Volume 2. User Appendices

    DTIC Science & Technology

    1978-12-15

    is that the LINDA system uses a minicomputer with time-sharing system software which allows several terminals to be operated at the same time...Acquisition System (ODAS) consists of sensors, computer hardware and computer software. Certain sensors are interfaced to the computers for real time...on USNS KANE, USNS BENT, and USNS WILKES. Commercial automatic data processing equipment used in ODAS includes (item/model): Computer, PDP-9; Tape

  19. 17 CFR 32.13 - Exemption from prohibition of commodity option transactions for trade options on certain...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Exemption from prohibition of commodity option transactions for trade options on certain agricultural commodities. 32.13 Section 32.13 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF COMMODITY OPTION TRANSACTIONS § 32.13 Exemption from...

  20. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

    Radio-frequency interference (RFI) is a known problem for passive remote sensing as evidenced in the L-band radiometers SMOS, Aquarius and more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies larger bandwidths are also desirable for lower measurement noise further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as show that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons in both simulation as well as experimental hardware implementations were done with the use of receiver operating characteristic (ROC) curves.
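
    For reference, the kurtosis test compares the fourth and second sample moments of a block of samples: a real Gaussian signal has m4/m2^2 = 3, while for a circularly symmetric complex Gaussian signal the analogous ratio E|z|^4 / (E|z|^2)^2 equals 2, so significant departures indicate RFI. A minimal sketch of the real-kurtosis test follows; block sizes and the threshold are illustrative, not SMAP's operational settings.

      public final class KurtosisDetector {
          /** Sample kurtosis m4 / m2^2 of one block of real samples. */
          public static double kurtosis(double[] x) {
              double mean = 0;
              for (double v : x) mean += v;
              mean /= x.length;

              double m2 = 0, m4 = 0;
              for (double v : x) {
                  double d = v - mean;
                  m2 += d * d;
                  m4 += d * d * d * d;
              }
              m2 /= x.length;
              m4 /= x.length;
              return m4 / (m2 * m2);
          }

          /** Flags a block as RFI-contaminated when its kurtosis departs
           *  from the Gaussian expectation of 3 by more than a threshold. */
          public static boolean hasRfi(double[] block, double threshold) {
              return Math.abs(kurtosis(block) - 3.0) > threshold;
          }
      }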

  1. Real-time optimizations for integrated smart network camera

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois

    2005-02-01

    We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, highlights alarms and computes statistics. The system also produces meta-data information that can be shared among other cameras in a network. We describe the requirements of such a system and then show how its design is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking and event detection must be highly optimized and simplified to run on this hardware. To achieve a good match between hardware and software in this light embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi alliance. We can easily integrate software and hardware in complex environments thanks to the Java Real-Time specification for the virtual machine and several network- and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies for such a camera, such as counter-flow detection.
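
    A minimal sketch of one simple form of the background differencing mentioned above, using a running-average background model; the parameters are illustrative, and the camera's actual processing chain is considerably more elaborate.

      public final class BackgroundDifferencer {
          private final float[] background; // running-average background model
          private final float alpha;        // adaptation rate, 0 < alpha < 1
          private final int threshold;      // foreground decision threshold

          public BackgroundDifferencer(int pixels, float alpha, int threshold) {
              this.background = new float[pixels];
              this.alpha = alpha;
              this.threshold = threshold;
          }

          /** Returns a foreground mask and updates the background model. */
          public boolean[] process(int[] grayFrame) {
              boolean[] foreground = new boolean[grayFrame.length];
              for (int i = 0; i < grayFrame.length; i++) {
                  foreground[i] = Math.abs(grayFrame[i] - background[i]) > threshold;
                  // Blend the new frame in slowly so gradual lighting
                  // changes are absorbed into the background model.
                  background[i] = (1 - alpha) * background[i] + alpha * grayFrame[i];
              }
              return foreground;
          }
      }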

  2. The Perfect Neuroimaging-Genetics-Computation Storm: Collision of Petabytes of Data, Millions of Hardware Devices and Thousands of Software Tools

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Zamanyan, Alen; Torri, Federica; Macciardi, Fabio; Hobel, Sam; Moon, Seok Woo; Sung, Young Hee; Jiang, Zhiguo; Labus, Jennifer; Kurth, Florian; Ashe-McNalley, Cody; Mayer, Emeran; Vespa, Paul M.; Van Horn, John D.; Toga, Arthur W.

    2013-01-01

    The volume, diversity and velocity of biomedical data are exponentially increasing providing petabytes of new neuroimaging and genetics data every year. At the same time, tens-of-thousands of computational algorithms are developed and reported in the literature along with thousands of software tools and services. Users demand intuitive, quick and platform-agnostic access to data, software tools, and infrastructure from millions of hardware devices. This explosion of information, scientific techniques, computational models, and technological advances leads to enormous challenges in data analysis, evidence-based biomedical inference and reproducibility of findings. The Pipeline workflow environment provides a crowd-based distributed solution for consistent management of these heterogeneous resources. The Pipeline allows multiple (local) clients and (remote) servers to connect, exchange protocols, control the execution, monitor the states of different tools or hardware, and share complete protocols as portable XML workflows. In this paper, we demonstrate several advanced computational neuroimaging and genetics case-studies, and end-to-end pipeline solutions. These are implemented as graphical workflow protocols in the context of analyzing imaging (sMRI, fMRI, DTI), phenotypic (demographic, clinical), and genetic (SNP) data. PMID:23975276

  3. Neurolab: Final Report for the Ames Research Center Payload

    NASA Technical Reports Server (NTRS)

    Maese, A. Christopher (Editor); Ostrach, Louis H. (Editor); Dalton, Bonnie P. (Technical Monitor)

    2002-01-01

    Neurolab, the final Spacelab mission, launched on STS-90 on April 17, 1998, was dedicated to studying the nervous system. NASA cooperated with domestic and international partners to conduct the mission. ARC's (Ames Research Center's) payload included 15 experiments designed to study the adaptation and development of the nervous system in microgravity. The payload had the largest number of Principal and Co-Investigators, the largest complement of habitats and experiment-unique equipment flown to date, and the most diverse distribution of live specimens ever undertaken by ARC, including rodents, toadfish, swordtail fish, water snails, hornweed and crickets. To facilitate tissue sharing and optimization of science objectives, investigators were grouped into four science discipline teams: Neuronal Plasticity, Mammalian Development, Aquatic, and Neurobiology. Several payload development challenges were experienced and required an extraordinary effort by all involved to meet the launch schedule. With respect to hardware and the total amount of recovered science, Neurolab was regarded as an overall success. However, a high mortality rate in one rodent group and several hardware anomalies occurred in flight that warranted postflight investigations. Hardware, science, and operations lessons were learned that should be taken into consideration by teams developing payloads for future Shuttle missions and the International Space Station.

  4. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motion. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes beyond the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence Database simplifies access to multi-terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described, and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
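
    The scale decomposition that the wavelet support enables can be illustrated with a standard discrete wavelet transform; the sketch below uses the PyWavelets library as a stand-in for the database's own wavelet layer, splitting a synthetic 1-D velocity signal into large- and small-scale parts. The wavelet, level, and signal are illustrative assumptions.

        import numpy as np
        import pywt

        # Synthetic 1-D "velocity" signal: a large-scale wave plus fine-scale noise
        x = np.linspace(0, 1, 1024)
        u = np.sin(2 * np.pi * 4 * x) + 0.2 * np.random.default_rng(2).standard_normal(x.size)

        # Three-level discrete wavelet transform
        coeffs = pywt.wavedec(u, 'db4', level=3)

        # Reconstruct the large scales only: zero out all detail coefficients
        large = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        u_large = pywt.waverec(large, 'db4')[:u.size]
        u_small = u - u_large                # the remaining fine-scale part

        print(u_large.shape, float(np.mean(u_small ** 2)))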

  5. The Open Data Repository's Data Publisher

    NASA Astrophysics Data System (ADS)

    Stone, N.; Lafuente, B.; Downs, R. T.; Bristow, T.; Blake, D. F.; Fonda, M.; Pires, A.

    2015-12-01

    Data management and data publication are becoming increasingly important components of research workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power have greatly increased. The Open Data Repository's Data Publisher software (http://www.opendatarepository.org) strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data, with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level; access can be granted to the general public or protected so that only registered users at various permission levels receive it. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data, and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself remains living and continues to evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barriers to entry so that any researcher can easily publish their data and ensure it is archived for posterity. We gratefully acknowledge the support for this study by the Science-Enabling Research Activity (SERA), and NASA NNX11AP82A, Mars Science Laboratory Investigations and University of Arizona Geosciences.
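
    The REST interface is only announced above, not specified, so the snippet below is a purely hypothetical illustration of the kind of programmatic access described; the base URL, endpoint, query parameters, and field names are all invented for the example.

        import requests

        # Hypothetical endpoint and parameters; the actual Data Publisher API
        # is only announced in the abstract, not specified there.
        BASE = "https://example.org/odr/api/v1"

        resp = requests.get(f"{BASE}/databases/42/records",
                            params={"q": "olivine", "page": 1})
        resp.raise_for_status()
        for record in resp.json().get("records", []):
            print(record.get("id"), record.get("title"))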

  6. Implementing Shared Memory Parallelism in MCBEND

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Long, David; Dobson, Geoff

    2017-09-01

    MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.
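
    MCBEND itself uses OpenMP; purely as an illustration of the shared-memory idea it exploits (one copy of a large model visible to many workers instead of one copy per process), here is a Python analogy built on multiprocessing.shared_memory. This is a stand-in for the concept, not a rendering of the MCBEND design; the array size and worker count are arbitrary.

        import numpy as np
        from multiprocessing import Process, shared_memory

        def worker(shm_name, shape, lo, hi):
            # Each worker reads the single shared model; no per-process copy.
            shm = shared_memory.SharedMemory(name=shm_name)
            model = np.ndarray(shape, dtype=np.float64, buffer=shm.buf)
            total = model[lo:hi].sum()       # stand-in for tracking particles
            shm.close()
            print(f"rows {lo}:{hi} -> {total:.1f}")

        if __name__ == "__main__":
            data = np.arange(1_000_000, dtype=np.float64)
            shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
            shared = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
            shared[:] = data                  # one copy of the "model" in memory
            ps = [Process(target=worker,
                          args=(shm.name, data.shape, i * 250_000, (i + 1) * 250_000))
                  for i in range(4)]
            for p in ps: p.start()
            for p in ps: p.join()
            shm.close(); shm.unlink()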

  7. Adaptable state based control system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Dvorak, Daniel L. (Inventor); Gostelow, Kim P. (Inventor); Starbird, Thomas W. (Inventor); Gat, Erann (Inventor); Chien, Steve Ankuo (Inventor); Keller, Robert M. (Inventor)

    2004-01-01

    An autonomous controller, composed of a state knowledge manager, a control executor, hardware proxies and a statistical estimator, collaborates with a goal elaborator, with which it shares common models of the behavior of the system and the controller. The elaborator uses the common models to generate, from temporally indeterminate sets of goals, executable goals to be executed by the controller. The controller may be updated to operate in a different system or environment than that for which it was originally designed by the replacement of the shared statistical models and by the instantiation of a new set of state variable objects derived from a state variable class. The adaptation of the controller does not require substantial modification of the goal elaborator for its application to the new system or environment.

  8. Design distributed simulation platform for vehicle management system

    NASA Astrophysics Data System (ADS)

    Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua

    2006-11-01

    Next-generation military aircraft require high performance from the airborne management system. General modules, data integration, a high-speed data bus and so on are needed to share and manage the information of the subsystems efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system and so on. The unattached or mixed architecture is replaced by an integrated architecture; that is, the whole airborne system is treated as one system to manage. The physical devices are thus distributed, but the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration); furthermore, the sensors and the signal processing functions are shared. This also lays a foundation for power sharing. We establish a distributed vehicle management system using a 1553B bus and distributed processors, which provides a validation platform for research on integrated management of airborne systems. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses the software and hardware configuration, and analyzes the communication and fault-tolerance methods.

  9. 17 CFR 5.4 - Applicability of part 4 of this chapter to commodity pool operators and commodity trading advisors.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Applicability of part 4 of this chapter to commodity pool operators and commodity trading advisors. 5.4 Section 5.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION OFF-EXCHANGE FOREIGN CURRENCY TRANSACTIONS § 5.4...

  10. Perspectives on Extremes as a Climate Scientist and Farmer

    NASA Astrophysics Data System (ADS)

    Grotjahn, R.

    2016-12-01

    The speaker is both a climate scientist whose research emphasizes climate extremes and a small farmer in the most agriculturally productive region in the world. He will share some perspectives about the future of extremes over the United States as they relate to farming. General information will be drawn from the National Climate Assessment (NCA) published in 2014. Different weather-related quantities are useful for different commodities. While plant and animal production are time-integrative, extreme events can cause lasting harm long after the event is over. Animal production, including dairy, is sensitive to combinations of high heat and humidity; lasting impacts include suspended milk production, aborted fetuses, and increased mortality. The rice crop can be devastated by the wrong combination of wind and humidity just before harvest time. In tree crops, extremes at the bud-break, flowering, and nascent-fruit stages can greatly reduce fruit production for the year. Saturated soils from heavy rainfall cause major losses to some crops (for example, by fostering pathogen growth), harm water delivery systems, and disrupt the timing of field activities (primarily harvest). After an overview of some general issues relating to agriculture, extreme weather impacts on specific commodities (primarily dairy and specialty crops, some grains) will be highlighted, including quantities relevant to agriculture. Economic impacts of example extreme events will be summarized. If there is interest, issues related to water availability and management will be described. Projected changes in extreme events over the US will be discussed. Some conclusions will be drawn about future impacts and possible changes to farming (some are already occurring). Perspectives will be given on including the diverse range of quantities useful to agriculture when developing climate models. As time permits, some personal experiences with climate change, and with discussing it with fellow farmers, will be shared.

  11. Virtual collaborative environments: programming and controlling robotic devices remotely

    NASA Astrophysics Data System (ADS)

    Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.

    1995-12-01

    This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (like robots, machine tools, etc.). Using appropriate geometric models, integrated sensors, video systems, and computing hardware, computer-controlled resources owned and operated by different (in a geographic as well as legal sense) entities can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototype parts during product design validation. A prototype architecture and system have been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington D.C., Washington State, and Southern California.

  12. Lessons Learned From Developing Three Generations of Remote Sensing Science Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt; Fleig, Albert J.

    2005-01-01

    The Biospheric Information Systems Branch at NASA's Goddard Space Flight Center has developed three generations of Science Investigator-led Processing Systems for use with various remote sensing instruments. The first system is used for data from the MODIS instruments flown on NASA's Earth Observing System (EOS) Terra and Aqua spacecraft, launched in 1999 and 2002 respectively. The second generation is for the Ozone Monitoring Instrument (OMI) flying on the EOS Aura spacecraft launched in 2004. We are now developing a third generation of the system for evaluation science data processing for the Ozone Mapping and Profiler Suite (OMPS) to be flown by the NPOESS Preparatory Project (NPP) in 2006. The initial system was based on large-scale proprietary hardware, operating systems and database systems. The current OMI system and the OMPS system being developed are based on commodity hardware, the Linux operating system and PostgreSQL, an open source RDBMS. The new system distributes its data archive across multiple server hosts and processes jobs on multiple processor boxes. We have created several instances of this system, including one for operational processing, one for testing and reprocessing, and one for applications development and scientific analysis. Prior to receiving the first data from OMI, we applied the system to reprocessing information from the Solar Backscatter Ultraviolet (SBUV) and Total Ozone Mapping Spectrometer (TOMS) instruments flown from 1978 until now. The system was able to process 25 years (108,000 orbits) of data and produce 800,000 files (400 GiB) of level 2 and level 3 products in less than a week. We describe the lessons we have learned and the tradeoffs among system design, hardware, operating systems, operational staffing, user support and operational procedures. With each generation, the system has become more generic and reusable. While the system is not currently shrink-wrapped, we believe it has reached the point where it could be readily adopted, with substantial cost savings, for other similar tasks.

  13. Virtualization for the LHCb Online system

    NASA Astrophysics Data System (ADS)

    Bonaccorsi, Enrico; Brarda, Loic; Moine, Gary; Neufeld, Niko

    2011-12-01

    Virtualization has long been advertised by the IT industry as a way to cut costs, optimise resource usage and manage the complexity of large data-centers. The great number and huge heterogeneity of hardware, both industrial and custom-made, has up to now led to reluctance in the adoption of virtualization in the IT infrastructure of large experiment installations. Our experience in the LHCb experiment has shown that virtualization improves the availability and manageability of the whole system. We have evaluated the available hypervisors / virtualization solutions and find that the Microsoft Hyper-V technology provides a high level of maturity and flexibility for our purpose. We present the results of these comparison tests, describing in detail the architecture of our virtualization infrastructure, with a special emphasis on security for services visible to the outside world. Security is achieved by a sophisticated combination of VLANs, firewalls and virtual routing; the costs and benefits of this solution are analysed. We have adapted our cluster management tools, notably Quattor, to the needs of virtual machines, which allows us to migrate services smoothly from physical machines to the virtualized infrastructure. The procedures for migration are also described. In the final part of the document we describe our recent R&D activities aimed at replacing the SAN backend for virtualization with a cheaper iSCSI solution; this will allow us to move all servers and related services to the virtualized infrastructure, except those doing hardware control via non-commodity PCI plug-in cards.

  14. Optical multicast system for data center networks.

    PubMed

    Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren

    2015-08-24

    We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of the splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast operates with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB, with significantly lower connection overhead. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
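
    The abstract names a resource-allocation algorithm for assigning splitters to flows without describing it, so the following is a hypothetical greedy assignment (largest flow first, to the best-fitting splitter with enough free ports), included only to make the allocation problem concrete; it is not the authors' algorithm.

        def assign_splitters(flows, splitters):
            # Greedily map multicast flows to optical splitters.
            # flows:     list of (flow_id, size_bytes, n_receivers)
            # splitters: dict splitter_id -> free output ports
            # Returns {flow_id: splitter_id}; unassigned flows fall back to
            # the electronic packet-switched network.
            assignment = {}
            for flow_id, size, receivers in sorted(flows, key=lambda f: -f[1]):
                for sid, ports in sorted(splitters.items(), key=lambda s: s[1]):
                    if ports >= receivers:
                        assignment[flow_id] = sid
                        splitters[sid] -= receivers
                        break
            return assignment

        flows = [("f1", 500e6, 8), ("f2", 100e6, 4), ("f3", 80e6, 16)]
        print(assign_splitters(flows, {"s1": 16, "s2": 8}))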

  15. 17 CFR 33.3 - Unlawful commodity option transactions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Unlawful commodity option... REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.3 Unlawful commodity option... of, or maintain a position in, any commodity option transaction subject to the provisions of this...

  16. 49 CFR 1248.100 - Commodity classification designated.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 9 2011-10-01 2011-10-01 false Commodity classification designated. 1248.100... STATISTICS Commodity Code § 1248.100 Commodity classification designated. Commencing with reports for the..., reports of commodity statistics required to be made to the Board, shall be based on the commodity codes...

  17. Open Source Hardware for DIY Environmental Sensing

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Hicks, S. D.; Damiano, S. G.; Montgomery, D. S.

    2014-12-01

    The Arduino open source electronics platform has been very popular within the DIY (Do It Yourself) community for several years, and it is now providing environmental science researchers with an inexpensive alternative to commercial data logging and transmission hardware. Here we present the designs for our latest series of custom Arduino-based dataloggers, which include wireless communication options like self-meshing radio networks and cellular phone modules. The main Arduino board uses a custom interface board to connect to various research-grade sensors to take readings of turbidity, dissolved oxygen, water depth and conductivity, soil moisture, solar radiation, and other parameters. Sensors with SDI-12 communications can be directly interfaced to the logger using our open Arduino-SDI-12 software library (https://github.com/StroudCenter/Arduino-SDI-12). Different deployment options are shown, like rugged enclosures to house the loggers and rigs for mounting the sensors in both fresh water and marine environments. After the data have been collected and transmitted by the logger, they are received by a MySQL-PHP stack running on a web server that can be accessed from anywhere in the world. Once there, the data can be visualized on web pages or served through REST requests and Water One Flow (WOF) services. Since one of the main benefits of using open source hardware is easy collaboration between users, we are introducing a new web platform for discussion and sharing of ideas and plans for hardware and software designs used with DIY environmental sensors and data loggers.

  18. 17 CFR 33.10 - Fraud in connection with commodity option transactions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Fraud in connection with commodity option transactions. 33.10 Section 33.10 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.10 Fraud in...

  19. 17 CFR 32.11 - Suspension of commodity option transactions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Suspension of commodity option... REGULATION OF COMMODITY OPTION TRANSACTIONS § 32.11 Suspension of commodity option transactions. (a... accept money, securities or property in connection with, the purchase or sale of any commodity option, or...

  20. Interoperability Trends in Extravehicular Activity (EVA) Space Operations for the 21st Century

    NASA Technical Reports Server (NTRS)

    Miller, Gerald E.

    1999-01-01

    No other space operations in the 21st century more comprehensively embody the challenges and dependencies of interoperability than EVA. This discipline is already functioning at an unparalleled level of interagency, inter-organizational and international cooperation. This trend will only increase as space programs endeavor to expand in the face of shrinking budgets. Among the topics examined in this paper are hardware-oriented issues. Differences in design standards among the various space participants dictate differences in the EVA tools that must be manufactured, flown and maintained on-orbit. Presently only two types of functional space suits exist in the world. However, three versions of functional airlocks are in operation. Of the three airlocks, only the International Space Station (ISS) Joint Airlock can accommodate both types of suits. Due to functional differences between the suits, completely different operating protocols are required for each. Should additional space suit or airlock designs become available, the complexity will increase. The lessons learned as a result of designing and operating within such a system are explored. This paper also examines the non-hardware challenges presented by interoperability for a discipline that is as uniquely dependent upon the individual as EVA. Operation of space suits (essentially single-person spacecraft) by persons whose native language is not that of the suits' designers is explored. The intricacies of shared mission planning, shared control and shared execution of joint EVAs are explained. For example, once ISS is fully functional, the potential exists for two crewmembers of different nationality to be wearing suits manufactured and controlled by a third nation, while operating within an airlock manufactured and controlled by a fourth nation, in an effort to perform tasks upon hardware belonging to a fifth nation. Everything from training issues, to procedures development and writing, to real-time operations is addressed. Finally, this paper looks to the management challenges presented by interoperability in general. With budgets being reduced among all space-faring nations, the need to expand cooperation in the highly expensive field of human space operations is only going to intensify. The question facing management is not if the trend toward interoperation will continue, but how to best facilitate its doing so. Real-world EVA interoperability experience throughout the Shuttle/Mir and ISS Programs is discussed to illustrate the challenges and

  1. Use of Hawaii Analog Sites for Lunar Science and In-Situ Resource Utilization

    NASA Astrophysics Data System (ADS)

    Sanders, G. B.; Larson, W. E.; Picard, M.; Hamilton, J. C.

    2011-10-01

    In-Situ Resource Utilization (ISRU) and lunar science share similar objectives with respect to analyzing and characterizing the physical, mineral, and volatile materials and resources at sites of robotic and human exploration. To help mature and stress instruments, technologies, and hardware and to evaluate operations and procedures, space agencies have utilized demonstrations at analog sites on Earth before use in future missions. The US National Aeronautics and Space Administration (NASA), the Canadian Space Agency (CSA), and the German Space Agency (DLR) have utilized an analog site on the slope of Mauna Kea on the Big Island of Hawaii to test ISRU and lunar science hardware and operations in two previously held analog field tests. NASA and CSA are currently planning on a 3rd analog field test to be held in June, 2012 in Hawaii that will expand upon the successes from the previous two field tests.

  2. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, clients, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  3. Design of transnational mobile e-payment application based on SIM card

    NASA Astrophysics Data System (ADS)

    Qian, Tang; Zhen, Li

    2018-05-01

    Facing the growing demand for transnational mobile communications and internet-based mobile wireless value-added services, interconnection among multiple communication operators and win-win cooperation between them have become crucial targets in the new round of mobile economic development. Previous research has shown that mobile communications and value-added services are not only technical problems but also, even more, economic ones. We design a general on-card operating system based on the SIM card that is responsible for coordinating and allocating the card's hardware and software resources. Applications such as transnational mobile payment, consumption management and many other supplementary functions share the API interfaces and the hardware and software resources provided by the operating system, although they are independent of each other. The layered structure of the SIM card design not only greatly reduces the complexity of COS development, but also conserves scarce card resources and extends SIM card applications.

  4. Use of Hawaii Analog Sites for Lunar Science and In-Situ Resource Utilization

    NASA Technical Reports Server (NTRS)

    Sanders, G. B.; Larson, W. E.; Picard, M.; Hamilton, J. C.

    2011-01-01

    In-Situ Resource Utilization (ISRU) and lunar science share similar objectives with respect to analyzing and characterizing the physical, mineral, and volatile materials and resources at sites of robotic and human exploration. To help mature and stress instruments, technologies, and hardware and to evaluate operations and procedures, space agencies have utilized demonstrations at analog sites on Earth before use in future missions. The US National Aeronautics and Space Administration (NASA), the Canadian Space Agency (CSA), and the German Space Agency (DLR) have utilized an analog site on the slope of Mauna Kea on the Big Island of Hawaii to test ISRU and lunar science hardware and operations in two previously held analog field tests. NASA and CSA are currently planning on a 3rd analog field test to be held in June, 2012 in Hawaii that will expand upon the successes from the previous two field tests.

  5. An all digital low data rate communication system

    NASA Technical Reports Server (NTRS)

    Chen, C.; Fan, M.

    1973-01-01

    The advent of digital hardware has made it feasible to implement many communication system components digitally. With the exception of frequency down-conversion, the proposed low data rate communication system is implemented entirely with digital hardware. Although the system is designed primarily for deep space communications with large frequency uncertainty and low signal-to-noise ratio, it is also suitable for other low data rate applications with time-shared operation among a number of channels. Emphasis is placed on the fast Fourier transform receiver and the automatic frequency control via digital filtering. The speed available from the digital system allows sophisticated signal processing to reduce frequency uncertainty and to increase the signal-to-noise ratio. The practical limitations of the system, such as finite register length, are examined. It is concluded that the proposed all-digital system is not only technically feasible but also offers potential cost reductions over existing receiving systems.
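
    The role of the FFT receiver, resolving a large carrier-frequency uncertainty at low signal-to-noise ratio, can be sketched as a peak search over the spectrum of the received samples; the sample rate, block size, and noise level below are illustrative, not parameters of the 1973 system.

        import numpy as np

        fs = 1000.0                      # sample rate (Hz), illustrative
        n = 4096
        t = np.arange(n) / fs

        rng = np.random.default_rng(3)
        f_true = 123.4                   # unknown carrier offset to be acquired
        x = np.exp(2j * np.pi * f_true * t) + 2.0 * (
            rng.standard_normal(n) + 1j * rng.standard_normal(n))

        # FFT-based acquisition: the bin with maximum power estimates the
        # carrier; the coherent FFT gain lifts the tone above the noise floor.
        spectrum = np.fft.fft(x)
        freqs = np.fft.fftfreq(n, d=1 / fs)
        f_hat = freqs[np.argmax(np.abs(spectrum))]
        print(f"estimated carrier: {f_hat:.2f} Hz (true {f_true} Hz)")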

  6. Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.

    PubMed

    Zhang, C; Wijnen, B; Pearce, J M

    2016-08-01

    The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing the cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.

  7. Cloud Computing for radiologists

    PubMed Central

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, clients, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on the maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  8. Message Passing vs. Shared Address Space on a Cluster of SMPs

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak

    2000-01-01

    The convergence of scalable computer architectures using clusters of PCs (or PC-SMPs) with commodity networking has made them an attractive platform for high-end scientific computing. Currently, message passing and shared address space (SAS) are the two leading programming paradigms for these systems. Message passing has been standardized with MPI, and is the most common and mature programming approach. However, message-passing code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of, and programming effort required for, six applications under both programming models on a 32-CPU PC-SMP cluster. Our application suite consists of codes that typically do not exhibit high efficiency under shared-memory programming, due to their high communication-to-computation ratios and complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications; however, on certain classes of problems, SAS performance is competitive with MPI. We also present new algorithms for improving the PC cluster performance of MPI collective operations.
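
    As a concrete point of reference for the message-passing side of the comparison (the paper's application codes are not reproduced here), a minimal MPI data-parallel sum in Python via mpi4py, which wraps the standardized MPI interface the paper refers to: each rank owns a private partition, and all sharing happens through explicit communication rather than loads and stores to a shared address space.

        from mpi4py import MPI   # run with: mpirun -n 4 python this_script.py
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Each rank owns a private partition of the data: no shared address space
        local = np.full(1_000_000 // size, rank, dtype=np.float64)

        # Explicit communication replaces shared-memory loads and stores
        total = comm.allreduce(local.sum(), op=MPI.SUM)
        if rank == 0:
            print("global sum:", total)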

  9. 7 CFR 65.135 - Covered commodity.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., PEANUTS, AND GINSENG General Provisions Definitions § 65.135 Covered commodity. (a) Covered commodity... nuts; (6) Pecans; and (7) Ginseng. (b) Covered commodities are excluded from this part if the commodity...

  10. 7 CFR 65.135 - Covered commodity.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., PEANUTS, AND GINSENG General Provisions Definitions § 65.135 Covered commodity. (a) Covered commodity... nuts; (6) Pecans; and (7) Ginseng. (b) Covered commodities are excluded from this part if the commodity...

  11. 7 CFR 65.135 - Covered commodity.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., PEANUTS, AND GINSENG General Provisions Definitions § 65.135 Covered commodity. (a) Covered commodity... nuts; (6) Pecans; and (7) Ginseng. (b) Covered commodities are excluded from this part if the commodity...

  12. 7 CFR 65.135 - Covered commodity.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., PEANUTS, AND GINSENG General Provisions Definitions § 65.135 Covered commodity. (a) Covered commodity... nuts; (6) Pecans; and (7) Ginseng. (b) Covered commodities are excluded from this part if the commodity...

  13. Evaluating the impact of climate policies on regional food availability and accessibility using an Integrated Assessment Model

    NASA Astrophysics Data System (ADS)

    Gilmore, E.; Cui, Y. R.; Waldhoff, S.

    2015-12-01

    Beyond 2015, eradicating hunger will remain a critical part of the global development agenda through the Sustainable Development Goals (SDGs). Efforts to limit climate change through both mitigation of greenhouse gas emissions and land use policies may interact with food availability and accessibility in complex and unanticipated ways. Here, we develop projections of regional food accessibility to 2050 under the alternative futures outlined by the Shared Socioeconomic Pathways (SSPs) and under different climate policy targets and structures. We use the Global Change Assessment Model (GCAM), an integrated assessment model (IAM), for our projections. We calculate food access as the weighted average of consumption of five staples and the portion of income spent on those commodities, and extend the universal global producer price calculated by GCAM to regional consumer prices, drawing on historical relationships between these prices. Across the SSPs, food access depends largely on expectations of increases in population and economic status. Under a more optimistic scenario, the pressures on food access from increasing demand and rising prices can be counterbalanced by faster economic development. Stringent climate policies that increase commodity prices, however, may hinder vulnerable regions, namely Sub-Saharan Africa, from achieving greater food accessibility.
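
    One plausible reading of the food-access metric described above (staple consumption weighted by the expenditure share of each commodity) is sketched below; the staples, prices, and income are invented numbers for illustration, not GCAM output, and the exact weighting used in the paper may differ.

        # Hypothetical regional data: per-capita consumption (kg/yr) and
        # consumer price ($/kg) for five staples, plus per-capita income ($/yr).
        consumption = {"wheat": 90, "rice": 60, "maize": 40, "roots": 70, "pulses": 15}
        price = {"wheat": 0.4, "rice": 0.6, "maize": 0.3, "roots": 0.2, "pulses": 1.1}
        income = 2500.0

        spend = {c: consumption[c] * price[c] for c in consumption}

        # Weighted average of staple consumption, weighted by expenditure share
        weights = {c: spend[c] / sum(spend.values()) for c in spend}
        access = sum(weights[c] * consumption[c] for c in consumption)

        print(f"staple expenditure: {sum(spend.values()):.0f} $/yr "
              f"({100 * sum(spend.values()) / income:.1f}% of income)")
        print(f"access index: {access:.1f}")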

  14. 17 CFR 4.6 - Exclusion for certain otherwise regulated persons from the definition of the term “commodity...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... otherwise regulated persons from the definition of the term “commodity trading advisor.” 4.6 Section 4.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS General Provisions, Definitions and Exemptions § 4.6 Exclusion for certain...

  15. 17 CFR 33.4 - Designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... market for the trading of commodity options. 33.4 Section 33.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.4 Designation as a contract market for the trading of commodity options. The Commission may...

  16. 17 CFR 4.6 - Exclusion for certain otherwise regulated persons from the definition of the term “commodity...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... otherwise regulated persons from the definition of the term “commodity trading advisor.” 4.6 Section 4.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS General Provisions, Definitions and Exemptions § 4.6 Exclusion for certain...

  17. 17 CFR 4.6 - Exclusion for certain otherwise regulated persons from the definition of the term “commodity...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... otherwise regulated persons from the definition of the term “commodity trading advisor.” 4.6 Section 4.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS General Provisions, Definitions and Exemptions § 4.6 Exclusion for certain...

  18. 17 CFR 33.4 - Designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... market for the trading of commodity options. 33.4 Section 33.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.4 Designation as a contract market for the trading of commodity options. The Commission may...

  19. 17 CFR 4.6 - Exclusion for certain otherwise regulated persons from the definition of the term “commodity...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... otherwise regulated persons from the definition of the term “commodity trading advisor.” 4.6 Section 4.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS General Provisions, Definitions and Exemptions § 4.6 Exclusion for certain...

  20. 17 CFR 4.6 - Exclusion for certain otherwise regulated persons from the definition of the term “commodity...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... otherwise regulated persons from the definition of the term “commodity trading advisor.” 4.6 Section 4.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS General Provisions, Definitions and Exemptions § 4.6 Exclusion for certain...

  1. Software-Controlled Caches in the VMP Multiprocessor

    DTIC Science & Technology

    1986-03-01

    The VMP multiprocessor uses software-controlled caches: cache misses are handled in software, analogously to the handling of virtual memory page faults, with hardware support to ensure good behavior. The programming system level is tuned for the VMP design, and the authors explore how far this software support can go. Each cache miss results in bus traffic, and the bus cost of the average cache miss is quantified.

  2. Antarctica EVA

    NASA Technical Reports Server (NTRS)

    Love, Stan

    2013-01-01

    NASA astronaut Stan Love shared his experiences with the Antarctic Search for Meteorites (ANSMET), an annual expedition to the southern continent to collect valuable samples for research in planetary science. ANSMET teams operate from isolated, remote field camps on the polar plateau, where windchill factors often reach -40 °F. Several astronaut participants have noted ANSMET's similarity to a space mission. Some of the operational concepts, tools, and equipment employed by ANSMET teams may offer valuable insights to designers of future planetary surface exploration hardware.

  3. The Top 10 Challenges in Extreme-Scale Visual Analytics

    PubMed Central

    Wong, Pak Chung; Shen, Han-Wei; Johnson, Christopher R.; Chen, Chaomei; Ross, Robert B.

    2013-01-01

    In this issue of CG&A, researchers share their R&D findings and results on applying visual analytics (VA) to extreme-scale data. Having surveyed these articles and other R&D in this field, we’ve identified what we consider the top challenges of extreme-scale VA. To cater to the magazine’s diverse readership, our discussion evaluates challenges in all areas of the field, including algorithms, hardware, software, engineering, and social issues. PMID:24489426

  4. The Tera Multithreaded Architecture and Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at the San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines: the computational processor is a custom design, the machine uses hardware to support very fine-grained multithreading, and the main memory is shared, hardware-randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the two-processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared-memory machine) running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to the partitioning or placement of data, issues that would be of paramount importance in other parallel architectures.
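
    The full/empty tag bits behave like a one-word synchronized cell: a read blocks until the word is full, a write blocks until it is empty. As a rough software analogy (not the Tera hardware mechanism), a bounded queue of size one reproduces the blocking behavior:

        import threading
        import queue

        cell = queue.Queue(maxsize=1)   # software analogue of a full/empty word

        def producer():
            for i in range(3):
                cell.put(i)             # blocks while the "word" is full

        def consumer():
            for _ in range(3):
                print("consumed", cell.get())   # blocks while the "word" is empty

        t1 = threading.Thread(target=producer)
        t2 = threading.Thread(target=consumer)
        t1.start(); t2.start(); t1.join(); t2.join()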

  5. Resource potential for commodities in addition to Uranium in sandstone-hosted deposits: Chapter 13

    USGS Publications Warehouse

    Breit, George N.

    2016-01-01

    Sandstone-hosted deposits mined primarily for their uranium content also have been a source of vanadium and modest amounts of copper. Processing of these ores has also recovered small amounts of molybdenum, rhenium, rare earth elements, scandium, and selenium. These deposits share a generally common origin, but variations in the source of metals, composition of ore-forming solutions, and geologic history result in complex variability in deposit composition. This heterogeneity is evident regionally within the same host rock, as well as within districts. Future recovery of elements associated with uranium in these deposits will be strongly dependent on mining and ore-processing methods.

  6. 17 CFR 33.5 - Application for designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... a contract market for the trading of commodity options. 33.5 Section 33.5 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.5 Application for designation as a contract market for the trading of commodity options. (a...

  7. 17 CFR 37.4 - Election to trade excluded and exempt commodities.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Election to trade excluded and exempt commodities. 37.4 Section 37.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION DERIVATIVES TRANSACTION EXECUTION FACILITIES § 37.4 Election to trade excluded and exempt commodities. A board of trade that is or elects...

  8. 17 CFR 33.5 - Application for designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... a contract market for the trading of commodity options. 33.5 Section 33.5 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF COMMODITY OPTION TRANSACTIONS THAT ARE OPTIONS... contract market for the trading of commodity options. (a) Any board of trade desiring to be designated as a...

  9. 17 CFR 33.5 - Application for designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... a contract market for the trading of commodity options. 33.5 Section 33.5 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.5 Application for designation as a contract market for the trading of commodity options. (a...

  10. 17 CFR 33.6 - Suspension or revocation of designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... designation as a contract market for the trading of commodity options. 33.6 Section 33.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF COMMODITY OPTION TRANSACTIONS THAT... designation as a contract market for the trading of commodity options. The Commission may, after notice and...

  11. 17 CFR 33.4 - Designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... market for the trading of commodity options. 33.4 Section 33.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF COMMODITY OPTION TRANSACTIONS THAT ARE OPTIONS ON CONTRACTS OF SALE OF A COMMODITY FOR FUTURE DELIVERY § 33.4 Designation as a contract market for the trading...

  12. 17 CFR 33.5 - Application for designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... a contract market for the trading of commodity options. 33.5 Section 33.5 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.5 Application for designation as a contract market for the trading of commodity options. (a...

  13. 17 CFR 210.4-08 - General notes to financial statements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., options, and other financial instruments with similar characteristics. (ii) Derivative commodity... futures, commodity forwards, commodity swaps, commodity options, and other commodity instruments with... policies for certain derivative instruments. Disclosures regarding accounting policies shall include...

  14. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system; that is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data, and fault tolerance.

  15. Optimizing CMS build infrastructure via Apache Mesos

    DOE PAGES

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...

    2015-12-23

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient in the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Finally, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster, and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  16. Optimizing CMS build infrastructure via Apache Mesos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdurachmanov, David; Degano, Alessandro; Elmer, Peter

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient in the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. Finally, we present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster, and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.

  17. [Research progress on standards of commodity classes of Chinese materia medica and discussion on several key problems].

    PubMed

    Yang, Guang; Zeng, Yan; Guo, Lan-Ping; Huang, Lu-Qi; Jin, Yan; Zheng, Yu-Guang; Wang, Yong-Yan

    2014-05-01

    Standards for commodity classes of Chinese materia medica are an important way to solve the "lemons problem" of the traditional Chinese medicine market. Standards for commodity classes are also helpful for rebuilding market mechanisms that reward good quality with a high price. The previous edition of the commodity class standards for Chinese materia medica was made 30 years ago and is no longer adapted to market demand. This article reviews research progress on standards for commodity classes of Chinese materia medica. It argues that biological activity is a better basis than chemical constituents for such standards, and that the key point in setting commodity class standards is finding the factors that distinguish good quality from bad. The article also discusses the scope of commodity classes of Chinese materia medica and how to coordinate pharmacopoeia standards with commodity class standards. According to different demands, diverse standards can be used for commodity classes of Chinese materia medica, but efficacy is considered the most important index of a commodity standard. Decoction pieces can be included in commodity class standards of Chinese materia medica. The authors also formulate standards for the commodity classes of Notoginseng Radix as an example, in the hope that this study will have a positive, promoting effect on research related to the traditional Chinese medicine market.

  18. Determinants of quality of shared sanitation facilities in informal settlements: case study of Kisumu, Kenya.

    PubMed

    Simiyu, Sheillah; Swilling, Mark; Cairncross, Sandy; Rheingans, Richard

    2017-01-11

Shared facilities are not recognised as improved sanitation because of maintenance challenges: they can easily become avenues for the spread of disease. There is thus a need to evaluate the quality of shared facilities, especially in informal settlements, where they are commonly used. A shared facility can be equated to a common good whose management depends on the users. If users do not work collectively towards keeping the facility clean, its quality is likely to depreciate for lack of maintenance. This study examined the quality of shared sanitation facilities and used the common pool resource (CPR) management principles to examine the determinants of shared sanitation quality in the informal settlements of Kisumu, Kenya. Using a multiple case study design, the study employed both quantitative and qualitative methods. In both phases, users of shared sanitation facilities were interviewed, and shared sanitation facilities were inspected. Shared sanitation quality was expressed as a score, which was the dependent variable in a regression analysis. Interviews during the qualitative stage were aimed at understanding the management practices of shared sanitation users. Qualitative data were analysed thematically by following the CPR principles. Shared facilities, most of which were dirty, were shared by an average of eight households, and their quality decreased as the number of households sharing increased. The effect of numbers on quality is explained by behaviour reflected in the CPR principles: it was easier to define the boundaries of shared facilities when there were fewer users, who cooperated towards improving their shared sanitation facility. Other factors, such as defined management systems, cooperation, collective decision making, and social norms, also influenced the behaviour of users towards keeping shared facilities clean and functional. Apart from hardware factors, the quality of shared sanitation is largely determined by the group behaviour of users. The CPR principles form a crucial lens through which the dynamics of shared sanitation facilities in informal settlements can be understood. Development and policy efforts should take group behaviour into account, as it determines the quality of shared sanitation facilities.

  19. 17 CFR 32.5 - Disclosure.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... effect of any foreign currency fluctuations with respect to commodity option transactions which are to be... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Disclosure. 32.5 Section 32.5 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF COMMODITY OPTION...

  20. 17 CFR 31.6 - Registration of leverage commodities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... commodities. 31.6 Section 31.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION... applied to the National Futures Association for registration as a leverage transaction merchant; (2... the spot, forward, and futures markets for the generic commodity; (3) Specify a commercial or retail...

  1. 7 CFR 250.57 - Commodity schools.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 4 2011-01-01 2011-01-01 false Commodity schools. 250.57 Section 250.57 Agriculture... TERRITORIES AND POSSESSIONS AND AREAS UNDER ITS JURISDICTION National School Lunch Program (NSLP) and Other Child Nutrition Programs § 250.57 Commodity schools. (a) Categorization of commodity schools. Commodity...

  2. 7 CFR 250.57 - Commodity schools.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Commodity schools. 250.57 Section 250.57 Agriculture... TERRITORIES AND POSSESSIONS AND AREAS UNDER ITS JURISDICTION National School Lunch Program (NSLP) and Other Child Nutrition Programs § 250.57 Commodity schools. (a) Categorization of commodity schools. Commodity...

  3. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
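    Since the batch service above is described as a variation of MapReduce, a minimal single-process rendition of the model may help fix ideas; the function names and the word-count example are ours, not the Free Factory API.

```python
# Minimal in-process MapReduce: map each record to (key, value) pairs,
# shuffle by key, then reduce each group. Illustrative only.
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    pairs = chain.from_iterable(mapper(r) for r in records)  # map phase
    groups = defaultdict(list)
    for key, value in pairs:                                 # shuffle phase
        groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}   # reduce phase

docs = ["free factories share idle ram", "idle disks serve cache"]
counts = map_reduce(
    docs,
    mapper=lambda doc: ((word, 1) for word in doc.split()),
    reducer=lambda word, ones: sum(ones),
)
print(counts["idle"])  # -> 2
```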

  4. The CMS High Level Trigger System: Experience and Future Development

    NASA Astrophysics Data System (ADS)

    Bauer, G.; Behrens, U.; Bowen, M.; Branson, J.; Bukowiec, S.; Cittolin, S.; Coarasa, J. A.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Flossdorf, A.; Gigi, D.; Glege, F.; Gomez-Reino, R.; Hartl, C.; Hegeman, J.; Holzner, A.; Hwong, Y. L.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Polese, G.; Racz, A.; Raginel, O.; Sakulin, H.; Sani, M.; Schwick, C.; Shpakov, D.; Simon, S.; Spataru, A. C.; Sumorok, K.

    2012-12-01

The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first-level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ) and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT and its integration with the CMS reconstruction framework and the CMS DAQ are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
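    The "sequences of reconstruction and filtering modules" can be pictured as a chain in which each filter may reject the event and stop further, more expensive, processing. The sketch below is ours; the module names, quantities, and thresholds are invented and bear no relation to real CMS trigger menus.

```python
# Hedged sketch of an HLT-style path: cheap filters run first, and any
# failing filter rejects the event before costlier reconstruction runs.
def hlt_path(event, modules):
    for reconstruct, accept in modules:
        if not accept(reconstruct(event)):
            return False          # early rejection: stop the path
    return True                   # all filters passed: keep for offline

modules = [
    (lambda e: e["l1_pt"],   lambda pt: pt > 20.0),  # coarse seed cut (fast)
    (lambda e: e["calo_et"], lambda et: et > 30.0),  # regional calorimetry
    (lambda e: e["trk_pt"],  lambda pt: pt > 25.0),  # full tracking (slowest)
]

events = [
    {"l1_pt": 35.0, "calo_et": 40.0, "trk_pt": 33.0},
    {"l1_pt": 12.0, "calo_et": 50.0, "trk_pt": 45.0},  # fails the first filter
]
print(sum(hlt_path(e, modules) for e in events))  # -> 1 event kept
```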

  5. Remote visual analysis of large turbulence databases at multiple scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin

The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.

  6. Remote visual analysis of large turbulence databases at multiple scales

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...

    2018-06-15

The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
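    To make the wavelet-compression idea concrete, here is a numpy-only sketch using a single-level Haar transform: transform, discard the smallest detail coefficients, invert. The actual codec, wavelet family, and decomposition depth used by the framework are not specified here, so treat this purely as an illustration.

```python
# One-level Haar wavelet compression sketch (illustrative, not the paper's codec).
import numpy as np

def haar_forward(x):
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    dif = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return avg, dif

def haar_inverse(avg, dif):
    x = np.empty(avg.size * 2)
    x[0::2] = (avg + dif) / np.sqrt(2.0)
    x[1::2] = (avg - dif) / np.sqrt(2.0)
    return x

signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 1024)) \
    + 0.05 * np.random.default_rng(0).standard_normal(1024)
avg, dif = haar_forward(signal)
dif[np.abs(dif) < np.quantile(np.abs(dif), 0.9)] = 0.0  # keep top 10% of details
approx = haar_inverse(avg, dif)
print(f"rms reconstruction error: {np.sqrt(np.mean((signal - approx) ** 2)):.4f}")
```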

  7. Scale out databases for CERN use cases

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Lanza Garcia, Daniel; Surdy, Kacper

    2015-12-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.
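    For readers unfamiliar with the stack, a query against an Impala engine looks like ordinary SQL over a DB-API connection. The sketch below assumes the third-party impyla client; the host, table, and column names are placeholders, not CERN's actual schema.

```python
# Hedged example: scan-and-aggregate query of the kind used in log-data tests.
# Assumes the `impyla` package; endpoint and schema below are placeholders.
from impala.dbapi import connect

conn = connect(host="impala-coordinator.example.org", port=21050)
cur = conn.cursor()
cur.execute("""
    SELECT variable_name, AVG(value) AS avg_value
    FROM accelerator_log                      -- hypothetical table
    WHERE ts BETWEEN '2015-01-01' AND '2015-02-01'
    GROUP BY variable_name
""")
for variable_name, avg_value in cur.fetchall():
    print(variable_name, avg_value)
cur.close()
conn.close()
```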

  8. Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems

    DTIC Science & Technology

    2015-05-01

    form of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9... processor (as we do for Level-A and -B tasks), but they did not consider MC systems. Altmeyer et al. [1] considered uniprocessor scheduling on a system with a...framework. We randomly generated task sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9

  9. Design and implementation of real-time wireless projection system based on ARM embedded system

    NASA Astrophysics Data System (ADS)

    Long, Zhaohua; Tang, Hao; Huang, Junhua

    2018-04-01

Aiming at the shortcomings of existing real-time screen-sharing systems, a real-time wireless projection system is proposed in this paper. Building on the proposed system, a weight-based frame-deletion strategy combining the sampling time period and data variation is proposed. By implementing the system on a hardware platform, the results show that the system performs well: the weight-based strategy can improve service quality, reduce delay, and optimize the real-time customer service system [1].
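    The paper does not give the weight formula, so the following sketch is only a plausible reconstruction of a weight combining sampling period and data variation: stale frames and frames that differ strongly from the last transmitted one are kept, the rest are deleted. The coefficients, threshold, and function names are our assumptions.

```python
# Hypothetical weight-based frame-deletion rule (coefficients are assumptions).
import numpy as np

def frame_weight(age_s, frame, prev_frame, period_s, alpha=0.5, beta=0.5):
    staleness = min(age_s / period_s, 1.0)                    # older -> heavier
    variation = float(np.mean(np.abs(frame.astype(float)
                                     - prev_frame.astype(float)))) / 255.0
    return alpha * staleness + beta * variation

def should_send(age_s, frame, prev_frame, period_s, threshold=0.3):
    """Delete (skip) low-weight frames to keep latency down on a busy link."""
    return frame_weight(age_s, frame, prev_frame, period_s) >= threshold

prev = np.zeros((480, 640), dtype=np.uint8)
cur = prev.copy()
cur[200:280, 300:340] = 255                         # a small screen update
print(should_send(0.02, cur, prev, period_s=0.1))   # False: young and similar
```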

  10. STS 135 Landing

    NASA Image and Video Library

    2017-12-08

    Goddard's Ritsko Wins 2011 SAVE Award The winner of the 2011 SAVE Award is Matthew Ritsko, a Goddard financial manager. His tool lending library would track and enable sharing of expensive space-flight tools and hardware after projects no longer need them. This set of images represents the types of tools used at NASA. To read more go to: www.nasa.gov/topics/people/features/ritsko-save.html Exploration Systems Project Manager Mike Weiss speaks about a Hubble Servicing Mission hand tool, developed at Goddard. Credit: NASA/GSFC/Debbie McCallum

  11. Integrating Software Modules For Robot Control

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.; Khosla, Pradeep; Stewart, David B.

    1993-01-01

    Reconfigurable, sensor-based control system uses state variables in systematic integration of reusable control modules. Designed for open-architecture hardware including many general-purpose microprocessors, each having own local memory plus access to global shared memory. Implemented in software as extension of Chimera II real-time operating system. Provides transparent computing mechanism for intertask communication between control modules and generic process-module architecture for multiprocessor realtime computation. Used to control robot arm. Proves useful in variety of other control and robotic applications.
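    A toy rendition of the state-variable idea may clarify it: modules never call one another; each reads its inputs from a shared table and writes its outputs back, so a module can be replaced or re-wired without touching the others. This sketch is ours, not Chimera II code, and the controller gains are arbitrary.

```python
# State-variable-table sketch: modules communicate only via a shared table.
state = {"joint_pos": 0.0, "joint_vel": 0.0, "torque_cmd": 0.0, "setpoint": 1.0}

def pd_controller(svar, kp=8.0, kd=2.0):
    # Reads joint state and setpoint; writes a torque command.
    err = svar["setpoint"] - svar["joint_pos"]
    svar["torque_cmd"] = kp * err - kd * svar["joint_vel"]

def joint_simulator(svar, dt=0.01, inertia=1.0):
    # Reads the torque command; integrates and writes back joint state.
    svar["joint_vel"] += svar["torque_cmd"] / inertia * dt
    svar["joint_pos"] += svar["joint_vel"] * dt

# A cyclic executive standing in for the real-time scheduler.
for _ in range(500):
    pd_controller(state)
    joint_simulator(state)
print(f"final position: {state['joint_pos']:.3f}")  # settles near the setpoint
```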

  12. Design and Implementation of Telemedicine based on Java Media Framework

    NASA Astrophysics Data System (ADS)

    Xiong, Fengguang; Jia, Zhiyan

    According to analyze the importance and problem of telemedicine in this paper, a telemedicine system based on JMF is proposed to design and implement capturing, compression, storage, transmission, reception and play of a medical audio and video. The telemedicine system can solve existing problems that medical information is not shared, platform-dependent is high, software is incompatibilities and so on. Experimental data prove that the system has low hardware cost, and is easy to transmission and storage, and is portable and powerful.

  13. Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 1: FTMP principles of operation

    NASA Technical Reports Server (NTRS)

    Smith, T. B., Jr.; Lala, J. H.

    1983-01-01

The basic organization of the fault-tolerant multiprocessor (FTMP) is that of a general-purpose homogeneous multiprocessor. Three processors operate on a shared system (memory and I/O) bus. Replication and tight synchronization of all elements, together with hardware voting, is employed to detect and correct any single fault. Reconfiguration is then employed to repair a fault. Multiple faults may be tolerated as a sequence of single faults with repair between fault occurrences.
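    The replicate-vote-reconfigure idea reduces, at its core, to a majority vote over the redundant channels. FTMP performs this bit-for-bit in hardware; the sketch below is a software caricature of the same logic, with invented values.

```python
# Majority voting over triplicated outputs; dissenters are flagged for repair.
from collections import Counter

def vote(values):
    """Return (majority result, indices of disagreeing channels)."""
    winner, count = Counter(values).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: multiple simultaneous faults")
    return winner, [i for i, v in enumerate(values) if v != winner]

outputs = [0x5A5A, 0x5A5A, 0x1234]              # channel 2: transient fault
result, faulty = vote(outputs)
print(hex(result), "faulty channels:", faulty)  # 0x5a5a faulty channels: [2]
```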

  14. Hardware Model Of A Shipboard Zonal Electrical Distribution System (ZEDS): Alternating Current/Direct Current (AC/DC)

    DTIC Science & Technology

    2010-06-01

essential to fostering the loyalty, dedication and pride that enables the diverse student population within your department to be the very best systems...that I have enjoyed in my short time with you. Without you in my life, to share my success, I could not have ever achieved the level of satisfaction...used. A typical wall mounted light switch is a single pole single throw switch. A common industrial motor start switch is a three pole single throw

  15. Hardware Model of a Shipboard Zonal Electrical Distribution System (ZEDS): Alternating Current/Direct Current (AC/DC)

    DTIC Science & Technology

    2010-06-01

    perfect example on how to lead, manage and strive for excellence in every aspect of your life. Your leadership is essential to fostering the loyalty ...share my success, I could not have ever achieved the level of satisfaction and enjoyment that I have. You will never understand how helpful the...A typical wall mounted light switch is a single pole single throw switch. A common industrial motor start switch is a three pole single throw switch

  16. 17 CFR 14.4 - Violation of Commodity Exchange Act.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Violation of Commodity Exchange Act. 14.4 Section 14.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION... Exchange Act. The Commission may deny, temporarily or permanently, the privilege of appearing or practicing...

  17. 17 CFR 3.10 - Registration of futures commission merchants, retail foreign exchange dealers, introducing...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., commodity pool operators and leverage transaction merchants. 3.10 Section 3.10 Commodity and Securities..., commodity pool operators and leverage transaction merchants. (a) Application for registration. (1)(i) Except... merchant, retail foreign exchange dealers, introducing broker, commodity trading advisor, commodity pool...

  18. Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods

    DOE PAGES

    Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.

    2016-09-01

Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Though GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.
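    A counter-based power model of the kind surveyed is, in its simplest form, a linear regression from counter rates to measured watts. The sketch below fits such a model to synthetic data; the counters, coefficients, and noise level are invented for illustration.

```python
# Least-squares fit of power (watts) against hardware-counter rates.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic counter rates: instructions issued, DRAM accesses, active cycles.
counters = rng.uniform(0.0, 1.0, size=(n, 3))
watts = 25.0 + counters @ np.array([60.0, 45.0, 30.0]) \
    + rng.normal(0.0, 2.0, n)                    # "measured" power + sensor noise

X = np.column_stack([np.ones(n), counters])      # model: w = b0 + b . counters
coeffs, *_ = np.linalg.lstsq(X, watts, rcond=None)
residual = watts - X @ coeffs
print(f"idle-power estimate: {coeffs[0]:.1f} W, "
      f"rms error: {np.sqrt(np.mean(residual ** 2)):.2f} W")
```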

  19. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.

  20. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
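    The key property, that the product of two orthogonal codes is a third orthogonal code, can be checked numerically. The toy below (our construction, with arbitrary code length and signal values) modulates two element signals with Walsh codes, square-law detects the combined signal, and demultiplexes the cross-visibility with the product code.

```python
# Numerical toy of code-modulated interferometry with Walsh (Hadamard) codes.
import numpy as np

def walsh(n_bits):
    """Sylvester-Hadamard code set of length 2**n_bits with +/-1 chips."""
    h = np.array([[1.0]])
    for _ in range(n_bits):
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh(3)              # 8 chips; rows 1..7 are zero-mean
c1, c2 = codes[1], codes[2]
c12 = c1 * c2                 # product code: itself orthogonal and zero-mean

rng = np.random.default_rng(1)
x1 = 0.7 + 0.01 * rng.standard_normal(c1.size)   # element signals (slowly
x2 = -0.4 + 0.01 * rng.standard_normal(c2.size)  # varying over one code period)

combined = c1 * x1 + c2 * x2               # single shared hardware path
power = combined ** 2                      # square-law detection
visibility = power @ c12 / (2 * c1.size)   # demultiplex the 2*c12*x1*x2 term
print(f"recovered <x1*x2> = {visibility:+.3f} (expected about {0.7 * -0.4:+.3f})")
```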

  1. Developing an Integration Infrastructure for Distributed Engine Control Technologies

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Zinnecker, Alicia; Aretskin-Hariton, Eliot; Kratz, Jonathan

    2014-01-01

    Turbine engine control technology is poised to make the first revolutionary leap forward since the advent of full authority digital engine control in the mid-1980s. This change aims squarely at overcoming the physical constraints that have historically limited control system hardware on aero-engines to a federated architecture. Distributed control architecture allows complex analog interfaces existing between system elements and the control unit to be replaced by standardized digital interfaces. Embedded processing, enabled by high temperature electronics, provides for digitization of signals at the source and network communications resulting in a modular system at the hardware level. While this scheme simplifies the physical integration of the system, its complexity appears in other ways. In fact, integration now becomes a shared responsibility among suppliers and system integrators. While these are the most obvious changes, there are additional concerns about performance, reliability, and failure modes due to distributed architecture that warrant detailed study. This paper describes the development of a new facility intended to address the many challenges of the underlying technologies of distributed control. The facility is capable of performing both simulation and hardware studies ranging from component to system level complexity. Its modular and hierarchical structure allows the user to focus their interaction on specific areas of interest.

  2. Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.

Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Though GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.

  3. A lightweight, inexpensive robotic system for insect vision.

    PubMed

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
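    As a flavor of the optic-flow computation evaluated above, here is a single-patch Lucas-Kanade estimate in plain numpy; the authors' actual pipeline and camera model are not reproduced, and the test pattern is synthetic.

```python
# One-patch Lucas-Kanade optic flow (illustrative, not the paper's pipeline).
import numpy as np

def lucas_kanade_patch(frame0, frame1):
    """Least-squares flow (vx, vy) for one patch between two frames."""
    Ix = np.gradient(frame0, axis=1).ravel()  # spatial gradients
    Iy = np.gradient(frame0, axis=0).ravel()
    It = (frame1 - frame0).ravel()            # temporal gradient
    A = np.column_stack([Ix, Iy])
    v, *_ = np.linalg.lstsq(A, -It, rcond=None)  # solve Ix*vx + Iy*vy = -It
    return v

x = np.arange(32, dtype=float)
frame0 = np.tile(np.sin(x / 3.0), (32, 1))    # horizontal sinusoid pattern
frame1 = np.roll(frame0, 1, axis=1)           # shifted one pixel to the right
print(lucas_kanade_patch(frame0, frame1))     # vx close to +1, vy close to 0
```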

  4. 29 CFR 780.114 - Wild commodities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Agricultural Or Horticultural Commodities § 780.114 Wild commodities. Employees engaged in the gathering or harvesting of wild commodities such as mosses, wild rice, burls and laurel plants, the trapping of wild... 29 Labor 3 2013-07-01 2013-07-01 false Wild commodities. 780.114 Section 780.114 Labor Regulations...

  5. 29 CFR 780.114 - Wild commodities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Agricultural Or Horticultural Commodities § 780.114 Wild commodities. Employees engaged in the gathering or harvesting of wild commodities such as mosses, wild rice, burls and laurel plants, the trapping of wild... 29 Labor 3 2011-07-01 2011-07-01 false Wild commodities. 780.114 Section 780.114 Labor Regulations...

  6. 29 CFR 780.114 - Wild commodities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Agricultural Or Horticultural Commodities § 780.114 Wild commodities. Employees engaged in the gathering or harvesting of wild commodities such as mosses, wild rice, burls and laurel plants, the trapping of wild... 29 Labor 3 2014-07-01 2014-07-01 false Wild commodities. 780.114 Section 780.114 Labor Regulations...

  7. 29 CFR 780.114 - Wild commodities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Agricultural Or Horticultural Commodities § 780.114 Wild commodities. Employees engaged in the gathering or harvesting of wild commodities such as mosses, wild rice, burls and laurel plants, the trapping of wild... 29 Labor 3 2010-07-01 2010-07-01 false Wild commodities. 780.114 Section 780.114 Labor Regulations...

  8. 29 CFR 780.114 - Wild commodities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Agricultural Or Horticultural Commodities § 780.114 Wild commodities. Employees engaged in the gathering or harvesting of wild commodities such as mosses, wild rice, burls and laurel plants, the trapping of wild... 29 Labor 3 2012-07-01 2012-07-01 false Wild commodities. 780.114 Section 780.114 Labor Regulations...

  9. 17 CFR 37.3 - Requirements for underlying commodities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 5a(b)(3) of the Act, may trade any contract of sale of a commodity for future delivery (or option on... that are a security futures product, and the registered derivatives transaction execution facility is a... commodities. 37.3 Section 37.3 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION...

  10. 17 CFR 4.32 - Trading on a Registered Derivatives Transaction Execution Facility for Non-Institutional Customers.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Trading on a Registered... Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Commodity Trading Advisors § 4.32 Trading on a Registered Derivatives Transaction Execution...

  11. 17 CFR 32.3 - Unlawful commodity option transactions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Unlawful commodity option... REGULATION OF COMMODITY OPTION TRANSACTIONS § 32.3 Unlawful commodity option transactions. (a) On and after... extend credit in lieu thereof) from an option customer as payment of the purchase price in connection...

  12. 17 CFR 37.4 - Election to trade excluded and exempt commodities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Election to trade excluded and exempt commodities. 37.4 Section 37.4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION DERIVATIVES TRANSACTION EXECUTION FACILITIES § 37.4 Election to trade excluded and exempt...

  13. 17 CFR 4.32 - Trading on a Registered Derivatives Transaction Execution Facility for Non-Institutional Customers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Trading on a Registered... Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Commodity Trading Advisors § 4.32 Trading on a Registered Derivatives Transaction Execution...

  14. 49 CFR 1248.100 - Commodity classification designated.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... STATISTICS Commodity Code § 1248.100 Commodity classification designated. Commencing with reports for the..., reports of commodity statistics required to be made to the Board, shall be based on the commodity codes... Statistics, 1963, issued by the Bureau of the Budget, and on additional codes 411 through 462 shown in § 1248...

  15. Multiple commodities in statistical microeconomics: Model and market

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

A statistical generalization of microeconomics has been made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here, the case of multiple commodities is studied and a parsimonious generalization of the single-commodity model is made for the multiple-commodity case. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  16. The Influence Of Highway Transportation Infrastructure Condition Toward Commodity Production Generation for The Resilience Needs at Regional Internal Zone

    NASA Astrophysics Data System (ADS)

    Akbardin, Juang; Parikesit, Danang; Riyanto, Bambang; Mulyono, Agus Taufik

    2018-02-01

Poultry is one of the main commodities whose consumption and requirements must be fulfilled in a region to maintain the availability of poultry meat. Poultry production is also one of the production sectors with clean-environment resilience. Increasing the generation of poultry commodity production requires smooth distribution to the processing stage. Livestock locations, as commodity production sites, are placed at a considerable distance from residential and market locations. Zones with surplus poultry production have the potential to supply zones whose production falls short of their consumption of this commodity. The condition of highway transportation infrastructure, which varies widely in its level of damage across zones, influences the supply and demand of poultry commodities in the regional interior of Central Java province. To determine the effect of highway transportation infrastructure condition on poultry commodity movement, demographic factors and the availability of freight vehicles are reviewed in order to estimate the movement generation of poultry commodity production. In this way, poultry consumption requirements in the internal regional zones of Central Java province can be met from within those zones, and the negative environmental impacts on a zone can be minimized by balancing movement attraction against the generation of poultry commodity production in Central Java.

  17. From Data-Sharing to Model-Sharing: SCEC and the Development of Earthquake System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2009-12-01

Earthquake system science seeks to construct system-level models of earthquake phenomena and use them to predict emergent seismic behavior, an ambitious enterprise that requires a high degree of interdisciplinary, multi-institutional collaboration. This presentation will explore model-sharing structures that have been successful in promoting earthquake system science within the Southern California Earthquake Center (SCEC). These include disciplinary working groups to aggregate data into community models; numerical-simulation working groups to investigate system-specific phenomena (process modeling) and further improve the data models (inverse modeling); and interdisciplinary working groups to synthesize predictive system-level models. SCEC has developed a cyberinfrastructure, called the Community Modeling Environment, that can distribute the community models; manage large suites of numerical simulations; vertically integrate the hardware, software, and wetware needed for system-level modeling; and promote the interactions among working groups needed for model validation and refinement. Various socio-scientific structures contribute to successful model-sharing. Two of the most important are “communities of trust” and collaborations between government and academic scientists on mission-oriented objectives. The latter include improvements of earthquake forecasts and seismic hazard models and the use of earthquake scenarios in promoting public awareness and disaster management.

  18. Integrating a Microwave Radiometer into Radar Hardware for Simultaneous Data Collection Between the Instruments

    NASA Technical Reports Server (NTRS)

    McLinden, Matthew; Piepmeier, Jeffrey

    2013-01-01

The conventional method for integrating a radiometer into radar hardware is to share the RF front end between the instruments and to have separate IF receivers that take data at separate times. Alternatively, the radar and radiometer could share the antenna through the use of a diplexer, but have completely independent receivers. This novel method shares the radar's RF electronics and digital receiver with the radiometer, while allowing for simultaneous operation of the radar and radiometer. Radars and radiometers, while often having near-identical RF receivers, generally have substantially different IF and baseband receivers. Operating the two instruments simultaneously is difficult, since airborne radars pulse with a period of hundreds of microseconds, while radiometer integration time is typically tens or hundreds of milliseconds. The bandwidth of a radar may be 1 to 25 MHz, while a radiometer will have an RF bandwidth of up to a GHz. As such, the conventional method of integrating radar and radiometer hardware is to share the high-frequency RF receiver, but to have separate IF subsystems and digitizers. To avoid corruption of the radiometer data, the radar is turned off during the radiometer dwell time. This method utilizes a modern radar digital receiver to allow simultaneous operation of a radiometer and radar with a shared RF front end and digital receiver. The radiometer signal is coupled out after the first down-conversion stage. From there, the radar transmit frequencies are heavily filtered, and the bands outside the transmit filter are amplified and passed to a detector diode. This diode produces a DC output proportional to the input power. For a conventional radiometer, this level would be digitized. By taking this DC output and mixing it with a system oscillator at 10 MHz, the signal can instead be digitized by a second channel on the radar digital receiver (which typically does not accept DC inputs) and can be down-converted to a DC level again digitally. This unintuitive step allows the digital receiver to sample both the radiometer and radar data at a rapid, synchronized data rate (greater than 1 MHz bandwidth). Once both signals are sampled by the same digital receiver, high-speed quality control can be performed on the radiometer data to allow it to take data simultaneously with the radar. The radiometer data can be blanked during radar transmit, or when the radar return is of a power level high enough to corrupt the radiometer data. Additionally, the receiver protection switches in the RF front end can double as radiometer calibration sources, the short (four-microsecond level) switching periods being integrated over many seconds to estimate the radiometer offset. The major benefit of this innovation is that there is minimal impact on the radar performance due to the integration of the radiometer, and the radiometer performance is similarly minimally affected by the radar. As the radar and radiometer are able to operate simultaneously, there is no extended period of integration time loss for the radiometer (maximizing sensitivity), and the radar is able to maintain its full number of pulses (increasing sensitivity and decreasing measurement uncertainty).
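    The synchronized-sampling step lends itself to a simple numerical picture: once radar and radiometer samples share one clock, blanking is just a mask over the radiometer stream before integration. The sketch below is ours, with placeholder timing numbers rather than the instrument's.

```python
# Blank radiometer samples during radar transmit, then integrate the rest.
import numpy as np

fs = 2_000_000                # shared sample clock, 2 MS/s (placeholder)
pri_s = 500e-6                # radar pulse repetition interval (placeholder)
tx_s = 20e-6                  # blanked transmit window per pulse (placeholder)
n = fs                        # one second of samples

t = np.arange(n) / fs
radiometer = np.random.default_rng(2).normal(0.0, 1.0, n)  # noise-only proxy
tx_active = (t % pri_s) < tx_s                              # blanking mask

# Only tx_s/pri_s = 4% of the dwell is lost, instead of whole dwell periods.
antenna_power_proxy = np.mean(radiometer[~tx_active] ** 2)
print(f"kept {np.mean(~tx_active):.1%} of samples, "
      f"integrated power = {antenna_power_proxy:.4f}")
```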

  19. Commodes: inconvenient conveniences.

    PubMed

    Naylor, J R; Mulley, G P

    1993-11-13

OBJECTIVES--To investigate use of commodes and attitudes of users and carers to them. DESIGN--Interview with semi-structured questionnaire of subjects supplied with commodes from Leeds community appliance centre. SUBJECTS--140 users of a commode and 105 of their carers. RESULTS--Main reasons for being supplied with a commode were impaired mobility (130 subjects), difficulty in climbing stairs (128), and urinary incontinence (127). Main concerns of users and carers were lack of privacy (120 subjects felt embarrassed about using their commode, and 96 would not use it if someone was present); unpleasant smells (especially for 20 subjects who were confined to one room); physical appearance of commode chair (101 users said it had an unfavourable appearance, and 44 had tried to disguise it); and lack of follow up after commode was supplied (only 15 users and carers knew who to contact if there were problems). Users generally either had very positive or very negative attitudes to their commodes but most carers viewed them very negatively, especially with regard to cleaning them. CONCLUSIONS--Health professionals should be aware of people's need for privacy when advising them where to keep their commode. A standard commode is inappropriate for people confined to one room, and alternatives such as a chemical toilet should be considered. Regular follow up is needed to identify any problems such as uncomfortable or unsafe chairs. More thought should be given to the appearance of commodes in their design.

  20. Case for Deploying Complex Systems Utilizing Commodity Components

    NASA Technical Reports Server (NTRS)

    Bryant, Barry S.; Pitts, R. Lee

    2003-01-01

When the International Space Station (ISS) finally reached an operational state, many of the Payload Operations and Integration Facility (POIF) hardware components were reaching end of life, COTS product costs were soaring, and the ISS budget was becoming severely constrained. However, most requirement development was complete. In addition, the ISS program is a fully functioning program with at least fifteen years of operational life remaining. Therefore it is critical that any upgrades, refurbishments, or enhancements be accomplished in real time with minimal disruptions to service. For these and other reasons, it was necessary to ensure the viability of the POIF. Due to the breadth of capability of the POIF (a NASA ground station), it is believed that the lessons learned and solutions garnered by the POIF are applicable to other complex systems as well. With that in mind, a number of new approaches have been investigated to increase the portability of the POIF and reduce the cost of refurbishment, operations, and maintenance. These new approaches were directed at the Total Cost of Ownership (TCO): not only the refurbishment but also current operational difficulties, licensing, and anticipation of the next refurbishment. Our basic premise is that technology had evolved dramatically since the conception of the POIF ground system and we should leverage our experience on this new technological landscape. Fortunately, Moore's law and market forces have changed the landscape considerably. These changes are manifest in five ways that are particularly relevant to the POIF: 1. Complex Instruction Set Computing (CISC) processors have advanced to unprecedented levels of compute capacity with a dramatic cost break; 2. Linux has become a major operating system supported by most vendors on a broad range of platforms; 3. Windows™-based desktops are pervasive in the office environment; 4. Stable and affordable Windows™ development environments and tools are available and offer a rich set of capabilities; 5. Windows™ 2000 provides a stable client platform. Therefore, five studies were proposed, developed, and are in the current process of deployment, which dramatically reduces the cost of operations, maintenance, refurbishment, and deployment of a ground system. Restating and refining the basic premise stated earlier: it is possible to enhance operations through the replacement of hardware and software components with commodity-based items wherever applicable, and doing so will dramatically reduce the overall lifecycle cost of the project. The first study leveraged the POIF's secure, three-tier web architecture to replace the client workstations with lower-cost PC platforms. A second study initiated a review of COTS products to examine the level of added value of each product. This study included replacement of some COTS products with custom code, deletions, substitutions, and consolidation of COTS products. Studies three and four reviewed the server architectures of the data distribution systems and the Enhanced HOSC System (EHS) command and telemetry system to propose migration to new platforms, both software and hardware. The final study reviewed current IP communication technologies, developed an operational model for flight operations, and demonstrated that voice over IP was practical and could be integrated into operations.

  1. 75 FR 67794 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Order Granting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-03

    ... commodities or commodity futures, options on commodities, or other commodity derivatives or Commodity-Based... options or other derivatives on any of the foregoing; or (b) interest rate futures or options or... derivatives on any of the foregoing; or (b) interest rate futures or options or derivatives on the foregoing...

  2. 17 CFR 15.00 - Definitions of terms used in parts 15 to 21 of this chapter.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... commodity, means the actual commodity as distinguished from a futures or options contract in such commodity... for future delivery or commodity option transactions, or for effecting settlements of contracts for future delivery or commodity option transactions, for and between members of any designated contract...

  3. 75 FR 71762 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... commodities or commodity futures, options on commodities, or other commodity derivatives or Commodity-Based...) interest rate futures or options or derivatives on the foregoing in this subparagraph (b) (``Futures... options or other derivatives on any of the foregoing; or (b) interest rate futures or options or...

  4. 17 CFR 4.14 - Exemption from registration as a commodity trading advisor.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS General Provisions, Definitions... commodity pool operator and the person's commodity trading advice is directed solely to, and for the sole use of, the pool or pools for which it is so registered; (5) It is exempt from registration as a...

  5. 17 CFR Appendix B to Part 43 - Enumerated Physical Commodity Contracts and Other Contracts

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Enumerated Physical Commodity... TRADING COMMISSION REAL-TIME PUBLIC REPORTING Pt. 43, App. B Appendix B to Part 43—Enumerated Physical Commodity Contracts and Other Contracts Enumerated Physical Commodity Contracts Agriculture ICE Futures U.S...

  6. 17 CFR 32.13 - Exemption from prohibition of commodity option transactions for trade options on certain...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Exemption from prohibition of... Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF COMMODITY OPTION... are met at the time of the solicitation or acceptance: (1) That person is registered with the...

  7. 17 CFR 4.32 - Trading on a Registered Derivatives Transaction Execution Facility for Non-Institutional Customers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Trading on a Registered Derivatives Transaction Execution Facility for Non-Institutional Customers. 4.32 Section 4.32 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING...

  8. Kinetic market models with single commodity having price fluctuations

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Chakrabarti, B. K.

    2006-12-01

We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity, and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as that of the money distribution for a market with agents having random saving propensities.
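    The money-exchange dynamics underlying this model family are compact enough to simulate directly. The sketch below reproduces only the money part (pairwise exchange with saving factors, as in the Chatterjee-Chakrabarti kinetic models); the commodity and price-fluctuation dynamics of the paper are omitted.

```python
# Kinetic money-exchange model with random saving propensities (money only).
import numpy as np

rng = np.random.default_rng(3)
n_agents, steps = 1000, 100_000
money = np.ones(n_agents)
lam = rng.uniform(0.0, 1.0, n_agents)   # per-agent saving propensity

for _ in range(steps):
    i, j = rng.integers(0, n_agents, 2)
    if i == j:
        continue
    pot = (1 - lam[i]) * money[i] + (1 - lam[j]) * money[j]  # money at stake
    eps = rng.random()                                       # random split
    money[i] = lam[i] * money[i] + eps * pot
    money[j] = lam[j] * money[j] + (1 - eps) * pot           # total conserved

# Random saving propensities yield the power-law (Pareto) tail noted above.
top = np.sort(money)[-10:]
print(f"mean {money.mean():.2f}, max {money.max():.1f}, "
      f"top-1% share {top.sum() / money.sum():.1%}")
```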

  9. A Crosswalk of Mineral Commodity End Uses and North American Industry Classification System (NAICS) codes

    USGS Publications Warehouse

    Barry, James J.; Matos, Grecia R.; Menzie, W. David

    2015-09-14

    The links between the end uses of mineral commodities and the NAICS codes provide an instrument for analyzing the use of mineral commodities in the economy. The crosswalk is also a guide, highlighting those industrial sectors in the economy that rely heavily on mineral commodities. The distribution of mineral commodities across the economy is dynamic and does differ from year to year. This report reflects a snapshot of the state of the economy and mineral commodities in 2010.

  10. 17 CFR 1.19 - Prohibited trading in certain “puts” and “calls”.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Prohibited trading in certain “puts” and “calls”. 1.19 Section 1.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Prohibited Trading in Commodity Options § 1...

  11. 17 CFR 33.6 - Suspension or revocation of designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... designation as a contract market for the trading of commodity options. 33.6 Section 33.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.6 Suspension or revocation of designation as a contract market for the trading...

  12. 17 CFR 1.19 - Prohibited trading in certain “puts” and “calls”.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Prohibited trading in certain “puts” and “calls”. 1.19 Section 1.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Prohibited Trading in Commodity Options § 1...

  13. 17 CFR 33.6 - Suspension or revocation of designation as a contract market for the trading of commodity options.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... designation as a contract market for the trading of commodity options. 33.6 Section 33.6 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION REGULATION OF DOMESTIC EXCHANGE-TRADED COMMODITY OPTION TRANSACTIONS § 33.6 Suspension or revocation of designation as a contract market for the trading...

  14. 17 CFR 1.19 - Prohibited trading in certain “puts” and “calls”.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Prohibited trading in certain “puts” and “calls”. 1.19 Section 1.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Prohibited Trading in Commodity Options § 1...

  15. 17 CFR 1.19 - Prohibited trading in certain “puts” and “calls”.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Prohibited trading in certain “puts” and “calls”. 1.19 Section 1.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Prohibited Trading in Commodity Options § 1...

  16. 17 CFR 1.19 - Prohibited trading in certain “puts” and “calls”.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Prohibited trading in certain “puts” and “calls”. 1.19 Section 1.19 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Prohibited Trading in Commodity Options § 1...

  17. 7 CFR 1421.5 - Eligible commodities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...)(1) To be an eligible commodity, the commodity must be merchantable for food, feed, or other uses... poisonous to humans or animals. A commodity containing vomitoxin, aflatoxin, or Aspergillus mold may not be...

  18. 7 CFR 1421.5 - Eligible commodities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...)(1) To be an eligible commodity, the commodity must be merchantable for food, feed, or other uses... poisonous to humans or animals. A commodity containing vomitoxin, aflatoxin, or Aspergillus mold may not be...

  19. Performance Analysis of a Hardware Implemented Complex Signal Kurtosis Radio-Frequency Interference Detector

    NASA Technical Reports Server (NTRS)

    Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark

    2016-01-01

Radio-frequency interference (RFI) is a known problem for passive remote sensing, as evidenced in the L-band radiometers SMOS, Aquarius and, more recently, SMAP. Various algorithms have been developed and implemented on SMAP to improve science measurements. This was achieved by the use of a digital microwave radiometer. RFI mitigation becomes more challenging for microwave radiometers operating at higher frequencies in shared allocations. At higher frequencies, larger bandwidths are also desirable for lower measurement noise, further adding to processing challenges. This work focuses on finding improved RFI mitigation techniques that will be effective at additional frequencies and at higher bandwidths. To aid the development and testing of applicable detection and mitigation techniques, a wide-band RFI algorithm testing environment has been developed using the Reconfigurable Open Architecture Computing Hardware System (ROACH) built by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Group. The testing environment also consists of various test equipment used to reproduce typical signals that a radiometer may see, including those with and without RFI. The testing environment permits quick evaluations of RFI mitigation algorithms as well as showing that they are implementable in hardware. The algorithm implemented is a complex signal kurtosis detector, which was modeled and simulated. The complex signal kurtosis detector showed improved performance over the real kurtosis detector under certain conditions. The real kurtosis is implemented on SMAP at 24 MHz bandwidth. The complex signal kurtosis algorithm was then implemented in hardware at 200 MHz bandwidth using the ROACH. In this work, the performance of the complex signal kurtosis and the real signal kurtosis are compared. Performance evaluations and comparisons, in both simulation and experimental hardware implementation, were done with the use of receiver operating characteristic (ROC) curves. The complex kurtosis algorithm has the potential to reduce data rate due to onboard processing in addition to improving RFI detection performance.
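    The statistic itself is simple: for circular complex Gaussian noise the normalized fourth moment E|z|^4 / (E|z|^2)^2 equals 2, and a constant-envelope interferer pulls it toward 1. The demo below applies that check to synthetic data; the threshold and interferer level are arbitrary choices, not SMAP's operating point.

```python
# Complex-kurtosis RFI check on synthetic baseband samples.
import numpy as np

def complex_kurtosis(z):
    p = np.abs(z) ** 2
    return np.mean(p ** 2) / np.mean(p) ** 2   # 2.0 for complex Gaussian noise

rng = np.random.default_rng(4)
n = 65536
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
cw_rfi = np.exp(2j * np.pi * 0.123 * np.arange(n))   # unit-power CW interferer

for label, z in [("clean", noise), ("rfi", noise + cw_rfi)]:
    k = complex_kurtosis(z)
    print(f"{label}: kurtosis = {k:.3f}, flagged = {abs(k - 2.0) > 0.1}")
```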

  20. Commonly Consumed Food Commodities

    EPA Pesticide Factsheets

    Commonly consumed foods are those ingested for their nutrient properties. Food commodities can be either raw agricultural commodities or processed commodities, provided that they are the forms that are sold or distributed for human consumption.

  1. 76 FR 28641 - Commodity Pool Operators: Relief From Compliance With Certain Disclosure, Reporting and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... are subject to certain operational and advertising requirements under Part 4, to all other provisions... 4 Advertising, Brokers, Commodity futures, Commodity pool operators, Commodity trading advisors...

  2. Commodes: inconvenient conveniences.

    PubMed Central

    Naylor, J R; Mulley, G P

    1993-01-01

    OBJECTIVES--To investigate use of commodes and attitudes of users and carers to them. DESIGN--Interview with semi-structured questionnaire of subjects supplied with commodes from Leeds community appliance centre. SUBJECTS--140 users of a commode and 105 of their carers. RESULTS--Main reasons for being supplied with a commode were impaired mobility (130 subjects), difficulty in climbing stairs (128), and urinary incontinence (127). Main concerns of users and carers were lack of privacy (120 subjects felt embarrassed about using their commode, and 96 would not use it if someone was present); unpleasant smells (especially for 20 subjects who were confined to one room); physical appearance of commode chair (101 users said it had an unfavourable appearance, and 44 had tried to disguise it); and lack of follow up after commode was supplied (only 15 users and carers knew who to contact if there were problems). Users generally either had very positive or very negative attitudes to their commodes but most carers viewed them very negatively, especially with regard to cleaning them. CONCLUSIONS--Health professionals should be aware of people's need for privacy when advising them where to keep their commode. A standard commode is inappropriate for people confined to one room, and alternatives such as a chemical toilet should be considered. Regular follow up is needed to identify any problems such as uncomfortable or unsafe chairs. More thought should be given to the appearance of commodes in their design. PMID:8281060

  3. Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays has prevented large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, is now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea level rise, to investigate reconstructed paleoceanographic masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.
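
    The projection argument can be illustrated directly in code. Below is a toy Python sketch (not KeckCAVES/Vrui code; the screen geometry and eye positions are made-up values) that perspective-projects a 3D point onto a fixed physical screen from a tracked eye position. The two eyes land on different screen coordinates, which is the head-tracked stereo cue that a single fixed 2D projection cannot supply.

        import numpy as np

        def project_to_screen(point, eye, screen_origin, screen_right, screen_up):
            # Intersect the eye-to-point ray with the physical screen plane,
            # then express the hit point in normalized screen coordinates.
            normal = np.cross(screen_right, screen_up)
            ray = point - eye
            t = np.dot(screen_origin - eye, normal) / np.dot(ray, normal)
            hit = eye + t * ray
            local = hit - screen_origin
            return (np.dot(local, screen_right) / np.dot(screen_right, screen_right),
                    np.dot(local, screen_up) / np.dot(screen_up, screen_up))

        screen_o = np.array([-0.3, -0.2, 0.0])   # lower-left corner, metres
        right = np.array([0.6, 0.0, 0.0])        # screen width vector
        up = np.array([0.0, 0.4, 0.0])           # screen height vector
        p = np.array([0.0, 0.0, -1.0])           # a point 1 m behind the screen

        # Left and right eyes 6.4 cm apart, 60 cm in front of the screen:
        for eye in (np.array([-0.032, 0.0, 0.6]), np.array([0.032, 0.0, 0.6])):
            print(project_to_screen(p, eye, screen_o, right, up))

    Real VR toolkits build a full off-axis view frustum per eye per frame from the same plane-intersection geometry; re-running the sketch with a moved eye position shows how head tracking keeps the virtual point spatially stable.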

  4. Building A Community Focused Data and Modeling Collaborative platform with Hardware Virtualization Technology

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.

    2009-12-01

    As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediary data sets, results, and codes from individual efforts with others who are not in directly funded collaboration can also be a challenge with respect to time, cost, and expertise. We propose a modeling, data, and knowledge center that houses NASA satellite data, climate data, and ancillary data, where a focused community may come together to share modeling and analysis codes, scientific results, knowledge, and expertise on a centralized platform, named the Ecosystem Modeling Center (EMC). With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis, and compute environments that are customizable, “archivable”, and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by relieving users of redundantly retrieving and integrating data sets and building modeling and analysis codes. The EMC platform also makes it possible for users to benefit indirectly from others' expertise through prefabricated compute environments, potentially reducing study “ramp-up” times.

  5. Towards Microeconomic Resource Sharing in End System Multicast Networks Based on Walrasian General Equilibrium

    NASA Astrophysics Data System (ADS)

    Rezvani, Mohammad Hossein; Analoui, Morteza

    2010-11-01

    We have designed a competitive economic mechanism for application-level multicast in which a number of independent services are provided to the end-users by a number of origin servers. Each offered service can be thought of as a commodity; the origin servers, and the users who relay a service to their downstream nodes, can thus be thought of as the producers of the economy. Likewise, the end-users can be viewed as the consumers of the economy. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds, so all allocations will be Pareto optimal in the sense that the social welfare of the users is maximized.
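
    The equilibrium computation behind such a mechanism can be sketched with textbook tatonnement: raise the price of any service whose demand exceeds supply until excess demand vanishes. The Python sketch below uses Cobb-Douglas consumers with made-up preference and endowment values; it illustrates the Walrasian general-equilibrium idea only, not the paper's actual pricing protocol.

        import numpy as np

        # Two services ("commodities"), three consumers. alpha[i] is the
        # budget share consumer i spends on service 0; endow[i] is the
        # endowment of each service. All values are hypothetical.
        alpha = np.array([0.3, 0.5, 0.7])
        endow = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])

        def excess_demand(p):
            wealth = endow @ p                     # value of each endowment
            shares = np.column_stack([alpha, 1 - alpha])
            demand = shares * wealth[:, None] / p  # Cobb-Douglas demands
            return demand.sum(axis=0) - endow.sum(axis=0)

        p = np.array([1.0, 1.0])
        for _ in range(2000):
            z = excess_demand(p)
            p = np.maximum(p + 0.01 * z, 1e-6)     # raise price where demand > supply
            p /= p[0]                              # service 0 as numeraire

        print(p, excess_demand(p))  # excess demand ~0: a Walrasian equilibrium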

  6. The CECAM Electronic Structure Library: community-driven development of software libraries for electronic structure simulations

    NASA Astrophysics Data System (ADS)

    Oliveira, Micael

    The CECAM Electronic Structure Library (ESL) is a community-driven effort to segregate shared pieces of software into libraries that can be contributed and used by the community. Besides allowing developers to share the burden of developing and maintaining complex pieces of software, these libraries can also become targets for re-coding by software engineers as hardware evolves, ensuring that electronic structure codes remain at the forefront of HPC trends. In a series of workshops hosted at the CECAM HQ in Lausanne, the tools and infrastructure for the project were prepared, and the first contributions were included and made available online (http://esl.cecam.org). In this talk I will present the different aspects and aims of the ESL and how these can be useful for the electronic structure community.

  7. Flight Testing of the Capillary Pumped Loop 3 Experiment

    NASA Technical Reports Server (NTRS)

    Ottenstein, Laura; Butler, Dan; Ku, Jentung; Cheung, Kwok; Baldauff, Robert; Hoang, Triem

    2002-01-01

    The Capillary Pumped Loop 3 (CAPL 3) experiment was a multiple evaporator capillary pumped loop experiment that flew in the Space Shuttle payload bay in December 2001 (STS-108). The main objective of CAPL 3 was to demonstrate in micro-gravity a multiple evaporator capillary pumped loop system, capable of reliable start-up, reliable continuous operation, and heat load sharing, with hardware for a deployable radiator. Tests performed on orbit included start-ups, power cycles, low power tests (100 W total), high power tests (up to 1447 W total), heat load sharing, variable/fixed conductance transition tests, and saturation temperature change tests. The majority of the tests were completed successfully, although the experiment did exhibit an unexpected sensitivity to shuttle maneuvers. This paper describes the experiment, the tests performed during the mission, and the test results.

  8. 75 FR 27338 - NASDAQ OMX Commodities Clearing-Contract Merchant LLC; NASDAQ OMX Commodities Clearing-Delivery...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-14

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket Nos. ER10-912-000; ER10-913-000; ER10-914-000] NASDAQ OMX Commodities Clearing--Contract Merchant LLC; NASDAQ OMX Commodities Clearing--Delivery LLC; NASDAQ OMX Commodities Clearing--Finance LLC; Notice of Filing May 6, 2010. Take notice that, on May 3, 2010, NASDAQ OMX Commoditie...

  9. 17 CFR Appendix C to Part 4 - Form CTA-PR

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Form CTA-PR C Appendix C to Part 4 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION COMMODITY POOL OPERATORS AND COMMODITY TRADING ADVISORS Pt. 4, App. C Appendix C to Part 4—Form CTA-PR ER24FE12.052 ER24FE12...

  10. If I do not have enough water, then how could I bring additional water for toilet cleaning?! Addressing water scarcity to promote hygienic use of shared toilets in Dhaka, Bangladesh.

    PubMed

    Saxton, Ronald E; Yeasmin, Farzana; Alam, Mahbub-Ul; Al-Masud, Abdullah; Dutta, Notan Chandra; Yeasmin, Dalia; Luby, Stephen P; Unicomb, Leanne; Winch, Peter J

    2017-09-01

    Provision of toilets is necessary but not sufficient to impact health as poor maintenance may impair toilet function and discourage their consistent use. Water in urban slums is both scarce and a prerequisite for toilet maintenance behaviours. We describe the development of behaviour change communications and selection of low-cost water storage hardware to facilitate adequate flushing among users of shared toilets. We conducted nine focus group discussions and six ranking exercises with adult users of shared toilets (50 females, 35 males), then designed and implemented three pilot interventions to facilitate regular flushing and improve hygienic conditions of shared toilets. We conducted follow-up assessments 1 and 2 months post-pilot including nine in-depth interviews and three focus group discussions with adult residents (23 females, 15 males) and three landlords in the pilot communities. Periodic water scarcity was common in the study communities. Residents felt embarrassed to carry water for flushing. Reserving water adjacent to the shared toilet enabled slum residents to flush regularly. Signs depicting rules for toilet use empowered residents and landlords to communicate these expectations for flushing to transient tenants. Residents in the pilot reported improvements in cleanliness and reduced odour inside toilet cubicles. Our pilot demonstrates the potential efficacy of low-cost water storage and behaviour change communications to improve maintenance of and user satisfaction with shared toilets in urban slum settings. © 2017 John Wiley & Sons Ltd.

  11. Responding to climate change and the global land crisis: REDD+, market transformation and low-emissions rural development

    PubMed Central

    Nepstad, Daniel C.; Boyd, William; Stickler, Claudia M.; Bezerra, Tathiana; Azevedo, Andrea A.

    2013-01-01

    Climate change and rapidly escalating global demand for food, fuel, fibre and feed present seemingly contradictory challenges to humanity. Can greenhouse gas (GHG) emissions from land-use, more than one-fourth of the global total, decline as growth in land-based production accelerates? This review examines the status of two major international initiatives that are designed to address different aspects of this challenge. REDD+ is an emerging policy framework for providing incentives to tropical nations and states that reduce their GHG emissions from deforestation and forest degradation. Market transformation, best represented by agricultural commodity roundtables, seeks to exclude unsustainable farmers from commodity markets through international social and environmental standards for farmers and processors. These global initiatives could potentially become synergistically integrated through (i) a shared approach for measuring and favouring high environmental and social performance of land use across entire jurisdictions and (ii) stronger links with the domestic policies, finance and laws in the jurisdictions where agricultural expansion is moving into forests. To achieve scale, the principles of REDD+ and sustainable farming systems must be embedded in domestic low-emission rural development models capable of garnering support across multiple constituencies. We illustrate this potential with the case of Mato Grosso State in the Brazilian Amazon. PMID:23610173

  12. Responding to climate change and the global land crisis: REDD+, market transformation and low-emissions rural development.

    PubMed

    Nepstad, Daniel C; Boyd, William; Stickler, Claudia M; Bezerra, Tathiana; Azevedo, Andrea A

    2013-06-05

    Climate change and rapidly escalating global demand for food, fuel, fibre and feed present seemingly contradictory challenges to humanity. Can greenhouse gas (GHG) emissions from land-use, more than one-fourth of the global total, decline as growth in land-based production accelerates? This review examines the status of two major international initiatives that are designed to address different aspects of this challenge. REDD+ is an emerging policy framework for providing incentives to tropical nations and states that reduce their GHG emissions from deforestation and forest degradation. Market transformation, best represented by agricultural commodity roundtables, seeks to exclude unsustainable farmers from commodity markets through international social and environmental standards for farmers and processors. These global initiatives could potentially become synergistically integrated through (i) a shared approach for measuring and favouring high environmental and social performance of land use across entire jurisdictions and (ii) stronger links with the domestic policies, finance and laws in the jurisdictions where agricultural expansion is moving into forests. To achieve scale, the principles of REDD+ and sustainable farming systems must be embedded in domestic low-emission rural development models capable of garnering support across multiple constituencies. We illustrate this potential with the case of Mato Grosso State in the Brazilian Amazon.

  13. Changing Face of Family Planning Funding in Kenya: A Cross-Sectional Survey of Two Urban Counties.

    PubMed

    Keyonzo, Nelson; Korir, Julius; Abilla, Faith; Sirera, Morine; Nyakwara, Peter; Bazant, Eva; Waka, Charles; Koskei, Nancy; Kabue, Mark

    2017-12-01

    As international development partners reduce funding for family planning (FP) programs, the need to estimate the financial resources devoted to FP is becoming increasingly important at all levels. This cross-sectional assessment examined the FP financing sources, agents, and expenditures in two counties of Kenya for fiscal years 2010/2011 and 2011/2012 to guide local decision-making on financial allocations. Data were collected through a participatory process. This involved stakeholder interviews, review of financial records and service statistics, and a survey of facilities offering FP services. Financing sources and agents were identified, and source amounts calculated. Types of FP provider organizations and the amounts spent by expenditure categories were identified. Overall, five financing sources and seven agents for FP were identified. Total two-year expenditures were KSh 307.8 M (US$ 3.62 M). The government's share of funding rose from 12% to 21% over the two years (p=0.029). In 2010/2011, the largest expense categories were administration, commodities, and labor; however, spending on commodities increased by 47% (p=0.042). This study provides local managers with FP financing and expenditure information for use in budget allocation decision-making. These analyses can be done routinely and replicated in other local counties or countries in a context of devolution.

  14. 76 FR 69333 - Derivatives Clearing Organization General Provisions and Core Principles

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-08

    ...The Commodity Futures Trading Commission (Commission) is adopting final regulations to implement certain provisions of Title VII and Title VIII of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) governing derivatives clearing organization (DCO) activities. More specifically, the regulations establish the regulatory standards for compliance with DCO Core Principles A (Compliance), B (Financial Resources), C (Participant and Product Eligibility), D (Risk Management), E (Settlement Procedures), F (Treatment of Funds), G (Default Rules and Procedures), H (Rule Enforcement), I (System Safeguards), J (Reporting), K (Recordkeeping), L (Public Information), M (Information Sharing), N (Antitrust Considerations), and R (Legal Risk) set forth in Section 5b of the Commodity Exchange Act (CEA). The Commission also is updating and adding related definitions; adopting implementing rules for DCO chief compliance officers (CCOs); revising procedures for DCO applications including the required use of a new Form DCO; adopting procedural rules applicable to the transfer of a DCO registration; and adding requirements for approval of DCO rules establishing a portfolio margining program for customer accounts carried by a futures commission merchant (FCM) that is also registered as a securities broker-dealer (FCM/BD). In addition, the Commission is adopting certain technical amendments to parts 21 and 39, and is adopting certain delegation provisions under part 140.

  15. Simple and inexpensive hardware and software method to measure volume changes in Xenopus oocytes expressing aquaporins.

    PubMed

    Dorr, Ricardo; Ozu, Marcelo; Parisi, Mario

    2007-04-15

    Members of the water channel (aquaporin) family have been identified in central nervous system cells. A classic method to measure membrane water permeability and its regulation is to capture and analyse images of Xenopus laevis oocytes expressing these channels. Laboratories dedicated to the analysis of motion images usually have powerful equipment valued at thousands of dollars. However, some scientists consider that new approaches are needed to reduce costs in scientific labs, especially in developing countries. The objective of this work is to share a very low-cost hardware and software setup based on a well-selected webcam, a hand-made adapter to a microscope, and the use of free software to measure membrane water permeability in Xenopus oocytes. One of the main purposes of this setup is to maintain a high level of quality in images obtained at brief intervals (shorter than 70 ms).
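
    The image-analysis step can be approximated in a few lines of Python. This is a hedged sketch using OpenCV (the authors' setup relied on unspecified free software, so this is not their code): each webcam frame is thresholded, the oocyte's projected area is measured, and a sphere-equivalent volume is derived so swelling can be tracked frame to frame. The camera index, frame count, and dark-oocyte-on-light-background assumption are all illustrative.

        import cv2
        import numpy as np

        def oocyte_volume_px3(gray):
            # Otsu-threshold the frame, take the largest contour as the
            # oocyte, and convert projected area A = pi*r^2 into the
            # volume of a sphere of the same radius, V = (4/3)*pi*r^3.
            _, mask = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return float("nan")
            area = max(cv2.contourArea(c) for c in contours)
            r = np.sqrt(area / np.pi)
            return 4.0 / 3.0 * np.pi * r ** 3

        cap = cv2.VideoCapture(0)      # webcam attached to the microscope
        volumes = []
        for _ in range(100):           # grab ~100 frames at short intervals
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            volumes.append(oocyte_volume_px3(gray))
        cap.release()

        # Relative volume change over time tracks osmotic water flux.
        print([v / volumes[0] for v in volumes[:5]])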

  16. A Streaming PCA VLSI Chip for Neural Data Compression.

    PubMed

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

    Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction error and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction error and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
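
    As a software illustration of the underlying idea, the Python sketch below runs an Oja-style streaming PCA (one pass over the data, a constant-size update per sample) on synthetic multichannel data and reports the error after compressing 32 channels to 4 components. It mirrors the on-chip algorithm only loosely; the paper's fixed-point VLSI design and its exact update rule are not reproduced here, and all dimensions are made up.

        import numpy as np

        def streaming_pca(stream, n_channels, k, eta=0.001):
            # Oja's subspace rule: W += eta * (x - W y) y^T with y = W^T x,
            # re-orthonormalized so the columns track the top-k subspace.
            rng = np.random.default_rng(1)
            W = np.linalg.qr(rng.standard_normal((n_channels, k)))[0]
            for x in stream:
                y = W.T @ x
                W += eta * np.outer(x - W @ y, y)
                W, _ = np.linalg.qr(W)
            return W

        # Synthetic "LFP": 32 channels driven by 4 latent sources plus noise.
        rng = np.random.default_rng(0)
        mix = rng.standard_normal((32, 4))
        data = ((mix @ rng.standard_normal((4, 5000))).T
                + 0.1 * rng.standard_normal((5000, 32)))

        W = streaming_pca(data, n_channels=32, k=4)
        Y = data @ W                  # 8x compression: 32 -> 4 values/sample
        recon = Y @ W.T
        err = np.linalg.norm(data - recon) / np.linalg.norm(data)
        print(f"relative reconstruction error: {err:.3f}")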

  17. Systems engineering and integration: Advanced avionics laboratories

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In order to develop the new generation of avionics which will be necessary for upcoming programs such as the Lunar/Mars Initiative, Advanced Launch System, and the National Aerospace Plane, new Advanced Avionics Laboratories are required. To minimize costs and maximize benefits, these laboratories should be capable of supporting multiple avionics development efforts at a single location, and should be of a common design to support and encourage data sharing. Recent technological advances provide the capability of letting the designer or analyst perform simulations and testing in an environment similar to his engineering environment and these features should be incorporated into the new laboratories. Existing and emerging hardware and software standards must be incorporated wherever possible to provide additional cost savings and compatibility. Special care must be taken to design the laboratories such that real-time hardware-in-the-loop performance is not sacrificed in the pursuit of these goals. A special program-independent funding source should be identified for the development of Advanced Avionics Laboratories as resources supporting a wide range of upcoming NASA programs.

  18. 17 CFR 41.43 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... options with persons other than brokers, dealers, futures commission merchants, floor brokers, or floor... securities, commodity futures, or commodity options with persons other than brokers, dealers, persons....43 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION SECURITY FUTURES PRODUCTS...

  19. 75 FR 54794 - Commodity Pool Operators: Relief From Compliance With Certain Disclosure, Reporting and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-09

    ... are subject to certain operational \\7\\ and advertising requirements \\8\\ under Part 4, to all other... in 17 CFR Part 4 Advertising, Brokers, Commodity futures, Commodity pool operators, Commodity trading...

  20. 78 FR 41384 - Agricultural Advisory Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... COMMODITY FUTURES TRADING COMMISSION Agricultural Advisory Committee Meeting AGENCY: Commodity Futures Trading Commission. ACTION: Notice of Meeting. SUMMARY: The Commodity Futures Trading Commission's... Lachenmayr, Commodity Futures Trading Commission, Three Lafayette Centre, 1155 21st Street NW., Washington...

  1. A Study on Market Efficiency of Selected Commodity Derivatives Traded on NCDEX During 2011

    NASA Astrophysics Data System (ADS)

    Sajipriya, N.

    2012-10-01

    The study aims at testing the weak form of the Efficient Market Hypothesis in the context of an emerging commodity market - the National Commodity and Derivatives Exchange (NCDEX), which is considered the prime commodity derivatives market in India. The study considered daily spot and futures prices of five selected commodities traded on NCDEX over a 12-month period (futures contracts originating and expiring during the period January 2011 to December 2011). The five commodities chosen are Pepper, Crude Palm Oil, Steel, Silver, and Chana, as they account for almost two-thirds of the value of agricultural commodity derivatives traded on NCDEX. The results of the runs test indicate that both spot and futures prices are weak-form efficient.
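
    The runs test is straightforward to state in code. Below is a minimal Python sketch assuming the Wald-Wolfowitz form applied to the signs of daily price changes (the study's exact variant and data handling are not detailed in the abstract); |Z| < 1.96 means randomness, and hence weak-form efficiency, is not rejected at the 5% level.

        import numpy as np

        def runs_test(prices):
            # Z statistic for the number of runs in the sign sequence
            # of daily price changes (zero changes are dropped).
            signs = np.sign(np.diff(prices))
            signs = signs[signs != 0]
            n1 = np.sum(signs > 0)
            n2 = np.sum(signs < 0)
            n = n1 + n2
            runs = 1 + np.sum(signs[1:] != signs[:-1])
            mu = 2.0 * n1 * n2 / n + 1.0
            var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1.0))
            return (runs - mu) / np.sqrt(var)

        # A synthetic random-walk series stands in for an NCDEX price file.
        rng = np.random.default_rng(0)
        spot = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(250)))
        print(f"Z = {runs_test(spot):.2f}")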

  2. 17 CFR 242.401 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... of whose business consists of transactions in securities, commodity futures, or commodity options... securities, commodity futures, or commodity options with persons other than brokers, dealers, persons... M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY FUTURES Customer Margin...

  3. 40 CFR 180.108 - Acephate; tolerances for residues.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... phosphoramidothioate, in or on the commodity. Commodity 1 Parts per million Bean, dry, seed 3.0 Bean, succulent 3.0... phosphoramidothioate, in or on the commodity. Commodity Parts per million Bean, dry, seed 1 Bean, succulent 1 Brussels...

  4. 40 CFR 180.108 - Acephate; tolerances for residues.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... phosphoramidothioate, in or on the commodity. Commodity 1 Parts per million Bean, dry, seed 3.0 Bean, succulent 3.0... phosphoramidothioate, in or on the commodity. Commodity Parts per million Bean, dry, seed 1 Bean, succulent 1 Brussels...

  5. 40 CFR 180.108 - Acephate; tolerances for residues.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... phosphoramidothioate, in or on the commodity. Commodity 1 Parts per million Bean, dry, seed 3.0 Bean, succulent 3.0... phosphoramidothioate, in or on the commodity. Commodity Parts per million Bean, dry, seed 1 Bean, succulent 1 Brussels...

  6. 40 CFR 180.108 - Acephate; tolerances for residues.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... phosphoramidothioate, in or on the commodity. Commodity 1 Parts per million Bean, dry, seed 3.0 Bean, succulent 3.0... phosphoramidothioate, in or on the commodity. Commodity Parts per million Bean, dry, seed 1 Bean, succulent 1 Brussels...

  7. 17 CFR 162.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... control with a covered affiliate. (b) Clear and conspicuous. The term “clear and conspicuous” means... exchange dealer, commodity trading advisor, commodity pool operator, introducing broker, major swap..., commodity trading advisor, commodity pool operator, introducing broker, major swap participant or swap...

  8. 17 CFR 162.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... corporate control with a covered affiliate. (b) Clear and conspicuous. The term “clear and conspicuous... exchange dealer, commodity trading advisor, commodity pool operator, introducing broker, major swap..., commodity trading advisor, commodity pool operator, introducing broker, major swap participant or swap...

  9. 17 CFR 162.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... control with a covered affiliate. (b) Clear and conspicuous. The term “clear and conspicuous” means... exchange dealer, commodity trading advisor, commodity pool operator, introducing broker, major swap..., commodity trading advisor, commodity pool operator, introducing broker, major swap participant or swap...

  10. Cross-commodity delay discounting of alcohol and money in alcohol users

    PubMed Central

    Moody, Lara N.; Tegge, Allison N.; Bickel, Warren K.

    2017-01-01

    Despite real-world implications, the pattern of delay discounting in alcohol users when the commodities available now and later differ has not been well characterized. In this study, 60 participants on Amazon's Mechanical Turk completed the Alcohol Use Disorder Identification Test (AUDIT) to assess severity of use and completed four delay discounting tasks between hypothetical, equivalent amounts of alcohol and money available at five delays. The tasks included two cross-commodity (alcohol now-money later and money now-alcohol later) and two same-commodity (money now-money later and alcohol now-alcohol later) conditions. Delay discounting was significantly associated with clinical cutoffs of the AUDIT for both of the cross-commodity conditions but not for either of the same-commodity delay discounting tasks. The cross-commodity discounting conditions were related to severity of use, wherein heavy users discounted future alcohol less and future money more. The change in direction of the discounting effect was dependent on the commodity that was distally available, suggesting a distinctive pattern of discounting across commodities when comparing light and heavy alcohol users. PMID:29056767

  11. Cross-commodity delay discounting of alcohol and money in alcohol users.

    PubMed

    Moody, Lara N; Tegge, Allison N; Bickel, Warren K

    2017-06-01

    Despite real-world implications, the pattern of delay discounting in alcohol users when the commodities available now and later differ has not been well characterized. In this study, 60 participants on Amazon's Mechanical Turk completed the Alcohol Use Disorder Identification Test (AUDIT) to assess severity of use and completed four delay discounting tasks between hypothetical, equivalent amounts of alcohol and money available at five delays. The tasks included two cross-commodity (alcohol now-money later and money now-alcohol later) and two same-commodity (money now-money later and alcohol now-alcohol later) conditions. Delay discounting was significantly associated with clinical cutoffs of the AUDIT for both of the cross-commodity conditions but not for either of the same-commodity delay discounting tasks. The cross-commodity discounting conditions were related to severity of use, wherein heavy users discounted future alcohol less and future money more. The change in direction of the discounting effect was dependent on the commodity that was distally available, suggesting a distinctive pattern of discounting across commodities when comparing light and heavy alcohol users.
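
    Results like these are typically quantified by fitting a discounting curve to indifference points. The abstract does not state which model the authors fit (hyperbolic fits and area-under-the-curve measures are both common), so the Python sketch below simply fits Mazur's hyperbolic model V = A/(1 + kD) to made-up indifference points; a larger fitted k means steeper discounting of the delayed commodity.

        import numpy as np
        from scipy.optimize import curve_fit

        def hyperbolic(delay, k):
            # Mazur's hyperbolic discounting with the amount normalized to 1.
            return 1.0 / (1.0 + k * delay)

        delays = np.array([1.0, 7.0, 30.0, 180.0, 365.0])   # days (assumed)
        indiff = np.array([0.95, 0.80, 0.55, 0.30, 0.20])   # made-up data

        (k,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.01])
        print(f"fitted discount rate k = {k:.4f} per day")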

  12. 17 CFR Appendix C to Part 1 - [Reserved]

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false [Reserved] C Appendix C to Part 1 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Appendix C to Part 1 [Reserved] ...

  13. 17 CFR 1.1 - [Reserved]

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false [Reserved] 1.1 Section 1.1 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION GENERAL REGULATIONS UNDER THE COMMODITY EXCHANGE ACT Definitions § 1.1 [Reserved] [66 FR 42269, Aug. 10, 2001] ...

  14. 75 FR 77576 - General Regulations and Derivatives Clearing Organizations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-13

    ... Derivatives Clearing Organizations AGENCY: Commodity Futures Trading Commission. ACTION: Notice of proposed... clearing transactions in commodities for future delivery or commodity option transactions, or for effecting settlements of contracts for future delivery or commodity option transactions, for and between members of any...

  15. 40 CFR 414.60 - Applicability; description of the commodity organic chemicals subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... commodity organic chemicals subcategory. 414.60 Section 414.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Commodity Organic Chemicals § 414.60 Applicability; description of the commodity organic chemicals...

  16. 40 CFR 414.60 - Applicability; description of the commodity organic chemicals subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... commodity organic chemicals subcategory. 414.60 Section 414.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Commodity Organic Chemicals § 414.60 Applicability; description of the commodity organic chemicals...

  17. 40 CFR 414.60 - Applicability; description of the commodity organic chemicals subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... commodity organic chemicals subcategory. 414.60 Section 414.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Commodity Organic Chemicals § 414.60 Applicability; description of the commodity organic chemicals...

  18. 40 CFR 414.60 - Applicability; description of the commodity organic chemicals subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... commodity organic chemicals subcategory. 414.60 Section 414.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Commodity Organic Chemicals § 414.60 Applicability; description of the commodity organic chemicals...

  19. 40 CFR 414.60 - Applicability; description of the commodity organic chemicals subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... commodity organic chemicals subcategory. 414.60 Section 414.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Commodity Organic Chemicals § 414.60 Applicability; description of the commodity organic chemicals...

  20. 40 CFR 180.473 - Glufosinate ammonium; tolerances for residues.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 2-amino-4-(hydroxymethylphosphinyl)butanoic acid, in or on the commodity. Commodity Parts per... measuring only the sum of glufosinate ammonium, 2-amino-4-(hydroxymethylphosphinyl)butanoic acid... stoichiometric equivalent of 2-amino-4-(hydroxymethylphosphinyl)butanoic acid, in or on the commodity. Commodity...
