Sample records for cube compression techniques

  1. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    NASA Astrophysics Data System (ADS)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the large number of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on data from traditional hyperspectral systems and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data volume is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
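
    The matched filter scoring used in such comparisons has a standard closed form. A minimal NumPy sketch on synthetic data (the background statistics and target signature below are invented; the authors' exact variant is not specified in the abstract):

```python
import numpy as np

# Spectral matched filter: score(x) = (t-mu)^T S^-1 (x-mu) / ((t-mu)^T S^-1 (t-mu)),
# where mu and S are the background mean and covariance and t is the target signature.
def matched_filter(cube, target):
    """cube: (n_pixels, n_bands); target: (n_bands,). Returns a score per pixel."""
    mu = cube.mean(axis=0)
    cov = np.cov(cube, rowvar=False) + 1e-6 * np.eye(cube.shape[1])  # regularized
    w = np.linalg.solve(cov, target - mu)        # whitened target direction
    return (cube - mu) @ w / ((target - mu) @ w)

rng = np.random.default_rng(0)
n_bands = 30
cube = rng.normal(0.5, 0.05, size=(500, n_bands))  # synthetic background pixels
target = np.linspace(0.2, 0.9, n_bands)            # hypothetical target signature
cube[0] = 0.7 * target + 0.3 * cube[0]             # plant a subpixel target
scores = matched_filter(cube, target)
assert scores.argmax() == 0                        # planted pixel scores highest
```

    The same scoring applies unchanged to multiplexed cubes; only the input data differ, which is why the detector can be compared across the two acquisition modes.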

  2. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

    Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the LC cell performs the spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only ~10% of the samples required by a conventional system. Despite the compression, the technique is computationally very demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operators. An additional way to reduce the reconstruction effort is to perform the reconstruction on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).
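
    The three patch strategies can be made concrete with a small sketch: the cube is partitioned into 1D spectra, 2D row-by-band slices, or 3D blocks, and each patch would then be reconstructed independently (the CS solver itself is omitted; patch sizes here are illustrative):

```python
import numpy as np

# Partition a (rows, cols, bands) cube into patches three ways; a real CS
# reconstruction would run on each yielded patch independently.
def iter_patches(cube, mode):
    rows, cols, bands = cube.shape
    if mode == "1d":                        # one spectrum per spatial pixel
        for r in range(rows):
            for c in range(cols):
                yield cube[r, c, :]
    elif mode == "2d":                      # one spatial-row x band slice at a time
        for r in range(rows):
            yield cube[r, :, :]
    elif mode == "3d":                      # 2x2 spatial blocks with full spectrum
        for r in range(0, rows, 2):
            for c in range(0, cols, 2):
                yield cube[r:r+2, c:c+2, :]

cube = np.arange(4 * 4 * 5).reshape(4, 4, 5)
for mode, count in (("1d", 16), ("2d", 4), ("3d", 4)):
    patches = list(iter_patches(cube, mode))
    assert len(patches) == count
    assert sum(p.size for p in patches) == cube.size  # partitions cover the cube
```

    Pixel-wise 1D patches maximize parallelism; larger patches let the solver exploit spatial correlation at a higher per-patch cost, which is the trade-off the comparison explores.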

  3. Compression strategies for LiDAR waveform cube

    NASA Astrophysics Data System (ADS)

    Jóźków, Grzegorz; Toth, Charles; Quirk, Mihaela; Grejner-Brzezinska, Dorota

    2015-01-01

    Full-waveform LiDAR data (FWD) provide a wealth of information about the shape and materials of the surveyed areas. Unlike discrete-return data, which retain only a few strong returns, FWD generally keep the whole signal at all times, regardless of signal intensity. Hence, FWD will have an increasingly well-deserved role in mapping and beyond, particularly in the much-desired classification of data in raw format. Full-waveform systems currently only record the waveform data at the acquisition stage; return extraction is mostly deferred to post-processing. Although the full waveform preserves most of the details of the real data, it presents a serious practical challenge to wide use: much larger datasets compared to those from classical discrete-return systems. Beyond the need for more storage space, the acquisition speed of FWD may also limit the pulse rate on systems that cannot store data fast enough, and thus reduce the perceived system performance. This work introduces a waveform cube model to compress waveforms in selected subsets of the cube, aimed at decreasing storage while maintaining the maximum pulse rate of FWD systems. The spatial distribution of airborne waveform data is irregular; however, the manner of FWD acquisition allows the organization of the waveforms into a regular 3D structure similar to familiar multi-component imagery, such as hyper-spectral cubes or 3D volumetric tomography scans. In our experiments, the waveform cube is compressed using classical methods for 2D imagery, which are tested to assess the feasibility of the proposed solution. This study presents a performance analysis of several lossy compression methods applied to the LiDAR waveform cube, including JPEG-1, JPEG-2000, and PCA-based techniques. A wide range of tests performed on real airborne datasets demonstrated the benefits of the JPEG-2000 standard, where high compression rates incur fairly small data degradation. In addition, a JPEG-2000-compliant compression implementation can be fast and thus used in real-time systems, as compressed data sequences can be formed progressively during waveform data collection. We conclude from our experiments that 2D image compression strategies are feasible and efficient, and might even be applied during acquisition on FWD sensors.
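
    The feasibility argument — smooth waveform slices compress well under a 2D transform — can be illustrated with a single-level 2D Haar transform and coefficient thresholding as a simple stand-in for the JPEG-2000 codec (synthetic waveform slice; not the paper's data):

```python
import numpy as np

# One-level 2D Haar transform: pairwise averages/differences on rows, then columns.
def haar2d(a):
    lo = (a[0::2] + a[1::2]) / 2; hi = (a[0::2] - a[1::2]) / 2
    a = np.vstack([lo, hi])
    lo = (a[:, 0::2] + a[:, 1::2]) / 2; hi = (a[:, 0::2] - a[:, 1::2]) / 2
    return np.hstack([lo, hi])

def ihaar2d(b):
    h = b.shape[1] // 2
    lo, hi = b[:, :h], b[:, h:]
    a = np.empty_like(b)
    a[:, 0::2] = lo + hi; a[:, 1::2] = lo - hi
    v = a.shape[0] // 2
    lo, hi = a[:v], a[v:]
    out = np.empty_like(a)
    out[0::2] = lo + hi; out[1::2] = lo - hi
    return out

# Synthetic slice of the waveform cube: a Gaussian return whose range drifts
# slowly across shots, mimicking smooth terrain.
t = np.arange(32.0)
peaks = 10 + 5 * np.sin(np.linspace(0, np.pi, 32))
slice2d = np.exp(-((t[:, None] - peaks[None, :]) ** 2) / 8.0)

coeffs = haar2d(slice2d)
thresh = np.quantile(np.abs(coeffs), 0.75)              # keep largest ~25%
kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
recon = ihaar2d(kept)
rmse = np.sqrt(np.mean((recon - slice2d) ** 2))
assert rmse < 0.1                                       # small loss at 4:1 keep rate
```

    A real codec adds quantization and entropy coding on top, but the principle — concentrate energy in few transform coefficients, then discard the rest — is the same.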

  4. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirements of high speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyper-spectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of its development.
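
    Bitplane encoding is what makes the bit string embedded: transmitting planes from most to least significant means the stream can be truncated at any point and still decode to a coarser image. A hedged sketch without the 2D transform stage (8-bit magnitudes encoded directly):

```python
import numpy as np

# Encode 8-bit values as bitplanes, MSB first; decoding from any prefix of the
# plane list reconstructs the image with the remaining low bits zeroed.
def to_bitplanes(img, nbits=8):
    return [(img >> b) & 1 for b in range(nbits - 1, -1, -1)]  # MSB first

def from_bitplanes(planes, nbits=8):
    img = np.zeros(planes[0].shape, dtype=np.int64)
    for idx, p in enumerate(planes):
        img |= p.astype(np.int64) << (nbits - 1 - idx)
    return img

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16))
planes = to_bitplanes(img)
errors = [np.abs(from_bitplanes(planes[:k]) - img).mean() for k in range(1, 9)]
assert all(a >= b for a, b in zip(errors, errors[1:]))  # quality never worsens
assert errors[-1] == 0.0                                # all planes -> lossless
```

    In the actual codec the planes are entropy-coded transform coefficients, so truncating the stream controls the rate exactly while degrading quality gracefully.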

  5. Technique for Solving Electrically Small to Large Structures for Broadband Applications

    NASA Technical Reports Server (NTRS)

    Jandhyala, Vikram; Chowdhury, Indranil

    2011-01-01

    Fast iterative algorithms are often used for solving Method of Moments (MoM) systems having a large number of unknowns to determine current distribution and other parameters. The most commonly used fast methods include the fast multipole method (FMM), the precorrected fast Fourier transform (PFFT), and low-rank QR compression methods. These methods reduce the O(N^2) memory and time requirements to O(N log N) by compressing the dense MoM system so as to exploit the physics of Green's function interactions. FFT-based techniques for solving such problems are efficient for space-filling and uniform structures, but their performance degrades substantially for non-uniformly distributed structures due to the inherent need to employ a uniform global grid. FMM or QR techniques are better suited than FFT techniques; however, neither the FMM nor the QR technique can be used at all frequencies. This method has been developed to efficiently solve for a desired parameter of a system or device that can include both electrically large FMM elements and electrically small QR elements. The system or device is set up as an oct-tree structure that can include regions of both the FMM type and the QR type. The system is enclosed with a cube at the 0-th level, and the 0-th-level cube is split into eight child cubes. This forms the cubes at the 1st level; the splitting process is repeated recursively for cubes at successive levels until a desired number of levels is created. For each cube thus formed, neighbor lists and interaction lists are maintained. An iterative solver is then used to determine a first matrix-vector product for any electrically large elements as well as a second matrix-vector product for any electrically small elements included in the structure. These matrix-vector products for the electrically large and small elements are combined, and a net delta for the combination is determined. The iteration continues until the net delta is within predefined limits. The matrix-vector products last obtained are used to solve for the desired parameter, which is then presented to a user in a tangible form, for example, on a display.
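
    The oct-tree setup described above can be sketched directly (geometry bookkeeping only; the FMM/QR matrix-vector products, neighbor lists and interaction lists of a real solver are beyond a short example):

```python
# Recursively split a level-0 bounding cube into eight children per level.
# Each cube is stored as (center, half_width); a real solver would also
# attach neighbor and interaction lists to every cube created here.
def subdivide(center, half, level, max_level, cubes):
    cubes.setdefault(level, []).append((center, half))
    if level == max_level:
        return
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                child = (center[0] + dx * half,
                         center[1] + dy * half,
                         center[2] + dz * half)
                subdivide(child, half / 2, level + 1, max_level, cubes)

cubes = {}
subdivide((0.0, 0.0, 0.0), 1.0, 0, 2, cubes)
assert [len(cubes[l]) for l in (0, 1, 2)] == [1, 8, 64]  # 8^level cubes per level
```

    The FMM/QR split would then tag each leaf cube as electrically large or small by comparing its edge length to the wavelength.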

  6. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities; (2) the forward projection matrix, rather than a Gaussian random matrix, is constructed by a visualization tool generator; (3) the Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for a pseudo-sine absorption problem, a two-cube problem and a two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
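
    Orthogonal matching pursuit, the sparse solver named above, greedily selects the dictionary column most correlated with the residual and re-fits by least squares. A self-contained sketch on a synthetic underdetermined system (not the paper's projection matrix):

```python
import numpy as np

# Orthogonal matching pursuit for y = A x with x sparse: pick the column most
# correlated with the residual, refit all picked columns by least squares, repeat.
def omp(A, y, n_nonzero):
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 37, 81]] = [1.5, -2.0, 0.8]  # 3-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, 3)
assert np.allclose(x_hat, x_true, atol=1e-6)
```

    In the solver above, the role of A is played by the forward projection matrix composed with the Fourier/wavelet sparsifying transforms.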

  7. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of contiguous narrow spectral bands. Therefore, each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received much attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the cross-correlation matrix of the hyperspectral images between different bands; then a wavelet-based algorithm is applied to each subspace; finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
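
    The first step — decomposing the band set into subspaces from the inter-band correlation matrix — can be sketched as follows (the wavelet and PCA stages are omitted; the threshold and the synthetic two-cluster cube are invented for illustration):

```python
import numpy as np

# Split the spectral axis into contiguous subspaces wherever the correlation
# between adjacent bands drops below a threshold.
def group_bands(cube2d, thresh=0.95):
    """cube2d: (n_pixels, n_bands). Returns a list of band-index groups."""
    corr = np.corrcoef(cube2d, rowvar=False)
    groups, start = [], 0
    for b in range(1, cube2d.shape[1]):
        if corr[b - 1, b] < thresh:
            groups.append(list(range(start, b)))
            start = b
    groups.append(list(range(start, cube2d.shape[1])))
    return groups

rng = np.random.default_rng(3)
n_pix = 200
low = rng.normal(size=n_pix)    # latent signal shared by bands 0-3
high = rng.normal(size=n_pix)   # latent signal shared by bands 4-7
bands = [low + 0.01 * rng.normal(size=n_pix) for _ in range(4)] + \
        [high + 0.01 * rng.normal(size=n_pix) for _ in range(4)]
cube2d = np.stack(bands, axis=1)
groups = group_bands(cube2d)
assert groups == [[0, 1, 2, 3], [4, 5, 6, 7]]  # the two latent clusters recovered
```

    Each recovered subspace would then be wavelet-transformed and reduced by PCA, as the abstract's remaining two steps describe.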

  8. A cost effective cultivation medium for biocalcification of Bacillus pasteurii KCTC 3558 and its effect on cement cubes properties.

    PubMed

    Yoosathaporn, S; Tiangburanatham, P; Bovonsombut, S; Chaipanich, A; Pathom-Aree, W

    2016-01-01

    Application of carbonate precipitation induced by Bacillus pasteurii to improve some properties of cement has been reported. However, it has not yet been successful at commercial scale due to the high cost of the cultivation medium. This is the first report on the application of effluent from a chicken manure biogas plant, a high-protein agricultural waste, as an alternative growth medium for carbonate precipitation by B. pasteurii KCTC 3558. Urease activity of B. pasteurii KCTC 3558 cultured in chicken manure effluent medium and in three standard media was examined using the phenate method. The highest urease production was achieved in chicken manure effluent medium (16.756 U mg(-1) protein). The cost per liter of chicken manure effluent medium is up to 88.2% lower than that of the other standard media. The most effective cultivation medium was selected for a carbonate precipitation study in cement cubes. Water absorption, voids, apparent density and compressive strength of the cement cubes were measured according to ASTM standards. A correlation between the increase in density and the compressive strength of the bacteria-amended cement cubes was evident. The density of the bacterial cement cube was 5.1% higher than the control, while the compressive strength of cement mixed with bacterial cells in chicken manure effluent medium increased by up to 30.2% compared with the control. SEM and XRD analyses also found a crystalline phase of calcium carbonate within the bacterial cement, which confirmed that the increased density and compressive strength resulted from bacterial carbonate precipitation. This study indicates that the effluent from a chicken manure biogas plant can be used as an alternative, cost-effective culture medium for the cultivation and biocalcification of B. pasteurii KCTC 3558 in cement. Copyright © 2016. Published by Elsevier GmbH.

  9. Characterization of strain and its effects on ferromagnetic nickel nanocubes

    NASA Astrophysics Data System (ADS)

    Manna, Sohini; Kim, Jong Woo; Lubarda, Marko V.; Wingert, James; Harder, Ross; Spada, Fred; Lomakin, Vitaliy; Shpyrko, Oleg; Fullerton, Eric E.

    2017-12-01

    We report on the interplay of magnetic properties and intrinsic strain in ferromagnetic nickel nanocubes with cubic anisotropy. Via coherent x-ray diffraction imaging we observed compressive stress at the bottom surface of these cubes. The nanocubes with {100} facets described and imaged in this study were synthesized using a single-step CVD process. Micromagnetic simulations predict the presence of vortices at remanence in the absence of strain. The effects of strain resulting from the compressive stress on the magnetic response of the ferromagnetic cubes are investigated. We observe that the measured intrinsic strain is too low to change the magnetic anisotropy of the ferromagnetic cubes, but the topological behavior of the magnetic vortices is sensitive to even this low range of strain.

  10. Deformation response of cube-on-cube and non-coherent twin interfaces in AgCu eutectic after dynamic plastic compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eftink, Benjamin P.; Mara, Nathan Allan; Kingstedt, Owen T.

    For this research, Split-Hopkinson pressure bar dynamic compression experiments were conducted to determine the dependence of defect/interface interactions on interface type, bilayer thickness and interface orientation with respect to the loading direction in the Ag-Cu eutectic system. Specifically, the deformation microstructure in alloys with either a cube-on-cube orientation relationship with {111} Ag||{111} Cu interface habit planes or a twin orientation relationship with {$\overline{3}13$} Ag||{$\overline{1}12$} Cu interface habit planes, with bilayer thicknesses of 500 nm, 1.1 µm and 2.2 µm, was probed using TEM. The deformation was carried by dislocation slip and, under certain conditions, deformation twinning. The twinning response was dependent on the loading orientation with respect to the interface plane, the bilayer thickness, and the interface type. Twinning was only observed when loading at orientations away from the growth direction and decreased in prevalence with decreasing bilayer thickness. Twinning in Cu was dependent on twinning partial dislocations being transmitted from Ag, which only occurred for cube-on-cube interfaces. Lastly, dislocation slip and deformation twin transfer across the interfaces are discussed in terms of the slip transfer conditions developed for grain boundaries in FCC alloys.

  11. Deformation response of cube-on-cube and non-coherent twin interfaces in AgCu eutectic after dynamic plastic compression

    DOE PAGES

    Eftink, Benjamin P.; Mara, Nathan Allan; Kingstedt, Owen T.; ...

    2017-12-02

    For this research, Split-Hopkinson pressure bar dynamic compression experiments were conducted to determine the dependence of defect/interface interactions on interface type, bilayer thickness and interface orientation with respect to the loading direction in the Ag-Cu eutectic system. Specifically, the deformation microstructure in alloys with either a cube-on-cube orientation relationship with {111} Ag||{111} Cu interface habit planes or a twin orientation relationship with {$\overline{3}13$} Ag||{$\overline{1}12$} Cu interface habit planes, with bilayer thicknesses of 500 nm, 1.1 µm and 2.2 µm, was probed using TEM. The deformation was carried by dislocation slip and, under certain conditions, deformation twinning. The twinning response was dependent on the loading orientation with respect to the interface plane, the bilayer thickness, and the interface type. Twinning was only observed when loading at orientations away from the growth direction and decreased in prevalence with decreasing bilayer thickness. Twinning in Cu was dependent on twinning partial dislocations being transmitted from Ag, which only occurred for cube-on-cube interfaces. Lastly, dislocation slip and deformation twin transfer across the interfaces are discussed in terms of the slip transfer conditions developed for grain boundaries in FCC alloys.

  12. Correlation between compressive strength and ultrasonic pulse velocity of high strength concrete incorporating chopped basalt fibre

    NASA Astrophysics Data System (ADS)

    Shafiq, Nasir; Fadhilnuruddin, Muhd; Elshekh, Ali Elheber Ahmed; Fathi, Ahmed

    2015-07-01

    Ultrasonic pulse velocity (UPV) is considered one of the most important non-destructive techniques used to evaluate the mechanical characteristics of high strength concrete (HSC). The relationship between the compressive strength of HSC containing chopped basalt fibre strands (CBSF) and UPV was investigated. The concrete specimens were prepared using different ratios of CBSF as internal strengthening material. The compressive strength measurements were conducted at sample ages of 3, 7, 28, 56 and 90 days, whilst the ultrasonic pulse velocity was measured at 28 days. The compressive strength of HSC with chopped basalt fibre did not show any improvement; instead, it decreased. The UPV of the chopped basalt fibre reinforced concrete was found to be less than that of the control mix for each addition ratio of basalt fibre. A relationship was obtained between the cube compressive strength of HSC and UPV for various amounts of chopped basalt fibres.
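
    Calibration curves of this kind are commonly fitted with an exponential law f_c = a·exp(b·V). A sketch on invented (UPV, strength) pairs — the paper's measured data and fitted constants are not reproduced here:

```python
import numpy as np

# Fit f_c = a * exp(b * V) by linearizing: log(f_c) = log(a) + b * V.
upv = np.array([4.1, 4.2, 4.3, 4.4, 4.5, 4.6])             # km/s (synthetic)
strength = np.array([52.0, 56.1, 60.9, 65.8, 71.2, 77.0])  # MPa (synthetic)
b, log_a = np.polyfit(upv, np.log(strength), 1)            # slope, intercept
a = np.exp(log_a)
pred = a * np.exp(b * upv)
assert b > 0                                               # strength rises with UPV
assert np.max(np.abs(pred - strength) / strength) < 0.02   # fit within ~2%
```

    With real data, the scatter around the curve is what quantifies how reliably UPV alone estimates cube strength.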

  13. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned filter array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step three-dimensional spatial-spectral coding of the input data cube, which provides higher flexibility in the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer-pitch thin films, referred to as pixelated filters, at three different wavelengths. The performance of the system is evaluated in terms of references measured by a commercially available spectrometer and the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  14. Non-destructive testing techniques for the forensic engineering investigation of reinforced concrete buildings.

    PubMed

    Hobbs, Brian; Tchoketch Kebir, Mohamed

    2007-04-11

    This study describes in detail the results of a laboratory investigation in which the compressive strength of 150 mm side-length cubes was evaluated. Non-destructive testing (NDT) was carried out using ultrasonic pulse velocity (UPV) and impact rebound hammer (IRH) techniques to establish a correlation with the compressive strengths from compression tests. To adapt the Schmidt hammer apparatus and the ultrasonic pulse velocity tester to the type of concrete used in Algeria, concrete mix proportions recommended by the Algerian code were chosen. The correlation curve for each test was obtained by varying the level of compaction, the water/cement ratio and the concrete age of the specimens. Unlike other works, this research highlights the significant effect of the formwork material on the surface hardness of concrete, with two different mould materials (plastic and wood) used for the specimens. A combined method using the above two tests shows an improvement in the strength estimation of concrete, and further improvement is obtained by including the concrete density. The resulting calibration curves for strength estimation were compared with others from previously published literature.

  15. Band-Moment Compression of AVIRIS Hyperspectral Data and its Use in the Detection of Vegetation Stress

    NASA Technical Reports Server (NTRS)

    Estep, L.; Davis, B.

    2001-01-01

    A remote sensing campaign was conducted over a U.S. Department of Agriculture test farm at Shelton, Nebraska. An experimental field was laid out in plots that were differentially treated with anhydrous ammonia. Four replicates of 0-kg/ha to 200-kg/ha plots, in 50-kg/ha increments, were set out in a random block design. Low-altitude (GSD of 3 m) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral data were collected over the site in 224 bands. Simultaneously, ground data were collected to support the airborne imagery. In an effort to reduce the data load while maintaining or enhancing algorithm performance for vegetation stress detection, band-moment compression and analysis were applied to the AVIRIS image cube. The results indicated that band-moment techniques compress the AVIRIS dataset significantly while retaining the capability of detecting environmentally induced vegetation stress.
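
    Band-moment compression replaces each pixel's full spectrum with a few statistical moments over the band axis. A sketch of the idea (the specific moments and any normalization used in the study are not detailed in the abstract):

```python
import numpy as np

# Compress a (rows, cols, bands) cube to per-pixel spectral moments:
# mean, variance and skewness over the band axis.
def band_moments(cube):
    mean = cube.mean(axis=2)
    centered = cube - mean[..., None]
    var = (centered ** 2).mean(axis=2)
    skew = (centered ** 3).mean(axis=2) / np.maximum(var, 1e-12) ** 1.5
    return np.stack([mean, var, skew], axis=2)

rng = np.random.default_rng(4)
cube = rng.normal(size=(8, 8, 224))       # synthetic stand-in for AVIRIS data
moments = band_moments(cube)
assert moments.shape == (8, 8, 3)
assert cube.size / moments.size > 70      # ~75x reduction from 224 bands to 3
```

    Stress detection would then operate on the moment image instead of the full cube, which is where the data-load savings come from.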

  16. PREPARATION OF HIGH-DENSITY THORIUM OXIDE SPHERES

    DOEpatents

    McNees, R.A. Jr.; Taylor, A.J.

    1963-12-31

    A method of preparing high-density thorium oxide spheres for use in pellet beds in nuclear reactors is presented. Sinterable thorium oxide is first converted to free-flowing granules by means such as compression into a compact and comminution of the compact. The granules are then compressed into cubes having a density of 5.0 to 5.3 grams per cubic centimeter. The cubes are tumbled to form spheres by attrition, and the spheres are then fired at 1250 to 1350 deg C. The fired spheres are then polished and fired at a temperature above 1650 deg C to obtain high density. Spherical pellets produced by this method are highly resistant to mechanical attrition by water. (AEC)

  17. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it is extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wideband spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from a number of spectral scanning shots an order of magnitude smaller than would be required by conventional systems.
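
    The measurement model can be caricatured in a few lines: each shot integrates the scene spectrum under one LC transmission curve, and a spectrum is recovered from fewer shots than bands. Here a low-order cosine basis stands in for the sparsity prior of the full CS reconstruction (all numbers and the random transmission curves are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, K = 80, 20, 8                     # spectral bands, shots, basis order
lam = np.arange(N)

# DCT-like smooth basis: real spectra are well represented by few such modes.
basis = np.cos(np.pi * np.outer(lam + 0.5, np.arange(K)) / N)
s_true = basis @ rng.normal(size=K)     # smooth ground-truth spectrum

Phi = rng.uniform(0.0, 1.0, size=(M, N))  # toy LC transmission per shot
y = Phi @ s_true                          # M modulated measurements, M < N

# Recover the K basis coefficients from M >= K measurements by least squares.
c, *_ = np.linalg.lstsq(Phi @ basis, y, rcond=None)
s_hat = basis @ c
assert np.allclose(s_hat, s_true, atol=1e-6)  # N-band spectrum from M shots
```

    The actual system solves a regularized CS problem per pixel rather than restricting the basis up front, but the shot-count saving has the same source: the spectrum has far fewer degrees of freedom than bands.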

  18. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-23

    Spectroscopic imaging has proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it is extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wideband spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from a number of spectral scanning shots an order of magnitude smaller than would be required by conventional systems.

  19. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescence lifetime imaging is an optical technique that facilitates imaging of molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment, measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography allows us to acquire an entire fluorescence lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we demonstrate the ability of this technique to perform single-shot fluorescence lifetime imaging of cells and microspheres.
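
    The CUP encoding chain — mask, shear, integrate — can be sketched in one spatial dimension (the reconstruction step, which inverts this operator with a CS solver, is omitted; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
nx, nt = 16, 8
scene = rng.random((nx, nt))               # dynamic scene I(x, t)
mask = rng.integers(0, 2, size=nx)         # pseudo-random binary spatial code

# Streak-camera model: each time slice is masked, shifted by t pixels (the
# temporal shear), and summed into a single time-integrated readout.
readout = np.zeros(nx + nt)
for t in range(nt):
    readout[t:t + nx] += mask * scene[:, t]

assert readout.shape == (nx + nt,)
# Photon count is conserved: the readout sums exactly the masked scene.
assert np.isclose(readout.sum(), (mask[:, None] * scene).sum())
```

    Because each time slice lands at a different shift under a known code, a compressed sensing solver can disentangle the (x, t) cube from this single readout, which is what enables single-shot lifetime imaging.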

  20. Lunar cement and lunar concrete

    NASA Technical Reports Server (NTRS)

    Lin, T. D.

    1991-01-01

    Results of a study to investigate methods of producing cements from lunar materials are presented. A chemical process and a differential volatilization process to enrich the lime content in selected lunar materials were identified. One new cement made from lime and anorthite developed compressive strengths of 39 MPa (5500 psi) for 1-inch paste cubes. The second, a hypothetical composition based on differential volatilization of basalt, formed a mineral glass which was activated with an alkaline additive. The 1-inch paste cubes, cured at 100 C and 100 percent humidity, developed compressive strengths in excess of 49 MPa (7100 psi). Also discussed are tests made with Apollo 16 lunar soil and an ongoing investigation of a proposed dry-mix/steam-injection procedure for casting concrete on the Moon.

  1. Size-dependent nonlinear bending of micro/nano-beams made of nanoporous biomaterials including a refined truncated cube cell

    NASA Astrophysics Data System (ADS)

    Sahmani, S.; Aghdam, M. M.

    2017-12-01

    Morphology and pore size play an essential role in the mechanical properties as well as the associated biological capability of a porous structure made of biomaterials. The objective of the current study is to predict the Young's modulus and Poisson's ratio of nanoporous biomaterials including refined truncated cube cells based on a hyperbolic shear deformable beam model. Analytical relationships for the mechanical properties of the nanoporous biomaterials are given as functions of the refined cell's dimensions. After that, the size dependency of the nonlinear bending behavior of micro/nano-beams made of such nanoporous biomaterials is analyzed using the nonlocal strain gradient elasticity theory. It is assumed that the micro/nano-beam has one movable end under axial compression in conjunction with a uniform distributed lateral load. The Galerkin method together with an improved perturbation technique is employed to derive explicit analytical expressions for the nonlocal strain gradient load-deflection curves of micro/nano-beams made of nanoporous biomaterials subjected to a uniform transverse distributed load. It is found that as the pore size increases, the micro/nano-beam undergoes much more deflection for a given distributed load due to the reduction in the stiffness of the nanoporous biomaterial. This pattern is more prominent for lower values of the axial compressive load applied at the free end of the micro/nano-beam.
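
    Projecting the beam equation onto a single mode reduces the nonlinear bending problem to a cubic algebraic equation k1·a + k3·a³ = q for the modal amplitude a, which a few Newton steps solve. The stiffness values below are invented placeholders; the paper derives k1 and k3 from the cell geometry and nonlocal strain gradient theory:

```python
# Solve k1*a + k3*a^3 = q for the modal deflection amplitude a by Newton's method.
def deflection(k1, k3, q, a=0.0):
    for _ in range(50):                       # Newton iterations
        f = k1 * a + k3 * a ** 3 - q
        a -= f / (k1 + 3 * k3 * a ** 2)       # f'(a) = k1 + 3*k3*a^2 > 0
    return a

q = 2.0                                        # a given transverse load level
a_small_pore = deflection(k1=10.0, k3=4.0, q=q)  # stiffer lattice (small pores)
a_large_pore = deflection(k1=6.0, k3=4.0, q=q)   # softer lattice (large pores)

# Larger pores -> lower stiffness -> more deflection at the same load,
# matching the trend reported in the abstract.
assert a_large_pore > a_small_pore
assert abs(10.0 * a_small_pore + 4.0 * a_small_pore ** 3 - q) < 1e-10
```

    Sweeping q and plotting a against it would trace the load-deflection curves the paper derives in closed form.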

  2. Cubic map algebra functions for spatio-temporal analysis

    USGS Publications Warehouse

    Mennis, J.; Viger, R.; Tomlin, C.D.

    2005-01-01

    We propose an extension of map algebra to three dimensions for spatio-temporal data handling. This approach yields a new class of map algebra functions that we call "cube functions." Whereas conventional map algebra functions operate on data layers representing two-dimensional space, cube functions operate on data cubes representing two-dimensional space over a third-dimensional period of time. We describe the prototype implementation of a spatio-temporal data structure and selected cube function versions of conventional local, focal, and zonal map algebra functions. The utility of cube functions is demonstrated through a case study analyzing the spatio-temporal variability of remotely sensed, southeastern U.S. vegetation character over various land covers and during different El Niño/Southern Oscillation (ENSO) phases. Like conventional map algebra, the application of cube functions may demand significant data preprocessing when integrating diverse data sets, and is subject to limitations related to data storage and algorithm performance. Solutions to these issues include extending data compression and computing strategies for calculations on very large data volumes to spatio-temporal data handling.
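
    A cube-function version of a conventional focal operation can be sketched as a 3x3x3 focal mean over (x, y, t); the edge handling and window size here are illustrative choices, not the prototype's:

```python
import numpy as np

# Focal mean extended to three dimensions: average each voxel's 3x3x3
# spatio-temporal neighborhood, clipping the window at the cube's edges.
def focal_mean_3d(cube):
    out = np.empty_like(cube, dtype=float)
    nx, ny, nt = cube.shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nt):
                win = cube[max(i-1, 0):i+2, max(j-1, 0):j+2, max(k-1, 0):k+2]
                out[i, j, k] = win.mean()
    return out

cube = np.zeros((4, 4, 4))
cube[1, 1, 1] = 27.0                       # a single spatio-temporal spike
smoothed = focal_mean_3d(cube)
assert smoothed[1, 1, 1] == 1.0            # 27 spread over a full 27-voxel window
assert smoothed[0, 0, 0] == 27.0 / 8       # corner window holds only 8 voxels
```

    Local and zonal cube functions follow the same pattern, with the neighborhood replaced by a single voxel or by a zone mask spanning the time axis.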

  3. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed at a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main cause of the rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were slightly harder and chewier.

  4. Research on the principle and experimentation of optical compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2013-12-01

    Optical compressive spectral imaging is a novel spectral imaging technique inspired by compressed sensing; its advantages include a reduced volume of acquired data, snapshot imaging, and an increased signal-to-noise ratio. Because sampling quality influences the final imaging quality, previously reported systems matched the sampling interval to the modulation interval, but the resulting reduced sampling rate sacrifices spectral resolution. To overcome this defect, the requirement that the sampling interval match the modulation interval is dropped, and the number of spectral channels of the designed experimental device increases more than threefold compared with the previous method. An imaging experiment is carried out with the experimental setup, and the spectral data cube of the target is reconstructed from the acquired compressed image using two-step iterative shrinkage/thresholding algorithms. The experimental results indicate that the number of spectral channels increases effectively while the reconstructed data remain high-fidelity: the images and spectral curves accurately reflect the spatial and spectral character of the target.

  5. Application and optimisation of air-steam cooking on selected vegetables: impact on physical and antioxidant properties.

    PubMed

    Paciulli, Maria; Dall'Asta, Chiara; Rinaldi, Massimiliano; Pellegrini, Nicoletta; Pugliese, Alessandro; Chiavaro, Emma

    2018-04-01

    Several studies have investigated the impact of different cooking techniques on the quality of vegetables. However, combined air-steam cooking is still scarcely studied, despite the advantages informally reported by professional catering workers. In this study, its optimisation was investigated on Brussels sprouts and pumpkin cubes to obtain the best physical (texture, colour) and antioxidant (FRAP, total phenols) response, in comparison with a conventional steaming treatment. With increasing strength of the air-steam treatment, Brussels sprouts became softer, less green (higher a* value) and richer in phenols, and exhibited lower FRAP values than steamed ones. Air-steamed pumpkin cubes exhibited a softening degree equivalent to that of steamed ones and, under the strongest cooking conditions, a higher antioxidant quality and a yellow darkening (lower b* value). Varying the cooking time and/or temperature produced a linear change in force/compression hardness and a* (negative a*: greenness) for Brussels sprouts, and in b* (yellowness) and total phenol content for pumpkin cubes. A predictive model for these variables was obtained by response surface methodology, and the best process conditions to achieve the optimal desirability were identified: 25 min at 90 °C for Brussels sprouts and 10 min at 110 °C for pumpkin. The application of air-steam cooking under suitable time/temperature conditions could therefore be proposed as an alternative to traditional steam cooking for Brussels sprouts and pumpkin cubes, being able to preserve or improve their quality. © 2017 Society of Chemical Industry.

  6. Study on compressive strength of self compacting mortar cubes under normal & electric oven curing methods

    NASA Astrophysics Data System (ADS)

    Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.

    2017-07-01

    In the majority of civil engineering applications, the basic building blocks are masonry units, developed into a monolithic structure by plastering with binding agents, namely mud, lime, cement, and their combinations. In recent years, the study of mortar has played an important role in crack repair, structural rehabilitation, retrofitting, pointing, and plastering operations. The rheology of mortar includes flowing, passing, and filling properties analogous to the behaviour of self-compacting concrete. In the self-compacting (SC) mortar cubes, cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (in increments of 5%), metakaolin (MK) from 10% to 30% (in increments of 10%), and ground granulated blast furnace slag (GGBS) from 25% to 75% (in increments of 25%). The ratio of cement to fine aggregate was kept constant at 1:2 for all normal and self-compacting mortar mixes. Accelerated curing, namely electric oven curing at a differential temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained under both normal and electric oven curing was higher for self-compacting mortar cubes than for normal mortar cubes, with cement replacement by 15% SF, 20% MK, and 25% GGBS giving the highest strengths under both curing conditions.

  7. Trajectory design for a cislunar CubeSat leveraging dynamical systems techniques: The Lunar IceCube mission

    NASA Astrophysics Data System (ADS)

    Bosanac, Natasha; Cox, Andrew D.; Howell, Kathleen C.; Folta, David C.

    2018-03-01

    Lunar IceCube is a 6U CubeSat that is designed to detect and observe lunar volatiles from a highly inclined orbit. This spacecraft, equipped with a low-thrust engine, is expected to be deployed from the upcoming Exploration Mission-1 vehicle. However, significant uncertainty in the deployment conditions for secondary payloads impacts both the availability and geometry of transfers that deliver the spacecraft to the lunar vicinity. A framework that leverages dynamical systems techniques is applied to a recently updated set of deployment conditions and spacecraft parameter values for the Lunar IceCube mission, demonstrating the capability for rapid trajectory design.

  8. Collapsible Cubes and Other Curiosities.

    ERIC Educational Resources Information Center

    Johnson, Scott; Walser, Hans

    1997-01-01

    Describes some general techniques for making collapsible models, including spiral models, for all the Platonic solids except the cube. Discusses the nature of the dissections of the faces necessary for the construction of the spiral cube. (ASK)

  9. Study on Mechanical Properties of Hybrid Fiber Reinforced Concrete

    NASA Astrophysics Data System (ADS)

    He, Dongqing; Wu, Min; Jie, Pengyu

    2017-12-01

    Several common high-elastic-modulus fibers (steel fibers, basalt fibers, polyvinyl alcohol fibers) and a low-elastic-modulus fiber (polypropylene fiber) were incorporated into concrete, and the cube compressive strength, splitting tensile strength and flexural strength were studied. The test results and analysis demonstrate that both single and hybrid fibers improve the integrity of the concrete at failure. The mechanical properties of hybrid steel fiber-polypropylene fiber reinforced concrete are excellent: its cube compressive strength, splitting tensile strength and flexural strength increase over plain concrete by 6.4%, 3.7% and 11.4%, respectively. Single basalt fiber, or a hybrid of polypropylene and basalt fibers, has little effect on the mechanical properties of concrete. A hybrid of polyvinyl alcohol fiber and polypropylene fiber exhibits a 'negative confounding effect' on concrete: its splitting tensile and flexural strengths are reduced by 17.8% and 12.9%, respectively, compared with single-doped polyvinyl alcohol fiber concrete.

  10. Effects of Cobalt Concentration on the Relative Resistance to Octahedral and Cube Slip in Nickel-Base Superalloys

    NASA Technical Reports Server (NTRS)

    Bobeck, Gene E.; Miner, R. V.

    1988-01-01

    Compression yielding tests were performed at 760 C on crystals of the Ni base superalloys Rene 150 and a modified MAR-M247, both having two different Co concentrations. For both alloy bases, increasing Co concentration was shown to decrease the critical resolved shear stress for octahedral slip, but to have little effect on that for cube slip. The results suggest that decreasing complex stacking fault energy in the gamma-prime with increasing Co could account for the observed effects.

  11. 3D Visualization of Volcanic Ash Dispersion Prediction with Spatial Information Open Platform in Korea

    NASA Astrophysics Data System (ADS)

    Youn, J.; Kim, T.

    2016-06-01

    Visualization of disaster dispersion predictions enables decision makers and civilians to prepare for a disaster and to reduce damage by showing realistic simulation results. With advances in GIS technology and volcanic disaster prediction algorithms, predicted disaster dispersions can be displayed as spatial information. However, most volcanic ash dispersion predictions are displayed in 2D, which limits understanding of the prediction because height can be represented only by colour. For volcanic ash in particular, 3D visualization of the dispersion prediction is essential, since ash clouds can cause serious aircraft accidents. This paper deals with 3D visualization techniques for volcanic ash dispersion predictions on a spatial information open platform in Korea. First, time-series 3D positions and concentrations of volcanic ash are calculated with the WRF (Weather Research and Forecasting) model and a modified Fall3D algorithm. For 3D visualization, we propose three techniques: 'Cube in the Air', 'Cube in the Cube', and 'Semi-transparent Plane in the Air'. The 'Cube in the Air' method places semi-transparent cubes whose colour depends on the particle concentration. A large cube is not realistic when zoomed in, so it is divided into smaller cubes with an octree algorithm; this is the 'Cube in the Cube' method. For more realistic visualization, we apply the 'Semi-transparent Volcanic Ash Plane', which shows the ash as fog. The results are displayed in 'V-world', a spatial information open platform implemented by the Korean government. The proposed techniques were adopted in the Volcanic Disaster Response System implemented by the Korean Ministry of Public Safety and Security.
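The octree subdivision behind 'Cube in the Cube' can be illustrated with a short recursion: a cube is split into eight children while the ash concentration inside it is non-uniform. This is a toy sketch of the idea, not the system's actual code; the uniformity tolerance and the voxel-array layout are assumptions:

```python
import numpy as np

def subdivide(conc, origin, size, depth, max_depth, tol, out):
    # Recursively split a cube of the concentration field into octants
    # until each leaf cube is (nearly) uniform or the depth limit is hit.
    z, y, x = origin
    block = conc[z:z + size, y:y + size, x:x + size]
    if size == 1 or depth == max_depth or block.max() - block.min() <= tol:
        out.append((origin, size, float(block.mean())))  # one rendered cube
        return
    half = size // 2
    for dz in (0, half):
        for dy in (0, half):
            for dx in (0, half):
                subdivide(conc, (z + dz, y + dy, x + dx),
                          half, depth + 1, max_depth, tol, out)

# toy 4x4x4 concentration field: uniform except one hot corner voxel
conc = np.zeros((4, 4, 4))
conc[0, 0, 0] = 1.0
cubes = []
subdivide(conc, (0, 0, 0), 4, 0, 3, 0.01, cubes)
print(len(cubes))  # 15: 7 coarse uniform cubes + 8 fine cubes at the corner
```

Only the non-uniform octant is refined, so the renderer draws many small cubes near the plume and a few large ones elsewhere.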

  12. Comparative study on strength properties of cement mortar by partial replacement of cement with ceramic powder and silica fume

    NASA Astrophysics Data System (ADS)

    Himabindu, Ch.; Geethasri, Ch.; Hari, N.

    2018-05-01

    Cement mortar is a mixture of cement and sand. Using a large amount of cement increases the consumption of natural resources and electric power. To overcome this problem, cement can be partially replaced with other materials such as ceramic powder, silica fume, fly ash, granulated blast furnace slag and metakaolin. In this research, cement was partially replaced with different combinations of ceramic powder and silica fume. Cement mortar cubes of 1:3 grade were prepared and cured under normal water for 7, 14 and 28 days, and a compressive strength test was conducted on all mixes of cement mortar cubes.

  13. PCA Tomography: how to extract information from data cubes

    NASA Astrophysics Data System (ADS)

    Steiner, J. E.; Menezes, R. B.; Ricci, T. V.; Oliveira, A. S.

    2009-05-01

    Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes that combine both techniques simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for analysing data cubes (data from single-field observations, containing two spatial and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we call tomograms. The association of tomograms (images) with eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this property is fundamental to their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge.
Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and SECYT (Argentina). E-mail: steiner@astro.iag.usp.br
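The core of PCA Tomography — reshaping the cube into a (space × wavelength) matrix, diagonalizing its covariance, and reading the projections back as images — can be sketched with an SVD. This is a minimal illustration on synthetic data; the function name and cube dimensions are assumptions, not the authors' code:

```python
import numpy as np

def pca_tomography(cube):
    # cube: (ny, nx, nl) data cube with two spatial and one spectral axis.
    ny, nx, nl = cube.shape
    data = cube.reshape(ny * nx, nl)
    data = data - data.mean(axis=0)          # centre each spectral channel
    # SVD of the centred data: the rows of vt are the eigenspectra
    # (eigenvectors), ordered by decreasing variance.
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    tomograms = (data @ vt.T).T.reshape(nl, ny, nx)  # one image per component
    variance = s**2 / (ny * nx - 1)
    return tomograms, vt, variance

rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 16))           # small synthetic cube
tomos, eigvecs, var = pca_tomography(cube)
print(np.allclose(eigvecs @ eigvecs.T, np.eye(16)))  # prints True
```

The printed check confirms the mutual orthogonality of the eigenvectors noted in the abstract; `var` lists the variance carried by each principal component in decreasing order.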

  14. Compressive strength, flexural strength and water absorption of concrete containing palm oil kernel shell

    NASA Astrophysics Data System (ADS)

    Noor, Nurazuwa Md; Xiang-ONG, Jun; Noh, Hamidun Mohd; Hamid, Noor Azlina Abdul; Kuzaiman, Salsabila; Ali, Adiwijaya

    2017-11-01

    The effect of including palm oil kernel shell (PKS) and palm oil fibre (POF) in concrete was investigated for compressive strength and flexural strength, and the effect of palm oil kernel shell on concrete water absorption was also studied. A total of 48 concrete cubes (100 mm × 100 mm × 100 mm) and 24 concrete prisms (100 mm × 100 mm × 500 mm) were prepared. Four series of concrete mixes had their coarse aggregate replaced by 0%, 25%, 50% and 75% palm kernel shell, and each series was divided into two main groups: the first without POF, and the second mixed with 5 cm lengths of POF at a 0.25% volume fraction. All specimens were tested after 7 and 28 days of water curing for compression, and at 28 days for flexure; the water absorption test was conducted on concrete cubes at 28 days of age. The results showed that replacement with PKS gives lower compressive and flexural strength than conventional concrete; however, the 25% PKS replacement showed acceptable compressive strength, within the range required for structural concrete. Meanwhile, the POF, which should act as matrix reinforcement, showed no enhancement in flexural strength due to a balling effect in the concrete. As expected, water absorption increased with increasing PKS content, caused by the porous characteristics of PKS.

  15. Compression Strength of Sulfur Concrete Subjected to Extreme Cold

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.

    2008-01-01

    Sulfur concrete cubes were cycled between liquid nitrogen and room temperature to simulate extreme exposure conditions. Subsequent compression testing showed the strength of cycled samples to be roughly five times lower than that of non-cycled samples. Fracture surface examination showed de-bonding of the sulfur from the aggregate material in the cycled samples but not in the non-cycled ones. The large discrepancy found between the samples is attributed to the relative thermal properties of the materials constituting the concrete.

  16. Active CryoCubeSat

    NASA Technical Reports Server (NTRS)

    Swenson, Charles

    2016-01-01

    The Active CryoCubeSat project will demonstrate an advanced thermal control system for a 6-Unit (6U) CubeSat platform. A miniature, active thermal control system, in which a fluid is circulated in a closed loop from thermal loads to radiators, will be developed. A miniature cryogenic cooler will be integrated with this system to form a two-stage thermal control system. Key components will be miniaturized by using advanced additive manufacturing techniques resulting in a thermal testbed for proving out these technologies. Previous CubeSat missions have not tackled the problem of active thermal control systems nor have any past or current CubeSat missions included cryogenic instrumentation. This Active CryoCubeSat development effort will provide completely new capacities for CubeSats and constitutes a major advancement over the state-of-the-art in CubeSat thermal control.

  17. Intrinsic spatial resolution evaluation of the X'tal cube PET detector based on a 3D crystal block segmented by laser processing.

    PubMed

    Yoshida, Eiji; Tashima, Hideaki; Inadama, Naoko; Nishikido, Fumihiko; Moriya, Takahiro; Omura, Tomohide; Watanabe, Mitsuo; Murayama, Hideo; Yamaya, Taiga

    2013-01-01

    The X'tal cube is a depth-of-interaction (DOI)-PET detector which is aimed at obtaining isotropic resolution by effective readout of scintillation photons from the six sides of a crystal block. The X'tal cube is composed of the 3D crystal block with isotropic resolution and arrays of multi-pixel photon counters (MPPCs). In this study, to fabricate the 3D crystal block efficiently and precisely, we applied a sub-surface laser engraving (SSLE) technique to a monolithic crystal block instead of gluing segmented small crystals. The SSLE technique provided micro-crack walls which carve a groove into a monolithic scintillator block. Using the fabricated X'tal cube, we evaluated its intrinsic spatial resolution to show a proof of concept of isotropic resolution. The 3D grids of 2 mm pitch were fabricated into an 18 × 18 × 18 mm(3) monolithic lutetium yttrium orthosilicate (LYSO) crystal by the SSLE technique. 4 × 4 MPPCs were optically coupled to each surface of the crystal block. The X'tal cube was uniformly irradiated by (22)Na gamma rays, and all of the 3D grids on the 3D position histogram were separated clearly by an Anger-type calculation from the 96-channel MPPC signals. Response functions of the X'tal cube were measured by scanning with a (22)Na point source. The gamma-ray beam with a 1.0 mm slit was scanned in 0.25 mm steps by positioning of the X'tal cube at vertical and 45° incident angles. The average FWHM resolution at both incident angles was 2.1 mm. Therefore, we confirmed the isotropic spatial resolution performance of the X'tal cube.
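The Anger-type calculation mentioned above amounts to a signal-weighted centroid of the photosensor positions. A one-dimensional toy sketch follows; the 96-channel geometry of the actual detector is omitted, and the positions and signals here are made up for illustration:

```python
import numpy as np

def anger_position(signals, positions):
    # Anger-type centroid: the interaction position is estimated as the
    # signal-weighted mean of the photosensor positions.
    signals = np.asarray(signals, dtype=float)
    positions = np.asarray(positions, dtype=float)
    return (signals[:, None] * positions).sum(axis=0) / signals.sum()

# toy example: four sensors on a line (mm), light shared between two of them
positions = np.array([[0.0], [6.0], [12.0], [18.0]])
signals = np.array([0.0, 3.0, 1.0, 0.0])
print(anger_position(signals, positions))  # → [7.5], weighted toward 6 mm
```

In the real detector the same centroid is computed in 3D over the MPPC arrays on all six faces, and the continuous estimate is then binned to the nearest grid of the 3D position histogram.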

  18. A study on polypropylene encapsulation and solidification of textile sludge.

    PubMed

    Kumari, V Krishna; Kanmani, S

    2011-10-01

    Textile sludge is an inevitable solid waste from textile wastewater treatment and is categorised as a toxic substance by statutory authorities. In this study, an attempt has been made to encapsulate and solidify heavy metals and dyes present in textile sludge using polypropylene and Portland cement. Two sludge samples were characterized for pH (8.5, 9.5), moisture content (1.5%, 1.96%) and chlorides (245 mg/L, 425.4 mg/L). The sludge samples were encapsulated in polypropylene with calcium carbonate as an additive and solidified with cement at four different sludge proportions (20%, 30%, 40%, 50%). Encapsulated and solidified cubes were cast and tested for compressive strength. The maximum compressive strength of cubes (size 7.06 cm) containing 50% sludge, for encapsulation (16.72 N/mm2) and solidification (18.84 N/mm2), was higher than that of standard M15 mortar cubes. The leachability of copper, nickel and chromium was effectively reduced from 0.58 mg/L, 0.53 mg/L and 0.07 mg/L to 0.28 mg/L, 0.26 mg/L and below detection limit (BDL), respectively, in encapsulated products, and to 0.24 mg/L, BDL and BDL, respectively, in solidified products. This study has shown that the solidification process is slightly more effective than the encapsulation process. Both products were recommended for use in the construction of non-load-bearing walls.

  19. Novel Technique for Hepatic Fiducial Marker Placement for Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarraya, Hajer, E-mail: h-jarraya@o-lambret.fr; Chalayer, Chloé; Tresch, Emmanuelle

    2014-09-01

    Purpose: To report experience with fiducial marker insertion and describe an advantageous, novel technique for fiducial placement in the liver for stereotactic body radiation therapy with respiratory tracking. Methods and Materials: We implanted 1444 fiducials (single: 834; linked: 610) in 328 patients with 424 hepatic lesions. Two methods of implantation were compared: the standard method (631 single fiducials), performed on 153 patients from May 2007 to May 2010, and the cube method (813 fiducials: 610 linked/203 single), applied to 175 patients from April 2010 to March 2013. The standard method involved implanting a single marker at a time. The novel technique entailed implanting 2 pairs of linked markers, when possible, in a way that occupies the perpendicular edges of a cube containing the tumor. Results: Mean duration of the cube method was shorter than the standard method (46 vs 61 minutes; P<.0001). Median numbers of skin and subcapsular entries were significantly smaller with the cube method (2 vs 4, P<.0001, and 2 vs 4, P<.0001, respectively). The rate of overall complications (total, major, and minor) was significantly lower in the cube method group than in the standard method group (5.7% vs 13.7%; P=.013). Major complications occurred with single markers only. The success rate was 98.9% for the cube method and 99.3% for the standard method. Conclusions: We propose a new technique of hepatic fiducial implantation that makes use of linked fiducials and involves fewer skin entries and a shorter implantation time. The technique is less complication-prone and is migration-resistant.

  20. Radiation Hardening by Software Techniques on FPGAs: Flight Experiment Evaluation and Results

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Flatley, Thomas

    2017-01-01

    We present our work on implementing Radiation Hardening by Software (RHBSW) techniques on the Xilinx Virtex5 FPGAs PowerPC 440 processors on the SpaceCube 2.0 platform. The techniques have been matured and tested through simulation modeling, fault emulation, laser fault injection and now in a flight experiment, as part of the Space Test Program- Houston 4-ISS SpaceCube Experiment 2.0 (STP-H4-ISE 2.0). This work leverages concepts such as heartbeat monitoring, control flow assertions, and checkpointing, commonly used in the High Performance Computing industry, and adapts them for use in remote sensing embedded systems. These techniques are extremely low overhead (typically <1.3%), enabling a 3.3x gain in processing performance as compared to the equivalent traditionally radiation hardened processor. The recently concluded STP-H4 flight experiment was an opportunity to upgrade the RHBSW techniques for the Virtex5 FPGA and demonstrate them on-board the ISS to achieve TRL 7. This work details the implementation of the RHBSW techniques, that were previously developed for the Virtex4-based SpaceCube 1.0 platform, on the Virtex5-based SpaceCube 2.0 flight platform. The evaluation spans the development and integration with flight software, remotely uploading the new experiment to the ISS SpaceCube 2.0 platform, and conducting the experiment continuously for 16 days before the platform was decommissioned. The experiment was conducted on two PowerPCs embedded within the Virtex5 FPGA devices and the experiment collected 19,400 checkpoints, processed 253,482 status messages, and incurred 0 faults. These results are highly encouraging and future work is looking into longer duration testing as part of the STP-H5 flight experiment.
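Two of the RHBSW ideas named above, checkpointing with rollback and heartbeat monitoring, can be sketched in a few lines. This is a generic illustration of the concepts, not the SpaceCube flight software; the state layout and the simulated fault are assumptions:

```python
import copy
import time

class CheckpointedTask:
    # Minimal sketch: periodically checkpoint application state, stamp a
    # heartbeat each step, and roll back to the last known-good state when
    # a fault is detected.
    def __init__(self):
        self.state = {"iteration": 0, "accum": 0}
        self.checkpoint = copy.deepcopy(self.state)
        self.last_heartbeat = time.monotonic()

    def step(self):
        self.state["iteration"] += 1
        self.state["accum"] += self.state["iteration"]
        self.last_heartbeat = time.monotonic()   # heartbeat on each step

    def save_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        # on a detected fault, restore the last known-good state
        self.state = copy.deepcopy(self.checkpoint)

task = CheckpointedTask()
for _ in range(5):
    task.step()
task.save_checkpoint()            # known-good at iteration 5
task.step()
task.state["accum"] = -999        # simulate a radiation-induced upset
task.rollback()
print(task.state["iteration"], task.state["accum"])  # 5 15
```

A supervisor process would watch `last_heartbeat` and trigger `rollback()` (or a processor reset) when the heartbeat interval exceeds a deadline; control-flow assertions add a third, finer-grained check inside each step.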

  1. Trajectory Design for a Cislunar Cubesat Leveraging Dynamical Systems Techniques: The Lunar Icecube Mission

    NASA Technical Reports Server (NTRS)

    Bosanac, Natasha; Cox, Andrew; Howell, Kathleen C.; Folta, David C.

    2017-01-01

    Lunar IceCube is a 6U CubeSat that is designed to detect and observe lunar volatiles from a highly inclined orbit. This spacecraft, equipped with a low-thrust engine, will be deployed from the upcoming Exploration Mission-1 vehicle in late 2018. However, significant uncertainty in the deployment conditions for secondary payloads impacts both the availability and geometry of transfers that deliver the spacecraft to the lunar vicinity. A framework that leverages dynamical systems techniques is applied to a recently updated set of deployment conditions and spacecraft parameter values for the Lunar IceCube mission, demonstrating the capability for rapid trajectory design.

  2. Calcite-forming bacteria for compressive strength improvement in mortar.

    PubMed

    Park, Sung-Jin; Park, Yu-Mi; Chun, Woo-Young; Kim, Wha-Jung; Ghim, Sa-Youl

    2010-04-01

    Microbiological calcium carbonate precipitation (MCP) has been investigated for its ability to improve the compressive strength of concrete mortar. However, very few studies have been conducted on the use of calcite-forming bacteria (CFB) to improve compressive strength. In this study, we discovered new bacterial genera that are capable of improving the compressive strength of concrete mortar. We isolated 4 CFB from 7 environmental concrete structures. Using sequence analysis of the 16S rRNA genes, the CFB could be partially identified as Sporosarcina soli KNUC401, Bacillus massiliensis KNUC402, Arthrobacter crystallopoietes KNUC403, and Lysinibacillus fusiformis KNUC404. Crystal aggregates were apparent in the bacterial colonies grown on an agar medium. Stereomicroscopy, scanning electron microscopy, and x-ray diffraction analyses illustrated both the crystal growth and the crystalline structure of the CaCO3 crystals. We used the isolates to improve the compressive strength of concrete mortar cubes and found that KNUC403 offered the best improvement in compressive strength.

  3. Selection of nutrient used in biogenic healing agent for cementitious materials

    NASA Astrophysics Data System (ADS)

    Tziviloglou, Eirini; Wiktor, Virginie; Jonkers, Henk M.; Schlangen, Erik

    2017-06-01

    Biogenic self-healing cementitious materials target the closure of micro-cracks with precipitated inorganic minerals originating from bacterial metabolic activity. Dormant bacterial spores and organic mineral compounds often constitute a biogenic healing agent. The current paper focuses on identifying the most appropriate organic carbon source to be used as a component of a biogenic healing agent. The choice of organic source is important: it must both ensure optimal bacterial performance in terms of metabolic activity and minimally affect the properties of the cementitious matrix. The selection is made among three organic compounds, namely calcium lactate, calcium acetate and sodium gluconate. The methodology was based on continuous and non-continuous oxygen consumption measurements of washed bacterial cultures and on compressive strength tests on mortar cubes. The oxygen consumption investigation revealed a preference for calcium lactate and acetate, but indifferent behaviour toward sodium gluconate. The compressive strength of mortar cubes with different amounts of either calcium lactate or acetate (up to 2.24% per cement weight) was unaffected or positively affected when the compounds were dissolved in the mixing water: for calcium lactate the increase in compressive strength reached 8%, while for calcium acetate the maximum strength increase was 13.4%.

  4. Orientation and temperature dependence of some mechanical properties of the single-crystal nickel-base superalloy Rene N4. 3: Tension-compression anisotropy

    NASA Technical Reports Server (NTRS)

    Miner, R. V.; Gaab, T. P.; Gayda, J.; Hemker, K. J.

    1985-01-01

    Single-crystal superalloy specimens with various crystallographic directions along their axes were tested in compression at room temperature, 650, 760, 870, and 980 °C, and the results are compared with the tensile behavior studied previously. The alloy, Rene N4, was developed for gas turbine engine blades and has the nominal composition 3.7 Al, 4.2 Ti, 4 Ta, 0.5 Nb, 6 W, 1.5 Mo, 9 Cr, 7.5 Co, balance Ni, in weight percent. Slip trace analysis showed that primary cube slip occurred even at room temperature for the [111] specimens. With increasing test temperature more orientations exhibited primary cube slip, until at 870 °C only the [100] and [011] specimens exhibited normal octahedral slip. The yield strength for octahedral slip was numerically analysed using a model proposed by Lall, Chin, and Pope to explain deviations from Schmid's law in the yielding behavior of a single-phase gamma-prime alloy, Ni3(Al, Nb). The Schmid's law deviations in Rene N4 were found to be largely due to a tension-compression anisotropy; a second effect, which increases strength for orientations away from [001], was found to be small in Rene N4. Analysis of recently published data on the single-crystal superalloy PWA 1480 yielded the same result.
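The orientation dependence of slip in these abstracts follows from Schmid's law, tau = sigma cos(phi) cos(lambda). A short sketch of the Schmid factor shows why cube slip {001}<110> carries no resolved shear stress under [001] loading but is well stressed for [111]-oriented specimens; this illustrates the standard relation only, not the Lall-Chin-Pope analysis used in the paper:

```python
import numpy as np

def schmid_factor(load_dir, slip_plane, slip_dir):
    # m = cos(phi) * cos(lambda); resolved shear stress tau = m * sigma.
    l = np.asarray(load_dir, float); l = l / np.linalg.norm(l)
    n = np.asarray(slip_plane, float); n = n / np.linalg.norm(n)
    d = np.asarray(slip_dir, float); d = d / np.linalg.norm(d)
    return abs(l @ n) * abs(l @ d)

# octahedral slip (111)[10-1] under [001] loading
print(round(schmid_factor([0, 0, 1], [1, 1, 1], [1, 0, -1]), 3))  # 0.408
# cube slip (001)[110] under [001] loading: slip direction is normal to load
print(round(schmid_factor([0, 0, 1], [0, 0, 1], [1, 1, 0]), 3))   # 0.0
# cube slip (001)[110] under [111] loading: well stressed
print(round(schmid_factor([1, 1, 1], [0, 0, 1], [1, 1, 0]), 3))   # 0.471
```

This is consistent with the observation that [111] specimens exhibit primary cube slip while [100]-type orientations retain octahedral slip.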

  5. The Turn the Tables Technique (T[cube]): A Program Activity to Provide Group Facilitators Insight into Teen Sexual Behaviors and Beliefs

    ERIC Educational Resources Information Center

    Sclafane, Jamie Heather; Merves, Marni Loiacono; Rivera, Angelic; Long, Laura; Wilson, Ken; Bauman, Laurie J.

    2012-01-01

    The Turn the Tables Technique (T[cube]) is an activity designed to provide group facilitators who lead HIV/STI prevention and sexual health promotion programs with detailed and current information on teenagers' sexual behaviors and beliefs. This information can be used throughout a program to tailor content. Included is a detailed lesson plan of…

  6. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, and polarization. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate; three-dimensional cubes spanning just the space domain are referred to as depth volumes; and imaging cubes varying in time, spectrum or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in several domains simultaneously, giving rise to 3D spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors; capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, to date, multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, in a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization problem to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS).
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures either block or transmit the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In its first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture-based CSI architectures. On another front, owing to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain; it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways.
With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer/robotic vision because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
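
The acquisition-and-recovery loop described above — linear random projections followed by an optimization-based estimate — can be sketched in a toy 1D setting. This is a generic ISTA (iterative soft-thresholding) solver on synthetic data, an illustration of the CS principle rather than the thesis' actual reconstruction code:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the data-fit term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2.0
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                 # compressive measurements (~37% of n)

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

Despite recording far fewer than n samples, the sparse signal is recovered with small error, which is the same trade-off the CSI systems above exploit along the spectral axis.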

  7. Space-Based Three-Dimensional Imaging of Equatorial Plasma Bubbles: Advancing the Understanding of Ionospheric Density Depletions and Scintillation

    DTIC Science & Technology

    2012-03-28

    Comberiate, Joseph M.

    A tomographic reconstruction technique was modified and applied to SSUSI data to reconstruct three-dimensional cubes of ionospheric electron density. These data cubes allowed for 3-D imaging of equatorial plasma bubbles and support studies of bubble climatology.

  8. Exploratory Research on Bearing Characteristics of Confined Stabilized Soil

    NASA Astrophysics Data System (ADS)

    Wu, Shuai Shuai; Gao, Zheng Guo; Li, Shi Yang; Cui, Wen Bo; Huang, Xin

    2018-06-01

    The performance of a new kind of confined stabilized soil (CSS) was investigated. The CSS was constructed by filling stabilized soil, made by mixing soil with a binder containing a high content of expansive component, into an engineering plastic pipe. The cube compressive strength of the stabilized soil formed under constraint, and the axial compression performance of stabilized soil cylinders confined by the constraint pipe, were measured. The results indicated that combining the constraint pipe with the expansive binder achieves two effects: a higher production of expansive hydrates can be adopted, filling more voids in the stabilized soil and improving its strength; at the same time, a compressive prestress builds up in the core stabilized soil, while the hoop constraint provides an effective radial compressive force on it. These effects gave the CSS a plastic failure mode and more than twice the bearing capacity of ordinary stabilized soil with the same binder content.

  9. Strength of mortar containing rubber tire particle

    NASA Astrophysics Data System (ADS)

    Jusoh, M. A.; Abdullah, S. R.; Adnan, S. H.

    2018-04-01

    The main focus of this investigation is to determine the compressive and flexural strength of mortar containing rubber tire particles. Previous studies have reported that the strength of mortar containing waste rubber tire decreases slightly compared to normal mortar. In this study, rubber tire particles replaced fine aggregate by volume at 6%, 9% and 12%. The samples were designated M0 (0%), M6 (6%), M9 (9%) and M12 (12%). Two specimen sizes were used: 100 mm × 100 mm × 100 mm cubes for compressive strength and 40 mm × 40 mm × 160 mm prisms for flexural strength. Morphology was examined by scanning electron microscopy (SEM) after the compressive strength test. The samples were cured for 3, 7 and 28 days before testing. The compressive and flexural strength results of the rubber mortar showed improvement compared to normal mortar.
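
The cube compressive strength reported in studies like this one is simply the failure load divided by the loaded cross-section. A minimal sketch (the load below is a made-up illustrative number, not data from this paper):

```python
def cube_compressive_strength(failure_load_kn, side_mm):
    """Compressive strength in MPa (N/mm^2) from a cube crushing test."""
    area_mm2 = side_mm * side_mm
    return failure_load_kn * 1000.0 / area_mm2   # kN -> N, then N/mm^2 = MPa

# A 100 mm cube failing at 320 kN gives 32.0 MPa
print(cube_compressive_strength(320.0, 100.0))
```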

  10. Strength development of pervious concrete containing engineered biomass aggregate

    NASA Astrophysics Data System (ADS)

    Sharif, A. A. M.; Shahidan, S.; Koh, H. B.; Kandash, A.; Zuki, S. S. Mohd

    2017-11-01

    Pervious concrete, which has high porosity, good permeability and low mechanical strength, is commonly used in stormwater management. Unlike normal concrete, it contains only a single size of coarse aggregate and has a lower density. This study focused on the effect of Engineered Biomass Aggregate (EBA) on the compressive strength, void ratio and water permeability of pervious concrete. EBA was prepared by coating biomass aggregate with epoxy resin, and was used to replace natural coarse aggregate at levels ranging from 0% to 25%. 150 mm cube specimens were prepared to study compressive strength, void ratio and water permeability. Compressive strength was tested at 7, 14 and 28 days, while the void ratio and permeability tests were carried out at 28 days. The experimental results showed that pervious concrete containing EBA gained lower compressive strength, which was reduced gradually as the percentage of EBA increased. Overall, pervious concrete containing EBA achieved a higher void ratio and permeability.

  11. An evaluation of space time cube representation of spatiotemporal patterns.

    PubMed

    Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine

    2009-01-01

    Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either a space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions, where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in response times that were on average twice as fast, with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.
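
The mapping at the core of a space time cube — placing each (x, y, t) event as a point in a 3D volume — can be sketched by voxel binning. The data here are synthetic, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 1000
x = rng.uniform(0, 10, n_events)       # spatial coordinate 1
y = rng.uniform(0, 10, n_events)       # spatial coordinate 2
t = rng.uniform(0, 24, n_events)       # time of day, hours

# Bin the events into an 8x8x8 voxel cube: two spatial axes plus one time axis.
cube, edges = np.histogramdd(
    np.column_stack([x, y, t]),
    bins=(8, 8, 8),
    range=((0, 10), (0, 10), (0, 24)),
)
print(cube.shape)        # (8, 8, 8)
print(cube.sum())        # 1000.0 -- every event lands in exactly one voxel
```

Slicing the cube along the time axis recovers the baseline 2D representation used in the experiment, which is why the two conditions display the same underlying data.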

  12. High-order central ENO finite-volume scheme for hyperbolic conservation laws on three-dimensional cubed-sphere grids

    NASA Astrophysics Data System (ADS)

    Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.

    2015-02-01

    A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. 
Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.
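
The K-exact, overdetermined least-squares reconstruction at the heart of the CENO scheme can be illustrated in a 1D sketch (a deliberate simplification of the paper's 3D hexahedral-grid implementation): fit a degree-2 polynomial to cell averages on a 5-cell stencil so that any quadratic field is reproduced exactly.

```python
import numpy as np

h = 0.1
centers = h * np.arange(-2, 3)                 # 5-cell stencil around cell 0

def avg_basis(xc):
    """Exact cell averages of the basis 1, x, x^2 over [xc - h/2, xc + h/2]."""
    return np.array([1.0, xc, xc**2 + h**2 / 12.0])

# 5 equations (one per stencil cell) for 3 unknowns: an overdetermined system.
M = np.vstack([avg_basis(xc) for xc in centers])

# Cell averages of the quadratic test field u(x) = 3 - 2x + 5x^2.
u_avg = M @ np.array([3.0, -2.0, 5.0])

coef, *_ = np.linalg.lstsq(M, u_avg, rcond=None)   # least-squares reconstruction
print(coef)    # recovers [3, -2, 5] up to round-off: K-exactness for K = 2
```

Because the least-squares fit reproduces every polynomial up to the design degree exactly, the truncation error is controlled by the stencil geometry alone, which is what lets the scheme stay high-order even where the cubed-sphere grid lines kink at sector boundaries.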

  13. Foamed concrete containing rice husk ash as sand replacement: an experimental study on compressive strength

    NASA Astrophysics Data System (ADS)

    Rum, R. H. M.; Jaini, Z. M.; Boon, K. H.; Khairaddin, S. A. A.; Rahman, N. A.

    2017-11-01

    This study presents the utilization of rice husk ash (RHA) as a sand replacement in foamed concrete, focusing on its effect on compressive strength. RHA is highly pozzolanic and reacts with cementitious compounds to enhance the strength and durability of foamed concrete. RHA also acts as a filler, causing the foamed concrete to become denser while retaining its characteristic low density. A total of 243 cube specimens were prepared for the compression test. Two sets of mix designs were employed, at water-cement (W/C) ratios of 0.55 and 0.60 and cement-sand ratios of 0.50 and 0.33. The results revealed that using RHA as a sand replacement increased the compressive strength of foamed concrete. The optimum content was 30% to 40% RHA, contributing compressive strengths of 18.1 MPa to 22.4 MPa. The W/C ratio and superplasticiser dosage play only small roles in improving workability; instead, density governs the compressive strength of foamed concrete.

  14. Study on potential of carbon dioxide absorption in reinforced concrete beams

    NASA Astrophysics Data System (ADS)

    Bambroo, Vibhas; Gupta, Shipali; Bhoite, Pratik; Sekar, S. K.

    2017-11-01

    Global greenhouse gas emissions keep increasing, with the cement industry alone contributing 5%. Enormous amounts of water are also required for curing concrete in the construction industry, water that could otherwise serve other purposes. Accelerated carbonation curing offers an effective way to reduce these emissions by sequestering CO2 in concrete elements. In this research, the effect of accelerated carbonation curing was evaluated on non-reinforced concrete elements (cubes) and reinforced concrete elements (prisms). Cubes of 100 mm × 100 mm × 100 mm and prisms of 150 mm × 150 mm × 1200 mm were cast, CO2-cured for 4 and 8 hours, and tested for compressive strength and flexural strength. The CO2 curing results showed 27.7% and 1.8% increases in the strength of the cubes and prisms, respectively, compared to water-cured specimens. This early-age strength gained through a waste gas is beneficial both for reducing atmospheric pollution and for saving water, a critical resource nowadays.

  15. Picometer Level Modeling of a Shared Vertex Double Corner Cube in the Space Interferometry Mission Kite Testbed

    NASA Technical Reports Server (NTRS)

    Kuan, Gary M.; Dekens, Frank G.

    2006-01-01

    The Space Interferometry Mission (SIM) is a microarcsecond interferometric space telescope that requires picometer level precision measurements of its truss and interferometer baselines. Single-gauge metrology errors due to non-ideal physical characteristics of corner cubes reduce the angular measurement capability of the science instrument. Specifically, the non-common vertex error (NCVE) of a shared vertex, double corner cube introduces micrometer level single-gauge errors in addition to errors due to dihedral angles and reflection phase shifts. A modified SIM Kite Testbed containing an articulating double corner cube is modeled and the results are compared to the experimental testbed data. The results confirm modeling capability and viability of calibration techniques.

  16. Landsat 8 Data Modeled as DGGS Data Cubes

    NASA Astrophysics Data System (ADS)

    Sherlock, M. J.; Tripathi, G.; Samavati, F.

    2016-12-01

    In the context of tracking recent global changes in the Earth's landscape, Landsat 8 provides high-resolution multi-wavelength data with a temporal resolution of sixteen days. Such a live dataset can benefit novel applications in environmental monitoring. However, a temporal analysis of this dataset in its native format is a challenging task, mostly due to the huge volume of geospatial images and the imperfect overlay of Landsat 8 images from different days. We propose the creation of data cubes derived from Landsat 8 data through the use of a Discrete Global Grid System (DGGS). DGGS referencing of Landsat 8 data provides a cell-based representation of the pixel values for a fixed area on Earth, indexed by keys. Having the calibrated cell-based Landsat 8 images can speed up temporal analysis and facilitate parallel processing using distributed systems. In our method, the Landsat 8 dataset hosted on Amazon Web Services (AWS) is downloaded using a web crawler and stored on a filesystem. We apply cell-based DGGS referencing (using the Pyxis SDK) to Landsat 8 images, which provides a rhombus-based tessellation of equal-area cells for our use case. After this step, the cell-images, which overlay perfectly across different days, are stacked in the temporal dimension and stored in data cube units. The depth of the cube represents the number of temporal images of the same cell and can be updated when new images are received each day. Harnessing the regular spatio-temporal structure of data cubes, we want to compress, query, transmit and visualize big Landsat 8 data in an efficient way for temporal analysis.
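
The per-cell temporal stacking described above can be sketched as follows. The cell key and raster sizes are made up, and the Pyxis SDK referencing step is omitted; this only illustrates the data-cube layout:

```python
import numpy as np

class CellDataCube:
    """Per-DGGS-cell stacks of same-footprint rasters, grown along time."""

    def __init__(self):
        self.layers = {}                      # cell key -> list of 2D arrays

    def add_scene(self, cell_key, raster):
        """Append one acquisition date's calibrated cell-image to the stack."""
        stack = self.layers.setdefault(cell_key, [])
        if stack and stack[0].shape != raster.shape:
            raise ValueError("cell-images must overlay perfectly")
        stack.append(raster)

    def cube(self, cell_key):
        """Return the (depth, rows, cols) temporal cube for one cell."""
        return np.stack(self.layers[cell_key])

dc = CellDataCube()
for day in range(3):                          # three acquisition dates
    dc.add_scene("cell-0042", np.full((4, 4), float(day)))

print(dc.cube("cell-0042").shape)             # (3, 4, 4): depth = number of dates
```

Because every layer for a cell shares the same footprint, a temporal query reduces to indexing along the first axis, which is what makes the per-cell cubes convenient for parallel, distributed processing.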

  17. The orthotropic elastic properties of fibrolamellar bone tissue in juvenile white-tailed deer femora

    PubMed Central

    Barrera, John W.; Le Cabec, Adeline; Barak, Meir M.

    2017-01-01

    Fibrolamellar bone is a transient primary bone tissue found in fast-growing juvenile mammals, several species of birds and large dinosaurs. Despite the fact that this bone tissue is prevalent in many species, the vast majority of bone structural and mechanical studies focus on human osteonal bone tissue. Previous research revealed the orthotropic structure of fibrolamellar bone, but only a handful of experiments have investigated its elastic properties, mostly in the axial direction. Here we have performed, for the first time, an extensive biomechanical study to determine the elastic properties of fibrolamellar bone in all three orthogonal directions. We tested 30 fibrolamellar bone cubes (2×2×2 mm) from the femora of five juvenile white-tailed deer (Odocoileus virginianus) in compression. Each bone cube was compressed iteratively, within its elastic region, in the axial, transverse and radial directions, and bone stiffness (Young's modulus) was recorded. Next, the cubes were kept for seven days at 4°C and then compressed again to test whether bone stiffness had significantly deteriorated. Our results demonstrated that bone tissue in the deer femora has orthotropic elastic behavior, with the highest stiffness in the axial direction followed by the transverse and radial directions (21.6±3.3 GPa, 17.6±3.0 GPa and 14.9±1.9 GPa, respectively). Our results also revealed a slight, non-significant decrease in bone stiffness after seven days. Finally, our sample size allowed us to establish that population variance was much larger in the axial direction than in the radial direction, which potentially reflects bone adaptation to the large diversity in loading activity between individuals in the loading (axial) direction compared to the normal (radial) direction.
This study confirms that the mechanically well-studied, transverse-isotropic human osteonal bone is just one possible functional adaptation of bone tissue, and that other vertebrate species use an orthotropic bone tissue structure more suitable for their mechanical requirements. PMID:27231028
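
How a stiffness value such as 21.6 GPa is obtained from such a test: the Young's modulus is the slope of the linear (elastic) region of the stress-strain curve. A sketch with synthetic load-displacement data for a 2 × 2 × 2 mm cube (the numbers are illustrative, not the study's measurements):

```python
import numpy as np

side_mm = 2.0
area_mm2 = side_mm ** 2                       # loaded cross-section
length_mm = side_mm                           # gauge length of the cube

load_n = np.linspace(0, 400, 21)              # applied load, N
# Synthetic displacements from an ideal 20 GPa (20000 MPa) linear response.
disp_mm = load_n * length_mm / (area_mm2 * 20000.0)

stress_mpa = load_n / area_mm2                # N / mm^2 = MPa
strain = disp_mm / length_mm                  # dimensionless

E_mpa = np.polyfit(strain, stress_mpa, 1)[0]  # slope of stress vs strain
print(E_mpa / 1000.0)                          # 20.0 GPa, recovering the input
```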

  18. Measuring Attending Behavior and Short-Term Memory with Knox's Cube Test.

    ERIC Educational Resources Information Center

    Stone, Mark H.; Wright, Benjamin D.

    1983-01-01

    A new revision was developed using Rasch psychometric techniques to build a Knox's Cube Test (KCT) variable and item bank using the tapping series from all previous editions. The report forms developed give a clear picture of the subject's performance, set in a context that is both normative and criterion-referenced. (Author/BW)

  19. The cyclic stress-strain behavior of a nickel-base superalloy at 650 C

    NASA Technical Reports Server (NTRS)

    Gabb, T. P.; Welsch, G. E.

    1986-01-01

    It is pointed out that examinations of the monotonic tensile and fatigue behaviors of single crystal nickel-base superalloys have disclosed orientation-dependent tension-compression anisotropies and significant differences in the mechanical response of octahedral and cube slip at intermediate temperatures. An examination is conducted of the cyclic hardening response of the single crystal superalloy PWA 1480 at 650 C. In the considered case, tension-compression anisotropy is present, taking into account primarily conditions under which a single slip system is operative. Aspects of a deformation by single slip are considered along with cyclic hardening anisotropy in tension and compression. It is found that specimens deforming by octahedral slip on a single slip system have similar hardening responses in tensile and low cycle fatigue loading. Cyclic strain hardening is very low for specimens displaying single slip.

  20. Effects of heating durations on normal concrete residual properties: compressive strength and mass loss

    NASA Astrophysics Data System (ADS)

    Nazri, Fadzli Mohamed; Shahidan, Shahiron; Khaida Baharuddin, Nur; Beddu, Salmia; Hisyam Abu Bakar, Badorul

    2017-11-01

    This study investigates the effects of high temperature, with five different heating durations, on the residual properties of 30 MPa normal concrete. Concrete cubes were heated up to 600°C, and the temperature was then held constant for 30, 60, 90, 120 and 150 minutes; the heating followed the ISO 834 standard temperature-time curve. After heating, the specimens were left to cool in the furnace and then removed. After cooling to ambient temperature, the residual mass and residual compressive strength were measured. The results show that the compressive strength of concrete decreases as the heating duration increases. This influence can be attributed to the loss of free water and the decomposition of hydration products in the concrete. As the heating duration increases, the amount of water evaporated also increases, leading to a loss in concrete mass. In conclusion, the percentage losses of mass and compressive strength increased as the heating duration increased.

  1. Compressive Strength and Modulus of Elasticity of Concrete with Cubed Waste Tire Rubbers as Coarse Aggregates

    NASA Astrophysics Data System (ADS)

    Haryanto, Y.; Hermanto, N. I. S.; Pamudji, G.; Wardana, K. P.

    2017-11-01

    One feasible solution to the problem of waste tire disposal is the use of waste tire rubber to replace aggregate in concrete. We have conducted an experimental investigation of the effect of rubber tire waste aggregate in cuboid form on the compressive strength and modulus of elasticity of concrete. The test was performed on 72 cylindrical specimens with a height of 300 mm and a diameter of 150 mm. We found that the workability of concrete with waste tire rubber aggregate increased, while its density decreased, and so did its compressive strength, by up to 64.34%. If the content of waste tire rubber aggregate exceeds 40%, the resulting concrete cannot be categorized as structural concrete. The modulus of elasticity decreased by up to 59.77%. The theoretical equation developed to determine the modulus of elasticity of concrete with rubber tire waste aggregate has an accuracy of 84.27%.
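
The paper's own predictive equation is not reproduced in the abstract. As a point of reference, the conventional baseline such an equation would be compared against is the ACI 318 relation for normal-weight concrete, Ec = 4700·√fc′ in MPa:

```python
import math

def aci_modulus_of_elasticity(fc_mpa):
    """ACI 318 estimate of concrete elastic modulus (MPa) from compressive strength fc' (MPa)."""
    return 4700.0 * math.sqrt(fc_mpa)

print(round(aci_modulus_of_elasticity(25.0)))   # 23500 MPa for fc' = 25 MPa
```

A rubberized-concrete equation like the one in this study would modify such a baseline to account for the reduced strength and density of the rubber mix.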

  2. SEM technique for displaying the three-dimensional structure of wood

    Treesearch

    C.W. McMillin

    1977-01-01

    Samples of green Liriodendron tulipifera L. were bandsawed into 1/4-inch cubes and boiled in water for 1 hour. Smooth intersecting radial, tangential, and transverse surfaces were prepared with a handheld, single-edge razor blade. After drying, the cubes were affixed to stubs so that the intersection point of the three sectioned surfaces was...

  4. Using Additive Manufacturing to Print a CubeSat Propulsion System

    NASA Technical Reports Server (NTRS)

    Marshall, William M.; Zemba, Michael; Shemelya, Corey; Wicker, Ryan; Espalin, David; MacDonald, Eric; Keif, Craig; Kwas, Andrew

    2015-01-01

    Small satellites, such as CubeSats, are increasingly being called upon to perform missions traditionally ascribed to larger satellite systems. However, the market of components and hardware for small satellites, particularly CubeSats, still falls short of providing the necessary capabilities required by ever-increasing mission demands. One way to overcome this shortfall is to develop the ability to customize every build. By utilizing fabrication methods such as additive manufacturing, mission-specific capabilities can be built into a system, or into the structure, that commercial off-the-shelf components may not be able to provide. A partnership between the University of Texas at El Paso, COSMIAC at the University of New Mexico, Northrop Grumman, and the NASA Glenn Research Center is looking into using additive manufacturing techniques to build a complete CubeSat, under the Small Spacecraft Technology Program. The W. M. Keck Center at the University of Texas at El Paso has previously demonstrated the ability to embed electronics and wires into additively manufactured structures. Using this technique, features such as antennas and propulsion systems can be included in the CubeSat structural body. Of interest to this paper, the team is investigating the ability to take a commercial micro pulsed plasma thruster and embed it into the printing process. Tests demonstrating the dielectric strength of the printed material and a proof-of-concept demonstration of the printed thruster will be shown.

  5. Corner-Cube Retroreflector Instrument for Advanced Lunar Laser Ranging

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Folkner, William M.; Gutt, Gary M.; Williams, James G.; Somawardhana, Ruwan P.; Baran, Richard T.

    2012-01-01

    A paper describes how, based on a structural-thermal-optical-performance analysis, it has been determined that a single, large, hollow corner cube (170- mm outer diameter) with custom dihedral angles offers a return signal comparable to the Apollo 11 and 14 solid-corner-cube arrays (each consisting of 100 small, solid corner cubes), with negligible pulse spread and much lower mass. The design of the corner cube, and its surrounding mounting and casing, is driven by the thermal environment on the lunar surface, which is subject to significant temperature variations (in the range between 70 and 390 K). Therefore, the corner cube is enclosed in an insulated container open at one end; a narrow-bandpass solar filter is used to reduce the solar energy that enters the open end during the lunar day, achieving a nearly uniform temperature inside the container. Also, the materials and adhesive techniques that will be used for this corner-cube reflector must have appropriate thermal and mechanical characteristics (e.g., silica or beryllium for the cube and aluminum for the casing) to further reduce the impact of the thermal environment on the instrument's performance. The instrument would consist of a single, open corner cube protected by a separate solar filter, and mounted in a cylindrical or spherical case. A major goal in the design of a new lunar ranging system is a measurement accuracy improvement to better than 1 mm by reducing the pulse spread due to orientation. While achieving this goal, it was desired to keep the intensity of the return beam at least as bright as the Apollo 100-corner-cube arrays. These goals are met in this design by increasing the optical aperture of a single corner cube to approximately 170 mm outer diameter. This use of an "open" corner cube allows the selection of corner cube materials to be based primarily on thermal considerations, with no requirements on optical transparency. 
Such a corner cube also allows for easier pointing requirements, because there is no dependence on total internal reflection, which can fail off-axis.

  6. Making every gram count - Big measurements from tiny platforms (Invited)

    NASA Astrophysics Data System (ADS)

    Fish, C. S.; Neilsen, T. L.; Stromberg, E. M.

    2013-12-01

    The most significant advances in Earth, solar, and space physics over the next decades will originate from new, system-level observational techniques. The most promising technique still to be fully developed and exploited requires conducting multi-point or distributed constellation-based observations. This system-level observational approach is required to understand the 'big picture' coupling between disparate regions such as the solar wind, magnetosphere, ionosphere, upper atmosphere, land, and ocean. The National Research Council, the NASA Science Mission Directorate, and the larger heliophysics community have repeatedly identified the pressing need for multipoint scientific investigations to be implemented via satellite constellations. The NASA Solar Terrestrial Probes Magnetospheric Multiscale (MMS) mission and the NASA Earth Science Division's 'A-train', consisting of the AQUA, CloudSat, CALIPSO and AURA satellites, are examples of such constellations. However, the costs to date of these and other similar proposed constellations have been prohibitive, given the 'large satellite' architectures and the multiple launch vehicles required for implementing the constellations. Financially sustainable development and deployment of multi-spacecraft constellations can only be achieved through the use of small spacecraft that allow multiple hostings per launch vehicle. The revolution in commercial mobile and other battery-powered consumer technology has in recent years helped enable researchers to build and fly very small yet capable satellites, principally CubeSats. A majority of the CubeSat activity and development to date has come from international academia and the amateur radio satellite community, but several of the typical large-satellite vendors have developed CubeSats as well.
Recent government-sponsored CubeSat initiatives, such as the NRO Colony, NSF CubeSat Space Weather, the NASA Office of the Chief Technologist Edison and CubeSat Launch Initiative (CSLI) Educational Launch of Nanosatellites (ELaNa), the Air Force Space Environmental NanoSat Experiment (SENSE), and the ESA QB50 programs have spurred the development of very proficient miniature space sensors and technologies that enable technology demonstration, space and Earth science research, and operational CubeSat-based missions. In this paper we review many of the small, low-cost sensor and instrumentation technologies that have been developed to date as part of the CubeSat movement and examine how these new CubeSat-based technologies are helping us do more with less.

  7. Physical properties of concrete made with Apollo 16 lunar soil sample

    NASA Technical Reports Server (NTRS)

    Lin, T. D.; Love, H.; Stark, D.

    1992-01-01

    This paper describes the first phase of the long-term investigation for the construction of concrete lunar bases. In this phase, petrographic and scanning electron microscope examinations showed that the morphology and elemental composition of the lunar soil made it suitable for use as a fine aggregate for concrete. Based on this finding, calcium aluminate cement and distilled water were mixed with the lunar soil to fabricate test specimens. The test specimens consisted of a 1-in cube, a 1/2-in cube, and three 0.12 x 0.58 x 3.15-in beam specimens. Tests were performed on these specimens to determine compressive strength, modulus of rupture, modulus of elasticity, and thermal coefficient of expansion. Based on examination of the material and test results, it is concluded that lunar soil can be used as a fine aggregate for concrete.

  8. Damage Tolerance of Pre-Stressed Composite Panels Under Impact Loads

    NASA Astrophysics Data System (ADS)

    Johnson, Alastair F.; Toso-Pentecôte, Nathalie; Schueler, Dominik

    2014-02-01

    An experimental test campaign studied the structural integrity of carbon fibre/epoxy panels preloaded in tension or compression and then subjected to gas gun impact tests causing significant damage. The test programme used representative composite aircraft fuselage panels composed of aerospace carbon fibre toughened epoxy prepreg laminates. Preload levels in tension were representative of design limit loads for fuselage panels of this size, and maximum compression preloads were in the post-buckle region. Two main impact scenarios were considered: notch damage from a 12 mm steel cube projectile at velocities in the range 93-136 m/s, and blunt impact damage from 25 mm diameter glass balls at velocities of 64-86 m/s. The combined influence of preload and impact damage on panel residual strengths was measured and the results analysed in the context of damage tolerance requirements for composite aircraft panels. The tests showed structural integrity well above design limit loads for composite panels preloaded in tension and compression with visible notch impact damage from hard body impact tests. However, blunt impact tests on buckled compression-loaded panels caused large delamination damage regions which lowered plate bending stiffness and significantly reduced compression strength in buckling.

  9. Study on Mechanical Properties of Concrete Using Plastic Waste as an Aggregate

    NASA Astrophysics Data System (ADS)

    Jaivignesh, B.; Sofi, A.

    2017-07-01

    Disposal of large quantities of plastic causes land, water and air pollution, so this study was conducted to recycle plastic in concrete. The work investigates the replacement of natural aggregate with non-biodegradable plastic aggregate made from mixed plastic waste. Several tests were conducted, such as the compressive strength of cubes, split tensile strength of cylinders, and flexural strength of prisms, to identify the properties and behaviour of concrete using plastic aggregate. Fine aggregate was replaced by weight at 10%, 15% and 20% with plastic fine (PF) aggregate, and for each fine aggregate replacement, coarse aggregate was also replaced at 15%, 20% and 25% with plastic coarse (PC) aggregate. The literature reports that adding plastic aggregate to concrete reduces its strength owing to poor bonding between the concrete and the plastic aggregate, so 0.3% of steel fiber by weight of cement was added to improve the concrete strength. In total, 60 cubes, 60 cylinders and 40 prisms were cast to determine compressive strength, split tensile strength and flexural strength, respectively. The cast specimens were tested at 7 and 28 days, and the results for concrete using plastic aggregate were compared with conventional concrete. The results show a reduction in the mechanical properties of concrete containing plastic aggregate, mainly due to poor bond strength between the cement and the plastic aggregate.
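    The three strength tests mentioned (cube compression, cylinder splitting, prism flexure) reduce failure loads to stresses with standard formulas. A hedged sketch with illustrative SI dimensions and loads, not the study's data:

```python
import math

# Hedged sketch of the strength formulas behind the cube, cylinder, and
# prism tests; all dimensions and failure loads below are illustrative.

def cube_compressive_strength(load_n, side_mm):
    """f_c = P / A, in MPa (N/mm^2)."""
    return load_n / (side_mm ** 2)

def split_tensile_strength(load_n, dia_mm, length_mm):
    """f_st = 2P / (pi * d * L) for a cylinder split along its length."""
    return 2.0 * load_n / (math.pi * dia_mm * length_mm)

def flexural_strength(load_n, span_mm, width_mm, depth_mm):
    """Centre-point loading on a prism: f_r = 3PL / (2bd^2)."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

fc = cube_compressive_strength(675e3, 150.0)      # 150 mm cube, 675 kN
fst = split_tensile_strength(250e3, 150.0, 300.0)  # 150 x 300 mm cylinder
fr = flexural_strength(12e3, 400.0, 100.0, 100.0)  # 100 mm square prism
```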

  10. a Spatiotemporal Aggregation Query Method Using Multi-Thread Parallel Technique Based on Regional Division

    NASA Astrophysics Data System (ADS)

    Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.

    2015-07-01

    Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and single-threaded processing, however, the query speed cannot meet application requirements. Moreover, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Tests and analysis on real datasets show that this method improves query speed significantly.
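    The divide-aggregate-merge scheme described above can be sketched as follows; the data layout, cube sizes, and COUNT aggregate are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch: divide the spatiotemporal domain into cubes, aggregate
# each cube on its own thread, then merge (integrate) the partial results.
from concurrent.futures import ThreadPoolExecutor

# Synthetic "moving object" records: (x, y, t) triples.
points = [(x % 100, (3 * x) % 100, x % 24) for x in range(10_000)]

def cube_of(p, cell=25, t_cell=6):
    """Map a point to its spatiotemporal cube (regional division)."""
    x, y, t = p
    return (x // cell, y // cell, t // t_cell)

# Partition the domain into cubes.
cubes = {}
for p in points:
    cubes.setdefault(cube_of(p), []).append(p)

def aggregate(cube_points):
    """Per-cube aggregate; here a simple COUNT."""
    return len(cube_points)

# Aggregate all cubes in parallel, then integrate the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(aggregate, cubes.values()))

total = sum(partials)  # equals the full-domain aggregate
```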

  11. Preparation, characterization and nonlinear absorption studies of cuprous oxide nanoclusters, micro-cubes and micro-particles

    NASA Astrophysics Data System (ADS)

    Sekhar, H.; Narayana Rao, D.

    2012-07-01

    Cuprous oxide nanoclusters, micro-cubes and micro-particles were successfully synthesized by reducing copper(II) salt with ascorbic acid in the presence of sodium hydroxide via a co-precipitation method. The X-ray diffraction and FTIR studies revealed the formation of pure single-phase cubic Cu2O. Raman and EPR spectral studies show the presence of CuO in the as-synthesized Cu2O powders. Transmission electron microscopy and field emission scanning electron microscopy data revealed that the morphology evolves from nanoclusters to micro-cubes and micro-particles with increasing NaOH concentration. Linear optical measurements show that the absorption peak maximum shifts towards the red as the morphology changes from nanoclusters to micro-cubes and micro-particles. The nonlinear optical properties were studied using the open-aperture Z-scan technique with 532 nm, 6 ns laser pulses. Samples exhibited both saturable and reverse saturable absorption. Due to confinement effects (an enhanced band gap), we observed an enhanced nonlinear absorption coefficient (β) for the nanoclusters compared to the micro-cubes and micro-particles.
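    The open-aperture Z-scan behaviour described (saturable plus reverse saturable absorption) is commonly modelled with an intensity-dependent absorption coefficient. A hedged sketch of one such model; the functional form and all parameter values are illustrative assumptions, not fitted values from the paper:

```python
import math

# Hedged sketch of an open-aperture Z-scan model for samples showing both
# saturable (SA) and reverse saturable (RSA) absorption; illustrative only.

def on_axis_intensity(z, i0, z0):
    """Gaussian-beam peak intensity along the scan axis."""
    return i0 / (1.0 + (z / z0) ** 2)

def normalized_transmittance(z, i0, z0, alpha0, i_sat, beta, l_eff):
    """Intensity-dependent absorption alpha(I) = alpha0/(1 + I/Is) + beta*I,
    expressed as transmittance normalized to the linear (low-intensity) case."""
    i = on_axis_intensity(z, i0, z0)
    alpha = alpha0 / (1.0 + i / i_sat) + beta * i
    return math.exp(-(alpha - alpha0) * l_eff)

# Far from focus the transmittance approaches 1 (linear regime); at focus
# these assumed parameters make saturable absorption dominate (T > 1).
params = dict(i0=1.0, z0=1.0, alpha0=10.0, i_sat=0.5, beta=2.0, l_eff=0.1)
t_far = normalized_transmittance(50.0, **params)
t_focus = normalized_transmittance(0.0, **params)
```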

  12. Indexing data cubes for content-based searches in radio astronomy

    NASA Astrophysics Data System (ADS)

    Araya, M.; Candia, G.; Gregorio, R.; Mendoza, M.; Solar, M.

    2016-01-01

    Methods for observing space have changed profoundly in the past few decades. The methods needed to detect and record astronomical objects have shifted from conventional observations in the optical range to more sophisticated methods which permit the detection of not only the shape of an object but also the velocity and frequency of emissions in the millimeter-scale wavelength range and the chemical substances from which they originate. The consolidation of radio astronomy through a range of global-scale projects such as the Very Long Baseline Array (VLBA) and the Atacama Large Millimeter/submillimeter Array (ALMA) reinforces the need to develop better methods of data processing that can automatically detect regions of interest (ROIs) within data cubes (position-position-velocity), index them and facilitate subsequent searches via methods based on queries using spatial coordinates and/or velocity ranges. In this article, we present the development of an automatic system for indexing ROIs in data cubes that is capable of automatically detecting and recording ROIs while reducing the necessary storage space. The system is able to process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. We conducted a set of comprehensive experiments to illustrate how our system works. As a result, an index of 3% of the input size was stored in a spatial database, representing a compression ratio of 33:1 over an input of 20.875 GB and yielding an index of approximately 773 MB. On the other hand, a single query can be evaluated over our system in a fraction of a second, showing that the indexing step absorbs much of the computational time involved in data cube processing. 
The system forms part of the Chilean Virtual Observatory (ChiVO), an initiative which belongs to the International Virtual Observatory Alliance (IVOA) that seeks to provide the capability of content-based searches on data cubes to the astronomical community.
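    The core indexing idea, detecting bright connected regions in a cube and storing only compact ROI descriptors, can be sketched as below; the threshold detector and flood fill are illustrative stand-ins for the system's actual pipeline:

```python
# Hedged sketch: find regions of interest (ROIs) in a position-position-
# velocity cube by thresholding, then store only their bounding boxes,
# which is what keeps the index far smaller than the cube itself.

def find_rois(cube, threshold):
    """Return bounding boxes (min/max per axis) of connected bright voxels."""
    nx, ny, nz = len(cube), len(cube[0]), len(cube[0][0])
    seen = set()
    boxes = []
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if cube[x][y][z] <= threshold or (x, y, z) in seen:
                    continue
                # Flood-fill one connected component (6-connectivity).
                stack, voxels = [(x, y, z)], []
                seen.add((x, y, z))
                while stack:
                    cx, cy, cz = stack.pop()
                    voxels.append((cx, cy, cz))
                    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        n = (cx + dx, cy + dy, cz + dz)
                        if (0 <= n[0] < nx and 0 <= n[1] < ny
                                and 0 <= n[2] < nz and n not in seen
                                and cube[n[0]][n[1]][n[2]] > threshold):
                            seen.add(n)
                            stack.append(n)
                boxes.append(tuple((min(v[i] for v in voxels),
                                    max(v[i] for v in voxels))
                                   for i in range(3)))
    return boxes

# Tiny synthetic cube with two separated bright blobs.
cube = [[[0.0] * 8 for _ in range(8)] for _ in range(8)]
for v in ((1, 1, 1), (1, 2, 1), (6, 6, 6)):
    cube[v[0]][v[1]][v[2]] = 5.0
rois = find_rois(cube, threshold=1.0)
```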

  13. A Transformation Approach to Optimal Control Problems with Bounded State Variables

    NASA Technical Reports Server (NTRS)

    Hanafy, Lawrence Hanafy

    1971-01-01

    A technique is described and utilized in the study of the solutions to various general problems in optimal control theory, which are converted into Lagrange problems in the calculus of variations. This is accomplished by mapping certain subsets of Euclidean space onto the closed control and state regions. Nonlinear control problems with the unit m-cube as control region and the unit n-cube as state region are considered.
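    One standard way to realize such a transformation is a smooth bijection from unconstrained variables onto the unit cube; the tanh map below is an illustrative choice and not necessarily the one used in the report:

```python
import math

# Hedged illustration of the transformation idea: map unconstrained
# variables onto the unit n-cube so state/control bounds become implicit.

def to_unit_cube(v):
    """Map each unconstrained component of v into (0, 1)."""
    return [(math.tanh(x) + 1.0) / 2.0 for x in v]

def from_unit_cube(u):
    """Inverse map from (0, 1)^n back to unconstrained variables."""
    return [math.atanh(2.0 * x - 1.0) for x in u]

v = [-3.0, 0.0, 2.5]
u = to_unit_cube(v)         # every component lies strictly in (0, 1)
v_back = from_unit_cube(u)  # round-trips to the original vector
```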

  14. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    NASA Technical Reports Server (NTRS)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times greater than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  15. The IceCube Collaboration: contributions to the 30th International Cosmic Ray Conference (ICRC 2007)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    IceCube Collaboration; Ackermann, M.

    2007-11-02

    This paper bundles 40 contributions by the IceCube collaboration that were submitted to the 30th International Cosmic Ray Conference (ICRC 2007). The articles cover studies on cosmic rays and atmospheric neutrinos, searches for non-localized extraterrestrial ν_e, ν_μ and ν_τ signals, scans for steady and intermittent neutrino point sources, searches for dark matter candidates, magnetic monopoles and other exotic particles, improvements in analysis techniques, as well as future detector extensions. The IceCube observatory will be finalized in 2011 to form a cubic-kilometer ice-Cherenkov detector at the location of the geographic South Pole. At the present state of construction, IceCube consists of 52 paired IceTop surface tanks and 22 IceCube strings with a total of 1426 Digital Optical Modules deployed at depths up to 2350 m. The observatory also integrates the 19-string AMANDA subdetector, which was completed in 2000 and extends IceCube's reach to lower energies. Before the deployment of IceTop, cosmic air showers were registered with the 30-station SPASE-2 surface array. IceCube's low-noise Digital Optical Modules are very reliable, show a uniform response and record waveforms of arriving photons that are resolvable with nanosecond precision over a large dynamic range. Data acquisition, reconstruction and simulation software are running in production mode and the analyses, profiting from the improved data quality and increased overall sensitivity, are well under way.

  16. RX for Writer's Block.

    ERIC Educational Resources Information Center

    Tompkins, Gail E.; Camp, Donna J.

    1988-01-01

    Describes four prewriting techniques that elementary and middle grade students can use to gather and organize ideas for writing, and by so doing, cure writer's block. Techniques discussed are: (1) brainstorming; (2) clustering; (3) freewriting; and (4) cubing.

  17. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
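    The essence of reconstructing an image from measured Fourier components via compressed sensing can be sketched as an l1-regularized inversion; the toy 1-D ISTA solver below illustrates the general approach and is not the VIS_CS algorithm itself:

```python
import numpy as np

# Hedged sketch: recover a sparse signal from a subset of its Fourier
# components by l1-regularized least squares, solved with ISTA.

rng = np.random.default_rng(0)
n, m = 128, 64
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(0.0, 1.0, 5)

rows = rng.choice(n, m, replace=False)        # measured Fourier rows
F = np.fft.fft(np.eye(n))[rows] / np.sqrt(n)  # partial (orthonormal) DFT
y = F @ x_true                                # observed "visibilities"

def ista(F, y, lam=0.01, steps=500):
    """Iterative shrinkage-thresholding for min 0.5||Fx-y||^2 + lam||x||_1."""
    x = np.zeros(F.shape[1])
    t = 1.0 / np.linalg.norm(F.conj().T @ F, 2)   # step size 1/L
    for _ in range(steps):
        g = (F.conj().T @ (F @ x - y)).real       # gradient of data term
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold
    return x

x_hat = ista(F, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```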

  18. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  19. Effect of mineral admixtures on kinetic property and compressive strength of self Compacting Concrete

    NASA Astrophysics Data System (ADS)

    Jagalur Mahalingasharma, Srishaila; Prakash, Parasivamurthy; Vishwanath, K. N.; Jawali, Veena

    2017-06-01

    This paper presents experimental investigations of the influence of the chemical, physical, morphological and mineralogical properties of mineral admixtures such as fly ash, ground granulated blast furnace slag, metakaolin and micro silica, used as cement replacements in self compacting concrete, on workability and compressive strength. Nineteen concrete mixes were cast by replacing cement with fly ash or ground granulated blast furnace slag as a binary blend at 30%, 40% and 50%, and with the addition of micro silica and metakaolin at 10% as a ternary blend with fly ash or ground granulated blast furnace slag; the results were compared with a control mix. A water-powder ratio of 0.3 and a superplasticizer dosage of 1% of cementitious material were kept constant for all mixes. The self compacting concrete was tested for slump flow, V-funnel, L-Box, J-Ring and T50, and the compressive strength of concrete cubes was determined at ages of 3, 7, 28, 56 and 90 days.

  20. Compressive strength performance of OPS lightweight aggregate concrete containing coal bottom ash as partial fine aggregate replacement

    NASA Astrophysics Data System (ADS)

    Muthusamy, K.; Mohamad Hafizuddin, R.; Mat Yahaya, F.; Sulaiman, M. A.; Syed Mohsin, S. M.; Tukimat, N. N.; Omar, R.; Chin, S. C.

    2018-04-01

    Concerns regarding the negative environmental impact of the increasing use of natural sand in the construction industry and the dumping of industrial solid wastes, namely coal bottom ash (CBA) and oil palm shell (OPS), have resulted in the development of environmentally friendly lightweight concrete. The present study investigates the effect of coal bottom ash as a partial fine aggregate replacement on the workability and compressive strength of oil palm shell lightweight aggregate concrete (OPS LWAC). The fresh and mechanical properties of this concrete containing various percentages of coal bottom ash as partial fine aggregate replacement were investigated, and the results were compared to OPS LWAC with 100% sand as the control specimen. Workability was investigated by conducting a slump test. All specimens were cast in the form of cubes and water cured until the testing age. The compressive strength test was carried out at 7 and 28 days. The findings show that integrating coal bottom ash at a suitable proportion enhances the strength of oil palm shell lightweight aggregate concrete.

  1. Impact of plunging breaking waves on a partially submerged cube

    NASA Astrophysics Data System (ADS)

    Wang, A.; Ikeda, C.; Duncan, J. H.

    2013-11-01

    The impact of a deep-water plunging breaking wave on a partially submerged cube is studied experimentally in a tank that is 14.8 m long and 1.2 m wide with a water depth of 0.91 m. The breakers are created from dispersively focused wave packets generated by a programmable wave maker. The water surface profile in the vertical center plane of the cube is measured using a cinematic laser-induced fluorescence technique with movie frame rates ranging from 300 to 4,500 Hz. The pressure distribution on the front face of the cube is measured with 24 fast-response sensors simultaneously with the wave profile measurements. The cube is positioned vertically at three heights relative to the mean water level and horizontally at a distance from the wave maker where a strong vertical water jet is formed. The portion of the water surface between the contact point on the front face of the cube and the wave crest is fitted with a circular arc, and the radius and vertical position of the fitted circle are tracked during the impact. The vertical acceleration of the contact point reaches more than 50 times the acceleration of gravity, and the pressure distribution just below the free surface shows a localized high-pressure region with a very high vertical pressure gradient. This work is supported by the Office of Naval Research under grant N000141110095.
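    Fitting a circular arc to the extracted surface-profile points, as described above, is typically done with a least-squares circle fit; the algebraic (Kasa) fit below is one standard method, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

# Hedged sketch: algebraic (Kasa) circle fit. The circle equation
# x^2 + y^2 = 2a x + 2b y + c is linear in (a, b, c), so a least-squares
# solve recovers the center (a, b) and radius r = sqrt(c + a^2 + b^2).

def fit_circle(pts):
    """Fit a circle to an (N, 2) array of points; return (a, b, r)."""
    pts = np.asarray(pts, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Synthetic arc: points sampled from a circle of radius 2 centered at (1, -3).
theta = np.linspace(0.2, 2.0, 25)
pts = np.column_stack([1 + 2 * np.cos(theta), -3 + 2 * np.sin(theta)])
a, b, r = fit_circle(pts)
```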

  2. Sparsity based target detection for compressive spectral imagery

    NASA Astrophysics Data System (ADS)

    Boada, David Alberto; Arguello Fuentes, Henry

    2016-09-01

    Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, in contrast to traditional architectures that need a complete set of measurements of the data cube, thereby easing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm before it can be processed, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in the compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
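    For context, the classical matched-filter statistic that sparsity-based detectors are usually compared against can be sketched as follows, applied here to synthetic spectra rather than real compressive measurements:

```python
import numpy as np

# Hedged sketch of the classical covariance-whitened matched filter,
# the kind of baseline detector such methods are compared against.
# Signature, covariance, and data below are synthetic toys.

rng = np.random.default_rng(1)
bands = 50
target = rng.normal(0.0, 1.0, bands)   # known target signature s
cov = np.eye(bands) * 0.1              # background covariance (toy: white)
cov_inv = np.linalg.inv(cov)

def matched_filter(x, s, cov_inv):
    """MF statistic (s^T C^-1 x) / (s^T C^-1 s); ~1 for a full-strength target."""
    return float(s @ cov_inv @ x) / float(s @ cov_inv @ s)

background = rng.normal(0.0, np.sqrt(0.1), bands)
score_bg = matched_filter(background, target, cov_inv)
score_tg = matched_filter(background + target, target, cov_inv)
```

With a unit-amplitude target added, the statistic rises by exactly 1 relative to the background score, which is what makes thresholding it a detector.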

  3. In vitro dissolution kinetic study of theophylline from hydrophilic and hydrophobic matrices.

    PubMed

    Maswadeh, Hamzah M; Semreen, Mohammad H; Abdulhalim, Abdulatif A

    2006-01-01

    Oral dosage forms containing 300 mg theophylline in matrix-type tablets were prepared by the direct compression method using two kinds of matrices: glyceryl behenate (hydrophobic) and (hydroxypropyl)methyl cellulose (hydrophilic). The in vitro release kinetics of these formulations were studied at pH 6.8 using the USP dissolution apparatus with the paddle assembly. The kinetics of the dissolution process were studied by analyzing the dissolution data using four kinetic equations: the zero-order equation, the first-order equation, the Higuchi square root equation and the Hixson-Crowell cube root law. The analysis of the dissolution kinetic data for the theophylline preparations shows that release follows first-order kinetics and involves erosion/diffusion, with an alteration in the surface area and diameter of the matrix system, as well as in the diffusion path length from the matrix drug load, during the dissolution process. This relation is best described by the combined use of the first-order equation and the Hixson-Crowell cube root law.
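    The two models singled out by the analysis can be written out directly. A hedged sketch with synthetic rate constants (not the fitted values from the study):

```python
import math

# Hedged sketch of the two release models fitted above: first-order
# kinetics, W = W0 * exp(-k1 * t), and the Hixson-Crowell cube-root law,
# W0^(1/3) - W^(1/3) = kappa * t. Rate constants below are illustrative.

def hixson_crowell_remaining(w0, kappa, t):
    """Drug mass remaining after time t under the cube-root law."""
    return max(w0 ** (1.0 / 3.0) - kappa * t, 0.0) ** 3

def first_order_remaining(w0, k1, t):
    """Drug mass remaining after time t under first-order kinetics."""
    return w0 * math.exp(-k1 * t)

w0 = 300.0  # mg theophylline per tablet
times = [0.0, 1.0, 2.0, 4.0, 8.0]
hc = [hixson_crowell_remaining(w0, 0.3, t) for t in times]
fo = [first_order_remaining(w0, 0.15, t) for t in times]
```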

  4. Cosmic ray composition and energy spectrum from 1-30 PeV using the 40-string configuration of IceTop and IceCube

    NASA Astrophysics Data System (ADS)

    IceCube Collaboration; Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Altmann, D.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Baum, V.; Bay, R.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Bell, M.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brayeur, L.; Brown, A. M.; Bruijn, R.; Brunner, J.; Buitink, S.; Caballero-Mora, K. S.; Carson, M.; Casey, J.; Casier, M.; Chirkin, D.; Christy, B.; Clevermann, F.; Cohen, S.; Cowen, D. F.; Silva, A. H. Cruz; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Dunkman, M.; Eagan, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Franckowiak, A.; Franke, R.; Frantzen, K.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Góra, D.; Grant, D.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Ismail, A. Haj; Hallgren, A.; Halzen, F.; Hanson, K.; Heereman, D.; Heimann, P.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Homeier, A.; Hoshina, K.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobi, E.; Jacobsen, J.; Japaridze, G. S.; Jlelati, O.; Johansson, H.; Kappes, A.; Karg, T.; Karle, A.; Kiryluk, J.; Kislat, F.; Kläs, J.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krasberg, M.; Kroll, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Larson, M. 
J.; Lauer, R.; Lesiak-Bzdak, M.; Lünemann, J.; Madsen, J.; Maruyama, R.; Mase, K.; Matis, H. S.; McNally, F.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Naumann, U.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Panknin, S.; Paul, L.; Pepper, J. A.; de los Heros, C. Pérez; Pieloth, D.; Pirk, N.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Rädel, L.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Richman, M.; Riedel, B.; Rodrigues, J. P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Salameh, T.; Sander, H.-G.; Santander, M.; Sarkar, S.; Saba, S. M.; Schatto, K.; Scheel, M.; Scheriau, F.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönherr, L.; Schönwald, A.; Schukraft, A.; Schulte, L.; Schulz, O.; Seckel, D.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Smith, M. W. E.; Soiron, M.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Sullivan, G. W.; Taavola, H.; Taboada, I.; Tamburro, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Usner, M.; van Eijndhoven, N.; van der Drift, D.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wasserman, R.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, C.; Xu, D. L.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Ziemann, J.; Zilles, A.; Zoll, M.

    2013-02-01

    The mass composition of high energy cosmic rays depends on their production, acceleration, and propagation. The study of cosmic ray composition can therefore reveal hints of the origin of these particles. At the South Pole, the IceCube Neutrino Observatory is capable of measuring two components of cosmic ray air showers in coincidence: the electromagnetic component at high altitude (2835 m) using the IceTop surface array, and the muonic component above ˜1 TeV using the IceCube array. This unique detector arrangement provides an opportunity for precision measurements of the cosmic ray energy spectrum and composition in the region of the knee and beyond. We present the results of a neural network analysis technique to study the cosmic ray composition and the energy spectrum from 1 PeV to 30 PeV using data recorded using the 40-string/40-station configuration of the IceCube Neutrino Observatory.

  5. Side information in coded aperture compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.

    2017-02-01

    Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on FPA detectors, spatial light modulators (SLMs), digital micromirror devices (DMDs), and dispersive elements. Using DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD allows not only the collection of a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also the design of the spatial structure of the coded apertures to maximize the information content of the compressive measurements. Although sparsity is usually the only signal characteristic assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB image is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures, in addition to the improvement in reconstruction quality obtained by the use of side information.
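    The forward model behind such imagers (mask each band with the coded aperture, shear by the disperser, integrate on the detector) can be sketched as below; the random code and toy cube are illustrative assumptions, not the paper's designed apertures:

```python
import numpy as np

# Hedged sketch of a coded-aperture spectral sensing model: each band of
# the data cube is masked by the (here random) coded aperture, shifted by
# one column per band by the disperser, and summed on the detector.

rng = np.random.default_rng(2)
ny, nx, nl = 8, 8, 4                # spatial rows/cols, spectral bands
cube = rng.random((ny, nx, nl))     # scene data cube
code = (rng.random((ny, nx)) > 0.5).astype(float)  # coded aperture pattern

# Detector is wider than the scene by the total dispersion (nl - 1 columns).
y = np.zeros((ny, nx + nl - 1))
for l in range(nl):
    y[:, l:l + nx] += code * cube[:, :, l]  # mask, shift by band, integrate

# One 2-D measurement now encodes the whole 3-D cube.
```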

  6. Investigating Access Performance of Long Time Series with Restructured Big Model Data

    NASA Astrophysics Data System (ADS)

    Shen, S.; Ostrenga, D.; Vollmer, B.; Meyer, D. J.

    2017-12-01

    Data sets generated by models are substantially increasing in volume, due to increases in spatial and temporal resolution, and the number of output variables. Many users wish to download subsetted data in preferred data formats and structures, as it is getting increasingly difficult to handle the original full-size data files. For example, application research users, such as those involved with wind or solar energy, or extreme weather events, are likely only interested in daily or hourly model data at a single point or for a small area for a long time period, and prefer to have the data downloaded in a single file. With native model file structures, such as hourly data from NASA Modern-Era Retrospective analysis for Research and Applications Version-2 (MERRA-2), it may take over 10 hours for the extraction of interested parameters at a single point for 30 years. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is exploring methods to address this particular user need. One approach is to create value-added data by reconstructing the data files. Taking MERRA-2 data as an example, we have tested converting hourly data from one-day-per-file into different data cubes, such as one-month, one-year, or whole-mission. Performance is compared for reading local data files and accessing data through interoperable services, such as OPeNDAP. Results show that, compared to the original file structure, the new data cubes offer much better performance for accessing long time series. We have noticed that performance is associated with the cube size and structure, the compression method, and how the data are accessed. An optimized data cube structure will not only improve data access, but also may enable better online analytic services.
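    The access-pattern difference behind these results can be illustrated with a toy example: extracting a single-point time series touches every per-day array in the native layout but only one contiguous slice in a restructured cube (shapes here are tiny stand-ins for real MERRA-2 granules):

```python
import numpy as np

# Hedged toy illustration of why restructuring helps: a 30-year hourly
# series at one grid point touches ~11,000 per-day files in the native
# layout, but a single contiguous slice in a whole-mission cube.

days, hours, ny, nx = 100, 24, 4, 5
daily_files = [np.arange(hours * ny * nx, dtype=float).reshape(hours, ny, nx)
               + d for d in range(days)]  # one small array per "file"

# Native layout: one read per daily file, concatenated afterwards.
series_native = np.concatenate([f[:, 2, 3] for f in daily_files])

# Restructured layout: one whole-mission cube, one slice.
cube = np.stack(daily_files).reshape(days * hours, ny, nx)
series_cube = cube[:, 2, 3]
```

Both extractions yield the same series; the difference in practice is the number of file opens and seeks, which is what dominates the 10-hour native-layout extraction time cited above.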

  7. Investigating Access Performance of Long Time Series with Restructured Big Model Data

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Ostrenga, Dana M.; Vollmer, Bruce E.; Meyer, Dave

    2017-01-01

    Data sets generated by models are substantially increasing in volume, due to increases in spatial and temporal resolution, and the number of output variables. Many users wish to download subsetted data in preferred data formats and structures, as it is getting increasingly difficult to handle the original full-size data files. For example, application research users such as those involved with wind or solar energy, or extreme weather events are likely only interested in daily or hourly model data at a single point (or for a small area) for a long time period, and prefer to have the data downloaded in a single file. With native model file structures, such as hourly data from NASA Modern-Era Retrospective analysis for Research and Applications Version-2 (MERRA-2), it may take over 10 hours for the extraction of parameters-of-interest at a single point for 30 years. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is exploring methods to address this particular user need. One approach is to create value-added data by reconstructing the data files. Taking MERRA-2 data as an example, we have tested converting hourly data from one-day-per-file into different data cubes, such as one-month, or one-year. Performance is compared for reading local data files and accessing data through interoperable services, such as OPeNDAP. Results show that, compared to the original file structure, the new data cubes offer much better performance for accessing long time series. We have noticed that performance is associated with the cube size and structure, the compression method, and how the data are accessed. An optimized data cube structure will not only improve data access, but also may enable better online analysis services.

  8. Compressive and Flexural Tests on Adobe Samples Reinforced with Wire Mesh

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Al-Tawil, Y. M. Y.; Syed Mohsin, S. M.; Gul, Y.; Ramli, N. I.

    2018-03-01

    Adobe is an economical, naturally available, and environmentally friendly construction material that offers excellent thermal and sound insulation as well as indoor air quality. It is important to understand and enhance the mechanical properties of this material, for which a high degree of variation is reported in the literature owing to the lack of research and standardization in this field. The present paper focuses first on understanding the mechanical behaviour of adobe subjected to compressive stresses as well as flexure, and then on enhancing the same with the help of steel wire mesh as reinforcement. A total of 22 samples were tested, of which 12 cube samples were tested for compressive strength and 10 beam samples for modulus of rupture. Half of the samples in each category were control samples, i.e., without wire mesh reinforcement, whereas the remaining half were reinforced with a single layer of wire mesh per sample. It has been found that the compressive strength of adobe increases by about 43% after adding a single layer of wire mesh reinforcement. The flexural response of adobe has also shown improvement with the addition of wire mesh reinforcement.

  9. Simulating the WFIRST coronagraph integral field spectrograph

    NASA Astrophysics Data System (ADS)

    Rizzo, Maxime J.; Groff, Tyler D.; Zimmermann, Neil T.; Gong, Qian; Mandell, Avi M.; Saxena, Prabal; McElwain, Michael W.; Roberge, Aki; Krist, John; Riggs, A. J. Eldorado; Cady, Eric J.; Mejia Prada, Camilo; Brandt, Timothy; Douglas, Ewan; Cahoy, Kerri

    2017-09-01

    A primary goal of direct imaging techniques is to spectrally characterize the atmospheres of planets around other stars at extremely high contrast levels. To achieve this goal, coronagraphic instruments have favored integral field spectrographs (IFS) as the science cameras to disperse the entire search area at once and obtain spectra at each location, since the planet position is not known a priori. These spectrographs are useful against confusion from speckles and background objects, and can also help in the speckle subtraction and wavefront control stages of the coronagraphic observation. We present a software package, the Coronagraph and Rapid Imaging Spectrograph in Python (crispy) to simulate the IFS of the WFIRST Coronagraph Instrument (CGI). The software propagates input science cubes using spatially and spectrally resolved coronagraphic focal plane cubes, transforms them into IFS detector maps and ultimately reconstructs the spatio-spectral input scene as a 3D datacube. Simulated IFS cubes can be used to test data extraction techniques, refine sensitivity analyses and carry out design trade studies of the flight CGI-IFS instrument. crispy is a publicly available Python package and can be adapted to other IFS designs.

  10. BIRDY - Interplanetary CubeSat for planetary geodesy of Small Solar System Bodies (SSSB).

    NASA Astrophysics Data System (ADS)

    Hestroffer, D.; Agnan, M.; Segret, B.; Quinsac, G.; Vannitsen, J.; Rosenblatt, P.; Miau, J. J.

    2017-12-01

    We are developing the Birdy concept of a scientific interplanetary CubeSat for cruise or proximity operations around a small Solar System body (asteroid, comet, or irregular satellite). The scientific aim is to characterise the body's shape, gravity field, and internal structure through imaging and radio-science techniques. Radio-science is now in common use in planetary science (flybys or orbiters) to derive the mass of the scientific target and possibly higher-order terms of its gravity field. Its application to a nano-satellite brings the advantage of enabling low orbits that can get closer to the body's surface, hence increasing the SNR for precise orbit determination (POD), with a fully dedicated instrument. Additionally, it can be applied to two or more satellites, on a leading-trailing trajectory, to improve the gravity field determination. However, the application of this technique to CubeSats in deep space, including the inter-satellite link, has yet to be proven. Interplanetary CubeSats need to overcome a few challenges before successfully reaching their deep-space objectives: link to the ground segment, energy supply, protection against radiation, etc. Besides, the Birdy CubeSat, as our basis concept, is designed to accompany a mothercraft and relies partly on the main mission for reaching the target, as well as on the data link with the Earth. However, constraints on the mothercraft need to be reduced by making the CubeSat as autonomous as possible. In this respect, propulsion and auto-navigation are key aspects that we are studying in a Birdy-T engineering model. We envisage a 3U CubeSat with a radio link, an object-tracker and imaging function, and an autonomous ionic propulsion system. We are considering two case studies for autonomous guidance, navigation, and control with autonomous propulsion: in cruise and in proximity, necessitating ΔV up to 2 m/s for a total budget of about 50 m/s. In addition to the propulsion, in-flight orbit determination (IFOD) and orbit maintenance are studied, through analysis of images by an object-tracker and astrometry of solar system objects in front of background stars. Before going to deep space, our project will start with BIRDY-1 orbiting the Earth, to validate the adopted propulsion concept, IFOD, and orbit maintenance, as well as the radio-science and POD.

  11. Exorcising the Ghost in the Machine: Synthetic Spectral Data Cubes for Assessing Big Data Algorithms

    NASA Astrophysics Data System (ADS)

    Araya, M.; Solar, M.; Mardones, D.; Hochfärber, T.

    2015-09-01

    The size and quantity of the data being generated by large astronomical projects like ALMA require a paradigm change in astronomical data analysis. Complex data, such as highly sensitive spectroscopic data in the form of large data cubes, are not only difficult to manage, transfer, and visualize, but they also make traditional data analysis techniques unfeasible. Consequently, attention has turned to machine learning and artificial intelligence techniques to develop approximate and adaptive methods for astronomical data analysis within a reasonable computational time. Unfortunately, these techniques are usually suboptimal, stochastic, and strongly dependent on their parameters, which could easily turn into “a ghost in the machine” for astronomers and practitioners. Therefore, a proper assessment of these methods is not only desirable but mandatory for trusting them in large-scale usage. The problem is that positively verifiable results are scarce in astronomy, and moreover, science using bleeding-edge instrumentation naturally lacks reference values. We propose ASYDO (Astronomical SYnthetic Data Observations), a virtual service that generates synthetic spectroscopic data in the form of data cubes. The objective of the tool is not to produce accurate astrophysical simulations, but to generate large numbers of labelled synthetic data cubes, to assess advanced computing algorithms for astronomy and to develop novel Big Data algorithms. The synthetic data are generated using a set of spectral lines, template functions for spatial and spectral distributions, and simple models that produce reasonable synthetic observations. Emission lines are obtained automatically using the IVOA's SLAP protocol (or from a relational database), and their spectral profiles correspond to distributions in the exponential family. The spatial distributions correspond to simple functions (e.g., a 2D Gaussian) or to scalable template objects. The intensity, broadening, and radial velocity of each line are given by very simple and naive physical models, yet ASYDO's generic implementation supports new user-made models, which potentially allows adding more realistic simulations. The resulting data cube is saved as a FITS file, also including all the tables and images used for generating the cube. We expect to implement ASYDO as a virtual observatory service in the near future.
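    A minimal sketch of the kind of cube such a generator produces (all names and parameters here are illustrative, not ASYDO's actual API): one Gaussian spectral line over a 2-D Gaussian source, plus noise, in a (frequency, y, x) array.

```python
import numpy as np

# Toy synthetic observation: a 2-D Gaussian source emitting a single
# Gaussian spectral line, stored as a (channel, y, x) data cube.
ny, nx, nchan = 32, 32, 64
y, x = np.mgrid[0:ny, 0:nx]
freq = np.linspace(100.0, 101.0, nchan)          # GHz, arbitrary band

spatial = np.exp(-(((x - 16) ** 2 + (y - 16) ** 2) / (2 * 4.0 ** 2)))
line_center, line_width = 100.5, 0.05            # GHz, assumed line
spectral = np.exp(-((freq - line_center) ** 2) / (2 * line_width ** 2))

cube = spectral[:, None, None] * spatial[None, :, :]
cube += np.random.default_rng(1).normal(0, 0.003, cube.shape)  # noise

assert cube.shape == (nchan, ny, nx)
# The brightest voxel sits over the source peak, wherever the line is.
assert np.unravel_index(cube.argmax(), cube.shape)[1:] == (16, 16)
```

    Labelled cubes like this (where the true source position, line center, and width are known by construction) are what make algorithm assessment possible when real reference values are scarce.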

  12. Soft bilateral filtering volumetric shadows using cube shadow maps

    PubMed Central

    Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang

    2017-01-01

    Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real time while preserving the crispness of boundaries. This research presents a new technique for generating high-quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which requires extensive time, the proposed technique adopts downsampling in calculating ray marching. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points in evaluating light scattering and then introducing bilateral interpolation to improve volumetric shadows, significantly removing the inherent deficiencies of shadow maps. The technique produces soft volumetric shadows with good performance and high quality, which shows its potential for interactive applications. PMID:28632740
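    The bilateral idea the abstract relies on can be illustrated with a minimal 1-D filter (a generic sketch, not the paper's implementation): spatially close samples are averaged only when their values are also similar, so noise along a ray is smoothed while a hard shadow edge survives.

```python
import numpy as np

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Edge-preserving smoothing of a 1-D signal: each weight combines
    spatial distance and value (range) difference."""
    out = np.empty(len(signal))
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((signal[idx] - signal[i]) ** 2)
                      / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# A hard shadow edge with noise along a view ray: smoothing reduces
# the noise, but the range term keeps samples across the edge apart.
ray = np.r_[np.full(10, 0.0), np.full(10, 1.0)]
ray += np.random.default_rng(2).normal(0, 0.02, 20)
smoothed = bilateral_1d(ray)
assert abs(smoothed[0]) < 0.1 and abs(smoothed[-1] - 1.0) < 0.1
assert abs(smoothed[9] - smoothed[10]) > 0.5   # edge preserved
```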

  13. An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1

    DTIC Science & Technology

    1992-11-01

    Lempel, Ziv, Welch (LZW) Compression … Lossless Compression Test Results … since IBM holds the patent for this technique. The LZW compression is related to two compression techniques known as … compression, using the input stream as data. This step is possible because the compression algorithm always outputs the phrase and character components of a …
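    The record's text is fragmentary, but the LZW scheme it refers to is standard and small enough to sketch (an illustrative encoder, not the report's implementation): the dictionary starts with all single bytes and grows with each phrase plus its next character.

```python
def lzw_compress(data):
    """Minimal LZW encoder: start with the 256 single-byte strings,
    then add each (phrase + next byte) seen to the dictionary."""
    table = {bytes([i]): i for i in range(256)}
    phrase, out = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate          # keep extending the match
        else:
            out.append(table[phrase])   # emit code for longest match
            table[candidate] = len(table)  # new dictionary entry
            phrase = bytes([byte])
    if phrase:
        out.append(table[phrase])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
# Repeated phrases compress: 16 codes for 24 input bytes.
assert len(codes) == 16
```

    The decompressor can rebuild the same dictionary from the code stream alone, which is why only the codes need to be stored.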

  14. An Encoding Method for Compressing Geographical Coordinates in 3d Space

    NASA Astrophysics Data System (ADS)

    Qian, C.; Jiang, R.; Li, M.

    2017-09-01

    This paper proposes an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it lessens the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in the 3D models, (3) encoding the coordinates of the vertices with a combination of Cube Index Code (CIC) and Geometry Code. A series of geographical 3D models were used to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90% or more while maintaining acceptable encoding and decoding speed. In conclusion, this method achieves a remarkable compression rate in vertex bit size with a steerable precision loss. It is of practical value for web 3D map storage and transmission.
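    The paper's Cube Index Code itself is not reproduced in the abstract; a common way to encode an octree cell path, which likewise supports progressive transmission via shared code prefixes, looks like this (an illustrative sketch, not the authors' CIC):

```python
def octree_code(x, y, z, depth, lo=0.0, hi=1.0):
    """Encode a point's octree cell path to `depth` levels: at each
    level pick one of 8 children (3 bits) by halving the cube."""
    code = 0
    lox = loy = loz = lo
    size = hi - lo
    for _ in range(depth):
        size /= 2.0
        octant = 0
        if x >= lox + size: octant |= 1; lox += size
        if y >= loy + size: octant |= 2; loy += size
        if z >= loz + size: octant |= 4; loz += size
        code = (code << 3) | octant
    return code

# Nearby points share the coarse cells (a common code prefix), so a
# client can load coarse levels first and refine progressively.
a = octree_code(0.30, 0.60, 0.90, depth=4)
b = octree_code(0.32, 0.61, 0.91, depth=4)
assert a != b            # they separate at the finest level
assert a >> 3 == b >> 3  # identical path down to level 3
```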

  15. Deformation modeling and constitutive modeling for anisotropic superalloys

    NASA Technical Reports Server (NTRS)

    Milligan, Walter W.; Antolovich, Stephen D.

    1989-01-01

    A study of deformation mechanisms in the single crystal superalloy PWA 1480 was conducted. Monotonic and cyclic tests were conducted from 20 to 1093 C. Both (001) and near-(123) crystals were tested, at strain rates of 0.5 and 50 percent/minute. The deformation behavior could be grouped into two temperature regimes: low temperatures, below 760 C; and high temperatures, above 820 to 950 C depending on the strain rate. At low temperatures, the mechanical behavior was very anisotropic. An orientation dependent CRSS, a tension-compression asymmetry, and anisotropic strain hardening were all observed. The material was deformed by planar octahedral slip. The anisotropic properties were correlated with the ease of cube cross-slip, as well as the number of active slip systems. At high temperatures, the material was isotropic, and deformed by homogeneous gamma by-pass. It was found that the temperature dependence of the formation of superlattice-intrinsic stacking faults was responsible for the local minimum in the CRSS of this alloy at 400 C. It was proposed that the cube cross-slip process must be reversible. This was used to explain the reversible tension-compression asymmetry, and was used to study models of cross-slip. As a result, the cross-slip model proposed by Paidar, Pope and Vitek was found to be consistent with the proposed slip reversibility. The results were related to anisotropic viscoplastic constitutive models. The model proposed by Walter and Jordan was found to be capable of modeling all aspects of the material anisotropy. Temperature and strain rate boundaries for the model were proposed, and guidelines for numerical experiments were proposed.

  16. The durability of concrete containing recycled tyres as a partial replacement of fine aggregate

    NASA Astrophysics Data System (ADS)

    Syamir Senin, Mohamad; Shahidan, Shahiron; Syazani Leman, Alif; Othman, Nurulain; Shamsuddin, Shamrul-mar; Ibrahim, M. H. W.; Zuki, S. S. Mohd

    2017-11-01

    Nowadays, uncontrolled disposal of waste materials such as tyres can affect the environment. Therefore, careful management of waste disposal must be practised in order to conserve the environment. Waste tyres can be used as a replacement for both fine aggregate and coarse aggregate in the production of concrete. This research was conducted to assess the durability of concrete containing recycled tyres which have been crushed into fine fragments to replace fine aggregate in the concrete mix. This study presents an overview of the use of waste rubber as a partial replacement of natural fine aggregate in a concrete mix. 36 concrete cubes measuring 100 mm × 100 mm × 100 mm and 12 concrete cubes measuring 150 mm × 150 mm × 150 mm were prepared with different percentages of rubber from recycled tyres (0%, 3%, 5% and 7%) as fine aggregate replacement. The results obtained show that replacing fine aggregate with 7% rubber yielded a compressive strength of 43.7 MPa, while the addition of 3% rubber in the concrete sample yielded a high compressive strength of 50.8 MPa. This shows that the strength and workability of concrete decrease as the amount of rubber used as a replacement for fine aggregate increases. On the other hand, the water absorption test indicated that concrete containing rubber has better water absorption ability. In this study, 3% rubber was found to be the optimal percentage as a partial replacement for fine aggregate in the production of concrete.

  17. Model predictive and reallocation problem for CubeSat fault recovery and attitude control

    NASA Astrophysics Data System (ADS)

    Franchi, Loris; Feruglio, Lorenzo; Mozzillo, Raffaele; Corpino, Sabrina

    2018-01-01

    In recent years, thanks to the growth of know-how in machine-learning techniques and the advance of the computational capabilities of on-board processing, expensive computing algorithms, such as Model Predictive Control, have begun to spread to space applications, even on small on-board processors. The paper presents an algorithm for optimal fault recovery of a 3U CubeSat, developed in the MathWorks MATLAB & Simulink environment. This algorithm involves optimization techniques aimed at obtaining the optimal recovery solution, and a Model Predictive Control approach for the attitude control. The simulated system is a CubeSat in Low Earth Orbit: the attitude control is performed with three magnetic torquers and a single reaction wheel. The simulation neglects errors in the attitude determination of the satellite, and focuses on the recovery approach and control method. The optimal recovery approach takes advantage of the properties of magnetic actuation, which allows redistribution of the control action when a fault occurs on a single magnetic torquer, even in the absence of redundant actuators. In addition, the paper presents the results of the implementation of the Model Predictive approach to control the attitude of the satellite.
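    The reallocation idea can be sketched as a least-squares control allocation problem (a generic illustration with an assumed actuator-to-torque matrix, not the paper's algorithm): when one torquer faults, drop its column and re-solve for the remaining actuators.

```python
import numpy as np

# Assumed actuator-to-torque map for three torquers (illustrative
# numbers only): the commanded torque is tau = B @ u.
B = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, 0.2],
              [0.2, 0.1, 1.0]])
tau_cmd = np.array([0.4, -0.2, 0.1])   # desired control torque

u_nominal = np.linalg.solve(B, tau_cmd)   # all actuators healthy

failed = 1                              # index of the faulted torquer
B_f = np.delete(B, failed, axis=1)      # drop the failed column
u_f, *_ = np.linalg.lstsq(B_f, tau_cmd, rcond=None)

# With one actuator lost, the remaining two give the least-squares
# best approximation of the commanded torque.
assert np.allclose(B @ u_nominal, tau_cmd)
err = np.linalg.norm(B_f @ u_f - tau_cmd)
assert err < np.linalg.norm(tau_cmd)
```

    In a real magnetic-actuation setting, B would also depend on the local geomagnetic field and change along the orbit, which is one reason predictive, horizon-based control is attractive there.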

  18. Data Processing Methods for 3D Seismic Imaging of Subsurface Volcanoes: Applications to the Tarim Flood Basalt.

    PubMed

    Wang, Lei; Tian, Wei; Shi, Yongmin

    2017-08-07

    The morphology and structure of plumbing systems can provide key information on the eruption rate and style of basalt lava fields. The most powerful way to study subsurface geo-bodies is to use industrial 3D reflection seismological imaging. However, strategies to image subsurface volcanoes are very different from those for oil and gas reservoirs. In this study, we process seismic data cubes from the Northern Tarim Basin, China, to illustrate how to visualize sills through opacity rendering techniques and how to image conduits by time-slicing. In the first case, we isolated probes bounded by the seismic horizons marking the contacts between sills and encasing strata, applying opacity rendering techniques to extract sills from the seismic cube. The resulting detailed sill morphology shows that the flow direction is from the dome center to the rim. In the second seismic cube, we use time-slices to image the conduits, which correspond to marked discontinuities within the encasing rocks. A set of time-slices obtained at different depths shows that the Tarim flood basalts erupted from central volcanoes fed by separate pipe-like conduits.
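    Both operations reduce to simple slicing and masking of a 3-D amplitude array; a toy sketch (the cube, threshold, and embedded anomaly below are invented for illustration, not Tarim data):

```python
import numpy as np

# Toy seismic cube (time/depth, inline, crossline) with a bright
# "sill-like" high-amplitude body at one travel-time sample.
rng = np.random.default_rng(3)
cube = rng.normal(0, 0.05, (50, 40, 40))
cube[20, 10:30, 10:30] += 1.0          # anomaly at slice 20

# Time-slicing: one horizontal slice per depth/travel-time sample.
slice20 = cube[20]
assert slice20.shape == (40, 40)

# Opacity-rendering idea: keep only voxels above an amplitude cutoff
# so the bright geo-body stands out; everything else is transparent.
mask = cube > 0.5
assert mask[20, 15, 15] and not mask[5, 15, 15]
assert mask.sum() == 400               # only the anomaly survives
```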

  19. BASKET on-board software library

    NASA Astrophysics Data System (ADS)

    Luntzer, Armin; Ottensamer, Roland; Kerschbaum, Franz

    2014-07-01

    The University of Vienna is a provider of on-board data processing software with a focus on data compression, such as that used on board the highly successful Herschel/PACS instrument as well as in the small BRITE-Constellation fleet of CubeSats. Current contributions are being made to CHEOPS, SAFARI and PLATO. The effort was taken to review the various functions developed for Herschel and provide a consolidated software library to facilitate the work for future missions. This library is a shopping basket of algorithms. Its contents are separated into four classes: auxiliary functions (e.g. circular buffers), preprocessing functions (e.g. for calibration), lossless data compression (arithmetic or Rice coding) and lossy reduction steps (ramp fitting etc.). The "BASKET" has all the functionality needed to create an on-board data processing chain. All sources are written in C, supplemented by optimized versions in assembly targeting popular CPU architectures for space applications. BASKET is open source and constantly growing.
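    BASKET's own Rice coder is not shown in the abstract, but the scheme itself is simple enough to sketch (an illustrative implementation, in Python rather than BASKET's C): each value is split into a unary-coded quotient and a fixed k-bit remainder, which is efficient for the small prediction residuals on-board compressors typically produce.

```python
def rice_encode(values, k):
    """Rice coding of non-negative integers: quotient in unary
    (q ones then a zero), then the k-bit remainder in binary."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                 # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

# Small residuals cost few bits; with k = 2:
#   0 -> 000, 1 -> 001, 3 -> 011, 6 -> 1010
encoded = rice_encode([0, 1, 3, 6], k=2)
assert len(encoded) == 13
```

    Choosing k close to log2 of the mean residual keeps the unary part short, which is why Rice codes pair well with the predictive preprocessing steps the library also provides.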

  20. Properties of concrete containing ground palm oil fuel ash as fine aggregate replacement

    NASA Astrophysics Data System (ADS)

    Saffuan, W. A.; Muthusamy, K.; Salleh, N. A. Mohd; Nordin, N.

    2017-11-01

    Environmental degradation resulting from increasing sand mining activities and from the disposal of palm oil fuel ash (POFA), a solid waste generated by palm oil mills, needs to be resolved. Thus, the present research investigates the effect of ground palm oil fuel ash as a partial fine aggregate replacement on the workability, compressive strength, and flexural strength of concrete. Five concrete mixtures containing POFA as partial sand replacement, designed with 0%, 10%, 20%, 30% and 40% of POFA by weight of sand, were used in this experimental work. The cube and beam specimens were cast and water-cured for up to 28 days before being subjected to compressive strength and flexural strength testing, respectively. Findings show that concrete workability decreases as the amount of POFA added becomes larger. It is worth noting that 10% POFA is the best amount to use as a partial fine aggregate replacement to produce concrete with enhanced strength.

  1. ExoCube INMS with Neutral Hydrogen Mode

    NASA Astrophysics Data System (ADS)

    Jones, S.; Paschalidis, N.; Rodriguez, M.; Sittler, E. C., Jr.; Chornay, D. J.; Cameron, T.; Uribe, P.; Nanan, G.; Noto, J.; Waldrop, L.; Mierkiewicz, E. J.; Gardner, D.; Nossal, S. M.; Puig-Suari, J.; Bellardo, J.

    2015-12-01

    The ExoCube mission launched on Jan 31 2015 into a polar orbit to acquire global knowledge of in situ densities of neutral and ionized H, He, and O in the upper ionosphere and lower exosphere. The CubeSat platform is used in combination with incoherent scatter radar and optical ground stations distributed throughout the Americas. ExoCube seeks to obtain the first in situ measurement of neutral exospheric hydrogen and will measure in situ atomic oxygen for the first time in decades. The compact Ion and Neutral Mass Spectrometer (INMS) developed by GSFC uses the gated Time of Flight technique for in situ measurements of ions and neutrals (H, He, N, O, N2, O2) with M/dM of approximately 10. The compact sensor has a dual symmetric configuration with ion and neutral sensor heads. Neutral particles are ionized by electron impact using a thermionic emitter. In situ measurements of neutral hydrogen are notoriously difficult as historically the signal has been contaminated by hydrogen outgassing which persists even years after commissioning. In order to obtain neutral atmospheric hydrogen fluxes, either the atmospheric peak and outgassing peak must be well resolved, or the outgassing component subtracted off. The ExoCube INMS employs a separate mode, specifically for measuring neutral Hydrogen. The details of this mode and lessons learned will be presented as well as in flight instrument validation data for the neutral channel and preliminary flight ion spectra. At the time of abstract submission, the ExoCube spacecraft is currently undergoing attitude control maneuvers to orient INMS in the ram direction for science operations.

  2. Effect of palm oil fuel ash on compressive strength of palm oil boiler stone lightweight aggregate concrete

    NASA Astrophysics Data System (ADS)

    Muthusamy, K.; Zamri, N. A.; Kusbiantoro, A.; Lim, N. H. A. S.; Ariffin, M. A. Mohd

    2018-04-01

    Both palm oil fuel ash (POFA) and palm oil boiler stone (POBS) are by-products continuously generated in large amounts by local palm oil mills. Both by-products are usually disposed of as profitless waste and considered a nuisance to the environment. The present research investigates the workability and compressive strength performance of lightweight aggregate concrete (LWAC) made with palm oil boiler stone, known as palm oil boiler stone lightweight aggregate concrete (POBS LWAC), containing various contents of palm oil fuel ash. The control specimens, POBS LWAC of grade 60, were produced using 100% OPC. Then, another four mixes were prepared with POFA percentages of 10%, 20%, 30% and 40% by weight of cement. Fresh mixes were subjected to a slump test to determine workability before being cast in the form of cubes. All specimens were then water-cured for up to 28 days and tested for compressive strength. It was found that utilizing an optimum amount of POFA in POBS LWAC improves the workability and compressive strength of the concrete. However, inclusion of POFA beyond the optimum amount is not recommended, as it increases the water demand, leading to lower workability and reduced strength.

  3. Thin film growth of CaFe2As2 by molecular beam epitaxy

    NASA Astrophysics Data System (ADS)

    Hatano, T.; Kawaguchi, T.; Fujimoto, R.; Nakamura, I.; Mori, Y.; Harada, S.; Ujihara, T.; Ikuta, H.

    2016-01-01

    Film growth of CaFe2As2 was realized by molecular beam epitaxy on six different substrates that have a wide variation in the lattice mismatch to the target compound. By carefully adjusting the Ca-to-Fe flux ratio, we obtained single-phase thin films for most of the substrates. Interestingly, an expansion of the CaFe2As2 lattice to the out-of-plane direction was observed for all films, even when an opposite strain was expected. A detailed microstructure observation of the thin film grown on MgO by transmission electron microscope revealed that it consists of cube-on-cube and 45°-rotated domains. The latter domains were compressively strained in plane, which caused a stretching along the c-axis direction. Because the domains were well connected across the boundary with no appreciable discontinuity, we think that the out-of-plane expansion in the 45°-rotated domains exerted a tensile stress on the other domains, resulting in the unexpectedly large c-axis lattice parameter, despite the apparently opposite lattice mismatch.

  4. Wave Impact on a Wall: Comparison of Experiments with Similarity Solutions

    NASA Astrophysics Data System (ADS)

    Wang, A.; Duncan, J. H.; Lathrop, D. P.

    2014-11-01

    The impact of a steep water wave on a fixed, partially submerged cube is studied with experiments and theory. The temporal evolution of the water surface profile upstream of the front face of the cube in its center plane is measured with a cinematic laser-induced fluorescence technique using frame rates up to 4,500 Hz. For a small range of cube positions, the surface profiles are found to form a nearly circular arc with upward curvature between the front face of the cube and a point just downstream of the wave crest. As the crest approaches the cube, the effective radius of this portion of the profile decreases rapidly. At the same time, the portion of the profile that is upstream of the crest approaches a straight line with a downward slope of about 15°. As the wave impact continues, the circular arc shrinks to zero radius with very high acceleration and a sudden transition to a high-speed vertical jet occurs. This flow singularity is modeled with a power-law scaling in time, which is used to create a time-independent system of equations of motion. The scaled governing equations are solved numerically, and the similarly scaled measured free-surface shapes are favorably compared with the solutions. The support of the Office of Naval Research is gratefully acknowledged.

  5. NEUDOSE: A CubeSat Mission for Dosimetry of Charged Particles and Neutrons in Low-Earth Orbit.

    PubMed

    Hanu, A R; Barberiz, J; Bonneville, D; Byun, S H; Chen, L; Ciambella, C; Dao, E; Deshpande, V; Garnett, R; Hunter, S D; Jhirad, A; Johnston, E M; Kordic, M; Kurnell, M; Lopera, L; McFadden, M; Melnichuk, A; Nguyen, J; Otto, A; Scott, R; Wagner, D L; Wiendels, M

    2017-01-01

    During space missions, astronauts are exposed to a stream of energetic and highly ionizing radiation particles that can suppress immune system function, increase cancer risks and even induce acute radiation syndrome if the exposure is large enough. As human exploration goals shift from missions in low-Earth orbit (LEO) to long-duration interplanetary missions, radiation protection remains one of the key technological issues that must be resolved. In this work, we introduce the NEUtron DOSimetry & Exploration (NEUDOSE) CubeSat mission, which will provide new measurements of dose and space radiation quality factors to improve the accuracy of cancer risk projections for current and future space missions. The primary objective of the NEUDOSE CubeSat is to map the in situ lineal energy spectra produced by charged particles and neutrons in LEO where most of the preparatory activities for future interplanetary missions are currently taking place. To perform these measurements, the NEUDOSE CubeSat is equipped with the Charged & Neutral Particle Tissue Equivalent Proportional Counter (CNP-TEPC), an advanced radiation monitoring instrument that uses active coincidence techniques to separate the interactions of charged particles and neutrons in real time. The NEUDOSE CubeSat, currently under development at McMaster University, provides a modern approach to test the CNP-TEPC instrument directly in the unique environment of outer space while simultaneously collecting new georeferenced lineal energy spectra of the radiation environment in LEO.

  6. Optical Alignment of the Global Precipitation Measurement (GPM) Star Trackers

    NASA Technical Reports Server (NTRS)

    Hetherington, Samuel; Osgood, Dean; McMann, Joe; Roberts, Viki; Gill, James; Mclean, Kyle

    2013-01-01

    The optical alignment of the star trackers on the Global Precipitation Measurement (GPM) core spacecraft at NASA Goddard Space Flight Center (GSFC) was challenging due to the layout and structural design of the GPM Lower Bus Structure (LBS) in which the star trackers are mounted as well as the presence of the star tracker shades that blocked line-of-sight to the primary star tracker optical references. The initial solution was to negotiate minor changes in the original LBS design to allow for the installation of a removable item of ground support equipment (GSE) that could be installed whenever measurements of the star tracker optical references were needed. However, this GSE could only be used to measure secondary optical reference cube faces not used by the star tracker vendor to obtain the relationship information and matrix transformations necessary to determine star tracker alignment. Unfortunately, due to unexpectedly large orthogonality errors between the measured secondary adjacent cube faces and the lack of cube calibration data, we required a method that could be used to measure the same reference cube faces as originally measured by the vendor. We describe an alternative technique to theodolite auto-collimation for measurement of an optical reference mirror pointing direction when normal incidence measurements are not possible. This technique was used to successfully align the GPM star trackers and has been used on a number of other NASA flight projects. We also discuss alignment theory as well as a GSFC-developed theodolite data analysis package used to analyze angular metrology data.

  7. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
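    The kind of rate measurement the study describes can be mimicked with Python's standard-library codecs standing in for the Blosc/XZ toolchains (FPZIP and ZFP are lossy and have no stdlib equivalent, so this sketch covers lossless only, on synthetic particle-like data):

```python
import lzma
import zlib

import numpy as np

# Synthetic "particle positions": a smooth random walk, so successive
# float32 values are correlated and partially compressible.
rng = np.random.default_rng(4)
positions = np.cumsum(rng.normal(0, 0.01, 100_000)).astype(np.float32)
raw = positions.tobytes()

# Compression rate = uncompressed size / compressed size.
for name, codec in [("zlib", zlib.compress), ("lzma", lzma.compress)]:
    ratio = len(raw) / len(codec(raw))
    print(f"{name}: {ratio:.2f}x")

assert len(lzma.compress(raw)) < len(raw)
```

    For an honest comparison in the in-situ setting, one would also time each `compress` call, since throughput, not just the ratio, decides whether a codec fits inside a simulation time step.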

  8. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: first, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
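    The encoder/decoder loop described above can be sketched directly (an illustrative toy with a trivial previous-sample predictor, not the NASA algorithm): the encoder predicts from the previously *reconstructed* sample, exactly as the decoder will, so the error stays bounded by the quantizer half-width delta.

```python
def encode(samples, delta):
    """Near-lossless predictive encoder: quantize each residual with
    step 2*delta + 1 and track the decoder's reconstruction."""
    step = 2 * delta + 1
    recon, codes = 0, []
    for s in samples:
        residual = s - recon               # predictor: previous recon
        q = round(residual / step)         # quantized residual
        codes.append(q)
        recon = recon + q * step           # mirror the decoder
    return codes

def decode(codes, delta):
    """Rebuild samples from quantized residuals."""
    step = 2 * delta + 1
    recon, out = 0, []
    for q in codes:
        recon = recon + q * step
        out.append(recon)
    return out

signal = [10, 12, 15, 15, 14, 20, 30]
delta = 2
restored = decode(encode(signal, delta), delta)
# Every reconstructed sample is within +/- delta of the original.
assert all(abs(s - r) <= delta for s, r in zip(signal, restored))
```

    Setting delta to 0 makes the quantizer the identity, which recovers the lossless mode of the original technique.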

  9. NeuCube: a spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data.

    PubMed

    Kasabov, Nikola K

    2014-04-01

    The brain functions as a spatio-temporal information processing machine. Spatio- and spectro-temporal brain data (STBD) are the most commonly collected data for measuring brain response to external stimuli. An enormous amount of such data has already been collected, including brain structural and functional data under different conditions, molecular and genetic data, in an attempt to make progress in medicine, health, cognitive science, engineering, education, neuro-economics, Brain-Computer Interfaces (BCI), and games. Yet, there is no unifying computational framework to deal with all these types of data in order to better understand the data and the processes that generated it. Standard machine learning techniques have only partially succeeded, and they were not designed in the first instance to deal with such complex data. Therefore, there is a need for a new paradigm to deal with STBD. This paper reviews some methods of spiking neural networks (SNN) and argues that SNN are suitable for the creation of a unifying computational framework for learning and understanding of various STBD, such as EEG, fMRI, genetic, DTI, MEG, and NIRS, in their integration and interaction. One of the reasons is that SNN use the same computational principle that generates STBD, namely spiking information processing. This paper introduces a new SNN architecture, called NeuCube, for the creation of concrete models to map, learn and understand STBD. A NeuCube model is based on a 3D evolving SNN that is an approximate map of the structural and functional areas of interest of the brain related to the modelled STBD. Gene information is included optionally in the form of gene regulatory networks (GRN) if this is relevant to the problem and the data. A NeuCube model learns from STBD and creates connections between clusters of neurons that manifest chains (trajectories) of neuronal activity.
Once learning is applied, a NeuCube model can reproduce these trajectories, even if only part of the input STBD or the stimuli data is presented, thus acting as an associative memory. The NeuCube framework can be used not only to discover functional pathways from data, but also as a predictive system of brain activities, to predict and possibly prevent certain events. Analysis of the internal structure of a model after training can reveal important spatio-temporal relationships 'hidden' in the data. NeuCube will allow the integration in one model of various brain data, information and knowledge, related to a single subject (personalized modeling) or to a population of subjects. The use of NeuCube for classification of STBD is illustrated in a case study problem of EEG data. NeuCube models result in better accuracy of STBD classification than standard machine learning techniques. They are robust to noise (so typical in brain data) and facilitate a better interpretation of the results and understanding of the STBD and the brain conditions under which the data were collected. Future directions for the use of SNN for STBD are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. The CuSPED Mission: CubeSat for GNSS Sounding of the Ionosphere-Plasmasphere Electron Density

    NASA Technical Reports Server (NTRS)

    Gross, Jason N.; Keesee, Amy M.; Christian, John A.; Gu, Yu; Scime, Earl; Komjathy, Attila; Lightsey, E. Glenn; Pollock, Craig J.

    2016-01-01

    The CubeSat for GNSS Sounding of Ionosphere-Plasmasphere Electron Density (CuSPED) is a 3U CubeSat mission concept developed in response to the NASA Heliophysics program's decadal science goal of determining the dynamics and coupling of the Earth's magnetosphere, ionosphere, and atmosphere and their response to solar and terrestrial inputs. The mission was formulated through a collaboration between West Virginia University, Georgia Tech, NASA GSFC and NASA JPL, and features a 3U CubeSat that hosts both a miniaturized space-capable Global Navigation Satellite System (GNSS) receiver for topside atmospheric sounding and a Thermal Electron Capped Hemispherical Spectrometer (TECHS) for in situ electron precipitation measurements. These two complementary measurement techniques will provide data for constraining ionosphere-magnetosphere coupling models and will also enable studies of the local plasma environment and spacecraft charging, a phenomenon known to lead to significant errors in the measurement of low-energy charged species by instruments aboard spacecraft traversing the ionosphere. This paper provides an overview of the concept, including its science motivation and implementation.

  11. NASA Tech Briefs, June 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics covered include: iGlobe Interactive Visualization and Analysis of Spatial Data; Broad-Bandwidth FPGA-Based Digital Polyphase Spectrometer; Small Aircraft Data Distribution System; Earth Science Datacasting v2.0; Algorithm for Compressing Time-Series Data; Onboard Science and Applications Algorithm for Hyperspectral Data Reduction; Sampling Technique for Robust Odorant Detection Based on MIT RealNose Data; Security Data Warehouse Application; Integrated Laser Characterization, Data Acquisition, and Command and Control Test System; Radiation-Hard SpaceWire/Gigabit Ethernet-Compatible Transponder; Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager; High-Voltage, Low-Power BNC Feedthrough Terminator; SpaceCube Mini; Dichroic Filter for Separating W-Band and Ka-Band; Active Mirror Predictive and Requirement Verification Software (AMP-ReVS); Navigation/Prop Software Suite; Personal Computer Transport Analysis Program; Pressure Ratio to Thermal Environments; Probabilistic Fatigue Damage Program (FATIG); ASCENT Program; JPL Genesis and Rapid Intensification Processes (GRIP) Portal; Data::Downloader; Fault Tolerance Middleware for a Multi-Core System; DspaceOgreTerrain 3D Terrain Visualization Tool; Trick Simulation Environment 07; Geometric Reasoning for Automated Planning; Water Detection Based on Color Variation; Single-Layer, All-Metal Patch Antenna Element with Wide Bandwidth; Scanning Laser Infrared Molecular Spectrometer (SLIMS); Next-Generation Microshutter Arrays for Large-Format Imaging and Spectroscopy; Detection of Carbon Monoxide Using Polymer-Composite Films with a Porphyrin-Functionalized Polypyrrole; Enhanced-Adhesion Multiwalled Carbon Nanotubes on Titanium Substrates for Stray Light Control; Three-Dimensional Porous Particles Composed of Curved, Two-Dimensional, Nano-Sized Layers for Li-Ion Batteries; and Ultra-Lightweight Nanocomposite Foams and Sandwich Structures for Space Structure Applications.

  12. Potential for reducing the numbers of SiPM readout surfaces of laser-processed X'tal cube PET detectors.

    PubMed

    Hirano, Yoshiyuki; Inadama, Naoko; Yoshida, Eiji; Nishikido, Fumihiko; Murayama, Hideo; Watanabe, Mitsuo; Yamaya, Taiga

    2013-03-07

    We are developing a three-dimensional (3D) position-sensitive detector with isotropic spatial resolution, the X'tal cube. Originally, our design consisted of a crystal block whose six surfaces were all covered with arrays of multi-pixel photon counters (MPPCs). In this paper, we examined the feasibility of reducing the number of surfaces to which an MPPC array must be coupled, with the aim of reducing the complexity of the system. We evaluated two kinds of laser-processed X'tal cubes, with 3 mm and 2 mm segment pitches, while reducing the number of 4 × 4 MPPC arrays from six surfaces down to two. The sub-surface laser engraving technique was used to fabricate 3D grids in a monolithic crystal block. The 3D flood histograms were obtained by the Anger-type calculation. Two figures of merit, the peak-to-valley ratio and the distance-to-width ratio, were used to evaluate crystal identification performance. Clear separation was obtained even in the 2-surface configuration for the 3 mm X'tal cube, with average peak-to-valley and distance-to-width ratios of 6.7 and 2.6, respectively. For the 2 mm X'tal cube, both the 6-surface and the 2-surface configurations could separate all crystals, but the flood histograms were relatively shrunken in the 2-surface case, especially on planes parallel to the sensitive surfaces. However, the minimum peak-to-valley ratio did not fall below 3.9. We concluded that reducing the number of MPPC readout surfaces is feasible for both the 3 mm and the 2 mm X'tal cubes.
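
    The Anger-type calculation mentioned above is a signal-weighted centroid over the photosensor positions. A minimal sketch (the sensor layout and signal values are hypothetical, not the detector's actual geometry):

```python
import numpy as np

def anger_position(signals, sensor_positions):
    """Anger-type estimate: the interaction position is the
    signal-weighted centroid of the photosensor positions."""
    s = np.asarray(signals, dtype=float)
    p = np.asarray(sensor_positions, dtype=float)
    return (s[:, None] * p).sum(axis=0) / s.sum()

# four MPPC pixels at the corners of a 2 mm square face (hypothetical layout)
pos = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
assert np.allclose(anger_position([1, 1, 1, 1], pos), [1.0, 1.0])  # equal light -> centre
assert np.allclose(anger_position([3, 1, 3, 1], pos), [0.5, 1.0])  # skewed toward bright side
```

    In the real detector the same centroid is formed in all three dimensions from the MPPC arrays on the readout surfaces, producing the 3D flood histogram.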

  13. An efficient numerical technique for calculating thermal spreading resistance

    NASA Technical Reports Server (NTRS)

    Gale, E. H., Jr.

    1977-01-01

    An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative), and the computational work required varies with the square of the order of the coefficient matrix. For standard inversion techniques (e.g., Gaussian elimination, Jordan, Doolittle), the computational work varies with the cube of this order.
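
    The complexity argument above — a direct method that exploits the structure of finite-difference matrices beats generic cubic-cost elimination — can be illustrated with the tridiagonal (Thomas) algorithm, which solves a 1D finite-difference Poisson system in linear time. This is a standard textbook sketch, not the paper's method:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Direct O(n) solve of a tridiagonal system (sub-diagonal a,
    diagonal b, super-diagonal c, right-hand side d); contrast with
    O(n^3) for generic Gaussian elimination."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson u'' = -2 with u(0) = u(1) = 0; exact solution u = x(1 - x),
# which the second-difference scheme reproduces exactly for quadratics.
n = 99
h = 1.0 / (n + 1)
u = thomas_solve(np.ones(n), -2.0 * np.ones(n), np.ones(n),
                 -2.0 * np.ones(n) * h * h)
x = np.linspace(h, 1 - h, n)
assert np.allclose(u, x * (1 - x), atol=1e-10)
```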

  14. A discrete fibre dispersion method for excluding fibres under compression in the modelling of fibrous tissues.

    PubMed

    Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A

    2018-01-01

    Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, and that is achieved with a substantial reduction of computational cost. © 2018 The Author(s).
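
    The discrete-fibre treatment can be sketched as follows. An equal-angle hemisphere grid stands in for the paper's spherical-triangle discretization, and the uniform fibre density, stiffness constant `k`, and quadratic energy function are illustrative assumptions, not the paper's constitutive model:

```python
import numpy as np

def fibre_energy(F, n_az=16, n_el=8, k=1.0):
    """Sum fibre strain energy over a discretized unit hemisphere,
    excluding fibres under compression (squared stretch <= 1).
    Illustrative sketch of the discrete fibre dispersion idea."""
    az = (np.arange(n_az) + 0.5) * 2 * np.pi / n_az
    el = (np.arange(n_el) + 0.5) * (np.pi / 2) / n_el
    E = 0.0
    for th in el:
        for ph in az:
            # representative fibre direction of this elementary area
            N = np.array([np.cos(th) * np.cos(ph),
                          np.cos(th) * np.sin(ph),
                          np.sin(th)])
            rho = np.cos(th)             # patch area weight (uniform dispersion)
            lam2 = N @ (F.T @ F) @ N     # squared fibre stretch I4 = N.C.N
            if lam2 > 1.0:               # include tensile fibres only
                E += rho * 0.5 * k * (lam2 - 1.0) ** 2
    return E

# incompressible uniaxial tension along x (assumed test deformation)
F_tension = np.diag([1.2, 1.0 / np.sqrt(1.2), 1.0 / np.sqrt(1.2)])
assert fibre_energy(np.eye(3)) <= 1e-12          # undeformed: no fibre energy
assert fibre_energy(F_tension) > fibre_energy(np.eye(3))
```

    The tension-compression split is exactly the discrete exclusion the abstract describes: each representative direction is tested once, rather than integrating an exclusion condition over the continuous sphere.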

  15. Effect of amorphous silica ash used as a partial replacement for cement on the compressive and flexural strengths of cement mortar.

    NASA Astrophysics Data System (ADS)

    Usman, Aliyu; Ibrahim, Muhammad B.; Bala, Nura

    2018-04-01

    This research is aimed at investigating the effect of using amorphous silica ash (ASA) obtained from rice husk as a partial replacement of ordinary Portland cement (OPC) on the compressive and flexural strength of mortar. ASA was used in partial replacement of OPC at 2.5, 5, 7.5 and 10 percent, and these partial replacements were used to produce Cement-ASA mortar. ASA was found to contain all major chemical compounds found in cement with the exception of alumina: SiO2 (91.5%), CaO (2.84%) and Fe2O3 (1.96%), with a loss on ignition (LOI) of 9.18%. It also contains other minor oxides found in cement. The tests on hardened mortar were destructive in nature and included the flexural strength test on prismatic beams (40 mm × 40 mm × 160 mm) and the compressive strength test on cubes (40 mm × 40 mm, using auxiliary steel plates) at 2, 7, 14 and 28 days of curing. The flexural and compressive strengths of Cement-ASA mortar were found to increase with curing time and to decrease with increasing cement replacement by ASA. It was observed that the 5 percent replacement of cement with ASA attained the highest strength at all curing ages, and all the percentage replacements attained the targeted 28-day compressive strength of 6 N/mm2 for cement mortar.

  16. Skin Parameter Map Retrieval from a Dedicated Multispectral Imaging System Applied to Dermatology/Cosmetology

    PubMed Central

    2013-01-01

    In vivo quantitative assessment of skin lesions is an important step in the evaluation of skin condition. An objective measurement device can serve as a valuable tool for skin analysis. We propose an explorative new multispectral camera specifically developed for dermatology/cosmetology applications. The multispectral imaging system provides images of skin reflectance at different wavebands covering the visible and near-infrared domains. It is coupled with a neural network-based algorithm for the reconstruction of the reflectance cube of cutaneous data. This cube contains only the skin optical reflectance spectrum in each pixel of the bidimensional spatial information. The reflectance cube is analyzed by an algorithm based on a Kubelka-Munk model combined with an evolutionary algorithm. The technique allows quantitative measurement of cutaneous tissue and retrieves five skin parameter maps: melanin concentration, epidermis thickness, dermis thickness, haemoglobin concentration, and oxygenated haemoglobin. The results retrieved on healthy participants by the algorithm are in good accordance with data from the literature. The usefulness of the developed technique was proved in two experiments: a clinical study based on vitiligo and melasma skin lesions, and a skin oxygenation experiment (induced ischemia) with healthy participants in which tissue was recorded both at the normal state and under temporarily induced ischemia. PMID:24159326
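
    The Kubelka-Munk relation at the core of such retrievals links the diffuse reflectance R of an optically thick layer to its absorption and scattering coefficients K and S via K/S = (1 − R)² / (2R). A minimal sketch of the forward and inverse relations (the paper's full model, coupled with an evolutionary optimizer over layered skin parameters, is far more elaborate):

```python
import numpy as np

def km_reflectance(K, S):
    """Diffuse reflectance of a semi-infinite layer (Kubelka-Munk):
    with a = K/S, R = 1 + a - sqrt(a^2 + 2a)."""
    a = K / S
    return 1.0 + a - np.sqrt(a * a + 2.0 * a)

def km_ks(R):
    """Inverse relation: K/S = (1 - R)^2 / (2 R)."""
    return (1.0 - R) ** 2 / (2.0 * R)

R = km_reflectance(0.3, 1.5)        # K/S = 0.2
assert 0.0 < R < 1.0
assert abs(km_ks(R) - 0.2) < 1e-12  # forward and inverse are consistent
```

    A retrieval algorithm evaluates the forward model per waveband and per candidate parameter set, then lets the optimizer minimize the misfit to the measured reflectance spectrum.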

  17. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
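
    One plausible reading of the splitting-and-remapping step can be sketched as follows; the threshold-based decomposition and the auxiliary mask shown here are illustrative assumptions, not necessarily the paper's exact scheme:

```python
import numpy as np

def split_and_remap(img, t):
    """Split a high-contrast image at gray level `t` into two sub-images,
    remapping the upper range down so each sub-image spans fewer gray
    levels (hypothetical reading of the decomposition)."""
    mask = img >= t
    low = np.where(mask, 0, img)         # pixels below the threshold
    high = np.where(mask, img - t, 0)    # pixels >= t, shifted down by t
    return low, high, mask

def merge(low, high, mask, t):
    """Lossless inverse of the decomposition."""
    return np.where(mask, high + t, low)

img = np.array([[0, 50], [200, 255]], dtype=np.int32)
low, high, mask = split_and_remap(img, 128)
assert np.array_equal(merge(low, high, mask, 128), img)  # decomposition is invertible
```

    Each sub-image then has a narrower dynamic range, which is the property the decomposition exploits before handing the data to the transform or vector-quantization coder.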

  18. Testing and validation of multi-lidar scanning strategies for wind energy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Bonin, Timothy A.; Klein, Petra M.

    Several factors cause lidars to measure different values of turbulence than an anemometer on a tower, including volume averaging, instrument noise, and the use of a scanning circle to estimate the wind field. One way to avoid the use of a scanning circle is to deploy multiple scanning lidars and point them toward the same volume in space to collect velocity measurements and extract high-resolution turbulence information. This paper explores the use of two multi-lidar scanning strategies, the tri-Doppler technique and the virtual tower technique, for measuring 3-D turbulence. In Summer 2013, a vertically profiling Leosphere WindCube lidar and three Halo Photonics Streamline lidars were operated at the Southern Great Plains Atmospheric Radiation Measurement site to test these multi-lidar scanning strategies. During the first half of the field campaign, all three scanning lidars were pointed at approximately the same point in space and a tri-Doppler analysis was completed to calculate the three-dimensional wind vector every second. Next, all three scanning lidars were used to build a “virtual tower” above the WindCube lidar. Results indicate that the tri-Doppler technique measures higher values of horizontal turbulence than the WindCube lidar under stable atmospheric conditions, reduces variance contamination under unstable conditions, and can measure high-resolution profiles of mean wind speed and direction. The virtual tower technique provides adequate turbulence information under stable conditions but cannot capture the full temporal variability of turbulence experienced under unstable conditions because of the time needed to readjust the scans.
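
    The tri-Doppler retrieval reduces to a small linear system: each lidar measures the projection of the wind vector onto its beam direction, so three non-coplanar beams determine the full vector. A sketch with hypothetical beam geometry (not the campaign's actual setup):

```python
import numpy as np

def tri_doppler(beam_dirs, v_radial):
    """Recover the 3-D wind vector u from three radial velocities:
    each lidar measures v_r_i = e_i . u along its beam unit vector e_i,
    so with non-coplanar beams the system E u = v_r is invertible."""
    E = np.asarray(beam_dirs, dtype=float)
    return np.linalg.solve(E, np.asarray(v_radial, dtype=float))

# three beams converging on the same point from different directions (hypothetical)
e = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.2],
              [0.7, 0.7, 0.4]])
e /= np.linalg.norm(e, axis=1, keepdims=True)   # unit beam vectors
u_true = np.array([5.0, -2.0, 0.5])             # (east, north, up) wind in m/s
vr = e @ u_true                                 # what each lidar would measure
assert np.allclose(tri_doppler(e, vr), u_true)
```

    In practice the beams are nearly horizontal, so the vertical component is the most error-prone; the conditioning of `E` is what the scan geometry has to manage.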

  19. Asphalt dust waste material as a paste volume in developing sustainable self compacting concrete (SCC)

    NASA Astrophysics Data System (ADS)

    Ismail, Isham; Shahidan, Shahiron; Bahari, Nur Amira Afiza Saiful

    2017-12-01

    Self-compacting concrete (SCC) mixtures are usually designed to have high workability in the fresh state through higher volumes of paste in the concrete mixture. Asphalt dust waste (ADW) is one of the waste materials generated during the production of asphalt premix. This fine powder waste contributes to environmental problems today; however, it can be utilized in the development of sustainable and economical SCC. This paper focuses on preliminary evaluations of the fresh properties and compressive strength of the developed SCC at 7 and 28 days. 144 cube samples from 24 mixtures with varying water-binder ratios (0.2, 0.3 and 0.4) and ADW volumes (0% to 100%) were prepared. MD940 and MD950 showed satisfactory performance in the slump flow, J-Ring, L-Box and V-Funnel tests in the fresh state. The compressive strength after 28 days for MD940 and MD950 was 36.9 MPa and 28.0 MPa respectively. In conclusion, the use of ADW as paste volume should be limited, and a higher water-binder ratio will significantly reduce the compressive strength.

  20. Optimization of compressive 4D-spatio-spectral snapshot imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing

    2017-10-01

    In this paper, a modified 3D computational reconstruction method in the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scenes. Then, these elemental images with one-dimensional spectral information and different perspectives are captured by the coded aperture snapshot spectral imager (CASSI), which senses the spectral data cube onto a compressive 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only one single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.
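
    The CASSI measurement step can be sketched with the standard single-disperser forward model: each spectral band is masked by the coded aperture, sheared along one spatial axis by its band index, and all bands are summed on the 2D detector. The toy dimensions are illustrative:

```python
import numpy as np

def cassi_forward(cube, code):
    """Single-shot CASSI measurement (illustrative single-disperser model):
    mask each band with the coded aperture, shift band l by l pixels,
    and integrate over wavelength on the detector."""
    H, W, L = cube.shape
    y = np.zeros((H, W + L - 1))
    for l in range(L):
        y[:, l:l + W] += cube[:, :, l] * code
    return y

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 4))                   # toy spatio-spectral data cube
code = (rng.random((8, 8)) > 0.5).astype(float)  # binary coded aperture
y = cassi_forward(cube, code)
assert y.shape == (8, 11)                      # detector width is W + L - 1
assert np.isclose(y.sum(), (cube * code[:, :, None]).sum())  # photons conserved
```

    The reconstruction problem is then to invert this many-to-one mapping under a sparsity prior, which is where the compressive sensing machinery enters.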

  1. Utilization of fly ash as partial sand replacement in oil palm shell lightweight aggregate concrete

    NASA Astrophysics Data System (ADS)

    Nazrin Akmal, A. Z. Muhammad; Muthusamy, K.; Mat Yahaya, F.; Hanafi, H. Mohd; Nur Azzimah, Z.

    2017-11-01

    The increasing demand for river sand in the construction sector has inspired the current research to find an alternative material to reduce the use of natural sand in oil palm shell lightweight aggregate concrete (OPS LWAC) production. Fly ash, a by-product generated from coal power plants that poses a negative impact on the environment when disposed of as waste, was used in this research. The effect of fly ash content as partial sand replacement on the workability and compressive strength of OPS lightweight aggregate concrete was investigated. Four concrete mixes containing various percentages of fly ash, namely 0%, 10%, 20% and 30% by weight of sand, were used in the experimental work. All mixes were cast in the form of cubes before being subjected to water curing until the testing age. Compressive strength tests were conducted at 1, 3, 7 and 28 days. The findings show that the workability of the OPS LWAC decreases when more fly ash is used as sand replacement. It was found that adding 10% fly ash as sand replacement resulted in the best compressive strength of OPS LWAC, higher than that of the control mix.

  2. Effect of Elevated Temperature on the Residual Properties of Quartzite, Granite and Basalt Aggregate Concrete

    NASA Astrophysics Data System (ADS)

    Masood, A.; Shariq, M.; Alam, M. Masroor; Ahmad, T.; Beg, A.

    2018-05-01

    In the present study, experimental investigations have been carried out to determine the effect of elevated temperature on the residual properties of quartzite, granite and basalt aggregate concrete mixes. Ultrasonic pulse velocity and unstressed residual compressive strength tests on cube specimens have been conducted at ambient temperature and after a single heating-cooling cycle at elevated temperatures ranging from 200 to 600 °C. The relationship between ultrasonic pulse velocity and residual compressive strength has been developed for all concrete mixes. Scanning electron microscopy was also carried out to study the microstructure of quartzite, granite and basalt aggregate concrete subjected to a single heating-cooling cycle of elevated temperature. The results show that the residual compressive strength of quartzite aggregate concrete was found to be higher than that of granite and basalt aggregate concrete at ambient temperature and at all elevated temperatures. It was also found that the loss of strength in concrete is due to the development of micro-cracks, which result in failure of the bond between the cement matrix and the coarse aggregate. Further, the basalt aggregate concrete showed lower strength due to its low affinity with Portland cement, ascribed to its ferro-magnesium-rich mineral composition.

  3. Reuse of waste iron as a partial replacement of sand in concrete.

    PubMed

    Ismail, Zainab Z; Al-Hashmi, Enas A

    2008-11-01

    One of the major environmental issues in Iraq is the large quantity of waste iron resulting from the industrial sector which is deposited in domestic waste and in landfills. A series of 109 experiments and 586 tests were carried out in this study to examine the feasibility of reusing this waste iron in concrete. Overall, 130 kg of waste iron were reused to partially replace sand at 10%, 15%, and 20% in a total of 1703 kg concrete mixtures. The tests performed to evaluate waste-iron concrete quality included slump, fresh density, dry density, compressive strength, and flexural strength tests: 115 cubes of concrete were molded for the compressive strength and dry density tests, and 87 prisms were cast for the flexural strength tests. This work applied 3, 7, 14, and 28 days curing ages for the concrete mixes. The results confirm that reuse of solid waste material offers an approach to solving the pollution problems that arise from an accumulation of waste in a production site; in the meantime modified properties are added to the concrete. The results show that the concrete mixes made with waste iron had higher compressive strengths and flexural strengths than the plain concrete mixes.

  4. Application of alkaliphilic biofilm-forming bacteria to improve compressive strength of cement-sand mortar.

    PubMed

    Park, Sung-Jin; Chun, Woo-Young; Kim, Wha-Jung; Ghim, Sa-Youl

    2012-03-01

    The application of microorganisms in the field of construction materials is increasing rapidly worldwide; however, almost all studies have investigated bacterial sources with mineral-producing activity rather than organic substances. The distinctive benefit of using bacteria as an organic agent is that it could improve the durability of cement material. This study aimed to assess the use of biofilm-forming microorganisms as binding agents to increase the compressive strength of cement-sand material. We isolated 13 alkaliphilic biofilm-forming bacteria (ABB) from a cement tetrapod block in the West Sea, Korea. Using 16S rRNA sequence analysis, the ABB were partially identified as Bacillus algicola KNUC501 and Exiguobacterium marinum KNUC513. KNUC513 was selected for further study following analysis of pH and biofilm formation. Cement-sand mortar cubes containing KNUC513 exhibited greater compressive strength than those containing mineral-forming bacteria (Sporosarcina pasteurii and Arthrobacter crystallopoietes KNUC403). To determine the biofilm effect, DNase I was used to suppress the biofilm formation of KNUC513. Field emission scanning electron microscopy images revealed the direct involvement of organic-inorganic substances in the cement-sand mortar.

  5. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
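
    The block-transform stage can be sketched with an orthonormal DCT standing in for the Integer Cosine Transform (the ICT is an integer approximation of the DCT); uniform quantization of the transform coefficients is the lossy step. A hedged sketch, not the Galileo coder:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (rows = frequencies, columns = samples)."""
    j = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_transform_code(blk, q):
    """Lossy 8x8 block coding: forward 2-D DCT, uniform quantization
    with step q, dequantization, inverse DCT."""
    C = dct_matrix(blk.shape[0])
    coeffs = C @ blk @ C.T          # separable 2-D transform
    qc = np.round(coeffs / q)       # the lossy step
    return C.T @ (qc * q) @ C       # reconstruct

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)
rec = block_transform_code(block, q=8.0)
assert np.allclose(dct_matrix(8) @ dct_matrix(8).T, np.eye(8))  # orthonormal
assert np.max(np.abs(rec - block)) < 64.0   # error bounded by quantization step
```

    In a full coder the quantized coefficients `qc` would then be entropy coded; the bit-allocation strategy, not shown here, is where schemes like the full-frame and ICT approaches differ.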

  6. Techniques for information extraction from compressed GPS traces : final report.

    DOT National Transportation Integrated Search

    2015-12-31

    Developing techniques for extracting information requires a good understanding of the methods used to compress the traces. Many techniques for compressing trace data consisting of position (i.e., latitude/longitude) and time values have been developed....
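
    As an example of the kind of compression such extraction techniques must contend with, the Ramer-Douglas-Peucker algorithm is a widely used position-trace simplifier (named here as a representative method; the report itself surveys several). Time values are omitted in this sketch:

```python
def douglas_peucker(points, eps):
    """Ramer-Douglas-Peucker polyline simplification: keep the endpoints,
    recurse on the interior point farthest from the chord if it deviates
    by more than `eps`. `points` are (x, y) tuples."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], 1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm  # distance to chord
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]     # drop all interior points
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right               # avoid duplicating the split point

trace = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 0)]
assert douglas_peucker(trace, 0.1) == [(0, 0), (4, 0)]   # coarse tolerance
assert douglas_peucker(trace, 0.01) == trace             # tight tolerance keeps all
```

    Spatio-temporal variants replace the perpendicular distance with a synchronized-Euclidean distance so the dropped points' timestamps can still be interpolated.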

  7. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  8. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
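
    One common spectral decorrelation choice in such comparisons is a Karhunen-Loeve (PCA) transform along the spectral axis; a minimal sketch (the adaptive wavelet spatial coder that would follow is omitted):

```python
import numpy as np

def spectral_pca_decorrelate(cube):
    """Decorrelate the spectral dimension of an (H, W, B) cube with a
    PCA (Karhunen-Loeve) transform: after it, most signal energy sits
    in the first few components, which a spatial coder then compresses."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    _, V = np.linalg.eigh(cov)       # eigenvectors, ascending eigenvalues
    V = V[:, ::-1]                   # principal components first
    Y = (X - mu) @ V
    return Y.reshape(H, W, B), V, mu

# toy cube whose four bands are perfectly correlated (scaled copies)
rng = np.random.default_rng(2)
base = rng.random((16, 16, 1))
cube = np.concatenate([base * w for w in (1.0, 0.8, 0.6, 0.4)], axis=2)
Y, V, mu = spectral_pca_decorrelate(cube)
energies = (Y.reshape(-1, 4) ** 2).sum(axis=0)
assert energies[0] > 100 * energies[1:].sum()  # energy compacts into one band
```

    The decorrelation matrix `V` and mean `mu` must be transmitted as side information so the decoder can invert the transform.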

  9. Multiport optical circulator by using polarizing beam splitter cubes as spatial walk-off polarizers.

    PubMed

    Chen, Jing-Heng; Chen, Kun-Huang; Lin, Jiun-You; Hsieh, Hsiang-Yung

    2010-03-10

    Optical circulators are necessary passive devices applied in optical communication systems. In the design of optical circulators, the implementation of the function of spatial walk-off polarizers is a key technique that significantly influences the performance and cost of a device. This paper proposes a design of a multiport optical circulator by using polarizing beam splitter cubes as spatial walk-off polarizers. To show the feasibility of the design, a prototype of a six-port optical circulator was fabricated. The insertion losses are 0.94-1.49 dB, the isolations are 25-51 dB, and return losses are 27.72 dB.

  10. Analysis of the multigroup model for muon tomography based threat detection

    NASA Astrophysics Data System (ADS)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-01

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
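
    The compared density signals are simple statistics of the per-voxel scattering-angle samples. A sketch of three such statistics on synthetic Gaussian scattering data (the widths are illustrative, and the exact definitions used in the paper may differ):

```python
import numpy as np

def density_signals(angles_mrad):
    """Per-voxel density statistics of muon scattering angles
    (illustrative): mean of squared angles, median of squared angles,
    and square of the mean absolute angle."""
    a2 = np.square(angles_mrad)
    return {"mean_sq": a2.mean(),
            "median_sq": np.median(a2),
            "mean_abs_sq": np.mean(np.abs(angles_mrad)) ** 2}

rng = np.random.default_rng(3)
air = rng.normal(0.0, 2.0, 5000)        # small scattering: low-Z material
tungsten = rng.normal(0.0, 20.0, 5000)  # large scattering: high-Z cube
assert density_signals(tungsten)["mean_sq"] > density_signals(air)["mean_sq"]
assert density_signals(tungsten)["median_sq"] > density_signals(air)["median_sq"]
```

    Scattering width grows with material density and atomic number, which is why each statistic separates the tungsten voxels from air; the multigroup fit goes further by modeling the full angular distribution across muon energy groups.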

  11. Strength and fracture energy of foamed concrete incorporating rice husk ash and polypropylene mega-mesh 55

    NASA Astrophysics Data System (ADS)

    Jaini, Z. M.; Rum, R. H. M.; Boon, K. H.

    2017-10-01

    This paper presents the utilization of rice husk ash (RHA) as sand replacement and polypropylene mega-mesh 55 (PMM) as fiber reinforcement in foamed concrete. Its high pozzolanic reactivity and its ability to act as a filler make RHA a strategic material for enhancing the strength and durability of foamed concrete. Furthermore, the presence of PMM enhances the toughness of foamed concrete in resisting shrinkage and cracking. In this experimental study, cube and cylinder specimens were prepared for the compression and splitting-tensile tests, while notched beam specimens were cast for the three-point bending test. It was found that 40% RHA and 9 kg/m3 PMM contributed the highest strength and fracture energy. The compressive, tensile and flexural strengths are 32 MPa, 2.88 MPa and 6.68 MPa respectively, while the fracture energy reaches 42.19 N/m. The results indicate the high potential of RHA and PMM in enhancing the mechanical properties of foamed concrete.

  12. Mechanical properties of polymer-modified porous concrete

    NASA Astrophysics Data System (ADS)

    Ariffin, N. F.; Jaafar, M. F. Md.; Shukor Lim, N. H. Abdul; Bhutta, M. A. R.; Hussin, M. W.

    2018-04-01

    In this research work, polymer-modified porous concretes (permeable concretes) using polymer latex and redispersible polymer powder with a water-cement ratio of 30 %, polymer-cement ratios of 0 to 10 % and a cement content of 300 kg/m3 were prepared. The porous concrete was tested for compressive strength, flexural strength, water permeability and void ratio. Cube specimens of 100 mm × 100 mm × 100 mm and 150 mm × 150 mm × 150 mm, and beam specimens of 100 mm × 100 mm × 500 mm, were prepared for the respective tests. The test results show that the addition of polymer as a binder to porous concrete improves the strength properties and the coefficient of water permeability of polymer-modified porous concrete. It is concluded from the test results that an increase in compressive and flexural strengths and a decrease in the coefficient of water permeability of the polymer-modified porous concrete are clearly observed with increasing polymer-cement ratio.

  13. ³Cat-3/MOTS Nanosatellite Mission for Optical Multispectral and GNSS-R Earth Observation: Concept and Analysis.

    PubMed

    Castellví, Jordi; Camps, Adriano; Corbera, Jordi; Alamús, Ramon

    2018-01-06

    The ³Cat-3/MOTS (3: Cube, Cat: Catalunya, 3: 3rd CubeSat mission/Missió Observació Terra Satèl·lit) mission is a joint initiative between the Institut Cartogràfic i Geològic de Catalunya (ICGC) and the Universitat Politècnica de Catalunya-BarcelonaTech (UPC) to foster innovative Earth Observation (EO) techniques based on data fusion of Global Navigation Satellite Systems Reflectometry (GNSS-R) and optical payloads. It is based on a 6U CubeSat platform, roughly a 10 cm × 20 cm × 30 cm parallelepiped. Since 2012, there has been a fast-growing trend to use small satellites, especially nanosatellites, and in particular those following the CubeSat form factor. Small satellites possess intrinsic advantages over larger platforms in terms of cost, flexibility, and scalability, and may also enable constellations, trains, federations, or fractionated satellites or payloads based on a large number of individual satellites at an affordable cost. This work summarizes the mission analysis of ³Cat-3/MOTS, including its payload results, power budget (PB), thermal budget (TB), and data budget (DB). The mission analysis is aimed at transforming EO data into territorial climate variables (soil moisture and land cover change) at the best achievable spatio-temporal resolution.

  14. OLAP Cube Visualization of Hydrologic Data Catalogs

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Rodriguez, M.; Beran, B.; Valentine, D.; van Ingen, C.; Wallis, J. C.

    2007-12-01

    As part of the CUAHSI Hydrologic Information System project, we assemble comprehensive observations data catalogs that support CUAHSI data discovery services (WaterOneFlow services) and online mapping interfaces (e.g. the Data Access System for Hydrology, DASH). These catalogs describe several nation-wide data repositories that are important for hydrologists, including USGS NWIS and EPA STORET data collections. The catalogs contain a wealth of information reflecting the entire history and geography of hydrologic observations in the US. Managing such catalogs requires high-performance analysis and visualization technologies. OLAP (Online Analytical Processing) cubes, often called data cubes, are an approach to organizing and querying large multi-dimensional data collections. We have applied OLAP techniques, as implemented in Microsoft SQL Server 2005, to the analysis of the catalogs from several agencies. In this initial report, we focus on the OLAP technology as applied to catalogs, and on preliminary results of the analysis. Specifically, we describe the challenges of generating OLAP cube dimensions, and of defining aggregations and views for data catalogs as opposed to the observations data themselves. The initial results concern hydrologic data availability from the observations data catalogs. They reflect the geography and history of available data totals from the USGS NWIS and EPA STORET repositories, and the spatial and temporal dynamics of available measurements for several key nutrient-related parameters.
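
    The data-cube idea can be sketched in a few lines: pre-aggregate record counts along every combination of dimension values, using an explicit "ALL" member per dimension, so that any roll-up query becomes a single lookup. The toy catalog records and dimension choices below are invented for illustration; a production cube (e.g. in SQL Server Analysis Services) adds hierarchies, named measures, and incremental processing.

```python
from collections import Counter
from itertools import product

records = [
    ("NWIS", "nitrate", 2005), ("NWIS", "nitrate", 2006),
    ("NWIS", "phosphate", 2005), ("STORET", "nitrate", 2005),
    ("STORET", "phosphate", 2006), ("STORET", "phosphate", 2006),
]

# One cube cell per combination of dimension values; None is the "ALL"
# member, so every roll-up along (agency, parameter, year) is precomputed.
cube = Counter()
for agency, param, year in records:
    for cell in product((agency, None), (param, None), (year, None)):
        cube[cell] += 1

print(cube[(None, None, None)])         # grand total of catalog records
print(cube[("NWIS", None, None)])       # all NWIS records
print(cube[(None, "phosphate", 2006)])  # phosphate measurements in 2006
```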

  15. Observation of confinement effects through linear and nonlinear absorption spectroscopy in cuprous oxide

    NASA Astrophysics Data System (ADS)

    Sekhar, H.; Rakesh Kumar, Y.; Narayana Rao, D.

    2015-02-01

    Cuprous oxide nano clusters, micro cubes and micro particles were successfully synthesized by reducing copper (II) salt with ascorbic acid in the presence of sodium hydroxide via a co-precipitation method. The X-ray diffraction studies revealed the formation of a pure single-phase cubic structure. The Raman spectrum shows the inevitable presence of CuO on the surface of the Cu2O powders, which may have an impact on the stability of the phase. Transmission electron microscopy (TEM) data revealed that the morphology evolves from nano clusters to micro cubes and micro particles with increasing concentration of NaOH. Linear optical measurements show that the absorption peak maximum shifts towards red with changing morphology from nano clusters to micro cubes and micro particles. The nonlinear optical properties were studied using the open aperture Z-scan technique with 532 nm, 6 ns laser pulses. Samples exhibited saturable as well as reverse saturable absorption. The results show that the transition from SA to RSA is ascribed to excited-state absorption (ESA) induced by a two-photon absorption (TPA) process. Due to confinement effects (enhanced band gap), we observed an enhanced nonlinear absorption coefficient (βeff) in the case of nano clusters compared to their micro-cube and micro-particle counterparts.

  16. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body scans and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
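
    The NMSE figure of merit is simple to state in code. The sketch below uses one common normalization, by the energy of the original image; the dissertation's exact definition may differ, and the pixel values are toy data.

```python
def nmse(original, reconstructed):
    """Normalized mean-square error of the difference image."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o * o for o in original)
    return num / den

orig = [100, 120, 130, 110]   # toy pixel values
recon = [98, 121, 128, 112]   # after lossy compression and reconstruction
print(f"NMSE = {nmse(orig, recon):.6f}")
```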

  17. An image compression survey and algorithm switching based on scene activity

    NASA Technical Reports Server (NTRS)

    Hart, M. M.

    1985-01-01

    Data compression techniques are presented. A description of these techniques is provided along with a performance evaluation. The complexity of the hardware resulting from their implementation is also addressed. The compression effect on channel distortion and the applicability of these algorithms to real-time processing are presented. Also included is a proposed new direction for an adaptive compression technique for real-time processing.

  18. Resolution Study of a Hyperspectral Sensor using Computed Tomography in the Presence of Noise

    DTIC Science & Technology

    2012-06-14

    diffraction efficiency is dependent on wavelength. Compared to techniques developed by later work, simple algebraic reconstruction techniques were used...spectral dimension, using computed tomography (CT) techniques with only a finite number of diverse images. CTHIS require a reconstruction algorithm in...many frames are needed to reconstruct the spectral cube of a simple object using a theoretical lower bound. In this research a new algorithm is derived

  19. Application of washed rumen technique for rapid determination of fasting heat production in steers

    USDA-ARS?s Scientific Manuscript database

    Two experiments were conducted to evaluate the use of a washed rumen technique as an alternative approach for determining fasting HP in cattle. In Exp. 1, 8 Holstein steers (322±30 kg) were adapted to a cubed alfalfa-based diet (1.5xNEm) for 10 d, after which steers were placed into individual hea...

  20. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
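
    Generic DAG-compression can be sketched by hash-consing: identical subtrees are stored once and shared. The tuple encoding of XML elements below is a deliberate simplification for illustration, not the PXML model of the paper.

```python
def dag_compress(tree, table):
    """Hash-cons `tree` (a (tag, children) tuple) into the shared-node table."""
    tag, children = tree
    node = (tag, tuple(dag_compress(c, table) for c in children))
    return table.setdefault(node, node)

def count_nodes(tree):
    tag, children = tree
    return 1 + sum(count_nodes(c) for c in children)

item = ("item", (("name", ()), ("price", ())))
doc = ("catalog", (item, item, item))  # the same fragment repeated three times

table = {}
dag_compress(doc, table)
print(count_nodes(doc), "tree nodes ->", len(table), "shared DAG nodes")
```

    The repeated `item` fragment collapses to a single shared node, which is the source of the compression gain on documents with recurring structure.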

  1. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
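
    The core of a DCT-based scheme can be sketched in a few lines: transform an 8x8 block and quantize the coefficients, after which most of them are zero for smooth image regions. The uniform quantization step of 16 is an arbitrary assumption made here; a JPEG-style coder would add per-frequency quantization tables, zig-zag scanning, and entropy coding.

```python
import math

N = 8

def dct_1d(v):
    """Orthonormal type-II DCT of a length-8 vector."""
    out = []
    for k in range(N):
        s = sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        out.append((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)) * s)
    return out

def dct_2d(block):
    rows = [dct_1d(r) for r in block]               # transform rows...
    cols = [dct_1d([rows[i][j] for i in range(N)])  # ...then columns
            for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]

# A smooth brightness ramp, typical of regions where DCT coding works best.
block = [[i + j for j in range(N)] for i in range(N)]
quantized = [[round(c / 16) for c in row] for row in dct_2d(block)]
zeros = sum(q == 0 for row in quantized for q in row)
print(f"{zeros}/64 quantized DCT coefficients are zero")
```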

  2. Two-thumb technique is superior to two-finger technique during lone rescuer infant manikin CPR.

    PubMed

    Udassi, Sharda; Udassi, Jai P; Lamb, Melissa A; Theriaque, Douglas W; Shuster, Jonathan J; Zaritsky, Arno L; Haque, Ikram U

    2010-06-01

    Infant CPR guidelines recommend two-finger chest compression with a lone rescuer and two-thumb with two rescuers. Two-thumb provides better chest compression but is perceived to be associated with increased ventilation hands-off time. We hypothesized that lone rescuer two-thumb CPR is associated with increased ventilation cycle time, decreased ventilation quality and fewer chest compressions compared to two-finger CPR in an infant manikin model. Crossover observational study randomizing 34 healthcare providers to perform 2 min CPR at a compression rate of 100 min(-1) using a 30:2 compression:ventilation ratio comparing two-thumb vs. two-finger techniques. A Laerdal Baby ALS Trainer manikin was modified to digitally record compression rate, compression depth and compression pressure and ventilation cycle time (two mouth-to-mouth breaths). Manikin chest rise with breaths was video recorded and later reviewed by two blinded CPR instructors for percent effective breaths. Data (mean ± SD) were analyzed using a two-tailed paired t-test. Significance was defined as p ≤ 0.05. Mean % effective breaths were 90 ± 18.6% in two-thumb and 88.9 ± 21.1% in two-finger, p=0.65. Mean time (s) to deliver two mouth-to-mouth breaths was 7.6 ± 1.6 in two-thumb and 7.0 ± 1.5 in two-finger, p<0.0001. Mean delivered compressions per minute were 87 ± 11 in two-thumb and 92 ± 12 in two-finger, p=0.0005. Two-thumb resulted in significantly higher compression depth and compression pressure compared to the two-finger technique. Healthcare providers required 0.6 s longer to deliver two breaths during two-thumb lone rescuer infant CPR, but there was no significant difference in percent effective breaths delivered between the two techniques. Two-thumb CPR had 4 fewer delivered compressions per minute, which may be offset by far more effective compression depth and compression pressure compared to the two-finger technique. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
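
    The two-tailed paired t-test behind these comparisons is easy to reproduce. The data below are invented toy measurements, not the study's, and the critical value 2.365 is the standard two-tailed t value for df = 7 at alpha = 0.05.

```python
import math
import statistics

# Invented paired measurements: seconds to deliver two rescue breaths.
two_thumb = [7.9, 7.4, 8.1, 7.2, 7.8, 7.5, 7.7, 7.6]
two_finger = [7.1, 6.8, 7.3, 6.9, 7.2, 6.8, 7.0, 7.1]

diffs = [a - b for a, b in zip(two_thumb, two_finger)]
mean_d = statistics.fmean(diffs)
sd_d = statistics.stdev(diffs)                # sample standard deviation
t = mean_d / (sd_d / math.sqrt(len(diffs)))   # paired t statistic, df = n - 1

print(f"mean difference = {mean_d:.3f} s, t = {t:.2f}, df = {len(diffs) - 1}")
# |t| beyond the two-tailed critical value (2.365 at df = 7, alpha = 0.05)
# would reject the null hypothesis of equal ventilation cycle times.
```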

  3. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
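
    The quantization idea can be sketched as follows: estimate the noise level of a float image from successive pixel differences, then store pixels as scaled integers with a step tied to that noise. The step choice q = sigma/4 and the synthetic image are illustrative assumptions made here, not the fpack defaults.

```python
import math
import random
import statistics

random.seed(1)
signal = [100.0 + 0.01 * i for i in range(10_000)]     # smooth background
pixels = [s + random.gauss(0.0, 0.5) for s in signal]  # plus noise, sigma = 0.5

# Noise estimate from successive differences: sigma ~ stdev(diffs) / sqrt(2),
# since the smooth signal contributes almost nothing to the differences.
diffs = [b - a for a, b in zip(pixels, pixels[1:])]
sigma = statistics.stdev(diffs) / math.sqrt(2)

q = sigma / 4                          # quantization step (assumed policy)
ints = [round(p / q) for p in pixels]  # scaled-integer representation
max_err = max(abs(p - i * q) for p, i in zip(pixels, ints))
print(f"estimated sigma = {sigma:.3f}, max quantization error = {max_err:.4f}")
```

    The integer array is then handed to any lossless coder; the discarded information is bounded by half a quantization step per pixel, well below the noise floor.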

  4. Insightful problem solving in an Asian elephant.

    PubMed

    Foerder, Preston; Galloway, Marie; Barthel, Tony; Moore, Donald E; Reiss, Diana

    2011-01-01

    The "aha" moment or the sudden arrival of the solution to a problem is a common human experience. Spontaneous problem solving without evident trial and error behavior in humans and other animals has been referred to as insight. Surprisingly, elephants, thought to be highly intelligent, have failed to exhibit insightful problem solving in previous cognitive studies. We tested whether three Asian elephants (Elephas maximus) would use sticks or other objects to obtain food items placed out-of-reach and overhead. Without prior trial and error behavior, a 7-year-old male Asian elephant showed spontaneous problem solving by moving a large plastic cube, on which he then stood, to acquire the food. In further testing he showed behavioral flexibility, using this technique to reach other items and retrieving the cube from various locations to use as a tool to acquire food. In the cube's absence, he generalized this tool utilization technique to other objects and, when given smaller objects, stacked them in an attempt to reach the food. The elephant's overall behavior was consistent with the definition of insightful problem solving. Previous failures to demonstrate this ability in elephants may have resulted not from a lack of cognitive ability but from the presentation of tasks requiring trunk-held sticks as potential tools, thereby interfering with the trunk's use as a sensory organ to locate the targeted food.

  5. “Superluminal” FITS File Processing on Multiprocessors: Zero Time Endian Conversion Technique

    NASA Astrophysics Data System (ADS)

    Eguchi, Satoshi

    2013-05-01

    FITS is the standard file format in astronomy, and it has been extended to meet the astronomical needs of the day. However, astronomical datasets have been inflating year by year. In the case of the ALMA telescope, a ~TB-scale four-dimensional data cube may be produced for one target. Considering that typical Internet bandwidth is tens of MB/s at most, the original data cubes in FITS format are hosted on a VO server, and the region which a user is interested in should be cut out and transferred to the user (Eguchi et al. 2012). The system will be equipped with a very high-speed disk array to process a TB-scale data cube in 10 s, so that disk I/O, endian conversion, and data processing speeds will be comparable. Hence, reducing the endian conversion time is one of the issues to solve in our system. In this article, I introduce a technique named “just-in-time endian conversion”, which delays the endian conversion of each pixel until just before it is really needed, to sweep out the endian conversion time; by applying this method, the FITS processing speed increases by 20% for single threading and 40% for multi-threading compared to CFITSIO. The speedup is closely tied to modern CPU architecture: it improves the efficiency of instruction pipelines by breaking the “causality” of the programmed instruction code sequence.
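
    The "convert only what you touch" idea can be sketched with a small lazy wrapper: pixels stay in the file's big-endian byte order until first access, when each requested value is decoded and cached individually. This illustrates the concept only, not the article's pipelined implementation.

```python
import struct

class LazyBigEndianCube:
    """Pixels stay big-endian (as stored on disk) until first access."""

    def __init__(self, raw: bytes):
        self.raw = raw
        self.converted = {}   # index -> native float, filled on demand

    def __getitem__(self, i):
        if i not in self.converted:   # just-in-time endian conversion
            self.converted[i] = struct.unpack_from(">f", self.raw, 4 * i)[0]
        return self.converted[i]

raw = struct.pack(">4f", 1.5, 2.5, 3.5, 4.5)  # big-endian float32 pixels
cube = LazyBigEndianCube(raw)
value = cube[2]
print(value, "decoded;", len(cube.converted), "of 4 pixels converted")
```

    When a user cuts out a small region of a TB-scale cube, only that region ever pays the byte-swap cost.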

  6. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
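
    Linear inter-color decorrelation can be illustrated by predicting one channel from another and measuring the zeroth-order entropy of the residual. The correlated toy channels below stand in for real prepress scans; actual coders such as CALIC use far more elaborate context models than this sketch.

```python
import math
import random
from collections import Counter

def entropy(values):
    """Zeroth-order entropy in bits per symbol."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

random.seed(2)
green = [random.randint(0, 255) for _ in range(5_000)]
# A strongly correlated second channel, as in typical prepress scans.
red = [min(255, max(0, g + random.randint(-3, 3))) for g in green]

residual = [r - g for r, g in zip(red, green)]  # inter-color error prediction
print(f"red: {entropy(red):.2f} bpp, residual: {entropy(residual):.2f} bpp")
```

    The residual channel needs only a few bits per pixel where the raw channel needed nearly eight, which is the coding gain the paper attributes to exploiting inter-color redundancy.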

  7. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are most widely used for prediction tasks where a relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression. Clustering along with regression ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster-regression technique is applied for estimating the compressive strength of concrete, and a novel state-of-the-art method is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields fewer prediction errors when estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. Experiments show that clustering along with regression gives the minimum errors for predicting the compressive strength of concrete; also, the fuzzy clustering algorithm C-means performs better than the K-means algorithm.
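
    The two-stage cluster-then-regress idea can be sketched with a one-dimensional K-means step followed by an ordinary least-squares fit per cluster. The piecewise-linear toy data stand in for concrete mixture records, and the simple 1-D K-means below is a stand-in for the full K-means/fuzzy C-means of the paper.

```python
import random
import statistics

random.seed(3)
# Two regimes with different linear laws, standing in for distinct mixtures.
data = [(x, 2.0 * x + random.gauss(0, 0.1))
        for x in (random.uniform(0, 1) for _ in range(100))]
data += [(x, 5.0 * x - 10.0 + random.gauss(0, 0.1))
         for x in (random.uniform(4, 5) for _ in range(100))]

def kmeans_1d(xs, iters=20):
    """Plain k=2 K-means on scalars; returns the two cluster centers."""
    c1, c2 = min(xs), max(xs)
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1, c2 = statistics.fmean(g1), statistics.fmean(g2)
    return c1, c2

def ols(points):
    """Slope and intercept by ordinary least squares."""
    xs, ys = zip(*points)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

c1, c2 = kmeans_1d([x for x, _ in data])
low = [p for p in data if abs(p[0] - c1) <= abs(p[0] - c2)]
high = [p for p in data if abs(p[0] - c1) > abs(p[0] - c2)]
for name, cluster in (("cluster 1", low), ("cluster 2", high)):
    slope, intercept = ols(cluster)
    print(f"{name}: y = {slope:.2f} x + {intercept:.2f}")
```

    A single global regression would split the difference between the two regimes; fitting per cluster recovers each regime's own law, which is the source of the reduced prediction error.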

  9. Mechanical properties of regular porous biomaterials made from truncated cube repeating unit cells: Analytical solutions and computational models.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-03-01

    Additive manufacturing (AM) has enabled fabrication of open-cell porous biomaterials based on repeating unit cells. The micro-architecture of the porous biomaterials and, thus, their physical properties could then be precisely controlled. Due to their many favorable properties, porous biomaterials manufactured using AM are considered as promising candidates for bone substitution as well as for several other applications in orthopedic surgery. The mechanical properties of such porous structures including static and fatigue properties are shown to be strongly dependent on the type of the repeating unit cell based on which the porous biomaterial is built. In this paper, we study the mechanical properties of porous biomaterials made from a relatively new unit cell, namely the truncated cube. We present analytical solutions that relate the dimensions of the repeating unit cell to the elastic modulus, Poisson's ratio, yield stress, and buckling load of those porous structures. We also performed finite element modeling to predict the mechanical properties of the porous structures. The analytical solutions and computational results were found to be in agreement with each other. The mechanical properties estimated using both the analytical and computational techniques were somewhat higher than the experimental data reported in one of our recent studies on selective laser melted Ti-6Al-4V porous biomaterials. In addition to porosity, the elastic modulus and Poisson's ratio of the porous structures were found to be strongly dependent on the ratio of the length of the inclined struts to that of the uninclined (i.e. vertical or horizontal) struts, α, in the truncated cube unit cell. The geometry of the truncated cube unit cell approaches the octahedral and cube unit cells as α approaches zero and infinity, respectively. Consistent with those geometrical observations, the analytical solutions presented in this study approached those of the octahedral and cube unit cells in the same limits. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. What's the Cube Quest Challenge?

    NASA Technical Reports Server (NTRS)

    Cockrell, Jim

    2016-01-01

    Cube Quest Challenge, sponsored by the Space Technology Mission Directorate's Centennial Challenges program, is NASA's first in-space prize competition. Cube Quest is open to any U.S.-based, non-government CubeSat developer. Entrants will compete for one of three available 6U CubeSat dispenser slots on the EM-1 mission, the first uncrewed lunar flyby of the Orion spacecraft launched by the Space Launch System in early 2018. The Cube Quest Challenge will award up to $5M in prizes. The advanced CubeSat technologies demonstrated by Cube Quest winners will enable NASA, universities, and industry to more quickly and affordably accomplish science and exploration objectives. This paper describes the teams, their novel CubeSat designs, and the emerging technologies for CubeSat operations in the deep-space environment.

  11. Nondestructive testing and monitoring of stiff large-scale structures by measuring 3D coordinates of cardinal points using electronic distance measurements in a trilateration architecture

    NASA Astrophysics Data System (ADS)

    Parker, David H.

    2017-04-01

    By using three, or more, electronic distance measurement (EDM) instruments, such as commercially available laser trackers, in an unconventional trilateration architecture, 3-D coordinates of specialized retroreflector targets attached to cardinal points on a structure can be measured with an absolute uncertainty of less than one part per million. For example, 3-D coordinates of a structure within a 100 meter cube can be measured within a volume of a 0.1 mm cube (the thickness of a sheet of paper). Relative dynamic movements, such as vibrations at 30 Hz, are typically measured 10 times better, i.e., within a 0.01 mm cube. Measurements of such accuracy open new areas for nondestructive testing and finite element model confirmation of stiff, large-scale structures, such as buildings, bridges, cranes, boilers, tank cars, nuclear power plant containment buildings, post-tensioned concrete, and the like, by measuring the response to applied loads, changes over the life of the structure, or changes following an accident, fire, earthquake, modification, etc. The sensitivity of these measurements makes it possible to measure parameters such as linearity, hysteresis, creep, symmetry, damping coefficient, and the like. For example, cracks exhibit a highly non-linear response when strains are reversed from compression to tension. Due to the measurements being 3-D, unexpected movements, such as transverse motion produced by an axial load, could give an indication of an anomaly, such as an asymmetric crack or materials property in a beam, delamination of concrete, or other asymmetry due to failures. Details of the specialized retroreflector are included.
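
    Trilateration itself is a small computation: with four instruments at known positions, subtracting one range equation from the others yields a linear system for the target's 3-D coordinates. The station layout and noise-free ranges below are invented; real processing would fit many redundant, noisy observations by least squares.

```python
import math

stations = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0),
            (0.0, 100.0, 0.0), (0.0, 0.0, 100.0)]
target = (30.0, 40.0, 20.0)
ranges = [math.dist(s, target) for s in stations]

# (x-xi)^2 + (y-yi)^2 + (z-zi)^2 = ri^2, minus the first equation,
# is linear in (x, y, z).
x0, y0, z0 = stations[0]
rows, rhs = [], []
for (xi, yi, zi), ri in zip(stations[1:], ranges[1:]):
    rows.append([2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)])
    rhs.append(ranges[0] ** 2 - ri ** 2
               + xi ** 2 + yi ** 2 + zi ** 2 - x0 ** 2 - y0 ** 2 - z0 ** 2)

def solve3(a, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [v] for row, v in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

print([round(v, 6) for v in solve3(rows, rhs)])
```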

  12. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
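
    The remapping step can be illustrated with the simplest of the listed techniques, differential pulse code modulation: for slowly varying data, the residuals occupy far fewer symbol values than the raw pixels, so their zeroth-order entropy drops. The synthetic scanline below is illustrative only.

```python
import math
from collections import Counter

def entropy(values):
    """Zeroth-order entropy in bits per symbol."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

# A slowly varying scanline: wide value range, small pixel-to-pixel change.
scanline = [int(100 + 40 * math.sin(i / 15)) for i in range(2_000)]
dpcm = [scanline[0]] + [b - a for a, b in zip(scanline, scanline[1:])]

raw_h, dpcm_h = entropy(scanline), entropy(dpcm)
print(f"raw: {raw_h:.2f} bpp, after DPCM remapping: {dpcm_h:.2f} bpp")
```

    An adaptive scheme of the kind described above would compute such entropies per region and pick the remapping and coder that minimize them.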

  13. Photogrammetry Tool for Forensic Analysis

    NASA Technical Reports Server (NTRS)

    Lane, John

    2012-01-01

    A system allows crime scene and accident scene investigators the ability to acquire visual scene data using cameras for processing at a later time. This system uses a COTS digital camera, a photogrammetry calibration cube, and 3D photogrammetry processing software. In a previous instrument developed by NASA, the laser scaling device made use of parallel laser beams to provide a photogrammetry solution in 2D. This device and associated software work well under certain conditions. In order to make use of a full 3D photogrammetry system, a different approach was needed. When using multiple cubes, whose locations relative to each other are unknown, a procedure that would merge the data from each cube would be as follows: 1. One marks a reference point on cube 1, then marks points on cube 2 as unknowns. This locates cube 2 in cube 1's coordinate system. 2. One marks reference points on cube 2, then marks points on cube 1 as unknowns. This locates cube 1 in cube 2's coordinate system. 3. This procedure is continued for all combinations of cubes. 4. The coordinate systems found for all cubes are then merged into a single global coordinate system. In order to achieve maximum accuracy, measurements are done in one of two ways, depending on scale: when measuring the size of objects, the coordinate system corresponding to the nearest cube is used, or when measuring the location of objects relative to a global coordinate system, a merged coordinate system is used. Presently, traffic accident analysis is time-consuming and not very accurate. Using cubes with differential GPS would give absolute positions of cubes in the accident area, so that individual cubes would provide local photogrammetry calibration to objects near a cube.
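
    Merging two cubes' coordinate systems amounts to estimating a rigid transform from points measured in both frames. The sketch below recovers the rotation in closed form for the 2-D case (rotation about the vertical axis plus translation), with invented point data; a full 3-D merge would use an analogous Procrustes/Kabsch fit.

```python
import math

# Reference points known in frame A; the same points re-measured in frame B,
# which is rotated 30 degrees about the vertical axis and shifted by (5, -2).
pts_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
theta, tx, ty = math.radians(30), 5.0, -2.0
pts_b = [(math.cos(theta) * x - math.sin(theta) * y + tx,
          math.sin(theta) * x + math.cos(theta) * y + ty) for x, y in pts_a]

# Closed-form 2-D Procrustes: the rotation aligning B to A comes from the
# centered cross and dot sums of corresponding points.
ca = [sum(c) / len(pts_a) for c in zip(*pts_a)]
cb = [sum(c) / len(pts_b) for c in zip(*pts_b)]
num = sum((ya - ca[1]) * (xb - cb[0]) - (xa - ca[0]) * (yb - cb[1])
          for (xa, ya), (xb, yb) in zip(pts_a, pts_b))
den = sum((xa - ca[0]) * (xb - cb[0]) + (ya - ca[1]) * (yb - cb[1])
          for (xa, ya), (xb, yb) in zip(pts_a, pts_b))
angle = math.atan2(num, den)  # rotation that maps frame B back onto frame A
print(f"recovered rotation: {math.degrees(angle):.1f} degrees")
```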

  14. CaloCube: An isotropic spaceborne calorimeter for high-energy cosmic rays. Optimization of the detector performance for protons and nuclei

    NASA Astrophysics Data System (ADS)

    Adriani, O.; Albergo, S.; Auditore, L.; Basti, A.; Berti, E.; Bigongiari, G.; Bonechi, L.; Bonechi, S.; Bongi, M.; Bonvicini, V.; Bottai, S.; Brogi, P.; Carotenuto, G.; Castellini, G.; Cattaneo, P. W.; Daddi, N.; D'Alessandro, R.; Detti, S.; Finetti, N.; Italiano, A.; Lenzi, P.; Maestro, P.; Marrocchesi, P. S.; Mori, N.; Orzan, G.; Olmi, M.; Pacini, L.; Papini, P.; Pellegriti, M. G.; Rappoldi, A.; Ricciarini, S.; Sciuto, A.; Spillantini, P.; Starodubtsev, O.; Stolzi, F.; Suh, J. E.; Sulaj, A.; Tiberio, A.; Tricomi, A.; Trifiro', A.; Trimarchi, M.; Vannuccini, E.; Zampa, G.; Zampa, N.

    2017-11-01

    The direct detection of high-energy cosmic rays up to the PeV region is one of the major challenges for the next generation of space-borne cosmic-ray detectors. The physics performance will be primarily determined by their geometrical acceptance and energy resolution. CaloCube is a homogeneous calorimeter whose geometry allows an almost isotropic response, so as to detect particles arriving from every direction in space, thus maximizing the acceptance. A comparative study of different scintillating materials and mechanical structures has been performed by means of Monte Carlo simulation. The scintillation-Cherenkov dual read-out technique has also been considered and its benefit evaluated.

  15. Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

    DTIC Science & Technology

    2013-05-01

    Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

  16. A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data

    DOE PAGES

    Fan, Ya Ju; Kamath, Chandrika

    2016-09-01

    The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near-perfect reconstruction over a range of data with varying sparsity.
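    As a hedged illustration of the sparse-recovery side of the comparison (a toy sketch, not the authors' code): greedy matching pursuit recovers a signal that is a sparse combination of unit-norm dictionary columns by repeatedly peeling off the best-correlated column from the residual.

```python
def matching_pursuit(A, y, iters):
    # A: measurement/dictionary matrix as a list of rows (unit-norm columns)
    # y: observed vector; returns a sparse-ish coefficient vector x
    m, n = len(A), len(A[0])
    r = list(y)                      # residual
    x = [0.0] * n
    for _ in range(iters):
        # correlation of every column with the current residual
        corr = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        j = max(range(n), key=lambda k: abs(corr[k]))
        x[j] += corr[j]              # peel off the chosen column
        for i in range(m):
            r[i] -= corr[j] * A[i][j]
    return x

# y is 2x the third (unit-norm) column, so one pass recovers it exactly
A = [[1.0, 0.0, 0.6],
     [0.0, 1.0, 0.8]]
y = [1.2, 1.6]
print(matching_pursuit(A, y, 1))
```

    Real CS recovery adds a re-projection step (orthogonal matching pursuit) or solves an l1-regularized problem, but the greedy column-selection loop above is the common core.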

  17. A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Ya Ju; Kamath, Chandrika

    The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near-perfect reconstruction over a range of data with varying sparsity.

  18. Experimental Study on the Strength Characteristics and Water Permeability of Hybrid Steel Fibre Reinforced Concrete

    PubMed Central

    Singh, M. P.; Singh, S. P.; Singh, A. P.

    2014-01-01

    Results of an investigation conducted to study the effect of fibre hybridization on the strength characteristics such as compressive strength, split tensile strength, and water permeability of steel fibre reinforced concrete (SFRC) are presented. Steel fibres of different lengths, that is, 12.5 mm, 25 mm, and 50 mm, having a constant diameter of 0.6 mm, were systematically combined in different mix proportions to obtain mono, binary, and ternary combinations at each of 0.5%, 1.0%, and 1.5% fibre volume fraction. A concrete mix containing no fibres was also cast for reference purposes. A total of 1440 cube specimens of size 100 × 100 × 100 mm were tested, 480 each for compressive strength, split tensile strength, and water permeability at 7, 28, 90, and 120 days of curing. It has been observed from the results of this investigation that a fibre combination of 33% 12.5 mm + 33% 25 mm + 33% 50 mm long fibres can be adjudged the most appropriate combination to be employed in hybrid steel fibre reinforced concrete (HySFRC) for optimum performance in terms of compressive strength, split tensile strength, and water permeability requirements taken together. PMID:27379298

  19. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data for the next plane may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane's memory but still takes advantage of the correlation between the color planes. The compression scheme is based on a block-adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block-adaptive decorrelation operations can be performed efficiently in the DCT domain. The results of the compression technique are compared with those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
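    The block-adaptive decorrelation idea can be sketched as follows (a simplified spatial-domain illustration with made-up block values, not the paper's DCT-domain formulation): each block of the plane being compressed is predicted by a least-squares gain applied to the co-located block of an already-printed plane, and only the low-energy residual is coded.

```python
def block_gain(ref, cur):
    # least-squares gain predicting the current plane's block
    # from the co-located block of the already-printed reference plane
    num = sum(r * c for r, c in zip(ref, cur))
    den = sum(r * r for r in ref) or 1
    return num / den

def decorrelate(ref, cur):
    # residual left after subtracting the gain-scaled reference block
    a = block_gain(ref, cur)
    return [c - a * r for r, c in zip(ref, cur)]

# two strongly correlated color-plane blocks (flattened 2x2 blocks)
cyan = [10, 20, 30, 40]
magenta = [11, 22, 33, 44]
residual = decorrelate(cyan, magenta)
print(sum(v * v for v in residual) < sum(v * v for v in magenta))  # True
```

    The residual carries far less energy than the raw plane, which is why a lossy coder spends far fewer bits on it at the same quality.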

  20. Design of a digital compression technique for shuttle television

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Fultz, G.

    1976-01-01

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.

  1. Solar and Space Physics Science Enabled by Pico and Nano Satellites

    NASA Astrophysics Data System (ADS)

    Swenson, C.; Fish, C. S.

    2012-12-01

    The most significant advances in solar and space physics, or Heliophysics, over the next decade are most likely to derive from new observational techniques. The connection between advances in scientific understanding and technology has historically been demonstrated across many disciplines and over time. Progress on some of the most compelling scientific problems will most likely occur through multipoint observations within the space environment to understand the coupling between disparate regions: the heliosphere, magnetosphere, ionosphere, thermosphere, and mesosphere. Multipoint measurements are also needed to develop an understanding of the various scalar or vector field signatures (e.g., gradients, divergence) that arise from coupling processes that occur across temporal and spatial scales or within localized regions. The resources that are available over the next decades for all areas of Heliophysics research have limits, and it is therefore important that the community be innovative in developing new observational techniques to advance science. One of the most promising new observational techniques becoming available is miniaturized sensors and satellite systems called pico- or nano-satellites and CubeSats. These are enabled by the enormous investment of the commercial, medical, and defense industries in producing highly capable, portable, and low-power battery-operated consumer electronics, in-situ composition probes, and novel reconnaissance sensors. The advancements represented by these technologies have direct application in developing pico- or nano-satellite and CubeSat systems for Heliophysics research. In this talk we overview the current environment and technologies surrounding these novel small satellites and discuss the types and capabilities of the miniature sensors that are being developed. We discuss how pico- or nano-satellites and CubeSats can be used to address the highest priority science identified in the Decadal Survey and the innovations and advancements that are required to make substantial progress.

  2. 3Cat-3/MOTS Nanosatellite Mission for Optical Multispectral and GNSS-R Earth Observation: Concept and Analysis

    PubMed Central

    Castellví, Jordi; Corbera, Jordi; Alamús, Ramon

    2018-01-01

    The 3Cat-3/MOTS (3: Cube, Cat: Catalunya, 3: 3rd CubeSat mission/Missió Observació Terra Satèl·lit) mission is a joint initiative between the Institut Cartogràfic i Geològic de Catalunya (ICGC) and the Universitat Politècnica de Catalunya-BarcelonaTech (UPC) to foster innovative Earth Observation (EO) techniques based on data fusion of Global Navigation Satellite Systems Reflectometry (GNSS-R) and optical payloads. It is based on a 6U CubeSat platform, roughly a 10 cm × 20 cm × 30 cm parallelepiped. Since 2012, there has been a fast-growing trend to use small satellites, especially nanosatellites, and in particular those following the CubeSat form factor. Small satellites possess intrinsic advantages over larger platforms in terms of cost, flexibility, and scalability, and may also enable constellations, trains, federations, or fractionated satellites or payloads based on a large number of individual satellites at an affordable cost. This work summarizes the mission analysis of 3Cat-3/MOTS, including its payload results, power budget (PB), thermal budget (TB), and data budget (DB). The mission analysis is aimed at transforming EO data into territorial climate variables (soil moisture and land cover change) at the best achievable spatio-temporal resolution. PMID:29316649

  3. Highly Integrated THz Receiver Systems for Small Satellite Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Groppi, Christopher; Hunter, Roger C.; Baker, Christopher

    2017-01-01

    We are developing miniaturized, highly integrated Schottky receiver systems suitable for use in CubeSats or other small spacecraft platforms, where state-of-the-art performance and ultra-low mass, power, and volume are required. Current traditional Schottky receivers are too large to employ on a CubeSat. We will develop highly integrated receivers operating from 520-600 GHz and 1040-1200 GHz that are based on state-of-the-art receivers already developed at the Jet Propulsion Laboratory (JPL), using novel 3D multi-layer packaging. This process will reduce both mass and volume by more than an order of magnitude, while preserving state-of-the-art noise performance. The resulting receiver systems will have a volume of approximately 25 x 25 x 40 millimeters (mm), a mass of 250 grams (g), and power consumption on the order of 7 watts (W). Using these techniques, we will also integrate both receivers into a single frame, further reducing mass and volume for applications where dual-band operation is advantageous. Additionally, as Schottky receivers offer significant gains in noise performance when cooled to 100 K, we will investigate the improvement gained by passively cooling these receivers. Work by Sierra Lobo Inc., with their Cryo Cube technology development program, offers the possibility of passive cooling to 100 K on CubeSat platforms for 1-unit (1U) sized instruments.

  4. Search for dark matter annihilation in the Galactic Center with IceCube-79

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-)neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the velocity-averaged self-annihilation cross-section, ⟨σAv⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≃4·10⁻²⁴ cm³ s⁻¹ and ≃2.6·10⁻²³ cm³ s⁻¹ for the νν̄ channel, respectively.

  5. Search for dark matter annihilation in the Galactic Center with IceCube-79

    DOE PAGES

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; ...

    2015-10-15

    The Milky Way is expected to be embedded in a halo of dark matter particles, with the highest density in the central region, and decreasing density with the halo-centric radius. Dark matter might be indirectly detectable at Earth through a flux of stable particles generated in dark matter annihilations and peaked in the direction of the Galactic Center. We present a search for an excess flux of muon (anti-)neutrinos from dark matter annihilation in the Galactic Center using the cubic-kilometer-sized IceCube neutrino detector at the South Pole. There, the Galactic Center is always seen above the horizon. Thus, new and dedicated veto techniques against atmospheric muons are required to make the southern hemisphere accessible for IceCube. We used 319.7 live-days of data from IceCube operating in its 79-string configuration during 2010 and 2011. No neutrino excess was found and the final result is compatible with the background. We present upper limits on the velocity-averaged self-annihilation cross-section, ⟨σAv⟩, for WIMP masses ranging from 30 GeV up to 10 TeV, assuming cuspy (NFW) and flat-cored (Burkert) dark matter halo profiles, reaching down to ≃4·10⁻²⁴ cm³ s⁻¹ and ≃2.6·10⁻²³ cm³ s⁻¹ for the νν̄ channel, respectively.

  6. Autonomous Sensors for Large Scale Data Collection

    NASA Astrophysics Data System (ADS)

    Noto, J.; Kerr, R.; Riccobono, J.; Kapali, S.; Migliozzi, M. A.; Goenka, C.

    2017-12-01

    Presented here is a novel implementation of a "Doppler imager" which remotely measures winds and temperatures of the neutral background atmosphere at ionospheric altitudes of 87-300 km and possibly above. It incorporates recent optical manufacturing developments, modern network awareness, and machine learning techniques for intelligent self-monitoring and data classification. This system achieves cost savings in manufacturing, deployment, and lifetime operating costs. Deployed in both ground- and space-based modalities, this cost-disruptive technology will allow computer models of ionospheric variability and other space weather models to operate with higher precision. Other sensors can be folded into the data collection and analysis architecture easily, creating autonomous virtual observatories. A prototype version of this sensor has recently been deployed in Trivandrum, India for the Indian Government. This Doppler imager is capable of operation even within the restricted CubeSat environment. The CubeSat bus offers a very challenging environment, even for small instruments. The limited size, weight, and power (SWaP) and the challenging thermal environment demand the development of a new generation of instruments; the Doppler imager presented is well suited to this environment. Concurrent with this CubeSat development is the development and construction of ground-based arrays of inexpensive sensors using the proposed technology. This instrument could be flown inexpensively on one or more CubeSats to provide valuable data to space weather forecasters and ionospheric scientists. Arrays of magnetometers have been deployed for the last 20 years [Alabi, 2005]. Other examples of ground-based arrays include an array of white-light all-sky imagers (THEMIS) deployed across Canada [Donovan et al., 2006], ocean sensors on buoys [McPhaden et al., 2010], and arrays of seismic sensors [Schweitzer et al., 2002]. A comparable array of Doppler imagers can be constructed and deployed on the ground to complement the CubeSat data.

  7. The VIMS Data Explorer: A tool for locating and visualizing hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pasek, V. D.; Lytle, D. M.; Brown, R. H.

    2016-12-01

    Since the Cassini spacecraft entered Saturn's orbit in Summer 2004, over 300,000 hyperspectral data cubes have been returned from its visible and infrared mapping spectrometer (VIMS) instrument. The VIMS Science Investigation is a multidisciplinary effort that uses these hyperspectral data to study a variety of scientific problems, including surface characterizations of the icy satellites and atmospheric analyses of Titan and Saturn. Such investigations may need to identify thousands of exemplary data cubes for analysis and can span many years in scope. Here we describe the VIMS data explorer (VDE) application, currently employed by the VIMS Investigation to search for and visualize data. The VDE application facilitates real-time inspection of the entire VIMS hyperspectral dataset, the construction of in situ maps, and markers to save and recall work. The application relies on two databases to provide comprehensive search capabilities. The first database contains metadata for every cube. These metadata searches are used to identify records based on parameters such as target, observation name, or date taken; they fall short in utility for some investigations, however, because the cube metadata contain no target geometry information. Through the introduction of a post-calibration pixel database, the VDE tool enables users to greatly expand their searching capabilities. Users can select favorable cubes for further processing into 2-D and 3-D interactive maps, aiding in the data interpretation and selection process. The VDE application enables efficient search, visualization, and access to VIMS hyperspectral data. It is simple to use, requiring nothing more than a browser for access. Hyperspectral bands can be individually selected or combined to create real-time color images, a technique commonly employed by hyperspectral researchers to highlight compositional differences.
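    The metadata-search idea can be sketched with a hedged toy example (the schema, column names, and observation labels below are hypothetical illustrations, not the actual VDE database layout): cubes are filtered by target and acquisition date with an ordinary parameterized query.

```python
import sqlite3

# in-memory stand-in for a cube-metadata database (hypothetical schema)
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE cubes (
    cube_id TEXT, target TEXT, observation TEXT, taken TEXT)""")
con.executemany("INSERT INTO cubes VALUES (?,?,?,?)", [
    ("C1", "TITAN", "T85_FLYBY", "2012-07-24"),
    ("C2", "SATURN", "STORM_OBS", "2011-02-10"),
    ("C3", "TITAN", "T86_FLYBY", "2012-09-26"),
])

# metadata search: all Titan cubes taken on or after a given date
rows = con.execute(
    "SELECT cube_id FROM cubes WHERE target=? AND taken >= ?",
    ("TITAN", "2012-08-01")).fetchall()
print(rows)  # [('C3',)]
```

    As the abstract notes, a query like this cannot express "covers this latitude band", which is what the separate post-calibration pixel database adds.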

  8. Compressed NMR: Combining compressive sampling and pure shift NMR techniques.

    PubMed

    Aguilar, Juan A; Kenwright, Alan M

    2017-12-26

    Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling and the existence of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates, and pure shift techniques eliminate the second issue by "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods will be shown.

  9. An Optimum Space-to-Ground Communication Concept for CubeSat Platform Utilizing NASA Space Network and Near Earth Network

    NASA Technical Reports Server (NTRS)

    Wong, Yen F.; Kegege, Obadiah; Schaire, Scott H.; Bussey, George; Altunc, Serhat; Zhang, Yuwen; Patel, Chitra

    2016-01-01

    National Aeronautics and Space Administration (NASA) CubeSat missions are expected to grow rapidly in the next decade. Higher-data-rate CubeSats are transitioning away from Amateur Radio bands to higher frequency bands. A high-level communication architecture for future space-to-ground CubeSat communication was proposed within NASA Goddard Space Flight Center. This architecture addresses CubeSat direct-to-ground communication, CubeSat to Tracking and Data Relay Satellite System (TDRSS) communication, CubeSat constellation with mothership direct-to-ground communication, and CubeSat constellation with mothership communication through K-Band Single Access (KSA). A study has been performed to explore this communication architecture, through simulations, analyses, and identifying technologies, to develop the optimum communication concepts for CubeSat communications. This paper presents details of the simulation and analysis that include CubeSat swarms, daughter ship/mother ship constellations, Near Earth Network (NEN) S- and X-band direct-to-ground links, TDRSS Multiple Access (MA) array vs. Single Access mode, notional transceiver/antenna configurations, ground asset configurations, and Code Division Multiple Access (CDMA) signal trades for daughter ship/mother ship CubeSat constellation inter-satellite cross links. Results of a study of the maximum achievable data rate in the 10 MHz space science X-band allocation are summarized. A CubeSat NEN Ka-band end-to-end communication analysis is provided. Current CubeSat communication technology capabilities are presented. Compatibility testing of the CubeSat transceiver through the NEN and the Space Network (SN) is discussed. Based on the analyses, signal trade studies, and technology assessments, the desired CubeSat transceiver features and operation concepts for future CubeSat end-to-end communications are derived.

  10. Modeling 3-D objects with planar surfaces for prediction of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Koch, M. B.; Beck, F. B.; Cockrell, C. R.

    1992-01-01

    Electromagnetic scattering analysis of objects at resonance is difficult because low frequency techniques are slow and computer intensive, and high frequency techniques may not be reliable. A new technique for predicting the electromagnetic backscatter from electrically conducting objects at resonance is studied. This technique is based on modeling three dimensional objects as a combination of flat plates where some of the plates are blocking the scattering from others. A cube is analyzed as a simple example. The preliminary results compare well with the Geometrical Theory of Diffraction and with measured data.

  11. RainCube 6U CubeSat

    NASA Image and Video Library

    2018-05-17

    The RainCube 6U CubeSat with fully deployed antenna. RainCube, CubeRRT, and TEMPEST-D are currently integrated aboard Orbital ATK's Cygnus spacecraft and are awaiting launch on an Antares rocket. After the CubeSats have arrived at the station, they will be deployed into low-Earth orbit and will begin their missions to test these new technologies useful for predicting weather, ensuring data quality, and helping researchers better understand storms. https://photojournal.jpl.nasa.gov/catalog/PIA22457

  12. CubeSat Artist Rendering and NASA M-Cubed/COVE

    NASA Image and Video Library

    2012-02-14

    The image on the left is an artist rendering of Montana State University's Explorer 1 CubeSat; at right is a CubeSat created by the University of Michigan, designated the Michigan Multipurpose Mini-satellite, or M-Cubed.

  13. Effect of contact angle on the orientation, stability, and assembly of dense floating cubes.

    PubMed

    Daniello, Robert; Khan, Kashan; Donnell, Michael; Rothstein, Jonathan P

    2014-02-01

    In this paper, the effect of contact angle, density, and size on the orientation, stability, and assembly of floating cubes was investigated. All the cubes tested were denser than water. Flotation occurred as a result of capillary stresses induced by deformation of the air-water interface. The advancing contact angle of the bare acrylic cubes was measured to be 85°. The contact angle of the cubes was increased by painting the cubes with a commercially available superhydrophobic paint to reach an advancing contact angle of 150°. Depending on their size, density, and contact angle, the cubes were observed to float in one of three primary orientations: edge up, vertex up, and face up. An experimental apparatus was built such that the sum of the gravitational force, buoyancy force, and capillary forces could be measured using a force transducer as a function of cube position as it was lowered through the air-water interface. Measurements showed that the maximum capillary forces were always experienced for the face-up orientation. However, when flotation was possible in the vertex-up orientation, it was found to be the most stable cube orientation because it had the lowest center of gravity. A series of theoretical predictions was performed for the cubes floating in each of the three primary orientations to calculate the net force on the cube. The theoretical predictions were found to match the experimental measurements well. A cube stability diagram of cube orientation as a function of cube contact angle and size was prepared from the predictions of theory and found to match the experimental observations quite well. The assembly of cubes floating face up and vertex up was also studied for assemblies of two, three, and many cubes. Cubes floating face up were found to assemble face-to-face and form regular square lattice patterns with no free interface between cubes. Cubes floating vertex up were found to assemble in a variety of different arrangements, including edge-to-edge, vertex-to-vertex, face-to-face, and vertex-to-face, with the most probable assembly being edge-to-edge. Large numbers of vertex-up cubes were found to pack with a distribution of orientations and alignments.

  14. Mapping historical landscape changes with the use of a space-time cube

    NASA Astrophysics Data System (ADS)

    Bogucka, Edyta P.; Jahnke, Mathias

    2018-05-01

    In this contribution, we introduce geographic concepts into the humanities and present the results of a space-time visualization of ancient buildings over the last centuries. The techniques and approaches used were based on cartographic research on visualizing spatio-temporal information. As a case study, we applied cartographic styling techniques to a model of the Royal Castle in Warsaw and its different spatial elements, which were constructed and destroyed during its eventful history. In our case, the space-time cube approach seems to be the most suitable representation of this spatio-temporal information. Therefore, we digitized the different footprints of the castle across the centuries, as well as the surrounding landscape structure, and annotated them with monarchies, epochs, and time. During the digitization process, we had to cope with difficulties such as sources at various scales and in various map projections, which resulted in varying accuracies. The results were stored in KML to support a wide variety of visualization platforms.
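    Time-annotated footprints of the kind described can be generated as KML programmatically. A minimal sketch using Python's standard library, with hypothetical coordinates, name, and dates standing in for the digitized castle footprints (KML TimeSpan is what lets viewers animate a footprint along the cube's time axis):

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def footprint_placemark(name, begin, end, coords):
    # one building footprint as a KML Placemark with a TimeSpan
    pm = ET.Element(f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    span = ET.SubElement(pm, f"{{{KML_NS}}}TimeSpan")
    ET.SubElement(span, f"{{{KML_NS}}}begin").text = begin
    ET.SubElement(span, f"{{{KML_NS}}}end").text = end
    poly = ET.SubElement(pm, f"{{{KML_NS}}}Polygon")
    outer = ET.SubElement(poly, f"{{{KML_NS}}}outerBoundaryIs")
    ring = ET.SubElement(outer, f"{{{KML_NS}}}LinearRing")
    ET.SubElement(ring, f"{{{KML_NS}}}coordinates").text = " ".join(
        f"{lon},{lat},0" for lon, lat in coords)
    return pm

# hypothetical footprint of one construction phase (lon, lat pairs)
pm = footprint_placemark("Castle footprint, phase 1", "1697", "1763",
                         [(21.015, 52.248), (21.016, 52.248),
                          (21.016, 52.247), (21.015, 52.248)])
print(ET.tostring(pm, encoding="unicode")[:60])
```

    One such placemark per footprint-and-epoch pair yields a file that common KML viewers can step through in time.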

  15. SpaceCube v2.0 Space Flight Hybrid Reconfigurable Data Processing System

    NASA Technical Reports Server (NTRS)

    Petrick, Dave

    2014-01-01

    This paper details the design architecture, design methodology, and the advantages of the SpaceCube v2.0 high performance data processing system for space applications. The purpose in building the SpaceCube v2.0 system is to create a superior high performance, reconfigurable, hybrid data processing system that can be used in a multitude of applications, including those that require a radiation hardened and reliable solution. The SpaceCube v2.0 system leverages seven years of board design, avionics systems design, and space flight application experience. This paper shows how SpaceCube v2.0 solves the increasing computing demands of space data processing applications that cannot be attained with a standalone processor approach. The main objective during the design stage is to find a good system balance between power, size, reliability, cost, and data processing capability. These design variables directly impact each other, and it is important to understand how to achieve a suitable balance. This paper will detail how these critical design factors were managed, including the construction of an Engineering Model for an experiment on the International Space Station to test out design concepts. We will describe the designs for the processor card, power card, backplane, and a mission-unique interface card. The mechanical design for the box will also be detailed, since it is critical in meeting the stringent thermal and structural requirements imposed by the processing system. In addition, the mechanical design uses advanced thermal conduction techniques to solve the internal thermal challenges. The SpaceCube v2.0 processing system is based on an extended version of the 3U cPCI standard form factor, where each card is 190 mm × 100 mm in size. The typical power draw of the processor card is 8 to 10 W and scales with application complexity.
    The SpaceCube v2.0 data processing card features two Xilinx Virtex-5 QV Field Programmable Gate Arrays (FPGAs), eight memory modules, a monitor FPGA with analog monitoring, Ethernet, configurable interconnect to the Xilinx FPGAs including gigabit transceivers, and the necessary voltage regulation. The processor board uses a back-to-back design methodology for common parts that maximizes the board real estate available. This paper will show how to meet the IPC 6012B Class 3A standard with a 22-layer board that has two column grid array devices with 1.0 mm pitch. All layout trades, such as stack-up options, via selection, and FPGA signal breakout, will be discussed with feature size results. The overall board design process will be discussed, including parts selection, circuit design, proper signal termination, layout placement and route planning, signal integrity design and verification, and power integrity results. The radiation mitigation techniques will also be detailed, including configuration scrubbing options, Xilinx circuit mitigation and FPGA functional monitoring, and memory protection. Finally, this paper will describe how this system is being used to solve the extreme challenges of a robotic satellite servicing mission where typical space-rated processors are not sufficient to meet the intensive data processing requirements. The SpaceCube v2.0 is the main payload control computer and is required to control critical subsystems such as autonomous rendezvous and docking using a suite of vision sensors, and object avoidance when controlling two robotic arms.

  16. 4800 B/S speech compression techniques for mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Townes, S. A.; Barnwell, T. P., III; Rose, R. C.; Gersho, A.; Davidson, G.

    1986-01-01

    This paper will discuss three 4800 bps digital speech compression techniques currently being investigated for application in the mobile satellite service. These three techniques, vector adaptive predictive coding, vector excitation coding, and the self excited vocoder, are the most promising among a number of techniques being developed to possibly provide near-toll-quality speech compression while still keeping the bit-rate low enough for a power and bandwidth limited satellite service.

  17. Effect of Molarity of Sodium Hydroxide and Curing Method on the Compressive Strength of Ternary Blend Geopolymer Concrete

    NASA Astrophysics Data System (ADS)

    Sathish Kumar, V.; Ganesan, N.; Indira, P. V.

    2017-07-01

    Concrete plays a vital role in the development of infrastructure and buildings all over the world. Geopolymer-based cement-less concrete is one of the current findings in the construction industry which leads to a green environment. This research paper deals with the results of the use of Fly ash (FA), Ground Granulated Blast Furnace Slag (GGBS) and Metakaolin (MK) as a ternary blend source material in Geopolymer concrete (GPC). The aspects that govern the compressive strength of GPC, namely the proportion of source material, the molarity of sodium hydroxide (NaOH) and the curing method, were investigated. The purpose of this research is to optimise local waste materials and use them effectively as a ternary blend in GPC. Seven combinations of binder were made in this study with replacement of FA with GGBS and MK by 35%, 30%, 25%, 20%, 15%, 10%, 5% and 5%, 10%, 15%, 20%, 25%, 30%, 35% respectively. The molarity of the NaOH solution was varied as 12M, 14M and 16M, and two types of curing method were adopted, viz. hot-air oven curing and closed steam curing for 24 hours at 60°C (140°F). The samples were kept at ambient temperature until testing. The compressive strength was obtained after 7 days and 28 days for the GPC cubes. The test data reveals that the ternary blend GPC with molarity 14M cured by hot-air oven produces the maximum compressive strength. It was also observed that the compressive strength of the oven-cured GPC is approximately 10% higher than that of the steam-cured GPC using the ternary blend.

  18. Lunar and Lagrangian Point L1 L2 CubeSat Communication and Navigation Considerations

    NASA Technical Reports Server (NTRS)

    Schaire, Scott; Wong, Yen F.; Altunc, Serhat; Bussey, George D.; Shelton, Marta; Folta, Dave; Gramling, Cheryl; Celeste, Peter; Anderson, Mike; Perrotto, Trish; hide

    2017-01-01

    CubeSats have grown in sophistication to the point that relatively low-cost mission solutions could be undertaken for planetary exploration. There are unique considerations for lunar and L1/L2 CubeSat communication and navigation compared with low earth orbit CubeSats. This paper explores those considerations as they relate to the Morehead/GSFC Lunar IceCube Mission. The Lunar IceCube is a CubeSat mission led by Morehead State University with participation from NASA Goddard Space Flight Center, JPL, the Busek Company and Vermont Tech. It will search for surface water ice and other resources from a high inclination lunar orbit. Lunar IceCube is one of a select group of CubeSats designed to explore beyond low-earth orbit that will fly on NASA's Space Launch System (SLS) as secondary payloads for Exploration Mission (EM) 1. Lunar IceCube and the EM-1 CubeSats will lay the groundwork for future lunar and L1/L2 CubeSat missions. This paper discusses communication and navigation needs for the Lunar IceCube mission and navigation and radiation tolerance requirements related to lunar and L1/L2 orbits. Potential CubeSat radios and antennas for such missions are investigated and compared. Ground station coverage, link analysis, and ground station solutions are also discussed. There are currently modifications in process for the Morehead ground station. Further enhancement of the Morehead ground station and the NASA Near Earth Network (NEN) are being examined. This paper describes how the NEN may support lunar and L1/L2 CubeSats without any enhancements and potential expansion of the NEN to better support such missions in the future. The potential NEN enhancements include upgrading the current NEN Cortex receiver with Forward Error Correction (FEC) Turbo Code, providing X-band uplink capability, and adding ranging options. The benefits of ground station enhancements for CubeSats flown on NASA Exploration Missions (EM) are presented. 
The paper also discusses other initiatives that the NEN is studying to better support the CubeSat community, including streamlining the compatibility test, planning and scheduling associated with CubeSat missions.
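
    The link analyses these records refer to start from free-space path loss, which dominates the budget at lunar distances. A minimal sketch of that first step (the distance and frequency below are illustrative, not the mission's actual link budget):

    ```python
    import math

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        """Free-space path loss in dB, from the Friis transmission equation."""
        c = 299_792_458.0  # speed of light, m/s
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    # Illustrative numbers: mean lunar distance, X-band downlink.
    loss = fspl_db(384_400e3, 8.4e9)
    print(f"{loss:.1f} dB")
    ```

    Antenna gains, transmit power, pointing losses, and system noise temperature would then be combined with this figure to obtain the link margin for a given data rate.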

  19. Reinforcement of the Cube texture during recrystallization of a 1050 aluminum alloy partially recrystallized and 10% cold-rolled

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Wei; Helbert, Anne-Laure, E-mail: anne-laure.helbert@u-psud.fr; Baudin, Thierry

    In high-purity aluminum, a very strong {100}<001> recrystallization texture is developed after 98% cold rolling and annealing at 500 °C. On the contrary, in aluminum alloys of commercial purity, the Cube component hardly exceeds 30% after complete recrystallization. Parameters controlling Cube orientation development are mainly the solute dragging due to impurities in solid solution and the stored deformation energy. In the present study, besides the 85% cold rolling, two extra annealings and a slight cold rolling are introduced in the processing route to increase the Cube volume fraction. The Cube development was analyzed by X-ray diffraction and Electron BackScattered Diffraction (EBSD). The nucleation and growth mechanisms responsible for the large Cube growth were investigated using FEG/EBSD in-situ heating experiments. Continuous recrystallization was observed in Cube-oriented grains and competed with the SIBM (Strain Induced Boundary Migration) mechanism. The latter was favored by the stored-energy gap introduced during the additional cold rolling between the Cube grains and their neighbors. Finally, a Cube volume fraction of 65% was reached after final recrystallization. Highlights: EBSD in-situ heating experiments of an aluminum alloy of commercial purity. A 10% cold rolling after a partial recrystallization improved Cube nucleation and growth. Annealing before cold rolling limited the solute drag effect and permitted a large Cube growth. Cube development is enhanced by continuous recrystallization of Cube sub-grains. The preferential Cube growth occurs by SIBM of small Cube grains.

  20. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the more time saved. In communication, we always want to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression and comparative results for multiple and single compression, which will help to find better compression output and to develop compression algorithms.
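
    The single- versus multi-compression comparison described here can be reproduced with standard lossless codecs. A sketch (the codecs and sample text are illustrative, not the paper's data set):

    ```python
    import bz2, lzma, zlib

    # A repetitive text payload, typical of business records.
    text = ("Data compression is very necessary in business data processing. " * 200).encode()

    # Single compression with three common lossless codecs.
    results = {name: len(fn(text)) for name, fn in
               [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]}

    # Multi-compression: a second pass over already-compressed data rarely
    # helps, because the first pass has removed most statistical redundancy.
    once = zlib.compress(text)
    twice = zlib.compress(once)
    print(results, len(once), len(twice))
    ```

    Comparing the ratios across codecs, and the single- versus double-pass sizes, is exactly the kind of tabulation such a comparative study produces.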

  1. Hyperspectral Imaging Using Flexible Endoscopy for Laryngeal Cancer Detection

    PubMed Central

    Regeling, Bianca; Thies, Boris; Gerstner, Andreas O. H.; Westermann, Stephan; Müller, Nina A.; Bendix, Jörg; Laffers, Wiebke

    2016-01-01

    Hyperspectral imaging (HSI) is increasingly gaining acceptance in the medical field. Up until now, HSI has been used in conjunction with rigid endoscopy to detect cancer in vivo. The logical next step is to pair HSI with flexible endoscopy, since it improves access to hard-to-reach areas. While the flexible endoscope’s fiber optic cables provide the advantage of flexibility, they also introduce an interfering honeycomb-like pattern onto images. Due to the substantial impact this pattern has on locating cancerous tissue, it must be removed before the HS data can be further processed. In doing so, the loss of information must be minimized to avoid suppressing small-area variations of pixel values. We have developed a system that uses flexible endoscopy to record HS cubes of the larynx and designed a special filtering technique to remove the honeycomb-like pattern with minimal loss of information. We have confirmed its feasibility by comparing it to conventional filtering techniques using an objective metric and by applying unsupervised and supervised classifications to raw and pre-processed HS cubes. Compared to conventional techniques, our method successfully removes the honeycomb-like pattern and considerably improves classification performance, while preserving image details. PMID:27529255

  2. Hyperspectral Imaging Using Flexible Endoscopy for Laryngeal Cancer Detection.

    PubMed

    Regeling, Bianca; Thies, Boris; Gerstner, Andreas O H; Westermann, Stephan; Müller, Nina A; Bendix, Jörg; Laffers, Wiebke

    2016-08-13

    Hyperspectral imaging (HSI) is increasingly gaining acceptance in the medical field. Up until now, HSI has been used in conjunction with rigid endoscopy to detect cancer in vivo. The logical next step is to pair HSI with flexible endoscopy, since it improves access to hard-to-reach areas. While the flexible endoscope's fiber optic cables provide the advantage of flexibility, they also introduce an interfering honeycomb-like pattern onto images. Due to the substantial impact this pattern has on locating cancerous tissue, it must be removed before the HS data can be further processed. In doing so, the loss of information must be minimized to avoid suppressing small-area variations of pixel values. We have developed a system that uses flexible endoscopy to record HS cubes of the larynx and designed a special filtering technique to remove the honeycomb-like pattern with minimal loss of information. We have confirmed its feasibility by comparing it to conventional filtering techniques using an objective metric and by applying unsupervised and supervised classifications to raw and pre-processed HS cubes. Compared to conventional techniques, our method successfully removes the honeycomb-like pattern and considerably improves classification performance, while preserving image details.

  3. Development of Cooperative Communication Techniques for a Network of Small Satellites and Cubesats in Deep Space

    NASA Technical Reports Server (NTRS)

    Babuscia, Alessandra; Cheung, Kar-Ming; Divsalar, Dariush; Lee, Charles

    2014-01-01

    This paper aims to address this problem by proposing cooperative communication approaches in which multiple CubeSats communicate cooperatively to improve the link performance with respect to the case of a single satellite transmitting. Three approaches are proposed: a beam-forming approach, a coding approach, and a network approach. The approaches are applied to the specific case of a proposed constellation of CubeSats at the lunar Lagrangian point L1 which aims to perform radio astronomy at very low frequencies (30 kHz - 3 MHz). The paper describes the development of the approaches, the simulation, and a graphical user interface developed in Matlab which allows trade-offs to be performed across multiple constellation configurations.

  4. Hybrid display of static image and aerial image by use of transparent acrylic cubes and retro-reflectors

    NASA Astrophysics Data System (ADS)

    Morita, Shogo; Ito, Shusei; Yamamoto, Hirotsugu

    2017-02-01

    An aerial display can form a transparent floating screen in mid-air and is expected to provide aerial floating signage. We have proposed aerial imaging by retro-reflection (AIRR) to form a large aerial LED screen. However, the luminance of the aerial image is not sufficiently high to be used for signage in broad daylight. The purpose of this paper is to propose a novel aerial display scheme that features hybrid display of two different types of images. Under daylight, signs made of cubes are visible. At night, or under dark lighting conditions, aerial LED signs become visible. Our proposed hybrid display is composed of an LED sign, a beam splitter, retro-reflectors, and transparent acrylic cubes. The aerial LED sign is formed with AIRR. Furthermore, we place transparent acrylic cubes on the beam splitter. Light from the LED sign enters the transparent acrylic cubes, reflects twice inside them, exits, and converges at the position plane-symmetrical to the light source with respect to the cube array. Thus, the transparent acrylic cubes also form a real image of the source LED sign. We form a sign with the transparent acrylic cubes so that this cube-based sign is apparent under daylight. We have developed a prototype display using 1 cm transparent cubes and retro-reflective sheeting and have successfully confirmed aerial image formation with AIRR and transparent cubes, as well as the cube-based sign under daylight.

  5. Improving throughput and user experience for information intensive websites by applying HTTP compression technique.

    PubMed

    Malla, Ratnakar

    2008-11-06

    HTTP compression is a technique specified as part of the W3C HTTP 1.0 standard. It allows HTTP servers to take advantage of GZIP compression technology that is built into the latest browsers. A brief survey of medical informatics websites shows that compression is not enabled. With compression enabled, downloaded file sizes are reduced by more than 50% and typical transaction time is also reduced from 20 to 8 minutes, thus providing a better user experience.
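
    The mechanism is straightforward: the server compresses the response body with GZIP and labels it with a `Content-Encoding: gzip` header, which the browser transparently decodes. A sketch of the size reduction on a repetitive HTML payload (the content is illustrative, not from the surveyed sites):

    ```python
    import gzip

    # A repetitive HTML table, typical of information-intensive pages.
    html = ("<tr><td>record</td><td>value</td></tr>\n" * 500).encode()

    # What a server would send alongside "Content-Encoding: gzip".
    body = gzip.compress(html)
    ratio = len(body) / len(html)
    print(f"{len(html)} -> {len(body)} bytes ({ratio:.1%})")
    ```

    Markup of this kind routinely shrinks well past the 50% reduction the survey reports, because HTML is dominated by repeated tags.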

  6. Boron carbide nanostructures: A prospective material as an additive in concrete

    NASA Astrophysics Data System (ADS)

    Singh, Paviter; Kaur, Gurpreet; Kumar, Rohit; Kumar, Umesh; Singh, Kulwinder; Kumar, Manjeet; Bala, Rajni; Meena, Ramovatar; Kumar, Akshay

    2018-05-01

    In recent decades, the manufacture and consumption of concrete have increased, particularly in developing countries. Due to its low cost, safety and strength, concrete has become an economical choice for radiation shielding material in nuclear reactors. Boron carbide is known as a neutron-absorbing material, which makes it a great candidate as an additive in concrete for radiation shielding. This paper presents the synthesis of boron carbide nanostructures by the ball milling method. The X-ray diffraction pattern, Fourier Transform Infrared Spectroscopy (FTIR) and Scanning Electron Microscope analyses confirm the formation of boron carbide nanostructures. The effect of boron carbide nanostructures on the strength of concrete samples was demonstrated. Compressive strength tests of concrete cubes with B4C powder additives at 0% and 5% of the total weight of cement were compared for different curing periods of 7, 14, 21 and 28 days. The highest compressive strength was observed when 5 wt% boron carbide nanostructures were used as an additive in the concrete samples after 28 days of curing, showing a significant improvement in strength.

  7. Evaluation of a newly developed infant chest compression technique: A randomized crossover manikin trial.

    PubMed

    Smereka, Jacek; Bielski, Karol; Ladny, Jerzy R; Ruetzler, Kurt; Szarpak, Lukasz

    2017-04-01

    Providing adequate chest compression is essential during infant cardio-pulmonary resuscitation (CPR) but has been reported to be performed poorly. The "new 2-thumb technique" (nTTT), which consists of using 2 thumbs directed at an angle of 90° to the chest while closing the fingers of both hands into a fist, was recently introduced. Therefore, the aim of this study was to compare 3 chest compression techniques, namely the 2-finger technique (TFT), the 2-thumb technique (TTHT), and the nTTT, in a randomized infant-CPR manikin setting. A total of 73 paramedics with at least 1 year of clinical experience performed 3 CPR settings with a chest compression:ventilation ratio of 15:2, according to current guidelines. Chest compression was performed with 1 of the 3 chest compression techniques in a randomized sequence. Chest compression rate and depth, chest decompression, and adequate ventilation after chest compression served as outcome parameters. The chest compression depth was 29 (IQR, 28-29) mm in the TFT group, 42 (40-43) mm in the TTHT group, and 40 (39-40) mm in the nTTT group (TFT vs TTHT, P < 0.001; TFT vs nTTT, P < 0.001; TTHT vs nTTT, P < 0.01). The median compression rate with TFT, TTHT, and nTTT varied and amounted to 136 (IQR, 133-144) min versus 117 (115-121) min versus 111 (109-113) min. There was a statistically significant difference in the compression rate between TFT and TTHT (P < 0.001), TFT and nTTT (P < 0.001), as well as TTHT and nTTT (P < 0.001). Incorrect decompressions after CC were significantly increased in the TTHT group compared with the TFT (P < 0.001) and the nTTT (P < 0.001) group. The nTTT provides adequate chest compression depth and rate and was associated with adequate chest decompression and the possibility to adequately ventilate the infant manikin. Further clinical studies are necessary to confirm these initial findings.

  8. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER (bit-error-rate) since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. 
ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves compression schemes which provide better tolerances in conditions with a high BER.
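
    The delta-encoding idea behind Van Jacobson-style schemes, and the loss propagation it risks, can be sketched as follows (a toy model using dictionaries of header fields, not RFC 1144's actual wire format):

    ```python
    def delta_encode(prev: dict, cur: dict) -> dict:
        """Transmit only the header fields that changed since the last packet."""
        return {k: v for k, v in cur.items() if prev.get(k) != v}

    def delta_decode(prev: dict, delta: dict) -> dict:
        """Reconstruct the full header from the previous header plus the delta."""
        out = dict(prev)
        out.update(delta)
        return out

    h1 = {"src_port": 4000, "dst_port": 80, "seq": 1000, "ack": 500, "window": 8192}
    h2 = {"src_port": 4000, "dst_port": 80, "seq": 1460, "ack": 500, "window": 8192}

    delta = delta_encode(h1, h2)       # only the changed "seq" field crosses the link
    assert delta_decode(h1, delta) == h2
    ```

    If a delta is lost, every later header decodes against stale context until the state is resynchronized, which is exactly the loss propagation discussed above; schemes that avoid delta encoding trade larger headers for immunity to it.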

  9. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
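
    The content-based idea can be sketched as follows (a toy on flat byte arrays with a hypothetical ROI mask; the real system would apply wavelet codecs region by region rather than simple quantization):

    ```python
    import zlib

    def content_based_compress(pixels: bytes, roi_mask: list) -> tuple:
        """Compress diagnostically important pixels losslessly and the
        background lossily, applying each codec in its appropriate context."""
        roi = bytes(p for p, keep in zip(pixels, roi_mask) if keep)
        # Lossy path: coarse 4-bit quantization before entropy coding.
        background = bytes(p & 0xF0 for p, keep in zip(pixels, roi_mask) if not keep)
        return zlib.compress(roi), zlib.compress(background)

    pixels = bytes(range(256))
    mask = [i < 64 for i in range(256)]   # pretend the first 64 pixels are the ROI
    roi_c, bg_c = content_based_compress(pixels, mask)
    ```

    The ROI survives a round trip exactly, while the quantized background compresses harder at the cost of detail, mirroring the paper's trade-off between diagnostic fidelity and compression ratio.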

  10. Lossless compression techniques for maskless lithography data

    NASA Astrophysics Data System (ADS)

    Dai, Vito; Zakhor, Avideh

    2002-07-01

    Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.

  11. An Optimum Space-to-Ground Communication Concept for CubeSat Platform Utilizing NASA Space Network and Near Earth Network

    NASA Technical Reports Server (NTRS)

    Wong, Yen F.; Kegege, Obadiah; Schaire, Scott H.; Bussey, George; Altunc, Serhat; Zhang, Yuwen; Patel, Chitra

    2016-01-01

    National Aeronautics and Space Administration (NASA) CubeSat missions are expected to grow rapidly in the next decade. Higher data rate CubeSats are transitioning away from Amateur Radio bands to higher frequency bands. A high-level communication architecture for future space-to-ground CubeSat communication was proposed within NASA Goddard Space Flight Center. This architecture addresses CubeSat direct-to-ground communication, CubeSat to Tracking and Data Relay Satellite System (TDRSS) communication, CubeSat constellation with Mothership direct-to-ground communication, and CubeSat constellation with Mothership communication through K-Band Single Access (KSA). A study has been performed to explore this communication architecture, through simulations, analyses, and identifying technologies, to develop the optimum communication concepts for CubeSat communications. This paper will present details of the simulation and analysis that include CubeSat swarm, daughter ship/mother ship constellation, Near Earth Network (NEN) S- and X-band direct-to-ground links, TDRS Multiple Access (MA) array vs Single Access mode, notional transceiver/antenna configurations, ground asset configurations, and Code Division Multiple Access (CDMA) signal trades for daughter/mother CubeSat constellation inter-satellite crosslinks. Results of the Space Science X-band 10 MHz maximum achievable data rate study will be summarized. An assessment of the Technology Readiness Level (TRL) of current CubeSat communication technology capabilities will be presented. Compatibility testing of the CubeSat transceiver through the NEN and Space Network (SN) will be discussed. Based on the analyses, signal trade studies and technology assessments, the functional design and performance requirements as well as operation concepts for future CubeSat end-to-end communications will be derived.

  12. CubeIndexer: Indexer for regions of interest in data cubes

    NASA Astrophysics Data System (ADS)

    Chilean Virtual Observatory; Araya, Mauricio; Candia, Gabriel; Gregorio, Rodrigo; Mendoza, Marcelo; Solar, Mauricio

    2015-12-01

    CubeIndexer indexes regions of interest (ROIs) in data cubes reducing the necessary storage space. The software can process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. The software forms part of the Chilean Virtual Observatory (ChiVO) and provides the capability of content-based searches on data cubes to the astronomical community.

  13. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
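
    Wavelet-based schemes like this one build on transforms such as the Haar wavelet, whose one-level analysis (averages and details) and exact reconstruction can be sketched as follows (illustrative only; the paper's actual filter bank is not specified here):

    ```python
    def haar_1d(x):
        """One level of the Haar wavelet transform: pairwise averages and details."""
        avg = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
        det = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
        return avg, det

    def haar_inverse(avg, det):
        """Perfect reconstruction from averages and details."""
        out = []
        for a, d in zip(avg, det):
            out += [a + d, a - d]
        return out

    x = [4.0, 2.0, 5.0, 7.0]
    avg, det = haar_1d(x)
    assert haar_inverse(avg, det) == x
    ```

    Compression comes from the detail coefficients being small for smooth signals, so bit-plane and Huffman coding of the transformed data (as in the paper's pipeline) spend few bits on them.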

  14. Lunar and Lagrangian Point L1 L2 CubeSat Communication and Navigation Considerations

    NASA Technical Reports Server (NTRS)

    Schaire, Scott; Wong, Yen F.; Altunc, Serhat; Bussey, George; Shelton, Marta; Folta, Dave; Gramling, Cheryl; Celeste, Peter; Anderson, Mike; Perrotto, Trish; hide

    2017-01-01

    CubeSats have grown in sophistication to the point that relatively low-cost mission solutions could be undertaken for planetary exploration. There are unique considerations for lunar and L1/L2 CubeSat communication and navigation compared with low earth orbit CubeSats. This paper explores those considerations as they relate to the Lunar IceCube Mission. The Lunar IceCube is a CubeSat mission led by Morehead State University with participation from NASA Goddard Space Flight Center, Jet Propulsion Laboratory, the Busek Company and Vermont Tech. It will search for surface water ice and other resources from a high inclination lunar orbit. Lunar IceCube is one of a select group of CubeSats designed to explore beyond low-earth orbit that will fly on NASA’s Space Launch System (SLS) as secondary payloads for Exploration Mission (EM) 1. Lunar IceCube and the EM-1 CubeSats will lay the groundwork for future lunar and L1/L2 CubeSat missions. This paper discusses communication and navigation needs for the Lunar IceCube mission and navigation and radiation tolerance requirements related to lunar and L1/L2 orbits. Potential CubeSat radios and antennas for such missions are investigated and compared. Ground station coverage, link analysis, and ground station solutions are also discussed. This paper will describe modifications in process for the Morehead ground station, as well as further enhancements of the Morehead ground station and NASA Near Earth Network (NEN) that are being considered. The potential NEN enhancements include upgrading current NEN Cortex receiver with Forward Error Correction (FEC) Turbo Code, providing X-band uplink capability, and adding ranging options. The benefits of ground station enhancements for CubeSats flown on NASA Exploration Missions (EM) are presented. This paper also describes how the NEN may support lunar and L1/L2 CubeSats without any enhancements. 
In addition, NEN is studying other initiatives to better support the CubeSat community, including streamlining the compatibility testing, planning and scheduling associated with CubeSat missions. Because of the lower cost and the opportunity for simultaneous multipoint observations, it is inevitable that CubeSats will continue to increase in popularity for not only LEO missions, but for lunar and L1/L2 missions as well. The challenges for lunar and L1/L2 missions for communication and navigation are much greater than for LEO missions, but are not insurmountable. Advancements in flight hardware and ground infrastructure will ease the burden.

  15. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.

    2015-11-01

    Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  16. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas

    2016-05-01

Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  17. CubeSat Constellation Cloud Winds(C3Winds) A New Wind Observing System to Study Mesoscale Cloud Dynamics and Processes

    NASA Technical Reports Server (NTRS)

    Wu, D. L.; Kelly, M.A.; Yee, J.-H.; Boldt, J.; Demajistre, R.; Reynolds, E. L.; Tripoli, G. J.; Oman, L. D.; Prive, N.; Heidinger, A. K.; hide

    2016-01-01

The CubeSat Constellation Cloud Winds (C3Winds) is a NASA Earth Venture Instrument (EV-I) concept with the primary objective to better understand mesoscale dynamics and their structures in severe weather systems. With their potential for catastrophic damage and loss of life, strong extratropical and tropical cyclones (ETCs and TCs) have profound three-dimensional impacts on the atmospheric dynamic and thermodynamic structures, producing complex cloud precipitation patterns, strong low-level winds, extensive tropopause folds, and intense stratosphere-troposphere exchange. Employing a compact, stereo IR-visible imaging technique from two formation-flying CubeSats, C3Winds seeks to measure and map high-resolution (2 km) cloud motion vectors (CMVs) and cloud geometric heights (CGHs) accurately by tracking cloud features within 5-15 min. Complementary to lidar wind observations from space, the high-resolution wind fields from C3Winds will allow detailed investigations of strong low-level wind formation in occluded ETC development, structural variations of TC inner-core rotation, and impacts of tropopause folding events on tropospheric ozone and air quality. Together with scatterometer ocean surface winds, C3Winds will provide a more comprehensive depiction of atmosphere-boundary-layer dynamics and interactive processes. Built upon mature imaging technologies and a long history of stereoscopic remote sensing, C3Winds provides an innovative, cost-effective solution to global wind observation with the potential for increased diurnal sampling via a CubeSat constellation.

  18. Integration and Environmental Qualification Testing of Spacecraft Structures in Support of the Naval Postgraduate School CubeSat Launcher Program

    DTIC Science & Technology

    2009-06-01

2 3. Space Access Challenges to the CubeSat Community ... B. NPSCUL/NPSCUL-LITE PROGRAM HISTORY TO DATE...Astronautics, AIAA Space 2008 Conference and Exhibition, 2008. 3. Space Access Challenges to the CubeSat Community In less than ten years since... challenges to space access for CubeSats. Launch of a CubeSat aboard US launch vehicles from US launch facilities would allow CubeSats of a sensitive nature

  19. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

classification improved the performance of the K-means classification algorithm resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding were applied to the
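The run-length stage named in the snippet is simple enough to sketch. This minimal Python illustration (all names hypothetical, not from the report) shows why it suits classified map data, whose rows contain long runs of identical class labels:

```python
def rle_encode(symbols):
    """Run-length encode a sequence into (symbol, count) pairs.

    Classified map data tends to contain long runs of identical class
    labels, which is why run-length coding compresses it well.
    """
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, n) for s, n in runs]

def rle_decode(runs):
    """Expand (symbol, count) pairs back into the original sequence."""
    return [s for s, n in runs for _ in range(n)]

row = [3, 3, 3, 3, 7, 7, 1, 1, 1, 1, 1]   # one row of class labels
packed = rle_encode(row)                   # [(3, 4), (7, 2), (1, 5)]
assert rle_decode(packed) == row           # lossless round trip
```

In the report's pipeline a general-purpose coder such as Lempel-Ziv would then be applied on top of, or instead of, this stage.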

  20. Temporal mapping and analysis

    NASA Technical Reports Server (NTRS)

    O'Hara, Charles G. (Inventor); Shrestha, Bijay (Inventor); Vijayaraj, Veeraraghavan (Inventor); Mali, Preeti (Inventor)

    2011-01-01

A compositing process for selecting spatial data collected over a period of time, creating temporal data cubes from the spatial data, and processing and/or analyzing the data using temporal mapping algebra functions. In some embodiments, the processing includes creating a masked cube from the temporal data cubes and computing a composite from the masked cube using temporal mapping algebra.
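As a rough sketch of the claimed pipeline (hypothetical names; a per-pixel maximum stands in for whatever temporal mapping algebra function an embodiment would use):

```python
def temporal_composite(cube, nodata=None):
    """Per-pixel maximum-value composite over the time axis of a data cube.

    cube   : list of 2-D grids (lists of rows), one grid per acquisition time
    nodata : value marking masked pixels (e.g. cloud-contaminated samples)

    The mask removes invalid samples, and the composite reduces the
    remaining time series at each pixel to a single value.
    """
    rows, cols = len(cube[0]), len(cube[0][0])
    out = [[nodata] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            valid = [t[r][c] for t in cube if t[r][c] is not nodata]
            if valid:
                out[r][c] = max(valid)
    return out

# Three 2x2 grids over time; None marks cloud-masked pixels.
cube = [[[1, None], [2, 5]],
        [[4, 3],    [None, 1]],
        [[2, None], [0, 4]]]
best = temporal_composite(cube)   # [[4, 3], [2, 5]]
```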

  1. EarthCube's Assessment Framework: Ensuring Return on Investment

    NASA Astrophysics Data System (ADS)

    Lehnert, K.

    2016-12-01

EarthCube is a community-governed, NSF-funded initiative to transform geoscience research by developing cyberinfrastructure that improves access, sharing, visualization, and analysis of all forms of geosciences data and related resources. EarthCube's goal is to enable geoscientists to tackle the challenges of understanding and predicting the complex and evolving solid Earth, hydrosphere, atmosphere, and space environment systems. EarthCube's infrastructure requires capabilities spanning data, software, and systems. It is essential for EarthCube to determine the value of new capabilities for the community and to track the progress of the overall effort, in order to demonstrate its value to the science community and its return on investment for the NSF. EarthCube is therefore developing an assessment framework for research proposals, projects funded by EarthCube, and the overall EarthCube program. As a first step, a software assessment framework has been developed that addresses the EarthCube Strategic Vision by promoting best practices in software development, complete and useful documentation, interoperability, standards adherence, open science, and education and training opportunities for research developers.

  2. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model one such trade-off, reduced radiometric resolution, by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) gives a better effective radiometric resolution than TLLC for a given channel rate.
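The TLLC scheme can be sketched in a few lines. Here zlib stands in for the unspecified lossless coder, and the function names are hypothetical:

```python
import zlib

def tllc(samples, dropped_bits, level=9):
    """Truncation followed by lossless compression (TLLC), as a sketch.

    Drops the `dropped_bits` least significant bits of each 8-bit pixel,
    then losslessly compresses the remaining bits.  zlib stands in for
    whatever lossless coder the downlink would actually use.
    """
    truncated = bytes(s >> dropped_bits for s in samples)
    return zlib.compress(truncated, level)

def tllc_reconstruct(blob, dropped_bits):
    """Invert the lossless stage; the dropped bits are gone for good."""
    return [b << dropped_bits for b in zlib.decompress(blob)]

pixels = [200, 201, 202, 203, 204, 205, 206, 207] * 32  # smooth 8-bit scan line
blob = tllc(pixels, dropped_bits=3)
approx = tllc_reconstruct(blob, dropped_bits=3)
worst_error = max(abs(a - b) for a, b in zip(pixels, approx))  # bounded by 2**3 - 1
```

The fixed radiometric error of the truncation stage is exactly what the paper argues against: a lossy coder can spend the same bit budget adaptively and achieve better effective radiometric resolution.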

  3. CubeSat Launch Initiative Overview and CubeSat 101

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott

    2017-01-01

    The National Aeronautics and Space Administration (NASA) recognizes the tremendous potential that CubeSats (very small satellites) have to inexpensively demonstrate advanced technologies, collect scientific data, and enhance student engagement in Science, Technology, Engineering, and Mathematics (STEM). The CubeSat Launch Initiative (CSLI) was created to provide launch opportunities for CubeSats developed by academic institutions, non-profit entities, and NASA centers. This presentation will provide an overview of the CSLI, its benefits, and its results. This presentation will also provide high level CubeSat 101 information for prospective CubeSat developers, describing the development process from concept through mission operations while highlighting key points that developers need to be mindful of.

  4. Physical and mechanical properties of self-compacting concrete containing superplasticizer and metakaolin

    NASA Astrophysics Data System (ADS)

    Shahidan, Shahiron; Tayeh, Bassam A.; Jamaludin, A. A.; Bahari, N. A. A. S.; Mohd, S. S.; Zuki Ali, N.; Khalid, F. S.

    2017-11-01

The development of concrete technology has produced a variety of admixtures for special concretes. This includes self-compacting concrete, which is able to fill up all spaces, take the shape of formwork, and pass through congested reinforcement bars without vibration or any external energy. In this study, the main objective is to compare the physical and mechanical properties of self-compacting concrete containing metakaolin with those of normal concrete. Four types of samples were produced to study the effect of metakaolin on the physical and mechanical properties of self-compacting concrete, with 0%, 5%, 10% and 15% metakaolin used as cement replacement. The physical properties were investigated using the slump test for normal concrete and the slump flow test for self-compacting concrete. The mechanical properties tested were compressive strength and tensile strength. The findings of this study show that the inclusion of metakaolin as cement replacement can increase both compressive and tensile strength compared to normal concrete. The highest compressive strength was found in self-compacting concrete with 15% metakaolin replacement at 53.3 MPa, while self-compacting concrete with 10% metakaolin replacement showed the highest tensile strength at 3.6 MPa. On top of that, the finished surface of both cube and cylinder samples made of self-compacting concrete was smooth, with fewer honeycombs than normal concrete.

  5. Development of heat resistant geopolymer-based materials from red mud and rice husk ash

    NASA Astrophysics Data System (ADS)

    Thang, Nguyen Hoc; Nhung, Le Thuy; Quyen, Pham Vo Thi Ha; Phong, Dang Thanh; Khe, Dao Thanh; Van Phuc, Nguyen

    2018-04-01

Geopolymer is an inorganic polymer composite developed by Joseph Davidovits in the 1970s. Such materials have the potential to replace Ordinary Portland Cement (OPC)-based materials in the future because of their lower energy consumption, minimal CO2 emissions and lower production cost, as they utilize industrial waste resources. Hence, geopolymerization and the process of producing geopolymers for applications such as building materials can be considered a green industry. In this study, red mud and rice husk ash, wastes from the aluminum industry and from agriculture respectively that need to be managed to reduce their negative environmental impact, were used as raw materials for geopolymer production. The red mud and rice husk ash were mixed with sodium silicate (water glass) solution to form a geopolymer paste. The geopolymer paste was cast in 5-cm cube molds according to ASTM C109/C109M-99, and then cured at room temperature for 28 days. These products were then tested for compressive strength and volumetric weight. Results indicated that the material can be considered lightweight, with a 28-day compressive strength in the range of 6.8 to 15.5 MPa. The geopolymer specimens were also tested for heat resistance at a temperature of 1000 °C for 2 hours. Results suggest high heat resistance, with an increase in compressive strength of 262% to 417% after exposure to the high temperature.

  6. Effect of jute yarn on the mechanical behavior of concrete composites.

    PubMed

    Zakaria, Mohammad; Ahmed, Mashud; Hoque, Md Mozammel; Hannan, Abdul

    2015-01-01

The objective of the study is to investigate the effect of introducing jute yarn on the mechanical properties of concrete. Jute fibre is produced abundantly in Bangladesh and is hence very cheap. Investigating whether jute yarn reinforcement enhances the mechanical properties of concrete will not only explore a way to improve those properties, it will also promote the use of jute and restrict the utilization of polymers, which are environmentally detrimental. To accomplish the objective, an experimental investigation of the compressive, flexural and tensile strengths of Jute Yarn Reinforced Concrete Composites (JYRCC) was conducted. Cylinders, prisms and cubes of standard dimensions were made, introducing jute yarn while varying the mix ratio of the concrete ingredients, the water-cement ratio, and the length and volume of yarn, to determine the effect of these parameters. Compressive, flexural and tensile strength tests were conducted on the prepared samples with appropriate testing apparatus, following test standards. The mechanical properties of JYRCC were observed to be enhanced for a particular range of cut lengths (10, 15, 20 and 25 mm) and volume contents of jute yarn (0.1, 0.25, 0.5 and 0.75 %). The maximum increments of compressive, flexural and tensile strength observed in the investigation are 33, 23 and 38 %, respectively, with respect to concrete without jute yarn.

  7. Additively Manufactured Open-Cell Porous Biomaterials Made from Six Different Space-Filling Unit Cells: The Mechanical and Morphological Properties

    PubMed Central

    Ahmadi, Seyed Mohammad; Amin Yavari, Saber; Wauthle, Ruebn; Pouran, Behdad; Schrooten, Jan; Weinans, Harrie; Zadpoor, Amir A.

    2015-01-01

It is known that the mechanical properties of bone-mimicking porous biomaterials are a function of the morphological properties of the porous structure, including the configuration and size of the repeating unit cell from which they are made. However, the literature on this topic is limited, primarily because of the challenge in fabricating porous biomaterials with arbitrarily complex morphological designs. In the present work, we studied the relationship between relative density (RD) of porous Ti6Al4V ELI alloy and five compressive properties of the material, namely elastic gradient or modulus (Es20–70), first maximum stress, plateau stress, yield stress, and energy absorption. Porous structures with different RD and six different unit cell configurations (cubic (C), diamond (D), truncated cube (TC), truncated cuboctahedron (TCO), rhombic dodecahedron (RD), and rhombicuboctahedron (RCO)) were fabricated using selective laser melting. Each of the compressive properties increased with increasing RD, the relationship being of a power law type. Clear trends were seen in the influence of unit cell configuration and porosity on each of the compressive properties. For example, in terms of Es20–70, the structures may be divided into two groups: those that are stiff (comprising those made using the C, TC, TCO, and RCO unit cells) and those that are compliant (comprising those made using the D and RD unit cells). PMID:28788037

  8. Additively Manufactured Open-Cell Porous Biomaterials Made from Six Different Space-Filling Unit Cells: The Mechanical and Morphological Properties.

    PubMed

    Ahmadi, Seyed Mohammad; Yavari, Saber Amin; Wauthle, Ruebn; Pouran, Behdad; Schrooten, Jan; Weinans, Harrie; Zadpoor, Amir A

    2015-04-21

It is known that the mechanical properties of bone-mimicking porous biomaterials are a function of the morphological properties of the porous structure, including the configuration and size of the repeating unit cell from which they are made. However, the literature on this topic is limited, primarily because of the challenge in fabricating porous biomaterials with arbitrarily complex morphological designs. In the present work, we studied the relationship between relative density (RD) of porous Ti6Al4V ELI alloy and five compressive properties of the material, namely elastic gradient or modulus (Es20–70), first maximum stress, plateau stress, yield stress, and energy absorption. Porous structures with different RD and six different unit cell configurations (cubic (C), diamond (D), truncated cube (TC), truncated cuboctahedron (TCO), rhombic dodecahedron (RD), and rhombicuboctahedron (RCO)) were fabricated using selective laser melting. Each of the compressive properties increased with increasing RD, the relationship being of a power law type. Clear trends were seen in the influence of unit cell configuration and porosity on each of the compressive properties. For example, in terms of Es20–70, the structures may be divided into two groups: those that are stiff (comprising those made using the C, TC, TCO, and RCO unit cells) and those that are compliant (comprising those made using the D and RD unit cells).

  9. Composeable Chat over Low-Bandwidth Intermittent Communication Links

    DTIC Science & Technology

    2007-04-01

Compression (STC), introduced in this report, is a data compression algorithm intended to compress alphanumeric... Ziv-Lempel coding, the grandfather of most modern general-purpose file compression programs, watches for input symbol sequences that have previously... data. This section applies these techniques to create a new compression algorithm called Small Text Compression. Various sequence compression

  10. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
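A toy end-to-end version of the patented pipeline, with deliberate substitutions: zlib stands in for JPEG, nearest-neighbour upsampling for the interpolation step, and a 4-neighbour Laplacian for the unspecified sharpening. All names are hypothetical:

```python
import zlib

def decimate2(img):
    """Average 2x2 blocks: the pre-compression reduction step."""
    return [[(img[r][c] + img[r][c+1] + img[r+1][c] + img[r+1][c+1]) // 4
             for c in range(0, len(img[0]), 2)]
            for r in range(0, len(img), 2)]

def upsample2(img):
    """Nearest-neighbour interpolation back to the original array size."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.extend([wide, list(wide)])
    return out

def sharpen(img, k=1):
    """Crude edge sharpening: add k times the 4-neighbour Laplacian,
    clamped to the 8-bit range (interior pixels only)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap = 4 * img[r][c] - img[r-1][c] - img[r+1][c] - img[r][c-1] - img[r][c+1]
            out[r][c] = min(255, max(0, img[r][c] + k * lap))
    return out

# End-to-end: decimate, compress (zlib standing in for JPEG), "transmit",
# decompress, interpolate, sharpen.
img = [[10 if c < 4 else 200 for c in range(8)] for r in range(8)]
small = decimate2(img)
wire = zlib.compress(bytes(v for row in small for v in row))
recv = list(zlib.decompress(wire))
back = [recv[i * 4:(i + 1) * 4] for i in range(4)]
restored = sharpen(upsample2(back))
```

The sharpening stage visibly exaggerates the contrast across the recovered edge, which is the perceptual effect the patent is after; the specific techniques it describes would be gentler.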

  11. Shock temperature measurement of transparent materials under shock compression

    NASA Astrophysics Data System (ADS)

    Hu, Jinbiao

    1999-06-01

Under shock compression, some materials have very small absorptance, so their emissivity is very small too. For these kinds of materials, although they reach a high-temperature state under shock compression, the temperature cannot be detected easily by optical radiation techniques because of the low emissivity. In this paper, an optical radiation technique for measuring the temperature of very low emissivity materials under shock compression is proposed. To validate this technique, the temperature of crystalline NaCl at a shock pressure of 41 GPa was measured. The result agrees very well with the results of Kormer et al. and Ahrens et al. This shows that the technique is reliable and can be used to measure the shock temperature of low-emissivity materials.

  12. HyspIRI Intelligent Payload Module(IPM) and Benchmarking Algorithms for Upload

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel

    2010-01-01

Features: Hardware: a) Xilinx Virtex-5 (GSFC SpaceCube 2); b) 2 x 400MHz PPC; c) 100MHz Bus; d) 2 x 512MB SDRAM; e) Dual Gigabit Ethernet. Supports Linux kernel 2.6.31 (gcc version 4.2.2). Supports software running in stand-alone mode for better performance. Can stream raw data up to 800 Mbps. Ready for operations. Software Application Examples: Band-stripping algorithms: cloud, sulfur, flood, thermal, SWIL, NDVI, NDWI, SIWI, oil spills, algae blooms, etc. Corrections: geometric, radiometric, atmospheric. Core Flight System/dynamic software bus. CCSDS File Delivery Protocol. Delay Tolerant Network. CASPER/onboard planning. Fault monitoring/recovery software. S/C command and telemetry software. Data compression. Sensor Web for Autonomous Mission Operations.

  13. CubeRovers for Lunar Exploration

    NASA Astrophysics Data System (ADS)

    Tallaksen, A. P.; Horchler, A. D.; Boirum, C.; Arnett, D.; Jones, H. L.; Fang, E.; Amoroso, E.; Chomas, L.; Papincak, L.; Sapunkov, O. B.; Whittaker, W. L.

    2017-10-01

    CubeRover is a 2-kg class of lunar rover that seeks to standardize and democratize surface mobility and science, analogous to CubeSats. This CubeRover will study in-situ lunar surface trafficability and descent engine blast ejecta phenomena.

  14. Chemistry Cube Game - Exploring Basic Principles of Chemistry by Turning Cubes.

    PubMed

    Müller, Markus T

    2018-02-01

The Chemistry Cube Game invites students at secondary school levels 1 and 2 to explore basic concepts of chemistry in a playful way, either as individuals or in teams. It consists of 15 different cubes: 9 cubes for different acids, their corresponding bases and precursors, and 6 cubes for different reducing and oxidising agents. The cubes can be rotated in the directions indicated. Each 'allowed' vertical or horizontal rotation of 90° stands for a chemical reaction or a physical transition. Two different games and playing modes are presented here: first, redox chemistry is introduced for the formation of salts from elementary metals and non-metals; second, the speciation of acids and bases at different pH values is shown. The cubes can also be used for games about environmental chemistry, such as the carbon and sulphur cycles, covering the topic of acid rain, or the nitrogen cycle including ammonia synthesis, nitrification and de-nitrification.

  15. Methods for gas detection using stationary hyperspectral imaging sensors

    DOEpatents

    Conger, James L [San Ramon, CA; Henderson, John R [Castro Valley, CA

    2012-04-24

    According to one embodiment, a method comprises producing a first hyperspectral imaging (HSI) data cube of a location at a first time using data from a HSI sensor; producing a second HSI data cube of the same location at a second time using data from the HSI sensor; subtracting on a pixel-by-pixel basis the second HSI data cube from the first HSI data cube to produce a raw difference cube; calibrating the raw difference cube to produce a calibrated raw difference cube; selecting at least one desired spectral band based on a gas of interest; producing a detection image based on the at least one selected spectral band and the calibrated raw difference cube; examining the detection image to determine presence of the gas of interest; and outputting a result of the examination. Other methods, systems, and computer program products for detecting the presence of a gas are also described.
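The claimed steps map directly onto a small sketch (hypothetical names; the gain/offset pair is a stand-in for whatever radiometric calibration an embodiment would apply):

```python
def gas_detection_image(cube_t1, cube_t2, band_indices, gain=1.0, offset=0.0):
    """Sketch of the patent's differencing pipeline on a tiny HSI cube.

    cube_t1/cube_t2 : cubes indexed cube[row][col][band], same location,
                      two acquisition times
    band_indices    : spectral bands selected for the gas of interest
    gain/offset     : illustrative per-sample calibration of the raw difference

    Subtracts the second cube from the first pixel by pixel, calibrates the
    raw difference, and sums the selected bands into a detection image.
    """
    rows, cols = len(cube_t1), len(cube_t1[0])
    detect = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for b in band_indices:
                diff = cube_t1[r][c][b] - cube_t2[r][c][b]  # raw difference
                detect[r][c] += gain * diff + offset        # calibrated
    return detect

# 1x2 scene, 3 bands; the second pixel gains signal in band 1 at time 2.
t1 = [[[5.0, 5.0, 5.0], [5.0, 5.0, 5.0]]]
t2 = [[[5.0, 5.0, 5.0], [5.0, 9.0, 5.0]]]
image = gas_detection_image(t1, t2, band_indices=[1])   # [[0.0, -4.0]]
```

A nonzero value in the detection image flags a temporal change in the chosen band, which the method then examines for the presence of the gas.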

  16. Design and analysis of non-polarizing beam splitter in a glass cube

    NASA Astrophysics Data System (ADS)

    Shi, Jinhui; Wang, Zhengping; Huang, Zongjun; Li, Qingbo

    2008-11-01

The reflectance and transmittance of thin films at oblique incident angles exhibit strong polarization effects, particularly for films inside a glass cube. However, these polarization effects are undesirable in many applications. To solve this problem, non-polarizing beam splitters with unique optical thin films have been achieved employing a combination of interference and frustrated total internal reflection; the non-polarizing condition expressions based on frustrated total internal reflection have been derived, and design examples of non-polarizing beam splitters obtained with an optimization technique are also presented. Results of Rp=(50+/-0.5)%, Rs=(50+/-0.5)% and Δr=(0+/-0.3) degree in the wavelength range of 400-700 nm have been obtained. The thickness sensitivity of the NPBSs is also analyzed.

  17. Gold coatings for cube-corner retro-reflectors

    NASA Astrophysics Data System (ADS)

    Dligatch, Svetlana; Gross, Mark; Netterfield, Roger P.; Pereira, Nathan; Platt, Benjamin C.; Nemati, Bijan

    2005-09-01

The Space Interferometry Mission (SIM) PlanetQuest is managed by the Jet Propulsion Laboratory for the National Aeronautics and Space Administration. SIM requires, among other things, high precision double cube-corner retroreflectors. A test device has recently been fabricated for this project with demanding specifications on the optical surfaces and gold reflective coatings. Several gold deposition techniques were examined to meet the stringent specifications on uniformity, optical properties, micro-roughness and surface quality. We report on a comparative study of the optical performance of gold films deposited by resistive and e-beam evaporation, including measurements of the scattering from the coated surfaces. The effects of oxygen bombardment and a titanium under-layer on optical properties and adhesion were evaluated. The influence of surface preparation on the optical properties was also examined.

  18. A simple technique for measuring buoyant weight increment of entire, transplanted coral colonies in the field.

    PubMed

    Herler, Jürgen; Dirnwöber, Markus

    2011-10-31

Estimating the impacts of global and local threats on coral reefs requires monitoring reef health and measuring coral growth and calcification rates at different time scales. This has traditionally been performed mostly in short-term experimental studies in which coral fragments were grown in the laboratory or in the field but measured ex situ. Practical techniques in which growth and measurements are performed over the long term in situ are rare. Apart from photographic approaches, weight increment measurements have also been applied. Past buoyant weight measurements under water involved a complicated and little-used apparatus. We introduce a new method that combines previous field and laboratory techniques to measure the buoyant weight of entire, transplanted corals under water. This method uses an electronic balance fitted into an acrylic glass underwater housing and placed atop an acrylic glass cube. Within this cube, corals transplanted onto artificial bases can be attached to the balance and weighed at predetermined intervals while they continue growth in the field. We also provide a set of simple equations for the volume and weight determinations required to calculate net growth rates. The new technique is highly accurate: low error of weight determinations due to variation of coral density (< 0.08%) and low standard error (< 0.01%) for repeated measurements of the same corals. We outline a transplantation technique for properly preparing corals for such long-term in situ experiments and measurements.
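The paper's own equations are not reproduced in the abstract, but the classical buoyant weight conversion behind such measurements can be sketched as follows. The density values and function names here are assumptions for illustration, not values from the paper:

```python
def dry_skeletal_weight(buoyant_weight, water_density=1.025, skeleton_density=2.93):
    """Convert an in situ buoyant weight to dry skeletal weight.

    From Archimedes: W_buoyant = W_dry * (1 - rho_water / rho_skeleton),
    hence W_dry = W_buoyant / (1 - rho_water / rho_skeleton).

    Densities in g/cm^3; 2.93 is a commonly assumed density of coral
    aragonite and 1.025 a typical seawater density (both assumptions
    here, not values taken from the paper).
    """
    return buoyant_weight / (1.0 - water_density / skeleton_density)

def net_growth(buoyant_t0, buoyant_t1):
    """Net skeletal growth between two underwater weighings (grams)."""
    return dry_skeletal_weight(buoyant_t1) - dry_skeletal_weight(buoyant_t0)

gain = net_growth(65.0, 68.0)   # skeleton added between two surveys
```

Because the conversion factor depends on an assumed skeleton density, the abstract's point about low sensitivity to density variation (< 0.08%) is what makes repeated in situ weighings comparable.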

  19. An alternative noninvasive technique for the treatment of iatrogenic femoral pseudoaneurysms: stethoscope-guided compression.

    PubMed

    Korkmaz, Ahmet; Duyuler, Serkan; Kalayci, Süleyman; Türker, Pinar; Sahan, Ekrem; Maden, Orhan; Selçuk, Mehmet Timur

    2013-06-01

Iatrogenic femoral pseudoaneurysm is a well-known vascular access site complication. Many invasive and noninvasive techniques have been proposed for the management of this relatively common complication. In this study, we aimed to evaluate the efficiency and safety of stethoscope-guided compression as a novel noninvasive technique in femoral pseudoaneurysm treatment. We prospectively included 29 consecutive patients with the diagnosis of femoral pseudoaneurysm who underwent coronary angiography. Patients with a clinical suspicion of femoral pseudoaneurysm were referred to colour Doppler ultrasound evaluation. The adult (large) side of the stethoscope was used to determine the location where the bruit was best heard. Then compression with the paediatric (small) side of the stethoscope was applied until the bruit could no longer be heard, and compression was maintained for at least two sessions. Once the bruit disappeared, a 12-hour bed rest with external elastic compression was advised to the patients, in order to prevent disintegration of the newly formed thrombosis. Mean pseudoaneurysm size was 1.7 +/- 0.4 cm x 3.0 +/- 0.9 cm and the mean duration of compression was 36.2 +/- 8.5 minutes. Twenty-six (89.6%) of these 29 patients were successfully treated with stethoscope-guided compression. In 18 patients (62%), the pseudoaneurysms were successfully closed after 2 sessions of 15-minute compression. No severe complication was observed. Stethoscope-guided compression of femoral pseudoaneurysms is a safe and effective novel technique which requires less equipment and expertise than other contemporary methods.

  20. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize the attainment of high fidelity, because of its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion because of the context dependence of the algorithm, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. Then we adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks which include the whole or part of a region of interest (ROI), and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
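A toy version of the ROI/non-ROI split can illustrate the guarantee involved. Real JPEG-LS predicts and entropy-codes residuals; the sketch below (hypothetical names throughout) replaces that with plain quantization, which gives the same bounded per-pixel error that near-lossless JPEG-LS promises:

```python
def compress_blocks(image, roi_blocks, block=4, delta=2):
    """Toy block-wise coder: lossless ROI blocks, near-lossless elsewhere.

    Each non-ROI block is quantized with step 2*delta + 1, which bounds
    the per-pixel reconstruction error by delta -- the guarantee that
    near-lossless JPEG-LS provides via residual quantization.
    """
    step = 2 * delta + 1
    out = []
    for r0 in range(0, len(image), block):
        for c0 in range(0, len(image[0]), block):
            cells = [image[r][c]
                     for r in range(r0, r0 + block)
                     for c in range(c0, c0 + block)]
            if (r0 // block, c0 // block) in roi_blocks:
                out.append(('lossless', cells))
            else:
                out.append(('near', [(v + delta) // step for v in cells]))
    return out

def decompress_blocks(stream, delta=2):
    step = 2 * delta + 1
    return [cells if kind == 'lossless' else [q * step for q in cells]
            for kind, cells in stream]

img = [[(r * 8 + c) % 251 for c in range(8)] for r in range(8)]
stream = compress_blocks(img, roi_blocks={(0, 0)})   # top-left block is ROI
blocks = decompress_blocks(stream)
assert blocks[0] == [img[r][c] for r in range(4) for c in range(4)]  # exact
```

Working block by block also means a corrupted block cannot propagate errors beyond its own boundary, which is the paper's motivation for the block structure.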

  1. Data Compression Techniques for Advanced Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Bradley, William G.

    1998-01-01

Advanced space transportation systems, including vehicle state of health systems, will produce large amounts of data which must be stored on board the vehicle and/or transmitted to the ground and stored. The cost of storage or transmission of the data could be reduced if the number of bits required to represent the data is reduced by the use of data compression techniques. Most of the work done in this study was rather generic and could apply to many data compression systems, but the first application area to be considered was launch vehicle state of health telemetry systems. Both lossless and lossy compression techniques were considered in this study.

  2. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  3. A Real-Time High Performance Data Compression Technique For Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.

  4. NPS-SCAT (Solar Cell Array Tester), The Construction of NPS’ First Prototype CubeSat

    DTIC Science & Technology

    2008-09-01

    Figure 26. Pumpkin Solar Panel Clip Set on the Left and Clips Holding Solar Panels on the Right. Figure 27. Pumpkin … 1998. The satellite provided a global, digital messaging system using spread spectrum techniques in the amateur radio 70 cm band. Most importantly…

  5. A Novel CAI System for Space Conceptualization Training in Perspective Sketching

    ERIC Educational Resources Information Center

    Luh, Ding-Bang; Chen, Shao-Nung

    2013-01-01

    For many designers, freehand sketching is the primary tool for conceptualization in the early stage of the design process. However, current education on concept presentation techniques rarely emphasizes the construction of the most fundamental spatial unit, the cube. Incorrect construction of spatial units leads to disproportions that deviate from…

  6. Real-Time SCADA Cyber Protection Using Compression Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyle G. Roybal; Gordon H Rueff

    2013-11-01

    The Department of Energy’s Office of Electricity Delivery and Energy Reliability (DOE-OE) has a critical mission to secure the energy infrastructure from cyber attack. Through DOE-OE’s Cybersecurity for Energy Delivery Systems (CEDS) program, the Idaho National Laboratory (INL) has developed a method to detect malicious traffic on Supervisory Control and Data Acquisition (SCADA) networks using a data compression technique. SCADA network traffic is often repetitive, with only minor differences between packets. Research performed at the INL showed that SCADA network traffic has traits desirable for using compression analysis to identify abnormal network traffic. An open source implementation of the Lempel-Ziv-Welch (LZW) lossless data compression algorithm was used to compress and analyze surrogate SCADA traffic. Infected SCADA traffic was found to have statistically significant differences in compression when compared against normal SCADA traffic at the packet level. The initial analyses and results are clearly able to identify malicious network traffic from normal traffic at the packet level with a very high confidence level across multiple ports and traffic streams. Statistical differentiation between infected and normal traffic was possible using a modified data compression technique at the 99% probability level for all data analyzed. However, the conditions tested were rather limited in scope and need to be expanded into more realistic simulations of hacking events using techniques and approaches that better represent a real-world attack on a SCADA system. Nonetheless, the use of compression techniques to identify malicious traffic on SCADA networks in real time appears to have significant merit for infrastructure protection.

  7. Evaluation of a newly developed infant chest compression technique

    PubMed Central

    Smereka, Jacek; Bielski, Karol; Ladny, Jerzy R.; Ruetzler, Kurt; Szarpak, Lukasz

    2017-01-01

    Abstract Background: Providing adequate chest compression is essential during infant cardio-pulmonary resuscitation (CPR) but has been reported to be performed poorly. The “new 2-thumb technique” (nTTT), which consists of using 2 thumbs directed at an angle of 90° to the chest while closing the fingers of both hands in a fist, was recently introduced. The aim of this study was therefore to compare 3 chest compression techniques, namely the 2-finger technique (TFT), the 2-thumb technique (TTHT), and the nTTT, in a randomized infant-CPR manikin setting. Methods: A total of 73 paramedics with at least 1 year of clinical experience performed 3 CPR settings with a chest compression:ventilation ratio of 15:2, according to current guidelines. Chest compression was performed with 1 of the 3 chest compression techniques in a randomized sequence. Chest compression rate and depth, chest decompression, and adequate ventilation after chest compression served as outcome parameters. Results: The chest compression depth was 29 (IQR, 28–29) mm in the TFT group, 42 (40–43) mm in the TTHT group, and 40 (39–40) mm in the nTTT group (TFT vs TTHT, P < 0.001; TFT vs nTTT, P < 0.001; TTHT vs nTTT, P < 0.01). The median compression rate with TFT, TTHT, and nTTT varied: 136 (IQR, 133–144) min⁻¹ versus 117 (115–121) min⁻¹ versus 111 (109–113) min⁻¹. There was a statistically significant difference in compression rate between TFT and TTHT (P < 0.001), TFT and nTTT (P < 0.001), as well as TTHT and nTTT (P < 0.001). Incorrect decompressions after chest compression were significantly increased in the TTHT group compared with the TFT (P < 0.001) and nTTT (P < 0.001) groups. Conclusions: The nTTT provided adequate chest compression depth and rate and was associated with adequate chest decompression and the possibility to adequately ventilate the infant manikin. Further clinical studies are necessary to confirm these initial findings. PMID:28383397

  8. Random sequential adsorption of cubes

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
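    For intuition, random sequential adsorption is easy to sketch in the simplest of the studied settings — axis-aligned cubes — where the overlap test reduces to an axis-aligned bounding-box check. The paper's rotated-cube orientation sampling and optimized intersection algorithms are beyond this illustration; the box size and attempt count below are arbitrary.

```python
import numpy as np

def rsa_aligned_cubes(box=8.0, side=1.0, attempts=5000, seed=0):
    """RSA: draw uniform trial positions; accept a cube only if it
    overlaps no previously accepted cube. Two axis-aligned unit cubes
    overlap iff their lower corners differ by less than `side` on
    every axis."""
    rng = np.random.default_rng(seed)
    placed = np.empty((0, 3))       # lower corners of accepted cubes
    for _ in range(attempts):
        p = rng.uniform(0.0, box - side, size=3)
        if placed.size == 0 or not np.any(np.all(np.abs(placed - p) < side, axis=1)):
            placed = np.vstack([placed, p])
    packing_fraction = len(placed) * side**3 / box**3
    return placed, packing_fraction

cubes, fraction = rsa_aligned_cubes()
```

    Reaching saturation requires vastly more attempts than shown here; the kinetics of packing growth would be studied by recording the accepted count against the attempt number.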

  9. BurstCube: A CubeSat for Gravitational Wave Counterparts

    NASA Astrophysics Data System (ADS)

    Perkins, Jeremy S.; Racusin, Judith; Briggs, Michael; de Nolfo, Georgia; Caputo, Regina; Krizmanic, John; McEnery, Julie E.; Shawhan, Peter; Morris, David; Connaughton, Valerie; Kocevski, Dan; Wilson-Hodge, Colleen A.; Hui, Michelle; Mitchell, Lee; McBreen, Sheila

    2018-01-01

    We present BurstCube, a novel CubeSat that will detect and localize gamma-ray bursts (GRBs). BurstCube is a selected mission that will detect long GRBs, attributed to the collapse of massive stars; short GRBs (sGRBs), resulting from binary neutron star mergers; and other gamma-ray transients in the energy range 10–1000 keV. sGRBs are of particular interest because they are predicted to be the counterparts of gravitational wave (GW) sources soon to be detectable by LIGO/Virgo. BurstCube contains 4 CsI scintillators coupled with arrays of compact low-power silicon photomultipliers (SiPMs) on a 6U Dellingr bus, a flagship modular platform that is easily modifiable for a variety of 6U CubeSat architectures. BurstCube will complement existing facilities such as Swift and Fermi in the short term, provide a means for GRB detection, localization, and characterization in the interim before the next-generation gamma-ray mission flies, and space-qualify SiPMs and test technologies for future use on larger gamma-ray missions. The ultimate configuration is a set of ~10 BurstCubes providing all-sky coverage of GRBs for substantially lower cost than a full-scale mission.

  10. Ionosphere research with a HF/MF cubesat radio instrument

    NASA Astrophysics Data System (ADS)

    Kallio, Esa; Aikio, Anita; Alho, Markku; Fontell, Mathias; Harri, Ari-Matti; Kauristie, Kirsti; Kestilä, Antti; Koskimaa, Petri; Mäkelä, Jakke; Mäkelä, Miika; Turunen, Esa; Vanhamäki, Heikki; Verronen, Pekka

    2017-04-01

    New technology provides new possibilities to study geospace and the 3D ionosphere using spacecraft and computer simulations. CubeSats, a class of nanosatellites, offer a cost-effective way to make in-situ measurements in the ionosphere. Moreover, combining CubeSat observations with ground-based observations gives a new view of auroras and associated electromagnetic phenomena. Joint, active CubeSat and ground-based observation campaigns in particular enable study of the 3D structure of the ionosphere, and using several CubeSats to form satellite constellations enables much higher temporal resolution. At the same time, increasing computational capacity has made it possible to perform simulations in which properties of the ionosphere, such as the propagation of electromagnetic waves in the medium frequency (MF, 0.3–3 MHz) and high frequency (HF, 3–30 MHz) ranges, are derived from a 3D ionospheric model and first-principles modelling. Electromagnetic waves at these frequencies are strongly affected by ionospheric electrons and, consequently, can be used to study the plasma. On the other hand, even though the ionosphere enables long-range telecommunication at MF and HF frequencies, the frequent occurrence of spatiotemporal variations in the ionosphere disturbs communication channels, especially at high latitudes. The study of MF and HF waves in the ionosphere therefore holds both strong science and technology interest. We introduce recently developed simulation models as well as measuring principles and techniques to investigate the arctic ionosphere with a polar-orbiting CubeSat whose novel AM radio instrument measures HF and MF waves. The CubeSat, which also contains a white-light aurora camera, is planned to be launched in late 2017 (http://www.suomi100satelliitti.fi/eng). The new models are (1) a 3D ray tracing model and (2) a 3D full kinetic electromagnetic simulation. We also introduce how combining the CubeSat measurements with ground-based measurements provides new possibilities to study the 3D ionosphere.

  11. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    In some form-recognition applications (which require multiple images: facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature. In order to use optical correlation, however, encryption and compression techniques cannot be deployed independently and in a cascaded manner; otherwise the system suffers from two major problems. First, these techniques cannot simply be cascaded without considering the impact of one on the other. Second, standard compression can affect the correlation decision, because correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach consists of multiplexing the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; random phase keys are widely used in optical encryption approaches. Finally, many simulations have been conducted, and the obtained results corroborate the good performance of our approach. The recording of the multiplexed and encrypted spectra is further optimized using an adapted quantification technique to improve the overall compression rate.
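    The spectral multiplexing at the heart of the method can be sketched without the encryption layer (the rotation functions, biometric keys, and random phase keys are omitted here). This simplified sketch keeps only the low-frequency corner of each image's DCT and stores the corners side by side in one plane; each image is then recovered from its own area alone.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so C @ C.T = I."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (x + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)
    return C

def multiplex(images, keep):
    """Tile the keep x keep low-frequency DCT corner of every image
    into a single spectral plane (the compression step)."""
    n = images[0].shape[0]
    C = dct_matrix(n)
    plane = np.zeros((keep, keep * len(images)))
    for i, img in enumerate(images):
        plane[:, i*keep:(i+1)*keep] = (C @ img @ C.T)[:keep, :keep]
    return plane

def demultiplex(plane, i, n, keep):
    """Recover image i by zero-padding its spectrum and inverting the DCT."""
    C = dct_matrix(n)
    spec = np.zeros((n, n))
    spec[:keep, :keep] = plane[:, i*keep:(i+1)*keep]
    return C.T @ spec @ C

n, keep = 32, 16
ramp = np.outer(np.ones(n), np.linspace(0.0, 1.0, n))   # smooth test images
flat = np.full((n, n), 0.5)
plane = multiplex([ramp, flat], keep)        # 4x fewer samples per image
ramp_rec = demultiplex(plane, 0, n, keep)
flat_rec = demultiplex(plane, 1, n, keep)
```

    Smooth images survive the truncation almost unchanged because their DCT energy concentrates in the kept low-frequency corner — the property the multiplexing relies on.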

  12. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing power dissipation in LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message compression technique with the following features: (i) the intermediate message compression enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing read power dissipation. The combination of these two techniques reduces power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared with decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.

  13. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  14. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
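    The patented pipeline (decimate → compress → transmit → decompress → interpolate → sharpen) can be sketched end to end. Everything here is a stand-in: the codec stage is omitted (JPEG or a wavelet coder would sit there), expansion uses plain pixel replication, and sharpening is a 3×3 unsharp mask.

```python
import numpy as np

def decimate2(img):
    """2x2 block-average decimation (the pre-compression reduction)."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def interpolate2(img):
    """Expand back to the original array size; pixel replication is a
    stand-in for the patent's interpolation step."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def sharpen(img, amount=1.0):
    """Unsharp mask: add back the difference from a 3x3 box blur to
    restore edge contrast lost in the round trip."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(pad[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

# decimate -> (JPEG/wavelet codec would sit here) -> interpolate -> sharpen
original = np.random.default_rng(1).random((64, 64))
reduced = decimate2(original)        # this smaller array is what gets coded
restored = sharpen(interpolate2(reduced))
```

    The point of the design is that the codec only ever sees the quarter-size array, so its bit budget stretches further, and the sharpening step compensates perceptually for the resolution lost to decimation.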

  15. Bandwidth compression of multispectral satellite imagery

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1978-01-01

    The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.

  16. CUBES Project Support

    NASA Technical Reports Server (NTRS)

    Jenkins, Kenneth T., Jr.

    2012-01-01

    CUBES stands for Creating Understanding and Broadening Education through Satellites. The goal of the project is to allow high school students to build a small satellite, or CubeSat. Merritt Island High School (MIHS) was selected to partner with NASA and California Polytechnic State University (Cal Poly) to build a CubeSat. The objective of the mission is to collect flight data to better characterize maximum predicted environments inside the CubeSat launcher, the Poly-Picosatellite Orbital Deployer (P-POD), while attached to the launch vehicle. The MIHS CubeSat team will apply to the NASA CubeSat Launch Initiative, which provides opportunities for small satellite development teams to secure launch slots on upcoming expendable launch vehicle missions. The MIHS team is working toward a test launch, or proof-of-concept flight, aboard a suborbital launch vehicle in early 2013.

  17. EarthCube - A Community-led, Interdisciplinary Collaboration for Geoscience Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Allison, M. L.; Keane, C. M.; Robinson, E.

    2015-12-01

    The EarthCube Test Enterprise Governance Project completed its initial two-year process to engage the community and test a demonstration governing organization, with the goal of facilitating a community-led process for designing and developing a geoscience cyberinfrastructure. The conclusions are that EarthCube is viable, has engaged a broad spectrum of end-users and contributors, and has begun to foster a sense of urgency around the importance of open and shared data. Levels of trust among participants are growing. At the same time, the active participants in EarthCube represent a very small subset of the larger population of geoscientists. Results from Stage I of this project have informed NSF decisions on the direction of the EarthCube program. The overall tone of EarthCube events has had a constructive, problem-solving orientation. The technical and organizational elements of EarthCube are poised to support a functional infrastructure for the geosciences community. The process for establishing shared technological standards has made notable progress, but there is a continuing need to expand technological and cultural alignment. Increasing emphasis is being given to the interdependencies among EarthCube-funded projects. The newly developed EarthCube Technology Plan highlights important progress in this area by five working groups focusing on: 1. Use cases; 2. Funded-project gap analysis; 3. Testbed development; 4. Standards; and 5. Architecture. There is ample justification to continue running a community-led governance framework that facilitates agreement on a system architecture, guides EarthCube activities, and plays an increasing role in making the EarthCube vision of cyberinfrastructure for the geosciences operational. There is widespread community expectation of support for a multiyear EarthCube governing effort to put into practice the science, technical, and organizational plans that have emerged and are continuing to emerge.

  18. Real-time colouring and filtering with graphics shaders

    NASA Astrophysics Data System (ADS)

    Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.

    2017-11-01

    Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).
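    The core of a colour-mapping transfer function is a per-value lookup that a fragment shader evaluates in parallel for every pixel; the same mapping is easy to prototype on the CPU. The control points below are hypothetical, not one of the 12 shaders shipped with shwirl — they just show a piecewise-linear RGBA transfer function applied to one channel map of a spectral cube.

```python
import numpy as np

def apply_transfer_function(slice2d, controls):
    """Normalise the data, then map each value through a piecewise-linear
    RGBA transfer function -- the per-fragment operation of a colouring
    shader, done here with np.interp per channel."""
    v = (slice2d - slice2d.min()) / np.ptp(slice2d)
    pts = np.array(sorted(controls))           # rows of (value, r, g, b, a)
    return np.stack([np.interp(v, pts[:, 0], pts[:, c]) for c in (1, 2, 3, 4)],
                    axis=-1)

# Hypothetical "cold-to-hot" mapping with transparency for faint voxels.
controls = [(0.0, 0.0, 0.0, 1.0, 0.0),
            (0.5, 0.0, 1.0, 0.0, 0.5),
            (1.0, 1.0, 0.0, 0.0, 1.0)]
channel = np.random.default_rng(5).random((32, 32))   # one spectral-cube slice
image = apply_transfer_function(channel, controls)
```

    On a GPU the interpolation would typically be a 1D texture lookup, which is what lets such colouring run at interactive frame rates on full cubes.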

  19. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000, and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
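    A minimal sketch of the correlation stage (the paper's PSF model and morphological clean-up are omitted): consecutive frames are compared tile by tile with Pearson's correlation, and only tiles whose correlation drops — i.e. tiles that actually changed — would be stored.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation of two equally sized tiles."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return 1.0 if denom == 0 else float(a @ b / denom)

def changed_blocks(prev, curr, block=8, thresh=0.9):
    """Flag tiles whose frame-to-frame correlation falls below `thresh`;
    static tiles (correlation ~1) carry no new scientific content."""
    h, w = prev.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            s = np.s_[i*block:(i+1)*block, j*block:(j+1)*block]
            mask[i, j] = pearson(prev[s], curr[s]) < thresh
    return mask

# Demo: identical noise frames except one bright feature in the top-left tile.
rng = np.random.default_rng(0)
prev = rng.standard_normal((16, 16))
curr = prev.copy()
curr[4:8, 4:8] += 5.0                 # change confined to tile (0, 0)
motion_mask = changed_blocks(prev, curr, block=8)
```

    This is what makes the compressed size scale with scientific content: frames full of static background cost almost nothing.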

  20. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    The Automatic Ground Collision Avoidance System (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.
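    The error analysis the study describes — magnitude and distribution of compressed-minus-original elevation — can be sketched with a stand-in lossy step. Quantising elevations to 10 m steps below merely substitutes for the real binary-tree tip-tilt scheme; the terrain itself is synthetic.

```python
import numpy as np

def error_stats(original, compressed):
    """Summarise the compression error, as used to judge whether
    compressed DTED is safe for recovery-maneuver planning."""
    err = compressed.astype(float) - original.astype(float)
    return {"max_abs": float(np.abs(err).max()),
            "rmse": float(np.sqrt(np.mean(err ** 2))),
            "bias": float(err.mean())}

# Synthetic rolling terrain and a stand-in lossy step (10 m quantisation).
rng = np.random.default_rng(2)
terrain = np.cumsum(rng.standard_normal((64, 64)), axis=0) * 30.0
lossy = np.round(terrain / 10.0) * 10.0
stats = error_stats(terrain, lossy)
```

    Extending `error_stats` with a histogram of `err` would give the error distribution across the tile, which is where topography-dependent effects would show up.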

  1. A Multi-Variate Fit to the Chemical Composition of the Cosmic-Ray Spectrum

    NASA Astrophysics Data System (ADS)

    Eisch, Jonathan

    Since the discovery of cosmic rays over a century ago, evidence of their origins has remained elusive. Because cosmic rays are deflected by galactic magnetic fields, the only direct evidence of their origin and propagation remains encoded in their energy distribution and chemical composition. Current models of galactic cosmic rays predict variations in the energy distribution of individual elements in an energy region around 3×10¹⁵ eV known as the knee. This work presents a method to measure the energy distribution of individual elemental groups in the knee region and its application to a year of data from the IceCube detector. The method uses cosmic rays detected by both IceTop, the surface-array component, and the deep-ice component of IceCube during the 2009–2010 operation of the IC-59 detector. IceTop is used to measure the energy and the relative likelihood of the mass composition using the signal from the cosmic-ray induced extensive air shower reaching the surface. IceCube, 1.5 km below the surface, measures the energy of the high-energy bundle of muons created in the very first interactions after the cosmic ray enters the atmosphere. These event distributions are fit by a constrained model derived from detailed simulations of cosmic rays representing five chemical elements. The results of this analysis are evaluated in terms of the theoretical uncertainties in cosmic-ray interactions and seasonal variations in the atmosphere. The improvements in high-energy cosmic-ray hadronic-interaction models informed by this analysis, combined with increased data from subsequent operation of the IceCube detector, could provide crucial limits on the origin of cosmic rays and their propagation through the galaxy. In the course of developing this method, a number of analysis and statistical techniques were developed to deal with the difficulties inherent in this type of measurement.
These include a composition-sensitive air shower reconstruction technique, a method to model simulated event distributions with limited statistics, and a method to optimize and estimate the error on a regularized fit.

  2. An iterative forward analysis technique to determine the equation of state of dynamically compressed materials

    DOE PAGES

    Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...

    2017-05-18

    Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.
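    A toy version of the IFA loop, with an invented one-parameter forward model standing in for the hydrocode: the parameter bracket is repeatedly narrowed around the best-fitting forward run, which is the essence of using simulations as the fitting function.

```python
import numpy as np

def forward_model(stiffness, t):
    """Stand-in for a hydrocode run: a toy free-surface velocity profile
    whose shape depends on a single material parameter."""
    return np.tanh(stiffness * t)

def ifa_fit(observed, t, lo=0.1, hi=10.0, rounds=20):
    """Iterative forward analysis: evaluate the forward model on a coarse
    parameter grid, keep the best value, shrink the bracket, repeat."""
    for _ in range(rounds):
        grid = np.linspace(lo, hi, 9)
        misfit = [np.sum((forward_model(g, t) - observed) ** 2) for g in grid]
        best = grid[int(np.argmin(misfit))]
        span = (hi - lo) / 4
        lo, hi = max(0.1, best - span), best + span
    return float(best)

t = np.linspace(0.0, 2.0, 200)
synthetic = forward_model(3.0, t)     # synthetic "data" from a known parameter
recovered = ifa_fit(synthetic, t)
```

    Running against synthetic data generated by the forward model itself — recovering a known parameter — mirrors the first-order validation the abstract describes.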

  3. Build an Earthquake City! Grades 6-8.

    ERIC Educational Resources Information Center

    Rushton, Erik; Ryan, Emily; Swift, Charles

    In this activity, students build a city out of sugar cubes, bouillon cubes, and gelatin cubes. The city is then put through simulated earthquakes to see which cube structures withstand the shaking the best. This activity requires a 50-minute time period for completion. (Author/SOE)

  4. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
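    The progressive-transmission idea can be sketched with a single-level Haar filter bank, used here purely as a stand-in for the paper's subband coder: the coarse band alone gives a usable preview of the waveform, and sending the detail band later restores it exactly.

```python
import numpy as np

def haar_split(x):
    """One level of Haar subband analysis: orthonormal average (coarse)
    and detail (refinement) bands."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def haar_merge(a, d):
    """Synthesis filter bank: perfect reconstruction from the two bands."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

sig = np.sin(np.linspace(0.0, 6.28, 64)) \
    + 0.1 * np.random.default_rng(3).standard_normal(64)
coarse_band, detail_band = haar_split(sig)
preview = haar_merge(coarse_band, np.zeros_like(detail_band))  # first transmission
exact = haar_merge(coarse_band, detail_band)                   # after refinement
```

    Because the filter bank is orthonormal, the preview's error energy equals the energy of the withheld detail band — the seismologist knows exactly how much refinement remains to be requested.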

  5. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.
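    The "linear algebra of compressed imaging" referred to above is the recovery of a sparse scene from one underdetermined linear measurement, y = Φx. This sketch uses a generic random Gaussian Φ and orthogonal matching pursuit as the solver — both assumptions, standing in for the holographic spot-array operator and whatever reconstruction the authors use.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then least-squares refit on the
    support -- a simple way to invert a one-shot CS measurement."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
n, m, k = 128, 48, 3                              # 128 unknowns, 48 samples
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = np.array([2.0, -2.0, 2.0])
y = Phi @ x_true                                  # the single instantaneous measurement
x_rec = omp(Phi, y, k)
```

    With k-sparse scenes and enough spots, the sparse vector is recovered exactly from far fewer measurements than unknowns — the property that makes a one-shot architecture viable.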

  6. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE PAGES

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; ...

    2016-05-03

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  7. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  8. Restructuring Big Data to Improve Data Access and Performance in Analytic Services Making Research More Efficient for the Study of Extreme Weather Events and Application User Communities

    NASA Astrophysics Data System (ADS)

    Ostrenga, D.; Shen, S.; Vollmer, B.; Meyer, D. L.

    2017-12-01

    The NASA MERRA-2 climate reanalysis contains numerous atmosphere, land, and ocean variables, grouped into 95 products with an archived volume of over 300 TB. The data files are saved as hourly files, daily files (at hourly time intervals), and monthly files containing up to 125 parameters. Due to the large number of data files and the sheer data volume, it is challenging for users, especially those in the application research community, to work with the original files. Most of these researchers prefer to focus on a small region or a single location, using the hourly data over long time periods to analyze extreme weather events or, for example, winds for renewable energy applications. At the GES DISC, we have been working closely with the science teams and the application user community to create several new value-added data products and high-quality services that facilitate the use of the model data for various types of research. We have tested converting the hourly data from one day per file into data cubes of different extents, such as one month, one year, or the whole mission, and then analyzed how efficiently the restructured data can be accessed through various services. Initial results show that, compared to the original file structure, the new structure significantly improves performance when accessing long time series. We observed that performance depends on the cube size and structure, the compression method, and how the data are accessed. The optimized data cube structure will not only improve data access but also enable better online analytic services for statistical analysis and extreme event mining. Two case studies will be presented using the newly structured data and value-added services: the California drought and the extreme drought in the northeastern states of Brazil. Furthermore, data access and analysis through cloud storage capabilities will be investigated.
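    The layout effect described above can be illustrated with a toy cube. This sketch is not the GES DISC implementation; it only shows why moving time to the fastest-varying axis makes a long single-location time series contiguous in memory or on disk.

```python
import numpy as np

# One synthetic year of hourly data over a small 10x10 grid.
n_time, n_lat, n_lon = 24 * 365, 10, 10
cube_time_major = np.arange(n_time * n_lat * n_lon, dtype=np.float32)
cube_time_major = cube_time_major.reshape(n_time, n_lat, n_lon)

# "Whole-mission" restructuring: move time to the fastest-varying axis.
cube_pixel_major = np.ascontiguousarray(cube_time_major.transpose(1, 2, 0))

# Extracting one pixel's full time series:
series_a = cube_time_major[:, 3, 7]      # strided: touches every hourly slice
series_b = cube_pixel_major[3, 7, :]     # contiguous: one run of bytes
assert np.array_equal(series_a, series_b)

# The stride between consecutive samples tells the story:
print(cube_time_major[:, 3, 7].strides)   # one full 2-D slice per step
print(cube_pixel_major[3, 7, :].strides)  # 4 bytes: contiguous float32 run
```

    On real archives the same idea applies to file chunking: a time-major layout forces one read per hourly file, while a pixel-major (or time-chunked) cube serves the whole series from one contiguous region.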

  9. CubeSat Initiatives at KSC

    NASA Technical Reports Server (NTRS)

    Berg, Jared J.

    2014-01-01

    Even though the Small PayLoad Integrated Testing Services or SPLITS line of business is newly established, KSC has been involved in a variety of CubeSat projects and programs. CubeSat development projects have been initiated through educational outreach partnerships with schools and universities, commercial partnerships and internal training initiatives. KSC has also been involved in CubeSat deployment through programs to find launch opportunities to fly CubeSats as auxiliary payloads on previously planned missions and involvement in the development of new launch capabilities for small satellites. This overview will highlight the CubeSat accomplishments at KSC and discuss planning for future projects and opportunities.

  10. Recognition and classification of oscillatory patterns of electric brain activity using artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Pchelintseva, Svetlana V.; Runnova, Anastasia E.; Musatov, Vyacheslav Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we study the problem of recognizing which interpretation of an observed object a subject perceives, based on the generated pattern and the registered EEG data. EEG recorded while a Necker cube is displayed characterizes the corresponding state of brain activity. As the stimulus we use the bistable Necker cube image: the subject perceives the cube either as left-oriented or as right-oriented. To solve the recognition problem, we use artificial neural networks; as the classifier we consider a multilayer perceptron. We examine the structure of the artificial neural network and determine the cube recognition accuracy.

  11. Interplanetary CubeSat Navigational Challenges

    NASA Technical Reports Server (NTRS)

    Martin-Mur, Tomas J.; Gustafson, Eric D.; Young, Brian T.

    2015-01-01

    CubeSats are miniaturized spacecraft of small mass that comply with a form specification so they can be launched using standardized deployers. Since the launch of the first CubeSat into Earth orbit in June of 2003, hundreds have been placed into orbit. There are currently a number of proposals to launch and operate CubeSats in deep space, including MarCO, a technology demonstration that will launch two CubeSats towards Mars using the same launch vehicle as NASA's Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (InSight) Mars lander mission. The MarCO CubeSats are designed to relay the information transmitted by the InSight UHF radio during Entry, Descent, and Landing (EDL) in real time to the antennas of the Deep Space Network (DSN) on Earth. Other CubeSat proposals intend to demonstrate the operation of small probes in deep space, investigate the lunar South Pole, and visit a near-Earth object, among others. Placing a CubeSat into an interplanetary trajectory makes it even more challenging to pack the necessary power, communications, and navigation capabilities into such a small spacecraft. This paper presents some of the challenges and approaches for successfully navigating CubeSats and other small spacecraft in deep space.

  12. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
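    The combinatorial coding idea can be sketched in a few lines. This is a generic enumerative-coding illustration of the principle, not the C4 implementation: a block of n bits is represented by its count of ones plus its lexicographic rank among all blocks with that count, which costs close to the entropy log2(C(n, k)) bits.

```python
from math import comb

def encode(bits):
    """Return (n, k, rank) for a list of 0/1 values."""
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b:
            # All blocks with the same prefix but a 0 here come first.
            rank += comb(n - i - 1, ones_left)
            ones_left -= 1
    return n, k, rank

def decode(n, k, rank):
    bits, ones_left = [], k
    for i in range(n):
        c = comb(n - i - 1, ones_left)
        if rank >= c:               # a 1 here skips those c arrangements
            bits.append(1)
            rank -= c
            ones_left -= 1
        else:
            bits.append(0)
    return bits

block = [0, 1, 1, 0, 0, 0, 1, 0]
n, k, rank = encode(block)
assert decode(n, k, rank) == block
print(n, k, rank)   # rank fits in ceil(log2(comb(8, 3))) = 6 bits
```

    Because rank computation is pure integer arithmetic, such a coder can be made fast like Huffman coding while matching the compression efficiency of arithmetic coding for blocks with known one-counts.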

  13. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
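    The chunk-parallel idea can be illustrated with a small sketch, using zlib and a thread pool as a stand-in for the patented client/storage-node machinery:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Each "client" compresses its own chunk of a shared object independently,
# so compression happens in parallel on the compute side; the storage side
# only receives and stores already-compressed chunks.

def compress_chunk(chunk: bytes) -> bytes:
    return zlib.compress(chunk, level=6)

data = bytes(range(256)) * 4096                 # a synthetic shared object
chunk_size = 64 * 1024
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# One task per chunk, as burst-buffer clients would each handle their own.
with ThreadPoolExecutor(max_workers=4) as pool:
    compressed = list(pool.map(compress_chunk, chunks))

# Reading back: decompress only the chunks you need, then reassemble.
restored = b"".join(zlib.decompress(c) for c in compressed)
assert restored == data
print(len(data), sum(len(c) for c in compressed))
```

    In CPython, zlib releases the GIL while compressing buffers, so even a thread pool parallelizes this step; in the patented scheme the parallelism instead comes from many clients each compressing their own chunk before it ever reaches the storage node.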

  14. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
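    The word-aligned idea can be sketched as follows. This is a simplified illustration of WAH's literal/fill distinction for a 32-bit word, not the full patented method (decoding, logical operations, and tail handling are omitted).

```python
# The bitmap is split into 31-bit groups. A group that is all zeros or all
# ones is folded into a "fill" word (MSB=1, then the fill bit, then a run
# count of identical groups); anything else is stored verbatim as a
# "literal" word (MSB=0, 31 payload bits).

GROUP = 31

def wah_encode(bits):
    words = []
    for i in range(0, len(bits), GROUP):
        g = bits[i:i + GROUP]
        val = int("".join(map(str, g)), 2)
        if len(g) == GROUP and val in (0, (1 << GROUP) - 1):
            fill = val & 1
            prev_is_same_fill = (words
                                 and (words[-1] >> 31) == 1
                                 and ((words[-1] >> 30) & 1) == fill)
            if prev_is_same_fill:
                words[-1] += 1                      # extend the run count
            else:
                words.append((1 << 31) | (fill << 30) | 1)
        else:
            words.append(val)                       # literal word, MSB stays 0
    return words

# 93 zero bits (three all-zero groups) followed by one mixed group:
bitmap = [0] * 93 + [1, 0, 1] + [0] * 28
words = wah_encode(bitmap)
print([hex(w) for w in words])   # one fill word (run=3) + one literal word
```

    Keeping runs aligned to machine words is what lets logical operations (AND/OR over two compressed bitmaps) work directly on fill and literal words without bit-shifting, which is the source of WAH's speed.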

  15. How CubeSats contribute to Science and Technology in Astronomy and Astrophysics

    NASA Astrophysics Data System (ADS)

    Cahoy, Kerri Lynn; Douglas, Ewan; Carlton, Ashley; Clark, James; Haughwout, Christian

    2017-01-01

    CubeSats are nanosatellites, spacecraft typically the size of a shoebox or backpack. CubeSats are made up of one or more 10 cm x 10 cm x 10 cm units weighing 1.33 kg (each cube is called a “U”). CubeSats benefit from relatively easy and inexpensive access to space because they are designed to slide into fully enclosed spring-loaded deployer pods before being attached as an auxiliary payload to a larger vehicle, without adding risk to the vehicle or its primary payload(s). Even though CubeSats have inherent resource and aperture limitations due to their small size, over the past fifteen years, researchers and engineers have miniaturized components and subsystems, greatly increasing the capabilities of CubeSats. We discuss how state of the art CubeSats can address both science objectives and technology objectives in Astronomy and Astrophysics. CubeSats can contribute toward science objectives such as cosmic dawn, galactic evolution, stellar evolution, extrasolar planets and interstellar exploration. CubeSats can contribute to understanding how key technologies for larger missions, like detectors, microelectromechanical systems, and integrated optical elements, can not only survive launch and operational environments (which can often be simulated on the ground), but also meet performance specifications over long periods of time in environments that are harder to simulate properly, such as ionizing radiation, the plasma environment, spacecraft charging, and microgravity. CubeSats can also contribute to both science and technology advancements as multi-element space-based platforms that coordinate distributed measurements and use formation flying and large separation baselines to counter their restricted individual apertures.

  16. Achieving Science with CubeSats: Thinking Inside the Box

    NASA Astrophysics Data System (ADS)

    Zurbuchen, Thomas H.; Lal, Bhavya

    2017-01-01

    We present the results of a study conducted by the National Academies of Sciences, Engineering, and Medicine. The study focused on the scientific potential and technological promise of CubeSats. We will first review the growth of the CubeSat platform from an education-focused technology toward a platform of importance for technology development, science, and commercial use, both in the United States and internationally. The use has especially exploded in recent years. For example, of the over 400 CubeSats launched since 2000, more than 80% of all science-focused ones have been launched just in the past four years. Similarly, more than 80% of peer-reviewed papers describing new science based on CubeSat data have been published in the past five years. We will then assess the technological and science promise of CubeSats across space science disciplines, and discuss a subset of priority science goals that can be achieved given the current state of CubeSat capabilities. Many of these goals address targeted science, often in coordination with other spacecraft, or by using sacrificial or high-risk orbits that lead to the demise of the satellite after critical data have been collected. Other goals relate to the use of CubeSats as constellations or swarms, deploying tens to hundreds of CubeSats that function as one distributed array of measurements. Finally, we will summarize our conclusions and recommendations from this study; especially those focused on near-term investment that could improve the capabilities of CubeSats toward increased science and technological return and enable the science communities’ use of CubeSats.

  17. Achieving Science with CubeSats: Thinking Inside the Box

    NASA Astrophysics Data System (ADS)

    Lal, B.; Zurbuchen, T.

    2016-12-01

    In this paper, we present a study conducted by the National Academies of Sciences, Engineering, and Medicine. The study focused on the scientific potential and technological promise of CubeSats. We will first review the growth of the CubeSat platform from an education-focused technology toward a platform of importance for technology development, science, and commercial use, both in the United States and internationally. The use has especially exploded in recent years. For example, of the over 400 CubeSats launched since 2000, more than 80% of all science-focused ones have been launched just in the past four years. Similarly, more than 80% of peer-reviewed papers describing new science based on CubeSat data have been published in the past five years. We will then assess the technological and science promise of CubeSats across space science disciplines, and discuss a subset of priority science goals that can be achieved given the current state of CubeSat capabilities. Many of these goals address targeted science, often in coordination with other spacecraft, or by using sacrificial or high-risk orbits that lead to the demise of the satellite after critical data have been collected. Other goals relate to the use of CubeSats as constellations or swarms, deploying tens to hundreds of CubeSats that function as one distributed array of measurements. Finally, we will summarize our conclusions and recommendations from this study; especially those focused on near-term investment that could improve the capabilities of CubeSats toward increased science and technological return and enable the science communities' use of CubeSats.

  18. IceCube

    Science.gov Websites

    Collected IceCube presentation files, including: “High pT Muons in Cosmic-Ray Air Showers with IceCube”; “IceCube Performance with Artificial Light Sources: the Road to Cascade Analyses”; “Energy Scale Calibration for EHE” (2006); and Thorsten Stetzelberger, “IceCube DAQ Design & Performance” (Nov. 2005).

  19. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
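    The one-unit index uncertainty the patent exploits can be illustrated with a parity-based stand-in. This sketch is not the patented key-pair-table method; it only shows how nudging quantization indices by at most one unit can carry auxiliary bits while changing the reconstruction by at most one quantization step.

```python
import numpy as np

def embed(indices, bits):
    """Force the parity of index i to equal bit i, moving by at most one unit."""
    out = indices.copy()
    for i, bit in enumerate(bits):
        if out[i] % 2 != bit:
            out[i] += 1 if out[i] % 2 == 0 else -1
    return out

def extract(indices, n_bits):
    """Read the auxiliary bits back from the index parities."""
    return [int(indices[i] % 2) for i in range(n_bits)]

rng = np.random.default_rng(1)
quant_indices = rng.integers(-50, 50, size=32)   # e.g. quantized DCT indices
payload = [1, 0, 1, 1, 0, 0, 1, 0]

stego = embed(quant_indices, payload)
assert extract(stego, len(payload)) == payload
assert np.abs(stego - quant_indices).max() <= 1  # at most one unit of change
print(payload, "recovered OK")
```

    Since the altered indices remain ordinary integers, they pass unchanged through the subsequent lossless (entropy-coding) stage, which is what makes this class of embedding compatible with standard lossy pipelines.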

  20. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  1. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images, as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image ...

  2. Intelligent transportation systems data compression using wavelet decomposition technique.

    DOT National Transportation Integrated Search

    2009-12-01

    Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission, and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing. ...
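    A minimal sketch of the approach, assuming a single-level Haar transform and hard thresholding (the report's actual wavelet choice and thresholding rules are not reproduced here): transform the detector signal, keep only the large detail coefficients, and reconstruct.

```python
import numpy as np

def haar_forward(x):
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    dif = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return avg, dif

def haar_inverse(avg, dif):
    x = np.empty(avg.size * 2)
    x[0::2] = (avg + dif) / np.sqrt(2)
    x[1::2] = (avg - dif) / np.sqrt(2)
    return x

t = np.linspace(0, 1, 256)
flow = 1000 + 300 * np.sin(2 * np.pi * 4 * t)   # synthetic traffic-flow signal

avg, dif = haar_forward(flow)
dif[np.abs(dif) < 5.0] = 0.0        # drop small detail coefficients
restored = haar_inverse(avg, dif)

err = np.max(np.abs(restored - flow))
kept = np.count_nonzero(dif) + avg.size
print(f"kept {kept}/{flow.size} coefficients, max error {err:.2f}")
```

    Storing only the nonzero coefficients (plus their positions) gives the compression; the reconstruction error stays bounded by the threshold, which is why wavelet schemes suit archived loop-detector data.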

  3. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
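    The density inference the paper examines can be sketched as a linear fit. Everything below is synthetic and all physical constants and geometry factors are omitted; it only illustrates the m-NLP idea that, under orbital-motion-limited assumptions, the square of the electron saturation current grows linearly with bias voltage, so electron density scales with the square root of the fitted slope.

```python
import numpy as np

# Hypothetical needle bias voltages and synthetic OML-like currents.
biases = np.array([2.5, 4.0, 5.5, 10.0])         # volts (illustrative values)
true_slope = 4.0e-12                             # synthetic, A^2 per volt
currents = np.sqrt(true_slope * (biases + 1.0))  # I^2 linear in bias

# Fit I^2 versus V across the needles; density is proportional to sqrt(slope).
slope, intercept = np.polyfit(biases, currents ** 2, 1)
density_proxy = np.sqrt(slope)                   # proportional to n_e

print(f"fitted slope {slope:.3e}, density proxy {density_proxy:.3e}")
```

    The paper's point is that choices inside this fitting step (which needles, how the slope is formed) can shift the recovered density by 50% or more, which is why the adjustment they propose matters.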

  4. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  5. Structural Stability Assessment of the High Frequency Antenna for Use on the Buccaneer CubeSat in Low Earth Orbit

    DTIC Science & Technology

    2014-05-01

    DSTO-TN-1295. ABSTRACT: The Buccaneer CubeSat will be fitted with a high frequency antenna made from spring steel measuring tape. The geometry ...

  6. Applications of data compression techniques in modal analysis for on-orbit system identification

    NASA Technical Reports Server (NTRS)

    Carlin, Robert A.; Saggio, Frank; Garcia, Ephrahim

    1992-01-01

    Data compression techniques have been investigated for use with modal analysis applications. A redundancy-reduction algorithm was used to compress frequency response functions (FRFs) in order to reduce the amount of disk space necessary to store the data and/or save time in processing it. Tests were performed for both single- and multiple-degree-of-freedom (SDOF and MDOF, respectively) systems, with varying amounts of noise. Analysis was done on both the compressed and uncompressed FRFs using an SDOF Nyquist curve fit as well as the Eigensystem Realization Algorithm. Significant savings were realized with minimal errors incurred by the compression process.
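    A redundancy-reduction pass in this spirit can be sketched as error-bounded point dropping (an illustration under assumed rules, not the paper's algorithm): a frequency-response sample is kept only when linear interpolation through its neighbours would miss it by more than a tolerance, so flat regions collapse and the resonance is kept densely.

```python
import numpy as np

def reduce_redundancy(freq, mag, tol):
    """Keep indices whose values cannot be predicted within tol."""
    keep = [0]
    for i in range(1, len(freq) - 1):
        f0, m0 = freq[keep[-1]], mag[keep[-1]]
        # Predict the current point from the last kept point and the next one.
        pred = m0 + (mag[i + 1] - m0) * (freq[i] - f0) / (freq[i + 1] - f0)
        if abs(mag[i] - pred) > tol:
            keep.append(i)
    keep.append(len(freq) - 1)
    return np.array(keep)

# Synthetic SDOF frequency response function with a resonance near 30 Hz.
freq = np.linspace(0, 100, 1000)
mag = 1.0 / np.abs((2j * np.pi * freq) ** 2
                   + 8j * np.pi * freq
                   + (2 * np.pi * 30) ** 2)

kept = reduce_redundancy(freq, mag, tol=1e-7)
print(f"kept {kept.size} of {freq.size} samples")
```

    Curve fitting (Nyquist or ERA-style) on the reduced set then sees essentially the same resonant behaviour at a fraction of the storage, which matches the paper's finding of large savings with minimal error.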

  7. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
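    The quantization step the invention adapts can be sketched directly. The matrix below is the example JPEG luminance table; the patented contribution, adapting such a matrix to the image via visual masking and error pooling, is not reproduced here.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n))
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k + 1) * i / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()

Q = np.array([                       # JPEG example luminance table
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)

block = 8.0 * np.add.outer(np.arange(8), np.arange(8))   # smooth 8x8 block

coeffs = C @ (block - 128.0) @ C.T        # 2-D DCT of the level-shifted block
indices = np.round(coeffs / Q)            # quantization -> integer indices
restored = C.T @ (indices * Q) @ C + 128.0

print(np.abs(restored - block).max())     # small error for a smooth block
```

    Larger entries in Q discard more of the corresponding coefficient, so shaping Q per image (and per viewing condition) is exactly where the bit rate vs. perceptual error trade-off is controlled.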

  8. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.

  9. Fuzzy associative memories

    NASA Technical Reports Server (NTRS)

    Kosko, Bart

    1991-01-01

    Mappings between fuzzy cubes are discussed. This level of abstraction provides a surprising and fruitful alternative to the propositional and predicate-calculus reasoning techniques used in expert systems. It allows one to reason with sets instead of propositions. Discussed here are fuzzy and neural function estimators, neural vs. fuzzy representation of structured knowledge, fuzzy vector-matrix multiplication, and fuzzy associative memory (FAM) system architecture.

  10. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  11. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.

  12. Nitrogen removal in moving bed sequencing batch reactor using polyurethane foam cubes of various sizes as carrier materials.

    PubMed

    Lim, Jun-Wei; Seng, Chye-Eng; Lim, Poh-Eng; Ng, Si-Ling; Sujari, Amat-Ngilmi Ahmad

    2011-11-01

    The nitrogen-removal performance of moving bed sequencing batch reactors (MBSBRs) containing 8% (v/v) polyurethane (PU) foam cubes as carrier media was investigated in treating low COD/N wastewater. The results indicate that the MBSBR with 8-mL cubes achieved the highest total nitrogen (TN) removal efficiency of 37% during the aeration period, followed by 31%, 24% and 19% for MBSBRs with 27-, 64- and 125-mL cubes, respectively. The increased TN removal in MBSBRs was mainly due to the simultaneous nitrification and denitrification (SND) process, which was verified by batch studies. The relatively lower TN removal in MBSBRs with larger PU foam cubes was attributed to the observation that larger PU foam cubes were not fully covered by attached biomass. Higher concentrations of 8-mL PU foam cubes in batch reactors yielded higher TN removal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. JPL-20180416-INSIGHf-0001-Marco Media Reel 1

    NASA Image and Video Library

    2018-04-16

    Mars Cube One is a Mars flyby mission consisting of two CubeSats that is planned for launch alongside NASA's InSight Mars lander mission. This will be the first interplanetary CubeSat mission. If successful, the CubeSats will relay entry, descent, and landing (EDL) data to Earth during InSight's landing.

  14. Indexing and retrieval of MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.

    1998-04-01

    To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
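    The compressed-domain retrieval idea can be sketched with DC-coefficient thumbnails. This is my own simplified stand-in, not the authors' feature set: each clip is summarized by a histogram of its DC image (readable cheaply from I-frames) and queries are ranked by histogram intersection, with no full decompression.

```python
import numpy as np

rng = np.random.default_rng(3)

def dc_histogram(dc_image, bins=16):
    """Normalized intensity histogram of a DC-coefficient thumbnail."""
    h, _ = np.histogram(dc_image, bins=bins, range=(0, 255))
    return h / h.sum()

def similarity(a, b):
    """Histogram intersection: higher means more similar."""
    return np.minimum(a, b).sum()

# Synthetic DC images standing in for three stored clips (names hypothetical).
database = {
    name: rng.integers(lo, hi, (30, 40))
    for name, (lo, hi) in {"beach": (150, 255),
                           "cave": (0, 80),
                           "street": (60, 200)}.items()
}

query = rng.integers(140, 250, (30, 40))     # a bright scene, like "beach"

scores = {name: similarity(dc_histogram(query), dc_histogram(img))
          for name, img in database.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

    Working on DC thumbnails shrinks each frame by a factor of 64 before any feature computation, which is the kind of saving that makes sub-second retrieval over a whole database feasible.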

  15. A manual carotid compression technique to overcome difficult filter protection device retrieval during carotid artery stenting.

    PubMed

    Nii, Kouhei; Nakai, Kanji; Tsutsumi, Masanori; Aikawa, Hiroshi; Iko, Minoru; Sakamoto, Kimiya; Mitsutake, Takafumi; Eto, Ayumu; Hanada, Hayatsura; Kazekawa, Kiyoshi

    2015-01-01

    We investigated the incidence of embolic protection device retrieval difficulties at carotid artery stenting (CAS) with a closed-cell stent and demonstrated the usefulness of a manual carotid compression assist technique. Between July 2010 and October 2013, we performed 156 CAS procedures using self-expandable closed-cell stents. All procedures were performed with the aid of a filter design embolic protection device. We used FilterWire EZ in 118 procedures and SpiderFX in 38 procedures. The embolic protection device was usually retrieved by the accessory retrieval sheath after CAS. We applied a manual carotid compression technique when it was difficult to navigate the retrieval sheath through the deployed stent. We compared clinical outcomes in patients where simple retrieval was possible with patients where the manual carotid compression assisted technique was used for retrieval. Among the 156 CAS procedures, we encountered 12 (7.7%) where embolic protection device retrieval was hampered at the proximal stent terminus. Our manual carotid compression technique overcame this difficulty without eliciting neurologic events, artery dissection, or stent deformity. In patients undergoing closed-cell stent placement, embolic protection device retrieval difficulties may be encountered at the proximal stent terminus. Manual carotid compression assisted retrieval is an easy, readily available solution to overcome these difficulties. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  16. Digital TV processing system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by an intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard-transformed components of this last vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-coded Gaussian channel. It was shown that substantial gains can be achieved by the combination of video source and channel coding.
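    The Hadamard-domain motion detection idea can be illustrated with a short sketch. This is generic NumPy code, not the actual Shuttle design: the block size, threshold, and scaling are arbitrary choices for illustration.

    ```python
    import numpy as np

    def hadamard(n):
        """Sylvester-construction Hadamard matrix; n must be a power of two."""
        H = np.array([[1.0]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    def hadamard2d(block):
        """2-D Hadamard transform with orthonormal scaling (H H^T = n I)."""
        n = block.shape[0]
        H = hadamard(n)
        return H @ block @ H.T / n

    def moving_blocks(prev, curr, bs=8, thresh=5.0):
        """Flag blocks whose Hadamard-domain change exceeds thresh:
        only these blocks need to be re-transmitted (motion detection)."""
        flags = []
        for i in range(0, prev.shape[0], bs):
            for j in range(0, prev.shape[1], bs):
                d = (hadamard2d(curr[i:i+bs, j:j+bs])
                     - hadamard2d(prev[i:i+bs, j:j+bs]))
                flags.append(np.abs(d).sum() > thresh)
        return flags
    ```

    The variable number of flagged blocks per frame is exactly why the abstract's adaptive rate buffer is needed to produce a fixed channel rate.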

  17. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
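    The filling step can be sketched as follows, using plain Jacobi iteration in place of the patent's multi-grid solver; both converge to the same harmonic interpolant, the multi-grid method just gets there far faster.

    ```python
    import numpy as np

    def fill_laplace(values, mask, iters=500):
        """Fill non-edge pixels by solving Laplace's equation.

        values: 2-D array holding image values at edge pixels (mask==True);
        other entries are ignored. Edge pixels stay fixed while every other
        pixel relaxes toward the average of its four neighbours.
        """
        filled = np.where(mask, values, values[mask].mean())
        for _ in range(iters):
            # four-neighbour average; image borders handled by edge padding
            p = np.pad(filled, 1, mode='edge')
            avg = (p[:-2, 1:-1] + p[2:, 1:-1]
                   + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
            filled = np.where(mask, values, avg)
        return filled
    ```

    The difference array of the patent would then be `image - fill_laplace(...)`, which is small everywhere the image varies smoothly between edges and hence compresses well.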

  18. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding additional complex DWT decomposition levels. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on-board satellites.
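    One way to sketch such a hybrid is a single-level Haar DWT followed by a DCT of the approximation band. This NumPy sketch does not reproduce the paper's exact pipeline: zeroing the smallest DCT coefficients below merely stands in for its zero-padding step, and the Haar wavelet and `keep` parameter are assumptions.

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix."""
        k = np.arange(n)[:, None]
        M = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
        M[0] /= np.sqrt(2)
        return M * np.sqrt(2.0 / n)

    def haar_dwt2(x):
        """Single-level 2-D Haar DWT; returns (LL, (LH, HL, HH))."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # rows: low-pass
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # rows: high-pass
        LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
        LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
        HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
        HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
        return LL, (LH, HL, HH)

    def hybrid_compress(img, keep=8):
        """DWT then DCT of the approximation band; only the `keep` largest
        DCT coefficients survive (a stand-in for quantization/zero-padding)."""
        LL, details = haar_dwt2(img)
        C = dct_matrix(LL.shape[0])
        coeffs = C @ LL @ C.T
        cutoff = np.sort(np.abs(coeffs).ravel())[-keep]
        coeffs[np.abs(coeffs) < cutoff] = 0.0
        return coeffs, details
    ```

    Only one DWT level is used, matching the abstract's goal of avoiding deeper, more complex decompositions; the DCT provides the extra energy compaction instead.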

  19. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  20. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
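    Of the five techniques, block truncation coding is compact enough to sketch directly: the encoder keeps two sample moments and a one-bit-per-pixel mask per block, about 1 bit/pixel plus overhead, consistent with the 1-2 bits/pixel quoted above. Block size and data are illustrative.

    ```python
    import numpy as np

    def btc_encode(block):
        """Block truncation coding: transmit the block mean, the block
        standard deviation, and a 1-bit-per-pixel mask."""
        mean, std = block.mean(), block.std()
        mask = block >= mean
        return mean, std, mask

    def btc_decode(mean, std, mask):
        """Reconstruct two levels chosen so the decoded block preserves
        the first two sample moments (mean and variance) exactly."""
        n, q = mask.size, int(mask.sum())
        if q in (0, n):          # flat block: a single level suffices
            return np.full(mask.shape, mean)
        lo = mean - std * np.sqrt(q / (n - q))
        hi = mean + std * np.sqrt((n - q) / q)
        return np.where(mask, hi, lo)
    ```

    Moment preservation is the defining property of BTC: whatever the block contents, the decoded block has the same mean and standard deviation as the original.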

  1. Massively Clustered CubeSats NCPS Demo Mission

    NASA Technical Reports Server (NTRS)

    Robertson, Glen A.; Young, David; Kim, Tony; Houts, Mike

    2013-01-01

    Technologies under development for the proposed Nuclear Cryogenic Propulsion Stage (NCPS) will require an un-crewed demonstration mission before they can be flight qualified over distances and time frames representative of a crewed Mars mission. In this paper, we describe a Massively Clustered CubeSats platform, possibly comprising hundreds of CubeSats, as the main payload of the NCPS demo mission. This platform would enable a mechanism for cost savings for the demo mission through shared support between NASA and other government agencies as well as leveraged commercial aerospace and academic community involvement. We believe a Massively Clustered CubeSats platform should be an obvious first choice for the NCPS demo mission when one considers that cost and risk of the payload can be spread across many CubeSat customers and that the NCPS demo mission can capitalize on using CubeSats developed by others for its own instrumentation needs. Moreover, a demo mission of the NCPS offers an unprecedented opportunity to invigorate the public on a global scale through direct individual participation coordinated through a web-based collaboration engine. The platform we describe would be capable of delivering CubeSats at various locations along a trajectory toward the primary mission destination, in this case Mars, permitting a variety of potential CubeSat-specific missions. Cameras on various CubeSats can also be used to provide multiple views of the space environment and the NCPS vehicle for video monitoring as well as allow the public to "ride along" as virtual passengers on the mission. This collaborative approach could even initiate a brand new Science, Technology, Engineering and Math (STEM) program for launching student developed CubeSat payloads beyond Low Earth Orbit (LEO) on future deep space technology qualification missions. Keywords: Nuclear Propulsion, NCPS, SLS, Mars, CubeSat.

  2. A Novel Method of Newborn Chest Compression: A Randomized Crossover Simulation Study.

    PubMed

    Smereka, Jacek; Szarpak, Lukasz; Ladny, Jerzy R; Rodriguez-Nunez, Antonio; Ruetzler, Kurt

    2018-01-01

    Objective: To compare a novel two-thumb chest compression technique with standard techniques during newborn resuscitation performed by novice physicians in terms of median depth of chest compressions, degree of full chest recoil, and effective compression efficacy. Patients and Methods: A total of 74 novice physicians with less than 1 year of work experience participated in the study. They performed chest compressions using three techniques: (A) the new two-thumb technique (nTTT), in which chest compressions in an infant are delivered with two thumbs directed at an angle of 90° to the chest while the fingers of both hands are closed in a fist; (B) TFT, in which the rescuer compresses the sternum with the tips of two fingers; (C) TTHT, in which two thumbs are placed over the lower third of the sternum, with the fingers encircling the torso and supporting the back. Results: The median depth of chest compressions was 3.8 (IQR, 3.7-3.9) cm for nTTT, 2.1 (IQR, 1.7-2.5) cm for TFT, and 3.6 (IQR, 3.5-3.8) cm for TTHT. There was a significant difference between nTTT and TFT, and between TTHT and TFT (p < 0.001), for each time interval during resuscitation. The degree of full chest recoil was 93% (IQR, 91-97) for nTTT, 99% (IQR, 96-100) for TFT, and 90% (IQR, 74-91) for TTHT. There was a statistically significant difference in the degree of complete chest relaxation between nTTT and TFT (p < 0.001), between nTTT and TTHT (p = 0.016), and between TFT and TTHT (p < 0.001). Conclusion: The median chest compression depth for nTTT and TTHT is significantly greater than that for TFT. The degree of full chest recoil was highest for TFT, followed by nTTT and TTHT. The effective compression efficiency with nTTT was higher than with TTHT and TFT. In this manikin study, our novel newborn chest compression method provided adequate chest compression depth and degree of full chest recoil, as well as very good effective compression efficiency. Further clinical studies are necessary to confirm these initial results.

  3. NASA Near Earth Network (NEN) Support for Lunar and L1/L2 CubeSats

    NASA Technical Reports Server (NTRS)

    Schaire, Scott; Altunc, Serhat; Wong, Yen; Shelton, Marta; Celeste, Peter; Anderson, Michael; Perrotto, Trish

    2017-01-01

    The NASA Near Earth Network (NEN) consists of globally distributed tracking stations, including NASA, commercial, and partner ground stations, that are strategically located to maximize the coverage provided to a variety of orbital and suborbital missions, including those in LEO, GEO, HEO, lunar and L1/L2 orbits. The NEN's future mission set includes and will continue to include CubeSat missions. The majority of the CubeSat missions destined to fly on EM-1, launching in late 2018, many into a lunar orbit, will communicate with ground-based stations via X-band and will utilize the NASA Jet Propulsion Laboratory (JPL)-developed IRIS radio. The NEN recognizes the important role CubeSats are beginning to play in carrying out NASA's mission and is therefore investigating the modifications needed to provide IRIS radio compatibility. With modification, the NEN could potentially expand support to the EM-1 lunar CubeSats. The NEN could begin providing significant coverage to lunar CubeSat missions utilizing three to four of the NEN's mid-latitude sites. This coverage would supplement coverage provided by the JPL Deep Space Network (DSN). The NEN, with smaller apertures than the DSN, provides the benefit of a larger beamwidth that could be beneficial in the event of uncertain ephemeris data. In order to realize these benefits, the NEN would need to upgrade stations, targeted based on coverage ability and current configuration/ease of upgrade, to ensure compatibility with the IRIS radio. In addition, the NEN is working with CubeSat radio developers to ensure NEN compatibility with alternative CubeSat radios for lunar and L1/L2 CubeSats. The NEN has provided NEN compatibility requirements to several radio developers who are developing radios that offer lower cost and, in some cases, more capabilities with fewer constraints. The NEN is ready to begin supporting CubeSat missions.
The NEN is considering network upgrades to broaden the types of CubeSat missions that can be supported and is supporting both the CubeSat community and radio developers to ensure future CubeSat missions have multiple options when choosing a network for their communications support.

  4. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
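    Double delta coding is, in essence, second-order differencing; the exact scheme in this 1976 report may differ in detail, but the generic idea can be sketched as follows. Along a scan line of correlated pixels, the change of the change between adjacent samples is small and clusters near zero, which is what makes short entropy codes effective.

    ```python
    def double_delta_encode(samples):
        """Second-order differencing: emit the change in the change
        between adjacent samples (small for smooth scan lines)."""
        out, prev, prev_delta = [], 0, 0
        for s in samples:
            delta = s - prev
            out.append(delta - prev_delta)
            prev, prev_delta = s, delta
        return out

    def double_delta_decode(codes):
        """Invert the encoder by accumulating deltas of deltas."""
        samples, prev, prev_delta = [], 0, 0
        for c in codes:
            delta = prev_delta + c
            prev = prev + delta
            samples.append(prev)
            prev_delta = delta
        return samples
    ```

    A background skipping step, as in the abstract, would sit in front of this: runs of unchanged background pixels are flagged and skipped rather than differenced.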

  5. Planar temperature measurement in compressible flows using laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Hollo, Steven D.; Mcdaniel, James C.

    1991-01-01

    A laser-induced iodine fluorescence technique that is suitable for the planar measurement of temperature in cold nonreacting compressible air flows is investigated analytically and demonstrated in a known flow field. The technique is based on the temperature dependence of the broadband fluorescence from iodine excited by the 514-nm line of an argon-ion laser. Temperatures ranging from 165 to 245 K were measured in the calibration flow field. This technique makes complete, spatially resolved surveys of temperature practical in highly three-dimensional, low-temperature compressible flows.

  6. Study of radar pulse compression for high resolution satellite altimetry

    NASA Technical Reports Server (NTRS)

    Dooley, R. P.; Nathanson, F. E.; Brooks, L. W.

    1974-01-01

    Pulse compression techniques are studied which are applicable to a satellite altimeter having a topographic resolution of ±10 cm. A systematic design procedure is used to determine the system parameters. The performance of an optimum, maximum-likelihood processor is analysed, which provides the basis for modifying the standard split-gate tracker to achieve improved performance. Bandwidth considerations lead to the recommendation of a full-deramp STRETCH pulse compression technique followed by an analog filter bank to separate range returns. The implementation of the recommended technique is examined.
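    The full-deramp (STRETCH) idea can be sketched numerically: mixing a delayed chirp echo with a conjugated reference chirp yields a constant-frequency beat tone whose frequency is proportional to the delay, so a spectral analysis (an FFT here, standing in for the analog filter bank) separates range returns. All parameter values below are arbitrary illustrations, not the altimeter's design numbers.

    ```python
    import numpy as np

    fs, T, B = 1e6, 1e-3, 200e3        # sample rate, pulse length, sweep bandwidth
    t = np.arange(int(fs * T)) / fs
    k = B / T                          # chirp rate in Hz/s

    def chirp(delay=0.0):
        """Linear FM pulse delayed by `delay` seconds."""
        return np.exp(1j * np.pi * k * (t - delay) ** 2)

    def deramp_range_bin(delay):
        """Deramp: mix the reference with the conjugated echo (sign
        convention chosen so the beat lands at +k*delay) and locate the
        beat tone; its FFT bin index encodes the target delay (range)."""
        beat = chirp(0.0) * np.conj(chirp(delay))
        spec = np.abs(np.fft.fft(beat))
        return int(np.argmax(spec[: len(spec) // 2]))
    ```

    With these numbers a 50 µs delay produces a 10 kHz beat (k·delay), i.e. FFT bin 10 over the 1000-sample pulse, so the filter-bank resolution sets the achievable range resolution.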

  7. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.
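    Galileo's specific ICT matrix is not reproduced here, but the general construction, replacing the real-valued DCT basis with a scaled integer approximation so the transform can run in integer arithmetic, can be sketched as follows. The scale factor of 32 is an arbitrary illustrative choice.

    ```python
    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II matrix (the transform being approximated)."""
        k = np.arange(n)[:, None]
        M = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
        M[0] /= np.sqrt(2)
        return M * np.sqrt(2.0 / n)

    def integer_cosine_matrix(n=8, scale=32):
        """Round a scaled DCT matrix to integers; Galileo's ICT uses
        specially chosen integers rather than simple rounding."""
        return np.round(dct_matrix(n) * scale).astype(int)

    def ict2d(block, scale=32):
        """2-D integer cosine transform; only integer multiplies are
        needed on board, and the descaling by scale**2 can be folded
        into the quantization step."""
        T = integer_cosine_matrix(block.shape[0], scale)
        return (T @ block @ T.T) / scale**2
    ```

    Because the integer matrix is only an approximation of the DCT, the coefficients differ slightly from the exact transform; the compression-performance question studied in the article is how much that approximation (plus quantization) costs on real Galileo imagery.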

  8. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
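    The rate obtainable by coding each bit position of the fixed-length codewords with its own binary arithmetic coder, which is what bit-wise arithmetic coding approaches when the bits are treated as independent, can be estimated from the per-bit-position binary entropies. A minimal sketch (the codeword assignment and bit widths are illustrative, not the article's exact construction):

    ```python
    import numpy as np

    def bitplane_rate(symbols, width):
        """Estimated rate in bits/sample for coding each bit position of
        width-bit codewords independently: the sum over bit positions of
        the binary entropy of that position's empirical 1-probability."""
        symbols = np.asarray(symbols)
        total = 0.0
        for b in range(width):
            bits = (symbols >> b) & 1
            p = bits.mean()
            if 0 < p < 1:   # a constant bit position costs ~0 bits
                total += -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        return total
    ```

    For a quantized Gaussian or Laplacian source, the high-order bit positions are heavily skewed toward zero, so their entropies are far below 1 bit, which is where the compression comes from even though each position still occupies a full bit in the fixed-length codeword.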

  9. Compression and contact area of anterior strut grafts in spinal instrumentation: a biomechanical study.

    PubMed

    Pizanis, Antonius; Holstein, Jörg H; Vossen, Felix; Burkhardt, Markus; Pohlemann, Tim

    2013-08-26

    Anterior bone grafts are used as struts to reconstruct the anterior column of the spine in kyphosis or following injury. An incomplete fusion can lead to later correction losses and compromise further healing. Despite the different stabilizing techniques that have evolved, from posterior or anterior fixating implants to combined anterior/posterior instrumentation, graft pseudarthrosis rates remain an important concern. Furthermore, the need for additional anterior implant fixation is still controversial. In this bench-top study, we focused on the graft-bone interface under various conditions, using two simulated spinal injury models and common surgical fixation techniques to investigate the effect of implant-mediated compression and contact on the anterior graft. Calf spines were stabilised with posterior internal fixators. Wooden blocks serving as substitutes for strut grafts were impacted using a "pressfit" technique, and pressure-sensitive films were placed at the interface between the vertebral bone and the graft to record the compression force and the contact area with various stabilization techniques. Compression was achieved either with a posterior internal fixator alone or with an additional anterior implant. The importance of concomitant ligament damage was also considered using two simulated injury models: pure compression Magerl/AO fracture type A or rotation/translation fracture type C models. In type A injury models, 1 mm-oversized grafts for impaction grafting provided good compression and fair contact areas that were both markedly increased by the use of additional compressing anterior rods or by shortening the posterior fixator construct. Anterior instrumentation by itself had similar effects. For type C injuries, dramatic differences were observed between the techniques, as there was a net decrease in compression and an inadequate contact on the graft occurred in this model.
    Under these circumstances, both compression and the contact area on the graft could only be maintained at high levels with the use of additional anterior rods. Under experimental conditions, we observed that ligamentous injury following type C fracture has a negative influence on the compression and contact area of anterior interbody bone grafts when only an internal fixator is used for stabilization. Because of the loss of tension banding effects in type C injuries, an additional anterior compressing implant can be beneficial to restore both compression to and contact on the strut graft.

  10. CubeSats for Astrophysics: The Current Perspective

    NASA Astrophysics Data System (ADS)

    Ardila, David R.; Shkolnik, Evgenya; Gorjian, Varoujan

    2017-01-01

    Cubesats are small satellites built to multiples of 1U (1,000 cm³). The 2016 NRC Report “Achieving Science with CubeSats” indicates that between 2013 and 2018 NASA and NSF sponsored 104 CubeSats. Of those, only one is devoted to astrophysics: HaloSat (PI: P. Kaaret), a 6U CubeSat with an X-ray payload to study the hot galactic halo. Despite this paucity of missions, CubeSats have a lot of potential for astrophysics. To assess the science landscape that a CubeSat astrophysics mission may occupy, we consider the following parameters: 1-Wavelength: CubeSats are not competitive in the visible, unless the application (e.g. high-precision photometry) is difficult to do from the ground. Thermal IR science is limited by the lack of low-power miniaturized cryocoolers and by the large number of infrared astrophysical missions launched or planned. In the UV, advances in δ-doping processes result in larger sensitivity with smaller apertures. Commercial X-ray detectors also allow for competitive science. 2-Survey vs. pointed observations: All-sky surveys have been done at most wavelengths from X-rays to far-IR, and CubeSats will not be able to compete in sensitivity with them. CubeSat science should then center on specific objects or object classes. Due to poor attitude control, unresolved photometry is scientifically more promising than extended imaging. 3-Single-epoch vs. time domain: CubeSat apertures cannot compete in sensitivity with big satellites when doing single-epoch observations. However, time-domain astrophysics is an area in which CubeSats can provide very valuable science return. Technologically, CubeSat astrophysics is limited by: 1-Lack of large apertures: The largest-aperture CubeSat launched is ~10 cm, although deployable apertures as large as 20 cm could be fitted to 6U buses. 2-Poor attitude control: State-of-the-art systems have demonstrated jitter of ~10” on timescales of seconds.
Jitter imposes limits on image quality and, coupled with detector errors, limits the S/N. Other technology limitations include the lack of high-bandwidth communication and low-power miniaturized cryocoolers. However, even with today’s technological limitations, astrophysics applications of CubeSats are only limited by our imagination.

  11. Near Earth Network (NEN) CubeSat Communications

    NASA Technical Reports Server (NTRS)

    Schaire, Scott

    2017-01-01

    The NASA Near Earth Network (NEN) consists of globally distributed tracking stations, including NASA, commercial, and partner ground stations, that are strategically located to maximize the coverage provided to a variety of orbital and suborbital missions, including those in LEO (Low Earth Orbit), GEO (Geosynchronous Earth Orbit), HEO (Highly Elliptical Orbit), lunar and L1-L2 orbits. The NEN's future mission set includes and will continue to include CubeSat missions. The first NEN-supported CubeSat mission will be the CubeSat Proximity Operations Demonstration (CPOD), launching into LEO in 2017. The majority of the CubeSat missions destined to fly on EM-1 (Exploration Mission-1), launching in late 2018, many into a lunar orbit, will communicate with ground-based stations via X-band and will utilize the NASA Jet Propulsion Laboratory (JPL)-developed IRIS radio. The NEN recognizes the important role CubeSats are beginning to play in carrying out NASA's mission and is therefore investigating the modifications needed to provide IRIS radio compatibility. With modification, the NEN could potentially expand support to the EM-1 lunar CubeSats. The NEN could begin providing significant coverage to lunar CubeSat missions utilizing three to four of the NEN's mid-latitude sites. This coverage would supplement coverage provided by the JPL Deep Space Network (DSN). The NEN, with smaller apertures than the DSN, provides the benefit of a larger beamwidth that could be beneficial in the event of uncertain ephemeris data. In order to realize these benefits, the NEN would need to upgrade stations, targeted based on coverage ability and current configuration/ease of upgrade, to ensure compatibility with the IRIS radio. In addition, the NEN is working with CubeSat radio developers to ensure NEN compatibility with alternative CubeSat radios for lunar and L1-L2 CubeSats.
The NEN has provided NEN compatibility requirements to several radio developers who are developing radios that offer lower cost and, in some cases, more capabilities with fewer constraints. The NEN is ready to begin supporting CubeSat missions. The NEN is considering network upgrades to broaden the types of CubeSat missions that can be supported and is supporting both the CubeSat community and radio developers to ensure future CubeSat missions have multiple options when choosing a network for their communications support.

  12. NASA Near Earth Network (NEN) Support for Lunar and L1/L2 CubeSats

    NASA Technical Reports Server (NTRS)

    Schaire, Scott H.

    2017-01-01

    The NASA Near Earth Network (NEN) consists of globally distributed tracking stations, including NASA, commercial, and partner ground stations, that are strategically located to maximize the coverage provided to a variety of orbital and suborbital missions, including those in LEO, GEO, HEO, lunar and L1/L2 orbits. The NEN's future mission set includes and will continue to include CubeSat missions. The first NEN-supported CubeSat mission will be the CubeSat Proximity Operations Demonstration (CPOD), launching into low Earth orbit (LEO) in early 2017. The majority of the CubeSat missions destined to fly on EM-1, launching in late 2018, many into a lunar orbit, will communicate with ground-based stations via X-band and will utilize the NASA Jet Propulsion Laboratory (JPL)-developed IRIS radio. The NEN recognizes the important role CubeSats are beginning to play in carrying out NASA's mission and is therefore investigating the modifications needed to provide IRIS radio compatibility. With modification, the NEN could potentially expand support to the EM-1 lunar CubeSats. The NEN could begin providing significant coverage to lunar CubeSat missions utilizing three to four of the NEN's mid-latitude sites. This coverage would supplement coverage provided by the JPL Deep Space Network (DSN). The NEN, with smaller apertures than the DSN, provides the benefit of a larger beamwidth that could be beneficial in the event of uncertain ephemeris data. In order to realize these benefits, the NEN would need to upgrade stations, targeted based on coverage ability and current configuration/ease of upgrade, to ensure compatibility with the IRIS radio. In addition, the NEN is working with CubeSat radio developers to ensure NEN compatibility with alternative CubeSat radios for lunar and L1/L2 CubeSats. The NEN has provided NEN compatibility requirements to several radio developers who are developing radios that offer lower cost and, in some cases, more capabilities with fewer constraints.
The NEN is ready to begin supporting CubeSat missions. The NEN is considering network upgrades to broaden the types of CubeSat missions that can be supported and is supporting both the CubeSat community and radio developers to ensure future CubeSat missions have multiple options when choosing a network for their communications support.

  13. Privacy-preserving data cube for electronic medical records: An experimental evaluation.

    PubMed

    Kim, Soohyung; Lee, Hyukki; Chung, Yon Dohn

    2017-01-01

    The aim of this study is to evaluate the effectiveness and efficiency of privacy-preserving data cubes of electronic medical records (EMRs). An EMR data cube is a complex of EMR statistics that are summarized or aggregated by all possible combinations of attributes. Data cubes are widely utilized for efficient big data analysis and also have great potential for EMR analysis. For safe data analysis without privacy breaches, we must consider the privacy preservation characteristics of the EMR data cube. In this paper, we introduce a design for a privacy-preserving EMR data cube and the anonymization methods needed to achieve data privacy. We further focus on changes in efficiency and effectiveness that are caused by the anonymization process for privacy preservation. Thus, we experimentally evaluate various types of privacy-preserving EMR data cubes using several practical metrics and discuss the applicability of each anonymization method with consideration for the EMR analysis environment. We construct privacy-preserving EMR data cubes from anonymized EMR datasets. A real EMR dataset and demographic dataset are used for the evaluation. There are a large number of anonymization methods to preserve EMR privacy, and the methods are classified into three categories (i.e., global generalization, local generalization, and bucketization) by anonymization rules. According to this classification, three types of privacy-preserving EMR data cubes were constructed for the evaluation. We perform a comparative analysis by measuring the data size, cell overlap, and information loss of the EMR data cubes. Global generalization considerably reduced the size of the EMR data cube and did not cause the data cube cells to overlap, but incurred a large amount of information loss. Local generalization maintained the data size and generated only moderate information loss, but there were cell overlaps that could decrease the search performance. 
Bucketization did not cause cells to overlap and generated little information loss; however, the method considerably inflated the size of the EMR data cubes. The utility of anonymized EMR data cubes varies widely according to the anonymization method, and the applicability of the anonymization method depends on the features of the EMR analysis environment. These findings help analysts adopt the optimal anonymization method in light of the EMR analysis environment and the goal of the EMR analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
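    A minimal sketch of a globally generalized data cube illustrates the construction evaluated above. The attributes, the single age-banding rule, and the toy records are hypothetical; the paper's actual datasets and generalization hierarchies are not reproduced.

    ```python
    from itertools import combinations
    from collections import Counter

    def generalize(record, rules):
        """Global generalization: every value of an attribute is coarsened
        by the same rule (e.g. exact age -> 10-year band) across the dataset."""
        return {a: rules.get(a, lambda v: v)(v) for a, v in record.items()}

    def build_cube(records, attrs):
        """Data cube: counts aggregated over every combination of attributes
        (the 'all possible combinations' of the abstract), from the empty
        grouping (grand total) up to the full attribute set."""
        cube = {}
        for r in range(len(attrs) + 1):
            for group in combinations(attrs, r):
                cube[group] = Counter(
                    tuple(rec[a] for a in group) for rec in records)
        return cube

    # hypothetical EMR records and a single coarsening rule for 'age'
    rules = {"age": lambda v: f"{(v // 10) * 10}-{(v // 10) * 10 + 9}"}
    emr = [{"age": 34, "dx": "flu"},
           {"age": 37, "dx": "flu"},
           {"age": 52, "dx": "copd"}]
    cube = build_cube([generalize(r, rules) for r in emr], ("age", "dx"))
    ```

    Because the same coarsening is applied everywhere, cuboid cells never overlap, mirroring the global-generalization finding above, at the cost of the information lost inside each band.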

  14. CubeSat Integration into the Space Situational Awareness Architecture

    NASA Astrophysics Data System (ADS)

    Morris, K.; Wolfson, M.; Brown, J.

    2013-09-01

    Lockheed Martin Space Systems Company has recently been involved in developing GEO Space Situational Awareness architectures, which allows insights into how cubesats can augment the current national systems. One hole that was identified in the current architecture is the need for timelier metric track observations to aid in the chain of custody. Obtaining observations of objects at GEO can be supported by CubeSats. These types of small satellites are increasingly being built and flown by government agencies like NASA and SMDC. CubeSats are generally mass- and power-constrained, allowing for only small payloads that cannot typically mimic traditional flight capability. CubeSats do not have high reliability, and care must be taken when choosing mission orbits to prevent creating more debris. However, due to the low costs, short development timelines, and available hardware, CubeSats can supply very valuable benefits to these complex missions, affordably. For example, utilizing CubeSats for advanced focal plane demonstrations to support technology insertion into the next generation of situational awareness sensors can help to lower risks before the complex sensors are developed. CubeSats can augment the planned ground and space based assets by creating larger constellations with more access to areas of interest. To aid in maintaining custody of objects, a CubeSat constellation at 500 km above GEO would provide increased point-of-light tracking that can augment the ground SSA assets. Key features of the CubeSat include a small visible camera looking along the GEO belt, a small propulsion system that allows phasing between CubeSats, and an image processor to reduce the data sent to the ground. An elegant communications network will also be used to provide commands to and data from multiple CubeSats. Additional CubeSats can be deployed on GSO launches or through ride shares to GEO, replenishing or adding to the constellation with each launch.
Each CubeSat would take images of the GEO belt, process out the stars, and then downlink the data to the ground. This data can then be combined with the existing metric track data to enhance coverage and timeliness. With the current capability of CubeSats and their payloads, along with the launch constraints, the near-term focus is to integrate into existing architectures by reducing technology risks, understanding unique phenomenology, and augmenting mission collection capability. Understanding the near-term benefits of utilizing CubeSats will better inform the SSA mission developers on how to integrate CubeSats into the next generation of architectures from the start.

  15. A new technique for ordering asymmetrical three-dimensional data sets in ecology.

    PubMed

    Pavoine, Sandrine; Blondel, Jacques; Baguette, Michel; Chessel, Daniel

    2007-02-01

The aim of this paper is to tackle the problem that arises from asymmetrical data cubes formed by two crossed factors fixed by the experimenter (factor A and factor B, e.g., sites and dates) and a factor which is not controlled for (the species). The entries of this cube are species densities. We approach this kind of data through the comparison of patterns, that is to say by analyzing first the effect of factor B on the species-factor A pattern, and second the effect of factor A on the species-factor B pattern. The analysis of patterns instead of individual responses requires a correspondence analysis. We use a method we call Foucart's correspondence analysis to coordinate the correspondence analyses of several independent matrices of species x factor A (respectively B) type, corresponding to each modality of factor B (respectively A). Such coordination makes it possible to evaluate the effect of factor B (respectively A) on the species-factor A (respectively B) pattern. The results obtained by such a procedure are much more insightful than those resulting from a classical single correspondence analysis applied to the global matrix obtained by simply unrolling the data cube, for example by juxtaposing the individual species x factor A matrices across the modalities of factor B. This is because a single global correspondence analysis combines three factor effects (factor A, factor B, and the factor A x factor B interaction) in a way that cannot be disentangled from the factorial maps, whereas the application of Foucart's correspondence analysis clearly separates two distinct issues. Using two data sets, we illustrate that this technique proves particularly powerful in analyses of ecological convergence, which involve several distinct data sets, and in analyses of spatiotemporal variations of species distributions.
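As an illustration (a sketch, not code from the paper), the core of a single correspondence analysis can be written as an SVD of the standardized residuals of a contingency table; Foucart's coordination would then reuse the axes of a reference CA (e.g. of the averaged table) to project each per-modality table onto common axes. The table below is made up.

```python
import numpy as np

def correspondence_analysis(N):
    """Correspondence analysis of a contingency table N via SVD of
    the matrix of standardized residuals."""
    P = N / N.sum()                        # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * s / np.sqrt(r)[:, None]     # principal row coordinates
    return rows, s

# Hypothetical species x site table for one date (one modality of factor B).
N = np.array([[10.0, 2.0, 1.0],
              [3.0, 8.0, 2.0],
              [1.0, 1.0, 9.0]])
rows, s = correspondence_analysis(N)
# Foucart's coordination (sketched): compute the axes once on a reference
# (e.g. averaged) table, then project each date's table onto those axes
# so that the species-site patterns of different dates become comparable.
```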

  16. GPU-based optical propagation simulator of a laser-processed crystal block for the X'tal cube PET detector.

    PubMed

    Ogata, Yuma; Ohnishi, Takashi; Moriya, Takahiro; Inadama, Naoko; Nishikido, Fumihiko; Yoshida, Eiji; Murayama, Hideo; Yamaya, Taiga; Haneishi, Hideaki

    2014-01-01

The X'tal cube is a next-generation DOI detector for PET that we are developing to offer higher resolution and higher sensitivity than are available with present detectors. It is constructed from a cubic monolithic scintillation crystal and silicon photomultipliers coupled at various positions on the six surfaces of the cube. A laser-processing technique is applied to produce 3D optical boundaries composed of micro-cracks inside the monolithic scintillator crystal. The current configuration is based on an empirical trial of a laser-processed boundary, and there is room to improve the spatial resolution by optimizing the layout of the laser-processed boundaries. In fact, the laser-processing technique offers great freedom in setting the parameters of the boundary, such as size, pitch, and angle, and computer simulation can effectively optimize such parameters. In this study, to design optical characteristics properly for the laser-processed crystal, we developed a Monte Carlo simulator which can model arbitrary arrangements of laser-processed optical boundaries (LPBs). The optical characteristics of the LPBs were measured by use of a setup with a laser and a photodiode, and then modeled in the simulator. The accuracy of the simulator was confirmed by comparison of position histograms obtained from the simulation and from experiments with a prototype detector composed of a cubic LYSO monolithic crystal with 6 × 6 × 6 segments and multi-pixel photon counters. Furthermore, the simulator was accelerated by parallel computing with general-purpose computing on a graphics processing unit. The calculation speed was about 400 times faster than that with a CPU.

  17. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes, too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy; that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. To increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method based on differential pulse code modulation (DPCM) is presented.
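A minimal sketch of the DPCM idea (not the authors' implementation): storing first-order differences decorrelates neighbouring pixels, so the residuals are small and entropy-code well, and the original values are recovered exactly by cumulative summation, keeping the scheme lossless.

```python
def dpcm_encode(pixels):
    """Encode a scanline as its first sample followed by successive
    differences (lossless first-order DPCM)."""
    residuals = [pixels[0]]
    for i in range(1, len(pixels)):
        residuals.append(pixels[i] - pixels[i - 1])
    return residuals

def dpcm_decode(residuals):
    """Invert the encoding by cumulative summation."""
    pixels = [residuals[0]]
    for r in residuals[1:]:
        pixels.append(pixels[-1] + r)
    return pixels

scanline = [100, 102, 101, 105, 110, 110]
encoded = dpcm_encode(scanline)   # [100, 2, -1, 4, 5, 0]
assert dpcm_decode(encoded) == scanline   # perfectly reconstructed
```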

  18. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates become necessary. Boundaries and edges in the tissue structures are vital for the detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  19. Development of Novel Integrated Antennas for CubeSats

    NASA Technical Reports Server (NTRS)

    Jackson, David; Fink, Patrick W.; Martinez, Andres; Petro, Andrew

    2015-01-01

    The Development of Novel Integrated Antennas for CubeSats project is directed at the development of novel antennas for CubeSats to replace the bulky and obtrusive antennas (e.g., whip antennas) that are typically used. The integrated antennas will not require mechanical deployment and thus will allow future CubeSats to avoid potential mechanical problems and therefore improve mission reliability. Furthermore, the integrated antennas will have improved functionality and performance, such as circular polarization for improved link performance, compared with the conventional antennas currently used on CubeSats.

  20. Impact of Micro Silica on the properties of High Volume Fly Ash Concrete (HVFA)

    NASA Astrophysics Data System (ADS)

    Sripragadeesh, R.; Ramakrishnan, K.; Pugazhmani, G.; Ramasundram, S.; Muthu, D.; Venkatasubramanian, C.

    2017-07-01

To overcome the challenges of present-day construction, concrete made with various blends of Ordinary Portland Cement (OPC) and diverse mineral admixtures is a wise choice for engineering construction. Mineral admixtures such as Ground Granulated Blast Furnace Slag (GGBS), Metakaolin (MK), Fly Ash (FA) and Silica Fume (SF) are used as Supplementary Cementitious Materials (SCM) in binary and ternary blended cement systems to enhance mechanical and durability properties. The effect of different replacement levels of OPC in M25 grade concrete with FA + SF in a ternary cement blend on the strength characteristics and beam behaviour was investigated. The OPC was partially replaced (by weight) with different combinations of SF (5%, 10%, 15%, 20% and 25%) and FA at 50% (High Volume Fly Ash, HVFA). The amount of FA is kept constant at 50% for all combinations. Compressive strength and tensile strength tests on cube and cylinder specimens at 7 and 28 days were carried out. Based on the compressive strength results, the optimum mix proportion was identified and the flexural behaviour of the optimum mix was studied. It was found that all the mixes (FA + SF) showed improvement in compressive strength over the control mix, and the mix with 50% FA + 10% SF showed a 20% increase over the control mix. The tensile strength also increased over the control mix. Flexural behaviour likewise showed a significant improvement in the mix with FA and SF over the control mix.

  1. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images which will be accessed by users across computer networks. Accessing the image data at its full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is the most appropriate for this application since decompression of VQ-compressed images is a table-lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data needs expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ, which involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
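To illustrate why VQ decompression is cheap (a toy sketch with a made-up codebook, not the system described here): encoding searches for the nearest code vector, but decoding is a pure table lookup. In practice the codebook would be trained, e.g. with the LBG/k-means algorithm.

```python
import numpy as np

# Toy codebook of four 2-dimensional code vectors (hypothetical values).
codebook = np.array([[0, 0], [10, 10], [100, 100], [200, 200]], dtype=float)

def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest code vector
    (the expensive step, done once at compression time)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Decompression is just a table lookup, as the abstract notes."""
    return codebook[indices]

blocks = np.array([[1, 2], [98, 103], [205, 198]], dtype=float)
idx = vq_encode(blocks, codebook)      # indices 0, 2, 3
recon = vq_decode(idx, codebook)       # approximate reconstruction
```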

  2. Two-CubeSat mission to study the Didymos asteroid system

    NASA Astrophysics Data System (ADS)

    Wahlund, J.-E.; Vinterhav, E.; Trigo-Rodríguez, J. M.; Hallmann, M.; Barabash, S.; Ivchenko, N.

    2015-10-01

Amid the growing interest in asteroid impact hazard mitigation in our community, the Asteroid Impact & Deflection Assessment (AIDA) mission will be the first space experiment to use a kinetic impactor to demonstrate its capability as a reliable deflection system [1]. As a part of the AIDA mission, we have proposed a set of two three-axis stabilized 3U CubeSats (with up to 5 science sensors) to simultaneously rendezvous at close range (<500 m) with both the primary and the secondary component of the Didymos asteroid system. The CubeSats will be hosted on the ESA component of the AIDA mission, the monitoring satellite AIM (Asteroid Impact Mission). The CubeSats will characterise the magnetization, the main bulk chemical composition and the presence of volatiles, as well as perform superresolution surface imaging of the Didymos components. The CubeSats will also support the characterisation of the plume resulting from the DART impact (Double Asteroid Redirection Test, the NASA component of the AIDA mission) at much closer range than the AIM main spacecraft, and provide imaging, composition, and temperature of the plume material. At the end of the mission, the two CubeSats can optionally land on one of the asteroids for continued science operation. The science sensors consist of a dual fluxgate magnetometer (MAG), one miniaturized volatile composition analyser (VCA), a narrow angle camera (NAC) and a Video Emission Spectrometer (VES) with a diffraction grating allowing a sequential chemical study of the emission spectra associated with the impact flare and the expanding plume. Consequently, the different envisioned instruments onboard the CubeSats can provide significant insight into the complex response of asteroid materials during impacts, which has been theoretically studied using different techniques [2]. The two CubeSats will remain stowed in CubeSat dispensers aboard the main AIM spacecraft. 
They will be deployed and commissioned before the AIM impactor reaches the secondary and will record the impact event from a closer vantage point than the main spacecraft. The two CubeSats are equipped with relative navigation systems capable of estimating the spacecraft position relative to the asteroids and a propulsion system that allows them to operate close to the asteroid bodies. The two CubeSats will rely on mapping data relayed via the AIM main spacecraft but operate autonomously and individually based on schedules and navigation maps uploaded from the ground. AIDA's target is the binary Apollo asteroid 65803 Didymos, which is also catalogued as a Potentially Hazardous Asteroid (PHA) because it experiences close approaches to Earth. Didymos' primary has a diameter of ˜800 meters and the secondary is ˜150 m across. The two bodies are separated by about 1.1 km [3]. The rotation period and asymmetry of the secondary object are unknown, and it might be tidally locked to the larger primary body. At least the primary body is expected to be associated with ordinary chondrite material, consisting mostly of silicates and metal, but an earlier Xk classification suggested a rubble-pile type with a large amount of volatile content. The spectral class of the secondary companion is unknown, but the total mass of the system suggests that it could be of a similar class. Detailed empirical information on the physical properties of the Didymos asteroid system, in particular the magnetic field, the (mineralogical) surface composition, the internal composition via the bulk density, and the ages of surface units through crater counts and other morphological surface features, is valuable in order to make progress in the asteroid field of science. Furthermore, the periodic effect of such a close dynamic system on the presence and temporal displacement of the surface regolith is unknown, and could be followed using close-up video systems provided by the CubeSats. In conclusion, the proposed two CubeSats as part of the AIDA mission can contribute significantly, since they can monitor the Didymos asteroid components at a very close range of around one hundred meters, and at the same time monitor in situ an impact plume when it is created. (EPSC Abstracts Vol. 10, EPSC2015-698, European Planetary Science Congress 2015)

  3. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
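A toy sketch of the quantization-with-dithering idea (assumed parameters, not the FITS tiled-image code): pixels are divided by a step q and rounded to integers; subtractive dithering offsets each pixel by a reproducible uniform random value before rounding and removes the same offset on restoration, decorrelating the quantization error while keeping it bounded by q/2.

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.normal(loc=1000.0, scale=5.0, size=10000).astype(np.float64)

q = 0.5  # quantization step (hypothetical; in practice tied to the image noise)
# One uniform offset per pixel; in a real pipeline it is regenerated
# from a stored seed so decompression can reproduce it exactly.
dither = rng.random(pixels.size)

# Compress: scale, apply subtractive dither, round to integers.
ints = np.round(pixels / q - dither).astype(np.int32)
# Restore: add the same dither back and rescale.
restored = (ints + dither) * q

max_err = np.abs(restored - pixels).max()  # bounded by q/2
```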

  4. EarthCube - A Community-led, Interdisciplinary Collaboration for Geoscience Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Dick, Cindy; Allison, Lee

    2016-04-01

The US NSF EarthCube Test Enterprise Governance Project completed its initial two-year-long process to engage the community and test a demonstration governing organization, with the goal of facilitating a community-led process for designing and developing a geoscience cyberinfrastructure. The conclusions are that EarthCube is viable, has engaged a broad spectrum of end-users and contributors, and has begun to foster a sense of urgency around the importance of open and shared data. Levels of trust among participants are growing. At the same time, the active participants in EarthCube represent a very small subset of the larger population of geoscientists. Results from Stage I of this project have informed NSF decisions on the direction of the EarthCube program. The overall tone of EarthCube events has had a constructive, problem-solving orientation. The technical and organizational elements of EarthCube are poised to support a functional infrastructure for the geosciences community. The process for establishing shared technological standards has made notable progress, but there is a continuing need to expand technological and cultural alignment. Increasing emphasis is being given to the interdependencies among EarthCube-funded projects. The newly developed EarthCube Technology Plan highlights important progress in this area by five working groups focusing on: 1. use cases; 2. funded-project gap analysis; 3. testbed development; 4. standards; and 5. architecture. The EarthCube governance is implementing processes to facilitate community convergence on a system architecture, which is expected to emerge naturally from a set of data principles, user requirements, science drivers, technology capabilities, and domain needs.

  5. Linking Humans to Data: Designing an Enterprise Architecture for EarthCube

    NASA Astrophysics Data System (ADS)

    Xu, C.; Yang, C.; Meyer, C. B.

    2013-12-01

The National Science Foundation (NSF)'s EarthCube is a strategic initiative towards a grand enterprise that holistically incorporates different geoscience research domains. EarthCube as envisioned by NSF is a community-guided cyberinfrastructure (NSF 2011). The design of an EarthCube enterprise architecture (EA) offers a vision to harmonize processes between the operations of EarthCube and its information technology foundation, the geospatial cyberinfrastructure (Yang et al. 2010). We envision these processes as linking humans to data. We report here on fundamental ideas that would ultimately materialize as a conceptual design of the EarthCube EA. EarthCube can be viewed as a meta-science that seeks to advance knowledge of the Earth through cross-disciplinary connections made using conventional domain-based earth science research. In order to build capacity that enables crossing disciplinary chasms, a key step is to identify the cornerstones of the envisioned enterprise architecture. Human and data inputs are the two key factors in the success of EarthCube (NSF 2011), based upon which three hypotheses have been made: 1) cross-disciplinary collaboration has to be achieved through data sharing; 2) disciplinary differences need to be articulated and captured in both computer- and human-understandable formats; 3) human intervention is crucial for crossing the disciplinary chasms. We have selected the Federal Enterprise Architecture Framework (FEAF, CIO Council 2013) as the baseline for the envisioned EarthCube EA, noting that FEAF's deficiencies can be remedied with inputs from three other popular EA frameworks. This presentation reports the latest on the conceptual design of an enterprise architecture in support of EarthCube.

  6. Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.

    2015-08-01

Rubble mound breakwater maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds of the emerged part of the structure that can be modelled to make the data more useful and easier to handle. This work introduces a methodology for the automatic modelling of breakwaters with cube-shaped armour units. The algorithm is divided into three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. Around 56 % of the physical cubes are detected in two of the point clouds, and 32 % in the third. Accuracy assessment is done by comparison with manually drawn cubes, calculating the differences between the vertices; the error ranges between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s, increasing with the number of cubes and the requirements of collision detection.
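The geometry behind the cube-reconstruction step can be sketched as follows (made-up planes, not the authors' code): a cube vertex lies at the intersection of three mutually perpendicular segmented planes, each written as n · x = d, which reduces to a 3x3 linear solve.

```python
import numpy as np

def plane_intersection(normals, distances):
    """Vertex at the intersection of three planes n_i . x = d_i,
    obtained by solving the 3x3 linear system N x = d."""
    return np.linalg.solve(np.asarray(normals, dtype=float),
                           np.asarray(distances, dtype=float))

# Hypothetical segmented planes for one corner of a unit cube:
# the planes x = 1, y = 1 and z = 1.
n = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
d = [1.0, 1.0, 1.0]
vertex = plane_intersection(n, d)  # the corner (1, 1, 1)
```

With the cube edge length known, the remaining seven vertices follow from this corner and the three plane normals.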

  7. Reply to ‘No correction for the light propagation within the cube: Comment on Relativistic theory of the falling cube gravimeter’

    NASA Astrophysics Data System (ADS)

    Ashby, Neil

    2018-06-01

    The comment (Nagornyi 2018 Metrologia) claims that, notwithstanding the conclusions stated in the paper Relativistic theory of the falling cube gravimeter (Ashby 2008 Metrologia 55 1–10), there is no need to consider the dimensions or refractive index of the cube in fitting data from falling cube absolute gravimeters; additional questions are raised about matching quartic polynomials while determining only three quantities. The comment also suggests errors were made in Ashby (2008 Metrologia 55 1–10) while implementing the fitting routines on which the conclusions were based. The main contention of the comment is shown to be invalid because retarded time was not properly used in constructing a fictitious cube position. Such a fictitious position, fixed relative to the falling cube, is derived and shown to be dependent on cube dimensions and refractive index. An example is given showing how in the present context, polynomials of fourth order can be effectively matched by determining only three quantities, and a new compact characterization of the interference signal arriving at the detector is given. Work of the U.S. government, not subject to copyright.

  8. Survey on the implementation and reliability of CubeSat electrical bus interfaces

    NASA Astrophysics Data System (ADS)

    Bouwmeester, Jasper; Langer, Martin; Gill, Eberhard

    2017-06-01

This paper provides results and conclusions from a survey on the implementation and reliability aspects of CubeSat bus interfaces, with an emphasis on the data bus and power distribution, and provides recommendations for a future CubeSat bus standard. The survey is based on a literature study and a questionnaire covering 60 launched CubeSats and 44 CubeSats yet to be launched. It is found that the bus interfaces are not the main driver of mission failures. However, it is concluded that the Inter-Integrated Circuit (I2C) data bus, as implemented in a great majority of CubeSats, caused some catastrophic satellite failures and a large number of bus lockups. The power distribution may lead to catastrophic failures if the power lines are not protected against overcurrent. A connector and wiring standard widely implemented in CubeSats is based on the PC/104 standard; most participants find the 104-pin connector of this standard too large. For a future CubeSat bus interface standard, it is recommended to implement a reliable data bus, power distribution with overcurrent protection, and a wiring harness with smaller connectors than PC/104.

  9. Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-01-01

    Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. Micro-architectures of such biomaterials determine their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. It was found that analytical solutions and numerical results show a very good agreement particularly for smaller values of apparent density. The elastic moduli predicted by analytical and numerical models were in very good agreement with experimental observations too. While in excellent agreement with each other, analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero and infinity, the rhombicuboctahedron unit cell respectively approaches the octahedron (or truncated cube) and cube unit cells. 
For those limits, the analytical solutions presented here were found to approach the analytical solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the solutions obtained for several other types of porous biomaterials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  11. GreenCube and RocketCube: Low-Resource Sensorcraft for Atmospheric and Ionospheric Science

    NASA Astrophysics Data System (ADS)

    Bracikowski, P. J.; Lynch, K. A.; Slagle, A. K.; Fagin, M. H.; Currey, S. R.; Siddiqui, M. U.

    2009-12-01

In situ atmospheric and ionospheric studies benefit greatly from the ability to separate variations in space from variations in time. Arrays of many probes are one method of doing this, but because of the technical character and expense of developing large arrays, probe arrays have so far been the domain of well-funded science missions. CubeSats and low-resource craft ("picosats") are an avenue for bringing array-based studies of the atmosphere and ionosphere into the mainstream. The Lynch Rocket Lab at Dartmouth College is attempting to develop the instruments, experience, and heritage to implement arrays of many low-resource sensorcraft while doing worthwhile science in the development process. We are working on two CubeSat projects to reach this goal: GreenCube for atmospheric studies and RocketCube for ionospheric studies. GreenCube is an undergraduate student-directed, high-altitude balloon-borne 3U CubeSat. GreenCube I was a bus, telemetry, and mechanical-system development project and flew in the fall of 2008. The flight was successfully recovered and tracked over its 97 km range and through its 29 km altitude rise. GreenCube I carried six thermal housekeeping sensors, a GPS, a magnetometer, and a HAM radio telemetry system with a reporting rate of once every 30 seconds. The velocity profile obtained from the GPS data implies the presence of atmospheric gravity waves during the flight. GreenCube II flew in August 2009 with the science goal of detecting atmospheric gravity waves over the White Mountains of New Hampshire. Two balloons with identical payloads were released 90 seconds apart to make two-point observations. Each payload carried a magnetometer, five thermistors for ambient temperature readings, a GPS, and an amateur radio telemetry system with a 7-second reporting cadence. A vertically oriented video camera on one payload and a horizontally oriented video camera on the other recorded the characteristics of gravity waves in the nearby clouds. 
We expect to be able to detect atmospheric gravity waves from the GPS-derived position and velocity of the two balloons and the ambient temperature profiles. Preliminary analysis of the temperature data shows indications of atmospheric gravity waves. RocketCube is a graduate student-designed low-resource sensorcraft development project being designed for future ionospheric multi-point missions. The FPGA-based bus system, based on GreenCube’s systems, will be able to control and digitize analog data from any low voltage instrument and telemeter that data. RocketCube contains a GPS and high-resolution magnetometer for position and orientation information. The Lynch Rocket Lab's initial interest in developing RocketCube is to investigate the k-spectrum of density irregularities in the auroral ionosphere. To this end, RocketCube will test a new Petite retarding potential analyzer Ion Probe (PIP) for examining subsonic and supersonic thermal ion populations in the ionosphere. The tentatively planned launch will be from a Wallops Flight Facility sounding rocket test flight in 2011. RocketCube serves as a step toward a scientific auroral sounding rocket mission that will feature an array of subpayloads to study the auroral ionosphere.

  12. Design and comparative performance analysis of different chirping profiles of tanh apodized fiber Bragg grating and comparison with the dispersion compensation fiber for long-haul transmission system

    NASA Astrophysics Data System (ADS)

    Dar, Aasif Bashir; Jha, Rakesh Kumar

    2017-03-01

    Various dispersion compensation units are presented and evaluated in this paper. These units include dispersion compensation fiber (DCF), DCF merged with fiber Bragg grating (FBG) (joint technique), and linear, square root, and cube root chirped tanh apodized FBGs. For the performance evaluation, a 10 Gb/s NRZ transmission system over 100-km-long single-mode fiber is used. The three chirped FBGs are optimized individually to yield pulse width reduction percentages (PWRP) of 86.66%, 79.96%, and 62.42% for the linear, square root, and cube root chirp, respectively. The DCF and the joint technique provide remarkable PWRPs of 94.45% and 96.96%, respectively. The performance of the optimized linear chirped tanh apodized FBG and the DCF is compared for a long-haul transmission system on the basis of the quality factor of the received signal. For both systems, the maximum transmission distance is calculated such that the quality factor at the receiver is ≥ 6; the results show that the performance of the FBG is comparable to that of the DCF, with the advantages of very low cost, small size, and reduced nonlinear effects.
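    As a quick illustration of the pulse-width metric used above, the following sketch assumes PWRP is simply the percentage reduction from the dispersion-broadened pulse width to the compensated pulse width; the widths used are hypothetical, not the paper's measurements:

```python
def pwrp(broadened_width_ps, compensated_width_ps):
    """Percentage by which compensation reduces the dispersion-broadened pulse width."""
    return (broadened_width_ps - compensated_width_ps) / broadened_width_ps * 100.0

# Hypothetical widths in picoseconds (not the paper's measurements):
print(round(pwrp(300.0, 40.0), 2))  # 86.67
```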

  13. Unsupervised learning of structure in spectroscopic cubes

    NASA Astrophysics Data System (ADS)

    Araya, M.; Mendoza, M.; Solar, M.; Mardones, D.; Bayo, A.

    2018-07-01

    We consider the problem of analyzing the structure of spectroscopic cubes using unsupervised machine learning techniques. We propose representing the target's signal as a homogeneous set of volumes through an iterative algorithm that separates the structured emission from the background while not overestimating the flux. Besides verifying some basic theoretical properties, the algorithm is designed to be tuned by domain experts, because its parameters have meaningful values in the astronomical context. Nevertheless, we propose a heuristic to automatically estimate the signal-to-noise ratio parameter of the algorithm directly from data. The resulting lightweight set of samples (≤ 1% of the original data) offers several advantages. For instance, it becomes statistically sound and computationally inexpensive to apply well-established pattern recognition and machine learning techniques, such as clustering and dimensionality reduction algorithms. We use ALMA science verification data to validate our method and present examples of the operations that can be performed using the proposed representation. Even though this approach is focused on providing faster and better analysis tools for the end-user astronomer, it also opens the possibility of content-aware data discovery by applying our algorithm to big data.
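    The separation of structured emission from background described above can be sketched, in heavily simplified form, as thresholding against an estimated noise level. This is only an illustration of the general idea on synthetic data, not the authors' iterative algorithm; the snr parameter stands in for their signal-to-noise ratio setting:

```python
import numpy as np

def emission_mask(cube, snr=3.0):
    """Boolean mask of voxels whose flux exceeds snr times an estimated noise RMS."""
    noise_rms = np.std(cube[cube < np.median(cube)])   # crude background noise estimate
    return cube > snr * noise_rms

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, size=(16, 16, 32))         # pure-noise synthetic cube
cube[4:8, 4:8, 10:20] += 10.0                          # inject a block of "emission"
mask = emission_mask(cube, snr=5.0)
print(int(mask.sum()))  # roughly the 4 * 4 * 10 = 160 injected voxels
```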

  14. Cluster analysis in systems of magnetic spheres and cubes

    NASA Astrophysics Data System (ADS)

    Pyanzina, E. S.; Gudkova, A. V.; Donaldson, J. G.; Kantorovich, S. S.

    2017-06-01

    In the present work we use molecular dynamics simulations and graph-theory-based cluster analysis to compare self-assembly in systems of magnetic spheres and of magnetic cubes whose dipole moment is oriented along the side of the cube, in the [001] crystallographic direction. We show that under the same conditions cubes aggregate far less than their spherical counterparts. This difference can be explained in terms of the volume of phase space in which bond formation is thermodynamically advantageous: this volume is much larger for a dipolar sphere than for a dipolar cube.
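    The graph-theory-based cluster analysis mentioned above can be pictured with a minimal sketch: particles become graph nodes, a "bond" is an edge between particles closer than a cutoff, and clusters are the connected components. The plain distance criterion below is an assumption made for illustration; the actual bond definition used in such studies is typically energy-based:

```python
import numpy as np

def cluster_sizes(positions, cutoff):
    """Connected-component cluster sizes, with bonds defined by a distance cutoff."""
    n = len(positions)
    parent = list(range(n))

    def find(i):                        # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):                  # bond every pair closer than the cutoff
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < cutoff:
                parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    return sorted((roots.count(r) for r in set(roots)), reverse=True)

# Two well-separated triples plus one isolated particle:
pos = np.array([[0, 0, 0], [0.9, 0, 0], [0, 0.9, 0],
                [10, 10, 10], [10.9, 10, 10], [10, 10.9, 10],
                [50, 50, 50]], dtype=float)
print(cluster_sizes(pos, cutoff=1.0))  # [3, 3, 1]
```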

  15. Reflectance Hyperspectral Imaging for Investigation of Works of Art: Old Master Paintings and Illuminated Manuscripts.

    PubMed

    Cucci, Costanza; Delaney, John K; Picollo, Marcello

    2016-10-18

    Diffuse reflectance hyperspectral imaging, or reflectance imaging spectroscopy, is a sophisticated technique that enables the capture of hundreds of images in contiguous narrow spectral bands (bandwidth < 10 nm), typically in the visible (Vis, 400-750 nm) and the near-infrared (NIR, 750-2500 nm) regions. This sequence of images provides a data set called an image-cube or file-cube. Two dimensions of the image-cube are the spatial dimensions of the scene, and the third dimension is the wavelength. In this way, each spatial pixel in the image has an associated reflectance spectrum. This "big data" image-cube allows artists' materials to be mined and their distribution mapped across the surface of a work of art. Reflectance hyperspectral imaging, introduced in the 1980s by Goetz and co-workers, led to a revolution in the field of remote sensing of the earth and near planets (Goetz, F. H.; Vane, G.; Solomon, B. N.; Rock, N. Imaging Spectrometry for Earth Remote Sensing. Science 1985, 228, 1147-1152). In the subsequent decades, thanks to rapid advances in solid-state sensor technology, reflectance hyperspectral imaging, once only available to large government laboratories, was extended to new fields of application, such as monitoring agri-foods, pharmaceutical products, the environment, and cultural heritage. In the 2000s, the potential of this noninvasive technology for the study of artworks became evident, and consequently the methodology is becoming more widely used in the art conservation science field. Typically, hyperspectral reflectance image-cubes contain millions of spectra. Many of these spectra are similar, making reduction of the data set size an important first step. Thus, image-processing tools based on multivariate techniques, such as principal component analysis (PCA), automated classification methods, or expert knowledge systems that search for known spectral features, are often applied. 
These algorithms seek to reduce the large number of high-quality spectra to a common subset, which allows artists' materials and alteration products to be identified and mapped. Hence, reflectance hyperspectral imaging is finding its place as the starting point for selecting sites on polychrome surfaces for spot analytical techniques, such as X-ray fluorescence, Raman spectroscopy, and Fourier transform infrared spectroscopy. Reflectance hyperspectral imaging can also provide image products that are a mainstay in the art conservation field, such as color-accurate images, broadband near-infrared images, and false-color products. This Account reports on the research activity carried out by two research groups, one at the "Nello Carrara" Institute of Applied Physics of the Italian National Research Council (IFAC-CNR) in Florence and the other at the National Gallery of Art (NGA) in Washington, D.C. Both groups have conducted parallel research, with frequent interchanges, to develop multispectral and hyperspectral imaging systems for the study of works of art. In the past decade, they have designed and experimented with some of the earliest spectral imaging prototypes for museum applications. In this Account, a brief presentation of the hyperspectral sensor systems is given, with case studies showing how reflectance hyperspectral imaging answers key questions in cultural heritage.
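    A minimal sketch of the PCA-based reduction step described above, under the assumption that the image-cube is simply reshaped to a pixels-by-bands matrix and projected onto its leading principal components; synthetic random data stands in for a real scan:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """cube: (rows, cols, bands) -> (rows, cols, n_components) PC-score image."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                        # center each spectral band
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T           # project pixels onto leading PCs
    return scores.reshape(rows, cols, n_components)

rng = np.random.default_rng(1)
cube = rng.random((32, 32, 120))               # stand-in for a 120-band image-cube
reduced = pca_reduce(cube, n_components=5)
print(reduced.shape)  # (32, 32, 5)
```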

  16. ELaNa - Educational Launch of Nanosatellite Providing Routine RideShare Opportunities

    NASA Technical Reports Server (NTRS)

    Skrobot, Garrett Lee; Coelho, Roland

    2012-01-01

    Since the creation of the NASA CubeSat Launch Initiative (NCSLI), the need for CubeSat rideshares has dramatically increased. After only three releases of the initiative, a total of 66 CubeSats now await launch opportunities. So, how is this challenge being resolved? NASA's Launch Services Program (LSP) has studied how to integrate P-PODs on Athena, Atlas V, and Delta IV launch vehicles and has been instrumental in developing several carrier systems to support CubeSats as rideshares on NASA missions. In support of the first two ELaNa missions, the Poly-Picosatellite Orbital Deployer (P-POD) was adapted for use on a Taurus XL (ELaNa I) and a Delta II (ELaNa III). Four P-PODs, which contained a total of eight CubeSats, were used on these first ELaNa missions. Next up is ELaNa VI, which will launch on an Atlas V in August 2012. The four ELaNa VI CubeSats, in three P-PODs, are awaiting launch, having been integrated in the NPSCuLite. To increase rideshare capabilities, LSP is working to integrate P-PODs on Falcon 9 missions. The proposed Falcon 9 manifest will provide greater opportunities for the CubeSat community. For years, the standard CubeSat size was 1U to 3U. As the desire to include more science in each cube grows, so does the standard CubeSat size. No longer is a 1U, 1.5U, 2U, or 3U CubeSat the only option available; the new CubeSat standard will include 6U and possibly even 12U. With each increase in CubeSat size, the CubeSat community is pushing the capability of the current P-POD design. Not only is the carrier system affected, but integration with the launch vehicle is also a concern. The development of a system to accommodate not only the 3U P-POD but also carriers for larger CubeSats is ongoing. LSP considers payloads in the 1 kg to 180 kg range rideshare or small/secondary payloads. As new and emerging small payloads are developed, rideshare opportunities and carrier systems need to be identified and secured. 
The development of a rideshare carrier system is not always cost-effective. Sometimes a launch vehicle with an excellent performance record appears to be a great rideshare candidate; however, after completing a feasibility study, LSP may determine that the cost of the rideshare carrier system is too great and that, due to budget constraints, the development cannot go forward. In the current budget environment, one cost-effective way to secure rideshare opportunities is to look for synergy with other government organizations that share the same interest.

  17. SpaceCube Version 1.5

    NASA Technical Reports Server (NTRS)

    Geist, Alessandro; Lin, Michael; Flatley, Tom; Petrick, David

    2013-01-01

    SpaceCube 1.5 is a high-performance and low-power system in a compact form factor. It is a hybrid processing system consisting of CPU (central processing unit), FPGA (field-programmable gate array), and DSP (digital signal processor) processing elements. The primary processing engine is the Virtex-5 FX100T FPGA, which has two embedded processors. The SpaceCube 1.5 system was a bridge to the SpaceCube 2.0 and SpaceCube 2.0 Mini processing systems. The SpaceCube 1.5 system was the primary avionics in the successful SMART (Small Rocket/Spacecraft Technology) Sounding Rocket mission that was launched in the summer of 2011. For SMART and similar missions, an avionics processor is required that is reconfigurable, has high processing capability, has multi-gigabit interfaces, is low power, and comes in a rugged/compact form factor. The original SpaceCube 1.0 met a number of the criteria, but did not possess the multi-gigabit interfaces that were required and is a higher-cost system. The SpaceCube 1.5 was designed with those mission requirements in mind. The SpaceCube 1.5 features one Xilinx Virtex-5 FX100T FPGA and has excellent size, weight, and power characteristics [4×4×3 in. (approx. = 10×10×8 cm), 3 lb (approx. = 1.4 kg), and 5 to 15 W depending on the application]. The estimated computing power of the two PowerPC 440s in the Virtex-5 FPGA is 1100 DMIPS each. The SpaceCube 1.5 includes two Gigabit Ethernet (1 Gbps) interfaces as well as two SATA-I/II interfaces (1.5 to 3.0 Gbps) for recording to data drives. The SpaceCube 1.5 also features DDR2 SDRAM (double data rate synchronous dynamic random access memory); 4-Gbit Flash for storing application code for the CPU, FPGA, and DSP processing elements; and a Xilinx Platform Flash XL to store FPGA configuration files or application code. The system also incorporates a 12-bit analog-to-digital converter with the ability to read 32 discrete analog sensor inputs. 
The SpaceCube 1.5 design also has a built-in accelerometer. In addition, the system has 12 receive and transmit RS-422 interfaces for legacy support. The SpaceCube 1.5 processor card represents the first NASA Goddard design in a compact form factor featuring the Xilinx Virtex-5. The SpaceCube 1.5 incorporates backward compatibility with the SpaceCube 1.0 form factor and stackable architecture. It also makes use of low-cost commercial parts, but is designed for operation in harsh environments.

  18. Monitoring compaction and compressibility changes in offshore chalk reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, G.; Hardy, R.; Eltvik, P.

    1994-03-01

    Some of the North Sea's largest and most important oil fields are in chalk reservoirs. In these fields, it is important to measure reservoir compaction and compressibility because compaction can result in platform subsidence. Also, compaction drive is a main drive mechanism in these fields, so an accurate reserves estimate cannot be made without first measuring compressibility. Estimating compaction and reserves is difficult because compressibility changes throughout field life. Installing accurate, permanent downhole pressure gauges on offshore chalk fields makes it possible to use a new method to monitor compressibility -- measurement of reservoir pressure changes caused by the tide. This tidal-monitoring technique is an in-situ method that can greatly increase compressibility information. It can be used to estimate compressibility and to measure compressibility variation over time. This paper concentrates on application of the tidal-monitoring technique to North Sea chalk reservoirs. However, the method is applicable to any tidal offshore area and can be applied whenever necessary to monitor in-situ rock compressibility. One such application would be if platform subsidence were expected.

  19. Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos

    1997-01-01

    Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, analyzing the compressed representation directly avoids the costly overhead of decompressing and operating at the pixel level. Compressed-domain parsing of video has been presented in earlier work, where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation of the various types of frames present in an MPEG video, in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index, while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.
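    One way to picture compressed-domain feature extraction of the kind described above is to keep only the DC coefficient of each 8x8 DCT block of a frame, the coarsest summary available without full decompression. This is an illustrative sketch, not the paper's actual feature set:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dc_image(frame):
    """Per-8x8-block DC coefficients of an (8m x 8n) grayscale frame."""
    d = dct_matrix(8)
    h, w = frame.shape
    blocks = frame.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = d @ blocks @ d.T                  # 2D DCT-II of every block at once
    return coeffs[:, :, 0, 0]                  # keep only the DC term of each block

frame = np.full((16, 16), 100.0)               # flat grey test frame
print(dc_image(frame))                         # every DC term is 8 * block mean = 800.0
```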

  20. LagLoc - a new surgical technique for locking plate systems.

    PubMed

    Triana, Miguel; Gueorguiev, Boyko; Sommer, Christoph; Stoffel, Karl; Agarwal, Yash; Zderic, Ivan; Helfen, Tobias; Krieg, James C; Krause, Fabian; Knobe, Matthias; Richards, R Geoff; Lenz, Mark

    2018-06-19

    Treatment of oblique and spiral fractures remains challenging. The aim of this study was to introduce and investigate the new LagLoc technique for locked plating with generation of interfragmentary compression, combining the advantages of the lag-screw and locking-head-screw techniques. An oblique fracture was simulated in artificial diaphyseal bones, assigned to three groups for plating with a 7-hole locking compression plate. Group I was plated with three locking screws in holes 1, 4 and 7; the central screw crossed the fracture line. In group II the central hole was occupied with a lag screw perpendicular to the fracture line. Group III was instrumented applying the LagLoc technique as follows. Hole 4 was predrilled perpendicular to the plate, followed by overdrilling of the near cortex and insertion of a locking screw whose head was covered by a holding sleeve to temporarily prevent it from locking in the plate hole and thus generate interfragmentary compression. Subsequently, the screw head was released and locked in the plate hole. Holes 1 and 7 were occupied with locking screws. Interfragmentary compression in the fracture gap was measured using pressure sensors. All screws in the three groups were tightened with 4 Nm torque. Interfragmentary compression in group I (167 ± 25 N) was significantly lower than in groups II (431 ± 21 N) and III (379 ± 59 N), p ≤ 0.005. The difference in compression between groups II and III was not significant (p = 0.999). The new LagLoc technique offers an alternative tool to generate interfragmentary compression with locking plates by combining the biomechanical advantages of lag-screw and locking-screw fixation. This article is protected by copyright. All rights reserved.

  1. IceCube

    Science.gov Websites


  2. CubeSat evolution: Analyzing CubeSat capabilities for conducting science missions

    NASA Astrophysics Data System (ADS)

    Poghosyan, Armen; Golkar, Alessandro

    2017-01-01

    Traditionally, the space industry produced large and sophisticated spacecraft handcrafted by large teams of engineers, with budgets within the reach of only a few large government-backed institutions. However, over the last decade, the space industry has experienced increased interest in smaller missions, and recent advances in commercial-off-the-shelf (COTS) technology miniaturization have spurred the development of small spacecraft missions based on the CubeSat standard. CubeSats were initially envisioned primarily as educational tools or low-cost technology demonstration platforms that could be developed and launched within one or two years. Recently, however, more advanced CubeSat missions have been developed and proposed, indicating that CubeSats have clearly started to transition from being solely educational and technology demonstration platforms to offering opportunities for low-cost real science missions with potentially high value in terms of science return and commercial revenue. Despite the significant progress made in CubeSat research and development over the last decade, fundamental questions still routinely arise about CubeSat capabilities and limitations, and ultimately about their scientific and commercial value. The main objective of this review is to evaluate state-of-the-art CubeSat capabilities, with a special focus on advanced scientific missions, and to assess the potential of CubeSat platforms as capable spacecraft. 
A total of over 1200 launched and proposed missions were analyzed from various sources, including peer-reviewed journal publications, conference proceedings, mission webpages, and other publicly available satellite databases, and about 130 relatively high-performance missions were downselected and categorized into six groups based on primary mission objectives: "Earth Science and Spaceborne Applications", "Deep Space Exploration", "Heliophysics: Space Weather", "Astrophysics", "Spaceborne In Situ Laboratory", and "Technology Demonstration". Additionally, the evolution of CubeSat enabling technologies is surveyed to evaluate the current state of the art and to identify the areas that would benefit most from further technology development in enabling high-performance science missions based on CubeSat platforms.

  3. 3D-FFT for Signature Detection in LWIR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medvick, Patricia A.; Lind, Michael A.; Mackey, Patrick S.

    Improvements in detection and exploitation analysis are possible by applying whitened matched filtering within the Fourier domain to hyperspectral data cubes. We describe an implementation of a Three-Dimensional Fast Fourier Transform Whitened Matched Filter (3DFFTMF) approach and, using several example sets of Long Wave Infra Red (LWIR) data cubes, compare the results with those from standard Whitened Matched Filter (WMF) techniques. Since the variability in shape of gaseous plumes precludes the use of spatial conformation in the matched filtering, the 3DFFTMF results were similar to those of two other WMF methods. Including a spatial low-pass filter within the Fourier space can improve signal-to-noise ratios and therefore improve the detection limit by facilitating the mitigation of high-frequency clutter. The improvement only occurs if the low-pass filter diameter is smaller than the plume diameter.
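    The standard whitened matched filter that the 3DFFTMF is compared against can be sketched as the textbook per-pixel score s^T C^-1 (x - mu) / sqrt(s^T C^-1 s), where s is the target signature, mu the background mean, and C the background covariance. The data below are synthetic, and this is the plain WMF, not the Fourier-domain variant:

```python
import numpy as np

def wmf_scores(cube, target):
    """cube: (rows, cols, bands); target: (bands,). Returns a (rows, cols) score map."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = X.mean(axis=0)                                           # background mean spectrum
    C = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
    w = np.linalg.solve(C, target)                                # C^-1 s without explicit inverse
    scores = (X - mu) @ w / np.sqrt(target @ w)                   # whitened matched filter score
    return scores.reshape(cube.shape[:2])

rng = np.random.default_rng(2)
cube = rng.normal(0.0, 1.0, size=(20, 20, 30))   # synthetic 30-band background cube
target = np.ones(30)                             # hypothetical flat target signature
cube[5, 7] += 4.0 * target                       # implant the target in one pixel
scores = wmf_scores(cube, target)
r, c = np.unravel_index(scores.argmax(), scores.shape)
print(int(r), int(c))  # 5 7  (the implanted pixel scores highest)
```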

  4. Effect of Build Angle on Surface Properties of Nickel Superalloys Processed by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Covarrubias, Ernesto E.; Eshraghi, Mohsen

    2018-03-01

    Aerospace, automotive, and medical industries use selective laser melting (SLM) to produce complex parts through solidifying successive layers of powder. This additive manufacturing technique has many advantages, but one of the biggest challenges facing this process is the resulting surface quality of the as-built parts. The purpose of this research was to study the surface properties of Inconel 718 alloys fabricated by SLM. The effect of build angle on the surface properties of as-built parts was investigated. Two sets of sample geometries including cube and rectangular artifacts were considered in the study. It was found that, for angles between 15° and 75°, theoretical calculations based on the "stair-step" effect were consistent with the experimental results. Downskin surfaces showed higher average roughness values compared to the upskin surfaces. No significant difference was found between the average roughness values measured from cube and rectangular test artifacts.

  5. Study of on-board compression of earth resources data

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1975-01-01

    The current literature on image bandwidth compression was surveyed and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data was then analyzed statistically and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail by all of the above criteria. Finally, merits and deficiencies were summarized and a number of recommendations for future NASA activities in data compression proposed.

  6. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Because the application involves voluminous medical image data and image streams generated at interactive frame rates, deploying compression techniques adjustable from lossy to lossless is essential to achieving acceptable performance over various kinds of communication networks. In particular, compression substantially reduces transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression on the volume data were conducted. Favorable results show that a substantial compression ratio is achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is roughly the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g., CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression on the diagnostic and aesthetic appearance of medical imaging.
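    The 30 dB acceptability bound quoted above refers to the usual peak signal-to-noise ratio, PSNR = 10 log10(MAX^2 / MSE). A minimal sketch on synthetic 8-bit data:

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

x = np.arange(256, dtype=np.uint8).reshape(16, 16)   # synthetic 8-bit "image"
noisy = x.astype(float) + 8.0                        # uniform error of 8 grey levels
print(round(psnr(x, noisy), 2))  # 30.07 -- right at the quoted acceptability bound
```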

  7. EarthCube - Results of Test Governance in Geoscience Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Davis, R.; Allison, M. L.; Keane, C. M.; Robinson, E.

    2016-12-01

    In September 2016, the EarthCube Test Enterprise Governance Project completed its three-year process to engage the community and test a demonstration governing organization, with the goal of facilitating a community-led process for designing and developing a geoscience cyberinfrastructure to transform geoscience research. The EarthCube initiative is making an important transition from creating a coherent community toward adoption and implementation of technologies that can serve scientists working in and across many domains. The emerging concept of a "system of systems" approach to cyberinfrastructure architecture is critical to the EarthCube program, but has not been fully defined. Recommendations from an NSF-appointed Advisory Committee include: a. developing a succinct definition of EarthCube; b. changing the community-elected governance approach toward structured rather than consensus-driven decision-making; c. restructuring the process to articulate program solicitations; and d. producing an effective implementation roadmap. These are seen as prerequisites to adoption of best practices and system concepts, and to evolving to a production track. The EarthCube governing body is preparing responses to the Advisory Committee findings and recommendations with a target delivery date of late 2016, but broader involvement may be warranted. We conclude that there is ample justification to continue evolving toward a governance framework that facilitates convergence on a system architecture that guides EarthCube activities and plays an influential role in making operational the EarthCube vision of cyberinfrastructure for the geosciences. There is widespread community expectation of support for a multiyear EarthCube governing effort to put into practice the science, technical, and organizational plans that are continuing to emerge. However, the active participants in EarthCube represent a small subset of the larger population of geoscientists.

  8. Evolution of Deformation and Recrystallization Textures in High-Purity Ni and the Ni-5 at. pct W Alloy

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Pinaki P.; Ray, Ranjit K.; Tsuji, Nobuhiro

    2010-11-01

    An attempt has been made to study the evolution of texture in high-purity Ni and Ni-5 at. pct W alloy prepared by the powder metallurgy route followed by heavy cold rolling (95 pct deformation) and recrystallization. The deformation textures of the two materials are of typical pure metal or Cu-type texture. Cube-oriented ({001}⟨100⟩) regions are present in the deformed state as long thin bands, elongated in the rolling direction (RD). These bands are characterized by a high orientation gradient inside, which is a result of the rotation of the cube-oriented cells around the RD toward the RD-rotated cube ({013}⟨100⟩). Low-temperature annealing produces a weak cube texture along with the {013}⟨100⟩ component, with the latter being much stronger in high-purity Ni than in the Ni-W alloy. At higher temperatures, the cube texture is strengthened considerably in the Ni-W alloy; however, the cube volume fraction in high-purity Ni is significantly lower because of the retention of the {013}⟨100⟩ component. The difference in the relative strengths of the cube and the {013}⟨100⟩ components in the two materials is evident from the beginning of recrystallization, in which more {013}⟨100⟩-oriented grains than near-cube grains form in high-purity Ni. The preferential nucleation of the near-cube and the {013}⟨100⟩ grains in these materials seems to be a result of the high orientation gradients associated with the cube bands that offer a favorable environment for early nucleation.

  9. DAsHER CD: Developing a Data-Oriented Human-Centric Enterprise Architecture for EarthCube

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Yu, M.; Sun, M.; Qin, H.; Robinson, E.

    2015-12-01

    One of the biggest challenges facing Earth scientists is discovering, accessing, and sharing resources in the desired fashion. EarthCube aims to enable geoscientists to address these challenges by fostering community-governed efforts to develop a common cyberinfrastructure for collecting, accessing, analyzing, sharing, and visualizing all forms of data and related resources, through the use of advanced technological and computational capabilities. Here we design an Enterprise Architecture (EA) for EarthCube to facilitate the knowledge management, communication, and human collaboration needed in pursuit of unprecedented data sharing across the geosciences. The design results will provide EarthCube a reference framework for developing geoscience cyberinfrastructure in collaboration among different stakeholders and for identifying topics that should attract high interest in the community. The development of this EarthCube EA framework leverages popular frameworks such as Zachman, Gartner, DoDAF, and FEAF. The science driver of this design is the needs of the EarthCube community, including analyzed user requirements from EarthCube End User Workshop reports and EarthCube working group roadmaps, and feedback and comments from scientists gathered at workshops. The final product of this Enterprise Architecture is a four-volume reference document: 1) Volume one comprises an executive summary of the EarthCube architecture, serving as an overview in the initial phases of architecture development; 2) Volume two is the major body of the design product and outlines all the architectural design components or viewpoints; 3) Volume three provides a taxonomy of the EarthCube enterprise augmented with semantic relations; 4) Volume four describes an example of utilizing this architecture for a geoscience project.

  10. Compressed Sensing for Chemistry

    NASA Astrophysics Data System (ADS)

    Sanders, Jacob Nathan

    Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data and then compressing this data to retain the most chemically relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ1-optimization problem. This thesis applies compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules.
The implementation of the method in the Q-Chem commercial software package is described. Moreover, the method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations.
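    The ℓ1-recovery at the heart of compressed sensing can be illustrated with a small numerical sketch. The example below is an assumption for illustration, not code from the thesis: it recovers a synthetic sparse "spectrum" from undersampled random measurements using iterative soft thresholding (ISTA), one standard solver for the ℓ1 (lasso) problem.

```python
import numpy as np

def ista_lasso(A, b, lam=0.01, n_iter=3000):
    """Iterative soft thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m = 64, 24                                 # 64 unknowns, only 24 measurements
support = [5, 20, 40]                         # a sparse signal: 3 active components
x_true = np.zeros(n)
x_true[support] = [1.0, -0.8, 0.6]
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
b = A @ x_true                                # undersampled measurements
x_hat = ista_lasso(A, b)
```

Even with roughly 3x fewer measurements than unknowns, the three active components are recovered because the underlying signal is sparse.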

  11. Pulse-compression ghost imaging lidar via coherent detection.

    PubMed

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng

    2016-11-14

    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar, without decreasing the range resolution, can readily achieve high single-pulse energy through the use of a long pulse, and its coherent-detection mechanism can eliminate the influence of stray light, which helps improve the detection sensitivity and detection range.
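    The pulse-compression principle this record relies on can be sketched with a matched filter: a long linear-FM pulse is correlated against its own replica, collapsing the echo into a narrow peak whose width is set by the bandwidth rather than the pulse length. A minimal numpy illustration with synthetic parameters (not the authors' system):

```python
import numpy as np

fs = 1000.0                                   # sample rate, Hz
T, B = 1.0, 100.0                             # pulse length (s) and bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear-FM (chirp) pulse, BT = 100

delay = 300                                   # target range expressed as a sample delay
rx = np.zeros(2048, dtype=complex)
rx[delay:delay + chirp.size] = chirp          # noiseless echo from a point target

mf = np.correlate(rx, chirp, mode='valid')    # matched filter = correlate with replica
peak = int(np.argmax(np.abs(mf)))             # compressed pulse peaks at the delay
width = int(np.sum(np.abs(mf) > 0.5 * np.abs(mf[peak])))  # half-amplitude width
```

The 1000-sample transmitted pulse compresses to a mainlobe only a few samples wide (on the order of fs/B), which is why a long pulse can carry high energy without sacrificing range resolution.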

  12. Geospace ionosphere research with a MF/HF radio instrument on a cubesat

    NASA Astrophysics Data System (ADS)

    Kallio, E. J.; Aikio, A. T.; Alho, M.; Fontell, M.; van Gijlswijk, R.; Kauristie, K.; Kestilä, A.; Koskimaa, P.; Makela, J. S.; Mäkelä, M.; Turunen, E.; Vanhamäki, H.

    2016-12-01

    Modern technology provides new possibilities for studying geospace and its ionosphere using spacecraft and computer simulations. CubeSats, a type of nanosatellite, provide a cost-effective way to make in-situ measurements in the ionosphere. Moreover, combining CubeSat observations with ground-based observations gives a new view of auroras and associated electromagnetic phenomena. Joint, active CubeSat and ground-based observation campaigns, in particular, make it possible to study the 3D structure of the ionosphere. Furthermore, using several CubeSats to form satellite constellations enables much higher temporal resolution. At the same time, increasing computational capacity has made it possible to perform simulations in which properties of the ionosphere, such as the propagation of electromagnetic waves in the medium frequency, MF (0.3-3 MHz), and high frequency, HF (3-30 MHz), ranges, are derived from a 3D ionospheric model and first-principles modelling. Electromagnetic waves at those frequencies are strongly affected by ionospheric electrons and, consequently, can be used for studying the plasma. On the other hand, even though the ionosphere enables long-range telecommunication at MF and HF frequencies, the frequent occurrence of spatiotemporal variations in the ionosphere disturbs communication channels, especially at high latitudes. Therefore, the study of MF and HF waves in the ionosphere is of both strong scientific and technological interest. We present computational simulation results and measurement principles and techniques for investigating the arctic ionosphere with a polar-orbiting CubeSat whose novel AM radio instrument measures HF and MF waves. The CubeSat, which also contains a white-light aurora camera, is planned to be launched in 2017 (http://www.suomi100satelliitti.fi/eng).
We have modelled the propagation of the radio waves, both man-made waves generated on the ground and space-weather-related waves formed in space, through the 3D arctic ionosphere with (1) a new 3D ray-tracing model and (2) a new 3D fully kinetic electromagnetic simulation. These simulations are used to analyse the origin of the radio waves observed by the MF/HF radio instrument and, consequently, to derive information about the 3D ionosphere and its spatial and temporal variations.

  13. Compression-RSA technique: A more efficient encryption-decryption procedure

    NASA Astrophysics Data System (ADS)

    Mandangan, Arif; Mei, Loh Chai; Hung, Chang Ee; Che Hussin, Che Haziqah

    2014-06-01

    The efficiency of encryption-decryption procedures has become a major problem in asymmetric cryptography. The Compression-RSA technique was developed to overcome this efficiency problem by compressing k plaintexts, where k ∈ Z+ and k > 2, into only 2 plaintexts. That means that, no matter how many plaintexts there are, they will be compressed into only 2 plaintexts. The encryption-decryption procedures are expected to be more efficient since they only receive 2 inputs to process instead of k inputs. However, it is observed that as the number of original plaintexts increases, the size of the new plaintexts becomes bigger. As a consequence, this will probably affect the efficiency of the encryption-decryption procedures, especially for the RSA cryptosystem, since both of its encryption-decryption procedures involve exponential operations. In this paper, we evaluated the relationship between the number of original plaintexts and the size of the new plaintexts. In addition, we conducted several experiments to show that the RSA cryptosystem with the embedded Compression-RSA technique is more efficient than the ordinary RSA cryptosystem.
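    The abstract does not specify the compression mapping, so the sketch below only illustrates the general idea with a hypothetical packing scheme: k small plaintexts are packed into 2 integers before RSA exponentiation, so only 2 modular exponentiations are needed instead of k. The toy key sizes are far too small for real use.

```python
# Toy RSA with tiny primes -- for illustration only, never for real security.
p, q = 1009, 1013
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def pack(msgs, base=256):
    """Pack several small plaintexts into one integer (hypothetical scheme)."""
    x = 0
    for m in msgs:
        x = x * base + m
    return x

def unpack(x, count, base=256):
    out = []
    for _ in range(count):
        out.append(x % base)
        x //= base
    return out[::-1]

msgs = [65, 200, 3, 142]                        # k = 4 plaintexts, each < 256
halves = [msgs[:2], msgs[2:]]                   # compress k inputs into 2 packed ints
ciphers = [pow(pack(h), e, n) for h in halves]  # 2 exponentiations instead of 4
recovered = []
for c in ciphers:
    recovered += unpack(pow(c, d, n), 2)
```

This also makes the abstract's caveat concrete: as k grows, each packed plaintext grows, and the modular exponentiations operate on larger numbers.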

  14. Propulsion System and Orbit Maneuver Integration in CubeSats: Trajectory Control Strategies Using Micro Ion Propulsion

    NASA Technical Reports Server (NTRS)

    Hudson, Jennifer; Martinez, Andres; Petro, Andrew

    2015-01-01

    The Propulsion System and Orbit Maneuver Integration in CubeSats project aims to solve the challenges of integrating a micro electric propulsion system on a CubeSat in order to perform orbital maneuvers and control attitude. This represents a fundamentally new capability for CubeSats, which typically do not contain propulsion systems and cannot maneuver far beyond their initial orbits.

  15. Keeping It in Three Dimensions: Measuring the Development of Mental Rotation in Children with the Rotated Colour Cube Test (RCCT)

    ERIC Educational Resources Information Center

    Lutke, Nikolay; Lange-Kuttner, Christiane

    2015-01-01

    This study introduces the new Rotated Colour Cube Test (RCCT) as a measure of object identification and mental rotation using single 3D colour cube images in a matching-to-sample procedure. One hundred 7- to 11-year-old children were tested with aligned or rotated cube models, distracters and targets. While different orientations of distracters…

  16. Cube search, revisited.

    PubMed

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-03-16

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. © 2015 ARVO.

  17. Cube search, revisited

    PubMed Central

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-01-01

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063

  18. Thales SESO's hollow and massive corner cube solutions

    NASA Astrophysics Data System (ADS)

    Fappani, Denis; Dahan, Déborah; Costes, Vincent; Luitot, Clément

    2017-11-01

    For space activities, more and more corner cubes, used as a solution for retro-reflection of light (telemetry and positioning), are emerging worldwide in different projects. Depending on the application, they can be massive or hollow corner cubes. For corner cubes, as for any kind of space optics, the use of lightweight or lightened components is the usual baseline for reducing payload mass. But other parameters, such as system stability under severe environments, are also major issues, especially for corner cube systems, which generally require very tight angular accuracies. For the particular case of the hollow corner cube, an alternative solution to the usual cementing of the 3 reflective surfaces has been developed successfully in collaboration with CNES to guarantee better stability and fulfill the weight requirements. Another important parameter is the dihedral angles, which have a great influence on the wavefront error. Two technologies can be considered: either a corner cube array assembled in a very stable housing, or the irreversible adherence technology used for assembling the three parts of a cube. The latter technology notably avoids the use of cement. The poster will point out the conceptual design, the manufacturing and control key aspects of such corner cube assemblies, as well as the technologies used for their assembly.

  19. Two-Thumb Encircling Technique Over the Head of Patients in the Setting of Lone Rescuer Infant CPR Occurred During Ambulance Transfer: A Crossover Simulation Study.

    PubMed

    Jo, Choong Hyun; Cho, Gyu Chong; Lee, Chang Hee

    2017-07-01

    The purpose of this study was to determine if the over-the-head 2-thumb encircling technique (OTTT) provides better overall quality of cardiopulmonary resuscitation compared with the conventional 2-finger technique (TFT) for a lone rescuer in the setting of infant cardiac arrest in an ambulance. Fifty medical emergency service students were voluntarily recruited to perform lone-rescuer infant cardiopulmonary resuscitation for 2 minutes on a manikin simulating a 3-month-old infant in an ambulance. Participants performing OTTT sat over the head of the manikin to compress the chest using a 2-thumb encircling technique and provide bag-valve-mask ventilations, whereas those performing TFT sat at the side of the manikin to compress using 2 fingers and provide pocket-mask ventilations. Mean hands-off time was not significantly different between OTTT and TFT (7.6 ± 1.1 seconds vs 7.9 ± 1.3 seconds, P = 0.885). OTTT resulted in greater depth of compression (42.6 ± 1.4 mm vs 41.0 ± 1.4 mm, P < 0.001) and a faster rate of compressions (114.4 ± 8.0 per minute vs 112.2 ± 8.2 per minute, P = 0.019) than TFT. OTTT also resulted in a smaller fatigue score than TFT (1.7 ± 1.5 vs 2.5 ± 1.6, P < 0.001). In addition, subjects reported that compression, ventilation, and changing from compression to ventilation were easier in OTTT than in TFT. The use of OTTT may be a suitable alternative to TFT in the setting of infant cardiac arrest during ambulance transfer.

  20. Working RideShare for the U Class Payload

    NASA Technical Reports Server (NTRS)

    Skrobot, Garrett L.

    2014-01-01

    Presentation describing the current status of the Launch Services Program's (LSP) educational launch of nanosatellites project. U-class payloads have the form factor of 1U CubeSats: a 10 cm cube. Over the past three years these small spacecraft have grown in popularity in both the government and the commercial market. The number of NASA CubeSats selected is increasing, and yet the launch rate is very low. Why the low launch rate? Funding (more money = more launches), CubeSats being selective about the orbit, and CubeSats not being ready. This trend is expected to continue with current manifesting practices.

  1. KSC-2013-3996

    NASA Image and Video Library

    2013-11-17

    CAPE CANAVERAL, Fla. -- At the News Center at NASA's Kennedy Space Center in Florida, Andrew Petro, the agency's acting director of the Early Stage Innovation Division of the Office of the Chief Technologist, discusses the agency’s CubeSat Launch initiative. CubeSats provide opportunities for small satellite payloads to fly on rockets planned for upcoming launches. CubeSats, a class of research spacecraft called nanosatellites, are flown as auxiliary payloads on previously planned missions. The cube-shaped satellites are approximately four inches long, have a volume of about one quart and weigh about three pounds. For more information, visit: http://www.nasa.gov/directorates/heo/home/CubeSats_initiative.html Photo credit: NASA/Kim Shiflett

  2. KSC-2013-3993

    NASA Image and Video Library

    2013-11-17

    CAPE CANAVERAL, Fla. -- At the News Center at NASA's Kennedy Space Center in Florida, Andrew Petro, the agency's acting director of the Early Stage Innovation Division of the Office of the Chief Technologist, discusses the agency’s CubeSat Launch initiative. CubeSats provide opportunities for small satellite payloads to fly on rockets planned for upcoming launches. CubeSats, a class of research spacecraft called nanosatellites, are flown as auxiliary payloads on previously planned missions. The cube-shaped satellites are approximately four inches long, have a volume of about one quart and weigh about three pounds. For more information, visit: http://www.nasa.gov/directorates/heo/home/CubeSats_initiative.html Photo credit: NASA/Kim Shiflett

  3. KSC-2013-3995

    NASA Image and Video Library

    2013-11-17

    CAPE CANAVERAL, Fla. -- At the News Center at NASA's Kennedy Space Center in Florida, Andrew Petro, the agency's acting director of the Early Stage Innovation Division of the Office of the Chief Technologist, discusses the agency’s CubeSat Launch initiative. CubeSats provide opportunities for small satellite payloads to fly on rockets planned for upcoming launches. CubeSats, a class of research spacecraft called nanosatellites, are flown as auxiliary payloads on previously planned missions. The cube-shaped satellites are approximately four inches long, have a volume of about one quart and weigh about three pounds. For more information, visit: http://www.nasa.gov/directorates/heo/home/CubeSats_initiative.html Photo credit: NASA/Kim Shiflett

  4. KSC-2013-3994

    NASA Image and Video Library

    2013-11-17

    CAPE CANAVERAL, Fla. -- At the News Center at NASA's Kennedy Space Center in Florida, Andrew Petro, the agency's acting director of the Early Stage Innovation Division of the Office of the Chief Technologist, discusses the agency’s CubeSat Launch initiative. CubeSats provide opportunities for small satellite payloads to fly on rockets planned for upcoming launches. CubeSats, a class of research spacecraft called nanosatellites, are flown as auxiliary payloads on previously planned missions. The cube-shaped satellites are approximately four inches long, have a volume of about one quart and weigh about three pounds. For more information, visit: http://www.nasa.gov/directorates/heo/home/CubeSats_initiative.html Photo credit: NASA/Kim Shiflett

  5. SEMG signal compression based on two-dimensional techniques.

    PubMed

    de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino

    2016-04-18

    Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques, before the two-dimensional encoding procedure, in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework that is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The encoder was modified in order to work efficiently with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference [Formula: see text] compression factor figures, for low and high compression factors, respectively.
Regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors; the combination of SbS and HEVC proved competitive for high compression factors; and JPEG2000, combined with PDS, provided good performance allied with low computational complexity, all in terms of percent root-mean-square difference [Formula: see text] compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered an interesting alternative to traditional SEMG encoders for isometric signals. Moreover, the approach based on off-the-shelf image encoders has the potential for fast implementation and dissemination, given that many embedded systems may already have such encoders available in the underlying hardware/software architecture.
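    The basic trick of reassembling a 1-D SEMG record as an image, so that a 2-D encoder can exploit intersegment correlation, can be sketched as follows (synthetic signal; the segment length is an illustrative choice, not a value from the paper):

```python
import numpy as np

def signal_to_image(sig, seg_len):
    """Cut a 1-D record into fixed-length segments stacked as image rows."""
    rows = len(sig) // seg_len
    return sig[:rows * seg_len].reshape(rows, seg_len)

rng = np.random.default_rng(1)
t = np.arange(4096) / 1000.0                   # 4.096 s sampled at 1 kHz
semg = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
img = signal_to_image(semg, 100)               # 100-sample segments as image rows
row_corr = np.corrcoef(img[0], img[1])[0, 1]   # adjacent rows are highly similar
```

Because consecutive segments of a quasi-periodic signal look alike, adjacent image rows are strongly correlated, which is exactly the vertical redundancy an image or video encoder can remove.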

  6. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchical trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low-bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image.
The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
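    The sparsity of high-frequency subbands that such schemes exploit can be demonstrated with a single level of a 2-D Haar wavelet transform, a simple relative of the decompositions used with SPIHT (the image below is a synthetic smooth spot, not microscopy data):

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform: LL (approx) plus LH, HL, HH (detail)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0        # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0        # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.01)   # smooth synthetic "spot"
LL, LH, HL, HH = haar2d(img)
detail = np.abs(np.concatenate([LH.ravel(), HL.ravel(), HH.ravel()]))
sparsity = float(np.mean(detail < 0.01))   # fraction of negligible detail coefficients
```

For a smooth image, most detail coefficients are negligible; zerotree-style coders such as SPIHT get their compression by encoding the locations of the few significant ones efficiently.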

  7. Development of a morphology-based modeling technique for tracking solid-body displacements: examining the reliability of a potential MRI-only approach for joint kinematics assessment.

    PubMed

    Mahato, Niladri K; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian

    2016-05-18

    Single- or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three-dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and limits-of-agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences.
Specifically, the average CVs for translation were 4.31 % and 5.26 % for the two pulse sequences, respectively, while the ICCs were 0.99 for both. For rotation measures, the CVs were 3.19 % and 2.44 % for the two pulse sequences with the ICCs being 0.98 and 0.97, respectively. A novel biplanar imaging approach also yielded high reliability with mean CVs of 2.66 % and 3.39 % for translation in the x- and z-planes, respectively, and ICCs of 0.97 in both planes. This work provides basic proof-of-concept for a reliable marker-less non-ionizing-radiation-based quasi-dynamic motion quantification technique that can potentially be developed into a tool for real-time joint kinematics analysis.
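    The reliability statistics reported above can be computed in a few lines of numpy; the sketch below (with made-up session data, not the study's measurements) shows the coefficient of variation and Bland-Altman limits of agreement:

```python
import numpy as np

def coeff_variation(x):
    """Coefficient of variation, in percent."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement sessions."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

session1 = np.array([10.2, 11.1, 9.8, 10.5, 10.9])   # hypothetical repeated measures
session2 = np.array([10.0, 11.3, 9.9, 10.4, 10.7])
bias, lo, hi = bland_altman(session1, session2)
```

Between-session agreement is good when the bias is near zero and the limits of agreement are narrow relative to the measured quantity.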

  8. Using Additive Manufacturing to Print a CubeSat Propulsion System

    NASA Technical Reports Server (NTRS)

    Marshall, William M.

    2015-01-01

    CubeSats are increasingly being utilized for missions traditionally ascribed to larger satellites. A CubeSat unit (1U) is defined as 10 cm x 10 cm x 11 cm, and CubeSats have been built up to 6U sizes. They are typically built up from commercially available off-the-shelf components, but have limited capabilities. By using additive manufacturing, mission-specific capabilities (such as propulsion) can be built into a system. This effort is part of the STMD Small Satellite program Printing the Complete CubeSat. Interest in propulsion concepts for CubeSats is rapidly growing, and numerous concepts exist for CubeSat-scale propulsion. The focus of this effort is how to incorporate propulsion into the structure using additive manufacturing. The end use of the propulsion system dictates which type of system to develop: a pulse-mode RCS would require a different system than a delta-V orbital maneuvering system. The team chose an RCS system based on available propulsion systems and the feasibility of printing using a material extrusion process. A cold-gas propulsion system for RCS applications was initially investigated, but the material extrusion process did not permit adequate sealing of the part to make this a functional approach.

  9. Girls in detail, boys in shape: gender differences when drawing cubes in depth.

    PubMed

    Lange-Küttner, C; Ebersbach, M

    2013-08-01

    The current study tested gender differences in the developmental transition from drawing cubes in two- versus three dimensions (3D), and investigated the underlying spatial abilities. Six- to nine-year-old children (N = 97) drew two occluding model cubes and solved several other spatial tasks. Girls more often unfolded the various sides of the cubes into a layout, also called diagrammatic cube drawing (object design detail). In girls, the best predictor for drawing the cubes was Mental Rotation Test (MRT) accuracy. In contrast, boys were more likely to preserve the optical appearance of the cube array. Their drawing in 3D was best predicted by MRT reaction time and the Embedded Figures Test (EFT). This confirmed boys' stronger focus on the contours of an object silhouette (object shape). It is discussed whether the two gender-specific approaches to drawing in three dimensions reflect two sides of the appearance-reality distinction in drawing, that is graphic syntax of object design features versus visual perception of projective space. © 2012 The British Psychological Society.

  10. Disorganized behavior on Link's cube test is sensitive to right hemispheric frontal lobe damage in stroke patients

    PubMed Central

    Kopp, Bruno; Rösser, Nina; Tabeling, Sandra; Stürenburg, Hans Jörg; de Haan, Bianca; Karnath, Hans-Otto; Wessel, Karl

    2014-01-01

    One of Luria's favorite neuropsychological tasks for challenging frontal lobe functions was Link's cube test (LCT). The LCT is a cube construction task in which the subject must assemble 27 small cubes into one large cube in such a manner that only the painted surfaces of the small cubes are visible. We computed two new LCT composite scores, the constructive plan composite score, reflecting the capability to envisage a cubical-shaped volume, and the behavioral (dis-) organization composite score, reflecting the goal-directedness of cube construction. Voxel-based lesion-behavior mapping (VLBM) was used to test the relationship between performance on the LCT and brain injury in a sample of stroke patients with right hemisphere damage (N = 32), concentrated in the frontal lobe. We observed a relationship between the measure of behavioral (dis-) organization on the LCT and right frontal lesions. Further work in a larger sample, including left frontal lobe damage and with more power to detect effects of right posterior brain injury, is necessary to determine whether this observation is specific for right frontal lesions. PMID:24596552

  11. Software Requirements Specification for Lunar IceCube

    NASA Astrophysics Data System (ADS)

    Glaser-Garbrick, Michael R.

    Lunar IceCube is a 6U satellite that will orbit the moon to measure water volatiles, in their various phases, as a function of position, altitude, and time. Lunar IceCube is a collaboration between Morehead State University, Vermont Technical University, Busek, and NASA. The Software Requirements Specification will serve as a contract between the overall team and the developers of the flight software. It will provide a systems overview of the software that will be developed for Lunar IceCube, detailing all of the interconnects and protocols for each subsystem that Lunar IceCube will utilize. The flight software will be written in SPARK to the fullest extent possible, due to SPARK's support for proving software free of certain classes of errors. The LIC flight software makes use of a general-purpose, reusable application framework called CubedOS. This framework imposes some structuring requirements on the architecture and design of the flight software, but it does not impose any high-level requirements. The document will also detail the tools used for Lunar IceCube, such as why VxWorks will be utilized.

  12. High Data Rates for AubieSat-2 A & B, Two CubeSats Performing High Energy Science in the Upper Atmosphere

    NASA Technical Reports Server (NTRS)

    Sims, William H.

    2015-01-01

    This paper will discuss a proposed CubeSat-sized (3 Units / 6 Units) telemetry system concept being developed at Marshall Space Flight Center (MSFC) in cooperation with Auburn University. The telemetry system provides efficient, high-bandwidth communications through a flight-ready, low-cost, PROTOFLIGHT software defined radio (SDR) payload for use on CubeSats. The current telemetry system footprint is slightly larger than the 0.75 Unit CubeSat volume it is required to fit within. Extensible and modular communications for CubeSat technologies will provide high data rates for science experiments performed by two CubeSats flying in formation in Low Earth Orbit. The project is a collaboration between the University of Alabama in Huntsville and Auburn University to study high-energy phenomena in the upper atmosphere. Higher bandwidth capacity will enable high-volume, low-error-rate data transfer to and from the CubeSats, while also providing additional bandwidth and error-correction margin to accommodate more complex encryption algorithms and higher user volume.

  13. Effects of sodium hydroxide (NaOH) solution concentration on fly ash-based lightweight geopolymer

    NASA Astrophysics Data System (ADS)

    Ibrahim, W. M. W.; Hussin, K.; Abdullah, M. M. A.; Kadir, A. A.; Deraman, L. M.

    2017-09-01

    In this study, the effects of NaOH concentration on the properties of fly ash-based lightweight geopolymer were investigated. Lightweight geopolymer was produced using fly ash as the source material and a synthetic foaming agent as the air-entraining agent. The alkaline solution used in this study is a combination of sodium hydroxide (NaOH) and sodium silicate (Na2SiO3) solutions. NaOH solutions of different molarities (6M, 8M, 10M, 12M, and 14M) were used to prepare 50 x 50 x 50 mm cubes of lightweight geopolymer. The ratios of fly ash/alkaline solution, Na2SiO3/NaOH solution, foaming agent/water and foam/geopolymer paste were kept constant at 2.0, 2.5, 1:10 and 1:1, respectively. The samples were cured at 80°C for 24 hours, left at room temperature, and tested at 7 days of ageing. Physical and mechanical properties such as density, water absorption, compressive strength and microstructure were determined from the dried cube samples. The results show that NaOH molarity affects the properties of lightweight geopolymer, with the optimum found at 12M, which gave a high strength of 15.6 MPa, lower water absorption (7.3%) and low density (1440 kg/m3). Microstructure analysis shows that the lightweight geopolymer contains some porous structure and remaining unreacted fly ash particles.

  14. Some Practical Universal Noiseless Coding Techniques

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1994-01-01

    Report discusses noiseless data-compression-coding algorithms, performance characteristics and practical consideration in implementation of algorithms in coding modules composed of very-large-scale integrated circuits. Report also has value as tutorial document on data-compression-coding concepts. Coding techniques and concepts in question "universal" in sense that, in principle, applicable to streams of data from variety of sources. However, discussion oriented toward compression of high-rate data generated by spaceborne sensors for lower-rate transmission back to earth.
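
    As a sketch of the core idea behind such universal codes, the following shows Golomb-Rice coding of nonnegative integers (unary-coded quotient plus k-bit remainder). The function names and the choice of parameter k are illustrative, not taken from the report:

```python
def rice_encode(value, k):
    """Encode a nonnegative integer: unary-coded quotient, then
    a k-bit binary remainder (Golomb code with divisor 2**k)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    """Decode one Rice codeword from a bit string."""
    q = bits.index("0")                      # unary part ends at first 0
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

# Small values (typical residuals after predictive decorrelation)
# receive short codewords, which is what makes the code efficient
# across a family of geometric-like source distributions.
codeword = rice_encode(5, 2)                 # quotient 1, remainder 1
```

    In practice the parameter k is adapted block by block so that the typical magnitude of the sensor residuals sits near the code's optimum.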

  15. Machine compliance in compression tests

    NASA Astrophysics Data System (ADS)

    Sousa, Pedro; Ivens, Jan; Lomov, Stepan V.

    2018-05-01

    The compression behavior of a material cannot be accurately determined unless the machine compliance is accounted for prior to the measurements. This work discusses machine compliance during a compressibility test with fiberglass fabrics. The thickness variation was measured during loading and unloading cycles with a relaxation stage of 30 minutes between them. The measurements were performed using an indirect technique based on comparing the displacement in a free compression cycle (without a sample) to the displacement measured with a sample. For the free test, no machine relaxation was observed during the relaxation stage. Whether relaxation is considered or not, the characteristic curves for a free compression cycle can be overlapped precisely at the majority of the points. For the compression test with a sample, a non-physical thickness decrease of about 30 µm was observed during the relaxation stage, which can be explained by the fabric relaxing more than the machine. Beyond the technique normally used, a second technique was applied which enforces a constant thickness during relaxation: the machine displacement with a sample is simply subtracted from the machine displacement without a sample, the latter being imposed as constant. If imposed as constant, the thickness remains constant during the relaxation stage and decreases suddenly afterwards; if continuously calculated, it decreases gradually during the relaxation stage. Independently of the technique used, the final result remains unchanged. The uncertainty introduced by this imprecision is about ±15 µm.

  16. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, and data dependence between, the sample values within a block of compressed data introduce an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  17. Simultaneous compression and encryption for secure real-time transmission of sensitive video

    NASA Astrophysics Data System (ADS)

    Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.

    2014-05-01

    Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge, since both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm utilizes the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms have the following features: high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
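
    The chaotic half of such a scheme can be sketched in isolation. Below, a logistic-map keystream stands in for the paper's A5-plus-logistic-map combination; the seed value and 8-bit quantization are illustrative assumptions:

```python
def logistic_keystream(x0, n, r=3.99):
    """Generate n keystream bytes by iterating the chaotic logistic
    map x <- r*x*(1-x) and quantizing each state to 8 bits."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def xor_cipher(data, key_x0):
    """Encrypt or decrypt (XOR is symmetric) with the keystream."""
    ks = logistic_keystream(key_x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

coeffs = b"significant wavelet coefficients"
enc = xor_cipher(coeffs, 0.3141592)          # encrypt with secret seed
dec = xor_cipher(enc, 0.3141592)             # same seed recovers the data
```

    Because XOR is symmetric, the same seed both encrypts and decrypts; the real scheme applies this only to the significant parameters and wavelet coefficients rather than to the whole stream.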

  18. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Little research has been done on a synthesis of the two, in which image retrieval is performed directly in the compressed domain of images without the need to uncompress them first. In this paper, methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed for legal reasons. The algorithms in this paper are based on predictive coding methods, where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on the observation that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
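
    The first idea, that prediction residuals already describe texture, can be illustrated with a minimal left-neighbor predictor. The predictor and the L1 distance between residual histograms are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def residual_signature(img):
    """Normalized histogram of left-neighbor prediction errors; a
    byproduct of predictive coding that doubles as a texture index."""
    res = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    hist, _ = np.histogram(res, bins=np.arange(-255, 257))
    return hist / hist.sum()

smooth = np.tile(np.arange(16, dtype=np.uint8), (16, 1))      # gentle ramp
noisy = (np.indices((16, 16)).sum(axis=0) * 37 % 256).astype(np.uint8)
# Retrieval would rank stored images by distance between signatures.
d = np.abs(residual_signature(smooth) - residual_signature(noisy)).sum()
```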

  19. Edge compression techniques for visualization of dense directed graphs.

    PubMed

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher

    2013-12-01

    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules', or groups of nodes, such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition, which permits internal structure in modules and allows them to be nested; and Power Graph Analysis, which further allows edges to cross module boundaries. These techniques all have the same goal, to compress the set of edges that need to be rendered to fully convey connectivity, but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothesized trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by, and discuss in particular, the application to software dependency analysis.
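
    The simplest of the three techniques, grouping nodes with identical neighbor sets, can be sketched as follows (an undirected simplification; the helper and the toy graph are illustrative):

```python
from collections import defaultdict

def group_by_neighbors(edges):
    """Merge nodes with identical neighbor sets into modules; one edge
    between two modules then implies all pairwise edges (lossless)."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    modules = defaultdict(list)
    for node, ns in nbrs.items():
        modules[frozenset(ns)].append(node)
    return list(modules.values())

# Four rendered edges collapse to a single module-to-module edge:
# {a, b} -- {c, d}.
mods = group_by_neighbors([("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")])
```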

  20. Optimizing Lidar Scanning Strategies for Wind Energy Measurements (Invited)

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Bonin, T. A.; Klein, P.; Wharton, S.; Chilson, P. B.

    2013-12-01

    Environmental concerns and rising fossil fuel prices have prompted rapid development in the renewable energy sector. Wind energy, in particular, has become increasingly popular in the United States. However, the intermittency of available wind energy makes it difficult to integrate wind energy into the power grid. Thus, the expansion and successful implementation of wind energy requires accurate wind resource assessments and wind power forecasts. The actual power produced by a turbine is affected by the wind speeds and turbulence levels experienced across the turbine rotor disk. Because of the range of measurement heights required for wind power estimation, remote sensing devices (e.g., lidar) are ideally suited for these purposes. However, the volume averaging inherent in remote sensing technology produces turbulence estimates that are different from those estimated by a sonic anemometer mounted on a standard meteorological tower. In addition, most lidars intended for wind energy purposes utilize a standard Doppler beam-swinging or Velocity-Azimuth Display technique to estimate the three-dimensional wind vector. These scanning strategies are ideal for measuring mean wind speeds but are likely inadequate for measuring turbulence. In order to examine the impact of different lidar scanning strategies on turbulence measurements, a WindCube lidar, a scanning Halo lidar, and a scanning Galion lidar were deployed at the Southern Great Plains Atmospheric Radiation Measurement (ARM) site in Summer 2013. Existing instrumentation at the ARM site, including a 60-m meteorological tower and an additional scanning Halo lidar, were used in conjunction with the deployed lidars to evaluate several user-defined scanning strategies. For part of the experiment, all three scanning lidars were pointed at approximately the same point in space and a tri-Doppler analysis was completed to calculate the three-dimensional wind vector every 1 second. 
In another part of the experiment, one of the scanning lidars ran a Doppler beam-swinging technique identical to that used by the WindCube lidar while another scanning lidar used a novel six-beam technique that has been presented in the literature as a better alternative for measuring turbulence. In this presentation, turbulence measurements from these techniques are compared to turbulence measured by the WindCube lidar and sonic anemometers on the 60-m meteorological tower. In addition, recommendations are made for lidar measurement campaigns for wind energy applications.

  1. CubeSat Launch Initiative

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott

    2016-01-01

    The National Aeronautics and Space Administration (NASA) recognizes the tremendous potential that CubeSats (very small satellites) have to inexpensively demonstrate advanced technologies, collect scientific data, and enhance student engagement in Science, Technology, Engineering, and Mathematics (STEM). The CubeSat Launch Initiative (CSLI) was created to provide launch opportunities for CubeSats developed by academic institutions, non-profit entities, and NASA centers. This presentation will provide an overview of the CSLI, its benefits, and its results.

  2. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
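
    One standard objective metric for the image-quality side of that trade-off is peak signal-to-noise ratio; a minimal sketch, where the ramp test data and quantization step stand in for OCT frames:

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.tile(np.arange(256, dtype=np.uint8), (4, 1))   # stand-in image data
q = (a // 8) * 8                                      # crude lossy quantization
quality_db = psnr(a, q)                               # about 36 dB here
```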

  3. Power generation and solar panels for an MSU CubeSat

    NASA Astrophysics Data System (ADS)

    Sassi, Soundouss

    This thesis is a power generation study of a proposed CubeSat at Mississippi State University (MSU). CubeSats are miniaturized satellites, 10 x 10 x 10 cm in dimension. Once in orbit, their power source is the sun during daylight and the batteries during eclipse. The MSU CubeSat is equipped with solar panels. This effort discusses two types of cells, gallium arsenide and silicon, and which of the two will suit the MSU CubeSat best. Once the cell type is chosen, a further decision regarding the electrical power subsystem is made. The solar array design can only be done once the electrical power subsystem and the solar cells have been chosen. Power calculations for different mission durations then follow, along with the sizing of the solar arrays. In the last part, the batteries are introduced and discussed in order to choose one type of battery for the MSU CubeSat.

  4. Teaching group theory using Rubik's cubes

    NASA Astrophysics Data System (ADS)

    Cornock, Claire

    2015-10-01

    Being situated within a course at the applied end of the spectrum of maths degrees, the pure mathematics modules at Sheffield Hallam University have an applied spin. Pure topics are taught through consideration of practical examples such as knots, cryptography and automata. Rubik's cubes are used to teach group theory within a final year pure elective based on physical examples. Abstract concepts, such as subgroups, homomorphisms and equivalence relations are explored with the cubes first. In addition to this, conclusions about the cubes can be made through the consideration of algebraic approaches through a process of discovery. The teaching, learning and assessment methods are explored in this paper, along with the challenges and limitations of the methods. The physical use of Rubik's cubes within the classroom and examination will be presented, along with the use of peer support groups in this process. The students generally respond positively to the teaching methods and the use of the cubes.

  5. ECITE: A Testbed for Assessment of Technology Interoperability and Integration with Architecture Components

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Keiser, K.; Law, E.; Yang, C. P.; Djorgovski, S. G.

    2016-12-01

    ECITE (EarthCube Integration and Testing Environment) is providing both cloud-based computational testing resources and an Assessment Framework for Technology Interoperability and Integration. NSF's EarthCube program is funding the development of cyberinfrastructure building block components as technologies to address Earth science research problems. These EarthCube building blocks need to support integration and interoperability objectives to work towards a coherent cyberinfrastructure architecture for the program. ECITE is being developed to provide capabilities to test and assess the interoperability and integration across funded EarthCube technology projects. EarthCube defined criteria for interoperability and integration are applied to use cases coordinating science problems with technology solutions. The Assessment Framework facilitates planning, execution and documentation of the technology assessments for review by the EarthCube community. This presentation will describe the components of ECITE and examine the methodology of cross walking between science and technology use cases.

  6. EarthCube: A Community-Driven Cyberinfrastructure for the Geosciences

    NASA Astrophysics Data System (ADS)

    Koskela, Rebecca; Ramamurthy, Mohan; Pearlman, Jay; Lehnert, Kerstin; Ahern, Tim; Fredericks, Janet; Goring, Simon; Peckham, Scott; Powers, Lindsay; Kamalabdi, Farzad; Rubin, Ken; Yarmey, Lynn

    2017-04-01

    EarthCube is creating a dynamic, System of Systems (SoS) infrastructure and data tools to collect, access, analyze, share, and visualize all forms of geoscience data and resources, using advanced collaboration, technological, and computational capabilities. EarthCube, as a joint effort between the U.S. National Science Foundation Directorate for Geosciences and the Division of Advanced Cyberinfrastructure, is a quickly growing community of scientists across all geoscience domains, as well as geoinformatics researchers and data scientists. EarthCube has attracted an evolving, dynamic virtual community of more than 2,500 contributors, including earth, ocean, polar, planetary, atmospheric, geospace, computer and social scientists, educators, and data and information professionals. During 2017, EarthCube will transition to the implementation phase. The implementation will balance "innovation" and "production" to advance cross-disciplinary science goals as well as the development of future data scientists. This presentation will describe the current architecture design for the EarthCube cyberinfrastructure and implementation plan.

  7. Miniature Radioisotope Thermoelectric Power Cubes

    NASA Technical Reports Server (NTRS)

    Patel, Jagdish U.; Fleurial, Jean-Pierre; Snyder, G. Jeffrey; Caillat, Thierry

    2004-01-01

    Cube-shaped thermoelectric devices energized by a particles from radioactive decay of Cm-244 have been proposed as long-lived sources of power. These power cubes are intended especially for incorporation into electronic circuits that must operate in dark, extremely cold locations (e.g., polar locations or deep underwater on Earth, or in deep interplanetary space). Unlike conventional radioisotope thermoelectric generators used heretofore as central power sources in some spacecraft, the proposed power cubes would be small enough (volumes would range between 0.1 and 0.2 cm3) to play the roles of batteries that are parts of, and dedicated to, individual electronic-circuit packages. Unlike electrochemical batteries, these power cubes would perform well at low temperatures. They would also last much longer: given that the half-life of Cm-244 is 18 years, a power cube could remain adequate as a power source for years, depending on the power demand in its particular application.
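
    The longevity claim follows from exponential decay. A minimal sketch, assuming the thermal (and hence electrical) output scales with the remaining Cm-244 activity:

```python
def power_fraction(t_years, half_life=18.0):
    """Fraction of the initial power remaining after t_years,
    for a source with the given half-life (Cm-244: ~18 years)."""
    return 0.5 ** (t_years / half_life)

# After one half-life the output halves; after 5 years about 82% remains,
# so a power cube sized with modest margin stays adequate for years.
after_five_years = power_fraction(5.0)
```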

  8. Use of a wave reverberation technique to infer the density compression of shocked liquid deuterium to 75 GPa.

    PubMed

    Knudson, M D; Hanson, D L; Bailey, J E; Hall, C A; Asay, J R

    2003-01-24

    A novel approach was developed to probe density compression of liquid deuterium (L-D2) along the principal Hugoniot. Relative transit times of shock waves reverberating within the sample are shown to be sensitive to the compression due to the first shock. This technique has proven to be more sensitive than the conventional method of inferring density from the shock and mass velocity, at least in this high-pressure regime. Results in the range of 22-75 GPa indicate an approximately fourfold density compression, and provide data to differentiate between proposed theories for hydrogen and its isotopes.

  9. Compressive self-interference Fresnel digital holography with faithful reconstruction

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Man, Tianlong; Han, Ying; Zhou, Hongqiang; Wang, Dayong

    2017-05-01

    We developed a compressive self-interference digital holographic approach that allows retrieving three-dimensional information of spatially incoherent objects from a single-shot captured hologram. Fresnel incoherent correlation holography is combined with a parallel phase-shifting technique to instantaneously obtain spatially multiplexed phase-shifting holograms. The recording scheme is regarded as a compressive forward sensing model; thus, a compressive-sensing-based reconstruction algorithm is implemented to reconstruct the original object from the undersampled demultiplexed sub-holograms. The concept was verified by simulations and by experiments that emulated the use of the polarizer array. The proposed technique has great potential for application in 3D tracking of spatially incoherent samples.

  10. Compressed sensing system considerations for ECG and EMG wireless biosensors.

    PubMed

    Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J

    2012-04-01

    Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.
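
    The acquisition side of CS at a 16X compression factor can be sketched as follows; the dimensions, sparsity level, and Gaussian sensing matrix are illustrative, and recovery would use a separate algorithm (e.g. basis pursuit or orthogonal matching pursuit):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 512, 32                    # M = N / 16, i.e. a 16X compression factor
x = np.zeros(N)                   # sparse stand-in for an ECG/EMG segment
x[rng.choice(N, size=4, replace=False)] = rng.standard_normal(4)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix
y = Phi @ x                       # sub-Nyquist measurements actually stored
```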

  11. The application of compressed sensing to long-term acoustic emission-based structural health monitoring

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David

    2012-04-01

    The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event compressed classification finds an event of interest, ℓ1-norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.

  12. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used for 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, holding the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes the probabilities of all compressed data using a table of data, and then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, it is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
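
    Step (1) of the pipeline, splitting the image into subbands with a DWT and then compacting the low band with a DCT, can be sketched with a one-level Haar transform (the Haar filter and the 8x8 ramp image are illustrative simplifications of the paper's two-level scheme):

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar DWT: low-frequency LL subband plus the
    LH, HL, HH high-frequency subbands (each half size per axis)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def dct2(block):
    """Orthonormal 2D DCT-II of a square block via the transform matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

img = np.arange(64, dtype=float).reshape(8, 8)   # smooth test image
ll, lh, hl, hh = haar2d(img)      # DWT separates low and high frequencies
dc_matrix = dct2(ll)              # DCT packs the LL energy toward [0, 0]
```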

  13. Constraint Logic Programming approach to protein structure prediction.

    PubMed

    Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico

    2004-11-30

    The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have been also exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. As a test implementation their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all atom models with plausible structure. Results have been compared with a similar approach using a well-established technique as molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying protein simplified models, which can be converted into realistic all atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies, resides in the rapid software prototyping, in the easy way of encoding heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  14. The heat-compression technique for the conversion of platelet-rich fibrin preparation to a barrier membrane with a reduced rate of biodegradation.

    PubMed

    Kawase, Tomoyuki; Kamiya, Mana; Kobayashi, Mito; Tanaka, Takaaki; Okuda, Kazuhiro; Wolff, Larry F; Yoshie, Hiromasa

    2015-05-01

    Platelet-rich fibrin (PRF) was developed as an advanced form of platelet-rich plasma to eliminate xenofactors, such as bovine thrombin, and it is mainly used as a source of growth factor for tissue regeneration. Furthermore, although a minor application, PRF in a compressed membrane-like form has also been used as a substitute for commercially available barrier membranes in guided-tissue regeneration (GTR) treatment. However, the PRF membrane is resorbed within 2 weeks or less at implantation sites; therefore, it can barely maintain sufficient space for bone regeneration. In this study, we developed and optimized a heat-compression technique and tested the feasibility of the resulting PRF membrane. Freshly prepared human PRF was first compressed with dry gauze and subsequently with a hot iron. Biodegradability was microscopically examined in vitro by treatment with plasmin at 37°C or in vivo by subcutaneous implantation in nude mice. Compared with the control gauze-compressed PRF, the heat-compressed PRF appeared plasmin-resistant and remained stable for longer than 10 days in vitro. Additionally, in animal implantation studies, the heat-compressed PRF was observed at least for 3 weeks postimplantation in vivo whereas the control PRF was completely resorbed within 2 weeks. Therefore, these findings suggest that the heat-compression technique reduces the rate of biodegradation of the PRF membrane without sacrificing its biocompatibility and that the heat-compressed PRF membrane easily could be prepared at chair-side and applied as a barrier membrane in the GTR treatment. © 2014 Wiley Periodicals, Inc.

  15. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image, the LCP is 1.89, although it increases to 3.48 when only a cloud-free section of the image is considered. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using a Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  16. Turbulence intensity and spatial integral scale during compression and expansion strokes in a four-cycle reciprocating engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikegami, M.; Shioji, M.; Nishimoto, K.

    1987-01-01

    A laser homodyne technique is applied to measure turbulence intensities and spatial scales during compression and expansion strokes in a non-fired engine. By using this technique, relative fluid motion in a turbulent flow is detected directly without cyclic variation biases caused by fluctuation in the main flow. Experiments are performed at different engine speeds, compression ratios, and induction swirl ratios. In no-swirl cases the turbulence field near the compression end is almost uniform, whereas in swirled cases both the turbulence intensity and the scale near the cylinder axis are higher than those in the periphery. In addition, based on the measured results, the k-epsilon two-equation turbulence model under the influence of compression is discussed.

  17. AGILE confirmation of gamma-ray activity from the IceCube-170922A error region

    NASA Astrophysics Data System (ADS)

    Lucarelli, F.; Piano, G.; Pittori, C.; Verrecchia, F.; Tavani, M.; Bulgarelli, A.; Munar-Adrover, P.; Minervini, G.; Ursi, A.; Vercellone, S.; Donnarumma, I.; Fioretti, V.; Zoli, A.; Striani, E.; Cardillo, M.; Gianotti, F.; Trifoglio, M.; Giuliani, A.; Mereghetti, S.; Caraveo, P.; Perotti, F.; Chen, A.; Argan, A.; Costa, E.; Del Monte, E.; Evangelista, Y.; Feroci, M.; Lazzarotto, F.; Lapshov, I.; Pacciani, L.; Soffitta, P.; Sabatini, S.; Vittorini, V.; Pucella, G.; Rapisarda, M.; Di Cocco, G.; Fuschino, F.; Galli, M.; Labanti, C.; Marisaldi, M.; Pellizzoni, A.; Pilia, M.; Trois, A.; Barbiellini, G.; Vallazza, E.; Longo, F.; Morselli, A.; Picozza, P.; Prest, M.; Lipari, P.; Zanello, D.; Cattaneo, P. W.; Rappoldi, A.; Colafrancesco, S.; Parmiggiani, N.; Ferrari, A.; Paoletti, F.; Antonelli, A.; Giommi, P.; Salotti, L.; Valentini, G.; D'Amico, F.

    2017-09-01

    Following the IceCube observation of a high-energy neutrino candidate event, IceCube-170922A, at T0 = 17/09/22 20:54:30.43 UT (https://gcn.gsfc.nasa.gov/gcn3/21916.gcn3), and the detection of increased gamma-ray activity from a previously known Fermi-LAT gamma-ray source (3FGL J0509.4+0541) in the IceCube-170922A error region (ATel #10791), we have analysed the AGILE-GRID data acquired in the days before and after the neutrino event T0, searching for significant gamma-ray excess above 100 MeV from a position compatible with the IceCube and Fermi-LAT error regions.

  18. Expanding Access: An Evaluation of ReadCube Access as an ILL Alternative.

    PubMed

    Grabowsky, Adelia

    2016-01-01

    ReadCube Access is a patron-driven document delivery system that provides immediate access to articles from journals owned by Nature Publishing Group. The purpose of this study was to evaluate the use of ReadCube Access as an interlibrary loan (ILL) alternative for nonsubscribed Nature journals at Auburn University, a research university with a School of Pharmacy and a School of Veterinary Medicine. An analysis of ten months' usage and costs is presented, along with the results of a user satisfaction survey. Auburn University Libraries found ReadCube to be an acceptable alternative to ILL for unsubscribed Nature journals and, at current levels of use and cost, considers ReadCube to be financially sustainable.

  19. Constraining sterile neutrinos with AMANDA and IceCube atmospheric neutrino data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esmaili, Arman; Peres, O.L.G.; Halzen, Francis, E-mail: aesmaili@ifi.unicamp.br, E-mail: halzen@icecube.wisc.edu, E-mail: orlando@ifi.unicamp.br

    2012-11-01

    We demonstrate that atmospheric neutrino data accumulated with the AMANDA and the partially deployed IceCube experiments constrain the allowed parameter space for a hypothesized fourth sterile neutrino beyond the reach of a combined analysis of all other experiments, for Δm²₄₁ ≲ 1 eV². Although the IceCube data dominates the statistics of the analysis, the advantage of combining AMANDA and IceCube data is the partial mitigation of as-yet-unknown instrumental systematic uncertainties. We also illustrate the sensitivity of the completed IceCube detector, which is now taking data, to the parameter space of the 3+1 model.

  20. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bit/pixel is achieved with the technique while maintaining image quality and preserving cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
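
    The block truncation coding that the technique adapts can be sketched compactly. The following is a minimal illustration of classic fixed-rate BTC, not the paper's adaptive variable-rate, SAR-model-driven variant: each block is reduced to a one-bit-per-pixel bitmap plus two reconstruction levels chosen to preserve the block mean and standard deviation.

```python
# Minimal classic block truncation coding (BTC) sketch: one block is stored
# as a 1-bit/pixel bitmap plus two levels that preserve the block's first
# two moments. Illustrative only; the paper adapts the rate per block.
import numpy as np

def btc_encode(block):
    """Encode one block as (bitmap, low, high), preserving mean and std."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = int(bitmap.sum())                 # pixels at or above the mean
    n = block.size
    if q in (0, n):                       # flat block: one level suffices
        return bitmap, m, m
    low = m - s * np.sqrt(q / (n - q))    # level for below-mean pixels
    high = m + s * np.sqrt((n - q) / q)   # level for above-mean pixels
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.array([[10.0, 12.0], [200.0, 210.0]])
bitmap, low, high = btc_encode(block)
recon = btc_decode(bitmap, low, high)
```

    A 2x2 block costs 4 bitmap bits plus two levels; the paper's roughly 1.6 bit/pixel figure comes from its adaptive variable-rate modification of this scheme.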

  1. CubeSub

    NASA Technical Reports Server (NTRS)

    Slettebo, Christian; Jonsson, Lars Jonas

    2016-01-01

    This presentation introduces and discusses the development of the CubeSub submersible concept, an Autonomous Underwater Vehicle (AUV) designed around the CubeSat satellite form factor. The presented work is part of the author's MSc thesis in Aerospace Engineering at the Royal Institute of Technology, Stockholm, Sweden, and was performed during an internship at the Mission Design Division of the NASA Ames Research Center, Moffett Field, CA. Still in the early stages of its development, the CubeSub is to become a submersible test-bed for technology qualified for underwater and space environments. With the long-term goal of exploring underwater environments in outer space, such as the suspected subsurface ocean of Jupiter's moon Europa, a number of technologies and operational procedures must be developed and matured. To assist in this, the CubeSub platform is introduced as a tool to allow engineers and scientists to easily test qualified technology underwater. A CubeSat is a class of miniaturized satellite built to a standardized size. The base size is 1U (U for unit), corresponding to a 100 x 100 x 113.5 mm volume. In other words, a 1U CubeSat can easily be held in one hand. Stacking units gives larger satellite sizes such as the also commonly used 1.5U, 2U and 3U. The CubeSat standard is in itself already well established and hundreds of CubeSats have to date been launched into space. Compatible technology is readily available and the know-how exists in the space industry, all of which makes it firm ground for the CubeSub to stand on. The rationale behind using the CubeSat form factor is to make use of this pre-existing foundation, making the CubeSub easy to develop, modular and readily available. It will thereby aid in the process of maturing the concept of a fully space-qualified submersible headed for outer space. As a further clarification, the CubeSub is itself not meant for outer space, but to facilitate development of such a vessel.
    Along with its uses as a testbed, the CubeSub also holds the potential to become a useful tool for exploration and experimentation here on Earth. A highly standardized system utilizing well-known hardware can reduce the cost and required workload for researchers wishing to perform experiments and exploration. Users could design sensors and experiments to comply with the already well-established CubeSat standard, which are then carried by the CubeSub to the region of interest. This in turn means that the end users can focus more on formulating the experiment itself and less on how to get it where they want it. The CubeSub is designed to be built up from modules, which can be assembled in different configurations to fulfill different needs. Each module will be powered individually and intermodular communication will be wireless, removing the need for wiring. The inside of the cylindrical hull will be flooded with ambient water to enhance the interaction between payloads and the surrounding environment. The overall torpedo-like shape is similar to that of a conventional AUV, slender and smooth. This makes for low drag, reduces the risk of snagging on surrounding objects, and makes it possible to deploy through an ice sheet via a narrow borehole or navigate in tight areas. To keep costs low and further accelerate development, rapid prototyping is utilized wherever possible. Full-scale prototypes are being constructed through 3D-printing and using COTS (Commercial Off-The-Shelf) components. 3D-printing is used both for the largest hull components and the relatively small and delicate propellers. Arduino boards are used for control and internal communication.
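
    The form-factor arithmetic above is simple enough to state in code. This small helper is hypothetical and for illustration only: it takes the 1U unit as 100 x 100 x 113.5 mm and assumes, as the stacking description implies, that the length scales linearly with the number of units.

```python
# Hypothetical helper: outer envelope of an nU CubeSat stack, assuming
# 1U = 100 x 100 x 113.5 mm and linear stacking along the long axis.
def cubesat_dimensions_mm(units):
    """Return (width, depth, length) in mm for an nU CubeSat stack."""
    return (100.0, 100.0, 113.5 * units)

print(cubesat_dimensions_mm(3))  # (100.0, 100.0, 340.5)
```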

  2. Low cost voice compression for mobile digital radios

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1985-01-01

    A new technique for low cost robust voice compression at 4800 bits per second was studied. The approach was based on using a cascade of digital biquad adaptive filters with simplified multipulse excitation, followed by simple bit sequence compression.

  3. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung

    2013-10-15

    Purpose: To modify the previously proposed preprocessing technique improving the compressibility of computed tomography (CT) images to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed aiming at only chest CT images, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In randomly selected 368 CT examinations (352 787 images), each image was preprocessed by using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covers the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compressions. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
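
    The core idea, replacing out-of-body pixels with one constant to raise redundancy, is easy to demonstrate. The sketch below uses a synthetic noisy slice, a toy circular "body" mask in place of the paper's segmentation, and zlib in place of the JPEG/JPEG2000 codecs; the numbers it produces are illustrative only.

```python
# Sketch of the preprocessing idea (not the paper's segmentation method):
# setting pixels outside a body mask to a constant raises redundancy, so a
# lossless codec (zlib here, standing in for JPEG/JPEG2000) compresses more.
import zlib
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(256, 256), dtype=np.uint16)  # noisy CT-like slice
yy, xx = np.mgrid[0:256, 0:256]
body = (yy - 128) ** 2 + (xx - 128) ** 2 < 90 ** 2  # toy circular body region

pre = img.copy()
pre[~body] = 0  # constant fill outside the body

raw = len(zlib.compress(img.tobytes(), 9))   # compressed size, original
prep = len(zlib.compress(pre.tobytes(), 9))  # compressed size, preprocessed

cr_raw = img.nbytes / raw
cr_pre = pre.nbytes / prep
gain_pct = 100.0 * (cr_pre / cr_raw - 1.0)   # percentage increase in CR
```

    The constant background compresses to almost nothing, so the reversible CR rises; in the study the same effect yielded median gains of 26-40% across codecs.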

  4. NPS CubeSat Launcher Design, Process and Requirements

    DTIC Science & Technology

    2009-06-01

    Soviet era ICBM. The first Dnepr launch in July 2006 consisted of fourteen CubeSats in five P-PODs, while the second in April 2007 consisted of...Regulations (ITAR). ITAR restricts the export of defense-related products and technology on the United States Munitions List. Although one might not...think that CubeSat technology would fall under ITAR, in fact a large amount of aerospace technology, including some that could be used on CubeSats, is

  5. Injectant mole-fraction imaging in compressible mixing flows using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Hartfield, Roy J., Jr.; Abbitt, John D., III; Mcdaniel, James C.

    1989-01-01

    A technique is described for imaging the injectant mole-fraction distribution in nonreacting compressible mixing flow fields. Planar fluorescence from iodine, seeded into air, is induced by a broadband argon-ion laser and collected using an intensified charge-injection-device array camera. The technique eliminates the thermodynamic dependence of the iodine fluorescence in the compressible flow field by taking the ratio of two images collected with identical thermodynamic flow conditions but different iodine seeding conditions.

  6. The design and performance of IceCube DeepCore

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Allen, M. M.; Altmann, D.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brown, A. M.; Buitink, S.; Caballero-Mora, K. S.; Carson, M.; Chirkin, D.; Christy, B.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; Cruz Silva, A. H.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Degner, T.; Demirörs, L.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Dunkman, M.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Góra, D.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, B.; Homeier, A.; Hoshina, K.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobi, E.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kroll, G.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Larson, M. 
J.; Lauer, R.; Lünemann, J.; Madsen, J.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Richman, M.; Rodrigues, J. P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schmidt, T.; Schönwald, A.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Stüer, M.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, C.; Xu, D. L.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.

    2012-05-01

    The IceCube neutrino observatory in operation at the South Pole, Antarctica, comprises three distinct components: a large buried array for ultrahigh energy neutrino detection, a surface air shower array, and a new buried component called DeepCore. DeepCore was designed to lower the IceCube neutrino energy threshold by over an order of magnitude, to energies as low as about 10 GeV. DeepCore is situated primarily 2100 m below the surface of the icecap at the South Pole, at the bottom center of the existing IceCube array, and began taking physics data in May 2010. Its location takes advantage of the exceptionally clear ice at those depths and allows it to use the surrounding IceCube detector as a highly efficient active veto against the principal background of downward-going muons produced in cosmic-ray air showers. DeepCore has a module density roughly five times higher than that of the standard IceCube array, and uses photomultiplier tubes with a new photocathode featuring a quantum efficiency about 35% higher than standard IceCube PMTs. Taken together, these features of DeepCore will increase IceCube's sensitivity to neutrinos from WIMP dark matter annihilations, atmospheric neutrino oscillations, galactic supernova neutrinos, and point sources of neutrinos in the northern and southern skies. In this paper we describe the design and initial performance of DeepCore.

  7. The Design and Performance of IceCube DeepCore

    NASA Technical Reports Server (NTRS)

    Stamatikos, M.

    2012-01-01

    The IceCube neutrino observatory in operation at the South Pole, Antarctica, comprises three distinct components: a large buried array for ultrahigh energy neutrino detection, a surface air shower array, and a new buried component called DeepCore. DeepCore was designed to lower the IceCube neutrino energy threshold by over an order of magnitude, to energies as low as about 10 GeV. DeepCore is situated primarily 2100 m below the surface of the icecap at the South Pole, at the bottom center of the existing IceCube array, and began taking physics data in May 2010. Its location takes advantage of the exceptionally clear ice at those depths and allows it to use the surrounding IceCube detector as a highly efficient active veto against the principal background of downward-going muons produced in cosmic-ray air showers. DeepCore has a module density roughly five times higher than that of the standard IceCube array, and uses photomultiplier tubes with a new photocathode featuring a quantum efficiency about 35% higher than standard IceCube PMTs. Taken together, these features of DeepCore will increase IceCube's sensitivity to neutrinos from WIMP dark matter annihilations, atmospheric neutrino oscillations, galactic supernova neutrinos, and point sources of neutrinos in the northern and southern skies. In this paper we describe the design and initial performance of DeepCore.

  8. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image-processing boards, to an 80386 PC-compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
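
    The vector-quantization stage can be sketched in a few lines. Here each pixel's spectrum (one value per channel) is a vector, and a small codebook trained with a few k-means iterations replaces every spectrum by its nearest codeword; the 7-channel count matches the CAMS example, but the data, codebook size, and training loop are illustrative rather than the paper's.

```python
# Toy vector quantization of multispectral pixels: each pixel's 7-channel
# spectrum is a vector; a small codebook learned by a few k-means iterations
# replaces each spectrum by its nearest codeword. Illustrative parameters.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_channels, k = 500, 7, 8
data = rng.normal(size=(n_pixels, n_channels))

# k-means: initialize from random pixels, then alternate assign / update
codebook = data[rng.choice(n_pixels, k, replace=False)].copy()
for _ in range(10):
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)          # codeword index per pixel
    for j in range(k):
        if (labels == j).any():
            codebook[j] = data[labels == j].mean(axis=0)

recon = codebook[labels]               # reconstructed spectra
rms = np.sqrt(((data - recon) ** 2).mean())
```

    Storing a 3-bit codeword index per pixel instead of 7 channel values is what produces large ratios; the residual RMS error plays the role of the study's 3.6- and 15.8-pixel figures.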

  9. EarthCube: A Community Organization for Geoscience Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Patten, K.; Allison, M. L.

    2014-12-01

    The National Science Foundation's (NSF) EarthCube initiative is a community-driven approach to building cyberinfrastructure for managing, sharing, and exploring geoscience data and information to better address today's grand-challenge science questions. The EarthCube Test Enterprise Governance project is a two-year effort seeking to engage diverse geo- and cyber-science communities in applying a responsive approach to the development of a governing system for EarthCube. During Year 1, an Assembly of seven stakeholder groups representing the broad EarthCube community developed a draft Governance Framework. Finalized at the June 2014 EarthCube All Hands Meeting, this framework will be tested during the demonstration phase in Year 2, beginning October 2014. A brief overview of the framework: Community-elected members of the EarthCube Leadership Council will be responsible for managing strategic direction and identifying the scope of EarthCube. Three Standing Committees will also be established to oversee the development of technology and architecture, to coordinate among new and existing data facilities, and to represent the academic geosciences community in driving development of EarthCube cyberinfrastructure. An Engagement Team and a Liaison Team will support communication and partnerships with internal and external stakeholders, and a central Office will serve a logistical support function to the governance as a whole. Finally, ad hoc Working Groups and Special Interest Groups will take on other issues related to EarthCube's goals. The Year 2 demonstration phase will test the effectiveness of the proposed framework and allow for elements to be changed to better meet community needs. 
It will begin by populating committees and teams, and finalizing leadership and decision-making processes to move forward on community-selected priorities including identifying science drivers, coordinating emerging technical elements, and coming to convergence on system architecture. A January mid-year review will assemble these groups to analyze the effectiveness of the framework and make adjustments as necessary. If successful, this framework will move EarthCube forward as a collaborative platform and potentially act as a model for future NSF investments in geoscience cyberinfrastructure.

  10. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
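
    The referential idea, encoding only the differences against a reference, can be illustrated with a deliberately naive compressor (not FRESCO's algorithm): the input becomes a list of (position, length) matches into the reference, with single-character literals where no sufficiently long match exists.

```python
# Naive referential compression sketch: encode the input as matches into a
# reference string plus literal characters. FRESCO's actual algorithm and
# data structures are far more efficient; this only shows the principle.
def ref_compress(ref, text, min_len=4):
    out, i = [], 0
    while i < len(text):
        best_pos, best_len = -1, 0
        # greedy: longest reference substring matching at position i
        for j in range(len(ref)):
            l = 0
            while i + l < len(text) and j + l < len(ref) and text[i + l] == ref[j + l]:
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len >= min_len:
            out.append(("M", best_pos, best_len))  # match into the reference
            i += best_len
        else:
            out.append(("L", text[i]))             # literal fallback
            i += 1
    return out

def ref_decompress(ref, ops):
    parts = []
    for op in ops:
        if op[0] == "M":
            _, pos, length = op
            parts.append(ref[pos:pos + length])
        else:
            parts.append(op[1])
    return "".join(parts)

ref = "ACGTACGTTTGACCA"
text = "ACGTACGGTTGACCA"   # one substitution relative to the reference
ops = ref_compress(ref, text)
```

    Highly similar inputs collapse to a handful of match operations plus a few literals, which is why ratios grow with similarity; second-order compression then squeezes the operation lists themselves.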

  11. Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique

    NASA Technical Reports Server (NTRS)

    Li, Lihua; Coon, Michael; McLinden, Matthew

    2013-01-01

    Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted, rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short-pulse radars, such as no need for high-power RF circuitry, no need for high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected by range sidelobes may mask the returns from nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. Traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique that achieves ultra-low range sidelobes, so that it can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements.
    By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression technique could have a significant impact on future radar development. The novel feature of this innovation is the non-linear FM (NLFM) waveform design. Traditional linear FM has the limit (-20 log BT -3 dB) for achieving ultra-low range sidelobes in pulse compression. For this study, a different combination of 20- or 40-microsecond chirp pulse width and 2- or 4-MHz chirp bandwidth was used. These are typical operational parameters for airborne or spaceborne weather radars. The NLFM waveform design was then implemented on an FPGA board to generate a real chirp signal, which was then sent to the radar transceiver simulator. The final results have shown significant improvement in sidelobe performance compared to that obtained using a traditional linear FM chirp.
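
    The matched-filter mechanics behind pulse compression can be sketched with a linear FM chirp; the NLFM designs of this work further shape the spectrum to push sidelobes down, which the sketch omits. The 20-microsecond, 2-MHz parameters follow the abstract, while the sample rate and echo delay are arbitrary choices for the example.

```python
# Pulse compression sketch: a long linear FM chirp is transmitted, and the
# received echo is compressed with a matched filter, collapsing the pulse
# into a narrow peak at the target delay. Linear FM only; NLFM is omitted.
import numpy as np

fs = 10e6                      # sample rate, Hz (arbitrary for the example)
T, B = 20e-6, 2e6              # pulse width and chirp bandwidth (per abstract)
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # linear FM sweep, 0..B Hz

# echo: the chirp buried at a known delay in a longer receive window
rx = np.zeros(1024, dtype=complex)
delay = 300
rx[delay:delay + chirp.size] = chirp

# matched filtering (np.correlate conjugates its second argument)
matched = np.correlate(rx, chirp, mode="valid")
peak = int(np.abs(matched).argmax())
```

    The 200-sample pulse collapses to a sharp peak at the true delay; the ratio of that mainlobe to the highest range sidelobe is the figure of merit the NLFM waveform design improves beyond the linear FM limit.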

  12. Data Compression Using the Dictionary Approach Algorithm

    DTIC Science & Technology

    1990-12-01

    Compression Technique The LZ77 is an OPM/L data compression scheme suggested by Ziv and Lempel. A slightly modified...June 1984. 12. Witten I. H., Neal R. M. and Cleary J. G., Arithmetic Coding for Data Compression, Communications of the ACM, June 1987. 13. Ziv J. and Lempel A...AD-A242 539 NAVAL POSTGRADUATE SCHOOL Monterey, California DTIC NOV 18 1991 THESIS DATA COMPRESSION USING THE DICTIONARY APPROACH ALGORITHM

  13. C3Winds: A Novel 3D Wind Observing System to Characterize Severe Weather Events

    NASA Astrophysics Data System (ADS)

    Kelly, M. A.; Wu, D. L.; Yee, J. H.; Boldt, J.; Demajistre, R.; Reynolds, E.; Tripoli, G. J.; Oman, L.; Prive, N.; Heidinger, A. K.; Wanzong, S.

    2015-12-01

    The CubeSat Constellation Cloud Winds (C3Winds) is a NASA Earth Venture Instrument (EV-I) concept whose primary objective is to resolve high-resolution 3D dynamic structures of severe wind events. The rapid evolution of severe weather events highlights the need for high-resolution mesoscale wind observations. Yet mesoscale observations of severe weather dynamics are quite rare, especially over the ocean where extratropical and tropical cyclones (ETCs and TCs) can undergo explosive development. Measuring wind velocity at the mesoscale from space remains a great challenge, but is critically needed to understand and improve prediction of severe weather and tropical cyclones. Based on compact visible/IR imagers and a mature stereoscopic technique, C3Winds has the capability to measure high-resolution (~2 km) cloud motion vectors and cloud geometric heights accurately by tracking cloud features from two formation-flying CubeSats separated by 5-15 minutes. Complementary to lidar wind measurements from space, C3Winds will provide the high-resolution wind fields needed for detailed investigations of severe wind events in occluded ETCs, rotational structures inside TC eyewalls, and ozone injections associated with tropopause folding events. Built upon mature imaging technologies and a long history of stereoscopic remote sensing, C3Winds provides an innovative, cost-effective solution to global wind observations with the potential for increased diurnal sampling via a CubeSat constellation.
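
    The feature-tracking step behind cloud motion vectors can be illustrated in a highly simplified form: a cloud "feature" patch from the first image is located in the second image by brute-force template matching, and the pixel shift over the time separation gives a motion vector. The frames, patch size, search radius, and timing here are all invented for the example; real C3Winds processing (geolocation, stereo height retrieval, quality control) is far more involved.

```python
# Toy cloud-motion retrieval by template matching: find where a patch from
# frame 1 reappears in frame 2, then convert the pixel shift to a speed.
# All parameters are illustrative, not C3Winds mission values.
import numpy as np

rng = np.random.default_rng(2)
frame1 = rng.normal(size=(64, 64))
shift = (3, 5)                            # true cloud displacement (dy, dx)
frame2 = np.roll(frame1, shift, axis=(0, 1))

patch = frame1[20:36, 20:36]              # 16x16 "cloud feature" from frame 1

best, best_score = (0, 0), -np.inf
for dy in range(-8, 9):                   # brute-force search window
    for dx in range(-8, 9):
        cand = frame2[20 + dy:36 + dy, 20 + dx:36 + dx]
        score = (patch * cand).sum()      # correlation score
        if score > best_score:
            best, best_score = (dy, dx), score

dt = 600.0                                # assumed 10-minute separation, s
pixel_km = 2.0                            # ~2 km resolution, per the abstract
speed_kms = pixel_km * np.hypot(*best) / dt
```

    The recovered shift equals the imposed one, and dividing the ground distance by the time separation yields the wind-speed estimate; the second CubeSat's offset view is what additionally supplies cloud height in the stereoscopic method.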

  14. A fast 3D region growing approach for CT angiography applications

    NASA Astrophysics Data System (ADS)

    Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang

    2004-05-01

    Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires a large amount of memory, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) applications for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions of each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve the segmentation speed for very large volume 3D CTA images, this algorithm is applied repeatedly to newly updated local cubes. The next new cube can be estimated by checking isolated segmented regions on all 6 faces of the current local cube. This local non-recursive 3D region-growing algorithm is memory-efficient and computation-efficient. Clinical testing of this algorithm on brain CTA shows that this technique can effectively remove the whole skull and most of the bones of the skull base, and reveal the cerebral vascular structures clearly.
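    The row-then-slice merging described above can be sketched in miniature. The fragment below implements the first two steps for a single 2D slice (1D runs per row, then merging runs of adjacent rows with a union-find); the third step, merging across adjacent slices, repeats the same overlap test along the remaining axis. This is an illustrative stand-in, not the published SymRG implementation:

    ```python
    import numpy as np

    def runs_of_row(row, thresh):
        """Step 1: segment one row into 1D runs of above-threshold voxels."""
        runs, start = [], None
        for j, v in enumerate(row):
            if v >= thresh and start is None:
                start = j
            elif v < thresh and start is not None:
                runs.append((start, j - 1)); start = None
        if start is not None:
            runs.append((start, len(row) - 1))
        return runs

    def label_slice(img, thresh):
        """Step 2: single-pass labeling by merging runs of adjacent rows
        with a union-find; returns the number of connected regions."""
        parent = {}
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]; a = parent[a]
            return a
        def union(a, b):
            parent[find(a)] = find(b)
        rows = [runs_of_row(r, thresh) for r in img]
        ids, nid = [], 0
        for i, rr in enumerate(rows):
            cur = []
            for _ in rr:
                parent[nid] = nid
                cur.append(nid); nid += 1
            ids.append(cur)
            if i > 0:
                for a, (s1, e1) in zip(ids[i - 1], rows[i - 1]):
                    for b, (s2, e2) in zip(cur, rr):
                        if s1 <= e2 and s2 <= e1:  # column ranges overlap
                            union(a, b)
        return len({find(k) for k in parent})

    img = np.array([[0, 1, 1, 0, 1],
                    [0, 1, 0, 0, 1],
                    [0, 0, 0, 0, 0],
                    [1, 1, 0, 0, 0]])
    print(label_slice(img, 1))  # → 3 connected regions
    ```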

  15. Nanocubes for real-time exploration of spatiotemporal datasets.

    PubMed

    Lins, Lauro; Klosowski, James T; Scheidegger, Carlos

    2013-12-01

    Consider real-time exploration of large multidimensional spatiotemporal datasets with billions of entries, each defined by a location, a time, and other attributes. Are certain attributes correlated spatially or temporally? Are there trends or outliers in the data? Answering these questions requires aggregation over arbitrary regions of the domain and attributes of the data. Many relational databases implement the well-known data cube aggregation operation, which in a sense precomputes every possible aggregate query over the database. Data cubes are sometimes assumed to take a prohibitively large amount of space, and to consequently require disk storage. In contrast, we show how to construct a data cube that fits in a modern laptop's main memory, even for billions of entries; we call this data structure a nanocube. We present algorithms to compute and query a nanocube, and show how it can be used to generate well-known visual encodings such as heatmaps, histograms, and parallel coordinate plots. When compared to exact visualizations created by scanning an entire dataset, nanocube plots have bounded screen error across a variety of scales, thanks to a hierarchical structure in space and time. We demonstrate the effectiveness of our technique on a variety of real-world datasets, and present memory, timing, and network bandwidth measurements. We find that the timings for the queries in our examples are dominated by network and user-interaction latencies.
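    A data cube, in its naive form, precomputes an aggregate for every combination of "fixed" and "aggregated-over" dimensions. The toy sketch below makes that explicit with a Counter keyed by wildcard tuples; a nanocube stores the same information far more compactly by sharing nodes across spatial and temporal hierarchies. Record fields and dimension names here are hypothetical:

    ```python
    from itertools import combinations
    from collections import Counter

    ALL = "*"  # wildcard meaning "aggregated over this dimension"

    def build_cube(records, ndims):
        """Precompute counts for every subset of fixed dimensions (a toy
        data cube: 2^ndims keys per record, unlike a nanocube's shared tree)."""
        cube = Counter()
        for rec in records:
            for k in range(ndims + 1):
                for keep in combinations(range(ndims), k):
                    key = tuple(rec[d] if d in keep else ALL
                                for d in range(ndims))
                    cube[key] += 1
        return cube

    # records: (region, device, hour) -- illustrative schema
    data = [("west", "ios", 9), ("west", "android", 9),
            ("east", "ios", 10), ("west", "ios", 10)]
    cube = build_cube(data, 3)
    print(cube[("west", ALL, ALL)])  # → 3   (all events in region 'west')
    print(cube[(ALL, "ios", 9)])     # → 1   (ios events at hour 9)
    ```

    Every aggregate query is then a single lookup, which is what makes heatmaps and histograms interactive; the engineering challenge the paper addresses is keeping this structure within main memory.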

  16. SpaceCube Mini

    NASA Technical Reports Server (NTRS)

    Lin, Michael; Petrick, David; Geist, Alessandro; Flatley, Thomas

    2012-01-01

    This version of the SpaceCube will be a full-fledged, onboard space processing system capable of 2500+ MIPS, and featuring a number of plug-and-play gigabit and standard interfaces, all in a condensed 3x3x3 form factor [less than 10 watts and less than 3 lb (approximately 1.4 kg)]. The main processing engine is the Xilinx SIRF radiation-hardened-by-design Virtex-5 FX-130T field-programmable gate array (FPGA). Even as the SpaceCube 2.0 version (currently under test) is being targeted as the platform of choice for a number of the upcoming Earth Science Decadal Survey missions, GSFC has been contacted by customers who wish to see a system that incorporates key features of the version 2.0 architecture in an even smaller form factor. In order to fulfill that need, the SpaceCube Mini is being designed, and will be a very compact and low-power system. A similar flight system with this combination of small size, low power, low cost, adaptability, and extremely high processing power does not otherwise exist, and the SpaceCube Mini will be of tremendous benefit to GSFC and its partners. The SpaceCube Mini will utilize space-grade components. The primary processing engine of the Mini is the Xilinx Virtex-5 SIRF FX-130T radiation-hardened-by-design FPGA for critical flight applications in high-radiation environments. The Mini can also be equipped with a commercial Xilinx Virtex-5 FPGA with integrated PowerPCs for a low-cost, high-power computing platform for use in relatively radiation-benign LEOs (low-Earth orbits). In either case, this version of the SpaceCube will weigh less than 3 pounds (~1.4 kg), conform to the CubeSat form factor (10x10x10 cm), and will be low power (less than 10 watts for typical applications). The SpaceCube Mini will have a radiation-hardened Aeroflex FPGA for configuring and scrubbing the Xilinx FPGA by utilizing the onboard FLASH memory to store the configuration files. 
The FLASH memory will also be used for storing algorithm and application code for the PowerPCs and the Xilinx FPGA. In addition, it will feature high-speed DDR SDRAM (double data rate synchronous dynamic random-access memory) to store the instructions and data of active applications. This version will also feature SATA-II and Gigabit Ethernet interfaces, as well as general-purpose, multi-gigabit interfaces. In addition, the system will have dozens of transceivers that can support LVDS (low-voltage differential signaling), RS-422, or SpaceWire. The SpaceCube Mini includes an I/O card that can be customized to meet the needs of each mission. This version of the SpaceCube will be designed so that multiple Minis can be networked together using SpaceWire, Ethernet, or even a custom protocol, providing scalability. Rigid-Flex technology is being targeted for the construction of the SpaceCube Mini, which will make the extremely compact and low-weight design feasible. The SpaceCube Mini is designed to fit in the compact CubeSat form factor, thus allowing deployment in a new class of missions that the previous SpaceCube versions were not suited for. At the time of this reporting, engineering units should be available in the summer of 2012.

  17. ELaNa - Educational Launch of Nanosatellite Enhance Education Through Space Flight

    NASA Technical Reports Server (NTRS)

    Skrobot, Garrett Lee

    2011-01-01

    One of NASA's missions is to attract and retain students in the science, technology, engineering and mathematics (STEM) disciplines. Creating missions or programs to achieve this important goal helps strengthen NASA and the nation's future work force as well as engage and inspire Americans and the rest of the world. During the last three years, in an attempt to revitalize educational space flight, NASA generated a new and exciting initiative. This initiative, NASA's Educational Launch of Nanosatellite (ELaNa), is now fully operational and producing exciting results. Nanosatellites are small secondary satellite payloads called CubeSats. One of the challenges that the CubeSat community faced over the past few years was the lack of rides into space. Students were building CubeSats, but the satellites sat on the shelf until an opportunity arose. In some cases, these opportunities never developed, and so the CubeSats never made it to orbit. The ELaNa initiative is changing this by providing sustainable launch opportunities for educational CubeSats. Across America, these CubeSats are currently being built by students from high school all the way through graduate school. Now students know that if they build their CubeSat, submit their proposal, and are selected for an ELaNa mission, they will have the opportunity to fly their satellite. ELaNa missions are the first educational cargo to be carried on expendable launch vehicles (ELVs) for NASA's Launch Services Program (LSP). The first ELaNa CubeSats were slated to begin their journey to orbit in February 2011 with NASA's Glory mission. Due to an anomaly with the launch vehicle, ELaNa II and Glory failed to reach orbit. This first ELaNa mission was composed of three 1U CubeSats built by students at Montana State University (Explorer Prime Flight 1), the University of Colorado (HERMES), and Kentucky Space, a consortium of state universities (KySat). 
The interface between the launch vehicle and the CubeSat, the Poly-Picosatellite Orbital Deployer (P-POD), was developed and built by students at California Polytechnic State University (Cal Poly). Integrating a P-POD on a NASA ELV was not an easy task. The creation of new processes and requirements, as well as numerous reviews and approvals, was necessary within NASA before the first ELaNa mission could be attached to a NASA launch vehicle (LV). One of the key objectives placed on an ELaNa mission is that the CubeSat and P-POD do not increase the baseline risk to the primary mission and launch vehicle. The ELaNa missions achieve this objective by placing a rigorous management and engineering process on both the LV and CubeSat teams. So, what is the future of ELaNa? Currently there are 16 P-POD missions manifested across four launch vehicles to support educational CubeSats selected under the NASA CubeSat Initiative. From this initiative, a rigorous selection process produced 22 student CubeSat missions that are scheduled to fly before the end of 2012. For the initiative to continue, organizations need to submit proposals to the annual CubeSat initiative call so they have the opportunity to be manifested and launched.

  18. Three-dimensional construction and omni-directional rolling analysis of a novel frame-like lattice modular robot

    NASA Astrophysics Data System (ADS)

    Ding, Wan; Wu, Jianxu; Yao, Yan'an

    2015-07-01

    Lattice modular robots employ diverse actuation methods, such as electric telescopic rods, gear racks, magnets, robot arms, etc. Research on lattice modular robots has mainly focused on hardware descriptions and reconfiguration algorithms. Meanwhile, their design architectures and actuation methods exhibit slow telescopic and moving speeds, relatively low actuation-force-to-weight ratios, and no internal space to carry objects. To improve the mechanical performance and reveal the binary essence of locomotion and reconfiguration in lattice modular robots, a novel cube-shaped, frame-like, pneumatic-based reconfigurable robot module called the pneumatic expandable cube (PE-Cube) is proposed. The three-dimensional (3D) expanding construction and omni-directional rolling analysis of the constructed robots are the main focuses. The PE-Cube, with three degrees of freedom (DoFs), is assembled by replacing the twelve edges of a cube with pneumatic cylinders. The proposed symmetric construction condition gives the constructed robots the same properties in each supporting state, and a binary control strategy cooperating with binary actuators (pneumatic cylinders) is directly adopted to control the PE-Cube. Taking an eight-module PE-Cube construction as an example, its dynamic rolling simulation, static rolling condition, and turning gait are illustrated and discussed. To verify the telescopic synchronization, response speed, locomotion feasibility, and repeatability and reliability of the hardware system, an experimental pneumatic robotic system was built and rolling and turning experiments on the eight-module construction were carried out. As an extension, the locomotion feasibility of a thirty-two-module construction is analyzed and proved, including dynamic rolling simulation, static rolling condition, and dynamic analysis of the free tipping process. 
The proposed PE-Cube module, construction method, and locomotion analysis enrich the family of lattice modular robots and provide guidance for their design.

  19. PolarCube: A High Resolution Passive Microwave Satellite for Sounding and Imaging at 118 GHz

    NASA Astrophysics Data System (ADS)

    Weaver, R. L.; Gallaher, D. W.; Gasiewski, A. J.; Sanders, B.; Periasamy, L.; Hwang, K.; Alvarenga, G.; Hickey, A. M.

    2013-12-01

    PolarCube is a 3U CubeSat hosting an eight-channel passive microwave spectrometer operating at the 118.7503 GHz oxygen resonance that is currently in development. The project has an anticipated launch date in early 2015. It is currently being designed to operate for approximately 12 months on orbit to provide the first global 118-GHz spectral imagery of the Earth over a full seasonal cycle and to sound Arctic vertical temperature structure. The principles used by PolarCube for temperature sounding are well established in a number of peer-reviewed papers going back more than two decades, although the potential for sounding from a CubeSat has never before been demonstrated in space. The PolarCube channels are selected to probe atmospheric emission over a range of vertical levels from the surface to the lower stratosphere. This capability has been available operationally for over three decades, but at lower frequencies and higher altitudes that do not provide the spatial resolution that will be achieved by PolarCube. While the NASA JPSS ATMS satellite sensor provides global coverage at ~32 km resolution, PolarCube will improve on this resolution by a factor of two, thus facilitating the primary science goal of determining sea ice concentration and extent while at the same time collecting profile data on atmospheric temperature. Additionally, we seek to correlate freeze-thaw line data from SMAP with our near-simultaneously collected atmospheric temperature data. In addition to polar science, PolarCube will provide a first demonstration of a very low cost passive microwave sounder that, if operated in a fleet configuration, would have the potential to fulfill the goals of the Precipitation Atmospheric Temperature and Humidity (PATH) mission, as defined in the NRC Decadal Survey. [Figure: PolarCube 118-GHz passive microwave spectrometer in deployed configuration]

  20. Neutrino Astronomy with IceCube

    NASA Astrophysics Data System (ADS)

    Meagher, Kevin J.

    The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope located at the Geographic South Pole. Cherenkov radiation emitted by charged secondary particles from neutrino interactions is observed by IceCube using an array of 5160 photomultiplier tubes embedded between a depth of 1.5 km to 2.5 km in the Antarctic glacial ice. The detection of astrophysical neutrinos is a primary goal of IceCube and has now been realized with the discovery of a diffuse, high-energy flux consisting of neutrino events from tens of TeV up to several PeV. Many analyses have been performed to identify the source of these neutrinos: correlations with active galactic nuclei, gamma-ray bursts, and the galactic plane. IceCube also conducts multi-messenger campaigns to alert other observatories of possible neutrino transients in real-time. However, the source of these neutrinos remains elusive as no corresponding electromagnetic counterparts have been identified. This proceeding will give an overview of the detection principles of IceCube, the properties of the observed astrophysical neutrinos, the search for corresponding sources (including real-time searches), and plans for a next-generation neutrino detector, IceCube-Gen2.

  1. Laser geodynamic satellite thermal/optical/vibrational analysis and testing, volume 2, book 2. [cubes and far fields

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The main tasks described involved an interferometric evaluation of several cubes, a prediction of their dihedral angles, a comparison of these predictions with independent measurements, a prediction and comparison of far field performance, recommendations as to revised dihedral angles and a subsequent analysis of cubes which were reworked to confirm the recommendations. A tolerance study and theoretical evaluation of several cubes was also performed to aid in understanding the results. The far field characteristics evaluated included polarization effects and treated both intensity distribution and encircled energy data. The energy in the 13.2 - 16.9 arc-sec annular region was tabulated as an indicator of performance sensitivity. The results are provided in viewgraph form, and show the average dihedral angle of an original set of test cubes to have been 1.8 arc-sec with an average far field annulus diameter of 18 arc-sec. Since the peak energy in the 13.2 - 16.9 arc-sec annulus was found to occur for a 1.35 arc-sec cube, and since cube tolerances were shown to increase the annulus diameter slightly, a nominal dihedral angle of 1.25 arc-sec was recommended.

  2. Dynamic Deformation Behavior of Soft Material Using Shpb Technique and Pulse Shaper

    NASA Astrophysics Data System (ADS)

    Lee, Ouk Sub; Cho, Kyu Sang; Kim, Sung Hyun; Han, Yong Hwan

    This paper presents a modified Split Hopkinson Pressure Bar (SHPB) technique to obtain compressive stress-strain data for NBR rubber materials. An experimental technique based on a modification of the conventional SHPB has been developed for measuring the compressive stress-strain responses of materials with low mechanical impedance and low compressive strength, such as rubbers and polymeric materials. This paper uses an aluminum pressure bar to achieve a closer impedance match between the pressure bar and the specimen materials. In addition, a pulse shaper is utilized to lengthen the rise time of the incident pulse to ensure dynamic stress equilibrium and homogeneous deformation of the NBR rubber materials. It is found that the modified technique can determine the dynamic deformation behavior of rubbers more accurately.
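    For context, the classical one-wave SHPB reduction turns the measured reflected and transmitted strain pulses into a specimen stress-strain curve. The sketch below applies the textbook equations to synthetic pulses; it is not the authors' modified aluminum-bar setup, and all numbers are illustrative:

    ```python
    import numpy as np

    def shpb_one_wave(t, eps_r, eps_t, E_bar, A_bar, A_spec, L_spec, c0):
        """Classical one-wave SHPB reduction (textbook equations):
          strain rate:  d(eps)/dt = -2*c0*eps_r(t)/L_spec
          strain:       time integral of the strain rate (trapezoid rule)
          stress:       E_bar*(A_bar/A_spec)*eps_t(t)
        """
        strain_rate = -2.0 * c0 * eps_r / L_spec
        strain = np.concatenate(([0.0], np.cumsum(
            0.5 * (strain_rate[1:] + strain_rate[:-1]) * np.diff(t))))
        stress = E_bar * (A_bar / A_spec) * eps_t
        return strain, stress

    # synthetic half-sine pulses just to exercise the formulas
    t = np.linspace(0.0, 100e-6, 200)                # s
    eps_r = -0.002 * np.sin(np.pi * t / t[-1])       # reflected pulse
    eps_t = 0.001 * np.sin(np.pi * t / t[-1])        # transmitted pulse
    strain, stress = shpb_one_wave(t, eps_r, eps_t,
                                   E_bar=70e9, A_bar=7.9e-4,
                                   A_spec=7.9e-5, L_spec=5e-3, c0=5000.0)
    print(f"peak strain {strain.max():.3f}, peak stress {stress.max()/1e6:.0f} MPa")
    ```

    The pulse shaper discussed in the abstract lengthens the incident rise time precisely so that the stress-equilibrium assumption underlying these equations holds for soft specimens.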

  3. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.
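    The benefit of predictive coding is easy to demonstrate: predicting each pel from its neighbor concentrates the residuals near zero, lowering the entropy that a Huffman or arithmetic coder must pay. A small illustration on a synthetic correlated signal (not one of the survey's test images):

    ```python
    import numpy as np

    def entropy_bits(values):
        """Zeroth-order entropy (bits/symbol) of a sequence."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # smooth synthetic 8-bit signal: neighboring pels are highly correlated
    rng = np.random.default_rng(1)
    walk = np.cumsum(rng.integers(-2, 3, size=65536))
    img = ((walk - walk.min()) * 255 //
           max(1, walk.max() - walk.min())).astype(np.uint8)

    residual = np.diff(img.astype(np.int16))   # previous-pel predictor
    print(f"raw:      {entropy_bits(img):.2f} bits/pel")
    print(f"residual: {entropy_bits(residual):.2f} bits/pel")
    # the residuals cluster near zero, so an entropy coder achieves a much
    # better lossless ratio on them than on the raw pel values
    ```

    The same mechanism explains the survey's observation about 12-bit radiological images: higher pel correlation means smaller residuals and hence better ratios.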

  4. CUBE: Information-optimized parallel cosmological N-body simulation code

    NASA Astrophysics Data System (ADS)

    Yu, Hao-Ran; Pen, Ue-Li; Wang, Xin

    2018-05-01

    CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can be as low as 6 bytes per particle. A particle-pairwise (PP) force, cosmological neutrinos, and a spherical overdensity (SO) halo finder are included.
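    A footprint of 6 bytes per particle is consistent with storing each coordinate as a 2-byte fixed-point offset inside the particle's mesh cell. The sketch below is my reconstruction of that idea, not CUBE's actual Coarray Fortran code; the box size, cell count, and the explicit cell-index array are illustrative (in a cell-ordered layout the cell index is implicit):

    ```python
    import numpy as np

    CELLS = 64     # mesh cells per dimension (illustrative)
    BOX = 100.0    # box size, e.g. Mpc/h (illustrative)

    def compress(pos):
        """Quantize each coordinate to a 16-bit offset inside its mesh cell:
        3 coordinates x 2 bytes = 6 bytes per particle."""
        cell_size = BOX / CELLS
        cell = np.floor(pos / cell_size).astype(np.int32)
        frac = pos / cell_size - cell               # fractional position in [0, 1)
        offset = (frac * 65536).astype(np.uint16)   # truncate to 16 bits
        return cell, offset

    def decompress(cell, offset):
        """Reconstruct at the midpoint of each quantization bin."""
        cell_size = BOX / CELLS
        return (cell + (offset.astype(np.float64) + 0.5) / 65536) * cell_size

    rng = np.random.default_rng(0)
    pos = rng.random((1000, 3)) * BOX
    cell, off = compress(pos)
    err = np.abs(decompress(cell, off) - pos).max()
    print(err)  # bounded by half a quantum: (BOX/CELLS)/65536/2 ≈ 1.2e-5
    ```

    The positional error is a tiny fraction of the cell size, which is why such fixed-point storage is acceptable for particle-mesh codes.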

  5. Evaluation of the Impact of an Additive Manufacturing Enhanced CubeSat Architecture on the CubeSat Development Process

    DTIC Science & Technology

    2016-09-15

    Investigative Questions This research will quantitatively address the impact of proposed benefits of a 3D printed satellite architecture on the...subsystems of a CubeSat. The objective of this research is to bring a quantitative analysis to the discussion of whether a fully 3D printed satellite...manufacturers to quantitatively address what impact the architecture would have on the subsystems of a CubeSat. Summary of Research Gap, Research Questions, and

  6. Force Limited Vibration Testing and Subsequent Redesign of the Naval Postgraduate School CubeSat Launcher

    DTIC Science & Technology

    2014-06-01

    release is controlled by a non-explosive actuator (NEA). Once the NEA is actuated, it releases the P-POD door, which springs open due to torsion ...deemed to be undesirable to OSL as it limited flexibility in final CubeSat position choices on NPSCuL. 24 Building on the lessons learned from the...OUTSat mission that included maintaining flexibility of CubeSat positions on NPSCuL, it was decided that the option to proto-qualify a CubeSat on the

  7. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    NASA Astrophysics Data System (ADS)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes: an AVIRIS image of Cuprite, Nevada, and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
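    As a much simpler stand-in for the Split Bregman TV solver used in the paper, the following sketch shows the core compressive sensing inversion on a 1D sparse signal, using ISTA for the l1-regularized least-squares problem. Matrix sizes and the regularization weight are illustrative:

    ```python
    import numpy as np

    def ista(A, y, lam=0.01, steps=2000):
        """Iterative shrinkage-thresholding for min 0.5||Ax-y||^2 + lam*||x||_1
        (a simple stand-in for the Split Bregman TV solver in the paper)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            g = A.T @ (A @ x - y)              # gradient of the data term
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 5                       # ambient dim, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # random CS sensing matrix
    y = A @ x_true                             # compressive measurements
    x_hat = ista(A, y)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small error
    ```

    The HSI problem replaces the l1 penalty with spatial TV plus spectral smoothing and replaces A with the CASSI forward models, but the measurement-versus-prior structure of the inversion is the same.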

  8. Design of a motion JPEG (M/JPEG) adapter card

    NASA Astrophysics Data System (ADS)

    Lee, D. H.; Sudharsanan, Subramania I.

    1994-05-01

    In this paper we describe the design of a high-performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface. Some critical design points that enhance the overall performance of M/JPEG systems are pointed out. The control of the adapter card is achieved by interrupt-driven software that runs under DOS. The software performs a variety of tasks that include change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.

  9. A comparison of the lattice discrete particle method to the finite-element method and the K&C material model for simulating the static and dynamic response of concrete.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jovanca J.; Bishop, Joseph E.

    2013-11-01

    This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in the summer of 2012 with the aid of mentor Joe Bishop. The projects were a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra high performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.

  10. Rice husk (RH) as additive in fly ash based geopolymer mortar

    NASA Astrophysics Data System (ADS)

    Yahya, Zarina; Razak, Rafiza Abd; Abdullah, Mohd Mustafa Al Bakri; Rahim, Mohd Azrin Adzhar; Nasri, Armia

    2017-09-01

    In recent years, Ordinary Portland Cement (OPC) concrete has been widely used as the main binder in the construction industry, leading to depletion of natural resources in order to manufacture large amounts of OPC. The introduction of geopolymer as an alternative binder, which is more environmentally friendly owing to lower carbon dioxide (CO2) emissions and its use of waste materials, can overcome these problems. Rice husk (RH) is an agricultural residue which can be found easily and in large quantities due to paddy production in Malaysia, and it is usually disposed of in landfills. This paper investigated the effect of rice husk (RH) content on the strength development of fly ash based geopolymer mortar. The fly ash was replaced with RH at 0%, 5%, 10%, 15% and 20%, with sodium silicate and sodium hydroxide used as the alkaline activator. A total of 45 cubes were cast and their compressive strength, density and water absorption were evaluated at 1, 3, and 7 days. The results showed that compressive strength decreased as the percentage of RH increased. At 5% replacement of RH, the maximum strength of 17.1 MPa was recorded at day 7. The geopolymer had the lowest rate of water absorption (1.69%) at 20% replacement of RH. The density of the samples classifies them as lightweight geopolymer concrete.

  11. In vivo optical elastography: stress and strain imaging of human skin lesions

    NASA Astrophysics Data System (ADS)

    Es'haghian, Shaghayegh; Gong, Peijun; Kennedy, Kelsey M.; Wijesinghe, Philip; Sampson, David D.; McLaughlin, Robert A.; Kennedy, Brendan F.

    2015-03-01

    Probing the mechanical properties of skin at high resolution could aid in the assessment of skin pathologies by, for example, detecting the extent of cancerous skin lesions and assessing pathology in burn scars. Here, we present two elastography techniques based on optical coherence tomography (OCT) to probe the local mechanical properties of skin. The first technique, optical palpation, is a high-resolution tactile imaging technique, which uses a compliant silicone layer positioned on the tissue surface to measure spatially-resolved stress imparted by compressive loading. We assess the performance of optical palpation using a handheld imaging probe on a skin-mimicking phantom, and demonstrate its use on human skin. The second technique, phase-sensitive compression optical coherence elastography (OCE), is a strain imaging technique that maps depth-resolved mechanical variations within skin. We show preliminary results of in vivo phase-sensitive compression OCE on a human skin lesion.
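    The optical palpation principle, in its simplest linear-elastic form, is Hooke's law applied per pixel: the OCT-measured thickness of the compliant layer gives the local strain, and the layer's modulus converts that to stress. The sketch below shows only this idealized version (the actual method characterizes the layer's nonlinear stress-strain response); the modulus and thicknesses are assumed values:

    ```python
    import numpy as np

    E_LAYER = 20e3   # Pa, Young's modulus of the silicone layer (assumed)
    D0 = 0.5e-3      # m, unloaded layer thickness (assumed)

    def stress_map(thickness):
        """Optical palpation, linear-elastic sketch: local stress from the
        compressed thickness of the compliant layer via Hooke's law."""
        strain = (D0 - thickness) / D0       # compressive strain of the layer
        return E_LAYER * strain              # Pa, per pixel

    # the layer is squeezed thinner over a stiff lesion, raising local stress
    thickness = np.full((4, 4), 0.4e-3)
    thickness[1:3, 1:3] = 0.3e-3
    print(stress_map(thickness) / 1e3)       # kPa; the high-stress patch marks the lesion
    ```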

  12. Ground Demonstration on the Autonomous Docking of Two 3U CubeSats Using a Novel Permanent-Magnet Docking Mechanism

    NASA Technical Reports Server (NTRS)

    Pei, Jing; Murchison, Luke; BenShabat, Adam; Stewart, Victor; Rosenthal, James; Follman, Jacob; Branchy, Mark; Sellers, Drew; Elandt, Ryan; Elliott, Sawyer; hide

    2017-01-01

    Small spacecraft autonomous rendezvous and docking is an essential technology for future space structure assembly missions. A novel magnetic capture and latching mechanism is analyzed that allows for docking of two CubeSats without precise sensors and actuators. The proposed magnetic docking hardware not only provides the means to latch the CubeSats, but also significantly increases the likelihood of successful docking in the presence of relative attitude and position errors. The simplicity of the design allows it to be implemented on many CubeSat rendezvous missions. A CubeSat 3-DOF ground demonstration effort is on-going at NASA Langley Research Center that enables hardware-in-the-loop testing of the autonomous approach and docking of a follower CubeSat to an identical leader CubeSat. The test setup consists of a 3 meter by 4 meter granite table and two nearly frictionless air bearing systems that support the two CubeSats. Four cold-gas on-off thrusters are used to translate the follower towards the leader, while a single reaction wheel is used to control the attitude of each CubeSat. An innovative modified pseudo-inverse control allocation scheme was developed to address interactions between control effectors. Because the docking procedure requires relatively high actuator precision, a novel minimum impulse bit mitigation algorithm was developed to minimize the undesirable deadzone effects of the thrusters. Simulation of the ground demonstration shows that the Guidance, Navigation, and Control system along with the docking subsystem leads to successful docking under 3-sigma dispersions for all key system parameters. Extensive simulation and ground testing will provide sufficient confidence that the proposed docking mechanism, along with the chosen suite of sensors and actuators, will perform successful docking in the space environment.
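    Standard pseudo-inverse control allocation maps a desired body wrench to actuator commands through the effectiveness matrix. The abstract does not spell out the modified scheme, so the sketch below shows only the standard version with a crude on-off quantization for bang-bang thrusters; the thruster geometry and threshold are hypothetical:

    ```python
    import numpy as np

    # Effectiveness matrix B: each column maps one thruster's unit firing to
    # (Fx, Fy, torque). The geometry here is hypothetical, not the NASA testbed's.
    B = np.array([[ 1.0, -1.0,  0.0,  0.0],
                  [ 0.0,  0.0,  1.0, -1.0],
                  [ 0.1,  0.1, -0.1, -0.1]])

    def allocate(tau_des, on_threshold=0.3):
        """Pseudo-inverse allocation u = pinv(B) @ tau, then quantized for
        on-off thrusters (a stand-in for the paper's modified scheme, which
        additionally handles effector interactions and minimum impulse bits)."""
        u = np.linalg.pinv(B) @ tau_des          # least-norm continuous solution
        u = np.clip(u, 0.0, 1.0)                 # thrusters cannot push backward
        return (u > on_threshold).astype(float)  # bang-bang quantization

    tau = np.array([0.8, 0.0, 0.0])              # desired wrench: +x force only
    u = allocate(tau)
    print(u, B @ u)  # which thrusters fire, and the wrench actually achieved
    ```

    Note that clipping and quantization change the achieved wrench (here an unwanted residual torque appears), which is exactly the kind of effector interaction and deadzone effect the paper's modified allocation and minimum-impulse-bit logic are designed to mitigate.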

  13. Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.

    2017-06-01

    Multiple reverberation compression can achieve higher pressure, higher temperature, but lower entropy. It is available to provide an important validation for the elaborate and wider planetary models and simulate the inertial confinement fusion capsule implosion process. In the work, we have developed the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initial dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium in the pressure-density-temperature (P-ρ -T) range of 1 -150 GPa , 0.1 -1.1 g c m-3 , and 4600-24 000 K were measured. The optical radiations emanating from the WDM helium were recorded, and the particle velocity profiles detecting from the sample/window interface were obtained successfully up to 10 times compression. The optical radiation results imply that dense He has become rather opaque after the 2nd compression with a density of about 0.3 g c m-3 and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed by the particle velocity measurements. The multiple compression technique could efficiently enhanced the density and the compressibility, and our multiple compression ratios (ηi=ρi/ρ0,i =1 -10 ) of helium are greatly improved from 3.5 to 43 based on initial precompressed density (ρ0) . For the relative compression ratio (ηi'=ρi/ρi -1) , it increases with pressure in the lower density regime and reversely decreases in the higher density regime, and a turning point occurs at the 3rd and 4th compression states under the different loading conditions. 
    This nonmonotonic evolution of the compression is controlled by two factors: the excitation of internal degrees of freedom increases the compressibility, while the repulsive interactions between the particles decrease it at the onset of electron excitation and ionization. In the P-ρ-T contour combining the experiments and the calculations, our multiple compression states from insulating to semiconducting fluid (from transparent to opaque fluid) are illustrated. Our results provide an elaborate validation of EOS models and have applications to planetary and stellar opaque atmospheres.
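
The two ratio definitions above can be made concrete with a short numerical sketch; the density sequence below is illustrative only, not the measured data.

```python
# Multiple compression ratio:  eta_i  = rho_i / rho_0
# Relative compression ratio:  eta'_i = rho_i / rho_{i-1}
# Illustrative densities in g/cm^3; rho[0] is the initial precompressed density.
rho = [0.1, 0.15, 0.25, 0.45, 0.7, 0.9]

eta = [r / rho[0] for r in rho[1:]]                          # eta_1 .. eta_5
eta_rel = [rho[i] / rho[i - 1] for i in range(1, len(rho))]  # eta'_1 .. eta'_5

# eta grows monotonically, while eta_rel first rises and then falls,
# reproducing the kind of turning point described in the abstract.
```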

  14. Image quality measures to assess hyperspectral compression techniques

    NASA Astrophysics Data System (ADS)

    Lurie, Joan B.; Evans, Bruce W.; Ringer, Brian; Yeates, Mathew

    1994-12-01

    The term 'multispectral' is used to describe imagery with anywhere from three to about 20 bands of data. The images acquired by Landsat and similar earth sensing satellites, including the French SPOT platform, are typical examples of multispectral data sets. Applications range from crop observation and yield estimation, to forestry, to sensing of the environment. The wave bands typically range from the visible to thermal infrared and are fractions of a micron wide. They may or may not be contiguous. Thus each pixel will have several spectral intensities associated with it, but detailed spectra are not obtained. The term 'hyperspectral' is typically used for spectral data encompassing hundreds of samples of a spectrum. Hyperspectral, electro-optical sensors typically operate in the visible and near infrared bands. Their characteristic property is the ability to resolve a large number (typically hundreds) of contiguous spectral bands, thus producing a detailed profile of the electromagnetic spectrum. Like multispectral sensors, recently developed hyperspectral sensors are often also imaging sensors, measuring spectra over a two-dimensional spatial array of picture elements, or pixels. The resulting data is thus inherently three dimensional - an array of samples in which two dimensions correspond to spatial position and the third to wavelength. The data sets, commonly referred to as image cubes or datacubes (although technically they are often rectangular solids), are very rich in information but quickly become unwieldy in size, generating formidable torrents of data. Both spaceborne and airborne hyperspectral cameras exist and are in use today. The data is unique in its ability to provide high spatial and spectral resolution simultaneously, and shows great promise in both military and civilian applications. A data analysis system has been built at TRW under a series of Internal Research and Development projects.
    This development has been prompted by business opportunities, by the series of instruments built at TRW, and by the availability of data from other instruments. The processing system has been used to process data produced by TRW sensors and by other instruments. Figure 1 provides an overview of the TRW hyperspectral collection, data handling, and exploitation capability. The Analysis and Exploitation functions deal with the digitized image cubes. The analysis system was designed to handle various types of data, but the emphasis was on the data acquired by the TRW instruments.
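
The three-dimensional "image cube" layout described above can be sketched as a simple array; the sizes here are arbitrary examples, not those of any particular sensor.

```python
import numpy as np

rows, cols, bands = 64, 64, 200          # illustrative sizes only
cube = np.zeros((rows, cols, bands))     # two spatial axes plus one spectral axis

spectrum = cube[10, 20, :]               # detailed spectrum of a single pixel
band_image = cube[:, :, 150]             # one spectral band viewed as a 2-D image
```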

  15. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimizing the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years. We note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years, neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper, we use the backward error propagation (BEP) algorithm to quickly obtain the initial weights, which are then used to speed up the training time required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.
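
As an illustration of the vector-quantization setting in which SOFM training operates, here is a minimal sketch: image blocks are mapped to a small codebook learned with a Kohonen-style winner-plus-neighbours update. The block size, codebook size, and learning schedule are assumptions for illustration, not the parameters of the BEP-SOFM method.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32)).astype(float)  # toy 8-bit image

# Split into non-overlapping 4x4 blocks -> 64 training vectors of length 16.
blocks = image.reshape(8, 4, 8, 4).transpose(0, 2, 1, 3).reshape(-1, 16)

K = 16  # codebook size: each block index then needs only 4 bits
codebook = blocks[rng.choice(len(blocks), K, replace=False)].copy()

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                     # decaying learning rate
    for v in blocks:
        winner = int(np.argmin(((codebook - v) ** 2).sum(axis=1)))
        for j in (winner - 1, winner, winner + 1):  # 1-D SOFM neighbourhood
            if 0 <= j < K:
                codebook[j] += lr * (v - codebook[j])

# Encode each 16-pixel block (128 bits at 8 bits/pixel) as one 4-bit index.
indices = np.array([int(np.argmin(((codebook - v) ** 2).sum(axis=1))) for v in blocks])
reconstructed = codebook[indices]                   # lossy reconstruction of the blocks
```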

  16. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x over the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.

  17. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
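
The general flavour of adaptive predictive lossless coding can be sketched in a few lines. This toy example is an assumption-laden simplification, not JPL's Fast Lossless algorithm: each sample is predicted from its predecessor with a sign-sign-LMS-adapted weight, and the integer residual is zigzag-mapped for a subsequent entropy coder (omitted here).

```python
def sgn(x):
    return (x > 0) - (x < 0)

def compress(samples, mu=0.01):
    """Return non-negative mapped prediction residuals (entropy coding omitted)."""
    w, prev, residuals = 1.0, 0, []
    for s in samples:
        e = s - int(round(w * prev))                       # integer residual
        residuals.append(2 * e if e >= 0 else -2 * e - 1)  # zigzag mapping
        w += mu * sgn(e) * sgn(prev)                       # sign-sign LMS adaptation
        prev = s
    return residuals

def decompress(residuals, mu=0.01):
    """Exactly invert compress() by replaying the same predictor updates."""
    w, prev, out = 1.0, 0, []
    for r in residuals:
        e = r // 2 if r % 2 == 0 else -(r + 1) // 2        # undo zigzag mapping
        s = int(round(w * prev)) + e
        out.append(s)
        w += mu * sgn(e) * sgn(prev)
        prev = s
    return out
```

Because the decoder replays the same weight updates as the encoder, no side information or training data is needed and the round trip is exact.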

  18. File Compression and Expansion of the Genetic Code by the use of the Yin/Yang Directions to find its Sphered Cube

    PubMed Central

    Castro-Chavez, Fernando

    2014-01-01

    Objective The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient “Book of Changes” or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. Methods The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64-hexagrams of the ancient I Ching, as if they were the 64-codons or words of the genetic code. Results In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids, to the directionality of their resulting blocks of codons grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from Humans versus Neanderthal, as well as to Negadi’s work on the importance of the number 384 within the genetic code. 
    Conclusions Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid displayed the two long (binary number one), older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger, broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the genetic code and its plasticity was helpful for comparing compatible sequences (human compatible to human versus Neanderthal compatible to Neanderthal), while further exploring the wondrous biodiversity of nature for educational purposes. PMID:25340175

  19. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of the data stream, from the sensor downlink to electronic delivery of browse data products, are explored. The factors influencing the design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  20. A Comparison of LBG and ADPCM Speech Compression Techniques

    NASA Astrophysics Data System (ADS)

    Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.

    Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. In all speech there is a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. We implemented the methods using MATLAB 7.0. The methods used in this study gave good results and performance in compressing speech, and listening tests showed that efficient, high-quality coding is achieved.

  1. Cu₂O template synthesis of high-performance PtCu alloy yolk-shell cube catalysts for direct methanol fuel cells.

    PubMed

    Ye, Sheng-Hua; He, Xu-Jun; Ding, Liang-Xin; Pan, Zheng-Wei; Tong, Ye-Xiang; Wu, Mingmei; Li, Gao-Ren

    2014-10-21

    Novel PtCu alloy yolk-shell cubes were fabricated via disproportionation and displacement reactions in Cu₂O yolk-shell cubes, and they exhibit significantly improved catalytic activity and durability for methanol electrooxidation.

  2. First Image from MarCO-B

    NASA Image and Video Library

    2018-05-15

    The first image captured by one of NASA's Mars Cube One (MarCO) CubeSats. The image, which shows both the CubeSat's unfolded high-gain antenna at right and the Earth and its moon in the center, was acquired by MarCO-B on May 9. MarCO is a pair of small spacecraft accompanying NASA's InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) lander. Together, MarCO-A and MarCO-B are the first CubeSats ever sent to deep space. InSight is the first mission ever to explore Mars' deep interior. If the MarCO CubeSats make the entire journey to Mars, they will attempt to relay data about InSight back to Earth as the lander enters the Martian atmosphere and lands. The MarCO CubeSats will not collect any science data; they are intended purely as a technology demonstration and could serve as a pathfinder for future CubeSat missions. An annotated version is available at https://photojournal.jpl.nasa.gov/catalog/PIA22323

  3. An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform

    NASA Astrophysics Data System (ADS)

    Khurana, Mehak; Singh, Hukum

    2017-09-01

    To enhance the security of the system and to protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid approach of Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT), which adds non-linearity by including cube and cube-root operations in the encryption and decryption paths, respectively. In this cryptosystem, random phase masks are used as encryption keys, phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and the cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant against standard attacks. The robustness of the proposed cryptosystem has been analysed and verified on the basis of various parameters by simulation in MATLAB 7.9.0 (R2008a). Experimental results are provided to highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.

  4. Monosodium glutamate in chicken and beef stock cubes using high-performance liquid chromatography.

    PubMed

    Demirhan, Buket Er; Demirhan, Burak; Sönmez, Ceren; Torul, Hilal; Tamer, Uğur; Yentür, Gülderen

    2015-01-01

    In this survey, monosodium glutamate (MSG) levels in chicken and beef stock cube samples were determined. A total of 122 stock cube samples (from brands A, B, C and D) were collected from local markets in Ankara, Turkey. High-performance liquid chromatography with diode array detection (HPLC-DAD) was used for quantitative MSG determination. Mean MSG levels (±SE) in samples of brands A, B, C and D were 14.6 ± 0.2 g kg⁻¹, 11.9 ± 0.3 g kg⁻¹, 9.7 ± 0.1 g kg⁻¹ and 7.2 ± 0.1 g kg⁻¹, respectively. Differences between the mean levels of the brands were significant. Also, mean levels in chicken stock cube samples were lower than in beef stock cubes. Maximum limits for MSG in stock cubes are not specified in the Turkish Food Codex (TFC). Generally, the limit for MSG in foods (with some exceptions) is established as 10 g kg⁻¹ (individually or in combination).

  5. Ka-Band Parabolic Deployable Antenna (KaPDA) Enabling High Speed Data Communication for CubeSats

    NASA Technical Reports Server (NTRS)

    Sauder, Jonathan F.; Chahat, Nacer; Hodges, Richard; Thomson, Mark W.; Rahmat-Samii, Yahya

    2015-01-01

    CubeSats are at a very exciting point, as their mission capabilities and launch opportunities are increasing. But as instruments become more advanced and operational distances between CubeSats and Earth increase, communication data rate becomes a mission-limiting factor. Improving data rate has become critical enough for NASA to sponsor the Cube Quest Centennial Challenge, where one of the key metrics is transmitting as much data as possible from the Moon and beyond. Currently, many CubeSats communicate on UHF bands, and those with high-data-rate capabilities use S-band or X-band patch antennas. The CubeSat Aeneas, launched in September 2012, pushed the envelope with a half-meter S-band dish that could achieve 100x the data rate of patch antennas. A half-meter parabolic antenna operating at Ka-band would increase data rates by over 100x that of the Aeneas antenna and 10,000x that of X-band patch antennas.

  6. Neutrino astronomy at the South Pole: Latest results from the IceCube neutrino observatory and its future development

    NASA Astrophysics Data System (ADS)

    Toscano, S.; IceCube Collaboration

    2017-12-01

    The IceCube Neutrino Observatory is a cubic-kilometer neutrino telescope located at the geographic South Pole. Buried deep in the Antarctic ice sheet, an array of 5160 Digital Optical Modules (DOMs) captures the Cherenkov light emitted by relativistic particles generated in neutrino interactions. The main goal of IceCube is the detection of astrophysical neutrinos. In 2013 the IceCube neutrino telescope discovered a diffuse flux of high-energy neutrino events of cosmic origin, with energies ranging from tens of TeV up to a few PeV. Since then, different analyses have confirmed the discovery and searched for possible correlations with astrophysical sources. However, the source of these neutrinos remains a mystery, since no counterparts have been identified yet. In this contribution we give an overview of the detection principles of IceCube, the most recent results, and the plans for a next-generation neutrino detector, dubbed IceCube-Gen2.

  7. Application of Compressive Sensing to Gravitational Microlensing Data and Implications for Miniaturized Space Observatories

    NASA Technical Reports Server (NTRS)

    Korde-Patel, Asmita (Inventor); Barry, Richard K.; Mohsenin, Tinoosh

    2016-01-01

    Compressive Sensing is a technique for simultaneous acquisition and compression of data that is sparse or can be made sparse in some domain. It is currently under intense development and has been profitably employed for industrial and medical applications. We here describe the use of this technique for the processing of astronomical data. We outline the procedure as applied to exoplanet gravitational microlensing and analyze measurement results and uncertainty values. We describe implications for on-spacecraft data processing for space observatories. Our findings suggest that application of these techniques may yield significant, enabling benefits especially for power and volume-limited space applications such as miniaturized or micro-constellation satellites.

  8. Analyzing Molecular Clouds with the Spectral Correlation Function

    NASA Astrophysics Data System (ADS)

    Rosolowsky, E. W.; Goodman, A. A.; Williams, J. P.; Wilner, D. J.

    1997-12-01

    The Spectral Correlation Function (SCF) is a new data analysis algorithm that measures how the properties of spectra vary from position to position in a spectral-line map. For each spectrum in a data cube, the SCF measures the "difference" between that spectrum and a specified subset of its neighbors. This algorithm is intended for use on both simulated and observed position-position-velocity data cubes. In initial tests of the SCF, we have shown that a histogram of the SCF for a map is a good descriptor of the spatial-velocity distribution of material. In one test, we compare the SCF distributions for: 1) a real data cube; 2) a cube made from the real cube's spectra with randomized positions; and 3) the results of a preliminary MHD simulation by Gammie, Ostriker, and Stone. The results of the test show that the real cloud and the simulation are much closer to each other in their SCF distributions than is either to the randomized cube. We are now in the process of applying the SCF to a larger set of observed and simulated data cubes. Our ultimate aim is to use the SCF both on its own, as a descriptor of the spatial-kinetic properties of interstellar gas, and also as a tool for evaluating how well simulations resemble observations. Our expectation is that the SCF will be more discriminatory (less likely to produce a false match) than the data cube descriptors currently available.
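
A toy version of the per-spectrum neighbour comparison might look like the following; this is an assumed simplification, since the actual SCF definition includes normalizations and scalings not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
cube = rng.standard_normal((16, 16, 32))   # position-position-velocity data cube

# For each interior spectrum, rms difference from the mean of its 4 neighbours.
scf = np.zeros((14, 14))
for i in range(1, 15):
    for j in range(1, 15):
        neigh = (cube[i - 1, j] + cube[i + 1, j] + cube[i, j - 1] + cube[i, j + 1]) / 4
        scf[i - 1, j - 1] = np.sqrt(np.mean((cube[i, j] - neigh) ** 2))

# A histogram of the scf values then summarizes the map, as in the tests
# comparing real, position-randomized, and simulated cubes.
```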

  9. The Effects of Composition and gamma'/gamma Lattice Parameter Mismatch on the Critical Resolved Shear Stresses for Octahedral and Cube Slip in NiAlCrX Alloys

    NASA Technical Reports Server (NTRS)

    Miner, R. V.

    1997-01-01

    Prototypical single-crystal NiAlCrX superalloys were studied to examine the effects of the common major alloying elements, Co, Mo, Nb, Ta, Ti, and W, on yielding behavior. The alloys contained about 10 at. pct Cr, 60 vol pct of the gamma' phase, and about 3 at. pct of X in the gamma'. The critical resolved shear stresses (CRSSs) for octahedral and primary cube slip were measured at 760 C, which is about the peak strength temperature. The CRSS(sub oct) and CRSS(sub cube) are discussed in relation to those of Ni, (Al, X) gamma' alloys taken from the literature and the gamma'/gamma lattice mismatch. The CRSS(sub oct) of the gamma + gamma' alloys reflected a similar compositional dependence to that of both the CRSS(sub cube) of the gamma' phase and the gamma'/gamma lattice parameter mismatch. The CRSS(sub cube) of the gamma + gamma' alloys also reflected the compositional dependence of the gamma'/gamma mismatch, but bore no similarity to that of CRSS(sub cube) for gamma' alloys since it is controlled by the gamma matrix. The ratio of CRSS(sub cube)/CRSS(sub oct) was decreased by all alloying elements except Co, which increased the ratio. The decrease in CRSS(sub cube)/CRSS(sub oct) was related to the degree to which elements partition to the gamma' rather than the gamma phase.

  10. EarthCube Cyberinfrastructure: The Importance of and Need for International Strategic Partnerships to Enhance Interconnectivity and Interoperability

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Lehnert, K.; Zanzerkia, E. E.

    2017-12-01

    The United States National Science Foundation's EarthCube program is a community-driven activity aimed at transforming the conduct of geosciences research and education by creating a well-connected cyberinfrastructure for sharing and integrating data and knowledge across all geoscience disciplines in an open, transparent, and inclusive manner, and at accelerating our ability to understand and predict the Earth system. After five years of community engagement, governance, and development activities, EarthCube is now transitioning into an implementation phase. In the first phase of implementing the EarthCube architecture, the project leadership has identified the following architectural components as the top three priorities, focused on technologies, interfaces and interoperability elements that will address: a) Resource Discovery; b) Resource Registry; and c) Resource Distribution and Access. Simultaneously, EarthCube is exploring international partnerships to leverage synergies with other e-infrastructure programs and projects in Europe, Australia, and other regions, and to discuss potential partnerships and mutually beneficial collaborations to increase the interoperability of systems for advancing EarthCube's goals in an efficient and effective manner. In this session, we will present the progress of EarthCube on a number of fronts and engage geoscientists and data scientists in the future steps toward the development of EarthCube for advancing research and discovery in the geosciences. The talk will underscore the importance of strategic partnerships with similar eScience projects and programs across the globe.

  11. White House Maker Faire

    NASA Image and Video Library

    2014-06-18

    Joey Hudy demonstrates his Intel Galileo-based 10x10x10 LED Cube during the first ever White House Maker Faire which brings together students, entrepreneurs, and everyday citizens who are using new tools and techniques to launch new businesses, learn vital skills in science, technology, engineering, and math (STEM), and fuel the renaissance in American manufacturing, at the White House, Wednesday, June 18, 2014 in Washington. Photo Credit: (NASA/Bill Ingalls)

  12. Reduction of time-resolved space-based CCD photometry developed for MOST Fabry Imaging data*

    NASA Astrophysics Data System (ADS)

    Reegen, P.; Kallinger, T.; Frast, D.; Gruberbauer, M.; Huber, D.; Matthews, J. M.; Punz, D.; Schraml, S.; Weiss, W. W.; Kuschnig, R.; Moffat, A. F. J.; Walker, G. A. H.; Guenther, D. B.; Rucinski, S. M.; Sasselov, D.

    2006-04-01

    The MOST (Microvariability and Oscillations of Stars) satellite obtains ultraprecise photometry from space with high sampling rates and duty cycles. Astronomical photometry or imaging missions in low Earth orbits, like MOST, are especially sensitive to scattered light from Earthshine, and all these missions have a common need to extract target information from voluminous data cubes. They consist of upwards of hundreds of thousands of two-dimensional CCD frames (or subrasters) containing from hundreds to millions of pixels each, where the target information, superposed on background and instrumental effects, is contained only in a subset of pixels (Fabry Images, defocused images, mini-spectra). We describe a novel reduction technique for such data cubes: resolving linear correlations of target and background pixel intensities. This step-wise multiple linear regression removes only those target variations which are also detected in the background. The advantage of regression analysis versus background subtraction is the appropriate scaling, taking into account that the amount of contamination may differ from pixel to pixel. The multivariate solution for all pairs of target/background pixels is minimally invasive of the raw photometry while being very effective in reducing contamination due to, e.g. stray light. The technique is tested and demonstrated with both simulated oscillation signals and real MOST photometry.
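
The regression-versus-subtraction point can be illustrated with a toy sketch; the signals below are assumed stand-ins, not MOST data. Fitting a per-pair scale factor removes only the variation that is shared with the background pixel, whereas plain subtraction would over- or under-correct when the contamination levels differ.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
stray = rng.standard_normal(n)                       # common stray-light variation
signal = 0.1 * np.sin(np.linspace(0.0, 20.0, n))     # the target's own variability
target = signal + 0.8 * stray                        # target pixel: signal + stray
background = 1.3 * stray + 0.05 * rng.standard_normal(n)  # more contaminated pixel

# Least-squares slope of target on background gives the appropriate scaling.
b = np.cov(target, background)[0, 1] / np.var(background, ddof=1)
corrected = target - b * (background - background.mean())
```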

  13. NanoRacks CubeSat Deployment

    NASA Image and Video Library

    2014-02-11

    ISS038-E-044916 (11 Feb. 2014) --- A set of NanoRacks CubeSats is photographed by an Expedition 38 crew member after the deployment by the Small Satellite Orbital Deployer (SSOD). The CubeSats program contains a variety of experiments such as Earth observations and advanced electronics testing.

  14. SpaceCube Technology Brief Hybrid Data Processing System

    NASA Technical Reports Server (NTRS)

    Petrick, Dave

    2016-01-01

    The intent of this presentation is to give a status update, for multiple audience types, on the SpaceCube data processing technology at GSFC. SpaceCube has grown to support multiple missions inside and outside of NASA, and we are being requested to give technology overviews in various forums.

  15. Applications of Nano-Satellites and Cube-Satellites in Microwave and RF Domain

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Goverdhanam, Kavita

    2015-01-01

    This paper presents an overview of microwave technologies for Small Satellites including NanoSats and CubeSats. In addition, examples of space communication technology demonstration projects using CubeSats are presented. Furthermore, examples of miniature instruments for Earth science measurements are discussed.

  17. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    PubMed

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG) data, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT-BIH database. The power reduction is demonstrated using a Bluetooth transceiver, with power reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
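
A minimal sketch of the hybrid idea follows, with assumed details (the paper's actual codec and entropy coder are more elaborate): a coarse lossy stream plus a residual stream that restores the signal exactly when lossless quality is required.

```python
def hybrid_compress(signal, step=8):
    lossy = [round(s / step) for s in signal]                 # coarse quantized stream
    residual = [s - q * step for s, q in zip(signal, lossy)]  # exact correction stream
    return lossy, residual  # in practice both streams would be entropy coded

def restore_lossy(lossy, step=8):
    return [q * step for q in lossy]            # approximate signal, small payload

def restore_lossless(lossy, residual, step=8):
    return [q * step + r for q, r in zip(lossy, residual)]  # bit-exact restoration
```

The receiver can decode only the lossy stream at low rate, then request the residual stream whenever the exact waveform is needed.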

  18. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    PubMed

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
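
The hardware appeal of sparse binary measurement matrices can be seen in a small sketch (sizes and sparsity below are illustrative assumptions): with d ones per column, each input sample contributes to only d accumulators, so the encoder needs additions only, with no multiplies.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, d = 256, 64, 4        # signal length, number of measurements, column sparsity

# Sparse random binary measurement matrix: exactly d ones in every column.
Phi = np.zeros((M, N))
for col in range(N):
    Phi[rng.choice(M, size=d, replace=False), col] = 1.0

# A sparse test signal, as assumed by compressed sensing recovery.
x = np.zeros(N)
x[rng.choice(N, size=5, replace=False)] = rng.standard_normal(5)

y = Phi @ x                 # M measurements; with binary Phi this is adds only
```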

  19. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
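For context, the simplest form of coil compression, a single global SVD of the multichannel data, can be sketched as below; the paper's method goes further by computing such compressions per spatial location along the fully sampled directions and then aligning them. Sizes and data here are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: samples x physical coils, with only 6 latent sources
n_samples, n_coils, n_virtual = 1000, 32, 6
sources = rng.standard_normal((n_samples, n_virtual)) \
          + 1j * rng.standard_normal((n_samples, n_virtual))
mixing = rng.standard_normal((n_virtual, n_coils))
data = sources @ mixing             # correlated 32-channel data

# Global SVD coil compression: project onto the dominant right singular vectors
_, s, vh = np.linalg.svd(data, full_matrices=False)
compress = vh[:n_virtual].conj().T  # (coils x virtual coils) compression matrix
virtual = data @ compress           # compressed "virtual coil" data

energy_kept = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()
assert energy_kept > 0.999          # 6 virtual coils capture nearly all signal energy
```

In this toy case the data is exactly rank 6, so no information is lost; with real coil arrays the retained energy is high but not perfect, which is what makes the choice of virtual-coil count a compression/quality trade-off.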

  20. Textual data compression in computational biology: a synopsis.

    PubMed

    Giancarlo, Raffaele; Scaturro, Davide; Utro, Filippo

    2009-07-01

    Textual data compression, and the associated techniques coming from information theory, are often perceived as being of interest for data communication and storage. However, they are also deeply related to classification and data mining and analysis. In recent years, a substantial effort has been made for the application of textual data compression techniques to various computational biology tasks, ranging from storage and indexing of large datasets to comparison and reverse engineering of biological networks. The main focus of this review is on a systematic presentation of the key areas of bioinformatics and computational biology where compression has been used. When possible, a unifying organization of the main ideas and techniques is also provided. It goes without saying that most of the research results reviewed here offer software prototypes to the bioinformatics community. The Supplementary Material provides pointers to software and benchmark datasets for a range of applications of broad interest. In addition to providing references to software, the Supplementary Material also gives a brief presentation of some fundamental results and techniques related to this paper. It is available at: http://www.math.unipa.it/~raffaele/suppMaterial/compReview/

  1. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
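A simplified version of correlation-based column sorting can be sketched as follows: fold the 1-D S-EMG signal into a 2-D array of segments, then greedily reorder columns so neighbouring columns are maximally correlated, which helps 2-D image codecs exploit inter-segment redundancy. The greedy criterion is my own simplification; the paper's exact sorting rule may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

def correlation_sort(img):
    """Greedy reordering: start from column 0, then repeatedly append the
    remaining column most correlated with the last one placed."""
    n = img.shape[1]
    corr = np.corrcoef(img.T)           # pairwise column correlations
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda j: corr[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    return img[:, order], order

# 1-D signal folded into a 2-D array: one segment per column
signal = np.cumsum(rng.standard_normal(64 * 32))
img = signal.reshape(64, 32, order="F")
sorted_img, order = correlation_sort(img)
assert sorted(order) == list(range(32))  # a permutation, so fully invertible
```

Since the reordering is a permutation, the decoder only needs the order list to undo the preprocessing after image decompression.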

  2. Characterization of particle deformation during compression measured by confocal laser scanning microscopy.

    PubMed

    Guo, H X; Heinämäki, J; Yliruusi, J

    1999-09-20

    Direct compression of riboflavin sodium phosphate tablets was studied by confocal laser scanning microscopy (CLSM). The technique is non-invasive and generates three-dimensional (3D) images. Tablets of 1% riboflavin sodium phosphate with two grades of microcrystalline cellulose (MCC) were individually compressed at compression forces of 1.0 and 26.8 kN. The behaviour and deformation of drug particles on the upper and lower surfaces of the tablets were studied under compression forces. Even at the lower compression force, distinct recrystallized areas in the riboflavin sodium phosphate particles were observed in both Avicel PH-101 and Avicel PH-102 tablets. At the higher compression force, the recrystallization of riboflavin sodium phosphate was more extensive on the upper surface of the Avicel PH-102 tablet than the Avicel PH-101 tablet. The plastic deformation properties of both MCC grades reduced the fragmentation of riboflavin sodium phosphate particles. When compressed with MCC, riboflavin sodium phosphate behaved as a plastic material. The riboflavin sodium phosphate particles were more tightly bound on the upper surface of the tablet than on the lower surface, and this could also be clearly distinguished by CLSM. Drug deformation could not be visualized by other techniques. Confocal laser scanning microscopy provides valuable information on the internal mechanisms of direct compression of tablets.

  3. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664
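The three ingredients named in the overview, sparsity, incoherent sampling, and non-linear reconstruction, can be illustrated with a toy ISTA (iterative soft-thresholding) recovery. The sizes, the Gaussian sensing matrix, and the regularization weight are illustrative assumptions, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparsity: 10 nonzeros in a length-200 vector
n, m, k = 200, 100, 10
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)

# Incoherence: random Gaussian measurements, m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Non-linear reconstruction: ISTA for the l1-regularized least-squares problem
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the data-fit gradient
lam = 0.01
x = np.zeros(n)
for _ in range(3000):
    x = soft(x + A.T @ (y - A @ x) / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
assert rel_err < 0.15               # sparse signal recovered from half the samples
```

The same soft-thresholding structure, applied in a wavelet or temporal-sparsity domain instead of the identity, underlies many practical CS-MRI reconstructions.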

  4. Learning Experiences in a Giant Interactive Environment: Insights from The Cube

    ERIC Educational Resources Information Center

    Stoodley, Ian; Sayyad Abdi, Elham; Bruce, Christine; Hughes, Hilary

    2018-01-01

    In November 2012, Queensland University of Technology in Australia launched a giant interactive learning environment known as "The Cube". This article reports a phenomenographic investigation into visitors' different experiences of learning in The Cube. At present very little is known about people's learning experience in spaces…

  5. Search for counterpart to IceCube-171015A with ANTARES

    NASA Astrophysics Data System (ADS)

    Dornic, Damien; Coleiro, Alexis

    2017-10-01

    Damien Dornic (CPPM/CNRS) and Alexis Coleiro (IFIC/APC) report on behalf of the ANTARES Collaboration. Using online data from the ANTARES detector, we have performed a follow-up analysis of the recently reported high-energy starting event (HESE) neutrino IceCube-171015 (AMON IceCube HESE 56068624_130126).

  6. Geometrical optics analysis of the structural imperfection of retroreflection corner cubes with a nonlinear conjugate gradient method.

    PubMed

    Kim, Hwi; Min, Sung-Wook; Lee, Byoungho

    2008-12-01

    Geometrical optics analysis of the structural imperfection of retroreflection corner cubes is described. In the analysis, a geometrical optics model of six-beam reflection patterns generated by an imperfect retroreflection corner cube is developed, and its structural error extraction is formulated as a nonlinear optimization problem. The nonlinear conjugate gradient method is employed for solving the nonlinear optimization problem, and its detailed implementation is described. The proposed method of analysis is a mathematical basis for the nondestructive optical inspection of imperfectly fabricated retroreflection corner cubes.

  7. Invariant Deformation Element Model Interpretation to the Crystallography of Diffusional Body-Centered-Cube to Face-Centered-Cube Phase Transformations

    NASA Astrophysics Data System (ADS)

    Liu, Hongwei; Liu, Jiangwen; Su, Guangcai; Li, Weizhou; Zeng, Jianmin; Hu, Zhiliu

    2012-10-01

    The crystallography of body-centered-cube to face-centered-cube (bcc-to-fcc) diffusional phase transformations in a duplex stainless steel and a Cu-Zn alloy, including long axis, orientation relationship (OR), habit plane (HP), and dislocation spacing, is successfully interpreted with a one-step rotation from the Bain lattice relationship by applying a simplified invariant line (IL) analysis. It is proposed that the dislocation slipping direction in the matrix plays an important role in controlling the crystallography of precipitation.

  8. RM-CLEAN: RM spectra cleaner

    NASA Astrophysics Data System (ADS)

    Heald, George

    2017-08-01

    RM-CLEAN reads in dirty Q and U cubes, generates the rotation measure transfer function (RMTF) based on the frequencies given in an ASCII file, and cleans the RM spectra following the algorithm given by Brentjens (2007). The output cubes contain the clean model components and the CLEANed RM spectra. The input cubes must be reordered with mode=312, and the output cubes will have the same ordering and thus must be reordered after being written to disk. RM-CLEAN runs as a MIRIAD (ascl:1106.007) task and a Python wrapper is included with the code.

  9. Basic properties of lattices of cubes, algorithms for their construction, and application capabilities in discrete optimization

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2015-01-01

    The basic properties of a new type of lattices—a lattice of cubes—are described. It is shown that, with a suitable choice of union and intersection operations, the set of all subcubes of an N-cube forms a lattice, which is called a lattice of cubes. Algorithms for constructing such lattices are described, and the results produced by these algorithms in the case of lattices of various dimensions are illustrated. It is proved that a lattice of cubes is a lattice with supplements, which makes it possible to minimize and maximize supermodular functions on it. Examples of such functions are given. The possibility of applying previously developed efficient optimization algorithms to the formulation and solution of new classes of problems on lattices of cubes is discussed.
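The lattice structure described here can be made concrete by encoding each subcube of the N-cube as a word over {0, 1, *}, where '*' marks a free axis. The meet/join below is one natural choice consistent with the record's description; the authors' exact operations may differ:

```python
from itertools import product

def meet(a, b):
    """Largest subcube contained in both (None if they are disjoint)."""
    out = []
    for x, y in zip(a, b):
        if x == '*':
            out.append(y)
        elif y == '*' or x == y:
            out.append(x)
        else:
            return None          # conflicting fixed coordinates: empty intersection
    return ''.join(out)

def join(a, b):
    """Smallest subcube containing both: free every axis where they disagree."""
    return ''.join(x if x == y else '*' for x, y in zip(a, b))

assert meet('0**', '*1*') == '01*'   # intersection of a face and a face
assert join('000', '011') == '0**'   # smallest subcube covering two vertices
# The lattice has 3^N elements: each axis is fixed to 0, fixed to 1, or free.
N = 3
assert sum(1 for _ in product('01*', repeat=N)) == 3 ** N
```

This ternary-word representation is also what makes enumeration and supermodular-function evaluation over the lattice straightforward to implement.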

  10. Design, Analysis and Testing of a PRSEUS Pressure Cube to Investigate Assembly Joints

    NASA Technical Reports Server (NTRS)

    Yovanof, Nicolette; Lovejoy, Andrew E.; Baraja, Jaime; Gould, Kevin

    2012-01-01

    Because of its potential to significantly increase fuel efficiency, the current focus of NASA's Environmentally Responsible Aviation Program is the hybrid wing body (HWB) aircraft. Due to the complex load condition that exists in HWB structure, as compared to traditional aircraft configurations, light-weight, cost-effective and manufacturable structural concepts are required to enable the HWB. The Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) concept is one such structural concept, and a building block approach for its technology development is being conducted. As part of this approach, a PRSEUS pressure cube was developed as a risk reduction test article to examine a new integral cap joint concept. This paper describes the design, analysis and testing of the PRSEUS pressure cube test article. The pressure cube was required to withstand a 2P (18.4 psi) overpressure load. The pristine pressure cube was tested to 2.2P with no catastrophic failure. After the addition of barely visible impact damage, the cube was pressure loaded to 48 psi, where catastrophic failure occurred, meeting the scale-up requirement. Pretest and posttest analyses agree well with the cube test response and indicate that current analysis methods can be used to accurately analyze PRSEUS structure for initial failure response.

  11. Cube texture formation during the early stages of recrystallization of Al-1%wt.Mn and AA1050 aluminium alloys

    NASA Astrophysics Data System (ADS)

    Miszczyk, M. M.; Paul, H.

    2015-08-01

    The cube texture formation during primary recrystallization was analysed in plane strain deformed samples of a commercial AA1050 alloy and an Al-1%wt.Mn model alloy single crystal of the Goss{110}<001> orientation. The textures were measured with the use of X-ray diffraction and scanning electron microscopy equipped with an electron backscattered diffraction facility. After recrystallization of the Al-1%wt.Mn single crystal, the texture of the recrystallized grains was dominated by four variants of the S{123}<634> orientation. The cube grains were only sporadically detected by the SEM/EBSD system. Nevertheless, an increased density of <111> poles corresponding to the cube orientation was observed. The latter was connected with the superposition of four variants of the S{123}<634> orientation. This indicates that the cube texture after the recrystallization was a ‘compromise texture’. In the case of the recrystallized AA1050 alloy, the strong cube texture results from both the increased density of the particular <111> poles of the four variants of the S orientation and the ∼40°<111>-type rotation. The first mechanism transforms the Sdef-oriented areas into Srex ones, whereas the second transforms the near S-oriented, as-deformed areas into near cube-oriented grains.

  12. A computer program for simulating geohydrologic systems in three dimensions

    USGS Publications Warehouse

    Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.

    1980-01-01

    This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulation, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volumes of printout for modelers, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records.
Appendixes provide instructions to compile the program, definitions and cross-references for program variables, summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)

  13. Pulse compression favourable aperiodic infrared imaging approach for non-destructive testing and evaluation of bio-materials

    NASA Astrophysics Data System (ADS)

    Mulaveesala, Ravibabu; Dua, Geetika; Arora, Vanita; Siddiqui, Juned A.; Muniyappa, Amarnath

    2017-05-01

    In recent years, aperiodic, transient pulse compression favourable infrared imaging methodologies have been demonstrated as reliable, quantitative, remote characterization and evaluation techniques for testing and evaluation of various biomaterials. The present work demonstrates a pulse compression favourable aperiodic thermal wave imaging technique, frequency modulated thermal wave imaging, for bone diagnostics, especially by considering bone with tissue, skin and muscle overlayers. In order to assess the capability of the proposed frequency modulated thermal wave imaging technique to detect density variations in a multi-layered skin-fat-muscle-bone structure, finite element modeling and simulation studies have been carried out. Further, frequency- and time-domain post-processing approaches have been applied to the temporal temperature data in order to improve the detection capabilities of frequency modulated thermal wave imaging.

  14. Universal data compression

    NASA Astrophysics Data System (ADS)

    Lindsay, R. A.; Cox, B. V.

    Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but at the cost of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different size data files are graphically presented and discussed in the paper. Adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
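The ratio-versus-runtime comparison described here is easy to reproduce with stock codecs; here zlib (LZ77 plus Huffman coding) and bz2 stand in for the paper's Adaptive Huffman and Lempel-Ziv implementations, and the data files are synthetic:

```python
import bz2
import os
import time
import zlib

def measure(codec, data):
    """Return (compression ratio = original/compressed size, elapsed seconds)."""
    t0 = time.perf_counter()
    packed = codec(data)
    return len(data) / len(packed), time.perf_counter() - t0

text = b"universal adaptive data compression " * 2000   # highly redundant input
noise = os.urandom(len(text))                            # incompressible input

results = {}
for name, codec in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
    results[name] = (measure(codec, text)[0], measure(codec, noise)[0])

for r_text, r_noise in results.values():
    assert r_text > 10     # redundancy is exploited by a universal coder
    assert r_noise < 1.1   # random data cannot be losslessly compressed
```

The contrast between the two inputs illustrates the theoretical limit the abstract alludes to: no lossless universal coder can beat the entropy of the source, so incompressible data yields a ratio of about 1 regardless of algorithm.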

  15. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.
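The encoder/decoder split described above, transmit only the most significant information and reconstruct an image from it, can be sketched with a transform coder. The 8x8 DCT and top-25% coefficient selection below are illustrative assumptions, not the brassboard's actual algorithm:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

rng = np.random.default_rng(5)
frame = rng.random((8, 8))             # one 8x8 block of a video frame (synthetic)

C = dct_matrix(8)
coef = C @ frame @ C.T                 # encoder: 2-D transform of the block
# Keep only the most significant 25% of coefficients for "transmission"
mask = np.abs(coef) >= np.quantile(np.abs(coef), 0.75)
recon = C.T @ (coef * mask) @ C        # decoder: inverse transform of kept terms

assert np.allclose(C @ C.T, np.eye(8))                 # transform is orthonormal
rel_err = np.linalg.norm(recon - frame) / np.linalg.norm(frame)
assert rel_err < 0.6                   # most energy survives in the kept coefficients
```

Dropping small transform coefficients is the generic mechanism behind "transmit the most significant information": the orthonormal transform concentrates energy, so the discarded terms cost little reconstruction quality per bit saved.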

  16. On Gravitational Radiation: A Nonlinear Wave Theory in a Viscoelastic Kerr-Lambda Spacetime

    NASA Astrophysics Data System (ADS)

    Gamble, Ronald

    This project presents experimental results concerning the mix design and the fresh and hardened properties of an ultra-high strength concrete that has already been developed for high-performance construction applications but now needs to be evaluated for a 3D printing process. The concrete is designed to be extruded through a nozzle and pump system, with layers printed to analyze deformation within the printed layers. The key factors for printable concrete are the ability to be extruded through a pump and nozzle (flowability) and buildability. The flow of mortar is studied by examining the rheological properties of the mix and assessing the acceptable range of shear strength. Three different water-to-cement ratios and varying dosages of superplasticizers were incorporated to optimize a workable mortar/concrete mix for 3D printing. A Brookfield DV-III Ultra programmable rheometer was used to determine the viscosity and yield strength of the mortar mixes; these values were used to calculate the shear strength of the printable concrete. Compressive strengths of optimal mixtures were measured to assess the feasibility of 3D printed concrete compared with traditional means. Compression tests were conducted on a High Capacity Series Compression Testing Machine with 2" x 2" mortar cubes. The results indicated that mortars with shear strengths in the range of 0.3-0.9 kPa could be used in a 3D printer. The compressive strength of the concrete made with a 25% water/cement ratio and 10% superplasticizer dosage reached 62.8 MPa, which qualifies it as ultra-high strength mortar. An optimum mix will be validated by printing successive filaments until deformation occurs. The end goal of this project is to develop an optimal concrete mix that produces the strength needed for 3D printed concrete. 
Using our predesigned ultra-high strength concrete mix ingredients, we will optimize that mix to have the same performance characteristics and be used in 3D printing applications.

  17. A multifunctional solar panel antenna for cube satellites

    NASA Astrophysics Data System (ADS)

    Fawole, Olutosin C.

    The basic cube satellite (CubeSat) is a modern small satellite that has a standard size of about one liter (the 1U CubeSat). Three 1U CubeSats could be stacked to form a 3U CubeSat. Their low-cost, short development time, and ease of deployment make CubeSats popular for space research, geographical information gathering, and communication applications. An antenna is a key part of the CubeSat communication subsystem. Traditionally, antennas used on CubeSats are wrapped-up wire dipole antennas, which are deployed after satellite launch. Another antenna type used on CubeSats is the patch antenna. In addition to their low gain and efficiency, deployable dipole antennas may also fail to deploy on satellite launch. On the other hand, a solid patch antenna will compete for space with solar cells when placed on a CubeSat face, interfering with satellite power generation. Slot antennas are promising alternatives to dipole and patch antennas on CubeSats. When excited, a thin slot aperture etched on a conductive sheet (ground plane) is an efficient bidirectional radiator. This open slot antenna can be backed by a reflector or cavity for unidirectional radiation, and solar cells can be placed in spaces on the ground plane not occupied by the slot. The large surface areas of 3U CubeSats can be exploited for a multifunctional antenna by integrating multiple thin slot radiators, which are backed by a thin cavity on the CubeSat surfaces. Solar cells can then be integrated on the antenna surface. Polarization diversity and frequency diversity improve the overall performance of a communication system. Having a single radiating structure that could provide these diversities is desired. It has been demonstrated that when a probe excites a square cavity with two unequal length crossed-slots, the differential radiation from the two slots combines in the far-field to yield circular polarization. 
    In addition, it has been shown that two equal-length proximal slots, when both fed with a stripline, resonate at a frequency due to their original lengths, and also resonate at a lower frequency due to mutual coupling between the slots, leading to a dual-band operation. The multifunctional antenna designs presented are harmonizations and extensions of these two independent works. In the multifunctional antenna designs presented, multiple slots were etched on an 83 mm x 340 mm two-layer shallow cavity. The slots were laid out on the cavity such that when the cavity was excited by a probe at a particular point, the differential radiation from the slots would combine in the far-field to yield Left-Handed Circular Polarization (LHCP). Furthermore, when the cavity was excited by another probe at an opposite point, the slots would produce Right-Handed Circular Polarization (RHCP). In addition, by design, these slots were laid out on the cavity such that some slots were close enough together to give Linearly Polarized (LP) dual-band operation when fed with a stripline. This antenna was designed and optimized via computer simulations, fabricated using Printed Circuit Board (PCB) technology, and characterized using a Vector Network Analyzer (VNA) and NSI Far Field Systems.

  18. A novel model for the chaotic dynamics of superdiffusion

    NASA Astrophysics Data System (ADS)

    Cushman, J. H.; Park, M.; O'Malley, D.

    2009-04-01

    Previously we've shown that by modeling the convective velocity in a turbulent flow field as Brownian, one obtains Richardson super diffusion where the expected distance between pairs of particles scales with time cubed. By proving generalized central limit type theorems it's possible to show that modeling the velocity or the acceleration as α-stable Levy gives rise to more general scaling laws that can easily explain other super diffusive regimes. The problem with this latter approach is that the mean square displacement of a particle is infinite. Here we provide an alternate approach that gives a power law mean square displacement of any desired order. We do so by constructing compressed and stretched extensions to Brownian motion. The finite size Lyapunov exponent, the underlying stochastic differential equation and its corresponding Fokker-Planck equations are derived. The fractal dimension of these processes turns out to be the same as that of classical Brownian motion.
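The cubic-in-time scaling quoted here is easy to verify numerically: if the velocity is modeled as a Wiener process, the integrated position has variance growing as t³ (Var[x(t)] = t³/3 for unit diffusion). A small Monte Carlo check, with path counts and step sizes chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Velocity modeled as Brownian motion; position is its time integral.
n_paths, n_steps, dt = 2000, 1000, 0.01
dv = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
v = np.cumsum(dv, axis=1)            # Wiener velocity process
x = np.cumsum(v, axis=1) * dt        # integrated position (Euler quadrature)

t = np.arange(1, n_steps + 1) * dt
msd = (x ** 2).mean(axis=0)          # mean-square displacement over the ensemble

# Fit the scaling exponent on the later times, away from discretization error
slope = np.polyfit(np.log(t[100:]), np.log(msd[100:]), 1)[0]
assert abs(slope - 3.0) < 0.2        # Richardson-like superdiffusion: MSD ~ t^3
```

This is the Brownian-velocity limiting case mentioned in the abstract; the compressed and stretched extensions the authors construct generalize the exponent away from 3 while keeping the mean-square displacement finite.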

  19. Large single domain 123 material produced by seeding with single crystal rare earth barium copper oxide single crystals

    DOEpatents

    Todt, V.; Miller, D.J.; Shi, D.; Sengupta, S.

    1998-07-07

    A method of fabricating bulk YBa{sub 2}Cu{sub 3}O{sub x} where compressed powder oxides and/or carbonates of Y and Ba and Cu present in mole ratios to form YBa{sub 2}Cu{sub 3}O{sub x} are heated in the presence of a Nd{sub 1+x}Ba{sub 2{minus}x}Cu{sub 3}O{sub y} seed crystal to a temperature sufficient to form a liquid phase in the YBa{sub 2}Cu{sub 3}O{sub x} while maintaining the seed crystal solid. The materials are slowly cooled to provide a YBa{sub 2}Cu{sub 3}O{sub x} material having a predetermined number of domains between 1 and 5. Crack-free single domain materials can be formed using either plate shaped seed crystals or cube shaped seed crystals with a pedestal of preferential orientation material. 7 figs.

  20. Large single domain 123 material produced by seeding with single crystal rare earth barium copper oxide single crystals

    DOEpatents

    Todt, Volker; Miller, Dean J.; Shi, Donglu; Sengupta, Suvankar

    1998-01-01

    A method of fabricating bulk YBa.sub.2 Cu.sub.3 O.sub.x where compressed powder oxides and/or carbonates of Y and Ba and Cu present in mole ratios to form YBa.sub.2 Cu.sub.3 O.sub.x are heated in the presence of a Nd.sub.1+x Ba.sub.2-x Cu.sub.3 O.sub.y seed crystal to a temperature sufficient to form a liquid phase in the YBa.sub.2 Cu.sub.3 O.sub.x while maintaining the seed crystal solid. The materials are slowly cooled to provide a YBa.sub.2 Cu.sub.3 O.sub.x material having a predetermined number of domains between 1 and 5. Crack-free single domain materials can be formed using either plate shaped seed crystals or cube shaped seed crystals with a pedestal of preferential orientation material.

  1. Mechanical properties of concrete containing recycled concrete aggregate (RCA) and ceramic waste as coarse aggregate replacement

    NASA Astrophysics Data System (ADS)

    Khalid, Faisal Sheikh; Azmi, Nurul Bazilah; Sumandi, Khairul Azwa Syafiq Mohd; Mazenan, Puteri Natasya

    2017-10-01

    Many construction and development activities today consume large amounts of concrete. The amount of construction waste is also increasing because of the demolition process. Much of this waste can be recycled to produce new products and increase the sustainability of construction projects. As recyclable construction wastes, concrete and ceramic can replace the natural aggregate in concrete because of their hard and strong physical properties. This research used 25%, 35%, and 45% recycled concrete aggregate (RCA) and ceramic waste as coarse aggregate in producing concrete. Several tests, such as concrete cube compression and splitting tensile tests, were also performed to determine and compare the mechanical properties of the recycled concrete with those of the normal concrete that contains 100% natural aggregate. The concrete containing 35% RCA and 35% ceramic waste showed the best properties compared with the normal concrete.

  2. Thermodynamic properties derived from the free volume model of liquids

    NASA Technical Reports Server (NTRS)

    Miller, R. I.

    1974-01-01

    An equation of state and expressions for the isothermal compressibility, thermal expansion coefficient, heat capacity, and entropy of liquids have been derived from the free volume model partition function suggested by Turnbull. The simple definition of the free volume is used, and it is assumed that the specific volume is directly related to the cube of the intermolecular separation by a proportionality factor which is found to be a function of temperature and pressure as well as specific volume. When values of the proportionality factor are calculated from experimental data for real liquids, it is found to be approximately constant over ranges of temperature and pressure which correspond to the dense liquid phase. This result provides a single-parameter method for calculating dense liquid thermodynamic properties and is consistent with the fact that the free volume model is designed to describe liquids near the solidification point.

  3. A crystallographic model for nickel base single crystal alloys

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Stouffer, D. C.

    1988-01-01

    The purpose of this research is to develop a tool for the mechanical analysis of nickel-base single-crystal superalloys, specifically Rene N4, used in gas turbine engine components. This objective is achieved by developing a rate-dependent anisotropic constitutive model and implementing it in a nonlinear three-dimensional finite-element code. The constitutive model is developed from metallurgical concepts utilizing a crystallographic approach. An extension of Schmid's law is combined with the Bodner-Partom equations to model the inelastic tension/compression asymmetry and orientation-dependence in octahedral slip. Schmid's law is used to approximate the inelastic response of the material in cube slip. The constitutive equations model the tensile behavior, creep response and strain-rate sensitivity of the single-crystal superalloys. Methods for deriving the material constants from standard tests are also discussed. The model is implemented in a finite-element code, and the computed and experimental results are compared for several orientations and loading conditions.
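The Schmid's-law ingredient of the model reduces to a dot-product computation: the resolved shear stress on a slip system is the applied stress scaled by cos(φ)·cos(λ). A minimal sketch for one fcc octahedral slip system (the function name and inputs are illustrative, not from the constitutive model itself):

```python
import numpy as np

def schmid_factor(load_dir, slip_plane_normal, slip_dir):
    """Schmid's law: resolved shear stress = sigma * cos(phi) * cos(lambda),
    where phi is the angle load-to-plane-normal and lambda load-to-slip-direction."""
    l = np.asarray(load_dir, float); l /= np.linalg.norm(l)
    n = np.asarray(slip_plane_normal, float); n /= np.linalg.norm(n)
    d = np.asarray(slip_dir, float); d /= np.linalg.norm(d)
    return abs(l @ n) * abs(l @ d)

# Octahedral slip system {111}<110> under [001] tension in an fcc single crystal
m = schmid_factor([0, 0, 1], [1, 1, 1], [1, 0, -1])
# cos(phi) * cos(lambda) = (1/sqrt(3)) * (1/sqrt(2)) ~= 0.408
assert abs(m - 0.4082) < 1e-3
```

The orientation dependence the abstract describes enters exactly here: rotating the load direction changes the Schmid factors of the octahedral and cube slip systems, and hence which systems activate first.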

  4. SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon Rueff; Lyle Roybal; Denis Vollmer

    2013-01-01

    There is a significant need to protect the nation’s energy infrastructures from malicious actors using cyber methods. Supervisory, Control, and Data Acquisition (SCADA) systems may be vulnerable due to the insufficient security implemented during the design and deployment of these control systems. This is particularly true in older legacy SCADA systems that are still commonly in use. The purpose of INL’s research on the SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) project was to determine if and how data compression techniques could be used to identify and protect SCADA systems from cyber attacks. Initially, the concept was centered on how to train a compression algorithm to recognize normal control system traffic versus hostile network traffic. Because large portions of the TCP/IP message traffic (called packets) are repetitive, the concept of using compression techniques to differentiate “non-normal” traffic was proposed. In this manner, malicious SCADA traffic could be identified at the packet level prior to completing its payload. Previous research has shown that SCADA network traffic has traits desirable for compression analysis. This work investigated three different approaches to identify malicious SCADA network traffic using compression techniques. The preliminary analyses and results presented herein are clearly able to differentiate normal from malicious network traffic at the packet level at a very high confidence level for the conditions tested. Additionally, the master dictionary approach used in this research appears to initially provide a meaningful way to categorize and compare packets within a communication channel.
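
    The packet-level idea above can be sketched with a general-purpose compressor: deflate with a preset dictionary built from previously seen "normal" traffic stands in for the project's trained compression model. The packet contents below are hypothetical, not from the SPADUC dataset.

```python
import zlib

def compression_score(packet: bytes, normal_dict: bytes) -> int:
    """Compressed size of `packet` against a dictionary of normal traffic.
    Packets resembling past traffic compress to very few bytes."""
    comp = zlib.compressobj(level=9, zdict=normal_dict)
    return len(comp.compress(packet) + comp.flush())

# Hypothetical repetitive polling traffic versus an injected payload.
normal_packets = [b"READ coil=12 value=0\n" for _ in range(50)]
normal_dict = b"".join(normal_packets)[-32768:]  # deflate dictionaries cap at 32 KiB

normal = compression_score(b"READ coil=13 value=1\n", normal_dict)
hostile = compression_score(b"\x90\x90\x90 exec /bin/sh payload 0xdeadbeef", normal_dict)
print(normal < hostile)  # familiar packet compresses far better than the anomaly
```

    The score gap is the intuition behind the master-dictionary approach described above: packets that fail to compress against the channel's dictionary are flagged for inspection.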

  5. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

    … The need for data compression is not new. With humble beginnings such as the use of acronyms and abbreviations in spoken and written word, the methods for data compression became more advanced as the need for information grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique. Largely because of the …
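
    The Morse-code observation in this snippet, that frequent symbols should get short codes, is exactly what Huffman coding makes optimal for a given symbol distribution. A minimal sketch (illustrative only, not taken from the report):

```python
import heapq
from collections import Counter

def huffman_lengths(freqs: dict) -> dict:
    """Return the Huffman code length for each symbol in `freqs`."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)  # tie-breaker so tuples never compare the dicts
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return {s: len(code) for s, code in heap[0][2].items()}

text = "a message where e and t are far more common than q or z: teetetee"
lengths = huffman_lengths(Counter(text))
coded_bits = sum(lengths[ch] for ch in text)
print(coded_bits < 8 * len(text))  # variable-length codes beat fixed 8-bit codes
```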

  6. A Randomized Control Trial of Cardiopulmonary Feedback Devices and Their Impact on Infant Chest Compression Quality: A Simulation Study.

    PubMed

    Austin, Andrea L; Spalding, Carmen N; Landa, Katrina N; Myer, Brian R; Donald, Cure; Smith, Jason E; Platt, Gerald; King, Heather C

    2017-10-27

    In an effort to improve chest compression quality among health care providers, numerous feedback devices have been developed. Few studies, however, have focused on the use of cardiopulmonary resuscitation feedback devices for infants and children. This study evaluated the quality of chest compressions with standard team-leader coaching, a metronome (MetroTimer by ONYX Apps), and visual feedback (SkillGuide Cardiopulmonary Feedback Device) during simulated infant cardiopulmonary resuscitation. Seventy voluntary health care providers who had recently completed Pediatric Advanced Life Support or Basic Life Support courses were randomized into 1 of 3 groups for simulated infant cardiopulmonary resuscitation: team-leader coaching alone (control), coaching plus metronome, or coaching plus SkillGuide for 2 minutes continuously. Rate, depth, and frequency of complete recoil during cardiopulmonary resuscitation were recorded by the Laerdal SimPad device for each participant. Participants were also randomized to one of the American Heart Association-approved compression techniques: 2-finger or encircling thumbs. The metronome was associated with a more ideal compression rate than visual feedback or coaching alone (104/min vs 112/min and 113/min; P = 0.003, 0.019). Visual feedback was associated with more ideal depth than auditory feedback (41 mm vs 38.9 mm; P = 0.03). There were no significant differences in complete recoil between groups. Secondary outcomes of compression technique revealed a difference of 1 mm. Subgroup analysis of male versus female showed no difference in mean number of compressions (221.76 vs 219.79; P = 0.72), mean compression depth (40.47 vs 39.25; P = 0.09), or rate of complete release (70.27% vs 64.96%; P = 0.54). In the adult literature, feedback devices often show an increase in quality of chest compressions. 
Although more studies are needed, this study did not demonstrate a clinically significant improvement in chest compressions with the addition of a metronome or visual feedback device, found no clinically significant difference between the Pediatric Advanced Life Support-approved compression techniques, and found no difference in compression quality between genders.

  7. QRFXFreeze: Queryable Compressor for RFX.

    PubMed

    Senthilkumar, Radha; Nandagopal, Gomathi; Ronald, Daphne

    2015-01-01

    The verbose nature of XML has been examined again and again, and many compression techniques for XML data have been devised over the years. Some of the techniques incorporate support for querying the XML database in its compressed format, while others have to be decompressed before they can be queried. XML compressors that support direct, instantaneous querying with no compromise on time are forced to compromise on space. In this paper, we propose the compressor QRFXFreeze, which not only reduces the storage space but also supports efficient querying. The compressor does this without decompressing the compressed XML file. The compressor supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and are indexed to reduce the querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors.

  8. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still-image coding techniques such as JPEG have traditionally been applied to intra-plane images, and coding fidelity is the usual measure of the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. A high degree of compactness in the data representation and compression can be achieved with this scheme.
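
    The inter-plane idea can be illustrated in miniature: when spectral planes are highly correlated, replacing all but one of them with inter-plane residuals leaves far less energy for the intra-plane coder to spend bits on. This is a toy sketch, not the paper's transformation, and the synthetic planes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 3-plane image: planes nearly identical, as in natural color images.
base = rng.integers(0, 200, size=(64, 64)).astype(np.int32)
r, g, b = base + 10, base + 12, base + 9

def plane_energy(p):
    """Variance as a crude proxy for the bits an intra-plane coder would spend."""
    return float(np.var(p))

# Spectral decorrelation: predict each plane from the previous one and keep
# only the residual, which a JPEG-style coder compresses far better.
residual_g = g - r
residual_b = b - g
before = plane_energy(g) + plane_energy(b)
after = plane_energy(residual_g) + plane_energy(residual_b)
print(after < before)  # residual planes carry far less energy
```

    The paper additionally weights the transformation with a human-visual-system model; the residual trick here only shows why inter-plane redundancy is worth removing at all.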

  9. A concurrent distributed system for aircraft tactical decision generation

    NASA Technical Reports Server (NTRS)

    Mcmanus, John W.

    1990-01-01

    A research program investigating the use of AI techniques to aid in the development of a tactical decision generator (TDG) for within visual range (WVR) air combat engagements is discussed. The application of AI programming and problem-solving methods in the development and implementation of a concurrent version of the computerized logic for air-to-air warfare simulations (CLAWS) program, a second-generation TDG, is presented. Concurrent computing environments and programming approaches are discussed, and the design and performance of a prototype concurrent TDG system (Cube CLAWS) are presented. It is concluded that the Cube CLAWS has provided a useful testbed to evaluate the development of a distributed blackboard system. The project has shown that the complexity of developing specialized software on a distributed, message-passing architecture such as the Hypercube is not overwhelming, and that reasonable speedups and processor efficiency can be achieved by a distributed blackboard system. The project has also highlighted some of the costs of using a distributed approach to designing a blackboard system.

  10. Visualization and quantification of deformation processes controlling the mechanical response of alloys in aggressive environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Ian M.

    The overall objective of this program was to develop the technique of electron tomography for studies of defects and to couple it with real-time dynamic experiments such that four-dimensional (time and three spatial dimensions) characterization of dislocation interactions with defects is feasible, and to apply it to the discovery of the fundamental unit processes of dislocation-defect interactions in metallic systems. Strategies to overcome the restrictions normally associated with electron tomography and to make it practical within the constraints of conducting a dynamic experiment in the transmission electron microscope were developed. These methods were used to determine the mechanism controlling the transfer of slip across grain boundaries in FCC and HCP metals, dislocation-precipitate interactions in Al alloys, and dislocation-dislocation interactions in HCP Ti. In addition, preliminary investigations of slip transfer across cube-on-cube and incoherent twin interfaces in a multi-layered system, of the thermal stability of grains in nanograined Ni and Fe, and of corrosion of Fe films were conducted.

  11. CubeSat Material Limits For Design for Demise

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.; Jarkey, D. R.

    2014-01-01

    The CubeSat form factor of nano-satellite (a satellite with a mass between one and ten kilograms) has grown in popularity due to its ease of construction and low development and launch costs. In particular, the use of CubeSats as student-led payload design projects has increased due to the growing number of launch opportunities. CubeSats are often deployed as secondary or tertiary payloads on most US launch vehicles, or they may be deployed from the ISS. The focus of this study will be on CubeSats launched from the ISS. From a space safety standpoint, the development and deployment processes for CubeSats differ significantly from those of most satellites. For large satellites, extensive design reviews and documentation are completed, including assessing requirements associated with reentry survivability. Typical CubeSat missions selected for ISS deployment have a less rigorous review process that may not evaluate aspects beyond overall design feasibility. CubeSat design teams often do not have the resources to ensure their design is compliant with reentry risk requirements. A study was conducted to examine methods to easily identify the maximum amount of a given material that can be used in the construction of a CubeSat without posing harm to persons on the ground. The results demonstrate that there is not a general equation or relationship that can be used for all materials; instead, a limiting value must be defined for each unique material. In addition, the specific limits found for a number of generic materials that have been previously used as benchmarking materials for reentry survivability analysis tool comparison will be discussed.

  12. CubeSat Material Limits for Design for Demise

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.; Jarkey, D. R.

    2014-01-01

    The CubeSat form factor of nano-satellite (a satellite with a mass between one and ten kilograms) has grown in popularity due to its ease of construction and low development and launch costs. In particular, the use of CubeSats as student-led payload design projects has increased due to the growing number of launch opportunities. CubeSats are often deployed as secondary or tertiary payloads on most US launch vehicles, or they may be deployed from the ISS. The focus of this study will be on CubeSats launched from the ISS. From a space safety standpoint, the development and deployment processes for CubeSats differ significantly from those of most satellites. For large satellites, extensive design reviews and documentation are completed, including assessing requirements associated with re-entry survivability. Typical CubeSat missions selected for ISS deployment have a less rigorous review process that may not evaluate aspects beyond overall design feasibility. CubeSat design teams often do not have the resources to ensure their design is compliant with re-entry risk requirements. A study was conducted to examine methods to easily identify the maximum amount of a given material that can be used in the construction of a CubeSat without posing harm to persons on the ground. The results demonstrate that there is not a general equation or relationship that can be used for all materials; instead, a limiting value must be defined for each unique material. In addition, the specific limits found for a number of generic materials that have been previously used as benchmarking materials for re-entry survivability analysis tool comparison will be discussed.

  13. Reproducibility, Reliability, and Validity of Fuchsin-Based Beads for the Evaluation of Masticatory Performance.

    PubMed

    Sánchez-Ayala, Alfonso; Farias-Neto, Arcelino; Vilanova, Larissa Soares Reis; Costa, Marina Abrantes; Paiva, Ana Clara Soares; Carreiro, Adriana da Fonte Porto; Mestriner-Junior, Wilson

    2016-08-01

    Rehabilitation of masticatory function is inherent to prosthodontics; however, despite the various techniques for evaluating oral comminution, the methodological suitability of these has not been completely studied. The aim of this study was to determine the reproducibility, reliability, and validity of a test food based on fuchsin beads for masticatory function assessment. Masticatory performance was evaluated in 20 dentate subjects (mean age, 23.3 years) using two kinds of test foods and methods: fuchsin beads with ultraviolet-visible spectrophotometry, and silicone cubes with multiple sieving as the gold standard. Three examiners conducted five masticatory performance trials with each test food. Reproducibility of the results from both test foods was separately assessed using the intraclass correlation coefficient (ICC). Reliability and validity of the fuchsin bead data were measured by comparing the average mean of absolute differences and the measurement means, respectively, against the silicone cube data using the paired Student's t-test (α = 0.05). Intraexaminer and interexaminer ICCs for the fuchsin bead values were 0.65 and 0.76 (p < 0.001), respectively; those for the silicone cube values were 0.93 and 0.91 (p < 0.001), respectively. Reliability revealed intraexaminer (p < 0.001) and interexaminer (p < 0.05) differences between the average means of absolute differences for each test food. Validity also showed differences between the measurement means of each test food (p < 0.001). Intra- and interexaminer reproducibility of the test food based on fuchsin beads for evaluation of masticatory performance were good and excellent, respectively; however, the reliability and validity were low, because fuchsin beads do not measure the grinding capacity of masticatory function as silicone cubes do; instead, this test food describes the crushing potential of teeth. 
Thus, the two kinds of test foods evaluate different properties of masticatory capacity, confirming fuchsin beads as a useful tool for this purpose. © 2015 by the American College of Prosthodontists.
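
    The reproducibility figures above are intraclass correlation coefficients. A one-way random-effects ICC(1,1) can be sketched as follows; this is a generic illustration with synthetic data, since the study's exact ICC model is not stated in the abstract.

```python
import numpy as np

def icc_1_1(data: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for a subjects x trials matrix."""
    n, k = data.shape
    grand = data.mean()
    subject_means = data.mean(axis=1)
    msb = k * np.sum((subject_means - grand) ** 2) / (n - 1)            # between subjects
    msw = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
true_scores = rng.normal(50, 10, size=20)                        # 20 hypothetical subjects
trials = true_scores[:, None] + rng.normal(0, 2, size=(20, 5))   # 5 repeated trials each
print(icc_1_1(trials) > 0.8)  # low trial-to-trial noise yields high reproducibility
```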

  14. SWEET CubeSat - Water detection and water quality monitoring for the 21st century

    NASA Astrophysics Data System (ADS)

    Antonini, Kelly; Langer, Martin; Farid, Ahmed; Walter, Ulrich

    2017-11-01

    Water scarcity and contamination of clean water have been identified as major challenges of the 21st century, in particular for developing countries. According to the International Water Management Institute, about 30% of the world's population does not have reliable access to clean water. Consequently, contaminated water contributes to the death of about 3 million people every year, mostly children. Access to potable water has been proven to boost education, equality and health, reduce hunger, as well as help the economy of the developing world. Currently used in-situ water monitoring techniques are sparse, and often difficult to execute. Space-based instruments will help to overcome these challenges by providing means for water level and water quality monitoring of medium-to-large sweet (fresh) water reservoirs. Data from hyperspectral imaging instruments on past and present governmental missions, such as Envisat and Aqua, has been used for this purpose. However, the high cost of large multi-purpose space vessels, and the lack of dedicated missions limits the continuous monitoring of inland and coastal water quality. The proposed CubeSat mission SWEET (Sweet Water Earth Education Technologies) will try to fill this gap. The SWEET concept is a joint effort between the Technical University of Munich, the German Space Operations Center and the African Steering Committee of the IAF. By using a novel Fabry-Perot interferometer-based hyperspectral imager, the mission will deliver critical data directly to national water resource centers in Africa with an unmatched cost per pixel ratio and high temporal resolution. Additionally, SWEET will incorporate education of students in CubeSat design and water management. Although the aim of the mission is to deliver local water quality and water level data to African countries, further coverage could be achieved with subsequent satellites. 
Finally, a constellation of SWEET-like CubeSats would extend the coverage to the whole planet, delivering daily data to ensure reliable access to clean water for millions of people worldwide.

  15. Towards a Conceptual Design of a Cross-Domain Integrative Information System for the Geosciences

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Valentine, D. W.; Malik, T.; Gupta, A.

    2013-12-01

    As geoscientists increasingly focus on studying processes that span multiple research domains, there is an increased need for cross-domain interoperability solutions that can scale to the entire geosciences, bridging information and knowledge systems, models, and software tools, as well as connecting researchers and organizations. Creating a community-driven cyberinfrastructure (CI) to address the grand challenges of integrative Earth science research and education is the focus of EarthCube, a new research initiative of the U.S. National Science Foundation. We are approaching EarthCube design as a complex socio-technical system of systems, in which communication between various domain subsystems, people and organizations enables more comprehensive, data-intensive research designs and knowledge sharing. In particular, we focus on integrating 'traditional' layered CI components - including information sources, catalogs, vocabularies, services, analysis and modeling tools - with CI components supporting scholarly communication, self-organization and social networking (e.g. research profiles, Q&A systems, annotations), in a manner that follows and enhances existing patterns of data, information and knowledge exchange within and across geoscience domains. 
We describe an initial architecture design focused on enabling the CI to (a) provide an environment for scientifically sound information and software discovery and reuse; (b) evolve by factoring in the impact of maturing movements like linked data, 'big data', and social collaborations, as well as experience from work on large information systems in other domains; (c) handle the ever increasing volume, complexity and diversity of geoscience information; (d) incorporate new information and analytical requirements, tools, and techniques, and emerging types of earth observations and models; (e) accommodate different ideas and approaches to research and data stewardship; (f) be responsive to the existing and anticipated needs of researchers and organizations representing both established and emerging CI users; and (g) make best use of NSF's current investment in the geoscience CI. The presentation will focus on the challenges and methodology of EarthCube CI design, in particular on supporting social engagement and interaction between geoscientists and computer scientists as a core function of EarthCube architecture. This capability must include mechanisms to not only locate and integrate available geoscience resources, but also engage individuals and projects, research products and publications, and enable efficient communication across many EarthCube stakeholders leading to long-term institutional alignment and trusted collaborations.

  16. Compressed air injection technique to standardize block injection pressures.

    PubMed

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below a 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression was estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
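
    The arithmetic behind the 50% compression target follows directly from Boyle's law: halving the air volume doubles its absolute pressure, leaving a net (gauge) pressure of roughly one atmosphere. A sketch (constants and function names are ours):

```python
P_ATM = 760.0  # mmHg, absolute atmospheric pressure

def net_pressure(compression_fraction: float) -> float:
    """Gauge pressure after compressing air to (1 - fraction) of its volume.
    Boyle's law: P1 * V1 = P2 * V2 at constant temperature."""
    remaining = 1.0 - compression_fraction
    return P_ATM / remaining - P_ATM  # absolute pressure minus atmospheric

print(net_pressure(0.50))  # 760.0 mmHg, close to the measured 744.8 mmHg
```

    At 50% compression the predicted 760 mmHg sits well below the 1293 mmHg threshold cited in the study, which is why maintaining compression at 50% or less keeps injection pressures in the safe range.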

  17. IceCube

    Science.gov Websites

    Press coverage of IceCube: IEEE Spectrum reflects on the deployment of IceCube's last string (February 2011); the Daily Californian (January 26, 2011), including a current group photo; the Guardian and Observer (UK) (January 23, 2011); NSF Press Release (December 2010); Major Milestone - Completion of the IceCube Detector (December 2010).

  18. NanoRacks CubeSat Deployment

    NASA Image and Video Library

    2014-02-13

    ISS038-E-046586 (13 Feb. 2014) --- A set of NanoRacks CubeSats is photographed by an Expedition 38 crew member after the deployment by the NanoRacks Launcher attached to the end of the Japanese robotic arm. The CubeSats program contains a variety of experiments such as Earth observations and advanced electronics testing.

  19. NanoRacks CubeSat Deployment

    NASA Image and Video Library

    2014-02-13

    ISS038-E-046579 (13 Feb. 2014) --- A set of NanoRacks CubeSats is photographed by an Expedition 38 crew member after the deployment by the NanoRacks Launcher attached to the end of the Japanese robotic arm. The CubeSats program contains a variety of experiments such as Earth observations and advanced electronics testing.

  20. On the verge of an astronomy CubeSat revolution

    NASA Astrophysics Data System (ADS)

    Shkolnik, Evgenya L.

    2018-05-01

    CubeSats are small satellites built in standard sizes and form factors, which have been growing in popularity but have thus far been largely ignored within the field of astronomy. When deployed as space-based telescopes, they enable science experiments not possible with existing or planned large space missions, filling several key gaps in astronomical research. Unlike expensive and highly sought after space telescopes such as the Hubble Space Telescope, whose time must be shared among many instruments and science programs, CubeSats can monitor sources for weeks or months at a time, and at wavelengths not accessible from the ground, such as the ultraviolet, far-infrared and low-frequency radio. Science cases for CubeSats being developed now include a wide variety of astrophysical experiments, including exoplanets, stars, black holes and radio transients. Achieving high-impact astronomical research with CubeSats is becoming increasingly feasible with advances in technologies such as precision pointing, compact sensitive detectors and the miniaturization of propulsion systems. CubeSats may also pair with large space- and ground-based telescopes to provide complementary data to better explain the physical processes observed.

  1. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10⁻⁶ Mpc⁻³ and neutrino luminosity Lν ≲ 10⁴² erg s⁻¹ (10⁴¹ erg s⁻¹) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.

  2. Structural and chemical orders in Ni64.5Zr35.5 metallic glass by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, L.; Wen, T. Q.; Wang, N.; Sun, Y.; Zhang, F.; Yang, Z. J.; Ho, K. M.; Wang, C. Z.

    2018-03-01

    The atomic structure of Ni64.5Zr35.5 metallic glass has been investigated by molecular dynamics (MD) simulations. The calculated structure factors from the MD glassy sample at room temperature agree well with the x-ray diffraction (XRD) and neutron diffraction (ND) experimental data. Using the pairwise cluster alignment and clique analysis methods, we show that there are three types of dominant short-range order (SRO) motifs around Ni atoms in the glass sample of Ni64.5Zr35.5, i.e., mixed-icosahedron(ICO)-cube, intertwined-cube, and icosahedron-like clusters. Furthermore, chemical order and medium-range order (MRO) analysis show that the mixed-ICO-cube and intertwined-cube clusters exhibit the characteristics of the crystalline B2 phase. Our simulation results suggest that the weak glass-forming ability (GFA) of Ni64.5Zr35.5 can be attributed to the competition between the glass-forming ICO SRO and the crystalline mixed-ICO-cube and intertwined-cube motifs.

  3. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which form the basis of the UNIX utilities "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
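
    The pairing of a predictor with a statistical coder can be illustrated without a neural network: any model that assigns probability p to the next character lets an arithmetic coder spend about -log2(p) bits on it. Below, a count-based order-1 predictor stands in for the paper's predictive net (an illustrative sketch, not their method):

```python
import math
from collections import defaultdict

def predictive_bits(text: str) -> float:
    """Estimated compressed size in bits: sum of -log2(p) under an adaptive,
    Laplace-smoothed order-1 character model (as an arithmetic coder would pay)."""
    counts = defaultdict(lambda: defaultdict(int))
    bits, prev = 0.0, ""
    for ch in text:
        ctx = counts[prev]
        total = sum(ctx.values())
        p = (ctx[ch] + 1) / (total + 256)  # smoothed over a 256-symbol alphabet
        bits += -math.log2(p)
        ctx[ch] += 1                       # adaptive: update after "coding"
        prev = ch
    return bits

text = "the cat sat on the mat. " * 40
print(predictive_bits(text) < 8 * len(text))  # far below raw 8 bits per character
```

    The better the predictor, the fewer bits per character; that is the leverage the paper gets from replacing simple counts with a neural net, at the cost of much slower coding.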

  4. Gain compression and its dependence on output power in quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Zhukov, A. E.; Maximov, M. V.; Savelyev, A. V.; Shernyakov, Yu. M.; Zubov, F. I.; Korenev, V. V.; Martinez, A.; Ramdane, A.; Provost, J.-G.; Livshits, D. A.

    2013-06-01

    The gain compression coefficient was evaluated by applying the frequency modulation/amplitude modulation technique in a distributed feedback InAs/InGaAs quantum dot laser. A strong dependence of the gain compression coefficient on the output power was found. Our analysis of the gain compression within the frame of the modified well-barrier hole burning model reveals that the gain compression coefficient decreases beyond the lasing threshold, which is in a good agreement with the experimental observations.

  5. Compression of surface myoelectric signals using MP3 encoding.

    PubMed

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
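
    The distortion measure used in this comparison, the percent residual difference (PRD), is the energy of the reconstruction error relative to the energy of the original signal. A sketch with a synthetic stand-in for a myoelectric recording (the signals below are hypothetical):

```python
import numpy as np

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percent residual difference between a signal and its lossy reconstruction."""
    err = original - reconstructed
    return 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(original ** 2))

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t)                 # stand-in for a myoelectric burst
lossy = signal + 0.01 * np.cos(2 * np.pi * 7 * t)   # small reconstruction error
print(round(prd(signal, lossy), 2))                 # around 1% for this error level
```

    Lower PRD at the same compression ratio is the sense in which MP3 encoding outperforms the wavelet and ADPCM schemes cited above.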

  6. Compact storage of medical images with patient information.

    PubMed

    Acharya, R; Anand, D; Bhat, S; Niranjan, U C

    2001-12-01

    Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images to reduce storage and transmission overheads. The text data are encrypted before interleaving with images to ensure greater security. The graphical signals are compressed and subsequently interleaved with the image. Differential pulse-code-modulation and adaptive-delta-modulation techniques are employed for data compression, and encryption and results are tabulated for a specific example.
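
    A minimal sketch of the interleaving idea: hiding bytes in the least-significant bits of image pixels, one simple watermarking scheme. The paper's actual pipeline additionally encrypts the text and DPCM/ADM-compresses the graphical signals; the image and record contents here are hypothetical.

```python
import numpy as np

def embed(image: np.ndarray, payload: bytes) -> np.ndarray:
    """Interleave payload bits into the least-significant bits of the pixels."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()                              # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits # overwrite each LSB
    return flat.reshape(image.shape)

def extract(image: np.ndarray, nbytes: int) -> bytes:
    """Recover nbytes of payload from the pixel LSBs."""
    bits = image.flatten()[: 8 * nbytes] & 1
    return np.packbits(bits).tobytes()

img = np.full((64, 64), 128, dtype=np.uint8)            # flat gray test image
record = b"ID:12345 name-encrypted"                     # hypothetical ciphertext
stego = embed(img, record)
print(extract(stego, len(record)) == record)            # payload round-trips
print(int(np.abs(stego.astype(int) - img).max()) <= 1)  # pixels change by at most 1
```

    Because each pixel changes by at most one gray level, the image remains diagnostically usable while carrying the patient record, which is what makes the scheme attractive for storage and transmission overhead reduction.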

  7. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.

  8. IceCube results from point-like source searches using 6 years of through-going muon data

    NASA Astrophysics Data System (ADS)

    Coenders, Stefan

    2016-04-01

    The IceCube Neutrino Observatory located at the geographic South Pole was designed to study and discover high energy neutrinos coming from both galactic and extra-galactic astrophysical sources. Track-like events induced by charged-current muon-neutrino interactions close to the IceCube detector give an angular resolution better than 1° above TeV energies. We present here the results of searches for point-like astrophysical neutrino sources on the full sky using 6 years of detector livetime, of which three years use the complete IceCube detector. Within 2000 days of detector livetime, IceCube is sensitive to a steady flux substantially below E²∂ϕ/∂E = 10⁻¹² TeV cm⁻² s⁻¹ in the northern sky for neutrino energies above 10 TeV.

  9. Interplay between spherical confinement and particle shape on the self-assembly of rounded cubes.

    PubMed

    Wang, Da; Hermes, Michiel; Kotni, Ramakrishna; Wu, Yaoting; Tasios, Nikos; Liu, Yang; de Nijs, Bart; van der Wee, Ernest B; Murray, Christopher B; Dijkstra, Marjolein; van Blaaderen, Alfons

    2018-06-08

    Self-assembly of nanoparticles (NPs) inside drying emulsion droplets provides a general strategy for hierarchical structuring of matter at different length scales. The local orientation of neighboring crystalline NPs can be crucial to optimize for instance the optical and electronic properties of the self-assembled superstructures. By integrating experiments and computer simulations, we demonstrate that the orientational correlations of cubic NPs inside drying emulsion droplets are significantly determined by their flat faces. We analyze the rich interplay of positional and orientational order as the particle shape changes from a sharp cube to a rounded cube. Sharp cubes strongly align to form simple-cubic superstructures whereas rounded cubes assemble into icosahedral clusters with additionally strong local orientational correlations. This demonstrates that the interplay between packing, confinement and shape can be utilized to develop new materials with novel properties.

  10. CuboCube: Student creation of a cancer genetics e-textbook using open-access software for social learning.

    PubMed

    Seid-Karbasi, Puya; Ye, Xin C; Zhang, Allen W; Gladish, Nicole; Cheng, Suzanne Y S; Rothe, Katharina; Pilsworth, Jessica A; Kang, Min A; Doolittle, Natalie; Jiang, Xiaoyan; Stirling, Peter C; Wasserman, Wyeth W

    2017-03-01

    Student creation of educational materials has the capacity both to enhance learning and to decrease costs. Three successive honors-style classes of undergraduate students in a cancer genetics class worked with a new software system, CuboCube, to create an e-textbook. CuboCube is an open-source learning materials creation system designed to facilitate e-textbook development, with an ultimate goal of improving the social learning experience for students. Equipped with crowdsourcing capabilities, CuboCube provides intuitive tools for nontechnical and technical authors alike to create content together in a structured manner. The process of e-textbook development revealed both strengths and challenges of the approach, which can inform future efforts. Both the CuboCube platform and the Cancer Genetics E-textbook are freely available to the community.

  11. CuboCube: Student creation of a cancer genetics e-textbook using open-access software for social learning

    PubMed Central

    Seid-Karbasi, Puya; Ye, Xin C.; Zhang, Allen W.; Gladish, Nicole; Cheng, Suzanne Y. S.; Rothe, Katharina; Pilsworth, Jessica A.; Kang, Min A.; Doolittle, Natalie; Jiang, Xiaoyan; Stirling, Peter C.; Wasserman, Wyeth W.

    2017-01-01

    Student creation of educational materials has the capacity both to enhance learning and to decrease costs. Three successive honors-style classes of undergraduate students in a cancer genetics class worked with a new software system, CuboCube, to create an e-textbook. CuboCube is an open-source learning materials creation system designed to facilitate e-textbook development, with an ultimate goal of improving the social learning experience for students. Equipped with crowdsourcing capabilities, CuboCube provides intuitive tools for nontechnical and technical authors alike to create content together in a structured manner. The process of e-textbook development revealed both strengths and challenges of the approach, which can inform future efforts. Both the CuboCube platform and the Cancer Genetics E-textbook are freely available to the community. PMID:28267757

  12. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    PubMed

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

    Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of the electrocardiography (ECG) signal and the limited bandwidth of the Internet. However, with existing ECG-based biometric techniques, the compressed ECG must be decompressed before human identification can be performed. This additional decompression step creates a significant processing delay for the identification task, which becomes an obvious burden on a system if it must be carried out for trillions of compressed ECG records per hour by a hospital. Even if a hospital could deploy expensive infrastructure to absorb this processing load, identification preceded by decompression remains prohibitive for small intermediate nodes in a multihop network. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification far less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as other forms of biometrics such as face, fingerprint, and retina (up to 8302 times smaller than a face template and 9 times smaller than the existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  13. Polydimethylsiloxane pressure sensors for force analysis in tension band wiring of the olecranon.

    PubMed

    Zens, Martin; Goldschmidtboeing, Frank; Wagner, Ferdinand; Reising, Kilian; Südkamp, Norbert P; Woias, Peter

    2016-11-14

    Several different surgical techniques are used in the treatment of olecranon fractures. Tension band wiring is one of the most preferred options by surgeons worldwide. The concept of this technique is to transform a tensile force into a compression force that adjoins two surfaces of a fractured bone. Currently, little is known about the resulting compression force within a fracture. Sensor devices are needed that directly transduce the compression force into a measurement quality. This allows the comparison of different surgical techniques. Ideally the sensor devices ought to be placed in the gap between the fractured segments. The design, development and characterization of miniaturized pressure sensors fabricated entirely from polydimethylsiloxane (PDMS) for a placement within a fracture is presented. The pressure sensors presented in this work are tested, calibrated and used in an experimental in vitro study. The pressure sensors are highly sensitive with an accuracy of approximately 3 kPa. A flexible fabrication process for various possible applications is described. The first in vitro study shows that using a single-twist or double-twist technique in tension band wiring of the olecranon has no significant effect on the resulting compression forces. The in vitro study shows the feasibility of the proposed measurement technique and the results of a first exemplary study.

  14. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experiments performed by the authors indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if inter-band decorrelation and modeling are allowed in the baseline algorithm, nearly 30 percent improvement in compression gains becomes possible for specific images in the test set, at modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
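    The gain from inter-band decorrelation can be illustrated with the simplest possible inter-band predictor: subtract the previous band. The sketch below is not the paper's predictor (JPEG-LS extensions use more refined context modeling); it only demonstrates, on synthetic correlated bands, why residuals after inter-band prediction cost fewer bits to encode.

    ```python
    import numpy as np

    def entropy_bits(a):
        """Empirical zeroth-order entropy in bits per symbol."""
        _, counts = np.unique(a, return_counts=True)
        p = counts / a.size
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    # Two synthetic, strongly correlated "spectral bands".
    band0 = rng.integers(0, 256, (64, 64)).astype(np.int16)
    band1 = band0 + rng.integers(-2, 3, (64, 64)).astype(np.int16)

    # Trivial inter-band predictor: predict band1 by band0, encode the residual.
    residual = band1 - band0
    print(entropy_bits(band1), entropy_bits(residual))
    ```

    The residual takes values in a narrow range around zero, so its entropy is far below that of the raw band, which is the effect the proposed inter-band techniques exploit.
    
    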

  15. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures

    PubMed Central

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-01-01

    Aims and Objectives: The objective of the present study is to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. Materials and Methods: A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in IBM SPSS software version 19.0. Results: Data obtained from the three groups were subjected to a one-way ANOVA test; results with significant variations were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was reported to be higher in both centric and eccentric positions as compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was reported to be higher with the compression molding technique as compared to the injection molding techniques, which is statistically significant (P < 0.001). Conclusions: Within the limitations of this study, injection molding techniques exhibited fewer processing errors as compared to the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors reported between the two injection molding systems. PMID:28713763

  16. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures.

    PubMed

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-06-01

    The objective of the present study is to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in IBM SPSS software version 19.0. Data obtained from the three groups were subjected to a one-way ANOVA test; results with significant variations were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was reported to be higher in both centric and eccentric positions as compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was reported to be higher with the compression molding technique as compared to the injection molding techniques, which is statistically significant (P < 0.001). Within the limitations of this study, injection molding techniques exhibited fewer processing errors as compared to the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors reported between the two injection molding systems.

  17. Compression technique for large statistical data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eggers, S.J.; Olken, F.; Shoshani, A.

    1981-03-01

    The compression of large statistical databases is explored, and techniques are proposed for organizing the compressed data such that the time required to access the data is logarithmic. The techniques exploit special characteristics of statistical databases, namely, variation in the space required for the natural encoding of integer attributes, a prevalence of a few repeating values or constants, and the clustering of both data of the same length and constants in long, separate series. The techniques are variations of run-length encoding, in which modified run-lengths for the series are extracted from the data stream and stored in a header, which is used to form the base level of a B-tree index into the database. The run-lengths are cumulative, and therefore the access time of the data is logarithmic in the size of the header. The details of the compression scheme and its implementation are discussed, several special cases are presented, and an analysis is given of the relative performance of the various versions.
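    The core idea of this record, cumulative run-lengths in a header giving logarithmic random access, can be sketched in a few lines. This is a minimal illustration only; the paper's B-tree indexing, per-series headers, and special cases are not reproduced.

    ```python
    import bisect

    def compress(values):
        """Run-length encode; the header stores the cumulative END position
        of each run, so random access reduces to a binary search."""
        header, constants = [], []
        for v in values:
            if constants and constants[-1] == v:
                header[-1] += 1  # extend the current run's cumulative end
            else:
                constants.append(v)
                header.append((header[-1] if header else 0) + 1)
        return header, constants

    def access(header, constants, i):
        """Return the i-th original value in O(log len(header)) time."""
        return constants[bisect.bisect_right(header, i)]

    data = [7, 7, 7, 0, 0, 5, 5, 5, 5, 2]
    header, constants = compress(data)
    assert all(access(header, constants, i) == v for i, v in enumerate(data))
    print(header, constants)  # [3, 5, 9, 10] [7, 0, 5, 2]
    ```

    Because the header is cumulative rather than a list of individual run lengths, no prefix sum is needed at query time; `bisect_right` locates the run containing position `i` directly.
    
    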

  18. 3D Printing the Complete CubeSat

    NASA Technical Reports Server (NTRS)

    Kief, Craig

    2015-01-01

    The 3D Printing the Complete CubeSat project is designed to advance the state-of-the-art in 3D printing for CubeSat applications. Printing in 3D has the potential to increase reliability, reduce design iteration time and provide greater design flexibility in the areas of radiation mitigation, communications, propulsion, and wiring, among others. This project is investigating the possibility of including propulsion systems into the design of printed CubeSat components. One such concept, an embedded micro pulsed plasma thruster (mPPT), could provide auxiliary reaction control propulsion for a spacecraft as a means to desaturate momentum wheels.

  19. NASA's EDSN Aims to Overcome the Operational Challenges of CubeSat Constellations and Demonstrate an Economical Swarm of 8 CubeSats Useful for Space Science Investigations

    NASA Technical Reports Server (NTRS)

    Smith, Harrison Brodsky; Hu, Steven Hung Kee; Cockrell, James J.

    2013-01-01

    Operators of a constellation of CubeSats must confront a number of daunting challenges that can be cost-prohibitive, or operationally prohibitive, for missions that could otherwise be enabled by a satellite constellation. Challenges such as operations complexity, intersatellite communication, intersatellite navigation, and time-sharing of tasks between satellites are all complicated by the usual CubeSat size, power, and budget constraints. EDSN pioneers innovative solutions to these problems as they present themselves on the nanoscale satellite platform.

  20. Evaluation of Additives to Reduce Solid Propellant Flammability in Ambient Air.

    DTIC Science & Technology

    1975-12-01

    ...been applied successfully to reduce the flammability of plastics and polymers. From that experimental data base, the following have been shown to be... consumption rate of the cube) are reported since they are more repeatable than the linear burning rate data. B. Free Convection Effects: Several series of... Steady State Burning Rate Measurements: Obtaining steady state burning rate data in air requires a technique for holding the characteristic length
